[DISCUSS] KIP-308: Support dynamic update of max.connections.per.ip/max.connections.per.ip.overrides configs

2018-06-21 Thread Manikumar
Hi all,

I have created a KIP to add support for dynamic updates of the
max.connections.per.ip and max.connections.per.ip.overrides configs:

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=85474993

Any feedback is appreciated.

Thanks
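
For anyone who wants to try the idea out once it lands: a minimal sketch of
what a runtime update could look like through AdminClient#alterConfigs
against a BROKER resource (the dynamic broker config mechanism added by
KIP-226). The broker id, bootstrap address, and config values below are
placeholders, and whether these two configs become alterable this way is
exactly what the KIP proposes:

import java.util.Arrays;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class DynamicConnectionLimitUpdate {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        try (AdminClient admin = AdminClient.create(props)) {
            // Per-broker resource; "0" is a placeholder broker id.
            ConfigResource broker =
                new ConfigResource(ConfigResource.Type.BROKER, "0");
            Config update = new Config(Arrays.asList(
                new ConfigEntry("max.connections.per.ip", "100"),
                new ConfigEntry("max.connections.per.ip.overrides",
                                "10.0.0.5:200")));
            // Applies the new limits without a broker restart.
            admin.alterConfigs(Collections.singletonMap(broker, update))
                 .all().get();
        }
    }
}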


Build failed in Jenkins: kafka-trunk-jdk10 #244

2018-06-21 Thread Apache Jenkins Server
See 


Changes:

[github] KAFKA-7082: Concurrent create topics may throw NodeExistsException

[jason] KAFKA-4682; Revise expiration semantics of consumer group offsets

--
[...truncated 1.98 MB...]
org.apache.kafka.streams.kstream.internals.KGroupedTableImplTest > 
shouldThrowNullPointerOnReduceWhenMaterializedIsNull STARTED

org.apache.kafka.streams.kstream.internals.KGroupedTableImplTest > 
shouldThrowNullPointerOnReduceWhenMaterializedIsNull PASSED

org.apache.kafka.streams.kstream.internals.KGroupedTableImplTest > 
shouldNotAllowInvalidStoreNameOnReduce STARTED

org.apache.kafka.streams.kstream.internals.KGroupedTableImplTest > 
shouldNotAllowInvalidStoreNameOnReduce PASSED

org.apache.kafka.streams.kstream.internals.KGroupedTableImplTest > 
shouldThrowNullPointerOnReduceWhenSubtractorIsNull STARTED

org.apache.kafka.streams.kstream.internals.KGroupedTableImplTest > 
shouldThrowNullPointerOnReduceWhenSubtractorIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedReduceIfReducerIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedReduceIfReducerIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfInitializerIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfInitializerIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfMaterializedIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfMaterializedIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfMergerIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfMergerIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeCount STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeCount PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeAggregated STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldMaterializeAggregated PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnCountIfMaterializedIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnCountIfMaterializedIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnAggregateIfAggregatorIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnAggregateIfAggregatorIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnReduceIfReducerIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnReduceIfReducerIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldAggregateSessionWindowed STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldAggregateSessionWindowed PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldCountSessionWindowed STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldCountSessionWindowed PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldReduceWindowed STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldReduceWindowed PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfAggregatorIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnMaterializedAggregateIfAggregatorIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnAggregateIfInitializerIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnAggregateIfInitializerIsNull PASSED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnAggregateIfMergerIsNull STARTED

org.apache.kafka.streams.kstream.internals.SessionWindowedKStreamImplTest > 
shouldThrowNullPointerOnAggregateIfMerge

Build failed in Jenkins: kafka-1.1-jdk7 #153

2018-06-21 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-7082: Concurrent create topics may throw NodeExistsException

--
[...truncated 420.95 KB...]
kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline STARTED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiry STARTED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiry PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure 
PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testConnectionId STARTED

kafka.network.SocketServerTest > testConnectionId PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > testNoOpAction STARTED

kafka.network.SocketServerTest > testNoOpAction PASSED

kafka.network.SocketServerTest > simpleRequest STARTED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > closingChannelException STARTED

kafka.network.SocketServerTest > closingChannelException PASSED

kafka.network.SocketServerTest > testIdleConnection STARTED

kafka.network.SocketServerTest > testIdleConnection PASSED

kafka.network.SocketServerTest > 
testClientDisconnectionWithStagedReceivesFullyProcessed STARTED

kafka.network.SocketServerTest > 
testClientDisconnectionWithStagedReceivesFullyProcessed PASSED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown STARTED

kafka.network.SocketServerTest > testMetricCollectionAfterShutdown PASSED

kafka.network.SocketServerTest > testSessionPrincipal STARTED

kafka.network.SocketServerTest > testSessionPrincipal PASSED

kafka.network.SocketServerTest > configureNewConnectionException STARTED

kafka.network.SocketServerTest > configureNewConnectionException PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIpOverrides PASSED

kafka.network.SocketServerTest > processNewResponseException STARTED

kafka.network.SocketServerTest > processNewResponseException PASSED

kafka.network.SocketServerTest > processCompletedSendException STARTED

kafka.network.SocketServerTest > processCompletedSendException PASSED

kafka.network.SocketServerTest > p

Jenkins build is back to normal : kafka-trunk-jdk8 #2760

2018-06-21 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-7082) Concurrent createTopics calls may throw NodeExistsException

2018-06-21 Thread Ismael Juma (JIRA)


 [ https://issues.apache.org/jira/browse/KAFKA-7082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ismael Juma resolved KAFKA-7082.

Resolution: Fixed
  Reviewer: Jun Rao

> Concurrent createTopics calls may throw NodeExistsException
> ---
>
> Key: KAFKA-7082
> URL: https://issues.apache.org/jira/browse/KAFKA-7082
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.1.0
>Reporter: Ismael Juma
>Assignee: Ismael Juma
>Priority: Critical
>  Labels: regression
> Fix For: 2.0.1, 1.1.2
>
>
> This exception is unexpected, causing an `UnknownServerException` to be thrown 
> back to the client. Example below:
> {code}
> org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = 
> NodeExists for /config/topics/connect-configs
> at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:119)
> at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> at kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:472)
> at kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1400)
> at kafka.zk.KafkaZkClient.create$1(KafkaZkClient.scala:262)
> at 
> kafka.zk.KafkaZkClient.setOrCreateEntityConfigs(KafkaZkClient.scala:269)
> at 
> kafka.zk.AdminZkClient.createOrUpdateTopicPartitionAssignmentPathInZK(AdminZkClient.scala:99)
> at kafka.server.AdminManager$$anonfun$2.apply(AdminManager.scala:126)
> at kafka.server.AdminManager$$anonfun$2.apply(AdminManager.scala:81)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
> at scala.collection.Iterator$class.foreach(Iterator.scala:891)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
> {code}
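
The race here is two callers running the create-path logic for the same
config znode at once. For illustration only (not the actual patch), the
usual idempotent-create pattern with the raw ZooKeeper client looks like
this; `createOrSet`, the class name, and the data handling are made up for
the example:

{code}
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public final class IdempotentCreate {
    // Create the znode, or fall back to an update if a concurrent
    // caller won the race and created it first.
    static void createOrSet(ZooKeeper zk, String path, byte[] data)
            throws KeeperException, InterruptedException {
        try {
            zk.create(path, data, ZooDefs.Ids.OPEN_ACL_UNSAFE,
                      CreateMode.PERSISTENT);
        } catch (KeeperException.NodeExistsException e) {
            zk.setData(path, data, -1); // -1 matches any node version
        }
    }
}
{code}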





Jenkins build is back to normal : kafka-0.10.2-jdk7 #220

2018-06-21 Thread Apache Jenkins Server
See 



Jenkins build is back to normal : kafka-1.0-jdk7 #211

2018-06-21 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-1.0-jdk7 #210

2018-06-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Upgrade to Gradle 4.8.1 (#5265)

--
[...truncated 1.85 MB...]
org.apache.kafka.streams.processor.TopologyBuilderTest > 
testNamedTopicMatchesAlreadyProvidedPattern PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactAndDeleteSetForWindowStores STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactAndDeleteSetForWindowStores PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactForNonWindowStores STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactForNonWindowStores PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddTimestampExtractorWithOffsetResetAndPatternPerSource STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddTimestampExtractorWithOffsetResetAndPatternPerSource PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSameName STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSelfParent STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSelfParent STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAssociateStateStoreNameWhenStateStoreSupplierIsInternal STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAssociateStateStoreNameWhenStateStoreSupplierIsInternal PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSink STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testTopicGroups STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testTopicGroups PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testBuild STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testBuild PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowOffsetResetSourceWithoutTopics STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowOffsetResetSourceWithoutTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAddNullStateStoreSupplier STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAddNullStateStoreSupplier PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSource STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSource PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullTopicWhenAddingSink STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullTopicWhenAddingSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddSourceWithOffsetReset STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddSourceWithOffsetReset PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSource STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSource PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldThroughOnUnassignedStateStoreAccess STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldThroughOnUnassignedStateStoreAccess PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorNameWhenConnectingProcessorAndStateStores STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorNameWhenConnectingProcessorAndStateStores PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSourceWithSameName STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSourceWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddTimestampExtractorWithOffsetResetPerSource STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddTimestampExtractorWithOffsetResetPerSource PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddMoreThanOnePatternSourceNode STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddMoreThanOnePatt

Build failed in Jenkins: kafka-0.10.2-jdk7 #219

2018-06-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Upgrade to Gradle 4.8.1 (#5267)

--
[...truncated 1.31 MB...]
org.apache.kafka.streams.processor.internals.StreamThreadTest > 
shouldNotViolateAtLeastOnceWhenAnExceptionOccursOnTaskCloseDuringShutdown PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
shouldNotViolateAtLeastOnceWhenExceptionOccursDuringCloseTopologyWhenSuspendingState
 STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
shouldNotViolateAtLeastOnceWhenExceptionOccursDuringCloseTopologyWhenSuspendingState
 PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testPartitionAssignmentChange STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
testPartitionAssignmentChange PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeClean 
STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMaybeClean 
PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
shouldReleaseStateDirLockIfFailureOnTaskCloseForUnassignedSuspendedTask STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
shouldReleaseStateDirLockIfFailureOnTaskCloseForUnassignedSuspendedTask PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMetrics 
STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > testMetrics 
PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
shouldCloseSuspendedTasksThatAreNoLongerAssignedToThisStreamThreadBeforeCreatingNewTasks
 STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
shouldCloseSuspendedTasksThatAreNoLongerAssignedToThisStreamThreadBeforeCreatingNewTasks
 PASSED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
shouldNotViolateAtLeastOnceWhenAnExceptionOccursOnTaskFlushDuringShutdown 
STARTED

org.apache.kafka.streams.processor.internals.StreamThreadTest > 
shouldNotViolateAtLeastOnceWhenAnExceptionOccursOnTaskFlushDuringShutdown PASSED

org.apache.kafka.streams.processor.internals.StreamsKafkaClientTest > 
testConfigFromStreamsConfig STARTED

org.apache.kafka.streams.processor.internals.StreamsKafkaClientTest > 
testConfigFromStreamsConfig PASSED

org.apache.kafka.streams.processor.LogAndSkipOnInvalidTimestampTest > 
logAndSkipOnInvalidTimestamp STARTED

org.apache.kafka.streams.processor.LogAndSkipOnInvalidTimestampTest > 
logAndSkipOnInvalidTimestamp PASSED

org.apache.kafka.streams.processor.LogAndSkipOnInvalidTimestampTest > 
extractMetadataTimestamp STARTED

org.apache.kafka.streams.processor.LogAndSkipOnInvalidTimestampTest > 
extractMetadataTimestamp PASSED

org.apache.kafka.streams.processor.FailOnInvalidTimestampTest > 
failOnInvalidTimestamp STARTED

org.apache.kafka.streams.processor.FailOnInvalidTimestampTest > 
failOnInvalidTimestamp PASSED

org.apache.kafka.streams.processor.FailOnInvalidTimestampTest > 
extractMetadataTimestamp STARTED

org.apache.kafka.streams.processor.FailOnInvalidTimestampTest > 
extractMetadataTimestamp PASSED

org.apache.kafka.streams.processor.WallclockTimestampExtractorTest > 
extractSystemTimestamp STARTED

org.apache.kafka.streams.processor.WallclockTimestampExtractorTest > 
extractSystemTimestamp PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics 
STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSink STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testNamedTopicMatchesAlreadyProvidedPattern STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testNamedTopicMatchesAlreadyProvidedPattern PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactAndDeleteSetForWindowStores STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactAndDeleteSetForWindowStores PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactForNonWindowStores STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactForNonWindowStores PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSameName STARTED

org.apach

Build failed in Jenkins: kafka-trunk-jdk8 #2759

2018-06-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Upgrade to Gradle 4.8.1 (#5263)

--
[...truncated 870.09 KB...]
kafka.zookeeper.ZooKeeperClientTest > 
testBlockOnRequestCompletionFromStateChangeHandler STARTED

kafka.zookeeper.ZooKeeperClientTest > 
testBlockOnRequestCompletionFromStateChangeHandler PASSED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString STARTED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData STARTED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline STARTED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiry STARTED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiry PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure 
PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone STARTED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testConnectionId STARTED

kafka.network.SocketServerTest > testConnectionId PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > testNoOpAction STARTED

kafka.network.SocketServerTest > testNoOpAction PASSED

kafka.network.SocketServerTest > simpleRequest STARTED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > closingChannelException STARTED

kafka.network.SocketServerTest > closingChannelException PASSED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingInProgress STARTED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannel

Re: [DISCUSS] KIP-319: Replace segments with segmentSize in WindowBytesStoreSupplier

2018-06-21 Thread John Roesler
I've updated the KIP and draft PR accordingly.

On Thu, Jun 21, 2018 at 2:03 PM John Roesler  wrote:

> Interesting... I did not initially consider it because I didn't want to
> have an impact on anyone's Streams apps, but now I see that unless
> developers have subclassed `Windows`, the number of segments would always
> be 3!
>
> There's one caveat to this, which I think was a mistake. The field
> `segments` in Windows is public, which means that anyone can actually set
> it directly on any Window instance like:
>
> TimeWindows tw = TimeWindows.of(100);
> tw.segments = 12345;
>
> Bypassing the bounds check and contradicting the javadoc in Windows that
> says users can't directly set it. Sadly there's no way to "deprecate" this
> exposure, so I propose just to make it private.
>
> With this new knowledge, I agree, I think we can switch to
> "segmentInterval" throughout the interface.
>
> On Wed, Jun 20, 2018 at 5:06 PM Guozhang Wang  wrote:
>
>> Hello John,
>>
>> Thanks for the KIP.
>>
>> Should we consider making the change on `Stores#persistentWindowStore`
>> parameters as well?
>>
>>
>> Guozhang
>>
>>
>> On Wed, Jun 20, 2018 at 1:31 PM, John Roesler  wrote:
>>
>> > Hi Ted,
>> >
>> > Ah, when you made that comment to me before, I thought you meant as
>> opposed
>> > to "segments". Now it makes sense that you meant as opposed to
>> > "segmentSize".
>> >
>> > I named it that way to match the peer method "windowSize", which is
>> also a
>> > quantity of milliseconds.
>> >
>> > I agree that "interval" is more intuitive, but I think I favor
>> consistency
>> > in this case. Does that seem reasonable?
>> >
>> > Thanks,
>> > -John
>> >
>> > On Wed, Jun 20, 2018 at 1:06 PM Ted Yu  wrote:
>> >
>> > > Normally size is not measured in time unit, such as milliseconds.
>> > > How about naming the new method segmentInterval ?
>> > > Thanks
>> > >  Original message From: John Roesler <
>> j...@confluent.io>
>> > > Date: 6/20/18  10:45 AM  (GMT-08:00) To: dev@kafka.apache.org
>> Subject:
>> > > [DISCUSS] KIP-319: Replace segments with segmentSize in
>> > > WindowBytesStoreSupplier
>> > > Hello All,
>> > >
>> > > I'd like to propose KIP-319 to fix an issue I identified in
>> KAFKA-7080.
>> > > Specifically, we're creating CachingWindowStore with the *number of
>> > > segments* instead of the *segment size*.
>> > >
>> > > Here's the jira: https://issues.apache.org/jira/browse/KAFKA-7080
>> > > Here's the KIP: https://cwiki.apache.org/confluence/x/mQU0BQ
>> > >
>> > > additionally, here's a draft PR for clarity:
>> > > https://github.com/apache/kafka/pull/5257
>> > >
>> > > Please let me know what you think!
>> > >
>> > > Thanks,
>> > > -John
>> > >
>> >
>>
>>
>>
>> --
>> -- Guozhang
>>
>


Re: [DISCUSS] KIP-319: Replace segments with segmentSize in WindowBytesStoreSupplier

2018-06-21 Thread Ted Yu
bq. propose just to make it private

+1

On Thu, Jun 21, 2018 at 12:03 PM, John Roesler  wrote:

> Interesting... I did not initially consider it because I didn't want to
> have an impact on anyone's Streams apps, but now I see that unless
> developers have subclassed `Windows`, the number of segments would always
> be 3!
>
> There's one caveat to this, which I think was a mistake. The field
> `segments` in Windows is public, which means that anyone can actually set
> it directly on any Window instance like:
>
> TimeWindows tw = TimeWindows.of(100);
> tw.segments = 12345;
>
> Bypassing the bounds check and contradicting the javadoc in Windows that
> says users can't directly set it. Sadly there's no way to "deprecate" this
> exposure, so I propose just to make it private.
>
> With this new knowledge, I agree, I think we can switch to
> "segmentInterval" throughout the interface.
>
> On Wed, Jun 20, 2018 at 5:06 PM Guozhang Wang  wrote:
>
> > Hello John,
> >
> > Thanks for the KIP.
> >
> > Should we consider making the change on `Stores#persistentWindowStore`
> > parameters as well?
> >
> >
> > Guozhang
> >
> >
> > On Wed, Jun 20, 2018 at 1:31 PM, John Roesler  wrote:
> >
> > > Hi Ted,
> > >
> > > Ah, when you made that comment to me before, I thought you meant as
> > opposed
> > > to "segments". Now it makes sense that you meant as opposed to
> > > "segmentSize".
> > >
> > > I named it that way to match the peer method "windowSize", which is
> also
> > a
> > > quantity of milliseconds.
> > >
> > > I agree that "interval" is more intuitive, but I think I favor
> > consistency
> > > in this case. Does that seem reasonable?
> > >
> > > Thanks,
> > > -John
> > >
> > > On Wed, Jun 20, 2018 at 1:06 PM Ted Yu  wrote:
> > >
> > > > Normally size is not measured in time unit, such as milliseconds.
> > > > How about naming the new method segmentInterval ?
> > > > Thanks
> > > >  Original message From: John Roesler <
> > j...@confluent.io>
> > > > Date: 6/20/18  10:45 AM  (GMT-08:00) To: dev@kafka.apache.org
> Subject:
> > > > [DISCUSS] KIP-319: Replace segments with segmentSize in
> > > > WindowBytesStoreSupplier
> > > > Hello All,
> > > >
> > > > I'd like to propose KIP-319 to fix an issue I identified in
> KAFKA-7080.
> > > > Specifically, we're creating CachingWindowStore with the *number of
> > > > segments* instead of the *segment size*.
> > > >
> > > > Here's the jira: https://issues.apache.org/jira/browse/KAFKA-7080
> > > > Here's the KIP: https://cwiki.apache.org/confluence/x/mQU0BQ
> > > >
> > > > additionally, here's a draft PR for clarity:
> > > > https://github.com/apache/kafka/pull/5257
> > > >
> > > > Please let me know what you think!
> > > >
> > > > Thanks,
> > > > -John
> > > >
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>


Re: [DISCUSS] KIP-319: Replace segments with segmentSize in WindowBytesStoreSupplier

2018-06-21 Thread John Roesler
Hi Ted,

Ok, I also prefer it, and I find this reasoning compelling.

Thanks,
-John

On Thu, Jun 21, 2018 at 3:40 AM Ted Yu  wrote:

> Window is a common term used in various streaming processing systems whose
> unit is time unit.
>
> Segment doesn't seem to be as widely used in such context.
> I think using interval in the method name would clearly convey the meaning
> intuitively.
>
> Thanks
>
>
> On Wed, Jun 20, 2018 at 1:31 PM, John Roesler  wrote:
>
> > Hi Ted,
> >
> > Ah, when you made that comment to me before, I thought you meant as
> opposed
> > to "segments". Now it makes sense that you meant as opposed to
> > "segmentSize".
> >
> > I named it that way to match the peer method "windowSize", which is also
> a
> > quantity of milliseconds.
> >
> > I agree that "interval" is more intuitive, but I think I favor
> consistency
> > in this case. Does that seem reasonable?
> >
> > Thanks,
> > -John
> >
> > On Wed, Jun 20, 2018 at 1:06 PM Ted Yu  wrote:
> >
> > > Normally size is not measured in time unit, such as milliseconds.
> > > How about naming the new method segmentInterval ?
> > > Thanks
> > >  Original message From: John Roesler <
> j...@confluent.io>
> > > Date: 6/20/18  10:45 AM  (GMT-08:00) To: dev@kafka.apache.org Subject:
> > > [DISCUSS] KIP-319: Replace segments with segmentSize in
> > > WindowBytesStoreSupplier
> > > Hello All,
> > >
> > > I'd like to propose KIP-319 to fix an issue I identified in KAFKA-7080.
> > > Specifically, we're creating CachingWindowStore with the *number of
> > > segments* instead of the *segment size*.
> > >
> > > Here's the jira: https://issues.apache.org/jira/browse/KAFKA-7080
> > > Here's the KIP: https://cwiki.apache.org/confluence/x/mQU0BQ
> > >
> > > additionally, here's a draft PR for clarity:
> > > https://github.com/apache/kafka/pull/5257
> > >
> > > Please let me know what you think!
> > >
> > > Thanks,
> > > -John
> > >
> >
>


Re: [DISCUSS] KIP-319: Replace segments with segmentSize in WindowBytesStoreSupplier

2018-06-21 Thread John Roesler
Interesting... I did not initially consider it because I didn't want to
have an impact on anyone's Streams apps, but now I see that unless
developers have subclassed `Windows`, the number of segments would always
be 3!

There's one caveat to this, which I think was a mistake: the field
`segments` in Windows is public, which means that anyone can set it
directly on any Windows instance, like this:

TimeWindows tw = TimeWindows.of(100);
tw.segments = 12345;

This bypasses the bounds check and contradicts the javadoc in Windows that
says users can't set it directly. Sadly, there's no way to "deprecate" this
exposure, so I propose just making the field private.

With this new knowledge, I agree: I think we can switch to
"segmentInterval" throughout the interface.

On Wed, Jun 20, 2018 at 5:06 PM Guozhang Wang  wrote:

> Hello John,
>
> Thanks for the KIP.
>
> Should we consider making the change on `Stores#persistentWindowStore`
> parameters as well?
>
>
> Guozhang
>
>
> On Wed, Jun 20, 2018 at 1:31 PM, John Roesler  wrote:
>
> > Hi Ted,
> >
> > Ah, when you made that comment to me before, I thought you meant as
> opposed
> > to "segments". Now it makes sense that you meant as opposed to
> > "segmentSize".
> >
> > I named it that way to match the peer method "windowSize", which is also
> a
> > quantity of milliseconds.
> >
> > I agree that "interval" is more intuitive, but I think I favor
> consistency
> > in this case. Does that seem reasonable?
> >
> > Thanks,
> > -John
> >
> > On Wed, Jun 20, 2018 at 1:06 PM Ted Yu  wrote:
> >
> > > Normally size is not measured in time unit, such as milliseconds.
> > > How about naming the new method segmentInterval ?
> > > Thanks
> > >  Original message From: John Roesler <
> j...@confluent.io>
> > > Date: 6/20/18  10:45 AM  (GMT-08:00) To: dev@kafka.apache.org Subject:
> > > [DISCUSS] KIP-319: Replace segments with segmentSize in
> > > WindowBytesStoreSupplier
> > > Hello All,
> > >
> > > I'd like to propose KIP-319 to fix an issue I identified in KAFKA-7080.
> > > Specifically, we're creating CachingWindowStore with the *number of
> > > segments* instead of the *segment size*.
> > >
> > > Here's the jira: https://issues.apache.org/jira/browse/KAFKA-7080
> > > Here's the KIP: https://cwiki.apache.org/confluence/x/mQU0BQ
> > >
> > > additionally, here's a draft PR for clarity:
> > > https://github.com/apache/kafka/pull/5257
> > >
> > > Please let me know what you think!
> > >
> > > Thanks,
> > > -John
> > >
> >
>
>
>
> --
> -- Guozhang
>


Jenkins build is back to normal : kafka-trunk-jdk10 #243

2018-06-21 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-1.1-jdk7 #152

2018-06-21 Thread Apache Jenkins Server
See 


Changes:

[lindong28] MINOR: Upgrade to Gradle 4.8.1

--
[...truncated 419.35 KB...]

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotClearAnythingIfOffsetToFirstOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotLetOffsetsGoBackwardsEvenIfEpochsProgress PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldGetFirstOffsetOfSubsequentEpochWhenOffsetRequestedForPreviousEpoch PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest2 PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearEarliestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldPreserveResetOffsetOnClearEarliestIfOneExists PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldUpdateOffsetBetweenEpochBoundariesOnClearEarliest PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldReturnInvalidOffsetIfEpochIsRequestedWhichIsNotCurrentlyTracked PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldFetchEndOffsetOfEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldRetainLatestEpochOnClearAllEarliestAndUpdateItsOffset PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearAllEntries PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > shouldClearLatestOnEmptyCache 
PASSED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed STARTED

kafka.server.epoch.LeaderEpochFileCacheTest > 
shouldNotResetEpochHistoryHeadIfUndefinedPassed PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldIncreaseLeaderEpochBetweenLeaderRestarts PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldAddCurrentLeaderEpochToMessagesAsTheyAreWrittenToLeader PASSED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse STARTED

kafka.server.epoch.LeaderEpochIntegrationTest > 
shouldSendLeaderEpochRequestAndGetAResponse PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica 
STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > shouldGetEpochsFromReplica PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnUnknownTopicOrPartitionIfThrown PASSED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown STARTED

kafka.server.epoch.OffsetsForLeaderEpochTest > 
shouldReturnNoLeaderForPartitionIfThrown PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldSurviveFastLeaderChange PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
offsetsShouldNotGoBackwards PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldFollowLeaderEpochBasicWorkflow PASSED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldNotAllowDivergentLogs STARTED

kafka.server.epoch.EpochDrivenReplicationProtocolAcceptanceTest > 
shouldNotAllowDivergentLogs PASSED

kafka.server.ScramServerStartupTest > testAuthentications STARTED

kafka.server.ScramServerStartupTest > testAuthentications P

[jira] [Created] (KAFKA-7089) Extend `kafka-consumer-groups.sh` to show "beginning offsets"

2018-06-21 Thread Matthias J. Sax (JIRA)
Matthias J. Sax created KAFKA-7089:
--

 Summary: Extend `kafka-consumer-groups.sh` to show "beginning offsets"
 Key: KAFKA-7089
 URL: https://issues.apache.org/jira/browse/KAFKA-7089
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Reporter: Matthias J. Sax


Currently, `kafka-consumer-groups.sh` only shows "current offset", "end offset" 
and "lag". It would be helpful to extend the tool to also show 
"beginning/earliest offset".





Re: [VOTE] 1.1.1 RC0

2018-06-21 Thread Dong Lin
Thank you all for helping to test and vote for this release!

Ismael: Certainly, I agree we should bump up the gradle version. I will
prepare RC1 today.
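
For anyone hitting this locally: Java 7 supports TLS 1.2 but does not
enable it on the client by default, and Maven Central now rejects anything
below 1.2, which is the "Received fatal alert: protocol_version" failure in
the kafka-0.10.1-jdk7 build log elsewhere in this digest. A minimal probe
of the client-side behavior using the standard https.protocols JSSE
property (a generic workaround sketch, not a description of what Gradle
4.8.1 does internally):

public class Tls12Probe {
    public static void main(String[] args) throws Exception {
        // Without this property, a stock Java 7 HTTPS handshake to Maven
        // Central fails with "Received fatal alert: protocol_version".
        System.setProperty("https.protocols", "TLSv1.2");
        java.net.URL url =
            new java.net.URL("https://repo.maven.apache.org/maven2/");
        try (java.io.InputStream in = url.openStream()) {
            System.out.println("connected, first byte: " + in.read());
        }
    }
}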

On Thu, Jun 21, 2018 at 7:05 AM, Ismael Juma  wrote:

> Thanks Dong. One issue that came up is that Gradle dependency resolution no
> longer works with Java 7 due to a minimum TLS version change in Maven
> Central. Gradle 4.8.1 solves the issue (
> https://github.com/apache/kafka/pull/5263). This means that if someone
> tries to build from source with Java 7, it won't work unless they have the
> artifacts in the local cache already. Do you think it makes sense to do RC1
> with the Gradle bump to avoid this issue?
>
> Ismael
>
> On Tue, Jun 19, 2018 at 4:29 PM Dong Lin  wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the first candidate for release of Apache Kafka 1.1.1.
> >
> > Apache Kafka 1.1.1 is a bug-fix release for the 1.1 branch that was first
> > released with 1.1.0 about 3 months ago. We have fixed about 25 issues
> since
> > that release. A few of the more significant fixes include:
> >
> > KAFKA-6925  - Fix
> memory
> > leak in StreamsMetricsThreadImpl
> > KAFKA-6937  - In-sync
> > replica delayed during fetch if replica throttle is exceeded
> > KAFKA-6917  - Process
> > txn
> > completion asynchronously to avoid deadlock
> > KAFKA-6893  - Create
> > processors before starting acceptor to avoid ArithmeticException
> > KAFKA-6870  -
> > Fix ConcurrentModificationException in SampledStat
> > KAFKA-6878  - Fix
> > NullPointerException when querying global state store
> > KAFKA-6879  - Invoke
> > session init callbacks outside lock to avoid Controller deadlock
> > KAFKA-6857  - Prevent
> > follower from truncating to the wrong offset if undefined leader epoch is
> > requested
> > KAFKA-6854  - Log
> > cleaner
> > fails with transaction markers that are deleted during clean
> > KAFKA-6747  - Check
> > whether there is in-flight transaction before aborting transaction
> > KAFKA-6748  - Double
> > check before scheduling a new task after the punctuate call
> > KAFKA-6739  -
> > Fix IllegalArgumentException when down-converting from V2 to V0/V1
> > KAFKA-6728  -
> > Fix NullPointerException when instantiating the HeaderConverter
> >
> > Kafka 1.1.1 release plan:
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.1.1
> >
> > Release notes for the 1.1.1 release:
> > http://home.apache.org/~lindong/kafka-1.1.1-rc0/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Thursday, Jun 22, 12pm PT ***
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~lindong/kafka-1.1.1-rc0/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * Tag to be voted upon (off 1.1 branch) is the 1.1.1-rc0 tag:
> > https://github.com/apache/kafka/tree/1.1.1-rc0
> >
> > * Documentation:
> > http://kafka.apache.org/11/documentation.html
> >
> > * Protocol:
> > http://kafka.apache.org/11/protocol.html
> >
> > * Successful Jenkins builds for the 1.1 branch:
> > Unit/integration tests: https://builds.apache.org/job/
> kafka-1.1-jdk7/150/
> >
> > Please test and verify the release artifacts and submit a vote for this
> RC,
> > or report any issues so we can fix them and get a new RC out ASAP.
> Although
> > this release vote requires PMC votes to pass, testing, votes, and bug
> > reports are valuable and appreciated from everyone.
> >
> > Cheers,
> > Dong
> >
>


Build failed in Jenkins: kafka-0.10.1-jdk7 #138

2018-06-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Upgrade to Gradle 4.8.1 (#5268)

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H30 (ubuntu xenial) in workspace 

Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # 
 > timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/0.10.1^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/0.10.1^{commit} # timeout=10
Checking out Revision 670c70e18c91064e46cd4cd2d4b5d7a3476d5578 
(refs/remotes/origin/0.10.1)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 670c70e18c91064e46cd4cd2d4b5d7a3476d5578
Commit message: "MINOR: Upgrade to Gradle 4.8.1 (#5268)"
 > git rev-list --no-walk 0f3affc0f40751dc8fd064b36b6e859728f63e37 # timeout=10
ERROR: No tool found matching GRADLE_2_4_RC_2_HOME
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
[kafka-0.10.1-jdk7] $ /bin/bash -xe /tmp/jenkins6005953637102261344.sh
+ rm -rf 
+ /home/jenkins/tools/gradle/4.4/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/4.4.1/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing
Support for running Gradle using Java 7 has been deprecated and is scheduled to 
be removed in Gradle 5.0. Please see 
https://docs.gradle.org/4.4.1/userguide/java_plugin.html#sec:java_cross_compilation
 for more details.
Download 
https://jcenter.bintray.com/com/github/ben-manes/gradle-versions-plugin/0.12.0/gradle-versions-plugin-0.12.0.pom
Download https://jcenter.bintray.com/org/ajoberstar/grgit/1.5.0/grgit-1.5.0.pom
Download 
https://jcenter.bintray.com/org/scoverage/gradle-scoverage/2.1.0/gradle-scoverage-2.1.0.pom
Download 
https://jcenter.bintray.com/org/eclipse/jgit/org.eclipse.jgit.ui/4.1.1.201511131810-r/org.eclipse.jgit.ui-4.1.1.201511131810-r.pom
Download 
https://jcenter.bintray.com/org/eclipse/jgit/org.eclipse.jgit/4.1.1.201511131810-r/org.eclipse.jgit-4.1.1.201511131810-r.pom
Download 
https://jcenter.bintray.com/org/eclipse/jgit/org.eclipse.jgit-parent/4.1.1.201511131810-r/org.eclipse.jgit-parent-4.1.1.201511131810-r.pom
Download 
https://jcenter.bintray.com/org/eclipse/jdt/org.eclipse.jdt.annotation/1.1.0/org.eclipse.jdt.annotation-1.1.0.pom
Download 
https://jcenter.bintray.com/org/scoverage/gradle-scoverage/2.1.0/gradle-scoverage-2.1.0.jar
Download 
https://jcenter.bintray.com/com/github/ben-manes/gradle-versions-plugin/0.12.0/gradle-versions-plugin-0.12.0.jar
Download 
https://jcenter.bintray.com/org/eclipse/jdt/org.eclipse.jdt.annotation/1.1.0/org.eclipse.jdt.annotation-1.1.0.jar
Download 
https://jcenter.bintray.com/org/eclipse/jgit/org.eclipse.jgit/4.1.1.201511131810-r/org.eclipse.jgit-4.1.1.201511131810-r.jar
Download https://jcenter.bintray.com/org/ajoberstar/grgit/1.5.0/grgit-1.5.0.jar
Download 
https://jcenter.bintray.com/org/eclipse/jgit/org.eclipse.jgit.ui/4.1.1.201511131810-r/org.eclipse.jgit.ui-4.1.1.201511131810-r.jar
Building project 'core' with Scala version 2.10.6

FAILURE: Build failed with an exception.

* What went wrong:
A problem occurred configuring project ':core'.
> Could not resolve all files for configuration ':core:scoverage'.
   > Could not resolve org.scoverage:scalac-scoverage-plugin_2.10:1.3.0.
 Required by:
 project :core
  > Could not resolve org.scoverage:scalac-scoverage-plugin_2.10:1.3.0.
 > Could not get resource 
'https://repo.maven.apache.org/maven2/org/scoverage/scalac-scoverage-plugin_2.10/1.3.0/scalac-scoverage-plugin_2.10-1.3.0.pom'.
> Could not GET 
'https://repo.maven.apache.org/maven2/org/scoverage/scalac-scoverage-plugin_2.10/1.3.0/scalac-scoverage-plugin_2.10-1.3.0.pom'.
   > Received fatal alert: protocol_version
   > Could not resolve org.scoverage:scalac-scoverage-runtime_2.10:1.3.0.
 Required by:
 project :core
  > Could not resolve org.scoverage:scalac-scoverage-runtime_2.10:1.3.0.
 > Could not get resource 
'https://repo.mav

Build failed in Jenkins: kafka-trunk-jdk10 #242

2018-06-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Upgrade to Gradle 4.8.1 (#5263)

--
[...truncated 1.53 MB...]

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithSuccessOnAddPartitionsWhenStateIsCompleteAbort PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareAbortState
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareAbortState
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithErrorsNoneOnAddPartitionWhenNoErrorsAndPartitionsTheSame 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithErrorsNoneOnAddPartitionWhenNoErrorsAndPartitionsTheSame PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestOnEndTxnWhenTransactionalIdIsEmpty STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestOnEndTxnWhenTransactionalIdIsEmpty PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestAddPartitionsToTransactionWhenTransactionalIdIsEmpty
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithInvalidRequestAddPartitionsToTransactionWhenTransactionalIdIsEmpty
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAppendPrepareAbortToLogOnEndTxnWhenStatusIsOngoingAndResultIsAbort STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAppendPrepareAbortToLogOnEndTxnWhenStatusIsOngoingAndResultIsAbort PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAcceptInitPidAndReturnNextPidWhenTransactionalIdIsNull STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAcceptInitPidAndReturnNextPidWhenTransactionalIdIsNull PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRemoveTransactionsForPartitionOnEmigration STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRemoveTransactionsForPartitionOnEmigration PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareCommitState
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldWaitForCommitToCompleteOnHandleInitPidAndExistingTransactionInPrepareCommitState
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAbortExpiredTransactionsInOngoingStateAndBumpEpoch STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldAbortExpiredTransactionsInOngoingStateAndBumpEpoch PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnInvalidTxnRequestOnEndTxnRequestWhenStatusIsCompleteCommitAndResultIsNotCommit
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnInvalidTxnRequestOnEndTxnRequestWhenStatusIsCompleteCommitAndResultIsNotCommit
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnOkOnEndTxnWhenStatusIsCompleteCommitAndResultIsCommit STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldReturnOkOnEndTxnWhenStatusIsCompleteCommitAndResultIsCommit PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionsOnAddPartitionsWhenStateIsPrepareCommit 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionsOnAddPartitionsWhenStateIsPrepareCommit 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldIncrementEpochAndUpdateMetadataOnHandleInitPidWhenExistingCompleteTransaction
 STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldIncrementEpochAndUpdateMetadataOnHandleInitPidWhenExistingCompleteTransaction
 PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldGenerateNewProducerIdIfEpochsExhausted STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldGenerateNewProducerIdIfEpochsExhausted PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithNotCoordinatorOnInitPidWhenNotCoordinator STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithNotCoordinatorOnInitPidWhenNotCoordinator PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionOnAddPartitionsWhenStateIsPrepareAbort 
STARTED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldRespondWithConcurrentTransactionOnAddPartitionsWhenStateIsPrepareAbort 
PASSED

kafka.coordinator.transaction.TransactionCoordinatorTest > 
shouldInitPidWithEpochZeroForNewTransactionalId STARTED

kafka.coordinator.transaction.TransactionCoordin

Re: [DISCUSS] - KIP-314: KTable to GlobalKTable Bi-directional Join

2018-06-21 Thread Adam Bellemare
Hi Guozhang

*Re: Questions*
*1)* I do not yet have a solution to this, but I also did not look that
closely at it when I began this KIP. I admit that I was unaware of exactly
how the GlobalKTable worked alongside the KTable/KStream topologies. You
mention "It means the two topologies will be merged, and that merged
topology can only be executed as a single task, by a single thread. " - is
the problem here that the merged topology would be parallelized to other
threads/instances? While I am becoming familiar with how the topologies are
created under the hood, I am not yet fully clear on the implications of
your statement. I will look into this further.

*2)* " do you mean that although we have a duplicated state store:
ModifiedEvents in addition to the original Events with only the enhanced
key, this is not avoidable anyways even if we do re-keying?" Yes, that is
correct, that is what I meant. I need to improve my knowledge around this
component too. I have been browsing the KIP-213 discussion thread and
looking at Jan's code.

*Re: Comments*
*1) *Makes sense. I will update the diagram accordingly. Thanks!

*2)* Wouldn't outer join require that we emit records from the right
GlobalKTable that have no match in the left KTable? This seems undefined to
me with the current proposal (above issues aside), since multiple threads
would be producing the same output event for a single GlobalKTable update.


Questions for you both:
Q1) Is a KTable always materialized? I am looking at the code under the
hood, and it seems to me that it's either materialized with an explicit
Materialized object, or it's given an anonymous name and the default serdes
are used. Am I correct in this observation?
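
For reference, a minimal sketch of the two construction paths I am comparing
(topic and store names are made up for illustration):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.common.utils.Bytes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.KTable;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.KeyValueStore;

public class KTableMaterializationSketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // No Materialized given: a state store still backs the table, but it
        // gets a generated name and the default serdes from the config.
        KTable<String, String> implicitTable = builder.table("events");

        // Explicit materialization: named store and explicit serdes.
        KTable<String, String> explicitTable = builder.table("events-enriched",
                Materialized.<String, String, KeyValueStore<Bytes, byte[]>>as("events-store")
                        .withKeySerde(Serdes.String())
                        .withValueSerde(Serdes.String()));

        System.out.println(builder.build().describe());
    }
}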


Thanks,
Adam



On Wed, Jun 20, 2018 at 6:44 PM, Guozhang Wang  wrote:

> Hello Adam,
>
> Thanks for proposing the KIP. A few meta comments:
>
> 1. As Matthias mentioned, the current GlobalKTable is designed to be
> read-only, and not driving any computations (btw the global store backing a
> GlobalKTable should also be read-only). Behind the scene the global store
> updating task and the regular streams task are two separate ones running
> two separate processor topologies by two threads: the global store updating
> task's topology is simply a source node, plus a processor node (let's call
> it the update-processor) that puts to the store. If we allow the
> GlobalKTable to drive the join, then we need the underlying global store's
> update processor to link to the downstream processors of the normal regular
> task's topology in order to pass the joined results to downstream. It means
> the two topologies will be merged, and that merged topology can only be
> executed as a single task, by a single thread. We need to think of a way
> how to work around this issue first of all before proceeding to next steps.
>
> 2. It's not clear what you mean by "In terms of data complexity, any pattern
> that requires us to rekey the data once is equivalent in terms of data
> capacity requirements.." do you mean that although we have a duplicated
> state store: ModifiedEvents in addition to the original Events with only
> the enhanced key, this is not avoidable anyways even if we do re-keying?
> Note that in
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 213+Support+non-key+joining+in+KTable?preview=/74684836/
> 74687529/Screenshot%20from%202017-11-18%2023%3A26%3A52.png
> we were considering if it is still possible to only materialize the joining
> tables once each still, i.e. not having a duplicated store. So I think it
> is not necessarily the case that we have to duplicate the KTable's store.
>
>
> One minor comment:
>
> 1. In the `KTable as Driver, joined on GlobalKTable` join mechanism section,
> I think we still need to join the old value with the global store to form a
> pair of <old, new> joined results, so that the resulting KTable can still
> be applied in another aggregation operator that allows correct addition /
> subtraction logic.
>
> 2. For KTable-KTable join, we have inner / left / outer, while for
> KStream-KTable / GlobalKTable join we only have inner / left, and the
> reason is that for stream-table joins outer join makes less sense; should
> we consider outer for KTable-GlobalKTable join as well?
>
>
> Guozhang
>
>
> On Tue, Jun 19, 2018 at 10:27 AM, Adam Bellemare  >
> wrote:
>
> > Matthias
> >
> > Thanks for the links. I have seen those before but I will dig deeper into
> > them, especially around the CombinedKey and the flush + cache + rangescan
> > functionality. I believe Jan had a PR with many of the changes in there,
> > perhaps I can use some of the work that was done there to help leverage a
> > similar (or identical) design.
> >
> > I will certainly be able to make a PoC before going to vote on this one.
> It
> > is a larger change and I suspect that we will need to review some of the
> > finer points to ensure that the design is still suitable and sufficiently
> > performant. I'll post back when I have something mo

[jira] [Created] (KAFKA-7088) Kafka streams thread waits infinitely on transaction init

2018-06-21 Thread Lukasz Gluchowski (JIRA)
Lukasz Gluchowski created KAFKA-7088:


 Summary: Kafka streams thread waits infinitely on transaction init
 Key: KAFKA-7088
 URL: https://issues.apache.org/jira/browse/KAFKA-7088
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 1.0.1
 Environment: Linux 4.14.33-51.37.amzn1.x86_64 #1 SMP Thu May 3 
20:07:43 UTC 2018 
kafka-streams (client) 1.0.1
kafka broker 1.1.0
Java version:
OpenJDK Runtime Environment (build 1.8.0_171-b10)
OpenJDK 64-Bit Server VM (build 25.171-b10, mixed mode)

kafka config overrides:
num.stream.threads: 6
session.timeout.ms: 1
request.timeout.ms: 11000
fetch.max.wait.ms: 500
max.poll.records: 1000

topic has 24 partitions
Reporter: Lukasz Gluchowski


A Kafka Streams application thread stops processing without any feedback. The 
topic has 24 partitions and I noticed that processing stopped only for some 
partitions. I will describe what happened to partition 10. The application is 
still running (now for about 8 hours), that thread is hanging there, and no 
rebalancing has taken place.

There is no error (we have a custom `Thread.UncaughtExceptionHandler` which was 
not called). I noticed that after a couple of minutes the stream stopped 
processing (at offset 32606948, where the log-end-offset is 33472402).

The broker itself is not reporting any active consumer in that consumer group, 
and the only info I was able to gather was from a thread dump:
{code:java}
"mp_ads_publisher_pro_madstorage-web-corotos-prod-9db804ae-2a7a-431f-be09-392ab38cd8a2-StreamThread-33"
 #113 prio=5 os_prio=0 tid=0x7fe07c6349b0 nid=0xf7a waiting on condition 
[0x7fe0215d4000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0xfec6a2f8> (a 
java.util.concurrent.CountDownLatch$Sync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
at 
java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
at 
org.apache.kafka.clients.producer.internals.TransactionalRequestResult.await(TransactionalRequestResult.java:50)
at 
org.apache.kafka.clients.producer.KafkaProducer.initTransactions(KafkaProducer.java:554)
at 
org.apache.kafka.streams.processor.internals.StreamTask.<init>(StreamTask.java:151)
at 
org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:404)
at 
org.apache.kafka.streams.processor.internals.StreamThread$TaskCreator.createTask(StreamThread.java:365)
at 
org.apache.kafka.streams.processor.internals.StreamThread$AbstractTaskCreator.createTasks(StreamThread.java:350)
at 
org.apache.kafka.streams.processor.internals.TaskManager.addStreamTasks(TaskManager.java:137)
at 
org.apache.kafka.streams.processor.internals.TaskManager.createTasks(TaskManager.java:88)
at 
org.apache.kafka.streams.processor.internals.StreamThread$RebalanceListener.onPartitionsAssigned(StreamThread.java:259)
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onJoinComplete(ConsumerCoordinator.java:264)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.joinGroupIfNeeded(AbstractCoordinator.java:367)
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:316)
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.poll(ConsumerCoordinator.java:295)
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1146)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:)
at 
org.apache.kafka.streams.processor.internals.StreamThread.pollRequests(StreamThread.java:851)
at 
org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:808)
at 
org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:774)
at 
org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:744){code}
 

I tried restarting the application once but the situation repeated: the thread 
read some data, committed offsets, and stopped processing, leaving it in the 
wait state.
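
For what it's worth, the hang can be approximated outside of Streams with a 
bare transactional producer; the broker address and transactional.id below are 
made up for illustration (Streams derives one transactional.id per task when 
processing.guarantee=exactly_once is enabled):
{code:java}
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class InitTransactionsRepro {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "my-app-0_10");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // If the transaction coordinator never answers, this call parks on
            // a CountDownLatch exactly as in the stack trace above, and there
            // is no timeout parameter to bound the wait.
            producer.initTransactions();
        }
    }
}
{code}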



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] 1.1.1 RC0

2018-06-21 Thread Ismael Juma
Thanks Dong. One issue that came up is that Gradle dependency resolution no
longer works with Java 7 due to a minimum TLS version change in Maven
Central. Gradle 4.8.1 solves the issue (
https://github.com/apache/kafka/pull/5263). This means that if someone
tries to build from source with Java 7, it won't work unless they have the
artifacts in the local cache already. Do you think it makes sense to do RC1
with the Gradle bump to avoid this issue?

Ismael

On Tue, Jun 19, 2018 at 4:29 PM Dong Lin  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the first candidate for release of Apache Kafka 1.1.1.
>
> Apache Kafka 1.1.1 is a bug-fix release for the 1.1 branch that was first
> released with 1.1.0 about 3 months ago. We have fixed about 25 issues since
> that release. A few of the more significant fixes include:
>
> KAFKA-6925  - Fix memory
> leak in StreamsMetricsThreadImpl
> KAFKA-6937  - In-sync
> replica delayed during fetch if replica throttle is exceeded
> KAFKA-6917  - Process
> txn
> completion asynchronously to avoid deadlock
> KAFKA-6893  - Create
> processors before starting acceptor to avoid ArithmeticException
> KAFKA-6870  -
> Fix ConcurrentModificationException in SampledStat
> KAFKA-6878  - Fix
> NullPointerException when querying global state store
> KAFKA-6879  - Invoke
> session init callbacks outside lock to avoid Controller deadlock
> KAFKA-6857  - Prevent
> follower from truncating to the wrong offset if undefined leader epoch is
> requested
> KAFKA-6854  - Log
> cleaner
> fails with transaction markers that are deleted during clean
> KAFKA-6747  - Check
> whether there is in-flight transaction before aborting transaction
> KAFKA-6748  - Double
> check before scheduling a new task after the punctuate call
> KAFKA-6739  -
> Fix IllegalArgumentException when down-converting from V2 to V0/V1
> KAFKA-6728  -
> Fix NullPointerException when instantiating the HeaderConverter
>
> Kafka 1.1.1 release plan:
> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.1.1
>
> Release notes for the 1.1.1 release:
> http://home.apache.org/~lindong/kafka-1.1.1-rc0/RELEASE_NOTES.html
>
> *** Please download, test and vote by Thursday, Jun 22, 12pm PT ***
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> http://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> http://home.apache.org/~lindong/kafka-1.1.1-rc0/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/
>
> * Tag to be voted upon (off 1.1 branch) is the 1.1.1-rc0 tag:
> https://github.com/apache/kafka/tree/1.1.1-rc0
>
> * Documentation:
> http://kafka.apache.org/11/documentation.html
>
> * Protocol:
> http://kafka.apache.org/11/protocol.html
>
> * Successful Jenkins builds for the 1.1 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-1.1-jdk7/150/
>
> Please test and verify the release artifacts and submit a vote for this RC,
> or report any issues so we can fix them and get a new RC out ASAP. Although
> this release vote requires PMC votes to pass, testing, votes, and bug
> reports are valuable and appreciated from everyone.
>
> Cheers,
> Dong
>


Jenkins build is back to normal : kafka-2.0-jdk8 #50

2018-06-21 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-7087) Option in TopicCommand to report not preferred leaders

2018-06-21 Thread Viktor Somogyi (JIRA)
Viktor Somogyi created KAFKA-7087:
-

 Summary: Option in TopicCommand to report not preferred leaders
 Key: KAFKA-7087
 URL: https://issues.apache.org/jira/browse/KAFKA-7087
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Viktor Somogyi
Assignee: Viktor Somogyi


Options exist in topic describe for reporting unavailable and lagging 
partitions, but it is a common request to also report partitions where the 
active leader is not the preferred one. This JIRA adds that extra option to 
TopicCommand.
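
A sketch of the check the new option would perform, expressed against the 
AdminClient (topic name and bootstrap address are illustrative):
{code:java}
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.TopicDescription;
import org.apache.kafka.common.Node;
import org.apache.kafka.common.TopicPartitionInfo;

import java.util.Collections;
import java.util.Properties;

public class PreferredLeaderCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription desc = admin.describeTopics(Collections.singleton("test"))
                    .all().get().get("test");
            for (TopicPartitionInfo p : desc.partitions()) {
                // By convention the preferred leader is the first replica in
                // the assignment.
                Node preferred = p.replicas().get(0);
                if (p.leader() == null || p.leader().id() != preferred.id()) {
                    System.out.printf("partition %d: leader=%s, preferred=%s%n",
                            p.partition(), p.leader(), preferred);
                }
            }
        }
    }
}
{code}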



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #2758

2018-06-21 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Update version for doc to 2.0.0 (#5262)

--
[...truncated 434.33 KB...]
kafka.zookeeper.ZooKeeperClientTest > 
testBlockOnRequestCompletionFromStateChangeHandler STARTED

kafka.zookeeper.ZooKeeperClientTest > 
testBlockOnRequestCompletionFromStateChangeHandler PASSED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString STARTED

kafka.zookeeper.ZooKeeperClientTest > testUnresolvableConnectString PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData STARTED

kafka.zookeeper.ZooKeeperClientTest > testPipelinedGetData PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChange 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetChildrenExistingZNodeWithChildren 
PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline STARTED

kafka.zookeeper.ZooKeeperClientTest > testMixedPipeline PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetDataExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiry STARTED

kafka.zookeeper.ZooKeeperClientTest > testSessionExpiry PASSED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testSetDataNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testDeleteNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testExistsExistingZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics PASSED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion STARTED

kafka.zookeeper.ZooKeeperClientTest > testZNodeChangeHandlerForDeletion PASSED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode STARTED

kafka.zookeeper.ZooKeeperClientTest > testGetAclNonExistentZNode PASSED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure 
STARTED

kafka.zookeeper.ZooKeeperClientTest > testStateChangeHandlerForAuthFailure 
PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone STARTED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testConnectionId STARTED

kafka.network.SocketServerTest > testConnectionId PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > testNoOpAction STARTED

kafka.network.SocketServerTest > testNoOpAction PASSED

kafka.network.SocketServerTest > simpleRequest STARTED

kafka.network.SocketServerTest > simpleRequest PASSED

kafka.network.SocketServerTest > closingChannelException STARTED

kafka.network.SocketServerTest > closingChannelException PASSED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingInProgress STARTED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottle

Re: [VOTE] 2.0.0 RC0

2018-06-21 Thread Rajini Sivaram
Sorry, the documentation does go live with the RC (thanks to Ismael for
pointing this out), so here are the links:

* Documentation:

http://kafka.apache.org/20/documentation.html


* Protocol:

http://kafka.apache.org/20/protocol.html



Regards,


Rajini


On Wed, Jun 20, 2018 at 9:08 PM, Rajini Sivaram 
wrote:

> Hello Kafka users, developers and client-developers,
>
>
> This is the first candidate for release of Apache Kafka 2.0.0.
>
>
> This is a major version release of Apache Kafka. It includes 40 new KIPs and
> several critical bug fixes. Please see the 2.0.0 release plan for more
> details:
>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80448820
>
>
> A few notable highlights:
>
>- Prefixed wildcard ACLs (KIP-290), Fine grained ACLs for CreateTopics
>(KIP-277)
>- SASL/OAUTHBEARER implementation (KIP-255)
>- Improved quota communication and customization of quotas (KIP-219,
>KIP-257)
>- Efficient memory usage for down conversion (KIP-283)
>- Fix log divergence between leader and follower during fast leader
>failover (KIP-279)
>- Drop support for Java 7 and remove deprecated code including old
>scala clients
>- Connect REST extension plugin, support for externalizing secrets and
>improved error handling (KIP-285, KIP-297, KIP-298 etc.)
>- Scala API for Kafka Streams and other Streams API improvements
>(KIP-270, KIP-150, KIP-245, KIP-251 etc.)
>
>
> Release notes for the 2.0.0 release:
>
> http://home.apache.org/~rsivaram/kafka-2.0.0-rc0/RELEASE_NOTES.html
>
>
> *** Please download, test and vote by Monday, June 25, 4pm PT
>
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
>
> http://kafka.apache.org/KEYS
>
>
> * Release artifacts to be voted upon (source and binary):
>
> http://home.apache.org/~rsivaram/kafka-2.0.0-rc0/
>
>
> * Maven artifacts to be voted upon:
>
> https://repository.apache.org/content/groups/staging/
>
>
> * Javadoc:
>
> http://home.apache.org/~rsivaram/kafka-2.0.0-rc0/javadoc/
>
>
> * Tag to be voted upon (off 2.0 branch) is the 2.0.0 tag:
>
> https://github.com/apache/kafka/tree/2.0.0-rc0
>
>
> * Documentation:
>
> http://home.apache.org/~rsivaram/kafka-2.0.0-rc0/kafka_2.11-
> 2.0.0-site-docs.tgz
>
> (Since documentation cannot go live until 2.0.0 is released, please
> download and verify)
>
>
> * Successful Jenkins builds for the 2.0 branch:
>
> Unit/integration tests: https://builds.apache.org/job/kafka-2.0-jdk8/48/
>
> System tests: https://jenkins.confluent.io/job/system-test-kafka/jo
> b/2.0/6/ (2 failures are known flaky tests)
>
>
>
> Please test and verify the release artifacts and submit a vote for this RC
> or report any issues so that we can fix them and roll out a new RC ASAP!
>
> Although this release vote requires PMC votes to pass, testing, votes,
> and bug
> reports are valuable and appreciated from everyone.
>
>
> Thanks,
>
>
> Rajini
>
>
>


[jira] [Created] (KAFKA-7086) Kafka server process dies after try deleting old log files under Windows 10

2018-06-21 Thread Cezary Wagner (JIRA)
Cezary Wagner created KAFKA-7086:


 Summary: Kafka server process dies after try deleting old log 
files under Windows 10
 Key: KAFKA-7086
 URL: https://issues.apache.org/jira/browse/KAFKA-7086
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 1.1.0
 Environment: Windows 10, Windows Server 2012 R2
Reporter: Cezary Wagner


Kafka dies every time with an error once log.retention.hours is reached.
{noformat}
# Log Retention Policy #

# The following configurations control the disposal of log segments. The policy 
can
# be set to delete segments after a period of time, or after a given size has 
accumulated.
# A segment will be deleted whenever *either* of these criteria are met. 
Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log 
unless the remaining
# segments drop below log.retention.bytes. Functions independently of 
log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log 
segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted 
according
# to the retention policies
log.retention.check.interval.ms=30{noformat}
Exception raised:


{noformat}
> C:\root\kafka_2.12-1.1.0\data\__consumer_offsets-3\.log.swap:
>  The process cannot access the file because it is being used by
> another process.

    at sun.nio.fs.WindowsException.translateToIOException(Unknown Source)
    at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown Source)
    at sun.nio.fs.WindowsFileCopy.move(Unknown Source)
    at sun.nio.fs.WindowsFileSystemProvider.move(Unknown Source)
    at java.nio.file.Files.move(Unknown Source)
    at 
org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:697)
    at 
org.apache.kafka.common.record.FileRecords.renameTo(FileRecords.java:212)
    at kafka.log.LogSegment.changeFileSuffixes(LogSegment.scala:415)
    at kafka.log.Log.replaceSegments(Log.scala:1644)
    at kafka.log.Cleaner.cleanSegments(LogCleaner.scala:535)
    at kafka.log.Cleaner.$anonfun$doClean$6(LogCleaner.scala:462)
    at kafka.log.Cleaner.$anonfun$doClean$6$adapted(LogCleaner.scala:461)
    at scala.collection.immutable.List.foreach(List.scala:389)
    at kafka.log.Cleaner.doClean(LogCleaner.scala:461)
    at kafka.log.Cleaner.clean(LogCleaner.scala:438)
    at kafka.log.LogCleaner$CleanerThread.cleanOrSleep(LogCleaner.scala:305)
    at kafka.log.LogCleaner$CleanerThread.doWork(LogCleaner.scala:291)
    at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:82)
    Suppressed: java.nio.file.FileSystemException: 
C:\root\kafka_2.12-1.1.0\data\__consumer_offsets-3\.log.cleaned
 -> 
C:\root\kafka_2.12-1.1.0\data\__consumer_offsets-3\.log.swap:
 The process cannot access the file because it is being used by another 
process.

    at sun.nio.fs.WindowsException.translateToIOException(Unknown 
Source)
    at sun.nio.fs.WindowsException.rethrowAsIOException(Unknown 
Source)
    at sun.nio.fs.WindowsFileCopy.move(Unknown Source)
    at sun.nio.fs.WindowsFileSystemProvider.move(Unknown Source)
    at java.nio.file.Files.move(Unknown Source)
    at 
org.apache.kafka.common.utils.Utils.atomicMoveWithFallback(Utils.java:694)
    ... 12 more
[2018-06-21 13:06:34,196] INFO [ReplicaManager broker=0] Stopping serving 
replicas in dir C:\root\kafka_2.12-1.1.0\data (kafka.server.ReplicaManager)
[2018-06-21 13:06:34,209] INFO [ReplicaFetcherManager on broker 0] Removed 
fetcher for partitions 
__consumer_offsets-22,test-14,__consumer_offsets-30,test-6,__consumer_offsets-8,__consumer_offsets-21,test-17,__consumer_offsets-4,INTEGRATION_TESTS_DEBUG.monitoring_events-0,test-20,__consumer_offsets-27,__consumer_offsets-7,__consumer_offsets-9,test-29,__consumer_offsets-46,test-23,test-24,test-11,test-10,__consumer_offsets-25,__consumer_offsets-35,__consumer_offsets-41,__consumer_offsets-33,__consumer_offsets-23,__consumer_offsets-49,test-28,__consumer_offsets-47,test-19,__consumer_offsets-16,test-0,__consumer_offsets-28,test-7,__consumer_offsets-31,__consumer_offsets-36,__consumer_offsets-42,__consumer_offsets-3,test-18,__consumer_offsets-18,test-22,test-25,__consumer_offsets-37,test-5,__consumer_offsets-15,__consumer_offsets-24,__consumer_offsets-38,__consumer_offsets-17,test-8,__consumer_offsets-48,test-1,__consumer_offsets-19,test-26,__consumer_offsets-11,__consumer_offsets-13,
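
A minimal sketch of the Windows-specific behaviour that appears to be at play 
here (an illustration, not a diagnosis): a file that still has an open memory 
mapping - the broker keeps index files mapped - cannot be moved on Windows, 
while the same move succeeds on Linux. File names below are made up.
{code:java}
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;

public class MappedFileRenameRepro {
    public static void main(String[] args) throws Exception {
        Path src = Paths.get("segment.log.cleaned");
        Files.write(src, new byte[]{1, 2, 3});

        try (FileChannel ch = new RandomAccessFile(src.toFile(), "rw").getChannel()) {
            // An open mapping holds a Windows file handle on the segment.
            ch.map(FileChannel.MapMode.READ_WRITE, 0, 3);

            // On Windows this typically fails with "the process cannot access
            // the file", as in the trace above; on Linux it succeeds.
            Files.move(src, Paths.get("segment.log.swap"),
                    StandardCopyOption.ATOMIC_MOVE);
        }
    }
}
{code}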

Build failed in Jenkins: kafka-2.0-jdk8 #49

2018-06-21 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-6800: Update SASL/PLAIN and SCRAM docs to use KIP-86 
callbacks

--
[...truncated 433.19 KB...]
kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOfflinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOfflinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates 
STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates 
PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup 
STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup 
PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOfflinePartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOfflinePartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransition PASSED

kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
STARTED

kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
PASSED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent STARTED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent PASSED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException 
STARTED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone STARTED

kafka.network.SocketServerTest > 
testSendActionResponseWithThrottledChannelWhereThrottlingAlreadyDone PASSED

kafka.network.SocketServerTest > controlThrowable STARTED

kafka.network.SocketServerTest > controlThrowable PASSED

kafka.network.SocketServerTest > testRequestMetricsAfterStop STARTED

kafka.network.SocketServerTest > testRequestMetricsAfterStop PASSED

kafka.network.SocketServerTest > testConnectionIdReuse STARTED

kafka.network.SocketServerTest > testConnectionIdReuse PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > testMaxConnectionsPerIp PASSED

kafka.network.SocketServerTest > testConnectionId STARTED

kafka.network.SocketServerTest > testConnectionId PASSED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics STARTED

kafka.network.SocketServerTest > 
testBrokerSendAfterChannelClosedUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest > testNoOpAction STARTED

kafka.network.SocketServerTest

[jira] [Created] (KAFKA-7085) Producer hangs on TimeoutException: Failed to update metadata after 60000 ms.

2018-06-21 Thread Martin Vysny (JIRA)
Martin Vysny created KAFKA-7085:
---

 Summary: Producer hangs on TimeoutException: Failed to update 
metadata after 60000 ms.
 Key: KAFKA-7085
 URL: https://issues.apache.org/jira/browse/KAFKA-7085
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 1.1.0
Reporter: Martin Vysny


I start Kafka in Docker:

docker run --rm -d -p 2181:2181 -p 9092:9092 --env ADVERTISED_HOST="127.0.0.1" 
spotify/kafka

 

Then I connect a KafkaProducer to that Kafka (127.0.0.1:9092) and immediately 
call producer.send(). The send() method always blocks for 60s and then fails, 
and the callback receives a TimeoutException. The producer is therefore unable 
to send any message, which renders it useless.

A workaround is to sleep for 1-2 seconds after the producer is constructed and 
before the first send() is invoked - that apparently gives Kafka enough time to 
synchronize whatever it needs, so send() no longer blocks endlessly.
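
A sketch of a less fragile workaround: bound the wait with the standard 
max.block.ms producer setting, so send() fails fast with a clear error instead 
of hanging for the full 60s (the 5000 ms value and topic name are arbitrary):
{code:java}
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class BoundedMetadataWait {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // send() blocks on the first metadata fetch for at most max.block.ms
        // (default 60000 ms, which matches the 60s hang reported here).
        props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, "5000");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test", "key", "value"), (md, e) -> {
                if (e != null) e.printStackTrace();
            });
            producer.flush();
        }
    }
}
{code}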



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-317: Transparent Data Encryption

2018-06-21 Thread Viktor Somogyi
Hi Sönke,

Compressing before encrypting has its dangers as well. Suppose you have a
known compression format which adds a magic header and you're using a block
cipher with a small enough block; then it becomes much easier to figure out
the encryption key. For instance you can look at Snappy's stream
identifier: https://github.com/google/snappy/blob/master/framing_format.txt
. Based on this you should only use block ciphers whose block sizes are
much larger than 6 bytes. AES for instance should be good with its 128 bits
= 16 bytes, but even this isn't entirely secure, as the first 6 bytes have
already leaked some information - how much depends on the cipher.
Also, if we suppose that an adversary accesses a broker and takes all the
data, they'll have a much easier job decrypting it, as they'll have many
more examples to work with.
So overall we should make sure to define and document which encryption
schemes are compatible with the supported compression methods and the
level of security they provide, to make sure users are fully aware of the
security implications.
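
As an illustration (plain JCE, not something the KIP specifies): encrypting
the compressed batch under a fresh random IV per batch means identical
compression headers never produce identical ciphertext:

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;
import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;

public class BatchEncryptionSketch {
    public static void main(String[] args) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();

        // Stand-in for a Snappy-framed record batch with its magic header.
        byte[] compressedBatch = "sNaPpY...batch bytes".getBytes(StandardCharsets.UTF_8);

        // Fresh random IV per batch: the known stream identifier at the
        // front of every batch then encrypts differently every time.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(compressedBatch);
        System.out.println("ciphertext length: " + ciphertext.length);
    }
}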

Cheers,
Viktor

On Tue, Jun 19, 2018 at 11:55 AM Sönke Liebau
 wrote:

> Hi Stephane,
>
> thanks for pointing out the broken pictures, I fixed those.
>
> Regarding encrypting before or after batching the messages, you are
> correct, I had not thought of compression and how this changes things.
> Encrypted data does not really compress well. My reasoning at the time
> of writing was that if we encrypt the entire batch we'd have to wait
> for the batch to be full before starting to encrypt. Whereas with per
> message encryption we can encrypt them as they come in and more or
> less have them ready for sending when the batch is complete.
> However I think the difference will probably not be that large (will
> do some testing) and offset by just encrypting once instead of many
> times, which has a certain overhead every time. Also, from a security
> perspective encrypting longer chunks of data is preferable - another
> benefit.
>
> This does however take away the ability of the broker to see the
> individual records inside the encrypted batch, so this would need to
> be stored and retrieved as a single record - just like is done for
> compressed batches. I am not 100% sure that this won't create issues,
> especially when considering transactions, I will need to look at the
> compression code some more. In essence though, since it works for
> compression I see no reason why it can't be made to work here.
>
> On a different note, going down this route might make us reconsider
> storing the key with the data, as this might significantly reduce
> storage overhead - still much higher than just storing them once
> though.
>
> Best regards,
> Sönke
>
> On Tue, Jun 19, 2018 at 5:59 AM, Stephane Maarek
>  wrote:
> > Hi Sonke
> >
> > Very much needed feature and discussion. FYI the image links seem broken.
> >
> > My 2 cents (if I understood correctly): you say "This process will be
> > implemented after Serializer and Interceptors are done with the message
> > right before it is added to the batch to be sent, in order to ensure that
> > existing serializers and interceptors keep working with encryption just
> > like without it."
> >
> > I think encryption should happen AFTER a batch is created, right before
> it
> > is sent. Reason is that if we want to still keep advantage of
> compression,
> > encryption needs to happen after it (and I believe compression happens
> on a
> > batch level).
> > So to me for a producer: serializer / interceptors => batching =>
> > compression => encryption => send.
> > and the inverse for a consumer.
> >
> > Regards
> > Stephane
> >
> > On 19 June 2018 at 06:46, Sönke Liebau  .invalid>
> > wrote:
> >
> >> Hi everybody,
> >>
> >> I've created a draft version of KIP-317 which describes the addition
> >> of transparent data encryption functionality to Kafka.
> >>
> >> Please consider this as a basis for discussion - I am aware that this
> >> is not at a level of detail sufficient for implementation, but I
> >> wanted to get some feedback from the community on the general idea
> >> before spending more time on this.
> >>
> >> Link to the KIP is:
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >> 317%3A+Add+transparent+data+encryption+functionality
> >>
> >> Best regards,
> >> Sönke
> >>
>
>
>
> --
> Sönke Liebau
> Partner
> Tel. +49 179 7940878
> OpenCore GmbH & Co. KG - Thomas-Mann-Straße 8 - 22880 Wedel - Germany
>


Re: [VOTE] 1.1.1 RC0

2018-06-21 Thread Mickael Maison
+1 (non-binding)
Using kafka_2.12-1.1.1.tgz, I ran quick start and checked signatures

On Thu, Jun 21, 2018 at 10:35 AM, Manikumar  wrote:
> +1 (non-binding)  Ran test,  Verified quick start,  producer/consumer perf
> tests
>
> On Thu, Jun 21, 2018 at 11:58 AM zhenya Sun  wrote:
>
>> +1 non-binding
>>
>> > 在 2018年6月21日,下午2:18,Andras Beni  写道:
>> >
>> > +1 (non-binding)
>> >
>> > Built .tar.gz, created a cluster from it and ran a basic end-to-end test:
>> > performed a rolling restart while console-producer and console-consumer
>> ran
>> > at around 20K messages/sec. No errors or data loss.
>> >
>> > Ran unit and integration tests successfully 3 out of 5 times. Encountered
>> > some flakies:
>> > -
>> DescribeConsumerGroupTest.testDescribeGroupWithShortInitializationTimeout
>> > - LogDirFailureTest.testIOExceptionDuringCheckpoint
>> > - SimpleAclAuthorizerTest.testHighConcurrencyModificationOfResourceAcls
>> >
>> >
>> > Andras
>> >
>> >
>> > On Wed, Jun 20, 2018 at 4:59 AM Ted Yu  wrote:
>> >
>> >> +1
>> >>
>> >> Ran unit test suite which passed.
>> >>
>> >> Checked signatures.
>> >>
>> >> On Tue, Jun 19, 2018 at 4:47 PM, Dong Lin  wrote:
>> >>
>> >>> Re-send to kafka-clie...@googlegroups.com
>> >>>
>> >>> On Tue, Jun 19, 2018 at 4:29 PM, Dong Lin  wrote:
>> >>>
>>  Hello Kafka users, developers and client-developers,
>> 
>>  This is the first candidate for release of Apache Kafka 1.1.1.
>> 
>>  Apache Kafka 1.1.1 is a bug-fix release for the 1.1 branch that was
>> >> first
>>  released with 1.1.0 about 3 months ago. We have fixed about 25 issues
>> >>> since
>>  that release. A few of the more significant fixes include:
>> 
>>  KAFKA-6925  - Fix
>>  memory leak in StreamsMetricsThreadImpl
>>  KAFKA-6937  -
>> >> In-sync
>>  replica delayed during fetch if replica throttle is exceeded
>>  KAFKA-6917  -
>> >> Process
>>  txn completion asynchronously to avoid deadlock
>>  KAFKA-6893  -
>> Create
>>  processors before starting acceptor to avoid ArithmeticException
>>  KAFKA-6870  -
>>  Fix ConcurrentModificationException in SampledStat
>>  KAFKA-6878  - Fix
>>  NullPointerException when querying global state store
>>  KAFKA-6879  -
>> Invoke
>>  session init callbacks outside lock to avoid Controller deadlock
>>  KAFKA-6857  -
>> >> Prevent
>>  follower from truncating to the wrong offset if undefined leader epoch
>> >> is
>>  requested
>>  KAFKA-6854  - Log
>>  cleaner fails with transaction markers that are deleted during clean
>>  KAFKA-6747  - Check
>>  whether there is in-flight transaction before aborting transaction
>>  KAFKA-6748  -
>> Double
>>  check before scheduling a new task after the punctuate call
>>  KAFKA-6739  -
>>  Fix IllegalArgumentException when down-converting from V2 to V0/V1
>>  KAFKA-6728  -
>>  Fix NullPointerException when instantiating the HeaderConverter
>> 
>>  Kafka 1.1.1 release plan:
>>  https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.1.1
>> 
>>  Release notes for the 1.1.1 release:
>>  http://home.apache.org/~lindong/kafka-1.1.1-rc0/RELEASE_NOTES.html
>> 
>>  *** Please download, test and vote by Thursday, Jun 22, 12pm PT ***
>> 
>>  Kafka's KEYS file containing PGP keys we use to sign the release:
>>  http://kafka.apache.org/KEYS
>> 
>>  * Release artifacts to be voted upon (source and binary):
>>  http://home.apache.org/~lindong/kafka-1.1.1-rc0/
>> 
>>  * Maven artifacts to be voted upon:
>>  https://repository.apache.org/content/groups/staging/
>> 
>>  * Tag to be voted upon (off 1.1 branch) is the 1.1.1-rc0 tag:
>>  https://github.com/apache/kafka/tree/1.1.1-rc0
>> 
>>  * Documentation:
>>  http://kafka.apache.org/11/documentation.html
>> 
>>  * Protocol:
>>  http://kafka.apache.org/11/protocol.html
>> 
>>  * Successful Jenkins builds for the 1.1 branch:
>>  Unit/integration tests: https://builds.apache.org/job/
>> >>> kafka-1.1-jdk7/150/
>> 
>>  Please test and verify the release artifacts and submit a vote for
>> this
>> >>> RC,
>>  or report any issues so we can fix them and get a new RC out ASAP.
>> >>> Although
>>  this release vote requires PM

Re: [VOTE] 1.1.1 RC0

2018-06-21 Thread Manikumar
+1 (non-binding)  Ran test,  Verified quick start,  producer/consumer perf
tests

On Thu, Jun 21, 2018 at 11:58 AM zhenya Sun  wrote:

> +1 non-binding
>
> > 在 2018年6月21日,下午2:18,Andras Beni  写道:
> >
> > +1 (non-binding)
> >
> > Built .tar.gz, created a cluster from it and ran a basic end-to-end test:
> > performed a rolling restart while console-producer and console-consumer
> ran
> > at around 20K messages/sec. No errors or data loss.
> >
> > Ran unit and integration tests successfully 3 out of 5 times. Encountered
> > some flakies:
> > -
> DescribeConsumerGroupTest.testDescribeGroupWithShortInitializationTimeout
> > - LogDirFailureTest.testIOExceptionDuringCheckpoint
> > - SimpleAclAuthorizerTest.testHighConcurrencyModificationOfResourceAcls
> >
> >
> > Andras
> >
> >
> > On Wed, Jun 20, 2018 at 4:59 AM Ted Yu  wrote:
> >
> >> +1
> >>
> >> Ran unit test suite which passed.
> >>
> >> Checked signatures.
> >>
> >> On Tue, Jun 19, 2018 at 4:47 PM, Dong Lin  wrote:
> >>
> >>> Re-send to kafka-clie...@googlegroups.com
> >>>
> >>> On Tue, Jun 19, 2018 at 4:29 PM, Dong Lin  wrote:
> >>>
>  Hello Kafka users, developers and client-developers,
> 
>  This is the first candidate for release of Apache Kafka 1.1.1.
> 
>  Apache Kafka 1.1.1 is a bug-fix release for the 1.1 branch that was
> >> first
>  released with 1.1.0 about 3 months ago. We have fixed about 25 issues
> >>> since
>  that release. A few of the more significant fixes include:
> 
>  KAFKA-6925  - Fix
>  memory leak in StreamsMetricsThreadImpl
>  KAFKA-6937  -
> >> In-sync
>  replica delayed during fetch if replica throttle is exceeded
>  KAFKA-6917  -
> >> Process
>  txn completion asynchronously to avoid deadlock
>  KAFKA-6893  -
> Create
>  processors before starting acceptor to avoid ArithmeticException
>  KAFKA-6870  -
>  Fix ConcurrentModificationException in SampledStat
>  KAFKA-6878  - Fix
>  NullPointerException when querying global state store
>  KAFKA-6879  -
> Invoke
>  session init callbacks outside lock to avoid Controller deadlock
>  KAFKA-6857  -
> >> Prevent
>  follower from truncating to the wrong offset if undefined leader epoch
> >> is
>  requested
>  KAFKA-6854  - Log
>  cleaner fails with transaction markers that are deleted during clean
>  KAFKA-6747  - Check
>  whether there is in-flight transaction before aborting transaction
>  KAFKA-6748  -
> Double
>  check before scheduling a new task after the punctuate call
>  KAFKA-6739  -
>  Fix IllegalArgumentException when down-converting from V2 to V0/V1
>  KAFKA-6728  -
>  Fix NullPointerException when instantiating the HeaderConverter
> 
>  Kafka 1.1.1 release plan:
>  https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.1.1
> 
>  Release notes for the 1.1.1 release:
>  http://home.apache.org/~lindong/kafka-1.1.1-rc0/RELEASE_NOTES.html
> 
>  *** Please download, test and vote by Thursday, Jun 22, 12pm PT ***
> 
>  Kafka's KEYS file containing PGP keys we use to sign the release:
>  http://kafka.apache.org/KEYS
> 
>  * Release artifacts to be voted upon (source and binary):
>  http://home.apache.org/~lindong/kafka-1.1.1-rc0/
> 
>  * Maven artifacts to be voted upon:
>  https://repository.apache.org/content/groups/staging/
> 
>  * Tag to be voted upon (off 1.1 branch) is the 1.1.1-rc0 tag:
>  https://github.com/apache/kafka/tree/1.1.1-rc0
> 
>  * Documentation:
>  http://kafka.apache.org/11/documentation.html
> 
>  * Protocol:
>  http://kafka.apache.org/11/protocol.html
> 
>  * Successful Jenkins builds for the 1.1 branch:
>  Unit/integration tests: https://builds.apache.org/job/
> >>> kafka-1.1-jdk7/150/
> 
>  Please test and verify the release artifacts and submit a vote for
> this
> >>> RC,
>  or report any issues so we can fix them and get a new RC out ASAP.
> >>> Although
>  this release vote requires PMC votes to pass, testing, votes, and bug
>  reports are valuable and appreciated from everyone.
> 
>  Cheers,
>  Dong
> 
> 
> 
> >>>
> >>
>
>


Re: [VOTE] 1.1.1 RC0

2018-06-21 Thread Satish Duggana
+1 (non-binding)

- Ran testAll/releaseTarGzAll on 1.1.0-rc0 tag
- Ran through quickstart of core/streams on builds.

Thanks,
Satish.


On Thu, Jun 21, 2018 at 11:51 AM, zhenya Sun  wrote:

> +1 non-binding
>
> > 在 2018年6月21日,下午2:18,Andras Beni  写道:
> >
> > +1 (non-binding)
> >
> > Built .tar.gz, created a cluster from it and ran a basic end-to-end test:
> > performed a rolling restart while console-producer and console-consumer
> ran
> > at around 20K messages/sec. No errors or data loss.
> >
> > Ran unit and integration tests successfully 3 out of 5 times. Encountered
> > some flakies:
> > - DescribeConsumerGroupTest.testDescribeGroupWithShortInit
> ializationTimeout
> > - LogDirFailureTest.testIOExceptionDuringCheckpoint
> > - SimpleAclAuthorizerTest.testHighConcurrencyModificationOfResourceAcls
> >
> >
> > Andras
> >
> >
> > On Wed, Jun 20, 2018 at 4:59 AM Ted Yu  wrote:
> >
> >> +1
> >>
> >> Ran unit test suite which passed.
> >>
> >> Checked signatures.
> >>
> >> On Tue, Jun 19, 2018 at 4:47 PM, Dong Lin  wrote:
> >>
> >>> Re-send to kafka-clie...@googlegroups.com
> >>>
> >>> On Tue, Jun 19, 2018 at 4:29 PM, Dong Lin  wrote:
> >>>
>  Hello Kafka users, developers and client-developers,
> 
>  This is the first candidate for release of Apache Kafka 1.1.1.
> 
>  Apache Kafka 1.1.1 is a bug-fix release for the 1.1 branch that was
> >> first
>  released with 1.1.0 about 3 months ago. We have fixed about 25 issues
> >>> since
>  that release. A few of the more significant fixes include:
> 
>  KAFKA-6925  - Fix
>  memory leak in StreamsMetricsThreadImpl
>  KAFKA-6937  -
> >> In-sync
>  replica delayed during fetch if replica throttle is exceeded
>  KAFKA-6917  -
> >> Process
>  txn completion asynchronously to avoid deadlock
>  KAFKA-6893  -
> Create
>  processors before starting acceptor to avoid ArithmeticException
>  KAFKA-6870  -
>  Fix ConcurrentModificationException in SampledStat
>  KAFKA-6878  - Fix
>  NullPointerException when querying global state store
>  KAFKA-6879  -
> Invoke
>  session init callbacks outside lock to avoid Controller deadlock
>  KAFKA-6857  -
> >> Prevent
>  follower from truncating to the wrong offset if undefined leader epoch
> >> is
>  requested
>  KAFKA-6854  - Log
>  cleaner fails with transaction markers that are deleted during clean
>  KAFKA-6747  - Check
>  whether there is in-flight transaction before aborting transaction
>  KAFKA-6748  -
> Double
>  check before scheduling a new task after the punctuate call
>  KAFKA-6739  -
>  Fix IllegalArgumentException when down-converting from V2 to V0/V1
>  KAFKA-6728  -
>  Fix NullPointerException when instantiating the HeaderConverter
> 
>  Kafka 1.1.1 release plan:
>  https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.1.1
> 
>  Release notes for the 1.1.1 release:
>  http://home.apache.org/~lindong/kafka-1.1.1-rc0/RELEASE_NOTES.html
> 
>  *** Please download, test and vote by Thursday, Jun 22, 12pm PT ***
> 
>  Kafka's KEYS file containing PGP keys we use to sign the release:
>  http://kafka.apache.org/KEYS
> 
>  * Release artifacts to be voted upon (source and binary):
>  http://home.apache.org/~lindong/kafka-1.1.1-rc0/
> 
>  * Maven artifacts to be voted upon:
>  https://repository.apache.org/content/groups/staging/
> 
>  * Tag to be voted upon (off 1.1 branch) is the 1.1.1-rc0 tag:
>  https://github.com/apache/kafka/tree/1.1.1-rc0
> 
>  * Documentation:
>  http://kafka.apache.org/11/documentation.html
> 
>  * Protocol:
>  http://kafka.apache.org/11/protocol.html
> 
>  * Successful Jenkins builds for the 1.1 branch:
>  Unit/integration tests: https://builds.apache.org/job/
> >>> kafka-1.1-jdk7/150/
> 
>  Please test and verify the release artifacts and submit a vote for
> this
> >>> RC,
>  or report any issues so we can fix them and get a new RC out ASAP.
> >>> Although
>  this release vote requires PMC votes to pass, testing, votes, and bug
>  reports are valuable and appreciated from everyone.
> 
>  Cheers,
>  Dong
> 
> 
> 
> >>>
> >>
>
>


[jira] [Created] (KAFKA-7084) NewTopicBuilder#config should accept Map<String, String> rather than Map<String, Object>

2018-06-21 Thread Chia-Ping Tsai (JIRA)
Chia-Ping Tsai created KAFKA-7084:
-

 Summary: NewTopicBuilder#config should accept Map<String, String> 
rather than Map<String, Object>
 Key: KAFKA-7084
 URL: https://issues.apache.org/jira/browse/KAFKA-7084
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai


The field "config" in NewTopic is Map[String, String] but 
NewTopicBuilder#config accept the Map[String, Object] and then call 
Object#toString to convert the Map[String, Object] to Map[String, String]. That 
is weird since users have to trace the source code to understand how kafka 
generate the Map[String, String].

we should deprecate NewTopicBuilder#config(Map[String, Object]) and add an 
alternative method NewTopicBuilder#config"s"(Map[String, String])
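
For comparison, the underlying AdminClient API already takes the string-typed 
map directly, so no toString conversion is involved (a minimal sketch; topic 
name and settings are illustrative):
{code:java}
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

public class CreateTopicSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");

        Map<String, String> configs = new HashMap<>();
        configs.put("retention.ms", "86400000");  // values stay plain strings
        configs.put("cleanup.policy", "delete");

        try (AdminClient admin = AdminClient.create(props)) {
            NewTopic topic = new NewTopic("demo-topic", 3, (short) 1)
                    .configs(configs);            // Map<String, String>
            admin.createTopics(Collections.singleton(topic)).all().get();
        }
    }
}
{code}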



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-319: Replace segments with segmentSize in WindowBytesStoreSupplier

2018-06-21 Thread Ted Yu
Window is a common term used in various stream processing systems, and it
is measured in units of time.

Segment doesn't seem to be as widely used in that context.
I think using "interval" in the method name would convey the meaning
clearly and intuitively.
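
To illustrate the difference being discussed, here is a sketch of the
supplier contract (a hypothetical before/after; these are not the final
method names):

public interface WindowStoreSupplierSketch {
    long windowSize();       // existing peer method: a duration in milliseconds

    @Deprecated
    int segments();          // old: a count of segments, easy to read as a size

    long segmentInterval();  // proposed: milliseconds covered by each segment
}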

Thanks


On Wed, Jun 20, 2018 at 1:31 PM, John Roesler  wrote:

> Hi Ted,
>
> Ah, when you made that comment to me before, I thought you meant as opposed
> to "segments". Now it makes sense that you meant as opposed to
> "segmentSize".
>
> I named it that way to match the peer method "windowSize", which is also a
> quantity of milliseconds.
>
> I agree that "interval" is more intuitive, but I think I favor consistency
> in this case. Does that seem reasonable?
>
> Thanks,
> -John
>
> On Wed, Jun 20, 2018 at 1:06 PM Ted Yu  wrote:
>
> > Normally size is not measured in time unit, such as milliseconds.
> > How about naming the new method segmentInterval ?
> > Thanks
> >  Original message From: John Roesler 
> > Date: 6/20/18  10:45 AM  (GMT-08:00) To: dev@kafka.apache.org Subject:
> > [DISCUSS] KIP-319: Replace segments with segmentSize in
> > WindowBytesStoreSupplier
> > Hello All,
> >
> > I'd like to propose KIP-319 to fix an issue I identified in KAFKA-7080.
> > Specifically, we're creating CachingWindowStore with the *number of
> > segments* instead of the *segment size*.
> >
> > Here's the jira: https://issues.apache.org/jira/browse/KAFKA-7080
> > Here's the KIP: https://cwiki.apache.org/confluence/x/mQU0BQ
> >
> > additionally, here's a draft PR for clarity:
> > https://github.com/apache/kafka/pull/5257
> >
> > Please let me know what you think!
> >
> > Thanks,
> > -John
> >
>