[jira] [Resolved] (KAFKA-6445) Remove deprecated metrics in 2.0

2018-06-14 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-6445.

   Resolution: Fixed
 Assignee: Dong Lin  (was: Charly Molter)
Fix Version/s: (was: 2.1.0)
   2.0.0

> Remove deprecated metrics in 2.0
> 
>
> Key: KAFKA-6445
> URL: https://issues.apache.org/jira/browse/KAFKA-6445
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Charly Molter
>Assignee: Dong Lin
>Priority: Trivial
> Fix For: 2.0.0
>
>
> As part of KIP-225 we've replaced a metric and deprecated the old one.
> We should remove the deprecated metrics in 2.0.0. This Jira is to track all of the
> metrics to remove in 2.0.0.
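For reference, a minimal sketch of reading the tag-based lag metric that KIP-225 introduced in place of the old name-encoded per-partition one; the metric/group names used below ("records-lag", "consumer-fetch-manager-metrics") and the topic are assumptions for illustration, not taken from this ticket:

    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.Metric;
    import org.apache.kafka.common.MetricName;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class LagMetricLookup {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "lag-demo");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

            try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
                // Look up the tag-based lag metric (assumed group
                // "consumer-fetch-manager-metrics", name "records-lag" with
                // "topic"/"partition" tags) rather than the deprecated
                // "{topic}-{partition}.records-lag" style name.
                for (Map.Entry<MetricName, ? extends Metric> e : consumer.metrics().entrySet()) {
                    MetricName name = e.getKey();
                    if ("records-lag".equals(name.name())
                            && "consumer-fetch-manager-metrics".equals(name.group())
                            && "my-topic".equals(name.tags().get("topic"))
                            && "0".equals(name.tags().get("partition"))) {
                        System.out.println("lag = " + e.getValue().metricValue());
                    }
                }
            }
        }
    }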



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-7055) Kafka Streams Processor API allows you to add sinks and processors without parent

2018-06-14 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-7055.

   Resolution: Fixed
Fix Version/s: 2.0.0

> Kafka Streams Processor API allows you to add sinks and processors without 
> parent
> -
>
> Key: KAFKA-7055
> URL: https://issues.apache.org/jira/browse/KAFKA-7055
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
>Reporter: Nikki Thean
>Assignee: Nikki Thean
>Priority: Minor
>  Labels: easyfix
> Fix For: 2.0.0
>
>
> The Kafka Streams Processor API allows you to define a Topology and connect 
> sources, processors, and sinks. From reading through the code, it seems that 
> you cannot forward a message to a downstream node unless it is explicitly 
> connected to the upstream node (from which you are forwarding the message) as 
> a child. Here is an example where you forward using the name of a downstream 
> node rather than the child index 
> ([https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/ProcessorContextImpl.java#L117]).
> However, I've been able to connect processors and sinks to the topology 
> without including parent names, i.e. with an empty vararg, using this method: 
> [https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/Topology.java#L423].
> As any attempt to forward a message to those nodes will throw a 
> StreamsException, I suggest throwing an exception if a processor or sink is 
> added without at least one upstream node. There is a method in 
> `InternalTopologyBuilder` that allows you to connect processors by name after 
> you add them to the topology, but it is not part of the external Processor 
> API.
> In addition (or alternatively), I suggest making [the error message for when 
> users try to forward messages to a node that is not 
> connected|https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/ProcessorContextImpl.java#L119]
>  more descriptive, like this one for when a user attempts to access a state 
> store that is not connected to the processor: 
> [https://github.com/apache/kafka/blob/trunk/streams/src/main/java/org/apache/kafka/streams/processor/internals/ProcessorContextImpl.java#L75-L81]
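A minimal sketch of the situation described above, using the pre-2.0 Processor API; the processor, topic, and class names are made up. The sink is accepted with no parent, and the forward from "proc" to "sink" only fails later, at runtime, with a StreamsException:

    import org.apache.kafka.streams.Topology;
    import org.apache.kafka.streams.processor.AbstractProcessor;

    public class OrphanSinkExample {
        public static void main(String[] args) {
            Topology topology = new Topology();
            topology.addSource("source", "input-topic");
            topology.addProcessor("proc", () -> new AbstractProcessor<String, String>() {
                @Override
                public void process(String key, String value) {
                    // "sink" was never connected as a child of "proc", so this
                    // forward throws a StreamsException at runtime rather than
                    // failing when the topology is built.
                    context().forward(key, value.toUpperCase(), "sink");
                }
            }, "source");
            // Accepted today even though no parent is given (empty vararg),
            // which is the gap the ticket suggests turning into an error.
            topology.addSink("sink", "output-topic");
        }
    }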



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-1.1-jdk7 #147

2018-06-14 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6975; Fix replica fetching from non-batch-aligned log start offset

--
[...truncated 232.88 KB...]
kafka.api.TransactionsTest > testSendOffsets STARTED

kafka.api.TransactionsTest > testSendOffsets PASSED

kafka.api.ConsumerBounceTest > testCloseDuringRebalance STARTED

kafka.api.ConsumerBounceTest > testCloseDuringRebalance PASSED

kafka.api.ConsumerBounceTest > testClose STARTED

kafka.api.ConsumerBounceTest > testClose PASSED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures STARTED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures PASSED

kafka.api.ConsumerBounceTest > testSubscribeWhenTopicUnavailable STARTED

kafka.api.ConsumerBounceTest > testSubscribeWhenTopicUnavailable PASSED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures STARTED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures SKIPPED

kafka.api.UserQuotaTest > testProducerConsumerOverrideUnthrottled STARTED

kafka.api.UserQuotaTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.UserQuotaTest > testThrottledProducerConsumer STARTED

kafka.api.UserQuotaTest > testThrottledProducerConsumer PASSED

kafka.api.UserQuotaTest > testQuotaOverrideDelete STARTED

kafka.api.UserQuotaTest > testQuotaOverrideDelete PASSED

kafka.api.UserQuotaTest > testThrottledRequest STARTED

kafka.api.UserQuotaTest > testThrottledRequest PASSED

kafka.api.ApiVersionTest > testMinVersionForMessageFormat STARTED

kafka.api.ApiVersionTest > testMinVersionForMessageFormat PASSED

kafka.api.ApiVersionTest > testApply STARTED

kafka.api.ApiVersionTest > testApply PASSED

kafka.api.SaslMultiMechanismConsumerTest > testMultipleBrokerMechanisms STARTED

kafka.api.SaslMultiMechanismConsumerTest > testMultipleBrokerMechanisms PASSED

kafka.api.SaslMultiMechanismConsumerTest > testCoordinatorFailover STARTED

kafka.api.SaslMultiMechanismConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslMultiMechanismConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslMultiMechanismConsumerTest > testSimpleConsumption PASSED

kafka.api.GroupCoordinatorIntegrationTest > 
testGroupCoordinatorPropagatesOfffsetsTopicCompressionCodec STARTED

kafka.api.GroupCoordinatorIntegrationTest > 
testGroupCoordinatorPropagatesOfffsetsTopicCompressionCodec PASSED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaSubscribe PASSED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaAssign STARTED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaAssign PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaAssign 
STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoConsumeWithDescribeAclViaAssign 
PASSED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe PASSED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign STARTED

kafka.api.SslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoGroupAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithDescribeAcl PASSED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaSubscribe STARTED

kafka.api.SslEndToEndAuthorizationTest > testProduceConsumeViaSubscribe PASSED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithoutDescribeAcl STARTED

kafka.api.SslEndToEndAuthorizationTest > testNoProduceWithoutDescribeAcl PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testZkAclsDisabled STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testZkAclsDisabled PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testCoordinatorFailover STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testCoordinatorFailover PASSED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption STARTED

kafka.api.SaslPlainPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.MetricsTest > testMetrics STARTED

kafka.api.MetricsTest > testMetrics PASSED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithCreateTime 
STARTED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithCreateTime 
PASSED

kafka.api.SslProducerSendTest > testClose STARTED

kafka.api.SslProducerSendTest > testClose PASSED

kafka.api.SslProducerSendTest > testFlush STARTED

kafka.api.SslProducerSendTest > testFlush PASSED

kafka.api.SslProducerSendTest > testSendToPartition STARTED

kafka.api.SslProducerSendTest > testSendToPartition PASSED

kafka.api.SslProducerSendTest > 

Jenkins build is back to normal : kafka-trunk-jdk8 #2737

2018-06-14 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-291: Have separate queues for control requests and data requests

2018-06-14 Thread Ted Yu
Currently in KafkaConfig.scala:

  val QueuedMaxRequests = 500

It would be good if you can include the default value for this new config
in the KIP.

Thanks
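
For illustration only (not from the KIP), a toy sketch of the separate-queue idea under discussion, with the control-plane queue bounded independently of queued.max.requests; the class, the second capacity parameter, and the routing flag below are all hypothetical:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    public class RequestQueues {
        // Existing data-plane bound (queued.max.requests, default 500).
        private final BlockingQueue<Object> dataRequestQueue;
        // Hypothetical separate bound for controller requests; a smaller
        // default is plausible since control traffic is low-rate.
        private final BlockingQueue<Object> controlRequestQueue;

        public RequestQueues(int queuedMaxRequests, int queuedMaxControlRequests) {
            this.dataRequestQueue = new ArrayBlockingQueue<>(queuedMaxRequests);
            this.controlRequestQueue = new ArrayBlockingQueue<>(queuedMaxControlRequests);
        }

        public void enqueue(Object request, boolean isControlRequest) throws InterruptedException {
            // Route by request type so a backlog of data requests cannot
            // delay controller requests queued behind it.
            if (isControlRequest) {
                controlRequestQueue.put(request);
            } else {
                dataRequestQueue.put(request);
            }
        }
    }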

On Thu, Jun 14, 2018 at 4:28 PM, Lucas Wang  wrote:

> Hi Ted, Dong
>
> I've updated the KIP by adding a new config, instead of reusing the
> existing one.
> Please take another look when you have time. Thanks a lot!
>
> Lucas
>
> On Thu, Jun 14, 2018 at 2:33 PM, Ted Yu  wrote:
>
> > bq.  that's a waste of resource if control request rate is low
> >
> > I don't know if control request rate can get to 100,000, likely not. Then
> > using the same bound as that for data requests seems high.
> >
> > On Wed, Jun 13, 2018 at 10:13 PM, Lucas Wang 
> > wrote:
> >
> > > Hi Ted,
> > >
> > > Thanks for taking a look at this KIP.
> > > Let's say today the setting of "queued.max.requests" in cluster A is
> > 1000,
> > > while the setting in cluster B is 100,000.
> > > The 100 times difference might have indicated that machines in cluster
> B
> > > have larger memory.
> > >
> > > By reusing the "queued.max.requests", the controlRequestQueue in
> cluster
> > B
> > > automatically
> > > gets a 100x capacity without explicitly bothering the operators.
> > > I understand the counter argument can be that maybe that's a waste of
> > > resource if control request
> > > rate is low and operators may want to fine tune the capacity of the
> > > controlRequestQueue.
> > >
> > > I'm ok with either approach, and can change it if you or anyone else
> > feels
> > > strong about adding the extra config.
> > >
> > > Thanks,
> > > Lucas
> > >
> > >
> > > On Wed, Jun 13, 2018 at 3:11 PM, Ted Yu  wrote:
> > >
> > > > Lucas:
> > > > Under Rejected Alternatives, #2, can you elaborate a bit more on why
> > the
> > > > separate config has bigger impact ?
> > > >
> > > > Thanks
> > > >
> > > > On Wed, Jun 13, 2018 at 2:00 PM, Dong Lin 
> wrote:
> > > >
> > > > > Hey Luca,
> > > > >
> > > > > Thanks for the KIP. Looks good overall. Some comments below:
> > > > >
> > > > > - We usually specify the full mbean for the new metrics in the KIP.
> > Can
> > > > you
> > > > > specify it in the Public Interface section similar to KIP-237
> > > > > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-237%3A+More+Controller+Health+Metrics>
> > > > > ?
> > > > >
> > > > > - Maybe we could follow the same pattern as KIP-153
> > > > > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-153%3A+Include+only+client+traffic+in+BytesOutPerSec+metric>,
> > > > > where we keep the existing sensor name "BytesInPerSec" and add a
> new
> > > > sensor
> > > > > "ReplicationBytesInPerSec", rather than replacing the sensor name "
> > > > > BytesInPerSec" with e.g. "ClientBytesInPerSec".
> > > > >
> > > > > - It seems that the KIP changes the semantics of the broker config
> > > > > "queued.max.requests" because the number of total requests queued
> in
> > > the
> > > > > broker will be no longer bounded by "queued.max.requests". This
> > > probably
> > > > > needs to be specified in the Public Interfaces section for
> > discussion.
> > > > >
> > > > >
> > > > > Thanks,
> > > > > Dong
> > > > >
> > > > >
> > > > > On Wed, Jun 13, 2018 at 12:45 PM, Lucas Wang <
> lucasatu...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > Hi Kafka experts,
> > > > > >
> > > > > > I created KIP-291 to add a separate queue for controller
> requests:
> > > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-291%
> > > > > > 3A+Have+separate+queues+for+control+requests+and+data+requests
> > > > > >
> > > > > > Can you please take a look and let me know your feedback?
> > > > > >
> > > > > > Thanks a lot for your time!
> > > > > > Regards,
> > > > > > Lucas
> > > > > >
> > > > >
> > > >
> > >
> >
>


Jenkins build is back to normal : kafka-0.10.2-jdk7 #213

2018-06-14 Thread Apache Jenkins Server
See 




[DISCUSS] KIP-316: Command-line overrides for ConnectDistributed worker properties

2018-06-14 Thread Kevin Lafferty
Hi all,

I created KIP-316, and I would like to initiate discussion.

The KIP is here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-316%3A+Command-line+overrides+for+ConnectDistributed+worker+properties

Thanks,
Kevin
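
Not the KIP's actual design, just a rough sketch of the general idea (command-line key=value pairs taking precedence over entries loaded from the worker properties file); the override syntax and class name are invented for illustration:

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.util.Properties;

    public class WorkerPropsOverrides {
        // Usage idea: connect-distributed worker.properties key1=value1 key2=value2
        public static Properties load(String[] args) throws IOException {
            Properties props = new Properties();
            try (FileInputStream in = new FileInputStream(args[0])) {
                props.load(in);
            }
            // Later arguments win over the file, mirroring the override idea.
            for (int i = 1; i < args.length; i++) {
                int eq = args[i].indexOf('=');
                if (eq > 0) {
                    props.setProperty(args[i].substring(0, eq), args[i].substring(eq + 1));
                }
            }
            return props;
        }
    }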


Jenkins build is back to normal : kafka-trunk-jdk10 #214

2018-06-14 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-7060) Command-line overrides for ConnectDistributed worker properties

2018-06-14 Thread Kevin Lafferty (JIRA)
Kevin Lafferty created KAFKA-7060:
-

 Summary: Command-line overrides for ConnectDistributed worker 
properties
 Key: KAFKA-7060
 URL: https://issues.apache.org/jira/browse/KAFKA-7060
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Reporter: Kevin Lafferty


This Jira is for tracking the implementation for 
[KIP-316|https://cwiki.apache.org/confluence/display/KAFKA/KIP-316%3A+Command-line+overrides+for+ConnectDistributed+worker+properties].



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-291: Have separate queues for control requests and data requests

2018-06-14 Thread Lucas Wang
Hi Ted, Dong

I've updated the KIP by adding a new config, instead of reusing the
existing one.
Please take another look when you have time. Thanks a lot!

Lucas

On Thu, Jun 14, 2018 at 2:33 PM, Ted Yu  wrote:

> bq.  that's a waste of resource if control request rate is low
>
> I don't know if control request rate can get to 100,000, likely not. Then
> using the same bound as that for data requests seems high.
>
> On Wed, Jun 13, 2018 at 10:13 PM, Lucas Wang 
> wrote:
>
> > Hi Ted,
> >
> > Thanks for taking a look at this KIP.
> > Let's say today the setting of "queued.max.requests" in cluster A is
> 1000,
> > while the setting in cluster B is 100,000.
> > The 100 times difference might have indicated that machines in cluster B
> > have larger memory.
> >
> > By reusing the "queued.max.requests", the controlRequestQueue in cluster
> B
> > automatically
> > gets a 100x capacity without explicitly bothering the operators.
> > I understand the counter argument can be that maybe that's a waste of
> > resource if control request
> > rate is low and operators may want to fine tune the capacity of the
> > controlRequestQueue.
> >
> > I'm ok with either approach, and can change it if you or anyone else
> feels
> > strong about adding the extra config.
> >
> > Thanks,
> > Lucas
> >
> >
> > On Wed, Jun 13, 2018 at 3:11 PM, Ted Yu  wrote:
> >
> > > Lucas:
> > > Under Rejected Alternatives, #2, can you elaborate a bit more on why
> the
> > > separate config has bigger impact ?
> > >
> > > Thanks
> > >
> > > On Wed, Jun 13, 2018 at 2:00 PM, Dong Lin  wrote:
> > >
> > > > Hey Luca,
> > > >
> > > > Thanks for the KIP. Looks good overall. Some comments below:
> > > >
> > > > - We usually specify the full mbean for the new metrics in the KIP.
> Can
> > > you
> > > > specify it in the Public Interface section similar to KIP-237
> > > > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-237%3A+More+Controller+Health+Metrics>
> > > > ?
> > > >
> > > > - Maybe we could follow the same pattern as KIP-153
> > > > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-153%3A+Include+only+client+traffic+in+BytesOutPerSec+metric>,
> > > > where we keep the existing sensor name "BytesInPerSec" and add a new
> > > sensor
> > > > "ReplicationBytesInPerSec", rather than replacing the sensor name "
> > > > BytesInPerSec" with e.g. "ClientBytesInPerSec".
> > > >
> > > > - It seems that the KIP changes the semantics of the broker config
> > > > "queued.max.requests" because the number of total requests queued in
> > the
> > > > broker will be no longer bounded by "queued.max.requests". This
> > probably
> > > > needs to be specified in the Public Interfaces section for
> discussion.
> > > >
> > > >
> > > > Thanks,
> > > > Dong
> > > >
> > > >
> > > > On Wed, Jun 13, 2018 at 12:45 PM, Lucas Wang 
> > > > wrote:
> > > >
> > > > > Hi Kafka experts,
> > > > >
> > > > > I created KIP-291 to add a separate queue for controller requests:
> > > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-291%
> > > > > 3A+Have+separate+queues+for+control+requests+and+data+requests
> > > > >
> > > > > Can you please take a look and let me know your feedback?
> > > > >
> > > > > Thanks a lot for your time!
> > > > > Regards,
> > > > > Lucas
> > > > >
> > > >
> > >
> >
>


Build failed in Jenkins: kafka-0.11.0-jdk7 #379

2018-06-14 Thread Apache Jenkins Server
See 


Changes:

[matthias] KAFKA-6711: GlobalStateManagerImpl should not write offsets of 
in-memory

--
[...truncated 979.90 KB...]

kafka.utils.UtilsTest > testUrlSafeBase64EncodeUUID PASSED

kafka.utils.UtilsTest > testCsvMap STARTED

kafka.utils.UtilsTest > testCsvMap PASSED

kafka.utils.UtilsTest > testInLock STARTED

kafka.utils.UtilsTest > testInLock PASSED

kafka.utils.UtilsTest > testSwallow STARTED

kafka.utils.UtilsTest > testSwallow PASSED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask STARTED

kafka.utils.timer.TimerTest > testAlreadyExpiredTask PASSED

kafka.utils.timer.TimerTest > testTaskExpiration STARTED

kafka.utils.timer.TimerTest > testTaskExpiration PASSED

kafka.utils.timer.TimerTaskListTest > testAll STARTED

kafka.utils.timer.TimerTaskListTest > testAll PASSED

kafka.utils.IteratorTemplateTest > testIterator STARTED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentWithOfflineReplicaHaltingProgress STARTED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentWithOfflineReplicaHaltingProgress PASSED

kafka.controller.ControllerIntegrationTest > 
testControllerEpochPersistsWhenAllBrokersDown STARTED

kafka.controller.ControllerIntegrationTest > 
testControllerEpochPersistsWhenAllBrokersDown PASSED

kafka.controller.ControllerIntegrationTest > 
testTopicCreationWithOfflineReplica STARTED

kafka.controller.ControllerIntegrationTest > 
testTopicCreationWithOfflineReplica PASSED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentResumesAfterReplicaComesOnline STARTED

kafka.controller.ControllerIntegrationTest > 
testPartitionReassignmentResumesAfterReplicaComesOnline PASSED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionDisabled STARTED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionDisabled PASSED

kafka.controller.ControllerIntegrationTest > 
testTopicPartitionExpansionWithOfflineReplica STARTED

kafka.controller.ControllerIntegrationTest > 
testTopicPartitionExpansionWithOfflineReplica PASSED

kafka.controller.ControllerIntegrationTest > 
testPreferredReplicaLeaderElectionWithOfflinePreferredReplica STARTED

kafka.controller.ControllerIntegrationTest > 
testPreferredReplicaLeaderElectionWithOfflinePreferredReplica PASSED

kafka.controller.ControllerIntegrationTest > 
testAutoPreferredReplicaLeaderElection STARTED

kafka.controller.ControllerIntegrationTest > 
testAutoPreferredReplicaLeaderElection PASSED

kafka.controller.ControllerIntegrationTest > testTopicCreation STARTED

kafka.controller.ControllerIntegrationTest > testTopicCreation PASSED

kafka.controller.ControllerIntegrationTest > testPartitionReassignment STARTED

kafka.controller.ControllerIntegrationTest > testPartitionReassignment PASSED

kafka.controller.ControllerIntegrationTest > testTopicPartitionExpansion STARTED

kafka.controller.ControllerIntegrationTest > testTopicPartitionExpansion PASSED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveIncrementsControllerEpoch STARTED

kafka.controller.ControllerIntegrationTest > 
testControllerMoveIncrementsControllerEpoch PASSED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionEnabled STARTED

kafka.controller.ControllerIntegrationTest > 
testLeaderAndIsrWhenEntireIsrOfflineAndUncleanLeaderElectionEnabled PASSED

kafka.controller.ControllerIntegrationTest > testEmptyCluster STARTED

kafka.controller.ControllerIntegrationTest > testEmptyCluster PASSED

kafka.controller.ControllerIntegrationTest > testPreferredReplicaLeaderElection 
STARTED

kafka.controller.ControllerIntegrationTest > testPreferredReplicaLeaderElection 
PASSED

kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
STARTED

kafka.controller.ControllerEventManagerTest > testEventThatThrowsException 
PASSED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent STARTED

kafka.controller.ControllerEventManagerTest > testSuccessfulEvent PASSED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException 
STARTED

kafka.controller.ControllerFailoverTest > testHandleIllegalStateException PASSED

kafka.network.SocketServerTest > testGracefulClose STARTED

kafka.network.SocketServerTest > testGracefulClose PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
STARTED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics 
PASSED

kafka.network.SocketServerTest > testProcessorMetricsTags STARTED

kafka.network.SocketServerTest > testProcessorMetricsTags PASSED

kafka.network.SocketServerTest > testMaxConnectionsPerIp STARTED

kafka.network.SocketServerTest > 

Build failed in Jenkins: kafka-trunk-jdk10 #213

2018-06-14 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Use ListOffsets request instead of SimpleConsumer in

--
[...truncated 1.30 MB...]

kafka.server.KafkaApisTest > 
shouldRespondWithUnsupportedForMessageFormatOnHandleWriteTxnMarkersWhenMagicLowerThanRequired
 STARTED

kafka.server.KafkaApisTest > 
shouldRespondWithUnsupportedForMessageFormatOnHandleWriteTxnMarkersWhenMagicLowerThanRequired
 PASSED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleTxnOffsetCommitRequestWhenInterBrokerProtocolNotSupported
 STARTED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleTxnOffsetCommitRequestWhenInterBrokerProtocolNotSupported
 PASSED

kafka.server.KafkaApisTest > testOffsetCommitWithInvalidPartition STARTED

kafka.server.KafkaApisTest > testOffsetCommitWithInvalidPartition PASSED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleAddPartitionsToTxnRequestWhenInterBrokerProtocolNotSupported
 STARTED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleAddPartitionsToTxnRequestWhenInterBrokerProtocolNotSupported
 PASSED

kafka.server.KafkaApisTest > testReadUncommittedConsumerListOffsetLatest STARTED

kafka.server.KafkaApisTest > testReadUncommittedConsumerListOffsetLatest PASSED

kafka.server.KafkaApisTest > 
testMetadataRequestOnDistinctListenerWithInconsistentListenersAcrossBrokers 
STARTED

kafka.server.KafkaApisTest > 
testMetadataRequestOnDistinctListenerWithInconsistentListenersAcrossBrokers 
PASSED

kafka.server.KafkaApisTest > 
shouldAppendToLogOnWriteTxnMarkersWhenCorrectMagicVersion STARTED

kafka.server.KafkaApisTest > 
shouldAppendToLogOnWriteTxnMarkersWhenCorrectMagicVersion PASSED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleWriteTxnMarkersRequestWhenInterBrokerProtocolNotSupported
 STARTED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleWriteTxnMarkersRequestWhenInterBrokerProtocolNotSupported
 PASSED

kafka.server.KafkaApisTest > 
shouldRespondWithUnknownTopicWhenPartitionIsNotHosted STARTED

kafka.server.KafkaApisTest > 
shouldRespondWithUnknownTopicWhenPartitionIsNotHosted PASSED

kafka.server.KafkaApisTest > 
testReadCommittedConsumerListOffsetEarliestOffsetEqualsLastStableOffset STARTED

kafka.server.KafkaApisTest > 
testReadCommittedConsumerListOffsetEarliestOffsetEqualsLastStableOffset PASSED

kafka.server.KafkaApisTest > testReadCommittedConsumerListOffsetLatest STARTED

kafka.server.KafkaApisTest > testReadCommittedConsumerListOffsetLatest PASSED

kafka.server.KafkaApisTest > 
testReadCommittedConsumerListOffsetLimitedAtLastStableOffset STARTED

kafka.server.KafkaApisTest > 
testReadCommittedConsumerListOffsetLimitedAtLastStableOffset PASSED

kafka.server.KafkaApisTest > 
testMetadataRequestOnSharedListenerWithInconsistentListenersAcrossBrokers 
STARTED

kafka.server.KafkaApisTest > 
testMetadataRequestOnSharedListenerWithInconsistentListenersAcrossBrokers PASSED

kafka.server.KafkaApisTest > testAddPartitionsToTxnWithInvalidPartition STARTED

kafka.server.KafkaApisTest > testAddPartitionsToTxnWithInvalidPartition PASSED

kafka.server.KafkaApisTest > 
testReadUncommittedConsumerListOffsetEarliestOffsetEqualsHighWatermark STARTED

kafka.server.KafkaApisTest > 
testReadUncommittedConsumerListOffsetEarliestOffsetEqualsHighWatermark PASSED

kafka.server.KafkaApisTest > 
testReadUncommittedConsumerListOffsetLimitedAtHighWatermark STARTED

kafka.server.KafkaApisTest > 
testReadUncommittedConsumerListOffsetLimitedAtHighWatermark PASSED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleAddOffsetToTxnRequestWhenInterBrokerProtocolNotSupported
 STARTED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleAddOffsetToTxnRequestWhenInterBrokerProtocolNotSupported
 PASSED

kafka.server.KafkaApisTest > testTxnOffsetCommitWithInvalidPartition STARTED

kafka.server.KafkaApisTest > testTxnOffsetCommitWithInvalidPartition PASSED

kafka.server.KafkaApisTest > 
shouldRespondWithUnsupportedMessageFormatForBadPartitionAndNoErrorsForGoodPartition
 STARTED

kafka.server.KafkaApisTest > 
shouldRespondWithUnsupportedMessageFormatForBadPartitionAndNoErrorsForGoodPartition
 PASSED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleEndTxnRequestWhenInterBrokerProtocolNotSupported
 STARTED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleEndTxnRequestWhenInterBrokerProtocolNotSupported
 PASSED

kafka.server.KafkaApisTest > 
shouldRespondWithUnknownTopicOrPartitionForBadPartitionAndNoErrorsForGoodPartition
 STARTED

kafka.server.KafkaApisTest > 
shouldRespondWithUnknownTopicOrPartitionForBadPartitionAndNoErrorsForGoodPartition
 PASSED

kafka.server.LogOffsetTest > testFetchOffsetsBeforeWithChangingSegmentSize 
STARTED

kafka.server.LogOffsetTest > 

Re: [DISCUSS] KIP-291: Have separate queues for control requests and data requests

2018-06-14 Thread Ted Yu
bq.  that's a waste of resource if control request rate is low

I don't know if control request rate can get to 100,000, likely not. Then
using the same bound as that for data requests seems high.

On Wed, Jun 13, 2018 at 10:13 PM, Lucas Wang  wrote:

> Hi Ted,
>
> Thanks for taking a look at this KIP.
> Let's say today the setting of "queued.max.requests" in cluster A is 1000,
> while the setting in cluster B is 100,000.
> The 100 times difference might have indicated that machines in cluster B
> have larger memory.
>
> By reusing the "queued.max.requests", the controlRequestQueue in cluster B
> automatically
> gets a 100x capacity without explicitly bothering the operators.
> I understand the counter argument can be that maybe that's a waste of
> resource if control request
> rate is low and operators may want to fine tune the capacity of the
> controlRequestQueue.
>
> I'm ok with either approach, and can change it if you or anyone else feels
> strong about adding the extra config.
>
> Thanks,
> Lucas
>
>
> On Wed, Jun 13, 2018 at 3:11 PM, Ted Yu  wrote:
>
> > Lucas:
> > Under Rejected Alternatives, #2, can you elaborate a bit more on why the
> > separate config has bigger impact ?
> >
> > Thanks
> >
> > On Wed, Jun 13, 2018 at 2:00 PM, Dong Lin  wrote:
> >
> > > Hey Luca,
> > >
> > > Thanks for the KIP. Looks good overall. Some comments below:
> > >
> > > - We usually specify the full mbean for the new metrics in the KIP. Can
> > you
> > > specify it in the Public Interface section similar to KIP-237
> > > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-237%3A+More+Controller+Health+Metrics>
> > > ?
> > >
> > > - Maybe we could follow the same pattern as KIP-153
> > > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-153%3A+Include+only+client+traffic+in+BytesOutPerSec+metric>,
> > > where we keep the existing sensor name "BytesInPerSec" and add a new
> > sensor
> > > "ReplicationBytesInPerSec", rather than replacing the sensor name "
> > > BytesInPerSec" with e.g. "ClientBytesInPerSec".
> > >
> > > - It seems that the KIP changes the semantics of the broker config
> > > "queued.max.requests" because the number of total requests queued in
> the
> > > broker will be no longer bounded by "queued.max.requests". This
> probably
> > > needs to be specified in the Public Interfaces section for discussion.
> > >
> > >
> > > Thanks,
> > > Dong
> > >
> > >
> > > On Wed, Jun 13, 2018 at 12:45 PM, Lucas Wang 
> > > wrote:
> > >
> > > > Hi Kafka experts,
> > > >
> > > > I created KIP-291 to add a separate queue for controller requests:
> > > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-291%
> > > > 3A+Have+separate+queues+for+control+requests+and+data+requests
> > > >
> > > > Can you please take a look and let me know your feedback?
> > > >
> > > > Thanks a lot for your time!
> > > > Regards,
> > > > Lucas
> > > >
> > >
> >
>


[jira] [Resolved] (KAFKA-7010) Rename ResourceNameType.ANY to MATCH

2018-06-14 Thread Jun Rao (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-7010.

Resolution: Fixed

merged the PR to trunk and 2.0 branch.

> Rename ResourceNameType.ANY to MATCH
> 
>
> Key: KAFKA-7010
> URL: https://issues.apache.org/jira/browse/KAFKA-7010
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core, security
>Reporter: Andy Coates
>Assignee: Andy Coates
>Priority: Major
> Fix For: 2.0.0
>
>
> Following on from the PR 
> [#5117|https://github.com/apache/kafka/pull/5117] and discussions with 
> Colin McCabe...
> The current ResourceNameType.ANY may be misleading as it performs pattern 
> matching for wildcard and prefixed bindings, whereas ResourceName.ANY just 
> brings back any resource name.
> Renaming to ResourceNameType.MATCH and adding more Java doc should clear this 
> up.
> Finally, ResourceNameType is no longer appropriate as the type is used in 
> ResourcePattern and ResourcePatternFilter. Hence rename to PatternType.
>  
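For illustration (not from this ticket), a sketch of how the MATCH vs. ANY distinction might look to an AdminClient user after the rename; the exact 2.0 class and method shapes used below (ResourcePatternFilter, AclBindingFilter, describeAcls) are assumed from the related KIP work rather than verified here:

    import java.util.Collection;
    import java.util.Properties;
    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.common.acl.AccessControlEntryFilter;
    import org.apache.kafka.common.acl.AclBinding;
    import org.apache.kafka.common.acl.AclBindingFilter;
    import org.apache.kafka.common.resource.PatternType;
    import org.apache.kafka.common.resource.ResourcePatternFilter;
    import org.apache.kafka.common.resource.ResourceType;

    public class MatchVsAny {
        public static void main(String[] args) throws ExecutionException, InterruptedException {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                // MATCH: pattern-matches, i.e. returns literal, wildcard and
                // prefixed bindings that would apply to topic "payments".
                AclBindingFilter matching = new AclBindingFilter(
                    new ResourcePatternFilter(ResourceType.TOPIC, "payments", PatternType.MATCH),
                    AccessControlEntryFilter.ANY);
                // ANY: no pattern matching, simply any pattern type and any name.
                AclBindingFilter any = new AclBindingFilter(
                    new ResourcePatternFilter(ResourceType.TOPIC, null, PatternType.ANY),
                    AccessControlEntryFilter.ANY);

                Collection<AclBinding> applying = admin.describeAcls(matching).values().get();
                Collection<AclBinding> everything = admin.describeAcls(any).values().get();
                System.out.println(applying.size() + " bindings apply; " + everything.size() + " total");
            }
        }
    }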



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #2736

2018-06-14 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Use KafkaConsumer in GetOffsetShell (#5220)

[github] MINOR: Use ListOffsets request instead of SimpleConsumer in

--
[...truncated 477.52 KB...]
kafka.utils.SchedulerTest > testNonPeriodicTask STARTED

kafka.utils.SchedulerTest > testNonPeriodicTask PASSED

kafka.utils.SchedulerTest > testRestart STARTED

kafka.utils.SchedulerTest > testRestart PASSED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler STARTED

kafka.utils.SchedulerTest > testReentrantTaskInMockScheduler PASSED

kafka.utils.SchedulerTest > testPeriodicTask STARTED

kafka.utils.SchedulerTest > testPeriodicTask PASSED

kafka.utils.LoggingTest > testLog4jControllerIsRegistered STARTED

kafka.utils.LoggingTest > testLog4jControllerIsRegistered PASSED

kafka.utils.LoggingTest > testLogName STARTED

kafka.utils.LoggingTest > testLogName PASSED

kafka.utils.LoggingTest > testLogNameOverride STARTED

kafka.utils.LoggingTest > testLogNameOverride PASSED

kafka.utils.ZkUtilsTest > testGetSequenceIdMethod STARTED

kafka.utils.ZkUtilsTest > testGetSequenceIdMethod PASSED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testAbortedConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testGetAllPartitionsTopicWithoutPartitions STARTED

kafka.utils.ZkUtilsTest > testGetAllPartitionsTopicWithoutPartitions PASSED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath STARTED

kafka.utils.ZkUtilsTest > testSuccessfulConditionalDeletePath PASSED

kafka.utils.ZkUtilsTest > testPersistentSequentialPath STARTED

kafka.utils.ZkUtilsTest > testPersistentSequentialPath PASSED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing STARTED

kafka.utils.ZkUtilsTest > testClusterIdentifierJsonParsing PASSED

kafka.utils.ZkUtilsTest > testGetLeaderIsrAndEpochForPartition STARTED

kafka.utils.ZkUtilsTest > testGetLeaderIsrAndEpochForPartition PASSED

kafka.utils.CoreUtilsTest > testGenerateUuidAsBase64 STARTED

kafka.utils.CoreUtilsTest > testGenerateUuidAsBase64 PASSED

kafka.utils.CoreUtilsTest > testAbs STARTED

kafka.utils.CoreUtilsTest > testAbs PASSED

kafka.utils.CoreUtilsTest > testReplaceSuffix STARTED

kafka.utils.CoreUtilsTest > testReplaceSuffix PASSED

kafka.utils.CoreUtilsTest > testCircularIterator STARTED

kafka.utils.CoreUtilsTest > testCircularIterator PASSED

kafka.utils.CoreUtilsTest > testReadBytes STARTED

kafka.utils.CoreUtilsTest > testReadBytes PASSED

kafka.utils.CoreUtilsTest > testCsvList STARTED

kafka.utils.CoreUtilsTest > testCsvList PASSED

kafka.utils.CoreUtilsTest > testReadInt STARTED

kafka.utils.CoreUtilsTest > testReadInt PASSED

kafka.utils.CoreUtilsTest > testAtomicGetOrUpdate STARTED

kafka.utils.CoreUtilsTest > testAtomicGetOrUpdate PASSED

kafka.utils.CoreUtilsTest > testUrlSafeBase64EncodeUUID STARTED

kafka.utils.CoreUtilsTest > testUrlSafeBase64EncodeUUID PASSED

kafka.utils.CoreUtilsTest > testCsvMap STARTED

kafka.utils.CoreUtilsTest > testCsvMap PASSED

kafka.utils.CoreUtilsTest > testInLock STARTED

kafka.utils.CoreUtilsTest > testInLock PASSED

kafka.utils.CoreUtilsTest > testTryAll STARTED

kafka.utils.CoreUtilsTest > testTryAll PASSED

kafka.utils.CoreUtilsTest > testSwallow STARTED

kafka.utils.CoreUtilsTest > testSwallow PASSED

kafka.utils.IteratorTemplateTest > testIterator STARTED

kafka.utils.IteratorTemplateTest > testIterator PASSED

kafka.utils.json.JsonValueTest > testJsonObjectIterator STARTED

kafka.utils.json.JsonValueTest > testJsonObjectIterator PASSED

kafka.utils.json.JsonValueTest > testDecodeLong STARTED

kafka.utils.json.JsonValueTest > testDecodeLong PASSED

kafka.utils.json.JsonValueTest > testAsJsonObject STARTED

kafka.utils.json.JsonValueTest > testAsJsonObject PASSED

kafka.utils.json.JsonValueTest > testDecodeDouble STARTED

kafka.utils.json.JsonValueTest > testDecodeDouble PASSED

kafka.utils.json.JsonValueTest > testDecodeOption STARTED

kafka.utils.json.JsonValueTest > testDecodeOption PASSED

kafka.utils.json.JsonValueTest > testDecodeString STARTED

kafka.utils.json.JsonValueTest > testDecodeString PASSED

kafka.utils.json.JsonValueTest > testJsonValueToString STARTED

kafka.utils.json.JsonValueTest > testJsonValueToString PASSED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonObjectOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption STARTED

kafka.utils.json.JsonValueTest > testAsJsonArrayOption PASSED

kafka.utils.json.JsonValueTest > testAsJsonArray STARTED

kafka.utils.json.JsonValueTest > testAsJsonArray PASSED

kafka.utils.json.JsonValueTest > testJsonValueHashCode STARTED

kafka.utils.json.JsonValueTest > testJsonValueHashCode PASSED

kafka.utils.json.JsonValueTest > testDecodeInt STARTED

kafka.utils.json.JsonValueTest > testDecodeInt PASSED

kafka.utils.json.JsonValueTest > 

Jenkins build is back to normal : kafka-1.1-jdk7 #145

2018-06-14 Thread Apache Jenkins Server
See 




Re: Are defaults serde in Kafka streams doing more harm then good ?

2018-06-14 Thread Guozhang Wang
2) Pre-registering serdes and data types for Kafka topics as well as state
stores could be a good feature to add.

3) For this, we can consider capturing the ClassCastException in serde
callers and returning a more informative error.
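
To make the trade-off concrete, a small sketch of a topology that passes serdes explicitly at every step that needs them instead of relying on default.key.serde/default.value.serde; topic names are made up:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.KStream;
    import org.apache.kafka.streams.kstream.Materialized;
    import org.apache.kafka.streams.kstream.Produced;

    public class ExplicitSerdesExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(StreamsConfig.APPLICATION_ID_CONFIG, "explicit-serdes-demo");
            props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // No default.key.serde / default.value.serde set: every operator
            // that needs (de)serialization states its serdes explicitly.

            StreamsBuilder builder = new StreamsBuilder();
            KStream<String, Long> clicks =
                builder.stream("clicks", Consumed.with(Serdes.String(), Serdes.Long()));

            clicks.groupByKey()
                  // count() creates a state store, so serdes are needed here
                  // too, the step that often surprises users relying on defaults.
                  .count(Materialized.with(Serdes.String(), Serdes.Long()))
                  .toStream()
                  .to("click-counts", Produced.with(Serdes.String(), Serdes.Long()));

            new KafkaStreams(builder.build(), props).start();
        }
    }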


Guozhang

On Wed, Jun 13, 2018 at 8:34 PM, Stephane Maarek <
steph...@simplemachines.com.au> wrote:

> Thanks Matthias and Guozhang
>
> 1) regarding having json protobuf or avro across the entire topology this
> makes sense. I still wish the builder could take a 'defaultSerde' for value
> and keys to make types explicit throughout the topology vs a class as
> string in a properties. That might also help with Java types through the
> topology as now we can infer that the default serde implies T as the
> operators are chained
>
> 1*) I still think as soon as a 'count' or any 'window' happens the user
> needs to override the default serde which can be confusing for end users
>
> 2) I very much agree a type and serde map could be very useful.
>
> 2*) big scala user here but this will affect maybe 10 percent of the user
> unfortunately. Java is still where people try most things out. Still very
> excited for that release !
>
> 3) haven't dug through the code, but how easy would it be to indicate to
> the end user that a default serde was used during a runtime error ? This
> could be a very quick kip-less win for the developers
>
> On Thu., 14 Jun. 2018, 12:28 am Guozhang Wang,  wrote:
>
> > Hello Stéphane,
> >
> > Good question :) And there have been some discussions about the default
> > serdes in the past in the community, my two cents about this:
> >
> > 1) When a user tries out Streams for the first time she is likely to use
> > some primitive typed data as her first POC app, in which case the data
> > types of the intermediate streams can change frequently and hence a
> default
> > serde would not help much but may introduce confusions; on the other
> hand,
> > in real production environment users are likely to use some data schema
> > system like Avro / Protobuf, and hence their declared serde may well be
> > consistent. For example if you are using Avro with GenericRecord, then
> all
> > the value types throughout your topology may be of the same type, so just
> > declaring a `Serdes` would help. Over time,
> > this is indeed what we have seen from practical user scenarios.
> >
> > 2) So to me the question is for top-of-the-funnel adoptions, could we
> make
> > the OOTB experience better with serdes for users. We've discussed some
> > ideas around this topic, like improving our typing systems so that users
> > can specify some serdes per type (for primitive types we can
> pre-register a
> > list of default ones as well), and the library can infer the data types
> and
> > choose which serde to use automatically. However for Java type erasure
> > makes it tricky (I think it is still the case in Java8), and we cannot
> > always make it work. And that's where we paused on investigating further.
> > Note that in the coming 2.0 release we have a Scala API for Streams where
> > default serdes are indeed dropped since with Scala we can safely rely on
> > implicit typing inference to override the serdes automatically.
> >
> >
> >
> > Guozhang
> >
> >
> > On Tue, Jun 12, 2018 at 6:32 PM, Stephane Maarek <
> > steph...@simplemachines.com.au> wrote:
> >
> > > Hi
> > >
> > > Coming from a user perspective, I see a lot of beginners not
> > understanding
> > > the need for serdes and misusing the default serde settings.
> > >
> > > I believe default serdes do more harm than good. At best, they save a
> bit
> > > of boilerplate code but hide the complexity of serde happening at each
> > > step. At worst, they generate confusion and make debugging tremendously
> > > hard as the errors thrown at runtime don't indicate that the serde
> being
> > > used is the default one.
> > >
> > > What do you think of deprecating them as well as any API that does not
> > use
> > > explicit serde?
> > >
> > > I know this may be a "tough change", but in my opinion it'll allow for
> > more
> > > explicit development and easier debugging.
> > >
> > > Regards
> > > Stéphane
> > >
> >
> >
> >
> > --
> > -- Guozhang
> >
>



-- 
-- Guozhang


Jenkins build is back to normal : kafka-2.0-jdk8 #28

2018-06-14 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk10 #212

2018-06-14 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Use KafkaConsumer in GetOffsetShell (#5220)

--
[...truncated 1.30 MB...]

kafka.server.KafkaApisTest > 
shouldRespondWithUnsupportedForMessageFormatOnHandleWriteTxnMarkersWhenMagicLowerThanRequired
 STARTED

kafka.server.KafkaApisTest > 
shouldRespondWithUnsupportedForMessageFormatOnHandleWriteTxnMarkersWhenMagicLowerThanRequired
 PASSED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleTxnOffsetCommitRequestWhenInterBrokerProtocolNotSupported
 STARTED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleTxnOffsetCommitRequestWhenInterBrokerProtocolNotSupported
 PASSED

kafka.server.KafkaApisTest > testOffsetCommitWithInvalidPartition STARTED

kafka.server.KafkaApisTest > testOffsetCommitWithInvalidPartition PASSED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleAddPartitionsToTxnRequestWhenInterBrokerProtocolNotSupported
 STARTED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleAddPartitionsToTxnRequestWhenInterBrokerProtocolNotSupported
 PASSED

kafka.server.KafkaApisTest > testReadUncommittedConsumerListOffsetLatest STARTED

kafka.server.KafkaApisTest > testReadUncommittedConsumerListOffsetLatest PASSED

kafka.server.KafkaApisTest > 
testMetadataRequestOnDistinctListenerWithInconsistentListenersAcrossBrokers 
STARTED

kafka.server.KafkaApisTest > 
testMetadataRequestOnDistinctListenerWithInconsistentListenersAcrossBrokers 
PASSED

kafka.server.KafkaApisTest > 
shouldAppendToLogOnWriteTxnMarkersWhenCorrectMagicVersion STARTED

kafka.server.KafkaApisTest > 
shouldAppendToLogOnWriteTxnMarkersWhenCorrectMagicVersion PASSED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleWriteTxnMarkersRequestWhenInterBrokerProtocolNotSupported
 STARTED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleWriteTxnMarkersRequestWhenInterBrokerProtocolNotSupported
 PASSED

kafka.server.KafkaApisTest > 
shouldRespondWithUnknownTopicWhenPartitionIsNotHosted STARTED

kafka.server.KafkaApisTest > 
shouldRespondWithUnknownTopicWhenPartitionIsNotHosted PASSED

kafka.server.KafkaApisTest > 
testReadCommittedConsumerListOffsetEarliestOffsetEqualsLastStableOffset STARTED

kafka.server.KafkaApisTest > 
testReadCommittedConsumerListOffsetEarliestOffsetEqualsLastStableOffset PASSED

kafka.server.KafkaApisTest > testReadCommittedConsumerListOffsetLatest STARTED

kafka.server.KafkaApisTest > testReadCommittedConsumerListOffsetLatest PASSED

kafka.server.KafkaApisTest > 
testReadCommittedConsumerListOffsetLimitedAtLastStableOffset STARTED

kafka.server.KafkaApisTest > 
testReadCommittedConsumerListOffsetLimitedAtLastStableOffset PASSED

kafka.server.KafkaApisTest > 
testMetadataRequestOnSharedListenerWithInconsistentListenersAcrossBrokers 
STARTED

kafka.server.KafkaApisTest > 
testMetadataRequestOnSharedListenerWithInconsistentListenersAcrossBrokers PASSED

kafka.server.KafkaApisTest > testAddPartitionsToTxnWithInvalidPartition STARTED

kafka.server.KafkaApisTest > testAddPartitionsToTxnWithInvalidPartition PASSED

kafka.server.KafkaApisTest > 
testReadUncommittedConsumerListOffsetEarliestOffsetEqualsHighWatermark STARTED

kafka.server.KafkaApisTest > 
testReadUncommittedConsumerListOffsetEarliestOffsetEqualsHighWatermark PASSED

kafka.server.KafkaApisTest > 
testReadUncommittedConsumerListOffsetLimitedAtHighWatermark STARTED

kafka.server.KafkaApisTest > 
testReadUncommittedConsumerListOffsetLimitedAtHighWatermark PASSED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleAddOffsetToTxnRequestWhenInterBrokerProtocolNotSupported
 STARTED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleAddOffsetToTxnRequestWhenInterBrokerProtocolNotSupported
 PASSED

kafka.server.KafkaApisTest > testTxnOffsetCommitWithInvalidPartition STARTED

kafka.server.KafkaApisTest > testTxnOffsetCommitWithInvalidPartition PASSED

kafka.server.KafkaApisTest > 
shouldRespondWithUnsupportedMessageFormatForBadPartitionAndNoErrorsForGoodPartition
 STARTED

kafka.server.KafkaApisTest > 
shouldRespondWithUnsupportedMessageFormatForBadPartitionAndNoErrorsForGoodPartition
 PASSED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleEndTxnRequestWhenInterBrokerProtocolNotSupported
 STARTED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleEndTxnRequestWhenInterBrokerProtocolNotSupported
 PASSED

kafka.server.KafkaApisTest > 
shouldRespondWithUnknownTopicOrPartitionForBadPartitionAndNoErrorsForGoodPartition
 STARTED

kafka.server.KafkaApisTest > 
shouldRespondWithUnknownTopicOrPartitionForBadPartitionAndNoErrorsForGoodPartition
 PASSED

kafka.server.LogOffsetTest > testFetchOffsetsBeforeWithChangingSegmentSize 
STARTED

kafka.server.LogOffsetTest > 

Jenkins build is back to normal : kafka-1.0-jdk7 #202

2018-06-14 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk8 #2735

2018-06-14 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-201: Rationalising Policy interfaces

2018-06-14 Thread Anna Povzner
Hi Tom,

Just wanted to check what you think about the comments I made in my last
message. I think this KIP is a big improvement to our current policy
interfaces, and really hope we can get this KIP in.

Thanks,
Anna

On Thu, May 31, 2018 at 3:29 PM, Anna Povzner  wrote:

> Hi Tom,
>
>
> Thanks for the KIP. I am aware that the voting thread was started, but
> wanted to discuss couple of concerns here first.
>
>
> I think the coupling of RequestedTopicState#generatedReplicaAssignment()
> and TopicState#replicasAssignments() does not work well in case where the
> request deals only with a subset of partitions (e.g., add partitions) or no
> assignment at all (alter topic config). In particular:
>
> 1) Alter topic config use case: There is no replica assignment in the
> request, and generatedReplicaAssignment()  returning either true or false
> is both misleading. The user can interpret this as assignment being
> generated or provided by the user originally (e.g., on topic create), while
> I don’t think we track such a thing.
>
> 2) On add partitions, we may have manual assignment for new partitions.
> What I understood from the KIP,  generatedReplicaAssignment() will return
> true or false based on whether new partitions were manually assigned or
> not, while TopicState#replicasAssignments() will return replica
> assignments for all partitions. I think it is confusing in a way that
> assignment of old partitions could be auto-generated but new partitions are
> manually assigned.
>
> 3) Generalizing #2, suppose in a future, a user can re-assign replicas for
> a set of partitions.
>
>
> One way to address this with minimal changes to proposed API is to rename
> RequestedTopicState#generatedReplicaAssignment() to 
> RequestedTopicState#manualReplicaAssignment()
> and change the API behavior and description to : “True if the client
> explicitly provided replica assignments in this request, which means that
> some or all assignments returned by TopicState#replicasAssignments() are
> explicitly requested by the user”. The user then will have to diff
> TopicState#replicasAssignments() from clusterState and TopicState#
> replicasAssignments()  from RequestedTopicState, and assume that
> assignments that are different are manually assigned (if
> RequestedTopicState#manualReplicaAssignment()  returns true). We will
> need to clearly document this and it still seems awkward.
>
>
> I think a cleaner way is to make RequestedTopicState to provide replica
> assignments only for partitions that were manually assigned replicas in the
> request that is being validated. Similarly, for alter topic validation, it
> would be nice to make it more clear for the user what has been changed. I
> remember that you already raised that point earlier by comparing current
> proposed API with having separate methods for each specific command.
> However, I agree that it will make it harder to change the interface in the
> future.
>
>
> Could we explore the option of pushing methods that are currently in
> TopicState to CreateTopicRequest and AlterTopicRequest? TopicState will
> still be used for requesting current topic state via ClusterState.
>
> Something like:
>
> interface CreateTopicRequest extends AbstractRequestMetadata {
>
>   // requested number of partitions or if manual assignment is given,
> number of partitions in the assignment
>
>   int numPartitions();
>
>   // requested replication factor, or if manual assignment is given,
> number of replicas in assignment for partition 0
>
>   short replicationFactor();
>
>  // replica assignment requested by the client, or null if assignment is
> auto-generated
>
>  Map<Integer, List<Integer>> manualReplicaAssignment();
>
>  Map<String, String> configs();
>
> }
>
>
> interface AlterTopicRequest extends AbstractRequestMetadata {
>
>   // updated topic configs, or null if not changed
>
>   Map<String, String> updatedConfigs();
>
>   // proposed replica assignment in this request, or null. For adding new
> partitions request, this is proposed replica assignment for new partitions.
> For replica re-assignment case, this is proposed new assignment.
>
>   Map<Integer, List<Integer>> proposedReplicaAssignment();
>
>   // new number of partitions (due to increase/decrease), or null if
> number of partitions not changed
>
>   Integer updatedNumPartitions();
>
> }
>
>
> I did not spend much time on my AlterTopicRequest interface proposal, but
> the idea is basically to return only the parts which were changed. The
> advantage of this approach over having separate methods for each specific
> alter topic request is that it is more flexible for future mixing of what
> can be updated in the topic state.
>
>
> What do you think?
>
>
> Thanks,
>
> Anna
>
>
> On Mon, Oct 9, 2017 at 1:39 AM, Tom Bentley  wrote:
>
>> I've added RequestedTopicState, as discussed in my last email.
>>
>> I've also added a paragraph to the migration plan about old clients making
>> policy-violating delete topics or delete records request.
>>
>> If no further comments are forthcoming in the next day or two 

[jira] [Resolved] (KAFKA-2837) FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure

2018-06-14 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2837?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-2837.

   Resolution: Fixed
Fix Version/s: (was: 0.10.0.0)
   2.0.0

This test has been removed.

> FAILING TEST: kafka.api.ProducerBounceTest > testBrokerFailure 
> ---
>
> Key: KAFKA-2837
> URL: https://issues.apache.org/jira/browse/KAFKA-2837
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.9.0.0
>Reporter: Gwen Shapira
>Assignee: jin xing
>Priority: Major
>  Labels: newbie
> Fix For: 2.0.0
>
>
> {code}
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at org.junit.Assert.assertTrue(Assert.java:52)
>   at 
> kafka.api.ProducerBounceTest.testBrokerFailure(ProducerBounceTest.scala:117)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>

[jira] [Resolved] (KAFKA-4237) Avoid long request timeout for the consumer

2018-06-14 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-4237.

Resolution: Duplicate

Duplicate of KAFKA-7050.

> Avoid long request timeout for the consumer
> ---
>
> Key: KAFKA-4237
> URL: https://issues.apache.org/jira/browse/KAFKA-4237
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Reporter: Jason Gustafson
>Priority: Major
>
> In the consumer rebalance protocol, the JoinGroup can stay in purgatory on 
> the server for as long as the rebalance timeout. For the Java client, that 
> means that the request timeout must be at least as large as the rebalance 
> timeout (which is governed by {{max.poll.interval.ms}} since KIP-62 and 
> {{session.timeout.ms}} before then). By default, since 0.10.1, this is 5 
> minutes plus some change, which makes the clients slow to detect some hard 
> failures.
> To fix this, two options come to mind:
> 1. Right now, all request APIs are limited by the same request timeout in 
> {{NetworkClient}}, but there's not really any reason why this must be so. We 
> could use a separate timeout for the JoinGroup request (the implementations 
> of this is straightforward: 
> https://github.com/confluentinc/kafka/pull/108/files).
> 2. Alternatively, we could prevent the server from holding the JoinGroup in 
> purgatory for such a long time. Instead, it could return early from the 
> JoinGroup (say before the session timeout has expired) with an error code 
> (e.g. REBALANCE_IN_PROGRESS), which tells the client that it should just 
> resend the JoinGroup.
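
To make the timeout relationship above concrete, here is a minimal consumer
configuration sketch (not part of the original issue; broker address and group
id are placeholders), reflecting the defaults described in the report:

{code}
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class TimeoutConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // hypothetical broker
        props.put("group.id", "example-group");            // hypothetical group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        // Rebalance timeout: maximum time allowed between poll() calls (KIP-62).
        props.put("max.poll.interval.ms", "300000");
        // Must cover the rebalance timeout, because JoinGroup can sit in
        // purgatory on the broker for that long; hence the 0.10.1+ default of
        // 305000 ("5 minutes plus some change").
        props.put("request.timeout.ms", "305000");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.close();
    }
}
{code}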



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-2.0-jdk8 #27

2018-06-14 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-6711: GlobalStateManagerImpl should not write offsets of 
in-memory

--
[...truncated 945.78 KB...]
kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionStartedToOnlineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionStartedToOnlineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOfflineReplicaToReplicaDeletionSuccessfulTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOfflineReplicaToReplicaDeletionSuccessfulTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testOfflineReplicaToOnlineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testOfflineReplicaToOnlineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionStartedToNonexistentReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionStartedToNonexistentReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testOfflineReplicaToReplicaDeletionStartedTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testOfflineReplicaToReplicaDeletionStartedTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionSuccessfulToNewReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionSuccessfulToNewReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionIneligibleToNonexistentReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionIneligibleToNonexistentReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testReplicaDeletionSuccessfulToNonexistentReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testReplicaDeletionSuccessfulToNonexistentReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOnlineReplicaToNonexistentReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOnlineReplicaToNonexistentReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOfflineReplicaToReplicaDeletionIneligibleTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOfflineReplicaToReplicaDeletionIneligibleTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testNewReplicaToOfflineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testNewReplicaToOfflineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionSuccessfulToOnlineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionSuccessfulToOnlineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToReplicaDeletionStartedTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNonexistentReplicaToReplicaDeletionStartedTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOnlineReplicaToReplicaDeletionSuccessfulTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOnlineReplicaToReplicaDeletionSuccessfulTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOfflineReplicaToNonexistentReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOfflineReplicaToNonexistentReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOnlineReplicaToReplicaDeletionIneligibleTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidOnlineReplicaToReplicaDeletionIneligibleTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionSuccessfulToReplicaDeletionStartedTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionSuccessfulToReplicaDeletionStartedTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionSuccessfulTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionSuccessfulTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionIneligibleToReplicaDeletionStartedTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionIneligibleToReplicaDeletionStartedTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionStartedToOfflineReplicaTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidReplicaDeletionStartedToOfflineReplicaTransition PASSED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionStartedTransition STARTED

kafka.controller.ReplicaStateMachineTest > 
testInvalidNewReplicaToReplicaDeletionStartedTransition PASSED


[jira] [Resolved] (KAFKA-6975) AdminClient.deleteRecords() may cause replicas unable to fetch from beginning

2018-06-14 Thread Jason Gustafson (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6975?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-6975.

   Resolution: Fixed
Fix Version/s: (was: 1.1.1)
   (was: 1.0.2)

> AdminClient.deleteRecords() may cause replicas unable to fetch from beginning
> -
>
> Key: KAFKA-6975
> URL: https://issues.apache.org/jira/browse/KAFKA-6975
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 1.1.0, 1.0.1
>Reporter: Anna Povzner
>Assignee: Anna Povzner
>Priority: Blocker
> Fix For: 2.0.0
>
>
> AdminClient.deleteRecords(beforeOffset(offset)) will set log start offset to 
> the requested offset. If the requested offset is in the middle of the batch, 
> the replica will not be able to fetch from that offset (because it is in the 
> middle of the batch). 
> One use-case where this could cause problems is replica re-assignment. 
> Suppose we have a topic partition with 3 initial replicas, and at some point 
> the user issues  AdminClient.deleteRecords() for the offset that falls in the 
> middle of the batch. It now becomes log start offset for this topic 
> partition. Suppose at some later time, the user starts partition 
> re-assignment to 3 new replicas. The new replicas (followers) will start with 
> HW = 0, will try to fetch from 0, then get "out of order offset" because 0 < 
> log start offset (LSO); the follower will be able to reset offset to LSO of 
> the leader and fetch LSO; the leader will send a batch in response with base 
> offset smaller than the requested LSO (since the LSO falls in the middle of 
> the batch), which will cause the follower to stop the fetcher thread. The end 
> result is that the new replicas will not be able to start fetching unless LSO 
> moves to an offset that is not in the middle of the batch, and the 
> re-assignment will be stuck for possibly a very long time. 
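
For reference, a minimal sketch (not part of the original report) of the
AdminClient call in question; broker address, topic, partition and offset are
placeholders:

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

public class DeleteRecordsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // hypothetical broker
        try (AdminClient admin = AdminClient.create(props)) {
            TopicPartition tp = new TopicPartition("example-topic", 0);  // hypothetical partition
            // If offset 42 falls in the middle of a batch, the log start offset
            // ends up inside that batch, triggering the problem described above.
            admin.deleteRecords(Collections.singletonMap(tp, RecordsToDelete.beforeOffset(42L)))
                 .all()
                 .get();
        }
    }
}
{code}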



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Create account and access KIP

2018-06-14 Thread 逐风者的祝福
Yep, it seems OK. Thanks.


------------------ Original Message ------------------
From: "Guozhang Wang";
Date: 2018-06-14 (Thu) 10:56
To: "dev";

Subject: Re: Create account and access KIP



Hello,

I saw you have been granted the permission, could you try to create the
wiki page?


Guozhang


On Thu, Jun 14, 2018 at 7:26 AM, 逐风者的祝福 <1878707...@qq.com> wrote:

> Hi team:
> The Scala administrative client AdminClient is deprecated and replaced by
> KafkaAdminClient. Some consumer group management methods should migrate from
> the Scala AdminClient to the Java version.
> So could someone help me get KIP permissions? I want to
> create a KIP to record this work.
> (my user name is : darion.yaphet)
> thanks ~~~




-- 
-- Guozhang

Re: Create account and access KIP

2018-06-14 Thread Guozhang Wang
Hello,

I saw you have been granted the permission, could you try to create the
wiki page?


Guozhang


On Thu, Jun 14, 2018 at 7:26 AM, 逐风者的祝福 <1878707...@qq.com> wrote:

> Hi team:
> The Scala administrative client AdminClient is deprecated and replaced by
> KafkaAdminClient. Some consumer group management methods should migrate from
> the Scala AdminClient to the Java version.
> So could someone help me get KIP permissions? I want to
> create a KIP to record this work.
> (my user name is : darion.yaphet)
> thanks ~~~




-- 
-- Guozhang


[jira] [Created] (KAFKA-7059) Offer new constructor on ProducerRecord

2018-06-14 Thread JIRA
Matthias Weßendorf created KAFKA-7059:
-

 Summary: Offer new constructor on ProducerRecord 
 Key: KAFKA-7059
 URL: https://issues.apache.org/jira/browse/KAFKA-7059
 Project: Kafka
  Issue Type: Improvement
Reporter: Matthias Weßendorf
 Fix For: 2.0.1


Creating a ProducerRecord with custom headers requires using a constructor 
with a slightly longer argument list.

 

It would be more convenient if there was a constructor like:

{code}
public ProducerRecord(String topic, K key, V value, Iterable<Header> headers)
{code}
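
For comparison, a hypothetical usage sketch (not part of the original report);
topic, key, value and header contents are placeholders, and the proposed
shorter call is shown only as a comment since it does not exist yet:

{code}
import java.nio.charset.StandardCharsets;
import java.util.Collections;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.header.Header;
import org.apache.kafka.common.header.internals.RecordHeader;

public class ProducerRecordHeaderExample {
    public static void main(String[] args) {
        Iterable<Header> headers = Collections.singletonList(
                new RecordHeader("trace-id", "abc".getBytes(StandardCharsets.UTF_8)));

        // Today: passing headers requires the longer constructor and an explicit
        // partition argument (null lets the partitioner decide).
        ProducerRecord<String, String> record =
                new ProducerRecord<>("example-topic", null, "key", "value", headers);
        System.out.println(record.headers());

        // Proposed shorthand (does not exist yet):
        // new ProducerRecord<>("example-topic", "key", "value", headers);
    }
}
{code}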
 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Create account and access KIP

2018-06-14 Thread 逐风者的祝福
Hi team:
The Scala administrative client AdminClient is deprecated and replaced by 
KafkaAdminClient. Some consumer group management methods should migrate from 
the Scala AdminClient to the Java version. 
So could someone help me get KIP permissions? I want to create 
a KIP to record this work. 
(my user name is : darion.yaphet)
thanks ~~~
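
As a rough illustration of the migration in question, a minimal sketch using
the Java admin client (assuming a Kafka version where the KIP-222
consumer-group methods are available; the broker address is a placeholder):

{code}
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.ConsumerGroupListing;

public class ListGroupsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // hypothetical broker
        try (AdminClient admin = AdminClient.create(props)) {
            // List consumer groups via the Java admin client instead of the
            // deprecated Scala kafka.admin.AdminClient.
            for (ConsumerGroupListing group : admin.listConsumerGroups().all().get()) {
                System.out.println(group.groupId());
            }
        }
    }
}
{code}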

Build failed in Jenkins: kafka-trunk-jdk8 #2734

2018-06-14 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-6711: GlobalStateManagerImpl should not write offsets of 
in-memory

--
[...truncated 468.51 KB...]
kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOfflinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOfflinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOnlinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOfflinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNonexistentPartitionToOfflinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates 
STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransitionZkUtilsExceptionFromCreateStates 
PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidNewPartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testNewPartitionToOnlinePartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionErrorCodeFromStateLookup PASSED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown STARTED

kafka.controller.PartitionStateMachineTest > 
testOnlinePartitionToOnlineTransitionForControlledShutdown PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup 
STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransitionZkUtilsExceptionFromStateLookup 
PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNonexistentPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOnlinePartitionToNonexistentPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testInvalidOfflinePartitionToNewPartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testInvalidOfflinePartitionToNewPartitionTransition PASSED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransition STARTED

kafka.controller.PartitionStateMachineTest > 
testOfflinePartitionToOnlinePartitionTransition PASSED

kafka.tools.ConsumerPerformanceTest > 
testConfigUsingNewConsumerUnrecognizedOption STARTED

kafka.tools.ConsumerPerformanceTest > 
testConfigUsingNewConsumerUnrecognizedOption PASSED

kafka.tools.ConsumerPerformanceTest > testDetailedHeaderMatchBody STARTED

kafka.tools.ConsumerPerformanceTest > testDetailedHeaderMatchBody PASSED

kafka.tools.ConsumerPerformanceTest > testConfigUsingNewConsumer STARTED

kafka.tools.ConsumerPerformanceTest > testConfigUsingNewConsumer PASSED

kafka.tools.ConsumerPerformanceTest > testNonDetailedHeaderMatchBody STARTED

kafka.tools.ConsumerPerformanceTest > testNonDetailedHeaderMatchBody PASSED

kafka.tools.ConsumerPerformanceTest > testConfigUsingOldConsumer STARTED

kafka.tools.ConsumerPerformanceTest > testConfigUsingOldConsumer PASSED

kafka.tools.MirrorMakerTest > 
testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage STARTED

kafka.tools.MirrorMakerTest > 
testDefaultMirrorMakerMessageHandlerWithNoTimestampInSourceMessage PASSED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler STARTED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandler PASSED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandlerWithHeaders 
STARTED

kafka.tools.MirrorMakerTest > testDefaultMirrorMakerMessageHandlerWithHeaders 
PASSED

kafka.tools.MirrorMakerIntegrationTest > testCommaSeparatedRegex STARTED

kafka.tools.MirrorMakerIntegrationTest > testCommaSeparatedRegex PASSED

kafka.tools.ConsoleProducerTest > testParseKeyProp STARTED

kafka.tools.ConsoleProducerTest > testParseKeyProp PASSED


Jenkins build is back to normal : kafka-trunk-jdk10 #210

2018-06-14 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-6711) GlobalStateManagerImpl should not write offsets of in-memory stores in checkpoint file

2018-06-14 Thread Rajini Sivaram (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6711?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajini Sivaram resolved KAFKA-6711.
---
   Resolution: Fixed
 Reviewer: Matthias J. Sax
Fix Version/s: 2.0.0

> GlobalStateManagerImpl should not write offsets of in-memory stores in 
> checkpoint file
> --
>
> Key: KAFKA-6711
> URL: https://issues.apache.org/jira/browse/KAFKA-6711
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.1
>Reporter: Cemalettin Koç
>Assignee: Cemalettin Koç
>Priority: Major
>  Labels: newbie
> Fix For: 2.0.0
>
>
> We are using an InMemoryStore along with GlobalKTable and I noticed that 
> after each restart I am losing all my data. When I debug it, 
> `/tmp/kafka-streams/category-client-1/global/.checkpoint` file contains 
> the offset for my GlobalKTable topic. I checked the GlobalStateManagerImpl 
> implementation and noticed that it is not guarded for cases similar to mine. 
> I am fairly new to Kafka land, and there might be another way to fix the 
> issue. 
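
For context, a minimal sketch (not from the original report) of the setup
described above, assuming the 1.0-era Streams API; topic, store and
application names are placeholders:

{code}
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.Consumed;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.Stores;

public class GlobalKTableInMemorySketch {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();
        // GlobalKTable backed by an in-memory store: after a restart there is
        // no local data, so a checkpointed offset for this topic would skip
        // the records needed to rebuild the table.
        builder.globalTable(
                "category-topic",                                   // hypothetical topic
                Consumed.with(Serdes.String(), Serdes.String()),
                Materialized.<String, String>as(Stores.inMemoryKeyValueStore("category-store"))
                        .withKeySerde(Serdes.String())
                        .withValueSerde(Serdes.String()));

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "category-client");    // hypothetical app id
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // hypothetical broker
        new KafkaStreams(builder.build(), props).start();
    }
}
{code}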



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7058) ConnectSchema#equals() broken for array-typed default values

2018-06-14 Thread Gunnar Morling (JIRA)
Gunnar Morling created KAFKA-7058:
-

 Summary: ConnectSchema#equals() broken for array-typed default 
values
 Key: KAFKA-7058
 URL: https://issues.apache.org/jira/browse/KAFKA-7058
 Project: Kafka
  Issue Type: Bug
Reporter: Gunnar Morling


{{ConnectSchema#equals()}} calls {{Objects#equals()}} for the schemas' default 
values, but this doesn't work correctly if the default values are arrays. In 
this case, {{false}} is always returned, even if the default value arrays 
actually are equal.
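
A minimal, self-contained Java illustration of the underlying behaviour (not
tied to the Connect code itself):

{code}
import java.util.Arrays;
import java.util.Objects;

public class ArrayEqualsExample {
    public static void main(String[] args) {
        byte[] a = {1, 2, 3};
        byte[] b = {1, 2, 3};

        // Objects.equals falls back to reference equality for arrays, so two
        // distinct but identical arrays compare as not equal.
        System.out.println(Objects.equals(a, b));      // false
        // Arrays.equals (or Objects.deepEquals) compares array contents.
        System.out.println(Arrays.equals(a, b));       // true
        System.out.println(Objects.deepEquals(a, b));  // true
    }
}
{code}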



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7057) Consumer stop polling

2018-06-14 Thread Moshe Lavi (JIRA)
Moshe Lavi created KAFKA-7057:
-

 Summary: Consumer stop polling
 Key: KAFKA-7057
 URL: https://issues.apache.org/jira/browse/KAFKA-7057
 Project: Kafka
  Issue Type: Bug
  Components: consumer, controller
Affects Versions: 0.10.1.1
Reporter: Moshe Lavi


We run 3 Kafka brokers (version 0.10.1.1) and use Spring Cloud Stream 
consumers to poll messages.
We received consumer lag alerts and found that some consumers were blocked and 
no longer polling messages. This requires us to restart the microservice 
where the affected consumer resides.

I wonder if this is due to a lack of available threads, or to the heartbeat 
daemon not existing/working.


*The thread dump shows:*

kafka-coordinator-heartbeat-thread | SiteAgreementItem" #4943 daemon prio=5 
os_prio=0 tid=0x7f3abdd08000 nid=0x83ac waiting for monitor entry 
[0x7f3a5dcdb000]

   java.lang.Thread.State: BLOCKED (on object monitor)

    at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.disableWakeups(ConsumerNetworkClient.java:409)

    - waiting to lock <*0x0005df800450*> (a 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient)

    at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:264)

    at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:865)

    - locked <0x0005df800488> (a 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)

 

-kafka-consumer-1" #4940 prio=5 os_prio=0 tid=0x7f3a8d433800 nid=0x838e 
runnable [0x7f3a5dedd000]

   java.lang.Thread.State: RUNNABLE

    at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)

    at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)

    at 
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)

    at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)

    - locked <0x0005df7705e0> (a sun.nio.ch.Util$2)

    - locked <0x0005df7705d0> (a 
java.util.Collections$UnmodifiableSet)

    - locked <0x0005df7705f0> (a sun.nio.ch.EPollSelectorImpl)

    at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)

    at 
org.apache.kafka.common.network.Selector.select(Selector.java:470)

    at 
org.apache.kafka.common.network.Selector.poll(Selector.java:286)

    at 
org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:260)

    at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:232)

    - locked <*0x0005df800450*> (a 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient)

    at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1031)

    at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:979)

    at 
org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:532)

    at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)

    at java.util.concurrent.FutureTask.run(FutureTask.java:266)

    at java.lang.Thread.run(Thread.java:745)



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk8 #2733

2018-06-14 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-4812) We are facing the same issue as SAMZA-590 for kafka

2018-06-14 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4812?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-4812.
--
Resolution: Auto Closed

Closing inactive issue. The old consumer is no longer supported. Please upgrade 
to the Java consumer whenever possible.

> We are facing the same issue as SAMZA-590 for kafka
> ---
>
> Key: KAFKA-4812
> URL: https://issues.apache.org/jira/browse/KAFKA-4812
> Project: Kafka
>  Issue Type: Bug
>Reporter: Manjeer Srujan. Y
>Priority: Critical
>
> Dead Kafka broker ignores new leader.
> We are facing the same issue as the Samza issue below, but we couldn't find 
> any fix for this in Kafka. The log is pasted below for reference.
> The Kafka client we are using is:
> group: 'org.apache.kafka', name: 'kafka_2.10', version: '0.8.2.1'
> https://issues.apache.org/jira/browse/SAMZA-590
> 2017-02-28 09:50:53.189 29708 [Thread-11-vendor-index-spout-executor[35 35]] 
> ERROR org.apache.storm.daemon.executor -  - java.lang.RuntimeException: 
> java.nio.channels.ClosedChannelException
> at 
> org.apache.storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:103)
> at 
> org.apache.storm.kafka.ZkCoordinator.getMyManagedPartitions(ZkCoordinator.java:69)
> at org.apache.storm.kafka.KafkaSpout.nextTuple(KafkaSpout.java:129)
> at 
> org.apache.storm.daemon.executor$fn__7990$fn__8005$fn__8036.invoke(executor.clj:648)
> at org.apache.storm.util$async_loop$fn__624.invoke(util.clj:484)
> at clojure.lang.AFn.run(AFn.java:22)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: java.nio.channels.ClosedChannelException
> at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
> at 
> kafka.consumer.SimpleConsumer.liftedTree1$1(SimpleConsumer.scala:78)
> at 
> kafka.consumer.SimpleConsumer.kafka$consumer$SimpleConsumer$$sendRequest(SimpleConsumer.scala:68)
> at 
> kafka.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:127)
> at 
> kafka.javaapi.consumer.SimpleConsumer.getOffsetsBefore(SimpleConsumer.scala:79)
> at org.apache.storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:75)
> at org.apache.storm.kafka.KafkaUtils.getOffset(KafkaUtils.java:65)
> at 
> org.apache.storm.kafka.PartitionManager.<init>(PartitionManager.java:94)
> at org.apache.storm.kafka.ZkCoordinator.refresh(ZkCoordinator.java:98)
> ... 6 more
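
As context for the resolution above (migrate off the old Scala
SimpleConsumer-based client), a minimal sketch of the Java consumer; broker
address, group id and topic are placeholders:

{code}
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class JavaConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");  // hypothetical broker
        props.put("group.id", "example-group");            // hypothetical group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("example-topic"));  // hypothetical topic
            while (true) {
                // Leader changes and broker failures are handled by the client;
                // no manual SimpleConsumer-style leader/offset management needed.
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("%s offset=%d value=%s%n",
                            record.topic(), record.offset(), record.value());
                }
            }
        }
    }
}
{code}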



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: Apache Kafka 2.0.0 Release Progress

2018-06-14 Thread Rajini Sivaram
Hi all,

We still have five blockers for 2.0.0. I have moved most of the other JIRAs
out, another five issues that include API/doc changes are still marked for
2.0.0. If you think any of the other JIRAs are critical to include in
2.0.0, please update the fix version, mark as blocker and ensure a PR is
ready to merge. I will wait another day before creating the first RC.
Please help to close out the 2.0.0 JIRAs by tomorrow.

Thank you!

Regards,

Rajini


On Mon, Jun 11, 2018 at 3:56 PM, Rajini Sivaram 
wrote:

> Hi all,
>
> I have moved out most of the JIRAs that aren't currently being worked on.
> We still have 24 JIRAs with 2.0.0 as the fix version (
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80448820).
> Six of these are marked as blockers. Since KIP freeze was delayed by a day
> due to holidays, code freeze will be on 13th of June. I will be aiming to
> create the first RC for 2.0.0 on 14th of June UK morning time. Please help
> to close out the remaining JIRAs before that.
>
> Thank you!
>
> Regards,
>
> Rajini
>
>


Jenkins build is back to normal : kafka-2.0-jdk8 #26

2018-06-14 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk10 #209

2018-06-14 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-7030; Add configuration to disable message down-conversion

--
[...truncated 1.58 MB...]

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnOneTopicAndPartition PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetWithUnrecognizedNewConsumerOption STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetWithUnrecognizedNewConsumerOption PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDurationToEarliest 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDurationToEarliest 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnOneTopic 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnOneTopic 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsNotExistingGroup 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsNotExistingGroup 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToZonedDateTime 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToZonedDateTime 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsExportImportPlan 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsExportImportPlan 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToSpecificOffset 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToSpecificOffset 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftPlus STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftPlus PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsNewConsumerExistingTopic STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsNewConsumerExistingTopic PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 

Build failed in Jenkins: kafka-trunk-jdk10 #208

2018-06-14 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Replace test usages of ClientUtils.fetchTopicMetadata with

--
[...truncated 1.58 MB...]

kafka.zk.KafkaZkClientTest > testAclManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods STARTED

kafka.zk.KafkaZkClientTest > testPreferredReplicaElectionMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateLogDir STARTED

kafka.zk.KafkaZkClientTest > testPropagateLogDir PASSED

kafka.zk.KafkaZkClientTest > testGetDataAndStat STARTED

kafka.zk.KafkaZkClientTest > testGetDataAndStat PASSED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress STARTED

kafka.zk.KafkaZkClientTest > testReassignPartitionsInProgress PASSED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths STARTED

kafka.zk.KafkaZkClientTest > testCreateTopLevelPaths PASSED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters STARTED

kafka.zk.KafkaZkClientTest > testIsrChangeNotificationGetters PASSED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion STARTED

kafka.zk.KafkaZkClientTest > testLogDirEventNotificationsDeletion PASSED

kafka.zk.KafkaZkClientTest > testGetLogConfigs STARTED

kafka.zk.KafkaZkClientTest > testGetLogConfigs PASSED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods STARTED

kafka.zk.KafkaZkClientTest > testBrokerSequenceIdMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath STARTED

kafka.zk.KafkaZkClientTest > testCreateSequentialPersistentPath PASSED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath STARTED

kafka.zk.KafkaZkClientTest > testConditionalUpdatePath PASSED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode STARTED

kafka.zk.KafkaZkClientTest > testDeleteTopicZNode PASSED

kafka.zk.KafkaZkClientTest > testDeletePath STARTED

kafka.zk.KafkaZkClientTest > testDeletePath PASSED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods STARTED

kafka.zk.KafkaZkClientTest > testGetBrokerMethods PASSED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification STARTED

kafka.zk.KafkaZkClientTest > testCreateTokenChangeNotification PASSED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions STARTED

kafka.zk.KafkaZkClientTest > testGetTopicsAndPartitions PASSED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo STARTED

kafka.zk.KafkaZkClientTest > testRegisterBrokerInfo PASSED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath STARTED

kafka.zk.KafkaZkClientTest > testConsumerOffsetPath PASSED

kafka.zk.KafkaZkClientTest > testControllerManagementMethods STARTED

kafka.zk.KafkaZkClientTest > testControllerManagementMethods PASSED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods STARTED

kafka.zk.KafkaZkClientTest > testTopicAssignmentMethods PASSED

kafka.zk.KafkaZkClientTest > testPropagateIsrChanges STARTED

kafka.zk.KafkaZkClientTest > testPropagateIsrChanges PASSED

kafka.zk.KafkaZkClientTest > testControllerEpochMethods STARTED

kafka.zk.KafkaZkClientTest > testControllerEpochMethods PASSED

kafka.zk.KafkaZkClientTest > testDeleteRecursive STARTED

kafka.zk.KafkaZkClientTest > testDeleteRecursive PASSED

kafka.zk.KafkaZkClientTest > testGetTopicPartitionStates STARTED

kafka.zk.KafkaZkClientTest > testGetTopicPartitionStates PASSED

kafka.zk.KafkaZkClientTest > testCreateConfigChangeNotification STARTED

kafka.zk.KafkaZkClientTest > testCreateConfigChangeNotification PASSED

kafka.zk.KafkaZkClientTest > testDelegationTokenMethods STARTED

kafka.zk.KafkaZkClientTest > testDelegationTokenMethods PASSED

kafka.zk.ReassignPartitionsZNodeTest > testDecodeInvalidJson STARTED

kafka.zk.ReassignPartitionsZNodeTest > testDecodeInvalidJson PASSED

kafka.zk.ReassignPartitionsZNodeTest > testEncode STARTED

kafka.zk.ReassignPartitionsZNodeTest > testEncode PASSED

kafka.zk.ReassignPartitionsZNodeTest > testDecodeValidJson STARTED

kafka.zk.ReassignPartitionsZNodeTest > testDecodeValidJson PASSED

kafka.zk.LiteralAclStoreTest > shouldHaveCorrectPaths STARTED

kafka.zk.LiteralAclStoreTest > shouldHaveCorrectPaths PASSED

kafka.zk.LiteralAclStoreTest > shouldRoundTripChangeNode STARTED

kafka.zk.LiteralAclStoreTest > shouldRoundTripChangeNode PASSED

kafka.zk.LiteralAclStoreTest > shouldThrowFromEncodeOnNoneLiteral STARTED

kafka.zk.LiteralAclStoreTest > shouldThrowFromEncodeOnNoneLiteral PASSED

kafka.zk.LiteralAclStoreTest > shouldWriteChangesToTheWritePath STARTED

kafka.zk.LiteralAclStoreTest > shouldWriteChangesToTheWritePath PASSED

kafka.zk.LiteralAclStoreTest > shouldHaveCorrectPatternType STARTED

kafka.zk.LiteralAclStoreTest > shouldHaveCorrectPatternType PASSED

kafka.zk.ExtendedAclStoreTest > shouldHaveCorrectPaths STARTED

kafka.zk.ExtendedAclStoreTest > shouldHaveCorrectPaths PASSED

kafka.zk.ExtendedAclStoreTest > shouldRoundTripChangeNode STARTED

kafka.zk.ExtendedAclStoreTest > shouldRoundTripChangeNode PASSED