[jira] [Resolved] (KAFKA-7009) Mute logger for reflections.org at the warn level in system tests

2018-06-12 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7009?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7009.
--
   Resolution: Fixed
Fix Version/s: 2.1.0
   0.11.0.3
   0.10.2.2
   0.10.1.2
   0.10.0.2

Issue resolved by pull request 5151
[https://github.com/apache/kafka/pull/5151]

> Mute logger for reflections.org at the warn level in system tests
> -
>
> Key: KAFKA-7009
> URL: https://issues.apache.org/jira/browse/KAFKA-7009
> Project: Kafka
>  Issue Type: Improvement
>  Components: KafkaConnect, system tests
>Affects Versions: 1.1.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Critical
> Fix For: 0.10.0.2, 0.10.1.2, 0.10.2.2, 2.0.0, 0.11.0.3, 2.1.0
>
>
> AK's Log4J configuration file for Connect includes [these 
> lines|https://github.com/apache/kafka/blob/trunk/config/connect-log4j.properties#L25]:
> {code}
> log4j.logger.org.apache.zookeeper=ERROR
> log4j.logger.org.I0Itec.zkclient=ERROR
> log4j.logger.org.reflections=ERROR
> {code}
> The last one suppresses lots of Reflections warnings like the following that 
> are output during classpath scanning and are harmless:
> {noformat}
> [2018-06-06 13:52:39,448] WARN could not create Vfs.Dir from url. ignoring 
> the exception and continuing (org.reflections.Reflections)
> org.reflections.ReflectionsException: could not create Vfs.Dir from url, no 
> matching UrlType was found 
> [file:/usr/bin/../share/java/confluent-support-metrics/*]
> either use fromURL(final URL url, final List<UrlType> urlTypes) or use the 
> static setDefaultURLTypes(final List<UrlType> urlTypes) or 
> addDefaultURLTypes(UrlType urlType) with your specialized UrlType.
> at org.reflections.vfs.Vfs.fromURL(Vfs.java:109)
> at org.reflections.vfs.Vfs.fromURL(Vfs.java:91)
> at org.reflections.Reflections.scan(Reflections.java:240)
> at 
> org.apache.kafka.connect.runtime.isolation.DelegatingClassLoader$InternalReflections.scan(DelegatingClassLoader.java:373)
> at org.reflections.Reflections$1.run(Reflections.java:198)
> at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
> at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> {noformat}
> The last line also needs to be added to [Connect's Log4J configuration file in 
> the AK system 
> tests|https://github.com/apache/kafka/blob/trunk/tests/kafkatest/services/templates/connect_log4j.properties].
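
For reference, the change amounts to adding the same suppression line (already present in the main Connect config quoted above) to the system tests' connect_log4j.properties template:

```properties
# Suppress harmless classpath-scanning warnings from the Reflections library
log4j.logger.org.reflections=ERROR
```

This mirrors the existing entries that already mute org.apache.zookeeper and org.I0Itec.zkclient at the ERROR level.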



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-0.11.0-jdk7 #376

2018-06-12 Thread Apache Jenkins Server
See 


Changes:

[matthias] KAFKA-6634: Delay starting new transaction in task.initializeTopology

--
[...truncated 903.70 KB...]


[jira] [Resolved] (KAFKA-7031) Kafka Connect API module depends on Jersey

2018-06-12 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7031?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7031.
--
   Resolution: Fixed
Fix Version/s: 2.1.0

Issue resolved by pull request 5190
[https://github.com/apache/kafka/pull/5190]

> Kafka Connect API module depends on Jersey
> --
>
> Key: KAFKA-7031
> URL: https://issues.apache.org/jira/browse/KAFKA-7031
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.0.0
>Reporter: Randall Hauch
>Assignee: Magesh kumar Nandakumar
>Priority: Blocker
> Fix For: 2.0.0, 2.1.0
>
>
> The Kafka Connect API module for 2.0.0 brings in Jersey dependencies. When I 
> run {{mvn dependency:tree}} on a project that depends only on the snapshot 
> version of {{org.apache.kafka:kafka-connect-api}}, the following are shown:
> {noformat}
> [INFO] +- org.apache.kafka:connect-api:jar:2.0.0-SNAPSHOT:compile
> [INFO] |  +- org.slf4j:slf4j-api:jar:1.7.25:compile
> [INFO] |  \- 
> org.glassfish.jersey.containers:jersey-container-servlet:jar:2.27:compile
> [INFO] | +- 
> org.glassfish.jersey.containers:jersey-container-servlet-core:jar:2.27:compile
> [INFO] | |  \- 
> org.glassfish.hk2.external:javax.inject:jar:2.5.0-b42:compile
> [INFO] | +- org.glassfish.jersey.core:jersey-common:jar:2.27:compile
> [INFO] | |  +- javax.annotation:javax.annotation-api:jar:1.2:compile
> [INFO] | |  \- org.glassfish.hk2:osgi-resource-locator:jar:1.0.1:compile
> [INFO] | +- org.glassfish.jersey.core:jersey-server:jar:2.27:compile
> [INFO] | |  +- org.glassfish.jersey.core:jersey-client:jar:2.27:compile
> [INFO] | |  +- 
> org.glassfish.jersey.media:jersey-media-jaxb:jar:2.27:compile
> [INFO] | |  \- javax.validation:validation-api:jar:1.1.0.Final:compile
> [INFO] | \- javax.ws.rs:javax.ws.rs-api:jar:2.1:compile
> ...
> {noformat}
> This may have been an unintended side effect of the 
> [KIP-285|https://cwiki.apache.org/confluence/display/KAFKA/KIP-285%3A+Connect+Rest+Extension+Plugin]
>  effort, which added the REST extension for Connect.
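
Until a release with the fix is available, a downstream project could keep Jersey off its compile classpath with a Maven exclusion. This is a sketch only, assuming the coordinates shown in the dependency tree above:

```xml
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>connect-api</artifactId>
  <version>2.0.0-SNAPSHOT</version>
  <exclusions>
    <!-- Excluding the container artifact also drops its transitive
         children (jersey-common, jersey-server, javax.ws.rs-api, ...) -->
    <exclusion>
      <groupId>org.glassfish.jersey.containers</groupId>
      <artifactId>jersey-container-servlet</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```

Maven exclusions apply transitively, so excluding the single direct Jersey artifact removes the whole subtree shown in the {{mvn dependency:tree}} output.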



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-trunk-jdk8 #2727

2018-06-12 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk10 #202

2018-06-12 Thread Apache Jenkins Server
See 


Changes:

[me] MINOR: Remove the unused field in DelegatingClassLoader

--
[...truncated 1.10 MB...]

[jira] [Resolved] (KAFKA-7043) Connect isolation whitelist does not include new primitive converters (KIP-305)

2018-06-12 Thread Ewen Cheslack-Postava (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7043?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-7043.
--
   Resolution: Fixed
Fix Version/s: 2.1.0

Issue resolved by pull request 5198
[https://github.com/apache/kafka/pull/5198]

> Connect isolation whitelist does not include new primitive converters 
> (KIP-305)
> ---
>
> Key: KAFKA-7043
> URL: https://issues.apache.org/jira/browse/KAFKA-7043
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 2.0.0
>Reporter: Randall Hauch
>Assignee: Randall Hauch
>Priority: Blocker
> Fix For: 2.0.0, 2.1.0
>
>
> KIP-305 added several new primitive converters, but the PR did not add them 
> to the whitelist for the plugin isolation.
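
The whitelist check can be pictured as a class-name prefix match. This is a toy sketch, not Connect's actual DelegatingClassLoader/PluginUtils logic; the converter package shown is where KIP-305's primitive converters (e.g. IntegerConverter, LongConverter) live:

```python
# Toy model of a plugin-isolation whitelist as class-name prefixes.
# The prefix-matching approach here is an illustrative assumption.
WHITELIST_PREFIXES = (
    "org.apache.kafka.connect.converters.",  # KIP-305 primitive converters
    "org.apache.kafka.connect.json.",
)

def in_whitelist(class_name: str) -> bool:
    """True if the class is covered by the plugin-isolation whitelist."""
    # str.startswith accepts a tuple and matches any of the prefixes.
    return class_name.startswith(WHITELIST_PREFIXES)
```

A converter class absent from the whitelist would not be handled by the plugin isolation machinery, which is the gap the issue describes.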



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Jenkins build is back to normal : kafka-0.10.2-jdk7 #211

2018-06-12 Thread Apache Jenkins Server
See 




Wiki Access

2018-06-12 Thread Nishanth Pradeep
Hello,

Could you give me access so I can create a KIP? I have been tasked with an
issue that needs a KIP.

WIKI ID: nprad

Let me know if you need more information.

Thank You!

Best,
Nishanth Pradeep


Jenkins build is back to normal : kafka-1.0-jdk7 #199

2018-06-12 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk10 #200

2018-06-12 Thread Apache Jenkins Server
See 




Are default serdes in Kafka Streams doing more harm than good?

2018-06-12 Thread Stephane Maarek
Hi

Coming from a user perspective, I see a lot of beginners not understanding
the need for serdes and misusing the default serde settings.

I believe default serdes do more harm than good. At best, they save a bit
of boilerplate code but hide the complexity of serde happening at each
step. At worst, they generate confusion and make debugging tremendously
hard as the errors thrown at runtime don't indicate that the serde being
used is the default one.

What do you think of deprecating them as well as any API that does not use
explicit serde?

I know this may be a "tough change", but in my opinion it'll allow for more
explicit development and easier debugging.

Regards
Stéphane
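
The tradeoff Stephane describes can be sketched in plain Python (a toy illustration, not the Kafka Streams API): a default serde saves a keyword argument at each call site, but when serialization fails, nothing in the traceback says a default was silently chosen.

```python
import json

# The implicit module-level default, easy to forget about.
DEFAULT_SERDE = (json.dumps, json.loads)

def process_with_default(record, serde=None):
    # When serde is omitted, the default kicks in silently; a later
    # serialization failure gives no hint that a *default* was used.
    serialize, deserialize = serde if serde is not None else DEFAULT_SERDE
    return deserialize(serialize(record))

def process_explicit(record, *, serialize, deserialize):
    # Explicit serdes: every call site names its serde, so a runtime
    # failure is immediately attributable to that choice.
    return deserialize(serialize(record))

result = process_explicit({"k": 1}, serialize=json.dumps, deserialize=json.loads)
```

Deprecating the implicit path would force every topology to look like `process_explicit`, which is more verbose but makes the serde in use visible at each step.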


Jenkins build is back to normal : kafka-0.11.0-jdk7 #375

2018-06-12 Thread Apache Jenkins Server
See 




Build failed in Jenkins: kafka-trunk-jdk8 #2726

2018-06-12 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Move FileConfigProvider to provider subpackage (#5194)

[junrao] KAFKA-7007:  Use JSON for /kafka-acl-extended-changes path (#5161)

[jason] KAFKA-6979; Add `default.api.timeout.ms` to KafkaConsumer (KIP-266)

--
[...truncated 482.21 KB...]

Build failed in Jenkins: kafka-0.11.0-jdk7 #374

2018-06-12 Thread Apache Jenkins Server
See 


Changes:

[matthias] KAFKA-6782: solved the bug of restoration of aborted messages for

--
[...truncated 978.55 KB...]


[jira] [Created] (KAFKA-7051) Improve the efficiency of the ReplicaManager when there are many partitions

2018-06-12 Thread Colin P. McCabe (JIRA)
Colin P. McCabe created KAFKA-7051:
--

 Summary: Improve the efficiency of the ReplicaManager when there 
are many partitions
 Key: KAFKA-7051
 URL: https://issues.apache.org/jira/browse/KAFKA-7051
 Project: Kafka
  Issue Type: Bug
  Components: replication
Affects Versions: 0.8.0
Reporter: Colin P. McCabe
Assignee: Colin P. McCabe






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk10 #199

2018-06-12 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Move FileConfigProvider to provider subpackage (#5194)

--
[...truncated 1.58 MB...]

Jenkins build is back to normal : kafka-2.0-jdk8 #17

2018-06-12 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-7007) Use JSON for /kafka-acl-extended-changes path

2018-06-12 Thread Jun Rao (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao resolved KAFKA-7007.

Resolution: Fixed

merged the PR to trunk and 2.0 branch.

> Use JSON for /kafka-acl-extended-changes path
> -
>
> Key: KAFKA-7007
> URL: https://issues.apache.org/jira/browse/KAFKA-7007
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core, security
>Reporter: Andy Coates
>Assignee: Andy Coates
>Priority: Major
> Fix For: 2.0.0
>
>
> Relating to one of the outstanding work items in PR 
> [#5117|https://github.com/apache/kafka/pull/5117]...
>  
> Keep Literal ACLs on the old paths, using the old formats, to maintain 
> backwards compatibility.
> Have Prefixed, and any later types, go on new paths, using JSON (old 
> brokers are not aware of them).
> Add checks to reject any adminClient requests to add prefixed acls before the 
> cluster is fully upgraded.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-1.0-jdk7 #198

2018-06-12 Thread Apache Jenkins Server
See 


Changes:

[matthias] KAFKA-6782: solved the bug of restoration of aborted messages for

--
[...truncated 372.97 KB...]

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsExportImportPlan 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsExportImportPlan 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToSpecificOffset 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToSpecificOffset 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftPlus STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftPlus PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsNewConsumerExistingTopic STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsNewConsumerExistingTopic PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumGenerateOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumGenerateOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowThrottleWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowThrottleWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled 
STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldAllowThrottleOptionOnExecute STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 

Build failed in Jenkins: kafka-0.11.0-jdk7 #373

2018-06-12 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-6747 Check whether there is in-flight transaction before 
aborting

--
[...truncated 2.45 MB...]
org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowOffsetResetSourceWithDuplicateSourceName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullProcessorSupplier PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotSetApplicationIdToNull PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics 
STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testSourceTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSink STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testNamedTopicMatchesAlreadyProvidedPattern STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testNamedTopicMatchesAlreadyProvidedPattern PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactAndDeleteSetForWindowStores STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactAndDeleteSetForWindowStores PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactForNonWindowStores STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddInternalTopicConfigWithCompactForNonWindowStores PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddTimestampExtractorWithOffsetResetAndPatternPerSource STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddTimestampExtractorWithOffsetResetAndPatternPerSource PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSameName STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSameName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSelfParent STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddSinkWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSelfParent STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddProcessorWithSelfParent PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAssociateStateStoreNameWhenStateStoreSupplierIsInternal STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAssociateStateStoreNameWhenStateStoreSupplierIsInternal PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSink STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testTopicGroups STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testTopicGroups PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > testBuild STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > testBuild PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowOffsetResetSourceWithoutTopics STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowOffsetResetSourceWithoutTopics PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAddNullStateStoreSupplier STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAddNullStateStoreSupplier PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSource STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullNameWhenAddingSource PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullTopicWhenAddingSink STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowNullTopicWhenAddingSink PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldNotAllowToAddGlobalStoreWithSourceNameEqualsProcessorName PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddSourceWithOffsetReset STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
shouldAddSourceWithOffsetReset PASSED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSource STARTED

org.apache.kafka.streams.processor.TopologyBuilderTest > 
testAddStateStoreWithSource PASSED


[jira] [Resolved] (KAFKA-6782) GlobalKTable GlobalStateStore never finishes restoring when consuming aborted messages

2018-06-12 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-6782.

   Resolution: Fixed
Fix Version/s: 1.1.1
   1.0.2
   0.11.0.3
   2.0.0

> GlobalKTable GlobalStateStore never finishes restoring when consuming aborted 
> messages
> --
>
> Key: KAFKA-6782
> URL: https://issues.apache.org/jira/browse/KAFKA-6782
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.1.0, 1.0.1
>Reporter: Lingxiao WANG
>Assignee: Lingxiao WANG
>Priority: Major
> Fix For: 2.0.0, 0.11.0.3, 1.0.2, 1.1.1
>
>
> Same problem as https://issues.apache.org/jira/browse/KAFKA-6190, but the 
> solution proposed there, shown below, only works for committed transactional 
> messages. When there are aborted messages, it ends up in an infinite loop. 
> Here is the original proposition:
> {code:java}
> while (offset < highWatermark) {
>   final ConsumerRecords<byte[], byte[]> records = consumer.poll(100);
>   for (final ConsumerRecord<byte[], byte[]> record : records) {
>     if (record.key() != null) {
>       stateRestoreCallback.restore(record.key(), record.value());
>     }
>     offset = consumer.position(topicPartition);
>   }
> }{code}
> Concretely, when the consumer consumes a set of aborted messages, it polls 0 
> records, and the code 'offset = consumer.position(topicPartition)' never 
> gets a chance to execute.
>  So I propose to move the code 'offset = consumer.position(topicPartition)' 
> outside of the loop to guarantee that even if no records are polled, the 
> offset can always be updated.
> {code:java}
> while (offset < highWatermark) {
>   final ConsumerRecords<byte[], byte[]> records = consumer.poll(100);
>   for (final ConsumerRecord<byte[], byte[]> record : records) {
>     if (record.key() != null) {
>       stateRestoreCallback.restore(record.key(), record.value());
>     }
>   }
>   offset = consumer.position(topicPartition);
> }{code}
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
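The hang described in KAFKA-6782 above — poll() returning zero records for an aborted batch while the consumer's position still advances — can be reproduced with a minimal, self-contained simulation. Note this uses a hypothetical FakeConsumer, not Kafka's real consumer API; the loop applies the proposed fix of updating the offset outside the per-record loop:

```java
import java.util.Collections;
import java.util.List;

public class RestoreLoopDemo {
    // Hypothetical stand-in for a consumer reading a partition that
    // contains aborted transactional batches.
    static class FakeConsumer {
        long position = 0;
        final long endOffset;
        FakeConsumer(long endOffset) { this.endOffset = endOffset; }
        // An aborted batch yields zero records to the caller, but the
        // consumer's position still moves past it.
        List<byte[]> poll() {
            if (position < endOffset) {
                position++; // skip past the aborted batch
            }
            return Collections.emptyList();
        }
    }

    // The proposed fix: update the offset on every iteration, not only
    // inside the per-record loop (which never runs for aborted batches).
    static long restore(FakeConsumer consumer, long highWatermark) {
        long offset = 0;
        while (offset < highWatermark) {
            for (byte[] record : consumer.poll()) {
                // stateRestoreCallback.restore(...) would go here
            }
            offset = consumer.position; // moved outside the record loop
        }
        return offset; // terminates; the broken variant would spin forever
    }

    public static void main(String[] args) {
        System.out.println(restore(new FakeConsumer(5), 5)); // prints 5
    }
}
```

With the original placement of the position update, this loop would never advance `offset` past 0 and would spin forever, which is exactly the restore hang the issue reports.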


Build failed in Jenkins: kafka-1.0-jdk7 #197

2018-06-12 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-6747 Check whether there is in-flight transaction before 
aborting

--
[...truncated 1.44 MB...]
  where W,V,K are type-variables:
W extends Window declared in method 
reduce(Reducer,Windows,StateStoreSupplier)
V extends Object declared in interface KGroupedStream
K extends Object declared in interface KGroupedStream
:130:
 warning: [deprecation] reduce(Reducer,Windows) in KGroupedStream has 
been deprecated
public  KTable, V> reduce(final Reducer 
reducer,
 ^
  where W,V,K are type-variables:
W extends Window declared in method reduce(Reducer,Windows)
V extends Object declared in interface KGroupedStream
K extends Object declared in interface KGroupedStream
:121:
 warning: [deprecation] reduce(Reducer,Windows,String) in 
KGroupedStream has been deprecated
public  KTable, V> reduce(final Reducer 
reducer,
 ^
  where W,V,K are type-variables:
W extends Window declared in method reduce(Reducer,Windows,String)
V extends Object declared in interface KGroupedStream
K extends Object declared in interface KGroupedStream
:96:
 warning: [deprecation] reduce(Reducer,StateStoreSupplier) in 
KGroupedStream has been deprecated
public KTable reduce(final Reducer reducer,
^
  where V,K are type-variables:
V extends Object declared in interface KGroupedStream
K extends Object declared in interface KGroupedStream
:82:
 warning: [deprecation] reduce(Reducer,String) in KGroupedStream has been 
deprecated
public KTable reduce(final Reducer reducer,
^
  where V,K are type-variables:
V extends Object declared in interface KGroupedStream
K extends Object declared in interface KGroupedStream
:405:
 warning: [deprecation] count(SessionWindows,StateStoreSupplier) 
in KGroupedStream has been deprecated
public KTable, Long> count(final SessionWindows sessionWindows,
 ^
  where K is a type-variable:
K extends Object declared in interface KGroupedStream
:399:
 warning: [deprecation] count(SessionWindows) in KGroupedStream has been 
deprecated
public KTable, Long> count(final SessionWindows sessionWindows) 
{
 ^
  where K is a type-variable:
K extends Object declared in interface KGroupedStream
:391:
 warning: [deprecation] count(SessionWindows,String) in KGroupedStream has been 
deprecated
public KTable, Long> count(final SessionWindows sessionWindows, 
final String queryableStoreName) {
 ^
  where K is a type-variable:
K extends Object declared in interface KGroupedStream
:303:
 warning: [deprecation] count(Windows,StateStoreSupplier) in 
KGroupedStream has been deprecated
public  KTable, Long> count(final Windows 
windows,
^
  where W,K are type-variables:
W extends Window declared in method 
count(Windows,StateStoreSupplier)
K extends Object declared in interface KGroupedStream
:297:
 warning: [deprecation] count(Windows) in KGroupedStream has been 
deprecated
public  KTable, Long> count(final Windows 
windows) {
^
  where W,K are type-variables:
W extends Window declared in method count(Windows)
K extends Object declared in interface KGroupedStream

Build failed in Jenkins: kafka-2.0-jdk8 #16

2018-06-12 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H34 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:862)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1129)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1160)
at hudson.scm.SCM.checkout(SCM.java:495)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1202)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git config 
remote.origin.url https://github.com/apache/kafka.git" returned status code 4:
stdout: 
stderr: error: failed to write new configuration file 


at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1996)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1964)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1960)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1597)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1609)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.setRemoteUrl(CliGitAPIImpl.java:1243)
at hudson.plugins.git.GitAPI.setRemoteUrl(GitAPI.java:160)
at sun.reflect.GeneratedMethodAccessor58.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:922)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:896)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:853)
at hudson.remoting.UserRequest.perform(UserRequest.java:207)
at hudson.remoting.UserRequest.perform(UserRequest.java:53)
at hudson.remoting.Request$2.run(Request.java:358)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Suppressed: hudson.remoting.Channel$CallSiteStackTrace: Remote call to 
H34
at 
hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1693)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:310)
at hudson.remoting.Channel.call(Channel.java:908)
at 
hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:281)
at com.sun.proxy.$Proxy109.setRemoteUrl(Unknown Source)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl.setRemoteUrl(RemoteGitImpl.java:295)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:850)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1129)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1160)
at hudson.scm.SCM.checkout(SCM.java:495)
at 
hudson.model.AbstractProject.checkout(AbstractProject.java:1202)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at 
jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at 

[jira] [Created] (KAFKA-7050) Decrease consumer request timeout to 30s

2018-06-12 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-7050:
--

 Summary: Decrease consumer request timeout to 30s
 Key: KAFKA-7050
 URL: https://issues.apache.org/jira/browse/KAFKA-7050
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson
Assignee: Jason Gustafson


Per KIP-266 discussion, we should lower the request timeout. We should also add 
new logic to override this timeout for the JoinGroup request.
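As a hedged sketch of what the proposed change would mean for client configuration (the 30s value follows the issue title; the bootstrap address is a placeholder, and `request.timeout.ms` is the standard consumer config key), a consumer could be configured as:

```java
import java.util.Properties;

public class ConsumerTimeoutConfig {
    // Build consumer properties with the proposed lower request timeout.
    static Properties buildProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("request.timeout.ms", "30000");         // 30s per this issue
        return props;
    }

    public static void main(String[] args) {
        System.out.println(buildProps().getProperty("request.timeout.ms"));
    }
}
```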



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7049) InternalTopicIntegrationTest sometimes fails

2018-06-12 Thread Ted Yu (JIRA)
Ted Yu created KAFKA-7049:
-

 Summary: InternalTopicIntegrationTest sometimes fails
 Key: KAFKA-7049
 URL: https://issues.apache.org/jira/browse/KAFKA-7049
 Project: Kafka
  Issue Type: Test
Reporter: Ted Yu


Saw the following based on commit fa1d0383902260576132e09bdf9efcc2784b55b4 :
{code}
org.apache.kafka.streams.integration.InternalTopicIntegrationTest > 
shouldCompactTopicsForKeyValueStoreChangelogs FAILED
java.lang.RuntimeException: Timed out waiting for completion. 
lagMetrics=[0/2] totalLag=[0.0]
at 
org.apache.kafka.streams.integration.utils.IntegrationTestUtils.waitForCompletion(IntegrationTestUtils.java:227)
at 
org.apache.kafka.streams.integration.InternalTopicIntegrationTest.shouldCompactTopicsForKeyValueStoreChangelogs(InternalTopicIntegrationTest.java:164)
{code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-6976) Kafka Streams instances going in to DEAD state

2018-06-12 Thread Matthias J. Sax (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6976?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-6976.

Resolution: Not A Bug

Closing this because it's not a bug, but a configuration issue.

> Kafka Streams instances going in to DEAD state
> --
>
> Key: KAFKA-6976
> URL: https://issues.apache.org/jira/browse/KAFKA-6976
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.0
>Reporter: Deepak Goyal
>Priority: Blocker
> Attachments: kafkaStreamsDeadState.log
>
>
> We are using Kafka 0.10.2.0 and Kafka Streams 1.1.0. We have a Kafka cluster 
> of 16 machines, and the topic being consumed by Kafka Streams has 256 
> partitions. We spawned 400 machines of the Kafka Streams application. We see 
> that all of the StreamThreads go into DEAD state.
> {quote}{{[2018-05-25 05:59:29,282] INFO stream-thread 
> [ksapp-19f923d7-5f9e-4137-b79f-ee20945a7dd7-StreamThread-1] State transition 
> from PENDING_SHUTDOWN to DEAD 
> (org.apache.kafka.streams.processor.internals.StreamThread) [2018-05-25 
> 05:59:29,282] INFO stream-client [ksapp-19f923d7-5f9e-4137-b79f-ee20945a7dd7] 
> State transition from REBALANCING to ERROR 
> (org.apache.kafka.streams.KafkaStreams) [2018-05-25 05:59:29,282] WARN 
> stream-client [ksapp-19f923d7-5f9e-4137-b79f-ee20945a7dd7] All stream threads 
> have died. The instance will be in error state and should be closed. 
> (org.apache.kafka.streams.KafkaStreams) [2018-05-25 05:59:29,282] INFO 
> stream-thread [ksapp-19f923d7-5f9e-4137-b79f-ee20945a7dd7-StreamThread-1] 
> Shutdown complete 
> (org.apache.kafka.streams.processor.internals.StreamThread)}}
> {quote}
> Please note that when we only have 100 kafka-streams application machines, 
> things are working as expected. We see that instances are consuming messages 
> from topic.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-228 Negative record timestamp support

2018-06-12 Thread Bill Bejeck
+1

Thanks,
Bill

On Sun, Jun 10, 2018 at 5:32 PM Ted Yu  wrote:

> +1
>
> On Sun, Jun 10, 2018 at 2:17 PM, Matthias J. Sax 
> wrote:
>
> > +1 (binding)
> >
> > Thanks for the KIP.
> >
> >
> > -Matthias
> >
> > On 5/29/18 9:14 AM, Konstantin Chukhlomin wrote:
> > > Thanks, updated the KIP.
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-228+Negative+record+timestamp+support
> > >
> > >> On May 24, 2018, at 1:39 PM, Guozhang Wang 
> wrote:
> > >>
> > >> Thanks Konstantin, I have a minor comment on the wiki page's "Proposed
> > >> Solution" section:
> > >>
> > >> "The solution is to ignore that problem and it is a choice of the user
> > >> to..":
> > >>
> > >> --
> > >>
> > >> I'd suggest we rephrase it and admit "-1 is a special value in Kafka,
> > which
> > >> means we do not have a valid way to express Wednesday, December 31,
> 1969
> > >> 11:59:59 PM UTC." rather than letting users choose how to interpret
> > it,
> > >> since as mentioned, both on the Kafka broker side and on the Kafka
> > client
> > >> side (Streams, Producer) we are already using this protocol to set -1
> > as a
> > >> special value of `unknown`, so users choosing how to interpret it
> freely
> > >> would lead to confusing behaviors. Instead, we should clearly document
> this
> > >> caveat, and educate users in the document that, given this situation,
> if
> > >> you do need to express Wednesday, December 31, 1969 11:59:59 PM UTC,
> > >> consider shifting it by one millisecond.
> > >>
> > >>
> > >> Otherwise, I'm +1 on this KIP.
> > >>
> > >>
> > >> Guozhang
> > >>
> > >>
> > >>
> > >> On Wed, May 23, 2018 at 2:40 PM, Konstantin Chukhlomin <
> > chuhlo...@gmail.com>
> > >> wrote:
> > >>
> > >>> All,
> > >>>
> > >>> Thanks for the feedback on KIP-228. I've updated the KIP, and would
> > like
> > >>> to start to start a vote.
> > >>>
> > >>> KIP: https://cwiki.apache.org/confluence/display/KAFKA/KIP-228+Negative+record+timestamp+support
> > >>>
> > >>> Pull Request: https://github.com/apache/kafka/pull/5072/files
> > >>>
> > >>> Discussion Thread: http://mail-archives.apache.org/mod_mbox/kafka-dev/201712.mbox/%3c178138FA-CA30-44A8-b92a-b966b8ca1...@gmail.com%3e
> > >>>
> > >>> Thanks!
> > >>
> > >>
> > >>
> > >>
> > >> --
> > >> -- Guozhang
> > >
> > >
> >
> >
>
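The one-millisecond shift Guozhang suggests for the single unrepresentable instant (epoch millis -1, i.e. December 31, 1969 11:59:59.999 PM UTC) could be sketched as a small helper. `sanitize` is a hypothetical name for illustration, not part of any Kafka API:

```java
public class TimestampShift {
    // -1 is reserved in Kafka's message format to mean "no timestamp".
    static final long NO_TIMESTAMP = -1L;

    // Hypothetical helper: nudge the one unrepresentable instant by one
    // millisecond so it is never confused with the "no timestamp" sentinel.
    static long sanitize(long epochMillis) {
        return epochMillis == NO_TIMESTAMP ? epochMillis - 1 : epochMillis;
    }

    public static void main(String[] args) {
        System.out.println(sanitize(-1L));    // shifted to -2
        System.out.println(sanitize(-1000L)); // any other negative value passes through
    }
}
```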


Re: access to create KIP

2018-06-12 Thread Matthias J. Sax
Done.

On 6/12/18 12:37 AM, Chia-Ping Tsai wrote:
> dear Kafka,
> 
> Please give me permission to create KIPs. The email is 
> "chia7...@gmail.com" and the account is "chia7712".
> 
> Best Regards,
> chia-ping
> 





[jira] [Created] (KAFKA-7048) NPE when creating >1 connectors

2018-06-12 Thread Chia-Ping Tsai (JIRA)
Chia-Ping Tsai created KAFKA-7048:
-

 Summary: NPE when creating >1 connectors
 Key: KAFKA-7048
 URL: https://issues.apache.org/jira/browse/KAFKA-7048
 Project: Kafka
  Issue Type: Improvement
Reporter: Chia-Ping Tsai
Assignee: Chia-Ping Tsai
 Fix For: 2.0.0


KAFKA-6886 introduced the ConfigTransformer to transform the given 
configuration data. ConfigTransformer#transform(Map<String, String>) expects 
the passed config to be non-null, but DistributedHerder#putConnectorConfig 
calls #transform before updating the snapshot (see below). Hence, it causes 
the NPE.
{code:java}
// Note that we use the updated connector config despite the fact that we
// don't have an updated snapshot yet. The existing task info should still
// be accurate.
Map<String, String> map = configState.connectorConfig(connName);
ConnectorInfo info = new ConnectorInfo(connName, config, 
configState.tasks(connName),
map == null ? null : 
connectorTypeForClass(map.get(ConnectorConfig.CONNECTOR_CLASS_CONFIG)));
callback.onCompletion(null, new Created<>(!exists, info));
return null;{code}
We can add a null check on "configs" (see below) to resolve the NPE. It means 
we WON'T pass a null configs to the configTransformer.
{code:java}
public Map<String, String> connectorConfig(String connector) {
    Map<String, String> configs = connectorConfigs.get(connector);
    if (configTransformer != null) { // add a condition "configs != null"
        configs = configTransformer.transform(connector, configs);
    }
    return configs;
}{code}
 

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7047) Connect isolation whitelist does not include SimpleHeaderConverter

2018-06-12 Thread Randall Hauch (JIRA)
Randall Hauch created KAFKA-7047:


 Summary: Connect isolation whitelist does not include 
SimpleHeaderConverter
 Key: KAFKA-7047
 URL: https://issues.apache.org/jira/browse/KAFKA-7047
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 1.1.0
Reporter: Randall Hauch
Assignee: Randall Hauch


The SimpleHeaderConverter added in 1.1.0 was never added to the PluginUtils 
whitelist, so this header converter is not loaded in isolation.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Resolved] (KAFKA-2550) [Kafka][0.8.2.1][Performance]When there are a lot of partition under a Topic, there are serious performance degradation.

2018-06-12 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2550.
--
Resolution: Auto Closed

Closing inactive issue. Old clients are deprecated. Please 
reopen if you think the issue still exists in newer versions.
 

> [Kafka][0.8.2.1][Performance]When there are a lot of partition under a Topic, 
> there are serious performance degradation.
> 
>
> Key: KAFKA-2550
> URL: https://issues.apache.org/jira/browse/KAFKA-2550
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, producer 
>Affects Versions: 0.8.2.1
>Reporter: yanwei
>Assignee: Neha Narkhede
>Priority: Major
>
> Because our business needs to create a large number of partitions, I tested 
> how many partitions are supported.
> I found that when there are a lot of partitions under a topic, there is 
> serious performance degradation.
> Through analysis, in addition to the hard disk being a bottleneck, the 
> client is also a bottleneck.
> I used JProfiler, producing and consuming 100 messages (msg size: 500 bytes).
> 1. Consumer high-level API (I find I can't upload a picture):
>  ZookeeperConsumerConnector.scala-->rebalance
> -->val assignmentContext = new AssignmentContext(group, consumerIdString, 
> config.excludeInternalTopics, zkClient)
> -->ZkUtils.getPartitionsForTopics(zkClient, myTopicThreadIds.keySet.toSeq)
> -->getPartitionAssignmentForTopics
> -->Json.parseFull(jsonPartitionMap) 
>  1) one topic, 400 partitions:
>  JProfiler: 48.6% CPU run time
>  2) one topic, 3000 partitions:
>  JProfiler: 97.8% CPU run time
>   Maybe the file (jsonPartitionMap) is very big, which makes parsing very 
> slow. But this function is executed only once, so the problem should not be 
> too big.
> 2. Producer Scala API:
> BrokerPartitionInfo.scala--->getBrokerPartitionInfo:
> partitionMetadata.map { m =>
>   m.leader match {
> case Some(leader) =>
>   //y00163442 delete log print
>   debug("Partition [%s,%d] has leader %d".format(topic, 
> m.partitionId, leader.id))
>   new PartitionAndLeader(topic, m.partitionId, Some(leader.id))
> case None =>
>   //y00163442 delete log print
>   //debug("Partition [%s,%d] does not have a leader 
> yet".format(topic, m.partitionId))
>   new PartitionAndLeader(topic, m.partitionId, None)
>   }
> }.sortWith((s, t) => s.partitionId < t.partitionId) 
>  
>   When the partition count is > 25, the 'format' function's CPU run time is 44.8%.
>   Nearly half of the time is consumed in the format function. Whether or not 
> log printing is enabled, this format call is executed, which cut TPS 
> five-fold (25000 ---> 5000).
>   
> 3. Producer Java client (clients module):
>   Function: org.apache.kafka.clients.producer.KafkaProducer.send
>   I find that the 'send' function's CPU run time rises with the number of 
> partitions; when there are 5000 partitions, the CPU run time is 60.8%.
>   Because the broker-side CPU, memory, disk, and network did not reach a 
> bottleneck, and the results are similar no matter whether 
> request.required.acks is set to 0 or 1, I suspect send may have some 
> bottlenecks.
> Unfortunately the picture uploads did not succeed, so you can't see the results.
> In my test results, for a single server, a single hard disk can support 1000 
> partitions and 7 hard disks can support 3000 partitions. If the client 
> bottleneck can be solved, I estimate that seven hard disks could support 
> more partitions.
> In an actual production configuration, the partitions could be spread across 
> more than one topic, and things could be better.





Re: [DISCUSS] KIP-312: Add Overloaded StreamsBuilder Build Method to Accept java.util.Properties

2018-06-12 Thread Bill Bejeck
> Since there're only two values for the optional optimization config
> introduced by KAFKA-6935, I wonder whether the overloaded build method (with
> Properties instance) would make the config unnecessary.

Hi Ted, thanks for commenting.  You raise a good point.  But IMHO, yes we
still need the config as we want to give users the ability to turn
optimizations off/on explicitly and we haven't finalized anything
concerning how we'll pass in the parameters.  Additionally, as we release
new versions, the config will give users the ability to choose to apply all
of the latest optimizations or stick with the previous version.

Guozhang,

   > if we can hide this from the public API, to, e.g. add an additional
function
   > in InternalTopologyBuilder or InternalStreamsBuilder (since in your
current
   > working PR we're reusing InternalStreamsBuilder for the logical plan
   > generation) which can then be called inside KafkaStreams constructors?

I like the idea, but as I looked into it, there is an issue concerning the
fact that users can call Topology.describe() at any point.  So with this
approach, we could end up where the first call to Topology.describe()
errors out or returns an invalid description, then the second call is
successful.  So I don't think we'll be able to pursue this approach.


John,

I initially liked your suggestion, but I also agree with Matthias as to why
we should not use that approach either.

Thanks to all for the comments.

Bill
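To make the call shapes in this thread concrete, here is a minimal stand-alone sketch using stub types — these are not the real Kafka Streams classes, and the `topology.optimization` key is only assumed to be the KAFKA-6935 config name:

```java
import java.util.Properties;

class Topology {
    final boolean optimized;
    Topology(boolean optimized) { this.optimized = optimized; }
}

class StreamsBuilder {
    // Existing no-arg build.
    Topology build() { return new Topology(false); }

    // Proposed overload from KIP-312: configs influence the topology build.
    Topology build(Properties props) {
        return new Topology("all".equals(props.getProperty("topology.optimization")));
    }
}

public class Kip312Sketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("topology.optimization", "all");
        StreamsBuilder builder = new StreamsBuilder();
        // The shape John noted: props appears twice, once in build(...)
        // and once (in real code) in the KafkaStreams constructor.
        Topology topology = builder.build(props);
        assert topology.optimized;
        System.out.println("optimized=" + topology.optimized);
    }
}
```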


On Mon, Jun 11, 2018 at 4:13 PM John Roesler  wrote:

> Thanks Matthias,
>
> I buy this reasoning.
>
> -John
>
> On Mon, Jun 11, 2018 at 12:48 PM, Matthias J. Sax 
> wrote:
>
> > @John: I don't think this is a good idea. `KafkaStreams` executes a
> > `Topology` and should be agnostic to whether the topology was built manually
> > or via `StreamsBuilder` (at least from my point of view).
> >
> > -Matthias
> >
> > On 6/11/18 9:53 AM, Guozhang Wang wrote:
> > > Another implementation detail that we can consider: currently the
> > > InternalTopologyBuilder#setApplicationId() is used because we do not
> > have
> > > such a mechanism to pass in configs to the topology building process.
> > Once
> > > we add such mechanism we should consider removing this function as
> well.
> > >
> > >
> > > Guozhang
> > >
> > > On Mon, Jun 11, 2018 at 9:51 AM, Guozhang Wang 
> > wrote:
> > >
> > >> Hello Bill,
> > >>
> > >> While working on https://github.com/apache/kafka/pull/5163 I am
> > wondering
> > >> if we can hide this from the public API, to e.g. add an additional
> > function
> > >> in InternalTopologyBuilder or InternalStreamsBuilder (since in your
> > current
> > >> working PR we're reusing InternalStreamsBuilder for the logical plan
> > >> generation) which can then be called inside KafkaStreams constructors?
> > >>
> > >>
> > >> Guozhang
> > >>
> > >>
> > >> On Mon, Jun 11, 2018 at 9:41 AM, John Roesler 
> > wrote:
> > >>
> > >>> Hi Bill,
> > >>>
> > >>> Thanks for the KIP.
> > >>>
> > >>> Just a small thought. This new API will result in calls that look
> like
> > >>> this:
> > >>> new KafkaStreams(builder.build(props), props);
> > >>>
> > >>> Do you think that's a significant enough eyesore to warrant adding a
> > new
> > >>> KafkaStreams constructor taking a KStreamsBuilder like this:
> > >>> new KafkaStreams(builder, props);
> > >>>
> > >>> such that it would internally call builder.build(props) ?
> > >>>
> > >>> Thanks,
> > >>> -John
> > >>>
> > >>>
> > >>>
> > >>> On Fri, Jun 8, 2018 at 7:16 PM, Ted Yu  wrote:
> > >>>
> >  Since there're only two values for the optional optimization config
> >  introduced by KAFKA-6935, I wonder the overloaded build method (with
> >  Properties
> >  instance) would make the config unnecessary.
> > 
> >  nit:
> >  * @return @return the {@link Topology} that represents the specified
> >  processing logic
> > 
> >  Double @return above.
> > 
> >  Cheers
> > 
> >  On Fri, Jun 8, 2018 at 3:20 PM, Bill Bejeck 
> > wrote:
> > 
> > > All,
> > >
> > > I'd like to start the discussion for adding an overloaded method to
> > > StreamsBuilder taking a java.util.Properties instance.
> > >
> > > The KIP is located here :
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 312%3A+Add+Overloaded+StreamsBuilder+Build+Method+
> > > to+Accept+java.util.Properties
> > >
> > > I look forward to your comments.
> > >
> > > Thanks,
> > > Bill
> > >
> > 
> > >>>
> > >>
> > >>
> > >>
> > >> --
> > >> -- Guozhang
> > >>
> > >
> > >
> > >
> >
> >
>


[jira] [Resolved] (KAFKA-1181) Consolidate brokerList and topicPartitionInfo in BrokerPartitionInfo

2018-06-12 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-1181?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-1181.
--
Resolution: Auto Closed

Closing this as the BrokerPartitionInfo class was removed in KAFKA-6921.

> Consolidate brokerList and topicPartitionInfo in BrokerPartitionInfo
> 
>
> Key: KAFKA-1181
> URL: https://issues.apache.org/jira/browse/KAFKA-1181
> Project: Kafka
>  Issue Type: Bug
>Reporter: Guozhang Wang
>Priority: Major
>
> brokerList in BrokerConfig is used to send the TopicMetadataRequest to known 
> brokers, and the broker id always starts at 0 and increases incrementally, and 
> it is never updated in BrokerPartitionInfo even after the topic metadata 
> response has been received.
> The real broker ids are actually stored in topicPartitionInfo: 
> HashMap[String, TopicMetadata], which is refreshed with the topic metadata 
> response. Therefore we could see different broker ids in logging entries 
> reporting failures of metadata requests and failures of produce requests.
> The solution here is to consolidate these two: read the initial broker 
> list but keep it refreshed with topic metadata responses.





Hoping to see the community at Kafka Summit SF

2018-06-12 Thread Gwen Shapira
Hello Kafka users and contributors,

Kafka Summit SF call for proposal is open until Saturday, June 16. You are
all invited to submit your talk proposals. Sharing your knowledge, stories
and experience is a great way to contribute to the community.

I consistently notice that people with great stories somehow decide that
they are not good enough. I encourage you to submit anyway and let the
conference committee decide. If you want feedback on a proposal, feel free
to email me directly and I'll be happy to help.

Even if you don't submit a talk, you should absolutely attend. The talks
will be amazing and you'll become a better Kafka expert. Most of the
committers will be there, so you will really have this opportunity to
discuss the details of Kafka, why design decisions were made - and how to
contribute more to Kafka.

And since the entire Kafka community should attend, here's the community
discount code: KS18Comm25

Looking forward to your amazing abstracts and to see you all there.

Gwen Shapira


[jira] [Resolved] (KAFKA-3379) Update tools relying on old producer to use new producer.

2018-06-12 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-3379?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-3379.
--
   Resolution: Fixed
Fix Version/s: 2.0.0

> Update tools relying on old producer to use new producer.
> -
>
> Key: KAFKA-3379
> URL: https://issues.apache.org/jira/browse/KAFKA-3379
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.10.0.0
>Reporter: Ashish Singh
>Assignee: Ashish Singh
>Priority: Major
> Fix For: 2.0.0
>
>
> Following tools are using old producer.
> * ReplicationVerificationTool
> * SimpleConsumerShell
> * GetOffsetShell
> Old producer is being marked as deprecated in 0.10. These tools should be 
> updated to use new producer. To make sure that this update does not break 
> existing behavior. Below is the action plan.
> For each tool that uses old producer.
> * Add ducktape test to establish current behavior.
> * Once the tests are committed and run fine, add patch for modification of 
> these tools. The ducktape tests added in previous step should confirm that 
> existing behavior is still intact.





[jira] [Created] (KAFKA-7046) Support new Admin API for single topic

2018-06-12 Thread darion yaphet (JIRA)
darion yaphet created KAFKA-7046:


 Summary: Support new Admin API for single topic
 Key: KAFKA-7046
 URL: https://issues.apache.org/jira/browse/KAFKA-7046
 Project: Kafka
  Issue Type: New Feature
  Components: admin
Affects Versions: 1.1.0
Reporter: darion yaphet


When I create, delete, or describe a topic with AdminClient, I often use just 
one topic.

Currently I must wrap it into a collection.
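For reference, the current workaround is the singleton idiom — the AdminClient topic methods (e.g. createTopics, deleteTopics, describeTopics) take collections, so a single topic gets wrapped. A self-contained sketch of the idiom (no Kafka dependency; the AdminClient call in the comment is illustrative):

```java
import java.util.Collections;
import java.util.Set;

public class SingleTopicDemo {
    public static void main(String[] args) {
        // Collections.singleton(...) builds the one-element collection the
        // collection-based Admin API expects, e.g.
        // adminClient.deleteTopics(Collections.singleton("demo-topic")).
        Set<String> topics = Collections.singleton("demo-topic");
        assert topics.size() == 1 && topics.contains("demo-topic");
        System.out.println(topics);
    }
}
```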

 





[jira] [Resolved] (KAFKA-2572) zk connection instability

2018-06-12 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2572.
--
Resolution: Auto Closed

Closing inactive issue. Please reopen if the issue still exists in newer 
versions.

> zk connection instability
> -
>
> Key: KAFKA-2572
> URL: https://issues.apache.org/jira/browse/KAFKA-2572
> Project: Kafka
>  Issue Type: Bug
>  Components: zkclient
>Affects Versions: 0.8.2.1
> Environment: zk version 3.4.6,
> CentOS 6, 2.6.32-504.1.3.el6.x86_64
>Reporter: John Firth
>Priority: Major
> Attachments: 090815-digest.log, 090815-full.log, 091115-digest.log, 
> 091115-full.log.zip
>
>
> On several occasions, we've seen our process enter a cycle of: zk session 
> expiry; new session creation; rebalancing activity; pause during which 
> nothing is heard from the zk server. Sometimes, the reconnections are 
> successful, elements are pulled from Kafka, but then disconnection and 
> reconnection occurs shortly thereafter, causing OOMs when new elements are 
> pulled in (although OOMs were not seen in the two cases attached as 
> examples). Restarting the process that uses the zk client resolved the 
> problems in both cases.
> This behavior was seen on 09/08 and 09/11 -- the attached 'full' logs show 
> all logs entries minus entries particular to our application. For 09/08, the 
> time span is 2015-09-08T12:52:06.069-04:00 to 2015-09-08T13:14:48.250-04:00; 
> for 11/08, the time span is between 2015-09-11T01:38:17.000-04:00 to 
> 2015-09-11T07:44:47.124-04:00. The digest logs are the result of retaining 
> only error and warning entries, and entries containing any of: "begin 
> rebalancing", "end rebalancing", "timed", and "zookeeper state". For the 
> 09/11 digest logs, entries from the kafka.network.Processor logger are also 
> excised for clarity. Unfortunately, debug logging was not enabled during 
> these events.
> The 09/11 case shows repeated cycles of session expiry, followed by 
> rebalancing activity, followed by a pause during which nothing is heard from 
> the zk server, followed by a session timeout. A stable session seems to have 
> been established at 2015-09-11T04:13:47.140-04:00, but messages of the form 
> "I wrote this conflicted ephemeral node 
> [{"version":1,"subscription":{"binlogs_mailchimp_us2":100},"pattern":"static","timestamp":"1441959227564"}]
>  at 
> /consumers/prologue-second-stage_prod_us2/ids/prologue-second-stage_prod_us2_app01.c1.prologue.prod.atl01.rsglab.com-1441812334972-b967b718
>  a while back in a different session, hence I will backoff for this node to 
> be deleted by Zookeeper and retry" were logged out repeatedly until we 
> restarted the process after 2015-09-11T07:44:47.124-04:00, which marks the 
> final entry in the log.
> The 09/08 case is a little more straightforward than the 09/11 case, in that 
> a stable session was not established prior to our restarting the process.
> It's perhaps also noteworthy that in the 09/08 case, two timeouts for the 
> same session are seen during a single rebalance, at 
> 2015-09-08T12:52:19.107-04:00 and 2015-09-08T12:52:31.639-04:00. The 
> rebalance in question begins at 2015-09-08T12:52:06.667-04:00.
> The connection to ZK expires and is re-established multiple times before the 
> process is killed after 2015-09-08T13:13:41.655-04:00, which marks the last 
> entry in the logs for this day.





Jenkins build is back to normal : kafka-trunk-jdk10 #196

2018-06-12 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-110: Add Codec for ZStandard Compression (Updated)

2018-06-12 Thread Dongjin Lee
Here is the short conclusion about the license problem: *We can use zstd
and zstd-jni without any problem, but we need to include their license,
e.g., BSD license.*

Both the BSD 2-Clause and 3-Clause licenses require including the
license text, and the BSD 3-Clause license additionally requires that the names
of the contributors can't be used to endorse or promote the product. That's it.

- They are also not listed in the list of prohibited licenses.

Here is how Spark did it:

- They made a directory dedicated to the dependency license files
and added licenses for Zstd & Zstd-jni.
- Added a link to the original license files in LICENSE.


If needed, I can make a similar update.

Thanks for pointing out this problem, Viktor! Nice catch!

Best,
Dongjin
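Assuming KIP-110 reuses the existing `compression.type` producer setting — which today accepts `none`, `gzip`, `snappy`, and `lz4` — the opt-in Viktor describes would be an explicit per-client config change. A stand-alone sketch (the extended codec list is an assumption, not released behavior):

```java
import java.util.Arrays;
import java.util.List;
import java.util.Properties;

public class CompressionOptIn {
    // Hypothetical codec list after KIP-110; today's list has no "zstd".
    static final List<String> CODECS =
        Arrays.asList("none", "gzip", "snappy", "lz4", "zstd");

    public static void main(String[] args) {
        Properties producerProps = new Properties();
        // Opt-in is per client: upgrading brokers alone never switches an
        // existing producer to zstd, which is why option b is safe-by-default.
        producerProps.setProperty("compression.type", "zstd");
        String codec = producerProps.getProperty("compression.type");
        assert CODECS.contains(codec);
        System.out.println("compression.type=" + codec);
    }
}
```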



On Mon, Jun 11, 2018 at 11:50 PM Dongjin Lee  wrote:

> I greatly appreciate your comprehensive reasoning, so: +1 for b for now.
>
> For the license issues, I will check how the other projects are
> doing and share the results.
>
> Best,
> Dongjin
>
> On Mon, Jun 11, 2018 at 10:08 PM Viktor Somogyi 
> wrote:
>
>> Hi Dongjin,
>>
>> A couple of comments:
>> I would vote for option b. in the "backward compatibility" section. My
>> reasoning for this is that users upgrading to a zstd compatible version
>> won't start to use it automatically, so manual reconfiguration is
>> required.
>> Therefore an upgrade won't mess up the cluster. If not all the clients are
>> upgraded but just some of them and they'd start to use zstd then it would
>> cause errors in the cluster. I'd like to presume though that this is a
>> very
>> obvious failure case and nobody should be surprised if it didn't work.
>> I wouldn't choose a. as I think we should bump the fetch and produce
>> requests if it's a change in the message format. Moreover if some of the
>> producers and the brokers are upgraded but some of the consumers are not,
>> then we wouldn't prevent the error when the old consumer tries to consume
>> the zstd compressed messages.
>> I wouldn't choose c. either as I think binding the compression type to an
>> API is not so obvious from the developer's perspective.
>>
>> I would also prefer to use the existing binding, however we must respect
>> the licenses:
>> "The code for these JNI bindings is licenced under 2-clause BSD license.
>> The native Zstd library is licensed under 3-clause BSD license and GPL2"
>> Based on the FAQ page
>> https://www.apache.org/legal/resolved.html#category-a
>> we may use 2- and 3-clause BSD licenses but the Apache license is not
>> compatible with GPL2. I'm hoping that the "3-clause BSD license and GPL2"
>> is really not an AND but an OR in this case, but I'm no lawyer, just
>> wanted
>> to make the point that we should watch out for licenses. :)
>>
>> Regards,
>> Viktor
>>
>>
>> On Sun, Jun 10, 2018 at 3:02 AM Ivan Babrou  wrote:
>>
>> > Hello,
>> >
>> > This is Ivan and I still very much support the fact that zstd
>> compression
>> > should be included out of the box.
>> >
>> > Please think about the environment, you can save quite a lot of hardware
>> > with it.
>> >
>> > Thank you.
>> >
>> > On Sat, Jun 9, 2018 at 14:14 Dongjin Lee  wrote:
>> >
>> > > Since there are no responses for a week, I decided to reinitiate the
>> > > discussion thread.
>> > >
>> > >
>> > >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-110%3A+Add+Codec+for+ZStandard+Compression
>> > >
>> > > This KIP is about to introduce ZStandard Compression into Apache Kafka.
>> > > The reason why it is posted again has a story: it was originally posted
>> > > to the dev mailing list more than one year ago, but since it had no
>> > > performance report included, it was postponed. But some people (including
>> > > Ivan) reported excellent performance with the draft PR, so this work is
>> > > now reactivated.
>> > >
>> > > The updated KIP document includes some expected problems and their
>> > > candidate alternatives. Please have a look when you are free, and give
>> > me a
>> > > feedback. All kinds of participating are welcome.
>> > >
>> > > Best,
>> > > Dongjin
>> > >
>> > > --
>> > > *Dongjin Lee*
>> > >
>> > > *A hitchhiker in the mathematical world.*
>> > >
>> > > *github:  github.com/dongjinleekr
>> > > linkedin:
>> > kr.linkedin.com/in/dongjinleekr
>> > > slideshare:
>> > www.slideshare.net/dongjinleekr
>> > > *
>> > >
>> >
>>
> --
> *Dongjin Lee*
>

[jira] [Resolved] (KAFKA-6351) libs directory has duplicate javassist jars

2018-06-12 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6351?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-6351.
--
   Resolution: Fixed
Fix Version/s: 2.0.0

Closing this as the javassist jar resolves to a single version in the latest code.

> libs directory has duplicate javassist jars
> ---
>
> Key: KAFKA-6351
> URL: https://issues.apache.org/jira/browse/KAFKA-6351
> Project: Kafka
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.0.0
>Reporter: pre sto
>Priority: Minor
> Fix For: 2.0.0
>
>
> Downloaded kafka_2.11-1.0.0 and noticed duplicate jars under libs
> javassist-3.20.0-GA.jar
> javassist-3.21.0-GA.jar
> I assume that's a mistake





Build failed in Jenkins: kafka-trunk-jdk10 #195

2018-06-12 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-7029: Update ReplicaVerificationTool not to use SimpleConsumer

--
[...truncated 1.10 MB...]
org.apache.kafka.clients.NodeApiVersionsTest > testUsableVersionLatestVersions 
STARTED

org.apache.kafka.clients.NodeApiVersionsTest > testUsableVersionLatestVersions 
PASSED

org.apache.kafka.clients.MetadataTest > testListenerCanUnregister STARTED

org.apache.kafka.clients.MetadataTest > testListenerCanUnregister PASSED

org.apache.kafka.clients.MetadataTest > testTopicExpiry STARTED

org.apache.kafka.clients.MetadataTest > testTopicExpiry PASSED

org.apache.kafka.clients.MetadataTest > testFailedUpdate STARTED

org.apache.kafka.clients.MetadataTest > testFailedUpdate PASSED

org.apache.kafka.clients.MetadataTest > testMetadataUpdateWaitTime STARTED

org.apache.kafka.clients.MetadataTest > testMetadataUpdateWaitTime PASSED

org.apache.kafka.clients.MetadataTest > testUpdateWithNeedMetadataForAllTopics 
STARTED

org.apache.kafka.clients.MetadataTest > testUpdateWithNeedMetadataForAllTopics 
PASSED

org.apache.kafka.clients.MetadataTest > testClusterListenerGetsNotifiedOfUpdate 
STARTED

org.apache.kafka.clients.MetadataTest > testClusterListenerGetsNotifiedOfUpdate 
PASSED

org.apache.kafka.clients.MetadataTest > testTimeToNextUpdate_RetryBackoff 
STARTED

org.apache.kafka.clients.MetadataTest > testTimeToNextUpdate_RetryBackoff PASSED

org.apache.kafka.clients.MetadataTest > testMetadata STARTED

org.apache.kafka.clients.MetadataTest > testMetadata PASSED

org.apache.kafka.clients.MetadataTest > testTimeToNextUpdate_OverwriteBackoff 
STARTED

org.apache.kafka.clients.MetadataTest > testTimeToNextUpdate_OverwriteBackoff 
PASSED

org.apache.kafka.clients.MetadataTest > testTimeToNextUpdate STARTED

org.apache.kafka.clients.MetadataTest > testTimeToNextUpdate PASSED

org.apache.kafka.clients.MetadataTest > testListenerGetsNotifiedOfUpdate STARTED

org.apache.kafka.clients.MetadataTest > testListenerGetsNotifiedOfUpdate PASSED

org.apache.kafka.clients.MetadataTest > testNonExpiringMetadata STARTED

org.apache.kafka.clients.MetadataTest > testNonExpiringMetadata PASSED

org.apache.kafka.clients.InFlightRequestsTest > 
testCompleteNextThrowsIfNoInflights STARTED

org.apache.kafka.clients.InFlightRequestsTest > 
testCompleteNextThrowsIfNoInflights PASSED

org.apache.kafka.clients.InFlightRequestsTest > 
testCompleteLastSentThrowsIfNoInFlights STARTED

org.apache.kafka.clients.InFlightRequestsTest > 
testCompleteLastSentThrowsIfNoInFlights PASSED

org.apache.kafka.clients.InFlightRequestsTest > testCompleteNext STARTED

org.apache.kafka.clients.InFlightRequestsTest > testCompleteNext PASSED

org.apache.kafka.clients.InFlightRequestsTest > testCompleteLastSent STARTED

org.apache.kafka.clients.InFlightRequestsTest > testCompleteLastSent PASSED

org.apache.kafka.clients.InFlightRequestsTest > testClearAll STARTED

org.apache.kafka.clients.InFlightRequestsTest > testClearAll PASSED

org.apache.kafka.clients.admin.KafkaAdminClientTest > testGetOrCreateListValue 
STARTED

org.apache.kafka.clients.admin.KafkaAdminClientTest > testGetOrCreateListValue 
PASSED

org.apache.kafka.clients.admin.KafkaAdminClientTest > testCreateTopics STARTED

org.apache.kafka.clients.admin.KafkaAdminClientTest > testCreateTopics PASSED

org.apache.kafka.clients.admin.KafkaAdminClientTest > testDeleteRecords STARTED

org.apache.kafka.clients.admin.KafkaAdminClientTest > testDeleteRecords PASSED

org.apache.kafka.clients.admin.KafkaAdminClientTest > 
testConnectionFailureOnMetadataUpdate STARTED

org.apache.kafka.clients.admin.KafkaAdminClientTest > 
testConnectionFailureOnMetadataUpdate PASSED

org.apache.kafka.clients.admin.KafkaAdminClientTest > testInvalidTopicNames 
STARTED

org.apache.kafka.clients.admin.KafkaAdminClientTest > testInvalidTopicNames 
PASSED

org.apache.kafka.clients.admin.KafkaAdminClientTest > testDescribeAcls STARTED

org.apache.kafka.clients.admin.KafkaAdminClientTest > testDescribeAcls PASSED

org.apache.kafka.clients.admin.KafkaAdminClientTest > 
testCreateTopicsHandleNotControllerException STARTED

org.apache.kafka.clients.admin.KafkaAdminClientTest > 
testCreateTopicsHandleNotControllerException PASSED

org.apache.kafka.clients.admin.KafkaAdminClientTest > testPrettyPrintException 
STARTED

org.apache.kafka.clients.admin.KafkaAdminClientTest > testPrettyPrintException 
PASSED

org.apache.kafka.clients.admin.KafkaAdminClientTest > 
testTimeoutWithoutMetadata STARTED

org.apache.kafka.clients.admin.KafkaAdminClientTest > 
testTimeoutWithoutMetadata PASSED

org.apache.kafka.clients.admin.KafkaAdminClientTest > 
testListConsumerGroupsMetadataFailure STARTED

org.apache.kafka.clients.admin.KafkaAdminClientTest > 
testListConsumerGroupsMetadataFailure PASSED

org.apache.kafka.clients.admin.KafkaAdminClientTest > 
testDescribeConsumerGroups STARTED


[jira] [Resolved] (KAFKA-7029) ReplicaVerificationTool should not use the deprecated SimpleConsumer

2018-06-12 Thread Ismael Juma (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-7029?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-7029.

Resolution: Fixed

> ReplicaVerificationTool should not use the deprecated SimpleConsumer
> 
>
> Key: KAFKA-7029
> URL: https://issues.apache.org/jira/browse/KAFKA-7029
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>Assignee: Manikumar
>Priority: Major
> Fix For: 2.0.0
>
>
> Unless there's a reason not to, the simplest would be to use KafkaConsumer.





Build failed in Jenkins: kafka-trunk-jdk10 #194

2018-06-12 Thread Apache Jenkins Server
See 


Changes:

[ismael] MINOR: Use SL4J string interpolation instead of string concatenation

[lindong28] MINOR: Remove deprecated per-partition lag metrics

--
[...truncated 1.10 MB...]
org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testUserCredentialsUnavailableForScramMechanism PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testUnauthenticatedApiVersionsRequestOverSslHandshakeVersion0 STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testUnauthenticatedApiVersionsRequestOverSslHandshakeVersion0 PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testUnauthenticatedApiVersionsRequestOverSslHandshakeVersion1 STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testUnauthenticatedApiVersionsRequestOverSslHandshakeVersion1 PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMultipleServerMechanisms STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMultipleServerMechanisms PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testValidSaslPlainOverPlaintext STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testValidSaslPlainOverPlaintext PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testValidSaslPlainOverSsl STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testValidSaslPlainOverSsl PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testInvalidApiVersionsRequestSequence STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testInvalidApiVersionsRequestSequence PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testDisallowedKafkaRequestsBeforeAuthentication STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testDisallowedKafkaRequestsBeforeAuthentication PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testClientLoginOverride STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testClientLoginOverride PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testClientDynamicJaasConfiguration STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testClientDynamicJaasConfiguration PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainPlaintextServerWithoutSaslAuthenticateHeader STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainPlaintextServerWithoutSaslAuthenticateHeader PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainSslServerWithoutSaslAuthenticateHeader STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainSslServerWithoutSaslAuthenticateHeader PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testServerAuthenticateCallbackHandler STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testServerAuthenticateCallbackHandler PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testInvalidPasswordSaslPlain STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testInvalidPasswordSaslPlain PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testInvalidPasswordSaslScram STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testInvalidPasswordSaslScram PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainSslServerWithoutSaslAuthenticateHeaderFailure STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainSslServerWithoutSaslAuthenticateHeaderFailure PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testServerDynamicJaasConfiguration STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testServerDynamicJaasConfiguration PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testInvalidSaslPacket STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testInvalidSaslPacket PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testSaslHandshakeRequestWithUnsupportedVersion STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testSaslHandshakeRequestWithUnsupportedVersion PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testJaasConfigurationForListener STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 

access to create KIP

2018-06-12 Thread Chia-Ping Tsai
dear Kafka,

Please give me permission to create a KIP. The email is "chia7...@gmail.com" 
and the account is "chia7712".

Best Regards,
chia-ping


[DISCUSS] Release Plan for 1.1.1

2018-06-12 Thread Dong Lin
Hi all,

I would like to start the process for a 1.1.1 bug fix release. 1.1.0 was
released Mar 28, 2018; about 2.5 months have passed and 25 bug fixes have
accumulated so far.

A few of the more important fixes that have been merged in 1.1 branch so
far:

KAFKA-6925 - Fix memory leak in StreamsMetricsThreadImpl
KAFKA-6937 - In-sync replica delayed during fetch if replica throttle is exceeded
KAFKA-6917 - Process txn completion asynchronously to avoid deadlock
KAFKA-6893 - Create processors before starting acceptor to avoid ArithmeticException
KAFKA-6870 - Fix ConcurrentModificationException in SampledStat
KAFKA-6878 - Fix NullPointerException when querying global state store
KAFKA-6879 - Invoke session init callbacks outside lock to avoid Controller deadlock
KAFKA-6857 - Prevent follower from truncating to the wrong offset if undefined leader epoch is requested
KAFKA-6854 - Log cleaner fails with transaction markers that are deleted during clean
KAFKA-6747 - Check whether there is in-flight transaction before aborting transaction
KAFKA-6748 - Double check before scheduling a new task after the punctuate call
KAFKA-6739 - Fix IllegalArgumentException when down-converting from V2 to V0/V1
KAFKA-6728 - Fix NullPointerException when instantiating the HeaderConverter

https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.1.1 has
been created to track the release progress for 1.1.1. There is currently
only one blocking issue. It would be great if everyone could help identify
any other issues that should block 1.1.1 and mark those JIRA tickets as
blockers.

After Jun 18, once all blocking issues have been addressed, we will prepare
RC0 for 1.1.1 release and initiate the voting thread. We probably want to
start the voting thread by Jun 22.

Cheers,
Dong


Build failed in Jenkins: kafka-trunk-jdk10 #193

2018-06-12 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H34 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:862)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1129)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1160)
at hudson.scm.SCM.checkout(SCM.java:495)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1202)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git config 
remote.origin.url https://github.com/apache/kafka.git" returned status code 4:
stdout: 
stderr: error: failed to write new configuration file 


at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1996)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1964)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommandIn(CliGitAPIImpl.java:1960)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1597)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.launchCommand(CliGitAPIImpl.java:1609)
at 
org.jenkinsci.plugins.gitclient.CliGitAPIImpl.setRemoteUrl(CliGitAPIImpl.java:1243)
at hudson.plugins.git.GitAPI.setRemoteUrl(GitAPI.java:160)
at sun.reflect.GeneratedMethodAccessor58.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.perform(RemoteInvocationHandler.java:922)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:896)
at 
hudson.remoting.RemoteInvocationHandler$RPCRequest.call(RemoteInvocationHandler.java:853)
at hudson.remoting.UserRequest.perform(UserRequest.java:207)
at hudson.remoting.UserRequest.perform(UserRequest.java:53)
at hudson.remoting.Request$2.run(Request.java:358)
at 
hudson.remoting.InterceptingExecutorService$1.call(InterceptingExecutorService.java:72)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:748)
Suppressed: hudson.remoting.Channel$CallSiteStackTrace: Remote call to 
H34
at 
hudson.remoting.Channel.attachCallSiteStackTrace(Channel.java:1693)
at hudson.remoting.UserResponse.retrieve(UserRequest.java:310)
at hudson.remoting.Channel.call(Channel.java:908)
at 
hudson.remoting.RemoteInvocationHandler.invoke(RemoteInvocationHandler.java:281)
at com.sun.proxy.$Proxy109.setRemoteUrl(Unknown Source)
at 
org.jenkinsci.plugins.gitclient.RemoteGitImpl.setRemoteUrl(RemoteGitImpl.java:295)
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:850)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1129)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1160)
at hudson.scm.SCM.checkout(SCM.java:495)
at 
hudson.model.AbstractProject.checkout(AbstractProject.java:1202)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at 
jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1724)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at
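
Status code 4 from `git config` together with "error: failed to write new
configuration file" usually points at the build agent's workspace rather than
the repository: git rewrites `.git/config` via a temporary file plus rename, so
the containing directory must be writable by the build user and the filesystem
must have free space. A minimal diagnostic sketch along those lines (the
`$WORKSPACE` path here is a hypothetical stand-in; the real Jenkins workspace
path is truncated in the log above):

```shell
# Hedged sketch: check the two usual causes of "failed to write new
# configuration file" on a build agent. WORKSPACE is a demo stand-in
# for the real (truncated) Jenkins workspace path.
WORKSPACE="${WORKSPACE:-/tmp/kafka-ws-demo}"
mkdir -p "$WORKSPACE/.git"

# 1. git writes .git/config via a temp file + rename, so the .git
#    directory itself must be writable by the user running the build:
if [ -w "$WORKSPACE/.git" ]; then
    echo "git dir writable"
else
    echo "git dir NOT writable"
fi

# 2. A full filesystem produces the same git error; report free space
#    on whatever filesystem holds the workspace:
df -P "$WORKSPACE" | awk 'NR==2 {print "free KB: " $4}'
```

On a healthy agent this prints "git dir writable" and a nonzero free-space
figure; either check failing would explain the fetch aborting before any
commits were retrieved.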