Re: kafka-pr-jdk9-scala2.12 keeps failing

2017-11-08 Thread Ted Yu
I used JDK 9 build 181 on Linux and the build passed.

I added --debug --stacktrace to the ./gradlew command line.
Let's see whether we get more information if compileScala fails again.

On Wed, Nov 8, 2017 at 5:46 PM, Guozhang Wang  wrote:

> We saw the same error again in later builds:
> https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/2498/console
>
> Does anything get reverted?
>
>
> Guozhang
>
> [earlier quoted messages trimmed; the full thread appears below]
>
> --
> -- Guozhang
>


Build failed in Jenkins: kafka-trunk-jdk7 #2956

2017-11-08 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-6146; minimize the number of triggers enqueuing

--
[...truncated 381.61 KB...]

kafka.server.ServerGenerateBrokerIdTest > testDisableGeneratedBrokerId STARTED

kafka.server.ServerGenerateBrokerIdTest > testDisableGeneratedBrokerId PASSED

kafka.server.ServerGenerateBrokerIdTest > testUserConfigAndGeneratedBrokerId 
STARTED

kafka.server.ServerGenerateBrokerIdTest > testUserConfigAndGeneratedBrokerId 
PASSED

kafka.server.ServerGenerateBrokerIdTest > 
testConsistentBrokerIdFromUserConfigAndMetaProps STARTED

kafka.server.ServerGenerateBrokerIdTest > 
testConsistentBrokerIdFromUserConfigAndMetaProps PASSED

kafka.server.DelayedOperationTest > testRequestPurge STARTED

kafka.server.DelayedOperationTest > testRequestPurge PASSED

kafka.server.DelayedOperationTest > testRequestExpiry STARTED

kafka.server.DelayedOperationTest > testRequestExpiry PASSED

kafka.server.DelayedOperationTest > 
shouldReturnNilOperationsOnCancelForKeyWhenKeyDoesntExist STARTED

kafka.server.DelayedOperationTest > 
shouldReturnNilOperationsOnCancelForKeyWhenKeyDoesntExist PASSED

kafka.server.DelayedOperationTest > testDelayedOperationLockOverride STARTED

kafka.server.DelayedOperationTest > testDelayedOperationLockOverride PASSED

kafka.server.DelayedOperationTest > 
shouldCancelForKeyReturningCancelledOperations STARTED

kafka.server.DelayedOperationTest > 
shouldCancelForKeyReturningCancelledOperations PASSED

kafka.server.DelayedOperationTest > testRequestSatisfaction STARTED

kafka.server.DelayedOperationTest > testRequestSatisfaction PASSED

kafka.server.DelayedOperationTest > testDelayedOperationLock STARTED

kafka.server.DelayedOperationTest > testDelayedOperationLock PASSED

kafka.server.MultipleListenersWithDefaultJaasContextTest > testProduceConsume 
STARTED

kafka.server.MultipleListenersWithDefaultJaasContextTest > testProduceConsume 
PASSED

kafka.server.ThrottledResponseExpirationTest > testThrottledRequest STARTED

kafka.server.ThrottledResponseExpirationTest > testThrottledRequest PASSED

kafka.server.ThrottledResponseExpirationTest > testExpire STARTED

kafka.server.ThrottledResponseExpirationTest > testExpire PASSED

kafka.server.KafkaApisTest > 
shouldRespondWithUnsupportedForMessageFormatOnHandleWriteTxnMarkersWhenMagicLowerThanRequired
 STARTED

kafka.server.KafkaApisTest > 
shouldRespondWithUnsupportedForMessageFormatOnHandleWriteTxnMarkersWhenMagicLowerThanRequired
 PASSED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleTxnOffsetCommitRequestWhenInterBrokerProtocolNotSupported
 STARTED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleTxnOffsetCommitRequestWhenInterBrokerProtocolNotSupported
 PASSED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleAddPartitionsToTxnRequestWhenInterBrokerProtocolNotSupported
 STARTED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleAddPartitionsToTxnRequestWhenInterBrokerProtocolNotSupported
 PASSED

kafka.server.KafkaApisTest > testReadUncommittedConsumerListOffsetLatest STARTED

kafka.server.KafkaApisTest > testReadUncommittedConsumerListOffsetLatest PASSED

kafka.server.KafkaApisTest > 
shouldAppendToLogOnWriteTxnMarkersWhenCorrectMagicVersion STARTED

kafka.server.KafkaApisTest > 
shouldAppendToLogOnWriteTxnMarkersWhenCorrectMagicVersion PASSED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleWriteTxnMarkersRequestWhenInterBrokerProtocolNotSupported
 STARTED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleWriteTxnMarkersRequestWhenInterBrokerProtocolNotSupported
 PASSED

kafka.server.KafkaApisTest > 
shouldRespondWithUnknownTopicWhenPartitionIsNotHosted STARTED

kafka.server.KafkaApisTest > 
shouldRespondWithUnknownTopicWhenPartitionIsNotHosted PASSED

kafka.server.KafkaApisTest > 
testReadCommittedConsumerListOffsetEarliestOffsetEqualsLastStableOffset STARTED

kafka.server.KafkaApisTest > 
testReadCommittedConsumerListOffsetEarliestOffsetEqualsLastStableOffset PASSED

kafka.server.KafkaApisTest > testReadCommittedConsumerListOffsetLatest STARTED

kafka.server.KafkaApisTest > testReadCommittedConsumerListOffsetLatest PASSED

kafka.server.KafkaApisTest > 
testReadCommittedConsumerListOffsetLimitedAtLastStableOffset STARTED

kafka.server.KafkaApisTest > 
testReadCommittedConsumerListOffsetLimitedAtLastStableOffset PASSED

kafka.server.KafkaApisTest > 
testReadUncommittedConsumerListOffsetEarliestOffsetEqualsHighWatermark STARTED

kafka.server.KafkaApisTest > 
testReadUncommittedConsumerListOffsetEarliestOffsetEqualsHighWatermark PASSED

kafka.server.KafkaApisTest > 
testReadUncommittedConsumerListOffsetLimitedAtHighWatermark STARTED

kafka.server.KafkaApisTest > 
testReadUncommittedConsumerListOffsetLimitedAtHighWatermark PASSED

kafka.server.KafkaApisTest >

Build failed in Jenkins: kafka-trunk-jdk8 #2197

2017-11-08 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-6146; minimize the number of triggers enqueuing

--
[...truncated 1.39 MB...]
org.apache.kafka.common.security.plain.PlainSaslServerTest > 
noAuthorizationIdSpecified STARTED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
noAuthorizationIdSpecified PASSED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
authorizatonIdEqualsAuthenticationId STARTED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
authorizatonIdEqualsAuthenticationId PASSED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
authorizatonIdNotEqualsAuthenticationId STARTED

org.apache.kafka.common.security.plain.PlainSaslServerTest > 
authorizatonIdNotEqualsAuthenticationId PASSED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testProducerWithInvalidCredentials STARTED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testProducerWithInvalidCredentials PASSED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testTransactionalProducerWithInvalidCredentials STARTED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testTransactionalProducerWithInvalidCredentials PASSED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testConsumerWithInvalidCredentials STARTED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testConsumerWithInvalidCredentials PASSED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testAdminClientWithInvalidCredentials STARTED

org.apache.kafka.common.security.authenticator.ClientAuthenticationFailureTest 
> testAdminClientWithInvalidCredentials PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMissingUsernameSaslPlain STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMissingUsernameSaslPlain PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testValidSaslScramMechanisms STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testValidSaslScramMechanisms PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslScramSslServerWithoutSaslAuthenticateHeaderFailure STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslScramSslServerWithoutSaslAuthenticateHeaderFailure PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslScramPlaintextServerWithoutSaslAuthenticateHeaderFailure STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslScramPlaintextServerWithoutSaslAuthenticateHeaderFailure PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslScramPlaintextServerWithoutSaslAuthenticateHeader STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslScramPlaintextServerWithoutSaslAuthenticateHeader PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMechanismPluggability STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMechanismPluggability PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testScramUsernameWithSpecialCharacters STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testScramUsernameWithSpecialCharacters PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testApiVersionsRequestWithUnsupportedVersion STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testApiVersionsRequestWithUnsupportedVersion PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMissingPasswordSaslPlain STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testMissingPasswordSaslPlain PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testInvalidLoginModule STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
testInvalidLoginModule PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainPlaintextClientWithoutSaslAuthenticateHeader STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainPlaintextClientWithoutSaslAuthenticateHeader PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainSslClientWithoutSaslAuthenticateHeader STARTED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainSslClientWithoutSaslAuthenticateHeader PASSED

org.apache.kafka.common.security.authenticator.SaslAuthenticatorTest > 
oldSaslPlainSslClientWithoutSaslAuthenticateHeaderFailure STARTED

org.apache.kaf

[GitHub] kafka pull request #4198: MINOR: make controller helper methods private

2017-11-08 Thread onurkaraman
GitHub user onurkaraman opened a pull request:

https://github.com/apache/kafka/pull/4198

MINOR: make controller helper methods private



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/onurkaraman/kafka make-controller-helper-methods-private

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4198.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4198


commit 52981b1095b70ebdbc00b7ddc882414b029cdcd9
Author: Onur Karaman 
Date:   2017-11-09T01:53:01Z

MINOR: make controller helper methods private




---


Re: kafka-pr-jdk9-scala2.12 keeps failing

2017-11-08 Thread Ted Yu
This is a different error from the JDK 9.0.1 one.

What's the value of versions.jackson?
We may need to upgrade.

On Wed, Nov 8, 2017 at 5:46 PM, Guozhang Wang  wrote:

> We saw the same error again in later builds:
> https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/2498/console
>
> Does anything get reverted?
>
>
> Guozhang
>
>
> [earlier quoted messages trimmed; the full thread appears below]
>
> --
> -- Guozhang
>


Re: kafka-pr-jdk9-scala2.12 keeps failing

2017-11-08 Thread Guozhang Wang
We saw the same error again in later builds:
https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/2498/console

Does anything get reverted?


Guozhang


On Tue, Nov 7, 2017 at 8:33 AM, Ted Yu  wrote:

> https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/2470/ is green.
>
> Thanks Ismael.
>
> On Tue, Nov 7, 2017 at 3:46 AM, Ismael Juma  wrote:
>
> > I changed the Jenkins jobs to use Oracle JDK 9 instead of 9.0.1 until
> > INFRA-15448 is fixed.
> >
> > Ismael
> >
> > On Mon, Nov 6, 2017 at 6:25 PM, Ismael Juma  wrote:
> >
> > > Thanks!
> > >
> > > Ismael
> > >
> > > On Mon, Nov 6, 2017 at 3:48 AM, Ted Yu  wrote:
> > >
> > >> Logged https://issues.apache.org/jira/browse/INFRA-15448
> > >>
> > >> On Thu, Nov 2, 2017 at 11:39 PM, Ismael Juma 
> wrote:
> > >>
> > >> > This looks to be an issue in Jenkins, not in Kafka. Apache Infra
> > updated
> > >> > Java 9 to 9.0.1 and it seems to have broken some of the Jenkins
> code.
> > >> >
> > >> > Ismael
> > >> >
> > >> > On 3 Nov 2017 1:53 am, "Ted Yu"  wrote:
> > >> >
> > >> > > Looking at earlier runs, e.g.:
> > >> > > https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/2384/console
> > >> > >
> > >> > > FAILURE: Build failed with an exception.
> > >> > >
> > >> > > * What went wrong:
> > >> > > Could not determine java version from '9.0.1'.
> > >> > >
> > >> > > This was the first build with the 'out of range of int' exception:
> > >> > >
> > >> > > https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/2389/console
> > >> > >
> > >> > > However, I haven't found the commit which was at the tip of the repo
> > >> > > at that time.
> > >> > >
> > >> > > On Thu, Nov 2, 2017 at 6:40 PM, Guozhang Wang  wrote:
> > >> > >
> > >> > > > Noticed that as well, could we track down which git commit /
> > >> > > > version upgrade caused the issue?
> > >> > > >
> > >> > > > Guozhang
> > >> > > >
> > >> > > > On Thu, Nov 2, 2017 at 6:25 PM, Ted Yu  wrote:
> > >> > > >
> > >> > > > > Hi,
> > >> > > > > I took a look at recent runs under
> > >> > > > > https://builds.apache.org/job/kafka-pr-jdk9-scala2.12
> > >> > > > >
> > >> > > > > All the recent runs failed with:
> > >> > > > >
> > >> > > > > Could not update commit status of the Pull Request on GitHub.
> > >> > > > > org.kohsuke.github.HttpException: Server returned HTTP response code:
> > >> > > > > 201, message: 'Created' for URL:
> > >> > > > > https://api.github.com/repos/apache/kafka/statuses/3d96c6f5b2edd3c1dbea11dab003c4ac78ee141a
> > >> > > > > at org.kohsuke.github.Requester.parse(Requester.java:633)
> > >> > > > > at org.kohsuke.github.Requester.parse(Requester.java:594)
> > >> > > > > at org.kohsuke.github.Requester._to(Requester.java:272)
> > >> > > > > at org.kohsuke.github.Requester.to(Requester.java:234)
> > >> > > > > at org.kohsuke.github.GHRepository.createCommitStatus(GHRepository.java:1071)
> > >> > > > >
> > >> > > > > ...
> > >> > > > >
> > >> > > > > Caused by: com.fasterxml.jackson.databind.JsonMappingException:
> > >> > > > > Numeric value (4298492118) out of range of int
> > >> > > > >  at [Source: {"url":"https://api.github.com/repos/apache/kafka/statuses/3d96c6f5b2edd3c1dbea11dab003c4ac78ee141a","id":4298492118,"state":"pending","description":"Build started sha1 is merged.","target_url":"https://builds.apache.org/job/kafka-pr-jdk9-scala2.12/2397/","context":"JDK 9 and Scala 2.12",
> > >> > > > >
> > >> > > > >
> > >> > > > > Should we upgrade the version for jackson ?
> > >> > > > >
> > >> > > > >
> > >> > > > > Cheers
> > >> > > > >
> > >> > > >
> > >> > > >
> > >> > > >
> > >> > > > --
> > >> > > > -- Guozhang
> > >> > > >
> > >> > >
> > >> >
> > >>
> > >
> > >
> >
>



-- 
-- Guozhang
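
For readers hitting the same failure: the 'Numeric value (4298492118) out of range of int' error is a deserialization-target problem rather than a Jackson-version problem. GitHub's commit status ids grew past Integer.MAX_VALUE (2147483647), so a field typed as int can no longer hold them; the stack trace points at the org.kohsuke.github library used by the Jenkins plugin, not at Kafka. A minimal sketch of the failure mode, using hypothetical DTO classes rather than the plugin's actual types:

{code}
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;

public class StatusIdOverflow {
    static class StatusWithInt  { public int id;  }  // how the failing code models the field
    static class StatusWithLong { public long id; }  // the obvious fix

    public static void main(String[] args) throws Exception {
        ObjectMapper mapper = new ObjectMapper();
        String json = "{\"id\":4298492118}";

        try {
            mapper.readValue(json, StatusWithInt.class);
        } catch (JsonProcessingException e) {
            // e.g. "Numeric value (4298492118) out of range of int"
            System.out.println(e.getMessage());
        }

        // Widening the field to long parses the same payload fine,
        // whatever version versions.jackson points at.
        System.out.println(mapper.readValue(json, StatusWithLong.class).id);
    }
}
{code}

If this is the cause, bumping versions.jackson in Kafka would not change anything; the fix belongs in the library that declares the int field.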


Jenkins build is back to normal : kafka-trunk-jdk7 #2955

2017-11-08 Thread Apache Jenkins Server
See 



[jira] [Resolved] (KAFKA-6066) Use of SimpleDateFormat in RocksDBWindowStore may not be Threadsafe

2017-11-08 Thread Srikanth Sundarrajan (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Srikanth Sundarrajan resolved KAFKA-6066.
-
Resolution: Not A Problem

Thanks. Marking this as closed.

> Use of SimpleDateFormat in RocksDBWindowStore may not be Threadsafe
> ---
>
> Key: KAFKA-6066
> URL: https://issues.apache.org/jira/browse/KAFKA-6066
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Srikanth Sundarrajan
>Priority: Minor
>
> Currently SimpleDateFormat is used to construct the segmentId from the
> segmentName and vice versa. However, this may not be thread-safe if the
> WindowStore is accessed by more than one StreamTask/thread concurrently.
> Ref: *org.apache.kafka.streams.state.internals.RocksDBWindowStore#formatter*
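
The issue was closed as 'Not A Problem', but where a SimpleDateFormat genuinely is shared across threads, a common fix is one instance per thread via ThreadLocal. A minimal sketch (the pattern and class name below are illustrative, not the store's actual code):

{code}
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class SegmentFormatter {
    // SimpleDateFormat is mutable and not thread-safe; give each thread its own copy.
    private static final ThreadLocal<SimpleDateFormat> FORMATTER =
        ThreadLocal.withInitial(() -> new SimpleDateFormat("yyyyMMddHHmm"));

    public static String segmentName(long segmentId) {
        return FORMATTER.get().format(new Date(segmentId));
    }

    public static long segmentId(String segmentName) throws ParseException {
        return FORMATTER.get().parse(segmentName).getTime();
    }
}
{code}

(An immutable java.time.format.DateTimeFormatter would avoid the per-thread copies entirely.)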



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Build failed in Jenkins: kafka-trunk-jdk8 #2196

2017-11-08 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-6179: Clear min timestamp tracker upon partition queue cleanup

--
[...truncated 3.80 MB...]
org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testInconsistentConfigs STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testInconsistentConfigs PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinAssignment STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testJoinAssignment PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRebalance STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRebalance PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRebalanceFailedConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRebalanceFailedConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testHaltCleansUpWorker STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testHaltCleansUpWorker PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorNameConflictsWithWorkerGroupId STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorNameConflictsWithWorkerGroupId PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartUnknownConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartUnknownConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnectorRedirectToLeader STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnectorRedirectToLeader PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnectorRedirectToOwner STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnectorRedirectToOwner PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartUnknownTask STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartUnknownTask PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRequestProcessingOrder STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRequestProcessingOrder PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTaskRedirectToLeader STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTaskRedirectToLeader PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTaskRedirectToOwner STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartTaskRedirectToOwner PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigAdded STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigAdded PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigUpdate STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorConfigUpdate PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorPaused STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorPaused PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorResumed STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorResumed PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testUnknownConnectorPaused STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testUnknownConnectorPaused PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorPausedRunningTaskOnly STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testConnectorPausedRunningTaskOnly PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorFailedBasicValidation STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorFailedBasicValidation PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnector STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testRestartConnector PASSED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorFailedCustomValidation STARTED

org.apache.kafka.connect.runtime.distributed.DistributedHerderTest > 
testCreateConnectorFailedCustomValidation PASSED

org.apache.kafka.connect.runtime.distributed.

[jira] [Resolved] (KAFKA-6146) minimize the number of triggers enqueuing PreferredReplicaLeaderElection events

2017-11-08 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6146?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma resolved KAFKA-6146.

Resolution: Fixed

> minimize the number of triggers enqueuing PreferredReplicaLeaderElection 
> events
> ---
>
> Key: KAFKA-6146
> URL: https://issues.apache.org/jira/browse/KAFKA-6146
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 1.1.0
>Reporter: Jun Rao
>Assignee: Onur Karaman
> Fix For: 1.1.0
>
>
> We currently enqueue a PreferredReplicaLeaderElection controller event in
> PreferredReplicaElectionHandler's handleCreation, handleDeletion, and
> handleDataChange. We can instead enqueue the event only upon znode creation
> and after a preferred replica leader election completes. Processing this
> latter enqueue will re-register the exists watch on
> PreferredReplicaElectionZNode and perform any pending preferred replica
> leader election that may have been requested between completion and
> registration.
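
The real controller code is Scala; the Java sketch below uses made-up names purely to show the shape of the change described above:

{code}
import java.util.concurrent.BlockingQueue;

class PreferredReplicaLeaderElection { }  // stand-in for the controller event

class PreferredReplicaElectionHandler {
    private final BlockingQueue<PreferredReplicaLeaderElection> eventQueue;

    PreferredReplicaElectionHandler(BlockingQueue<PreferredReplicaLeaderElection> eventQueue) {
        this.eventQueue = eventQueue;
    }

    // Before: handleCreation, handleDeletion, and handleDataChange all
    // enqueued an election event. After: only znode creation does.
    void handleCreation()   { eventQueue.add(new PreferredReplicaLeaderElection()); }
    void handleDeletion()   { /* no longer enqueues */ }
    void handleDataChange() { /* no longer enqueues */ }

    // Invoked after an election completes; processing this event re-registers
    // the exists watch and picks up any election requested in the meantime.
    void onElectionComplete() { eventQueue.add(new PreferredReplicaLeaderElection()); }
}
{code}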



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [DISCUSS] KIP-219 - Improve Quota Communication

2017-11-08 Thread Becket Qin
Since we will bump up the wire request version, another option is for the
broker to keep the current behavior for clients sending old request versions.
For clients sending the new request versions, the broker can respond and then
mute the channel as described in the KIP wiki. In this case, muting the
channel is mostly for protection. A correctly implemented client should back
off for the throttle time before sending the next request. The downside is
that the broker needs to keep both code paths while gaining little benefit.
So personally I prefer to just mute the channel, but I am open to different
opinions.
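
For concreteness, a minimal sketch of the cooperative client behavior described above; `lastResponse.throttleTimeMs()` and `sendNext()` are assumed placeholder names, not the actual Kafka client internals:

{code}
// Honor the broker-reported throttle time before the next send. A client
// that does this never hits the broker-side mute; muting the channel then
// becomes pure protection against non-cooperative clients.
long throttleMs = lastResponse.throttleTimeMs(); // throttle_time_ms from the response
if (throttleMs > 0) {
    Thread.sleep(throttleMs); // or schedule the next send throttleMs later
}
sendNext();
{code}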

Thanks,

Jiangjie (Becket) Qin

On Mon, Nov 6, 2017 at 7:28 PM, Becket Qin  wrote:

> Hi Jun,
>
> Hmm, even if a connection is closed by the client while the channel is
> muted, it seems Selector.select() will detect this and close the socket
> once the channel is unmuted.
> It is true that before the channel is unmuted the socket will be in a
> CLOSE_WAIT state, though, so having an arbitrarily long muted duration may
> indeed cause problems.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On Mon, Nov 6, 2017 at 7:22 PM, Becket Qin  wrote:
>
>> Hi Rajini,
>>
>> Thanks for the detail explanation. Please see the reply below:
>>
>> 2. Limiting the throttle time to connection.max.idle.ms on the broker
>> side is probably fine. However, clients may have a different configuration
>> of connection.max.idle.ms and still reconnect before the throttle time
>> (which is based on the server-side connection.max.idle.ms). That seems
>> like another back door around the quota.
>>
>> 3. I agree we could just mute the server socket until
>> connection.max.idle.ms if the massive CLOSE_WAIT count is a big issue. This
>> helps guarantee that only connection_rate * connection.max.idle.ms sockets
>> will be in the CLOSE_WAIT state. For cooperative clients, unmuting the
>> socket will not have a negative impact.
>>
>> 4. My concern with capping the throttle time to metrics.window.ms is that
>> we will not be able to enforce the quota effectively. It might be useful to
>> explain this with the real example we are trying to solve. We have a
>> MapReduce job pushing data to a Kafka cluster. The MapReduce job has
>> hundreds of producers and each of them sends a normal-sized ProduceRequest
>> (~2 MB) to each of the brokers in the cluster. Apparently the client id
>> will run out of its bytes quota pretty quickly, and the broker starts to
>> throttle the producers. The throttle time can actually be pretty long
>> (e.g. a few minutes). At that point, request queue time on the brokers was
>> around 30 seconds. After that, a bunch of producers hit request.timeout.ms
>> and reconnected and sent the next request again, which caused another spike
>> and a longer queue.
>>
>> In the above case, unless we set the quota window to be pretty big, we
>> will not be able to enforce the quota. And if we set the window size to a
>> large value, the request might be throttled for longer than
>> connection.max.idle.ms.
>>
>> > We need a solution to improve flow control for well-behaved clients
>> > which currently rely entirely on broker's throttling. The KIP addresses
>> > this using co-operative clients that sleep for an unbounded throttle
>> time.
>> > I feel this is not ideal since the result is traffic with a lot of
>> spikes.
>> > Feedback from brokers to enable flow control in the client is a good
>> idea,
>> > but clients with excessive throttle times should really have been
>> > configured with smaller batch sizes.
>>
>> This is not really about a single producer with a large batch size; it is
>> a lot of small producers talking to the cluster at the same time. Reducing
>> the batch size does not help much here. Also note that after the traffic
>> spike at the very beginning, the throttle times of the ProduceRequests
>> processed later actually keep increasing (for example, the first
>> throttled request will be throttled for 1 second, the second throttled
>> request for 10 seconds, etc.). Due to this throttle time variation, if
>> every producer honors the throttle time, there will not be another spike
>> after the first produce.
>>
>> > We need a solution to enforce smaller quotas to protect the broker
>> > from misbehaving clients. The KIP addresses this by muting channels for
>> an
>> > unbounded time. This introduces problems of channels in CLOSE_WAIT. And
>> > doesn't really solve all issues with misbehaving clients since new
>> > connections can be created to bypass quotas.
>>
>> Our current quota mechanism only works for cooperating clients, because it
>> really throttles the NEXT request after processing a request, even if that
>> request itself has already violated the quota. Misbehaving clients are not
>> restrained at all by the current quota mechanism. Like you mentioned, a
>> connection quota is required. We have been discussing this at LinkedIn for
>> some time. Doing it right requires some major changes such as
>> partially reading a request to id

[GitHub] kafka pull request #4189: KAFKA-6146: minimize the number of triggers enqueu...

2017-11-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4189


---


Build failed in Jenkins: kafka-1.0-jdk7 #74

2017-11-08 Thread Apache Jenkins Server
See 


Changes:

[ismael] KAFKA-6156; Metric tag values with colons must be sanitized

--
[...truncated 23.93 KB...]
(topicPartition, new ListOffsetResponse.PartitionData(Errors.NONE, 
offsets.map(new JLong(_)).asJava))
 ^
:679:
 constructor PartitionData in class PartitionData is deprecated: see 
corresponding Javadoc for more information.
  (topicPartition, new 
ListOffsetResponse.PartitionData(Errors.forException(e), List[JLong]().asJava))
   ^
:682:
 constructor PartitionData in class PartitionData is deprecated: see 
corresponding Javadoc for more information.
  (topicPartition, new 
ListOffsetResponse.PartitionData(Errors.forException(e), List[JLong]().asJava))
   ^
:1009:
 class ZKGroupTopicDirs in package utils is deprecated: This class has been 
deprecated and will be removed in a future release.
  val topicDirs = new ZKGroupTopicDirs(offsetFetchRequest.groupId, 
topicPartition.topic)
  ^
:122:
 object ConsumerConfig in package consumer is deprecated: This object has been 
deprecated and will be removed in a future release. Please use 
org.apache.kafka.clients.consumer.ConsumerConfig instead.
  val ReplicaSocketTimeoutMs = ConsumerConfig.SocketTimeout
   ^
:123:
 object ConsumerConfig in package consumer is deprecated: This object has been 
deprecated and will be removed in a future release. Please use 
org.apache.kafka.clients.consumer.ConsumerConfig instead.
  val ReplicaSocketReceiveBufferBytes = ConsumerConfig.SocketBufferSize
^
:124:
 object ConsumerConfig in package consumer is deprecated: This object has been 
deprecated and will be removed in a future release. Please use 
org.apache.kafka.clients.consumer.ConsumerConfig instead.
  val ReplicaFetchMaxBytes = ConsumerConfig.FetchSize
 ^
:210:
 value DEFAULT_SASL_ENABLED_MECHANISMS in object SaslConfigs is deprecated: see 
corresponding Javadoc for more information.
  val SaslEnabledMechanisms = SaslConfigs.DEFAULT_SASL_ENABLED_MECHANISMS
  ^
:217:
 class PartitionData in object ListOffsetRequest is deprecated: see 
corresponding Javadoc for more information.
val partitions = Map(topicPartition -> new 
ListOffsetRequest.PartitionData(earliestOrLatest, 1))
 ^
:228:
 value offsets in class PartitionData is deprecated: see corresponding Javadoc 
for more information.
  partitionData.offsets.get(0)
^
:72:
 class OldConsumer in package consumer is deprecated: This class has been 
deprecated and will be removed in a future release. Please use 
org.apache.kafka.clients.consumer.KafkaConsumer instead.
new OldConsumer(conf.filterSpec, props)
^
:76:
 class NewShinyConsumer in package consumer is deprecated: This class has been 
deprecated and will be removed in a future release. Please use 
org.apache.kafka.clients.consumer.KafkaConsumer instead.
  new NewShinyConsumer(Option(conf.topicArg), conf.partitionArg, 
Option(conf.offsetArg), None, getNewConsumerProps(conf), timeoutMs)
  ^
:78:
 class NewShinyConsumer in package consumer is deprecated: This class has been 
deprecated and will be removed in a future release. Please use 
org.apache.kafka.clients.consumer.KafkaConsumer instead.
  new NewShinyConsumer(Option(conf.topicArg), None, None, 
Option(conf.whitelistArg), getNewConsumerProps(conf), timeoutMs)
 

Build failed in Jenkins: kafka-trunk-jdk7 #2954

2017-11-08 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-6179: Clear min timestamp tracker upon partition queue cleanup

--
[...truncated 381.93 KB...]

kafka.server.ServerGenerateBrokerIdTest > testDisableGeneratedBrokerId STARTED

kafka.server.ServerGenerateBrokerIdTest > testDisableGeneratedBrokerId PASSED

kafka.server.ServerGenerateBrokerIdTest > testUserConfigAndGeneratedBrokerId 
STARTED

kafka.server.ServerGenerateBrokerIdTest > testUserConfigAndGeneratedBrokerId 
PASSED

kafka.server.ServerGenerateBrokerIdTest > 
testConsistentBrokerIdFromUserConfigAndMetaProps STARTED

kafka.server.ServerGenerateBrokerIdTest > 
testConsistentBrokerIdFromUserConfigAndMetaProps PASSED

kafka.server.DelayedOperationTest > testRequestPurge STARTED

kafka.server.DelayedOperationTest > testRequestPurge PASSED

kafka.server.DelayedOperationTest > testRequestExpiry STARTED

kafka.server.DelayedOperationTest > testRequestExpiry PASSED

kafka.server.DelayedOperationTest > 
shouldReturnNilOperationsOnCancelForKeyWhenKeyDoesntExist STARTED

kafka.server.DelayedOperationTest > 
shouldReturnNilOperationsOnCancelForKeyWhenKeyDoesntExist PASSED

kafka.server.DelayedOperationTest > testDelayedOperationLockOverride STARTED

kafka.server.DelayedOperationTest > testDelayedOperationLockOverride PASSED

kafka.server.DelayedOperationTest > 
shouldCancelForKeyReturningCancelledOperations STARTED

kafka.server.DelayedOperationTest > 
shouldCancelForKeyReturningCancelledOperations PASSED

kafka.server.DelayedOperationTest > testRequestSatisfaction STARTED

kafka.server.DelayedOperationTest > testRequestSatisfaction PASSED

kafka.server.DelayedOperationTest > testDelayedOperationLock STARTED

kafka.server.DelayedOperationTest > testDelayedOperationLock PASSED

kafka.server.MultipleListenersWithDefaultJaasContextTest > testProduceConsume 
STARTED

kafka.server.MultipleListenersWithDefaultJaasContextTest > testProduceConsume 
PASSED

kafka.server.ThrottledResponseExpirationTest > testThrottledRequest STARTED

kafka.server.ThrottledResponseExpirationTest > testThrottledRequest PASSED

kafka.server.ThrottledResponseExpirationTest > testExpire STARTED

kafka.server.ThrottledResponseExpirationTest > testExpire PASSED

kafka.server.KafkaApisTest > 
shouldRespondWithUnsupportedForMessageFormatOnHandleWriteTxnMarkersWhenMagicLowerThanRequired
 STARTED

kafka.server.KafkaApisTest > 
shouldRespondWithUnsupportedForMessageFormatOnHandleWriteTxnMarkersWhenMagicLowerThanRequired
 PASSED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleTxnOffsetCommitRequestWhenInterBrokerProtocolNotSupported
 STARTED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleTxnOffsetCommitRequestWhenInterBrokerProtocolNotSupported
 PASSED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleAddPartitionsToTxnRequestWhenInterBrokerProtocolNotSupported
 STARTED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleAddPartitionsToTxnRequestWhenInterBrokerProtocolNotSupported
 PASSED

kafka.server.KafkaApisTest > testReadUncommittedConsumerListOffsetLatest STARTED

kafka.server.KafkaApisTest > testReadUncommittedConsumerListOffsetLatest PASSED

kafka.server.KafkaApisTest > 
shouldAppendToLogOnWriteTxnMarkersWhenCorrectMagicVersion STARTED

kafka.server.KafkaApisTest > 
shouldAppendToLogOnWriteTxnMarkersWhenCorrectMagicVersion PASSED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleWriteTxnMarkersRequestWhenInterBrokerProtocolNotSupported
 STARTED

kafka.server.KafkaApisTest > 
shouldThrowUnsupportedVersionExceptionOnHandleWriteTxnMarkersRequestWhenInterBrokerProtocolNotSupported
 PASSED

kafka.server.KafkaApisTest > 
shouldRespondWithUnknownTopicWhenPartitionIsNotHosted STARTED

kafka.server.KafkaApisTest > 
shouldRespondWithUnknownTopicWhenPartitionIsNotHosted PASSED

kafka.server.KafkaApisTest > 
testReadCommittedConsumerListOffsetEarliestOffsetEqualsLastStableOffset STARTED

kafka.server.KafkaApisTest > 
testReadCommittedConsumerListOffsetEarliestOffsetEqualsLastStableOffset PASSED

kafka.server.KafkaApisTest > testReadCommittedConsumerListOffsetLatest STARTED

kafka.server.KafkaApisTest > testReadCommittedConsumerListOffsetLatest PASSED

kafka.server.KafkaApisTest > 
testReadCommittedConsumerListOffsetLimitedAtLastStableOffset STARTED

kafka.server.KafkaApisTest > 
testReadCommittedConsumerListOffsetLimitedAtLastStableOffset PASSED

kafka.server.KafkaApisTest > 
testReadUncommittedConsumerListOffsetEarliestOffsetEqualsHighWatermark STARTED

kafka.server.KafkaApisTest > 
testReadUncommittedConsumerListOffsetEarliestOffsetEqualsHighWatermark PASSED

kafka.server.KafkaApisTest > 
testReadUncommittedConsumerListOffsetLimitedAtHighWatermark STARTED

kafka.server.KafkaApisTest > 
testReadUncommittedConsumerListOffsetLimitedAtHighWatermark PASSED

kafka.serve

[GitHub] kafka pull request #4197: KAFKA-6190 GlobalKTable never finishes restoring w...

2017-11-08 Thread alexjg
GitHub user alexjg opened a pull request:

https://github.com/apache/kafka/pull/4197

KAFKA-6190 GlobalKTable never finishes restoring when consuming 
transactional messages

Calculate offset using consumer.position() in GlobalStateManagerImpl#restoreState

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/alexjg/kafka 0.11.0

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4197.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4197


commit d96aac2de211ee4592e795b07d059dd4dea20f95
Author: Alex Good 
Date:   2017-11-08T23:26:09Z

Calculate offset using consumer.position() in 
GlobalStateManagerImpl#restoreState




---


[jira] [Created] (KAFKA-6190) GlobalKTable never finishes restoring when consuming transactional messages

2017-11-08 Thread Alex Good (JIRA)
Alex Good created KAFKA-6190:


 Summary: GlobalKTable never finishes restoring when consuming 
transactional messages
 Key: KAFKA-6190
 URL: https://issues.apache.org/jira/browse/KAFKA-6190
 Project: Kafka
  Issue Type: Bug
  Components: clients
 Environment: Linux
Reporter: Alex Good


When creating a GlobalKTable from a topic that contains messages that were 
produced in a transaction, the GlobalStreamThread never finishes restoring the 
table. This appears to be because the `GlobalStateManagerImpl#restoreState` 
method fails to take the transaction markers into account in its offset 
calculation when reading messages, and so it never reaches the high watermark 
for the topic it is restoring.

To demonstrate the issue, produce a few messages in a transaction to a topic, 
then attempt to restore a GlobalKTable from that topic; the store will never 
finish restoring.
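
A simplified sketch of the restore loop and the fix proposed in PR #4197; `highWatermark` and the `restoreRecord` helper are assumed to be set up by the caller, and the real method lives in GlobalStateManagerImpl:

{code}
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.TopicPartition;

class RestoreSketch {
    // With transactional input, commit/abort markers occupy offsets but are
    // never returned by poll(), so tracking lastRecord.offset() + 1 stalls
    // below the high watermark forever. consumer.position() advances past
    // the marker offsets as well.
    static void restoreState(Consumer<byte[], byte[]> consumer,
                             TopicPartition partition,
                             long highWatermark) {
        long offset = consumer.position(partition);
        while (offset < highWatermark) {
            for (ConsumerRecord<byte[], byte[]> record : consumer.poll(100)) {
                restoreRecord(record);
            }
            // buggy version: offset = lastRecordOffset + 1;
            offset = consumer.position(partition); // the fix
        }
    }

    static void restoreRecord(ConsumerRecord<byte[], byte[]> record) {
        // apply the record to the state store (placeholder)
    }
}
{code}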



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-6179) RecordQueue.clear() does not clear MinTimestampTracker's maintained list

2017-11-08 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-6179.
--
    Resolution: Fixed
    Fix Version/s: 0.11.0.2, 1.1.0, 1.0.1

Issue resolved by pull request 4186
[https://github.com/apache/kafka/pull/4186]

> RecordQueue.clear() does not clear MinTimestampTracker's maintained list
> 
>
> Key: KAFKA-6179
> URL: https://issues.apache.org/jira/browse/KAFKA-6179
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.2.1, 0.11.0.1, 1.0.0
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 1.0.1, 1.1.0, 0.11.0.2
>
>
> When a stream task is being suspended, {{RecordQueue.clear()}} clears the 
> {{ArrayDeque fifoQueue}} but does not clear the {{MinTimestampTracker}}'s 
> maintained list. As a result, if the task gets resumed we are left with an 
> empty {{fifoQueue}} but a populated {{tracker}}. And since we use reference 
> equality to check whether the smallest-timestamp record can be popped, we 
> would never be able to pop any more records, effectively leading to a 
> memory leak.
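
A tiny model of the leak and the fix (illustrative names only, not the actual RecordQueue internals):

{code}
import java.util.ArrayDeque;

class RecordQueueModel {
    private final ArrayDeque<Object> fifoQueue = new ArrayDeque<>();
    // stands in for the MinTimestampTracker's maintained list
    private final ArrayDeque<Object> trackerList = new ArrayDeque<>();

    void addRecord(Object record) {
        fifoQueue.addLast(record);
        trackerList.addLast(record);
    }

    void clear() {
        fifoQueue.clear();
        trackerList.clear(); // the fix: previously only fifoQueue was cleared,
                             // leaving stale references the queue could never pop
    }
}
{code}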



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[GitHub] kafka pull request #4186: KAFKA-6179: Clear min timestamp tracker upon parti...

2017-11-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4186


---


Re: Can someone update this KIP's status to "accepted"?

2017-11-08 Thread Dong Lin
Thanks. I have updated it to be accepted.

On Wed, Nov 8, 2017 at 1:33 PM, Jeff Widman  wrote:

> https://cwiki.apache.org/confluence/display/KAFKA/KIP-164-+Add+
> UnderMinIsrPartitionCount+and+per-partition+UnderMinIsr+metrics
>
> the JIRA shows it was shipped in 1.0, but the wiki page lists it as
> "Discussion"
>
> --
>
> *Jeff Widman*
> jeffwidman.com  | 740-WIDMAN-J (943-6265)
> <><
>


Can someone update this KIP's status to "accepted"?

2017-11-08 Thread Jeff Widman
https://cwiki.apache.org/confluence/display/KAFKA/KIP-164-+Add+UnderMinIsrPartitionCount+and+per-partition+UnderMinIsr+metrics

the JIRA shows it was shipped in 1.0, but the wiki page lists it as
"Discussion"

-- 

*Jeff Widman*
jeffwidman.com  | 740-WIDMAN-J (943-6265)
<><


[GitHub] kafka pull request #4196: MINOR: KafkaZkClient refactor. Use match instead o...

2017-11-08 Thread mimaison
GitHub user mimaison opened a pull request:

https://github.com/apache/kafka/pull/4196

MINOR: KafkaZkClient refactor. Use match instead of if/else chains

Follow up from https://github.com/apache/kafka/pull/4111

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mimaison/kafka zkclient_refactor

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4196.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4196


commit a570d29587eae543c3dd3069965c6fe73fc9f347
Author: Mickael Maison 
Date:   2017-11-08T20:33:07Z

MINOR: Refactored KafkaZkClient. Use match instead of if/else chains

commit c81f3c567324c90ae14b1ac23c8cec5169d42f4c
Author: Mickael Maison 
Date:   2017-11-08T20:56:59Z

Refactor exception cases




---


Jenkins build is back to normal : kafka-trunk-jdk7 #2953

2017-11-08 Thread Apache Jenkins Server
See 




[GitHub] kafka pull request #4195: KAFKA-5811: Add Kibosh integration for Trogdor and...

2017-11-08 Thread cmccabe
GitHub user cmccabe opened a pull request:

https://github.com/apache/kafka/pull/4195

KAFKA-5811: Add Kibosh integration for Trogdor and Ducktape

For ducktape: add Kibosh to the testing Dockerfile.
Create files_unreadable_fault_spec.py.

For trogdor: create FilesUnreadableFaultSpec.java.
Add a unit test of using the Kibosh service.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/cmccabe/kafka KAFKA-5811

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4195.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4195


commit 52b8f6da4be28ab5b39321e45380902166d16eb7
Author: Colin P. Mccabe 
Date:   2017-09-05T15:59:32Z

KAFKA-5811: Add Kibosh integration for Trogdor and Ducktape

For ducktape: add Kibosh to the testing Dockerfile.
Create files_unreadable_fault_spec.py.

For trogdor: create FilesUnreadableFaultSpec.java.
Add a unit test of using the Kibosh service.




---


[GitHub] kafka pull request #4194: KAFKA-5646: Use KafkaZkClient in AdminUtils and Dy...

2017-11-08 Thread omkreddy
GitHub user omkreddy opened a pull request:

https://github.com/apache/kafka/pull/4194

KAFKA-5646:  Use KafkaZkClient in AdminUtils and DynamicConfigManager

Use KafkaZkClient in ConfigCommand, ReassignPartitionsCommand, TopicCommand

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/omkreddy/kafka KAFKA-5646-ZK-ADMIN-UTILS-DYNAMIC-MANAGER

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4194.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4194


commit debc997f7cf0e2ddf119d3a50b7183e9737c2f1b
Author: Manikumar Reddy 
Date:   2017-11-08T17:52:35Z

KAFKA-5646: 1) Use KafkaZkClient in AdminUtils and  DynamicConfigManager
2) Use KafkaZkClient in ConfigCommand, ReassignPartitionsCommand, 
TopicCommand

commit 30e1e333adfda31b0f38d2755d6a8fe270d6f926
Author: Manikumar Reddy 
Date:   2017-11-08T17:55:36Z

 Update test classes to use KafkaZkClient




---


[GitHub] kafka pull request #4155: KAFKA-5645: Use async ZookeeperClient in SimpleAcl...

2017-11-08 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/4155


---


Re: [VOTE] KIP-210: Provide for custom error handling when Kafka Streams fails to produce

2017-11-08 Thread Damian Guy
+1 (binding)

On Sat, 4 Nov 2017 at 16:50 Matthias J. Sax  wrote:

> Yes. A KIP needs 3 binding "+1" to be accepted.
>
> You can still work on the PR and get it ready to get merged -- I am
> quite confident that this KIP will be accepted :)
>
>
> -Matthias
>
> On 11/4/17 3:56 PM, Matt Farmer wrote:
> > Bump! I believe I need two more binding +1's to proceed?
> >
> > On Thu, Nov 2, 2017 at 11:49 AM Ted Yu  wrote:
> >
> >> +1
> >>
> >> On Wed, Nov 1, 2017 at 4:50 PM, Guozhang Wang 
> wrote:
> >>
> >>> +1 (binding) from me. Thanks!
> >>>
> >>> On Wed, Nov 1, 2017 at 4:50 PM, Guozhang Wang 
> >> wrote:
> >>>
>  The vote should stay open for at least 72 hours. The bylaws can be
> >> found
>  here https://cwiki.apache.org/confluence/display/KAFKA/Bylaws
> 
>  On Wed, Nov 1, 2017 at 8:09 AM, Matt Farmer  wrote:
> 
> > Hello all,
> >
> > It seems like discussion around KIP-210 has gone to a lull. I've got
> >>> some
> > candidate work underway for it already, so I'd like to go ahead and
> >> call
> > it
> > to a vote.
> >
> > For reference, the KIP can be found here:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-210+-+
> >
> Provide+for+custom+error+handling++when+Kafka+Streams+fails+to+produce
> >
> > Also, how long to vote threads stay open generally before changing
> the
> > status of the KIP?
> >
> > Cheers,
> > Matt
> >
> 
> 
> 
>  --
>  -- Guozhang
> 
> >>>
> >>>
> >>>
> >>> --
> >>> -- Guozhang
> >>>
> >>
> >
>
>


[GitHub] kafka pull request #4193: KAFKA-6185: Remove channels from explictlyMutedCha...

2017-11-08 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/4193

KAFKA-6185: Remove channels from explictlyMutedChannels set when closed



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-6185-oom

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4193.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4193






---


[jira] [Created] (KAFKA-6189) Losing messages while getting OFFSET_OUT_OF_RANGE error in consumer

2017-11-08 Thread Andrey (JIRA)
Andrey created KAFKA-6189:
-

 Summary: Losing messages while getting OFFSET_OUT_OF_RANGE error in consumer
 Key: KAFKA-6189
 URL: https://issues.apache.org/jira/browse/KAFKA-6189
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.11.0.0
Reporter: Andrey
 Attachments: kafkaLossingMessages.png

Steps to reproduce:
* Test setup:
** the producer sends messages constantly; if the cluster is not available, it retries
** the consumer is polling
** the topic has 3 partitions and replication factor 3
** min.insync.replicas=2
** the producer has "acks=all"
** the consumer has the default "auto.offset.reset=latest"
** the consumer manually calls commitSync for offsets after handling messages
** the Kafka cluster has 3 brokers
* Kill broker 0
* In the consumer's logs:
{code}
2017-11-08 11:36:33,967 INFO  
org.apache.kafka.clients.consumer.internals.Fetcher   - Fetch offset 
10706682 is out of range for partition mytopic-2, resetting offset 
[kafka-consumer]
2017-11-08 11:36:33,968 INFO  
org.apache.kafka.clients.consumer.internals.Fetcher   - Fetch offset 
8024431 is out of range for partition mytopic-1, resetting offset 
[kafka-consumer]
2017-11-08 11:36:34,045 INFO  
org.apache.kafka.clients.consumer.internals.Fetcher   - Fetch offset 
8029505 is out of range for partition mytopic-0, resetting offset 
[kafka-consumer]
{code}

After that, the consumer lost several messages on each partition.

Expected:
* return the upper bound of the range
* the consumer should resume from that offset instead of applying "auto.offset.reset"

Workaround:
* set "auto.offset.reset=earliest"
* you get a lot of duplicate messages instead of lost ones

This looks like what is happening during recovery from the broker failure:
 !kafkaLossingMessages.png|thumbnail! 
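
For anyone applying the workaround, the relevant consumer settings look like this (at-least-once delivery: duplicates instead of loss; the bootstrap servers and group id below are placeholders):

{code}
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;

public class WorkaroundConfig {
    public static Properties consumerProps() {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG,
                  "broker1:9092,broker2:9092,broker3:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");
        // commit manually with consumer.commitSync() only after messages are handled
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        // reset to the earliest available offset instead of the default "latest",
        // trading duplicates for message loss on OFFSET_OUT_OF_RANGE
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        return props;
    }
}
{code}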



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6188) Broker fails with FATAL Shutdown - log dirs have failed

2017-11-08 Thread Valentina Baljak (JIRA)
Valentina Baljak created KAFKA-6188:
---

 Summary: Broker fails with FATAL Shutdown - log dirs have failed
 Key: KAFKA-6188
 URL: https://issues.apache.org/jira/browse/KAFKA-6188
 Project: Kafka
  Issue Type: Bug
  Components: clients, log
Affects Versions: 1.0.0
 Environment: Windows 10
Reporter: Valentina Baljak
Priority: Blocker


Just started with version 1.0.0 after 4-5 months of using 0.10.2.1. The test 
environment is very simple, with only one producer and one consumer. Initially 
everything started fine and standalone tests worked as expected. However, when 
running my code, the Kafka clients fail after approximately 10 minutes, and 
Kafka won't start after that; it fails with the same error. 

Deleting the logs allows it to start again, but then the same problem recurs.

Here is the error traceback:

{code}
[2017-11-08 08:21:57,532] INFO Starting log cleanup with a period of 30 
ms. (kafka.log.LogManager)
[2017-11-08 08:21:57,548] INFO Starting log flusher with a default period of 
9223372036854775807 ms. (kafka.log.LogManager)
[2017-11-08 08:21:57,798] INFO Awaiting socket connections on 0.0.0.0:9092. 
(kafka.network.Acceptor)
[2017-11-08 08:21:57,813] INFO [SocketServer brokerId=0] Started 1 acceptor 
threads (kafka.network.SocketServer)
[2017-11-08 08:21:57,829] INFO [ExpirationReaper-0-Produce]: Starting 
(kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2017-11-08 08:21:57,845] INFO [ExpirationReaper-0-DeleteRecords]: Starting 
(kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2017-11-08 08:21:57,845] INFO [ExpirationReaper-0-Fetch]: Starting 
(kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2017-11-08 08:21:57,845] INFO [LogDirFailureHandler]: Starting 
(kafka.server.ReplicaManager$LogDirFailureHandler)
[2017-11-08 08:21:57,860] INFO [ReplicaManager broker=0] Stopping serving 
replicas in dir C:\Kafka\kafka_2.12-1.0.0\kafka-logs 
(kafka.server.ReplicaManager)
[2017-11-08 08:21:57,860] INFO [ReplicaManager broker=0] Partitions  are 
offline due to failure on log directory C:\Kafka\kafka_2.12-1.0.0\kafka-logs 
(kafka.server.ReplicaManager)
[2017-11-08 08:21:57,860] INFO [ReplicaFetcherManager on broker 0] Removed 
fetcher for partitions  (kafka.server.ReplicaFetcherManager)
[2017-11-08 08:21:57,892] INFO [ReplicaManager broker=0] Broker 0 stopped 
fetcher for partitions  because they are in the failed log dir 
C:\Kafka\kafka_2.12-1.0.0\kafka-logs (kafka.server.ReplicaManager)
[2017-11-08 08:21:57,892] INFO Stopping serving logs in dir 
C:\Kafka\kafka_2.12-1.0.0\kafka-logs (kafka.log.LogManager)
[2017-11-08 08:21:57,892] FATAL Shutdown broker because all log dirs in 
C:\Kafka\kafka_2.12-1.0.0\kafka-logs have failed (kafka.log.LogManager)
{code}




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6187) Remove the Logging class in favor of LazyLogging in scala-logging

2017-11-08 Thread Viktor Somogyi (JIRA)
Viktor Somogyi created KAFKA-6187:
-

 Summary: Remove the Logging class in favor of LazyLogging in 
scala-logging
 Key: KAFKA-6187
 URL: https://issues.apache.org/jira/browse/KAFKA-6187
 Project: Kafka
  Issue Type: Task
  Components: core
Reporter: Viktor Somogyi
Assignee: Viktor Somogyi


In KAFKA-1044 we removed the hard dependency on junit and enabled users to 
exclude it in their environment without causing any problems. We also agreed to 
remove the kafka.utils.Logging class, as it can be made redundant by LazyLogging 
in scala-logging.
In this JIRA we will get rid of Logging by replacing its remaining 
functionality with other features.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Resolved] (KAFKA-6186) RocksDB based WindowStore fails to create db file on Windows OS

2017-11-08 Thread Damian Guy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6186?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Damian Guy resolved KAFKA-6186.
---
Resolution: Duplicate

> RocksDB based WindowStore fails to create db file on Windows OS
> --
>
> Key: KAFKA-6186
> URL: https://issues.apache.org/jira/browse/KAFKA-6186
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 1.0.0
> Environment: Windows OS
>Reporter: James Wu
>
> Code snippet like the one below:
> ...
> textLines.flatMapValues(value -> Arrays.asList(pattern.split(value.toLowerCase())))
> .groupBy((key, word) -> word)
> .windowedBy(TimeWindows.of(1)).count(Materialized.as("Counts"));
> ...
> Run it on Windows, then the exception is thrown as below:
> Caused by: org.rocksdb.RocksDBException: Failed to create dir: 
> F:\tmp\kafka-streams\wordcount-lambda-example\1_0\Counts\Counts:151009920:
>  Invalid argument
>   at org.rocksdb.RocksDB.open(Native Method) ~[rocksdbjni-5.7.3.jar:na]
>   at org.rocksdb.RocksDB.open(RocksDB.java:231) ~[rocksdbjni-5.7.3.jar:na]
>   at 
> org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:197)
>  ~[kafka-streams-1.0.0.jar:na]
>   ... 29 common frames omitted
> Checking the code, I found the issue is caused by line 72 in 
> org.apache.kafka.streams.state.internals.Segments:
> String segmentName(final long segmentId) {
> // previous format used - as a separator so if this changes in the future
> // then we should use something different.
> return name + ":" + segmentId * segmentInterval;
> }
> "segmentName" is passed to RocksDB, which uses it as the file name when 
> creating the DB file; as is known, ":" cannot be part of a file name on 
> Windows.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (KAFKA-6186) RocksDB based WindowStore fails to create db file on Windows OS

2017-11-08 Thread James Wu (JIRA)
James Wu created KAFKA-6186:
---

 Summary: RocksDB based WindowStore fails to create db file on 
Windows OS
 Key: KAFKA-6186
 URL: https://issues.apache.org/jira/browse/KAFKA-6186
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 1.0.0
 Environment: Windows OS
Reporter: James Wu


Code snippet like the one below:

...
textLines.flatMapValues(value -> Arrays.asList(pattern.split(value.toLowerCase())))
.groupBy((key, word) -> word)
.windowedBy(TimeWindows.of(1)).count(Materialized.as("Counts"));
...

Run it on Windows, then the exception is thrown as below:

Caused by: org.rocksdb.RocksDBException: Failed to create dir: 
F:\tmp\kafka-streams\wordcount-lambda-example\1_0\Counts\Counts:151009920: 
Invalid argument
at org.rocksdb.RocksDB.open(Native Method) ~[rocksdbjni-5.7.3.jar:na]
at org.rocksdb.RocksDB.open(RocksDB.java:231) ~[rocksdbjni-5.7.3.jar:na]
at 
org.apache.kafka.streams.state.internals.RocksDBStore.openDB(RocksDBStore.java:197)
 ~[kafka-streams-1.0.0.jar:na]
... 29 common frames omitted

Checking the code, I found the issue is caused by line 72 in 
org.apache.kafka.streams.state.internals.Segments:

String segmentName(final long segmentId) {
// previous format used - as a separator so if this changes in the 
future
// then we should use something different.
return name + ":" + segmentId * segmentInterval;
}

"segmentName" is passed to RocksDB, RockDB will use it as file name to create 
the DB file, as we known, the ":" cannot be part of file name in Windows OS.
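
As an illustration only, and not necessarily how the project will resolve 
this, a hypothetical variant could pick a separator that is legal in Windows 
file names while still avoiding the previous "-" separator mentioned in the 
comment:

{code}
// Hypothetical sketch: use "." instead of ":" as the segment-name separator,
// since ":" is not a legal file-name character on Windows. The fields "name"
// and "segmentInterval" are assumed from the surrounding Segments class.
String segmentName(final long segmentId) {
    return name + "." + segmentId * segmentInterval;
}
{code}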



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


Re: [ANNOUNCE] New committer: Onur Karaman

2017-11-08 Thread Sandeep Nemuri
Congratulations Onur!!

On Wed, Nov 8, 2017 at 9:19 AM, UMESH CHAUDHARY  wrote:

> Congratulations Onur!
>
> On Tue, 7 Nov 2017 at 21:44 Jun Rao  wrote:
>
> > Affan,
> >
> > All known problems in the controller are described in the doc linked from
> > https://issues.apache.org/jira/browse/KAFKA-5027.
> >
> > Thanks,
> >
> > Jun
> >
> > On Mon, Nov 6, 2017 at 11:00 PM, Affan Syed  wrote:
> >
> > > Congrats Onur,
> > >
> > > Can you also share the document where all known problems are listed; I
> > > am assuming these bugs are still valid for the current stable release.
> > >
> > > Affan
> > >
> > > - Affan
> > >
> > > On Mon, Nov 6, 2017 at 10:24 PM, Jun Rao  wrote:
> > >
> > > > Hi, everyone,
> > > >
> > > > The PMC of Apache Kafka is pleased to announce a new Kafka committer
> > Onur
> > > > Karaman.
> > > >
> > > > Onur's most significant work is the improvement of the Kafka
> > > > controller, which is the brain of a Kafka cluster. Over time, we have
> > > > accumulated quite a few correctness and performance issues in the
> > > > controller. There have been attempts to fix controller issues in
> > > > isolation, which would make the code base more complicated without a
> > > > clear path to solving all problems. Onur is the one who took a
> > > > holistic approach, by first documenting all known issues, writing down
> > > > a new design, coming up with a plan to deliver the changes in phases,
> > > > and executing on it. At this point, Onur has completed the two most
> > > > important phases: making the controller single-threaded and changing
> > > > the controller to use the async ZK API. The former fixed multiple
> > > > deadlocks and race conditions. The latter significantly improved the
> > > > performance when there are many partitions. Experimental results show
> > > > that Onur's work reduced the controlled shutdown time by a factor of
> > > > 100 and the controller failover time by a factor of 3.
> > > >
> > > > Congratulations, Onur!
> > > >
> > > > Thanks,
> > > >
> > > > Jun (on behalf of the Apache Kafka PMC)
> > > >
> > >
> >
>



-- 
*  Regards*
*  Sandeep Nemuri*


[GitHub] kafka pull request #4192: MINOR: Remove unnecessary batch iteration in FileR...

2017-11-08 Thread ijuma
GitHub user ijuma opened a pull request:

https://github.com/apache/kafka/pull/4192

MINOR: Remove unnecessary batch iteration in FileRecords.downConvert



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ijuma/kafka 
avoid-unnecessary-batch-iteration-in-down-convert

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4192.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4192


commit 7d1f1d1741079b0f5a255a91c3a17c101049bec2
Author: Ismael Juma 
Date:   2017-11-08T08:55:35Z

MINOR: Remove unnecessary batch iteration in FileRecords.downConvert




---


[GitHub] kafka pull request #4191: KAFKA-6184: report a metric of the lag between the...

2017-11-08 Thread huxihx
GitHub user huxihx opened a pull request:

https://github.com/apache/kafka/pull/4191

KAFKA-6184: report a metric of the lag between the consumer offset ...

Add `records-lead` and partition-level 
`{topic}-{partition}.records-lead-min|avg` for fetcher metrics.

@junrao  Please kindly review. Thanks.
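
For context, a hedged sketch of how such metrics could be inspected through 
the public KafkaConsumer#metrics() API; the "records-lead" names are taken 
from this PR's description and may differ in the merged patch:

{code}
// Hedged sketch: print any lead-related fetcher metrics from a consumer.
import java.util.Map;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.Metric;
import org.apache.kafka.common.MetricName;

public class LeadMetricsSketch {
    // The "records-lead" metric names are assumptions based on this PR's description.
    static void printLeadMetrics(KafkaConsumer<?, ?> consumer) {
        for (Map.Entry<MetricName, ? extends Metric> entry : consumer.metrics().entrySet()) {
            if (entry.getKey().name().contains("records-lead")) {
                System.out.println(entry.getKey() + " = " + entry.getValue().value());
            }
        }
    }
}
{code}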

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/huxihx/kafka KAFKA-6184

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/4191.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #4191


commit c51f4221905e7601526c320922ab7d9e2061a4e4
Author: huxihx 
Date:   2017-11-08T08:21:17Z

KAFKA-6184: report a metric of the lag between the consumer offset and the 
start offset of the log

Add `records-lead` and partition-level 
`{topic}-{partition}.records-lead-min|avg` for fetcher metrics.




---