Re: [DISCUSS] KIP-50 - Enhance Authorizer interface to be aware of supported Principal Types

2016-04-20 Thread Ismael Juma
Hi Jay,

Thanks for summarising the reasoning for the current approach. On the topic
of additional jars, the obvious example that came up recently is sharing
JSON serializers between Connect and Streams. Given the desire not to add a
Jackson dependency to clients, it seems like adding a kafka-serializer-json
(or something like that) may be needed. This is similar to the
kafka-log4j-appender jar that we have today.
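
As a concrete illustration, a Jackson-backed serializer for such a module
could be as small as the sketch below. The package and class names are
illustrative only (nothing has been agreed); the Serializer interface is the
existing one from kafka-clients:

    package org.apache.kafka.serializer.json; // illustrative, not an agreed name

    import java.util.Map;

    import com.fasterxml.jackson.databind.ObjectMapper;
    import org.apache.kafka.common.errors.SerializationException;
    import org.apache.kafka.common.serialization.Serializer;

    public class JsonSerializer implements Serializer<Object> {
        // This ObjectMapper is exactly the Jackson dependency we want to keep
        // out of kafka-clients itself.
        private final ObjectMapper mapper = new ObjectMapper();

        @Override
        public void configure(Map<String, ?> configs, boolean isKey) {}

        @Override
        public byte[] serialize(String topic, Object data) {
            try {
                return data == null ? null : mapper.writeValueAsBytes(data);
            } catch (Exception e) {
                throw new SerializationException("Error serializing JSON message", e);
            }
        }

        @Override
        public void close() {}
    }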

When you look at it this way, the situation is not as clear-cut as
initially described. Perhaps a way to explain this is that we only add
additional modules when they introduce a new dependency.

Finally, it seems a bit weird to add something to `common` that is, in
fact, not common. Would it not make sense to have a separate package for
pluggable core/server classes (because they are pluggable we want them to
be in Java and not to be associated with a particular Scala version)?

Ismael

On Wed, Apr 20, 2016 at 4:52 PM, Jay Kreps  wrote:

> Yeah our take when we came up with this approach was pretty much what Gwen
> is saying:
> 1. In practice you either need the server or client to do anything and the
> server depends on the client so bundling common and client doesn't hurt.
> 2. Our experience with more granular jars (not in Kafka) was that although
> it feels "cleaner" the complexity comes quickly for a few reasons. First it
> gets hard to detangle the more granular packages (e.g. somebody needs to
> use something in Utils in the authorizer package and then you no longer
> have a dag). Second people end up mixing and matching in ways you didn't
> anticipate which causes crazy heisenbugs (e.g. they depend on two different
> versions of the client via transitive dependencies and somehow end up with
> client version x and common version y due to duplicate entries on the class
> path).
>
> I'm not really arguing that this approach is superior, I'm just saying this
> is the current approach and that is the reason we went with it.
>
> So I could see splitting common and client and you could even further split
> the producer and consumer and multiple sub-jars in common, and if this was
> the approach I think a separate authorizer jar would make sense. But in the
> current approach I think the authorizer stuff would be most consistent as a
> public package in common. It is true that this means you build against more
> stuff than needed but I'm not sure this has any negative implications in
> practice.
>
> -Jay
>
> On Wed, Apr 20, 2016 at 4:17 PM, Gwen Shapira  wrote:
>
> > But it's just a compile-time dependency, right?
> > Since the third-party-authorizer-implementation will be installed on a
> > broker where the common classes will exist anyway.
> >
> >
> > On Wed, Apr 20, 2016 at 3:13 PM, Ashish Singh 
> wrote:
> > > Jay,
> > >
> > > Thanks for the info. I think having common in clients jar makes sense,
> as
> > > there is no direct usage of it, i.e., without depending on or using
> > > clients. Authorizer is a bit different, as third party implementations
> do
> > > not really need anything from clients or server, all they need is
> > > Authorizer interface and related classes. If we move authorizer into
> > > common, then third party implementations will have to depend on
> clients.
> > > Though third party implementations depending on clients is not a big
> > > problem (right now they depend on core), I think it is cleaner to have
> > > dependency on minimal modules. Would you agree?
> > >
> > > On Wed, Apr 20, 2016 at 2:27 PM, Jay Kreps  wrote:
> > >
> > >> I think it's great that we're moving the interface to java and fixing
> > some
> > >> of the naming foibles.
> > >>
> > >> This isn't explicit in the KIP which just refers to the java package
> > name
> > >> (I think), but it looks like you are proposing adding a new authorizer
> > jar
> > >> for this new package and adding it as a dependency for the client jar.
> > This
> > >> is a bit inconsistent with how we decided to package stuff when we
> > started
> > >> with the new clients so it'd be good to work that out.
> > >>
> > >> To date the categorization has been:
> > >> 1. Anything which is just in the clients is in org.apache.clients
> under
> > >> clients/
> > >> 2. Anything which is in the server is kafka.* which is under core/
> > >> 3. Anything which is needed in both places (as it sounds like some
> enums
> > >> for authorization are?) is in common which is under clients/
> > >>
> > >> org.apache.clients and org.apache.common are both pure java and
> > dependency
> > >> free other than the compression libraries and slf4j and are packaged
> > into
> > >> the kafka-clients jar; the server has its own jar which has richer
> > >> dependencies and depends on the client jar.
> > >>
> > >> There are other ways this could have been done--e.g. common could have
> > been
> > >> its own library or even split into multiple sub-libraries--but the
> > decision
> 

Build failed in Jenkins: kafka-trunk-jdk8 #541

2016-04-20 Thread Apache Jenkins Server
See 

Changes:

[me] MINOR: Fix comment in DistributedHerder

[cshapi] KAFKA-3548: Use root locale for case transformation of constant strings

--
[...truncated 2499 lines...]

kafka.api.SaslPlaintextConsumerTest > testUnsubscribeTopic PASSED

kafka.api.SaslPlaintextConsumerTest > testListTopics PASSED

kafka.api.SaslPlaintextConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.SaslPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.SaslPlaintextConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.SaslPlaintextConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithCreateTime PASSED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithLogAppendTime PASSED

kafka.api.SslProducerSendTest > testClose PASSED

kafka.api.SslProducerSendTest > testFlush PASSED

kafka.api.SslProducerSendTest > testSendToPartition PASSED

kafka.api.SslProducerSendTest > testSendOffset PASSED

kafka.api.SslProducerSendTest > testAutoCreateTopic PASSED

kafka.api.SslProducerSendTest > testSendWithInvalidCreateTime PASSED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithCreateTime PASSED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread PASSED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread PASSED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithLogApendTime PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCreatePermissionNeededToReadFromNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testListOfsetsWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicRead PASSED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCreatePermissionNeededForWritingToNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testAuthorization PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead PASSED

kafka.api.SaslSslEndToEndAuthorizationTest > testNoConsumeAcl PASSED

kafka.api.SaslSslEndToEndAuthorizationTest > testProduceConsume PASSED

kafka.api.SaslSslEndToEndAuthorizationTest > testNoProduceAcl PASSED

kafka.api.SaslSslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.PlaintextProducerSendTest > testSerializerConstructors PASSED

kafka.api.PlaintextProducerSendTest > testWrongSerializer PASSED

kafka.api.PlaintextProducerSendTest > testSendNonCompressedMessageWithCreateTime PASSED

kafka.api.PlaintextProducerSendTest > testSendCompressedMessageWithLogAppendTime PASSED

kafka.api.PlaintextProducerSendTest > testClose PASSED

kafka.api.PlaintextProducerSendTest > testFlush PASSED

kafka.api.PlaintextProducerSendTest > testSendToPartition PASSED

kafka.api.PlaintextProducerSendTest > testSendOffset PASSED

kafka.api.PlaintextProducerSendTest > testAutoCreateTopic PASSED

kafka.api.PlaintextProducerSendTest > testSendWithInvalidCreateTime PASSED

kafka.api.PlaintextProducerSendTest > testSendCompressedMessageWithCreateTime PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromCallerThread PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromSenderThread PASSED

kafka.api.PlaintextProducerSendTest > testSendNonCompressedMessageWithLogApendTime PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForAutoCreate PASSED

kafka.api.PlaintextConsumerTest > 

Re: [DISCUSS] KIP-50 - Enhance Authorizer interface to be aware of supported Principal Types

2016-04-20 Thread Gwen Shapira
Thanks!

On Wed, Apr 20, 2016 at 8:00 PM, Ashish Singh  wrote:
> OK, in that case we can move the authorizer interface and related classes
> to existing org.apache.kafka.common.security.auth. I have updated KIP to
> reflect this.
>
>
> On Wed, Apr 20, 2016 at 4:52 PM, Jay Kreps  wrote:
>
>> Yeah our take when we came up with this approach was pretty much what Gwen
>> is saying:
>> 1. In practice you either need the server or client to do anything and the
>> server depends on the client so bundling common and client doesn't hurt.
>> 2. Our experience with more granular jars (not in Kafka) was that although
>> it feels "cleaner" the complexity comes quickly for a few reasons. First it
>> gets hard to detangle the more granular packages (e.g. somebody needs to
>> use something in Utils in the authorizer package and then you no longer
>> have a dag). Second people end up mixing and matching in ways you didn't
>> anticipate which causes crazy heisenbugs (e.g. they depend on two different
>> versions of the client via transitive dependencies and somehow end up with
>> client version x and common version y due to duplicate entries on the class
>> path).
>>
>> I'm not really arguing that this approach is superior, I'm just saying this
>> is the current approach and that is the reason we went with it.
>>
>> So I could see splitting common and client and you could even further split
>> the producer and consumer and multiple sub-jars in common, and if this was
>> the approach I think a separate authorizer jar would make sense. But in the
>> current approach I think the authorizer stuff would be most consistent as a
>> public package in common. It is true that this means you build against more
>> stuff than needed but I'm not sure this has any negative implications in
>> practice.
>>
>> -Jay
>>
>> On Wed, Apr 20, 2016 at 4:17 PM, Gwen Shapira  wrote:
>>
>> > But it's just a compile-time dependency, right?
>> > Since the third-party-authorizer-implementation will be installed on a
>> > broker where the common classes will exist anyway.
>> >
>> >
>> > On Wed, Apr 20, 2016 at 3:13 PM, Ashish Singh 
>> wrote:
>> > > Jay,
>> > >
>> > > Thanks for the info. I think having common in clients jar makes sense,
>> as
>> > > there is no direct usage of it, i.e., without depending on or using
>> > > clients. Authorizer is a bit different, as third party implementations
>> do
>> > > not really need anything from clients or server, all they need is
>> > > Authorizer interface and related classes. If we move authorizer into
>> > > common, then third party implementations will have to depend on
>> clients.
>> > > Though third party implementations depending on clients is not a big
>> > > problem (right now they depend on core), I think it is cleaner to have
>> > > dependency on minimal modules. Would you agree?
>> > >
>> > > On Wed, Apr 20, 2016 at 2:27 PM, Jay Kreps  wrote:
>> > >
>> > >> I think it's great that we're moving the interface to java and fixing
>> > some
>> > >> of the naming foibles.
>> > >>
>> > >> This isn't explicit in the KIP which just refers to the java package
>> > name
>> > >> (I think), but it looks like you are proposing adding a new authorizer
>> > jar
>> > >> for this new package and adding it as a dependency for the client jar.
>> > This
>> > >> is a bit inconsistent with how we decided to package stuff when we
>> > started
>> > >> with the new clients so it'd be good to work that out.
>> > >>
>> > >> To date the categorization has been:
>> > >> 1. Anything which is just in the clients is in org.apache.clients
>> under
>> > >> clients/
>> > >> 2. Anything which is in the server is kafka.* which is under core/
>> > >> 3. Anything which is needed in both places (as it sounds like some
>> enums
>> > >> for authorization are?) is in common which is under clients/
>> > >>
>> > >> org.apache.clients and org.apache.common are both pure java and
>> > dependency
>> > >> free other than the compression libraries and slf4j and are packaged
>> > into
>> > >> the kafka-clients jar; the server has its own jar which has richer
>> > >> dependencies and depends on the client jar.
>> > >>
>> > >> There are other ways this could have been done--e.g. common could have
>> > been
>> > >> its own library or even split into multiple sub-libraries--but the
>> > decision
>> > >> at that time was just to keep it simple and hard to mess up. Based on
>> > the
>> > >> experience with the scala clients our plan was to be ultra-hostile to
>> > any
>> > >> added client dependencies.
>> > >>
>> > >> So I think if we're continuing this model we would put the shared
>> > >> authorizer code somewhere under
>> > >> clients/src/main/java/org/apache/kafka/common as with the other shared
>> > >> authorizer. If we're moving away from this model we should probably
>> > rethink
>> > >> things and be consistent with this, at the very least splitting up

Jenkins build is back to normal : kafka-trunk-jdk7 #1210

2016-04-20 Thread Apache Jenkins Server
See 



Re: [DISCUSS] KIP-50 - Enhance Authorizer interface to be aware of supported Principal Types

2016-04-20 Thread Ashish Singh
OK, in that case we can move the authorizer interface and related classes
to existing org.apache.kafka.common.security.auth. I have updated KIP to
reflect this.
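
To make the implication concrete: a third-party implementation would then
import from that package and nothing else. Since the KIP-50 Java interface was
still being finalized at this point, the skeleton below only sketches the
idea, with signatures loosely modeled on the existing Scala
kafka.security.auth trait; none of these names should be read as the final
API:

    package com.example.auth; // hypothetical third-party plugin

    // Stand-in for the proposed org.apache.kafka.common.security.auth.Authorizer;
    // the real interface shape was still under discussion in KIP-50.
    interface Authorizer {
        boolean authorize(String principal, String operation, String resource);
    }

    public class DenyAllAuthorizer implements Authorizer {
        @Override
        public boolean authorize(String principal, String operation, String resource) {
            return false; // a real implementation would evaluate configured ACLs
        }
    }

Because the broker already ships kafka-clients, such a plugin build can treat
that jar as a compile-time-only (provided) dependency, which is Gwen's point
above.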

On Wed, Apr 20, 2016 at 4:52 PM, Jay Kreps  wrote:

> Yeah our take when we came up with this approach was pretty much what Gwen
> is saying:
> 1. In practice you either need the server or client to do anything and the
> server depends on the client so bundling common and client doesn't hurt.
> 2. Our experience with more granular jars (not in Kafka) was that although
> it feels "cleaner" the complexity comes quickly for a few reasons. First it
> gets hard to detangle the more granular packages (e.g. somebody needs to
> use something in Utils in the authorizer package and then you no longer
> have a dag). Second people end up mixing and matching in ways you didn't
> anticipate which causes crazy heisenbugs (e.g. they depend on two different
> versions of the client via transitive dependencies and somehow end up with
> client version x and common version y due to duplicate entries on the class
> path).
>
> I'm not really arguing that this approach is superior, I'm just saying this
> is the current approach and that is the reason we went with it.
>
> So I could see splitting common and client and you could even further split
> the producer and consumer and multiple sub-jars in common, and if this was
> the approach I think a separate authorizer jar would make sense. But in the
> current approach I think the authorizer stuff would be most consistent as a
> public package in common. It is true that this means you build against more
> stuff than needed but I'm not sure this has any negative implications in
> practice.
>
> -Jay
>
> On Wed, Apr 20, 2016 at 4:17 PM, Gwen Shapira  wrote:
>
> > But it's just a compile-time dependency, right?
> > Since the third-party-authorizer-implementation will be installed on a
> > broker where the common classes will exist anyway.
> >
> >
> > On Wed, Apr 20, 2016 at 3:13 PM, Ashish Singh 
> wrote:
> > > Jay,
> > >
> > > Thanks for the info. I think having common in clients jar makes sense,
> as
> > > there is no direct usage of it, i.e., without depending on or using
> > > clients. Authorizer is a bit different, as third party implementations
> do
> > > not really need anything from clients or server, all they need is
> > > Authorizer interface and related classes. If we move authorizer into
> > > common, then third party implementations will have to depend on
> clients.
> > > Though third party implementations depending on clients is not a big
> > > problem (right now they depend on core), I think it is cleaner to have
> > > dependency on minimal modules. Would you agree?
> > >
> > > On Wed, Apr 20, 2016 at 2:27 PM, Jay Kreps  wrote:
> > >
> > >> I think it's great that we're moving the interface to java and fixing
> > some
> > >> of the naming foibles.
> > >>
> > >> This isn't explicit in the KIP which just refers to the java package
> > name
> > >> (I think), but it looks like you are proposing adding a new authorizer
> > jar
> > >> for this new package and adding it as a dependency for the client jar.
> > This
> > >> is a bit inconsistent with how we decided to package stuff when we
> > started
> > >> with the new clients so it'd be good to work that out.
> > >>
> > >> To date the categorization has been:
> > >> 1. Anything which is just in the clients is in org.apache.clients
> under
> > >> clients/
> > >> 2. Anything which is in the server is kafka.* which is under core/
> > >> 3. Anything which is needed in both places (as it sounds like some
> enums
> > >> for authorization are?) is in common which is under clients/
> > >>
> > >> org.apache.clients and org.apache.common are both pure java and
> > dependency
> > >> free other than the compression libraries and slf4j and are packaged
> > into
> > >> the kafka-clients jar; the server has its own jar which has richer
> > >> dependencies and depends on the client jar.
> > >>
> > >> There are other ways this could have been done--e.g. common could have
> > been
> > >> its own library or even split into multiple sub-libraries--but the
> > decision
> > >> at that time was just to keep it simple and hard to mess up. Based on
> > the
> > >> experience with the scala clients our plan was to be ultra-hostile to
> > any
> > >> added client dependencies.
> > >>
> > >> So I think if we're continuing this model we would put the shared
> > >> authorizer code somewhere under
> > >> clients/src/main/java/org/apache/kafka/common as with the other shared
> > >> authorizer. If we're moving away from this model we should probably
> > rethink
> > >> things and be consistent with this, at the very least splitting up
> > common
> > >> and clients.
> > >>
> > >> -Jay
> > >>
> > >> On Wed, Apr 20, 2016 at 1:04 PM, Ashish Singh 
> > wrote:
> > >>
> > >> > Jun/ Jay/ Gwen/ Harsha/ Ismael,
> 

Build failed in Jenkins: kafka-trunk-jdk7 #1209

2016-04-20 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3117: handle metadata updates during consumer rebalance

--
[...truncated 2425 lines...]

kafka.api.test.ProducerCompressionTest > testCompression[3] PASSED

kafka.api.SslConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.SslConsumerTest > testUnsubscribeTopic PASSED

kafka.api.SslConsumerTest > testListTopics PASSED

kafka.api.SslConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.SslConsumerTest > testSimpleConsumption PASSED

kafka.api.SslConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.SslConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.ProducerFailureHandlingTest > testCannotSendToInternalTopic PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckOne PASSED

kafka.api.ProducerFailureHandlingTest > testWrongBrokerList PASSED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicas PASSED

kafka.api.ProducerFailureHandlingTest > testNonExistentTopic PASSED

kafka.api.ProducerFailureHandlingTest > testInvalidPartition PASSED

kafka.api.ProducerFailureHandlingTest > testSendAfterClosed PASSED

kafka.api.ProducerFailureHandlingTest > testTooLargeRecordWithAckZero PASSED

kafka.api.ProducerFailureHandlingTest > testNotEnoughReplicasAfterBrokerShutdown PASSED

kafka.api.ProducerBounceTest > testBrokerFailure PASSED

kafka.api.SaslPlaintextConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.SaslPlaintextConsumerTest > testUnsubscribeTopic PASSED

kafka.api.SaslPlaintextConsumerTest > testListTopics PASSED

kafka.api.SaslPlaintextConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.SaslPlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.SaslPlaintextConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.SaslPlaintextConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithCreateTime PASSED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithLogAppendTime PASSED

kafka.api.SslProducerSendTest > testClose PASSED

kafka.api.SslProducerSendTest > testFlush PASSED

kafka.api.SslProducerSendTest > testSendToPartition PASSED

kafka.api.SslProducerSendTest > testSendOffset PASSED

kafka.api.SslProducerSendTest > testAutoCreateTopic PASSED

kafka.api.SslProducerSendTest > testSendWithInvalidCreateTime PASSED

kafka.api.SslProducerSendTest > testSendCompressedMessageWithCreateTime PASSED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromCallerThread PASSED

kafka.api.SslProducerSendTest > testCloseWithZeroTimeoutFromSenderThread PASSED

kafka.api.SslProducerSendTest > testSendNonCompressedMessageWithLogApendTime PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCreatePermissionNeededToReadFromNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testListOfsetsWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicRead PASSED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCreatePermissionNeededForWritingToNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testAuthorization PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead PASSED

kafka.api.SaslSslEndToEndAuthorizationTest > testNoConsumeAcl PASSED

kafka.api.SaslSslEndToEndAuthorizationTest > testProduceConsume PASSED

kafka.api.SaslSslEndToEndAuthorizationTest > testNoProduceAcl 

Build failed in Jenkins: kafka-trunk-jdk8 #540

2016-04-20 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-3117: handle metadata updates during consumer rebalance

--
[...truncated 2602 lines...]

kafka.api.PlaintextProducerSendTest > testClose PASSED

kafka.api.PlaintextProducerSendTest > testFlush PASSED

kafka.api.PlaintextProducerSendTest > testSendToPartition PASSED

kafka.api.PlaintextProducerSendTest > testSendOffset PASSED

kafka.api.PlaintextProducerSendTest > testAutoCreateTopic PASSED

kafka.api.PlaintextProducerSendTest > testSendWithInvalidCreateTime PASSED

kafka.api.PlaintextProducerSendTest > testSendCompressedMessageWithCreateTime PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromCallerThread PASSED

kafka.api.PlaintextProducerSendTest > testCloseWithZeroTimeoutFromSenderThread PASSED

kafka.api.PlaintextProducerSendTest > testSendNonCompressedMessageWithLogApendTime PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForAutoCreate PASSED

kafka.api.PlaintextConsumerTest > testShrinkingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testAsyncCommit PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnStopPolling PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForInvalidTopic PASSED

kafka.api.PlaintextConsumerTest > testSeek PASSED

kafka.api.PlaintextConsumerTest > testPositionAndCommit PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose PASSED

kafka.api.PlaintextConsumerTest > testFetchRecordTooLarge PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnClose PASSED

kafka.api.PlaintextConsumerTest > testExpandingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testInterceptors PASSED

kafka.api.PlaintextConsumerTest > testPatternUnsubscription PASSED

kafka.api.PlaintextConsumerTest > testGroupConsumption PASSED

kafka.api.PlaintextConsumerTest > testPartitionsFor PASSED

kafka.api.PlaintextConsumerTest > testInterceptorsWithWrongKeyValue PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPartitionPauseAndResume PASSED

kafka.api.PlaintextConsumerTest > testConsumeMessagesWithLogAppendTime PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnCloseAfterWakeup PASSED

kafka.api.PlaintextConsumerTest > testMaxPollRecords PASSED

kafka.api.PlaintextConsumerTest > testAutoOffsetReset PASSED

kafka.api.PlaintextConsumerTest > testFetchInvalidOffset PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitIntercept PASSED

kafka.api.PlaintextConsumerTest > testCommitMetadata PASSED

kafka.api.PlaintextConsumerTest > testRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPatternSubscription PASSED

kafka.api.PlaintextConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.PlaintextConsumerTest > testUnsubscribeTopic PASSED

kafka.api.PlaintextConsumerTest > testListTopics PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnRebalance PASSED

kafka.api.PlaintextConsumerTest > testSimpleConsumption PASSED

kafka.api.PlaintextConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.PlaintextConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.ApiUtilsTest > testShortStringNonASCII PASSED

kafka.api.ApiUtilsTest > testShortStringASCII PASSED

kafka.api.ConsumerBounceTest > testSeekAndCommitWithBrokerFailures PASSED

kafka.api.ConsumerBounceTest > testConsumptionWithBrokerFailures PASSED

kafka.api.QuotasTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.QuotasTest > testThrottledProducerConsumer PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testBasic PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testCompressionSetConsumption PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testLeaderSelectionForPartition PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testConsumerDecoder PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testConsumerRebalanceListener PASSED

kafka.consumer.ZookeeperConsumerConnectorTest > testCompression PASSED

kafka.consumer.TopicFilterTest > testWhitelists PASSED

kafka.consumer.TopicFilterTest > testWildcardTopicCountGetTopicCountMapEscapeJson PASSED

kafka.consumer.TopicFilterTest > testBlacklists PASSED

kafka.consumer.ConsumerIteratorTest > testConsumerIteratorDeduplicationDeepIterator PASSED

kafka.consumer.ConsumerIteratorTest > testConsumerIteratorDecodingFailure PASSED

kafka.consumer.MetricsTest > testMetricsReporterAfterDeletingTopic PASSED

kafka.consumer.MetricsTest > testMetricsLeak PASSED

kafka.consumer.PartitionAssignorTest > testRoundRobinPartitionAssignor PASSED

kafka.consumer.PartitionAssignorTest > testRangePartitionAssignor PASSED

kafka.network.SocketServerTest > testClientDisconnectionUpdatesRequestMetrics PASSED

kafka.network.SocketServerTest 

[jira] [Updated] (KAFKA-3548) Locale is not handled properly in kafka-consumer

2016-04-20 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3548?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-3548:

   Resolution: Fixed
Fix Version/s: (was: 0.10.0.0)
   0.10.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1220
[https://github.com/apache/kafka/pull/1220]

> Locale is not handled properly in kafka-consumer
> 
>
> Key: KAFKA-3548
> URL: https://issues.apache.org/jira/browse/KAFKA-3548
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1
>Reporter: Tanju Cataltepe
>Assignee: Rajini Sivaram
> Fix For: 0.10.1.0
>
>
> If the JVM local language is Turkish, which has different upper case for the 
> lower case letter i, the result is a runtime error caused by 
> org.apache.kafka.clients.consumer.OffsetResetStrategy. More specifically an 
> enum constant *EARLİEST* is generated which does not match *EARLIEST* (note 
> the _dotted capital i_).
> If the locale for the JVM is explicitly set to en_US, the example runs as 
> expected.
> A sample error log is below:
> {noformat}
> [akka://ReactiveKafka/user/$a] Failed to construct kafka consumer
> akka.actor.ActorInitializationException: exception during creation
> at akka.actor.ActorInitializationException$.apply(Actor.scala:172)
> at akka.actor.ActorCell.create(ActorCell.scala:606)
> at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:461)
> at akka.actor.ActorCell.systemInvoke(ActorCell.scala:483)
> at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:282)
> at akka.dispatch.Mailbox.run(Mailbox.scala:223)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka 
> consumer
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:648)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:542)
> at 
> com.softwaremill.react.kafka.ReactiveKafkaConsumer.consumer$lzycompute(ReactiveKafkaConsumer.scala:31)
> at 
> com.softwaremill.react.kafka.ReactiveKafkaConsumer.consumer(ReactiveKafkaConsumer.scala:30)
> at 
> com.softwaremill.react.kafka.KafkaActorPublisher.<init>(KafkaActorPublisher.scala:17)
> at 
> com.softwaremill.react.kafka.ReactiveKafka$$anonfun$consumerActorProps$1.apply(ReactiveKafka.scala:270)
> at 
> com.softwaremill.react.kafka.ReactiveKafka$$anonfun$consumerActorProps$1.apply(ReactiveKafka.scala:270)
> at 
> akka.actor.TypedCreatorFunctionConsumer.produce(IndirectActorProducer.scala:87)
> at akka.actor.Props.newActor(Props.scala:214)
> at akka.actor.ActorCell.newActor(ActorCell.scala:562)
> at akka.actor.ActorCell.create(ActorCell.scala:588)
> ... 7 more
> Caused by: java.lang.IllegalArgumentException: No enum constant 
> org.apache.kafka.clients.consumer.OffsetResetStrategy.EARLİEST
> at java.lang.Enum.valueOf(Enum.java:238)
> at 
> org.apache.kafka.clients.consumer.OffsetResetStrategy.valueOf(OffsetResetStrategy.java:15)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:588)
> ... 17 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
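
For reference, the failure mode and the merged fix ("Use root locale for case
transformation of constant strings") can be demonstrated in a few lines of
plain Java; the class name is just for the demo:

    import java.util.Locale;

    public class TurkishLocaleDemo {
        public static void main(String[] args) {
            Locale.setDefault(new Locale("tr", "TR"));
            // Under a Turkish default locale, "i" upper-cases to the dotted
            // capital İ, so "earliest" becomes "EARLİEST" and Enum.valueOf
            // cannot match the constant EARLIEST.
            System.out.println("earliest".toUpperCase());
            // Locale-insensitive transformation, as in the fix:
            System.out.println("earliest".toUpperCase(Locale.ROOT)); // EARLIEST
        }
    }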


[GitHub] kafka pull request: KAFKA-3548: Use root locale for case transform...

2016-04-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1220


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3548) Locale is not handled properly in kafka-consumer

2016-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3548?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251100#comment-15251100
 ] 

ASF GitHub Bot commented on KAFKA-3548:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1220


> Locale is not handled properly in kafka-consumer
> 
>
> Key: KAFKA-3548
> URL: https://issues.apache.org/jira/browse/KAFKA-3548
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.1
>Reporter: Tanju Cataltepe
>Assignee: Rajini Sivaram
> Fix For: 0.10.0.0
>
>
> If the JVM local language is Turkish, which has different upper case for the 
> lower case letter i, the result is a runtime error caused by 
> org.apache.kafka.clients.consumer.OffsetResetStrategy. More specifically an 
> enum constant *EARLİEST* is generated which does not match *EARLIEST* (note 
> the _dotted capital i_).
> If the locale for the JVM is explicitly set to en_US, the example runs as 
> expected.
> A sample error log is below:
> {noformat}
> [akka://ReactiveKafka/user/$a] Failed to construct kafka consumer
> akka.actor.ActorInitializationException: exception during creation
> at akka.actor.ActorInitializationException$.apply(Actor.scala:172)
> at akka.actor.ActorCell.create(ActorCell.scala:606)
> at akka.actor.ActorCell.invokeAll$1(ActorCell.scala:461)
> at akka.actor.ActorCell.systemInvoke(ActorCell.scala:483)
> at akka.dispatch.Mailbox.processAllSystemMessages(Mailbox.scala:282)
> at akka.dispatch.Mailbox.run(Mailbox.scala:223)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka 
> consumer
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:648)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:542)
> at 
> com.softwaremill.react.kafka.ReactiveKafkaConsumer.consumer$lzycompute(ReactiveKafkaConsumer.scala:31)
> at 
> com.softwaremill.react.kafka.ReactiveKafkaConsumer.consumer(ReactiveKafkaConsumer.scala:30)
> at 
> com.softwaremill.react.kafka.KafkaActorPublisher.<init>(KafkaActorPublisher.scala:17)
> at 
> com.softwaremill.react.kafka.ReactiveKafka$$anonfun$consumerActorProps$1.apply(ReactiveKafka.scala:270)
> at 
> com.softwaremill.react.kafka.ReactiveKafka$$anonfun$consumerActorProps$1.apply(ReactiveKafka.scala:270)
> at 
> akka.actor.TypedCreatorFunctionConsumer.produce(IndirectActorProducer.scala:87)
> at akka.actor.Props.newActor(Props.scala:214)
> at akka.actor.ActorCell.newActor(ActorCell.scala:562)
> at akka.actor.ActorCell.create(ActorCell.scala:588)
> ... 7 more
> Caused by: java.lang.IllegalArgumentException: No enum constant 
> org.apache.kafka.clients.consumer.OffsetResetStrategy.EARLİEST
> at java.lang.Enum.valueOf(Enum.java:238)
> at 
> org.apache.kafka.clients.consumer.OffsetResetStrategy.valueOf(OffsetResetStrategy.java:15)
> at 
> org.apache.kafka.clients.consumer.KafkaConsumer.<init>(KafkaConsumer.java:588)
> ... 17 more
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3480) Autogenerate metrics documentation

2016-04-20 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251080#comment-15251080
 ] 

Gwen Shapira commented on KAFKA-3480:
-

Sorry for the delay James. I'm a bit busy at the moment, it will probably be 
another week before I have cycles to check this :(

> Autogenerate metrics documentation
> --
>
> Key: KAFKA-3480
> URL: https://issues.apache.org/jira/browse/KAFKA-3480
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jason Gustafson
> Attachments: Screen Shot 2016-04-07 at 6.52.19 PM.png, 
> sample_metrics.html
>
>
> Metrics documentation is done manually, which means it's hard to keep it 
> current. It would be nice to have automatic generation of this documentation 
> as we have for configs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3459) Returning zero task configurations from a connector does not properly clean up existing tasks

2016-04-20 Thread Liquan Pei (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liquan Pei updated KAFKA-3459:
--
Status: Patch Available  (was: In Progress)

> Returning zero task configurations from a connector does not properly clean 
> up existing tasks
> -
>
> Key: KAFKA-3459
> URL: https://issues.apache.org/jira/browse/KAFKA-3459
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.9.0.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Liquan Pei
>
> Instead of deleting existing tasks it just leaves existing tasks in place. If 
> you're writing a connector with a variable number of inputs where it may drop 
> to zero, this makes it impossible to cleanup existing tasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
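
The failing scenario is easy to picture. In the sketch below the method
signature matches the real Connector#taskConfigs, but the class around it is a
hypothetical connector with a variable input set: when the input set drops to
zero it returns an empty list, and per this ticket the framework left the old
tasks running instead of deleting them.

    import java.util.Collections;
    import java.util.List;
    import java.util.Map;

    // Hypothetical connector used only to illustrate the report; it does not
    // extend the real org.apache.kafka.connect.connector.Connector class.
    public class VariableInputConnector {
        private List<String> currentInputs = Collections.emptyList();

        // Same signature as Connector#taskConfigs(int maxTasks).
        public List<Map<String, String>> taskConfigs(int maxTasks) {
            if (currentInputs.isEmpty()) {
                // Zero task configs: existing tasks should be cleaned up here.
                return Collections.emptyList();
            }
            // Otherwise one config map per task, up to maxTasks.
            return Collections.singletonList(
                    Collections.singletonMap("input", currentInputs.get(0)));
        }
    }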


[jira] [Work started] (KAFKA-3459) Returning zero task configurations from a connector does not properly clean up existing tasks

2016-04-20 Thread Liquan Pei (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3459 started by Liquan Pei.
-
> Returning zero task configurations from a connector does not properly clean 
> up existing tasks
> -
>
> Key: KAFKA-3459
> URL: https://issues.apache.org/jira/browse/KAFKA-3459
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.9.0.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Liquan Pei
>
> Instead of deleting existing tasks it just leaves existing tasks in place. If 
> you're writing a connector with a variable number of inputs where it may drop 
> to zero, this makes it impossible to cleanup existing tasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3459) Returning zero task configurations from a connector does not properly clean up existing tasks

2016-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3459?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251033#comment-15251033
 ] 

ASF GitHub Bot commented on KAFKA-3459:
---

GitHub user Ishiihara opened a pull request:

https://github.com/apache/kafka/pull/1248

KAFKA-3459: Returning zero task configurations from a connector does not 
properly clean up existing tasks

@hachikuji @ewencp Can you take a look when you have time?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Ishiihara/kafka kafka-3459

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1248.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1248


commit e91c6032464e49759ab9f753ba6afe1181944770
Author: Liquan Pei 
Date:   2016-04-21T00:51:22Z

Handle zero task




> Returning zero task configurations from a connector does not properly clean 
> up existing tasks
> -
>
> Key: KAFKA-3459
> URL: https://issues.apache.org/jira/browse/KAFKA-3459
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.9.0.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Liquan Pei
>
> Instead of deleting existing tasks it just leaves existing tasks in place. If 
> you're writing a connector with a variable number of inputs where it may drop 
> to zero, this makes it impossible to cleanup existing tasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3459: Returning zero task configurations...

2016-04-20 Thread Ishiihara
GitHub user Ishiihara opened a pull request:

https://github.com/apache/kafka/pull/1248

KAFKA-3459: Returning zero task configurations from a connector does not 
properly clean up existing tasks

@hachikuji @ewencp Can you take a look when you have time?

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Ishiihara/kafka kafka-3459

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1248.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1248


commit e91c6032464e49759ab9f753ba6afe1181944770
Author: Liquan Pei 
Date:   2016-04-21T00:51:22Z

Handle zero task




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: Minor comment fix

2016-04-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1243


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


electing leader failed and result in 0 latest offset

2016-04-20 Thread Qi Xu
Hi folks,
Recently we run into an odd issue that some partition's latest offset
becomes 0. Here's the snapshot of the Kafka Manager. As you can see
partition 2 and 3 becomes zero.

Partition  Latest Offset  Leader  Replicas  In Sync Replicas  Preferred Leader?  Under Replicated?
0          25822061       3       (3,4,5)   (3,5,4)           true               false
1          25822388       4       (4,5,1)   (4,1,5)           true               false
2          0              2       (5,1,2)   (2)               false              true
3          0              2       (1,2,3)   (3,2)             false              true

In the Kafka Controller node, I saw there are some errors like the below in the
state-change log. The timing seems to match; not sure if it's related or not.

[2016-04-14 19:59:21,800] ERROR Controller 3 epoch 74174 initiated state
change for partition [topic,2] from OnlinePartition to OnlinePartition
failed (state.change.logger)
kafka.common.StateChangeFailedException: encountered error while electing
leader for partition [topic,2] due to: Preferred replica 1 for partition
[topic,2] is either not alive or not in the isr. Current leader and ISR:
[{"leader":2,"leader_epoch":169,"isr":[2]}].


And when this happens, basically all these partitions with zero latest
offset fail to get new data. After we restart the controller, everything
goes back to normal.

Have you seen a similar issue before, and any idea about the root cause? What
other information do you suggest collecting to get to the root cause?

Thanks,
Qi
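
A note for reference: the per-partition leader/ISR view in the table above can
also be pulled with the stock tooling (ZooKeeper address and topic name below
are placeholders):

    bin/kafka-topics.sh --describe --zookeeper zkhost:2181 --topic topic

Comparing that output before and after the controller restart would show
whether the leader/ISR view recovers along with the offsets.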


[jira] [Commented] (KAFKA-3579) TopicCommand references outdated consumer property fetch.message.max.bytes

2016-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251001#comment-15251001
 ] 

ASF GitHub Bot commented on KAFKA-3579:
---

GitHub user vahidhashemian reopened a pull request:

https://github.com/apache/kafka/pull/1239

KAFKA-3579 - Update reference to the outdated consumer property

Replace references to the old consumer property 'fetch.message.max.bytes' 
with the corresponding property in the new consumer 'max.partition.fetch.bytes'.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-3579

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1239.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1239


commit fbd579331e8136209356bc572e978908fbde2c1e
Author: Vahid Hashemian 
Date:   2016-04-19T18:21:45Z

KAFKA-3579 - Update reference to the outdated consumer property

Replace references to the old consumer property 'fetch.message.max.bytes' 
with the corresponding property in the new consumer 'max.partition.fetch.bytes'.




> TopicCommand references outdated consumer property fetch.message.max.bytes 
> ---
>
> Key: KAFKA-3579
> URL: https://issues.apache.org/jira/browse/KAFKA-3579
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Vahid Hashemian
>  Labels: newbie
>
> TopicCommand gives the following warning.
> *
> *** WARNING: you are creating a topic where the max.message.bytes is 
> greater than the consumer ***
> *** default. This operation is potentially dangerous. Consumers will get 
> failures if their***
> *** fetch.message.max.bytes < the value you are using.
> ***
> *
> - value set here: 130
> - Default Consumer fetch.message.max.bytes: 1048576
> - Default Broker max.message.bytes: 112
> fetch.message.max.bytes is used in the old consumer. We should reference 
> max.partition.fetch.bytes in the new consumer instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
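
For reference, the new-consumer setting looks like this in code; the broker
address is a placeholder and 1048576 is the documented default for
max.partition.fetch.bytes:

    import java.util.Properties;

    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.ByteArrayDeserializer;

    public class NewConsumerConfigExample {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // placeholder
            // New-consumer replacement for the old fetch.message.max.bytes; it
            // should be at least the broker/topic max.message.bytes, which is
            // exactly what the TopicCommand warning is trying to check.
            props.put("max.partition.fetch.bytes", "1048576");
            KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(
                    props, new ByteArrayDeserializer(), new ByteArrayDeserializer());
            consumer.close();
        }
    }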


[jira] [Commented] (KAFKA-3579) TopicCommand references outdated consumer property fetch.message.max.bytes

2016-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15251000#comment-15251000
 ] 

ASF GitHub Bot commented on KAFKA-3579:
---

Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/1239


> TopicCommand references outdated consumer property fetch.message.max.bytes 
> ---
>
> Key: KAFKA-3579
> URL: https://issues.apache.org/jira/browse/KAFKA-3579
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Jun Rao
>Assignee: Vahid Hashemian
>  Labels: newbie
>
> TopicCommand gives the following warning.
> *
> *** WARNING: you are creating a topic where the max.message.bytes is 
> greater than the consumer ***
> *** default. This operation is potentially dangerous. Consumers will get 
> failures if their***
> *** fetch.message.max.bytes < the value you are using.
> ***
> *
> - value set here: 130
> - Default Consumer fetch.message.max.bytes: 1048576
> - Default Broker max.message.bytes: 112
> fetch.message.max.bytes is used in the old consumer. We should reference 
> max.partition.fetch.bytes in the new consumer instead.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3579 - Update reference to the outdated ...

2016-04-20 Thread vahidhashemian
GitHub user vahidhashemian reopened a pull request:

https://github.com/apache/kafka/pull/1239

KAFKA-3579 - Update reference to the outdated consumer property

Replace references to the old consumer property 'fetch.message.max.bytes' 
with the corresponding property in the new consumer 'max.partition.fetch.bytes'.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-3579

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1239.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1239


commit fbd579331e8136209356bc572e978908fbde2c1e
Author: Vahid Hashemian 
Date:   2016-04-19T18:21:45Z

KAFKA-3579 - Update reference to the outdated consumer property

Replace references to the old consumer property 'fetch.message.max.bytes' 
with the corresponding property in the new consumer 'max.partition.fetch.bytes'.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-3579 - Update reference to the outdated ...

2016-04-20 Thread vahidhashemian
Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/1239


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3117) Fail test at: PlaintextConsumerTest. testAutoCommitOnRebalance

2016-04-20 Thread Guozhang Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250995#comment-15250995
 ] 

Guozhang Wang commented on KAFKA-3117:
--

PR merged, leaving this ticket open for another month to see if the transient 
failure is gone.

> Fail test at: PlaintextConsumerTest. testAutoCommitOnRebalance 
> ---
>
> Key: KAFKA-3117
> URL: https://issues.apache.org/jira/browse/KAFKA-3117
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
> Environment: oracle java764bit
> ubuntu 13.10 
>Reporter: edwardt
>Assignee: Jason Gustafson
>  Labels: newbie, test, transient-unit-test-failure
>
> java.lang.AssertionError: Expected partitions [topic-0, topic-1, topic2-0, 
> topic2-1] but actually got [topic-0, topic-1]
>   at org.junit.Assert.fail(Assert.java:88)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:730)
>   at 
> kafka.api.BaseConsumerTest.testAutoCommitOnRebalance(BaseConsumerTest.scala:125)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:22



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3117) Fail test at: PlaintextConsumerTest. testAutoCommitOnRebalance

2016-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250994#comment-15250994
 ] 

ASF GitHub Bot commented on KAFKA-3117:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1247


> Fail test at: PlaintextConsumerTest. testAutoCommitOnRebalance 
> ---
>
> Key: KAFKA-3117
> URL: https://issues.apache.org/jira/browse/KAFKA-3117
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
> Environment: oracle java764bit
> ubuntu 13.10 
>Reporter: edwardt
>Assignee: Jason Gustafson
>  Labels: newbie, test, transient-unit-test-failure
>
> java.lang.AssertionError: Expected partitions [topic-0, topic-1, topic2-0, 
> topic2-1] but actually got [topic-0, topic-1]
>   at org.junit.Assert.fail(Assert.java:88)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:730)
>   at 
> kafka.api.BaseConsumerTest.testAutoCommitOnRebalance(BaseConsumerTest.scala:125)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:22



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3117: handle metadata updates during con...

2016-04-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1247


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3101) Optimize Aggregation Outputs

2016-04-20 Thread Henry Cai (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250977#comment-15250977
 ] 

Henry Cai commented on KAFKA-3101:
--

We really need this batching/buffering feature in order to adopt Kafka Streams; 
otherwise the output rate from the aggregation store is too high. Any idea on when 
this will be implemented?

A similar use case: we have a left outer join between two streams:

INSERT INTO C
SELECT a, b
FROM A
LEFT OUTER JOIN B
ON a.id = b.id

On the output stream, we might see (a, null) followed by (a, b), which 
cancels the (a, null). To reduce this kind of churn, we could have a policy of 
a 15-minute buffer: if we don't see (a, b) within 15 minutes, then we 
emit (a, null).

Hopefully your solution for buffering aggregation output can solve the left 
outer join case as well. The other workaround would be to delay the B stream 
by 15 minutes, which also needs a buffering mechanism.
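
A minimal sketch of that 15-minute buffering policy, with all names invented for 
illustration (this is not a Kafka Streams API): hold the unmatched left record for 
a grace period, cancel it on a match, and emit (a, null) only once the grace 
period expires.

import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;

public class OuterJoinBuffer<K, A> {
    private static final long GRACE_MS = 15 * 60 * 1000L; // 15-minute grace period

    private static final class Pending<T> {
        final T left;
        final long deadlineMs;
        Pending(T left, long deadlineMs) { this.left = left; this.deadlineMs = deadlineMs; }
    }

    private final Map<K, Pending<A>> pending = new HashMap<>();

    // Left-side record arrives: buffer it instead of emitting (a, null) immediately.
    public void onLeft(K key, A a, long nowMs) {
        pending.put(key, new Pending<>(a, nowMs + GRACE_MS));
    }

    // Right-side record arrives within the grace period: return the buffered left
    // so the caller can emit (a, b); the pending (a, null) is cancelled.
    public A onRight(K key) {
        Pending<A> p = pending.remove(key);
        return p == null ? null : p.left;
    }

    // Called periodically: emit (a, null) only for entries whose grace period expired.
    public void flushExpired(long nowMs, BiConsumer<K, A> emitLeftNull) {
        pending.entrySet().removeIf(e -> {
            if (e.getValue().deadlineMs <= nowMs) {
                emitLeftNull.accept(e.getKey(), e.getValue().left);
                return true;
            }
            return false;
        });
    }
}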


> Optimize Aggregation Outputs
> 
>
> Key: KAFKA-3101
> URL: https://issues.apache.org/jira/browse/KAFKA-3101
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Bill Bejeck
>  Labels: architecture
> Fix For: 0.10.1.0
>
>
> Today we emit one output record for each incoming message for Table / 
> Windowed Stream Aggregations. For example, say we have a sequence of 
> aggregate outputs computed from the input stream (assuming there is no agg 
> value for this key before):
> V1, V2, V3, V4, V5
> Then the aggregator will output the following sequence of Change<newValue, 
> oldValue>:
> <V1, null>, <V2, V1>, <V3, V2>, <V4, V3>, <V5, V4>
> which could cost a lot of CPU overhead computing the intermediate results. 
> Instead, if we let the underlying state store "remember" the last 
> emitted old value, we can reduce the number of emits based on some configs. 
> More specifically, we can add one more field in the KV store engine storing 
> the last emitted old value, which only gets updated when we emit to the 
> downstream processor. For example:
> At Beginning: 
> Store: key => empty (no agg values yet)
> V1 computed: 
> Update Both in Store: key => (V1, V1), Emit <V1, null>
> V2 computed: 
> Update NewValue in Store: key => (V2, V1), No Emit
> V3 computed: 
> Update NewValue in Store: key => (V3, V1), No Emit
> V4 computed: 
> Update Both in Store: key => (V4, V4), Emit <V4, V1>
> V5 computed: 
> Update NewValue in Store: key => (V5, V4), No Emit
> One more thing to consider: we need a "closing" time control on the 
> not-yet-emitted keys; when some time has elapsed (or the window is to be 
> closed), we need to check, for each key, whether its current materialized pair 
> has been emitted (for example <V5, V4> in the above example).
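
The bookkeeping described above is small enough to sketch. The names below are 
illustrative, not the actual state-store engine, and the count/time emit policy 
from the configs is abstracted into a flag:

import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

public class EmitSuppressingStore<K, V> {

    public interface Emitter<K, V> {
        void emit(K key, V newValue, V oldValue); // a Change<newValue, oldValue>
    }

    private static final class Slot<V> {
        V newValue;     // latest computed aggregate
        V lastEmitted;  // value as of the last downstream emit; null if never emitted
    }

    private final Map<K, Slot<V>> store = new HashMap<>();

    // Called for every computed aggregate; emits only when the policy says so.
    public void update(K key, V aggregate, boolean shouldEmitNow, Emitter<K, V> out) {
        Slot<V> slot = store.computeIfAbsent(key, k -> new Slot<>());
        slot.newValue = aggregate;
        if (shouldEmitNow) {
            out.emit(key, aggregate, slot.lastEmitted); // e.g. <V1, null> or <V4, V1>
            slot.lastEmitted = aggregate;               // only updated on emit
        }
    }

    // The "closing" control: flush any key whose current pair was never emitted,
    // e.g. <V5, V4> in the example above.
    public void flush(Emitter<K, V> out) {
        store.forEach((key, slot) -> {
            if (!Objects.equals(slot.newValue, slot.lastEmitted)) {
                out.emit(key, slot.newValue, slot.lastEmitted);
                slot.lastEmitted = slot.newValue;
            }
        });
    }
}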





Re: [DISCUSS] KIP-50 - Enhance Authorizer interface to be aware of supported Principal Types

2016-04-20 Thread Jay Kreps
Yeah our take when we came up with this approach was pretty much what Gwen
is saying:
1. In practice you either need the server or client to do anything and the
server depends on the client so bundling common and client doesn't hurt.
2. Our experience with more granular jars (not in Kafka) was that although
it feels "cleaner" the complexity comes quickly for a few reasons. First it
gets hard to detangle the more granular packages (e.g. somebody needs to
use something in Utils in the authorizer package and then you no longer
have a dag). Second people end up mixing and matching in ways you didn't
anticipate which causes crazy heisenbugs (e.g. they depend on two different
versions of the client via transitive dependencies and somehow end up with
client version x and common version y due to duplicate entries on the class
path).

I'm not really arguing that this approach is superior, I'm just saying this
is the current approach and that is the reason we went with it.

So I could see splitting common and client and you could even further split
the producer and consumer and multiple sub-jars in common, and if this was
the approach I think a separate authorizer jar would make sense. But in the
current approach I think the authorizer stuff would be most consistent as a
public package in common. It is true that this means you build against more
stuff than needed, but I'm not sure this has any negative implications in
practice.

-Jay

On Wed, Apr 20, 2016 at 4:17 PM, Gwen Shapira  wrote:

> But it's just a compile-time dependency, right?
> Since the third-party-authorizer-implementation will be installed on a
> broker where the common classes will exist anyway.
>
>
> On Wed, Apr 20, 2016 at 3:13 PM, Ashish Singh  wrote:
> > Jay,
> >
> > Thanks for the info. I think having common in the clients jar makes sense,
> > as there is no direct usage of it, i.e., without depending on or using
> > clients. Authorizer is a bit different, as third party implementations do
> > not really need anything from clients or server; all they need is the
> > Authorizer interface and related classes. If we move authorizer into
> > common, then third party implementations will have to depend on clients.
> > Though third party implementations depending on clients is not a big
> > problem (right now they depend on core), I think it is cleaner to have
> > a dependency on minimal modules. Would you agree?
> >
> > On Wed, Apr 20, 2016 at 2:27 PM, Jay Kreps  wrote:
> >
> >> I think it's great that we're moving the interface to java and fixing
> some
> >> of the naming foibles.
> >>
> >> This isn't explicit in the KIP which just refers to the java package
> name
> >> (I think), but it looks like you are proposing adding a new authorizer
> jar
> >> for this new package and adding it as a dependency for the client jar.
> This
> >> is a bit inconsistent with how we decided to package stuff when we
> started
> >> with the new clients so it'd be good to work that out.
> >>
> >> To date the categorization has been:
> >> 1. Anything which is just in the clients is in org.apache.clients under
> >> clients/
> >> 2. Anything which is in the server is kafka.* which is under core/
> >> 3. Anything which is needed in both places (as it sounds like some enums
> >> for authorization are?) is in common which is under clients/
> >>
> >> org.apache.clients and org.apache.common are both pure java and
> dependency
> >> free other than the compression libraries and slf4j and are packaged
> into
> >> the kafka-clients jar; the server has its own jar which has richer
> >> dependencies and depends on the client jar.
> >>
> >> There are other ways this could have been done--e.g. common could have
> been
> >> its own library or even split into multiple sub-libraries--but the
> decision
> >> at that time was just to keep it simple and hard to mess up. Based on
> the
> >> experience with the scala clients our plan was to be ultra-hostile to
> any
> >> added client dependencies.
> >>
> >> So I think if we're continuing this model we would put the shared
> >> authorizer code somewhere under
> >> clients/src/main/java/org/apache/kafka/common as with the other shared
> >> authorizer. If we're moving away from this model we should probably
> rethink
> >> things and be consistent with this, at the very least splitting up
> common
> >> and clients.
> >>
> >> -Jay
> >>
> >> On Wed, Apr 20, 2016 at 1:04 PM, Ashish Singh 
> wrote:
> >>
> >> > Jun/ Jay/ Gwen/ Harsha/ Ismael,
> >> >
> >> > As you guys have provided feedback on this earlier, could you review
> the
> >> > KIP again? I have updated the PR <
> >> https://github.com/apache/kafka/pull/861>
> >> > as
> >> > well.
> >> >
> >> > On Wed, Apr 20, 2016 at 1:01 PM, Ashish Singh 
> >> wrote:
> >> >
> >> > > Hi Grant,
> >> > >
> >> > > On Tue, Apr 19, 2016 at 8:13 AM, Grant Henke 
> >> > wrote:
> >> > >
> >> > >> Hi 

[jira] [Updated] (KAFKA-3117) Fail test at: PlaintextConsumerTest. testAutoCommitOnRebalance

2016-04-20 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson updated KAFKA-3117:
---
Status: Patch Available  (was: Open)

> Fail test at: PlaintextConsumerTest. testAutoCommitOnRebalance 
> ---
>
> Key: KAFKA-3117
> URL: https://issues.apache.org/jira/browse/KAFKA-3117
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
> Environment: oracle java764bit
> ubuntu 13.10 
>Reporter: edwardt
>Assignee: Jason Gustafson
>  Labels: newbie, test, transient-unit-test-failure
>
> java.lang.AssertionError: Expected partitions [topic-0, topic-1, topic2-0, 
> topic2-1] but actually got [topic-0, topic-1]
>   at org.junit.Assert.fail(Assert.java:88)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:730)
>   at 
> kafka.api.BaseConsumerTest.testAutoCommitOnRebalance(BaseConsumerTest.scala:125)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:22





Re: [DISCUSS] KIP-50 - Enhance Authorizer interface to be aware of supported Principal Types

2016-04-20 Thread Gwen Shapira
But it's just a compile-time dependency, right?
Since the third-party-authorizer-implementation will be installed on a
broker where the common classes will exist anyway.


On Wed, Apr 20, 2016 at 3:13 PM, Ashish Singh  wrote:
> Jay,
>
> Thanks for the info. I think having common in the clients jar makes sense,
> as there is no direct usage of it, i.e., without depending on or using
> clients. Authorizer is a bit different, as third party implementations do
> not really need anything from clients or server; all they need is the
> Authorizer interface and related classes. If we move authorizer into
> common, then third party implementations will have to depend on clients.
> Though third party implementations depending on clients is not a big
> problem (right now they depend on core), I think it is cleaner to have
> a dependency on minimal modules. Would you agree?
>
> On Wed, Apr 20, 2016 at 2:27 PM, Jay Kreps  wrote:
>
>> I think it's great that we're moving the interface to java and fixing some
>> of the naming foibles.
>>
>> This isn't explicit in the KIP which just refers to the java package name
>> (I think), but it looks like you are proposing adding a new authorizer jar
>> for this new package and adding it as a dependency for the client jar. This
>> is a bit inconsistent with how we decided to package stuff when we started
>> with the new clients so it'd be good to work that out.
>>
>> To date the categorization has been:
>> 1. Anything which is just in the clients is in org.apache.clients under
>> clients/
>> 2. Anything which is in the server is kafka.* which is under core/
>> 3. Anything which is needed in both places (as it sounds like some enums
>> for authorization are?) is in common which is under clients/
>>
>> org.apache.clients and org.apache.common are both pure java and dependency
>> free other than the compression libraries and slf4j and are packaged into
>> the kafka-clients jar; the server has its own jar which has richer
>> dependencies and depends on the client jar.
>>
>> There are other ways this could have been done--e.g. common could have been
>> its own library or even split into multiple sub-libraries--but the decision
>> at that time was just to keep it simple and hard to mess up. Based on the
>> experience with the scala clients our plan was to be ultra-hostile to any
>> added client dependencies.
>>
>> So I think if we're continuing this model we would put the shared
>> authorizer code somewhere under
>> clients/src/main/java/org/apache/kafka/common as with the other shared
>> authorizer. If we're moving away from this model we should probably rethink
>> things and be consistent with this, at the very least splitting up common
>> and clients.
>>
>> -Jay
>>
>> On Wed, Apr 20, 2016 at 1:04 PM, Ashish Singh  wrote:
>>
>> > Jun/ Jay/ Gwen/ Harsha/ Ismael,
>> >
>> > As you guys have provided feedback on this earlier, could you review the
>> > KIP again? I have updated the PR <
>> https://github.com/apache/kafka/pull/861>
>> > as
>> > well.
>> >
>> > On Wed, Apr 20, 2016 at 1:01 PM, Ashish Singh 
>> wrote:
>> >
>> > > Hi Grant,
>> > >
>> > > On Tue, Apr 19, 2016 at 8:13 AM, Grant Henke 
>> > wrote:
>> > >
>> > >> Hi Ashish,
>> > >>
>> > >> Thanks for the updates. I have a few questions below:
>> > >>
>> > >> > Move following interfaces to new package,
>> org.apache.kafka.authorizer.
>> > >> >
>> > >> >1. Authorizer
>> > >> >2. Acl
>> > >> >3. Operation
>> > >> >4. PermissionType
>> > >> >5. Resource
>> > >> >6. ResourceType
>> > >> >7. KafkaPrincipal
>> > >> >8. Session
>> > >> >
>> > >> >
>> > >> This means the client would be required to depend on the authorizer
>> > >> package
>> > >> as a part of KIP-4. Another option is to have the client objects in
>> > >> common.
>> > >> Have we ruled out leaving the interface in the core module?
>> > >>
>> > > With this, entities that use Authorizer will depend only on the
>> > > authorizer package. Third party implementations can have only the
>> > > authorizer pkg as a dependency. The core and client modules will also
>> > > have to depend on the authorizer with this approach. Do you see any
>> > > issue with it?
>> > >
>> > >>
>> > >> Authorizer interface will be updated to remove getter naming
>> convention.
>> > >>
>> > >>
>> > >> Now that this is Java do we still want to change to the Scala naming
>> > >> convention?
>> > >>
>> > > Even in the clients module I do not see the getter naming convention
>> > > being followed; it is better to be consistent, I guess.
>> > >
>> > >>
>> > >>
>> > >> Since we are completely rewriting the interface, can we add some (at
>> > least
>> > >> one to start with) standard exceptions that each method is recommended
>> > to
>> > >> use/throw? This will help the server in KIP-4 provide meaningful error
>> > >> codes. KAFKA-3507 
>> is
>> > >> tracking it 

[jira] [Created] (KAFKA-3597) Enable query ConsoleConsumer and VerifiableProducer if they shutdown cleanly

2016-04-20 Thread Anna Povzner (JIRA)
Anna Povzner created KAFKA-3597:
---

 Summary: Enable query ConsoleConsumer and VerifiableProducer if 
they shutdown cleanly
 Key: KAFKA-3597
 URL: https://issues.apache.org/jira/browse/KAFKA-3597
 Project: Kafka
  Issue Type: Test
Reporter: Anna Povzner
Assignee: Anna Povzner
 Fix For: 0.10.0.0


It would be useful for some tests to check whether ConsoleConsumer and 
VerifiableProducer shut down cleanly or not. 

Add methods to ConsoleConsumer and VerifiableProducer that return true if all 
producers/consumers shut down cleanly, and false otherwise.
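
A sketch of the queryable-shutdown idea (illustrative Java only; the real 
ConsoleConsumer and VerifiableProducer services live in Kafka's system-test 
framework):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ShutdownTracker {
    private final Map<Integer, Boolean> cleanByNode = new ConcurrentHashMap<>();

    // Record whether a producer/consumer node exited cleanly.
    public void recordShutdown(int nodeId, boolean clean) {
        cleanByNode.put(nodeId, clean);
    }

    // True only if every tracked node shut down cleanly.
    public boolean allShutdownClean() {
        return !cleanByNode.isEmpty()
                && cleanByNode.values().stream().allMatch(c -> c);
    }
}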





[jira] [Commented] (KAFKA-3593) kafka.network.Processor throws ArrayIndexOutOfBoundsException

2016-04-20 Thread S Shan (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3593?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250884#comment-15250884
 ] 

S Shan commented on KAFKA-3593:
---

After I created this issue, I found out that this is a known regression in 
brokers 0.9.0.0 and 0.9.0.1, https://issues.apache.org/jira/browse/KAFKA-3547, 
and the fix will be in 0.10.0.0.
Also, there is a workaround in the C/C++ library librdkafka. With the latest 
librdkafka master source (downloaded Apr 20, 2016), running the same tests, the 
Kafka broker no longer throws the ArrayIndexOutOfBoundsException.

> kafka.network.Processor throws ArrayIndexOutOfBoundsException
> -
>
> Key: KAFKA-3593
> URL: https://issues.apache.org/jira/browse/KAFKA-3593
> Project: Kafka
>  Issue Type: Bug
>  Components: network
>Affects Versions: 0.9.0.1
> Environment: Red Hat Enterprise Linux Workstation release 6.7 
> (Santiago)
>Reporter: S Shan
>Assignee: Jun Rao
>
> While running the tests and examples from the Kafka C/C++ library librdkafka 
> (version 0.9), Kafka repeatedly threw the ArrayIndexOutOfBoundsException due 
> to "Array index out of range".
> [2016-04-13 12:02:27,490] ERROR Processor got uncaught exception. 
> (kafka.network.Processor)
> java.lang.ArrayIndexOutOfBoundsException: Array index out of range: 17
>   at org.apache.kafka.common.protocol.ApiKeys.forId(ApiKeys.java:68)
>   at 
> org.apache.kafka.common.requests.AbstractRequest.getRequest(AbstractRequest.java:39)
>   at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:79)
>   at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:426)
>   at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:421)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:742)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at kafka.network.Processor.run(SocketServer.scala:421)
>   at java.lang.Thread.run(Thread.java:798)
> ...
> [2016-04-20 13:39:59,973] INFO [Group Metadata Manager on Broker 2]: Removed 
> 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
> [2016-04-20 13:40:00,446] INFO [Group Metadata Manager on Broker 0]: Removed 
> 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
> [2016-04-20 13:40:07,135] ERROR Processor got uncaught exception. 
> (kafka.network.Processor)
> java.lang.ArrayIndexOutOfBoundsException
>   at org.apache.kafka.common.protocol.ApiKeys.forId(ApiKeys.java:68)
>   at 
> org.apache.kafka.common.requests.AbstractRequest.getRequest(AbstractRequest.java:39)
>   at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:79)
>   at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:426)
>   at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:421)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:742)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at kafka.network.Processor.run(SocketServer.scala:421)
>   at java.lang.Thread.run(Thread.java:798)
> [2016-04-20 13:40:17,478] ERROR Processor got uncaught exception. 
> (kafka.network.Processor)
> java.lang.ArrayIndexOutOfBoundsException
>   at org.apache.kafka.common.protocol.ApiKeys.forId(ApiKeys.java:68)
>   at 
> org.apache.kafka.common.requests.AbstractRequest.getRequest(AbstractRequest.java:39)
>   at kafka.network.RequestChannel$Request.<init>(RequestChannel.scala:79)
>   at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:426)
>   at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:421)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:742)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
>   at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
>   at kafka.network.Processor.run(SocketServer.scala:421)
>   at java.lang.Thread.run(Thread.java:798)
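
For context, the shape of the KAFKA-3547-style fix is a bounds check before the 
array lookup, so an unknown or newer-protocol api key id (here, 17) produces a 
clear error instead of an ArrayIndexOutOfBoundsException. The sketch below 
simplifies the names from org.apache.kafka.common.protocol.ApiKeys and is not 
the exact patch:

enum Api {
    PRODUCE, FETCH /* , ... the remaining request types ... */;

    private static final Api[] ID_TO_TYPE = values();
    private static final int MAX_API_KEY = ID_TO_TYPE.length - 1;

    // Validate the id before indexing the lookup table.
    static Api forId(int id) {
        if (id < 0 || id > MAX_API_KEY)
            throw new IllegalArgumentException(
                    "Unexpected api key id " + id + ", expected 0.." + MAX_API_KEY);
        return ID_TO_TYPE[id];
    }
}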





[GitHub] kafka pull request: KAFKA-3117: handle metadata updates during con...

2016-04-20 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1247

KAFKA-3117: handle metadata updates during consumer rebalance



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3117

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1247.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1247


commit 6a4fa54a03e9b9a1c3d8512728d50075e7aa7c31
Author: Jason Gustafson 
Date:   2016-04-20T21:37:22Z

KAFKA-3117: handle metadata updates during consumer rebalance






[jira] [Commented] (KAFKA-3117) Fail test at: PlaintextConsumerTest. testAutoCommitOnRebalance

2016-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250849#comment-15250849
 ] 

ASF GitHub Bot commented on KAFKA-3117:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/1247

KAFKA-3117: handle metadata updates during consumer rebalance



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3117

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1247.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1247


commit 6a4fa54a03e9b9a1c3d8512728d50075e7aa7c31
Author: Jason Gustafson 
Date:   2016-04-20T21:37:22Z

KAFKA-3117: handle metadata updates during consumer rebalance




> Fail test at: PlaintextConsumerTest. testAutoCommitOnRebalance 
> ---
>
> Key: KAFKA-3117
> URL: https://issues.apache.org/jira/browse/KAFKA-3117
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
> Environment: oracle java764bit
> ubuntu 13.10 
>Reporter: edwardt
>Assignee: Jason Gustafson
>  Labels: newbie, test, transient-unit-test-failure
>
> java.lang.AssertionError: Expected partitions [topic-0, topic-1, topic2-0, 
> topic2-1] but actually got [topic-0, topic-1]
>   at org.junit.Assert.fail(Assert.java:88)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:730)
>   at 
> kafka.api.BaseConsumerTest.testAutoCommitOnRebalance(BaseConsumerTest.scala:125)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:22





[jira] [Updated] (KAFKA-1981) Make log compaction point configurable

2016-04-20 Thread Eric Wasserman (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1981?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Eric Wasserman updated KAFKA-1981:
--
Attachment: KIP for Kafka Compaction Patch.md

Attached a KIP as I cannot add one in Confluence

> Make log compaction point configurable
> --
>
> Key: KAFKA-1981
> URL: https://issues.apache.org/jira/browse/KAFKA-1981
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.0
>Reporter: Jay Kreps
>  Labels: newbie++
> Attachments: KIP for Kafka Compaction Patch.md
>
>
> Currently if you enable log compaction the compactor will kick in whenever 
> you hit a certain "dirty ratio", i.e. when 50% of your data is uncompacted. 
> Beyond this dirty ratio we don't give you fine-grained control over when 
> compaction occurs, and we never compact the active segment (since it is still 
> being written to).
> The result is that you can't really guarantee that a consumer 
> will get every update to a compacted topic--if the consumer falls behind a 
> bit it might just get the compacted version.
> This is usually fine, but it would be nice to make this more configurable so 
> you could set either a # messages, size, or time bound for compaction.
> This would let you say, for example, "any consumer that is no more than 1 
> hour behind will get every message."
> This should be relatively easy to implement since it just impacts the 
> end-point the compactor considers available for compaction. I think we 
> already have that concept, so this would just be some other overrides to add 
> in when calculating that.
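
As a sketch of the idea, the compactor would only consider offsets below the 
tightest configured bound. All names below are invented for illustration, not 
real broker configs (the time bound eventually shipped as min.compaction.lag.ms 
via KIP-58):

final class CompactionPoint {
    static long compactUpTo(long logEndOffset,
                            long firstOffsetWithinMaxLagTime, // first offset newer than now - maxLagMs
                            long maxLagMessages) {
        // "any consumer that is no more than 1 hour behind will get every message"
        long timeBound = firstOffsetWithinMaxLagTime;
        long messageBound = logEndOffset - maxLagMessages;
        return Math.min(timeBound, messageBound); // only offsets below this are compacted
    }
}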





Re: [DISCUSS] KIP-50 - Enhance Authorizer interface to be aware of supported Principal Types

2016-04-20 Thread Ashish Singh
Jay,

Thanks for the info. I think having common in the clients jar makes sense,
as there is no direct usage of it, i.e., without depending on or using
clients. Authorizer is a bit different, as third party implementations do
not really need anything from clients or server; all they need is the
Authorizer interface and related classes. If we move authorizer into
common, then third party implementations will have to depend on clients.
Though third party implementations depending on clients is not a big
problem (right now they depend on core), I think it is cleaner to have
a dependency on minimal modules. Would you agree?

On Wed, Apr 20, 2016 at 2:27 PM, Jay Kreps  wrote:

> I think it's great that we're moving the interface to java and fixing some
> of the naming foibles.
>
> This isn't explicit in the KIP which just refers to the java package name
> (I think), but it looks like you are proposing adding a new authorizer jar
> for this new package and adding it as a dependency for the client jar. This
> is a bit inconsistent with how we decided to package stuff when we started
> with the new clients so it'd be good to work that out.
>
> To date the categorization has been:
> 1. Anything which is just in the clients is in org.apache.clients under
> clients/
> 2. Anything which is in the server is kafka.* which is under core/
> 3. Anything which is needed in both places (as it sounds like some enums
> for authorization are?) is in common which is under clients/
>
> org.apache.clients and org.apache.common are both pure java and dependency
> free other than the compression libraries and slf4j and are packaged into
> the kafka-clients jar; the server has its own jar which has richer
> dependencies and depends on the client jar.
>
> There are other ways this could have been done--e.g. common could have been
> its own library or even split into multiple sub-libraries--but the decision
> at that time was just to keep it simple and hard to mess up. Based on the
> experience with the scala clients our plan was to be ultra-hostile to any
> added client dependencies.
>
> So I think if we're continuing this model we would put the shared
> authorizer code somewhere under
> clients/src/main/java/org/apache/kafka/common as with the other shared
> authorizer. If we're moving away from this model we should probably rethink
> things and be consistent with this, at the very least splitting up common
> and clients.
>
> -Jay
>
> On Wed, Apr 20, 2016 at 1:04 PM, Ashish Singh  wrote:
>
> > Jun/ Jay/ Gwen/ Harsha/ Ismael,
> >
> > As you guys have provided feedback on this earlier, could you review the
> > KIP again? I have updated the PR <
> https://github.com/apache/kafka/pull/861>
> > as
> > well.
> >
> > On Wed, Apr 20, 2016 at 1:01 PM, Ashish Singh 
> wrote:
> >
> > > Hi Grant,
> > >
> > > On Tue, Apr 19, 2016 at 8:13 AM, Grant Henke 
> > wrote:
> > >
> > >> Hi Ashish,
> > >>
> > >> Thanks for the updates. I have a few questions below:
> > >>
> > >> > Move following interfaces to new package,
> org.apache.kafka.authorizer.
> > >> >
> > >> >1. Authorizer
> > >> >2. Acl
> > >> >3. Operation
> > >> >4. PermissionType
> > >> >5. Resource
> > >> >6. ResourceType
> > >> >7. KafkaPrincipal
> > >> >8. Session
> > >> >
> > >> >
> > >> This means the client would be required to depend on the authorizer
> > >> package
> > >> as a part of KIP-4. Another option is to have the client objects in
> > >> common.
> > >> Have we ruled out leaving the interface in the core module?
> > >>
> > > With this, entities that use Authorizer will depend only on the
> > > authorizer package. Third party implementations can have only the
> > > authorizer pkg as a dependency. The core and client modules will also
> > > have to depend on the authorizer with this approach. Do you see any
> > > issue with it?
> > >
> > >>
> > >> Authorizer interface will be updated to remove getter naming
> convention.
> > >>
> > >>
> > >> Now that this is Java do we still want to change to the Scala naming
> > >> convention?
> > >>
> > > Even in the clients module I do not see the getter naming convention
> > > being followed; it is better to be consistent, I guess.
> > >
> > >>
> > >>
> > >> Since we are completely rewriting the interface, can we add some (at
> > least
> > >> one to start with) standard exceptions that each method is recommended
> > to
> > >> use/throw? This will help the server in KIP-4 provide meaningful error
> > >> codes. KAFKA-3507 
> is
> > >> tracking it right now.
> > >>
> > > That should be good to have. Will include that. Thanks.
> > >
> > >>
> > >> Thanks,
> > >> Grant
> > >>
> > >>
> > >>
> > >> On Tue, Apr 19, 2016 at 9:48 AM, Ashish Singh 
> > >> wrote:
> > >>
> > >> > I have updated KIP-50
> > >> > <
> > >> >
> > >>
> >
> 

Jenkins build is back to normal : kafka-trunk-jdk8 #539

2016-04-20 Thread Apache Jenkins Server
See 



[GitHub] kafka pull request: KAFKA-3589: set inner serializer for ChangedSe...

2016-04-20 Thread guozhangwang
GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/1246

KAFKA-3589: set inner serializer for ChangedSerde upon initialization



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K3589

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1246.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1246


commit b75faa63a9af73fc5ba50291cdd6abe2f1e6
Author: Guozhang Wang 
Date:   2016-04-20T22:00:19Z

v1






[jira] [Commented] (KAFKA-3589) KTable.count(final KeyValueMapper<K, V, K1> selector, String name) throw NPE

2016-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3589?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250801#comment-15250801
 ] 

ASF GitHub Bot commented on KAFKA-3589:
---

GitHub user guozhangwang opened a pull request:

https://github.com/apache/kafka/pull/1246

KAFKA-3589: set inner serializer for ChangedSerde upon initialization



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/guozhangwang/kafka K3589

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1246.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1246


commit b75faa63a9af73fc5ba50291cdd6abe2f1e6
Author: Guozhang Wang 
Date:   2016-04-20T22:00:19Z

v1




> KTable.count(final KeyValueMapper<K, V, K1> selector, String name) throw NPE
> 
>
> Key: KAFKA-3589
> URL: https://issues.apache.org/jira/browse/KAFKA-3589
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.0
>Reporter: Damian Guy
>Assignee: Guozhang Wang
>  Labels: architecture
> Fix For: 0.10.0.0
>
>
> The implementation of KTable.count(final KeyValueMapper<K, V, K1> selector, 
> String name) passes null through as the parameters for the key and value 
> serdes. This results in an NPE in the aggregate method when it does:
> ChangedSerializer changedValueSerializer = new 
> ChangedSerializer<>(valueSerde.serializer());
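
A sketch of the kind of guard needed here (illustrative, not the actual patch, 
which sets the inner serializer upon initialization): fall back to a configured 
default serde instead of dereferencing null.

import org.apache.kafka.common.serialization.Serde;

final class SerdeDefaults {
    // Use the caller-supplied serde when present, otherwise the default from config.
    static <T> Serde<T> orDefault(Serde<T> serde, Serde<T> configuredDefault) {
        return serde != null ? serde : configuredDefault;
    }
}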





[jira] [Updated] (KAFKA-3595) Add capability to specify replication compact option for stream store

2016-04-20 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3595:
-
Fix Version/s: 0.10.1.0

> Add capability to specify replication compact option for stream store
> -
>
> Key: KAFKA-3595
> URL: https://issues.apache.org/jira/browse/KAFKA-3595
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Henry Cai
>Assignee: Guozhang Wang
>Priority: Minor
>  Labels: user-experience
> Fix For: 0.10.1.0
>
>
> Currently state store replication always goes through a compacted Kafka topic. 
> For some state stores, e.g. JoinWindow, there are no duplicates in the store, 
> so there is not much benefit to using a compacted topic.
> The problem with using a compacted topic is that the records can stay on the 
> Kafka broker forever. In my use case, my key is ad_id, which is always 
> incrementing and not bounded, so I am worried the disk space on the broker for 
> that topic will grow forever.
> I think we either need the capability to purge the compacted records on the 
> broker, or allow us to specify a different compaction option for state store 
> replication.
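
For illustration, the request amounts to letting a store hand its changelog 
topic a config like the one below; the two config keys are real Kafka topic 
configs, while the surrounding method is invented:

import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.TimeUnit;

final class ChangelogConfigs {
    static Map<String, String> timeBasedRetention() {
        Map<String, String> cfg = new HashMap<>();
        cfg.put("cleanup.policy", "delete"); // time/size-based retention instead of compaction
        cfg.put("retention.ms", String.valueOf(TimeUnit.DAYS.toMillis(7)));
        return cfg;
    }
}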





[jira] [Updated] (KAFKA-3595) Add capability to specify replication compact option for stream store

2016-04-20 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3595:
-
Labels: user-experience  (was: )

> Add capability to specify replication compact option for stream store
> -
>
> Key: KAFKA-3595
> URL: https://issues.apache.org/jira/browse/KAFKA-3595
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Henry Cai
>Assignee: Guozhang Wang
>Priority: Minor
>  Labels: user-experience
> Fix For: 0.10.1.0
>
>
> Currently state store replication always goes through a compacted Kafka topic. 
> For some state stores, e.g. JoinWindow, there are no duplicates in the store, 
> so there is not much benefit to using a compacted topic.
> The problem with using a compacted topic is that the records can stay on the 
> Kafka broker forever. In my use case, my key is ad_id, which is always 
> incrementing and not bounded, so I am worried the disk space on the broker for 
> that topic will grow forever.
> I think we either need the capability to purge the compacted records on the 
> broker, or allow us to specify a different compaction option for state store 
> replication.





[jira] [Updated] (KAFKA-3596) Kafka Streams: Window expiration needs to consider more than event time

2016-04-20 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3596:
-
Fix Version/s: 0.10.1.0

> Kafka Streams: Window expiration needs to consider more than event time
> ---
>
> Key: KAFKA-3596
> URL: https://issues.apache.org/jira/browse/KAFKA-3596
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Henry Cai
>Assignee: Guozhang Wang
>Priority: Minor
>  Labels: architecture
> Fix For: 0.10.1.0
>
>
> Currently in Kafka Streams, window expiration in RocksDB is triggered by new 
> event insertion. When a window is created at T0 with 10 minutes retention, and 
> we see a new record coming with event timestamp T0 + 10 + 1, we expire that 
> window (remove it) from RocksDB.
> In the real world, it is very easy to see events coming with future timestamps 
> (or out-of-order events with big time gaps between them), so retiring a window 
> based on a single event's timestamp is dangerous. I think at least we need to 
> consider both the event's event time and the elapsed server/stream time.





[jira] [Updated] (KAFKA-3596) Kafka Streams: Window expiration needs to consider more than event time

2016-04-20 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3596:
-
Labels: architecture  (was: )

> Kafka Streams: Window expiration needs to consider more than event time
> ---
>
> Key: KAFKA-3596
> URL: https://issues.apache.org/jira/browse/KAFKA-3596
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Henry Cai
>Assignee: Guozhang Wang
>Priority: Minor
>  Labels: architecture
> Fix For: 0.10.1.0
>
>
> Currently in Kafka Streams, window expiration in RocksDB is triggered by new 
> event insertion. When a window is created at T0 with 10 minutes retention, and 
> we see a new record coming with event timestamp T0 + 10 + 1, we expire that 
> window (remove it) from RocksDB.
> In the real world, it is very easy to see events coming with future timestamps 
> (or out-of-order events with big time gaps between them), so retiring a window 
> based on a single event's timestamp is dangerous. I think at least we need to 
> consider both the event's event time and the elapsed server/stream time.





[jira] [Resolved] (KAFKA-3568) KafkaProducer fails with timeout on message send()

2016-04-20 Thread Greg Zoller (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Greg Zoller resolved KAFKA-3568.

   Resolution: Fixed
Fix Version/s: 0.10.0.0
   0.10.1.0

Wow!  What a find!  Thank you so much.  You've saved my sanity.  I couldn't 
fathom how commented-out lines of code would make a difference, but they did, 
and now I understand why.

> KafkaProducer fails with timeout on message send()
> --
>
> Key: KAFKA-3568
> URL: https://issues.apache.org/jira/browse/KAFKA-3568
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.1.0
> Environment: MacOS Docker
>Reporter: Greg Zoller
>  Labels: producer
> Fix For: 0.10.1.0, 0.10.0.0
>
>
> I had a KafkaProducer working fine in 0.9.0.1. I was having unrelated 
> problems in that version so I thought I'd try 0.10.1.0. I built it as I did 
> 0.9.0.1:
> Fresh build against Scala 2.11.7. Built the tgz build plus local maven 
> install. From the tgz I created a Docker image similar to spotify/kafka. I 
> linked my producer code to the maven jars. This process worked in 0.9.
> Code is here:  
> https://gist.github.com/gzoller/145faef1fefc8acea212e87e06fc86e8
> At the bottom you can see a clip from the output... there's a warning about 
> metadata (not sure if it's important or not) and then it's trying to send() 
> messages and timing out. I clipped the output, but it does fail the same way 
> for each message sent in 0.10.1.0. The same code compiled against 0.9.0.1 
> populates the topic's partitions w/o problem.
> Was there a breaking change between 0.9 and 0.10, or is this a bug?





[jira] [Updated] (KAFKA-3595) Add capability to specify replication compact option for stream store

2016-04-20 Thread Henry Cai (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Cai updated KAFKA-3595:
-
Description: 
Currently state store replication always goes through a compacted Kafka topic. For 
some state stores, e.g. JoinWindow, there are no duplicates in the store, so there 
is not much benefit to using a compacted topic.
The problem with using a compacted topic is that the records can stay on the Kafka 
broker forever. In my use case, my key is ad_id, which is always incrementing and 
not bounded, so I am worried the disk space on the broker for that topic will grow 
forever.
I think we either need the capability to purge the compacted records on the broker, 
or allow us to specify a different compaction option for state store replication.

  was:
Currently in Kafka Streams, window expiration in RocksDB is triggered by new event 
insertion. When a window is created at T0 with 10 minutes retention, and we see a 
new record coming with event timestamp T0 + 10 + 1, we expire that window (remove 
it) from RocksDB.

In the real world, it is very easy to see events coming with future timestamps (or 
out-of-order events with big time gaps between them), so retiring a window based on 
a single event's timestamp is dangerous. I think at least we need to consider both 
the event's event time and the elapsed server/stream time.


> Add capability to specify replication compact option for stream store
> -
>
> Key: KAFKA-3595
> URL: https://issues.apache.org/jira/browse/KAFKA-3595
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Henry Cai
>Assignee: Guozhang Wang
>Priority: Minor
>
> Currently state store replication always goes through a compacted Kafka topic. 
> For some state stores, e.g. JoinWindow, there are no duplicates in the store, 
> so there is not much benefit to using a compacted topic.
> The problem with using a compacted topic is that the records can stay on the 
> Kafka broker forever. In my use case, my key is ad_id, which is always 
> incrementing and not bounded, so I am worried the disk space on the broker for 
> that topic will grow forever.
> I think we either need the capability to purge the compacted records on the 
> broker, or allow us to specify a different compaction option for state store 
> replication.





[jira] [Updated] (KAFKA-3596) Kafka Streams: Window expiration needs to consider more than event time

2016-04-20 Thread Henry Cai (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3596?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Cai updated KAFKA-3596:
-
Description: 
Currently in Kafka Streams, window expiration in RocksDB is triggered by new event 
insertion. When a window is created at T0 with 10 minutes retention, and we see a 
new record coming with event timestamp T0 + 10 + 1, we expire that window (remove 
it) from RocksDB.

In the real world, it is very easy to see events coming with future timestamps (or 
out-of-order events with big time gaps between them), so retiring a window based on 
a single event's timestamp is dangerous. I think at least we need to consider both 
the event's event time and the elapsed server/stream time.

  was:
Currently state store replication always goes through a compacted Kafka topic. For 
some state stores, e.g. JoinWindow, there are no duplicates in the store, so there 
is not much benefit to using a compacted topic.

The problem with using a compacted topic is that the records can stay on the Kafka 
broker forever. In my use case, my key is ad_id, which is always incrementing and 
not bounded, so I am worried the disk space on the broker for that topic will grow 
forever.

I think we either need the capability to purge the compacted records on the broker, 
or allow us to specify a different compaction option for state store replication.


> Kafka Streams: Window expiration needs to consider more than event time
> ---
>
> Key: KAFKA-3596
> URL: https://issues.apache.org/jira/browse/KAFKA-3596
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Henry Cai
>Assignee: Guozhang Wang
>Priority: Minor
>
> Currently in Kafka Streams, window expiration in RocksDB is triggered by new 
> event insertion. When a window is created at T0 with 10 minutes retention, and 
> we see a new record coming with event timestamp T0 + 10 + 1, we expire that 
> window (remove it) from RocksDB.
> In the real world, it is very easy to see events coming with future timestamps 
> (or out-of-order events with big time gaps between them), so retiring a window 
> based on a single event's timestamp is dangerous. I think at least we need to 
> consider both the event's event time and the elapsed server/stream time.





[jira] [Updated] (KAFKA-3595) Add capability to specify replication compact option for stream store

2016-04-20 Thread Henry Cai (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Cai updated KAFKA-3595:
-
Description: 
Currently in Kafka Streams, window expiration in RocksDB is triggered by new event 
insertion. When a window is created at T0 with 10 minutes retention, and we see a 
new record coming with event timestamp T0 + 10 + 1, we expire that window (remove 
it) from RocksDB.

In the real world, it is very easy to see events coming with future timestamps (or 
out-of-order events with big time gaps between them), so retiring a window based on 
a single event's timestamp is dangerous. I think at least we need to consider both 
the event's event time and the elapsed server/stream time.

  was:
Currently state store replication always goes through a compacted Kafka topic. For 
some state stores, e.g. JoinWindow, there are no duplicates in the store, so there 
is not much benefit to using a compacted topic.

The problem with using a compacted topic is that the records can stay on the Kafka 
broker forever. In my use case, my key is ad_id, which is always incrementing and 
not bounded, so I am worried the disk space on the broker for that topic will grow 
forever.

I think we either need the capability to purge the compacted records on the broker, 
or allow us to specify a different compaction option for state store replication.


> Add capability to specify replication compact option for stream store
> -
>
> Key: KAFKA-3595
> URL: https://issues.apache.org/jira/browse/KAFKA-3595
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Henry Cai
>Assignee: Guozhang Wang
>Priority: Minor
>
> Currently in Kafka Streams, window expiration in RocksDB is triggered by new 
> event insertion. When a window is created at T0 with 10 minutes retention, and 
> we see a new record coming with event timestamp T0 + 10 + 1, we expire that 
> window (remove it) from RocksDB.
> In the real world, it is very easy to see events coming with future timestamps 
> (or out-of-order events with big time gaps between them), so retiring a window 
> based on a single event's timestamp is dangerous. I think at least we need to 
> consider both the event's event time and the elapsed server/stream time.





[jira] [Created] (KAFKA-3596) Kafka Streams: Window expiration needs to consider more than event time

2016-04-20 Thread Henry Cai (JIRA)
Henry Cai created KAFKA-3596:


 Summary: Kafka Streams: Window expiration needs to consider more 
than event time
 Key: KAFKA-3596
 URL: https://issues.apache.org/jira/browse/KAFKA-3596
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Affects Versions: 0.10.1.0
Reporter: Henry Cai
Assignee: Guozhang Wang
Priority: Minor


Currently state store replication always goes through a compacted Kafka topic. For 
some state stores, e.g. JoinWindow, there are no duplicates in the store, so there 
is not much benefit to using a compacted topic.

The problem with using a compacted topic is that the records can stay on the Kafka 
broker forever. In my use case, my key is ad_id, which is always incrementing and 
not bounded, so I am worried the disk space on the broker for that topic will grow 
forever.

I think we either need the capability to purge the compacted records on the broker, 
or allow us to specify a different compaction option for state store replication.





Re: [DISCUSS] KIP-50 - Enhance Authorizer interface to be aware of supported Principal Types

2016-04-20 Thread Jay Kreps
I think it's great that we're moving the interface to java and fixing some
of the naming foibles.

This isn't explicit in the KIP which just refers to the java package name
(I think), but it looks like you are proposing adding a new authorizer jar
for this new package and adding it as a dependency for the client jar. This
is a bit inconsistent with how we decided to package stuff when we started
with the new clients so it'd be good to work that out.

To date the categorization has been:
1. Anything which is just in the clients is in org.apache.clients under
clients/
2. Anything which is in the server is kafka.* which is under core/
3. Anything which is needed in both places (as it sounds like some enums
for authorization are?) is in common which is under clients/

org.apache.clients and org.apache.common are both pure java and dependency
free other than the compression libraries and slf4j and are packaged into
the kafka-clients jar; the server has its own jar which has richer
dependencies and depends on the client jar.

There are other ways this could have been done--e.g. common could have been
its own library or even split into multiple sub-libraries--but the decision
at that time was just to keep it simple and hard to mess up. Based on the
experience with the scala clients our plan was to be ultra-hostile to any
added client dependencies.

So I think if we're continuing this model we would put the shared
authorizer code somewhere under
clients/src/main/java/org/apache/kafka/common as with the other shared
authorizer. If we're moving away from this model we should probably rethink
things and be consistent with this, at the very least splitting up common
and clients.

-Jay

On Wed, Apr 20, 2016 at 1:04 PM, Ashish Singh  wrote:

> Jun/ Jay/ Gwen/ Harsha/ Ismael,
>
> As you guys have provided feedback on this earlier, could you review the
> KIP again? I have updated the PR 
> as
> well.
>
> On Wed, Apr 20, 2016 at 1:01 PM, Ashish Singh  wrote:
>
> > Hi Grant,
> >
> > On Tue, Apr 19, 2016 at 8:13 AM, Grant Henke 
> wrote:
> >
> >> Hi Ashish,
> >>
> >> Thanks for the updates. I have a few questions below:
> >>
> >> > Move following interfaces to new package, org.apache.kafka.authorizer.
> >> >
> >> >1. Authorizer
> >> >2. Acl
> >> >3. Operation
> >> >4. PermissionType
> >> >5. Resource
> >> >6. ResourceType
> >> >7. KafkaPrincipal
> >> >8. Session
> >> >
> >> >
> >> This means the client would be required to depend on the authorizer
> >> package
> >> as a part of KIP-4. Another option is to have the client objects in
> >> common.
> >> Have we ruled out leaving the interface in the core module?
> >>
> > With this, entities that use Authorizer will depend only on the authorizer
> > package. Third party implementations can have only the authorizer pkg as
> > a dependency. The core and client modules will also have to depend on the
> > authorizer with this approach. Do you see any issue with it?
> >
> >>
> >> Authorizer interface will be updated to remove getter naming convention.
> >>
> >>
> >> Now that this is Java do we still want to change to the Scala naming
> >> convention?
> >>
> > Even in the clients module I do not see the getter naming convention being
> > followed; it is better to be consistent, I guess.
> >
> >>
> >>
> >> Since we are completely rewriting the interface, can we add some (at
> least
> >> one to start with) standard exceptions that each method is recommended
> to
> >> use/throw? This will help the server in KIP-4 provide meaningful error
> >> codes. KAFKA-3507  is
> >> tracking it right now.
> >>
> > That should be good to have. Will include that. Thanks.
> >
> >>
> >> Thanks,
> >> Grant
> >>
> >>
> >>
> >> On Tue, Apr 19, 2016 at 9:48 AM, Ashish Singh 
> >> wrote:
> >>
> >> > I have updated KIP-50
> >> > <
> >> >
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-50+-+Move+Authorizer+to+a+separate+package
> >> > >
> >> > and PR  as per recent
> >> > discussions. Please take a look.
> >> >
> >> > @Harsha / Don, it would be nice if you guys can review the KIP and PR
> as
> >> > well.
> >> > ​
> >> >
> >> > On Mon, Apr 11, 2016 at 7:36 PM, Ashish Singh 
> >> wrote:
> >> >
> >> > > Yes, Jun. I would like to try get option 2 in, if possible in 0.10.
> I
> >> am
> >> > > not asking for delaying 0.10 for it, but some reviews and early
> >> feedback
> >> > > would be great. At this point this is what I have in mind.
> >> > >
> >> > > 1. Move authorizer and related entities to its own package. Note
> that
> >> I
> >> > am
> >> > > proposing to drop scala interface completely. Ranger team is fine
> >> with it
> >> > > and I will update Sentry.
> >> > > 2. The only new public method that will be added to authorizer
> >> interface
> >> > > 
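
For readers following the thread, a minimal sketch of the pure-Java interface 
being discussed, modeled on the existing Scala trait with the getter prefixes 
dropped. This is an assumed shape, not the final KIP-50 API; the placeholder 
types stand in for the other interfaces listed above.

package org.apache.kafka.authorizer;

import java.util.Map;
import java.util.Set;

interface Acl {}
interface Operation {}
interface Resource {}
interface Session {}

public interface Authorizer {
    void configure(Map<String, ?> configs);

    // The core check: may this session perform this operation on this resource?
    boolean authorize(Session session, Operation operation, Resource resource);

    void addAcls(Set<Acl> acls, Resource resource);

    boolean removeAcls(Set<Acl> acls, Resource resource);

    // Getter naming convention dropped, per the discussion above.
    Set<Acl> acls(Resource resource);

    void close();
}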

[jira] [Updated] (KAFKA-3595) Add capability to specify replication compact option for stream store

2016-04-20 Thread Henry Cai (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Cai updated KAFKA-3595:
-
Description: 
Currently state store replication always goes through a compacted Kafka topic. For 
some state stores, e.g. JoinWindow, there are no duplicates in the store, so there 
is not much benefit to using a compacted topic.

The problem with using a compacted topic is that the records can stay on the Kafka 
broker forever. In my use case, my key is ad_id, which is always incrementing and 
not bounded, so I am worried the disk space on the broker for that topic will grow 
forever.

I think we either need the capability to purge the compacted records on the broker, 
or allow us to specify a different compaction option for state store replication.

  was: Add the ability to record metrics in the serializer/deserializer 
components. As it stands, I cannot record latency/sensor metrics since the API 
does not provide the context at the serde level. Exposing the ProcessorContext 
at this level may not be the solution; perhaps change the configure method 
to take a different config or init context and make the StreamMetrics available 
in that context along with config information.


> Add capability to specify replication compact option for stream store
> -
>
> Key: KAFKA-3595
> URL: https://issues.apache.org/jira/browse/KAFKA-3595
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Henry Cai
>Assignee: Guozhang Wang
>Priority: Minor
>
> Currently state store replication always goes through a compacted Kafka
> topic.  For some state stores, e.g. JoinWindow, there are no duplicates in
> the store, so there is not much benefit in using a compacted topic.
> The problem with using a compacted topic is that the records can stay on
> the Kafka broker forever.  In my use case, my key is ad_id; it is
> incrementing all the time and not bounded, so I am worried the disk usage
> on the broker for that topic will grow forever.
> I think we either need the capability to purge the compacted records on
> the broker, or the ability to specify a different compaction option for
> state store replication.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
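
An aside for later readers: the capability asked for above eventually took the
form of topic-level cleanup settings on changelog topics (compaction combined
with time-based retention). The sketch below is illustrative only and uses the
modern AdminClient API, which postdates this report; the bootstrap address and
changelog topic name are assumptions.

{code:java}
import java.util.Collection;
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;
import org.apache.kafka.common.config.TopicConfig;

public class ChangelogRetentionExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // Hypothetical changelog topic; substitute the store's real changelog.
            ConfigResource topic =
                new ConfigResource(ConfigResource.Type.TOPIC, "app-store-changelog");
            // "compact,delete" keeps compaction but also expires old segments,
            // so unbounded keys (like the ad_id above) cannot grow the topic forever.
            Collection<AlterConfigOp> ops = List.of(
                new AlterConfigOp(
                    new ConfigEntry(TopicConfig.CLEANUP_POLICY_CONFIG, "compact,delete"),
                    AlterConfigOp.OpType.SET),
                new AlterConfigOp(
                    new ConfigEntry(TopicConfig.RETENTION_MS_CONFIG, "604800000"), // 7 days
                    AlterConfigOp.OpType.SET));
            admin.incrementalAlterConfigs(Map.of(topic, ops)).all().get();
        }
    }
}
{code}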


[jira] [Updated] (KAFKA-3595) Add capability to specify replication compact option for stream store

2016-04-20 Thread Henry Cai (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Cai updated KAFKA-3595:
-
Affects Version/s: (was: 0.9.0.1)
   0.10.1.0

> Add capability to specify replication compact option for stream store
> -
>
> Key: KAFKA-3595
> URL: https://issues.apache.org/jira/browse/KAFKA-3595
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Henry Cai
>Assignee: Guozhang Wang
>Priority: Minor
>
> Add the ability to record metrics in the serializer/deserializer components. 
> As it stands, I cannot record latency/sensor metrics since the API does not 
> provide the context at the serde levels. Exposing the ProcessorContext at 
> this level may not be the solution; but perhaps change the configure method 
> to take a different config or init context and make the StreamMetrics 
> available in that context along with config information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3595) Add capability to specify replication compact option for stream store

2016-04-20 Thread Henry Cai (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Henry Cai updated KAFKA-3595:
-
Issue Type: Improvement  (was: New Feature)

> Add capability to specify replication compact option for stream store
> -
>
> Key: KAFKA-3595
> URL: https://issues.apache.org/jira/browse/KAFKA-3595
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Affects Versions: 0.10.1.0
>Reporter: Henry Cai
>Assignee: Guozhang Wang
>Priority: Minor
>
> Add the ability to record metrics in the serializer/deserializer components. 
> As it stands, I cannot record latency/sensor metrics since the API does not 
> provide the context at the serde levels. Exposing the ProcessorContext at 
> this level may not be the solution; but perhaps change the configure method 
> to take a different config or init context and make the StreamMetrics 
> available in that context along with config information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3595) Add capability to specify replication compact option for stream store

2016-04-20 Thread Henry Cai (JIRA)
Henry Cai created KAFKA-3595:


 Summary: Add capability to specify replication compact option for 
stream store
 Key: KAFKA-3595
 URL: https://issues.apache.org/jira/browse/KAFKA-3595
 Project: Kafka
  Issue Type: New Feature
  Components: streams
Affects Versions: 0.9.0.1
Reporter: Henry Cai
Assignee: Guozhang Wang
Priority: Minor


Add the ability to record metrics in the serializer/deserializer components. As 
it stands, I cannot record latency/sensor metrics since the API does not 
provide the context at the serde levels. Exposing the ProcessorContext at this 
level may not be the solution; but perhaps change the configure method to take 
a different config or init context and make the StreamMetrics available in that 
context along with config information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-2370) Add pause/unpause connector support

2016-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250720#comment-15250720
 ] 

ASF GitHub Bot commented on KAFKA-2370:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1087


> Add pause/unpause connector support
> ---
>
> Key: KAFKA-2370
> URL: https://issues.apache.org/jira/browse/KAFKA-2370
> Project: Kafka
>  Issue Type: Sub-task
>  Components: KafkaConnect
>Reporter: Ewen Cheslack-Postava
>Assignee: Jason Gustafson
>Priority: Critical
> Fix For: 0.10.1.0
>
>
> It will sometimes be useful to pause/unpause connectors. For example, if you
> know planned maintenance will occur on the source/destination system, it
> would make sense to pause and then resume (but not delete and then restore)
> a connector.
> This likely requires support in all Coordinator implementations 
> (standalone/distributed) to trigger the events.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-2370) Add pause/unpause connector support

2016-04-20 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2370?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-2370:
-
Resolution: Fixed
Status: Resolved  (was: Patch Available)

Issue resolved by pull request 1087
[https://github.com/apache/kafka/pull/1087]

> Add pause/unpause connector support
> ---
>
> Key: KAFKA-2370
> URL: https://issues.apache.org/jira/browse/KAFKA-2370
> Project: Kafka
>  Issue Type: Sub-task
>  Components: KafkaConnect
>Reporter: Ewen Cheslack-Postava
>Assignee: Jason Gustafson
>Priority: Critical
> Fix For: 0.10.1.0
>
>
> It will sometimes be useful to pause/unpause connectors. For example, if you
> know planned maintenance will occur on the source/destination system, it
> would make sense to pause and then resume (but not delete and then restore)
> a connector.
> This likely requires support in all Coordinator implementations 
> (standalone/distributed) to trigger the events.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
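
For reference, the feature resolved above is driven through the Connect REST
API's pause and resume endpoints (PUT /connectors/{name}/pause and PUT
/connectors/{name}/resume). A minimal sketch of calling them from Java; the
worker address (localhost:8083) and connector name are assumptions for
illustration.

{code:java}
import java.net.HttpURLConnection;
import java.net.URL;

public class PauseConnectorExample {
    public static void main(String[] args) throws Exception {
        // Assumed Connect worker REST endpoint and connector name.
        URL url = new URL("http://localhost:8083/connectors/my-connector/pause");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT"); // PUT .../resume undoes the pause
        // The worker replies 202 Accepted; the pause takes effect asynchronously.
        System.out.println("HTTP " + conn.getResponseCode());
        conn.disconnect();
    }
}
{code}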


[GitHub] kafka pull request: KAFKA-2370: kafka connect pause/resume API

2016-04-20 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/1087


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3594) Kafka new producer retries doesn't work in 0.9.0.1

2016-04-20 Thread Nicolas PHUNG (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas PHUNG updated KAFKA-3594:
-
Description: 
Hello,

I'm encountering an issue with the new Producer on a 0.9.0.1 client with a
0.9.0.1 Kafka broker when the Kafka brokers are offline, for example. It seems
retries don't work anymore, and I get the following error logs:

{noformat}
play.api.Application$$anon$1: Execution exception[[IllegalStateException: 
Memory records is not writable]]
at play.api.Application$class.handleError(Application.scala:296) 
~[com.typesafe.play.play_2.11-2.3.10.jar:2.3.10]
at play.api.DefaultApplication.handleError(Application.scala:402) 
[com.typesafe.play.play_2.11-2.3.10.jar:2.3.10]
at 
play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3$$anonfun$applyOrElse$4.apply(PlayDefaultUpstreamHandler.scala:320)
 [com.typesafe.play.play_2.11-2.3.10.jar:2.3.10]
at 
play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3$$anonfun$applyOrElse$4.apply(PlayDefaultUpstreamHandler.scala:320)
 [com.typesafe.play.play_2.11-2.3.10.jar:2.3.10]
at scala.Option.map(Option.scala:146) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at 
play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3.applyOrElse(PlayDefaultUpstreamHandler.scala:320)
 [com.typesafe.play.play_2.11-2.3.10.jar:2.3.10]
at 
play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3.applyOrElse(PlayDefaultUpstreamHandler.scala:316)
 [com.typesafe.play.play_2.11-2.3.10.jar:2.3.10]
at 
scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:346) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at 
scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:345) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at 
play.api.libs.iteratee.Execution$trampoline$.execute(Execution.scala:46) 
[com.typesafe.play.play-iteratees_2.11-2.3.10.jar:2.3.10]
at 
scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at 
scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at scala.concurrent.Promise$class.complete(Promise.scala:55) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at 
scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at 
akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
 [com.typesafe.akka.akka-actor_2.11-2.3.4.jar:na]
at 
akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
 [com.typesafe.akka.akka-actor_2.11-2.3.4.jar:na]
at 
akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
 [com.typesafe.akka.akka-actor_2.11-2.3.4.jar:na]
at 
akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
 [com.typesafe.akka.akka-actor_2.11-2.3.4.jar:na]
at 
scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58) 
[com.typesafe.akka.akka-actor_2.11-2.3.4.jar:na]
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41) 
[com.typesafe.akka.akka-actor_2.11-2.3.4.jar:na]
at 
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
 [com.typesafe.akka.akka-actor_2.11-2.3.4.jar:na]
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
 [org.scala-lang.scala-library-2.11.8.jar:na]
at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at 
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
 [org.scala-lang.scala-library-2.11.8.jar:na]
Caused by: java.lang.IllegalStateException: Memory records is not writable
at 
org.apache.kafka.common.record.MemoryRecords.append(MemoryRecords.java:93) 
~[org.apache.kafka.kafka-clients-0.9.0.1-cp1.jar:na]
at 

[jira] [Created] (KAFKA-3594) Kafka new producer retries doesn't work in 0.9.0.1

2016-04-20 Thread Nicolas PHUNG (JIRA)
Nicolas PHUNG created KAFKA-3594:


 Summary: Kafka new producer retries doesn't work in 0.9.0.1
 Key: KAFKA-3594
 URL: https://issues.apache.org/jira/browse/KAFKA-3594
 Project: Kafka
  Issue Type: Bug
  Components: producer 
Affects Versions: 0.9.0.1
 Environment: Debian 7.8 Wheezy / Kafka 0.9.0.1 (Confluent 2.0.1)
Reporter: Nicolas PHUNG
Assignee: Jun Rao


Hello,

I'm encountering an issue with the new Producer on a 0.9.0.1 client with a
0.9.0.1 Kafka broker when the Kafka brokers are offline, for example. It seems
retries don't work anymore, and I get the following error logs:

{noformat}
play.api.Application$$anon$1: Execution exception[[IllegalStateException: 
Memory records is not writable]]
at play.api.Application$class.handleError(Application.scala:296) 
~[com.typesafe.play.play_2.11-2.3.10.jar:2.3.10]
at play.api.DefaultApplication.handleError(Application.scala:402) 
[com.typesafe.play.play_2.11-2.3.10.jar:2.3.10]
at 
play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3$$anonfun$applyOrElse$4.apply(PlayDefaultUpstreamHandler.scala:320)
 [com.typesafe.play.play_2.11-2.3.10.jar:2.3.10]
at 
play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3$$anonfun$applyOrElse$4.apply(PlayDefaultUpstreamHandler.scala:320)
 [com.typesafe.play.play_2.11-2.3.10.jar:2.3.10]
at scala.Option.map(Option.scala:146) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at 
play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3.applyOrElse(PlayDefaultUpstreamHandler.scala:320)
 [com.typesafe.play.play_2.11-2.3.10.jar:2.3.10]
at 
play.core.server.netty.PlayDefaultUpstreamHandler$$anonfun$3.applyOrElse(PlayDefaultUpstreamHandler.scala:316)
 [com.typesafe.play.play_2.11-2.3.10.jar:2.3.10]
at 
scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:346) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at 
scala.concurrent.Future$$anonfun$recoverWith$1.apply(Future.scala:345) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at 
play.api.libs.iteratee.Execution$trampoline$.execute(Execution.scala:46) 
[com.typesafe.play.play-iteratees_2.11-2.3.10.jar:2.3.10]
at 
scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at 
scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at scala.concurrent.Promise$class.complete(Promise.scala:55) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at 
scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:237) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at 
akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.processBatch$1(BatchingExecutor.scala:67)
 [com.typesafe.akka.akka-actor_2.11-2.3.4.jar:na]
at 
akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:82)
 [com.typesafe.akka.akka-actor_2.11-2.3.4.jar:na]
at 
akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
 [com.typesafe.akka.akka-actor_2.11-2.3.4.jar:na]
at 
akka.dispatch.BatchingExecutor$Batch$$anonfun$run$1.apply(BatchingExecutor.scala:59)
 [com.typesafe.akka.akka-actor_2.11-2.3.4.jar:na]
at 
scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58) 
[com.typesafe.akka.akka-actor_2.11-2.3.4.jar:na]
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41) 
[com.typesafe.akka.akka-actor_2.11-2.3.4.jar:na]
at 
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
 [com.typesafe.akka.akka-actor_2.11-2.3.4.jar:na]
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at 
scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
 [org.scala-lang.scala-library-2.11.8.jar:na]
at 
scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979) 
[org.scala-lang.scala-library-2.11.8.jar:na]
at 
scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
 [org.scala-lang.scala-library-2.11.8.jar:na]
Caused by: 

[jira] [Created] (KAFKA-3593) kafka.network.Processor throws ArrayIndexOutOfBoundsException

2016-04-20 Thread S Shan (JIRA)
S Shan created KAFKA-3593:
-

 Summary: kafka.network.Processor throws 
ArrayIndexOutOfBoundsException
 Key: KAFKA-3593
 URL: https://issues.apache.org/jira/browse/KAFKA-3593
 Project: Kafka
  Issue Type: Bug
  Components: network
Affects Versions: 0.9.0.1
 Environment: Red Hat Enterprise Linux Workstation release 6.7 
(Santiago)

Reporter: S Shan
Assignee: Jun Rao


While running the tests and examples from the Kafka C/C++ library librdkafka
(version 0.9), Kafka repeatedly threw an ArrayIndexOutOfBoundsException due to
"Array index out of range".

[2016-04-13 12:02:27,490] ERROR Processor got uncaught exception. 
(kafka.network.Processor)
java.lang.ArrayIndexOutOfBoundsException: Array index out of range: 17
at org.apache.kafka.common.protocol.ApiKeys.forId(ApiKeys.java:68)
at 
org.apache.kafka.common.requests.AbstractRequest.getRequest(AbstractRequest.java:39)
at kafka.network.RequestChannel$Request.(RequestChannel.scala:79)
at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:426)
at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:421)
at scala.collection.Iterator$class.foreach(Iterator.scala:742)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.network.Processor.run(SocketServer.scala:421)
at java.lang.Thread.run(Thread.java:798)
...
[2016-04-20 13:39:59,973] INFO [Group Metadata Manager on Broker 2]: Removed 0 
expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2016-04-20 13:40:00,446] INFO [Group Metadata Manager on Broker 0]: Removed 0 
expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2016-04-20 13:40:07,135] ERROR Processor got uncaught exception. 
(kafka.network.Processor)
java.lang.ArrayIndexOutOfBoundsException
at org.apache.kafka.common.protocol.ApiKeys.forId(ApiKeys.java:68)
at 
org.apache.kafka.common.requests.AbstractRequest.getRequest(AbstractRequest.java:39)
at kafka.network.RequestChannel$Request.(RequestChannel.scala:79)
at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:426)
at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:421)
at scala.collection.Iterator$class.foreach(Iterator.scala:742)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.network.Processor.run(SocketServer.scala:421)
at java.lang.Thread.run(Thread.java:798)
[2016-04-20 13:40:17,478] ERROR Processor got uncaught exception. 
(kafka.network.Processor)
java.lang.ArrayIndexOutOfBoundsException
at org.apache.kafka.common.protocol.ApiKeys.forId(ApiKeys.java:68)
at 
org.apache.kafka.common.requests.AbstractRequest.getRequest(AbstractRequest.java:39)
at kafka.network.RequestChannel$Request.(RequestChannel.scala:79)
at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:426)
at kafka.network.Processor$$anonfun$run$11.apply(SocketServer.scala:421)
at scala.collection.Iterator$class.foreach(Iterator.scala:742)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at kafka.network.Processor.run(SocketServer.scala:421)
at java.lang.Thread.run(Thread.java:798)




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
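
The traces above all fail in ApiKeys.forId, which indexes an array by the
request's api key id; id 17 comes from a client speaking a newer protocol than
the 0.9 broker knows. A hedged sketch of the defensive lookup one would want,
using an abbreviated, hypothetical enum rather than Kafka's actual code:

{code:java}
// Illustrative only -- an abbreviated stand-in for Kafka's ApiKeys enum.
enum ApiKey {
    PRODUCE(0), FETCH(1), LIST_OFFSETS(2);

    final int id;

    ApiKey(int id) {
        this.id = id;
    }

    private static final ApiKey[] BY_ID = values();

    // Reject unknown ids with a clear error instead of letting the raw
    // array access throw ArrayIndexOutOfBoundsException.
    static ApiKey forId(int id) {
        if (id < 0 || id >= BY_ID.length)
            throw new IllegalArgumentException("Unexpected api key id: " + id);
        return BY_ID[id];
    }
}
{code}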


[jira] [Commented] (KAFKA-3592) System tests - don't hardcode paths to scripts

2016-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3592?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250663#comment-15250663
 ] 

ASF GitHub Bot commented on KAFKA-3592:
---

GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/1245

KAFKA-3592: System test - configurable paths [WIP]

This patch adds logic for the following:
- remove hard-coded paths to various scripts and jars in kafkatest service 
classes
- provide a mechanism for overriding path resolution logic with a 
"pluggable" path resolver class

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka configurable-install-path

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1245.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1245


commit 1bd1d63e8aeec23bf043774af9aa59f0b038084a
Author: Geoff Anderson 
Date:   2016-04-11T20:09:40Z

First pass on updating path resolution

commit 088e6d1a2e8ea30905d899c8fb1cf60c6fa3f934
Author: Geoff Anderson 
Date:   2016-04-12T00:36:38Z

Moved system test path resolution logic

commit 386f995594d0fb8769d43b5914b43fe7a77085db
Author: Geoff Anderson 
Date:   2016-04-12T01:31:39Z

Renamed kafka system test path resolver

commit c6c764e867450e5b263c66a58f46c735bc0bc622
Author: Geoff Anderson 
Date:   2016-04-14T01:13:49Z

Cleanup of path resolution, moved kafka version to avoid circular imports.

commit 9a0a8bedff68a9998c87c8382f9672abceef7233
Author: Geoff Anderson 
Date:   2016-04-15T00:10:32Z

Updated services to use path resolver

commit a0751503445678b4ae76e1fc33bd1b0fbf374142
Author: Geoff Anderson 
Date:   2016-04-15T23:17:43Z

Refactored path resolver; simplified method definitions

commit 0cb26c7f2b186bbbc0203826f88cf189faf6d1f8
Author: Geoff Anderson 
Date:   2016-04-19T17:56:05Z

Added missing path resolver

commit 293ae1dafa10eef2ff36d08f2877e565ed083ab5
Author: Geoff Anderson 
Date:   2016-04-20T19:44:19Z

Added python unit tests for system test path resolution logic




> System tests - don't hardcode paths to scripts
> --
>
> Key: KAFKA-3592
> URL: https://issues.apache.org/jira/browse/KAFKA-3592
> Project: Kafka
>  Issue Type: Improvement
>  Components: system tests
>Reporter: Geoff Anderson
>Assignee: Geoff Anderson
> Fix For: 0.10.0.0
>
>
> In the various service classes in the kafkatest module (aka python system 
> test code), there are numerous hard-coded paths to bin scripts which make it 
> hard to run tests on anything other than the very specific installation 
> performed in the vagrant provisioning files.
> This change is partially a standard DRY improvement, and also should make it 
> possible to reuse service classes defined in kafkatest without being confined 
> to any particular assumptions about directory layout.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
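
The "pluggable path resolver" amounts to a strategy interface with a default
implementation matching the vagrant layout. The patch itself is Python, inside
the kafkatest module; the Java sketch below only illustrates the pattern, and
every name in it is hypothetical.

{code:java}
// Hypothetical names; the real change lives in the Python kafkatest module.
interface PathResolver {
    String binScript(String scriptName); // e.g. "kafka-topics.sh" -> absolute path
}

// Default strategy: the fixed layout produced by the vagrant provisioning.
class DefaultPathResolver implements PathResolver {
    private final String installRoot;

    DefaultPathResolver(String installRoot) {
        this.installRoot = installRoot;
    }

    @Override
    public String binScript(String scriptName) {
        return installRoot + "/bin/" + scriptName;
    }
}

class PathResolverDemo {
    public static void main(String[] args) {
        // Service classes ask the resolver instead of hard-coding paths, so a
        // custom resolver can point at any directory layout.
        PathResolver resolver = new DefaultPathResolver("/opt/kafka-dev");
        System.out.println(resolver.binScript("kafka-topics.sh"));
    }
}
{code}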


[GitHub] kafka pull request: KAFKA-3592: System test - configurable paths [...

2016-04-20 Thread granders
GitHub user granders opened a pull request:

https://github.com/apache/kafka/pull/1245

KAFKA-3592: System test - configurable paths [WIP]

This patch adds logic for the following:
- remove hard-coded paths to various scripts and jars in kafkatest service 
classes
- provide a mechanism for overriding path resolution logic with a 
"pluggable" path resolver class

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/confluentinc/kafka configurable-install-path

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1245.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1245


commit 1bd1d63e8aeec23bf043774af9aa59f0b038084a
Author: Geoff Anderson 
Date:   2016-04-11T20:09:40Z

First pass on updating path resolution

commit 088e6d1a2e8ea30905d899c8fb1cf60c6fa3f934
Author: Geoff Anderson 
Date:   2016-04-12T00:36:38Z

Moved system test path resolution logic

commit 386f995594d0fb8769d43b5914b43fe7a77085db
Author: Geoff Anderson 
Date:   2016-04-12T01:31:39Z

Renamed kafka system test path resolver

commit c6c764e867450e5b263c66a58f46c735bc0bc622
Author: Geoff Anderson 
Date:   2016-04-14T01:13:49Z

Cleanup of path resolution, moved kafka version to avoid circular imports.

commit 9a0a8bedff68a9998c87c8382f9672abceef7233
Author: Geoff Anderson 
Date:   2016-04-15T00:10:32Z

Updated services to use path resolver

commit a0751503445678b4ae76e1fc33bd1b0fbf374142
Author: Geoff Anderson 
Date:   2016-04-15T23:17:43Z

Refactored path resolver; simplified method definitions

commit 0cb26c7f2b186bbbc0203826f88cf189faf6d1f8
Author: Geoff Anderson 
Date:   2016-04-19T17:56:05Z

Added missing path resolver

commit 293ae1dafa10eef2ff36d08f2877e565ed083ab5
Author: Geoff Anderson 
Date:   2016-04-20T19:44:19Z

Added python unit tests for system test path resolution logic




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Created] (KAFKA-3592) System tests - don't hardcode paths to scripts

2016-04-20 Thread Geoff Anderson (JIRA)
Geoff Anderson created KAFKA-3592:
-

 Summary: System tests - don't hardcode paths to scripts
 Key: KAFKA-3592
 URL: https://issues.apache.org/jira/browse/KAFKA-3592
 Project: Kafka
  Issue Type: Improvement
  Components: system tests
Reporter: Geoff Anderson
Assignee: Geoff Anderson
 Fix For: 0.10.0.0


In the various service classes in the kafkatest module (aka python system test 
code), there are numerous hard-coded paths to bin scripts which make it hard to 
run tests on anything other than the very specific installation performed in 
the vagrant provisioning files.

This change is partially a standard DRY improvement, and also should make it 
possible to reuse service classes defined in kafkatest without being confined 
to any particular assumptions about directory layout.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-3035) Transient: kafka.api.PlaintextConsumerTest > testAutoOffsetReset FAILED

2016-04-20 Thread Liquan Pei (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liquan Pei reassigned KAFKA-3035:
-

Assignee: Liquan Pei

> Transient: kafka.api.PlaintextConsumerTest > testAutoOffsetReset FAILED
> ---
>
> Key: KAFKA-3035
> URL: https://issues.apache.org/jira/browse/KAFKA-3035
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>Assignee: Liquan Pei
>  Labels: transient-unit-test-failure
>
> See: https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/1868/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-1981) Make log compaction point configurable

2016-04-20 Thread Eric Wasserman (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250643#comment-15250643
 ] 

Eric Wasserman commented on KAFKA-1981:
---

I apparently don't have any privileges to create a KIP in Confluence (no
"Create" button). My username is ewasserman. How should I submit it?

> Make log compaction point configurable
> --
>
> Key: KAFKA-1981
> URL: https://issues.apache.org/jira/browse/KAFKA-1981
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.8.2.0
>Reporter: Jay Kreps
>  Labels: newbie++
>
> Currently if you enable log compaction the compactor will kick in whenever 
> you hit a certain "dirty ratio", i.e. when 50% of your data is uncompacted. 
> Other than this we don't give you fine-grained control over when compaction 
> occurs. In addition we never compact the active segment (since it is still 
> being written to).
> The result is that you can't really guarantee that a consumer
> will get every update to a compacted topic--if the consumer falls behind a 
> bit it might just get the compacted version.
> This is usually fine, but it would be nice to make this more configurable so 
> you could set either a # messages, size, or time bound for compaction.
> This would let you say, for example, "any consumer that is no more than 1 
> hour behind will get every message."
> This should be relatively easy to implement since it just impacts the 
> end-point the compactor considers available for compaction. I think we 
> already have that concept, so this would just be some other overrides to add 
> in when calculating that.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
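
For later readers: this request became KIP-58 and a per-topic
min.compaction.lag.ms setting. As a hedged sketch (using the modern
AdminClient API, which postdates this thread; the topic name and sizing are
assumptions), Jay's one-hour example would look like:

{code:java}
import java.util.List;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class CompactionLagExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // min.compaction.lag.ms = 1h: any consumer no more than one hour
            // behind still sees every message before it can be compacted away.
            NewTopic topic = new NewTopic("events", 6, (short) 3).configs(Map.of(
                TopicConfig.CLEANUP_POLICY_CONFIG, TopicConfig.CLEANUP_POLICY_COMPACT,
                TopicConfig.MIN_CLEANABLE_DIRTY_RATIO_CONFIG, "0.5",
                TopicConfig.MIN_COMPACTION_LAG_MS_CONFIG, "3600000"));
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}
{code}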


Re: [DISCUSS] KIP-50 - Enhance Authorizer interface to be aware of supported Principal Types

2016-04-20 Thread Ashish Singh
Jun/ Jay/ Gwen/ Harsha/ Ismael,

As you guys have provided feedback on this earlier, could you review the
KIP again? I have updated the PR  as
well.

On Wed, Apr 20, 2016 at 1:01 PM, Ashish Singh  wrote:

> Hi Grant,
>
> On Tue, Apr 19, 2016 at 8:13 AM, Grant Henke  wrote:
>
>> Hi Ashish,
>>
>> Thanks for the updates. I have a few questions below:
>>
> >> > Move the following interfaces to a new package, org.apache.kafka.authorizer.
>> >
>> >1. Authorizer
>> >2. Acl
>> >3. Operation
>> >4. PermissionType
>> >5. Resource
>> >6. ResourceType
>> >7. KafkaPrincipal
>> >8. Session
>> >
>> >
>> This means the client would be required to depend on the authorizer
>> package
>> as a part of KIP-4. Another option is to have the client objects in
>> common.
>> Have we ruled out leaving the interface in the core module?
>>
>  With this, entities that use Authorizer will depend only on the
> authorizer package. Third-party implementations can depend on the
> authorizer package alone; the core and client modules will also have to
> depend on the authorizer with this approach. Do you see any issue with
> that?
>
>>
>> Authorizer interface will be updated to remove getter naming convention.
>>
>>
>> Now that this is Java do we still want to change to the Scala naming
>> convention?
>>
> Even in the clients module I do not see the getter naming convention
> being followed; it is better to be consistent, I guess.
>
>>
>>
>> Since we are completely rewriting the interface, can we add some (at least
>> one to start with) standard exceptions that each method is recommended to
>> use/throw? This will help the server in KIP-4 provide meaningful error
>> codes. KAFKA-3507  is
>> tracking it right now.
>>
> That should be good to have. Will include that. Thanks.
>
>>
>> Thanks,
>> Grant
>>
>>
>>
>> On Tue, Apr 19, 2016 at 9:48 AM, Ashish Singh 
>> wrote:
>>
>> > I have updated KIP-50
>> > <
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-50+-+Move+Authorizer+to+a+separate+package
>> > >
>> > and PR  as per recent
>> > discussions. Please take a look.
>> >
>> > @Harsha / Don, it would be nice if you guys can review the KIP and PR as
>> > well.
>> > ​
>> >
>> > On Mon, Apr 11, 2016 at 7:36 PM, Ashish Singh 
>> wrote:
>> >
>> > > Yes, Jun. I would like to try get option 2 in, if possible in 0.10. I
>> am
>> > > not asking for delaying 0.10 for it, but some reviews and early
>> feedback
>> > > would be great. At this point this is what I have in mind.
>> > >
>> > > 1. Move authorizer and related entities to its own package. Note that
>> I
>> > am
>> > > proposing to drop scala interface completely. Ranger team is fine
>> with it
>> > > and I will update Sentry.
>> > > 2. The only new public method that will be added to authorizer
>> interface
>> > > is description().
>> > > 3. Update SimpleAclAuthorizer to use the new interface and classes.
>> > >
>> > > On Mon, Apr 11, 2016 at 6:38 PM, Jun Rao  wrote:
>> > >
>> > >> Ashish,
>> > >>
>> > >> So, you want to take a shot at option 2 for 0.10.0? That's fine with
>> me
>> > >> too. I am just not sure if we have enough time to think through the
>> > >> changes.
>> > >>
>> > >> Thanks,
>> > >>
>> > >> Jun
>> > >>
>> > >> On Mon, Apr 11, 2016 at 6:05 PM, Ashish Singh 
>> > >> wrote:
>> > >>
>> > >> > Hello Jun,
>> > >> >
>> > >> > The 3rd option will require Apache Sentry to go GA with current
>> > >> authorizer
>> > >> > interface, and at this point it seems that the interface won't last
>> > >> long.
>> > >> > Within a few months, Sentry will have to make a breaking change. I
>> do
>> > >> > understand that Kafka should not have to delay its release due to
>> one
>> > of
>> > >> > the authorizer implementations. However, can we assist Sentry
>> users to
>> > >> > avoid that breaking upgrade? I think it is worth a shot. If the
>> > changes
>> > >> are
>> > >> > not done by 0.10 code freeze, then sure lets punt it to next
>> release.
>> > >> Does
>> > >> > this seem reasonable to you?
>> > >> >
>> > >> > On Sun, Apr 10, 2016 at 11:42 AM, Jun Rao 
>> wrote:
>> > >> >
>> > >> > > Ashish,
>> > >> > >
>> > >> > > A 3rd option is to in 0.10.0, just sanity check the principal
>> type
>> > in
>> > >> the
>> > >> > > implementation of addAcls/removeAcls of Authorizer, but don't
>> change
>> > >> the
>> > >> > > Authorizer api to add the getDescription() method. This fixes the
>> > >> > immediate
>> > >> > > issue that an acl rule with the wrong principal type is silently
>> > >> ignored.
>> > >> > > Knowing valid user types is nice, but not critical (we can
>> include
>> > the
>> > >> > > supported user type in the UnsupportedPrincipalTypeException
>> thrown
>> > >> from
> >> > > addAcls/removeAcls). This will give us more time to clean up the
> >> > > Authorizer api post 0.10.0.

Re: [DISCUSS] KIP-50 - Enhance Authorizer interface to be aware of supported Principal Types

2016-04-20 Thread Ashish Singh
Hi Grant,

On Tue, Apr 19, 2016 at 8:13 AM, Grant Henke  wrote:

> Hi Ashish,
>
> Thanks for the updates. I have a few questions below:
>
> > Move the following interfaces to a new package, org.apache.kafka.authorizer.
> >
> >1. Authorizer
> >2. Acl
> >3. Operation
> >4. PermissionType
> >5. Resource
> >6. ResourceType
> >7. KafkaPrincipal
> >8. Session
> >
> >
> This means the client would be required to depend on the authorizer package
> as a part of KIP-4. Another option is to have the client objects in common.
> Have we ruled out leaving the interface in the core module?
>
 With this, entities that use Authorizer will depend only on the authorizer
package. Third-party implementations can depend on the authorizer package
alone; the core and client modules will also have to depend on the
authorizer with this approach. Do you see any issue with that?

>
> Authorizer interface will be updated to remove getter naming convention.
>
>
> Now that this is Java do we still want to change to the Scala naming
> convention?
>
Even in the clients module I do not see the getter naming convention
being followed; it is better to be consistent, I guess.

>
>
> Since we are completely rewriting the interface, can we add some (at least
> one to start with) standard exceptions that each method is recommended to
> use/throw? This will help the server in KIP-4 provide meaningful error
> codes. KAFKA-3507  is
> tracking it right now.
>
That should be good to have. Will include that. Thanks.

>
> Thanks,
> Grant
>
>
>
> On Tue, Apr 19, 2016 at 9:48 AM, Ashish Singh  wrote:
>
> > I have updated KIP-50
> > <
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-50+-+Move+Authorizer+to+a+separate+package
> > >
> > and PR  as per recent
> > discussions. Please take a look.
> >
> > @Harsha / Don, it would be nice if you guys can review the KIP and PR as
> > well.
> >
> > On Mon, Apr 11, 2016 at 7:36 PM, Ashish Singh 
> wrote:
> >
> > > Yes, Jun. I would like to try get option 2 in, if possible in 0.10. I
> am
> > > not asking for delaying 0.10 for it, but some reviews and early
> feedback
> > > would be great. At this point this is what I have in mind.
> > >
> > > 1. Move authorizer and related entities to its own package. Note that I
> > am
> > > proposing to drop scala interface completely. Ranger team is fine with
> it
> > > and I will update Sentry.
> > > 2. The only new public method that will be added to authorizer
> interface
> > > is description().
> > > 3. Update SimpleAclAuthorizer to use the new interface and classes.
> > >
> > > On Mon, Apr 11, 2016 at 6:38 PM, Jun Rao  wrote:
> > >
> > >> Ashish,
> > >>
> > >> So, you want to take a shot at option 2 for 0.10.0? That's fine with
> me
> > >> too. I am just not sure if we have enough time to think through the
> > >> changes.
> > >>
> > >> Thanks,
> > >>
> > >> Jun
> > >>
> > >> On Mon, Apr 11, 2016 at 6:05 PM, Ashish Singh 
> > >> wrote:
> > >>
> > >> > Hello Jun,
> > >> >
> > >> > The 3rd option will require Apache Sentry to go GA with current
> > >> authorizer
> > >> > interface, and at this point it seems that the interface won't last
> > >> long.
> > >> > Within a few months, Sentry will have to make a breaking change. I
> do
> > >> > understand that Kafka should not have to delay its release due to
> one
> > of
> > >> > the authorizer implementations. However, can we assist Sentry users
> to
> > >> > avoid that breaking upgrade? I think it is worth a shot. If the
> > changes
> > >> are
> > >> > not done by 0.10 code freeze, then sure lets punt it to next
> release.
> > >> Does
> > >> > this seem reasonable to you?
> > >> >
> > >> > On Sun, Apr 10, 2016 at 11:42 AM, Jun Rao  wrote:
> > >> >
> > >> > > Ashish,
> > >> > >
> > >> > > A 3rd option is to in 0.10.0, just sanity check the principal type
> > in
> > >> the
> > >> > > implementation of addAcls/removeAcls of Authorizer, but don't
> change
> > >> the
> > >> > > Authorizer api to add the getDescription() method. This fixes the
> > >> > immediate
> > >> > > issue that an acl rule with the wrong principal type is silently
> > >> ignored.
> > >> > > Knowing valid user types is nice, but not critical (we can include
> > the
> > >> > > supported user type in the UnsupportedPrincipalTypeException
> thrown
> > >> from
> > >> > > addAcls/removeAcls). This will give us more time to clean up the
> > >> > Authorizer
> > >> > > api post 0.10.0.
> > >> > >
> > >> > > Thanks
> > >> > >
> > >> > > Jun
> > >> > >
> > >> > > On Fri, Apr 8, 2016 at 9:04 AM, Ashish Singh  >
> > >> > wrote:
> > >> > >
> > >> > > > Thanks for the input Don. One of the possible paths for Option 2
> > is
> > >> to
> > >> > > > completely drop Scala interface, would that be Ok with you
> folks?
> > >> 
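
To make the shape of this proposal concrete, here is a rough sketch of what a
standalone Java authorizer package could look like, combining the points from
the thread (no getter prefixes, the new description() method, and a standard
exception thrown from addAcls/removeAcls for unsupported principal types). It
is illustrative only, with placeholder types, and is not the interface the KIP
finally specified.

{code:java}
import java.util.List;
import java.util.Map;

// Placeholder stand-ins for the types the KIP proposes to move into the
// new package (Acl, Operation, Resource, Session, etc.).
interface Acl {}
interface Operation {}
interface Resource {}
interface Session {}

// One candidate "standard exception", as discussed above: thrown from
// addAcls/removeAcls, naming the supported principal types in its message.
class UnsupportedPrincipalTypeException extends RuntimeException {
    UnsupportedPrincipalTypeException(String message) {
        super(message);
    }
}

interface Authorizer {
    void configure(Map<String, ?> configs);

    // The one new public method proposed in the thread; no "get" prefix,
    // matching the naming convention already used in the clients module.
    String description();

    boolean authorize(Session session, Operation operation, Resource resource);

    void addAcls(List<Acl> acls, Resource resource);

    void removeAcls(List<Acl> acls, Resource resource);
}
{code}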

[jira] [Assigned] (KAFKA-3459) Returning zero task configurations from a connector does not properly clean up existing tasks

2016-04-20 Thread Liquan Pei (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3459?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Liquan Pei reassigned KAFKA-3459:
-

Assignee: Liquan Pei  (was: Ewen Cheslack-Postava)

> Returning zero task configurations from a connector does not properly clean 
> up existing tasks
> -
>
> Key: KAFKA-3459
> URL: https://issues.apache.org/jira/browse/KAFKA-3459
> Project: Kafka
>  Issue Type: Bug
>  Components: KafkaConnect
>Affects Versions: 0.9.0.1
>Reporter: Ewen Cheslack-Postava
>Assignee: Liquan Pei
>
> Instead of deleting the existing tasks, it just leaves them in place. If
> you're writing a connector with a variable number of inputs, where that
> number may drop to zero, this makes it impossible to clean up the existing
> tasks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
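
The contract at issue is Connector.taskConfigs(int maxTasks): each map in the
returned list becomes one task. A hedged sketch of the zero-input case; the
connector shape and config key are made up for illustration.

{code:java}
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Fragment of a hypothetical connector with a variable number of inputs.
abstract class VariableInputConnector /* extends SourceConnector */ {

    private final List<String> discoveredInputs = new ArrayList<>();

    // Returning an empty list signals "zero tasks"; per this JIRA the
    // framework should then tear down all existing tasks rather than
    // leave them running.
    public List<Map<String, String>> taskConfigs(int maxTasks) {
        if (discoveredInputs.isEmpty())
            return Collections.emptyList();
        List<Map<String, String>> configs = new ArrayList<>();
        int tasks = Math.min(maxTasks, discoveredInputs.size());
        for (int i = 0; i < tasks; i++) {
            Map<String, String> config = new HashMap<>();
            config.put("input", discoveredInputs.get(i)); // hypothetical key
            configs.add(config);
        }
        return configs;
    }
}
{code}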


[jira] [Commented] (KAFKA-3577) Partial cluster breakdown

2016-04-20 Thread Kim Christensen (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250531#comment-15250531
 ] 

Kim Christensen commented on KAFKA-3577:


Yeah, it seems a lot like the issue you referenced. We can't find any evidence
that we are experiencing high latency, but our current logging of network and
disk load leaves a lot to be desired, so that is very likely the issue. We
will try increasing the ZooKeeper session timeout for now and see whether the
issue is resolved, until our logging of possible latency issues is up to date.

> Partial cluster breakdown
> -
>
> Key: KAFKA-3577
> URL: https://issues.apache.org/jira/browse/KAFKA-3577
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
> Environment: Debian GNU/Linux 7.9 (wheezy)
>Reporter: Kim Christensen
>
> We run a cluster of 3 brokers and 3 zookeepers, but after we upgraded to 
> 0.9.0.1 our cluster sometimes goes partially down, and we can't figure why. A 
> full cluster restart fixed the problem.
> I've added a snippet of the logs on each broker below.
> Broker 4:
> {quote}
> [2016-04-18 05:58:26,390] INFO [Group Metadata Manager on Broker 4]: Removed 
> 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
> [2016-04-18 06:05:55,218] INFO Creating /controller (is it secure? false) 
> (kafka.utils.ZKCheckedEphemeral)
> [2016-04-18 06:05:57,396] ERROR Session has expired while creating 
> /controller (kafka.utils.ZKCheckedEphemeral)
> [2016-04-18 06:05:57,396] INFO Result of znode creation is: SESSIONEXPIRED 
> (kafka.utils.ZKCheckedEphemeral)
> [2016-04-18 06:05:57,400] ERROR Error while electing or becoming leader on 
> broker 4 (kafka.server.ZookeeperLeaderElector)
> org.I0Itec.zkclient.exception.ZkException: 
> org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode 
> = Session expired
> at 
> org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:68)
> at kafka.utils.ZKCheckedEphemeral.create(ZkUtils.scala:1090)
> at 
> kafka.server.ZookeeperLeaderElector.elect(ZookeeperLeaderElector.scala:81)
> at 
> kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply$mcZ$sp(ZookeeperLeaderElector.scala:146)
> at 
> kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:141)
> at 
> kafka.server.ZookeeperLeaderElector$LeaderChangeListener$$anonfun$handleDataDeleted$1.apply(ZookeeperLeaderElector.scala:141)
> at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:262)
> at 
> kafka.server.ZookeeperLeaderElector$LeaderChangeListener.handleDataDeleted(ZookeeperLeaderElector.scala:141)
> at org.I0Itec.zkclient.ZkClient$9.run(ZkClient.java:823)
> at org.I0Itec.zkclient.ZkEventThread.run(ZkEventThread.java:71)
> Caused by: org.apache.zookeeper.KeeperException$SessionExpiredException: 
> KeeperErrorCode = Session expired
> at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
> ... 9 more
> [2016-04-18 06:05:57,420] INFO Creating /controller (is it secure? false) 
> (kafka.utils.ZKCheckedEphemeral)
> [2016-04-18 06:05:57,424] INFO Result of znode creation is: OK 
> (kafka.utils.ZKCheckedEphemeral)
> [2016-04-18 06:05:57,425] INFO 4 successfully elected as leader 
> (kafka.server.ZookeeperLeaderElector)
> [2016-04-18 06:05:57,885] INFO [ReplicaFetcherManager on broker 4] Removed 
> fetcher for partitions 
> [__consumer_offsets,32],[__consumer_offsets,44],[cdrrecords-errors,1],[cdrrecords,0],[__consumer_offsets,38],[__consumer_offsets,8],[events
> ,2],[__consumer_offsets,20],[__consumer_offsets,2],[__consumer_offsets,14],[__consumer_offsets,26]
>  (kafka.server.ReplicaFetcherManager)
> [2016-04-18 06:05:57,892] INFO [ReplicaFetcherManager on broker 4] Removed 
> fetcher for partitions 
> [__consumer_offsets,35],[__consumer_offsets,23],[__consumer_offsets,47],[__consumer_offsets,11],[__consumer_offsets,5],[events-errors,2],[_
> _consumer_offsets,17],[__consumer_offsets,41],[__consumer_offsets,29] 
> (kafka.server.ReplicaFetcherManager)
> [2016-04-18 06:05:57,894] INFO Truncating log __consumer_offsets-17 to offset 
> 0. (kafka.log.Log)
> [2016-04-18 06:05:57,894] INFO Truncating log __consumer_offsets-23 to offset 
> 0. (kafka.log.Log)
> [2016-04-18 06:05:57,894] INFO Truncating log __consumer_offsets-29 to offset 
> 0. (kafka.log.Log)
> [2016-04-18 06:05:57,895] INFO Truncating log __consumer_offsets-35 to offset 
> 0. (kafka.log.Log)
> [2016-04-18 06:05:57,895] INFO Truncating log __consumer_offsets-41 to offset 
> 0. (kafka.log.Log)
> [2016-04-18 06:05:57,895] INFO Truncating log events-errors-2 to offset 0. 
> (kafka.log.Log)
> [2016-04-18 06:05:57,895] INFO Truncating log __consumer_offsets-5 

[GitHub] kafka pull request: MINOR: Docs for ACLs over SSL auth and KAFKA_O...

2016-04-20 Thread QwertyManiac
GitHub user QwertyManiac opened a pull request:

https://github.com/apache/kafka/pull/1244

MINOR: Docs for ACLs over SSL auth and KAFKA_OPTS


[Motivation](http://mail-archives.apache.org/mod_mbox/kafka-users/201604.mbox/%3CCAFXAVc4%2B6Z863K1%2B-2h1aFaAjnMb3jjCu_A9RvWdVnHZg1s1SQ%40mail.gmail.com%3E).

Adds two distinct notes to the Security documentation that seek to explain
how Kerberos JAAS configurations can be used with Kafka's built-in command
line tools (console producer/consumer as examples) via `$KAFKA_OPTS`, and a
better example of how the User principals for ACLs should appear in
`kafka-acls.sh` commands when Kafka is set to use client authentication via
client SSL certificates instead of Kerberos+SASL.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/QwertyManiac/kafka security-doc-improvement

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1244.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1244


commit 2d488462d86e9705dfa7cce61c5f68162f2b7d18
Author: Harsh J 
Date:   2016-04-20T12:00:59Z

KAFKA-3591: JmxTool should exit out if a provided query matches no values

commit e4f40beedd89476acd9702f243e8a952cfb627de
Author: Harsh J 
Date:   2016-04-20T18:50:36Z

MINOR: Docs for ACLs over SSL auth and KAFKA_OPTS




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---
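
What the KAFKA_OPTS note boils down to: the command line tools pass that
variable's contents as JVM flags, so -Djava.security.auth.login.config=...
points the client at a JAAS file. A minimal sketch of the same thing done
programmatically; the file path is an assumption.

{code:java}
import java.util.Properties;

public class JaasSetupExample {
    public static void main(String[] args) {
        // Equivalent of KAFKA_OPTS="-Djava.security.auth.login.config=...";
        // must be set before any Kafka client is created in this JVM.
        System.setProperty("java.security.auth.login.config",
                "/etc/kafka/client_jaas.conf"); // assumed path
        Properties clientProps = new Properties();
        clientProps.put("security.protocol", "SASL_PLAINTEXT");
        clientProps.put("sasl.kerberos.service.name", "kafka");
        // ...pass clientProps to a producer/consumer as usual.
        System.out.println("JAAS config: "
                + System.getProperty("java.security.auth.login.config"));
    }
}
{code}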


[GitHub] kafka pull request: Minor comment fix

2016-04-20 Thread Ishiihara
GitHub user Ishiihara opened a pull request:

https://github.com/apache/kafka/pull/1243

Minor comment fix

@ewencp 

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/Ishiihara/kafka docs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1243.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1243


commit 02784174943e2327a00e785534c2e820fc4c8d03
Author: Liquan Pei 
Date:   2016-04-20T19:00:32Z

Minor comment fix




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3589) KTable.count(final KeyValueMapper<K, V, K1> selector, String name) throw NPE

2016-04-20 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3589:
-
Labels: architecture  (was: )

> KTable.count(final KeyValueMapper<K, V, K1> selector, String name) throw NPE
> 
>
> Key: KAFKA-3589
> URL: https://issues.apache.org/jira/browse/KAFKA-3589
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.0
>Reporter: Damian Guy
>Assignee: Guozhang Wang
>  Labels: architecture
> Fix For: 0.10.0.0
>
>
> The implementation of KTable.count(final KeyValueMapper<K, V, K1> selector,
> String name) passes null through as the parameters for the key and value
> serdes. This results in an NPE in the aggregate method when it does:
> ChangedSerializer changedValueSerializer = new
> ChangedSerializer<>(valueSerde.serializer());



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
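
The usual fix pattern for this class of NPE is to substitute a configured
default serde when the caller passes null, before anything dereferences it. A
minimal, hypothetical sketch, not the actual streams code:

{code:java}
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serdes;

final class SerdeDefaults {

    // Fall back to a configured default rather than calling
    // requested.serializer() on a null reference.
    static <T> Serde<T> orDefault(Serde<T> requested, Serde<T> configuredDefault) {
        return requested != null ? requested : configuredDefault;
    }

    public static void main(String[] args) {
        Serde<String> fromCaller = null; // what count(selector, name) passes today
        Serde<String> effective = orDefault(fromCaller, Serdes.String());
        System.out.println(effective.serializer().getClass().getSimpleName());
    }
}
{code}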


[jira] [Updated] (KAFKA-3589) KTable.count(final KeyValueMapper<K, V, K1> selector, String name) throw NPE

2016-04-20 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3589?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3589:
-
Fix Version/s: 0.10.0.0

> KTable.count(final KeyValueMapper<K, V, K1> selector, String name) throw NPE
> 
>
> Key: KAFKA-3589
> URL: https://issues.apache.org/jira/browse/KAFKA-3589
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 0.10.0.0
>Reporter: Damian Guy
>Assignee: Guozhang Wang
>  Labels: architecture
> Fix For: 0.10.0.0
>
>
> The implementation of KTable.count(final KeyValueMapper<K, V, K1> selector,
> String name) passes null through as the parameters for the key and value
> serdes. This results in an NPE in the aggregate method when it does:
> ChangedSerializer changedValueSerializer = new
> ChangedSerializer<>(valueSerde.serializer());



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Build failed in Jenkins: kafka-trunk-jdk8 #538

2016-04-20 Thread Apache Jenkins Server
See 

Changes:

[cshapi] MINOR: Remove RollingBounceTest since its functionality is covered by

--
[...truncated 3415 lines...]

kafka.api.SaslPlaintextConsumerTest > testPartitionReassignmentCallback PASSED

kafka.api.SaslPlaintextConsumerTest > testCommitSpecifiedOffsets PASSED

kafka.api.QuotasTest > testProducerConsumerOverrideUnthrottled PASSED

kafka.api.QuotasTest > testThrottledProducerConsumer PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededToReadFromNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testListOfsetsWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicRead PASSED

kafka.api.AuthorizerIntegrationTest > testListOffsetsWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > 
testCreatePermissionNeededForWritingToNonExistentTopic PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithNoTopicAccess PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testAuthorization PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicAndGroupRead PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithTopicWrite PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithNoGroupAccess PASSED

kafka.api.AuthorizerIntegrationTest > testConsumeWithNoAccess PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchWithTopicAndGroupRead 
PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testProduceWithTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testOffsetFetchTopicDescribe PASSED

kafka.api.AuthorizerIntegrationTest > testCommitWithTopicAndGroupRead PASSED

kafka.api.ProducerBounceTest > testBrokerFailure PASSED

kafka.api.SaslSslEndToEndAuthorizationTest > testNoConsumeAcl PASSED

kafka.api.SaslSslEndToEndAuthorizationTest > testProduceConsume PASSED

kafka.api.SaslSslEndToEndAuthorizationTest > testNoProduceAcl PASSED

kafka.api.SaslSslEndToEndAuthorizationTest > testNoGroupAcl PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForAutoCreate PASSED

kafka.api.PlaintextConsumerTest > testShrinkingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testAsyncCommit PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnStopPolling 
PASSED

kafka.api.PlaintextConsumerTest > testPartitionsForInvalidTopic PASSED

kafka.api.PlaintextConsumerTest > testSeek PASSED

kafka.api.PlaintextConsumerTest > testPositionAndCommit PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerSessionTimeoutOnClose PASSED

kafka.api.PlaintextConsumerTest > testFetchRecordTooLarge PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnClose PASSED

kafka.api.PlaintextConsumerTest > testExpandingTopicSubscriptions PASSED

kafka.api.PlaintextConsumerTest > testInterceptors PASSED

kafka.api.PlaintextConsumerTest > testPatternUnsubscription PASSED

kafka.api.PlaintextConsumerTest > testGroupConsumption PASSED

kafka.api.PlaintextConsumerTest > testPartitionsFor PASSED

kafka.api.PlaintextConsumerTest > testInterceptorsWithWrongKeyValue PASSED

kafka.api.PlaintextConsumerTest > testMultiConsumerRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPartitionPauseAndResume PASSED

kafka.api.PlaintextConsumerTest > testConsumeMessagesWithLogAppendTime PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitOnCloseAfterWakeup PASSED

kafka.api.PlaintextConsumerTest > testMaxPollRecords PASSED

kafka.api.PlaintextConsumerTest > testAutoOffsetReset PASSED

kafka.api.PlaintextConsumerTest > testFetchInvalidOffset PASSED

kafka.api.PlaintextConsumerTest > testAutoCommitIntercept PASSED

kafka.api.PlaintextConsumerTest > testCommitMetadata PASSED

kafka.api.PlaintextConsumerTest > testRoundRobinAssignment PASSED

kafka.api.PlaintextConsumerTest > testPatternSubscription PASSED

kafka.api.PlaintextConsumerTest > testPauseStateNotPreservedByRebalance PASSED

kafka.api.PlaintextConsumerTest > testUnsubscribeTopic PASSED

kafka.api.PlaintextConsumerTest > testListTopics PASSED

kafka.api.PlaintextConsumerTest > 

[jira] [Commented] (KAFKA-3117) Fail test at: PlaintextConsumerTest. testAutoCommitOnRebalance

2016-04-20 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250473#comment-15250473
 ] 

Jason Gustafson commented on KAFKA-3117:


Looks like the first guess was nearer the mark. Although there is a guard which 
attempts to ensure fresh metadata before performing assignment, the check seems 
incorrect since it doesn't take into account the backoff interval used in the 
metadata request logic to prevent metadata storms. Instead of blocking for the 
metadata update, this causes the metadata fetch to be sent at the same time the 
rebalance begins and then the race is on. But even if the check were correct, 
we still have potential for a metadata fetch to return while a rebalance is in 
progress which can prevent assignment of some topics in the worst case. So we 
need to address this problem as well.

> Fail test at: PlaintextConsumerTest. testAutoCommitOnRebalance 
> ---
>
> Key: KAFKA-3117
> URL: https://issues.apache.org/jira/browse/KAFKA-3117
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
> Environment: oracle java764bit
> ubuntu 13.10 
>Reporter: edwardt
>Assignee: Jason Gustafson
>  Labels: newbie, test, transient-unit-test-failure
>
> java.lang.AssertionError: Expected partitions [topic-0, topic-1, topic2-0, 
> topic2-1] but actually got [topic-0, topic-1]
>   at org.junit.Assert.fail(Assert.java:88)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:730)
>   at 
> kafka.api.BaseConsumerTest.testAutoCommitOnRebalance(BaseConsumerTest.scala:125)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:22





[jira] [Comment Edited] (KAFKA-3568) KafkaProducer fails with timeout on message send()

2016-04-20 Thread Gwen Shapira (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250386#comment-15250386
 ] 

Gwen Shapira edited comment on KAFKA-3568 at 4/20/16 6:06 PM:
--

Aha! 

https://github.com/spotify/docker-kafka/blob/master/kafka/scripts/start-kafka.sh#L20

The Docker image you are using may have a script that searches for those 
lines...

I opened an issue with Spotify to fix their scripts:
https://github.com/spotify/docker-kafka/issues/40


was (Author: gwenshap):
Aha! 

https://github.com/spotify/docker-kafka/blob/master/kafka/scripts/start-kafka.sh#L20

The docker you are using may have a script that searches for those lines...
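
(For context: the failure mode here would be a silent no-op substitution. A 
hedged sketch of the pattern such start scripts typically use; not a verbatim 
copy of the script linked above:)

{code}
# If server.properties no longer contains a matching (even commented-out)
# line, the sed below matches nothing, exits 0, and the broker silently
# starts without an advertised host.
if [ -n "$ADVERTISED_HOST" ]; then
    sed -r -i "s|^#?(advertised\.host\.name)=.*|\1=$ADVERTISED_HOST|" \
        "$KAFKA_HOME/config/server.properties"
fi
{code}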

> KafkaProducer fails with timeout on message send()
> --
>
> Key: KAFKA-3568
> URL: https://issues.apache.org/jira/browse/KAFKA-3568
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.1.0
> Environment: MacOS Docker
>Reporter: Greg Zoller
>  Labels: producer
>
> I had a KafkaProducer working fine in 0.9.0.1.  I was having unrelated 
> problems in that version so thought to try 0.10.1.0.  I built it as I did 
> 0.9.0.1:
> Fresh build against Scala 2.11.7.  Built the tgz build plus local maven 
> install.  From the tgz I created a Docker image similar to spotify/kafka.  I 
> linked my producer code to the maven jars.  This process worked in 0.9.
> Code is here:  
> https://gist.github.com/gzoller/145faef1fefc8acea212e87e06fc86e8
> At the bottom you can see a clip from the output... there's a warning about 
> metadata (not sure if it's important or not) and then it's trying to send() 
> messages and timing out.  I clipped the output, but it does fail the same way 
> for each message sent in 0.10.1.0.  Same code compiled against 0.9.0.1 
> populates the topic's partitions w/o problem.
> Was there a breaking change between 0.9 and 0.10, or is this a bug?





[jira] [Commented] (KAFKA-3568) KafkaProducer fails with timeout on message send()

2016-04-20 Thread Greg Zoller (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15250342#comment-15250342
 ] 

Greg Zoller commented on KAFKA-3568:


Hi, Gwen... I am puzzled too.  I didn't see any code that would account for 
this, so I'm still not ruling out my methods as a cause.

I rebuilt both.  Step #2 above is rebuilding the tgz core, which I then 
unpacked into a Docker image to run it.  I modded the spotify/kafka Dockerfile 
as it puts Kafka and Zookeeper conveniently in the same image, and exposed 
ports 9092 and 2181.  Again, not ruling anything out here, but this does work 
for the builds noted, including (now with my patch) the latest trunk build.

Step #4 builds and installs the clients to my local maven repo where my test 
code picks it up.  The test code is likewise always cleaned and rebuilt from 
scratch to eliminate unwanted artifacts there.

My host is always 192.168.99.100:9092.   I'm on a Mac so this IP address is my 
VirtualBox IP (docker-machine ip default), and the two ports mentioned above 
are configured to pass through VirtualBox to my Mac host.  On the working 
builds this functions without issue.

I was initially suspicious of a one-line change in build 73470b0 (Mar 22) that 
commented out this line:

listeners=PLAINTEXT://:9092

It was really the only "active" change in this build, which is the first one 
that stopped working for me.  Aha! I thought... this is it.  So I un-commented 
it, and to my surprise it was still broken.  Out of raw frustration I cut 'n 
pasted just the 4 other commented out lines from server.properties in that same 
build.  Further trial 'n error narrowed it down to the two mentioned above.  I 
got tired and didn't narrow it further to see if just one of these two made a 
difference.

Still not trusting my methods, I thought I'd try the ultimate test... checking 
out raw trunk, pasting just those two lines in, and seeing what happens... and 
that worked.  Even ignoring the fact that these lines are commented out, the 
values are not meaningful.

I'm really hoping there's something else going on here.  There's one other bug 
that's kinda like this one.  If I can find it I'll ask the poster what happens 
if he tries this.
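
For anyone hitting the same wall, a minimal server.properties sketch of the 
0.10-era settings that usually matter in this Docker/VirtualBox setup (the IP 
is the one from above; these are not necessarily the two exact lines pasted 
in): bind on all interfaces inside the container, but advertise the address 
clients can actually reach.

{code}
# Hedged sketch, not the poster's exact config.
listeners=PLAINTEXT://0.0.0.0:9092
# Without a reachable advertised address, clients fetch metadata pointing at
# an unreachable host and send() times out.
advertised.listeners=PLAINTEXT://192.168.99.100:9092
{code}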

> KafkaProducer fails with timeout on message send()
> --
>
> Key: KAFKA-3568
> URL: https://issues.apache.org/jira/browse/KAFKA-3568
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.10.1.0
> Environment: MacOS Docker
>Reporter: Greg Zoller
>  Labels: producer
>
> I had a KafkaProducer working fine in 0.9.0.1.  I was having unrelated 
> problems in that version so thought to try 0.10.1.0.  I built it as I did 
> 0.9.0.1:
> Fresh build against Scala 2.11.7.  Built the tgz build plus local maven 
> install.  From the tgz I created a Docker image similar to spotify/kafka.  I 
> linked my producer code to the maven jars.  This process worked in 0.9.
> Code is here:  
> https://gist.github.com/gzoller/145faef1fefc8acea212e87e06fc86e8
> At the bottom you can see a clip from the output... there's a warning about 
> metadata (not sure if it's important or not) and then it's trying to send() 
> messages and timing out.  I clipped the output, but it does fail the same way 
> for each message sent in 0.10.1.0.  Same code compiled against 0.9.0.1 
> populates the topic's partitions w/o problem.
> Was there a breaking change between 0.9 and 0.10, or is this a bug?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: MINOR: Remove RollingBounceTest since its functionality is covered by the ReplicationTest system test

2016-04-20 Thread ewencp
GitHub user ewencp opened a pull request:

https://github.com/apache/kafka/pull/1242

MINOR: Remove RollingBounceTest since its functionality is covered by the 
ReplicationTest system test

RollingBounceTest is a system test that cannot be run reliably in unit 
tests and ReplicationTest is a superset of the
functionality: in addition to verifying that bouncing leaders eventually 
results in a new leader, ReplicationTest also
validates that data continues to be produced and consumed.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ewencp/kafka 
minor-remove-rolling-bounce-integration-test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1242.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1242


commit 39a5f56d2ce9faa48d0c1db97e50ca69ca02ed39
Author: Ewen Cheslack-Postava 
Date:   2016-04-20T17:04:08Z

MINOR: Remove RollingBounceTest since its functionality is covered by the 
ReplicationTest system test

RollingBounceTest is a system test that cannot be run reliably in unit 
tests and ReplicationTest is a superset of the
functionality: in addition to verifying that bouncing leaders eventually 
results in a new leader, ReplicationTest also
validates that data continues to be produced and consumed.




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Updated] (KAFKA-3117) Fail test at: PlaintextConsumerTest.testAutoCommitOnRebalance

2016-04-20 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3117?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3117:
-
Labels: newbie test transient-unit-test-failure  (was: newbie test)

> Fail test at: PlaintextConsumerTest.testAutoCommitOnRebalance 
> ---
>
> Key: KAFKA-3117
> URL: https://issues.apache.org/jira/browse/KAFKA-3117
> Project: Kafka
>  Issue Type: Sub-task
>  Components: consumer
>Affects Versions: 0.9.0.0
> Environment: oracle java 7 64bit
> ubuntu 13.10 
>Reporter: edwardt
>Assignee: Jason Gustafson
>  Labels: newbie, test, transient-unit-test-failure
>
> java.lang.AssertionError: Expected partitions [topic-0, topic-1, topic2-0, 
> topic2-1] but actually got [topic-0, topic-1]
>   at org.junit.Assert.fail(Assert.java:88)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:730)
>   at 
> kafka.api.BaseConsumerTest.testAutoCommitOnRebalance(BaseConsumerTest.scala:125)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:271)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:70)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:50)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:238)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:63)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:236)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:53)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:22





[jira] [Updated] (KAFKA-3182) Failure in kafka.network.SocketServerTest.testSocketsCloseOnShutdown

2016-04-20 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3182?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3182:
-
Labels: transient-unit-test-failure  (was: )

> Failure in kafka.network.SocketServerTest.testSocketsCloseOnShutdown
> 
>
> Key: KAFKA-3182
> URL: https://issues.apache.org/jira/browse/KAFKA-3182
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>  Labels: transient-unit-test-failure
>
> {code}
> Stacktrace
> org.scalatest.junit.JUnitTestFailedError: expected exception when writing to 
> closed trace socket
>   at 
> org.scalatest.junit.AssertionsForJUnit$class.newAssertionFailedException(AssertionsForJUnit.scala:102)
>   at 
> org.scalatest.junit.JUnitSuite.newAssertionFailedException(JUnitSuite.scala:79)
>   at org.scalatest.Assertions$class.fail(Assertions.scala:1328)
>   at org.scalatest.junit.JUnitSuite.fail(JUnitSuite.scala:79)
>   at 
> kafka.network.SocketServerTest.testSocketsCloseOnShutdown(SocketServerTest.scala:180)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> 

[jira] [Updated] (KAFKA-3220) Failure in kafka.server.ClientQuotaManagerTest.testQuotaViolation

2016-04-20 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3220?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3220:
-
Labels: transient-unit-test-failure  (was: )

> Failure in kafka.server.ClientQuotaManagerTest.testQuotaViolation
> -
>
> Key: KAFKA-3220
> URL: https://issues.apache.org/jira/browse/KAFKA-3220
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>  Labels: transient-unit-test-failure
>
> {code}
> java.lang.AssertionError: expected:<0> but was:<1>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> kafka.server.ClientQuotaManagerTest.testQuotaViolation(ClientQuotaManagerTest.scala:110)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor44.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor43.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:744)
> {code}
> Example: 
> 

[jira] [Updated] (KAFKA-3155) Transient Failure in kafka.integration.PlaintextTopicMetadataTest.testIsrAfterBrokerShutDownAndJoinsBack

2016-04-20 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3155?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3155:
-
Labels: transient-unit-test-failure  (was: )

> Transient Failure in 
> kafka.integration.PlaintextTopicMetadataTest.testIsrAfterBrokerShutDownAndJoinsBack
> 
>
> Key: KAFKA-3155
> URL: https://issues.apache.org/jira/browse/KAFKA-3155
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>  Labels: transient-unit-test-failure
>
> {code}
> Stacktrace
> java.lang.AssertionError: No request is complete.
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.assertTrue(Assert.java:41)
>   at 
> kafka.api.BaseProducerSendTest$$anonfun$testFlush$1.apply$mcVI$sp(BaseProducerSendTest.scala:275)
>   at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
>   at 
> kafka.api.BaseProducerSendTest.testFlush(BaseProducerSendTest.scala:273)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)

[jira] [Updated] (KAFKA-3168) Failure in kafka.integration.PrimitiveApiTest.testPipelinedProduceRequests

2016-04-20 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3168?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3168:
-
Labels: transient-unit-test-failure  (was: )

> Failure in kafka.integration.PrimitiveApiTest.testPipelinedProduceRequests
> --
>
> Key: KAFKA-3168
> URL: https://issues.apache.org/jira/browse/KAFKA-3168
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>  Labels: transient-unit-test-failure
>
> {code}
> java.lang.AssertionError: Published messages should be in the log
>   at org.junit.Assert.fail(Assert.java:88)
>   at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:747)
>   at 
> kafka.integration.PrimitiveApiTest.testPipelinedProduceRequests(PrimitiveApiTest.scala:245)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:744)
> {code}
> Example: 
> 

[jira] [Updated] (KAFKA-2398) Transient test failure for SocketServerTest - Socket closed.

2016-04-20 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-2398:
-
Labels: transient-unit-test-failure  (was: )

> Transient test failure for SocketServerTest - Socket closed.
> 
>
> Key: KAFKA-2398
> URL: https://issues.apache.org/jira/browse/KAFKA-2398
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jiangjie Qin
>  Labels: transient-unit-test-failure
>
> See the following transient test failure for SocketServerTest.
> kafka.network.SocketServerTest > simpleRequest FAILED
> java.net.SocketException: Socket closed
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
> at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:579)
> at java.net.Socket.connect(Socket.java:528)
> at java.net.Socket.(Socket.java:425)
> at java.net.Socket.(Socket.java:208)
> at kafka.network.SocketServerTest.connect(SocketServerTest.scala:84)
> at 
> kafka.network.SocketServerTest.simpleRequest(SocketServerTest.scala:94)
> kafka.network.SocketServerTest > tooBigRequestIsRejected FAILED
> java.net.SocketException: Socket closed
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
> at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:579)
> at java.net.Socket.connect(Socket.java:528)
> at java.net.Socket.(Socket.java:425)
> at java.net.Socket.(Socket.java:208)
> at kafka.network.SocketServerTest.connect(SocketServerTest.scala:84)
> at 
> kafka.network.SocketServerTest.tooBigRequestIsRejected(SocketServerTest.scala:124)
> kafka.network.SocketServerTest > testSocketsCloseOnShutdown FAILED
> java.net.SocketException: Socket closed
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
> at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:579)
> at java.net.Socket.connect(Socket.java:528)
> at java.net.Socket.(Socket.java:425)
> at java.net.Socket.(Socket.java:208)
> at kafka.network.SocketServerTest.connect(SocketServerTest.scala:84)
> at 
> kafka.network.SocketServerTest.testSocketsCloseOnShutdown(SocketServerTest.scala:136)
> kafka.network.SocketServerTest > testMaxConnectionsPerIp FAILED
> java.net.SocketException: Socket closed
> at java.net.PlainSocketImpl.socketConnect(Native Method)
> at 
> java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:339)
> at 
> java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:200)
> at 
> java.net.AbstractPlainSocketImpl.connect(AbstractPlainSocketImpl.java:182)
> at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:392)
> at java.net.Socket.connect(Socket.java:579)
> at java.net.Socket.connect(Socket.java:528)
> at java.net.Socket.(Socket.java:425)
> at java.net.Socket.(Socket.java:208)
> at kafka.network.SocketServerTest.connect(SocketServerTest.scala:84)
> at 
> kafka.network.SocketServerTest$$anonfun$1.apply(SocketServerTest.scala:170)
> at 
> kafka.network.SocketServerTest$$anonfun$1.apply(SocketServerTest.scala:170)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.immutable.Range.foreach(Range.scala:141)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at 
> kafka.network.SocketServerTest.testMaxConnectionsPerIp(SocketServerTest.scala:170)
> 

[jira] [Updated] (KAFKA-2933) Failure in kafka.api.PlaintextConsumerTest.testMultiConsumerDefaultAssignment

2016-04-20 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2933?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-2933:
-
Labels: transient-unit-test-failure  (was: )

> Failure in kafka.api.PlaintextConsumerTest.testMultiConsumerDefaultAssignment
> -
>
> Key: KAFKA-2933
> URL: https://issues.apache.org/jira/browse/KAFKA-2933
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.10.0.0
>Reporter: Guozhang Wang
>Assignee: Jason Gustafson
>  Labels: transient-unit-test-failure
>
> {code}
> kafka.api.PlaintextConsumerTest > testMultiConsumerDefaultAssignment FAILED
> java.lang.AssertionError: Did not get valid assignment for partitions 
> [topic1-2, topic2-0, topic1-4, topic-1, topic-0, topic2-1, topic1-0, 
> topic1-3, topic1-1, topic2-2] after we changed subscription
> at org.junit.Assert.fail(Assert.java:88)
> at kafka.utils.TestUtils$.waitUntilTrue(TestUtils.scala:747)
> at 
> kafka.api.PlaintextConsumerTest.validateGroupAssignment(PlaintextConsumerTest.scala:644)
> at 
> kafka.api.PlaintextConsumerTest.changeConsumerGroupSubscriptionAndValidateAssignment(PlaintextConsumerTest.scala:663)
> at 
> kafka.api.PlaintextConsumerTest.testMultiConsumerDefaultAssignment(PlaintextConsumerTest.scala:461)
> java.util.ConcurrentModificationException: KafkaConsumer is not safe for 
> multi-threaded access
> {code}
> Example: https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/1582/console





[jira] [Updated] (KAFKA-3034) kafka.api.PlaintextConsumerTest > testSeek FAILED

2016-04-20 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3034:
-
Labels: transient-unit-test-failure  (was: )

> kafka.api.PlaintextConsumerTest > testSeek FAILED
> -
>
> Key: KAFKA-3034
> URL: https://issues.apache.org/jira/browse/KAFKA-3034
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>  Labels: transient-unit-test-failure
>
> See for example:
> https://builds.apache.org/job/kafka-trunk-jdk7/921/console
> It doesn't fail consistently, but happens fairly often.





[jira] [Updated] (KAFKA-2059) ZookeeperConsumerConnectorTest.testBasic transient failure

2016-04-20 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2059?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-2059:
-
Labels: newbie transient-unit-test-failure  (was: newbie)

> ZookeeperConsumerConnectorTest.testBasic transient failure
> -
>
> Key: KAFKA-2059
> URL: https://issues.apache.org/jira/browse/KAFKA-2059
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>  Labels: newbie, transient-unit-test-failure
>
> {code}
> kafka.javaapi.consumer.ZookeeperConsumerConnectorTest > testBasic FAILED
> kafka.common.InconsistentBrokerIdException: Configured brokerId 1 doesn't 
> match stored brokerId 0 in meta.properties
> at kafka.server.KafkaServer.getBrokerId(KafkaServer.scala:443)
> at kafka.server.KafkaServer.startup(KafkaServer.scala:116)
> at kafka.utils.TestUtils$.createServer(TestUtils.scala:126)
> at 
> kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:57)
> at 
> kafka.integration.KafkaServerTestHarness$$anonfun$setUp$1.apply(KafkaServerTestHarness.scala:57)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at 
> scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:244)
> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at 
> scala.collection.TraversableLike$class.map(TraversableLike.scala:244)
> at scala.collection.AbstractTraversable.map(Traversable.scala:105)
> at 
> kafka.integration.KafkaServerTestHarness$class.setUp(KafkaServerTestHarness.scala:57)
> at 
> kafka.javaapi.consumer.ZookeeperConsumerConnectorTest.setUp(ZookeeperConsumerConnectorTest.scala:41)
> {code}





[jira] [Updated] (KAFKA-435) Keep track of the transient test failure for Kafka-343 on Apache Jenkins

2016-04-20 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-435?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-435:

Labels: transient-unit-test-failure  (was: )

> Keep track of the transient test failure for Kafka-343 on Apache Jenkins
> 
>
> Key: KAFKA-435
> URL: https://issues.apache.org/jira/browse/KAFKA-435
> Project: Kafka
>  Issue Type: Sub-task
>Affects Versions: 0.8.0
>Reporter: Yang Ye
>Assignee: Yang Ye
>Priority: Minor
>  Labels: transient-unit-test-failure
>
> See: 
> http://mail-archives.apache.org/mod_mbox/incubator-kafka-commits/201208.mbox/browser
> Error message:
> --
> [...truncated 3415 lines...]
> [2012-08-01 17:27:08,432] ERROR KafkaApi on Broker 0, error when processing 
> request (test_topic,0,-1,1048576)
> (kafka.server.KafkaApis:99)
> kafka.common.OffsetOutOfRangeException: offset -1 is out of range
>   at kafka.log.Log$.findRange(Log.scala:46)
>   at kafka.log.Log.read(Log.scala:265)
>   at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:377)
>   at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1$$anonfun$apply$21.apply(KafkaApis.scala:333)
>   at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1$$anonfun$apply$21.apply(KafkaApis.scala:332)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
>   at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:332)
>   at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:328)
>   at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
>   at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:32)
>   at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:328)
>   at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:272)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:59)
>   at kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:38)
>   at java.lang.Thread.run(Thread.java:662)
> [2012-08-01 17:27:08,446] ERROR Closing socket for /67.195.138.9 because of 
> error (kafka.network.Processor:99)
> java.io.IOException: Connection reset by peer
>   at sun.nio.ch.FileDispatcher.read0(Native Method)
>   at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:21)
>   at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:198)
>   at sun.nio.ch.IOUtil.read(IOUtil.java:171)
>   at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:243)
>   at kafka.utils.Utils$.read(Utils.scala:630)
>   at 
> kafka.network.BoundedByteBufferReceive.readFrom(BoundedByteBufferReceive.scala:54)
>   at kafka.network.Processor.read(SocketServer.scala:296)
>   at kafka.network.Processor.run(SocketServer.scala:212)
>   at java.lang.Thread.run(Thread.java:662)
> [info] Test Passed: 
> testResetToEarliestWhenOffsetTooLow(kafka.integration.AutoOffsetResetTest)
> [info] Test Starting: 
> testResetToLatestWhenOffsetTooHigh(kafka.integration.AutoOffsetResetTest)
> [2012-08-01 17:27:09,203] ERROR KafkaApi on Broker 0, error when processing 
> request (test_topic,0,1,1048576)
> (kafka.server.KafkaApis:99)
> kafka.common.OffsetOutOfRangeException: offset 1 is out of range
>   at kafka.log.Log$.findRange(Log.scala:46)
>   at kafka.log.Log.read(Log.scala:265)
>   at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSet(KafkaApis.scala:377)
>   at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1$$anonfun$apply$21.apply(KafkaApis.scala:333)
>   at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1$$anonfun$apply$21.apply(KafkaApis.scala:332)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:57)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:43)
>   at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:332)
>   at 
> kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$readMessageSets$1.apply(KafkaApis.scala:328)
>   at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
>   at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:32)
>   at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$readMessageSets(KafkaApis.scala:328)
>   at 

[jira] [Updated] (KAFKA-1573) Transient test failures on LogTest.testCorruptLog

2016-04-20 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-1573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-1573:
-
Labels: transient-unit-test-failure  (was: )

> Transient test failures on LogTest.testCorruptLog
> -
>
> Key: KAFKA-1573
> URL: https://issues.apache.org/jira/browse/KAFKA-1573
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>  Labels: transient-unit-test-failure
> Fix For: 0.10.1.0
>
>
> Here is an example of the test failure trace:
> junit.framework.AssertionFailedError: expected:<87> but was:<68>
>   at junit.framework.Assert.fail(Assert.java:47)
>   at junit.framework.Assert.failNotEquals(Assert.java:277)
>   at junit.framework.Assert.assertEquals(Assert.java:64)
>   at junit.framework.Assert.assertEquals(Assert.java:130)
>   at junit.framework.Assert.assertEquals(Assert.java:136)
>   at 
> kafka.log.LogTest$$anonfun$testCorruptLog$1.apply$mcVI$sp(LogTest.scala:615)
>   at 
> scala.collection.immutable.Range$ByOne$class.foreach$mVc$sp(Range.scala:282)
>   at 
> scala.collection.immutable.Range$$anon$2.foreach$mVc$sp(Range.scala:265)
>   at kafka.log.LogTest.testCorruptLog(LogTest.scala:595)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.junit.internal.runners.TestMethodRunner.executeMethodBody(TestMethodRunner.java:99)
>   at 
> org.junit.internal.runners.TestMethodRunner.runUnprotected(TestMethodRunner.java:81)
>   at 
> org.junit.internal.runners.BeforeAndAfterRunner.runProtected(BeforeAndAfterRunner.java:34)
>   at 
> org.junit.internal.runners.TestMethodRunner.runMethod(TestMethodRunner.java:75)
>   at 
> org.junit.internal.runners.TestMethodRunner.run(TestMethodRunner.java:45)
>   at 
> org.junit.internal.runners.TestClassMethodsRunner.invokeTestMethod(TestClassMethodsRunner.java:71)
>   at 
> org.junit.internal.runners.TestClassMethodsRunner.run(TestClassMethodsRunner.java:35)
>   at 
> org.junit.internal.runners.TestClassRunner$1.runUnprotected(TestClassRunner.java:42)
>   at 
> org.junit.internal.runners.BeforeAndAfterRunner.runProtected(BeforeAndAfterRunner.java:34)
>   at 
> org.junit.internal.runners.TestClassRunner.run(TestClassRunner.java:52)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:80)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:47)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:69)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:49)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at $Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:103)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:355)
>   at 
> org.gradle.internal.concurrent.DefaultExecutorFactory$StoppableExecutorImpl$1.run(DefaultExecutorFactory.java:66)
>   at 
> 

[jira] [Updated] (KAFKA-2943) Transient Failure in kafka.producer.SyncProducerTest.testReachableServer

2016-04-20 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2943?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-2943:
-
Labels: transient-unit-test-failure  (was: )

> Transient Failure in kafka.producer.SyncProducerTest.testReachableServer
> 
>
> Key: KAFKA-2943
> URL: https://issues.apache.org/jira/browse/KAFKA-2943
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>  Labels: transient-unit-test-failure
>
> {code}
> Stacktrace
> java.lang.AssertionError: Unexpected failure sending message to broker. null
>   at org.junit.Assert.fail(Assert.java:88)
>   at 
> kafka.producer.SyncProducerTest.testReachableServer(SyncProducerTest.scala:58)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>   at java.lang.Thread.run(Thread.java:744)
> Standard Output
> [2015-12-03 07:10:17,494] ERROR [Replica Manager on Broker 0]: Error 
> processing append operation on partition 

[jira] [Updated] (KAFKA-2363) ProducerSendTest.testCloseWithZeroTimeoutFromCallerThread Transient Failure

2016-04-20 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2363?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-2363:
-
Labels: newbie transient-unit-test-failure  (was: newbie)

> ProducerSendTest.testCloseWithZeroTimeoutFromCallerThread Transient Failure
> ---
>
> Key: KAFKA-2363
> URL: https://issues.apache.org/jira/browse/KAFKA-2363
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Fangmin Lv
>Assignee: Ben Stopford
>  Labels: newbie, transient-unit-test-failure
> Fix For: 0.10.1.0
>
>
> {code}
> kafka.api.ProducerSendTest > testCloseWithZeroTimeoutFromCallerThread 
> STANDARD_OUT
> [2015-07-24 23:13:05,148] WARN fsync-ing the write ahead log in SyncThread:0 
> took 1084ms which will adversely effect operation latency. See the ZooKeeper 
> troubleshooting guide (org.apache.zookeeper.s
> erver.persistence.FileTxnLog:334)
> kafka.api.ProducerSendTest > testCloseWithZeroTimeoutFromCallerThread FAILED
> java.lang.AssertionError: No request is complete.
> at org.junit.Assert.fail(Assert.java:92)
> at org.junit.Assert.assertTrue(Assert.java:44)
> at 
> kafka.api.ProducerSendTest$$anonfun$testCloseWithZeroTimeoutFromCallerThread$1.apply$mcVI$sp(ProducerSendTest.scala:343)
> at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
> at 
> kafka.api.ProducerSendTest.testCloseWithZeroTimeoutFromCallerThread(ProducerSendTest.scala:340)
> {code}





[jira] [Updated] (KAFKA-2442) QuotasTest should not fail when cpu is busy

2016-04-20 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-2442:
-
Labels: transient-unit-test-failure  (was: )

> QuotasTest should not fail when cpu is busy
> ---
>
> Key: KAFKA-2442
> URL: https://issues.apache.org/jira/browse/KAFKA-2442
> Project: Kafka
>  Issue Type: Bug
>Reporter: Dong Lin
>Assignee: Aditya Auradkar
>  Labels: transient-unit-test-failure
>
> We observed that testThrottledProducerConsumer in QuotasTest may fail or 
> succeed randomly; it appears the test fails when the system is slow. We can 
> add a timer to the integration test to avoid this random failure (see the 
> sketch below).
> See an example failure at 
> https://builds.apache.org/job/kafka-trunk-git-pr/166/console for patch 
> https://github.com/apache/kafka/pull/142.
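A minimal sketch of the timer-based approach, along the lines of the
kafka.utils.TestUtils.waitUntilTrue helper: poll for the expected condition
until a deadline instead of asserting immediately, so a busy CPU merely slows
the test down rather than failing it. The condition in the usage comment
(throttleTimeMetric) is hypothetical, not taken from QuotasTest.

{code}
// Deadline-based wait: retry the condition until it holds or the timeout expires.
def waitUntilTrue(condition: () => Boolean, msg: String, timeoutMs: Long = 15000L): Unit = {
  val deadline = System.currentTimeMillis() + timeoutMs
  while (!condition()) {
    if (System.currentTimeMillis() > deadline)
      throw new AssertionError(msg)
    Thread.sleep(100)  // back off briefly between checks
  }
}

// e.g. instead of asserting a throttle metric immediately after producing:
// waitUntilTrue(() => throttleTimeMetric() > 0, "Producer was never throttled")
{code}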



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3035) Transient: kafka.api.PlaintextConsumerTest > testAutoOffsetReset FAILED

2016-04-20 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3035?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3035:
-
Labels: transient-unit-test-failure  (was: )

> Transient: kafka.api.PlaintextConsumerTest > testAutoOffsetReset FAILED
> ---
>
> Key: KAFKA-3035
> URL: https://issues.apache.org/jira/browse/KAFKA-3035
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Gwen Shapira
>  Labels: transient-unit-test-failure
>
> See: https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/1868/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3246) Transient Failure in kafka.producer.SyncProducerTest.testProduceCorrectlyReceivesResponse

2016-04-20 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3246?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3246:
-
Labels: transient-unit-test-failure  (was: )

> Transient Failure in 
> kafka.producer.SyncProducerTest.testProduceCorrectlyReceivesResponse
> -
>
> Key: KAFKA-3246
> URL: https://issues.apache.org/jira/browse/KAFKA-3246
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>  Labels: transient-unit-test-failure
>
> {code}
> java.lang.AssertionError: expected:<0> but was:<3>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> kafka.producer.SyncProducerTest.testProduceCorrectlyReceivesResponse(SyncProducerTest.scala:182)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> 

[jira] [Updated] (KAFKA-3420) Transient failure in OffsetCommitTest.testNonExistingTopicOffsetCommit

2016-04-20 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3420?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava updated KAFKA-3420:
-
Labels: transient-unit-test-failure  (was: )

> Transient failure in OffsetCommitTest.testNonExistingTopicOffsetCommit
> --
>
> Key: KAFKA-3420
> URL: https://issues.apache.org/jira/browse/KAFKA-3420
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jason Gustafson
>  Labels: transient-unit-test-failure
>
> From a recent build. Possibly related to KAFKA-2068, which was committed 
> recently.
> {code}
> java.lang.AssertionError: expected:<0> but was:<16>
>   at org.junit.Assert.fail(Assert.java:88)
>   at org.junit.Assert.failNotEquals(Assert.java:834)
>   at org.junit.Assert.assertEquals(Assert.java:645)
>   at org.junit.Assert.assertEquals(Assert.java:631)
>   at 
> kafka.server.OffsetCommitTest.testNonExistingTopicOffsetCommit(OffsetCommitTest.scala:308)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
>   at 
> org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:49)
>   at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:606)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
>   at 
> org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
>   at 
> org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
>   at 
> 

[jira] [Updated] (KAFKA-3591) JmxTool should exit out if a provided query matches no values

2016-04-20 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3591?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J updated KAFKA-3591:
---
Status: Patch Available  (was: Open)

> JmxTool should exit out if a provided query matches no values
> -
>
> Key: KAFKA-3591
> URL: https://issues.apache.org/jira/browse/KAFKA-3591
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Harsh J
>Priority: Trivial
>
> Running {{kafka.tools.JmxTool}} with an invalid query, such as 
> {{--object-name "foobar"}}, produces output containing only the "time" field, 
> since no such object exists. If a query has been explicitly provided and 
> matches no objects, the tool should exit out instead of just printing the 
> time data.
> (We should not exit if no filter is provided, though, i.e. if the KAFKA-2278 
> feature is used.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3591) JmxTool should exit out if a provided query matches no values

2016-04-20 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15249727#comment-15249727
 ] 

ASF GitHub Bot commented on KAFKA-3591:
---

GitHub user QwertyManiac opened a pull request:

https://github.com/apache/kafka/pull/1241

KAFKA-3591: JmxTool should exit out if a provided query matches no values.

Testing: I did manual tests by building locally and running the class 
against a remote 0.9.0.0 broker JMX URL. Running with a bad object-name 
produces the exit condition, and running with a good one continues to behave as 
before (prints the values). Running with no object name dumps all the metrics 
(KAFKA-2278).

Docs: This should need no doc changes.

This contribution is my original work and I license the work to the project 
under the project's open source license.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/QwertyManiac/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1241.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1241


commit 2d488462d86e9705dfa7cce61c5f68162f2b7d18
Author: Harsh J 
Date:   2016-04-20T12:00:59Z

KAFKA-3591: JmxTool should exit out if a provided query matches no values




> JmxTool should exit out if a provided query matches no values
> -
>
> Key: KAFKA-3591
> URL: https://issues.apache.org/jira/browse/KAFKA-3591
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Harsh J
>Priority: Trivial
>
> Running {{kafka.tools.JmxTool}} with an invalid query, such as 
> {{--object-name "foobar"}}, produces output containing only the "time" field, 
> since no such object exists. If a query has been explicitly provided and 
> matches no objects, the tool should exit out instead of just printing the 
> time data.
> (We should not exit if no filter is provided, though, i.e. if the KAFKA-2278 
> feature is used.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3591: JmxTool should exit out if a provi...

2016-04-20 Thread QwertyManiac
GitHub user QwertyManiac opened a pull request:

https://github.com/apache/kafka/pull/1241

KAFKA-3591: JmxTool should exit out if a provided query matches no values.

Testing: I did manual tests by building locally and running the class 
against a remote 0.9.0.0 broker JMX URL. Running with a bad object-name 
produces the exit condition, and running with a good one continues to behave as 
before (prints the values). Running with no object name dumps all the metrics 
(KAFKA-2278).

Docs: This should need no doc changes.

This contribution is my original work and I license the work to the project 
under the project's open source license.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/QwertyManiac/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/1241.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #1241


commit 2d488462d86e9705dfa7cce61c5f68162f2b7d18
Author: Harsh J 
Date:   2016-04-20T12:00:59Z

KAFKA-3591: JmxTool should exit out if a provided query matches no values




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3591) JmxTool should exit out if a provided query matches no values

2016-04-20 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3591?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15249722#comment-15249722
 ] 

Harsh J commented on KAFKA-3591:


(Please assign to me)

> JmxTool should exit out if a provided query matches no values
> -
>
> Key: KAFKA-3591
> URL: https://issues.apache.org/jira/browse/KAFKA-3591
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 0.9.0.0
>Reporter: Harsh J
>Priority: Trivial
>
> Running {{kafka.tools.JmxTool}} with an invalid query, such as 
> {{--object-name "foobar"}}, produces output containing only the "time" field, 
> since no such object exists. If a query has been explicitly provided and 
> matches no objects, the tool should exit out instead of just printing the 
> time data.
> (We should not exit if no filter is provided, though, i.e. if the KAFKA-2278 
> feature is used.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3591) JmxTool should exit out if a provided query matches no values

2016-04-20 Thread Harsh J (JIRA)
Harsh J created KAFKA-3591:
--

 Summary: JmxTool should exit out if a provided query matches no 
values
 Key: KAFKA-3591
 URL: https://issues.apache.org/jira/browse/KAFKA-3591
 Project: Kafka
  Issue Type: Improvement
  Components: tools
Affects Versions: 0.9.0.0
Reporter: Harsh J
Priority: Trivial


Running {{kafka.tools.JmxTool}} with an invalid query, such as {{--object-name 
"foobar"}}, produces output containing only the "time" field, since no such 
object exists. If a query has been explicitly provided and matches no objects, 
the tool should exit out instead of just printing the time data.

(We should not exit if no filter is provided, though, i.e. if the KAFKA-2278 
feature is used.)
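For reference, a minimal sketch of the proposed check, assuming JmxTool already
holds an MBeanServerConnection and the parsed --object-name patterns; the
helper name and error message are illustrative, not quoted from the patch:

{code}
import javax.management.{MBeanServerConnection, ObjectName}
import scala.collection.JavaConverters._

// If explicit --object-name patterns were given and none match a registered
// MBean, exit non-zero instead of printing only the time column. An empty
// pattern list (the KAFKA-2278 "dump everything" mode) is left untouched.
def exitIfNoMatch(mbsc: MBeanServerConnection, queries: List[ObjectName]): Unit = {
  if (queries.nonEmpty) {
    val matched = queries.flatMap(q => mbsc.queryNames(q, null).asScala)
    if (matched.isEmpty) {
      System.err.println(s"No matched attributes for the queried objects: $queries")
      System.exit(1)
    }
  }
}
{code}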



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (KAFKA-3572) Metrics of topics still exist when they have been deleted

2016-04-20 Thread Manikumar Reddy (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3572?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar Reddy resolved KAFKA-3572.

Resolution: Duplicate

This issue was fixed in KAFKA-1866, and a similar issue is being fixed in 
KAFKA-3258.

> Metrics of topics still exist when they have been deleted
> -
>
> Key: KAFKA-3572
> URL: https://issues.apache.org/jira/browse/KAFKA-3572
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.8.2.1
> Environment: centos 6.5
>Reporter: Eric Huang
>Priority: Trivial
>
> Does anyone know why metrics for topics still exist after the topics have been 
> deleted? The config item "delete.topic.enable=true" is set in our online 
> Kafka cluster.
> Remarks:
> Part of the metrics content from "kafka-http-metrics-reporter"; the topic 
> "huanggang_test" has already been deleted:
> "kafka.log.Log.partition.0.topic.huanggang_test":{"LogEndOffset":{"type":"gauge","value":35295},"LogStartOffset":{"type":"gauge","value":35295},"NumLogSegments":{"type":"gauge","value":1},"Size":{"type":"gauge","value":0}}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3590) KafkaConsumer fails with "Messages are rejected since there are fewer in-sync replicas than required." when polling

2016-04-20 Thread Sergey Alaev (JIRA)
Sergey Alaev created KAFKA-3590:
---

 Summary: KafkaConsumer fails with "Messages are rejected since 
there are fewer in-sync replicas than required." when polling
 Key: KAFKA-3590
 URL: https://issues.apache.org/jira/browse/KAFKA-3590
 Project: Kafka
  Issue Type: Bug
  Components: clients
Affects Versions: 0.9.0.1
 Environment: JDK1.8 Ubuntu 14.04
Reporter: Sergey Alaev


KafkaConsumer.poll() fails with "Messages are rejected since there are fewer 
in-sync replicas than required." Isn't this message about the minimum number of 
ISRs when *sending* messages?

Stacktrace:
org.apache.kafka.common.KafkaException: Unexpected error from SyncGroup: 
Messages are rejected since there are fewer in-sync replicas than required.
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupRequestHandler.handle(AbstractCoordinator.java:444)
 ~[kafka-clients-0.9.0.1.jar:na]
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$SyncGroupRequestHandler.handle(AbstractCoordinator.java:411)
 ~[kafka-clients-0.9.0.1.jar:na]
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:665)
 ~[kafka-clients-0.9.0.1.jar:na]
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator$CoordinatorResponseHandler.onSuccess(AbstractCoordinator.java:644)
 ~[kafka-clients-0.9.0.1.jar:na]
at 
org.apache.kafka.clients.consumer.internals.RequestFuture$1.onSuccess(RequestFuture.java:167)
 ~[kafka-clients-0.9.0.1.jar:na]
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.fireSuccess(RequestFuture.java:133)
 ~[kafka-clients-0.9.0.1.jar:na]
at 
org.apache.kafka.clients.consumer.internals.RequestFuture.complete(RequestFuture.java:107)
 ~[kafka-clients-0.9.0.1.jar:na]
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient$RequestFutureCompletionHandler.onComplete(ConsumerNetworkClient.java:380)
 ~[kafka-clients-0.9.0.1.jar:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:274) 
~[kafka-clients-0.9.0.1.jar:na]
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
 ~[kafka-clients-0.9.0.1.jar:na]
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
 ~[kafka-clients-0.9.0.1.jar:na]
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
 ~[kafka-clients-0.9.0.1.jar:na]
at 
org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:163)
 ~[kafka-clients-0.9.0.1.jar:na]
at 
org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureActiveGroup(AbstractCoordinator.java:222)
 ~[kafka-clients-0.9.0.1.jar:na]
at 
org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.ensurePartitionAssignment(ConsumerCoordinator.java:311)
 ~[kafka-clients-0.9.0.1.jar:na]
at 
org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:890)
 ~[kafka-clients-0.9.0.1.jar:na]
at 
org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853) 
~[kafka-clients-0.9.0.1.jar:na]
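One plausible reading, stated here as an assumption rather than a confirmed
diagnosis: SyncGroup persists group metadata by appending to the internal
__consumer_offsets topic, so a producer-style "not enough replicas" error from
that append can surface through the consumer. A purely illustrative retry
wrapper around poll(), in case the failure is transient; the helper name and
retry policy are not part of the client API:

{code}
import org.apache.kafka.clients.consumer.{ConsumerRecords, KafkaConsumer}
import org.apache.kafka.common.KafkaException

// Retry poll() a few times in case the SyncGroup failure is transient
// (e.g. the offsets topic temporarily has too few in-sync replicas).
def pollWithRetry[K, V](consumer: KafkaConsumer[K, V], timeoutMs: Long, attempts: Int): ConsumerRecords[K, V] = {
  require(attempts >= 1, "attempts must be at least 1")
  var lastError: KafkaException = null
  for (_ <- 1 to attempts) {
    try {
      return consumer.poll(timeoutMs)
    } catch {
      case e: KafkaException =>
        lastError = e       // SyncGroup errors surface from poll() in 0.9.0.1
        Thread.sleep(1000)  // back off before the next rejoin attempt
    }
  }
  throw lastError
}
{code}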



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

