Re: [DISCUSS] KIP-44 - Allow Kafka to have a customized security protocol

2016-01-26 Thread Rajini Sivaram
Hi Tao,

I have a couple of questions:

   1. Is there a reason why you wouldn't want to implement a custom SASL
   mechanism to use your authentication mechanism? SASL itself aims to provide
   pluggable authentication mechanisms.
   2. The KIP suggests that you are interested in plugging in a custom
   authenticator, but not a custom transport layer. If that is the case, maybe
   you need CUSTOM_PLAINTEXT and CUSTOM_SSL for consistency with the other
   security protocols (which are a combination of transport layer protocol and
   authentication protocol)?
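For context, the JDK's SASL framework already exposes the plug points this suggestion relies on. A minimal, self-contained sketch — the mechanism name `X-TOKEN`, the token value, and all class names are hypothetical, not anything from the KIP — might look like:

```java
import java.nio.charset.StandardCharsets;
import java.security.Provider;
import java.security.Security;
import java.util.Arrays;
import java.util.Map;
import javax.security.auth.callback.CallbackHandler;
import javax.security.sasl.*;

public class CustomSaslDemo {

    // Minimal SaslServer that accepts a single fixed token as the response.
    static class TokenSaslServer implements SaslServer {
        private boolean complete;
        private String authzId;

        @Override public String getMechanismName() { return "X-TOKEN"; }

        @Override public byte[] evaluateResponse(byte[] response) throws SaslException {
            String token = new String(response, StandardCharsets.UTF_8);
            if (!"hunter2".equals(token)) throw new SaslException("bad token");
            complete = true;
            authzId = "demo-user";
            return new byte[0];   // no further challenge; single round trip
        }

        @Override public boolean isComplete() { return complete; }
        @Override public String getAuthorizationID() { return authzId; }
        @Override public byte[] unwrap(byte[] b, int off, int len) { return Arrays.copyOfRange(b, off, off + len); }
        @Override public byte[] wrap(byte[] b, int off, int len) { return Arrays.copyOfRange(b, off, off + len); }
        @Override public Object getNegotiatedProperty(String propName) { return null; }
        @Override public void dispose() { }
    }

    // Factory that the JCA provider mechanism will instantiate reflectively.
    public static class TokenServerFactory implements SaslServerFactory {
        @Override public SaslServer createSaslServer(String mechanism, String protocol,
                String serverName, Map<String, ?> props, CallbackHandler cbh) {
            return "X-TOKEN".equals(mechanism) ? new TokenSaslServer() : null;
        }
        @Override public String[] getMechanismNames(Map<String, ?> props) {
            return new String[] { "X-TOKEN" };
        }
    }

    public static void main(String[] args) throws Exception {
        // Register the factory so Sasl.createSaslServer can discover it by name.
        Provider p = new Provider("TokenProvider", 1.0, "demo SASL provider") {{
            put("SaslServerFactory.X-TOKEN", TokenServerFactory.class.getName());
        }};
        Security.addProvider(p);

        SaslServer server = Sasl.createSaslServer("X-TOKEN", "kafka", "localhost", null, null);
        server.evaluateResponse("hunter2".getBytes(StandardCharsets.UTF_8));
        System.out.println(server.isComplete() + " " + server.getAuthorizationID());
    }
}
```

This is the generic `javax.security.sasl` plug-in path; wiring such a mechanism into Kafka specifically is what KIP-43 discusses.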


Regards,

Rajini

On Tue, Jan 26, 2016 at 6:58 AM, tao xiao  wrote:


> Hi Kafka developers,
>
> I have raised KIP-44, "Allow a customized security protocol", for discussion.
> The goal of this KIP is to enable a customized security protocol where users
> can plug in their own implementation.
>
> Feedback is welcome.
>
>


Re: Kafka KIP meeting Jan 26 at 11:00am PST

2016-01-26 Thread Rajini Sivaram
Jun,

Can you send me an invite, please?

Thank you...

Regards,

Rajini

On Mon, Jan 25, 2016 at 10:56 PM, Jun Rao  wrote:

> Hi, Everyone,
>
> We will have a Kafka KIP meeting tomorrow at 11:00am PST. If you plan to
> attend but haven't received an invite, please let me know. The following is
> the agenda.
>
> Agenda:
>
> KIP-42: Add Producer and Consumer Interceptors
>
> Thanks,
>
> Jun
>


Re: [DISCUSS] KIP-44 - Allow Kafka to have a customized security protocol

2016-01-26 Thread Ismael Juma
Tao,

Thanks for the KIP.

As others are saying, it would be helpful to have more details on why a new
SASL mechanism was rejected in this KIP. An example of how using a new SASL
mechanism would be more complex than using a customised security protocol
would help.

Ismael

On Tue, Jan 26, 2016 at 10:21 AM, Harsha  wrote:

> SASL itself can provide pluggable authentication , why not extend there.
> There is also proposal for SASL/PLAIN which does extend the current
> authentication options. I think thats what Rajini is also talking about.
> -Harsha


Re: [DISCUSS] KIP-44 - Allow Kafka to have a customized security protocol

2016-01-26 Thread Harsha
SASL itself can provide pluggable authentication, so why not extend it there?
There is also a proposal for SASL/PLAIN which does extend the current
authentication options. I think that's what Rajini is also talking about.
-Harsha

On Tue, Jan 26, 2016, at 01:56 AM, tao xiao wrote:
> Hi Rajini,
> 
> I think I need to rephrase some of the wordings in the KIP. I meant to
> provide a customized security protocol which may/may not include SSL
> underneath.  With CUSTOMIZED security protocol users have the ability to
> plugin both authentication and security communication components.


Re: [DISCUSS] KIP-42: Add Producer and Consumer Interceptors

2016-01-26 Thread Ismael Juma
Hi Anna and Neha,

I think it makes a lot of sense to try and keep the interface lean and to
add more methods later when/if there is a need. What is the current
thinking with regards to compatibility when/if we add new methods? A few
options come to mind:

1. Change the interface to an abstract class with empty implementations for
all the methods. This means that the path to adding new methods is clear.
2. Hope we have moved to Java 8 by the time we need to add new methods and
use default methods with an empty implementation for any new method (and
potentially make existing methods default methods too at that point for
consistency).
3. Introduce a new interface that inherits from the existing Interceptor
interface when we need to add new methods.

Option 1 is the easiest and it also means that interceptor users only need
to override the methods that they are interested in (more useful if the
number of methods grows). The downside is that interceptor implementations
cannot inherit from another class (a straightforward workaround is to make
the interceptor a forwarder that calls another class). Also, our existing
callbacks are interfaces, so this seems a bit inconsistent.

Option 2 may be the most appealing one as both users and ourselves retain
flexibility. The main downside is that it relies on us moving to Java 8,
which could potentially be more than a year away (if we support the last
two Java releases).
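A sketch of Option 1, using stand-in types rather than the real producer classes, to show why adding a callback later is non-breaking:

```java
import java.util.ArrayList;
import java.util.List;

// Option 1 sketch: an abstract class whose callbacks all have no-op bodies.
// ProducerRecord here is a stand-in, not the real Kafka class.
public class InterceptorEvolutionDemo {

    static class ProducerRecord { final String value; ProducerRecord(String v) { value = v; } }

    // Abstract base with empty implementations: subclasses override only what
    // they care about, and a method added in a future release (with a no-op
    // body) cannot break existing subclasses.
    abstract static class ProducerInterceptor {
        public ProducerRecord onSend(ProducerRecord record) { return record; }
        public void onAcknowledgement(Exception exception) { }
        // A hypothetical future addition would slot in here as a no-op:
        // public void onClose() { }
    }

    static class CountingInterceptor extends ProducerInterceptor {
        final List<String> seen = new ArrayList<>();
        @Override public ProducerRecord onSend(ProducerRecord record) {
            seen.add(record.value);
            return record;
        }
        // onAcknowledgement deliberately not overridden: the no-op applies.
    }

    public static void main(String[] args) {
        CountingInterceptor interceptor = new CountingInterceptor();
        interceptor.onSend(new ProducerRecord("a"));
        interceptor.onAcknowledgement(null);   // inherited no-op
        System.out.println(interceptor.seen);  // prints [a]
    }
}
```

Under Option 2, the same effect would come from `default` methods on the interface, without blocking subclassing.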

Thoughts?

Ismael

On Tue, Jan 26, 2016 at 4:59 AM, Neha Narkhede  wrote:

> Anna,
>
> I'm also in favor of including just the APIs for which we have a clear use
> case. If more use cases for finer monitoring show up in the future, we can
> always update the interface. Would you please highlight in the KIP the APIs
> that you think we have an immediate use for?
>
> Joel,
>
> Broker-side monitoring makes a lot of sense in the long term though I don't
> think it is a requirement for end-to-end monitoring. With the producer and
> consumer interceptors, you have the ability to get full
> publish-to-subscribe end-to-end monitoring. The broker interceptor
> certainly improves the resolution of monitoring but it is also a riskier
> change. I prefer an incremental approach over a big-bang and recommend
> taking baby-steps. Let's first make sure the producer/consumer interceptors
> are successful. And then come back and add the broker interceptor
> carefully.
>
> Having said that, it would be great to understand your proposal for the
> broker interceptor independently. We can either add an interceptor
> on-append or on-commit. If people want to use this for monitoring, then
> possibly on-commit might be more useful?
>
> Thanks,
> Neha
>
> On Mon, Jan 25, 2016 at 6:47 PM, Jay Kreps  wrote:
>
> > Hey Joel,
> >
> > What is the interface you are thinking of? Something like this:
> > onAppend(String topic, int partition, Records records, long time)
> > ?
> >
> > One challenge right now is that we are still using the old
> > Message/MessageSet classes on the broker, which I'm not sure we'd want to
> > support over the long haul, but it might be okay just to create the
> > records instance for this interface.
> >
> > -Jay
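As a stand-alone illustration of the shape Jay describes — the names and the byte[]-based record type are placeholders for this sketch, not the KIP's API, precisely because the broker still uses the old Message/MessageSet classes:

```java
import java.util.Arrays;
import java.util.List;

// Placeholder sketch of a broker-side on-append hook; not part of KIP-42.
public class OnAppendDemo {

    interface BrokerInterceptor {
        // Invoked after a batch of records is appended to a partition's log.
        void onAppend(String topic, int partition, List<byte[]> records, long timeMs);
    }

    // Example implementation: counts appended records (e.g. for auditing).
    static class CountingInterceptor implements BrokerInterceptor {
        long total = 0;
        @Override public void onAppend(String topic, int partition,
                                       List<byte[]> records, long timeMs) {
            total += records.size();
        }
    }

    public static void main(String[] args) {
        CountingInterceptor interceptor = new CountingInterceptor();
        interceptor.onAppend("logs", 0,
                Arrays.asList("m1".getBytes(), "m2".getBytes()),
                System.currentTimeMillis());
        System.out.println(interceptor.total);  // prints 2
    }
}
```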
> >
> > On Mon, Jan 25, 2016 at 12:37 PM, Joel Koshy 
> wrote:
> >
> > > I'm definitely in favor of having such hooks in the produce/consume
> > > life-cycle. Not sure if people remember this but in Kafka 0.7 this was
> > > pretty much how it was:
> > > https://github.com/apache/kafka/blob/0.7/core/src/main/scala/kafka/producer/async/CallbackHandler.scala
> > > i.e., we had something similar to the interceptor proposal for various
> > > stages of the producer request. The producer provided call-backs for
> > > beforeEnqueue, afterEnqueue, afterDequeuing, beforeSending, etc. So at
> > > LinkedIn we in fact did auditing within these call-backs (and not
> > > explicitly in the wrapper). Over time and with 0.8 we moved that out to
> > > the wrapper libraries.
> > >
> > > On a side-note: while audit and other monitoring can be done internally
> > > in a convenient way, I think it should be clarified that having a wrapper
> > > is in general not a bad idea and I would even consider it to be a
> > > best practice. Even with 0.7 we still had a wrapper library and that API
> > > has largely stayed the same and has helped protect against (sometimes
> > > backwards incompatible) changes in open source.
> > >
> > > While we are on this topic I have one comment and Anna, you may have
> > > already considered this but I don't see mention of it in the KIP:
> > >
> > > Add a custom message interceptor/validator on the broker on message
> > > arrival.
> > >
> > > We decompress and do basic validation of messages on arrival. I think
> > > there is value in supporting custom validation and expanding it to
> > > support custom on-arrival processing. Here is a specific use-case I have
> > > in mind. The blog that James referenced 

[jira] [Commented] (KAFKA-3149) Extend SASL implementation to support more mechanisms

2016-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15117091#comment-15117091
 ] 

ASF GitHub Bot commented on KAFKA-3149:
---

GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/812

KAFKA-3149: Extend SASL implementation to support more mechanisms

Code changes corresponding to KIP-43 to enable review of the KIP.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-3149

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/812.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #812


commit 67ec8fb9ef90e1f46e2e4d68f961f95fe6162cc4
Author: Rajini Sivaram 
Date:   2016-01-26T11:30:51Z

KAFKA-3149: Extend SASL implementation to support more mechanisms




> Extend SASL implementation to support more mechanisms
> -
>
> Key: KAFKA-3149
> URL: https://issues.apache.org/jira/browse/KAFKA-3149
> Project: Kafka
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 0.9.0.0
>Reporter: Rajini Sivaram
>Assignee: Rajini Sivaram
>
> Make SASL implementation more configurable to enable integration with 
> existing authentication servers.
> Details are in KIP-43: 
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-43%3A+Kafka+SASL+enhancements]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (KAFKA-3150) kafka.tools.UpdateOffsetsInZK not work (sasl enabled)

2016-01-26 Thread linbao111 (JIRA)
linbao111 created KAFKA-3150:


 Summary: kafka.tools.UpdateOffsetsInZK not work (sasl enabled)
 Key: KAFKA-3150
 URL: https://issues.apache.org/jira/browse/KAFKA-3150
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.0.1
 Environment: redhat as6.5
Reporter: linbao111


./bin/kafka-run-class.sh kafka.tools.UpdateOffsetsInZK earliest 
config/consumer.properties   alalei_2  
[2016-01-26 17:20:49,920] WARN Property sasl.kerberos.service.name is not valid 
(kafka.utils.VerifiableProperties)
[2016-01-26 17:20:49,920] WARN Property security.protocol is not valid 
(kafka.utils.VerifiableProperties)
Exception in thread "main" kafka.common.BrokerEndPointNotAvailableException: 
End point PLAINTEXT not found for broker 1
at kafka.cluster.Broker.getBrokerEndPoint(Broker.scala:136)
at 
kafka.tools.UpdateOffsetsInZK$$anonfun$getAndSetOffsets$2.apply$mcVI$sp(UpdateOffsetsInZK.scala:70)
at 
kafka.tools.UpdateOffsetsInZK$$anonfun$getAndSetOffsets$2.apply(UpdateOffsetsInZK.scala:59)
at 
kafka.tools.UpdateOffsetsInZK$$anonfun$getAndSetOffsets$2.apply(UpdateOffsetsInZK.scala:59)
at 
scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at 
kafka.tools.UpdateOffsetsInZK$.getAndSetOffsets(UpdateOffsetsInZK.scala:59)
at kafka.tools.UpdateOffsetsInZK$.main(UpdateOffsetsInZK.scala:43)
at kafka.tools.UpdateOffsetsInZK.main(UpdateOffsetsInZK.scala)


same error for:
./bin/kafka-consumer-offset-checker.sh  --broker-info --group 
test-consumer-group --topic alalei_2 --zookeeper slave1:2181
[2016-01-26 17:23:45,218] WARN WARNING: ConsumerOffsetChecker is deprecated and 
will be dropped in releases following 0.9.0. Use ConsumerGroupCommand instead. 
(kafka.tools.ConsumerOffsetChecker$)
Exiting due to: End point PLAINTEXT not found for broker 0.

./bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zookeeper 
slave1:2181 --group  test-consumer-group
[2016-01-26 17:26:15,075] WARN WARNING: ConsumerOffsetChecker is deprecated and 
will be dropped in releases following 0.9.0. Use ConsumerGroupCommand instead. 
(kafka.tools.ConsumerOffsetChecker$)
Exiting due to: End point PLAINTEXT not found for broker 0






Re: [DISCUSS] KIP-43: Kafka SASL enhancements

2016-01-26 Thread Ismael Juma
Hi Rajini,

Thanks for the KIP. As stated in the KIP, it does not address "Support for
multiple SASL mechanisms within a broker". Maybe we should also mention
this in the "Rejected Alternatives" section with the reasoning. I think
it's particularly relevant to understand if it's not being proposed because
we don't think it's useful or due to the additional implementation
complexity (it's probably a combination). If we think this could be useful
in the future, it would also be worth thinking about how it is affected if
we do KIP-43 first (i.e. will it be easier, harder, etc.).

Thanks,
Ismael

On Mon, Jan 25, 2016 at 9:55 PM, Rajini Sivaram <
rajinisiva...@googlemail.com> wrote:

> I have just created KIP-43 to extend the SASL implementation in Kafka to
> support new SASL mechanisms.
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-43%3A+Kafka+SASL+enhancements
>
>
> Comments and suggestions are appreciated.
>
>
> Thank you...
>
> Regards,
>
> Rajini
>


[jira] [Commented] (KAFKA-3068) NetworkClient may connect to a different Kafka cluster than originally configured

2016-01-26 Thread Eno Thereska (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3068?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15117071#comment-15117071
 ] 

Eno Thereska commented on KAFKA-3068:
-

Ok, I'll revert to using the 0.8.2 solution (by updating the PR) and draft a 
KIP for moving forward. Thanks.

> NetworkClient may connect to a different Kafka cluster than originally 
> configured
> -
>
> Key: KAFKA-3068
> URL: https://issues.apache.org/jira/browse/KAFKA-3068
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Eno Thereska
>
> In https://github.com/apache/kafka/pull/290, we added the logic to cache all 
> brokers (id and ip) that the client has ever seen. If we can't find an 
> available broker from the current Metadata, we will pick a broker that we 
> have ever seen (in NetworkClient.leastLoadedNode()).
> One potential problem this logic can introduce is the following. Suppose that 
> we have a broker with id 1 in a Kafka cluster. A producer client remembers 
> this broker in nodesEverSeen. At some point, we bring down this broker and 
> use the host in a different Kafka cluster. Then, the producer client uses 
> this broker from nodesEverSeen to refresh metadata. It will find the metadata 
> in a different Kafka cluster and start producing data there.





Re: [DISCUSS] KIP-43: Kafka SASL enhancements

2016-01-26 Thread Rajini Sivaram
Ismael,

Thank you for your review. The main reason I didn't address support for
multiple mechanisms within a broker is that it requires changes to the
wire protocol to propagate mechanisms. But I do agree that we need to
understand whether it would be even harder to support this in the future.
I will give it some thought and write it up in the KIP.

Regards,

Rajini

On Tue, Jan 26, 2016 at 10:44 AM, Ismael Juma  wrote:

> Hi Rajini,
>
> Thanks for the KIP. As stated in the KIP, it does not address "Support for
> multiple SASL mechanisms within a broker". Maybe we should also mention
> this in the "Rejected Alternatives" section with the reasoning. I think
> it's particularly relevant to understand if it's not being proposed because
> we don't think it's useful or due to the additional implementation
> complexity (it's probably a combination). If we think this could be useful
> in the future, it would also be worth thinking about how it is affected if
> we do KIP-43 first (ie will it be easier, harder, etc.)
>
> Thanks,
> Ismael


Re: [DISCUSS] KIP-44 - Allow Kafka to have a customized security protocol

2016-01-26 Thread tao xiao
Hi Rajini,

I think I need to rephrase some of the wording in the KIP. I meant to
provide a customized security protocol which may or may not include SSL
underneath. With a CUSTOMIZED security protocol, users have the ability to
plug in both the authentication and secure-communication components.


On Tue, 26 Jan 2016 at 17:45 Rajini Sivaram 
wrote:

> Hi Tao,
>
> I have a couple of questions:
>
>1. Is there a reason why you wouldn't want to implement a custom SASL
>mechanism to use your authentication mechanism? SASL itself aims to
> provide
>pluggable authentication mechanisms.
>2. The KIP suggests that you are interested in plugging in a custom
>authenticator, but not a custom transport layer. If that is the case,
> maybe
>you need CUSTOM_PLAINTEXT and CUSTOM_SSL for consistency with the other
>security protocols (which are a combination of transport layer protocol
> and
>authentication protocol)?
>
>
> Regards,
>
> Rajini


Re: [DISCUSS] KIP-43: Kafka SASL enhancements

2016-01-26 Thread Rajini Sivaram
Hi Tao,

Thank you for the review. The changes I had in mind are in the PR
https://github.com/apache/kafka/pull/812. Login for non-Kerberos protocols
contains very little logic. I was expecting that, combined with a custom
login module specified in the JAAS configuration, this would give sufficient
flexibility. Is there a specific use case you have in mind where you need to
customize the Login code?
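For reference, the kind of JAAS configuration this implies would look roughly like the fragment below. `KafkaServer` is the section name Kafka's broker reads; the module class and its option are hypothetical stand-ins:

```
KafkaServer {
    com.example.auth.CustomLoginModule required
    tokenEndpoint="https://auth.example.com/token";
};
```

The custom login module supplies credentials; the SASL mechanism implementation then uses whatever the module put in the Subject.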

Regards,

Rajini

On Tue, Jan 26, 2016 at 11:15 AM, tao xiao  wrote:

> Hi Rajini,
>
> I think it makes sense to change LoginManager or Login to an interface
> which users can extend to provide their own logic of login otherwise it is
> hard for users to implement a custom SASL mechanism but have no control
> over login


[jira] [Commented] (KAFKA-3006) Make collection default container type for sequences in the consumer API

2016-01-26 Thread Pierre-Yves Ritschard (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15116960#comment-15116960
 ] 

Pierre-Yves Ritschard commented on KAFKA-3006:
--

[~gwenshap] - thanks! the KIP is up: 
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61337336

> Make collection default container type for sequences in the consumer API
> 
>
> Key: KAFKA-3006
> URL: https://issues.apache.org/jira/browse/KAFKA-3006
> Project: Kafka
>  Issue Type: Improvement
>  Components: clients
>Affects Versions: 0.9.0.0
>Reporter: Pierre-Yves Ritschard
>  Labels: patch
>
> The KafkaConsumer API has some annoying inconsistencies in the usage of 
> collection types. For example, subscribe() takes a list, but subscription() 
> returns a set. Similarly for assign() and assignment(). We also have pause(),
> seekToBeginning(), seekToEnd(), and resume(), which annoyingly use a 
> variable argument array, which means you have to copy the result of 
> assignment() to an array if you want to pause all assigned partitions. We can 
> solve these issues by adding the following variants:
> {code}
> void subscribe(Collection<String> topics);
> void subscribe(Collection<String> topics, ConsumerRebalanceListener listener);
> void assign(Collection<TopicPartition> partitions);
> void pause(Collection<TopicPartition> partitions);
> void resume(Collection<TopicPartition> partitions);
> void seekToBeginning(Collection<TopicPartition> partitions);
> void seekToEnd(Collection<TopicPartition> partitions);
> {code}
> This issue supersedes KAFKA-2991.
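To make the varargs pain point concrete, here is a small stand-alone sketch; the types are stand-ins, not the real KafkaConsumer API:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.HashSet;
import java.util.Set;

// Contrasts the varargs and Collection signatures described in the issue.
public class PauseDemo {

    static class TopicPartition {
        final String topic; final int partition;
        TopicPartition(String topic, int partition) { this.topic = topic; this.partition = partition; }
    }

    // Varargs style: a caller holding the Set from assignment() must copy it.
    static int pauseVarargs(TopicPartition... partitions) { return partitions.length; }

    // Proposed Collection style: the Set passes straight through, no copy.
    static int pauseCollection(Collection<TopicPartition> partitions) { return partitions.size(); }

    public static void main(String[] args) {
        Set<TopicPartition> assignment = new HashSet<>(Arrays.asList(
                new TopicPartition("logs", 0), new TopicPartition("logs", 1)));

        // Old API shape forces an intermediate array:
        int a = pauseVarargs(assignment.toArray(new TopicPartition[0]));
        // Proposed shape accepts the collection directly:
        int b = pauseCollection(assignment);

        System.out.println(a + " " + b);  // prints 2 2
    }
}
```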





[jira] [Created] (KAFKA-3151) kafka-consumer-groups.sh fail with sasl enabled

2016-01-26 Thread linbao111 (JIRA)
linbao111 created KAFKA-3151:


 Summary: kafka-consumer-groups.sh fail with sasl enabled 
 Key: KAFKA-3151
 URL: https://issues.apache.org/jira/browse/KAFKA-3151
 Project: Kafka
  Issue Type: Bug
Affects Versions: 0.9.0.1
 Environment: redhat as6.5
Reporter: linbao111


./bin/kafka-consumer-groups.sh --new-consumer  --bootstrap-server 
slave1.otocyon.com:9092 --list
Error while executing consumer group command Request METADATA failed on brokers 
List(Node(-1, slave1.otocyon.com, 9092))
java.lang.RuntimeException: Request METADATA failed on brokers List(Node(-1, 
slave1.otocyon.com, 9092))
at kafka.admin.AdminClient.sendAnyNode(AdminClient.scala:73)
at kafka.admin.AdminClient.findAllBrokers(AdminClient.scala:93)
at kafka.admin.AdminClient.listAllGroups(AdminClient.scala:101)
at kafka.admin.AdminClient.listAllGroupsFlattened(AdminClient.scala:122)
at 
kafka.admin.AdminClient.listAllConsumerGroupsFlattened(AdminClient.scala:126)
at 
kafka.admin.ConsumerGroupCommand$KafkaConsumerGroupService.list(ConsumerGroupCommand.scala:310)
at kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:61)
at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)

same error for:
bin/kafka-run-class.sh kafka.admin.ConsumerGroupCommand  --bootstrap-server 
slave16:9092,app:9092 --describe --group test-consumer-group  --new-consumer
Error while executing consumer group command Request GROUP_COORDINATOR failed 
on brokers List(Node(-1, slave16, 9092), Node(-2, app, 9092))
java.lang.RuntimeException: Request GROUP_COORDINATOR failed on brokers 
List(Node(-1, slave16, 9092), Node(-2, app, 9092))
at kafka.admin.AdminClient.sendAnyNode(AdminClient.scala:73)
at kafka.admin.AdminClient.findCoordinator(AdminClient.scala:78)
at kafka.admin.AdminClient.describeGroup(AdminClient.scala:130)
at kafka.admin.AdminClient.describeConsumerGroup(AdminClient.scala:152)
at 
kafka.admin.ConsumerGroupCommand$KafkaConsumerGroupService.describeGroup(ConsumerGroupCommand.scala:314)
at 
kafka.admin.ConsumerGroupCommand$ConsumerGroupService$class.describe(ConsumerGroupCommand.scala:84)
at 
kafka.admin.ConsumerGroupCommand$KafkaConsumerGroupService.describe(ConsumerGroupCommand.scala:302)
at kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:63)
at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)





Re: [DISCUSS] KIP-43: Kafka SASL enhancements

2016-01-26 Thread tao xiao
Hi Rajini,

I think it makes sense to change LoginManager or Login to an interface
which users can extend to provide their own login logic; otherwise users
can implement a custom SASL mechanism but have no control over login.

On Tue, 26 Jan 2016 at 18:45 Ismael Juma  wrote:

> Hi Rajini,
>
> Thanks for the KIP. As stated in the KIP, it does not address "Support for
> multiple SASL mechanisms within a broker". Maybe we should also mention
> this in the "Rejected Alternatives" section with the reasoning. I think
> it's particularly relevant to understand if it's not being proposed because
> we don't think it's useful or due to the additional implementation
> complexity (it's probably a combination). If we think this could be useful
> in the future, it would also be worth thinking about how it is affected if
> we do KIP-43 first (ie will it be easier, harder, etc.)
>
> Thanks,
> Ismael


[GitHub] kafka pull request: KAFKA-3149: Extend SASL implementation to supp...

2016-01-26 Thread rajinisivaram
GitHub user rajinisivaram opened a pull request:

https://github.com/apache/kafka/pull/812

KAFKA-3149: Extend SASL implementation to support more mechanisms

Code changes corresponding to KIP-43 to enable review of the KIP.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/rajinisivaram/kafka KAFKA-3149

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/812.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #812


commit 67ec8fb9ef90e1f46e2e4d68f961f95fe6162cc4
Author: Rajini Sivaram 
Date:   2016-01-26T11:30:51Z

KAFKA-3149: Extend SASL implementation to support more mechanisms




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


Build failed in Jenkins: kafka-trunk-jdk7 #993

2016-01-26 Thread Apache Jenkins Server
See 

Changes:

[cshapi] MINOR: remove FilteredIterator

--
[...truncated 3302 lines...]

kafka.log.BrokerCompressionTest > testBrokerSideCompression[0] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[1] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[2] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[3] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[4] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[5] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[6] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[7] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[8] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[9] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[10] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[11] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[12] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[13] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[14] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[15] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[16] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[17] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[18] PASSED

kafka.log.BrokerCompressionTest > testBrokerSideCompression[19] PASSED

kafka.log.OffsetMapTest > testClear PASSED

kafka.log.OffsetMapTest > testBasicValidation PASSED

kafka.log.LogManagerTest > testCleanupSegmentsToMaintainSize PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithRelativeDirectory 
PASSED

kafka.log.LogManagerTest > testGetNonExistentLog PASSED

kafka.log.LogManagerTest > testTwoLogManagersUsingSameDirFails PASSED

kafka.log.LogManagerTest > testLeastLoadedAssignment PASSED

kafka.log.LogManagerTest > testCleanupExpiredSegments PASSED

kafka.log.LogManagerTest > testCheckpointRecoveryPoints PASSED

kafka.log.LogManagerTest > testTimeBasedFlush PASSED

kafka.log.LogManagerTest > testCreateLog PASSED

kafka.log.LogManagerTest > testRecoveryDirectoryMappingWithTrailingSlash PASSED

kafka.log.LogSegmentTest > testRecoveryWithCorruptMessage PASSED

kafka.log.LogSegmentTest > testRecoveryFixesCorruptIndex PASSED

kafka.log.LogSegmentTest > testReadFromGap PASSED

kafka.log.LogSegmentTest > testTruncate PASSED

kafka.log.LogSegmentTest > testReadBeforeFirstOffset PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeAppendMessage PASSED

kafka.log.LogSegmentTest > testChangeFileSuffixes PASSED

kafka.log.LogSegmentTest > testMaxOffset PASSED

kafka.log.LogSegmentTest > testNextOffsetCalculation PASSED

kafka.log.LogSegmentTest > testReadOnEmptySegment PASSED

kafka.log.LogSegmentTest > testReadAfterLast PASSED

kafka.log.LogSegmentTest > testCreateWithInitFileSizeClearShutdown PASSED

kafka.log.LogSegmentTest > testTruncateFull PASSED

kafka.log.FileMessageSetTest > testTruncate PASSED

kafka.log.FileMessageSetTest > testIterationOverPartialAndTruncation PASSED

kafka.log.FileMessageSetTest > testRead PASSED

kafka.log.FileMessageSetTest > testFileSize PASSED

kafka.log.FileMessageSetTest > testIteratorWithLimits PASSED

kafka.log.FileMessageSetTest > testPreallocateTrue PASSED

kafka.log.FileMessageSetTest > testIteratorIsConsistent PASSED

kafka.log.FileMessageSetTest > testIterationDoesntChangePosition PASSED

kafka.log.FileMessageSetTest > testWrittenEqualsRead PASSED

kafka.log.FileMessageSetTest > testWriteTo PASSED

kafka.log.FileMessageSetTest > testPreallocateFalse PASSED

kafka.log.FileMessageSetTest > testPreallocateClearShutdown PASSED

kafka.log.FileMessageSetTest > testSearch PASSED

kafka.log.FileMessageSetTest > testSizeInBytes PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[0] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[1] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[2] PASSED

kafka.log.LogCleanerIntegrationTest > cleanerTest[3] PASSED

kafka.log.LogTest > testParseTopicPartitionNameForMissingTopic PASSED

kafka.log.LogTest > testIndexRebuild PASSED

kafka.log.LogTest > testLogRolls PASSED

kafka.log.LogTest > testMessageSizeCheck PASSED

kafka.log.LogTest > testAsyncDelete PASSED

kafka.log.LogTest > testReadOutOfRange PASSED

kafka.log.LogTest > testReadAtLogGap PASSED

kafka.log.LogTest > testTimeBasedLogRoll PASSED

kafka.log.LogTest > testLoadEmptyLog PASSED

kafka.log.LogTest > testMessageSetSizeCheck PASSED

kafka.log.LogTest > testIndexResizingAtTruncation PASSED

kafka.log.LogTest > testCompactedTopicConstraints PASSED

kafka.log.LogTest > testThatGarbageCollectingSegmentsDoesntChangeOffset PASSED

kafka.log.LogTest > testAppendAndReadWithSequentialOffsets PASSED

kafka.log.LogTest > 

[jira] [Assigned] (KAFKA-2673) Log JmxTool output to logger

2016-01-26 Thread chen zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2673?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chen zhu reassigned KAFKA-2673:
---

Assignee: chen zhu

> Log JmxTool output to logger
> 
>
> Key: KAFKA-2673
> URL: https://issues.apache.org/jira/browse/KAFKA-2673
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 0.8.2.1
>Reporter: Eno Thereska
>Assignee: chen zhu
>Priority: Trivial
>  Labels: newbie
> Fix For: 0.8.1.2
>
>
> Currently JmxTool outputs the data into a CSV file. It could be of value to 
> have the data sent to a logger specified in a log4j configuration file.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (KAFKA-2676) AclCommandTest has wrong package name

2016-01-26 Thread chen zhu (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2676?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chen zhu reassigned KAFKA-2676:
---

Assignee: chen zhu

> AclCommandTest has wrong package name 
> --
>
> Key: KAFKA-2676
> URL: https://issues.apache.org/jira/browse/KAFKA-2676
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: chen zhu
>  Labels: newbie
>
> AclCommandTest has the package unit.kafka.security.auth. We should remove the 
> unit part. ResourceTypeTest has the same issue.





[jira] [Assigned] (KAFKA-3142) Improve error message in kstreams

2016-01-26 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reassigned KAFKA-3142:


Assignee: Guozhang Wang

> Improve error message in kstreams
> -
>
> Key: KAFKA-3142
> URL: https://issues.apache.org/jira/browse/KAFKA-3142
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jay Kreps
>Assignee: Guozhang Wang
>
> If you have a bug in your kstream code the error message could be slightly 
> better.
> 1. Not sure what the second error about it already being closed is.
> 2. I recommend we avoid wrapping the exception (at least for runtime 
> exceptions) so it is more clear to the user that the error is coming from 
> their code not ours.
> {code}
> [2016-01-23 14:40:59,776] ERROR Uncaught error during processing in thread 
> [StreamThread-1]:  
> (org.apache.kafka.streams.processor.internals.StreamThread:222)
> org.apache.kafka.common.KafkaException: java.lang.NullPointerException
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:327)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:220)
> Caused by: java.lang.NullPointerException
>   at TestKstreamsApi.lambda$0(TestKstreamsApi.java:27)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamMap$KStreamMapProcessor.process(KStreamMap.java:42)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:68)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.forward(StreamTask.java:324)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:175)
>   at 
> org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:56)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:170)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:310)
>   ... 1 more
> [2016-01-23 14:40:59,803] ERROR Failed to close a StreamTask #0_0 in thread 
> [StreamThread-1]:  
> (org.apache.kafka.streams.processor.internals.StreamThread:586)
> java.lang.IllegalStateException: This consumer has already been closed.
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.ensureNotClosed(KafkaConsumer.java:1281)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.acquire(KafkaConsumer.java:1292)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.assign(KafkaConsumer.java:779)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.close(ProcessorStateManager.java:339)
>   at 
> org.apache.kafka.streams.processor.internals.AbstractTask.close(AbstractTask.java:95)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.close(StreamTask.java:306)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.closeOne(StreamThread.java:584)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.removeStreamTasks(StreamThread.java:571)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.shutdown(StreamThread.java:265)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:225)
> {code}
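Point 2 of the report, surfacing user errors unwrapped, could be sketched as follows. This is a hypothetical helper, not the actual streams internals; the name `ErrorPropagation` is illustrative:

```java
// Sketch: rethrow RuntimeExceptions from user code as-is instead of wrapping
// them in KafkaException, so the top of the stack trace points at the user's
// own lambda (e.g. TestKstreamsApi.lambda$0 above) rather than framework code.
import java.util.concurrent.Callable;

public class ErrorPropagation {
    /** Runs user code; RuntimeExceptions propagate unwrapped. */
    public static <T> T call(Callable<T> userCode) {
        try {
            return userCode.call();
        } catch (RuntimeException e) {
            throw e; // user's exception type and stack top survive intact
        } catch (Exception e) {
            throw new RuntimeException(e); // checked exceptions still need wrapping
        }
    }
}
```

With this pattern a NullPointerException thrown in user code reaches the uncaught-error handler as a NullPointerException, not as a KafkaException wrapping one.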





[jira] [Assigned] (KAFKA-3125) Exception Hierarchy for Streams

2016-01-26 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang reassigned KAFKA-3125:


Assignee: Guozhang Wang

> Exception Hierarchy for Streams
> ---
>
> Key: KAFKA-3125
> URL: https://issues.apache.org/jira/browse/KAFKA-3125
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
>Assignee: Guozhang Wang
> Fix For: 0.9.1.0
>
>
> Currently Kafka Streams does not have its own exception category: we only have 
> one TopologyException that extends from KafkaException.
> It's better to start thinking about categorizing exceptions in Streams with a 
> common parent of "StreamsException". For example:
> 1. What type of exceptions should be exposed to users at job runtime; what 
> type of exceptions should be exposed at "topology build time".
> 2. Should KafkaStreaming.start / stop ever need to throw any exceptions?
> 3. etc.
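The categorization proposed in the issue could be sketched as below. All class names here are illustrative, not the committed API:

```java
// Hypothetical sketch: a common StreamsException parent under KafkaException,
// with separate subtypes for topology-build-time and job-runtime failures.
public class ExceptionHierarchy {
    public static class KafkaException extends RuntimeException {
        public KafkaException(String message) { super(message); }
    }
    public static class StreamsException extends KafkaException {
        public StreamsException(String message) { super(message); }
    }
    // Thrown while building the topology, before the job starts.
    public static class TopologyBuilderException extends StreamsException {
        public TopologyBuilderException(String message) { super(message); }
    }
    // Thrown while processing records at job runtime.
    public static class ProcessorStateException extends StreamsException {
        public ProcessorStateException(String message) { super(message); }
    }
}
```

A single parent lets callers of start/stop catch StreamsException broadly while still distinguishing build-time from runtime failures by subtype.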





[jira] [Updated] (KAFKA-3132) URI scheme in "listeners" property should not be case-sensitive

2016-01-26 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira updated KAFKA-3132:

   Resolution: Fixed
Fix Version/s: 0.9.1.0
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 811
[https://github.com/apache/kafka/pull/811]

> URI scheme in "listeners" property should not be case-sensitive
> ---
>
> Key: KAFKA-3132
> URL: https://issues.apache.org/jira/browse/KAFKA-3132
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.9.0.0
>Reporter: Jake Robb
>Assignee: Grant Henke
>Priority: Minor
>  Labels: newbie
> Fix For: 0.9.1.0
>
>
> I configured my Kafka brokers as follows:
> {{listeners=plaintext://kafka01:9092,ssl://kafka01:9093}}
> With this config, my Kafka brokers start, print out all of the config 
> properties, and exit quietly. No errors, nothing in the log. No indication of 
> a problem whatsoever, let alone the nature of said problem.
> Then, I changed my config as follows:
> {{listeners=PLAINTEXT://kafka01:9092,SSL://kafka01:9093}}
> Now they start and run just fine.
> Per [RFC-3986|https://tools.ietf.org/html/rfc3986#section-6.2.2.1]:
> {quote}
> When a URI uses components of the generic syntax, the component
> syntax equivalence rules always apply; namely, that the scheme and
> host are case-insensitive and therefore should be normalized to
> lowercase.  For example, the URI <HTTP://www.EXAMPLE.com/> is
> equivalent to <http://www.example.com/>.
> {quote}
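The fix implied by the RFC rule quoted above can be sketched as scheme normalization before matching. Method and class names here are illustrative, not Kafka's actual parsing code:

```java
// Sketch: extract and normalize the scheme of a "listeners" entry, since
// RFC 3986 makes URI schemes case-insensitive. After normalization,
// "plaintext://kafka01:9092" matches the PLAINTEXT security protocol.
import java.util.Locale;

public class ListenerSchemeNormalizer {
    public static String schemeOf(String listener) {
        int sep = listener.indexOf("://");
        if (sep < 0)
            throw new IllegalArgumentException("listener has no scheme: " + listener);
        // Normalize before matching against PLAINTEXT / SSL / SASL_PLAINTEXT / SASL_SSL.
        return listener.substring(0, sep).toUpperCase(Locale.ROOT);
    }
}
```

Normalizing once at parse time also avoids the silent-exit failure mode described in the report, since a lowercase scheme no longer falls through the protocol lookup.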





Build failed in Jenkins: kafka-trunk-jdk8 #320

2016-01-26 Thread Apache Jenkins Server
See 

Changes:

[cshapi] MINOR: remove FilteredIterator

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 9ffa907d704b424ad05c18d834cfefdc1c7ad22a 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 9ffa907d704b424ad05c18d834cfefdc1c7ad22a
 > git rev-list e9a72ceab6e0ceaf4d2125756f07154cd15a7178 # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson1087026905110920189.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 15.91 secs
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson3989824260206570302.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 16.429 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2


Build failed in Jenkins: kafka-trunk-jdk7 #994

2016-01-26 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] MINOR: join test for windowed keys

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-4 (docker Ubuntu ubuntu4 ubuntu yahoo-not-h2) in 
workspace 
 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 22de0a8ab5d0e84fa40016754f9a8eff8193aa89 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 22de0a8ab5d0e84fa40016754f9a8eff8193aa89
 > git rev-list 9ffa907d704b424ad05c18d834cfefdc1c7ad22a # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson5354669192199590310.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 17.314 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson5388312364483730613.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 20.628 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


Re: [DISCUSS] KIP-32 - Add CreateTime and LogAppendTime to Kafka message

2016-01-26 Thread Guozhang Wang
One motivation of my proposal is actually to avoid any clients trying to
read the timestamp type from the topic metadata response and behaving
differently, since:

1) topic metadata response is not always in-sync with the source-of-truth
(ZK), hence by the time the clients realize that the config has changed it may
already be too late (i.e. for the consumer, records with the wrong timestamp
could already have been returned to the user).

2) the client logic could be a bit simpler, and this will benefit non-Java
development a lot. Also we can avoid adding this into the topic metadata
response.

Guozhang

On Tue, Jan 26, 2016 at 3:20 PM, Becket Qin  wrote:

> My hesitation about the changed protocol is that I think if we will have
> topic configuration returned in the topic metadata, the current protocol
> makes more sense, because the timestamp type is a topic level setting so we
> don't need to put it into each message. That is assuming the timestamp type
> change on a topic rarely happens and, if it is ever needed, the existing
> data should be wiped out.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
> On Tue, Jan 26, 2016 at 2:07 PM, Becket Qin  wrote:
>
> > Bump up this thread per discussion on the KIP hangout.
> >
> > During the implementation of the KIP, Guozhang raised another proposal on
> > how to indicate the message timestamp type used by messages. So we want to
> > see people's opinion on this proposal.
> >
> > The difference between current and the new proposal only differs on
> > messages that are a) compressed, and b) using LogAppendTime
> >
> > For compressed messages using LogAppendTime, the timestamps in the current
> > proposal are handled as below:
> > 1. When a producer produces the messages, it tries to set timestamp to -1
> > for inner messages if it knows LogAppendTime is used.
> > 2. When a broker receives the messages, it will overwrite the timestamp of
> > inner message to -1 if needed and write server time to the wrapper message.
> > Broker will do re-compression if inner message timestamp is overwritten.
> > 3. When a consumer receives the messages, it will see the inner message
> > timestamp is -1 so the wrapper message timestamp is used.
> >
> > Implementation wise, this proposal requires the producer to set timestamp
> > for inner messages correctly to avoid broker side re-compression. To do
> > that, the short term solution is to let producer infer the timestamp type
> > returned by broker in ProduceResponse and set correct timestamp afterwards.
> > This means the first few batches will still need re-compression on the
> > broker. The long term solution is to have producer get topic configuration
> > during metadata update.
> >
> >
> > The proposed modification is:
> > 1. When a producer produces the messages, it always using create time.
> > 2. When a broker receives the messages, it ignores the inner messages
> > timestamp, but simply set a wrapper message timestamp type attribute bit to
> > 1 and set the timestamp of the wrapper message to server time. (The broker
> > will also set the timestamp type attribute bit accordingly for
> > non-compressed messages using LogAppendTime).
> > 3. When a consumer receives the messages, it checks timestamp type
> > attribute bit of wrapper message. If it is set to 1, the inner message's
> > timestamp will be ignored and the wrapper message's timestamp will be used.
> >
> > This approach uses an extra attribute bit. The good thing of the modified
> > protocol is consumers will be able to know the timestamp type. And
> > re-compression on broker side is completely avoided no matter what value is
> > sent by the producer. In this approach the inner messages will have wrong
> > timestamps.
> >
> > We want to see if people have concerns over the modified approach.
> >
> > Thanks,
> >
> > Jiangjie (Becket) Qin
> >
> >
> >
> >
> >
> > On Tue, Dec 15, 2015 at 11:45 AM, Becket Qin  wrote:
> >
> >> Jun,
> >>
> >> 1. I agree it would be nice to have the timestamps used in a unified way.
> >> My concern is that if we let the server change the timestamp of the inner
> >> message for LogAppendTime, that will force users of LogAppendTime
> >> to always pay the recompression penalty. So using LogAppendTime would
> >> render KIP-31 in vain.
> >>
> >> 4. If there are no entries in the log segment, we can read from the time
> >> index before the previous log segment. If there is no previous entry
> >> available after we search until the earliest log segment, that means all
> >> the previous log segments with a valid time index entry have been deleted.
> >> In that case supposedly there should be only one log segment left - the
> >> active log segment, so we can simply set the latest timestamp to 0.
> >>
> >> Guozhang,
> >>
> >> Sorry for the confusion. By "the timestamp of the latest message" I
> >> actually meant "the timestamp of the message with largest timestamp". So in
> >> your example the 
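The consumer-side effect of the modified proposal discussed in this thread can be sketched as a single resolution rule. Names here are hypothetical, not the committed message-format API:

```java
// Sketch of the modified KIP-32 proposal: the broker sets a timestamp-type
// attribute bit on the wrapper of a compressed message set; consumers then
// use the wrapper's (broker-assigned) timestamp and ignore inner timestamps
// whenever that bit indicates LogAppendTime.
public class TimestampResolver {
    public static long effectiveTimestamp(boolean wrapperLogAppendTimeBit,
                                          long wrapperTimestamp,
                                          long innerTimestamp) {
        // Bit set: broker-assigned LogAppendTime wins over producer's CreateTime.
        return wrapperLogAppendTimeBit ? wrapperTimestamp : innerTimestamp;
    }
}
```

Because the decision depends only on the wrapper bit, the producer never needs to know the topic's timestamp type up front, which is what eliminates broker-side re-compression.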

[jira] [Created] (KAFKA-3155) Transient Failure in kafka.integration.PlaintextTopicMetadataTest.testIsrAfterBrokerShutDownAndJoinsBack

2016-01-26 Thread Guozhang Wang (JIRA)
Guozhang Wang created KAFKA-3155:


 Summary: Transient Failure in 
kafka.integration.PlaintextTopicMetadataTest.testIsrAfterBrokerShutDownAndJoinsBack
 Key: KAFKA-3155
 URL: https://issues.apache.org/jira/browse/KAFKA-3155
 Project: Kafka
  Issue Type: Sub-task
Reporter: Guozhang Wang


{code}
Stacktrace

java.lang.AssertionError: No request is complete.
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.assertTrue(Assert.java:41)
at 
kafka.api.BaseProducerSendTest$$anonfun$testFlush$1.apply$mcVI$sp(BaseProducerSendTest.scala:275)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:141)
at 
kafka.api.BaseProducerSendTest.testFlush(BaseProducerSendTest.scala:273)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at 
org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:105)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:56)
at 
org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:64)
at 
org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:50)
at sun.reflect.GeneratedMethodAccessor12.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.messaging.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
at 
org.gradle.messaging.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
at 
org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:106)
at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
at 
org.gradle.messaging.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
at 
org.gradle.messaging.remote.internal.hub.MessageHub$Handler.run(MessageHub.java:360)
at 
org.gradle.internal.concurrent.ExecutorPolicy$CatchAndRecordFailures.onExecute(ExecutorPolicy.java:54)
at 
org.gradle.internal.concurrent.StoppableExecutorImpl$1.run(StoppableExecutorImpl.java:40)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
{code}

One example: 
https://builds.apache.org/job/kafka-trunk-git-pr-jdk7/2117/testReport/junit/kafka.api/PlaintextProducerSendTest/testFlush/





[GitHub] kafka pull request: MINOR: join test for windowed keys

2016-01-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/814


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: MINOR: remove FilteredIterator

2016-01-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/816




[GitHub] kafka pull request: valueFactory's key is consumerId

2016-01-26 Thread zqhxuyuan
GitHub user zqhxuyuan opened a pull request:

https://github.com/apache/kafka/pull/817

valueFactory's key is consumerId

the key of partitionAssignment's valueFactory is consumerId, not topic.
partitionAssignment.getAndMaybePut(threadId.consumer)

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/zqhxuyuan/kafka trunk

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/817.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #817


commit 938e4e11d9cbdf6e3d6b70b2f619c8d4872c6b7f
Author: zqh 
Date:   2016-01-27T06:59:09Z

valueFactory's key is consumerId

the key of partitionAssignment's valueFactory is consumerId, not topic.
partitionAssignment.getAndMaybePut(threadId.consumer)
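The pool-with-value-factory pattern this PR touches could be sketched as below; this is a simplified, hypothetical stand-in for Kafka's Scala `Pool`, shown in Java:

```java
// Sketch: a pool whose valueFactory lazily creates a default value on first
// access. Per the PR, the key passed to getAndMaybePut is the consumer id
// (threadId.consumer), not the topic.
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

public class Pool<K, V> {
    private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();
    private final Function<K, V> valueFactory;

    public Pool(Function<K, V> valueFactory) { this.valueFactory = valueFactory; }

    /** Returns the existing value for key, creating one via the factory if absent. */
    public V getAndMaybePut(K key) {
        return map.computeIfAbsent(key, valueFactory);
    }
}
```

Since the factory runs per distinct key, using consumerId as the key yields one assignment container per consumer, not per topic.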






[jira] [Resolved] (KAFKA-3142) Improve error message in kstreams

2016-01-26 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-3142.
--
Resolution: Fixed

This is fixed as part of KAFKA-3125.

> Improve error message in kstreams
> -
>
> Key: KAFKA-3142
> URL: https://issues.apache.org/jira/browse/KAFKA-3142
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jay Kreps
>Assignee: Guozhang Wang
>
> If you have a bug in your kstream code the error message could be slightly 
> better.
> 1. Not sure what the second error about it already being closed is.
> 2. I recommend we avoid wrapping the exception (at least for runtime 
> exceptions) so it is more clear to the user that the error is coming from 
> their code not ours.
> {code}
> [2016-01-23 14:40:59,776] ERROR Uncaught error during processing in thread 
> [StreamThread-1]:  
> (org.apache.kafka.streams.processor.internals.StreamThread:222)
> org.apache.kafka.common.KafkaException: java.lang.NullPointerException
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:327)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:220)
> Caused by: java.lang.NullPointerException
>   at TestKstreamsApi.lambda$0(TestKstreamsApi.java:27)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamMap$KStreamMapProcessor.process(KStreamMap.java:42)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:68)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.forward(StreamTask.java:324)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:175)
>   at 
> org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:56)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:170)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:310)
>   ... 1 more
> [2016-01-23 14:40:59,803] ERROR Failed to close a StreamTask #0_0 in thread 
> [StreamThread-1]:  
> (org.apache.kafka.streams.processor.internals.StreamThread:586)
> java.lang.IllegalStateException: This consumer has already been closed.
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.ensureNotClosed(KafkaConsumer.java:1281)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.acquire(KafkaConsumer.java:1292)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.assign(KafkaConsumer.java:779)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.close(ProcessorStateManager.java:339)
>   at 
> org.apache.kafka.streams.processor.internals.AbstractTask.close(AbstractTask.java:95)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.close(StreamTask.java:306)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.closeOne(StreamThread.java:584)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.removeStreamTasks(StreamThread.java:571)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.shutdown(StreamThread.java:265)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:225)
> {code}
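
The suggestion in point 2 above (don't wrap runtime exceptions, so the top
stack frame points at the user's lambda rather than the framework) can be
sketched as follows; the class and method names here are illustrative
stand-ins, not the actual Streams internals:

```java
import java.util.concurrent.Callable;

public class ErrorPropagation {
    // Run user-supplied code; let RuntimeExceptions (e.g. an NPE thrown by a
    // user lambda) propagate unchanged so the top stack frame points at user
    // code, and wrap only checked exceptions in a framework exception.
    static <T> T runUserCode(Callable<T> userCode) {
        try {
            return userCode.call();
        } catch (RuntimeException e) {
            throw e; // not wrapped: clearly the user's own error
        } catch (Exception e) {
            throw new RuntimeException("error while processing user code", e);
        }
    }
}
```

With this pattern, the NullPointerException from TestKstreamsApi would surface
directly instead of being buried under a KafkaException.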



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (KAFKA-3142) Improve error message in kstreams

2016-01-26 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3142?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-3142:
-
Fix Version/s: 0.9.1.0

> Improve error message in kstreams
> -
>
> Key: KAFKA-3142
> URL: https://issues.apache.org/jira/browse/KAFKA-3142
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jay Kreps
>Assignee: Guozhang Wang
> Fix For: 0.9.1.0
>
>
> If you have a bug in your kstream code the error message could be slightly 
> better.
> 1. Not sure what the second error about it already being closed is.
> 2. I recommend we avoid wrapping the exception (at least for runtime 
> exceptions) so it is more clear to the user that the error is coming from 
> their code not ours.
> {code}
> [2016-01-23 14:40:59,776] ERROR Uncaught error during processing in thread 
> [StreamThread-1]:  
> (org.apache.kafka.streams.processor.internals.StreamThread:222)
> org.apache.kafka.common.KafkaException: java.lang.NullPointerException
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:327)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:220)
> Caused by: java.lang.NullPointerException
>   at TestKstreamsApi.lambda$0(TestKstreamsApi.java:27)
>   at 
> org.apache.kafka.streams.kstream.internals.KStreamMap$KStreamMapProcessor.process(KStreamMap.java:42)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:68)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.forward(StreamTask.java:324)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:175)
>   at 
> org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:56)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:170)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:310)
>   ... 1 more
> [2016-01-23 14:40:59,803] ERROR Failed to close a StreamTask #0_0 in thread 
> [StreamThread-1]:  
> (org.apache.kafka.streams.processor.internals.StreamThread:586)
> java.lang.IllegalStateException: This consumer has already been closed.
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.ensureNotClosed(KafkaConsumer.java:1281)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.acquire(KafkaConsumer.java:1292)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.assign(KafkaConsumer.java:779)
>   at 
> org.apache.kafka.streams.processor.internals.ProcessorStateManager.close(ProcessorStateManager.java:339)
>   at 
> org.apache.kafka.streams.processor.internals.AbstractTask.close(AbstractTask.java:95)
>   at 
> org.apache.kafka.streams.processor.internals.StreamTask.close(StreamTask.java:306)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.closeOne(StreamThread.java:584)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.removeStreamTasks(StreamThread.java:571)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.shutdown(StreamThread.java:265)
>   at 
> org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:225)
> {code}





[jira] [Commented] (KAFKA-2985) Consumer group stuck in rebalancing state

2016-01-26 Thread Federico Fissore (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15116905#comment-15116905
 ] 

Federico Fissore commented on KAFKA-2985:
-

[~hachikuji] you're right. Sorry for having bothered you. I forgot to code a 
"tearDown()". Stock 0.9.0.0 works fine. Thank you for your patience.

> Consumer group stuck in rebalancing state
> -
>
> Key: KAFKA-2985
> URL: https://issues.apache.org/jira/browse/KAFKA-2985
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
> Environment: Kafka 0.9.0.0.
> Kafka Java consumer 0.9.0.0
> 2 Java producers.
> 3 Java consumers using the new consumer API.
> 2 kafka brokers.
>Reporter: Jens Rantil
>Assignee: Jason Gustafson
>
> We've been doing some load testing on Kafka, and have now twice seen Kafka 
> become stuck in consumer group rebalancing _after_ the load test. This is 
> after all our consumers are done consuming and essentially polling 
> periodically without getting any records.
> The brokers list the consumer group (named "default"), but I can't query the 
> offsets:
> {noformat}
> jrantil@queue-0:/srv/kafka/kafka$ ./bin/kafka-consumer-groups.sh 
> --new-consumer --bootstrap-server localhost:9092 --list
> default
> jrantil@queue-0:/srv/kafka/kafka$ ./bin/kafka-consumer-groups.sh 
> --new-consumer --bootstrap-server localhost:9092 --describe --group 
> default|sort
> Consumer group `default` does not exist or is rebalancing.
> {noformat}
> Querying the offsets again over the next 15 minutes or so still reported 
> rebalancing. After restarting our first broker, the group immediately started 
> rebalancing. That broker was logging this before restart:
> {noformat}
> [2015-12-12 13:09:48,517] INFO [Group Metadata Manager on Broker 0]: Removed 
> 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
> [2015-12-12 13:10:16,139] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 16 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:10:16,141] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 16 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:10:16,575] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 16 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:11:15,141] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 17 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:11:15,143] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 17 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:11:15,314] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 17 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:12:14,144] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 18 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:12:14,145] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 18 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:12:14,340] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 18 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:13:13,146] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 19 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:13:13,148] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 19 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:13:13,238] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 19 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:14:12,148] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 20 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:14:12,149] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 20 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:14:12,360] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 20 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:15:11,150] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 21 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:15:11,152] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 21 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:15:11,217] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 21 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:16:10,152] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 22 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:16:10,154] INFO 

Re: [DISCUSS] KIP-45 Standardize all client sequence interaction on j.u.Collection.

2016-01-26 Thread Ismael Juma
Thanks Pierre. Including the dev mailing list.

A few comments:

1. It's worth mentioning that the KafkaConsumer has the
@InterfaceStability.Unstable annotation.
2. It would be good to show the existing signatures of the methods being
changed before we show the changed signatures.
3. The proposed changes section mentions an alternative. I think the
alternative should be moved to the "Rejected Alternatives" section.
4. It would be good to explain why `Collection` was chosen specifically for
> the parameters (as opposed to `Iterable` for example).
5. Finally, it would be good to explain why we decided to change the method
parameters instead of the return types (or why we should not change the
return types).

Hopefully it should be straightforward to address these points.

Thanks,
Ismael

On Tue, Jan 26, 2016 at 9:00 AM, Pierre-Yves Ritschard 
wrote:

>
> KAFKA-3006 is under review, and would change some commonly used
> signatures in the Kafka client library. The idea behind the proposal is
> to provide a unified way of interacting with anything sequence like in
> the client.
>
> If the change is accepted, these would be the signatures that change:
>
> void subscribe(Collection<String> topics);
> void subscribe(Collection<String> topics, ConsumerRebalanceListener listener);
> void assign(Collection<TopicPartition> partitions);
> void pause(Collection<TopicPartition> partitions);
> void resume(Collection<TopicPartition> partitions);
> void seekToBeginning(Collection<TopicPartition> partitions);
> void seekToEnd(Collection<TopicPartition> partitions);
>
>
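
A minimal sketch of why a single `Collection`-based overload covers the
common call sites; the `FakeConsumer` class is a stand-in for illustration,
not the real `KafkaConsumer`:

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.LinkedHashSet;
import java.util.Set;

// Stand-in consumer: one subscribe(Collection<String>) method accepts a
// List, a Set, or any other Collection without conversion by the caller.
class FakeConsumer {
    private final Set<String> subscription = new LinkedHashSet<>();

    void subscribe(Collection<String> topics) {
        subscription.clear();
        subscription.addAll(topics);
    }

    Set<String> subscription() {
        return subscription;
    }
}
```

Callers can pass `Arrays.asList("orders", "payments")` or a `HashSet` to the
same method, which is exactly the unification the KIP is after.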


[jira] [Commented] (KAFKA-3110) can't set cluster acl for a user to CREATE topics without first creating a topic

2016-01-26 Thread Thomas Graves (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15117246#comment-15117246
 ] 

Thomas Graves commented on KAFKA-3110:
--

I was using 0.9 release so I'll give it a try with trunk.

> can't set cluster acl for a user to CREATE topics without first creating a 
> topic
> 
>
> Key: KAFKA-3110
> URL: https://issues.apache.org/jira/browse/KAFKA-3110
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Thomas Graves
>
> I started a new kafka cluster with security.  I tried to give a user cluster 
> CREATE permissions so they could create topics:
> kafka-acls.sh --authorizer-properties zookeeper.connect=host.com:2181 
> --cluster --add --operation CREATE --allow-principal User:myuser
> This failed with the error below and the broker ended up shutting down and 
> wouldn't restart without removing the zookeeper data.
> @40005699398806bd170c org.I0Itec.zkclient.exception.ZkNoNodeException: 
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /kafka-acl/Topic
> To work around this you can first create any topic which creates the 
> zookeeper node and then after that you can give the user create permissions.





[jira] [Commented] (KAFKA-1189) kafka-server-stop.sh doesn't stop broker

2016-01-26 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15117254#comment-15117254
 ] 

Ismael Juma commented on KAFKA-1189:


[~linwukang], please file a new issue.

> kafka-server-stop.sh doesn't stop broker
> 
>
> Key: KAFKA-1189
> URL: https://issues.apache.org/jira/browse/KAFKA-1189
> Project: Kafka
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 0.8.0
> Environment: RHEL 6.4 64bit, Java 6u35
>Reporter: Bryan Baugher
>Assignee: Martin Kleppmann
>Priority: Minor
>  Labels: newbie
> Fix For: 0.8.2.0
>
> Attachments: KAFKA-1189.patch
>
>
> Just before the 0.8.0 release this commit[1] changed the signal in the 
> kafka-server-stop.sh script from SIGTERM to SIGINT. This doesn't seem to stop 
> the broker. Changing this back to SIGTERM (or -15) fixes the issues.
> [1] - 
> https://github.com/apache/kafka/commit/51de7c55d2b3107b79953f401fc8c9530bd0eea0





[jira] [Commented] (KAFKA-3150) kafka.tools.UpdateOffsetsInZK not work (sasl enabled)

2016-01-26 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15117261#comment-15117261
 ] 

Ismael Juma commented on KAFKA-3150:


UpdateOffsetsInZK requires a PLAINTEXT port to be open by the brokers. There are 
no plans to improve this tool as offsets are no longer stored in ZK by the new 
consumer.

> kafka.tools.UpdateOffsetsInZK not work (sasl enabled)
> -
>
> Key: KAFKA-3150
> URL: https://issues.apache.org/jira/browse/KAFKA-3150
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
> Environment: redhat as6.5
>Reporter: linbao111
>
> ./bin/kafka-run-class.sh kafka.tools.UpdateOffsetsInZK earliest 
> config/consumer.properties   alalei_2  
> [2016-01-26 17:20:49,920] WARN Property sasl.kerberos.service.name is not 
> valid (kafka.utils.VerifiableProperties)
> [2016-01-26 17:20:49,920] WARN Property security.protocol is not valid 
> (kafka.utils.VerifiableProperties)
> Exception in thread "main" kafka.common.BrokerEndPointNotAvailableException: 
> End point PLAINTEXT not found for broker 1
> at kafka.cluster.Broker.getBrokerEndPoint(Broker.scala:136)
> at 
> kafka.tools.UpdateOffsetsInZK$$anonfun$getAndSetOffsets$2.apply$mcVI$sp(UpdateOffsetsInZK.scala:70)
> at 
> kafka.tools.UpdateOffsetsInZK$$anonfun$getAndSetOffsets$2.apply(UpdateOffsetsInZK.scala:59)
> at 
> kafka.tools.UpdateOffsetsInZK$$anonfun$getAndSetOffsets$2.apply(UpdateOffsetsInZK.scala:59)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> at 
> kafka.tools.UpdateOffsetsInZK$.getAndSetOffsets(UpdateOffsetsInZK.scala:59)
> at kafka.tools.UpdateOffsetsInZK$.main(UpdateOffsetsInZK.scala:43)
> at kafka.tools.UpdateOffsetsInZK.main(UpdateOffsetsInZK.scala)
> same error for:
> ./bin/kafka-consumer-offset-checker.sh  --broker-info --group 
> test-consumer-group --topic alalei_2 --zookeeper slave1:2181
> [2016-01-26 17:23:45,218] WARN WARNING: ConsumerOffsetChecker is deprecated 
> and will be dropped in releases following 0.9.0. Use ConsumerGroupCommand 
> instead. (kafka.tools.ConsumerOffsetChecker$)
> Exiting due to: End point PLAINTEXT not found for broker 0.
> ./bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zookeeper 
> slave1:2181 --group  test-consumer-group
> [2016-01-26 17:26:15,075] WARN WARNING: ConsumerOffsetChecker is deprecated 
> and will be dropped in releases following 0.9.0. Use ConsumerGroupCommand 
> instead. (kafka.tools.ConsumerOffsetChecker$)
> Exiting due to: End point PLAINTEXT not found for broker 0





Re: [DISCUSS] KIP-42: Add Producer and Consumer Interceptors

2016-01-26 Thread Todd Palino
Finally got a chance to take a look at this. I won’t be able to make the
KIP meeting due to a conflict.

I’m somewhat disappointed in this proposal. I think that the explicit
exclusion of modification of the messages is short-sighted, and not
accounting for it now is going to bite us later. Jay, aren’t you the one
railing against public interfaces and how difficult they are to work with
when you don’t get them right? The “simple” change to one of these
interfaces to make it able to return a record is going to be a significant
change and is going to require all clients to rewrite their interceptors.
If we’re not willing to put the time to think through manipulation now,
then this KIP should be shelved until we are. Implementing something
halfway is going to be worse than taking a little longer. In addition, I
don’t believe that manipulation requires anything more than interceptors to
receive the full record, and then to return it.

There are 3 use case I can think of right now without any deep discussion
that can make use of interceptors with modification:

1. Auditing. The ability to add metadata to a message for auditing is
critical. Hostname, service name, timestamps, etc. are all pieces of data
that can be used on the other side of the pipeline to categorize messages,
determine loss and transport time, and pin down issues. You may say that
these things can just be part of the message schema, but anyone who has
worked with a multi-user data system (especially those who have been
involved with LinkedIn) know how difficult it is to maintain consistent
message schemas and to get other people to put in fields for your use.

2. Encryption. This is probably the most obvious case for record
manipulation on both sides. The ability to tie in end to end encryption is
important for data that requires external compliance (PCI, HIPAA, etc.).

3. Routing. By being able to add a bit of information about the source or
destination of a message to the metadata, you can easily construct an
intelligent mirror maker that can prevent loops. This has the opportunity
to result in significant operational savings, as you can get rid of the
need for tiered clusters in order to prevent loops in mirroring messages.

All three of these share the feature that they add metadata to messages.
With the pushback on having arbitrary metadata as an “envelope” to the
message, this is a way to provide it and make it the responsibility of the
client, and not the Kafka broker and system itself.

-Todd



On Tue, Jan 26, 2016 at 2:30 AM, Ismael Juma  wrote:

> Hi Anna and Neha,
>
> I think it makes a lot of sense to try and keep the interface lean and to
> add more methods later when/if there is a need. What is the current
> thinking with regards to compatibility when/if we add new methods? A few
> options come to mind:
>
> 1. Change the interface to an abstract class with empty implementations for
> all the methods. This means that the path to adding new methods is clear.
> 2. Hope we have moved to Java 8 by the time we need to add new methods and
> use default methods with an empty implementation for any new method (and
> potentially make existing methods default methods too at that point for
> consistency)
> 3. Introduce a new interface that inherits from the existing Interceptor
> interface when we need to add new methods.
>
> Option 1 is the easiest and it also means that interceptor users only need
> to override the methods that they are interested (more useful if the number
> of methods grows). The downside is that interceptor implementations cannot
> inherit from another class (a straightforward workaround is to make the
> interceptor a forwarder that calls another class). Also, our existing
> callbacks are interfaces, so seems a bit inconsistent.
>
> Option 2 may be the most appealing one as both users and ourselves retain
> flexibility. The main downside is that it relies on us moving to Java 8,
> which may be more than a year away potentially (if we support the last 2
> Java releases).
>
> Thoughts?
>
> Ismael
>
> On Tue, Jan 26, 2016 at 4:59 AM, Neha Narkhede  wrote:
>
> > Anna,
> >
> > I'm also in favor of including just the APIs for which we have a clear
> use
> > case. If more use cases for finer monitoring show up in the future, we
> can
> > always update the interface. Would you please highlight in the KIP the
> APIs
> > that you think we have an immediate use for?
> >
> > Joel,
> >
> > Broker-side monitoring makes a lot of sense in the long term though I
> don't
> > think it is a requirement for end-to-end monitoring. With the producer
> and
> > consumer interceptors, you have the ability to get full
> > publish-to-subscribe end-to-end monitoring. The broker interceptor
> > certainly improves the resolution of monitoring but it is also a riskier
> > change. I prefer an incremental approach over a big-bang and recommend
> > taking baby-steps. Let's first make sure the producer/consumer
> 
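
A sketch of the kind of mutating audit interceptor described in Todd's first
use case above; the record and interceptor types are hypothetical stand-ins
(the real KIP-42 interfaces live in `org.apache.kafka.clients.producer`), and
the host and service names are hard-coded where real code would look them up:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical record carrying a mutable metadata map alongside the payload.
class AuditRecord {
    final String topic;
    final byte[] value;
    final Map<String, String> metadata = new HashMap<>();

    AuditRecord(String topic, byte[] value) {
        this.topic = topic;
        this.value = value;
    }
}

// A mutating producer-side interceptor: it returns the record (possibly a
// new one) with audit metadata attached, instead of only observing it.
class AuditInterceptor {
    AuditRecord onSend(AuditRecord record) {
        // Real code might use InetAddress.getLocalHost().getHostName() here.
        record.metadata.put("audit.host", "producer-host-01");
        record.metadata.put("audit.service", "orders-service");
        record.metadata.put("audit.ts", String.valueOf(System.currentTimeMillis()));
        return record;
    }
}
```

The consumer-side counterpart would read the same keys back to compute loss
counts and transport times per host or service.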

[jira] [Commented] (KAFKA-3151) kafka-consumer-groups.sh fail with sasl enabled

2016-01-26 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15117285#comment-15117285
 ] 

Ismael Juma commented on KAFKA-3151:


You need to configure the tool to use SASL (similarly to how you would 
configure a client). See: 
http://kafka.apache.org/documentation.html#security_sasl

> kafka-consumer-groups.sh fail with sasl enabled 
> 
>
> Key: KAFKA-3151
> URL: https://issues.apache.org/jira/browse/KAFKA-3151
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
> Environment: redhat as6.5
>Reporter: linbao111
>
> ./bin/kafka-consumer-groups.sh --new-consumer  --bootstrap-server 
> slave1.otocyon.com:9092 --list
> Error while executing consumer group command Request METADATA failed on 
> brokers List(Node(-1, slave1.otocyon.com, 9092))
> java.lang.RuntimeException: Request METADATA failed on brokers List(Node(-1, 
> slave1.otocyon.com, 9092))
> at kafka.admin.AdminClient.sendAnyNode(AdminClient.scala:73)
> at kafka.admin.AdminClient.findAllBrokers(AdminClient.scala:93)
> at kafka.admin.AdminClient.listAllGroups(AdminClient.scala:101)
> at 
> kafka.admin.AdminClient.listAllGroupsFlattened(AdminClient.scala:122)
> at 
> kafka.admin.AdminClient.listAllConsumerGroupsFlattened(AdminClient.scala:126)
> at 
> kafka.admin.ConsumerGroupCommand$KafkaConsumerGroupService.list(ConsumerGroupCommand.scala:310)
> at 
> kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:61)
> at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)
> same error for:
> bin/kafka-run-class.sh kafka.admin.ConsumerGroupCommand  --bootstrap-server 
> slave16:9092,app:9092 --describe --group test-consumer-group  --new-consumer
> Error while executing consumer group command Request GROUP_COORDINATOR failed 
> on brokers List(Node(-1, slave16, 9092), Node(-2, app, 9092))
> java.lang.RuntimeException: Request GROUP_COORDINATOR failed on brokers 
> List(Node(-1, slave16, 9092), Node(-2, app, 9092))
> at kafka.admin.AdminClient.sendAnyNode(AdminClient.scala:73)
> at kafka.admin.AdminClient.findCoordinator(AdminClient.scala:78)
> at kafka.admin.AdminClient.describeGroup(AdminClient.scala:130)
> at 
> kafka.admin.AdminClient.describeConsumerGroup(AdminClient.scala:152)
> at 
> kafka.admin.ConsumerGroupCommand$KafkaConsumerGroupService.describeGroup(ConsumerGroupCommand.scala:314)
> at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService$class.describe(ConsumerGroupCommand.scala:84)
> at 
> kafka.admin.ConsumerGroupCommand$KafkaConsumerGroupService.describe(ConsumerGroupCommand.scala:302)
> at 
> kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:63)
> at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)





Re: [DISCUSS] KIP-45 Standardize all client sequence interaction on j.u.Collection.

2016-01-26 Thread Pierre-Yves Ritschard

Hi Ismael,

Thanks for the review, I will act on these a bit later today.

  - pyr
  
Ismael Juma writes:

> Thanks Pierre. Including the dev mailing list.
>
> A few comments:
>
> 1. It's worth mentioning that the KafkaConsumer has the
> @InterfaceStability.Unstable annotation.
> 2. It would be good to show the existing signatures of the methods being
> changed before we show the changed signatures.
> 3. The proposed changes section mentions an alternative. I think the
> alternative should be moved to the "Rejected Alternatives" section.
> 4. It would be good to explain why `Collection` was chosen specifically for
> the parameters (as opposed to `Iterable` for example).
> 5. Finally, it would be good to explain why we decided to change the method
> parameters instead of the return types (or why we should not change the
> return types).
>
> Hopefully it should be straightforward to address these points.
>
> Thanks,
> Ismael
>
> On Tue, Jan 26, 2016 at 9:00 AM, Pierre-Yves Ritschard 
> wrote:
>
>>
>> KAFKA-3006 is under review, and would change some commonly used
>> signatures in the Kafka client library. The idea behind the proposal is
>> to provide a unified way of interacting with anything sequence like in
>> the client.
>>
>> If the change is accepted, these would be the signatures that change:
>>
>> void subscribe(Collection<String> topics);
>> void subscribe(Collection<String> topics, ConsumerRebalanceListener listener);
>> void assign(Collection<TopicPartition> partitions);
>> void pause(Collection<TopicPartition> partitions);
>> void resume(Collection<TopicPartition> partitions);
>> void seekToBeginning(Collection<TopicPartition> partitions);
>> void seekToEnd(Collection<TopicPartition> partitions);
>>
>>



Re: [DISCUSS] KIP-43: Kafka SASL enhancements

2016-01-26 Thread Rajini Sivaram
Ismael,

I have written up a section on supporting multiple mechanisms within a
Kafka broker. At the moment, it is under "Rejected Alternatives", even
though having thought about it, we should possibly include it in this KIP,
unless we are sure it is not going to come up as a requirement. We don't
actually need this feature, but it will be useful to know what others think.

Regards,

Rajini

On Tue, Jan 26, 2016 at 12:00 PM, Rajini Sivaram <
rajinisiva...@googlemail.com> wrote:

> Ismael,
>
> Thank you for your review. The main reason I didn't address the support
> for multiple mechanisms within a broker is because it requires changes to
> the wire protocol to propagate mechanisms. But I do agree that we need to
> understand whether it would be even harder to support this in the future.
> Will give it some thought and write it up in the KIP.
>
> Regards,
>
> Rajini
>
> On Tue, Jan 26, 2016 at 10:44 AM, Ismael Juma  wrote:
>
>> Hi Rajini,
>>
>> Thanks for the KIP. As stated in the KIP, it does not address "Support for
>> multiple SASL mechanisms within a broker". Maybe we should also mention
>> this in the "Rejected Alternatives" section with the reasoning. I think
>> it's particularly relevant to understand if it's not being proposed
>> because
>> we don't think it's useful or due to the additional implementation
>> complexity (it's probably a combination). If we think this could be useful
>> in the future, it would also be worth thinking about how it is affected if
>> we do KIP-43 first (ie will it be easier, harder, etc.)
>>
>> Thanks,
>> Ismael
>>
>> On Mon, Jan 25, 2016 at 9:55 PM, Rajini Sivaram <
>> rajinisiva...@googlemail.com> wrote:
>>
>> > I have just created KIP-43 to extend the SASL implementation in Kafka to
>> > support new SASL mechanisms.
>> >
>> >
>> >
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-43%3A+Kafka+SASL+enhancements
>> >
>> >
>> > Comments and suggestions are appreciated.
>> >
>> >
>> > Thank you...
>> >
>> > Regards,
>> >
>> > Rajini
>> >
>>
>
>
>
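
Whatever shape the Kafka-side plumbing for multiple mechanisms takes,
mechanism selection ultimately routes through the JDK's `javax.security.sasl`
factories. A minimal client-side sketch for PLAIN follows; the server name
and credentials are made up for illustration, and this is not the Kafka
channel-builder code:

```java
import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.sasl.Sasl;
import javax.security.sasl.SaslClient;

class SaslMechanismDemo {
    // Ask the JDK factories for a named mechanism; a broker advertising its
    // enabled mechanism names would let the client pick one it also supports.
    static SaslClient plainClient(String user, String password) throws Exception {
        CallbackHandler handler = callbacks -> {
            for (Callback cb : callbacks) {
                if (cb instanceof NameCallback) {
                    ((NameCallback) cb).setName(user);
                } else if (cb instanceof PasswordCallback) {
                    ((PasswordCallback) cb).setPassword(password.toCharArray());
                }
            }
        };
        // The SunSASL provider ships a client-side PLAIN implementation.
        return Sasl.createSaslClient(new String[]{"PLAIN"}, null, "kafka",
                "broker1.example.com", null, handler);
    }
}
```

Passing an array of mechanism names lets the factory pick the first one it
can satisfy, which is the negotiation shape a multi-mechanism broker needs.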


[jira] [Created] (KAFKA-3152) kafka-acl doesn't allow space in principal name

2016-01-26 Thread Jun Rao (JIRA)
Jun Rao created KAFKA-3152:
--

 Summary: kafka-acl doesn't allow space in principal name
 Key: KAFKA-3152
 URL: https://issues.apache.org/jira/browse/KAFKA-3152
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 0.9.0.0
Reporter: Jun Rao


When running the following,
kafka-acls --authorizer kafka.security.auth.SimpleAclAuthorizer 
--authorizer-properties zookeeper.connect=localhost:2181 --topic test --add 
--producer --allow-host=* --allow-principal "User:CN=xxx,O=My Organization"

the acl is set as the following. The part after space is ignored.
Following is list of acls for resource: Topic:test 
User:CN=xxx,O=My has Allow permission for operations: Describe from 
hosts: *
User:CN=xxx,O=My has Allow permission for operations: Write from hosts: 
* 






[jira] [Assigned] (KAFKA-3152) kafka-acl doesn't allow space in principal name

2016-01-26 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ismael Juma reassigned KAFKA-3152:
--

Assignee: Ismael Juma

> kafka-acl doesn't allow space in principal name
> ---
>
> Key: KAFKA-3152
> URL: https://issues.apache.org/jira/browse/KAFKA-3152
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Ismael Juma
>
> When running the following,
> kafka-acls --authorizer kafka.security.auth.SimpleAclAuthorizer 
> --authorizer-properties zookeeper.connect=localhost:2181 --topic test --add 
> --producer --allow-host=* --allow-principal "User:CN=xxx,O=My Organization"
> the acl is set as the following. The part after space is ignored.
> Following is list of acls for resource: Topic:test 
>   User:CN=xxx,O=My has Allow permission for operations: Describe from 
> hosts: *
>   User:CN=xxx,O=My has Allow permission for operations: Write from hosts: 
> * 





[GitHub] kafka pull request: KAFKA-3125: Add Kafka Streams Exceptions

2016-01-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/809


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-3132: URI scheme in "listeners" property...

2016-01-26 Thread granthenke
GitHub user granthenke reopened a pull request:

https://github.com/apache/kafka/pull/811

KAFKA-3132: URI scheme in "listeners" property should not be case-sen…

…sitive

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka listeners-case

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/811.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #811


commit 45fa9e3d14afdb982fecb9cff644572a6757942b
Author: Grant Henke 
Date:   2016-01-26T05:38:32Z

KAFKA-3132: URI scheme in "listeners" property should not be case-sensitive






[GitHub] kafka pull request: KAFKA-3132: URI scheme in "listeners" property...

2016-01-26 Thread granthenke
Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/811




[jira] [Commented] (KAFKA-3132) URI scheme in "listeners" property should not be case-sensitive

2016-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15117582#comment-15117582
 ] 

ASF GitHub Bot commented on KAFKA-3132:
---

GitHub user granthenke reopened a pull request:

https://github.com/apache/kafka/pull/811

KAFKA-3132: URI scheme in "listeners" property should not be case-sen…

…sitive

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/granthenke/kafka listeners-case

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/811.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #811


commit 45fa9e3d14afdb982fecb9cff644572a6757942b
Author: Grant Henke 
Date:   2016-01-26T05:38:32Z

KAFKA-3132: URI scheme in "listeners" property should not be case-sensitive




> URI scheme in "listeners" property should not be case-sensitive
> ---
>
> Key: KAFKA-3132
> URL: https://issues.apache.org/jira/browse/KAFKA-3132
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.9.0.0
>Reporter: Jake Robb
>Assignee: Grant Henke
>Priority: Minor
>  Labels: newbie
>
> I configured my Kafka brokers as follows:
> {{listeners=plaintext://kafka01:9092,ssl://kafka01:9093}}
> With this config, my Kafka brokers start, print out all of the config 
> properties, and exit quietly. No errors, nothing in the log. No indication of 
> a problem whatsoever, let alone the nature of said problem.
> Then, I changed my config as follows:
> {{listeners=PLAINTEXT://kafka01:9092,SSL://kafka01:9093}}
> Now they start and run just fine.
> Per [RFC-3986|https://tools.ietf.org/html/rfc3986#section-6.2.2.1]:
> {quote}
> When a URI uses components of the generic syntax, the component
> syntax equivalence rules always apply; namely, that the scheme and
> host are case-insensitive and therefore should be normalized to
> lowercase.  For example, the URI  is
> equivalent to .
> {quote}
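A minimal sketch of the case normalization being requested: upper-case the scheme of each listener URI before matching it against the security protocol names, so that "plaintext://" and "PLAINTEXT://" are treated the same. The helper below is hypothetical and is not the actual change from the pull request above:

```java
import java.util.Locale;

public class ListenerSchemeNormalizer {
    // Upper-case the scheme of a listener URI so that lower-case schemes such
    // as "plaintext://" still match Kafka's SecurityProtocol names. Only the
    // scheme is normalized; the host and port are left untouched.
    static String normalizeScheme(String listener) {
        int sep = listener.indexOf("://");
        if (sep < 0)
            throw new IllegalArgumentException("listener has no scheme: " + listener);
        return listener.substring(0, sep).toUpperCase(Locale.ROOT) + listener.substring(sep);
    }

    public static void main(String[] args) {
        // Both spellings now resolve to the same protocol name
        System.out.println(normalizeScheme("plaintext://kafka01:9092")); // PLAINTEXT://kafka01:9092
        System.out.println(normalizeScheme("SSL://kafka01:9093"));       // SSL://kafka01:9093
    }
}
```

RFC 3986 also treats the host as case-insensitive, but normalizing just the scheme is enough to make both listener configurations above start the broker.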





[jira] [Commented] (KAFKA-3132) URI scheme in "listeners" property should not be case-sensitive

2016-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15117581#comment-15117581
 ] 

ASF GitHub Bot commented on KAFKA-3132:
---

Github user granthenke closed the pull request at:

https://github.com/apache/kafka/pull/811


> URI scheme in "listeners" property should not be case-sensitive
> ---
>
> Key: KAFKA-3132
> URL: https://issues.apache.org/jira/browse/KAFKA-3132
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.9.0.0
>Reporter: Jake Robb
>Assignee: Grant Henke
>Priority: Minor
>  Labels: newbie
>
> I configured my Kafka brokers as follows:
> {{listeners=plaintext://kafka01:9092,ssl://kafka01:9093}}
> With this config, my Kafka brokers start, print out all of the config 
> properties, and exit quietly. No errors, nothing in the log. No indication of 
> a problem whatsoever, let alone the nature of said problem.
> Then, I changed my config as follows:
> {{listeners=PLAINTEXT://kafka01:9092,SSL://kafka01:9093}}
> Now they start and run just fine.
> Per [RFC-3986|https://tools.ietf.org/html/rfc3986#section-6.2.2.1]:
> {quote}
> When a URI uses components of the generic syntax, the component
> syntax equivalence rules always apply; namely, that the scheme and
> host are case-insensitive and therefore should be normalized to
> lowercase.  For example, the URI  is
> equivalent to .
> {quote}





[jira] [Commented] (KAFKA-3125) Exception Hierarchy for Streams

2016-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3125?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15117580#comment-15117580
 ] 

ASF GitHub Bot commented on KAFKA-3125:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/809


> Exception Hierarchy for Streams
> ---
>
> Key: KAFKA-3125
> URL: https://issues.apache.org/jira/browse/KAFKA-3125
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
> Fix For: 0.9.1.0
>
>
> Currently Kafka Streams do not have its own exception category: we only have 
> one TopologyException that extends from KafkaException.
> It's better to start thinking about categorizing exceptions in Streams with a 
> common parent of "StreamsException". For example:
> 1. What type of exceptions should be exposed to users at job runtime; what 
> type of exceptions should be exposed at "topology build time".
> 2. Should KafkaStreaming.start / stop ever need to throw any exceptions?
> 3. etc.





[jira] [Resolved] (KAFKA-3125) Exception Hierarchy for Streams

2016-01-26 Thread Ewen Cheslack-Postava (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3125?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ewen Cheslack-Postava resolved KAFKA-3125.
--
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 809
[https://github.com/apache/kafka/pull/809]

> Exception Hierarchy for Streams
> ---
>
> Key: KAFKA-3125
> URL: https://issues.apache.org/jira/browse/KAFKA-3125
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Guozhang Wang
> Fix For: 0.9.1.0
>
>
> Currently Kafka Streams do not have its own exception category: we only have 
> one TopologyException that extends from KafkaException.
> It's better to start thinking about categorizing exceptions in Streams with a 
> common parent of "StreamsException". For example:
> 1. What type of exceptions should be exposed to users at job runtime; what 
> type of exceptions should be exposed at "topology build time".
> 2. Should KafkaStreaming.start / stop ever need to throw any exceptions?
> 3. etc.





Build failed in Jenkins: kafka-trunk-jdk7 #989

2016-01-26 Thread Apache Jenkins Server
See 

Changes:

[me] KAFKA-3125: Add Kafka Streams Exceptions

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 5ae97196ae149e58f6cfa3c5b6d968cbd7cb6787 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 5ae97196ae149e58f6cfa3c5b6d968cbd7cb6787
 > git rev-list 942074b77b1b1162acdd1bf7a3ee299bd0113886 # timeout=10
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson187805341738604981.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 22.002 secs
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/hudson8856354180683618569.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 19.469 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
Setting 
JDK_1_7U51_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk-1.7u51


[jira] [Commented] (KAFKA-2985) Consumer group stuck in rebalancing state

2016-01-26 Thread Jason Gustafson (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2985?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15117616#comment-15117616
 ] 

Jason Gustafson commented on KAFKA-2985:


[~fridrik] No worries, I'm just glad to see people finally giving the new 
consumer a try!

> Consumer group stuck in rebalancing state
> -
>
> Key: KAFKA-2985
> URL: https://issues.apache.org/jira/browse/KAFKA-2985
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
> Environment: Kafka 0.9.0.0.
> Kafka Java consumer 0.9.0.0
> 2 Java producers.
> 3 Java consumers using the new consumer API.
> 2 kafka brokers.
>Reporter: Jens Rantil
>Assignee: Jason Gustafson
>
> We've been doing some load testing on Kafka. _After_ the load test we have 
> now twice seen Kafka become stuck in consumer group rebalancing. This is 
> after all our consumers are done consuming and are essentially polling 
> periodically without getting any records.
> The brokers list the consumer group (named "default"), but I can't query the 
> offsets:
> {noformat}
> jrantil@queue-0:/srv/kafka/kafka$ ./bin/kafka-consumer-groups.sh 
> --new-consumer --bootstrap-server localhost:9092 --list
> default
> jrantil@queue-0:/srv/kafka/kafka$ ./bin/kafka-consumer-groups.sh 
> --new-consumer --bootstrap-server localhost:9092 --describe --group 
> default|sort
> Consumer group `default` does not exist or is rebalancing.
> {noformat}
> Retrying to query the offsets for 15 minutes or so still said it was 
> rebalancing. After restarting our first broker, the group immediately started 
> rebalancing. That broker was logging this before restart:
> {noformat}
> [2015-12-12 13:09:48,517] INFO [Group Metadata Manager on Broker 0]: Removed 
> 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
> [2015-12-12 13:10:16,139] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 16 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:10:16,141] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 16 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:10:16,575] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 16 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:11:15,141] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 17 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:11:15,143] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 17 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:11:15,314] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 17 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:12:14,144] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 18 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:12:14,145] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 18 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:12:14,340] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 18 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:13:13,146] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 19 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:13:13,148] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 19 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:13:13,238] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 19 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:14:12,148] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 20 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:14:12,149] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 20 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:14:12,360] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 20 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:15:11,150] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 21 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:15:11,152] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group default for generation 21 
> (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:15:11,217] INFO [GroupCoordinator 0]: Preparing to restabilize 
> group default with old generation 21 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:16:10,152] INFO [GroupCoordinator 0]: Stabilized group default 
> generation 22 (kafka.coordinator.GroupCoordinator)
> [2015-12-12 13:16:10,154] INFO [GroupCoordinator 0]: Assignment received from 
> leader for group 

[jira] [Commented] (KAFKA-1791) Corrupt index after safe shutdown and restart

2016-01-26 Thread sandeep reddy (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15117667#comment-15117667
 ] 

sandeep reddy commented on KAFKA-1791:
--

I deleted the corrupted indexes and restarted the Kafka broker. I see that the 
messages are retained, but I still see the same index error. Any suggestions on 
this issue are much appreciated.

> Corrupt index after safe shutdown and restart
> -
>
> Key: KAFKA-1791
> URL: https://issues.apache.org/jira/browse/KAFKA-1791
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1
> Environment: Debian6 with Sun-Java6
>Reporter: Vamsi Subhash Achanta
>Priority: Critical
> Attachments: 0233.index, 0233.log
>
>
> We have 3 Kafka brokers - all VMs. One of the brokers was stopped for around 
> 30 minutes to fix a problem with the bare metal. Upon restart, after some 
> time, the broker ran out of file descriptors (FDs) and started throwing 
> errors. Upon restart, it started throwing these corrupted index exceptions. I 
> followed the other JIRAs related to corrupted indices but the solutions 
> mentioned there (deleting the index and restart) didn't work - the index gets 
> created again. The other JIRAs solution of deleting those indexes which got 
> wrongly compacted (> 10MB) didn't work either.
> What is the error? How can I fix this and bring back the broker? Thanks.
> INFO [2014-11-21 02:57:17,510] [main][] kafka.log.LogManager - Found clean 
> shutdown file. Skipping recovery for all logs in data directory 
> '/var/lib/fk-3p-kafka/logs'
>  INFO [2014-11-21 02:57:17,510] [main][] kafka.log.LogManager - Loading log 
> 'kf.production.b2b.return_order.status-25'
> FATAL [2014-11-21 02:57:17,533] [main][] kafka.server.KafkaServerStartable - 
> Fatal error during KafkaServerStable startup. Prepare to shutdown
> java.lang.IllegalArgumentException: requirement failed: Corrupt index found, 
> index file 
> (/var/lib/fk-3p-kafka/logs/kf.production.b2b.return_order.status-25/0233.index)
>  has non-zero size but the last offset is 233 and the base offset is 233
>   at scala.Predef$.require(Predef.scala:145)
>   at kafka.log.OffsetIndex.sanityCheck(OffsetIndex.scala:352)
>   at kafka.log.Log$$anonfun$loadSegments$5.apply(Log.scala:159)
>   at kafka.log.Log$$anonfun$loadSegments$5.apply(Log.scala:158)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:631)
>   at 
> scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:474)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:79)
>   at 
> scala.collection.JavaConversions$JCollectionWrapper.foreach(JavaConversions.scala:495)
>   at kafka.log.Log.loadSegments(Log.scala:158)
>   at kafka.log.Log.(Log.scala:64)
>   at 
> kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$4.apply(LogManager.scala:118)
>   at 
> kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$4.apply(LogManager.scala:113)
>   at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
>   at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
>   at kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:113)
>   at kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:105)
>   at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
>   at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:32)
>   at kafka.log.LogManager.loadLogs(LogManager.scala:105)
>   at kafka.log.LogManager.(LogManager.scala:57)
>   at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:275)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:72)
>   at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:34)
>   at kafka.Kafka$.main(Kafka.scala:46)
>   at kafka.Kafka.main(Kafka.scala)
>  INFO [2014-11-21 02:57:17,534] [main][] kafka.server.KafkaServer - [Kafka 
> Server 2], shutting down
>  INFO [2014-11-21 02:57:17,538] [main][] kafka.server.KafkaServer - [Kafka 
> Server 2], shut down completed
>  INFO [2014-11-21 02:57:17,539] [Thread-2][] kafka.server.KafkaServer - 
> [Kafka Server 2], shutting down





Re: [DISCUSS] KIP-42: Add Producer and Consumer Interceptors

2016-01-26 Thread Anna Povzner
Thanks Ismael and Todd for your feedback!

I agree about coming up with lean, but useful interfaces that will be easy
to extend later.

When we discuss the minimal set of producer and consumer interceptor API in
today’s KIP meeting (discussion item #2 in my previous email), let’s compare
two options:

*1. Minimal set of immutable API for producer and consumer interceptors*

ProducerInterceptor:
public void onSend(ProducerRecord<K, V> record);
public void onAcknowledgement(RecordMetadata metadata, Exception exception);

ConsumerInterceptor:
public void onConsume(ConsumerRecords<K, V> records);
public void onCommit(Map<TopicPartition, OffsetAndMetadata> offsets);

Use-cases:
— end-to-end monitoring; custom tracing and logging


*2. Minimal set of mutable API for producer and consumer interceptors*

ProducerInterceptor:
ProducerRecord<K, V> onSend(ProducerRecord<K, V> record);
void onAcknowledgement(RecordMetadata metadata, Exception exception);

ConsumerInterceptor:
void onConsume(ConsumerRecords<K, V> records);
void onCommit(Map<TopicPartition, OffsetAndMetadata> offsets);

Additional use-cases to #1:
— Ability to add metadata to a message or fill in standard fields for audit
and routing.

Implications
— Partition assignment will be done based on modified key/value instead of
original key/value. If key/value transformation is not consistent (same key
and value does not mutate to the same, but modified, key/value), then log
compaction would not work. However, audit and routing use-cases from Todd
will likely do consistent transformation.
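As a sketch of how the mutable option 2 could serve the audit use-case: AuditInterceptor below is hypothetical, and the types are simplified stand-ins so the example compiles without the Kafka client on the classpath.

```java
// Simplified stand-in for the client's ProducerRecord, to keep the sketch
// self-contained. It is NOT the real org.apache.kafka.clients class.
class Record {
    final String topic; final String key; final String value;
    Record(String topic, String key, String value) {
        this.topic = topic; this.key = key; this.value = value;
    }
}

// Option 2's mutable contract: onSend may return a modified record.
interface ProducerInterceptor {
    Record onSend(Record record);
}

// Hypothetical audit interceptor: prepends host/timestamp metadata to the
// value but never touches the key.
class AuditInterceptor implements ProducerInterceptor {
    public Record onSend(Record record) {
        String audited = "host=producer01;ts=" + System.currentTimeMillis()
                + ";" + record.value;
        return new Record(record.topic, record.key, audited);
    }
}

public class InterceptorSketch {
    public static void main(String[] args) {
        Record out = new AuditInterceptor().onSend(new Record("T", "k", "payload"));
        System.out.println(out.key + " -> " + out.value);
    }
}
```

Because the key comes back unchanged, the same key always maps to the same partition, so the transformation is "consistent" in the sense discussed above and log compaction is unaffected.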


*Additional callbacks (discussion item #3 in my previous email):*

If we want to support encryption, we would want to be able to modify
serialized key/value, rather than key and value objects. This will add the
following API to producer and consumer interceptors:

ProducerInterceptor:
SerializedKeyValue onEnqueued(TopicPartition tp, ProducerRecord<K, V>
record, SerializedKeyValue serializedKeyValue);

ConsumerInterceptor:
SerializedKeyValue onReceive(TopicPartition tp, SerializedKeyValue
serializedKeyValue);


I am leaning towards implementing the minimal set of immutable or mutable
interfaces, making sure that we have a compatibility plan that allows us to
add more callbacks in the future (per Ismael comment), and add more APIs
later. E.g., for encryption use-case, there could be an argument in doing
encryption after message compression vs. per-record encryption that could
be done using the above additional API. There are also more implications for
every API that modifies records: modifying serialized key/value will again
impact partition assignment (we will likely do that after partition
assignment), which may impact log compaction and mirror maker partitioning.


Thanks,
Anna

On Tue, Jan 26, 2016 at 7:26 AM, Todd Palino  wrote:

> Finally got a chance to take a look at this. I won’t be able to make the
> KIP meeting due to a conflict.
>
> I’m somewhat disappointed in this proposal. I think that the explicit
> exclusion of modification of the messages is short-sighted, and not
> accounting for it now is going to bite us later. Jay, aren’t you the one
> railing against public interfaces and how difficult they are to work with
> when you don’t get them right? The “simple” change to one of these
> interfaces to make it able to return a record is going to be a significant
> change and is going to require all clients to rewrite their interceptors.
> If we’re not willing to put the time to think through manipulation now,
> then this KIP should be shelved until we are. Implementing something
> halfway is going to be worse than taking a little longer. In addition, I
> don’t believe that manipulation requires anything more than interceptors to
> receive the full record, and then to return it.
>
> There are 3 use cases I can think of right now without any deep discussion
> that can make use of interceptors with modification:
>
> 1. Auditing. The ability to add metadata to a message for auditing is
> critical. Hostname, service name, timestamps, etc. are all pieces of data
> that can be used on the other side of the pipeline to categorize messages,
> determine loss and transport time, and pin down issues. You may say that
> these things can just be part of the message schema, but anyone who has
> worked with a multi-user data system (especially those who have been
> involved with LinkedIn) know how difficult it is to maintain consistent
> message schemas and to get other people to put in fields for your use.
>
> 2. Encryption. This is probably the most obvious case for record
> manipulation on both sides. The ability to tie in end to end encryption is
> important for data that requires external compliance (PCI, HIPAA, etc.).
>
> 3. Routing. By being able to add a bit of information about the source or
> destination of a message to the metadata, you can easily construct an
> intelligent mirror maker that can prevent loops. This has the opportunity
> to result in 

Re: [DISCUSS] KIP-42: Add Producer and Consumer Interceptors

2016-01-26 Thread Mayuresh Gharat
Hi,

I won't be able to make it to KIP hangout due to conflict.

Anna, here is the use case where knowing if there are messages in the
RecordAccumulator left to be sent to the kafka cluster for a topic is
useful.

1) Consider a pipeline :
A ---> Mirror-maker -> B

2) We have a topic T in cluster A mirrored to cluster B.

3) Now if we delete topic T in A and immediately proceed to delete the
topic in cluster B, some of the the Mirror-maker machines die because
atleast one of the batches in RecordAccumulator for topic T fail to be
produced to cluster B. We have seen this happening in our clusters.


If we know that there are no more messages left in the RecordAccumulator to
be produced to cluster B, we can safely delete the topic in cluster B
without disturbing the pipeline.
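That drain condition can be sketched by modeling the accumulator as a set of outstanding futures; the class and names below are illustrative only. With the real client, KafkaProducer.flush() (available since 0.9) provides the same blocking wait, so the topic can be deleted on cluster B once it returns:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

// Models the drain check: each send() hands a batch to an asynchronous sender
// and records a future; "flush()" waits for every outstanding future before
// any destructive action (such as deleting topic T on cluster B) is taken.
public class DrainBeforeDelete {
    static final List<CompletableFuture<Void>> inFlight = new ArrayList<>();

    static CompletableFuture<Void> send(String topic, String value) {
        CompletableFuture<Void> f = CompletableFuture.runAsync(
                () -> { /* network I/O to cluster B would happen here */ });
        inFlight.add(f);
        return f;
    }

    static void flush() {
        // Block until the "accumulator" holds no unsent batches.
        CompletableFuture.allOf(inFlight.toArray(new CompletableFuture[0])).join();
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) send("T", "msg-" + i);
        flush();   // after this returns, no batch for T remains buffered
        System.out.println("drained; safe to delete topic T on cluster B");
    }
}
```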

Thanks,

Mayuresh

On Tue, Jan 26, 2016 at 10:31 AM, Anna Povzner  wrote:

> Thanks Ismael and Todd for your feedback!
>
> I agree about coming up with lean, but useful interfaces that will be easy
> to extend later.
>
> When we discuss the minimal set of producer and consumer interceptor API in
> today’s KIP meeting (discussion item #2 in my previous email), let’s compare
> two options:
>
> *1. Minimal set of immutable API for producer and consumer interceptors*
>
> ProducerInterceptor:
> public void onSend(ProducerRecord record);
> public void onAcknowledgement(RecordMetadata metadata, Exception
> exception);
>
> ConsumerInterceptor:
> public void onConsume(ConsumerRecords records);
> public void onCommit(Map offsets);
>
> Use-cases:
> — end-to-end monitoring; custom tracing and logging
>
>
> *2. Minimal set of mutable API for producer and consumer interceptors*
>
> ProducerInterceptor:
> ProducerRecord onSend(ProducerRecord record);
> void onAcknowledgement(RecordMetadata metadata, Exception exception);
>
> ConsumerInterceptor:
> void onConsume(ConsumerRecords records);
> void onCommit(Map offsets);
>
> Additional use-cases to #1:
> — Ability to add metadata to a message or fill in standard fields for audit
> and routing.
>
> Implications
> — Partition assignment will be done based on modified key/value instead of
> original key/value. If key/value transformation is not consistent (same key
> and value does not mutate to the same, but modified, key/value), then log
> compaction would not work. However, audit and routing use-cases from Todd
> will likely do consistent transformation.
>
>
> *Additional callbacks (discussion item #3 in my previous email):*
>
> If we want to support encryption, we would want to be able to modify
> serialized key/value, rather than key and value objects. This will add the
> following API to producer and consumer interceptors:
>
> ProducerInterceptor:
> SerializedKeyValue onEnqueued(TopicPartition tp, ProducerRecord
> record, SerializedKeyValue serializedKeyValue);
>
> ConsumerInterceptor:
> SerializedKeyValue onReceive(TopicPartition tp, SerializedKeyValue
> serializedKeyValue);
>
>
> I am leaning towards implementing the minimal set of immutable or mutable
> interfaces, making sure that we have a compatibility plan that allows us to
> add more callbacks in the future (per Ismael comment), and add more APIs
> later. E.g., for encryption use-case, there could be an argument in doing
> encryption after message compression vs. per-record encryption that could
> be done using the above additional API. There are also more implications for
> every API that modifies records: modifying serialized key/value will again
> impact partition assignment (we will likely do that after partition
> assignment), which may impact log compaction and mirror maker partitioning.
>
>
> Thanks,
> Anna
>
> On Tue, Jan 26, 2016 at 7:26 AM, Todd Palino  wrote:
>
> > Finally got a chance to take a look at this. I won’t be able to make the
> > KIP meeting due to a conflict.
> >
> > I’m somewhat disappointed in this proposal. I think that the explicit
> > exclusion of modification of the messages is short-sighted, and not
> > accounting for it now is going to bite us later. Jay, aren’t you the one
> > railing against public interfaces and how difficult they are to work with
> > when you don’t get them right? The “simple” change to one of these
> > interfaces to make it able to return a record is going to be a
> significant
> > change and is going to require all clients to rewrite their interceptors.
> > If we’re not willing to put the time to think through manipulation now,
> > then this KIP should be shelved until we are. Implementing something
> > halfway is going to be worse than taking a little longer. In addition, I
> > don’t believe that manipulation requires anything more than interceptors
> to
> > receive the full record, and then to return it.
> >
> > There are 3 use cases I can think of right now without any deep discussion
> > that can make use of interceptors with modification:
> >

[GitHub] kafka pull request: KAFKA-2478: Fix manual committing example in j...

2016-01-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/210




[jira] [Updated] (KAFKA-2478) KafkaConsumer javadoc example seems wrong

2016-01-26 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-2478?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang updated KAFKA-2478:
-
   Resolution: Fixed
Fix Version/s: 0.9.0.1
   Status: Resolved  (was: Patch Available)

Issue resolved by pull request 210
[https://github.com/apache/kafka/pull/210]

> KafkaConsumer javadoc example seems wrong
> -
>
> Key: KAFKA-2478
> URL: https://issues.apache.org/jira/browse/KAFKA-2478
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Dmitry Stratiychuk
>Assignee: Dmitry Stratiychuk
> Fix For: 0.9.0.1
>
>
> I was looking at this KafkaConsumer example in the javadoc:
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L199
> As I understand, commit() method commits the maximum offsets returned by the 
> most recent invocation of poll() method.
> In this example, there's a danger of losing the data.
> Imagine the case where 300 records are returned by consumer.poll()
> The commit will happen after inserting 200 records into the database.
> But it will also commit the offsets for 100 records that are still 
> unprocessed.
> So if consumer fails before buffer is dumped into the database again,
> then those 100 records will never be processed.
> If I'm wrong, could you please clarify the behaviour of commit() method?
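The safe pattern — commit only offsets of records that have actually been written to the sink — can be sketched with plain collections standing in for the real consumer API (drainAndCommit is a hypothetical helper, and the 300/200 numbers mirror the scenario above):

```java
import java.util.ArrayDeque;

public class CommitAfterFlush {
    static final int MIN_BATCH_SIZE = 200;

    // Drain the buffer in full batches; return the number of records whose
    // offsets are now safe to commit, i.e. records already written to the sink.
    static long drainAndCommit(ArrayDeque<Long> buffer) {
        long committed = 0;
        while (buffer.size() >= MIN_BATCH_SIZE) {
            for (int i = 0; i < MIN_BATCH_SIZE; i++)
                buffer.poll();               // insertIntoDb(record) would go here
            committed += MIN_BATCH_SIZE;     // commitSync() for flushed offsets only
        }
        return committed;
    }

    public static void main(String[] args) {
        ArrayDeque<Long> buffer = new ArrayDeque<>();
        for (long offset = 0; offset < 300; offset++)   // one poll() of 300 records
            buffer.add(offset);
        long committed = drainAndCommit(buffer);
        // The 100 leftover records stay buffered AND uncommitted, so a crash
        // replays them instead of silently dropping them.
        System.out.println("committed=" + committed + " buffered=" + buffer.size());
        // prints committed=200 buffered=100
    }
}
```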





[jira] [Commented] (KAFKA-2478) KafkaConsumer javadoc example seems wrong

2016-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-2478?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15117762#comment-15117762
 ] 

ASF GitHub Bot commented on KAFKA-2478:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/210


> KafkaConsumer javadoc example seems wrong
> -
>
> Key: KAFKA-2478
> URL: https://issues.apache.org/jira/browse/KAFKA-2478
> Project: Kafka
>  Issue Type: Bug
>  Components: consumer
>Affects Versions: 0.9.0.0
>Reporter: Dmitry Stratiychuk
>Assignee: Dmitry Stratiychuk
> Fix For: 0.9.0.1
>
>
> I was looking at this KafkaConsumer example in the javadoc:
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/consumer/KafkaConsumer.java#L199
> As I understand, commit() method commits the maximum offsets returned by the 
> most recent invocation of poll() method.
> In this example, there's a danger of losing the data.
> Imagine the case where 300 records are returned by consumer.poll()
> The commit will happen after inserting 200 records into the database.
> But it will also commit the offsets for 100 records that are still 
> unprocessed.
> So if consumer fails before buffer is dumped into the database again,
> then those 100 records will never be processed.
> If I'm wrong, could you please clarify the behaviour of commit() method?





Jenkins build is back to normal : kafka-trunk-jdk8 #316

2016-01-26 Thread Apache Jenkins Server
See 



Build failed in Jenkins: kafka-trunk-jdk8 #317

2016-01-26 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2478: Fix manual committing example in javadoc

--
[...truncated 82 lines...]
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/server/KafkaServer.scala:305:
 a pure expression does nothing in statement position; you may be omitting 
necessary parentheses
ControllerStats.leaderElectionTimer
^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/controller/ControllerChannelManager.scala:389:
 class BrokerEndPoint in object UpdateMetadataRequest is deprecated: see 
corresponding Javadoc for more information.
  new UpdateMetadataRequest.BrokerEndPoint(brokerEndPoint.id, 
brokerEndPoint.host, brokerEndPoint.port)
^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/controller/ControllerChannelManager.scala:391:
 constructor UpdateMetadataRequest in class UpdateMetadataRequest is 
deprecated: see corresponding Javadoc for more information.
new UpdateMetadataRequest(controllerId, controllerEpoch, 
liveBrokers.asJava, partitionStates.asJava)
^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/network/BlockingChannel.scala:129:
 method readFromReadableChannel in class NetworkReceive is deprecated: see 
corresponding Javadoc for more information.
  response.readFromReadableChannel(channel)
   ^
there were 15 feature warning(s); re-run with -feature for details
11 warnings found
warning: [options] bootstrap class path not set in conjunction with -source 1.7
1 warning
:kafka-trunk-jdk8:core:processResources UP-TO-DATE
:kafka-trunk-jdk8:core:classes
:kafka-trunk-jdk8:clients:compileTestJavawarning: [options] bootstrap class 
path not set in conjunction with -source 1.7
Note: 
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/clients/src/test/java/org/apache/kafka/common/requests/RequestResponseTest.java
 uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
1 warning

:kafka-trunk-jdk8:clients:processTestResources
:kafka-trunk-jdk8:clients:testClasses
:kafka-trunk-jdk8:core:copyDependantLibs
:kafka-trunk-jdk8:core:copyDependantTestLibs
:kafka-trunk-jdk8:core:jar
:jar_core_2_11
Building project 'core' with Scala version 2.11.7
:kafka-trunk-jdk8:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk8:clients:processResources UP-TO-DATE
:kafka-trunk-jdk8:clients:classes UP-TO-DATE
:kafka-trunk-jdk8:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk8:clients:createVersionFile
:kafka-trunk-jdk8:clients:jar UP-TO-DATE
:kafka-trunk-jdk8:core:compileJava UP-TO-DATE
:kafka-trunk-jdk8:core:compileScalaJava HotSpot(TM) 64-Bit Server VM warning: 
ignoring option MaxPermSize=512m; support was removed in 8.0

/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/api/OffsetCommitRequest.scala:79:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.

org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP
 ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/common/OffsetMetadataAndError.scala:36:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 commitTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP,

  ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/common/OffsetMetadataAndError.scala:37:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
 expireTimestamp: Long = 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP) {

  ^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/coordinator/GroupMetadataManager.scala:395:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
  if (value.expireTimestamp == 
org.apache.kafka.common.requests.OffsetCommitRequest.DEFAULT_TIMESTAMP)

^
/x1/jenkins/jenkins-slave/workspace/kafka-trunk-jdk8/core/src/main/scala/kafka/server/KafkaApis.scala:293:
 value DEFAULT_TIMESTAMP in object OffsetCommitRequest is deprecated: see 
corresponding Javadoc for more information.
if 

[jira] [Assigned] (KAFKA-3116) Failure to build

2016-01-26 Thread Vahid Hashemian (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vahid Hashemian reassigned KAFKA-3116:
--

Assignee: Vahid Hashemian

> Failure to build 
> -
>
> Key: KAFKA-3116
> URL: https://issues.apache.org/jira/browse/KAFKA-3116
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.9.0.0
> Environment: Ubuntu 13.10
> Java 7 64
> Kafka 0.9.0
>Reporter: edwardt
>Assignee: Vahid Hashemian
>  Labels: build, newbie
>
> failure to build, says ScalaPlugin not found for project client
> Failed at bootstrap
> cd kafka_source_dir
> gradle   <  Error happened here.
> * What went wrong:
> A problem occurred evaluating root project 'kafka'.
> > Could not find property 'ScalaPlugin' on project ':clients'.
> Detailed error log:
> https://github.com/linearregression/kafka/blob/0.9.0/builderror/error.txt
> Found out I was using gradle 1.x which is not supported.
> Will be very nice to detect minimum supported gradle version or java version 
> in build script, and report. 
> Please add info on README to resolve common build errors.
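The check the reporter asks for amounts to a numeric version comparison at bootstrap time. A minimal sketch of that logic (in Python for illustration; inside build.gradle itself Gradle's own GradleVersion class would be the natural tool, and the 2.0 minimum here is an assumption based on the report that Gradle 1.x fails):

```python
def meets_minimum(current, minimum="2.0"):
    """Compare dotted version strings component-by-component as
    integers, so that e.g. '1.12' is correctly older than '2.0'."""
    as_tuple = lambda v: tuple(int(p) for p in v.split("."))
    return as_tuple(current) >= as_tuple(minimum)

# A build script could fail fast with a clear message instead of the
# confusing 'ScalaPlugin' error:
if not meets_minimum("1.12"):
    print("Gradle >= 2.0 is required to build Kafka; please upgrade.")
```

The point is to turn an obscure evaluation error into an explicit minimum-version message, which is also what the README change in the pull request documents.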





[GitHub] kafka pull request: KAFKA-3116: Specify minimum Gradle version req...

2016-01-26 Thread vahidhashemian
GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/813

KAFKA-3116: Specify minimum Gradle version required in Readme



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-3116

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/813.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #813


commit 977a257a6cbff1a7293ba1ae6fefa2b0ec32fea0
Author: Vahid Hashemian 
Date:   2016-01-26T19:52:20Z

KAFKA-3116: Update Readme to specify minimum Gradle version for building 
Kafka




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3116) Failure to build

2016-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15117864#comment-15117864
 ] 

ASF GitHub Bot commented on KAFKA-3116:
---

GitHub user vahidhashemian opened a pull request:

https://github.com/apache/kafka/pull/813

KAFKA-3116: Specify minimum Gradle version required in Readme



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-3116

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/813.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #813


commit 977a257a6cbff1a7293ba1ae6fefa2b0ec32fea0
Author: Vahid Hashemian 
Date:   2016-01-26T19:52:20Z

KAFKA-3116: Update Readme to specify minimum Gradle version for building 
Kafka




> Failure to build 
> -
>
> Key: KAFKA-3116
> URL: https://issues.apache.org/jira/browse/KAFKA-3116
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.9.0.0
> Environment: Ubuntu 13.10
> Java 7 64
> Kafka 0.9.0
>Reporter: edwardt
>Assignee: Vahid Hashemian
>  Labels: build, newbie
>
> failure to build, says ScalaPlugin not found for project client
> Failed at bootstrap
> cd kafka_source_dir
> gradle   <  Error happened here.
> * What went wrong:
> A problem occurred evaluating root project 'kafka'.
> > Could not find property 'ScalaPlugin' on project ':clients'.
> Detailed error log:
> https://github.com/linearregression/kafka/blob/0.9.0/builderror/error.txt
> Found out I was using gradle 1.x which is not supported.
> Will be very nice to detect minimum supported gradle version or java version 
> in build script, and report. 
> Please add info on README to resolve common build errors.





Build failed in Jenkins: kafka_0.9.0_jdk7 #95

2016-01-26 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2478: Fix manual committing example in javadoc

--
[...truncated 1507 lines...]

kafka.integration.PrimitiveApiTest > 
testDefaultEncoderProducerAndFetchWithCompression PASSED

kafka.integration.PrimitiveApiTest > testConsumerEmptyTopic PASSED

kafka.integration.PrimitiveApiTest > testEmptyFetchRequest PASSED

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionEnabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testCleanLeaderElectionDisabledByTopicOverride FAILED
java.lang.AssertionError: expected: but 
was:
at org.junit.Assert.fail(Assert.java:88)
at org.junit.Assert.failNotEquals(Assert.java:743)
at org.junit.Assert.assertEquals(Assert.java:118)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
kafka.integration.UncleanLeaderElectionTest.verifyUncleanLeaderElectionDisabled(UncleanLeaderElectionTest.scala:253)
at 
kafka.integration.UncleanLeaderElectionTest.testCleanLeaderElectionDisabledByTopicOverride(UncleanLeaderElectionTest.scala:155)

kafka.integration.UncleanLeaderElectionTest > testUncleanLeaderElectionDisabled 
PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionInvalidTopicOverride PASSED

kafka.integration.UncleanLeaderElectionTest > 
testUncleanLeaderElectionEnabledByTopicOverride PASSED

kafka.integration.MinIsrConfigTest > testDefaultKafkaConfig PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAutoCreateTopicWithCollision PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokerListWithNoTopics PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testGetAllTopicMetadata 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > testTopicMetadataRequest 
PASSED

kafka.integration.SaslPlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooLow PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooLow 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToEarliestWhenOffsetTooHigh 
PASSED

kafka.integration.AutoOffsetResetTest > testResetToLatestWhenOffsetTooHigh 
PASSED

kafka.integration.SslTopicMetadataTest > testIsrAfterBrokerShutDownAndJoinsBack 
PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopicWithCollision PASSED

kafka.integration.SslTopicMetadataTest > testAliveBrokerListWithNoTopics PASSED

kafka.integration.SslTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.SslTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.SslTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.SslTopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.SslTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.FetcherTest > testFetcher PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testIsrAfterBrokerShutDownAndJoinsBack PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopicWithCollision 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAliveBrokerListWithNoTopics 
PASSED

kafka.integration.PlaintextTopicMetadataTest > testAutoCreateTopic PASSED

kafka.integration.PlaintextTopicMetadataTest > testGetAllTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterNewBrokerStartup PASSED

kafka.integration.PlaintextTopicMetadataTest > testBasicTopicMetadata PASSED

kafka.integration.PlaintextTopicMetadataTest > testTopicMetadataRequest PASSED

kafka.integration.PlaintextTopicMetadataTest > 
testAliveBrokersListWithNoTopicsAfterABrokerShutdown PASSED

kafka.integration.RollingBounceTest > testRollingBounce PASSED

integration.kafka.api.AdminClientTest > testDescribeGroup PASSED

integration.kafka.api.AdminClientTest > testDescribeConsumerGroup PASSED

integration.kafka.api.AdminClientTest > testListGroups PASSED

integration.kafka.api.AdminClientTest > 
testDescribeConsumerGroupForNonExistentGroup PASSED

640 tests completed, 1 failed
:kafka_0.9.0_jdk7:core:test FAILED
:test_core_2_10_5 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':core:test'.
> There were failing tests. See the report at: 
> 

Re: Kafka KIP meeting Jan 26 at 11:00am PST

2016-01-26 Thread Jun Rao
The following are the notes from today's KIP discussion.


   - KIP-42: We agreed to leave the broker side interceptor for another
   KIP. On the client side, people favor the 2nd option in Anna's proposal.
   Anna will update the wiki accordingly.
   - KIP-43: We discussed whether there is a need to support multiple SASL
   mechanisms at the same time and what's the best way to implement this. Will
   discuss this in more detail in the email thread.
   - Jiangjie brought up an issue related to KIP-32 (adding timestamp field
   in the message). The issue is that currently there is no convenient way for
   the consumer to tell whether the timestamp in a message is the create time
   or the server time. He and Guozhang propose to use a bit in the message
   attribute to do that. Jiangjie will describe the proposal in the email
   thread.
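The bit-in-attributes idea from the KIP-32 item above can be sketched with a plain bitmask. The bit position and names below are hypothetical, chosen only to illustrate the mechanism rather than any final wire format:

```python
# Hypothetical: reserve one bit of the message attributes byte to mark
# whether the timestamp is producer create time or broker append time.
TIMESTAMP_TYPE_BIT = 0x08  # bit position is illustrative only

def set_log_append_time(attributes):
    """Mark the message timestamp as broker (log append) time."""
    return attributes | TIMESTAMP_TYPE_BIT

def is_log_append_time(attributes):
    """True if the timestamp is broker time, False for create time."""
    return bool(attributes & TIMESTAMP_TYPE_BIT)

attrs = 0x02                      # e.g. some compression-codec bits set
attrs = set_log_append_time(attrs)
# the flag is now set while the compression bits are untouched
```

Because the flag rides in an existing byte, the consumer can distinguish the two timestamp meanings without any change to the message size.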


The video will be uploaded soon in
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
 .

Thanks,

Jun

On Mon, Jan 25, 2016 at 2:56 PM, Jun Rao  wrote:

> Hi, Everyone,
>
> We will have a Kafka KIP meeting tomorrow at 11:00am PST. If you plan to
> attend but haven't received an invite, please let me know. The following is
> the agenda.
>
> Agenda:
>
> KIP-42: Add Producer and Consumer Interceptors
>
> Thanks,
>
> Jun
>


Re: Kafka KIP meeting Jan 26 at 11:00am PST

2016-01-26 Thread Anna Povzner
I will send more detailed meeting notes regarding KIP-42 in the KIP-42
discussion thread.

Thanks,
Anna

On Tue, Jan 26, 2016 at 1:21 PM, Jun Rao  wrote:

> The following are the notes from today's KIP discussion.
>
>
>- KIP-42: We agreed to leave the broker side interceptor for another
>KIP. On the client side, people favor the 2nd option in Anna's proposal.
>Anna will update the wiki accordingly.
>- KIP-43: We discussed whether there is a need to support multiple SASL
>mechanisms at the same time and what's the best way to implement this.
> Will
>discuss this in more detail in the email thread.
>- Jiangjie brought up an issue related to KIP-32 (adding timestamp field
>in the message). The issue is that currently there is no convenient way
> for
>the consumer to tell whether the timestamp in a message is the create
> time
>or the server time. He and Guozhang propose to use a bit in the message
>attribute to do that. Jiangjie will describe the proposal in the email
>thread.
>
>
> The video will be uploaded soon in
>
> https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Improvement+Proposals
>  .
>
> Thanks,
>
> Jun
>
> On Mon, Jan 25, 2016 at 2:56 PM, Jun Rao  wrote:
>
> > Hi, Everyone,
> >
> > We will have a Kafka KIP meeting tomorrow at 11:00am PST. If you plan to
> > attend but haven't received an invite, please let me know. The following
> is
> > the agenda.
> >
> > Agenda:
> >
> > KIP-42: Add Producer and Consumer Interceptors
> >
> > Thanks,
> >
> > Jun
> >
>


Build failed in Jenkins: kafka-trunk-jdk7 #990

2016-01-26 Thread Apache Jenkins Server
See 

Changes:

[wangguoz] KAFKA-2478: Fix manual committing example in javadoc

--
[...truncated 6887 lines...]
org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorConfigConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorTaskConfigs PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testGetConnectorTaskConfigsConnectorNotFound PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorTaskConfigs PASSED

org.apache.kafka.connect.runtime.rest.resources.ConnectorsResourceTest > 
testPutConnectorTaskConfigsConnectorNotFound PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testGracefulShutdown 
PASSED

org.apache.kafka.connect.util.ShutdownableThreadTest > testForcibleShutdown 
PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testStartStop PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testReloadOnStart PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testSendAndReadToEnd PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testConsumerError PASSED

org.apache.kafka.connect.util.KafkaBasedLogTest > testProducerError PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testStartStop 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > 
testReloadOnStart PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testMissingTopic 
PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testGetSet PASSED

org.apache.kafka.connect.storage.KafkaOffsetBackingStoreTest > testSetFailure 
PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutConnectorConfig PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testPutTaskConfigs 
PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testStartStop PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > testRestore PASSED

org.apache.kafka.connect.storage.KafkaConfigStorageTest > 
testPutTaskConfigsDoesNotResolveAllInconsistencies PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullValueFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testFlushFailureReplacesOffsets PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testAlreadyFlushing 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testWriteFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelBeforeAwaitFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testWriteNullKeyFlush PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > testNoOffsetsToFlush 
PASSED

org.apache.kafka.connect.storage.OffsetStorageWriterTest > 
testCancelAfterAwaitFlush PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testSaveRestore 
PASSED

org.apache.kafka.connect.storage.FileOffsetBackingStoreTest > testGetSet PASSED
:testAll

BUILD SUCCESSFUL

Total time: 1 hrs 7 mins 52.429 secs
+ ./gradlew --stacktrace docsJarAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:docsJar_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk7:clients:compileJava UP-TO-DATE
:kafka-trunk-jdk7:clients:processResources UP-TO-DATE
:kafka-trunk-jdk7:clients:classes UP-TO-DATE
:kafka-trunk-jdk7:clients:determineCommitId UP-TO-DATE
:kafka-trunk-jdk7:clients:createVersionFile
:kafka-trunk-jdk7:clients:jar UP-TO-DATE
:kafka-trunk-jdk7:clients:javadoc
:docsJar_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Could not add entry 
'
 to cache fileHashes.bin 
(
> Corrupted FreeListBlock 676881 found in cache 
> '

* Try:
Run with --info or --debug option to get more log output.

* Exception is:
org.gradle.api.UncheckedIOException: Could not add entry 
'
 to cache fileHashes.bin 
(
at 
org.gradle.cache.internal.btree.BTreePersistentIndexedCache.put(BTreePersistentIndexedCache.java:155)
at 
org.gradle.cache.internal.DefaultMultiProcessSafePersistentIndexedCache$2.run(DefaultMultiProcessSafePersistentIndexedCache.java:51)
at 

[jira] [Commented] (KAFKA-1791) Corrupt index after safe shutdown and restart

2016-01-26 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-1791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15118078#comment-15118078
 ] 

Jun Rao commented on KAFKA-1791:


Could you share the error and the stack trace in the log? Thanks,

> Corrupt index after safe shutdown and restart
> -
>
> Key: KAFKA-1791
> URL: https://issues.apache.org/jira/browse/KAFKA-1791
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.8.1
> Environment: Debian6 with Sun-Java6
>Reporter: Vamsi Subhash Achanta
>Priority: Critical
> Attachments: 0233.index, 0233.log
>
>
> We have 3 kafka brokers - all VMs. One of the broker was stopped for around 
> 30 minutes to fix a problem with the bare metal. Upon restart, after some 
> time, the broker went out of file-descriptors (FDs) and started throwing 
> errors. Upon restart, it started throwing this corrupted index exceptions. I 
> followed the other JIRAs related to corrupted indices but the solutions 
> mentioned there (deleting the index and restart) didn't work - the index gets 
> created again. The other JIRAs solution of deleting those indexes which got 
> wrongly compacted (> 10MB) didn't work either.
> What is the error? How can I fix this and bring back the broker? Thanks.
> INFO [2014-11-21 02:57:17,510] [main][] kafka.log.LogManager - Found clean 
> shutdown file. Skipping recovery for all logs in data directory 
> '/var/lib/fk-3p-kafka/logs'
>  INFO [2014-11-21 02:57:17,510] [main][] kafka.log.LogManager - Loading log 
> 'kf.production.b2b.return_order.status-25'
> FATAL [2014-11-21 02:57:17,533] [main][] kafka.server.KafkaServerStartable - 
> Fatal error during KafkaServerStable startup. Prepare to shutdown
> java.lang.IllegalArgumentException: requirement failed: Corrupt index found, 
> index file 
> (/var/lib/fk-3p-kafka/logs/kf.production.b2b.return_order.status-25/0233.index)
>  has non-zero size but the last offset is 233 and the base offset is 233
>   at scala.Predef$.require(Predef.scala:145)
>   at kafka.log.OffsetIndex.sanityCheck(OffsetIndex.scala:352)
>   at kafka.log.Log$$anonfun$loadSegments$5.apply(Log.scala:159)
>   at kafka.log.Log$$anonfun$loadSegments$5.apply(Log.scala:158)
>   at scala.collection.Iterator$class.foreach(Iterator.scala:631)
>   at 
> scala.collection.JavaConversions$JIteratorWrapper.foreach(JavaConversions.scala:474)
>   at scala.collection.IterableLike$class.foreach(IterableLike.scala:79)
>   at 
> scala.collection.JavaConversions$JCollectionWrapper.foreach(JavaConversions.scala:495)
>   at kafka.log.Log.loadSegments(Log.scala:158)
>   at kafka.log.Log.(Log.scala:64)
>   at 
> kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$4.apply(LogManager.scala:118)
>   at 
> kafka.log.LogManager$$anonfun$loadLogs$1$$anonfun$apply$4.apply(LogManager.scala:113)
>   at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
>   at scala.collection.mutable.ArrayOps.foreach(ArrayOps.scala:34)
>   at kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:113)
>   at kafka.log.LogManager$$anonfun$loadLogs$1.apply(LogManager.scala:105)
>   at 
> scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:34)
>   at scala.collection.mutable.WrappedArray.foreach(WrappedArray.scala:32)
>   at kafka.log.LogManager.loadLogs(LogManager.scala:105)
>   at kafka.log.LogManager.(LogManager.scala:57)
>   at kafka.server.KafkaServer.createLogManager(KafkaServer.scala:275)
>   at kafka.server.KafkaServer.startup(KafkaServer.scala:72)
>   at 
> kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:34)
>   at kafka.Kafka$.main(Kafka.scala:46)
>   at kafka.Kafka.main(Kafka.scala)
>  INFO [2014-11-21 02:57:17,534] [main][] kafka.server.KafkaServer - [Kafka 
> Server 2], shutting down
>  INFO [2014-11-21 02:57:17,538] [main][] kafka.server.KafkaServer - [Kafka 
> Server 2], shut down completed
>  INFO [2014-11-21 02:57:17,539] [Thread-2][] kafka.server.KafkaServer - 
> [Kafka Server 2], shutting down
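The failed check in the stack trace (OffsetIndex.sanityCheck) enforces a simple invariant: a non-empty index file must end at an offset strictly greater than its base offset. A rough sketch of that invariant, in Python rather than the Scala original, using the 233/233 values from the report:

```python
def sanity_check(file_size, base_offset, last_offset):
    """Mirrors the invariant behind the 'Corrupt index found' error: a
    non-empty index whose last offset does not advance past the base
    offset contains entries that cannot point anywhere valid."""
    if file_size > 0 and last_offset <= base_offset:
        raise ValueError(
            "Corrupt index: non-zero size but the last offset is %d "
            "and the base offset is %d" % (last_offset, base_offset))

sanity_check(0, 233, 233)     # empty index: fine
sanity_check(4096, 233, 240)  # entries advance past the base: fine
# sanity_check(4096, 233, 233) would raise, matching the report above
```

This explains why deleting the index alone does not help here: the index is rebuilt from the log segment on startup, so the underlying segment state has to be repaired for the check to pass.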





[jira] [Updated] (KAFKA-3088) 0.9.0.0 broker crash on receipt of produce request with empty client ID

2016-01-26 Thread Jun Rao (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jun Rao updated KAFKA-3088:
---
Assignee: (was: Jun Rao)

> 0.9.0.0 broker crash on receipt of produce request with empty client ID
> ---
>
> Key: KAFKA-3088
> URL: https://issues.apache.org/jira/browse/KAFKA-3088
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Dave Peterson
>
> Sending a produce request with an empty client ID to a 0.9.0.0 broker causes 
> the broker to crash as shown below.  More details can be found in the 
> following email thread:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201601.mbox/%3c5693ecd9.4050...@dspeterson.com%3e
>[2016-01-10 23:03:44,957] ERROR [KafkaApi-3] error when handling request 
> Name: ProducerRequest; Version: 0; CorrelationId: 1; ClientId: null; 
> RequiredAcks: 1; AckTimeoutMs: 1 ms; TopicAndPartition: [topic_1,3] -> 37 
> (kafka.server.KafkaApis)
>java.lang.NullPointerException
>   at 
> org.apache.kafka.common.metrics.JmxReporter.getMBeanName(JmxReporter.java:127)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.addAttribute(JmxReporter.java:106)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.metricChange(JmxReporter.java:76)
>   at 
> org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:288)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
>   at 
> kafka.server.ClientQuotaManager.getOrCreateQuotaSensors(ClientQuotaManager.scala:209)
>   at 
> kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:111)
>   at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$2(KafkaApis.scala:353)
>   at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$1.apply(KafkaApis.scala:371)
>   at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$1.apply(KafkaApis.scala:371)
>   at 
> kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:348)
>   at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:366)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:68)
>   at 
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
>   at java.lang.Thread.run(Thread.java:745)
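The stack trace shows the NullPointerException arising while a JMX MBean name is built from the null client ID. A defensive sketch of the kind of fix involved (plain Python; the helper name, the "unknown" fallback, and the MBean name template are illustrative, not the actual patch):

```python
def safe_client_id(client_id):
    """Substitute a placeholder for a null or empty client ID so that
    metric-name construction cannot fail on it."""
    return client_id if client_id else "unknown"

def mbean_name(client_id):
    # Hypothetical metric-name template, shown only to demonstrate
    # where the null value would otherwise blow up.
    return ("kafka.server:type=client-quota-metrics,client-id="
            + safe_client_id(client_id))

mbean_name(None)  # no exception, unlike the broker behaviour above
```

The broader point is that a malformed request field should be rejected or normalized at the edge rather than allowed to crash the request-handler thread.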





[jira] [Assigned] (KAFKA-3088) 0.9.0.0 broker crash on receipt of produce request with empty client ID

2016-01-26 Thread Grant Henke (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Grant Henke reassigned KAFKA-3088:
--

Assignee: Grant Henke

> 0.9.0.0 broker crash on receipt of produce request with empty client ID
> ---
>
> Key: KAFKA-3088
> URL: https://issues.apache.org/jira/browse/KAFKA-3088
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Dave Peterson
>Assignee: Grant Henke
>
> Sending a produce request with an empty client ID to a 0.9.0.0 broker causes 
> the broker to crash as shown below.  More details can be found in the 
> following email thread:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201601.mbox/%3c5693ecd9.4050...@dspeterson.com%3e
>[2016-01-10 23:03:44,957] ERROR [KafkaApi-3] error when handling request 
> Name: ProducerRequest; Version: 0; CorrelationId: 1; ClientId: null; 
> RequiredAcks: 1; AckTimeoutMs: 1 ms; TopicAndPartition: [topic_1,3] -> 37 
> (kafka.server.KafkaApis)
>java.lang.NullPointerException
>   at 
> org.apache.kafka.common.metrics.JmxReporter.getMBeanName(JmxReporter.java:127)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.addAttribute(JmxReporter.java:106)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.metricChange(JmxReporter.java:76)
>   at 
> org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:288)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
>   at 
> kafka.server.ClientQuotaManager.getOrCreateQuotaSensors(ClientQuotaManager.scala:209)
>   at 
> kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:111)
>   at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$2(KafkaApis.scala:353)
>   at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$1.apply(KafkaApis.scala:371)
>   at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$1.apply(KafkaApis.scala:371)
>   at 
> kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:348)
>   at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:366)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:68)
>   at 
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
>   at java.lang.Thread.run(Thread.java:745)





[jira] [Commented] (KAFKA-3088) 0.9.0.0 broker crash on receipt of produce request with empty client ID

2016-01-26 Thread Grant Henke (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15118088#comment-15118088
 ] 

Grant Henke commented on KAFKA-3088:


I will pick this one up quickly if that's alright. 

> 0.9.0.0 broker crash on receipt of produce request with empty client ID
> ---
>
> Key: KAFKA-3088
> URL: https://issues.apache.org/jira/browse/KAFKA-3088
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.9.0.0
>Reporter: Dave Peterson
>Assignee: Grant Henke
>
> Sending a produce request with an empty client ID to a 0.9.0.0 broker causes 
> the broker to crash as shown below.  More details can be found in the 
> following email thread:
> http://mail-archives.apache.org/mod_mbox/kafka-users/201601.mbox/%3c5693ecd9.4050...@dspeterson.com%3e
>[2016-01-10 23:03:44,957] ERROR [KafkaApi-3] error when handling request 
> Name: ProducerRequest; Version: 0; CorrelationId: 1; ClientId: null; 
> RequiredAcks: 1; AckTimeoutMs: 1 ms; TopicAndPartition: [topic_1,3] -> 37 
> (kafka.server.KafkaApis)
>java.lang.NullPointerException
>   at 
> org.apache.kafka.common.metrics.JmxReporter.getMBeanName(JmxReporter.java:127)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.addAttribute(JmxReporter.java:106)
>   at 
> org.apache.kafka.common.metrics.JmxReporter.metricChange(JmxReporter.java:76)
>   at 
> org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:288)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:177)
>   at org.apache.kafka.common.metrics.Sensor.add(Sensor.java:162)
>   at 
> kafka.server.ClientQuotaManager.getOrCreateQuotaSensors(ClientQuotaManager.scala:209)
>   at 
> kafka.server.ClientQuotaManager.recordAndMaybeThrottle(ClientQuotaManager.scala:111)
>   at 
> kafka.server.KafkaApis.kafka$server$KafkaApis$$sendResponseCallback$2(KafkaApis.scala:353)
>   at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$1.apply(KafkaApis.scala:371)
>   at 
> kafka.server.KafkaApis$$anonfun$handleProducerRequest$1.apply(KafkaApis.scala:371)
>   at 
> kafka.server.ReplicaManager.appendMessages(ReplicaManager.scala:348)
>   at kafka.server.KafkaApis.handleProducerRequest(KafkaApis.scala:366)
>   at kafka.server.KafkaApis.handle(KafkaApis.scala:68)
>   at 
> kafka.server.KafkaRequestHandler.run(KafkaRequestHandler.scala:60)
>   at java.lang.Thread.run(Thread.java:745)
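
The NPE above comes from building a JMX MBean name from a null client-id tag. A minimal defensive sketch of the idea follows; this is a hypothetical helper, not the actual Kafka patch — `MBeanNameBuilder`, its signature, and the tag-skipping policy are all illustrative assumptions:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch, not the actual Kafka fix: build a JMX MBean name
// defensively so that a null or empty tag value (e.g. a missing client-id)
// is skipped instead of being dereferenced and throwing an NPE.
public class MBeanNameBuilder {
    public static String getMBeanName(String prefix, Map<String, String> tags) {
        StringBuilder sb = new StringBuilder(prefix).append(":type=metrics");
        for (Map.Entry<String, String> entry : tags.entrySet()) {
            String value = entry.getValue();
            if (value == null || value.isEmpty())
                continue; // tolerate absent tags such as an empty client ID
            sb.append(',').append(entry.getKey()).append('=').append(value);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        Map<String, String> tags = new LinkedHashMap<>();
        tags.put("client-id", null); // the case that crashed the broker
        tags.put("node-id", "node-1");
        System.out.println(getMBeanName("kafka.producer", tags));
    }
}
```

The same idea could alternatively be applied upstream, by normalizing an empty client ID to a placeholder before any metrics sensors are created.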





[GitHub] kafka pull request: MINOR: jointest for windowed keys

2016-01-26 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/814

MINOR: jointest for windowed keys

@guozhangwang 


You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka windowed_key_join_test

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/814.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #814


commit 6ea2caebf06eca6aec2d96e6ca11acd2516596ce
Author: Yasuhiro Matsuda 
Date:   2016-01-26T22:00:48Z

MINOR: jointest for windowed keys




---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[GitHub] kafka pull request: KAFKA-3116: Specify minimum Gradle version req...

2016-01-26 Thread vahidhashemian
Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/813




[GitHub] kafka pull request: KAFKA-3116: Specify minimum Gradle version req...

2016-01-26 Thread vahidhashemian
GitHub user vahidhashemian reopened a pull request:

https://github.com/apache/kafka/pull/813

KAFKA-3116: Specify minimum Gradle version required in Readme



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-3116

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/813.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #813


commit 977a257a6cbff1a7293ba1ae6fefa2b0ec32fea0
Author: Vahid Hashemian 
Date:   2016-01-26T19:52:20Z

KAFKA-3116: Update Readme to specify minimum Gradle version for building 
Kafka






[jira] [Commented] (KAFKA-3116) Failure to build

2016-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15118142#comment-15118142
 ] 

ASF GitHub Bot commented on KAFKA-3116:
---

Github user vahidhashemian closed the pull request at:

https://github.com/apache/kafka/pull/813


> Failure to build 
> -
>
> Key: KAFKA-3116
> URL: https://issues.apache.org/jira/browse/KAFKA-3116
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.9.0.0
> Environment: Ubuntu 13.10
> Java 7 64
> Kafka 0.9.0
>Reporter: edwardt
>Assignee: Vahid Hashemian
>  Labels: build, newbie
>
> failure to build, says ScalaPlugin not found for project client
> Failed at bootstrap
> cd kafka_source_dir
> gradle   <-- Error happened here.
> * What went wrong:
> A problem occurred evaluating root project 'kafka'.
> > Could not find property 'ScalaPlugin' on project ':clients'.
> Detailed error log:
> https://github.com/linearregression/kafka/blob/0.9.0/builderror/error.txt
> Found out I was using gradle 1.x which is not supported.
> It would be very nice to detect the minimum supported Gradle version or Java version 
> in the build script, and report it. 
> Please add info on README to resolve common build errors.





[jira] [Commented] (KAFKA-3116) Failure to build

2016-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15118143#comment-15118143
 ] 

ASF GitHub Bot commented on KAFKA-3116:
---

GitHub user vahidhashemian reopened a pull request:

https://github.com/apache/kafka/pull/813

KAFKA-3116: Specify minimum Gradle version required in Readme



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/vahidhashemian/kafka KAFKA-3116

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/813.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #813


commit 977a257a6cbff1a7293ba1ae6fefa2b0ec32fea0
Author: Vahid Hashemian 
Date:   2016-01-26T19:52:20Z

KAFKA-3116: Update Readme to specify minimum Gradle version for building 
Kafka




> Failure to build 
> -
>
> Key: KAFKA-3116
> URL: https://issues.apache.org/jira/browse/KAFKA-3116
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.9.0.0
> Environment: Ubuntu 13.10
> Java 7 64
> Kafka 0.9.0
>Reporter: edwardt
>Assignee: Vahid Hashemian
>  Labels: build, newbie
>
> failure to build, says ScalaPlugin not found for project client
> Failed at bootstrap
> cd kafka_source_dir
> gradle   <-- Error happened here.
> * What went wrong:
> A problem occurred evaluating root project 'kafka'.
> > Could not find property 'ScalaPlugin' on project ':clients'.
> Detailed error log:
> https://github.com/linearregression/kafka/blob/0.9.0/builderror/error.txt
> Found out I was using gradle 1.x which is not supported.
> It would be very nice to detect the minimum supported Gradle version or Java version 
> in the build script, and report it. 
> Please add info on README to resolve common build errors.





Re: [DISCUSS] KIP-32 - Add CreateTime and LogAppendTime to Kafka message

2016-01-26 Thread Becket Qin
Bump up this thread per discussion on the KIP hangout.

During the implementation of the KIP, Guozhang raised another proposal on
how to indicate the message timestamp type used by messages. So we want to
see people's opinion on this proposal.

The current and the new proposal differ only for messages that are a)
compressed, and b) using LogAppendTime.

For compressed messages using LogAppendTime, the timestamp handling in the
current proposal is as follows:
1. When a producer produces the messages, it tries to set timestamp to -1
for inner messages if it knows LogAppendTime is used.
2. When a broker receives the messages, it will overwrite the timestamp of
inner message to -1 if needed and write server time to the wrapper message.
Broker will do re-compression if inner message timestamp is overwritten.
3. When a consumer receives the messages, it will see the inner message
timestamp is -1 so the wrapper message timestamp is used.

Implementation-wise, this proposal requires the producer to set the timestamp
for inner messages correctly to avoid broker-side re-compression. To do
that, the short-term solution is to let the producer infer the timestamp type
returned by the broker in ProduceResponse and set the correct timestamp afterwards.
This means the first few batches will still need re-compression on the
broker. The long-term solution is to have the producer get the topic configuration
during a metadata update.


The proposed modification is:
1. When a producer produces the messages, it always uses create time.
2. When a broker receives the messages, it ignores the inner messages
timestamp, but simply set a wrapper message timestamp type attribute bit to
1 and set the timestamp of the wrapper message to server time. (The broker
will also set the timestamp type attribute bit accordingly for
non-compressed messages using LogAppendTime).
3. When a consumer receives the messages, it checks timestamp type
attribute bit of wrapper message. If it is set to 1, the inner message's
timestamp will be ignored and the wrapper message's timestamp will be used.

This approach uses an extra attribute bit. The good thing about the modified
protocol is that consumers will be able to know the timestamp type, and
re-compression on the broker side is completely avoided no matter what value is
sent by the producer. In this approach the inner messages will have wrong
timestamps.

We want to see if people have concerns over the modified approach.

Thanks,

Jiangjie (Becket) Qin
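
To make the proposed consumer-side check concrete, here is a minimal sketch. The class, method names, and the bit position are illustrative assumptions, not the actual Kafka wire format:

```java
// Illustrative sketch of the proposed consumer-side timestamp resolution;
// names and the bit position are assumptions, not the real message format.
public class TimestampResolver {
    // Assumed position of the proposed timestamp-type attribute bit.
    public static final int TIMESTAMP_TYPE_MASK = 0x08;

    // If the wrapper message's timestamp-type bit is set (LogAppendTime),
    // the wrapper timestamp wins and the inner timestamp is ignored;
    // otherwise the inner message's CreateTime is used.
    public static long resolveTimestamp(byte wrapperAttributes,
                                        long wrapperTimestamp,
                                        long innerTimestamp) {
        boolean logAppendTime = (wrapperAttributes & TIMESTAMP_TYPE_MASK) != 0;
        return logAppendTime ? wrapperTimestamp : innerTimestamp;
    }

    public static void main(String[] args) {
        System.out.println(resolveTimestamp((byte) 0x08, 1000L, 500L)); // LogAppendTime case
        System.out.println(resolveTimestamp((byte) 0x00, 1000L, 500L)); // CreateTime case
    }
}
```

This also shows why the broker never needs to re-compress under the modified proposal: the wrapper attributes and timestamp live outside the compressed payload.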





On Tue, Dec 15, 2015 at 11:45 AM, Becket Qin  wrote:

> Jun,
>
> 1. I agree it would be nice to have the timestamps used in a unified way.
> My concern is that if we let server change timestamp of the inner message
> for LogAppendTime, that will enforce the user who are using LogAppendTime
> to always pay the recompression penalty. So using LogAppendTime makes
> KIP-31 in vain.
>
> 4. If there are no entries in the log segment, we can read from the time
> index before the previous log segment. If there is no previous entry
> available after we search until the earliest log segment, that means all
> the previous log segments with valid time index entries have been deleted. In
> that case supposedly there should be only one log segment left - the active
> log segment, we can simply set the latest timestamp to 0.
>
> Guozhang,
>
> Sorry for the confusion. by "the timestamp of the latest message" I
> actually meant "the timestamp of the message with largest timestamp". So in
> your example the "latest message" is 5.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
>
>
> On Tue, Dec 15, 2015 at 10:02 AM, Guozhang Wang 
> wrote:
>
>> Jun, Jiangjie,
>>
>> I am confused about 3) here, if we use "the timestamp of the latest
>> message"
>> then doesn't this mean we will roll the log whenever a message delayed by
>> rolling time is received as well? Just to clarify, my understanding of
>> "the
>> timestamp of the latest message", for example in the following log, is 1,
>> not 5:
>>
>> 2, 3, 4, 5, 1
>>
>> Guozhang
>>
>>
>> On Mon, Dec 14, 2015 at 10:05 PM, Jun Rao  wrote:
>>
>> > 1. Hmm, it's more intuitive if the consumer sees the same timestamp
>> whether
>> > the messages are compressed or not. When
>> > message.timestamp.type=LogAppendTime,
>> > we will need to set timestamp in each message if messages are not
>> > compressed, so that the follower can get the same timestamp. So, it
>> seems
>> > that we should do the same thing for inner messages when messages are
>> > compressed.
>> >
>> > 4. I thought on startup, we restore the timestamp of the latest message
>> by
>> > reading from the time index of the last log segment. So, what happens if
>> > there are no index entries?
>> >
>> > Thanks,
>> >
>> > Jun
>> >
>> > On Mon, Dec 14, 2015 at 6:28 PM, Becket Qin 
>> wrote:
>> >
>> > > Thanks for the explanation, Jun.
>> > >
>> > > 1. That makes sense. So maybe we can do the following:
>> > > (a) Set the timestamp 

Re: [DISCUSS] KIP-42: Add Producer and Consumer Interceptors

2016-01-26 Thread Anna Povzner
Hi All,

Here are the meeting notes from today’s KIP meeting:

1. We agreed to keep the scope of this KIP to be producer and consumer
interceptors only. Broker-side interceptor will be added later as a
separate KIP. The reasons were already mentioned in this thread, but the
summary is:
 * Broker interceptor is riskier and requires careful consideration about
overheads, whether to intercept leaders vs. leaders/replicas, what to do on
leader failover and so on.
 * Broker interceptors increase monitoring resolution, but not including it
in this KIP does not reduce the usefulness of producer and consumer
interceptors that enable end-to-end monitoring.

2. We agreed to scope ProducerInterceptor and ConsumerInterceptor callbacks
to a minimal set of mutable APIs that are not dependent on producer and
consumer internal implementation.

ProducerInterceptor:
*ProducerRecord<K, V> onSend(ProducerRecord<K, V> record);*
*void onAcknowledgement(RecordMetadata metadata, Exception exception);*

ConsumerInterceptor:
*ConsumerRecords<K, V> onConsume(ConsumerRecords<K, V> records);*
*void onCommit(Map<TopicPartition, OffsetAndMetadata> offsets);*

We will allow interceptors to modify ProducerRecord on producer side, and
modify ConsumerRecords on consumer side. This will support end-to-end
monitoring and auditing and support the ability to add metadata for a
message. This will support Todd’s Auditing and Routing use-cases.

We did not find any use-case for modifying records in onConsume() callback,
but decided to enable modification of consumer records for symmetry with
onSend().
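
As a concrete illustration of the mutable onSend() contract, here is a self-contained sketch of an auditing producer interceptor. The record and metadata classes below are minimal stand-ins defined just for the example (the real types live in org.apache.kafka.clients.producer and org.apache.kafka.common), and the auditing logic is purely illustrative:

```java
import java.util.concurrent.atomic.AtomicLong;

// Self-contained sketch of an auditing interceptor against the callbacks
// above. The record/metadata classes are minimal stand-ins so the example
// compiles on its own.
public class AuditExample {
    static class ProducerRecord {
        final String topic;
        final String value;
        ProducerRecord(String topic, String value) {
            this.topic = topic;
            this.value = value;
        }
    }

    static class RecordMetadata {
        final String topic;
        RecordMetadata(String topic) { this.topic = topic; }
    }

    interface ProducerInterceptor {
        ProducerRecord onSend(ProducerRecord record);
        void onAcknowledgement(RecordMetadata metadata, Exception exception);
    }

    // Counts sends and failures, and tags each record's value with an audit
    // marker -- illustrating that onSend() may return a modified record.
    static class AuditingInterceptor implements ProducerInterceptor {
        final AtomicLong sent = new AtomicLong();
        final AtomicLong failed = new AtomicLong();

        @Override
        public ProducerRecord onSend(ProducerRecord record) {
            sent.incrementAndGet();
            return new ProducerRecord(record.topic, record.value + "|audited");
        }

        @Override
        public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
            if (exception != null)
                failed.incrementAndGet();
        }
    }
}
```

In a multi-tenant audit use-case, the marker would typically carry tenant and timing metadata rather than a fixed string.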

3. We agreed to ensure compatibility when/if we add new methods to
ProducerInterceptor and ConsumerInterceptor by using default methods with
an empty implementation. Ok to assume Java 8. (This is Ismael’s method #2).
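
The compatibility approach in point 3 relies on Java 8 default methods. A minimal sketch, with illustrative interface and method names:

```java
// Sketch of point 3: adding a callback later via a Java 8 default method
// with an empty body. Interface and method names are illustrative.
public class DefaultMethodDemo {
    interface Interceptor {
        String onSend(String record);

        // Hypothetical callback added after the interface shipped; the empty
        // default keeps older implementations source- and binary-compatible.
        default void onClose() { }
    }

    // Written against the original interface, unaware of onClose(), yet it
    // still satisfies the extended interface.
    static class LegacyInterceptor implements Interceptor {
        @Override
        public String onSend(String record) {
            return record;
        }
    }

    public static void main(String[] args) {
        Interceptor i = new LegacyInterceptor();
        System.out.println(i.onSend("record-1"));
        i.onClose(); // inherited empty default; no exception
    }
}
```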

4. We decided not to add any callbacks to producer and consumer
interceptors that will depend on internal implementation as part of this
KIP. However, it is possible to add them later as part of another KIP if
there are good use-cases.

*Reasoning.* We did not have concrete use-cases that justified more methods
at this point. Some of the use-cases were for finer-grained latency
collection, which could be done with Kafka Metrics. Another use-case was
encryption. However, there are several design options for encryption. One
is to do per-record encryption which would require adding
ProducerInterceptor.onEnqueued() and ConsumerInterceptor.onReceive(). One
could argue that in that case encryption could be done by adding a custom
serializer/deserializer. Another option is to do encryption after message
gets compressed, but there are issues that arise regarding broker doing
re-compression. We decided that it is better to have that discussion in a
separate KIP and decide that this is something we want to do with
interceptors or by other means.


Todd, Mayuresh and others who missed the KIP meeting, please let me know
your thoughts on the scope we agreed on during the meeting.

I will update the KIP proposal with the current decision by end of today.

Thanks,
Anna


On Tue, Jan 26, 2016 at 11:41 AM, Mayuresh Gharat <
gharatmayures...@gmail.com> wrote:

> Hi,
>
> I won't be able to make it to KIP hangout due to conflict.
>
> Anna, here is the use case where knowing if there are messages in the
> RecordAccumulator left to be sent to the kafka cluster for a topic is
> useful.
>
> 1) Consider a pipeline :
> A ---> Mirror-maker -> B
>
> 2) We have a topic T in cluster A mirrored to cluster B.
>
> 3) Now if we delete topic T in A and immediately proceed to delete the
> topic in cluster B, some of the Mirror-maker machines die because
> at least one of the batches in the RecordAccumulator for topic T fails to be
> produced to cluster B. We have seen this happening in our clusters.
>
>
> If we know that there are no more messages left in the RecordAccumulator to
> be produced to cluster B, we can safely delete the topic in cluster B
> without disturbing the pipeline.
>
> Thanks,
>
> Mayuresh
>
> On Tue, Jan 26, 2016 at 10:31 AM, Anna Povzner  wrote:
>
> > Thanks Ismael and Todd for your feedback!
> >
> > I agree about coming up with lean, but useful interfaces that will be
> easy
> > to extend later.
> >
> > When we discuss the minimal set of producer and consumer interceptor API
> in
> > today’s KIP meeting (discussion item #2 in my previous email), lets
> compare
> > two options:
> >
> > *1. Minimal set of immutable API for producer and consumer interceptors*
> >
> > ProducerInterceptor:
> > public void onSend(ProducerRecord record);
> > public void onAcknowledgement(RecordMetadata metadata, Exception
> > exception);
> >
> > ConsumerInterceptor:
> > public void onConsume(ConsumerRecords records);
> > public void onCommit(Map offsets);
> >
> > Use-cases:
> > — end-to-end monitoring; custom tracing and logging
> >
> >
> > *2. Minimal set of mutable API for producer and consumer interceptors*
> >

Re: [DISCUSS] KIP-42: Add Producer and Consumer Interceptors

2016-01-26 Thread Todd Palino
This looks good. As noted, having one mutable interceptor on each side
allows for the use cases we can envision right now, and I think that’s
going to provide a great deal of opportunity for implementing things like
audit, especially within a multi-tenant environment. Looking forward to
getting this available in the clients.

Thanks!

-Todd


On Tue, Jan 26, 2016 at 2:36 PM, Anna Povzner  wrote:

> Hi All,
>
> Here are the meeting notes from today’s KIP meeting:
>
> 1. We agreed to keep the scope of this KIP to be producer and consumer
> interceptors only. Broker-side interceptor will be added later as a
> separate KIP. The reasons were already mentioned in this thread, but the
> summary is:
>  * Broker interceptor is riskier and requires careful consideration about
> overheads, whether to intercept leaders vs. leaders/replicas, what to do on
> leader failover and so on.
>  * Broker interceptors increase monitoring resolution, but not including it
> in this KIP does not reduce usefulness of producer and consumer
> interceptors that enable end-to-end monitoring
>
> 2. We agreed to scope ProducerInterceptor and ConsumerInterceptor callbacks
> to minimal set of mutable API that are not dependent on producer and
> consumer internal implementation.
>
> ProducerInterceptor:
> *ProducerRecord<K, V> onSend(ProducerRecord<K, V> record);*
> *void onAcknowledgement(RecordMetadata metadata, Exception exception);*
>
> ConsumerInterceptor:
> *ConsumerRecords<K, V> onConsume(ConsumerRecords<K, V> records);*
> *void onCommit(Map<TopicPartition, OffsetAndMetadata> offsets);*
>
> We will allow interceptors to modify ProducerRecord on producer side, and
> modify ConsumerRecords on consumer side. This will support end-to-end
> monitoring and auditing and support the ability to add metadata for a
> message. This will support Todd’s Auditing and Routing use-cases.
>
> We did not find any use-case for modifying records in onConsume() callback,
> but decided to enable modification of consumer records for symmetry with
> onSend().
>
> 3. We agreed to ensure compatibility when/if we add new methods to
> ProducerInterceptor and ConsumerInterceptor by using default methods with
> an empty implementation. Ok to assume Java 8. (This is Ismael’s method #2).
>
> 4. We decided not to add any callbacks to producer and consumer
> interceptors that will depend on internal implementation as part of this
> KIP. However, it is possible to add them later as part of another KIP if
> there are good use-cases.
>
> *Reasoning.* We did not have concrete use-cases that justified more methods
> at this point. Some of the use-cases were for more fine-grain latency
> collection, which could be done with Kafka Metrics. Another use-case was
> encryption. However, there are several design options for encryption. One
> is to do per-record encryption which would require adding
> ProducerInterceptor.onEnqueued() and ConsumerInterceptor.onReceive(). One
> could argue that in that case encryption could be done by adding a custom
> serializer/deserializer. Another option is to do encryption after message
> gets compressed, but there are issues that arise regarding broker doing
> re-compression. We decided that it is better to have that discussion in a
> separate KIP and decide that this is something we want to do with
> interceptors or by other means.
>
>
> Todd, Mayuresh and others who missed the KIP meeting, please let me know
> your thoughts on the scope we agreed on during the meeting.
>
> I will update the KIP proposal with the current decision by end of today.
>
> Thanks,
> Anna
>
>
> On Tue, Jan 26, 2016 at 11:41 AM, Mayuresh Gharat <
> gharatmayures...@gmail.com> wrote:
>
> > Hi,
> >
> > I won't be able to make it to KIP hangout due to conflict.
> >
> > Anna, here is the use case where knowing if there are messages in the
> > RecordAccumulator left to be sent to the kafka cluster for a topic is
> > useful.
> >
> > 1) Consider a pipeline :
> > A ---> Mirror-maker -> B
> >
> > 2) We have a topic T in cluster A mirrored to cluster B.
> >
> > 3) Now if we delete topic T in A and immediately proceed to delete the
> > topic in cluster B, some of the Mirror-maker machines die because
> > at least one of the batches in the RecordAccumulator for topic T fails to be
> > produced to cluster B. We have seen this happening in our clusters.
> >
> >
> > If we know that there are no more messages left in the RecordAccumulator
> to
> > be produced to cluster B, we can safely delete the topic in cluster B
> > without disturbing the pipeline.
> >
> > Thanks,
> >
> > Mayuresh
> >
> > On Tue, Jan 26, 2016 at 10:31 AM, Anna Povzner 
> wrote:
> >
> > > Thanks Ismael and Todd for your feedback!
> > >
> > > I agree about coming up with lean, but useful interfaces that will be
> > easy
> > > to extend later.
> > >
> > > When we discuss the minimal set of producer and consumer interceptor
> API
> > in
> > > today’s KIP meeting (discussion item #2 

Re: [DISCUSS] KIP-32 - Add CreateTime and LogAppendTime to Kafka message

2016-01-26 Thread Jay Kreps
I'm in favor of Guozhang's proposal. I think that logic is a bit hacky, but
I agree that this is better than the alternative, and the hackiness only
affects people using log append time, which I think will be pretty uncommon.
I think setting that bit will have the additional added value that
consumers can know the meaning of the timestamp.

-Jay

On Tue, Jan 26, 2016 at 2:07 PM, Becket Qin  wrote:

> Bump up this thread per discussion on the KIP hangout.
>
> During the implementation of the KIP, Guozhang raised another proposal on
> how to indicate the message timestamp type used by messages. So we want to
> see people's opinion on this proposal.
>
> The difference between current and the new proposal only differs on
> messages that are a) compressed, and b) using LogAppendTime
>
> For compressed messages using LogAppendTime, the timestamps in the current
> proposal is as below:
> 1. When a producer produces the messages, it tries to set timestamp to -1
> for inner messages if it knows LogAppendTime is used.
> 2. When a broker receives the messages, it will overwrite the timestamp of
> inner message to -1 if needed and write server time to the wrapper message.
> Broker will do re-compression if inner message timestamp is overwritten.
> 3. When a consumer receives the messages, it will see the inner message
> timestamp is -1 so the wrapper message timestamp is used.
>
> Implementation wise, this proposal requires the producer to set timestamp
> for inner messages correctly to avoid broker side re-compression. To do
> that, the short term solution is to let producer infer the timestamp type
> returned by broker in ProduceResponse and set correct timestamp afterwards.
> This means the first few batches will still need re-compression on the
> broker. The long term solution is to have producer get topic configuration
> during metadata update.
>
>
> The proposed modification is:
> 1. When a producer produces the messages, it always uses create time.
> 2. When a broker receives the messages, it ignores the inner messages
> timestamp, but simply set a wrapper message timestamp type attribute bit to
> 1 and set the timestamp of the wrapper message to server time. (The broker
> will also set the timestamp type attribute bit accordingly for
> non-compressed messages using LogAppendTime).
> 3. When a consumer receives the messages, it checks timestamp type
> attribute bit of wrapper message. If it is set to 1, the inner message's
> timestamp will be ignored and the wrapper message's timestamp will be used.
>
> This approach uses an extra attribute bit. The good thing of the modified
> protocol is consumers will be able to know the timestamp type. And
> re-compression on broker side is completely avoided no matter what value is
> sent by the producer. In this approach the inner messages will have wrong
> timestamps.
>
> We want to see if people have concerns over the modified approach.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
>
>
>
>
> On Tue, Dec 15, 2015 at 11:45 AM, Becket Qin  wrote:
>
> > Jun,
> >
> > 1. I agree it would be nice to have the timestamps used in a unified way.
> > My concern is that if we let server change timestamp of the inner message
> > for LogAppendTime, that will enforce the user who are using LogAppendTime
> > to always pay the recompression penalty. So using LogAppendTime makes
> > KIP-31 in vain.
> >
> > 4. If there are no entries in the log segment, we can read from the time
> > index before the previous log segment. If there is no previous entry
> > available after we search until the earliest log segment, that means all
> > the previous log segment with a valid time index entry has been deleted.
> In
> > that case supposedly there should be only one log segment left - the
> active
> > log segment, we can simply set the latest timestamp to 0.
> >
> > Guozhang,
> >
> > Sorry for the confusion. by "the timestamp of the latest message" I
> > actually meant "the timestamp of the message with largest timestamp". So
> in
> > your example the "latest message" is 5.
> >
> > Thanks,
> >
> > Jiangjie (Becket) Qin
> >
> >
> >
> > On Tue, Dec 15, 2015 at 10:02 AM, Guozhang Wang 
> > wrote:
> >
> >> Jun, Jiangjie,
> >>
> >> I am confused about 3) here, if we use "the timestamp of the latest
> >> message"
> >> then doesn't this mean we will roll the log whenever a message delayed
> by
> >> rolling time is received as well? Just to clarify, my understanding of
> >> "the
> >> timestamp of the latest message", for example in the following log, is
> 1,
> >> not 5:
> >>
> >> 2, 3, 4, 5, 1
> >>
> >> Guozhang
> >>
> >>
> >> On Mon, Dec 14, 2015 at 10:05 PM, Jun Rao  wrote:
> >>
> >> > 1. Hmm, it's more intuitive if the consumer sees the same timestamp
> >> whether
> >> > the messages are compressed or not. When
> >> > message.timestamp.type=LogAppendTime,
> >> > we will need to set timestamp in each message if messages are not
> 

Re: [DISCUSS] KIP-32 - Add CreateTime and LogAppendTime to Kafka message

2016-01-26 Thread Becket Qin
My hesitation about the changed protocol is that I think if we have the
topic configuration returned in the topic metadata, the current protocol
makes more sense, because the timestamp type is a topic-level setting, so we
don't need to put it into each message. That is assuming a timestamp type
change on a topic rarely happens and, if it is ever needed, the existing
data should be wiped out.

Thanks,

Jiangjie (Becket) Qin

On Tue, Jan 26, 2016 at 2:07 PM, Becket Qin  wrote:

> Bump up this thread per discussion on the KIP hangout.
>
> During the implementation of the KIP, Guozhang raised another proposal on
> how to indicate the message timestamp type used by messages. So we want to
> see people's opinion on this proposal.
>
> The difference between current and the new proposal only differs on
> messages that are a) compressed, and b) using LogAppendTime
>
> For compressed messages using LogAppendTime, the timestamps in the current
> proposal is as below:
> 1. When a producer produces the messages, it tries to set timestamp to -1
> for inner messages if it knows LogAppendTime is used.
> 2. When a broker receives the messages, it will overwrite the timestamp of
> inner message to -1 if needed and write server time to the wrapper message.
> Broker will do re-compression if inner message timestamp is overwritten.
> 3. When a consumer receives the messages, it will see the inner message
> timestamp is -1 so the wrapper message timestamp is used.
>
> Implementation wise, this proposal requires the producer to set timestamp
> for inner messages correctly to avoid broker side re-compression. To do
> that, the short term solution is to let producer infer the timestamp type
> returned by broker in ProduceResponse and set correct timestamp afterwards.
> This means the first few batches will still need re-compression on the
> broker. The long term solution is to have producer get topic configuration
> during metadata update.
>
>
> The proposed modification is:
> 1. When a producer produces the messages, it always uses create time.
> 2. When a broker receives the messages, it ignores the inner messages
> timestamp, but simply set a wrapper message timestamp type attribute bit to
> 1 and set the timestamp of the wrapper message to server time. (The broker
> will also set the timestamp type attribute bit accordingly for
> non-compressed messages using LogAppendTime).
> 3. When a consumer receives the messages, it checks timestamp type
> attribute bit of wrapper message. If it is set to 1, the inner message's
> timestamp will be ignored and the wrapper message's timestamp will be used.
>
> This approach uses an extra attribute bit. The good thing of the modified
> protocol is consumers will be able to know the timestamp type. And
> re-compression on broker side is completely avoided no matter what value is
> sent by the producer. In this approach the inner messages will have wrong
> timestamps.
>
> We want to see if people have concerns over the modified approach.
>
> Thanks,
>
> Jiangjie (Becket) Qin
>
>
>
>
>
> On Tue, Dec 15, 2015 at 11:45 AM, Becket Qin  wrote:
>
>> Jun,
>>
>> 1. I agree it would be nice to have the timestamps used in a unified way.
>> My concern is that if we let the server change the timestamp of the inner
>> message for LogAppendTime, that will force users of LogAppendTime
>> to always pay the re-compression penalty. So using LogAppendTime makes
>> KIP-31 in vain.
>>
>> 4. If there are no entries in the log segment, we can read from the time
>> index before the previous log segment. If there is no previous entry
>> available after we search until the earliest log segment, that means all
>> the previous log segments with a valid time index entry have been deleted. In
>> that case supposedly there should be only one log segment left - the active
>> log segment - and we can simply set the latest timestamp to 0.
>>
>> Guozhang,
>>
>> Sorry for the confusion. By "the timestamp of the latest message" I
>> actually meant "the timestamp of the message with the largest timestamp". So in
>> your example the "latest message" is 5.
>>
>> Thanks,
>>
>> Jiangjie (Becket) Qin
>>
>>
>>
>> On Tue, Dec 15, 2015 at 10:02 AM, Guozhang Wang 
>> wrote:
>>
>>> Jun, Jiangjie,
>>>
>>> I am confused about 3) here: if we use "the timestamp of the latest
>>> message",
>>> then doesn't this mean we will roll the log whenever a message delayed by
>>> rolling time is received as well? Just to clarify, my understanding of
>>> "the
>>> timestamp of the latest message", for example in the following log, is 1,
>>> not 5:
>>>
>>> 2, 3, 4, 5, 1
>>>
>>> Guozhang
>>>
>>>
>>> On Mon, Dec 14, 2015 at 10:05 PM, Jun Rao  wrote:
>>>
>>> > 1. Hmm, it's more intuitive if the consumer sees the same timestamp
>>> whether
>>> > the messages are compressed or not. When
>>> > message.timestamp.type=LogAppendTime,
>>> > we will need to set timestamp in each message if 

[jira] [Created] (KAFKA-3153) Serializer/Deserializer Registration and Type inference

2016-01-26 Thread Yasuhiro Matsuda (JIRA)
Yasuhiro Matsuda created KAFKA-3153:
---

 Summary: Serializer/Deserializer Registration and Type inference
 Key: KAFKA-3153
 URL: https://issues.apache.org/jira/browse/KAFKA-3153
 Project: Kafka
  Issue Type: Sub-task
  Components: kafka streams
Reporter: Yasuhiro Matsuda
Assignee: Yasuhiro Matsuda
 Fix For: 0.9.1.0


This changes the way serializers/deserializers are selected by the framework. 
The new scheme requires the app dev to register serializers/deserializers for 
types using an API. The framework infers the type of data from the topology and 
uses the appropriate serializer/deserializer. This is best effort: type 
inference is not always possible due to Java's type erasure. If a type cannot 
be determined, user code can supply more information.
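A rough sketch of what such per-type registration with best-effort inference might look like; the names here are hypothetical, not the actual Kafka Streams API:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of per-type serializer registration with best-effort
// type inference; not the actual Kafka Streams API.
public class SerdeRegistry {
    interface Serializer<T> { byte[] serialize(T value); }

    private final Map<Class<?>, Serializer<?>> serializers = new HashMap<>();

    <T> void register(Class<T> type, Serializer<T> serializer) {
        serializers.put(type, serializer);
    }

    // Inference is best effort: due to type erasure the runtime class of a
    // value is all the framework can see, so user code may need to supply
    // the type explicitly when the runtime class is not registered.
    @SuppressWarnings("unchecked")
    <T> byte[] serialize(T value) {
        Serializer<T> s = (Serializer<T>) serializers.get(value.getClass());
        if (s == null) {
            throw new IllegalStateException(
                "No serializer registered for " + value.getClass().getName());
        }
        return s.serialize(value);
    }

    public static void main(String[] args) {
        SerdeRegistry registry = new SerdeRegistry();
        registry.register(String.class, v -> v.getBytes());
        registry.register(Integer.class, v -> new byte[] {
            (byte) (v >> 24), (byte) (v >> 16), (byte) (v >> 8), v.byteValue() });
        System.out.println(registry.serialize("abc").length); // 3
        System.out.println(registry.serialize(42).length);    // 4
    }
}
```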




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[GitHub] kafka pull request: KAFKA-3116: Specify minimum Gradle version req...

2016-01-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/813


---
If your project is set up for it, you can reply to this email and have your
reply appear on GitHub as well. If your project does not have this feature
enabled and wishes so, or if the feature is enabled but not working, please
contact infrastructure at infrastruct...@apache.org or file a JIRA ticket
with INFRA.
---


[jira] [Commented] (KAFKA-3116) Failure to build

2016-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3116?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15118280#comment-15118280
 ] 

ASF GitHub Bot commented on KAFKA-3116:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/813


> Failure to build 
> -
>
> Key: KAFKA-3116
> URL: https://issues.apache.org/jira/browse/KAFKA-3116
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.9.0.0
> Environment: Ubuntu 13.10
> Java 7 64
> Kafka 0.9.0
>Reporter: edwardt
>Assignee: Vahid Hashemian
>  Labels: build, newbie
> Fix For: 0.9.1.0
>
>
> failure to build, says ScalaPlugin not found for project client
> Failed at bootstrap
> cd kafka_source_dir
> gradle   <  Error happened here.
> * What went wrong:
> A problem occurred evaluating root project 'kafka'.
> > Could not find property 'ScalaPlugin' on project ':clients'.
> Detailed error log:
> https://github.com/linearregression/kafka/blob/0.9.0/builderror/error.txt
> Found out I was using gradle 1.x which is not supported.
> It would be very nice to detect the minimum supported gradle version or java 
> version in the build script, and report it. 
> Please add info to the README on resolving common build errors.





[jira] [Resolved] (KAFKA-3116) Failure to build

2016-01-26 Thread Gwen Shapira (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3116?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gwen Shapira resolved KAFKA-3116.
-
   Resolution: Fixed
Fix Version/s: 0.9.1.0

Issue resolved by pull request 813
[https://github.com/apache/kafka/pull/813]

> Failure to build 
> -
>
> Key: KAFKA-3116
> URL: https://issues.apache.org/jira/browse/KAFKA-3116
> Project: Kafka
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 0.9.0.0
> Environment: Ubuntu 13.10
> Java 7 64
> Kafka 0.9.0
>Reporter: edwardt
>Assignee: Vahid Hashemian
>  Labels: build, newbie
> Fix For: 0.9.1.0
>
>
> failure to build, says ScalaPlugin not found for project client
> Failed at bootstrap
> cd kafka_source_dir
> gradle   <  Error happened here.
> * What went wrong:
> A problem occurred evaluating root project 'kafka'.
> > Could not find property 'ScalaPlugin' on project ':clients'.
> Detailed error log:
> https://github.com/linearregression/kafka/blob/0.9.0/builderror/error.txt
> Found out I was using gradle 1.x which is not supported.
> It would be very nice to detect the minimum supported gradle version or java 
> version in the build script, and report it. 
> Please add info to the README on resolving common build errors.





Build failed in Jenkins: kafka-trunk-jdk8 #318

2016-01-26 Thread Apache Jenkins Server
See 

Changes:

[cshapi] KAFKA-3116; Specify minimum Gradle version required in Readme

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H11 (docker Ubuntu ubuntu yahoo-not-h2) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url 
 > https://git-wip-us.apache.org/repos/asf/kafka.git # timeout=10
Fetching upstream changes from https://git-wip-us.apache.org/repos/asf/kafka.git
 > git --version # timeout=10
 > git -c core.askpass=true fetch --tags --progress 
 > https://git-wip-us.apache.org/repos/asf/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 1388ed9ba21a4f451f4208e0a6179bb18b14008d 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 1388ed9ba21a4f451f4208e0a6179bb18b14008d
 > git rev-list 82c219149027e8d96840af98d32fb1b877ab4ec2 # timeout=10
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson348544797180730173.sh
+ 
/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
http://gradle.org/docs/2.4-rc-2/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:downloadWrapper

BUILD SUCCESSFUL

Total time: 17.961 secs
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/hudson3781961927275796051.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew -Dorg.gradle.project.maxParallelForks=1 clean jarAll testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/2.10/userguide/gradle_daemon.html.
Building project 'core' with Scala version 2.10.6
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean UP-TO-DATE
:examples:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:jar_core_2_10
Building project 'core' with Scala version 2.10.6
:kafka-trunk-jdk8:clients:compileJava
:jar_core_2_10 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Failed to capture snapshot of input files for task 'compileJava' during 
up-to-date check.  See stacktrace for details.
> Could not add entry 
> '/home/jenkins/.gradle/caches/modules-2/files-2.1/net.jpountz.lz4/lz4/1.3.0/c708bb2590c0652a642236ef45d9f99ff842a2ce/lz4-1.3.0.jar'
>  to cache fileHashes.bin 
> (

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output.

BUILD FAILED

Total time: 27.311 secs
Build step 'Execute shell' marked build as failure
Recording test results
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2
ERROR: Publisher 'Publish JUnit test result report' failed: No test report 
files were found. Configuration error?
Setting 
JDK1_8_0_45_HOME=/home/jenkins/jenkins-slave/tools/hudson.model.JDK/jdk1.8.0_45
Setting 
GRADLE_2_4_RC_2_HOME=/home/jenkins/jenkins-slave/tools/hudson.plugins.gradle.GradleInstallation/Gradle_2.4-rc-2


[jira] [Updated] (KAFKA-3141) kafka-acls.sh throws ArrayIndexOutOfBoundsException for an invalid authorizer-property

2016-01-26 Thread Ashish K Singh (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3141?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ashish K Singh updated KAFKA-3141:
--
Status: Patch Available  (was: Open)

> kafka-acls.sh throws ArrayIndexOutOfBoundsException for an invalid 
> authorizer-property
> --
>
> Key: KAFKA-3141
> URL: https://issues.apache.org/jira/browse/KAFKA-3141
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ashish K Singh
>Assignee: Ashish K Singh
>
> kafka-acls.sh throws ArrayIndexOutOfBoundsException for an invalid 
> authorizer-property. ST below.
> {code}
> Error while executing topic Acl command 1
> java.lang.ArrayIndexOutOfBoundsException: 1
>   at 
> kafka.admin.AclCommand$$anonfun$withAuthorizer$2.apply(AclCommand.scala:65)
>   at 
> kafka.admin.AclCommand$$anonfun$withAuthorizer$2.apply(AclCommand.scala:65)
>   at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>   at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>   at kafka.admin.AclCommand$.withAuthorizer(AclCommand.scala:65)
>   at kafka.admin.AclCommand$.addAcl(AclCommand.scala:78)
>   at kafka.admin.AclCommand$.main(AclCommand.scala:48)
>   at kafka.admin.AclCommand.main(AclCommand.scala)
> {code}
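An ArrayIndexOutOfBoundsException at index 1 is the classic symptom of splitting a key=value pair and blindly reading element [1]. A defensive parse (a generic sketch, not the actual AclCommand fix) rejects malformed properties instead:

```java
import java.util.HashMap;
import java.util.Map;

// Generic sketch of defensive key=value parsing for CLI options such as
// --authorizer-properties; not the actual kafka.admin.AclCommand code.
public class PropParser {
    static Map<String, String> parse(String... props) {
        Map<String, String> result = new HashMap<>();
        for (String prop : props) {
            // limit 2 so values may themselves contain '='
            String[] parts = prop.split("=", 2);
            if (parts.length != 2 || parts[0].isEmpty()) {
                throw new IllegalArgumentException(
                    "Expected key=value but got: " + prop);
            }
            result.put(parts[0], parts[1]);
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(parse("zookeeper.connect=localhost:2181")
            .get("zookeeper.connect")); // localhost:2181
        try {
            parse("1"); // malformed input, like the bug report's
        } catch (IllegalArgumentException e) {
            System.out.println(e.getMessage());
        }
    }
}
```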





[jira] [Commented] (KAFKA-3092) Rename SinkTask.onPartitionsAssigned/onPartitionsRevoked and Clarify Contract

2016-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3092?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15118330#comment-15118330
 ] 

ASF GitHub Bot commented on KAFKA-3092:
---

GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/815

KAFKA-3092: Replace SinkTask onPartitionsAssigned/onPartitionsRevoked with 
open/close



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3092

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/815.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #815


commit 6d262240a8a9d01e99388e8b2f3cf2d45ff7d57d
Author: Jason Gustafson 
Date:   2016-01-22T00:45:44Z

Combine WorkerSinkTask and WorkerSinkTaskThread and refactor as a Runnable

commit 22c04c95bcf2e355376892afc7d2990f5e3cb02f
Author: Jason Gustafson 
Date:   2016-01-26T21:13:40Z

Ensure onPartitionsRevoked obeys close semantics

commit ec9611cfdb05f12a62e304433c819e46feeed321
Author: Jason Gustafson 
Date:   2016-01-26T22:45:27Z

Assignments are only initialized in onPartitionsAssigned

commit dc002cc43eacade42e07e78d0b297d31bf548cad
Author: Jason Gustafson 
Date:   2016-01-27T00:09:47Z

Add open/close methods to SinkTask and deprecate 
onPartitionsAssigned/onPartitionsRevoked




> Rename SinkTask.onPartitionsAssigned/onPartitionsRevoked and Clarify Contract
> -
>
> Key: KAFKA-3092
> URL: https://issues.apache.org/jira/browse/KAFKA-3092
> Project: Kafka
>  Issue Type: Improvement
>  Components: copycat
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>
> The purpose of the onPartitionsRevoked() and onPartitionsAssigned() methods 
> exposed in Kafka Connect's SinkTask interface seems a little unclear and too 
> closely tied to consumer semantics. From the javadoc, these APIs are used to 
> open/close per-partition resources, but that would suggest that we should 
> always get one call to onPartitionsAssigned() before writing any records for 
> the corresponding partitions and one call to onPartitionsRevoked() when we 
> have finished with them. However, the same methods on the consumer are used 
> to indicate phases of the rebalance operation: onPartitionsRevoked() is 
> called before the rebalance begins and onPartitionsAssigned() is called after 
> it completes. In particular, the consumer does not guarantee a final call to 
> onPartitionsRevoked(). 
> This mismatch makes the contract of these methods unclear. In fact, the 
> WorkerSinkTask currently does not guarantee the initial call to 
> onPartitionsAssigned(), nor the final call to onPartitionsRevoked(). Instead, 
> the task implementation must pull the initial assignment from the 
> SinkTaskContext. To make it more confusing, the call to commit offsets 
> following onPartitionsRevoked() causes a flush() on a partition which had 
> already been revoked. All of this makes it difficult to use this API as 
> suggested in the javadocs.
> To fix this, we should clarify the behavior of these methods and consider 
> renaming them to avoid confusion with the same methods in the consumer API. 
> If onPartitionsAssigned() is meant for opening resources, maybe we can rename 
> it to open(). Similarly, onPartitionsRevoked() can be renamed to close(). We 
> can then fix the code to ensure that a typical open/close contract is 
> enforced. This would also mean removing the need to pass the initial 
> assignment in the SinkTaskContext. This would give the following API:
> {code}
> void open(Collection<TopicPartition> partitions);
> void close(Collection<TopicPartition> partitions);
> {code}
> We could also consider going a little further. Instead of depending on 
> onPartitionsAssigned() to open resources, tasks could open partition 
> resources on demand as records are received. In general, connectors will need 
> some way to close partition-specific resources, but there might not be any 
> need to pass the full list of partitions to close since the only open 
> resources should be those that have received writes since the last rebalance. 
> In this case, we just have a single method:
> {code}
> void close();
> {code}
> The downside to this is that the difference between close() and stop() then 
> becomes a little unclear.
> Obviously these are not compatible changes and connectors would have to be 
> updated.
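The proposed contract can be sketched as an interface plus the worker-side pairing guarantee; this is illustrative (names follow the proposal above, TopicPartition is stubbed for brevity), not the final Kafka Connect API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collection;
import java.util.List;

// Illustrative sketch of the proposed open/close contract described above;
// not the final Kafka Connect API.
public class SinkLifecycle {
    static class TopicPartition {
        final String topic; final int partition;
        TopicPartition(String topic, int partition) {
            this.topic = topic; this.partition = partition;
        }
    }

    interface SinkTask {
        void open(Collection<TopicPartition> partitions);  // before first record
        void close(Collection<TopicPartition> partitions); // after last record
    }

    // Worker-side guarantee during a rebalance: the old assignment is closed
    // before the new assignment is opened, so every open() pairs with a close().
    static List<String> rebalance(SinkTask task,
                                  Collection<TopicPartition> oldAssignment,
                                  Collection<TopicPartition> newAssignment) {
        List<String> events = new ArrayList<>();
        task.close(oldAssignment);
        events.add("close");
        task.open(newAssignment);
        events.add("open");
        return events;
    }

    public static void main(String[] args) {
        SinkTask task = new SinkTask() {
            public void open(Collection<TopicPartition> ps) {}
            public void close(Collection<TopicPartition> ps) {}
        };
        List<TopicPartition> oldA = Arrays.asList(new TopicPartition("test", 0));
        List<TopicPartition> newA = Arrays.asList(new TopicPartition("test", 1));
        System.out.println(rebalance(task, oldA, newA)); // [close, open]
    }
}
```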





[GitHub] kafka pull request: KAFKA-3092: Replace SinkTask onPartitionsAssig...

2016-01-26 Thread hachikuji
GitHub user hachikuji opened a pull request:

https://github.com/apache/kafka/pull/815

KAFKA-3092: Replace SinkTask onPartitionsAssigned/onPartitionsRevoked with 
open/close



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/hachikuji/kafka KAFKA-3092

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/815.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #815


commit 6d262240a8a9d01e99388e8b2f3cf2d45ff7d57d
Author: Jason Gustafson 
Date:   2016-01-22T00:45:44Z

Combine WorkerSinkTask and WorkerSinkTaskThread and refactor as a Runnable

commit 22c04c95bcf2e355376892afc7d2990f5e3cb02f
Author: Jason Gustafson 
Date:   2016-01-26T21:13:40Z

Ensure onPartitionsRevoked obeys close semantics

commit ec9611cfdb05f12a62e304433c819e46feeed321
Author: Jason Gustafson 
Date:   2016-01-26T22:45:27Z

Assignments are only initialized in onPartitionsAssigned

commit dc002cc43eacade42e07e78d0b297d31bf548cad
Author: Jason Gustafson 
Date:   2016-01-27T00:09:47Z

Add open/close methods to SinkTask and deprecate 
onPartitionsAssigned/onPartitionsRevoked






[GitHub] kafka pull request: MINOR: remove FilteredIterator

2016-01-26 Thread ymatsuda
GitHub user ymatsuda opened a pull request:

https://github.com/apache/kafka/pull/816

MINOR: remove FilteredIterator

@guozhangwang 
removing an unused class, FilteredIterator, and its test.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/ymatsuda/kafka remove_obsolete_class

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/kafka/pull/816.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #816


commit 42f3e3f2ed549b51077480a1dc5bfb046ec2e429
Author: Yasuhiro Matsuda 
Date:   2016-01-27T00:31:30Z

MINOR: remove FilteredIterator






Jenkins build is back to normal : kafka-trunk-jdk7 #991

2016-01-26 Thread Apache Jenkins Server
See 



[jira] [Commented] (KAFKA-3147) Memory records is not writable in MirrorMaker

2016-01-26 Thread Jun Rao (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15118362#comment-15118362
 ] 

Jun Rao commented on KAFKA-3147:


I think the issue could be related to "abort.on.send.failure". There is a code 
path where the above error can be hit.

In MirrorMaker, if abortOnSendFailure is enabled, we will call 
producer.close(0) on send failure. This will call sender.forceClose(), which 
first sets this.forceClose = true, followed by accumulator.close(). Suppose 
that the sender wakes up after this.forceClose = true but before 
accumulator.close(). The sender will call 
this.accumulator.abortIncompleteBatches(), which eventually closes all batch 
records in the queue w/o actually dequeuing them. At this moment, if the client 
sends another record to the producer, it can hit the exception described in the 
jira.

[~mgharat], this may be introduced in the request timeout patch. Could you take 
a look and see if the above scenario is possible? Thanks.
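The window Jun describes can be modeled deterministically: a batch that is closed in place but never dequeued will reject the next append. This is a simplified, single-threaded model of the interleaving, not the actual o.a.k.clients.producer internals:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Simplified model of the interleaving described above: forceClose is set,
// the sender aborts (closes) batches without dequeuing them, and a later
// append hits the closed batch. Not the actual producer code.
public class CloseRaceModel {
    static class Batch {
        boolean writable = true;
        void close() { writable = false; }
        void append(byte[] record) {
            if (!writable)
                throw new IllegalStateException("Memory records is not writable");
            // ... write record ...
        }
    }

    public static void main(String[] args) {
        Deque<Batch> queue = new ArrayDeque<>();
        queue.add(new Batch());

        // Sender wakes up after this.forceClose = true but before
        // accumulator.close(): abortIncompleteBatches() closes every batch
        // in place, w/o dequeuing them.
        for (Batch b : queue) b.close();

        // A client send() that arrives in this window finds the closed batch
        // still at the tail of the queue and fails.
        try {
            queue.peekLast().append(new byte[] {1});
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage()); // Memory records is not writable
        }
    }
}
```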



> Memory records is not writable in MirrorMaker
> -
>
> Key: KAFKA-3147
> URL: https://issues.apache.org/jira/browse/KAFKA-3147
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.0
>Reporter: Meghana Narasimhan
>
> Hi,
> We are running a 3 node cluster (kafka version 0.9) and Node 0 also has a few 
> mirror makers running. 
> When we do a rolling restart of the cluster, the mirror maker shuts down with 
> the following errors.
> [2016-01-11 20:16:00,348] WARN Got error produce response with correlation id 
> 12491674 on topic-partition test-99, retrying (2147483646 attempts left). 
> Error: NOT_LEADER_FOR_PARTITION 
> (org.apache.kafka.clients.producer.internals.Sender)
> [2016-01-11 20:16:00,853] FATAL [mirrormaker-thread-0] Mirror maker thread 
> failure due to  (kafka.tools.MirrorMaker$MirrorMakerThread)
> java.lang.IllegalStateException: Memory records is not writable
> at 
> org.apache.kafka.common.record.MemoryRecords.append(MemoryRecords.java:93)
> at 
> org.apache.kafka.clients.producer.internals.RecordBatch.tryAppend(RecordBatch.java:69)
> at 
> org.apache.kafka.clients.producer.internals.RecordAccumulator.append(RecordAccumulator.java:168)
> at 
> org.apache.kafka.clients.producer.KafkaProducer.send(KafkaProducer.java:435)
> at 
> kafka.tools.MirrorMaker$MirrorMakerProducer.send(MirrorMaker.scala:593)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$3.apply(MirrorMaker.scala:398)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread$$anonfun$run$3.apply(MirrorMaker.scala:398)
> at scala.collection.Iterator$class.foreach(Iterator.scala:742)
> at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
> at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
> at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
> at 
> kafka.tools.MirrorMaker$MirrorMakerThread.run(MirrorMaker.scala:398)
> [2016-01-11 20:16:01,072] WARN Got error produce response with correlation id 
> 12491679 on topic-partition test-75, retrying (2147483646 attempts left). 
> Error: NETWORK_EXCEPTION (org.apache.kafka.clients.producer.internals.Sender)
> [2016-01-11 20:16:01,073] WARN Got error produce response with correlation id 
> 12491679 on topic-partition test-93, retrying (2147483646 attempts left). 
> Error: NETWORK_EXCEPTION (org.apache.kafka.clients.producer.internals.Sender)
> [2016-01-11 20:16:01,073] WARN Got error produce response with correlation id 
> 12491679 on topic-partition test-24, retrying (2147483646 attempts left). 
> Error: NETWORK_EXCEPTION (org.apache.kafka.clients.producer.internals.Sender)
> [2016-01-11 20:16:20,479] FATAL [mirrormaker-thread-0] Mirror maker thread 
> exited abnormally, stopping the whole mirror maker. 
> (kafka.tools.MirrorMaker$MirrorMakerThread)
> Curious if the NOT_LEADER_FOR_PARTITION is because of a potential bug hinted 
> at in the thread , 
> http://mail-archives.apache.org/mod_mbox/kafka-users/201505.mbox/%3ccajs3ho_u8s1xou_kudnfjamypjtmrjlw10qvkngn2yqkdan...@mail.gmail.com%3E
>
> And I think the mirror maker shuts down because of the 
> "abort.on.send.failure" which is set to true in our case. 





[GitHub] kafka pull request: KAFKA-3132: URI scheme in "listeners" property...

2016-01-26 Thread asfgit
Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/811




[jira] [Commented] (KAFKA-3132) URI scheme in "listeners" property should not be case-sensitive

2016-01-26 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15118364#comment-15118364
 ] 

ASF GitHub Bot commented on KAFKA-3132:
---

Github user asfgit closed the pull request at:

https://github.com/apache/kafka/pull/811


> URI scheme in "listeners" property should not be case-sensitive
> ---
>
> Key: KAFKA-3132
> URL: https://issues.apache.org/jira/browse/KAFKA-3132
> Project: Kafka
>  Issue Type: Bug
>  Components: config
>Affects Versions: 0.9.0.0
>Reporter: Jake Robb
>Assignee: Grant Henke
>Priority: Minor
>  Labels: newbie
>
> I configured my Kafka brokers as follows:
> {{listeners=plaintext://kafka01:9092,ssl://kafka01:9093}}
> With this config, my Kafka brokers start, print out all of the config 
> properties, and exit quietly. No errors, nothing in the log. No indication of 
> a problem whatsoever, let alone the nature of said problem.
> Then, I changed my config as follows:
> {{listeners=PLAINTEXT://kafka01:9092,SSL://kafka01:9093}}
> Now they start and run just fine.
> Per [RFC-3986|https://tools.ietf.org/html/rfc3986#section-6.2.2.1]:
> {quote}
> When a URI uses components of the generic syntax, the component
> syntax equivalence rules always apply; namely, that the scheme and
> host are case-insensitive and therefore should be normalized to
> lowercase.  For example, the URI  is
> equivalent to .
> {quote}
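Per RFC 3986, scheme comparison should be case-insensitive, so the natural fix is to normalize the scheme before matching it against the security protocol names. A sketch (not the actual Kafka endpoint-parsing code):

```java
import java.util.Locale;

// Sketch of case-insensitive listener-scheme handling per RFC 3986; not the
// actual Kafka listener/endpoint parsing code.
public class ListenerParser {
    static String normalizeScheme(String listener) {
        int idx = listener.indexOf("://");
        if (idx < 0)
            throw new IllegalArgumentException("No scheme in listener: " + listener);
        // The scheme is case-insensitive, so normalize it to the canonical
        // (upper-case) security protocol name before lookup.
        return listener.substring(0, idx).toUpperCase(Locale.ROOT);
    }

    public static void main(String[] args) {
        System.out.println(normalizeScheme("plaintext://kafka01:9092")); // PLAINTEXT
        System.out.println(normalizeScheme("SSL://kafka01:9093"));       // SSL
    }
}
```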





[jira] [Work started] (KAFKA-3152) kafka-acl doesn't allow space in principal name

2016-01-26 Thread Ismael Juma (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-3152?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on KAFKA-3152 started by Ismael Juma.
--
> kafka-acl doesn't allow space in principal name
> ---
>
> Key: KAFKA-3152
> URL: https://issues.apache.org/jira/browse/KAFKA-3152
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Ismael Juma
>
> When running the following,
> kafka-acls --authorizer kafka.security.auth.SimpleAclAuthorizer 
> --authorizer-properties zookeeper.connect=localhost:2181 --topic test --add 
> --producer --allow-host=* --allow-principal "User:CN=xxx,O=My Organization"
> the ACL is set as follows. The part after the space is ignored.
> Following is list of acls for resource: Topic:test 
>   User:CN=xxx,O=My has Allow permission for operations: Describe from 
> hosts: *
>   User:CN=xxx,O=My has Allow permission for operations: Write from hosts: 
> * 





[jira] [Commented] (KAFKA-3152) kafka-acl doesn't allow space in principal name

2016-01-26 Thread Ismael Juma (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3152?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15118401#comment-15118401
 ] 

Ismael Juma commented on KAFKA-3152:


This is a trivial bug in the kafka-acls shell script. Many other shell scripts 
have the same problem. Will post a PR tomorrow.

> kafka-acl doesn't allow space in principal name
> ---
>
> Key: KAFKA-3152
> URL: https://issues.apache.org/jira/browse/KAFKA-3152
> Project: Kafka
>  Issue Type: Bug
>  Components: core
>Affects Versions: 0.9.0.0
>Reporter: Jun Rao
>Assignee: Ismael Juma
>
> When running the following,
> kafka-acls --authorizer kafka.security.auth.SimpleAclAuthorizer 
> --authorizer-properties zookeeper.connect=localhost:2181 --topic test --add 
> --producer --allow-host=* --allow-principal "User:CN=xxx,O=My Organization"
> the ACL is set as follows. The part after the space is ignored.
> Following is list of acls for resource: Topic:test 
>   User:CN=xxx,O=My has Allow permission for operations: Describe from 
> hosts: *
>   User:CN=xxx,O=My has Allow permission for operations: Write from hosts: 
> * 





Re: Request to be added to contributor list

2016-01-26 Thread Gwen Shapira
Will be happy to add you. What's your username in Apache's Jira?

Gwen

On Mon, Jan 25, 2016 at 8:43 PM, Chen Zhu 
wrote:

> Hi,
>
> I am a CS major student. My jira username is chenzhu, can someone please
> add me to the contributor list?
>
> Thank you,
> Chen Zhu
>


Re: [DISCUSS] KIP-43: Kafka SASL enhancements

2016-01-26 Thread tao xiao
Hi Rajini,

One requirement I have is to refresh the login token every X hours. Like
what the Kerberos login does, I need to have a background thread that
refreshes the token periodically.

I understand most of the login logic would be simple, but it is good that we
can expose the login logic to users and let them decide what they want to
do. And we can have a fallback login component that is used if users don't
specify it.
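The periodic refresh described here is essentially a scheduled background task. A minimal sketch (refreshToken() is a hypothetical callback, not Kafka's Kerberos login thread):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal sketch of a background login-refresh thread of the kind described
// above; refreshToken() is a hypothetical callback, not a Kafka API.
public class TokenRefresher implements AutoCloseable {
    private final ScheduledExecutorService scheduler =
        Executors.newSingleThreadScheduledExecutor(r -> {
            Thread t = new Thread(r, "login-refresh");
            t.setDaemon(true); // do not keep the JVM alive
            return t;
        });
    final AtomicInteger refreshCount = new AtomicInteger();

    void start(long periodMillis) {
        scheduler.scheduleAtFixedRate(this::refreshToken,
            periodMillis, periodMillis, TimeUnit.MILLISECONDS);
    }

    void refreshToken() {
        // Re-authenticate and swap in the new credentials here.
        refreshCount.incrementAndGet();
    }

    @Override
    public void close() { scheduler.shutdownNow(); }

    public static void main(String[] args) throws InterruptedException {
        try (TokenRefresher refresher = new TokenRefresher()) {
            refresher.start(10); // every 10 ms for the demo; hours in practice
            Thread.sleep(120);
            System.out.println(refresher.refreshCount.get() >= 2);
        }
    }
}
```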

On Tue, 26 Jan 2016 at 20:07 Rajini Sivaram 
wrote:

> Hi Tao,
>
> Thank you for the review. The changes I had in mind are in the PR
> https://github.com/apache/kafka/pull/812. Login for non-Kerberos protocols
> contains very little logic. I was expecting that combined with a custom
> login module specified in JAAS configuration, this would give sufficient
> flexibility. Is there a specific usecase you have in mind where you need to
> customize the Login code?
>
> Regards,
>
> Rajini
>
> On Tue, Jan 26, 2016 at 11:15 AM, tao xiao  wrote:
>
> > Hi Rajini,
> >
> > I think it makes sense to change LoginManager or Login to an interface
> > which users can extend to provide their own login logic; otherwise it is
> > hard for users to implement a custom SASL mechanism, since they have no
> > control over login.
> >
> > On Tue, 26 Jan 2016 at 18:45 Ismael Juma  wrote:
> >
> > > Hi Rajini,
> > >
> > > Thanks for the KIP. As stated in the KIP, it does not address "Support
> > for
> > > multiple SASL mechanisms within a broker". Maybe we should also mention
> > > this in the "Rejected Alternatives" section with the reasoning. I think
> > > it's particularly relevant to understand if it's not being proposed
> > because
> > > we don't think it's useful or due to the additional implementation
> > > complexity (it's probably a combination). If we think this could be
> > useful
> > > in the future, it would also be worth thinking about how it is affected
> > if
> > > we do KIP-43 first (ie will it be easier, harder, etc.)
> > >
> > > Thanks,
> > > Ismael
> > >
> > > On Mon, Jan 25, 2016 at 9:55 PM, Rajini Sivaram <
> > > rajinisiva...@googlemail.com> wrote:
> > >
> > > > I have just created KIP-43 to extend the SASL implementation in Kafka
> > to
> > > > support new SASL mechanisms.
> > > >
> > > >
> > > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-43%3A+Kafka+SASL+enhancements
> > > >
> > > >
> > > > Comments and suggestions are appreciated.
> > > >
> > > >
> > > > Thank you...
> > > >
> > > > Regards,
> > > >
> > > > Rajini
> > > >
> > >
> >
>


[jira] [Commented] (KAFKA-3151) kafka-consumer-groups.sh fail with sasl enabled

2016-01-26 Thread linbao111 (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3151?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15118474#comment-15118474
 ] 

linbao111 commented on KAFKA-3151:
--

SASL is enabled on my test cluster; the following tools work fine on the cluster:
su kafka
export 
KAFKA_JVM_PERFORMANCE_OPTS="-Djava.security.auth.login.config=/opt/alalei/kafka_2.10-0.9.0.1-SNAPSHOT/config/kafka_server_jaas.conf
  -Djava.security.krb5.conf=/etc/krb5.conf"
# producer
./bin/kafka-console-producer.sh --broker-list slave16.otocyon.com:9092 --topic 
alalei_2 --producer.config config/producer.properties
# consumer
./bin/kafka-console-consumer.sh --new-consumer --bootstrap-server slave16:9092 
--zookeeper slave1.otocyon.com:2181 --topic alalei_2 --consumer.conf  
config/consumer.properties  --from-beginning



> kafka-consumer-groups.sh fail with sasl enabled 
> 
>
> Key: KAFKA-3151
> URL: https://issues.apache.org/jira/browse/KAFKA-3151
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
> Environment: redhat as6.5
>Reporter: linbao111
>
> ./bin/kafka-consumer-groups.sh --new-consumer  --bootstrap-server 
> slave1.otocyon.com:9092 --list
> Error while executing consumer group command Request METADATA failed on 
> brokers List(Node(-1, slave1.otocyon.com, 9092))
> java.lang.RuntimeException: Request METADATA failed on brokers List(Node(-1, 
> slave1.otocyon.com, 9092))
> at kafka.admin.AdminClient.sendAnyNode(AdminClient.scala:73)
> at kafka.admin.AdminClient.findAllBrokers(AdminClient.scala:93)
> at kafka.admin.AdminClient.listAllGroups(AdminClient.scala:101)
> at 
> kafka.admin.AdminClient.listAllGroupsFlattened(AdminClient.scala:122)
> at 
> kafka.admin.AdminClient.listAllConsumerGroupsFlattened(AdminClient.scala:126)
> at 
> kafka.admin.ConsumerGroupCommand$KafkaConsumerGroupService.list(ConsumerGroupCommand.scala:310)
> at 
> kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:61)
> at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)
> same error for:
> bin/kafka-run-class.sh kafka.admin.ConsumerGroupCommand  --bootstrap-server 
> slave16:9092,app:9092 --describe --group test-consumer-group  --new-consumer
> Error while executing consumer group command Request GROUP_COORDINATOR failed 
> on brokers List(Node(-1, slave16, 9092), Node(-2, app, 9092))
> java.lang.RuntimeException: Request GROUP_COORDINATOR failed on brokers 
> List(Node(-1, slave16, 9092), Node(-2, app, 9092))
> at kafka.admin.AdminClient.sendAnyNode(AdminClient.scala:73)
> at kafka.admin.AdminClient.findCoordinator(AdminClient.scala:78)
> at kafka.admin.AdminClient.describeGroup(AdminClient.scala:130)
> at 
> kafka.admin.AdminClient.describeConsumerGroup(AdminClient.scala:152)
> at 
> kafka.admin.ConsumerGroupCommand$KafkaConsumerGroupService.describeGroup(ConsumerGroupCommand.scala:314)
> at 
> kafka.admin.ConsumerGroupCommand$ConsumerGroupService$class.describe(ConsumerGroupCommand.scala:84)
> at 
> kafka.admin.ConsumerGroupCommand$KafkaConsumerGroupService.describe(ConsumerGroupCommand.scala:302)
> at 
> kafka.admin.ConsumerGroupCommand$.main(ConsumerGroupCommand.scala:63)
> at kafka.admin.ConsumerGroupCommand.main(ConsumerGroupCommand.scala)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-3150) kafka.tools.UpdateOffsetsInZK not work (sasl enabled)

2016-01-26 Thread linbao111 (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-3150?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15118477#comment-15118477
 ] 

linbao111 commented on KAFKA-3150:
--

So how can I manage topic offsets on a SASL-enabled cluster?
I also find that kafka-console-consumer.sh does not work with --from-beginning, or even 
with the property auto.offset.reset=earliest, which means that a new 
kafka-console-consumer.sh process cannot read earlier messages from the topic.
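
For anyone reproducing this, a minimal consumer.properties sketch for a Kerberos-enabled 0.9 cluster with the new consumer (values are assumptions for illustration). Note that `auto.offset.reset=earliest` only applies when the group has no committed offset; an existing group will resume from its last commit regardless of this setting or `--from-beginning`:

```
security.protocol=SASL_PLAINTEXT
sasl.kerberos.service.name=kafka
group.id=test-consumer-group
auto.offset.reset=earliest
```
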

> kafka.tools.UpdateOffsetsInZK not work (sasl enabled)
> -----------------------------------------------------
>
> Key: KAFKA-3150
> URL: https://issues.apache.org/jira/browse/KAFKA-3150
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.9.0.1
> Environment: redhat as6.5
>Reporter: linbao111
>
> ./bin/kafka-run-class.sh kafka.tools.UpdateOffsetsInZK earliest 
> config/consumer.properties   alalei_2  
> [2016-01-26 17:20:49,920] WARN Property sasl.kerberos.service.name is not 
> valid (kafka.utils.VerifiableProperties)
> [2016-01-26 17:20:49,920] WARN Property security.protocol is not valid 
> (kafka.utils.VerifiableProperties)
> Exception in thread "main" kafka.common.BrokerEndPointNotAvailableException: 
> End point PLAINTEXT not found for broker 1
> at kafka.cluster.Broker.getBrokerEndPoint(Broker.scala:136)
> at 
> kafka.tools.UpdateOffsetsInZK$$anonfun$getAndSetOffsets$2.apply$mcVI$sp(UpdateOffsetsInZK.scala:70)
> at 
> kafka.tools.UpdateOffsetsInZK$$anonfun$getAndSetOffsets$2.apply(UpdateOffsetsInZK.scala:59)
> at 
> kafka.tools.UpdateOffsetsInZK$$anonfun$getAndSetOffsets$2.apply(UpdateOffsetsInZK.scala:59)
> at 
> scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
> at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
> at 
> kafka.tools.UpdateOffsetsInZK$.getAndSetOffsets(UpdateOffsetsInZK.scala:59)
> at kafka.tools.UpdateOffsetsInZK$.main(UpdateOffsetsInZK.scala:43)
> at kafka.tools.UpdateOffsetsInZK.main(UpdateOffsetsInZK.scala)
> same error for:
> ./bin/kafka-consumer-offset-checker.sh  --broker-info --group 
> test-consumer-group --topic alalei_2 --zookeeper slave1:2181
> [2016-01-26 17:23:45,218] WARN WARNING: ConsumerOffsetChecker is deprecated 
> and will be dropped in releases following 0.9.0. Use ConsumerGroupCommand 
> instead. (kafka.tools.ConsumerOffsetChecker$)
> Exiting due to: End point PLAINTEXT not found for broker 0.
> ./bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zookeeper 
> slave1:2181 --group  test-consumer-group
> [2016-01-26 17:26:15,075] WARN WARNING: ConsumerOffsetChecker is deprecated 
> and will be dropped in releases following 0.9.0. Use ConsumerGroupCommand 
> instead. (kafka.tools.ConsumerOffsetChecker$)
> Exiting due to: End point PLAINTEXT not found for broker 0



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


Re: [DISCUSS] KIP-42: Add Producer and Consumer Interceptors

2016-01-26 Thread Mayuresh Gharat
Hi Anna,

Thanks a lot for summarizing the discussion on this KIP.

It LGTM.
This is really nice :
We decided not to add any callbacks to producer and consumer
interceptors that will depend on internal implementation as part of this
KIP.
*However, it is possible to add them later as part of another KIP if there
are good use-cases.*

Do you agree with the use case I explained earlier for knowing the number
of records left in the RecordAccumulator for a particular topic? It might
be orthogonal to this KIP, but it would be helpful. What do you think?

Thanks,

Mayuresh


On Tue, Jan 26, 2016 at 2:46 PM, Todd Palino  wrote:

> This looks good. As noted, having one mutable interceptor on each side
> allows for the use cases we can envision right now, and I think that’s
> going to provide a great deal of opportunity for implementing things like
> audit, especially within a multi-tenant environment. Looking forward to
> getting this available in the clients.
>
> Thanks!
>
> -Todd
>
>
> On Tue, Jan 26, 2016 at 2:36 PM, Anna Povzner  wrote:
>
> > Hi All,
> >
> > Here is meeting notes from today’s KIP meeting:
> >
> > 1. We agreed to keep the scope of this KIP to be producer and consumer
> > interceptors only. Broker-side interceptor will be added later as a
> > separate KIP. The reasons were already mentioned in this thread, but the
> > summary is:
> >  * Broker interceptor is riskier and requires careful consideration about
> > overheads, whether to intercept leaders vs. leaders/replicas, what to do
> on
> > leader failover and so on.
> >  * Broker interceptors increase monitoring resolution, but not including
> it
> > in this KIP does not reduce usefulness of producer and consumer
> > interceptors that enable end-to-end monitoring
> >
> > 2. We agreed to scope the ProducerInterceptor and ConsumerInterceptor
> > callbacks to a minimal set of mutable APIs that do not depend on the
> > producer's and consumer's internal implementation.
> >
> > ProducerInterceptor:
> > *ProducerRecord<K, V> onSend(ProducerRecord<K, V> record);*
> > *void onAcknowledgement(RecordMetadata metadata, Exception exception);*
> >
> > ConsumerInterceptor:
> > *ConsumerRecords<K, V> onConsume(ConsumerRecords<K, V> records);*
> > *void onCommit(Map<TopicPartition, OffsetAndMetadata> offsets);*
> >
> > We will allow interceptors to modify the ProducerRecord on the producer
> > side, and to modify ConsumerRecords on the consumer side. This supports
> > end-to-end monitoring and auditing, as well as the ability to attach
> > metadata to a message, which covers Todd's auditing and routing use-cases.
> >
> > We did not find any use-case for modifying records in onConsume()
> callback,
> > but decided to enable modification of consumer records for symmetry with
> > onSend().
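
As an illustration of the API shape agreed above, here is a self-contained sketch of an audit-style producer interceptor. The `Record` class and `ProducerInterceptor` interface below are simplified stand-ins mirroring the signatures in these notes, not the real Kafka classes; a real implementation would implement `org.apache.kafka.clients.producer.ProducerInterceptor` from the client jar, and the `"audited|"` tag is a hypothetical audit marker:

```java
public class InterceptorSketch {
    // Simplified stand-in for ProducerRecord, just enough to show onSend()
    static final class Record {
        final String topic, key, value;
        Record(String topic, String key, String value) {
            this.topic = topic; this.key = key; this.value = value;
        }
    }

    // Mirrors the onSend/onAcknowledgement shape agreed in the KIP meeting
    interface ProducerInterceptor {
        Record onSend(Record record);
        void onAcknowledgement(Object metadata, Exception exception);
    }

    // Audit-style interceptor: tags each outgoing record and counts failures
    static final class AuditInterceptor implements ProducerInterceptor {
        int failures = 0;

        public Record onSend(Record record) {
            // return a modified copy; the caller's record is left untouched
            return new Record(record.topic, record.key, "audited|" + record.value);
        }

        public void onAcknowledgement(Object metadata, Exception exception) {
            if (exception != null) failures++;  // a real auditor might log or meter this
        }
    }

    public static void main(String[] args) {
        AuditInterceptor interceptor = new AuditInterceptor();
        Record out = interceptor.onSend(new Record("t", "k", "v"));
        if (!"audited|v".equals(out.value)) throw new AssertionError(out.value);
        interceptor.onAcknowledgement(null, new RuntimeException("send failed"));
        if (interceptor.failures != 1) throw new AssertionError();
        System.out.println("tagged value: " + out.value + ", failures: " + interceptor.failures);
    }
}
```

Note that `onSend()` returns a new record rather than mutating in place, which is what lets a chain of interceptors each see the previous one's output.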
> >
> > 3. We agreed to ensure compatibility when/if we add new methods to
> > ProducerInterceptor and ConsumerInterceptor by using default methods with
> > an empty implementation. Ok to assume Java 8. (This is Ismael’s method
> #2).
> >
> > 4. We decided not to add any callbacks to producer and consumer
> > interceptors that will depend on internal implementation as part of this
> > KIP. However, it is possible to add them later as part of another KIP if
> > there are good use-cases.
> >
> > *Reasoning.* We did not have concrete use-cases that justified more
> methods
> > at this point. Some of the use-cases were for more fine-grain latency
> > collection, which could be done with Kafka Metrics. Another use-case was
> > encryption. However, there are several design options for encryption. One
> > is to do per-record encryption which would require adding
> > ProducerInterceptor.onEnqueued() and ConsumerInterceptor.onReceive(). One
> > could argue that in that case encryption could be done by adding a custom
> > serializer/deserializer. Another option is to do encryption after message
> > gets compressed, but there are issues that arise regarding broker doing
> > re-compression. We decided that it is better to have that discussion in a
> > separate KIP and decide that this is something we want to do with
> > interceptors or by other means.
> >
> >
> > Todd, Mayuresh and others who missed the KIP meeting, please let me know
> > your thoughts on the scope we agreed on during the meeting.
> >
> > I will update the KIP proposal with the current decision by end of today.
> >
> > Thanks,
> > Anna
> >
> >
> > On Tue, Jan 26, 2016 at 11:41 AM, Mayuresh Gharat <
> > gharatmayures...@gmail.com> wrote:
> >
> > > Hi,
> > >
> > > I won't be able to make it to KIP hangout due to conflict.
> > >
> > > Anna, here is the use case where knowing if there are messages in the
> > > RecordAccumulator left to be sent to the kafka cluster for a topic is
> > > useful.
> > >
> > > 1) Consider a pipeline :
> > > A ---> Mirror-maker -> B
> > >
> > > 2) We have a topic T in cluster A mirrored to cluster B.
> > >
> > > 3) Now if we delete topic T in A and immediately proceed to delete 

  1   2   >