Re: Unable to build project in IntelliJ

2019-05-22 Thread UMESH CHAUDHARY
Apache Kafka uses Gradle as its build tool, so:
First, create the Gradle wrapper using the instructions here:
https://github.com/apache/kafka#first-bootstrap-and-download-the-wrapper
Then execute "gradlew idea" as mentioned here:
https://github.com/apache/kafka#building-ide-project
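Concretely, the steps look roughly like this (a sketch assuming git, a JDK, and a local Gradle installation are available; the linked README is the authoritative reference):

```
# Clone and enter the source tree
git clone https://github.com/apache/kafka.git
cd kafka

# Bootstrap the wrapper first: trunk does not ship the gradle-wrapper jar,
# so a local Gradle install is needed once to generate it
gradle

# Generate IntelliJ IDEA project files
./gradlew idea
```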

On Wed, May 22, 2019 at 12:47 PM omkar mestry  wrote:

> Hi,
>
> I am unable to build the project in IntelliJ with the following exception:
> Could not find method annotationProcessor() for arguments
> [org.openjdk.jmh:jmh-generator-annprocess:1.21] on object of type
>
> org.gradle.api.internal.artifacts.dsl.dependencies.DefaultDependencyHandler.
>
> The branch I am looking at is trunk.
>
> Please help to solve this issue.
>
> Thanks & Regards
> Omkar Mestry
>


Re: [VOTE] KIP-174 Deprecate and remove internal converter configs in WorkerConfig

2018-01-15 Thread UMESH CHAUDHARY
Thanks all for your votes. As we have 3+ binding votes, I'll proceed to
implement this KIP; this also closes the voting thread.

Regards,
Umesh

On Sat, 13 Jan 2018 at 01:01 Guozhang Wang <wangg...@gmail.com> wrote:

> +1, thanks
>
> On Fri, Jan 12, 2018 at 10:38 AM, Randall Hauch <rha...@gmail.com> wrote:
>
> > +1 (non-binding)
> >
> > On Mon, Jan 8, 2018 at 7:09 PM, Gwen Shapira <g...@confluent.io> wrote:
> >
> > > +1 binding
> > >
> > > On Mon, Jan 8, 2018 at 4:59 PM Ewen Cheslack-Postava <
> e...@confluent.io>
> > > wrote:
> > >
> > > > +1 binding. Thanks for the KIP!
> > > >
> > > > -Ewen
> > > >
> > > > On Mon, Jan 8, 2018 at 8:34 AM, Ted Yu <yuzhih...@gmail.com> wrote:
> > > >
> > > > > +1
> > > > >
> > > > > On Mon, Jan 8, 2018 at 4:27 AM, UMESH CHAUDHARY <
> umesh9...@gmail.com
> > >
> > > > > wrote:
> > > > >
> > > > > > Hello All,
> > > > > > Since there are no outstanding comments on this, I'd like to
> > > > > > start a vote.
> > > > > >
> > > > > > Please find the KIP here
> > > > > > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > > 174+-+Deprecate+and+remove+internal+converter+configs+in+
> > > WorkerConfig>
> > > > > > and
> > > > > > the related JIRA here <
> > > > https://issues.apache.org/jira/browse/KAFKA-5540
> > > > > >.
> > > > > >
> > > > > > The KIP suggests to deprecate and remove the configs:
> > > > > > internal.key.converter, internal.value.converter
> > > > > >
> > > > > > Appreciate your comments.
> > > > > >
> > > > > > Regards,
> > > > > > Umesh
> > > > > >
> > > > >
> > > >
> > >
> >
>
>
>
> --
> -- Guozhang
>


Re: [ANNOUNCE] New committer: Matthias J. Sax

2018-01-15 Thread UMESH CHAUDHARY
Congratulations Matthias :)

On Mon, 15 Jan 2018 at 17:06 Michael Noll  wrote:

> Herzlichen Glückwunsch, Matthias. ;-)
>
>
>
> On Mon, 15 Jan 2018 at 11:38 Viktor Somogyi 
> wrote:
>
> > Congrats Matthias!
> >
> > On Mon, Jan 15, 2018 at 9:17 AM, Jorge Esteban Quilcate Otoya <
> > quilcate.jo...@gmail.com> wrote:
> >
> > > Congratulations Matthias!!
> > >
> > > On Mon., Jan 15, 2018 at 9:08, Boyang Chen ()
> > > wrote:
> > >
> > > > Great news Matthias!
> > > >
> > > >
> > > > 
> > > > From: Kaufman Ng 
> > > > Sent: Monday, January 15, 2018 11:32 AM
> > > > To: us...@kafka.apache.org
> > > > Cc: dev@kafka.apache.org
> > > > Subject: Re: [ANNOUNCE] New committer: Matthias J. Sax
> > > >
> > > > Congrats Matthias!
> > > >
> > > > On Sun, Jan 14, 2018 at 4:52 AM, Rajini Sivaram <
> > rajinisiva...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > > > Congratulations Matthias!
> > > > >
> > > > > On Sat, Jan 13, 2018 at 11:34 AM, Mickael Maison <
> > > > mickael.mai...@gmail.com
> > > > > >
> > > > > wrote:
> > > > >
> > > > > > Congratulations Matthias !
> > > > > >
> > > > > > On Sat, Jan 13, 2018 at 7:01 AM, Paolo Patierno <
> > ppatie...@live.com>
> > > > > > wrote:
> > > > > > > Congratulations Matthias ! Very well deserved !
> > > > > > > 
> > > > > > > From: Guozhang Wang 
> > > > > > > Sent: Friday, January 12, 2018 11:59:21 PM
> > > > > > > To: dev@kafka.apache.org; us...@kafka.apache.org
> > > > > > > Subject: [ANNOUNCE] New committer: Matthias J. Sax
> > > > > > >
> > > > > > > Hello everyone,
> > > > > > >
> > > > > > > The PMC of Apache Kafka is pleased to announce Matthias J. Sax as
> > > > > > > our newest Kafka committer.
> > > > > > >
> > > > > > > Matthias has made tremendous contributions to the Kafka Streams API
> > > > > > > since early 2016. His footprint has been all over Streams: in the
> > > > > > > past two years he has been the main driver on improving the join
> > > > > > > semantics inside the Streams DSL, summarizing all their shortcomings
> > > > > > > and bridging the gaps; he has also largely worked on the
> > > > > > > exactly-once semantics of Streams by leveraging the transactional
> > > > > > > messaging feature in 0.11.0. In addition, Matthias has been very
> > > > > > > active in the community beyond the mailing list: he has close to
> > > > > > > 1,000 upvotes and 100 helpful flags on Stack Overflow for answering
> > > > > > > almost all questions about Kafka Streams.
> > > > > > >
> > > > > > > Thank you for your contribution and welcome to Apache Kafka,
> > > > > > > Matthias!
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > Guozhang, on behalf of the Apache Kafka PMC
> > > > > >
> > > > >
> > > >
> > > >
> > > >
> > > > --
> > > > Kaufman Ng
> > > > +1 646 961 8063
> > > > Solutions Architect | Confluent | www.confluent.io
> > > >
> > > >
> > > >
> > > >
> > >
> >
> --
> *Michael G. Noll*
> Product Manager | Confluent
> +1 650 453 5860 | @miguno
> Follow us: Twitter | Blog
>


[VOTE] KIP-174 Deprecate and remove internal converter configs in WorkerConfig

2018-01-08 Thread UMESH CHAUDHARY
Hello All,
Since there are no outstanding comments on this, I'd like to start a
vote.

Please find the KIP here
<https://cwiki.apache.org/confluence/display/KAFKA/KIP-174+-+Deprecate+and+remove+internal+converter+configs+in+WorkerConfig>
and
the related JIRA here <https://issues.apache.org/jira/browse/KAFKA-5540>.

The KIP suggests deprecating and removing the configs
internal.key.converter and internal.value.converter.

Appreciate your comments.

Regards,
Umesh
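For reference, the deprecated settings appeared in the shipped example worker configuration files roughly as follows (a sketch; the exact contents of config/connect-standalone.properties and config/connect-distributed.properties may differ slightly):

```
# Internal converters used for Connect's offsets/config/status data; these
# are the settings KIP-174 deprecates. Values shown are the long-standing
# JsonConverter defaults.
internal.key.converter=org.apache.kafka.connect.json.JsonConverter
internal.value.converter=org.apache.kafka.connect.json.JsonConverter
internal.key.converter.schemas.enable=false
internal.value.converter.schemas.enable=false
```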


Re: [DISCUSS] KIP-174 - Deprecate and remove internal converter configs in WorkerConfig

2018-01-05 Thread UMESH CHAUDHARY
Thanks Ewen, and apologies as I missed this too. I'll start a vote for this
soon and proceed with the next steps.

On Fri, 5 Jan 2018 at 09:03 Ewen Cheslack-Postava <e...@confluent.io> wrote:

> Sorry I lost track of this thread. If things are in good shape we should
> probably vote on this and get the deprecation commit through. It seems like
> a good idea as this has been confusing to users from day one.
>
> -Ewen
>
> On Wed, Aug 9, 2017 at 5:18 AM, UMESH CHAUDHARY <umesh9...@gmail.com>
> wrote:
>
>> Thanks Ewen,
>> I just edited the KIP to reflect the changes.
>>
>> Regards,
>> Umesh
>>
>> On Wed, 9 Aug 2017 at 11:00 Ewen Cheslack-Postava <e...@confluent.io>
>> wrote:
>>
>>> Great, looking good. I'd probably be a bit more concrete about the
>>> Proposed Changes (e.g., "will log a warning if the config is specified"
>>> and "since the JsonConverter is the default, the configs will be removed
>>> immediately from the example worker configuration files").
>>>
>>> Other than that this LGTM and I'll be happy to get rid of those settings!
>>>
>>> -Ewen
>>>
>>> On Tue, Aug 8, 2017 at 2:54 AM, UMESH CHAUDHARY <umesh9...@gmail.com>
>>> wrote:
>>>
>>>> Hi Ewen,
>>>> Sorry, I am a bit late in responding to this.
>>>>
>>>> Thanks for your inputs and I've updated the KIP by adding more details
>>>> to it.
>>>>
>>>> Regards,
>>>> Umesh
>>>>
>>>> On Mon, 31 Jul 2017 at 21:51 Ewen Cheslack-Postava <e...@confluent.io>
>>>> wrote:
>>>>
>>>>> On Sun, Jul 30, 2017 at 10:21 PM, UMESH CHAUDHARY <umesh9...@gmail.com
>>>>> > wrote:
>>>>>
>>>>>> Hi Ewen,
>>>>>> Thanks for your comments.
>>>>>>
>>>>>> 1) Yes, there are some test and java classes which refer these
>>>>>> configs, so I will include them as well in "public interface" section of
>>>>>> KIP. What should be our approach to deal with the classes and tests which
>>>>>> use these configs: we need to change them to use JsonConverter when
>>>>>> we plan for removal of these configs right?
>>>>>>
>>>>>
>>>>> I actually meant the references in
>>>>> config/connect-standalone.properties and
>>>>> config/connect-distributed.properties
>>>>>
>>>>>
>>>>>> 2) I believe we can target the deprecation in 1.0.0 release as it is
>>>>>> planned in October 2017 and then removal in next major release. Let
>>>>>> me know your thoughts as we don't have any information for next major
>>>>>> release (next to 1.0.0) yet.
>>>>>>
>>>>>
>>>>> That sounds fine. Tough to say at this point what our approach to
>>>>> major version bumps will be since the approach to version numbering is
>>>>> changing a bit.
>>>>>
>>>>>
>>>>>> 3) Thats a good point and mentioned JIRA can help us to validate the
>>>>>> usage of any other converters. I will list this down in the KIP.
>>>>>>
>>>>>> Let me know if you have some additional thoughts on this.
>>>>>>
>>>>>> Regards,
>>>>>> Umesh
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Wed, 26 Jul 2017 at 09:27 Ewen Cheslack-Postava <e...@confluent.io>
>>>>>> wrote:
>>>>>>
>>>>>>> Umesh,
>>>>>>>
>>>>>>> Thanks for the KIP. Straightforward and I think it's a good change.
>>>>>>> Unfortunately it is hard to tell how many people it would affect
>>>>>>> since we
>>>>>>> can't tell how many people have adjusted that config, but I think
>>>>>>> this is
>>>>>>> the right thing to do long term.
>>>>>>>
>>>>>>> A couple of quick things that might be helpful to refine:
>>>>>>>
>>>>>>> * Note that there are also some references in the example configs
>>>>>>> that we
>>>>>>> should remove.
>>>>>>> * It's nice to be explicit about when the removal is planned. This

Re: [ANNOUNCE] New committer: Onur Karaman

2017-11-07 Thread UMESH CHAUDHARY
Congratulations Onur!

On Tue, 7 Nov 2017 at 21:44 Jun Rao  wrote:

> Affan,
>
> All known problems in the controller are described in the doc linked from
> https://issues.apache.org/jira/browse/KAFKA-5027.
>
> Thanks,
>
> Jun
>
> On Mon, Nov 6, 2017 at 11:00 PM, Affan Syed  wrote:
>
> > Congrats Onur,
> >
> > Can you also share the document where all known problems are listed; I am
> > assuming these bugs are still valid for the current stable release.
> >
> > Affan
> >
> > - Affan
> >
> > On Mon, Nov 6, 2017 at 10:24 PM, Jun Rao  wrote:
> >
> > > Hi, everyone,
> > >
> > > The PMC of Apache Kafka is pleased to announce a new Kafka committer
> Onur
> > > Karaman.
> > >
> > > Onur's most significant work is the improvement of the Kafka controller,
> > > which is the brain of a Kafka cluster. Over time, we have accumulated
> > > quite a few correctness and performance issues in the controller. There
> > > have been attempts to fix controller issues in isolation, which would make
> > > the code base more complicated without a clear path to solving all
> > > problems. Onur is the one who took a holistic approach, by first
> > > documenting all known issues, writing down a new design, coming up with a
> > > plan to deliver the changes in phases, and executing on it. At this point,
> > > Onur has completed the two most important phases: making the controller
> > > single threaded and changing the controller to use the async ZK api. The
> > > former fixed multiple deadlocks and race conditions. The latter
> > > significantly improved the performance when there are many partitions.
> > > Experimental results show that Onur's work reduced the controlled shutdown
> > > time by a factor of 100 and the controller failover time by a factor of 3.
> > >
> > > Congratulations, Onur!
> > >
> > > Thanks,
> > >
> > > Jun (on behalf of the Apache Kafka PMC)
> > >
> >
>


Re: [ANNOUNCE] Apache Kafka 1.0.0 Released

2017-11-02 Thread UMESH CHAUDHARY
--
> >
> > Apache Kafka is a distributed streaming platform with four core
> > APIs:
> >
> > ** The Producer API allows an application to publish a stream of records
> > to one or more Kafka topics.
> >
> > ** The Consumer API allows an application to subscribe to one or more
> > topics
> > and process the stream of records produced to them.
> >
> > ** The Streams API allows an application to act as a stream processor,
> > consuming
> > an input stream from one or more topics and producing an output stream to
> > one or more output topics, effectively transforming the input streams to
> > output streams.
> >
> > ** The Connector API allows building and running reusable producers or
> > consumers
> > that connect Kafka topics to existing applications or data systems. For
> > example, a connector to a relational database might capture every change
> > to a table.
> >
> >
> > With these APIs, Kafka can be used for two broad classes of application:
> >
> > ** Building real-time streaming data pipelines that reliably get data
> > between
> > systems or applications.
> >
> > ** Building real-time streaming applications that transform or react
> > to the streams
> > of data.
> >
> >
> > Apache Kafka is in use at large and small companies worldwide, including
> > Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
> > Target, The New York Times, Uber, Yelp, and Zalando, among others.
> >
> >
> > A big thank you to the following 108 contributors to this release!
> >
> > Abhishek Mendhekar, Xi Hu, Andras Beni, Andrey Dyachkov, Andy Chambers,
> > Apurva Mehta, Armin Braun, Attila Kreiner, Balint Molnar, Bart De Vylder,
> > Ben Stopford, Bharat Viswanadham, Bill Bejeck, Boyang Chen, Bryan
> Baugher,
> > Colin P. Mccabe, Koen De Groote, Dale Peakall, Damian Guy, Dana Powers,
> > Dejan Stojadinović, Derrick Or, Dong Lin, Zhendong Liu, Dustin Cote,
> > Edoardo Comar, Eno Thereska, Erik Kringen, Erkan Unal, Evgeny
> Veretennikov,
> > Ewen Cheslack-Postava, Florian Hussonnois, Janek P, Gregor Uhlenheuer,
> > Guozhang Wang, Gwen Shapira, Hamidreza Afzali, Hao Chen, Jiefang He,
> Holden
> > Karau, Hooman Broujerdi, Hugo Louro, Ismael Juma, Jacek Laskowski, Jakub
> > Scholz, James Cheng, James Chien, Jan Burkhardt, Jason Gustafson, Jeff
> > Chao, Jeff Klukas, Jeff Widman, Jeremy Custenborder, Jeyhun Karimov,
> > Jiangjie Qin, Joel Dice, Joel Hamill, Jorge Quilcate Otoya, Kamal C,
> Kelvin
> > Rutt, Kevin Lu, Kevin Sweeney, Konstantine Karantasis, Perry Lee, Magnus
> > Edenhill, Manikumar Reddy, Manikumar Reddy O, Manjula Kumar, Mariam John,
> > Mario Molina, Matthias J. Sax, Max Zheng, Michael Andre Pearce, Michael
> > André Pearce, Michael G. Noll, Michal Borowiecki, Mickael Maison, Nick
> > Pillitteri, Oleg Prozorov, Onur Karaman, Paolo Patierno, Pranav Maniar,
> > Qihuang Zheng, Radai Rosenblatt, Alex Radzish, Rajini Sivaram, Randall
> > Hauch, Richard Yu, Robin Moffatt, Sean McCauliff, Sebastian Gavril, Siva
> > Santhalingam, Soenke Liebau, Stephane Maarek, Stephane Roset, Ted Yu,
> > Thibaud Chardonnens, Tom Bentley, Tommy Becker, Umesh Chaudhary, Vahid
> > Hashemian, Vladimír Kleštinec, Xavier Léauté, Xianyang Liu, Xin Li,
> Linhua
> > Xin
> >
> >
> > We welcome your help and feedback. For more information on how to report
> > problems, and to get involved, visit the project website at
> > http://kafka.apache.org/
> >
> >
> >
> >
> > Thanks,
> > Guozhang Wang
> >
>


Re: [DISCUSS] KIP-174 - Deprecate and remove internal converter configs in WorkerConfig

2017-08-09 Thread UMESH CHAUDHARY
Thanks Ewen,
I just edited the KIP to reflect the changes.

Regards,
Umesh

On Wed, 9 Aug 2017 at 11:00 Ewen Cheslack-Postava <e...@confluent.io> wrote:

> Great, looking good. I'd probably be a bit more concrete about the
> Proposed Changes (e.g., "will log a warning if the config is specified"
> and "since the JsonConverter is the default, the configs will be removed
> immediately from the example worker configuration files").
>
> Other than that this LGTM and I'll be happy to get rid of those settings!
>
> -Ewen
>
> On Tue, Aug 8, 2017 at 2:54 AM, UMESH CHAUDHARY <umesh9...@gmail.com>
> wrote:
>
>> Hi Ewen,
>> Sorry, I am a bit late in responding to this.
>>
>> Thanks for your inputs and I've updated the KIP by adding more details to
>> it.
>>
>> Regards,
>> Umesh
>>
>> On Mon, 31 Jul 2017 at 21:51 Ewen Cheslack-Postava <e...@confluent.io>
>> wrote:
>>
>>> On Sun, Jul 30, 2017 at 10:21 PM, UMESH CHAUDHARY <umesh9...@gmail.com>
>>> wrote:
>>>
>>>> Hi Ewen,
>>>> Thanks for your comments.
>>>>
>>>> 1) Yes, there are some test and java classes which refer these configs,
>>>> so I will include them as well in "public interface" section of KIP. What
>>>> should be our approach to deal with the classes and tests which use these
>>>> configs: we need to change them to use JsonConverter when we plan for
>>>> removal of these configs right?
>>>>
>>>
>>> I actually meant the references in config/connect-standalone.properties
>>> and config/connect-distributed.properties
>>>
>>>
>>>> 2) I believe we can target the deprecation in 1.0.0 release as it is
>>>> planned in October 2017 and then removal in next major release. Let me
>>>> know your thoughts as we don't have any information for next major release
>>>> (next to 1.0.0) yet.
>>>>
>>>
>>> That sounds fine. Tough to say at this point what our approach to major
>>> version bumps will be since the approach to version numbering is changing a
>>> bit.
>>>
>>>
>>>> 3) Thats a good point and mentioned JIRA can help us to validate the
>>>> usage of any other converters. I will list this down in the KIP.
>>>>
>>>> Let me know if you have some additional thoughts on this.
>>>>
>>>> Regards,
>>>> Umesh
>>>>
>>>>
>>>>
>>>> On Wed, 26 Jul 2017 at 09:27 Ewen Cheslack-Postava <e...@confluent.io>
>>>> wrote:
>>>>
>>>>> Umesh,
>>>>>
>>>>> Thanks for the KIP. Straightforward and I think it's a good change.
>>>>> Unfortunately it is hard to tell how many people it would affect since
>>>>> we
>>>>> can't tell how many people have adjusted that config, but I think this
>>>>> is
>>>>> the right thing to do long term.
>>>>>
>>>>> A couple of quick things that might be helpful to refine:
>>>>>
>>>>> * Note that there are also some references in the example configs that
>>>>> we
>>>>> should remove.
>>>>> * It's nice to be explicit about when the removal is planned. This
>>>>> lets us
>>>>> set expectations with users for timeframe (especially now that we have
>>>>> time
>>>>> based releases), allows us to give info about the removal timeframe in
>>>>> log
>>>>> error messages, and lets us file a JIRA against that release so we
>>>>> remember
>>>>> to follow up. Given the update to 1.0.0 for the next release, we may
>>>>> also
>>>>> need to adjust how we deal with deprecations/removal if we don't want
>>>>> to
>>>>> have to wait all the way until 2.0 to remove (though it is unclear how
>>>>> exactly we will be handling version bumps from now on).
>>>>> * Migration path -- I think this is the major missing gap in the KIP.
>>>>> Do we
>>>>> need a migration path? If not, presumably it is because people aren't
>>>>> using
>>>>> any other converters in practice. Do we have some way of validating
>>>>> this (
>>>>> https://issues.apache.org/jira/browse/KAFKA-3988 might be pretty
>>>>> convincing
>>>>> evidence)? If there are some user

Re: [DISCUSS] KIP-174 - Deprecate and remove internal converter configs in WorkerConfig

2017-08-08 Thread UMESH CHAUDHARY
Hi Ewen,
Sorry, I am a bit late in responding to this.

Thanks for your inputs and I've updated the KIP by adding more details to
it.

Regards,
Umesh

On Mon, 31 Jul 2017 at 21:51 Ewen Cheslack-Postava <e...@confluent.io>
wrote:

> On Sun, Jul 30, 2017 at 10:21 PM, UMESH CHAUDHARY <umesh9...@gmail.com>
> wrote:
>
>> Hi Ewen,
>> Thanks for your comments.
>>
>> 1) Yes, there are some test and java classes which refer these configs,
>> so I will include them as well in "public interface" section of KIP. What
>> should be our approach to deal with the classes and tests which use these
>> configs: we need to change them to use JsonConverter when we plan for
>> removal of these configs right?
>>
>
> I actually meant the references in config/connect-standalone.properties
> and config/connect-distributed.properties
>
>
>> 2) I believe we can target the deprecation in 1.0.0 release as it is
>> planned in October 2017 and then removal in next major release. Let me
>> know your thoughts as we don't have any information for next major release
>> (next to 1.0.0) yet.
>>
>
> That sounds fine. Tough to say at this point what our approach to major
> version bumps will be since the approach to version numbering is changing a
> bit.
>
>
>> 3) Thats a good point and mentioned JIRA can help us to validate the
>> usage of any other converters. I will list this down in the KIP.
>>
>> Let me know if you have some additional thoughts on this.
>>
>> Regards,
>> Umesh
>>
>>
>>
>> On Wed, 26 Jul 2017 at 09:27 Ewen Cheslack-Postava <e...@confluent.io>
>> wrote:
>>
>>> Umesh,
>>>
>>> Thanks for the KIP. Straightforward and I think it's a good change.
>>> Unfortunately it is hard to tell how many people it would affect since we
>>> can't tell how many people have adjusted that config, but I think this is
>>> the right thing to do long term.
>>>
>>> A couple of quick things that might be helpful to refine:
>>>
>>> * Note that there are also some references in the example configs that we
>>> should remove.
>>> * It's nice to be explicit about when the removal is planned. This lets
>>> us
>>> set expectations with users for timeframe (especially now that we have
>>> time
>>> based releases), allows us to give info about the removal timeframe in
>>> log
>>> error messages, and lets us file a JIRA against that release so we
>>> remember
>>> to follow up. Given the update to 1.0.0 for the next release, we may also
>>> need to adjust how we deal with deprecations/removal if we don't want to
>>> have to wait all the way until 2.0 to remove (though it is unclear how
>>> exactly we will be handling version bumps from now on).
>>> * Migration path -- I think this is the major missing gap in the KIP. Do
>>> we
>>> need a migration path? If not, presumably it is because people aren't
>>> using
>>> any other converters in practice. Do we have some way of validating this
>>> (
>>> https://issues.apache.org/jira/browse/KAFKA-3988 might be pretty
>>> convincing
>>> evidence)? If there are some users using other converters, how would they
>>> migrate to newer versions which would no longer support that?
>>>
>>> -Ewen
>>>
>>>
>>> On Fri, Jul 14, 2017 at 2:37 AM, UMESH CHAUDHARY <umesh9...@gmail.com>
>>> wrote:
>>>
>>> > Hi there,
>>> > Resending as probably missed earlier to grab your attention.
>>> >
>>> > Regards,
>>> > Umesh
>>> >
>>> > -- Forwarded message -
>>> > From: UMESH CHAUDHARY <umesh9...@gmail.com>
>>> > Date: Mon, 3 Jul 2017 at 11:04
>>> > Subject: [DISCUSS] KIP-174 - Deprecate and remove internal converter
>>> > configs in WorkerConfig
>>> > To: dev@kafka.apache.org <dev@kafka.apache.org>
>>> >
>>> >
>>> > Hello All,
>>> > I have added a KIP recently to deprecate and remove internal converter
>>> > configs in WorkerConfig.java class because these have ultimately just
>>> > caused a lot more trouble and confusion than it is worth.
>>> >
>>> > Please find the KIP here
>>> > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>>> > 174+-+Deprecate+and+remove+internal+converter+configs+in+WorkerConfig>
>>> > and
>>> > the related JIRA here <
>>> https://issues.apache.org/jira/browse/KAFKA-5540>.
>>> >
>>> > Appreciate your review and comments.
>>> >
>>> > Regards,
>>> > Umesh
>>> >
>>>
>>


Re: [DISCUSS] KIP-174 - Deprecate and remove internal converter configs in WorkerConfig

2017-07-30 Thread UMESH CHAUDHARY
Hi Ewen,
Thanks for your comments.

1) Yes, there are some test and Java classes which refer to these configs, so
I will include them as well in the "public interface" section of the KIP. What
should be our approach for the classes and tests which use these configs: we
need to change them to use JsonConverter when we plan removal of these
configs, right?
2) I believe we can target the deprecation in the 1.0.0 release, as it is
planned for October 2017, and then removal in the next major release. Let me
know your thoughts, as we don't have any information about the next major
release (after 1.0.0) yet.
3) That's a good point, and the mentioned JIRA can help us validate the usage
of any other converters. I will list this in the KIP.

Let me know if you have some additional thoughts on this.

Regards,
Umesh



On Wed, 26 Jul 2017 at 09:27 Ewen Cheslack-Postava <e...@confluent.io>
wrote:

> Umesh,
>
> Thanks for the KIP. Straightforward and I think it's a good change.
> Unfortunately it is hard to tell how many people it would affect since we
> can't tell how many people have adjusted that config, but I think this is
> the right thing to do long term.
>
> A couple of quick things that might be helpful to refine:
>
> * Note that there are also some references in the example configs that we
> should remove.
> * It's nice to be explicit about when the removal is planned. This lets us
> set expectations with users for timeframe (especially now that we have time
> based releases), allows us to give info about the removal timeframe in log
> error messages, and lets us file a JIRA against that release so we remember
> to follow up. Given the update to 1.0.0 for the next release, we may also
> need to adjust how we deal with deprecations/removal if we don't want to
> have to wait all the way until 2.0 to remove (though it is unclear how
> exactly we will be handling version bumps from now on).
> * Migration path -- I think this is the major missing gap in the KIP. Do we
> need a migration path? If not, presumably it is because people aren't using
> any other converters in practice. Do we have some way of validating this (
> https://issues.apache.org/jira/browse/KAFKA-3988 might be pretty
> convincing
> evidence)? If there are some users using other converters, how would they
> migrate to newer versions which would no longer support that?
>
> -Ewen
>
>
> On Fri, Jul 14, 2017 at 2:37 AM, UMESH CHAUDHARY <umesh9...@gmail.com>
> wrote:
>
> > Hi there,
> > Resending as probably missed earlier to grab your attention.
> >
> > Regards,
> > Umesh
> >
> > -- Forwarded message -
> > From: UMESH CHAUDHARY <umesh9...@gmail.com>
> > Date: Mon, 3 Jul 2017 at 11:04
> > Subject: [DISCUSS] KIP-174 - Deprecate and remove internal converter
> > configs in WorkerConfig
> > To: dev@kafka.apache.org <dev@kafka.apache.org>
> >
> >
> > Hello All,
> > I have added a KIP recently to deprecate and remove internal converter
> > configs in WorkerConfig.java class because these have ultimately just
> > caused a lot more trouble and confusion than it is worth.
> >
> > Please find the KIP here
> > <https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 174+-+Deprecate+and+remove+internal+converter+configs+in+WorkerConfig>
> > and
> > the related JIRA here <https://issues.apache.org/jira/browse/KAFKA-5540
> >.
> >
> > Appreciate your review and comments.
> >
> > Regards,
> > Umesh
> >
>


[DISCUSS] KIP-174 - Deprecate and remove internal converter configs in WorkerConfig

2017-07-14 Thread UMESH CHAUDHARY
Hi there,
Resending, as this was probably missed earlier; hoping to grab your attention.

Regards,
Umesh

-- Forwarded message -
From: UMESH CHAUDHARY <umesh9...@gmail.com>
Date: Mon, 3 Jul 2017 at 11:04
Subject: [DISCUSS] KIP-174 - Deprecate and remove internal converter
configs in WorkerConfig
To: dev@kafka.apache.org <dev@kafka.apache.org>


Hello All,
I have recently added a KIP to deprecate and remove the internal converter
configs in the WorkerConfig class, because these have ultimately caused more
trouble and confusion than they are worth.

Please find the KIP here
<https://cwiki.apache.org/confluence/display/KAFKA/KIP-174+-+Deprecate+and+remove+internal+converter+configs+in+WorkerConfig>
and
the related JIRA here <https://issues.apache.org/jira/browse/KAFKA-5540>.

Appreciate your review and comments.

Regards,
Umesh
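The deprecation approach discussed in this thread (log a warning when the configs are set, and always fall back to the JsonConverter default) can be sketched as follows. This is a hypothetical Python illustration, not Kafka's actual Java WorkerConfig code; the function and constant names are invented:

```python
import logging

# Illustrative sketch only: accept the old keys, warn when they are set,
# and always resolve to the JsonConverter default proposed by KIP-174.
DEPRECATED_CONFIGS = ("internal.key.converter", "internal.value.converter")
DEFAULT_CONVERTER = "org.apache.kafka.connect.json.JsonConverter"

def resolve_internal_converters(worker_props):
    """Return the effective internal converter classes, warning on deprecated keys."""
    resolved = {}
    for key in DEPRECATED_CONFIGS:
        if key in worker_props:
            logging.getLogger("WorkerConfig").warning(
                "Config '%s' is deprecated and will be removed in a future "
                "major release; the JsonConverter will always be used.", key)
        # Any user-supplied value is intentionally ignored after removal.
        resolved[key] = DEFAULT_CONVERTER
    return resolved
```

A worker would call this once at startup, so users who still set the old keys see a warning but keep working until the removal release.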


[DISCUSS] KIP-174 - Deprecate and remove internal converter configs in WorkerConfig

2017-07-02 Thread UMESH CHAUDHARY
Hello All,
I have recently added a KIP to deprecate and remove the internal converter
configs in the WorkerConfig class, because these have ultimately caused more
trouble and confusion than they are worth.

Please find the KIP here
<https://cwiki.apache.org/confluence/display/KAFKA/KIP-174+-+Deprecate+and+remove+internal+converter+configs+in+WorkerConfig>
and
the related JIRA here <https://issues.apache.org/jira/browse/KAFKA-5540>.

Appreciate your review and comments.

Regards,
Umesh


[jira] [Commented] (KAFKA-5093) Load only batch header when rebuilding producer ID map

2017-05-27 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027696#comment-16027696
 ] 

Umesh Chaudhary commented on KAFKA-5093:


No worries at all [~hachikuji] :)

> Load only batch header when rebuilding producer ID map
> --
>
> Key: KAFKA-5093
> URL: https://issues.apache.org/jira/browse/KAFKA-5093
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> When rebuilding the producer ID map for KIP-98, we unnecessarily load the 
> full record data into memory when scanning through the log. It would be 
> better to only load the batch header since it is all that is needed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)
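The idea in this issue (scan the log reading only fixed-size batch headers and seek past the record payloads) can be sketched as follows. This is a hypothetical Python illustration with an invented 16-byte header layout; Kafka's real RecordBatch format differs:

```python
import io
import struct

# Invented header layout for illustration: producer_id (int64) + payload length (int64).
HEADER = struct.Struct(">qq")

def scan_producer_ids(log):
    """Collect producer IDs by reading only batch headers, skipping payloads."""
    pids = set()
    while True:
        hdr = log.read(HEADER.size)
        if len(hdr) < HEADER.size:
            break  # end of log
        pid, length = HEADER.unpack(hdr)
        pids.add(pid)
        log.seek(length, io.SEEK_CUR)  # skip record data without loading it
    return pids
```

The point is that memory use stays proportional to the header size rather than to the full record data, which is what the issue asks for when rebuilding the producer ID map.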


[jira] [Commented] (KAFKA-5093) Load only batch header when rebuilding producer ID map

2017-05-27 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16027362#comment-16027362
 ] 

Umesh Chaudhary commented on KAFKA-5093:


Tried to correlate it with the KIP but was unable to locate the intended piece of
code to tweak. If possible, can you please point it out?

> Load only batch header when rebuilding producer ID map
> --
>
> Key: KAFKA-5093
> URL: https://issues.apache.org/jira/browse/KAFKA-5093
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Jason Gustafson
>Priority: Blocker
>  Labels: exactly-once
> Fix For: 0.11.0.0
>
>
> When rebuilding the producer ID map for KIP-98, we unnecessarily load the 
> full record data into memory when scanning through the log. It would be 
> better to only load the batch header since it is all that is needed.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Assigned] (KAFKA-5171) TC should not accept empty string transactional id

2017-05-26 Thread Umesh Chaudhary (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Chaudhary reassigned KAFKA-5171:
--

Assignee: Umesh Chaudhary

> TC should not accept empty string transactional id
> --
>
> Key: KAFKA-5171
> URL: https://issues.apache.org/jira/browse/KAFKA-5171
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Guozhang Wang
>    Assignee: Umesh Chaudhary
> Fix For: 0.11.0.0
>
>
> Currently on the TC, both `null` and an `empty string` will be accepted and a new
> PID will be returned. However, on the producer client end an empty string
> transactional id is not allowed, and if the user explicitly sets it to an empty
> string an RTE will be thrown.
> We can make the TC's behavior consistent with the client by also rejecting an
> empty string transactional id.





[jira] [Assigned] (KAFKA-4660) Improve test coverage KafkaStreams

2017-05-20 Thread Umesh Chaudhary (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-4660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Chaudhary reassigned KAFKA-4660:
--

Assignee: Umesh Chaudhary

> Improve test coverage KafkaStreams
> --
>
> Key: KAFKA-4660
> URL: https://issues.apache.org/jira/browse/KAFKA-4660
> Project: Kafka
>  Issue Type: Sub-task
>  Components: streams
>Reporter: Damian Guy
>    Assignee: Umesh Chaudhary
>Priority: Minor
> Fix For: 0.11.0.0
>
>
> {{toString}} is used to print the topology, so probably should have a unit 
> test.
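A minimal sketch of the kind of unit test the ticket asks for, using an illustrative stand-in topology rather than the real KafkaStreams class: assert that `toString()` describes every node, since its output is what users see when they print the topology.

```java
// Illustrative stand-in for the topology; not the real KafkaStreams type.
public class TopologyToStringSketch {
    static class FakeTopology {
        private final String[] nodes;
        FakeTopology(String... nodes) { this.nodes = nodes; }
        @Override public String toString() {
            return "Topology: " + String.join(" -> ", nodes);
        }
    }

    // The assertion a toString() unit test would make: every node name
    // must appear in the rendered string.
    public static boolean toStringMentionsAll(FakeTopology t, String... expected) {
        String rendered = t.toString();
        for (String node : expected) {
            if (!rendered.contains(node)) return false;
        }
        return true;
    }
}
```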





[jira] [Commented] (KAFKA-5025) FetchRequestTest should use batches with more than one message

2017-05-15 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16010242#comment-16010242
 ] 

Umesh Chaudhary commented on KAFKA-5025:


Hi [~ijuma], I executed the above test and observed that *produceData* actually 
returns multiple records, i.e. different keys with different values. 
Is that not the expected behaviour of this method? Please correct me if my 
understanding is wrong. 

> FetchRequestTest should use batches with more than one message
> --
>
> Key: KAFKA-5025
> URL: https://issues.apache.org/jira/browse/KAFKA-5025
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Ismael Juma
>    Assignee: Umesh Chaudhary
> Fix For: 0.11.0.0
>
>
> As part of the message format changes for KIP-98, 
> FetchRequestTest.produceData was changed to always use record batches 
> containing a single message. We should restructure the test so that it's more 
> realistic. 
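The intended restructuring might look like the following sketch; the helper and its types are illustrative stand-ins for the real test code. Instead of one record per batch, several key/value records are grouped into each batch so the fetch path is exercised with realistic multi-record batches.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of a produceData-style helper that groups records
// into multi-record batches rather than one record per batch.
public class ProduceDataSketch {
    public static List<List<String>> produceData(int totalRecords, int recordsPerBatch) {
        List<List<String>> batches = new ArrayList<>();
        List<String> current = new ArrayList<>();
        for (int i = 0; i < totalRecords; i++) {
            current.add("key-" + i + "=value-" + i);
            if (current.size() == recordsPerBatch) {
                batches.add(current);           // a full multi-record batch
                current = new ArrayList<>();
            }
        }
        if (!current.isEmpty()) batches.add(current); // trailing partial batch
        return batches;
    }
}
```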





Re: Permission request to create a KIP

2017-05-04 Thread UMESH CHAUDHARY
Thank you Ewen! Now, I can create child page under "Apache Kafka".

Regards,
Umesh

On Fri, 5 May 2017 at 10:52 Ewen Cheslack-Postava <e...@confluent.io> wrote:

> Umesh,
>
> I've given you permissions on the wiki. Let me know if you run into any
> issues.
>
> -Ewen
>
> On Thu, May 4, 2017 at 12:04 AM, UMESH CHAUDHARY <umesh9...@gmail.com>
> wrote:
>
> > Hello Mates,
> > I need to start a KIP for KAFKA-5057
> > <https://issues.apache.org/jira/browse/KAFKA-5057> under this wiki page
> > <https://cwiki.apache.org/confluence/display/KAFKA/
> > Kafka+Improvement+Proposals>
> > but
> > looks like I don't have sufficient permissions to create a child page
> under
> > "Apache Kafka" space.
> > Can anyone of you please provide me the necessary permissions on that
> > space?
> >
> > My Email : umesh9...@gmail.com
> > Wiki Id: umesh9794
> >
> > Thanks,
> > Umesh
> >
>


Permission request to create a KIP

2017-05-04 Thread UMESH CHAUDHARY
Hello Mates,
I need to start a KIP for KAFKA-5057
 under this wiki page

but
looks like I don't have sufficient permissions to create a child page under
"Apache Kafka" space.
Can anyone of you please provide me the necessary permissions on that space?

My Email : umesh9...@gmail.com
Wiki Id: umesh9794

Thanks,
Umesh


[jira] [Commented] (KAFKA-5171) TC should not accept empty string transactional id

2017-05-03 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15996180#comment-15996180
 ] 

Umesh Chaudhary commented on KAFKA-5171:


[~guozhang], please review the PR. 

> TC should not accept empty string transactional id
> --
>
> Key: KAFKA-5171
> URL: https://issues.apache.org/jira/browse/KAFKA-5171
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Guozhang Wang
> Fix For: 0.11.0.0
>
>
> Currently on TC, both `null` and `empty string` will be accepted and a new 
> pid will be returned. However, on the producer client end empty string 
> transactional id is not allowed, and if user specifically set it with empty 
> string RTE will be thrown.
> We can make TC's behavior consistent with client to also reject empty string 
> transactional id.





[jira] [Commented] (KAFKA-5137) Controlled shutdown timeout message improvement

2017-04-27 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15988235#comment-15988235
 ] 

Umesh Chaudhary commented on KAFKA-5137:


Indeed [~cotedm]. Even the *socketTimeoutMs* instance variable used in this 
method points to *controller.socket.timeout.ms*. I sent an initial PR; please 
review it, and I can improve it if we identify other reasons an IOException 
could be thrown during controlled shutdown. 

> Controlled shutdown timeout message improvement
> ---
>
> Key: KAFKA-5137
> URL: https://issues.apache.org/jira/browse/KAFKA-5137
> Project: Kafka
>  Issue Type: Improvement
>Affects Versions: 0.10.2.0
>Reporter: Dustin Cote
>Priority: Minor
>  Labels: newbie
>
> Currently if you fail during controlled shutdown, you can get a message that 
> says the socket.timeout.ms has expired. This config actually doesn't exist on 
> the broker. Instead, we should explicitly say if we've hit the 
> controller.socket.timeout.ms or the request.timeout.ms as it's confusing to 
> take action given the current message. I believe the relevant code is here:
> https://github.com/apache/kafka/blob/0.10.2/core/src/main/scala/kafka/server/KafkaServer.scala#L428-L454
> I'm also not sure if there's another timeout that could be hit here or 
> another reason why IOException might be thrown. In the least we should call 
> out those two configs instead of the non-existent one but if we can direct to 
> the proper one that would be even better.
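An improved message along the lines the ticket suggests could look like this sketch. The surrounding method is illustrative, though `controller.socket.timeout.ms` and `request.timeout.ms` are the configs the ticket names.

```java
import java.io.IOException;

// Illustrative sketch: build an error message that names the configs that
// can actually be involved, instead of the non-existent socket.timeout.ms.
public class ControlledShutdownLogging {
    public static String timeoutMessage(IOException cause, int controllerSocketTimeoutMs) {
        return "Error during controlled shutdown, possibly because this broker "
             + "could not reach the controller within controller.socket.timeout.ms ("
             + controllerSocketTimeoutMs + " ms) or the request timed out "
             + "(request.timeout.ms): " + cause.getMessage();
    }
}
```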





[jira] [Comment Edited] (KAFKA-5025) FetchRequestTest should use batches with more than one message

2017-04-27 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15982953#comment-15982953
 ] 

Umesh Chaudhary edited comment on KAFKA-5025 at 4/28/17 4:53 AM:
-

[~ijuma], taking this one to start my contribution to the project. May I ask for 
some guidelines to start working on this? What should be considered in this 
restructure? 


was (Author: umesh9...@gmail.com):
[~ijuma], taking this one to start my contribution to the project. May I ask 
some guidelines to start working on this ?

> FetchRequestTest should use batches with more than one message
> --
>
> Key: KAFKA-5025
> URL: https://issues.apache.org/jira/browse/KAFKA-5025
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Ismael Juma
>    Assignee: Umesh Chaudhary
> Fix For: 0.11.0.0
>
>
> As part of the message format changes for KIP-98, 
> FetchRequestTest.produceData was changed to always use record batches 
> containing a single message. We should restructure the test so that it's more 
> realistic. 





[jira] [Commented] (KAFKA-5132) Abort long running transactions

2017-04-27 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15988198#comment-15988198
 ] 

Umesh Chaudhary commented on KAFKA-5132:


Sure [~damianguy], I understand :)

> Abort long running transactions
> ---
>
> Key: KAFKA-5132
> URL: https://issues.apache.org/jira/browse/KAFKA-5132
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Damian Guy
>Assignee: Damian Guy
>
> We need to abort any transactions that have been running longer than the txn 
> timeout





[jira] [Commented] (KAFKA-5132) Abort long running transactions

2017-04-27 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5132?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15986115#comment-15986115
 ] 

Umesh Chaudhary commented on KAFKA-5132:


Hi [~damianguy], can I attempt this one? I will need some pointers, though. 

> Abort long running transactions
> ---
>
> Key: KAFKA-5132
> URL: https://issues.apache.org/jira/browse/KAFKA-5132
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: Damian Guy
>
> We need to abort any transactions that have been running longer than the txn 
> timeout





[jira] [Commented] (KAFKA-5025) FetchRequestTest should use batches with more than one message

2017-04-25 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15982953#comment-15982953
 ] 

Umesh Chaudhary commented on KAFKA-5025:


[~ijuma], taking this one to start my contribution to the project. May I ask 
some guidelines to start working on this ?

> FetchRequestTest should use batches with more than one message
> --
>
> Key: KAFKA-5025
> URL: https://issues.apache.org/jira/browse/KAFKA-5025
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Ismael Juma
>    Assignee: Umesh Chaudhary
> Fix For: 0.11.0.0
>
>
> As part of the message format changes for KIP-98, 
> FetchRequestTest.produceData was changed to always use record batches 
> containing a single message. We should restructure the test so that it's more 
> realistic. 





[jira] [Assigned] (KAFKA-5025) FetchRequestTest should use batches with more than one message

2017-04-25 Thread Umesh Chaudhary (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-5025?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Umesh Chaudhary reassigned KAFKA-5025:
--

Assignee: Umesh Chaudhary

> FetchRequestTest should use batches with more than one message
> --
>
> Key: KAFKA-5025
> URL: https://issues.apache.org/jira/browse/KAFKA-5025
> Project: Kafka
>  Issue Type: Sub-task
>  Components: clients, core, producer 
>Reporter: Ismael Juma
>    Assignee: Umesh Chaudhary
> Fix For: 0.11.0.0
>
>
> As part of the message format changes for KIP-98, 
> FetchRequestTest.produceData was changed to always use record batches 
> containing a single message. We should restructure the test so that it's more 
> realistic. 





[jira] [Commented] (KAFKA-5114) Clarify meaning of logs in Introduction: Topics and Logs

2017-04-23 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15980644#comment-15980644
 ] 

Umesh Chaudhary commented on KAFKA-5114:


IMHO, it correlates with the previous statement: *The partitions in the 
log serve several purposes...*. We described the log as the topic's data, so the 
partitions of the log are the partitions of the topic's data.

> Clarify meaning of logs in Introduction: Topics and Logs
> 
>
> Key: KAFKA-5114
> URL: https://issues.apache.org/jira/browse/KAFKA-5114
> Project: Kafka
>  Issue Type: Improvement
>  Components: documentation
>Reporter: Michael Ernest
>Priority: Minor
>
> The term log is ambiguous in this section:
> * To describe a partition as a 'structured commit log'
> * To describe a topic as a partitioned log
> Then there's this sentence under Distribution: "The partitions of the log are 
> distributed over the servers in the Kafka cluster with each server handling 
> data and requests for a share of the partitions"
> In that last sentence, replacing 'log' with 'topic' would be clearer.





[jira] [Commented] (KAFKA-5098) KafkaProducer.send() blocks and generates TimeoutException if topic name has illegal char

2017-04-22 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15979910#comment-15979910
 ] 

Umesh Chaudhary commented on KAFKA-5098:


I will reproduce and work on this. 

> KafkaProducer.send() blocks and generates TimeoutException if topic name has 
> illegal char
> -
>
> Key: KAFKA-5098
> URL: https://issues.apache.org/jira/browse/KAFKA-5098
> Project: Kafka
>  Issue Type: Bug
>  Components: producer 
>Affects Versions: 0.10.2.0
> Environment: Java client running against server using 
> wurstmeister/kafka Docker image.
>Reporter: Jeff Larsen
>
> The server is running with auto create enabled. If we try to publish to a 
> topic with a forward slash in the name, the call blocks and we get a 
> TimeoutException in the Callback. I would expect it to return immediately 
> with an InvalidTopicException.
> There are other blocking issues that have been reported which may be related 
> to some degree, but this particular cause seems unrelated.
> Sample code:
> {code}
> import org.apache.kafka.clients.producer.*;
> import java.util.*;
> public class KafkaProducerUnexpectedBlockingAndTimeoutException {
>   public static void main(String[] args) {
> Properties props = new Properties();
> props.put("bootstrap.servers", "kafka.example.com:9092");
> props.put("key.serializer", 
> "org.apache.kafka.common.serialization.StringSerializer");
> props.put("value.serializer", 
> "org.apache.kafka.common.serialization.StringSerializer");
> props.put("max.block.ms", 1); // 10 seconds should illustrate our 
> point
> String separator = "/";
> //String separator = "_";
> try (Producer<String, String> producer = new KafkaProducer<>(props)) {
>   System.out.println("Calling KafkaProducer.send() at " + new Date());
>   producer.send(
>   new ProducerRecord<String, String>("abc" + separator + 
> "someStreamName",
>   "Not expecting a TimeoutException here"),
>   new Callback() {
> @Override
> public void onCompletion(RecordMetadata metadata, Exception e) {
>   if (e != null) {
> System.out.println(e.toString());
>   }
> }
>   });
>   System.out.println("KafkaProducer.send() completed at " + new Date());
> }
>   }
> }
> {code}
> Switching to the underscore separator in the above example works as expected.
> Mea culpa: We neglected to research allowed chars in a topic name, but the 
> TimeoutException we encountered did not help point us in the right direction.
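A client-side fail-fast check, sketched here as an illustration of the expected behavior rather than the producer's real validation path, assuming the commonly documented topic-name rules (ASCII alphanumerics plus '.', '_' and '-', up to 249 characters): a topic like "abc/someStreamName" would be rejected immediately instead of timing out on metadata.

```java
import java.util.regex.Pattern;

// Illustrative validator; length limit and character set are assumptions
// based on commonly documented Kafka topic-name rules.
public class TopicNameValidator {
    private static final Pattern LEGAL = Pattern.compile("[a-zA-Z0-9._-]+");
    private static final int MAX_LENGTH = 249;

    public static boolean isValid(String topic) {
        return topic != null
            && !topic.isEmpty()
            && topic.length() <= MAX_LENGTH
            && LEGAL.matcher(topic).matches(); // '/' fails this match
    }
}
```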





[jira] [Commented] (KAFKA-5049) Chroot check should be done for each ZkUtils instance

2017-04-15 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15969873#comment-15969873
 ] 

Umesh Chaudhary commented on KAFKA-5049:


[~anukin] I tried to assign this to you but unfortunately I am not able to. 
Maybe [~junrao] or [~ijuma] can do that.

> Chroot check should be done for each ZkUtils instance
> -
>
> Key: KAFKA-5049
> URL: https://issues.apache.org/jira/browse/KAFKA-5049
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>  Labels: newbie
> Fix For: 0.11.0.0
>
>
> In KAFKA-1994, the check for ZK chroot was moved to ZkPath. However, ZkPath 
> is a JVM singleton and we may use multiple ZkClient instances with multiple 
> ZooKeeper ensembles in the same JVM (for cluster info, authorizer and 
> pluggable code provided by users).
> The right way to do this is to make ZkPath an instance variable in ZkUtils so 
> that we do the check once per ZkUtils instance.
> cc [~gwenshap] [~junrao], who reviewed KAFKA-1994, in case I am missing 
> something.
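The proposed change can be sketched as follows, with illustrative stand-in classes rather than the real ZkUtils/ZkPath: the chroot check moves from a JVM-wide singleton into per-instance state, so two instances pointing at different ZooKeeper ensembles each run their own check.

```java
// Illustrative stand-ins for ZkUtils/ZkPath; not Kafka's real classes.
public class ZkUtilsSketch {
    // Per-instance replacement for the former JVM-wide singleton.
    static class ZkPath {
        private boolean namespaceChecked = false;
        int checks = 0;

        void checkNamespace() {
            if (!namespaceChecked) {
                checks++;            // stand-in for the real ZooKeeper existence check
                namespaceChecked = true;
            }
        }
    }

    private final ZkPath zkPath = new ZkPath(); // instance variable, not a singleton

    public void createPersistentPath(String path) {
        zkPath.checkNamespace();     // runs at most once per ZkUtilsSketch instance
        // ... actual path creation would go here ...
    }

    public int chrootChecks() {
        return zkPath.checks;
    }
}
```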





[jira] [Commented] (KAFKA-5049) Chroot check should be done for each ZkUtils instance

2017-04-14 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15969805#comment-15969805
 ] 

Umesh Chaudhary commented on KAFKA-5049:


[~anukin] you can send a PR; for detailed steps you can refer to [this 
page|https://cwiki.apache.org/confluence/display/KAFKA/Contributing+Code+Changes#ContributingCodeChanges-PullRequest]
 

> Chroot check should be done for each ZkUtils instance
> -
>
> Key: KAFKA-5049
> URL: https://issues.apache.org/jira/browse/KAFKA-5049
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>  Labels: newbie
> Fix For: 0.11.0.0
>
>
> In KAFKA-1994, the check for ZK chroot was moved to ZkPath. However, ZkPath 
> is a JVM singleton and we may use multiple ZkClient instances with multiple 
> ZooKeeper ensembles in the same JVM (for cluster info, authorizer and 
> pluggable code provided by users).
> The right way to do this is to make ZkPath an instance variable in ZkUtils so 
> that we do the check once per ZkUtils instance.
> cc [~gwenshap] [~junrao], who reviewed KAFKA-1994, in case I am missing 
> something.





[jira] [Commented] (KAFKA-5049) Chroot check should be done for each ZkUtils instance

2017-04-13 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15968624#comment-15968624
 ] 

Umesh Chaudhary commented on KAFKA-5049:


Thanks [~junrao] for the pointers. While looking at the current 
implementation, it seems ZkPath cannot be instantiated trivially. Wouldn't 
we need another class definition (as a companion to the existing ZkPath object) 
that enables instantiation of ZkPath? Please correct me if I am wrong. 

> Chroot check should be done for each ZkUtils instance
> -
>
> Key: KAFKA-5049
> URL: https://issues.apache.org/jira/browse/KAFKA-5049
> Project: Kafka
>  Issue Type: Bug
>Reporter: Ismael Juma
>  Labels: newbie
> Fix For: 0.11.0.0
>
>
> In KAFKA-1994, the check for ZK chroot was moved to ZkPath. However, ZkPath 
> is a JVM singleton and we may use multiple ZkClient instances with multiple 
> ZooKeeper ensembles in the same JVM (for cluster info, authorizer and 
> pluggable code provided by users).
> The right way to do this is to make ZkPath an instance variable in ZkUtils so 
> that we do the check once per ZkUtils instance.
> cc [~gwenshap] [~junrao], who reviewed KAFKA-1994, in case I am missing 
> something.





[jira] [Commented] (KAFKA-5057) "Big Message Log"

2017-04-13 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15968580#comment-15968580
 ] 

Umesh Chaudhary commented on KAFKA-5057:


Understood, and yes, this is a good idea to capture the frequency of "Big 
Messages" on the broker. The new broker config would set the threshold, and for 
produced messages that exceed it, the broker would log their details. 
Also, I can start preparing a KIP for this feature. 

> "Big Message Log"
> -
>
> Key: KAFKA-5057
> URL: https://issues.apache.org/jira/browse/KAFKA-5057
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> Really large requests can cause significant GC pauses, which can cause quite a 
> few other symptoms on a broker. It would be nice to be able to catch them.
> Let's add the option to log details (client id, topic, partition) for every 
> produce request that is larger than a configurable threshold.
> /cc [~apurva]
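A sketch of the idea, with an illustrative threshold and logging shape rather than an existing broker API: any produce request larger than the configured threshold gets its client id, topic, and partition recorded.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative "big message log": the threshold would come from a new
// broker config; the entry list stands in for the broker's logger.
public class BigMessageLogger {
    private final int thresholdBytes;
    private final List<String> entries = new ArrayList<>();

    public BigMessageLogger(int thresholdBytes) {
        this.thresholdBytes = thresholdBytes;
    }

    public void onProduceRequest(String clientId, String topic, int partition, int sizeBytes) {
        if (sizeBytes > thresholdBytes) {
            entries.add("large produce request: client=" + clientId
                    + " topic=" + topic + " partition=" + partition
                    + " size=" + sizeBytes + " bytes");
        }
    }

    public List<String> entries() { return entries; }
}
```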





[jira] [Commented] (KAFKA-5057) "Big Message Log"

2017-04-12 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-5057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15965615#comment-15965615
 ] 

Umesh Chaudhary commented on KAFKA-5057:


[~gwenshap] currently we have the "max.request.size" configuration for a producer; 
do you suggest printing the log message based on this config, or introducing a 
new config to define the threshold? 

> "Big Message Log"
> -
>
> Key: KAFKA-5057
> URL: https://issues.apache.org/jira/browse/KAFKA-5057
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> Really large requests can cause significant GC pauses, which can cause quite a 
> few other symptoms on a broker. It would be nice to be able to catch them.
> Let's add the option to log details (client id, topic, partition) for every 
> produce request that is larger than a configurable threshold.
> /cc [~apurva]





[jira] [Commented] (KAFKA-4737) Streams apps hang if started when brokers are not available

2017-02-05 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15853424#comment-15853424
 ] 

Umesh Chaudhary commented on KAFKA-4737:


No worries. Thanks [~gwenshap] !

> Streams apps hang if started when brokers are not available
> ---
>
> Key: KAFKA-4737
> URL: https://issues.apache.org/jira/browse/KAFKA-4737
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> Start a streams example while broker is down, and it will just hang there. It 
> will also hang on shutdown.
> I'd expect it to exit with an error message if broker is not available.





[jira] [Commented] (KAFKA-4737) Streams apps hang if started when brokers are not available

2017-02-05 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4737?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15853422#comment-15853422
 ] 

Umesh Chaudhary commented on KAFKA-4737:


Hi [~gwenshap], is the intent similar to KAFKA-4564?

> Streams apps hang if started when brokers are not available
> ---
>
> Key: KAFKA-4737
> URL: https://issues.apache.org/jira/browse/KAFKA-4737
> Project: Kafka
>  Issue Type: Bug
>Reporter: Gwen Shapira
>
> Start a streams example while broker is down, and it will just hang there. It 
> will also hang on shutdown.
> I'd expect it to exit with an error message if broker is not available.





[jira] [Comment Edited] (KAFKA-4569) Transient failure in org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable

2017-01-10 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15813892#comment-15813892
 ] 

Umesh Chaudhary edited comment on KAFKA-4569 at 1/10/17 9:18 AM:
-

Hi [~ijuma], one observation regarding this case: when I run the test 
testWakeupWithFetchDataAvailable, it never fails, but when I debug it with a 
breakpoint, it always fails. I am trying to determine the reason for the debug 
failures. 


was (Author: umesh9...@gmail.com):
Hi [~ijuma], One observation regarding the case. I saw that when I run the test 
testWakeupWithFetchDataAvailable it never fails but when I debug it, it always 
fails. Trying to get the reasons for debug failures. 

> Transient failure in 
> org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable
> -
>
> Key: KAFKA-4569
> URL: https://issues.apache.org/jira/browse/KAFKA-4569
> Project: Kafka
>  Issue Type: Sub-task
>  Components: unit tests
>Reporter: Guozhang Wang
>Assignee: Umesh Chaudhary
>  Labels: newbie
> Fix For: 0.10.2.0
>
>
> One example is:
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/370/testReport/junit/org.apache.kafka.clients.consumer/KafkaConsumerTest/testWakeupWithFetchDataAvailable/
> {code}
> Stacktrace
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.fail(Assert.java:95)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable(KafkaConsumerTest.java:679)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
> 

[jira] [Commented] (KAFKA-4564) When the destination brokers are down or misconfigured in config, Streams should fail fast

2017-01-10 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15814359#comment-15814359
 ] 

Umesh Chaudhary commented on KAFKA-4564:


Hi [~guozhang], [~ewencp], I am thinking of creating a KafkaProducer (using the 
properties wrapped in the StreamsConfig) inside the constructor of the StreamTask 
class to verify whether the user has configured the bootstrap list correctly. For 
a misconfigured bootstrap list, we can throw the appropriate exception there 
without proceeding further. 

Please let me know if that was the expectation for this JIRA, and correct me if I 
am wrong.
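The fail-fast approach described above can be sketched like this; the probe API is illustrative, not the real Streams internals: the bootstrap servers are probed with a bounded timeout at startup, and an exception is thrown instead of hanging.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Illustrative probe: metadataFetch stands in for a real metadata request
// to the bootstrap servers; on failure or timeout we throw rather than hang.
public class BrokerProbe {
    public static void failFastIfUnreachable(Callable<Boolean> metadataFetch, long timeoutMs) {
        ExecutorService ex = Executors.newSingleThreadExecutor();
        try {
            Future<Boolean> f = ex.submit(metadataFetch);
            if (!f.get(timeoutMs, TimeUnit.MILLISECONDS))
                throw new IllegalStateException("bootstrap servers unreachable");
        } catch (InterruptedException | ExecutionException | TimeoutException e) {
            throw new IllegalStateException("could not contact bootstrap servers", e);
        } finally {
            ex.shutdownNow();
        }
    }
}
```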

> When the destination brokers are down or misconfigured in config, Streams 
> should fail fast
> --
>
> Key: KAFKA-4564
> URL: https://issues.apache.org/jira/browse/KAFKA-4564
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Umesh Chaudhary
>  Labels: newbie
>
> Today if Kafka is down or users misconfigure the bootstrap list, Streams may 
> just hang for a while without any error messages even with the log4j 
> enabled, which is quite confusing.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4569) Transient failure in org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable

2017-01-09 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15813892#comment-15813892
 ] 

Umesh Chaudhary commented on KAFKA-4569:


Hi [~ijuma], one observation regarding this case: when I run the test 
testWakeupWithFetchDataAvailable, it never fails, but when I debug it, it always 
fails. I am trying to determine the reason for the debug failures. 

> Transient failure in 
> org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable
> -
>
> Key: KAFKA-4569
> URL: https://issues.apache.org/jira/browse/KAFKA-4569
> Project: Kafka
>  Issue Type: Sub-task
>  Components: unit tests
>Reporter: Guozhang Wang
>Assignee: Umesh Chaudhary
>  Labels: newbie
> Fix For: 0.10.2.0
>
>
> One example is:
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/370/testReport/junit/org.apache.kafka.clients.consumer/KafkaConsumerTest/testWakeupWithFetchDataAvailable/
> {code}
> Stacktrace
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.fail(Assert.java:95)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable(KafkaConsumerTest.java:679)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.Reflecti

[jira] [Commented] (KAFKA-4564) When the destination brokers are down or misconfigured in config, Streams should fail fast

2017-01-02 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15793810#comment-15793810
 ] 

Umesh Chaudhary commented on KAFKA-4564:


Thank you [~ewencp].

> When the destination brokers are down or misconfigured in config, Streams 
> should fail fast
> --
>
> Key: KAFKA-4564
> URL: https://issues.apache.org/jira/browse/KAFKA-4564
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>Assignee: Umesh Chaudhary
>  Labels: newbie
>
> Today if Kafka is down or users misconfigure the bootstrap list, Streams may 
> just hang for a while without any error messages, even with log4j 
> enabled, which is quite confusing.
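A minimal, self-contained sketch (plain Java, not Kafka code; the class and method names are hypothetical) of the fail-fast behavior the issue asks for: probe a bootstrap address with a bounded connect timeout and surface a clear error instead of hanging silently.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class BootstrapCheck {
    // Hypothetical helper: fail fast if a bootstrap server is unreachable,
    // instead of letting the client hang with no error message.
    static void requireReachable(String host, int port, int timeoutMs) throws IOException {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
        } catch (IOException e) {
            throw new IOException("Bootstrap server " + host + ":" + port + " unreachable", e);
        }
    }

    public static void main(String[] args) {
        try {
            requireReachable("localhost", 1, 200); // port 1 is almost never open
            System.out.println("reachable");
        } catch (IOException e) {
            System.out.println("failed fast");
        }
    }
}
```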



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (KAFKA-4564) When the destination brokers are down or misconfigured in config, Streams should fail fast

2017-01-01 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4564?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15792067#comment-15792067
 ] 

Umesh Chaudhary commented on KAFKA-4564:


Hi [~guozhang], please assign this to me.

> When the destination brokers are down or misconfigured in config, Streams 
> should fail fast
> --
>
> Key: KAFKA-4564
> URL: https://issues.apache.org/jira/browse/KAFKA-4564
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Reporter: Guozhang Wang
>  Labels: newbie
>
> Today if Kafka is down or users misconfigure the bootstrap list, Streams may 
> just hang for a while without any error messages, even with log4j 
> enabled, which is quite confusing.





[jira] [Commented] (KAFKA-4569) Transient failure in org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable

2017-01-01 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15792055#comment-15792055
 ] 

Umesh Chaudhary commented on KAFKA-4569:


Hi [~guozhang], I can work on this. Please assign this to me or add me to the 
contributor's list. 

> Transient failure in 
> org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable
> -
>
> Key: KAFKA-4569
> URL: https://issues.apache.org/jira/browse/KAFKA-4569
> Project: Kafka
>  Issue Type: Sub-task
>  Components: unit tests
>Reporter: Guozhang Wang
>  Labels: newbie
>
> One example is:
> https://builds.apache.org/job/kafka-pr-jdk8-scala2.11/370/testReport/junit/org.apache.kafka.clients.consumer/KafkaConsumerTest/testWakeupWithFetchDataAvailable/
> {code}
> Stacktrace
> java.lang.AssertionError
>   at org.junit.Assert.fail(Assert.java:86)
>   at org.junit.Assert.fail(Assert.java:95)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumerTest.testWakeupWithFetchDataAvailable(KafkaConsumerTest.java:679)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
>   at 
> org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
>   at 
> org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
>   at 
> org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
>   at 
> org.junit.rules.ExpectedException$ExpectedExceptionStatement.evaluate(ExpectedException.java:239)
>   at org.junit.rules.RunRules.evaluate(RunRules.java:20)
>   at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
>   at 
> org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
>   at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
>   at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
>   at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
>   at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
>   at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
>   at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.runTestClass(JUnitTestClassExecuter.java:114)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassExecuter.execute(JUnitTestClassExecuter.java:57)
>   at 
> org.gradle.api.internal.tasks.testing.junit.JUnitTestClassProcessor.processTestClass(JUnitTestClassProcessor.java:66)
>   at 
> org.gradle.api.internal.tasks.testing.SuiteTestClassProcessor.processTestClass(SuiteTestClassProcessor.java:51)
>   at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.dispatch.ContextClassLoaderDispatch.dispatch(ContextClassLoaderDispatch.java:32)
>   at 
> org.gradle.internal.dispatch.ProxyDispatchAdapter$DispatchingInvocationHandler.invoke(ProxyDispatchAdapter.java:93)
>   at com.sun.proxy.$Proxy2.processTestClass(Unknown Source)
>   at 
> org.gradle.api.internal.tasks.testing.worker.TestWorker.processTestClass(TestWorker.java:109)
>   at sun.reflect.GeneratedMethodAccessor10.invoke(Unknown Source)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>   at java.lang.reflect.Method.invoke(Method.java:498)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:35)
>   at 
> org.gradle.internal.dispatch.ReflectionDispatch.dispatch(ReflectionDispatch.java:24)
>   at 
> org.gradle.internal.remote.internal.hub.MessageHub$Handle

[jira] [Commented] (KAFKA-4308) Inconsistent parameters between console producer and consumer

2016-10-17 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15582838#comment-15582838
 ] 

Umesh Chaudhary commented on KAFKA-4308:


Sure Andrew, can you please assign this to me?

> Inconsistent parameters between console producer and consumer
> -
>
> Key: KAFKA-4308
> URL: https://issues.apache.org/jira/browse/KAFKA-4308
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Gwen Shapira
>  Labels: newbie
>
> kafka-console-producer uses --broker-list while kafka-console-consumer uses 
> --bootstrap-server.
> Let's add --bootstrap-server to the producer for some consistency?





[jira] [Commented] (KAFKA-4308) Inconsistent parameters between console producer and consumer

2016-10-17 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15582819#comment-15582819
 ] 

Umesh Chaudhary commented on KAFKA-4308:


Hi [~gwenshap], I would like to work on this.
If possible, please assign it to me.

> Inconsistent parameters between console producer and consumer
> -
>
> Key: KAFKA-4308
> URL: https://issues.apache.org/jira/browse/KAFKA-4308
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.1.0
>Reporter: Gwen Shapira
>  Labels: newbie
>
> kafka-console-producer uses --broker-list while kafka-console-consumer uses 
> --bootstrap-server.
> Let's add --bootstrap-server to the producer for some consistency?





[jira] [Commented] (KAFKA-4122) Consumer startup swallows DNS resolution exception and infinitely retries

2016-09-29 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4122?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531969#comment-15531969
 ] 

Umesh Chaudhary commented on KAFKA-4122:


This one looks feasible. 
[~junrao] [~ewencp] Please advise. I can work on this. 

> Consumer startup swallows DNS resolution exception and infinitely retries
> -
>
> Key: KAFKA-4122
> URL: https://issues.apache.org/jira/browse/KAFKA-4122
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer, network
>Affects Versions: 0.9.0.1
> Environment: Run from Docker image with following Dockerfile:
> {code}
> FROM java:openjdk-8-jre
> ENV DEBIAN_FRONTEND noninteractive
> ENV SCALA_VERSION 2.11
> ENV KAFKA_VERSION 0.9.0.1
> ENV KAFKA_HOME /opt/kafka_"$SCALA_VERSION"-"$KAFKA_VERSION"
> # Install Kafka, Zookeeper and other needed things
> RUN apt-get update && \
> apt-get install -y zookeeper wget supervisor dnsutils && \
> rm -rf /var/lib/apt/lists/* && \
> apt-get clean && \
> wget -q 
> http://apache.mirrors.spacedump.net/kafka/"$KAFKA_VERSION"/kafka_"$SCALA_VERSION"-"$KAFKA_VERSION".tgz
>  -O /tmp/kafka_"$SCALA_VERSION"-"$KAFKA_VERSION".tgz && \
> tar xfz /tmp/kafka_"$SCALA_VERSION"-"$KAFKA_VERSION".tgz -C /opt && \
> rm /tmp/kafka_"$SCALA_VERSION"-"$KAFKA_VERSION".tgz
> {code}
>Reporter: Shane Hender
>
> When a consumer encounters nodes that it can't resolve the IP to, I'd expect 
> it to print an ERROR level msg and bubble up an exception, especially if 
> there are no other nodes available.
> Following is the stack trace that was hidden under the DEBUG trace level:
> {code}
> 18:30:47.070 [Filters-akka.kafka.default-dispatcher-7] DEBUG 
> o.apache.kafka.clients.NetworkClient - Initialize connection to node 0 for 
> sending metadata request
> 18:30:47.070 [Filters-akka.kafka.default-dispatcher-7] DEBUG 
> o.apache.kafka.clients.NetworkClient - Initiating connection to node 0 at 
> kafka.docker:9092.
> 18:30:47.071 [Filters-akka.kafka.default-dispatcher-7] DEBUG 
> o.apache.kafka.clients.NetworkClient - Error connecting to node 0 at 
> kafka.docker:9092:
> java.io.IOException: Can't resolve address: kafka.docker:9092
>   at 
> org.apache.kafka.common.network.Selector.connect(Selector.java:156)
>   at 
> org.apache.kafka.clients.NetworkClient.initiateConnect(NetworkClient.java:489)
>   at 
> org.apache.kafka.clients.NetworkClient.access$400(NetworkClient.java:47)
>   at 
> org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:624)
>   at 
> org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater.maybeUpdate(NetworkClient.java:543)
>   at 
> org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:254)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.clientPoll(ConsumerNetworkClient.java:320)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:213)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:193)
>   at 
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.awaitMetadataUpdate(ConsumerNetworkClient.java:134)
>   at 
> org.apache.kafka.clients.consumer.internals.AbstractCoordinator.ensureCoordinatorKnown(AbstractCoordinator.java:184)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:886)
>   at 
> org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:853)
>   at 
> akka.kafka.internal.ConsumerStageLogic.poll(ConsumerStage.scala:410)
>   at 
> akka.kafka.internal.CommittableConsumerStage$$anon$1.poll(ConsumerStage.scala:166)
>   at 
> akka.kafka.internal.ConsumerStageLogic$$anon$5.onPull(ConsumerStage.scala:360)
>   at 
> akka.stream.impl.fusing.GraphInterpreter.processEvent(GraphInterpreter.scala:608)
>   at 
> akka.stream.impl.fusing.GraphInterpreter.execute(GraphInterpreter.scala:542)
>   at 
> akka.stream.impl.fusing.GraphInterpreterShell.runBatch(ActorGraphInterpreter.scala:471)
>   at 
> akka.stream.impl.fusing.GraphInterpreterS

[jira] [Commented] (KAFKA-4142) Log files in /data dir date modified keeps being updated?

2016-09-29 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15531931#comment-15531931
 ] 

Umesh Chaudhary commented on KAFKA-4142:


[~cmhillerman], did you compare the sizes of the two files that have the same 
name but whose date modified changed?

From the doc I see:
the log is cleaned by recopying each log segment but omitting any key that 
appears in the offset map with a higher offset than what is found in the 
segment (i.e. messages with a key that appears in the dirty section of the log).

I believe that when you compare the sizes of the two files, you will find that 
some messages have been cleaned based on the retention policy.
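For context, a sketch under the assumption (true for pre-0.10.1 brokers) that time-based retention keys off each segment file's last-modified time, so anything that bumps a segment's mtime postpones its deletion. Class and file names below are illustrative, not Kafka code.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RetentionSketch {
    public static void main(String[] args) {
        long now = 100_000, retentionMs = 60_000;
        // segment name -> last-modified timestamp (ms), illustrative values
        Map<String, Long> segmentMtime = new LinkedHashMap<>();
        segmentMtime.put("0001.log", 30_000L); // old enough: eligible for deletion
        segmentMtime.put("0002.log", 90_000L); // mtime recently bumped: retained

        List<String> deletable = new ArrayList<>();
        for (Map.Entry<String, Long> e : segmentMtime.entrySet())
            if (now - e.getValue() > retentionMs) deletable.add(e.getKey());

        // A touched segment escapes deletion, matching the reporter's symptom.
        System.out.println(deletable);
    }
}
```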

> Log files in /data dir date modified keeps being updated?
> -
>
> Key: KAFKA-4142
> URL: https://issues.apache.org/jira/browse/KAFKA-4142
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 0.10.0.0
> Environment: CentOS release 6.8 (Final)
> uname -a
> Linux 2.6.32-642.1.1.el6.x86_64 #1 SMP Tue May 31 21:57:07 UTC 2016 x86_64 
> x86_64 x86_64 GNU/Linux
>Reporter: Clint Hillerman
>Priority: Minor
>
> The date modified of the kafka logs (the main ones specified by log.dirs in 
> the config) keeps getting updated and set to the exact same time.
> For example:
> Say I had two log and index files (date modified - file name):
> 20160901:10:00:01 - 0001.log
> 20160901:10:00:01 - 0001.index
> 20160902:10:00:01 - 0002.log
> 20160902:10:00:01 - 0002.index
> Later I notice the logs are getting way too old for the retention time. I then 
> go look at the log dir and I see this:
> 20160903:10:00:01 - 0001.log
> 20160903:10:00:01 - 0001.index
> 20160903:10:00:01 - 0002.log
> 20160903:10:00:01 - 0002.index
> 20160903:10:00:01 - 0003.log
> 20160903:10:00:01 - 0003.index
> 20160904:10:00:01 - 0004.log
> 20160904:10:00:01 - 0004.index
> The first two log files had their date modified moved forward for some 
> reason. They were updated from 0901 and 0902 to 0903. 
> It seems to happen periodically. The new logs that kafka writes out have the 
> correct time stamp. 
> This causes the logs to not be deleted. Right now I just touch the log files 
> to an older date and they are deleted right away. 
> Any help would be appreciated. Also, I'll explain the problem better if this 
> doesn't make sense.
> Thanks,
> Thanks,





Unable to locate auto.create.topics.enable=true path for KafkaProducer

2016-09-22 Thread UMESH CHAUDHARY
Hi Mates,
I was trying to understand how, when auto.create.topics.enable=true,
KafkaProducer first creates the topic and then sends messages to it.

What I saw:

the method

private Future<RecordMetadata> doSend(ProducerRecord<K, V> record, Callback
callback)

in KafkaProducer.java.

What I failed to find:

When getting metadata for the topic using the line

ClusterAndWaitTime clusterAndWaitTime =
waitOnMetadata(record.topic(), this.maxBlockTimeMs);

I could not locate the code path taken when we don't have any metadata for
the topic, i.e. the topic doesn't exist and, according to
"auto.create.topics.enable=true", KafkaProducer invokes topic creation before
sending the record.

It works flawlessly with clients, so it must be coded somewhere. I also looked
at MockProducerTest.java but was unable to locate it.

I am a newbie, so please forgive any stupidity here.

Regards,
Umesh
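One way to see why there is no explicit create-topic call in the producer: with auto.create.topics.enable=true the broker auto-creates the topic when it receives a metadata request for an unknown topic, and the producer's waitOnMetadata simply retries until metadata appears or maxBlockTimeMs elapses. A toy simulation of that retry loop (plain Java, no Kafka classes; all names are illustrative):

```java
import java.util.HashSet;
import java.util.Set;

public class WaitOnMetadataSketch {
    static Set<String> brokerTopics = new HashSet<>();
    static int requests = 0;

    // Stand-in for the broker: it auto-creates the topic as a side effect
    // of the second metadata request it receives for the unknown topic.
    static boolean fetchMetadata(String topic) {
        requests++;
        if (requests >= 2) brokerTopics.add(topic); // broker-side auto-creation
        return brokerTopics.contains(topic);
    }

    // Stand-in for the producer's retry loop: no createTopic call anywhere,
    // just bounded waiting for metadata to become available.
    static void waitOnMetadata(String topic, long maxBlockMs) {
        long deadline = System.currentTimeMillis() + maxBlockMs;
        while (!fetchMetadata(topic)) {
            if (System.currentTimeMillis() > deadline)
                throw new RuntimeException("Failed to update metadata within " + maxBlockMs + " ms");
        }
    }

    public static void main(String[] args) {
        waitOnMetadata("new-topic", 1000);
        System.out.println("metadata ready after " + requests + " requests");
    }
}
```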


[jira] [Commented] (KAFKA-4189) Consumer poll hangs forever if kafka is disabled

2016-09-21 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15509565#comment-15509565
 ] 

Umesh Chaudhary commented on KAFKA-4189:


Yep, [Guozhang's 
comment|https://issues.apache.org/jira/browse/KAFKA-1894?focusedCommentId=15036256=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15036256]
 suggests the same 

> Consumer poll hangs forever if kafka is disabled
> 
>
> Key: KAFKA-4189
> URL: https://issues.apache.org/jira/browse/KAFKA-4189
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Affects Versions: 0.9.0.1, 0.10.0.1
>Reporter: Tomas Benc
>Priority: Critical
>
> We develop a web application where the client sends a REST request and our 
> application downloads messages from Kafka and sends those messages back to 
> the client. In our web application we use the "New Consumer API" (not the 
> High Level nor the Simple Consumer API).
> The problem occurs when Kafka is disabled while the web application keeps 
> running. The application receives a request and tries to poll messages from 
> Kafka. Processing blocks on that line until Kafka is enabled.
> ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
> The timeout parameter of the poll method has no influence in such a case. I 
> would expect the poll method to throw some exception describing the 
> connection issues.
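For contrast, java.util.concurrent's BlockingQueue.poll honors its timeout strictly, which is the behavior the reporter expects from KafkaConsumer.poll; in the 0.9/0.10 clients the blocking could happen before the fetch (e.g. while looking up the group coordinator), where the poll timeout was not applied. A minimal demonstration of bounded-poll semantics, as an analogy rather than Kafka code:

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class PollTimeoutDemo {
    public static void main(String[] args) throws InterruptedException {
        LinkedBlockingQueue<String> queue = new LinkedBlockingQueue<>();
        long start = System.nanoTime();
        // With nothing to consume, poll returns null once the 200 ms bound
        // expires instead of blocking forever.
        String record = queue.poll(200, TimeUnit.MILLISECONDS);
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;
        System.out.println("record=" + record + " boundedWait=" + (elapsedMs >= 200 && elapsedMs < 5_000));
    }
}
```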





[jira] [Commented] (KAFKA-4189) Consumer poll hangs forever if kafka is disabled

2016-09-19 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4189?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15502860#comment-15502860
 ] 

Umesh Chaudhary commented on KAFKA-4189:


Is it similar to [KAFKA-1894|https://issues.apache.org/jira/browse/KAFKA-1894] ?

> Consumer poll hangs forever if kafka is disabled
> 
>
> Key: KAFKA-4189
> URL: https://issues.apache.org/jira/browse/KAFKA-4189
> Project: Kafka
>  Issue Type: Bug
>  Components: clients, consumer
>Affects Versions: 0.9.0.1, 0.10.0.1
>Reporter: Tomas Benc
>Priority: Critical
>
> We develop a web application where the client sends a REST request and our 
> application downloads messages from Kafka and sends those messages back to 
> the client. In our web application we use the "New Consumer API" (not the 
> High Level nor the Simple Consumer API).
> The problem occurs when Kafka is disabled while the web application keeps 
> running. The application receives a request and tries to poll messages from 
> Kafka. Processing blocks on that line until Kafka is enabled.
> ConsumerRecords<byte[], byte[]> records = consumer.poll(1000);
> The timeout parameter of the poll method has no influence in such a case. I 
> would expect the poll method to throw some exception describing the 
> connection issues.





WIKI page outdated

2016-09-01 Thread UMESH CHAUDHARY
Hello Mates,
I may be reiterating a previously identified issue here, so please forgive
me if you find it noisy.

The page
https://cwiki.apache.org/confluence/display/KAFKA/0.8.0+Producer+Example#
has a Producer example that seems to be outdated, yet it includes the
"partitioner.class" property, which was introduced after 0.8.x.

It also uses kafka.producer.ProducerConfig, which seems a bit old.

Can we update the page and correct the example?

Regards,
Umesh


[jira] [Commented] (KAFKA-4053) Refactor TopicCommand to remove redundant if/else statements

2016-08-17 Thread Umesh Chaudhary (JIRA)

[ 
https://issues.apache.org/jira/browse/KAFKA-4053?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15424224#comment-15424224
 ] 

Umesh Chaudhary commented on KAFKA-4053:


I will do that. Please assign it to me.

> Refactor TopicCommand to remove redundant if/else statements
> 
>
> Key: KAFKA-4053
> URL: https://issues.apache.org/jira/browse/KAFKA-4053
> Project: Kafka
>  Issue Type: Improvement
>  Components: admin
>Affects Versions: 0.10.0.1
>Reporter: Shuai Zhang
>Priority: Minor
> Fix For: 0.10.0.2
>
>
> In TopicCommand, there are a lot of redundant if/else statements, such as
> ```val ifNotExists = if (opts.options.has(opts.ifNotExistsOpt)) true else 
> false```
> We can refactor it as the following statement:
> ```val ifNotExists = opts.options.has(opts.ifNotExistsOpt)```
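The refactor is mechanical: `cond ? true : false` (or Scala's `if (cond) true else false`) is always just `cond`. The same simplification expressed in Java, with illustrative names standing in for the TopicCommand options:

```java
public class RefactorDemo {
    public static void main(String[] args) {
        boolean hasIfNotExistsOpt = args.length > 0; // stand-in for opts.options.has(opts.ifNotExistsOpt)

        boolean redundant = hasIfNotExistsOpt ? true : false; // style the issue flags
        boolean simplified = hasIfNotExistsOpt;               // refactored style

        System.out.println(redundant == simplified); // the two forms are equivalent
    }
}
```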


