Re: [VOTE] KIP-266: Add TimeoutException for KafkaConsumer#position

2018-05-10 Thread zhenya Sun
+1 non-binding

> On May 10, 2018, at 5:19 PM, Manikumar wrote:
> 
> +1 (non-binding).
> Thanks.
> 
> On Thu, May 10, 2018 at 2:33 PM, Mickael Maison 
> wrote:
> 
>> +1 (non binding)
>> Thanks
>> 
>> On Thu, May 10, 2018 at 9:39 AM, Rajini Sivaram 
>> wrote:
>>> Hi Richard, Thanks for the KIP.
>>> 
>>> +1 (binding)
>>> 
>>> Regards,
>>> 
>>> Rajini
>>> 
>>> On Wed, May 9, 2018 at 10:54 PM, Guozhang Wang 
>> wrote:
>>> 
 +1 from me, thanks!
 
 
 Guozhang
 
 On Wed, May 9, 2018 at 10:46 AM, Jason Gustafson 
 wrote:
 
> Thanks for the KIP, +1 (binding).
> 
> One small correction: the KIP mentions that close() will be deprecated,
> but we do not want to do this because it is needed by the Closeable
> interface. We only want to deprecate close(long, TimeUnit) in favor of
> close(Duration).
> 
> -Jason
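Jason's distinction can be sketched as follows. This is an editorial illustration, not the actual KafkaConsumer source; `SketchConsumer` and `lastTimeout` are invented names, and the 30-second default is an assumed value. A class implementing java.io.Closeable must keep a non-deprecated no-arg close(), so only the (long, TimeUnit) overload can be deprecated in favor of close(Duration):

```java
import java.io.Closeable;
import java.time.Duration;
import java.util.concurrent.TimeUnit;

// Editorial sketch only -- not Kafka's real implementation.
class SketchConsumer implements Closeable {
    Duration lastTimeout; // recorded so the delegation below is observable

    @Override
    public void close() { // required by Closeable, so it cannot be deprecated
        close(Duration.ofSeconds(30)); // assumed default timeout
    }

    @Deprecated // the overload the KIP proposes to deprecate
    public void close(long timeout, TimeUnit unit) {
        close(Duration.ofMillis(unit.toMillis(timeout)));
    }

    public void close(Duration timeout) { // the preferred replacement
        lastTimeout = timeout; // real code would release resources here
    }
}
```

Callers of the no-arg close() keep compiling against Closeable while routing through the new Duration overload.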
> 
> On Tue, May 8, 2018 at 12:43 AM, khaireddine Rezgui <
> khaireddine...@gmail.com> wrote:
> 
>> +1
>> 
>> 2018-05-07 20:35 GMT+01:00 Bill Bejeck :
>> 
>>> +1
>>> 
>>> Thanks,
>>> Bill
>>> 
>>> On Fri, May 4, 2018 at 7:21 PM, Richard Yu <
 yohan.richard...@gmail.com
>> 
>>> wrote:
>>> 
 Hi all, I would like to bump this thread since discussion in the
 KIP
 appears to be reaching its conclusion.
 
 
 
 On Thu, Mar 15, 2018 at 3:30 PM, Richard Yu <
>> yohan.richard...@gmail.com>
 wrote:
 
> Hi all,
> 
> Since there does not seem to be too much discussion in KIP-266, I will
> be starting a voting thread.
> Here is the link to KIP-266 for reference:
> 
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75974886
> 
> Recently, I have made some updates to the KIP. To reiterate, I have
> included KafkaConsumer's commitSync, poll, and committed in the KIP.
> (We will be adding a TimeoutException to them as well, in a similar
> manner to what we will be doing for position().)
> 
> Thanks,
> Richard Yu
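The semantics being voted on, a bounded blocking call that surfaces a TimeoutException instead of waiting forever, can be sketched generically. This is an editorial illustration of the contract only, not Kafka's implementation: Kafka does not spawn a thread per call, and its real TimeoutException lives in org.apache.kafka.common.errors (and is likewise unchecked). `BoundedCall` and `SketchTimeoutException` are invented names.

```java
import java.time.Duration;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Unchecked, mirroring org.apache.kafka.common.errors.TimeoutException.
class SketchTimeoutException extends RuntimeException {
    SketchTimeoutException(String message) { super(message); }
}

class BoundedCall {
    // Run the call; return its result within the timeout or throw.
    static <T> T callWithTimeout(Callable<T> call, Duration timeout) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        try {
            return executor.submit(call).get(timeout.toMillis(), TimeUnit.MILLISECONDS);
        } catch (java.util.concurrent.TimeoutException e) {
            throw new SketchTimeoutException("did not complete within " + timeout);
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            executor.shutdownNow();
        }
    }
}
```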
> 
> 
 
>>> 
>> 
>> 
>> 
>> --
>> Computer engineer
>> 
> 
 
 
 
 --
 -- Guozhang
 
>> 



Re: [VOTE] KIP-278: Add version option to Kafka's commands

2018-05-11 Thread zhenya Sun
+1 (non-binding)
> On May 11, 2018, at 9:51 AM, Ted Yu wrote:
> 
> +1
> 
> On Thu, May 10, 2018 at 6:42 PM, Sasaki Toru 
> wrote:
> 
>> Hi all,
>> 
>> I would like to start the vote on KIP-278: Add version option to Kafka's
>> commands.
>> 
>> The link to this KIP is here:
>> > +Add+version+option+to+Kafka%27s+commands>
>> 
>> The discussion thread is here:
>> 
>> 
>> 
>> Many thanks,
>> Sasaki
>> 
>> --
>> Sasaki Toru(sasaki...@oss.nttdata.com) NTT DATA CORPORATION
>> 
>> 



Re: [VOTE] KIP-284 Set default retention ms for Streams repartition topics to Long.MAX_VALUE

2018-04-09 Thread zhenya Sun
+1


from my iphone!
On 04/09/2018 17:27, khaireddine Rezgui wrote:
Hi guys,

I created this thread to get your agreement about the new default value of
retention.ms in Kafka Streams repartition topics.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-284%3A+Set+default+retention+ms+for+Streams+repartition+topics+to+Long.MAX_VALUE

Thanks,


Re: [VOTE] #2 KIP-248: Create New ConfigCommand That Uses The New AdminClient

2018-04-15 Thread zhenya Sun
non-binding +1





from my iphone!
On 04/15/2018 15:41, Attila Sasvári wrote:
Thanks for updating the KIP.

+1 (non-binding)

Viktor Somogyi <viktorsomo...@gmail.com> wrote (on Mon, Apr 9, 2018, at 16:49):

> Hi Magnus,
>
> Thanks for the heads up, added the endianness to the KIP. Here is the
> current text:
>
> "Double
> A new type needs to be added to transfer quota values. Since the protocol
> classes in Kafka already use ByteBuffers, it is logical to use their
> functionality for serializing doubles. The serialization is basically a
> representation of the specified floating-point value according to the IEEE
> 754 floating-point "double format" bit layout. The ByteBuffer serializer
> writes eight bytes containing the given double value, in Big Endian byte
> order, into this buffer at the current position, and then increments the
> position by eight.
>
> The implementation will be defined in
> org.apache.kafka.common.protocol.types with the other protocol types and it
> will have no default value much like the other types available in the
> protocol."
>
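The quoted description maps almost directly onto java.nio.ByteBuffer, which is big-endian by default. A minimal sketch of the DOUBLE type as described (editorial, not the actual org.apache.kafka.common.protocol.types implementation; `DoubleTypeSketch` is an invented name):

```java
import java.nio.ByteBuffer;

// Editorial sketch of the DOUBLE protocol type described above: eight bytes,
// IEEE 754 "double format" bit layout, big-endian, written at the buffer's
// current position, which then advances by eight.
class DoubleTypeSketch {
    static void write(ByteBuffer buffer, double value) {
        buffer.putDouble(value); // big-endian by default; position += 8
    }

    static double read(ByteBuffer buffer) {
        return buffer.getDouble(); // reads 8 bytes; position += 8
    }
}
```

Reading the same eight bytes back as a long reproduces exactly the bits Double.doubleToLongBits would give, which is the wire-compatibility guarantee non-Java clients need.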
> Also, I haven't changed the protocol docs yet but will do so in my PR for
> this feature.
>
> Let me know if you'd still add something.
>
> Regards,
> Viktor
>
>
> On Mon, Apr 9, 2018 at 3:32 PM, Magnus Edenhill <mag...@edenhill.se>
> wrote:
>
> > Hi Viktor,
> >
> > since serialization of floats isn't as straightforward as integers,
> > please specify the exact serialization format of DOUBLE in the protocol
> > docs (e.g., IEEE 754), including endianness (big-endian please).
> >
> > This will help the non-java client ecosystem.
> >
> > Thanks,
> > Magnus
> >
> >
> > 2018-04-09 15:16 GMT+02:00 Viktor Somogyi <viktorsomo...@gmail.com>:
> >
> > > Hi Attila,
> > >
> > > 1. It uses ByteBuffers, which in turn will use Double.doubleToLongBits
> > > to convert the double value to a long, and that long will be written in
> > > the buffer. I've updated the KIP with this.
> > > 2. Good idea, modified it.
> > > 3. During the discussion I remember we didn't really decide which one
> > > would be the better one, but I agree that a wrapper class that makes
> > > sure the list used as a key is immutable is a good idea and would ease
> > > the life of people using the interface. More importantly, it would also
> > > make sure that we always use the same hashCode. I have created wrapper
> > > classes for the map value as well, but that was reverted to keep things
> > > consistent. Although for the key I think we wouldn't break consistency.
> > > I updated the KIP.
> > >
> > > Thanks,
> > > Viktor
> > >
> > >
> > > On Tue, Apr 3, 2018 at 1:27 PM, Attila Sasvári <asasv...@apache.org>
> > > wrote:
> > >
> > > > Thanks for working on it Viktor.
> > > >
> > > > It looks good to me, but I have some questions:
> > > > - I see a new type DOUBLE is used for quota_value, and it is not
> > > > listed among the primitive types in the Kafka protocol guide. Can you
> > > > add some more details?
> > > > - I am not sure that using an environment variable (i.e.
> > > > USE_OLD_COMMAND) is the best way to control the behaviour of
> > > > kafka-configs.sh. In other scripts (e.g. console-consumer) an
> > > > argument is passed (e.g. --new-consumer). If we still want to use
> > > > it, then I would suggest something like USE_OLD_KAFKA_CONFIG_COMMAND.
> > > > What do you think?
> > > > - I have seen maps like Map<List, Collection>. If List is the key
> > > > type, you should make sure that this List is immutable. Have you
> > > > considered introducing a new wrapper class?
> > > >
> > > > Regards,
> > > > - Attila
> > > >
> > > > On Thu, Mar 29, 2018 at 1:46 PM, zhenya Sun <toke...@126.com> wrote:
> > > > >
> > > > >
> > > > >
> > > > > +1 (non-binding)
> > > > >
> > > > >
> > > > > zhenya Sun
> > > > > Email: toke...@126.com
> > > > >
> > > > > (Signature customized by NetEase Mail Master)
> > > > >
> > > > > On 03/29/2018 19:40, Sandor Murakozi wrote:
> > > > > +1 (non-binding)

Re: [VOTE] Kafka 2.0.0 in June 2018

2018-04-24 Thread zhenya Sun
+1 (non-binding)





from my iphone!
On 04/24/2018 18:19, Sandor Murakozi wrote:
+1 (non-binding).
Thx Ismael

On Thu, Apr 19, 2018 at 10:55 PM, Matt Farmer  wrote:

> +1 (non-binding). TY!
>
> On Thu, Apr 19, 2018 at 11:56 AM, tao xiao  wrote:
>
> > +1 non-binding. thx Ismael
> >
> > On Thu, 19 Apr 2018 at 23:14 Vahid S Hashemian <
> vahidhashem...@us.ibm.com>
> > wrote:
> >
> > > +1 (non-binding).
> > >
> > > Thanks Ismael.
> > >
> > > --Vahid
> > >
> > >
> > >
> > > From:   Jorge Esteban Quilcate Otoya 
> > > To: dev@kafka.apache.org
> > > Date:   04/19/2018 07:32 AM
> > > Subject:Re: [VOTE] Kafka 2.0.0 in June 2018
> > >
> > >
> > >
> > > +1 (non binding), thanks Ismael!
> > >
> > > On Thu, Apr 19, 2018 at 13:01, Manikumar
> > > (<manikumar.re...@gmail.com>) wrote:
> > >
> > > > +1 (non-binding).
> > > >
> > > > Thanks.
> > > >
> > > > On Thu, Apr 19, 2018 at 3:07 PM, Stephane Maarek <
> > > > steph...@simplemachines.com.au> wrote:
> > > >
> > > > > +1 (non binding). Thanks Ismael!
> > > > >
> > > > > On Thu., 19 Apr. 2018, 2:47 pm Gwen Shapira, 
> > > wrote:
> > > > >
> > > > > > +1 (binding)
> > > > > >
> > > > > > On Wed, Apr 18, 2018 at 11:35 AM, Ismael Juma  >
> > > > wrote:
> > > > > >
> > > > > > > Hi all,
> > > > > > >
> > > > > > > I started a discussion last year about bumping the version of
> the
> > > > June
> > > > > > 2018
> > > > > > > release to 2.0.0[1]. To reiterate the reasons in the original
> > > post:
> > > > > > >
> > > > > > > 1. Adopt KIP-118 (Drop Support for Java 7), which requires a
> > major
> > > > > > version
> > > > > > > bump due to semantic versioning.
> > > > > > >
> > > > > > > 2. Take the chance to remove deprecated code that was
> deprecated
> > > > prior
> > > > > to
> > > > > > > 1.0.0, but not removed in 1.0.0 (e.g. old Scala clients) so
> that
> > > we
> > > > can
> > > > > > > move faster.
> > > > > > >
> > > > > > > One concern that was raised is that we still do not have a
> > rolling
> > > > > > upgrade
> > > > > > > path for the old ZK-based consumers. Since the Scala clients
> > > haven't
> > > > > been
> > > > > > > updated in a long time (they don't support security or the
> latest
> > > > > message
> > > > > > > format), users who need them can continue to use 1.1.0 with no
> > > loss
> > > > of
> > > > > > > functionality.
> > > > > > >
> > > > > > > Since it's already mid-April and people seemed receptive during
> > > the
> > > > > > > discussion last year, I'm going straight to a vote, but we can
> > > > discuss
> > > > > > more
> > > > > > > if needed (of course).
> > > > > > >
> > > > > > > Ismael
> > > > > > >
> > > > > > > [1]
> > > > > > >
> > >
> > > > > > > https://lists.apache.org/thread.html/dd9d3e31d7e9590c1f727ef5560c933281bad0de3134469b7b3c4257@%3Cdev.kafka.apache.org%3E
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > *Gwen Shapira*
> > > > > > Product Manager | Confluent
> > > > > > 650.450.2760 <(650)%20450-2760> | @gwenshap
> > > > > > Follow us: Twitter <https://twitter.com/ConfluentInc> | blog
> > > > > > <http://www.confluent.io/blog>
> > > > > >
> > > > >
> > > >
> > >
> > >
> > >
> > >
> > >
> >
>


Re: [VOTE] KIP-268: Simplify Kafka Streams Rebalance Metadata Upgrade

2018-03-27 Thread zhenya Sun


I'm terribly sorry that I made such a mistake.
Thank you for your suggestion!


zhenya Sun
Email: toke...@126.com

(Signature customized by NetEase Mail Master)

On 03/27/2018 04:38, Matthias J. Sax wrote:
Thanks Ted for pointing out.

Sorry for this mistake Zhenya. You should not have voted twice though ;)

(Even if it was the same email address, so I could have figured this out
by myself -- did not pay close enough attention.)

No big deal. The KIP is accepted with +6/+3 votes then.


-Matthias

On 3/26/18 12:54 PM, Ted Yu wrote:
> Congratulations.
>
> BTW I believe 孙振亚 and Zhenya is the same person - Zhenya is the PinYin of 振亚
>
> Cheers
>
> On Mon, Mar 26, 2018 at 12:51 PM, Matthias J. Sax <matth...@confluent.io>
> wrote:
>
>> +1 (binding)
>>
>>
>> I am also closing this vote. The KIP is accepted with
>>
>> +7 non-binding (Richard, Ted, 孙振亚, Bill, James, John, Zhenya)
>> +3 binding (Damian, Guozhang, Matthias)
>>
>> votes.
>>
>>
>> Thanks a lot!
>>
>>
>> -Matthias
>>
>>
>> On 3/22/18 4:13 PM, zhenya Sun wrote:
>>> +1
>>>
>>>
>>> zhenya Sun
>>> Email: toke...@126.com
>>>
>>> (Signature customized by NetEase Mail Master)
>>>
>>> On 03/23/2018 03:34, James Cheng wrote:
>>> +1 (non-binding)
>>>
>>> -James
>>>
>>>> On Mar 21, 2018, at 2:28 AM, Damian Guy <damian@gmail.com> wrote:
>>>>
>>>> +1
>>>>
>>>> On Wed, 21 Mar 2018 at 01:44 abel-sun <sunzhenya5611...@gmail.com>
>> wrote:
>>>>
>>>>>
>>>>>   Thank you for your offer, I agree with you!
>>>>>
>>>>> On 2018/03/21 00:56:11, Richard Yu <yohan.richard...@gmail.com> wrote:
>>>>>> Hi Matthias,
>>>>>> Thanks for setting up the upgrade path.
>>>>>>
>>>>>> +1 (non-binding)
>>>>>>
>>>>>> On Tue, Mar 20, 2018 at 3:42 PM, Matthias J. Sax <
>> matth...@confluent.io>
>>>>>> wrote:
>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> I would like to start the vote for KIP-268:
>>>>>>>
>>>>>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>>>>>>> 268%3A+Simplify+Kafka+Streams+Rebalance+Metadata+Upgrade
>>>>>>>
>>>>>>> PR https://github.com/apache/kafka/pull/4636 contains the fixes to
>>>>>>> upgrade from metadata version 1 to 2. Some tests are still missing
>> but
>>>>>>> I'll add them asap.
>>>>>>>
>>>>>>> For "version probing" including new metadata version 3 I plan to do a
>>>>>>> follow-up PR after PR-4636 is merged.
>>>>>>>
>>>>>>>
>>>>>>> -Matthias
>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>
>>
>



Re: [ANNOUNCE] New Committer: Dong Lin

2018-03-28 Thread zhenya Sun
Congratulations!



zhenya Sun
Email: toke...@126.com

(Signature customized by NetEase Mail Master)

On 03/29/2018 02:03, Bill Bejeck wrote:
Congrats Dong!

On Wed, Mar 28, 2018 at 1:58 PM, Ted Yu <yuzhih...@gmail.com> wrote:

> Congratulations, Dong.
>
> On Wed, Mar 28, 2018 at 10:58 AM, Becket Qin <becket@gmail.com> wrote:
>
> > Hello everyone,
> >
> > The PMC of Apache Kafka is pleased to announce that Dong Lin has accepted
> > our invitation to be a new Kafka committer.
> >
> > Dong started working on Kafka about four years ago, since which he has
> > contributed numerous features and patches. His work on Kafka core has
> been
> > consistent and important. Among his contributions, most noticeably, Dong
> > developed JBOD (KIP-112, KIP-113) to handle disk failures and to reduce
> > overall cost, and added the deleteDataBefore() API (KIP-107) to allow users
> > to actively remove old messages. Dong has also been active in the community,
> > participating in KIP discussions and doing code reviews.
> >
> > Congratulations and looking forward to your future contribution, Dong!
> >
> > Jiangjie (Becket) Qin, on behalf of Apache Kafka PMC
> >
>


Re: [ANNOUNCE] Apache Kafka 1.1.0 Released

2018-03-29 Thread zhenya Sun
good !!

> On Mar 29, 2018, at 5:45 PM, Mickael Maison wrote:
> 
> Great news, thanks Damian and Rajini for running this release!
> 
> On Thu, Mar 29, 2018 at 10:33 AM, Rajini Sivaram
>  wrote:
>> Resending to kafka-clients group:
>> 
>> -- Forwarded message --
>> From: Rajini Sivaram 
>> Date: Thu, Mar 29, 2018 at 10:27 AM
>> Subject: [ANNOUNCE] Apache Kafka 1.1.0 Released
>> To: annou...@apache.org, Users , dev <
>> dev@kafka.apache.org>, kafka-clients 
>> 
>> 
>> The Apache Kafka community is pleased to announce the release for
>> 
>> Apache Kafka 1.1.0.
>> 
>> 
>> Kafka 1.1.0 includes a number of significant new features.
>> 
>> Here is a summary of some notable changes:
>> 
>> 
>> ** Kafka 1.1.0 includes significant improvements to the Kafka Controller
>> 
>>   that speed up controlled shutdown. ZooKeeper session expiration edge
>> cases
>> 
>>   have also been fixed as part of this effort.
>> 
>> 
>> ** Controller improvements also enable more partitions to be supported on a
>> 
>>   single cluster. KIP-227 introduced incremental fetch requests, providing
>> 
>>   more efficient replication when the number of partitions is large.
>> 
>> 
>> ** KIP-113 added support for replica movement between log directories to
>> 
>>   enable data balancing with JBOD.
>> 
>> 
>> ** Some of the broker configuration options like SSL keystores can now be
>> 
>>   updated dynamically without restarting the broker. See KIP-226 for
>> details
>> 
>>   and the full list of dynamic configs.
>> 
>> 
>> ** Delegation token based authentication (KIP-48) has been added to Kafka
>> 
>>   brokers to support large number of clients without overloading Kerberos
>> 
>>   KDCs or other authentication servers.
>> 
>> 
>> ** Several new features have been added to Kafka Connect, including header
>> 
>>   support (KIP-145), SSL and Kafka cluster identifiers in the Connect REST
>> 
>>   interface (KIP-208 and KIP-238), validation of connector names (KIP-212)
>> 
>>   and support for topic regex in sink connectors (KIP-215). Additionally,
>> 
>>   the default maximum heap size for Connect workers was increased to 2GB.
>> 
>> 
>> ** Several improvements have been added to the Kafka Streams API, including
>> 
>>   reducing repartition topic partitions footprint, customizable error
>> 
>>   handling for produce failures and enhanced resilience to broker
>> 
>>   unavailability.  See KIPs 205, 210, 220, 224 and 239 for details.
>> 
>> 
>> All of the changes in this release can be found in the release notes:
>> 
>> 
>> 
>> https://dist.apache.org/repos/dist/release/kafka/1.1.0/RELEASE_NOTES.html
>> 
>> 
>> 
>> 
>> You can download the source release from:
>> 
>> 
>> 
>> https://www.apache.org/dyn/closer.cgi?path=/kafka/1.1.0/kafka-1.1.0-src.tgz
>> 
>> 
>> 
>> and binary releases from:
>> 
>> 
>> 
>> https://www.apache.org/dyn/closer.cgi?path=/kafka/1.1.0/kafka_2.11-1.1.0.tgz
>> 
>> (Scala 2.11)
>> 
>> 
>> https://www.apache.org/dyn/closer.cgi?path=/kafka/1.1.0/kafka_2.12-1.1.0.tgz
>> 
>> (Scala 2.12)
>> 
>> 
>> 
>> --
>> 
>> 
>> 
>> Apache Kafka is a distributed streaming platform with four core APIs:
>> 
>> 
>> 
>> ** The Producer API allows an application to publish a stream of records to
>> 
>> one or more Kafka topics.
>> 
>> 
>> 
>> ** The Consumer API allows an application to subscribe to one or more
>> 
>> topics and process the stream of records produced to them.
>> 
>> 
>> 
>> ** The Streams API allows an application to act as a stream processor,
>> 
>> consuming an input stream from one or more topics and producing an output
>> 
>> stream to one or more output topics, effectively transforming the input
>> 
>> streams to output streams.
>> 
>> 
>> 
>> ** The Connector API allows building and running reusable producers or
>> 
>> consumers that connect Kafka topics to existing applications or data
>> 
>> systems. For example, a connector to a relational database might capture
>> 
>> every change to a table.
>> 
>> 
>> 
>> 
>> With these APIs, Kafka can be used for two broad classes of application:
>> 
>> ** Building real-time streaming data pipelines that reliably get data
>> 
>> between systems or applications.
>> 
>> 
>> 
>> ** Building real-time streaming applications that transform or react to the
>> 
>> streams of data.
>> 
>> 
>> 
>> 
>> Apache Kafka is in use at large and small companies worldwide, including
>> 
>> Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
>> 
>> Target, The New York Times, Uber, Yelp, and Zalando, among others.
>> 
>> 
>> 
>> 
>> A big thank you for the following 120 contributors to this release!
>> 
>> 
>> Adem Efe Gencer, Alex Good, Andras Beni, Andy Bryant, Antony Stubbs,
>> 
>> Apurva Mehta, Arjun Satish, bartdevylder, Bill Bejeck, Charly Molter,
>> 

Re: [VOTE] #2 KIP-248: Create New ConfigCommand That Uses The New AdminClient

2018-03-29 Thread zhenya Sun


+1 (non-binding)


zhenya Sun
Email: toke...@126.com

(Signature customized by NetEase Mail Master)

On 03/29/2018 19:40, Sandor Murakozi wrote:
+1 (non-binding)

Thanks for the KIP, Viktor

On Wed, Mar 21, 2018 at 5:41 PM, Viktor Somogyi <viktorsomo...@gmail.com>
wrote:

> Hi Everyone,
>
> I've started a vote on KIP-248
> <https://cwiki.apache.org/confluence/display/KAFKA/KIP-248+-+Create+New+
> ConfigCommand+That+Uses+The+New+AdminClient#KIP-248-
> CreateNewConfigCommandThatUsesTheNewAdminClient-DescribeQuotas>
> a few weeks ago but at the time I got a couple more comments and it was
> very close to 1.1 feature freeze, people were occupied with that, so I
> wanted to restart the vote on this.
>
>
> *Summary of the KIP*
> For those who don't have context I thought I'd summarize it in a few
> sentence.
> *Problem & Motivation: *The basic problem that the KIP tries to solve is
> that kafka-configs.sh (which in turn uses the ConfigCommand class) uses a
> direct zookeeper connection. This is not desirable as getting around the
> broker opens up security issues and prevents the tool from being used in
> deployments where only the brokers are exposed to clients. Also a somewhat
> smaller motivation is to rewrite the tool in java as part of the tools
> component so we can get rid of requiring the core module on the classpath
> for the kafka-configs tool.
> *Solution:*
> - I've designed 2 new protocols: DescribeQuotas and AlterQuotas.
> - Also redesigned the output format of the command line tool so it provides
> a nicer result.
> - kafka-configs.[sh/bat] will use a new java based ConfigCommand that is
> placed in tools.
>
>
> I'd be happy to receive any votes or feedback on this.
>
> Regards,
> Viktor
>


Re: [jira] [Created] (KAFKA-6709) broker failed to handle request due to OOM

2018-03-25 Thread zhenya Sun
Why don't you increase your machine's memory?




zhenya Sun
Email: toke...@126.com

(Signature customized by NetEase Mail Master)

On 03/24/2018 17:27, Zou Tao (JIRA) wrote:
Zou Tao created KAFKA-6709:
--

Summary: broker failed to handle request due to OOM
Key: KAFKA-6709
URL: https://issues.apache.org/jira/browse/KAFKA-6709
Project: Kafka
 Issue Type: Bug
 Components: core
   Affects Versions: 1.0.1
   Reporter: Zou Tao


I have updated to release 1.0.1.

I set up a cluster with four brokers.
You can find the server.properties in the attachment.
There are about 150 topics and about 4000 partitions in total, with a
ReplicationFactor of 2.
Connectors are used to write/read data to/from the brokers.
The connector version is 0.10.1.
The average message size is 500B, at around 6 messages per second.
One of the brokers keeps reporting OOM and can't handle requests like:

[2018-03-24 12:37:17,449] ERROR [KafkaApi-1001] Error when handling request 
\{replica_id=-1,max_wait_time=500,min_bytes=1,topics=[{topic=voltetraffica.data,partitions=[{partition=16,fetch_offset=51198,max_bytes=60728640},\{partition=12,fetch_offset=50984,max_bytes=60728640}]}]}
 (kafka.server.KafkaApis)
java.lang.OutOfMemoryError: Java heap space
at java.nio.HeapByteBuffer.<init>(HeapByteBuffer.java:57)
at java.nio.ByteBuffer.allocate(ByteBuffer.java:335)
at 
org.apache.kafka.common.record.AbstractRecords.downConvert(AbstractRecords.java:101)
at 
org.apache.kafka.common.record.FileRecords.downConvert(FileRecords.java:253)
at 
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:525)
at 
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1$$anonfun$apply$4.apply(KafkaApis.scala:523)
at scala.Option.map(Option.scala:146)
at 
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:523)
at 
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$convertedPartitionData$1$1.apply(KafkaApis.scala:513)
at scala.Option.flatMap(Option.scala:171)
at 
kafka.server.KafkaApis.kafka$server$KafkaApis$$convertedPartitionData$1(KafkaApis.scala:513)
at 
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:561)
at 
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$createResponse$2$1.apply(KafkaApis.scala:560)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
kafka.server.KafkaApis.kafka$server$KafkaApis$$createResponse$2(KafkaApis.scala:560)
at 
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:574)
at 
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$fetchResponseCallback$1$1.apply(KafkaApis.scala:574)
at 
kafka.server.KafkaApis$$anonfun$sendResponseMaybeThrottle$1.apply$mcVI$sp(KafkaApis.scala:2041)
at 
kafka.server.ClientRequestQuotaManager.maybeRecordAndThrottle(ClientRequestQuotaManager.scala:54)
at 
kafka.server.KafkaApis.sendResponseMaybeThrottle(KafkaApis.scala:2040)
at 
kafka.server.KafkaApis.kafka$server$KafkaApis$$fetchResponseCallback$1(KafkaApis.scala:574)
at 
kafka.server.KafkaApis$$anonfun$kafka$server$KafkaApis$$processResponseCallback$1$1.apply$mcVI$sp(KafkaApis.scala:593)
at 
kafka.server.ClientQuotaManager.maybeRecordAndThrottle(ClientQuotaManager.scala:176)
at 
kafka.server.KafkaApis.kafka$server$KafkaApis$$processResponseCallback$1(KafkaApis.scala:592)
at 
kafka.server.KafkaApis$$anonfun$handleFetchRequest$4.apply(KafkaApis.scala:609)
at 
kafka.server.KafkaApis$$anonfun$handleFetchRequest$4.apply(KafkaApis.scala:609)
at kafka.server.ReplicaManager.fetchMessages(ReplicaManager.scala:820)
at kafka.server.KafkaApis.handleFetchRequest(KafkaApis.scala:601)
at kafka.server.KafkaApis.handle(KafkaApis.scala:99)

and then lots of ISR shrinking (this broker is 1001).

[2018-03-24 13:43:00,285] INFO [Partition gnup.source.offset.storage.topic-5 
broker=1001] Shrinking ISR from 1001,1002 to 1001 (kafka.cluster.Partition)
[2018-03-24 13:43:00,286] INFO [Partition s1mme.data-72 broker=1001] Shrinking 
ISR from 1001,1002 to 1001 (kafka.cluster.Partition)
[2018-03-24 13:43:00,286] INFO [Partition gnup.sink.status.storage.topic-17 
broker=1001] Shrinking ISR from 1001,1002 to 1001 (kafka.cluster.Partition)
[2018-03-24 13:43:00,287] INFO [Partition 
probessgsniups.sink.offset.storage.topic-4 broker=1001] Shrinking ISR from 
1001,1002 to 1001 (kafka.cluster.Partition)
[2018-03-24 13

Re: [VOTE] KIP-268: Simplify Kafka Streams Rebalance Metadata Upgrade

2018-03-22 Thread zhenya Sun
+1


zhenya Sun
Email: toke...@126.com

(Signature customized by NetEase Mail Master)

On 03/23/2018 03:34, James Cheng wrote:
+1 (non-binding)

-James

> On Mar 21, 2018, at 2:28 AM, Damian Guy <damian@gmail.com> wrote:
>
> +1
>
> On Wed, 21 Mar 2018 at 01:44 abel-sun <sunzhenya5611...@gmail.com> wrote:
>
>>
>>   Thank you for your offer, I agree with you!
>>
>> On 2018/03/21 00:56:11, Richard Yu <yohan.richard...@gmail.com> wrote:
>>> Hi Matthias,
>>> Thanks for setting up the upgrade path.
>>>
>>> +1 (non-binding)
>>>
>>> On Tue, Mar 20, 2018 at 3:42 PM, Matthias J. Sax <matth...@confluent.io>
>>> wrote:
>>>
>>>> Hi,
>>>>
>>>> I would like to start the vote for KIP-268:
>>>>
>>>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>>>> 268%3A+Simplify+Kafka+Streams+Rebalance+Metadata+Upgrade
>>>>
>>>> PR https://github.com/apache/kafka/pull/4636 contains the fixes to
>>>> upgrade from metadata version 1 to 2. Some tests are still missing but
>>>> I'll add them asap.
>>>>
>>>> For "version probing" including new metadata version 3 I plan to do a
>>>> follow-up PR after PR-4636 is merged.
>>>>
>>>>
>>>> -Matthias
>>>>
>>>>
>>>
>>


Re: Gradle strategy for exposing and using public test-utils modules

2018-03-22 Thread zhenya Sun
+1
> On Mar 23, 2018, at 12:20 PM, Ted Yu wrote:
> 
> +1
> -------- Original message --------
> From: "Matthias J. Sax"
> Date: 3/22/18 9:07 PM (GMT-08:00)
> To: dev@kafka.apache.org
> Subject: Re: Gradle strategy for exposing and using public test-utils modules
> +1 from my side.
> 
> -Matthias
> 
> On 3/22/18 5:12 PM, John Roesler wrote:
>> Yep, I'm super happy with this approach vs. a third module just for the
>> tests.
>> 
>> For clairty, here's a PR demonstrating the model we're proposing:
>> https://github.com/apache/kafka/pull/4760
>> 
>> Thanks,
>> -John
>> 
>> On Thu, Mar 22, 2018 at 6:21 PM, Guozhang Wang  wrote:
>> 
>>> I'm +1 to the approach as well across modules that are going to have test
>>> utils artifacts in the future. To me this seems to be a much smaller change
>>> we can make to break the circular dependencies than creating a new package
>>> for our own testing code.
>>> 
>>> Guozhang
>>> 
>>> On Thu, Mar 22, 2018 at 1:26 PM, Bill Bejeck  wrote:
>>> 
 John,
 
 Thanks for the clear, detailed explanation.
 
 I'm +1 on what you have proposed.
 While I agree with you that manually pulling in transitive test dependencies
 is not ideal, in this case I think it's worth it to get over the circular
 dependency hurdle and use streams:test-utils ourselves.
 
 -Bill
 
 On Thu, Mar 22, 2018 at 4:09 PM, John Roesler  wrote:
 
> Hey everyone,
> 
> In 1.1, kafka-streams adds an artifact called 'kafka-streams-test-utils'
> (see https://kafka.apache.org/11/documentation/streams/developer-guide/testing.html).
> 
> The basic idea is to provide first-class support for testing Kafka
 Streams
> applications. Without that, users were forced to either depend on our
> internal test artifacts or develop their own test utilities, neither of
> which is ideal.
> 
> I think it would be great if all our APIs offered a similar module, and
> it would also be good if we followed a similar pattern, so I'll describe
> the streams approach along with one challenge we had to overcome:
> 
> =====================
> = Project Structure =
> =====================
> 
> The directory structure goes:
> 
> kafka/streams/    <- main module code here
>    /test-utils/   <- test utilities module here
>    /examples/     <- example usages here
> 
> Likewise, the artifacts are:
> 
> kafka-streams
> kafka-streams-test-utils
> kafka-streams-examples
> 
> And finally, the Gradle build structure is:
> 
> :streams
> :streams:test-utils
> :streams:examples
> 
> 
> =============================
> = Problem 1: circular build =
> =============================
> 
> In eat-your-own-dogfood tradition, we wanted to depend on our own
> test-utils in our streams tests, but :streams:test-utils (obviously)
> depends on :streams already.
> 
> (:streams) <-- (:streams:test-utils)
>      \--->
> 
> Luckily, Filipe Agapito found a way out of the conundrum (
> https://issues.apache.org/jira/browse/KAFKA-6474?focusedCommentId=16402326&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16402326).
> Many thanks to him for this contribution.
> 
> * Add this to the ':streams' definition:
>  testCompile project(':streams:test-utils').sourceSets.main.output
> 
> * And this to the ':streams:test-utils' definition:
>  compile project(':streams').sourceSets.main.output
> 
> * And finally (because we also have tests for the examples), add this
>>> to
> the ':streams:examples' definition:
>  testCompile project(':streams:test-utils')
> 
> 
> 
> By scoping the dependencies to 'sourceSets.main', we break the cyclic
> dependency:
> 
> (:streams main) <-- (:streams:test-utils main)
>       ^                 ^   ^
>       |                /    |
>       |               /     |
> (:streams test) -----/   (:streams:test-utils test)
> 
> 
> ==
> = Problem 2: missing transitive dependencies =
> ==
> 
> Scoping the dependency to source-only skips copying transitive library
> dependencies into the build & test environment, so we ran into the
> following error in our tests for ':streams:test-utils' :
> 
> java.lang.ClassNotFoundException: org.rocksdb.RocksDBException
> 
> This kind of thing is easy to resolve, once you understand why it
 happens.
> We just added this to the :test-utils build definition:
>  testCompile libs.rocksDBJni
> 
> It's a little unfortunate to 

Re: [VOTE] 1.1.1 RC0

2018-06-21 Thread zhenya Sun
+1 non-binding 

> On Jun 21, 2018, at 2:18 PM, Andras Beni wrote:
> 
> +1 (non-binding)
> 
> Built .tar.gz, created a cluster from it and ran a basic end-to-end test:
> performed a rolling restart while console-producer and console-consumer ran
> at around 20K messages/sec. No errors or data loss.
> 
> Ran unit and integration tests successfully 3 out of 5 times. Encountered
> some flakies:
> - DescribeConsumerGroupTest.testDescribeGroupWithShortInitializationTimeout
> - LogDirFailureTest.testIOExceptionDuringCheckpoint
> - SimpleAclAuthorizerTest.testHighConcurrencyModificationOfResourceAcls
> 
> 
> Andras
> 
> 
> On Wed, Jun 20, 2018 at 4:59 AM Ted Yu  wrote:
> 
>> +1
>> 
>> Ran unit test suite which passed.
>> 
>> Checked signatures.
>> 
>> On Tue, Jun 19, 2018 at 4:47 PM, Dong Lin  wrote:
>> 
>>> Re-send to kafka-clie...@googlegroups.com
>>> 
>>> On Tue, Jun 19, 2018 at 4:29 PM, Dong Lin  wrote:
>>> 
 Hello Kafka users, developers and client-developers,
 
 This is the first candidate for release of Apache Kafka 1.1.1.
 
 Apache Kafka 1.1.1 is a bug-fix release for the 1.1 branch that was
>> first
 released with 1.1.0 about 3 months ago. We have fixed about 25 issues
>>> since
 that release. A few of the more significant fixes include:
 
 KAFKA-6925  - Fix
 memory leak in StreamsMetricsThreadImpl
 KAFKA-6937  -
>> In-sync
 replica delayed during fetch if replica throttle is exceeded
 KAFKA-6917  -
>> Process
 txn completion asynchronously to avoid deadlock
 KAFKA-6893  - Create
 processors before starting acceptor to avoid ArithmeticException
 KAFKA-6870  -
 Fix ConcurrentModificationException in SampledStat
 KAFKA-6878  - Fix
 NullPointerException when querying global state store
 KAFKA-6879  - Invoke
 session init callbacks outside lock to avoid Controller deadlock
 KAFKA-6857  -
>> Prevent
 follower from truncating to the wrong offset if undefined leader epoch
>> is
 requested
 KAFKA-6854  - Log
 cleaner fails with transaction markers that are deleted during clean
 KAFKA-6747  - Check
 whether there is in-flight transaction before aborting transaction
 KAFKA-6748  - Double
 check before scheduling a new task after the punctuate call
 KAFKA-6739  -
 Fix IllegalArgumentException when down-converting from V2 to V0/V1
 KAFKA-6728  -
 Fix NullPointerException when instantiating the HeaderConverter
 
 Kafka 1.1.1 release plan:
 https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.1.1
 
 Release notes for the 1.1.1 release:
 http://home.apache.org/~lindong/kafka-1.1.1-rc0/RELEASE_NOTES.html
 
 *** Please download, test and vote by Thursday, Jun 22, 12pm PT ***
 
 Kafka's KEYS file containing PGP keys we use to sign the release:
 http://kafka.apache.org/KEYS
 
 * Release artifacts to be voted upon (source and binary):
 http://home.apache.org/~lindong/kafka-1.1.1-rc0/
 
 * Maven artifacts to be voted upon:
 https://repository.apache.org/content/groups/staging/
 
 * Tag to be voted upon (off 1.1 branch) is the 1.1.1-rc0 tag:
 https://github.com/apache/kafka/tree/1.1.1-rc0
 
 * Documentation:
 http://kafka.apache.org/11/documentation.html
 
 * Protocol:
 http://kafka.apache.org/11/protocol.html
 
 * Successful Jenkins builds for the 1.1 branch:
 Unit/integration tests: https://builds.apache.org/job/kafka-1.1-jdk7/150/
 
 Please test and verify the release artifacts and submit a vote for this
>>> RC,
 or report any issues so we can fix them and get a new RC out ASAP.
>>> Although
 this release vote requires PMC votes to pass, testing, votes, and bug
 reports are valuable and appreciated from everyone.
 
 Cheers,
 Dong
 
 
 
>>> 
>>