Re: [DISCUSS] KIP-269: Substitution Within Configuration Values

2018-03-28 Thread Ron Dagostino
Hi everyone.  There have been no comments on this KIP, so I intend to put
it to a vote next week if there are no comments that might entail changes
between now and then.  Please take a look in the meantime if you wish.

Ron

On Thu, Mar 15, 2018 at 2:36 PM, Ron Dagostino  wrote:

> Hi everyone.
>
> I created KIP-269: Substitution Within Configuration Values
> 
> (https://cwiki.apache.org/confluence/display/KAFKA/KIP+269+Substitution+Within+Configuration+Values).
>
> This KIP proposes adding support for substitution within client JAAS
> configuration values for PLAIN and SCRAM-related SASL mechanisms in a
> backwards-compatible manner and making the functionality available to other
> existing (or future) configuration contexts where it is deemed appropriate.
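
A minimal sketch of the general idea follows; the $[env=NAME] token syntax
and the helper class are illustrative assumptions only, not necessarily the
KIP's actual substitution grammar.

    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    // Replaces $[env=NAME] tokens in a config value with environment
    // variables; purely illustrative of "substitution within values".
    public class SubstitutionSketch {
        private static final Pattern ENV_TOKEN =
                Pattern.compile("\\$\\[env=([A-Za-z_][A-Za-z0-9_]*)\\]");

        public static String substitute(String configValue) {
            Matcher m = ENV_TOKEN.matcher(configValue);
            StringBuffer result = new StringBuffer();
            while (m.find()) {
                String v = System.getenv(m.group(1));
                m.appendReplacement(result,
                        Matcher.quoteReplacement(v == null ? "" : v));
            }
            m.appendTail(result);
            return result.toString();
        }
    }

Under that assumed syntax, a JAAS option such as
password="$[env=SCRAM_PASSWORD]" would resolve against the environment
instead of being stored verbatim in the configuration file.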
>
> This KIP was extracted from (and is now a prerequisite for) KIP-255:
> OAuth Authentication via SASL/OAUTHBEARER, based on discussion of that KIP.
>
> Ron
>


Re: [DISCUSS] KIP-255: OAuth Authentication via SASL/OAUTHBEARER

2018-03-28 Thread Ron Dagostino
Hi Rajini.  I have adjusted the KIP to use callbacks and callback handlers
throughout.  I also clarified that production implementations of the
retrieval and validation callback handlers will require the use of an open
source JWT library, and the unsecured implementations are as far as
SASL/OAUTHBEARER will go out-of-the-box. Your suggestions, plus this
clarification, have allowed much of the code to move into the ".internal"
java package; the public-facing API now consists of just 8 Java classes, 1
Java interface, and a set of configuration requirements.  I also added a
section outlining those configuration requirements since they are extensive
(not onerously so -- just not something one can easily remember).
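
For reference, "unsecured" here means the token's claims are decoded but no
signature is verified. A minimal sketch of that, with an assumed class name
(production code would use a real JWT library, as noted above):

    import java.nio.charset.StandardCharsets;
    import java.util.Base64;

    // Illustrative only: an unsecured JWT is three base64url sections with
    // an empty, unverified signature.
    public class UnsecuredJwtSketch {
        public static String claimsJson(String compactJwt) {
            // Keep trailing empty strings so "header.payload." yields 3 parts.
            String[] parts = compactJwt.split("\\.", -1);
            if (parts.length != 3)
                throw new IllegalArgumentException("expected header.payload.signature");
            return new String(Base64.getUrlDecoder().decode(parts[1]),
                    StandardCharsets.UTF_8);
        }
    }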

Ron

On Tue, Mar 13, 2018 at 11:44 AM, Rajini Sivaram 
wrote:

> Hi Ron,
>
> Thanks for the response. All sounds good; I think the only outstanding
> question is around callbacks vs classes provided through the login context.
> As you have pointed out, there are advantages of both approaches. Even
> though my preference is for callbacks, it is not a blocker since the
> current approach works fine too. I will make the case for callbacks anyway,
> using OAuthTokenValidator as an example:
>
>
>- As you mentioned, the main advantage of using callbacks is
>consistency. It is the standard plug-in mechanism for SASL
> implementations
>in Java and keeps code consistent with built-in mechanisms like
> Kerberos as
>well as our own implementations like PLAIN and SCRAM.
>- With the current approach, there are two classes OAuthTokenValidator
>and a default implementation OAuthBearerUnsecuredJwtValidator. I was
>thinking that we would have a public callback class
> OAuthTokenValidatorCallback
>instead and a default callback handler
>OAuthBearerUnsecuredJwtValidatorCallbackHandler. So it would be two
>classes either way?
>- JAAS config is very opaque, we don't log it because it could contain
>passwords. Your option substitution classes could help here, but it has
>generally made it difficult to diagnose failures in the past. Callback
>    handlers on the other hand are logged as part of the broker configs
> and
>can be easily made dynamically updatable.
>    - In the current implementation, an instance of OAuthTokenValidator
>    is created and configured for every SaslServer, i.e. every connection. We
>create one server callback handler instance per mechanism and cache it.
>This is useful if we need to make an external connection or load trust
>stores etc.
>
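
The second bullet above proposes a callback class plus a default callback
handler; a rough sketch of that shape, using the names suggested in the
bullet (all details here are assumptions, not a final design):

    import java.io.IOException;
    import javax.security.auth.callback.Callback;
    import javax.security.auth.callback.CallbackHandler;
    import javax.security.auth.callback.UnsupportedCallbackException;

    // Carries the token in and the validation result out.
    class OAuthTokenValidatorCallback implements Callback {
        private final String compactJwt;
        private String principal;   // set by the handler on success
        private String error;       // set by the handler on failure

        OAuthTokenValidatorCallback(String compactJwt) { this.compactJwt = compactJwt; }
        String compactJwt() { return compactJwt; }
        void validated(String principal) { this.principal = principal; }
        void error(String error) { this.error = error; }
    }

    // Default "unsecured" handler: accepts the token without verifying it.
    class OAuthBearerUnsecuredJwtValidatorCallbackHandler implements CallbackHandler {
        @Override
        public void handle(Callback[] callbacks)
                throws IOException, UnsupportedCallbackException {
            for (Callback cb : callbacks) {
                if (cb instanceof OAuthTokenValidatorCallback)
                    ((OAuthTokenValidatorCallback) cb).validated("unsecuredUser");
                else
                    throw new UnsupportedCallbackException(cb);
            }
        }
    }

Because the SaslServer only sees the CallbackHandler interface, one
configured handler instance per mechanism can be cached and reused across
connections, which is the point of the last bullet.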
> For token retriever, I think either approach is fine, since it is tied in
> with login anyway and would benefit from login manager cache either way.
>
> Regards,
>
> Rajini
>
> On Sat, Mar 10, 2018 at 4:19 AM, Ron Dagostino  wrote:
>
> > Hi Rajini.  Thanks for the great feedback.  See below for my
> > thoughts/conclusions.  I haven't implemented any of it yet or changed the
> > KIP, but I will start to work on the areas where we are in agreement
> > immediately, and I will await your feedback on the areas where an
> > additional iteration is needed to arrive at a conclusion.
> >
> > Regarding (1), yes, we can and should eliminate some public API.  See
> > below.
> >
> > Regarding (2), I will change the exception hierarchy so that it is
> > unchecked.
> >
> > Regarding (3) and (4), yes, I agree, the expiring/refresh code can and
> > should be simplified.  The name of the Login class (I called it
> > ExpiringCredentialRefreshingLogin) must be part of the public API
> because
> > it is the class that must be set via the oauthbearer.sasl.login.class
> > property.  Its underlying implementation doesn't have to be public, but
> the
> > fully-qualified name has to be well-known and fixed so that it can be
> > associated with that configuration property.  As you point out, we are
> not
> > unifying the refresh logic for OAUTHBEARER and GSSAPI, though it could be
> > undertaken at some point in the future; the name "
> > ExpiringCredentialRefreshingLogin" should probably be used if/when that
> > unification happens.  In the meantime, the class that we expose should
> > probably be called "OAuthBearerLogin", and its fully-qualified name and
> > the fact that it recognizes several refresh-related property names in the
> > config, with certain min/max/default values, are the only things that
> > should be public.  I also agree from (4) that we can stipulate that
> > SASL/OAUTHBEARER only supports the case where OAUTHBEARER is the only
> SASL
> > mechanism communicated to the code, either because there is only one SASL
> > mechanism defined for the cluster or because the config is done via the
> new
> > dynamic functionality from KIP-226 that eliminates the
> > mechanism-to-login-module ambiguity associated with declaring multiple
> SASL
> > mechanisms in a single JAAS config file.  Given all of this, everything I
> > defined for token refresh could be 
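
For context on the configuration mechanism being discussed, a sketch of how
a client might wire in the login class by name; the property key follows the
draft quoted above and the package is an assumption, so the shipped names
may well differ:

    import java.util.Properties;

    public class OAuthBearerClientConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("security.protocol", "SASL_SSL");
            props.put("sasl.mechanism", "OAUTHBEARER");
            // Property name per the KIP draft discussed in this thread; the
            // class's package is assumed here for illustration.
            props.put("oauthbearer.sasl.login.class",
                    "org.apache.kafka.common.security.oauthbearer.OAuthBearerLogin");
            System.out.println(props);
        }
    }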

Jenkins build is back to normal : kafka-trunk-jdk8 #2512

2018-03-28 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-275 - Indicate "isClosing" in the SinkTaskContext

2018-03-28 Thread Ted Yu
I looked at WorkerSinkTask and it seems using a boolean for KIP-275 should
suffice for now.

Thanks

On Wed, Mar 28, 2018 at 7:20 PM, Matt Farmer  wrote:

> Hey Ted,
>
> I have not, actually!
>
> Do you think that we're likely to add multiple states here soon?
>
> My instinct is to keep it simple until there are multiple states that we
> would want
> to consider. I really like the simplicity of just getting a boolean and the
> implementation of WorkerSinkTask already passes around a boolean to
> indicate this is happening internally. We're really just shuttling that
> value into
> the context at the correct moments.
>
> Once we have multiple states, we could choose to provide a more
> appropriately
> named method (e.g. getState?) and reimplement isClosing by checking that
> enum
> without breaking compatibility.
>
> However, if we think multiple states here are imminent for some reason, it
> would
> be pretty easy to convince me that adding that would be worth the extra
> complexity! :)
>
> Matt
>
> —
> Matt Farmer | Blog  | Twitter
> 
> GPG: CD57 2E26 F60C 0A61 E6D8  FC72 4493 8917 D667 4D07
>
> On Wed, Mar 28, 2018 at 10:02 PM, Ted Yu  wrote:
>
> > The enhancement gives SinkTaskContext state information.
> >
> > Have you thought of exposing the state retrieval as an enum (initially
> with
> > two values) ?
> >
> > Thanks
> >
> > On Wed, Mar 28, 2018 at 6:55 PM, Matt Farmer  wrote:
> >
> > > Hello all,
> > >
> > > I am proposing KIP-275 to improve Connect's SinkTaskContext so that
> Sinks
> > > can be informed
> > > in their preCommit hook if the hook is being invoked as a part of a
> > > rebalance or Connect
> > > shutdown.
> > >
> > > The KIP is here:
> > > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75977607
> > >
> > > Please let me know what feedback y'all have. Thanks!
> > >
> >
>


Re: [VOTE] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2018-03-28 Thread James Cheng
+1 (non-binding)

Thanks for all the hard work on this, Vahid!

-James

> On Mar 28, 2018, at 10:34 AM, Vahid S Hashemian  
> wrote:
> 
> Hi all,
> 
> As I believe the feedback and suggestions on this KIP have been addressed 
> so far, I'd like to start a vote.
> The KIP can be found at 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets
> 
> Thanks in advance for voting :)
> 
> --Vahid
> 



Jenkins build is back to normal : kafka-trunk-jdk7 #3298

2018-03-28 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-275 - Indicate "isClosing" in the SinkTaskContext

2018-03-28 Thread Matt Farmer
Hey Ted,

I have not, actually!

Do you think that we're likely to add multiple states here soon?

My instinct is to keep it simple until there are multiple states that we
would want
to consider. I really like the simplicity of just getting a boolean and the
implementation of WorkerSinkTask already passes around a boolean to
indicate this is happening internally. We're really just shuttling that
value into
the context at the correct moments.

Once we have multiple states, we could choose to provide a more
appropriately
named method (e.g. getState?) and reimplement isClosing by checking that
enum
without breaking compatibility.
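
A sketch of that compatibility path, assuming hypothetical names (TaskState,
getState) that are not part of KIP-275:

    // Hypothetical evolution of the KIP-275 API: isClosing() survives as a
    // default method once a richer state enum is introduced.
    public interface SinkTaskContextStateSketch {
        enum TaskState { RUNNING, CLOSING }   // more states could be added later

        TaskState getState();                 // newer, richer accessor

        default boolean isClosing() {         // original boolean API, unchanged
            return getState() == TaskState.CLOSING;
        }
    }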

However, if we think multiple states here are imminent for some reason, it
would
be pretty easy to convince me that adding that would be worth the extra
complexity! :)

Matt

—
Matt Farmer | Blog  | Twitter

GPG: CD57 2E26 F60C 0A61 E6D8  FC72 4493 8917 D667 4D07

On Wed, Mar 28, 2018 at 10:02 PM, Ted Yu  wrote:

> The enhancement gives SinkTaskContext state information.
>
> Have you thought of exposing the state retrieval as an enum (initially with
> two values) ?
>
> Thanks
>
> On Wed, Mar 28, 2018 at 6:55 PM, Matt Farmer  wrote:
>
> > Hello all,
> >
> > I am proposing KIP-275 to improve Connect's SinkTaskContext so that Sinks
> > can be informed
> > in their preCommit hook if the hook is being invoked as a part of a
> > rebalance or Connect
> > shutdown.
> >
> > The KIP is here:
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75977607
> >
> > Please let me know what feedback y'all have. Thanks!
> >
>


Build failed in Jenkins: kafka-trunk-jdk9 #517

2018-03-28 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] HOTFIX: ignoring tests using old versions of Streams until KIP-268 is

--
[...truncated 1.48 MB...]

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToSpecificOffset 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftPlus STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftPlus PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsNewConsumerExistingTopic STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsNewConsumerExistingTopic PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumGenerateOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumGenerateOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowThrottleWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowThrottleWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled 
STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldAllowThrottleOptionOnExecute STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldAllowThrottleOptionOnExecute PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithBrokers STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithBrokers PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithTopicsOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithTopicsOption PASSED


Re: [DISCUSS] KIP-275 - Indicate "isClosing" in the SinkTaskContext

2018-03-28 Thread Ted Yu
The enhancement gives SinkTaskContext state information.

Have you thought of exposing the state retrieval as an enum (initially with
two values) ?

Thanks

On Wed, Mar 28, 2018 at 6:55 PM, Matt Farmer  wrote:

> Hello all,
>
> I am proposing KIP-275 to improve Connect's SinkTaskContext so that Sinks
> can be informed
> in their preCommit hook if the hook is being invoked as a part of a
> rebalance or Connect
> shutdown.
>
> The KIP is here:
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75977607
>
> Please let me know what feedback y'all have. Thanks!
>


[jira] [Created] (KAFKA-6725) Indicate "isClosing" in the SinkTaskContext

2018-03-28 Thread Matt Farmer (JIRA)
Matt Farmer created KAFKA-6725:
--

 Summary: Indicate "isClosing" in the SinkTaskContext
 Key: KAFKA-6725
 URL: https://issues.apache.org/jira/browse/KAFKA-6725
 Project: Kafka
  Issue Type: New Feature
Reporter: Matt Farmer


Addition of the isClosing method to SinkTaskContext per this KIP.

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75977607



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[DISCUSS] KIP-275 - Indicate "isClosing" in the SinkTaskContext

2018-03-28 Thread Matt Farmer
Hello all,

I am proposing KIP-275 to improve Connect's SinkTaskContext so that Sinks
can be informed
in their preCommit hook if the hook is being invoked as a part of a
rebalance or Connect
shutdown.

The KIP is here:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75977607

Please let me know what feedback y'all have. Thanks!


Re: [DISCUSS] KIP-274: Kafka Streams Skipped Records Metrics

2018-03-28 Thread Ted Yu
Looks good to me.

On Wed, Mar 28, 2018 at 3:11 PM, John Roesler  wrote:

> Hello all,
>
> I am proposing KIP-274 to improve the metrics around skipped records in
> Streams.
>
> Please find the details here:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-274%3A+Kafka+Streams+Skipped+Records+Metrics
>
> Please let me know what you think!
>
> Thanks,
> -John
>


Build failed in Jenkins: kafka-trunk-jdk9 #516

2018-03-28 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Downgrade to Gradle 4.5.1 (#4791)

--
[...truncated 1.48 MB...]
kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToSpecificOffset 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToSpecificOffset 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftPlus STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsShiftPlus PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLatest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsNewConsumerExistingTopic STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsNewConsumerExistingTopic PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsShiftByLowerThanEarliest PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsByDuration PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToLocalDateTime 
PASSED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions STARTED

kafka.admin.ResetConsumerGroupOffsetTest > 
testResetOffsetsToEarliestOnTopicsAndPartitions PASSED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
STARTED

kafka.admin.ResetConsumerGroupOffsetTest > testResetOffsetsToEarliestOnTopics 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfBlankArg PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowVerifyWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowTopicsOptionWithVerify PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithThrottleOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldFailIfNoArgs PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithoutReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowBrokersListWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumExecuteOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithReassignmentOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumGenerateOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldCorrectlyParseValidMinimumGenerateOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowGenerateWithoutBrokersAndTopicsOptions PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowThrottleWithVerifyOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowThrottleWithVerifyOption PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled 
STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > shouldUseDefaultsIfEnabled 
PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldAllowThrottleOptionOnExecute STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldAllowThrottleOptionOnExecute PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithBrokers STARTED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithBrokers PASSED

kafka.admin.ReassignPartitionsCommandArgsTest > 
shouldNotAllowExecuteWithTopicsOption STARTED

kafka.admin.ReassignPartitionsCommandArgsTest 

Re: [ANNOUNCE] New Committer: Dong Lin

2018-03-28 Thread Hu Xi
Congrats, Dong Lin!



From: Matthias J. Sax 
Sent: March 29, 2018, 6:37
To: us...@kafka.apache.org; dev@kafka.apache.org
Subject: Re: [ANNOUNCE] New Committer: Dong Lin

Congrats!

On 3/28/18 1:16 PM, James Cheng wrote:
> Congrats, Dong!
>
> -James
>
>> On Mar 28, 2018, at 10:58 AM, Becket Qin  wrote:
>>
>> Hello everyone,
>>
>> The PMC of Apache Kafka is pleased to announce that Dong Lin has accepted
>> our invitation to be a new Kafka committer.
>>
>> Dong started working on Kafka about four years ago, since which he has
>> contributed numerous features and patches. His work on Kafka core has been
>> consistent and important. Among his contributions, most noticeably, Dong
>> developed JBOD (KIP-112, KIP-113) to handle disk failures and to reduce
>> overall cost, added deleteDataBefore() API (KIP-107) to allow users
>> to actively remove old messages. Dong has also been active in the community,
>> participating in KIP discussions and doing code reviews.
>>
>> Congratulations and looking forward to your future contribution, Dong!
>>
>> Jiangjie (Becket) Qin, on behalf of Apache Kafka PMC
>



Build failed in Jenkins: kafka-trunk-jdk7 #3297

2018-03-28 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Change getMessage to toString (#4790)

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H21 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 28f1fc2f55269f43f0b2bb769b78f80b8cc9cf51 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 28f1fc2f55269f43f0b2bb769b78f80b8cc9cf51
Commit message: "MINOR: Change getMessage to toString (#4790)"
 > git rev-list --no-walk 659fbb0b06a79fa7e94ac0d68925b1718ed3f214 # timeout=10
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/jenkins3462452320975004434.sh
+ rm -rf 
+ /home/jenkins/tools/gradle/3.5/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/3.5/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing
Building project 'core' with Scala version 2.11.12
Cleaned up directory 
'
Cleaned up directory 
'
Cleaned up directory 
'
:downloadWrapper

BUILD SUCCESSFUL

Total time: 19.1 secs
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/jenkins5789820978412680727.sh
+ export 'GRADLE_OPTS=-Xmx1024m -XX:MaxPermSize=256m'
+ GRADLE_OPTS='-Xmx1024m -XX:MaxPermSize=256m'
+ ./gradlew --no-daemon -PmaxParallelForks=1 
-PtestLoggingEvents=started,passed,skipped,failed -PxmlFindBugsReport=true 
clean testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/4.6/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing
Building project 'core' with Scala version 2.11.12
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean
:examples:clean
:jmh-benchmarks:clean UP-TO-DATE
:log4j-appender:clean
:streams:clean
:tools:clean UP-TO-DATE
:connect:api:clean
:connect:file:clean UP-TO-DATE
:connect:json:clean
:connect:runtime:clean UP-TO-DATE
:connect:transforms:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:streams:test-utils:clean UP-TO-DATE
:test_core_2_11
Building project 'core' with Scala version 2.11.12
:test_core_2_11 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Could not determine the dependencies of task 
':kafka-trunk-jdk7:clients:compileJava'.
> Could not create service of type AnnotationProcessorDetector using 
> JavaGradleScopeServices.createAnnotationProcessorDetector().

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 5.0.
See 
https://docs.gradle.org/4.6/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 15s
17 actionable tasks: 8 executed, 9 up-to-date
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=28f1fc2f55269f43f0b2bb769b78f80b8cc9cf51, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #3280
Recording test results
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
Not sending mail to 

Re: [ANNOUNCE] New Committer: Dong Lin

2018-03-28 Thread Matthias J. Sax
Congrats!

On 3/28/18 1:16 PM, James Cheng wrote:
> Congrats, Dong!
> 
> -James
> 
>> On Mar 28, 2018, at 10:58 AM, Becket Qin  wrote:
>>
>> Hello everyone,
>>
>> The PMC of Apache Kafka is pleased to announce that Dong Lin has accepted
>> our invitation to be a new Kafka committer.
>>
>> Dong started working on Kafka about four years ago, since which he has
>> contributed numerous features and patches. His work on Kafka core has been
>> consistent and important. Among his contributions, most noticeably, Dong
>> developed JBOD (KIP-112, KIP-113) to handle disk failures and to reduce
>> overall cost, added deleteDataBefore() API (KIP-107) to allow users
>> to actively remove old messages. Dong has also been active in the community,
>> participating in KIP discussions and doing code reviews.
>>
>> Congratulations and looking forward to your future contribution, Dong!
>>
>> Jiangjie (Becket) Qin, on behalf of Apache Kafka PMC
> 





Build failed in Jenkins: kafka-trunk-jdk8 #2511

2018-03-28 Thread Apache Jenkins Server
See 


Changes:

[github] MINOR: Change getMessage to toString (#4790)

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 28f1fc2f55269f43f0b2bb769b78f80b8cc9cf51 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 28f1fc2f55269f43f0b2bb769b78f80b8cc9cf51
Commit message: "MINOR: Change getMessage to toString (#4790)"
 > git rev-list --no-walk 659fbb0b06a79fa7e94ac0d68925b1718ed3f214 # timeout=10
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/jenkins5665307415867882142.sh
+ rm -rf 
+ /home/jenkins/tools/gradle/4.4/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/4.4.1/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing
Building project 'core' with Scala version 2.11.12
:downloadWrapper

BUILD SUCCESSFUL in 12s
1 actionable task: 1 executed
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/jenkins3890543956191425753.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew --no-daemon -PmaxParallelForks=1 
-PtestLoggingEvents=started,passed,skipped,failed -PxmlFindBugsReport=true 
clean testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/4.6/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing
Building project 'core' with Scala version 2.11.12
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:jmh-benchmarks:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:connect:transforms:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:streams:test-utils:clean UP-TO-DATE
:test_core_2_11
Building project 'core' with Scala version 2.11.12
:test_core_2_11 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Could not determine the dependencies of task 
':kafka-trunk-jdk8:clients:compileJava'.
> Could not create service of type AnnotationProcessorDetector using 
> JavaGradleScopeServices.createAnnotationProcessorDetector().

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

BUILD FAILED in 13s
17 actionable tasks: 2 executed, 15 up-to-date
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=28f1fc2f55269f43f0b2bb769b78f80b8cc9cf51, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #2477
Recording test results
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
Not sending mail to unregistered user ism...@juma.me.uk
Not sending mail to unregistered user rajinisiva...@googlemail.com
Not sending mail to unregistered user git...@alasdairhodge.co.uk
Not sending mail to unregistered user wangg...@gmail.com


[DISCUSS] KIP-274: Kafka Streams Skipped Records Metrics

2018-03-28 Thread John Roesler
Hello all,

I am proposing KIP-274 to improve the metrics around skipped records in
Streams.

Please find the details here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-274%3A+Kafka+Streams+Skipped+Records+Metrics

Please let me know what you think!

Thanks,
-John


Re: [DISCUSS] KIP-273 Kafka to support using ETCD beside Zookeeper

2018-03-28 Thread Tom Lee
Not sure if this is entirely relevant, but figure folks might be
interested: it's early days for us, but we're currently running Kafka+ZK in
AWS on Kubernetes StatefulSets with the magic annotation to publish "not
ready" addresses (can't recall what it is off the top of my head). It seems
stable so far and we can deploy & redeploy without issue. The jury is still
out on whether running Kafka+ZK on Kubernetes is *advisable*, but I can
talk about what we had to do to get to where we are.

The "worst" thing we had to do was patch the ZK client libs on the Kafka
side to replace the default HostProvider implementation with a special
"kubernetes-friendly" HostProvider to force DNS lookups: default behavior
of StaticHostProvider is to resolve once & cache IPs at the time the client
is created. We also had to configure our JVM DNS TTLs to essentially
nothing. We were bitten by KAFKA-2729 in one of our testing environments
for reasons I still don't fully understand, so we're chomping at the bit
for 1.1.0. :)

Otherwise ... so far so good -- but again, very early days.

Also maybe interesting is that the ZK 3.5 client libs allow pluggable
HostProviders, which will give you folks (Kafka devs) more control over the
ZK DNS lookups without needing to do crazy patching stuff like we did. All
the necessary mechanics to do this are in ZK 3.4.x, but not exposed at the
API level. I've contemplated sending a patch upstream to see if they'd
consider allowing a system property to control the HostProvider
implementation in 3.4 as sort of a poor-man's backport -- then we wouldn't
need to patch the client libs to make things play nice with Kubernetes. No
idea what the temperature would be on that, though.
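
For the curious, a rough sketch of what a DNS-forcing HostProvider looks
like against the ZooKeeper client API; this is a simplified illustration of
the idea, not the actual patch described above:

    import java.net.InetSocketAddress;
    import java.util.List;
    import org.apache.zookeeper.client.HostProvider;

    // Round-robins over the configured hosts but resolves DNS on every
    // attempt, so a rescheduled pod's new IP is picked up immediately.
    public class ResolvingHostProvider implements HostProvider {
        private final List<InetSocketAddress> hosts; // unresolved host:port pairs
        private int current = 0;

        public ResolvingHostProvider(List<InetSocketAddress> unresolvedHosts) {
            this.hosts = unresolvedHosts;
        }

        @Override
        public int size() {
            return hosts.size();
        }

        @Override
        public InetSocketAddress next(long spinDelay) {
            if (spinDelay > 0) {
                try {
                    Thread.sleep(spinDelay);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
            InetSocketAddress host = hosts.get(current);
            current = (current + 1) % hosts.size();
            // Fresh lookup each call, instead of the resolve-once-and-cache
            // behavior of the default StaticHostProvider.
            return new InetSocketAddress(host.getHostString(), host.getPort());
        }

        @Override
        public void onConnected() { }
    }

Note that the JVM itself also caches lookups (typically controlled via the
networkaddress.cache.ttl security property), hence the near-zero DNS TTLs
mentioned above.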

Anyway: don't consider this an endorsement of the idea and I'm not familiar
enough with etcd to comment on whether ZK is remarkably more difficult to
manage as the KIP seems to imply, just wanted to comment on our experience
so far.

Cheers,
Tom


On Wed, Mar 28, 2018 at 2:00 PM, Ismael Juma  wrote:

> Here's a link to the previous discussion:
>
> https://lists.apache.org/thread.html/2bc187040051008452b40b313db06b476c248ef7a5ed7529afe7b118@1448997154@%3Cdev.kafka.apache.org%3E
>
> Ismael
>
> On Wed, Mar 28, 2018 at 10:40 AM, Ismael Juma  wrote:
>
> > Hi Gwen,
> >
> > I don't think the reasons why a pluggable metastore is not desirable have
> > changed. My suggestion is that the KIP should try to address the concerns
> > raised previously as part of the proposal.
> >
> > Ismael
> >
> > On Wed, Mar 28, 2018 at 10:24 AM, Gwen Shapira 
> wrote:
> >
> >> While I'm not in favor of the proposal, I want to point out that the
> >> ecosystem changed quite a bit since KIP-30 was first proposed.
> Kubernetes
> >> deployments are far more common now and are growing in popularity, and
> the
> >> problem in deployment, discovery and management that ZK poses is
> therefore
> >> more relevant now than it was at the time. There are reasons for the
> >> community to change its collective mind even if the objections are still
> >> valid.
> >>
> >> Since the KIP doesn't include the etcd implementation, the proposal
> looks
> >> like very simple refactoring. Of course, the big change is a new public
> >> API. But it's difficult to judge from the KIP if the API is a good one
> >> because it is built to 100% match the one implementation we have. I'm
> >> curious if the plan includes contributing the Etcd module to Apache
> Kafka?
> >>
> >>
> >> On Wed, Mar 28, 2018 at 9:54 AM, Ismael Juma  wrote:
> >>
> >> > Thanks for the KIP. This was proposed previously via "KIP-30 Allow for
> >> > brokers to have plug-able consensus and meta data storage sub systems"
> >> and
> >> > the community was not in favour. Have you considered the points
> >> discussed
> >> > then?
> >> >
> >> > Ismael
> >> >
> >> > On Wed, Mar 28, 2018 at 9:18 AM, Molnár Bálint <
> molnarcsi...@gmail.com>
> >> > wrote:
> >> >
> >> > > Hi all,
> >> > >
> >> > > I have created KIP-273: Kafka to support using ETCD beside Zookeeper
> >> > >
> >> > > Here is the link to the KIP:
> >> > >
> >> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-273+-+Kafka+to+support+using+ETCD+beside+Zookeeper
> >> > >
> >> > > Looking forward to the discussion.
> >> > >
> >> > > Thanks,
> >> > > Balint
> >> > >
> >> >
> >>
> >>
> >>
> >> --
> >> *Gwen Shapira*
> >> Product Manager | Confluent
> >> 650.450.2760 | @gwenshap
> >> Follow us: Twitter  | blog
> >> 
> >>
> >
> >
>



-- 
*Tom Lee */ http://tomlee.co / @tglee 


[jira] [Resolved] (KAFKA-6723) Separate "max.poll.record" for restore consumer and common consumer

2018-03-28 Thread Guozhang Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guozhang Wang resolved KAFKA-6723.
--
Resolution: Duplicate

> Separate "max.poll.record" for restore consumer and common consumer
> ---
>
> Key: KAFKA-6723
> URL: https://issues.apache.org/jira/browse/KAFKA-6723
> Project: Kafka
>  Issue Type: Improvement
>  Components: streams
>Reporter: Boyang Chen
>Assignee: Boyang Chen
>Priority: Minor
>
> Currently, Kafka Streams uses the `max.poll.records` config for both the 
> restore consumer and the normal stream consumer. In reality, they are doing 
> different processing workloads, and in order to speed up restoration, the 
> restore consumer should achieve higher throughput by setting `max.poll.records` 
> higher. The change involved is trivial: 
> [https://github.com/abbccdda/kafka/commit/cace25b74f31c8da79e93b514bcf1ed3ea9a7149]
> However, this is still a public API change (introducing a new config name), 
> so we need a KIP.
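
A sketch of the kind of configuration the change would enable; the
"restore.consumer." prefix is an assumed shape for the proposed config, not
necessarily the ticket's final design:

    import java.util.Properties;

    public class RestoreConsumerConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Main consumer keeps a modest batch size...
            props.put("max.poll.records", "500");
            // ...while the restore consumer gets a larger one to speed up
            // state restoration ("restore.consumer." prefix is assumed).
            props.put("restore.consumer.max.poll.records", "5000");
            System.out.println(props);
        }
    }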



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [DISCUSS] KIP-273 Kafka to support using ETCD beside Zookeeper

2018-03-28 Thread Ismael Juma
Here's a link to the previous discussion:

https://lists.apache.org/thread.html/2bc187040051008452b40b313db06b476c248ef7a5ed7529afe7b118@1448997154@%3Cdev.kafka.apache.org%3E

Ismael

On Wed, Mar 28, 2018 at 10:40 AM, Ismael Juma  wrote:

> Hi Gwen,
>
> I don't think the reasons why a pluggable metastore is not desirable have
> changed. My suggestion is that the KIP should try to address the concerns
> raised previously as part of the proposal.
>
> Ismael
>
> On Wed, Mar 28, 2018 at 10:24 AM, Gwen Shapira  wrote:
>
>> While I'm not in favor of the proposal, I want to point out that the
>> ecosystem changed quite a bit since KIP-30 was first proposed. Kubernetes
>> deployments are far more common now and are growing in popularity, and the
>> problem in deployment, discovery and management that ZK poses is therefore
>> more relevant now than it was at the time. There are reasons for the
>> community to change its collective mind even if the objections are still
>> valid.
>>
>> Since the KIP doesn't include the etcd implementation, the proposal looks
>> like very simple refactoring. Of course, the big change is a new public
>> API. But it's difficult to judge from the KIP if the API is a good one
>> because it is built to 100% match the one implementation we have. I'm
>> curious if the plan includes contributing the Etcd module to Apache Kafka?
>>
>>
>> On Wed, Mar 28, 2018 at 9:54 AM, Ismael Juma  wrote:
>>
>> > Thanks for the KIP. This was proposed previously via "KIP-30 Allow for
>> > brokers to have plug-able consensus and meta data storage sub systems"
>> and
>> > the community was not in favour. Have you considered the points
>> discussed
>> > then?
>> >
>> > Ismael
>> >
>> > On Wed, Mar 28, 2018 at 9:18 AM, Molnár Bálint 
>> > wrote:
>> >
>> > > Hi all,
>> > >
>> > > I have created KIP-273: Kafka to support using ETCD beside Zookeeper
>> > >
>> > > Here is the link to the KIP:
>> > >
>> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-273+-+Kafka+to+support+using+ETCD+beside+Zookeeper
>> > >
>> > > Looking forward to the discussion.
>> > >
>> > > Thanks,
>> > > Balint
>> > >
>> >
>>
>>
>>
>> --
>> *Gwen Shapira*
>> Product Manager | Confluent
>> 650.450.2760 | @gwenshap
>> Follow us: Twitter  | blog
>> 
>>
>
>


Re: [ANNOUNCE] New Committer: Dong Lin

2018-03-28 Thread James Cheng
Congrats, Dong!

-James

> On Mar 28, 2018, at 10:58 AM, Becket Qin  wrote:
> 
> Hello everyone,
> 
> The PMC of Apache Kafka is pleased to announce that Dong Lin has accepted
> our invitation to be a new Kafka committer.
> 
> Dong started working on Kafka about four years ago, since which he has
> contributed numerous features and patches. His work on Kafka core has been
> consistent and important. Among his contributions, most noticeably, Dong
> developed JBOD (KIP-112, KIP-113) to handle disk failures and to reduce
> overall cost, added deleteDataBefore() API (KIP-107) to allow users
> to actively remove old messages. Dong has also been active in the community,
> participating in KIP discussions and doing code reviews.
> 
> Congratulations and looking forward to your future contribution, Dong!
> 
> Jiangjie (Becket) Qin, on behalf of Apache Kafka PMC



Re: [VOTE] KIP-272: Add API version tag to broker's RequestsPerSec metric

2018-03-28 Thread Ted Yu
+1

On Wed, Mar 28, 2018 at 12:05 PM, Mickael Maison 
wrote:

> +1 (non binding)
> Thanks for the KIP
>
> On Wed, Mar 28, 2018 at 6:25 PM, Gwen Shapira  wrote:
> > +1 (binding)
> >
> > On Wed, Mar 28, 2018 at 9:55 AM, Allen Wang 
> wrote:
> >
> >> Hi All,
> >>
> >> I would like to start voting for KIP-272:  Add API version tag to
> broker's
> >> RequestsPerSec metric.
> >>
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-272%3A+Add+API+version+tag+to+broker%27s+RequestsPerSec+metric
> >>
> >> Thanks,
> >> Allen
> >>
> >
> >
> >
> > --
> > *Gwen Shapira*
> > Product Manager | Confluent
> > 650.450.2760 | @gwenshap
> > Follow us: Twitter  | blog
> > 
>


Re: [ANNOUNCE] New Committer: Dong Lin

2018-03-28 Thread Paolo Patierno
Congrats Dong !!

From: Becket Qin 
Sent: Wednesday, March 28, 2018 7:58:07 PM
To: dev; us...@kafka.apache.org
Subject: [ANNOUNCE] New Committer: Dong Lin

Hello everyone,

The PMC of Apache Kafka is pleased to announce that Dong Lin has accepted
our invitation to be a new Kafka committer.

Dong started working on Kafka about four years ago, since which he has
contributed numerous features and patches. His work on Kafka core has been
consistent and important. Among his contributions, most noticeably, Dong
developed JBOD (KIP-112, KIP-113) to handle disk failures and to reduce
overall cost, added deleteDataBefore() API (KIP-107) to allow users
to actively remove old messages. Dong has also been active in the community,
participating in KIP discussions and doing code reviews.

Congratulations and looking forward to your future contribution, Dong!

Jiangjie (Becket) Qin, on behalf of Apache Kafka PMC


Re: [ANNOUNCE] New Committer: Dong Lin

2018-03-28 Thread Jason Gustafson
Congrats Dong!

On Wed, Mar 28, 2018 at 12:04 PM, Mickael Maison 
wrote:

> Congratulations Dong!
>
> On Wed, Mar 28, 2018 at 7:31 PM, Ismael Juma  wrote:
> > Congratulations Dong! Thanks for your contributions so far and looking
> > forward to future ones.
> >
> > Ismael
> >
> > On Wed, 28 Mar 2018, 10:58 Becket Qin,  wrote:
> >
> >> Hello everyone,
> >>
> >> The PMC of Apache Kafka is pleased to announce that Dong Lin has
> accepted
> >> our invitation to be a new Kafka committer.
> >>
> >> Dong started working on Kafka about four years ago, since which he has
> >> contributed numerous features and patches. His work on Kafka core has
> been
> >> consistent and important. Among his contributions, most noticeably, Dong
> >> developed JBOD (KIP-112, KIP-113) to handle disk failures and to reduce
> >> overall cost, added deleteDataBefore() API (KIP-107) to allow users
> >> to actively remove old messages. Dong has also been active in the
> community,
> >> participating in KIP discussions and doing code reviews.
> >>
> >> Congratulations and looking forward to your future contribution, Dong!
> >>
> >> Jiangjie (Becket) Qin, on behalf of Apache Kafka PMC
> >>
>


Re: [VOTE] KIP-272: Add API version tag to broker's RequestsPerSec metric

2018-03-28 Thread Mickael Maison
+1 (non binding)
Thanks for the KIP

On Wed, Mar 28, 2018 at 6:25 PM, Gwen Shapira  wrote:
> +1 (binding)
>
> On Wed, Mar 28, 2018 at 9:55 AM, Allen Wang  wrote:
>
>> Hi All,
>>
>> I would like to start voting for KIP-272:  Add API version tag to broker's
>> RequestsPerSec metric.
>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-272%3A+Add+API+version+tag+to+broker%27s+RequestsPerSec+metric
>>
>> Thanks,
>> Allen
>>
>
>
>
> --
> *Gwen Shapira*
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter  | blog
> 


Re: [ANNOUNCE] New Committer: Dong Lin

2018-03-28 Thread Mickael Maison
Congratulations Dong!

On Wed, Mar 28, 2018 at 7:31 PM, Ismael Juma  wrote:
> Congratulations Dong! Thanks for your contributions so far and looking
> forward to future ones.
>
> Ismael
>
> On Wed, 28 Mar 2018, 10:58 Becket Qin,  wrote:
>
>> Hello everyone,
>>
>> The PMC of Apache Kafka is pleased to announce that Dong Lin has accepted
>> our invitation to be a new Kafka committer.
>>
>> Dong started working on Kafka about four years ago, since which he has
>> contributed numerous features and patches. His work on Kafka core has been
>> consistent and important. Among his contributions, most noticeably, Dong
>> developed JBOD (KIP-112, KIP-113) to handle disk failures and to reduce
>> overall cost, added deleteDataBefore() API (KIP-107) to allow users
>> to actively remove old messages. Dong has also been active in the community,
>> participating in KIP discussions and doing code reviews.
>>
>> Congratulations and looking forward to your future contribution, Dong!
>>
>> Jiangjie (Becket) Qin, on behalf of Apache Kafka PMC
>>


Re: [ANNOUNCE] New Committer: Dong Lin

2018-03-28 Thread Ismael Juma
Congratulations Dong! Thanks for your contributions so far and looking
forward to future ones.

Ismael

On Wed, 28 Mar 2018, 10:58 Becket Qin,  wrote:

> Hello everyone,
>
> The PMC of Apache Kafka is pleased to announce that Dong Lin has accepted
> our invitation to be a new Kafka committer.
>
> Dong started working on Kafka about four years ago, since which he has
> contributed numerous features and patches. His work on Kafka core has been
> consistent and important. Among his contributions, most noticeably, Dong
> developed JBOD (KIP-112, KIP-113) to handle disk failures and to reduce
> overall cost, added deleteDataBefore() API (KIP-107) to allow users
> to actively remove old messages. Dong has also been active in the community,
> participating in KIP discussions and doing code reviews.
>
> Congratulations and looking forward to your future contribution, Dong!
>
> Jiangjie (Becket) Qin, on behalf of Apache Kafka PMC
>


Re: [ANNOUNCE] New Committer: Dong Lin

2018-03-28 Thread Guozhang Wang
Congratulations Dong!

On Wed, Mar 28, 2018 at 11:19 AM, Jun Rao  wrote:

> Congratulations, Dong! Thanks for all your contributions to Kafka.
>
> Jun
>
> On Wed, Mar 28, 2018 at 10:58 AM, Becket Qin  wrote:
>
> > Hello everyone,
> >
> > The PMC of Apache Kafka is pleased to announce that Dong Lin has accepted
> > our invitation to be a new Kafka committer.
> >
> > Dong started working on Kafka about four years ago, since which he has
> > contributed numerous features and patches. His work on Kafka core has
> been
> > consistent and important. Among his contributions, most noticeably, Dong
> > developed JBOD (KIP-112, KIP-113) to handle disk failures and to reduce
> > overall cost, added deleteDataBefore() API (KIP-107) to allow users
> > to actively remove old messages. Dong has also been active in the community,
> > participating in KIP discussions and doing code reviews.
> >
> > Congratulations and looking forward to your future contribution, Dong!
> >
> > Jiangjie (Becket) Qin, on behalf of Apache Kafka PMC
> >
>



-- 
-- Guozhang


Re: [ANNOUNCE] New Committer: Dong Lin

2018-03-28 Thread Jun Rao
Congratulations, Dong! Thanks for all your contributions to Kafka.

Jun

On Wed, Mar 28, 2018 at 10:58 AM, Becket Qin  wrote:

> Hello everyone,
>
> The PMC of Apache Kafka is pleased to announce that Dong Lin has accepted
> our invitation to be a new Kafka committer.
>
> Dong started working on Kafka about four years ago, since which he has
> contributed numerous features and patches. His work on Kafka core has been
> consistent and important. Among his contributions, most noticeably, Dong
> developed JBOD (KIP-112, KIP-113) to handle disk failures and to reduce
> overall cost, added deleteDataBefore() API (KIP-107) to allow users
> to actively remove old messages. Dong has also been active in the community,
> participating in KIP discussions and doing code reviews.
>
> Congratulations and looking forward to your future contribution, Dong!
>
> Jiangjie (Becket) Qin, on behalf of Apache Kafka PMC
>


Re: [ANNOUNCE] New Committer: Dong Lin

2018-03-28 Thread Rajini Sivaram
Congratulations, Dong!

On Wed, Mar 28, 2018 at 7:06 PM, Vahid S Hashemian <
vahidhashem...@us.ibm.com> wrote:

> Congratulations Dong! Well deserved.
>
>
>
> From:   Bill Bejeck 
> To: dev@kafka.apache.org
> Date:   03/28/2018 11:04 AM
> Subject:Re: [ANNOUNCE] New Committer: Dong Lin
>
>
>
> Congrats Dong!
>
> On Wed, Mar 28, 2018 at 1:58 PM, Ted Yu  wrote:
>
> > Congratulations, Dong.
> >
> > On Wed, Mar 28, 2018 at 10:58 AM, Becket Qin 
> wrote:
> >
> > > Hello everyone,
> > >
> > > The PMC of Apache Kafka is pleased to announce that Dong Lin has
> accepted
> > > our invitation to be a new Kafka committer.
> > >
> > > Dong started working on Kafka about four years ago, since which he has
> > > contributed numerous features and patches. His work on Kafka core has
> > been
> > > consistent and important. Among his contributions, most noticeably,
> Dong
> > > developed JBOD (KIP-112, KIP-113) to handle disk failures and to
> reduce
> > > overall cost, added deleteDataBefore() API (KIP-107) to allow users
> > > to actively remove old messages. Dong has also been active in the
> community,
> > > participating in KIP discussions and doing code reviews.
> > >
> > > Congratulations and looking forward to your future contribution, Dong!
> > >
> > > Jiangjie (Becket) Qin, on behalf of Apache Kafka PMC
> > >
> >
>
>
>
>
>


Build failed in Jenkins: kafka-trunk-jdk8 #2510

2018-03-28 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6724; ConsumerPerformance should not always reset to earliest

[wangguoz] MINOR: Depend on streams:test-utils for streams and examples tests

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 659fbb0b06a79fa7e94ac0d68925b1718ed3f214 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 659fbb0b06a79fa7e94ac0d68925b1718ed3f214
Commit message: "MINOR: Depend on streams:test-utils for streams and examples 
tests (#4760)"
 > git rev-list --no-walk 5d5a2ce4bba4b42b3652d0126d1d2dab979a3e07 # timeout=10
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/jenkins8263028477061057869.sh
+ rm -rf 
+ /home/jenkins/tools/gradle/4.4/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/4.4.1/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing
Building project 'core' with Scala version 2.11.12
:downloadWrapper

BUILD SUCCESSFUL in 14s
1 actionable task: 1 executed
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/jenkins2933121142005448784.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew --no-daemon -PmaxParallelForks=1 
-PtestLoggingEvents=started,passed,skipped,failed -PxmlFindBugsReport=true 
clean testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/4.6/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing
Building project 'core' with Scala version 2.11.12
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:jmh-benchmarks:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:connect:transforms:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:streams:test-utils:clean UP-TO-DATE
:test_core_2_11
Building project 'core' with Scala version 2.11.12
:test_core_2_11 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Could not determine the dependencies of task 
':kafka-trunk-jdk8:clients:compileJava'.
> Could not create service of type AnnotationProcessorDetector using 
> JavaGradleScopeServices.createAnnotationProcessorDetector().

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

BUILD FAILED in 15s
17 actionable tasks: 2 executed, 15 up-to-date
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=659fbb0b06a79fa7e94ac0d68925b1718ed3f214, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #2477
Recording test results
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
Not sending mail to unregistered user ism...@juma.me.uk
Not sending mail to unregistered user rajinisiva...@googlemail.com
Not sending mail to unregistered user git...@alasdairhodge.co.uk
Not sending mail to unregistered user wangg...@gmail.com


Re: [ANNOUNCE] New Committer: Dong Lin

2018-03-28 Thread Vahid S Hashemian
Congratulations Dong! Well deserved.



From:   Bill Bejeck 
To: dev@kafka.apache.org
Date:   03/28/2018 11:04 AM
Subject:Re: [ANNOUNCE] New Committer: Dong Lin



Congrats Dong!

On Wed, Mar 28, 2018 at 1:58 PM, Ted Yu  wrote:

> Congratulations, Dong.
>
> On Wed, Mar 28, 2018 at 10:58 AM, Becket Qin  wrote:
>
> > Hello everyone,
> >
> > The PMC of Apache Kafka is pleased to announce that Dong Lin has accepted
> > our invitation to be a new Kafka committer.
> >
> > Dong started working on Kafka about four years ago, and since then he has
> > contributed numerous features and patches. His work on Kafka core has been
> > consistent and important. Among his contributions, most notably, Dong
> > developed JBOD (KIP-112, KIP-113) to handle disk failures and to reduce
> > overall cost, and added the deleteDataBefore() API (KIP-107) to allow users
> > to actively remove old messages. Dong has also been active in the community,
> > participating in KIP discussions and doing code reviews.
> >
> > Congratulations, and looking forward to your future contributions, Dong!
> >
> > Jiangjie (Becket) Qin, on behalf of Apache Kafka PMC
> >
>






Re: [ANNOUNCE] New Committer: Dong Lin

2018-03-28 Thread zhenya Sun
congratulation!



zhenya Sun
Email: toke...@126.com

Signature customized by NetEase Mail Master

On 03/29/2018 02:03, Bill Bejeck wrote:
Congrats Dong!

On Wed, Mar 28, 2018 at 1:58 PM, Ted Yu  wrote:

> Congratulations, Dong.
>
> On Wed, Mar 28, 2018 at 10:58 AM, Becket Qin  wrote:
>
> > Hello everyone,
> >
> > The PMC of Apache Kafka is pleased to announce that Dong Lin has accepted
> > our invitation to be a new Kafka committer.
> >
> > Dong started working on Kafka about four years ago, and since then he has
> > contributed numerous features and patches. His work on Kafka core has been
> > consistent and important. Among his contributions, most notably, Dong
> > developed JBOD (KIP-112, KIP-113) to handle disk failures and to reduce
> > overall cost, and added the deleteDataBefore() API (KIP-107) to allow users
> > to actively remove old messages. Dong has also been active in the community,
> > participating in KIP discussions and doing code reviews.
> >
> > Congratulations, and looking forward to your future contributions, Dong!
> >
> > Jiangjie (Becket) Qin, on behalf of Apache Kafka PMC
> >
>


Re: [ANNOUNCE] New Committer: Dong Lin

2018-03-28 Thread Bill Bejeck
Congrats Dong!

On Wed, Mar 28, 2018 at 1:58 PM, Ted Yu  wrote:

> Congratulations, Dong.
>
> On Wed, Mar 28, 2018 at 10:58 AM, Becket Qin  wrote:
>
> > Hello everyone,
> >
> > The PMC of Apache Kafka is pleased to announce that Dong Lin has accepted
> > our invitation to be a new Kafka committer.
> >
> > Dong started working on Kafka about four years ago, and since then he has
> > contributed numerous features and patches. His work on Kafka core has been
> > consistent and important. Among his contributions, most notably, Dong
> > developed JBOD (KIP-112, KIP-113) to handle disk failures and to reduce
> > overall cost, and added the deleteDataBefore() API (KIP-107) to allow users
> > to actively remove old messages. Dong has also been active in the community,
> > participating in KIP discussions and doing code reviews.
> >
> > Congratulations, and looking forward to your future contributions, Dong!
> >
> > Jiangjie (Becket) Qin, on behalf of Apache Kafka PMC
> >
>


Build failed in Jenkins: kafka-trunk-jdk7 #3296

2018-03-28 Thread Apache Jenkins Server
See 


Changes:

[jason] KAFKA-6724; ConsumerPerformance should not always reset to earliest

[wangguoz] MINOR: Depend on streams:test-utils for streams and examples tests

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H25 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 659fbb0b06a79fa7e94ac0d68925b1718ed3f214 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 659fbb0b06a79fa7e94ac0d68925b1718ed3f214
Commit message: "MINOR: Depend on streams:test-utils for streams and examples 
tests (#4760)"
 > git rev-list --no-walk 5d5a2ce4bba4b42b3652d0126d1d2dab979a3e07 # timeout=10
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/jenkins4252978132725175364.sh
+ rm -rf 
+ /home/jenkins/tools/gradle/3.5/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/3.5/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing
Building project 'core' with Scala version 2.11.12
Cleaned up directory 
'
Cleaned up directory 
'
Cleaned up directory 
'
Cleaned up directory 
'
Cleaned up directory 
'
Cleaned up directory 
'
:downloadWrapper

BUILD SUCCESSFUL

Total time: 28.097 secs
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/jenkins4979063610516910744.sh
+ export 'GRADLE_OPTS=-Xmx1024m -XX:MaxPermSize=256m'
+ GRADLE_OPTS='-Xmx1024m -XX:MaxPermSize=256m'
+ ./gradlew --no-daemon -PmaxParallelForks=1 
-PtestLoggingEvents=started,passed,skipped,failed -PxmlFindBugsReport=true 
clean testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/4.6/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing
Building project 'core' with Scala version 2.11.12
:clean UP-TO-DATE
:clients:clean
:connect:clean UP-TO-DATE
:core:clean
:examples:clean
:jmh-benchmarks:clean UP-TO-DATE
:log4j-appender:clean
:streams:clean
:tools:clean
:connect:api:clean
:connect:file:clean
:connect:json:clean
:connect:runtime:clean
:connect:transforms:clean
:streams:examples:clean
:streams:test-utils:clean
:test_core_2_11
Building project 'core' with Scala version 2.11.12
:test_core_2_11 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Could not determine the dependencies of task 
':kafka-trunk-jdk7:clients:compileJava'.
> Could not create service of type AnnotationProcessorDetector using 
> JavaGradleScopeServices.createAnnotationProcessorDetector().

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 5.0.
See 
https://docs.gradle.org/4.6/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 21s
17 actionable tasks: 14 executed, 3 up-to-date
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=659fbb0b06a79fa7e94ac0d68925b1718ed3f214, 

Re: [ANNOUNCE] New Committer: Dong Lin

2018-03-28 Thread Ted Yu
Congratulations, Dong.

On Wed, Mar 28, 2018 at 10:58 AM, Becket Qin  wrote:

> Hello everyone,
>
> The PMC of Apache Kafka is pleased to announce that Dong Lin has accepted
> our invitation to be a new Kafka committer.
>
> Dong started working on Kafka about four years ago, and since then he has
> contributed numerous features and patches. His work on Kafka core has been
> consistent and important. Among his contributions, most notably, Dong
> developed JBOD (KIP-112, KIP-113) to handle disk failures and to reduce
> overall cost, and added the deleteDataBefore() API (KIP-107) to allow users
> to actively remove old messages. Dong has also been active in the community,
> participating in KIP discussions and doing code reviews.
>
> Congratulations, and looking forward to your future contributions, Dong!
>
> Jiangjie (Becket) Qin, on behalf of Apache Kafka PMC
>


[ANNOUNCE] New Committer: Dong Lin

2018-03-28 Thread Becket Qin
Hello everyone,

The PMC of Apache Kafka is pleased to announce that Dong Lin has accepted
our invitation to be a new Kafka committer.

Dong started working on Kafka about four years ago, and since then he has
contributed numerous features and patches. His work on Kafka core has been
consistent and important. Among his contributions, most notably, Dong
developed JBOD (KIP-112, KIP-113) to handle disk failures and to reduce
overall cost, and added the deleteDataBefore() API (KIP-107) to allow users
to actively remove old messages. Dong has also been active in the community,
participating in KIP discussions and doing code reviews.

Congratulations, and looking forward to your future contributions, Dong!

Jiangjie (Becket) Qin, on behalf of Apache Kafka PMC


Re: [VOTE] KIP-257 - Configurable Quota Management

2018-03-28 Thread Gwen Shapira
+1

On Thu, Mar 22, 2018 at 2:56 PM, Rajini Sivaram 
wrote:

> Hi all,
>
> I would like to start vote on KIP-257 to enable customisation of client
> quota computation:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 257+-+Configurable+Quota+Management
>
> The KIP proposes to make quota management pluggable to enable group-based
> and partition-based quotas for clients.
>
>
> Thanks,
>
>
> Rajini
>



-- 
*Gwen Shapira*
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter  | blog
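
To make "pluggable quota management" concrete, here is a toy sketch of the
kind of policy plug-in the KIP enables. The interface and method names below
are illustrative assumptions for this thread, not the API proposed in KIP-257
itself:

    import java.util.Map;

    // Hypothetical plug-in point: the broker asks the policy for a byte-rate
    // limit instead of reading a static per-client quota config.
    interface QuotaPolicy {
        double produceByteRateLimit(Map<String, String> metricTags);
    }

    // Toy group-based policy: returns 10 MB/s for clients tagged
    // group=analytics and 1 MB/s for everyone else.
    class GroupQuotaPolicy implements QuotaPolicy {
        public double produceByteRateLimit(Map<String, String> metricTags) {
            String group = metricTags.getOrDefault("group", "default");
            return "analytics".equals(group) ? 10_000_000.0 : 1_000_000.0;
        }
    }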



Re: [DISCUSS] KIP-273 Kafka to support using ETCD beside Zookeeper

2018-03-28 Thread Ismael Juma
Hi Gwen,

I don't think the reasons why a pluggable metastore is not desirable have
changed. My suggestion is that the KIP should try to address the concerns
raised previously as part of the proposal.

Ismael

On Wed, Mar 28, 2018 at 10:24 AM, Gwen Shapira  wrote:

> While I'm not in favor of the proposal, I want to point out that the
> ecosystem changed quite a bit since KIP-30 was first proposed. Kubernetes
> deployments are far more common now and are growing in popularity, and the
> problem in deployment, discovery and management that ZK poses is therefore
> more relevant now than it was at the time. There are reasons for the
> community to change its collective mind even if the objections are still
> valid.
>
> Since the KIP doesn't include the etcd implementation, the proposal looks
> like very simple refactoring. Of course, the big change is a new public
> API. But it's difficult to judge from the KIP if the API is a good one
> because it is built to 100% match the one implementation we have. I'm
> curious if the plan includes contributing the Etcd module to Apache Kafka?
>
>
> On Wed, Mar 28, 2018 at 9:54 AM, Ismael Juma  wrote:
>
> > Thanks for the KIP. This was proposed previously via "KIP-30 Allow for
> > brokers to have plug-able consensus and meta data storage sub systems"
> and
> > the community was not in favour. Have you considered the points discussed
> > then?
> >
> > Ismael
> >
> > On Wed, Mar 28, 2018 at 9:18 AM, Molnár Bálint 
> > wrote:
> >
> > > Hi all,
> > >
> > > I have created KIP-273: Kafka to support using ETCD beside Zookeeper
> > >
> > > Here is the link to the KIP:
> > >
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > 273+-+Kafka+to+support+using+ETCD+beside+Zookeeper
> > >
> > > Looking forward to the discussion.
> > >
> > > Thanks,
> > > Balint
> > >
> >
>
>
>
> --
> *Gwen Shapira*
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter  | blog
> 
>


[VOTE] KIP-211: Revise Expiration Semantics of Consumer Group Offsets

2018-03-28 Thread Vahid S Hashemian
Hi all,

As I believe the feedback and suggestions on this KIP have been addressed,
I'd like to start a vote.
The KIP can be found at 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-211%3A+Revise+Expiration+Semantics+of+Consumer+Group+Offsets

Thanks in advance for voting :)

--Vahid



RE: [VOTE] KIP-257 - Configurable Quota Management

2018-03-28 Thread Koushik Chitta
+1 . Thanks for the KIP.

-----Original Message-----
From: Rajini Sivaram  
Sent: Thursday, March 22, 2018 2:57 PM
To: dev 
Subject: [VOTE] KIP-257 - Configurable Quota Management

Hi all,

I would like to start vote on KIP-257 to enable customisation of client quota 
computation:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-257+-+Configurable+Quota+Management

The KIP proposes to make quota management pluggable to enable group-based and 
partition-based quotas for clients.


Thanks,


Rajini


[jira] [Resolved] (KAFKA-6724) ConsumerPerformance resets offsets on every startup

2018-03-28 Thread Jason Gustafson (JIRA)

 [ 
https://issues.apache.org/jira/browse/KAFKA-6724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-6724.

   Resolution: Fixed
Fix Version/s: 1.2.0

> ConsumerPerformance resets offsets on every startup
> ---
>
> Key: KAFKA-6724
> URL: https://issues.apache.org/jira/browse/KAFKA-6724
> Project: Kafka
>  Issue Type: Bug
>  Components: core, tools
>Affects Versions: 0.11.0.1
>Reporter: Alex Dunayevsky
>Priority: Minor
> Fix For: 1.2.0
>
>
> ConsumerPerformance used in kafka-consumer-perf-test.sh resets offsets for
> its group on every startup.
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-272: Add API version tag to broker's RequestsPerSec metric

2018-03-28 Thread Gwen Shapira
+1 (binding)

On Wed, Mar 28, 2018 at 9:55 AM, Allen Wang  wrote:

> Hi All,
>
> I would like to start voting for KIP-272:  Add API version tag to broker's
> RequestsPerSec metric.
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 272%3A+Add+API+version+tag+to+broker%27s+RequestsPerSec+metric
>
> Thanks,
> Allen
>



-- 
*Gwen Shapira*
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter  | blog



Re: [DISCUSS] KIP-273 Kafka to support using ETCD beside Zookeeper

2018-03-28 Thread Gwen Shapira
While I'm not in favor of the proposal, I want to point out that the
ecosystem changed quite a bit since KIP-30 was first proposed. Kubernetes
deployments are far more common now and are growing in popularity, and the
problem in deployment, discovery and management that ZK poses is therefore
more relevant now than it was at the time. There are reasons for the
community to change its collective mind even if the objections are still
valid.

Since the KIP doesn't include the etcd implementation, the proposal looks
like very simple refactoring. Of course, the big change is a new public
API. But it's difficult to judge from the KIP if the API is a good one
because it is built to 100% match the one implementation we have. I'm
curious if the plan includes contributing the Etcd module to Apache Kafka?


On Wed, Mar 28, 2018 at 9:54 AM, Ismael Juma  wrote:

> Thanks for the KIP. This was proposed previously via "KIP-30 Allow for
> brokers to have plug-able consensus and meta data storage sub systems" and
> the community was not in favour. Have you considered the points discussed
> then?
>
> Ismael
>
> On Wed, Mar 28, 2018 at 9:18 AM, Molnár Bálint 
> wrote:
>
> > Hi all,
> >
> > I have created KIP-273: Kafka to support using ETCD beside Zookeeper
> >
> > Here is the link to the KIP:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 273+-+Kafka+to+support+using+ETCD+beside+Zookeeper
> >
> > Looking forward to the discussion.
> >
> > Thanks,
> > Balint
> >
>



-- 
*Gwen Shapira*
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter  | blog



Re: [DISCUSS] KIP-273 Kafka to support using ETCD beside Zookeeper

2018-03-28 Thread Ismael Juma
Thanks for the KIP. This was proposed previously via "KIP-30 Allow for
brokers to have plug-able consensus and meta data storage sub systems" and
the community was not in favour. Have you considered the points discussed
then?

Ismael

On Wed, Mar 28, 2018 at 9:18 AM, Molnár Bálint 
wrote:

> Hi all,
>
> I have created KIP-273: Kafka to support using ETCD beside Zookeeper
>
> Here is the link to the KIP:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> 273+-+Kafka+to+support+using+ETCD+beside+Zookeeper
>
> Looking forward to the discussion.
>
> Thanks,
> Balint
>


[VOTE] KIP-272: Add API version tag to broker's RequestsPerSec metric

2018-03-28 Thread Allen Wang
Hi All,

I would like to start voting for KIP-272:  Add API version tag to broker's
RequestsPerSec metric.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-272%3A+Add+API+version+tag+to+broker%27s+RequestsPerSec+metric

Thanks,
Allen


Re: [DISCUSS] KIP-253: Support in-order message delivery with partition expansion

2018-03-28 Thread Jun Rao
Hi, John,

I actually think it's important to think through how KStreams handles
partition expansion in this KIP. If we do decide that we truly need
backfilling, it's much better to think through how to add it now, instead
of retrofitting it later. It would be useful to outline how both existing
KStreams jobs and new KStreams jobs work to see if backfilling is really
needed.

If we can figure out how KStreams works, at least we have one reference
implementation for other stream processing frameworks that face the same
issue.

Thanks,

Jun


On Tue, Mar 27, 2018 at 4:56 PM, John Roesler  wrote:

> Hi Jun,
>
> That's a good point.
>
> Yeah, I don't think it would work too well for existing consumers in the
> middle of gen 0 to try and switch to a newly backfilled prefix of gen 1.
> They probably just need to finish up until they get to the end of gen 0 and
> transition just as if there were no backfill available yet.
>
> This isn't terrible, since consumer applications that care about scaling up
> to match a freshly split partition would wait until after the backfill is
> available to scale up. The consumer that starts out in gen=0, part=0 is
> going to be stuck with part=0 and part=3 in gen=1 in my example regardless
> of whether they finish scanning gen=0 before or after the backfill is
> available.
>
> The broker knowing when it's ok to delete gen 0, including its offset
> mappings, is a big issue, though. I don't have any immediate ideas for
> solving it, but it doesn't feel impossible. Hopefully, you agree this is
> outside of KIP-253's scope, so maybe we don't need to worry about it right
> now.
>
> I do agree that reshuffling in KStreams effectively solves the scalability
> problem as well, as it decouples the partition count (and the partition
> scheme) upstream from the parallelism of the streams application. Likely,
> we will do this in any case. I'm predominantly advocating for follow-on
> work to enable backfill for the *other* Kafka users that are not KStreams.
>
> Thanks for your consideration,
> -John
>
> On Tue, Mar 27, 2018 at 6:19 PM, Jun Rao  wrote:
>
> > Hi, John,
> >
> > Thanks for the reply. I agree that the backfill approach works more cleanly
> > for newly started consumers. I am just not sure if it's a good primitive to
> > support for existing consumers. One of the challenges that I see is the
> > remapping of the offsets. In your approach, we need to copy the existing
> > records from the partitions in generation 0 to generation 1. Those records
> > will get different offsets in the new generation. The broker will have to
> > store those offset mappings somewhere. When the backfill completes, you can
> > delete generation 0's data. However, the broker can't throw away the offset
> > mappings immediately since it doesn't know if there is any existing
> > consumer still consuming generation 0's records. In a compacted topic, the
> > broker probably can only safely remove the offset mappings when all records
> > in generation 0 are removed by the cleaner. This may never happen though.
> >
> > If we reshuffle the input inside a KStreams job, it obviates the need for
> > offset remapping on the broker.
> >
> > Jun
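
A small sketch of the offset-mapping bookkeeping described above, assuming
the backfill records, for each generation-0 offset, where the copied record
landed in generation 1 (the names here are illustrative, not from the KIP):

    import java.util.Map;
    import java.util.NavigableMap;
    import java.util.TreeMap;

    // Per-partition map from gen-0 offsets to gen-1 offsets, filled in as
    // the backfill copies records across generations.
    class GenerationOffsetMap {
        private final NavigableMap<Long, Long> oldToNew = new TreeMap<>();

        void record(long gen0Offset, long gen1Offset) {
            oldToNew.put(gen0Offset, gen1Offset);
        }

        // Translate a consumer's gen-0 position to a gen-1 position: the
        // mapped offset of the first copied record at or after it, or null
        // if nothing at or beyond that position was copied.
        Long translate(long gen0Position) {
            Map.Entry<Long, Long> e = oldToNew.ceilingEntry(gen0Position);
            return e == null ? null : e.getValue();
        }
    }

The retention problem raised above is visible here: the broker cannot drop
this map until no consumer can still present a gen-0 position.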
> >
> > On Tue, Mar 27, 2018 at 11:34 AM, John Roesler  wrote:
> >
> > > Hey Dong and Jun,
> > >
> > > Thanks for the thoughtful responses. If you don't mind, I'll mix my replies
> > > together to try for a coherent response. I'm not too familiar with
> > > mailing-list etiquette, though.
> > >
> > > I'm going to keep numbering my points because it makes it easy for you all
> > > to respond.
> > >
> > > Point 1:
> > > As I read it, KIP-253 is *just* about properly fencing the producers and
> > > consumers so that you preserve the correct ordering of records during
> > > partition expansion. This is clearly necessary regardless of anything else
> > > we discuss. I think this whole discussion about backfill, consumers,
> > > streams, etc., is beyond the scope of KIP-253. But it would be cumbersome
> > > to start a new thread at this point.
> > >
> > > I had missed KIP-253's Proposed Change #9 among all the details... I think
> > > this is a nice addition to the proposal. One thought is that it's actually
> > > irrelevant whether the hash function is linear. This is simply an algorithm
> > > for moving a key from one partition to another, so the type of hash
> > > function need not be a precondition. In fact, it also doesn't matter
> > > whether the topic is compacted or not, the algorithm works regardless.
> > >
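
A minimal sketch of that key-move step in producer terms; this is an assumed
shape for illustration, not the KIP's exact protocol. When key K1 maps to a
new partition P2 under the new epoch, the producer tombstones the old
partition P1 before writing to P2, for the reason the next paragraph gives:

    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    class KeyMoveSketch {
        static void moveKey(Producer<String, String> producer, String topic,
                            int p1, int p2, String k1, String value) {
            // Pessimistic tombstone: harmless if K1 was never in P1, and
            // necessary if it was, so compaction drops the stale copy.
            producer.send(new ProducerRecord<>(topic, p1, k1, null));
            // The latest value for K1 now lives only in P2.
            producer.send(new ProducerRecord<>(topic, p2, k1, value));
        }
    }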
> > > I think this is a good algorithm to keep in mind, as it might solve a
> > > variety of problems, but it does have a downside: that the producer won't
> > > know whether or not K1 was actually in P1, it just knows that K1 was in
> > > P1's keyspace before the new epoch. Therefore, it will have to
> > > pessimistically send (K1,null) to P1 just in case. But the next time 

[DISCUSS] KIP-273 Kafka to support using ETCD beside Zookeeper

2018-03-28 Thread Molnár Bálint
Hi all,

I have created KIP-273: Kafka to support using ETCD beside Zookeeper

Here is the link to the KIP:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-273+-+Kafka+to+support+using+ETCD+beside+Zookeeper

Looking forward to the discussion.

Thanks,
Balint


Build failed in Jenkins: kafka-trunk-jdk7 #3295

2018-03-28 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-6716: Should close the `discardChannel` in

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H26 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 5d5a2ce4bba4b42b3652d0126d1d2dab979a3e07 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 5d5a2ce4bba4b42b3652d0126d1d2dab979a3e07
Commit message: "KAFKA-6716: Should close the `discardChannel` in 
MockSelector#completeSend (#4783)"
 > git rev-list --no-walk 9baa9bddba0dc1b3bda167ea509bd90226615e1f # timeout=10
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/jenkins2021782624943766945.sh
+ rm -rf 
+ /home/jenkins/tools/gradle/3.5/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/3.5/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing
Building project 'core' with Scala version 2.11.12
:downloadWrapper

BUILD SUCCESSFUL

Total time: 13.092 secs
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/jenkins3379083778273625446.sh
+ export 'GRADLE_OPTS=-Xmx1024m -XX:MaxPermSize=256m'
+ GRADLE_OPTS='-Xmx1024m -XX:MaxPermSize=256m'
+ ./gradlew --no-daemon -PmaxParallelForks=1 
-PtestLoggingEvents=started,passed,skipped,failed -PxmlFindBugsReport=true 
clean testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/4.6/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing
Building project 'core' with Scala version 2.11.12
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:jmh-benchmarks:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:connect:transforms:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:streams:test-utils:clean UP-TO-DATE
:test_core_2_11
Building project 'core' with Scala version 2.11.12
:test_core_2_11 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Could not determine the dependencies of task 
':kafka-trunk-jdk7:clients:compileJava'.
> Could not create service of type AnnotationProcessorDetector using 
> JavaGradleScopeServices.createAnnotationProcessorDetector().

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 5.0.
See 
https://docs.gradle.org/4.6/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 14s
17 actionable tasks: 2 executed, 15 up-to-date
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=5d5a2ce4bba4b42b3652d0126d1d2dab979a3e07, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #3280
Recording test results
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
Not sending mail to unregistered user ism...@juma.me.uk
Not sending mail to unregistered user rajinisiva...@googlemail.com
Not sending mail to unregistered user git...@alasdairhodge.co.uk
Not sending mail to 

Build failed in Jenkins: kafka-trunk-jdk8 #2509

2018-03-28 Thread Apache Jenkins Server
See 


Changes:

[rajinisivaram] KAFKA-6716: Should close the `discardChannel` in

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 5d5a2ce4bba4b42b3652d0126d1d2dab979a3e07 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 5d5a2ce4bba4b42b3652d0126d1d2dab979a3e07
Commit message: "KAFKA-6716: Should close the `discardChannel` in 
MockSelector#completeSend (#4783)"
 > git rev-list --no-walk 9baa9bddba0dc1b3bda167ea509bd90226615e1f # timeout=10
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/jenkins965377224803695451.sh
+ rm -rf 
+ /home/jenkins/tools/gradle/4.4/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/4.4.1/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing
Building project 'core' with Scala version 2.11.12
:downloadWrapper

BUILD SUCCESSFUL in 12s
1 actionable task: 1 executed
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/jenkins7951002238980797577.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew --no-daemon -PmaxParallelForks=1 
-PtestLoggingEvents=started,passed,skipped,failed -PxmlFindBugsReport=true 
clean testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/4.6/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing
Building project 'core' with Scala version 2.11.12
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:jmh-benchmarks:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:connect:transforms:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:streams:test-utils:clean UP-TO-DATE
:test_core_2_11
Building project 'core' with Scala version 2.11.12
:test_core_2_11 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Could not determine the dependencies of task 
':kafka-trunk-jdk8:clients:compileJava'.
> Could not create service of type AnnotationProcessorDetector using 
> JavaGradleScopeServices.createAnnotationProcessorDetector().

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

BUILD FAILED in 13s
17 actionable tasks: 2 executed, 15 up-to-date
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=5d5a2ce4bba4b42b3652d0126d1d2dab979a3e07, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #2477
Recording test results
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
Not sending mail to unregistered user ism...@juma.me.uk
Not sending mail to unregistered user rajinisiva...@googlemail.com
Not sending mail to unregistered user git...@alasdairhodge.co.uk
Not sending mail to unregistered user wangg...@gmail.com


[jira] [Created] (KAFKA-6724) ConsumerPerformance resets offsets on every startup

2018-03-28 Thread Alex Dunayevsky (JIRA)
Alex Dunayevsky created KAFKA-6724:
--

 Summary: ConsumerPerformance resets offsets on every startup
 Key: KAFKA-6724
 URL: https://issues.apache.org/jira/browse/KAFKA-6724
 Project: Kafka
  Issue Type: Bug
  Components: core, tools
Affects Versions: 0.11.0.1
Reporter: Alex Dunayevsky


ConsumerPerformance used in kafka-consumer-perf-test.sh resets offsets for its
group on every startup.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] 1.1.0 RC4

2018-03-28 Thread Rajini Sivaram
This vote passes with 9 +1 votes (4 binding) and no 0 or -1 votes.

+1 votes
PMC Members:
* Jason Gustafson
* Jun Rao
* Gwen Shapira
* Rajini Sivaram

Committers:
* No votes

Community:
* Ted Yu
* Manikumar
* Jeff Chao
* Vahid Hashemian
* Brett Rann

0 votes
* No votes

-1 votes
* No votes

Vote thread: https://markmail.org/message/trlhjyebmidsamuu

I'll continue with the release process and the release announcement will follow.

Thanks,


Rajini




On Wed, Mar 28, 2018 at 6:34 AM, Gwen Shapira  wrote:

> +1
>
> Checked keys, built, ran quickstart. LGTM.
>
> On Fri, Mar 23, 2018 at 4:37 PM, Rajini Sivaram 
> wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the fifth candidate for release of Apache Kafka 1.1.0.
> >
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75957546
> >
> > A few highlights:
> >
> > * Significant Controller improvements (much faster and session expiration
> > edge
> > cases fixed)
> > * Data balancing across log directories (JBOD)
> > * More efficient replication when the number of partitions is large
> > * Dynamic Broker Configs
> > * Delegation tokens (KIP-48)
> > * Kafka Streams API improvements (KIP-205 / 210 / 220 / 224 / 239)
> >
> > Release notes for the 1.1.0 release:
> >
> > http://home.apache.org/~rsivaram/kafka-1.1.0-rc4/RELEASE_NOTES.html
> >
> >
> > *** Please download, test and vote by Tuesday March 27th 4pm PT.
> >
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> >
> > http://kafka.apache.org/KEYS
> >
> >
> > * Release artifacts to be voted upon (source and binary):
> >
> > http://home.apache.org/~rsivaram/kafka-1.1.0-rc4/
> >
> >
> > * Maven artifacts to be voted upon:
> >
> > https://repository.apache.org/content/groups/staging/
> >
> >
> > * Javadoc:
> >
> > http://home.apache.org/~rsivaram/kafka-1.1.0-rc4/javadoc/
> >
> >
> > * Tag to be voted upon (off 1.1 branch) is the 1.1.0 tag:
> >
> > https://github.com/apache/kafka/tree/1.1.0-rc4
> >
> >
> >
> > * Documentation:
> >
> > http://kafka.apache.org/11/documentation.html
> >
> >
> > * Protocol:
> >
> > http://kafka.apache.org/11/protocol.html
> >
> >
> >
> > Thanks,
> >
> >
> > Rajini
> >
>
>
>
> --
> *Gwen Shapira*
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter  | blog
> 
>


Build failed in Jenkins: kafka-trunk-jdk7 #3294

2018-03-28 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Update Jackson to 2.9.5 (#4776)

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H20 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 9baa9bddba0dc1b3bda167ea509bd90226615e1f 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 9baa9bddba0dc1b3bda167ea509bd90226615e1f
Commit message: "MINOR: Update Jackson to 2.9.5 (#4776)"
 > git rev-list --no-walk 281dbfd9813603d913a2a0b948547dea7b863be2 # timeout=10
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/jenkins6801734331138796687.sh
+ rm -rf 
+ /home/jenkins/tools/gradle/3.5/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/3.5/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing
Building project 'core' with Scala version 2.11.12
:downloadWrapper

BUILD SUCCESSFUL

Total time: 11.758 secs
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
[kafka-trunk-jdk7] $ /bin/bash -xe /tmp/jenkins8616895931136729035.sh
+ export 'GRADLE_OPTS=-Xmx1024m -XX:MaxPermSize=256m'
+ GRADLE_OPTS='-Xmx1024m -XX:MaxPermSize=256m'
+ ./gradlew --no-daemon -PmaxParallelForks=1 
-PtestLoggingEvents=started,passed,skipped,failed -PxmlFindBugsReport=true 
clean testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/4.6/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing
Building project 'core' with Scala version 2.11.12
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:jmh-benchmarks:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:connect:transforms:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:streams:test-utils:clean UP-TO-DATE
:test_core_2_11
Building project 'core' with Scala version 2.11.12
:test_core_2_11 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Could not determine the dependencies of task 
':kafka-trunk-jdk7:clients:compileJava'.
> Could not create service of type AnnotationProcessorDetector using 
> JavaGradleScopeServices.createAnnotationProcessorDetector().

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 5.0.
See 
https://docs.gradle.org/4.6/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 13s
17 actionable tasks: 2 executed, 15 up-to-date
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=9baa9bddba0dc1b3bda167ea509bd90226615e1f, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #3280
Recording test results
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Setting GRADLE_3_5_HOME=/home/jenkins/tools/gradle/3.5
Not sending mail to unregistered user ism...@juma.me.uk
Not sending mail to unregistered user git...@alasdairhodge.co.uk
Not sending mail to unregistered user wangg...@gmail.com


Re: [DISCUSS] KIP-253: Support in-order message delivery with partition expansion

2018-03-28 Thread Dong Lin
Hey John,

Great! Thanks for all the comments. It seems that we agree that the current
KIP is in good shape for core Kafka. IMO, what we have been discussing in
the recent email exchanges is mostly about the second step, i.e. how to
address the problem for the streams use case (or stateful processing in general).

I will comment inline.




On Tue, Mar 27, 2018 at 4:38 PM, John Roesler  wrote:

> Thanks for the response, Dong.
>
> Here are my answers to your questions:
>
> - "Asking producers and consumers, or even two different producers, to
> > share code like the partition function is a pretty huge ask. What if they
> > are using different languages?". It seems that today we already require
> > different producer's to use the same hash function -- otherwise messages
> > with the same key will go to different partitions of the same topic which
> > may cause problem for downstream consumption. So not sure if it adds any
> > more constraint by assuming consumers know the hash function of producer.
> > Could you explain more why user would want to use a cusmtom partition
> > function? Maybe we can check if this is something that can be supported
> in
> > the default Kafka hash function. Also, can you explain more why it is
> > difficuilt to implement the same hash function in different languages?
>
>
> Sorry, I meant two different producers as in producers to two different
> topics. This was in response to the suggestion that we already require
> coordination among producers to different topics in order to achieve
> co-partitioning. I was saying that we do not (and should not).


It is probably common for producers from different teams to produce messages to
the same topic. In order to ensure that messages with the same key go to the
same partition, we need producers from different teams to share the same
partition algorithm, which by definition requires coordination among
producers of different teams in an organization. Even for producers of
different topics, it may be common to require producers to use the same
partition algorithm in order to join two topics for stream processing. Does
this make it reasonable to say we already require coordination across
producers?


> By design, consumers are currently ignorant of the partitioning scheme. It
> suffices to trust that the producer has partitioned the topic by key, if
> they claim to have done so. If you don't trust that, or even if you just
> need some other partitioning scheme, then you must re-partition it
> yourself. Nothing we're discussing can or should change that. The value of
> backfill is that it preserves the ability for consumers to avoid
> re-partitioning before consuming, in the case where they don't need to
> today.


> Regarding shared "hash functions", note that it's a bit inaccurate to talk
> about the "hash function" of the producer. Properly speaking, the producer
> has only a "partition function". We do not know that it is a hash. The
> producer can use any method at their disposal to assign a partition to a
> record. The partition function obviously may be written in any programming
> language, so in general it's not something that can be shared around
> without a formal spec or the ability to execute arbitrary executables in
> arbitrary runtime environments.
>

Yeah, it is probably better to say partition algorithm. I guess it should
not be difficult to implement the same partition algorithm in different
languages, right? Yes, we would need a formal specification of the default
partition algorithm in the producer. I think that can be documented as part
of the producer interface.
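
For concreteness, a minimal sketch of what such a documented default
partition algorithm could look like. The murmur2-based shape below mirrors
what the Java producer is generally understood to do, but treat it as an
illustration of a shared, reproducible spec rather than the normative
algorithm; murmur2() is a stand-in that a real spec would pin down
bit-for-bit:

    // Sketch of a language-agnostic default partition spec (key assumed
    // non-null and already serialized to bytes).
    public final class DefaultPartitionSpec {
        private DefaultPartitionSpec() {}

        // Maps a serialized key to a partition in [0, numPartitions).
        public static int partitionFor(byte[] keyBytes, int numPartitions) {
            if (numPartitions <= 0)
                throw new IllegalArgumentException("numPartitions must be positive");
            int hash = murmur2(keyBytes);                // assumed shared hash
            return (hash & 0x7fffffff) % numPartitions;  // drop sign bit, then mod
        }

        // Placeholder for the exact hash a cross-language spec would fix.
        private static int murmur2(byte[] data) {
            return java.util.Arrays.hashCode(data);
        }
    }

A modulo-based spec like this also has the property KIP-253 leans on: when
the partition count doubles from N to 2N, a key in old partition p can only
land in partition p or p + N under the new count.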


>
> Why would a producer want a custom partition function? I don't know... why
> did we design the interface so that our users can provide one? In general,
> such systems provide custom partitioners because some data sets may be
> unbalanced under the default or because they can provide some interesting
> functionality built on top of the partitioning scheme, etc. Having provided
> this ability, I don't know why we would remove it.
>

Yeah, it is reasonable to assume that there was a reason to support a custom
partition function in the producer. On the other hand, it may also be
reasonable to revisit this interface and discuss whether we actually need to
support a custom partition function. If we don't have a good reason, we can
choose not to support custom partition functions in this KIP, in a
backward-compatible manner: users can still use a custom partition function,
but they would not get the benefit of in-order delivery when there is a
partition expansion. What do you think?


>
> > - Besides the assumption that the consumer needs to share the hash function
> > of the producer, is there any other organizational overhead of the proposal
> > in the current KIP?
> >
>
> It wasn't clear to me that KIP-253 currently required the producer and
> consumer to share the partition function, or in fact that it had a hard
> requirement to abandon the general partition function and use a linear hash

Build failed in Jenkins: kafka-trunk-jdk8 #2508

2018-03-28 Thread Apache Jenkins Server
See 


Changes:

[jason] MINOR: Update Jackson to 2.9.5 (#4776)

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H32 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/trunk^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/trunk^{commit} # timeout=10
Checking out Revision 9baa9bddba0dc1b3bda167ea509bd90226615e1f 
(refs/remotes/origin/trunk)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 9baa9bddba0dc1b3bda167ea509bd90226615e1f
Commit message: "MINOR: Update Jackson to 2.9.5 (#4776)"
 > git rev-list --no-walk 281dbfd9813603d913a2a0b948547dea7b863be2 # timeout=10
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/jenkins474971546447329.sh
+ rm -rf 
+ /home/jenkins/tools/gradle/4.4/bin/gradle
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/4.4.1/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing
Building project 'core' with Scala version 2.11.12
:downloadWrapper

BUILD SUCCESSFUL in 12s
1 actionable task: 1 executed
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
[kafka-trunk-jdk8] $ /bin/bash -xe /tmp/jenkins5053017237484270459.sh
+ export GRADLE_OPTS=-Xmx1024m
+ GRADLE_OPTS=-Xmx1024m
+ ./gradlew --no-daemon -PmaxParallelForks=1 
-PtestLoggingEvents=started,passed,skipped,failed -PxmlFindBugsReport=true 
clean testAll
To honour the JVM settings for this build a new JVM will be forked. Please 
consider using the daemon: 
https://docs.gradle.org/4.6/userguide/gradle_daemon.html.
Daemon will be stopped at the end of the build stopping after processing
Building project 'core' with Scala version 2.11.12
:clean UP-TO-DATE
:clients:clean UP-TO-DATE
:connect:clean UP-TO-DATE
:core:clean
:examples:clean UP-TO-DATE
:jmh-benchmarks:clean UP-TO-DATE
:log4j-appender:clean UP-TO-DATE
:streams:clean UP-TO-DATE
:tools:clean UP-TO-DATE
:connect:api:clean UP-TO-DATE
:connect:file:clean UP-TO-DATE
:connect:json:clean UP-TO-DATE
:connect:runtime:clean UP-TO-DATE
:connect:transforms:clean UP-TO-DATE
:streams:examples:clean UP-TO-DATE
:streams:test-utils:clean UP-TO-DATE
:test_core_2_11
Building project 'core' with Scala version 2.11.12
:test_core_2_11 FAILED

FAILURE: Build failed with an exception.

* What went wrong:
Could not determine the dependencies of task 
':kafka-trunk-jdk8:clients:compileJava'.
> Could not create service of type AnnotationProcessorDetector using 
> JavaGradleScopeServices.createAnnotationProcessorDetector().

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

BUILD FAILED in 13s
17 actionable tasks: 2 executed, 15 up-to-date
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
[FINDBUGS] Searching for all files in 
 that match the pattern 
**/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
 Using GitBlamer to create author and commit information for all 
warnings.
 GIT_COMMIT=9baa9bddba0dc1b3bda167ea509bd90226615e1f, 
workspace=
[FINDBUGS] Computing warning deltas based on reference build #2477
Recording test results
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
ERROR: Step 'Publish JUnit test result report' failed: No test report files 
were found. Configuration error?
Setting GRADLE_4_4_HOME=/home/jenkins/tools/gradle/4.4
Not sending mail to unregistered user ism...@juma.me.uk
Not sending mail to unregistered user rajinisiva...@googlemail.com
Not sending mail to unregistered user git...@alasdairhodge.co.uk
Not sending mail to unregistered user wangg...@gmail.com