Re: [VOTE] KIP-339: Create a new IncrementalAlterConfigs API

2018-09-24 Thread Colin McCabe
+1 (binding)

Colin


On Mon, Sep 24, 2018, at 17:49, Ismael Juma wrote:
> Thanks Colin. I think this is much needed and I'm +1 (binding) on fixing
> this issue. However, I have a few minor suggestions:
>
> 1. Overload alterConfigs instead of creating a new method name. This gives
> us both the short name and a path for removal of the deprecated overload.
> 2. Did we consider Add/Remove instead of Append/Subtract?
>
> Ismael
>
> On Mon, Sep 24, 2018 at 10:29 AM Colin McCabe wrote:
> > Hi all,
> >
> > I would like to start voting on KIP-339, which creates a new
> > IncrementalAlterConfigs API.
> >
> > The KIP is described here:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-339%3A+Create+a+new+ModifyConfigs+API
> > Previous discussion:
> > https://sematext.com/opensee/m/Kafka/uyzND1OYRKh2RrGba1
> >
> > best,
> > Colin
> >



Re: [VOTE] KIP-339: Create a new IncrementalAlterConfigs API

2018-09-24 Thread Colin McCabe
On Mon, Sep 24, 2018, at 17:49, Ismael Juma wrote:
> Thanks Colin. I think this is much needed and I'm +1 (binding) on fixing
> this issue. However, I have a few minor suggestions:
>
> 1. Overload alterConfigs instead of creating a new method name. This gives
> us both the short name and a path for removal of the deprecated overload.
Good idea.  +1

> 2. Did we consider Add/Remove instead of Append/Subtract?

Hmm.  I guess I was worried that Add might be confused with set.  Append
seems to better suggest adding to a multi-part entry, to me, at least.
Best,
Colin

>
> Ismael
>
> > On Mon, Sep 24, 2018 at 10:29 AM Colin McCabe wrote:
> > Hi all,
> >
> > I would like to start voting on KIP-339, which creates a new
> > IncrementalAlterConfigs API.
> >
> > The KIP is described here:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-339%3A+Create+a+new+ModifyConfigs+API
> > Previous discussion:
> > https://sematext.com/opensee/m/Kafka/uyzND1OYRKh2RrGba1
> >
> > best,
> > Colin
> >
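A minimal sketch of the append semantics discussed above, assuming the op-based
shape the KIP proposes (AlterConfigOp, OpType.APPEND, and the
incrementalAlterConfigs signature follow the KIP draft and may change before
the final API lands):

    import java.util.Collections;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AlterConfigOp;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;

    class IncrementalAlterSketch {
        // APPEND adds one element to a list-valued ("multi-part") config such
        // as cleanup.policy without replacing the rest of the list; SUBTRACT
        // removes one element; SET/DELETE act on whole keys.
        static void appendCleanupPolicy(AdminClient admin) throws Exception {
            ConfigResource topic =
                new ConfigResource(ConfigResource.Type.TOPIC, "my-topic");
            AlterConfigOp op = new AlterConfigOp(
                new ConfigEntry("cleanup.policy", "compact"),
                AlterConfigOp.OpType.APPEND);
            admin.incrementalAlterConfigs(
                    Collections.singletonMap(topic, Collections.singleton(op)))
                .all().get();
        }
    }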



Build failed in Jenkins: kafka-trunk-jdk8 #2986

2018-09-24 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
remote: Enumerating objects: 2602, done.
remote: Counting objects:   0% (1/2602) ... 56% (1458/2602)   [repeated progress counters truncated]

Build failed in Jenkins: kafka-trunk-jdk10 #516

2018-09-24 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H23 (ubuntu xenial) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 032f5319148080ca20adc297594664c8829f7d10
error: Could not read f348f10ef87925081fdf9455ace6d2a86179b483
remote: Enumerating objects: 6066, done.
remote: Counting objects:   0% (1/6066) ... 54% (3276/6066)   [repeated progress counters truncated]

Re: [VOTE] KIP 368: Allow SASL Connections to Periodically Re-Authenticate

2018-09-24 Thread Ron Dagostino
Hi everyone.  This concludes the vote for KIP 368.  The vote passes with
three binding +1 votes, from Rajini, Harsha, and Jun, and seven
non-binding +1 votes, from Mike, Konstantin, Boerge, Edoardo, Stanislav,
Mickael, and myself.

I have marked the KIP as "Accepted".

The pull request, available at https://github.com/apache/kafka/pull/5582,
is far along and should be
finished in the next day or two.

Ron

On Mon, Sep 24, 2018 at 9:25 PM Ron Dagostino  wrote:

> Thanks, Jun.
>
> <<<100
> I added this line to the KIP to clarify the SSL issue:
> "This KIP has no impact on non-SASL connections (e.g. connections that use
> the PLAINTEXT or SSL security protocols) – no such connection will be
> re-authenticated, and no such connection will be closed."
>
> <<<101
> Your comment points out two issues, I think.  First, there is a direct
> clarity problem: in the KIP, by the phrase "requests that queued up during
> re-authentication" I was really intending to refer to both the in-flight
> responses that might return from the broker during re-authentication along
> with any pending Send request that is set on the KafkaChannel instance
> and has not yet been transmitted to the broker (this is the Send that
> triggers the re-authentication check to occur, and when the session is
> "expired" the re-authentication process then begins).  So I clarified the
> text in the KIP to refer directly to both of these things.  But before I
> insert that amended text, note that the second issue (the implementation
> option of marking the channel unavailable for send during
> re-authentication) also points out a clarity problem in the KIP because the
> channel is in fact unavailable for send during re-authentication.  The
> reason is because KafkaChannel#ready() will return false until the
> Authenticator finishes the re-authentication, and this causes 
> KafkaClient#isReady(Node,
> long) and KafkaClient#ready(Node, long) to both return false.  So in fact
> the client will not be able to queue up send after send.  I've therefore
> updated the KIP text as follows:
> "If re-authentication succeeds then any received responses that queued up
> during re-authentication along with the Send that triggered the
> re-authentication to occur in the first place will subsequently be able to 
> flow
> through (back to the client and along to the broker, respectively), and
> eventually the connection will re-authenticate again, etc.  Note also
> that the client cannot queue up additional send requests beyond the one
> that triggers re-authentication to occur until re-authentication succeeds
> and the triggering one is sent."
>
> <<<102
> Good catch about the client-side metric not having a name or a clear
> definition of what it measures.  That is an oversight.  I will resolve this
> via a post on the DISCUSS thread:
> https://lists.apache.org/thread.html/a63c1612fe9ba2f31272087a00419c59ed7a9917c398721069cd1d01@%3Cdev.kafka.apache.org%3E
>
> <<<103
> We re-use the existing authentication code paths for re-authentication,
> and it appears (
> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/authenticator/SaslClientAuthenticator.java#L182)
> that the version used by the broker when it is acting as an inter-broker
> SASL client is the max version supported by the destination broker.  Am I
> missing something?
>
> Thanks again, Jun.
>
> Ron
>
>
>
>
> On Mon, Sep 24, 2018 at 7:46 PM Jun Rao  wrote:
>
>> Hi, Ron,
>>
>> Thanks for the KIP. Looks good to me overall. So, +1 assuming the
>> following
>> minor comments will be addressed.
>>
>> 100. connections.max.reauth.ms: A client can authenticate with the broker
>> using SSL. This config has no impact on such connections. It would be
>> useful to make it clear in the documentation. Also, in this case, I guess
>> the broker won't kill the SSL connection after connections.max.reauth.ms?
>>
>> 101. "If re-authentication succeeds then any requests that queued up
>> during
>> re-authentication will subsequently be able to flow through, and
>> eventually
>> the connection will re-authenticate again, etc.". This is more of an
>> implementation detail. I guess the proposal is to queue up new requests in
>> the client when there is a pending re-authentication. An alternative is
>> to
>> mark the Channel unavailable for send during re-authentication. This has
>> the slight benefit of reducing the client memory footprint.
>>
>> 102. "A client-side metric will be created that documents the latency
>> imposed by re-authentication." What's the name of this metric? Does it
>> measure avg or max?
>>
>> 103. "Upgrade all brokers to v2.1.0 or later at whatever rate is desired
>> with 'connections.max.reauth.ms' allowed to default to 0.  If SASL is
>> used
>> for the inter-broker protocol then brokers will check the
>> SASL_AUTHENTICATE
>> API version and use a V1 request when communicating to a broker that has
>> been 

Re: [VOTE] KIP 368: Allow SASL Connections to Periodically Re-Authenticate

2018-09-24 Thread Ron Dagostino
Thanks, Jun.

<<<100
I added this line to the KIP to clarify the SSL issue:
"This KIP has no impact on non-SASL connections (e.g. connections that use
the PLAINTEXT or SSL security protocols) – no such connection will be
re-authenticated, and no such connection will be closed."

<<<101
Your comment points out two issues, I think.  First, there is a direct
clarity problem: in the KIP, by the phrase "requests that queued up during
re-authentication" I was really intending to refer to both the in-flight
responses that might return from the broker during re-authentication along
with any pending Send request that is set on the KafkaChannel instance and
has not yet been transmitted to the broker (this is the Send that triggers
the re-authentication check to occur, and when the session is "expired" the
re-authentication process then begins).  So I clarified the text in the KIP
to refer directly to both of these things.  But before I insert that
amended text, note that the second issue (the implementation option of
marking the channel unavailable for send during re-authentication) also
points out a clarity problem in the KIP because the channel is in fact
unavailable for send during re-authentication.  The reason is because
KafkaChannel#ready() will return false until the Authenticator finishes the
re-authentication, and this causes KafkaClient#isReady(Node, long) and
KafkaClient#ready(Node,
long) to both return false.  So in fact the client will not be able to
queue up send after send.  I've therefore updated the KIP text as follows:
"If re-authentication succeeds then any received responses that queued up
during re-authentication along with the Send that triggered the
re-authentication to occur in the first place will subsequently be able to flow
through (back to the client and along to the broker, respectively), and
eventually the connection will re-authenticate again, etc.  Note also that
the client cannot queue up additional send requests beyond the one that
triggers re-authentication to occur until re-authentication succeeds and
the triggering one is sent."
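To make the gating concrete, here is a tiny self-contained model of the
behavior described above. The ready()/setSend() names mirror the KafkaChannel
methods mentioned in this thread, but this is an illustrative sketch, not
Kafka's actual networking code:

    // Illustrative model only; real Kafka uses KafkaChannel and Selector.
    final class ReauthGateSketch {
        interface Channel {
            boolean ready();     // false while re-authentication is in flight
            boolean hasSend();   // true if a Send is already queued
            void setSend(byte[] send);
        }

        // At most one Send can be pending, and nothing new is queued while
        // the channel is re-authenticating; callers retry once ready() flips.
        static boolean trySend(Channel channel, byte[] send) {
            if (!channel.ready() || channel.hasSend()) {
                return false;
            }
            channel.setSend(send);
            return true;
        }
    }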

<<<102
Good catch about the client-side metric not having a name or a clear
definition of what it measures.  That is an oversight.  I will resolve this
via a post on the DISCUSS thread:
https://lists.apache.org/thread.html/a63c1612fe9ba2f31272087a00419c59ed7a9917c398721069cd1d01@%3Cdev.kafka.apache.org%3E

<<<103
We re-use the existing authentication code paths for re-authentication, and
it appears (
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/security/authenticator/SaslClientAuthenticator.java#L182)
that the version used by the broker when it is acting as an inter-broker
SASL client is the max version supported by the destination broker.  Am I
missing something?

Thanks again, Jun.

Ron




On Mon, Sep 24, 2018 at 7:46 PM Jun Rao  wrote:

> Hi, Ron,
>
> Thanks for the KIP. Looks good to me overall. So, +1 assuming the following
> minor comments will be addressed.
>
> 100. connections.max.reauth.ms: A client can authenticate with the broker
> using SSL. This config has no impact on such connections. It would be
> useful to make it clear in the documentation. Also, in this case, I guess
> the broker won't kill the SSL connection after connections.max.reauth.ms?
>
> 101. "If re-authentication succeeds then any requests that queued up during
> re-authentication will subsequently be able to flow through, and eventually
> the connection will re-authenticate again, etc.". This is more of an
> implementation detail. I guess the proposal is to queue up new requests in
> the client when there is a pending re-authentication. An alternative is to
> mark the Channel unavailable for send during re-authentication. This has
> the slight benefit of reducing the client memory footprint.
>
> 102. "A client-side metric will be created that documents the latency
> imposed by re-authentication." What's the name of this metric? Does it
> measure avg or max?
>
> 103. "Upgrade all brokers to v2.1.0 or later at whatever rate is desired
> with 'connections.max.reauth.ms' allowed to default to 0.  If SASL is used
> for the inter-broker protocol then brokers will check the SASL_AUTHENTICATE
> API version and use a V1 request when communicating to a broker that has
> been upgraded to 2.1.0, but the client will see the "0" session max
> lifetime and will not re-authenticate. ". Currently, for the inter broker
> usage of NetworkClient (ReplicaFetcherThread, ControllerChannelManager,
> etc), the broker version discovery logic is actually disabled and the
> client is expected to use the new version of the request after
> inter.broker.protocol.version is set to the current version. So, we will
> need to rely on this for deciding whether the NetworkClient should use the
> re-authenticate request or not, during upgrade.
>
> Jun
>
> On Mon, Sep 24, 2018 at 4:39 PM, Ron Dagostino  wrote:
>
> > Still looking for a final +1 binding vote to go with 

Re: [DISCUSS] KIP 368: Allow SASL Connections to Periodically Re-Authenticate

2018-09-24 Thread Ron Dagostino
Hi everyone.  Jun raised a good point/discovered an oversight in the KIP
during the VOTE thread that we must resolve.

Regarding this statement in the KIP:

"A client-side metric will be created that documents the latency
imposed by re-authentication."

Jun correctly asked:
What's the name of this metric? Does it measure avg or max?

My initial reaction is to measure both:
reauthentication-latency-ms-{avg,max}.  Any thoughts?

Ron
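Both statistics could hang off a single sensor in the client's metrics
registry. A rough sketch using the Kafka Metrics API; the group name and
descriptions are placeholders pending the KIP update:

    import org.apache.kafka.common.metrics.Metrics;
    import org.apache.kafka.common.metrics.Sensor;
    import org.apache.kafka.common.metrics.stats.Avg;
    import org.apache.kafka.common.metrics.stats.Max;

    class ReauthLatencyMetricSketch {
        static Sensor register(Metrics metrics) {
            Sensor sensor = metrics.sensor("reauthentication-latency");
            sensor.add(metrics.metricName("reauthentication-latency-ms-avg",
                    "sasl-metrics", "Average latency imposed by re-authentication"),
                new Avg());
            sensor.add(metrics.metricName("reauthentication-latency-ms-max",
                    "sasl-metrics", "Maximum latency imposed by re-authentication"),
                new Max());
            return sensor;  // call sensor.record(latencyMs) after each re-auth
        }
    }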

On Wed, Sep 19, 2018 at 9:08 AM Ron Dagostino  wrote:

> Thanks, Rajini -- I updated the KIP to fix this.
>
> Ron
>
> On Wed, Sep 19, 2018 at 4:54 AM Rajini Sivaram 
> wrote:
>
>> I should have said `security configs` instead of `channel configs`.
>>
>> The KIP says:
>>
>>    - The configuration option this KIP proposes to add to enable
>>    server-side expired-connection-kill is 'connections.max.reauth.ms'
>>    (not prefixed with listener prefix or SASL mechanism name – this is
>>    a single value for the cluster)
>>    - The 'connections.max.reauth.ms' configuration option will not be
>>    dynamically changeable; restarts will be required if the value is to
>>    be changed.  However, if a new listener is dynamically added, the
>>    value could be set for that listener at that time.
>>
>> Those statements are contradictory. Perhaps the first one should say `it
>> may be optionally prefixed with the listener name`?
>>
>> On Tue, Sep 18, 2018 at 3:55 PM, Ron Dagostino  wrote:
>>
>> > Hi Rajini.  The KIP is updated as summarized below, and I will start a
>> > vote immediately.
>> >
>> > <<<1) It may be useful to have a metric of expired connections killed
>> > <<<by the broker.
>> > Ok, agreed.  I called it expired-connections-killed-total
>> >
>> > <<<2) For `successful-v0-authentication-{rate,total}`, we probably want
>> > <<<version as a tag rather than in the name.
>> > Ok, agreed.  I kept existing metrics unchanged but added an additional
>> > tag to the V0 metrics so they are separate.
>> >
>> > <<<Not sure if we need four of these (rate/total with success/failure).
>> > <<<Perhaps just success/total is sufficient?
>> > Ok, agreed, just kept the successful total.
>> >
>> > <<<3) For the session lifetime config, we don't need to require a
>> > <<<listener or mechanism prefix.
>> > Ok, agreed, the config is now cluster-wide.
>> >
>> > <<<For all channel configs, we allow an optional listener prefix, so we
>> > <<<should do the same here.
>> > Not sure what this is referring to.  We don't have channel configs here,
>> > right?
>> >
>> > <<<4) We can skip ApiVersionsRequest for re-authentication, so that
>> > <<<doesn't need to be in the list.
>> > Yes, I was planning on that optimization; agreed, I removed it from the
>> > list
>> >
>> > <<<5) We should mention in the KIP that a new listener can be added
>> > <<<dynamically with this config set, though in terms of implementation, I
>> > <<<would leave that for a separate JIRA (it doesn't need to be implemented
>> > <<<at the same time).
>> > Ok, agreed
>> >
>> >  Thanks again for all the feedback and discussion.
>> >
>> > Ron
>> >
>> > On Tue, Sep 18, 2018 at 6:43 AM Rajini Sivaram
>> > wrote:
>> >
>> > > Hi Ron,
>> > >
>> > > Thanks for the updates. The KIP looks good. A few comments and minor
>> > points
>> > > below, but feel free to start vote to try and get it into 2.1.0. More
>> > > community feedback will be really useful.
>> > >
>> > > 1) It may be useful to have a metric of expired connections killed by
>> the
>> > > broker. There could be a client implementation that doesn't support
>> > > re-authentications, but happens to use the latest version of
>> > > SaslAuthenticateRequest. Or cases where re-authentication didn't
>> happen
>> > on
>> > > time.
>> > >
>> > > 2) For `successful-v0-authentication-{rate,total}`, we probably want
>> > > version as a tag rather than in the name. Not sure if we need four of these
>> > > (rate/total with success/failure). Perhaps just success/total is
>> > > sufficient?
>> > >
>> > > 3) For the session lifetime config, we don't need to require a
>> listener
>> > or
>> > > mechanism prefix. In most cases, we would expect a single config on
>> the
>> > > broker-side. For all channel configs, we allow an optional listener
>> > prefix,
>> > > so we should do the same here.
>> > >
>> > > 4) The KIP says connections are terminated on requests not related to
>> > > re-authentication (ApiVersionsRequest, SaslHandshakeRequest, and
>> > > SaslAuthenticateRequest). We can skip ApiVersionsRequest for
>> > > re-authentication, so that doesn't need to be in the list.
>> > >
>> > > 5) The KIP says that the new config will not be dynamically
>> updatable. We
>> > > have a very limited set of configs that are dynamically updatable for
>> an
>> > > existing listener. And we don't want to add this config to the list
>> since
>> > > we don't expect this value to change frequently. But we allow new
>> > listeners
>> > > to be added dynamically and all configs for the listener can be added
>> > > dynamically (with the listener prefix). I think we want to allow that
>> for
>> > > this config (i.e. add a new OAuth listener with re-authentication
>> > enabled).
>> > > We should mention this in the KIP, though in terms of implementation,
>> I
>> > > would leave that for a separate JIRA (it doesn't need to be
>> implemented
>> > at
>> > > the same time).
>> > >
>> > >
>> > >
>> > > On Tue, Sep 18, 2018 at 3:06 AM, Ron Dagostino 
>> > wrote:
>> > >
>> > > > Hi again, Rajini.  Would we ever want the max session time to be
>> > > different
>> > > > across different SASL mechanisms?  

Re: [VOTE] KIP-339: Create a new IncrementalAlterConfigs API

2018-09-24 Thread Ismael Juma
Thanks Colin. I think this is much needed and I'm +1 (binding) on fixing
this issue. However, I have a few minor suggestions:

1. Overload alterConfigs instead of creating a new method name. This gives
us both the short name and a path for removal of the deprecated overload.
2. Did we consider Add/Remove instead of Append/Subtract?

Ismael
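Suggestion 1 is the usual overload-and-deprecate pattern. A sketch with purely
hypothetical types; note that Java forbids two overloads whose parameter types
share the same erasure, so the two forms must take structurally different
arguments:

    import java.util.List;

    // Hypothetical types for illustration only; the KIP defines the real ones.
    final class FullConfig { }  // replace-all payload
    final class ConfigOp { }    // one incremental operation

    interface AdminSketch {
        // Legacy full-replace overload: kept for compatibility, deprecated,
        // with a clear path to removal once callers migrate.
        @Deprecated
        void alterConfigs(FullConfig fullReplacement);

        // New incremental overload keeps the short name, distinguished by a
        // structurally different (erasure-safe) parameter type.
        void alterConfigs(List<ConfigOp> ops);
    }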

On Mon, Sep 24, 2018 at 10:29 AM Colin McCabe  wrote:

> Hi all,
>
> I would like to start voting on KIP-339, which creates a new
> IncrementalAlterConfigs API.
>
> The KIP is described here:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-339%3A+Create+a+new+ModifyConfigs+API
>
> Previous discussion:
> https://sematext.com/opensee/m/Kafka/uyzND1OYRKh2RrGba1
>
> best,
> Colin
>


Re: [VOTE] KIP 368: Allow SASL Connections to Periodically Re-Authenticate

2018-09-24 Thread Jun Rao
Hi, Ron,

Thanks for the KIP. Looks good to me overall. So, +1 assuming the following
minor comments will be addressed.

100. connections.max.reauth.ms: A client can authenticate with the broker
using SSL. This config has no impact on such connections. It would be
useful to make it clear in the documentation. Also, in this case, I guess
the broker won't kill the SSL connection after connections.max.reauth.ms?

101. "If re-authentication succeeds then any requests that queued up during
re-authentication will subsequently be able to flow through, and eventually
the connection will re-authenticate again, etc.". This is more of an
implementation detail. I guess the proposal is to queue up new requests in
the client when there is a pending re-authentication. An alternative is to
mark the Channel unavailable for send during re-authentication. This has
the slight benefit of reducing the client memory footprint.

102. "A client-side metric will be created that documents the latency
imposed by re-authentication." What's the name of this metric? Does it
measure avg or max?

103. "Upgrade all brokers to v2.1.0 or later at whatever rate is desired
with 'connections.max.reauth.ms' allowed to default to 0.  If SASL is used
for the inter-broker protocol then brokers will check the SASL_AUTHENTICATE
API version and use a V1 request when communicating to a broker that has
been upgraded to 2.1.0, but the client will see the "0" session max
lifetime and will not re-authenticate. ". Currently, for the inter broker
usage of NetworkClient (ReplicaFetcherThread, ControllerChannelManager,
etc), the broker version discovery logic is actually disabled and the
client is expected to use the new version of the request after
inter.broker.protocol.version is set to the current version. So, we will
need to rely on this for deciding whether the NetworkClient should use the
re-authenticate request or not, during upgrade.

Jun
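For point 100, the option would sit in the broker config roughly as follows.
The value is illustrative only, and per the DISCUSS thread an optional
listener prefix may also be supported:

    # Close SASL connections whose session outlives the credential unless the
    # client re-authenticates first; no effect on PLAINTEXT/SSL connections.
    connections.max.reauth.ms=3600000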

On Mon, Sep 24, 2018 at 4:39 PM, Ron Dagostino  wrote:

> Still looking for a final +1 binding vote to go with the 9 votes so far (2
> binding, 7 non-binding).
>
> Ron
>
> > On Sep 24, 2018, at 3:53 PM, Ron Dagostino  wrote:
> >
> >  **Please vote** . It's getting late in the day and this KIP still
> requires 1 more binding up-vote to be considered for the 2.1.0 release.
> >
> > The current vote is 2 binding +1 votes (Rajini and Harsha) and 7
> non-binding +1 votes (myself, Mike, Konstantin, Boerge, Edoardo, Stanislav,
> and Mickael).
> >
> > Ron
> >
> >> On Mon, Sep 24, 2018 at 9:47 AM Ron Dagostino 
> wrote:
> >> Hi Everyone.  This KIP still requires 1 more binding up-vote to be
> considered for the 2.1.0 release.  **Please vote before today's end-of-day
> deadline.**
> >>
> >> The current vote is 2 binding +1 votes (Rajini and Harsha) and 7
> non-binding +1 votes (myself, Mike, Konstantin, Boerge, Edoardo, Stanislav,
> and Mickael).
> >>
> >> Ron
> >>
> >>> On Fri, Sep 21, 2018 at 11:48 AM Mickael Maison <
> mickael.mai...@gmail.com> wrote:
> >>> +1 (non-binding)
> >>> Thanks for the KIP, this is a very nice feature.
> >>> On Fri, Sep 21, 2018 at 4:56 PM Stanislav Kozlovski
> >>>  wrote:
> >>> >
> >>> > Thanks for the KIP, Ron!
> >>> > +1 (non-binding)
> >>> >
> >>> > On Fri, Sep 21, 2018 at 5:26 PM Ron Dagostino 
> wrote:
> >>> >
> >>> > > Hi Everyone.  This KIP requires 1 more binding up-vote to be
> considered for
> >>> > > the 2.1.0 release; please vote before the Monday deadline.
> >>> > >
> >>> > > The current vote is 2 binding +1 votes (Rajini and Harsha) and 5
> >>> > > non-binding +1 votes (myself, Mike, Konstantin, Boerge, and
> Edoardo).
> >>> > >
> >>> > > Ron
> >>> > >
> >>> > > On Wed, Sep 19, 2018 at 12:40 PM Harsha  wrote:
> >>> > >
> >>> > > > KIP looks good. +1 (binding)
> >>> > > >
> >>> > > > Thanks,
> >>> > > > Harsha
> >>> > > >
> >>> > > > On Wed, Sep 19, 2018, at 7:44 AM, Rajini Sivaram wrote:
> >>> > > > > Hi Ron,
> >>> > > > >
> >>> > > > > Thanks for the KIP!
> >>> > > > >
> >>> > > > > +1 (binding)
> >>> > > > >
> >>> > > > > On Tue, Sep 18, 2018 at 6:24 PM, Konstantin Chukhlomin <
> >>> > > > chuhlo...@gmail.com>
> >>> > > > > wrote:
> >>> > > > >
> >>> > > > > > +1 (non binding)
> >>> > > > > >
> >>> > > > > > > On Sep 18, 2018, at 1:18 PM, michael.kamin...@nytimes.com
> wrote:
> >>> > > > > > >
> >>> > > > > > >
> >>> > > > > > >
> >>> > > > > > > On 2018/09/18 14:59:09, Ron Dagostino 
> wrote:
> >>> > > > > > >> Hi everyone.  I would like to start the vote for KIP-368:
> >>> > > > > > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >>> > > > > > 368%3A+Allow+SASL+Connections+to+Periodically+Re-
> Authenticate
> >>> > > > > > >>
> >>> > > > > > >> This KIP proposes adding the ability for SASL clients
> (and brokers
> >>> > > > when
> >>> > > > > > a
> >>> > > > > > >> SASL mechanism is the inter-broker protocol) to
> re-authenticate
> >>> > > > their
> >>> > > > > > >> connections to brokers and for brokers to close
> connections that
> >>> > > > > > continue
> >>> > > 

Re: [VOTE] KIP 368: Allow SASL Connections to Periodically Re-Authenticate

2018-09-24 Thread Ron Dagostino
Still looking for a final +1 binding vote to go with the 9 votes so far (2 
binding, 7 non-binding).

Ron

> On Sep 24, 2018, at 3:53 PM, Ron Dagostino  wrote:
> 
>  **Please vote** . It's getting late in the day and this KIP still requires 1 
> more binding up-vote to be considered for the 2.1.0 release.
> 
> The current vote is 2 binding +1 votes (Rajini and Harsha) and 7 non-binding 
> +1 votes (myself, Mike, Konstantin, Boerge, Edoardo, Stanislav, and Mickael).
> 
> Ron
> 
>> On Mon, Sep 24, 2018 at 9:47 AM Ron Dagostino  wrote:
>> Hi Everyone.  This KIP still requires 1 more binding up-vote to be 
>> considered for the 2.1.0 release.  **Please vote before today's end-of-day 
>> deadline.**
>> 
>> The current vote is 2 binding +1 votes (Rajini and Harsha) and 7 non-binding 
>> +1 votes (myself, Mike, Konstantin, Boerge, Edoardo, Stanislav, and Mickael).
>> 
>> Ron
>> 
>>> On Fri, Sep 21, 2018 at 11:48 AM Mickael Maison  
>>> wrote:
>>> +1 (non-binding)
>>> Thanks for the KIP, this is a very nice feature.
>>> On Fri, Sep 21, 2018 at 4:56 PM Stanislav Kozlovski
>>>  wrote:
>>> >
>>> > Thanks for the KIP, Ron!
>>> > +1 (non-binding)
>>> >
>>> > On Fri, Sep 21, 2018 at 5:26 PM Ron Dagostino  wrote:
>>> >
>>> > > Hi Everyone.  This KIP requires 1 more binding up-vote to be considered 
>>> > > for
>>> > > the 2.1.0 release; please vote before the Monday deadline.
>>> > >
>>> > > The current vote is 2 binding +1 votes (Rajini and Harsha) and 5
>>> > > non-binding +1 votes (myself, Mike, Konstantin, Boerge, and Edoardo).
>>> > >
>>> > > Ron
>>> > >
>>> > > On Wed, Sep 19, 2018 at 12:40 PM Harsha  wrote:
>>> > >
>>> > > > KIP looks good. +1 (binding)
>>> > > >
>>> > > > Thanks,
>>> > > > Harsha
>>> > > >
>>> > > > On Wed, Sep 19, 2018, at 7:44 AM, Rajini Sivaram wrote:
>>> > > > > Hi Ron,
>>> > > > >
>>> > > > > Thanks for the KIP!
>>> > > > >
>>> > > > > +1 (binding)
>>> > > > >
>>> > > > > On Tue, Sep 18, 2018 at 6:24 PM, Konstantin Chukhlomin <
>>> > > > chuhlo...@gmail.com>
>>> > > > > wrote:
>>> > > > >
>>> > > > > > +1 (non binding)
>>> > > > > >
>>> > > > > > > On Sep 18, 2018, at 1:18 PM, michael.kamin...@nytimes.com wrote:
>>> > > > > > >
>>> > > > > > >
>>> > > > > > >
>>> > > > > > > On 2018/09/18 14:59:09, Ron Dagostino  wrote:
>>> > > > > > >> Hi everyone.  I would like to start the vote for KIP-368:
>>> > > > > > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>>> > > > > > 368%3A+Allow+SASL+Connections+to+Periodically+Re-Authenticate
>>> > > > > > >>
>>> > > > > > >> This KIP proposes adding the ability for SASL clients (and 
>>> > > > > > >> brokers
>>> > > > when
>>> > > > > > a
>>> > > > > > >> SASL mechanism is the inter-broker protocol) to re-authenticate
>>> > > > their
>>> > > > > > >> connections to brokers and for brokers to close connections 
>>> > > > > > >> that
>>> > > > > > continue
>>> > > > > > >> to use expired sessions.
>>> > > > > > >>
>>> > > > > > >> Ron
>>> > > > > > >>
>>> > > > > > >
>>> > > > > > > +1 (non binding)
>>> > > > > >
>>> > > > > >
>>> > > >
>>> > >
>>> >
>>> >
>>> > --
>>> > Best,
>>> > Stanislav


Jenkins build is back to normal : kafka-trunk-jdk8 #2984

2018-09-24 Thread Apache Jenkins Server
See 




Jenkins build is back to normal : kafka-trunk-jdk10 #515

2018-09-24 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-358: Migrate Streams API to Duration instead of long ms times

2018-09-24 Thread Nikolay Izhikov
Hello, John.

Tests in my PR are green now.
Please do the review.

https://github.com/apache/kafka/pull/5682
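For readers skimming the thread, the KIP's change follows this before/after
pattern. TimeWindows is just one example of the affected APIs; the exact set
of deprecated methods is defined by the KIP and the PR above:

    import java.time.Duration;
    import org.apache.kafka.streams.kstream.TimeWindows;

    class Kip358Sketch {
        // Before: raw millisecond longs, easy to get wrong (deprecated by the KIP).
        TimeWindows before = TimeWindows.of(5 * 60 * 1000L);
        // After: java.time.Duration overloads added by the KIP.
        TimeWindows after = TimeWindows.of(Duration.ofMinutes(5));
    }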

On Mon, 24/09/2018 at 20:36 +0300, Nikolay Izhikov wrote:
> Hello, John.
> 
> Thank you.
> 
> There are failing tests in my PR.
> I'm fixing them right now.
> 
> Will mail you in the next few hours, after all tests become green again.
> 
> > On Mon, 24/09/2018 at 11:46 -0500, John Roesler wrote:
> > Hi Nikolay,
> > 
> > Thanks for the PR. I will review it.
> > 
> > -John
> > 
> > On Sat, Sep 22, 2018 at 2:36 AM Nikolay Izhikov  wrote:
> > 
> > > Hello
> > > 
> > > I've opened a PR [1] for this KIP.
> > > 
> > > [1] https://github.com/apache/kafka/pull/5682
> > > 
> > > John, can you take a look?
> > > 
> > > On Mon, 17/09/2018 at 20:16 +0300, Nikolay Izhikov wrote:
> > > > John,
> > > > 
> > > > Got it.
> > > > 
> > > > Will do my best to meet this deadline.
> > > > 
> > > > On Mon, 17/09/2018 at 11:52 -0500, John Roesler wrote:
> > > > > Yay! Thanks so much for sticking with this Nikolay.
> > > > > 
> > > > > I look forward to your PR!
> > > > > 
> > > > > Not to put pressure on you, but just to let you know, the deadline for
> > > > > getting your pr *merged* for 2.1 is _October 1st_,
> > > > > so you basically have 2 weeks to send the PR, have the reviews, and
> > > > > get it merged.
> > > > > 
> > > > > (see
> > > > > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=91554044)
> > > > > 
> > > > > Thanks again,
> > > > > -John
> > > > > 
> > > > > On Mon, Sep 17, 2018 at 10:29 AM Nikolay Izhikov 
> > > > > wrote:
> > > > > 
> > > > > > This KIP is now accepted with:
> > > > > > - 3 binding +1
> > > > > > - 2 non binding +1
> > > > > > 
> > > > > > Thanks, all.
> > > > > > 
> > > > > > Especially, John, Matthias, Guozhang, Bill, Damian!
> > > > > > 
> > > > > > On Thu, 13/09/2018 at 22:16 -0700, Guozhang Wang wrote:
> > > > > > > +1 (binding), thank you Nikolay!
> > > > > > > 
> > > > > > > Guozhang
> > > > > > > 
> > > > > > > > On Thu, Sep 13, 2018 at 9:39 AM, Matthias J. Sax <matth...@confluent.io>
> > > > > > > > wrote:
> > > > > > > 
> > > > > > > > Thanks for the KIP.
> > > > > > > > 
> > > > > > > > +1 (binding)
> > > > > > > > 
> > > > > > > > 
> > > > > > > > -Matthias
> > > > > > > > 
> > > > > > > > On 9/5/18 8:52 AM, John Roesler wrote:
> > > > > > > > > I'm a +1 (non-binding)
> > > > > > > > > 
> > > > > > > > > > On Mon, Sep 3, 2018 at 8:33 AM Nikolay Izhikov <nizhi...@apache.org>
> > > > > > > > > > wrote:
> > > > > > > > > 
> > > > > > > > > > Dear commiters.
> > > > > > > > > > 
> > > > > > > > > > Please, vote on a KIP.
> > > > > > > > > > 
> > > > > > > > > > On Fri, 31/08/2018 at 12:05 -0500, John Roesler wrote:
> > > > > > > > > > > Hi Nikolay,
> > > > > > > > > > > 
> > > > > > > > > > > You can start a PR any time, but we cannot merge it (and
> > > > > > > > > > > probably won't do serious reviews) until after the KIP is
> > > > > > > > > > > voted and approved.
> > > > > > > > > > > 
> > > > > > > > > > > Sometimes people start a PR during discussion just to help
> > > > > > > > > > > provide more context, but it's not required (and can also be
> > > > > > > > > > > distracting because the KIP discussion should avoid
> > > > > > > > > > > implementation details).
> > > > > > > > > > > 
> > > > > > > > > > > Let's wait one more day for any other comments and plan to
> > > > > > > > > > > start the vote on Monday if there are no other debates.
> > > > > > > > > > > 
> > > > > > > > > > > Once you start the vote, you have to leave it up for at
> > > > > > > > > > > least 72 hours, and it requires 3 binding votes to pass.
> > > > > > > > > > > Only Kafka Committers have binding votes
> > > > > > > > > > > (https://kafka.apache.org/committers).
> > > > > > > > > > > 
> > > > > > > > > > > Thanks,
> > > > > > > > > > > -John
> > > > > > > > > > > 
> > > > > > > > > > > On Fri, Aug 31, 2018 at 11:09 AM Bill Bejeck <bbej...@gmail.com>
> > > > > > > > > > > wrote:
> > > > > > > > > > > 
> > > > > > > > > > > > Hi Nickolay,
> > > > > > > > > > > > 
> > > > > > > > > > > > Thanks for the clarification.
> > > > > > > > > > > > 
> > > > > > > > > > > > -Bill
> > > > > > > > > > > > 
> > > > > > > > > > > > On Fri, Aug 31, 2018 at 11:59 AM Nikolay Izhikov <nizhi...@apache.org>
> > > > > > > > > > > > wrote:
> > > > > > > > > > > > 
> > > > > > > > > > > > > Hello, John.
> > > > > > > > > > > > > 
> > > > > > > > > > > > > This is my first KIP, so, please, help me with kafka
> > > > > > > > > > > > > development

Re: [VOTE] KIP 368: Allow SASL Connections to Periodically Re-Authenticate

2018-09-24 Thread Ron Dagostino
 **Please vote** . It's getting late in the day and this KIP still requires
1 more binding up-vote to be considered for the 2.1.0 release.

The current vote is 2 binding +1 votes (Rajini and Harsha) and 7
non-binding +1 votes (myself, Mike, Konstantin, Boerge, Edoardo, Stanislav,
and Mickael).

Ron

On Mon, Sep 24, 2018 at 9:47 AM Ron Dagostino  wrote:

> Hi Everyone.  This KIP still requires 1 more binding up-vote to be
> considered for the 2.1.0 release.  **Please vote before today's end-of-day
> deadline.**
>
> The current vote is 2 binding +1 votes (Rajini and Harsha) and 7
> non-binding +1 votes (myself, Mike, Konstantin, Boerge, Edoardo, Stanislav,
> and Mickael).
>
> Ron
>
> On Fri, Sep 21, 2018 at 11:48 AM Mickael Maison 
> wrote:
>
>> +1 (non-binding)
>> Thanks for the KIP, this is a very nice feature.
>> On Fri, Sep 21, 2018 at 4:56 PM Stanislav Kozlovski
>>  wrote:
>> >
>> > Thanks for the KIP, Ron!
>> > +1 (non-binding)
>> >
>> > On Fri, Sep 21, 2018 at 5:26 PM Ron Dagostino 
>> wrote:
>> >
>> > > Hi Everyone.  This KIP requires 1 more binding up-vote to be
>> considered for
>> > > the 2.1.0 release; please vote before the Monday deadline.
>> > >
>> > > The current vote is 2 binding +1 votes (Rajini and Harsha) and 5
>> > > non-binding +1 votes (myself, Mike, Konstantin, Boerge, and Edoardo).
>> > >
>> > > Ron
>> > >
>> > > On Wed, Sep 19, 2018 at 12:40 PM Harsha  wrote:
>> > >
>> > > > KIP looks good. +1 (binding)
>> > > >
>> > > > Thanks,
>> > > > Harsha
>> > > >
>> > > > On Wed, Sep 19, 2018, at 7:44 AM, Rajini Sivaram wrote:
>> > > > > Hi Ron,
>> > > > >
>> > > > > Thanks for the KIP!
>> > > > >
>> > > > > +1 (binding)
>> > > > >
>> > > > > On Tue, Sep 18, 2018 at 6:24 PM, Konstantin Chukhlomin <
>> > > > chuhlo...@gmail.com>
>> > > > > wrote:
>> > > > >
>> > > > > > +1 (non binding)
>> > > > > >
>> > > > > > > On Sep 18, 2018, at 1:18 PM, michael.kamin...@nytimes.com
>> wrote:
>> > > > > > >
>> > > > > > >
>> > > > > > >
>> > > > > > > On 2018/09/18 14:59:09, Ron Dagostino 
>> wrote:
>> > > > > > >> Hi everyone.  I would like to start the vote for KIP-368:
>> > > > > > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>> > > > > > 368%3A+Allow+SASL+Connections+to+Periodically+Re-Authenticate
>> > > > > > >>
>> > > > > > >> This KIP proposes adding the ability for SASL clients (and
>> brokers
>> > > > when
>> > > > > > a
>> > > > > > >> SASL mechanism is the inter-broker protocol) to
>> re-authenticate
>> > > > their
>> > > > > > >> connections to brokers and for brokers to close connections
>> that
>> > > > > > continue
>> > > > > > >> to use expired sessions.
>> > > > > > >>
>> > > > > > >> Ron
>> > > > > > >>
>> > > > > > >
>> > > > > > > +1 (non binding)
>> > > > > >
>> > > > > >
>> > > >
>> > >
>> >
>> >
>> > --
>> > Best,
>> > Stanislav
>>
>


Re: [VOTE] KIP-339: Create a new IncrementalAlterConfigs API

2018-09-24 Thread Colin McCabe
On Mon, Sep 24, 2018, at 12:18, Gwen Shapira wrote:
> On Mon, Sep 24, 2018 at 12:04 PM Colin McCabe  wrote:
> >
> > On Mon, Sep 24, 2018, at 11:11, Gwen Shapira wrote:
> > > Can you explain more why we can't add "incremental" to the existing API 
> > > and
> > > then deprecate the old behavior? The "rejected" section says: "We would
> > > also not have been able to deprecate the non-incremental mode." but I'm 
> > > not
> > > sure why not.
> >
> > Hi Gwen,
> >
> > We talked about this previously.  If we extend the existing API, then we
> > can't change the behavior of existing programs, which means that
> > non-incremental needs to continue to be the default.  Changing the default
> > to incremental would be a breaking change which would silently alter the
> > behavior of existing programs.  Also, the actions of append, subtract, etc.
> > don't fit in the existing API.
> 
> Got it.
> 
> > > Having two APIs "Alter" and "Modify" with slightly different behavior
> > > that
> > > is not obvious from their name (i.e. would anyone remember which one is
> > > incremental?) seems pretty bad.
> >
> > The KIP doesn't propose having two APIs named "alter" and "modify".  The
> > new API is named IncrementalAlterConfigs.
> 
> You are right, of course. There are a few spots that mention ModifyConfigs
> and that got me a bit confused.

Oh, good point.  I found two typos where it still said "modifyConfigs" in the 
KIP text and changed it to be "incrementalAlterConfigs" as it should be.

best,
Colin

> 
> >
> > best,
> > Colin
> >
> > >  Add the fact that in databases, "alter" is
> > > incremental and things will get confusing pretty fast. Obviously if
> > > deprecating the old behavior is impossible, then we have no choice -
> > > but I
> > > don't see why it would be impossible.
> > >
> > > Gwen
> > >
> > > On Mon, Sep 24, 2018 at 10:29 AM Colin McCabe 
> wrote:
> > > >
> > > > Hi all,
> > > >
> > > > I would like to start voting on KIP-339, which creates a new
> > > IncrementalAlterConfigs API.
> > > >
> > > > The KIP is described here:
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-339%3A+Create+a+new+ModifyConfigs+API
> > > >
> > > > Previous discussion:
> > > > https://sematext.com/opensee/m/Kafka/uyzND1OYRKh2RrGba1
> > > >
> > > > best,
> > > > Colin
> > >
> > >
> > >
> > > --
> > > Gwen Shapira
> > > Product Manager | Confluent
> > > 650.450.2760 | @gwenshap
> > > Follow us: Twitter | blog
> 
> 
> 
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog


Build failed in Jenkins: kafka-trunk-jdk10 #514

2018-09-24 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] KAFKA-7430: Improve Transformer interface JavaDoc (#5675)

--
[...truncated 2.21 MB...]

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 STARTED

org.apache.kafka.streams.test.OutputVerifierTest > 
shouldFailIfValueIsDifferentWithNullReverseForCompareValueTimestampWithProducerRecord
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairsAndCustomTimestamps
 PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicNameAndTimestamp PASSED


Re: [DISCUSS] KIP-375: TopicCommand to use AdminClient

2018-09-24 Thread Viktor Somogyi-Vass
Hi Gwen,

Thanks for your feedback. It is the latter, so passing extra connection
properties for the admin client. I'll try to make that clearer in the KIP.
The same option name is used in the ConfigCommand, so that's why I named it
"command-config".

Cheers,
Viktor
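In other words, the tool loads the --command-config properties file and hands
it to the AdminClient. A rough sketch of that plumbing; the file name, the
bootstrap default, and the exact option handling are illustrative, not the
KIP's final design:

    import java.io.FileInputStream;
    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;

    class TopicCommandSketch {
        static void describeTopic(String commandConfigFile) throws Exception {
            Properties props = new Properties();
            try (FileInputStream in = new FileInputStream(commandConfigFile)) {
                props.load(in);  // e.g. SSL/SASL client connection settings
            }
            props.putIfAbsent(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
                "localhost:9092");  // illustrative default
            try (AdminClient admin = AdminClient.create(props)) {
                admin.describeTopics(Collections.singleton("my-topic"))
                    .all().get();
            }
        }
    }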


On Mon, Sep 24, 2018 at 8:18 PM Gwen Shapira  wrote:

> The "use admin client" part is amazing and thank you.
>
> I'm confused about "commandConfig" - is this a list of configurations for
> use with --config option? Or a list of properties for connecting to brokers
> (like SSL and such)? If the former, it seems unrelated.
>
> On Mon, Sep 24, 2018 at 7:25 AM Viktor Somogyi-Vass <
> viktorsomo...@gmail.com>
> wrote:
>
> > Hi All,
> >
> > I wrote up a relatively simple KIP about improving the Kafka protocol and
> > the TopicCommand tool to support the new Java based AdminClient and
> > hopefully to deprecate the Zookeeper side of it.
> >
> > I would be happy to receive some opinions about this. In general I think
> > this would be an important addition as this is one of the few left but
> > important tools that still uses direct Zookeeper connection.
> >
> > Here is the link for the KIP:
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-375%3A+TopicCommand+to+use+AdminClient
> >
> > Thanks,
> > Viktor
> >
>
>
> --
> *Gwen Shapira*
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter  | blog
> 
>


Re: [VOTE] KIP-339: Create a new IncrementalAlterConfigs API

2018-09-24 Thread Gwen Shapira
On Mon, Sep 24, 2018 at 12:04 PM Colin McCabe  wrote:
>
> On Mon, Sep 24, 2018, at 11:11, Gwen Shapira wrote:
> > Can you explain more why we can't add "incremental" to the existing API and
> > then deprecate the old behavior? The "rejected" section says: "We would
> > also not have been able to deprecate the non-incremental mode." but I'm not
> > sure why not.
>
> Hi Gwen,
>
> We talked about this previously.  If we extend the existing API, then we
> can't change the behavior of existing programs, which means that
> non-incremental needs to continue to be the default.  Changing the default
> to incremental would be a breaking change which would silently alter the
> behavior of existing programs.  Also, the actions of append, subtract, etc.
> don't fit in the existing API.
>

Got it.

> >
> > Having two APIs "Alter" and "Modify" with slightly different behavior that
> > is not obvious from their name (i.e. would anyone remember which one is
> > incremental?) seems pretty bad.
>
> The KIP doesn't propose having two APIs named "alter" and "modify".  The
> new API is named IncrementalAlterConfigs.

You are right, of course. There are a few spots that mention ModifyConfigs
and that got me a bit confused.

>
> best,
> Colin
>
> >  Add the fact that in databases, "alter" is
> > incremental and things will get confusing pretty fast. Obviously if
> > deprecating the old behavior is impossible, then we have no choice - but I
> > don't see why it would be impossible.
> >
> > Gwen
> >
> > On Mon, Sep 24, 2018 at 10:29 AM Colin McCabe wrote:
> > >
> > > Hi all,
> > >
> > > I would like to start voting on KIP-339, which creates a new
> > IncrementalAlterConfigs API.
> > >
> > > The KIP is described here:
> > > https://cwiki.apache.org/confluence/display/KAFKA/KIP-339%3A+Create+a+new+ModifyConfigs+API
> > >
> > > Previous discussion:
> > > https://sematext.com/opensee/m/Kafka/uyzND1OYRKh2RrGba1
> > >
> > > best,
> > > Colin
> >
> >
> >
> > --
> > Gwen Shapira
> > Product Manager | Confluent
> > 650.450.2760 | @gwenshap
> > Follow us: Twitter | blog



--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


Re: [VOTE] KIP-339: Create a new IncrementalAlterConfigs API

2018-09-24 Thread Gwen Shapira
+1

On Mon, Sep 24, 2018 at 10:29 AM Colin McCabe  wrote:

> Hi all,
>
> I would like to start voting on KIP-339, which creates a new
> IncrementalAlterConfigs API.
>
> The KIP is described here:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-339%3A+Create+a+new+ModifyConfigs+API
>
> Previous discussion:
> https://sematext.com/opensee/m/Kafka/uyzND1OYRKh2RrGba1
>
> best,
> Colin
>


-- 
*Gwen Shapira*
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter  | blog



Re: [VOTE] KIP-339: Create a new IncrementalAlterConfigs API

2018-09-24 Thread Colin McCabe
On Mon, Sep 24, 2018, at 11:11, Gwen Shapira wrote:
> Can you explain more why we can't add "incremental" to the existing API and
> then deprecate the old behavior? The "rejected" section says: "We would
> also not have been able to deprecate the non-incremental mode." but I'm not
> sure why not.

Hi Gwen,

We talked about this previously.  If we extend the existing API, then we can't 
change the behavior of existing programs, which means that non-incremental 
needs to continue to be the default.  Changing the default to incremental would 
be a breaking change which would silently alter the behavior of existing 
programs.  Also, the actions of append, subtract, etc. don't fit in the 
existing API.
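The difference is easiest to see side by side. In this sketch the first call
is the existing API and the second is as the KIP proposes it (signatures per
the KIP draft):

    import java.util.Collections;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AlterConfigOp;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;

    class AlterSemanticsSketch {
        static void demo(AdminClient admin, ConfigResource topic) throws Exception {
            // Existing alterConfigs: the supplied Config REPLACES the resource's
            // entire configuration, so any key omitted here silently reverts.
            admin.alterConfigs(Collections.singletonMap(topic,
                new Config(Collections.singleton(
                    new ConfigEntry("retention.ms", "86400000"))))).all().get();

            // Proposed incrementalAlterConfigs: only the named key is touched.
            admin.incrementalAlterConfigs(Collections.singletonMap(topic,
                Collections.singleton(new AlterConfigOp(
                    new ConfigEntry("retention.ms", "86400000"),
                    AlterConfigOp.OpType.SET)))).all().get();
        }
    }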

> 
> Having two APIs "Alter" and "Modify" with slightly different behavior that
> is not obvious from their name (i.e. would anyone remember which one is
> incremental?) seems pretty bad.

The KIP doesn't propose having two APIs named "alter" and "modify".  The new 
API is named IncrementalAlterConfigs.

best,
Colin

>  Add the fact that in databases, "alter" is
> incremental and things will get confusing pretty fast. Obviously if
> deprecating the old behavior is impossible, then we have no choice - but I
> don't see why it would be impossible.
> 
> Gwen
> 
> On Mon, Sep 24, 2018 at 10:29 AM Colin McCabe  wrote:
> >
> > Hi all,
> >
> > I would like to start voting on KIP-339, which creates a new
> IncrementalAlterConfigs API.
> >
> > The KIP is described here:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-339%3A+Create+a+new+ModifyConfigs+API
> >
> > Previous discussion:
> > https://sematext.com/opensee/m/Kafka/uyzND1OYRKh2RrGba1
> >
> > best,
> > Colin
> 
> 
> 
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog


Re: [DISCUSS] KIP-158: Kafka Connect should allow source connectors to set topic-specific settings for new topics

2018-09-24 Thread Andrew Otto
FWIW, I’d find this feature useful.

On Mon, Sep 24, 2018 at 2:42 PM Randall Hauch  wrote:

> Ryanne,
>
> If your connector is already using the AdminClient, then you as the
> developer have a choice of switching to the new Connect-based functionality
> or keeping the existing use of the AdminClient. If the connector uses both
> mechanisms (which I wouldn't recommend, simply because of the complexity of
> it for a user), then the topic will be created by the first mechanism to
> actually attempt and successfully create the topic(s) in the Kafka cluster
> that the Connect worker uses. As mentioned in the KIP, "This feature ...
> does not change the topic-specific settings on any existing topics." IOW,
> if the topic already exists, it can't be created again and therefore the
> `topic.creation.*` properties will not apply for that existing topic.
>
> > Do these settings apply to internal topics created by the framework on
> > behalf of a connector, e.g. via KafkaConfigBackingStore?
>
> No, they don't, and I'm happy to add a clarification to the KIP if you feel
> it is necessary.
>
> > I'd have the same questions if e.g. transformations could be ignored or
> > overridden by connectors or if there were multiple places to specify what
> > serde to use.
>
> There are multiple places that converters can be defined: the worker config
> defines the key and value converters that will be used for all connectors,
> except when a connector defines its own key and value converters.
>
> > I don't see how controlling topic creation based on topic name is
> something
> > we should support across all connectors, as if it is some established
> > pattern or universally useful.
>
> Topics are identified by name, and when you create a topic with specific
> settings or change a topic's settings you identify the topic by name. The
> fact that this KIP uses regular expressions to match topic names doesn't
> seem surprising, since we use regexes elsewhere.
>
> Best regards
>
> On Mon, Sep 24, 2018 at 1:24 PM Ryanne Dolan 
> wrote:
>
> > Randall,
> >
> > Say I've got a connector that needs to control topic creation. What I
> need
> > is an AdminClient s.t. my connector can do what it knows it needs to do.
> > This KIP doesn't address the issues that have been brought up wrt
> > configuration, principals, ACL etc, since I'll still need to construct my
> > own AdminClient.
> >
> > Should such a connector ignore your proposed configuration settings?
> Should
> > it use its own principal and its own configuration properties? How does
> > my AdminClient's settings interact with your proposed settings and the
> > existing cluster settings?
> >
> > What happens when a user specifies topic creation settings in a connector
> > config, but the connector then applies its own topic creation logic? Are
> > the configurations silently ignored? If not, how can a connector honor
> your
> > proposed settings?
> >
> > Do these settings apply to internal topics created by the framework on
> > behalf of a connector, e.g. via KafkaConfigBackingStore?
> >
> > When do the cluster settings get applied? Only after 3 layers of
> > fall-through?
> >
> > I'd have the same questions if e.g. transformations could be ignored or
> > overridden by connectors or if there were multiple places to specify what
> > serde to use.
> >
> > I don't see how controlling topic creation based on topic name is
> something
> > we should support across all connectors, as if it is some established
> > pattern or universally useful.
> >
> > Ryanne
> >
> > On Mon, Sep 24, 2018, 10:14 AM Randall Hauch  wrote:
> >
> > > Hi, Ryanne. My apologies for not responding earlier, as I was on a long
> > > holiday.
> > >
> > > Thanks for your feedback and questions about this KIP. You've raised
> > > several points in the discussion so far, so let me try to address most
> of
> > > them.
> > >
> > > IIUC, one of your major concerns is that this KIP introduces a new way
> to
> > > define configurations for topics. That is true, and the whole reason is
> > to
> > > simplify the user experience for people using source connectors. You
> still
> > > have the freedom to manually pre-create topics before running a
> > connector,
> > > or to rely upon the broker automatically creating topics for the
> > connectors
> > > when those topics don't yet exist -- in both cases, don't include
> > anything
> > > about topic creation in your connector configurations. In fact, when
> you
> > do
> > > this, Connect uses the current behavior by assuming the topics exist or
> > > will be autocreated with the proper configurations.
> > >
> > > But for many environments, this existing approach is not enough. First,
> > if
> > > you're relying upon the broker to autocreate topics, then the broker's
> > > single set of default topic settings must match the requirements of
> your
> > > new topics. This can be difficult when you're running multiple kinds of
> > > connectors with differing expectations. Consider using a CDC 

Re: KIP-213 - Scalable/Usable Foreign-Key KTable joins - Rebooted.

2018-09-24 Thread Jan Filipiak




On 24.09.2018 16:26, Adam Bellemare wrote:

@Guozhang

Thanks for the information. This is indeed something that will be extremely
useful for this KIP.

@Jan
Thanks for your explanations. That being said, I will not be moving ahead
with an implementation using the reshuffle/groupBy solution as you propose.
However, if you wish to implement it yourself off of my current PR
and submit it as a competitive alternative, I would be more than happy to
help vet that as an alternate solution. As it stands right now, I do not
really have more time to invest into alternatives without there being a
strong indication from the binding voters as to which they would prefer.



Hey, no worries at all. I think I personally gave up on the Streams DSL
some time ago; otherwise I would have pulled this KIP through already. I
am currently reimplementing my own DSL based on the PAPI.




I will look at finishing up my PR with the windowed state store in the next
week or so, exercising it via tests, and then I will come back for final
discussions. In the meantime, I hope that any of the binding voters could
take a look at the KIP in the wiki. I have updated it according to the
latest plan:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-213+Support+non-key+joining+in+KTable

I have also updated the KIP PR to use a windowed store. This could be
replaced by the results of KIP-258 whenever they are completed.
https://github.com/apache/kafka/pull/5527

Thanks,

Adam


Is the HighWatermarkResolverProcessorSupplier already updated in the PR? I
expected it to change to Windowed/Long types. Am I missing something?






On Fri, Sep 14, 2018 at 2:24 PM, Guozhang Wang  wrote:


Correction on my previous email: KAFKA-5533 is the wrong link, as it is for
corresponding changelog mechanisms. But as part of KIP-258 we do want to
have "handling out-of-order data for source KTable" such that instead of
blindly apply the updates to the materialized store, i.e. following offset
ordering, we will reject updates that are older than the current key's
timestamps, i.e. following timestamp ordering.


Guozhang

On Fri, Sep 14, 2018 at 11:21 AM, Guozhang Wang 
wrote:


Hello Adam,

Thanks for the explanation. Regarding the final step (i.e. the high
watermark store, now altered to be replaced with a window store), I think
another current on-going KIP may actually help:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-
258%3A+Allow+to+Store+Record+Timestamps+in+RocksDB


This is for adding the timestamp into a key-value store (i.e. only for
non-windowed KTable), and then one of its usages, as described in
https://issues.apache.org/jira/browse/KAFKA-5533, is that we can then
"reject" updates from the source topics if their timestamp is smaller than
the current key's latest update timestamp. I think it is very similar to
what you have in mind for high watermark based filtering, while you only
need to make sure that the timestamps of the joining records are correctly
inherited through the whole topology to the final stage.

Note that this KIP is for key-value store and hence non-windowed KTables
only, but for windowed KTables we do not really have a good support for
their joins anyways (https://issues.apache.org/jira/browse/KAFKA-7107) I
think we can just consider non-windowed KTable-KTable non-key joins for
now. In which case, KIP-258 should help.
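
To sketch the timestamp-ordering idea in code (a toy, self-contained
stand-in, not the actual KIP-258 implementation; a HashMap plays the role of
the materialized store, and "Row" is a hypothetical value/timestamp pair):

{code}
import java.util.HashMap;
import java.util.Map;

public class TimestampOrderedStore {
    static final class Row {
        final String value;
        final long timestamp;
        Row(String value, long timestamp) {
            this.value = value;
            this.timestamp = timestamp;
        }
    }

    private final Map<String, Row> store = new HashMap<>();

    // Apply an update only if it is not older than the current row for the
    // same key: timestamp ordering instead of blind offset ordering.
    boolean maybeApply(String key, String value, long timestamp) {
        Row current = store.get(key);
        if (current != null && timestamp < current.timestamp) {
            return false; // out-of-order (late) update: reject it
        }
        store.put(key, new Row(value, timestamp));
        return true;
    }
}
{code}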



Guozhang



On Wed, Sep 12, 2018 at 9:20 PM, Jan Filipiak 
wrote:



On 11.09.2018 18:00, Adam Bellemare wrote:


Hi Guozhang

Current highwater mark implementation would grow endlessly based on the
primary key of the original event. It is a pair of (<... key>, <...>). This
is used to differentiate between late arrivals and new updates. My newest
proposal would be to replace it with a Windowed state store of Duration N.
This would allow the same behaviour, but cap the size based on time. This
should allow for all late-arriving events to be processed, and should be
customizable by the user to tailor to their own needs (ie: perhaps just 10
minutes of window, or perhaps 7 days...).


Hi Adam, using time based retention can do the trick here. Even if I would
still like to see the automatic repartitioning optional, since I would just
reshuffle again. With a windowed store I am a little bit sceptical about
how to determine the window, so essentially one could run into problems
when the rapid change happens near a window border. I will check your
implementation in detail; if it's problematic, we could still check _all_
windows on read with a not-too-bad performance impact, I guess. Will let
you know if the implementation would be correct as is. I wouldn't like to
assume that offset(A) < offset(B) => timestamp(A) < timestamp(B). I think
we can't expect that.




@Jan
I believe I understand what you mean now - thanks for the diagram, it did
really help. You are correct that I do not have the original primary key
available, and I can see that if it was available then you would be able
to add and remove events from the 

Re: [DISCUSS] KIP-158: Kafka Connect should allow source connectors to set topic-specific settings for new topics

2018-09-24 Thread Randall Hauch
Ryanne,

If your connector is already using the AdminClient, then you as the
developer have a choice of switching to the new Connect-based functionality
or keeping the existing use of the AdminClient. If the connector uses both
mechanisms (which I wouldn't recommend, simply because of the complexity of
it for a user), then the topic will be created by the first mechanism to
actually attempt and successfully create the topic(s) in the Kafka cluster
that the Connect worker uses. As mentioned in the KIP, "This feature ...
does not change the topic-specific settings on any existing topics." IOW,
if the topic already exists, it can't be created again and therefore the
`topic.creation.*` properties will not apply for that existing topic.

> Do these settings apply to internal topics created by the framework on
> behalf of a connector, e.g. via KafkaConfigBackingStore?

No, they don't, and I'm happy to add a clarification to the KIP if you feel
it is necessary.

> I'd have the same questions if e.g. transformations could be ignored or
> overridden by connectors or if there were multiple places to specify what
> serde to use.

There are multiple places that converters can be defined: the worker config
defines the key and value converters that will be used for all connectors,
except when a connector defines its own key and value converters.

> I don't see how controlling topic creation based on topic name is
something
> we should support across all connectors, as if it is some established
> pattern or universally useful.

Topics are identified by name, and when you create a topic with specific
settings or change a topic's settings you identify the topic by name. The
fact that this KIP uses regular expressions to match topic names doesn't
seem surprising, since we use regexes elsewhere.
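
To illustrate (property names here follow the KIP draft and may change
before it is finalized, and the connector class is made up), a connector
config with one topic-creation rule might look like this:

{code}
import java.util.HashMap;
import java.util.Map;

public class TopicCreationConfigExample {
    public static void main(String[] args) {
        Map<String, String> config = new HashMap<>();
        config.put("name", "my-source-connector");
        config.put("connector.class", "com.example.MySourceConnector");
        // One named rule; its regex decides which new topics it applies to.
        config.put("topic.creation.groups", "compacted");
        config.put("topic.creation.compacted.include", "db\\..*");
        config.put("topic.creation.compacted.replication.factor", "3");
        config.put("topic.creation.compacted.partitions", "1");
        config.put("topic.creation.compacted.cleanup.policy", "compact");
        // Defaults for any new topic matched by no rule.
        config.put("topic.creation.default.replication.factor", "3");
        config.put("topic.creation.default.partitions", "5");
    }
}
{code}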

Best regards

On Mon, Sep 24, 2018 at 1:24 PM Ryanne Dolan  wrote:

> Randall,
>
> Say I've got a connector that needs to control topic creation. What I need
> is an AdminClient s.t. my connector can do what it knows it needs to do.
> This KIP doesn't address the issues that have been brought up wrt
> configuration, principals, ACL etc, since I'll still need to construct my
> own AdminClient.
>
> Should such a connector ignore your proposed configuration settings? Should
> it use its own principal and its own configuration properties? How does
> my AdminClient's settings interact with your proposed settings and the
> existing cluster settings?
>
> What happens when a user specifies topic creation settings in a connector
> config, but the connector then applies its own topic creation logic? Are
> the configurations silently ignored? If not, how can a connector honor your
> proposed settings?
>
> Do these settings apply to internal topics created by the framework on
> behalf of a connector, e.g. via KafkaConfigBackingStore?
>
> When do the cluster settings get applied? Only after 3 layers of
> fall-through?
>
> I'd have the same questions if e.g. transformations could be ignored or
> overridden by connectors or if there were multiple places to specify what
> serde to use.
>
> I don't see how controlling topic creation based on topic name is something
> we should support across all connectors, as if it is some established
> pattern or universally useful.
>
> Ryanne
>
> On Mon, Sep 24, 2018, 10:14 AM Randall Hauch  wrote:
>
> > Hi, Ryanne. My apologies for not responding earlier, as I was on a long
> > holiday.
> >
> > Thanks for your feedback and questions about this KIP. You've raised
> > several points in the discussion so far, so let me try to address most of
> > them.
> >
> > IIUC, one of your major concerns is that this KIP introduces a new way to
> > define configurations for topics. That is true, and the whole reason is
> to
> > simplify the user experience for people using source connectors. You still
> > have the freedom to manually pre-create topics before running a
> connector,
> > or to rely upon the broker automatically creating topics for the
> connectors
> > when those topics don't yet exist -- in both cases, don't include
> anything
> > about topic creation in your connector configurations. In fact, when you
> do
> > this, Connect uses the current behavior by assuming the topics exist or
> > will be autocreated with the proper configurations.
> >
> > But for many environments, this existing approach is not enough. First,
> if
> > you're relying upon the broker to autocreate topics, then the broker's
> > single set of default topic settings must match the requirements of your
> > new topics. This can be difficult when you're running multiple kinds of
> > connectors with differing expectations. Consider using a CDC connector
> that
> > expects compaction, a high-volume web service connector that should not
> use
> > compaction but expects deletion after 7 days, and another connector with
> > lower volume that uses 30 day retention. Or, consider connectors that are
> > producing to topics that have very different message 

Re: Review and merge - KAFKA-6764

2018-09-24 Thread Suman B N
This has been approved by one reviewer. Can anyone review and merge this -
https://github.com/apache/kafka/pull/5637?

On Fri, Sep 14, 2018 at 5:37 PM Suman B N  wrote:

> Can you pls check this?
>
> On Wed, Sep 12, 2018 at 3:12 PM Suman B N  wrote:
>
>> Team,
>>
>> Review and merge below pull request.
>> Merge Request: https://github.com/apache/kafka/pull/5637
>> Jira: https://issues.apache.org/jira/browse/KAFKA-6764
>>
>> --
>> *Suman*
>> *OlaCabs*
>>
>
>
> --
> *Suman*
> *OlaCabs*
>


-- 
*Suman*
*OlaCabs*


Re: [DISCUSS] KIP-158: Kafka Connect should allow source connectors to set topic-specific settings for new topics

2018-09-24 Thread Ryanne Dolan
Randall,

Say I've got a connector that needs to control topic creation. What I need
is an AdminClient s.t. my connector can do what it knows it needs to do.
This KIP doesn't address the issues that have been brought up wrt
configuration, principals, ACL etc, since I'll still need to construct my
own AdminClient.

Should such a connector ignore your proposed configuration settings? Should
it use its own principal and its own configuration properties? How does
my AdminClient's settings interact with your proposed settings and the
existing cluster settings?

What happens when a user specifies topic creation settings in a connector
config, but the connector then applies its own topic creation logic? Are
the configurations silently ignored? If not, how can a connector honor your
proposed settings?

Do these settings apply to internal topics created by the framework on
behalf of a connector, e.g. via KafkaConfigBackingStore?

When do the cluster settings get applied? Only after 3 layers of
fall-through?

I'd have the same questions if e.g. transformations could be ignored or
overridden by connectors or if there were multiple places to specify what
serde to use.

I don't see how controlling topic creation based on topic name is something
we should support across all connectors, as if it is some established
pattern or universally useful.

Ryanne

On Mon, Sep 24, 2018, 10:14 AM Randall Hauch  wrote:

> Hi, Ryanne. My apologies for not responding earlier, as I was on a long
> holiday.
>
> Thanks for your feedback and questions about this KIP. You've raised
> several points in the discussion so far, so let me try to address most of
> them.
>
> IIUC, one of your major concerns is that this KIP introduces a new way to
> define configurations for topics. That is true, and the whole reason is to
> simplify the user experience for people using source connectors. You still
> have the freedom to manually pre-create topics before running a connector,
> or to rely upon the broker automatically creating topics for the connectors
> when those topics don't yet exist -- in both cases, don't include anything
> about topic creation in your connector configurations. In fact, when you do
> this, Connect uses the current behavior by assuming the topics exist or
> will be autocreated with the proper configurations.
>
> But for many environments, this existing approach is not enough. First, if
> you're relying upon the broker to autocreate topics, then the broker's
> single set of default topic settings must match the requirements of your
> new topics. This can be difficult when you're running multiple kinds of
> connectors with differing expectations. Consider using a CDC connector that
> expects compaction, a high-volume web service connector that should not use
> compaction but expects deletion after 7 days, and another connector with
> lower volume that uses 30 day retention. Or, consider connectors that are
> producing to topics that have very different message characteristics:
> different sizes, different throughputs, different partitions, etc. The only
> way to work around this is to pre-create the topics, but this adds more
> complexity and room for errors, especially when a single instance of some
> source connectors can write to dozens (or even hundreds) of topics.
>
> Second, many operators prefer (or are required) to disable topic
> autocreation, since simple mistakes with command line tools can result in
> new topics. In these cases, users have no choice but to manually precreate
> the topics, which complicates the process of running a connector and, as
> mentioned above, increases the risk that something goes wrong.
>
> Third, the reason why this KIP introduces a way for connector
> implementations to override some topic settings is because some source
> connectors have very specific requirements. When I wrote the first Debezium
> CDC connectors, many first-time users didn't precreate the topics as
> recommended in the documentation, and didn't change their brokers' default
> topic settings. Only after a few days when they tried reconsuming the full
> streams did they realize that Kafka had deleted messages older than the
> default retention period. Debezium expects / requires compacted topics, so
> all kinds of things went wrong. Connect is often one of the first ways in
> which people get introduced to Kafka, and they simply don't yet have an
> understanding of many of the details that you or I don't have to think
> twice about.
>
> You suggested that maybe Connect should just expose the Admin API. That's
> possible, but IMO it's very heavyweight and complex. The whole point of
> Connect's design is to abstract the connector developer away from most of
> the details of Kafka -- it doesn't even expose the producer and consumer
> APIs, which are much simpler. IMO it would be a mistake to require source
> connector developers to deal with the Admin API -- I even have trouble
> writing code that uses it to properly create 

Re: [DISCUSS] KIP-375: TopicCommand to use AdminClient

2018-09-24 Thread Gwen Shapira
The "use admin client" part is amazing and thank you.

I'm confused about "commandConfig" - is this a list of configurations for
use with --config option? Or a list of properties for connecting to brokers
(like SSL and such)? If the former, it seems unrelated.
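
For what it's worth, other Kafka tools use --command-config for
broker-connection properties (SSL and the like). If KIP-375 follows that
convention, an AdminClient-backed TopicCommand would load it roughly like
this (a sketch, assuming the convention carries over):

{code}
import java.io.FileInputStream;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;

public class CommandConfigExample {
    public static void main(String[] args) throws Exception {
        // A --command-config style file: client connection settings such as
        // SSL, not per-topic "--config" overrides.
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream("client.properties")) {
            props.load(in);
        }
        props.put("bootstrap.servers", "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            System.out.println(admin.describeCluster().nodes().get());
        }
    }
}
{code}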

On Mon, Sep 24, 2018 at 7:25 AM Viktor Somogyi-Vass 
wrote:

> Hi All,
>
> I wrote up a relatively simple KIP about improving the Kafka protocol and
> the TopicCommand tool to support the new Java based AdminClient and
> hopefully to deprecate the Zookeeper side of it.
>
> I would be happy to receive some opinions about this. In general I think
> this would be an important addition, as this is one of the few remaining
> important tools that still use a direct Zookeeper connection.
>
> Here is the link for the KIP:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-375%3A+TopicCommand+to+use+AdminClient
>
> Thanks,
> Viktor
>


-- 
*Gwen Shapira*
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter  | blog



Re: [VOTE] KIP-302 - Enable Kafka clients to use all DNS resolved IP addresses

2018-09-24 Thread Gwen Shapira
+1 (binding)

On Tue, Sep 18, 2018 at 7:51 AM Edoardo Comar  wrote:

> Hi All,
>
> I'd like to start the vote on KIP-302:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-302+-+Enable+Kafka+clients+to+use+all+DNS+resolved+IP+addresses
>
> We'd love to get this in 2.1.0
> Kip freeze is just a few days away ... please cast your votes  :-):-)
>
> Thanks!!
> Edo
>
> --
>
> Edoardo Comar
>
> IBM Message Hub
>
> IBM UK Ltd, Hursley Park, SO21 2JN
> Unless stated otherwise above:
> IBM United Kingdom Limited - Registered in England and Wales with number
> 741598.
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
>


-- 
*Gwen Shapira*
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter  | blog



Build failed in Jenkins: kafka-trunk-jdk8 #2983

2018-09-24 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
remote: Counting objects: 10014, done.
remote: Compressing objects: 100% (4/4), done.
[git fetch progress output elided]

Re: [VOTE] KIP-339: Create a new IncrementalAlterConfigs API

2018-09-24 Thread Gwen Shapira
Can you explain more why we can't add "incremental" to the existing API and
then deprecate the old behavior? The "rejected" section says: "We would
also not have been able to deprecate the non-incremental mode." but I'm not
sure why not.

Having two APIs "Alter" and "Modify" with slightly different behavior that
is not obvious from their name (i.e. would anyone remember which one is
incremental?) seems pretty bad. Add the fact that in databases, "alter" is
incremental and things will get confusing pretty fast. Obviously if
deprecating the old behavior is impossible, then we have no choice - but I
don't see why it would be impossible.

Gwen

On Mon, Sep 24, 2018 at 10:29 AM Colin McCabe  wrote:
>
> Hi all,
>
> I would like to start voting on KIP-339, which creates a new
IncrementalAlterConfigs API.
>
> The KIP is described here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-339%3A+Create+a+new+ModifyConfigs+API
>
> Previous discussion:
> https://sematext.com/opensee/m/Kafka/uyzND1OYRKh2RrGba1
>
> best,
> Colin



--
Gwen Shapira
Product Manager | Confluent
650.450.2760 | @gwenshap
Follow us: Twitter | blog


Re: [VOTE] KIP-358: Migrate Streams API to Duration instead of long ms times

2018-09-24 Thread Nikolay Izhikov
Hello, John.

Thank you.

There are failing tests in my PR.
I'm fixing them right now.

Will mail you in the next few hours, after all tests become green again.
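
As a one-line illustration of what the KIP changes (the Duration overloads
are the KIP's proposal; exact signatures may still shift during review):

{code}
import java.time.Duration;

import org.apache.kafka.streams.kstream.TimeWindows;

public class DurationApiExample {
    public static void main(String[] args) {
        // Before: raw millisecond longs, easy to misread or get wrong.
        TimeWindows before = TimeWindows.of(5 * 60 * 1000L);
        // After (per KIP-358): self-describing java.time.Duration arguments.
        TimeWindows after = TimeWindows.of(Duration.ofMinutes(5));
        System.out.println(before.size() == after.size()); // true
    }
}
{code}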

On Mon, 24/09/2018 at 11:46 -0500, John Roesler wrote:
> Hi Nikolay,
> 
> Thanks for the PR. I will review it.
> 
> -John
> 
> On Sat, Sep 22, 2018 at 2:36 AM Nikolay Izhikov  wrote:
> 
> > Hello
> > 
> > I've opened a PR [1] for this KIP.
> > 
> > [1] https://github.com/apache/kafka/pull/5682
> > 
> > John, can you take a look?
> > 
> > On Mon, 17/09/2018 at 20:16 +0300, Nikolay Izhikov wrote:
> > > John,
> > > 
> > > Got it.
> > > 
> > > Will do my best to meet this deadline.
> > > 
> > > On Mon, 17/09/2018 at 11:52 -0500, John Roesler wrote:
> > > > Yay! Thanks so much for sticking with this Nikolay.
> > > > 
> > > > I look forward to your PR!
> > > > 
> > > > Not to put pressure on you, but just to let you know, the deadline for
> > > > getting your pr *merged* for 2.1 is _October 1st_,
> > > > so you basically have 2 weeks to send the PR, have the reviews, and
> > 
> > get it
> > > > merged.
> > > > 
> > > > (see
> > > > 
> > 
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=91554044)
> > > > 
> > > > Thanks again,
> > > > -John
> > > > 
> > > > On Mon, Sep 17, 2018 at 10:29 AM Nikolay Izhikov 
> > > > wrote:
> > > > 
> > > > > This KIP is now accepted with:
> > > > > - 3 binding +1
> > > > > - 2 non binding +1
> > > > > 
> > > > > Thanks, all.
> > > > > 
> > > > > Especially, John, Matthias, Guozhang, Bill, Damian!
> > > > > 
> > > > > On Thu, 13/09/2018 at 22:16 -0700, Guozhang Wang wrote:
> > > > > > +1 (binding), thank you Nikolay!
> > > > > > 
> > > > > > Guozhang
> > > > > > 
> > > > > > On Thu, Sep 13, 2018 at 9:39 AM, Matthias J. Sax <
> > 
> > matth...@confluent.io>
> > > > > > wrote:
> > > > > > 
> > > > > > > Thanks for the KIP.
> > > > > > > 
> > > > > > > +1 (binding)
> > > > > > > 
> > > > > > > 
> > > > > > > -Matthias
> > > > > > > 
> > > > > > > On 9/5/18 8:52 AM, John Roesler wrote:
> > > > > > > > I'm a +1 (non-binding)
> > > > > > > > 
> > > > > > > > On Mon, Sep 3, 2018 at 8:33 AM Nikolay Izhikov <
> > 
> > nizhi...@apache.org>
> > > > > > > 
> > > > > > > wrote:
> > > > > > > > 
> > > > > > > > > Dear commiters.
> > > > > > > > > 
> > > > > > > > > Please, vote on a KIP.
> > > > > > > > > 
> > > > > > > > > On Fri, 31/08/2018 at 12:05 -0500, John Roesler wrote:
> > > > > > > > > > Hi Nikolay,
> > > > > > > > > > 
> > > > > > > > > > You can start a PR any time, but we cannot merge it (and
> > 
> > probably
> > > > > 
> > > > > won't
> > > > > > > 
> > > > > > > do
> > > > > > > > > > serious reviews) until after the KIP is voted and approved.
> > > > > > > > > > 
> > > > > > > > > > Sometimes people start a PR during discussion just to help
> > > > > 
> > > > > provide more
> > > > > > > > > > context, but it's not required (and can also be distracting
> > > > > 
> > > > > because the
> > > > > > > > > 
> > > > > > > > > KIP
> > > > > > > > > > discussion should avoid implementation details).
> > > > > > > > > > 
> > > > > > > > > > Let's wait one more day for any other comments and plan to
> > 
> > start
> > > > > 
> > > > > the
> > > > > > > 
> > > > > > > vote
> > > > > > > > > > on Monday if there are no other debates.
> > > > > > > > > > 
> > > > > > > > > > Once you start the vote, you have to leave it up for at
> > 
> > least 72
> > > > > 
> > > > > hours,
> > > > > > > > > 
> > > > > > > > > and
> > > > > > > > > > it requires 3 binding votes to pass. Only Kafka Committers
> > 
> > have
> > > > > 
> > > > > binding
> > > > > > > > > > votes (https://kafka.apache.org/committers).
> > > > > > > > > > 
> > > > > > > > > > Thanks,
> > > > > > > > > > -John
> > > > > > > > > > 
> > > > > > > > > > On Fri, Aug 31, 2018 at 11:09 AM Bill Bejeck <
> > 
> > bbej...@gmail.com>
> > > > > > > 
> > > > > > > wrote:
> > > > > > > > > > 
> > > > > > > > > > > Hi Nickolay,
> > > > > > > > > > > 
> > > > > > > > > > > Thanks for the clarification.
> > > > > > > > > > > 
> > > > > > > > > > > -Bill
> > > > > > > > > > > 
> > > > > > > > > > > On Fri, Aug 31, 2018 at 11:59 AM Nikolay Izhikov <
> > > > > 
> > > > > nizhi...@apache.org
> > > > > > > > > > > wrote:
> > > > > > > > > > > 
> > > > > > > > > > > > Hello, John.
> > > > > > > > > > > > 
> > > > > > > > > > > > This is my first KIP, so, please, help me with kafka
> > > > > 
> > > > > development
> > > > > > > > > 
> > > > > > > > > process.
> > > > > > > > > > > > 
> > > > > > > > > > > > Should I start to work on PR now? Or should I wait for
> > 
> > a
> > > > > 
> > > > > "+1" from
> > > > > > > > > > > > commiters?
> > > > > > > > > > > > 
> > > > > > > > > > > > On Fri, 31/08/2018 at 10:33 -0500, John Roesler wrote:
> > > > > > > > > > > > > I see. I guess that once we are in the PR-reviewing
> > 
> > phase,
> > > > > 
> > > > > we'll
> > > > > > > > > 
> > > > > > > > > be in
> > > > > > > > > > > 
> > > > > > > > > > > a
> > > > > > > > > > > > > better position to see 

[VOTE] KIP-339: Create a new IncrementalAlterConfigs API

2018-09-24 Thread Colin McCabe
Hi all,

I would like to start voting on KIP-339, which creates a new 
IncrementalAlterConfigs API.

The KIP is described here: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-339%3A+Create+a+new+ModifyConfigs+API

Previous discussion:
https://sematext.com/opensee/m/Kafka/uyzND1OYRKh2RrGba1

best,
Colin


[jira] [Created] (KAFKA-7437) Store leader epoch in offset commit metadata

2018-09-24 Thread Jason Gustafson (JIRA)
Jason Gustafson created KAFKA-7437:
--

 Summary: Store leader epoch in offset commit metadata
 Key: KAFKA-7437
 URL: https://issues.apache.org/jira/browse/KAFKA-7437
 Project: Kafka
  Issue Type: Improvement
Reporter: Jason Gustafson


This patch implements the changes described in KIP-320 for the persistence of 
leader epoch information in the offset commit metadata: 
https://cwiki.apache.org/confluence/display/KAFKA/KIP-320%3A+Allow+fetchers+to+detect+and+handle+log+truncation
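
A sketch of the consumer-side shape this implies: the committed offset
carries an optional leader epoch, which the group coordinator then persists.
The OffsetAndMetadata constructor below is the shape KIP-320 proposes; the
final API may differ.

{code}
import java.util.Collections;
import java.util.Optional;
import java.util.Properties;

import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class LeaderEpochCommitExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "example-group");
        props.put("key.deserializer", StringDeserializer.class.getName());
        props.put("value.deserializer", StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // The leader epoch observed when the record was fetched travels
            // with the committed offset, letting fetchers later detect log
            // truncation instead of silently rewinding.
            consumer.commitSync(Collections.singletonMap(
                new TopicPartition("my-topic", 0),
                new OffsetAndMetadata(42L, Optional.of(5), "")));
        }
    }
}
{code}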



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Build failed in Jenkins: kafka-trunk-jdk8 #2982

2018-09-24 Thread Apache Jenkins Server
See 

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on ubuntu-eu2 (ubuntu trusty) in workspace 

 > git rev-parse --is-inside-work-tree # timeout=10
Fetching changes from the remote Git repository
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git 
 > +refs/heads/*:refs/remotes/origin/*
ERROR: Error fetching remote repo 'origin'
hudson.plugins.git.GitException: Failed to fetch from 
https://github.com/apache/kafka.git
at hudson.plugins.git.GitSCM.fetchFrom(GitSCM.java:888)
at hudson.plugins.git.GitSCM.retrieveChanges(GitSCM.java:1155)
at hudson.plugins.git.GitSCM.checkout(GitSCM.java:1186)
at hudson.scm.SCM.checkout(SCM.java:504)
at hudson.model.AbstractProject.checkout(AbstractProject.java:1208)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.defaultCheckout(AbstractBuild.java:574)
at jenkins.scm.SCMCheckoutStrategy.checkout(SCMCheckoutStrategy.java:86)
at 
hudson.model.AbstractBuild$AbstractBuildExecution.run(AbstractBuild.java:499)
at hudson.model.Run.execute(Run.java:1794)
at hudson.model.FreeStyleBuild.run(FreeStyleBuild.java:43)
at hudson.model.ResourceController.execute(ResourceController.java:97)
at hudson.model.Executor.run(Executor.java:429)
Caused by: hudson.plugins.git.GitException: Command "git fetch --tags 
--progress https://github.com/apache/kafka.git 
+refs/heads/*:refs/remotes/origin/*" returned status code 128:
stdout: 
stderr: error: Could not read 6e79e5da0308783ba646378efc44f018cb4f39ac
remote: Counting objects: 10008, done.
remote: Compressing objects: 100% (3/3), done.
[git fetch progress output elided]

Re: [VOTE] KIP-358: Migrate Streams API to Duration instead of long ms times

2018-09-24 Thread John Roesler
Hi Nikolay,

Thanks for the PR. I will review it.

-John

On Sat, Sep 22, 2018 at 2:36 AM Nikolay Izhikov  wrote:

> Hello
>
> I've opened a PR [1] for this KIP.
>
> [1] https://github.com/apache/kafka/pull/5682
>
> John, can you take a look?
>
> On Mon, 17/09/2018 at 20:16 +0300, Nikolay Izhikov wrote:
> > John,
> >
> > Got it.
> >
> > Will do my best to meet this deadline.
> >
> > On Mon, 17/09/2018 at 11:52 -0500, John Roesler wrote:
> > > Yay! Thanks so much for sticking with this Nikolay.
> > >
> > > I look forward to your PR!
> > >
> > > Not to put pressure on you, but just to let you know, the deadline for
> > > getting your pr *merged* for 2.1 is _October 1st_,
> > > so you basically have 2 weeks to send the PR, have the reviews, and
> get it
> > > merged.
> > >
> > > (see
> > >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=91554044)
> > >
> > > Thanks again,
> > > -John
> > >
> > > On Mon, Sep 17, 2018 at 10:29 AM Nikolay Izhikov 
> > > wrote:
> > >
> > > > This KIP is now accepted with:
> > > > - 3 binding +1
> > > > - 2 non binding +1
> > > >
> > > > Thanks, all.
> > > >
> > > > Especially, John, Matthias, Guozhang, Bill, Damian!
> > > >
> > > > On Thu, 13/09/2018 at 22:16 -0700, Guozhang Wang wrote:
> > > > > +1 (binding), thank you Nikolay!
> > > > >
> > > > > Guozhang
> > > > >
> > > > > On Thu, Sep 13, 2018 at 9:39 AM, Matthias J. Sax <
> matth...@confluent.io>
> > > > > wrote:
> > > > >
> > > > > > Thanks for the KIP.
> > > > > >
> > > > > > +1 (binding)
> > > > > >
> > > > > >
> > > > > > -Matthias
> > > > > >
> > > > > > On 9/5/18 8:52 AM, John Roesler wrote:
> > > > > > > I'm a +1 (non-binding)
> > > > > > >
> > > > > > > On Mon, Sep 3, 2018 at 8:33 AM Nikolay Izhikov <
> nizhi...@apache.org>
> > > > > >
> > > > > > wrote:
> > > > > > >
> > > > > > > > Dear commiters.
> > > > > > > >
> > > > > > > > Please, vote on a KIP.
> > > > > > > >
> > > > > > > > On Fri, 31/08/2018 at 12:05 -0500, John Roesler wrote:
> > > > > > > > > Hi Nikolay,
> > > > > > > > >
> > > > > > > > > You can start a PR any time, but we cannot merge it (and
> probably
> > > >
> > > > won't
> > > > > >
> > > > > > do
> > > > > > > > > serious reviews) until after the KIP is voted and approved.
> > > > > > > > >
> > > > > > > > > Sometimes people start a PR during discussion just to help
> > > >
> > > > provide more
> > > > > > > > > context, but it's not required (and can also be distracting
> > > >
> > > > because the
> > > > > > > >
> > > > > > > > KIP
> > > > > > > > > discussion should avoid implementation details).
> > > > > > > > >
> > > > > > > > > Let's wait one more day for any other comments and plan to
> start
> > > >
> > > > the
> > > > > >
> > > > > > vote
> > > > > > > > > on Monday if there are no other debates.
> > > > > > > > >
> > > > > > > > > Once you start the vote, you have to leave it up for at
> least 72
> > > >
> > > > hours,
> > > > > > > >
> > > > > > > > and
> > > > > > > > > it requires 3 binding votes to pass. Only Kafka Committers
> have
> > > >
> > > > binding
> > > > > > > > > votes (https://kafka.apache.org/committers).
> > > > > > > > >
> > > > > > > > > Thanks,
> > > > > > > > > -John
> > > > > > > > >
> > > > > > > > > On Fri, Aug 31, 2018 at 11:09 AM Bill Bejeck <
> bbej...@gmail.com>
> > > > > >
> > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Hi Nickolay,
> > > > > > > > > >
> > > > > > > > > > Thanks for the clarification.
> > > > > > > > > >
> > > > > > > > > > -Bill
> > > > > > > > > >
> > > > > > > > > > On Fri, Aug 31, 2018 at 11:59 AM Nikolay Izhikov <
> > > >
> > > > nizhi...@apache.org
> > > > > > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > > Hello, John.
> > > > > > > > > > >
> > > > > > > > > > > This is my first KIP, so, please, help me with kafka
> > > >
> > > > development
> > > > > > > >
> > > > > > > > process.
> > > > > > > > > > >
> > > > > > > > > > > Should I start to work on PR now? Or should I wait for
> a
> > > >
> > > > "+1" from
> > > > > > > > > > > commiters?
> > > > > > > > > > >
> > > > > > > > > > > On Fri, 31/08/2018 at 10:33 -0500, John Roesler wrote:
> > > > > > > > > > > > I see. I guess that once we are in the PR-reviewing
> phase,
> > > >
> > > > we'll
> > > > > > > >
> > > > > > > > be in
> > > > > > > > > >
> > > > > > > > > > a
> > > > > > > > > > > > better position to see what else can/should be done,
> and
> > > >
> > > > we can
> > > > > > > >
> > > > > > > > talk
> > > > > > > > > > >
> > > > > > > > > > > about
> > > > > > > > > > > > follow-on work at that time.
> > > > > > > > > > > >
> > > > > > > > > > > > Thanks for the clarification,
> > > > > > > > > > > > -John
> > > > > > > > > > > >
> > > > > > > > > > > > On Fri, Aug 31, 2018 at 1:19 AM Nikolay Izhikov <
> > > > > > > >
> > > > > > > > nizhi...@apache.org>
> > > > > > > > > > >
> > > > > > > > > > > wrote:
> > > > > > > > > > > >
> > > > > > > > > > > > > Hello, Bill
> > > > > > > > > > > > >
> > > > > > > > > > > > > > In the "Proposed Changes" 

Re: [VOTE] KIP-331 Add default implementation to close() and configure() for Serializer, Deserializer and Serde

2018-09-24 Thread John Roesler
Thanks for the reply.
I think my question may have been ambiguous.
I was confirming that there would be just one method *without* a default
implementation, and therefore that the lambda I quoted would actually work.
Thanks,
-John
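
Concretely, assuming KIP-331 leaves serialize(topic, data) as the single
abstract method (with default no-op configure()/close() and a default for
the headers overload), a sketch like this should compile even without a
@FunctionalInterface annotation:

{code}
import java.nio.charset.StandardCharsets;

import org.apache.kafka.common.serialization.Serializer;

public class LambdaSerializerExample {
    public static void main(String[] args) {
        // serialize(topic, data) is the only abstract method left, so a
        // lambda is enough to define a Serializer.
        Serializer<String> serializer =
            (topic, data) -> data.getBytes(StandardCharsets.UTF_8);
        byte[] bytes = serializer.serialize("my-topic", "hello");
        System.out.println(bytes.length); // 5
    }
}
{code}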

On Sun, Sep 23, 2018 at 10:53 AM Chia-Ping Tsai  wrote:

> > This means that I could supply a serializer like this: "(topic, myData)
> ->
> > myData.toByteArrayOrWhatever()", right? (and of course similar for
> > deserializers).
>
> That code will work well if KIP-331 is merged. However, the lambda
> expressions will fail if we add default implementation to all methods of
> Serializer (ditto for Deserializer). KIP-336 had discussed such changes
> which may happen in the future. That is why KIP-331 doesn't add
> FunctionalInterface annotation to Serializer and Deserializer.
>
> Cheers,
> chia-ping
>
> On 2018/09/21 21:57:51, John Roesler  wrote:
> > If I understand the way that this works in light of KIP-331, all the
> > methods of Serializer, for example, will have defaults except for :
> "byte[]
> > serialize(String topic, T data);"
> >
> > This means that I could supply a serializer like this: "(topic, myData)
> ->
> > myData.toByteArrayOrWhatever()", right? (and of course similar for
> > deserializers).
> >
> > This sounds right to me, so if that is right, I'm still a non-binding +1.
> >
> > Thanks,
> > -John
> >
> > On Thu, Sep 20, 2018 at 10:12 PM Chia-Ping Tsai 
> wrote:
> >
> > > KIP-336[1] has been merged so it is time to activate this thread
> > > (KIP-331[2]). Last discussion is about "Should we add
> FunctionalInterface
> > > annotation to Serializer and Deserializer". In discussion of KIP-336 we
> > > mentioned that we probably add the default implementation for headless
> > > method later. Hence, adding FunctionalInterface annotation is not
> suitable
> > > now.
> > >
> > > KIP-331 has removed the change of adding FunctionalInterface
> annotation.
> > > Please take a look again.
> > >
> > > [1]
> > >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=87298242
> > > [2]
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-331+Add+default+implementation+to+close%28%29+and+configure%28%29+for+Serializer%2C+Deserializer+and+Serde
> > >
> > > Cheers,
> > > Chia-Ping
> > >
> > >
> > > On 2018/07/20 10:56:59, Ismael Juma  wrote:
> > > > Part of the motivation for this KIP is to make these interfaces
> > > functional
> > > > interfaces. But I think that may not be desirable due to the method
> that
> > > > passes headers. So, it doesn't make sense to discuss two separate
> changes
> > > > to the same interfaces in isolation, we should figure out how we want
> > > them
> > > > to work holistically.
> > > >
> > > > Ismael
> > > >
> > > > On Fri, Jul 20, 2018 at 3:50 AM Chia-Ping Tsai 
> > > wrote:
> > > >
> > > > > > The KIP needs 3 binding votes to pass.
> > > > >
> > > > > Thanks for the reminder. I will reopen the ballot box until we get
> 3
> > > > > tickets.
> > > > >
> > > > > > I still think we should include the details of how things will
> look
> > > like
> > > > > > with the headers being passed to serializers/deserializers to
> ensure
> > > > > > things actually make sense as a whole.
> > > > >
> > > > > This KIP is unrelated to both methods - serialize() and
> > > deserialize().
> > > > > We won't add the default implementation to them in this KIP. Please
> > > correct
> > > > > me if I didn't catch what you said.
> > > > >
> > > > > Cheers,
> > > > > Chia-Ping
> > > > >
> > > > > On 2018/07/09 01:55:41, Ismael Juma  wrote:
> > > > > > The KIP needs 3 binding votes to pass. I still think we should
> > > include
> > > > > the
> > > > > > details of how things will look like with the headers being
> passed to
> > > > > > serializers/deserializers to ensure things actually make sense
> as a
> > > > > whole.
> > > > > >
> > > > > > Ismael
> > > > > >
> > > > > >
> > > > > > On Sun, 8 Jul 2018, 18:31 Chia-Ping Tsai, 
> > > wrote:
> > > > > >
> > > > > > > All,
> > > > > > >
> > > > > > > The 72 hours has passed. The vote result of KIP-313 is shown
> below.
> > > > > > >
> > > > > > > 1 binding vote (Matthias J. Sax)
> > > > > > > 4 non-binding votes (John Roesler, Richard Yu, vito jeng and
> > > Chia-Ping)
> > > > > > >
> > > > > > > Cheers,
> > > > > > > Chia-Ping
> > > > > > >
> > > > > > > On 2018/07/05 14:45:01, Chia-Ping Tsai 
> > > wrote:
> > > > > > > > hi all,
> > > > > > > >
> > > > > > > > I would like to start voting on "KIP-331 Add default
> > > implementation
> > > > > to
> > > > > > > close() and configure() for Serializer, Deserializer and Serde"
> > > > > > > >
> > > > > > > >
> > > > > > >
> > > > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-331+Add+default+implementation+to+close%28%29+and+configure%28%29+for+Serializer%2C+Deserializer+and+Serde
> > > > > > > >
> > > > > > > > Cheers,
> > > > > > > > Chia-Ping
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>


[jira] [Resolved] (KAFKA-2471) Replicas Order and Leader out of sync

2018-09-24 Thread Manikumar (JIRA)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manikumar resolved KAFKA-2471.
--
Resolution: Auto Closed

Closing inactive issue.  Please reopen if the issue still exists in newer 
versions.

> Replicas Order and Leader out of sync
> -
>
> Key: KAFKA-2471
> URL: https://issues.apache.org/jira/browse/KAFKA-2471
> Project: Kafka
>  Issue Type: Bug
>  Components: replication
>Affects Versions: 0.8.2.1
>Reporter: Manish Sharma
>Priority: Major
>
> Our 2 kafka brokers ( 1 & 5) were rebooted due to hypervisor going down and I 
> think we encountered a similar
> issue that was discussed in thread "Problem with node after restart no 
> partitions?".  The resulting JIRA is closed without conclusions or
> recovery steps. 
> Our brokers 5 and 1 were also running the ZooKeeper nodes of our cluster
> (along with broker 2).
> We are running Kafka version 0.8.2.1.
> After doing controlled restarts over all brokers a few times, our cluster 
> seems OK now.
> But there are some topics that have replicas out of sync with leaders.
> Partition 2 below has Leader 5 and replicas order should be 5,1 
> {code}
> Topic:2015-01-12PartitionCount:3ReplicationFactor:2 
> Configs:
> Topic: 2015-01-12   Partition: 0Leader: 4   Replicas: 4,3 
>   Isr: 3,4
> Topic: 2015-01-12   Partition: 1Leader: 0   Replicas: 0,4 
>   Isr: 0,4
> Topic: 2015-01-12   Partition: 2Leader: 5   Replicas: 1,5 
>   Isr: 5
> {code}
> I tried reassigning partition 2 replicas to broker 5 (leader) and broker 0.
> Now the partition reassignment has been stuck for more than a day. 
> %) /usr/local/kafka/bin/kafka-reassign-partitions.sh --zookeeper 
> kafka-trgt05:2182 --reassignment-json-file 2015-01-12_2.json --verify
> Status of partition reassignment:
> Reassignment of partition [2015-01-12,2] is still in progress
> And in ZooKeeper, reassign_partitions is empty:
> [zk: kafka-trgt05:2182(CONNECTED) 2] ls /admin/reassign_partitions
> []
> This seems like a bug being triggered that leaves the cluster in an 
> unhealthy state.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (KAFKA-7436) overriding auto.offset.reset to none for executor

2018-09-24 Thread Sanjay Kumar (JIRA)
Sanjay Kumar created KAFKA-7436:
---

 Summary: overriding auto.offset.reset to none for executor
 Key: KAFKA-7436
 URL: https://issues.apache.org/jira/browse/KAFKA-7436
 Project: Kafka
  Issue Type: Bug
Reporter: Sanjay Kumar


Hi All,

I am setting auto.offset.reset to "latest", but it's throwing a warning 
saying "overriding auto.offset.reset to none for executor". 

For example, upon shutting down the stream application or after an unexpected 
failure, how will it be able to retrieve the previous offset?

Any help on this would be highly appreciated. Looking forward to your valuable 
suggestions.

Thanks in advance.

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-302 - Enable Kafka clients to use all DNS resolved IP addresses

2018-09-24 Thread Damian Guy
Thanks  +1 (binding)

On Sun, 23 Sep 2018 at 23:43 Edoardo Comar  wrote:

> bumping the thread as the KIP needs 2 more binding votes ... pretty please
> ...
> --
>
> Edoardo Comar
>
> IBM Event Streams
> IBM UK Ltd, Hursley Park, SO21 2JN
>
>
>
>
> From:   "Skrzypek, Jonathan" 
> To: "dev@kafka.apache.org" 
> Date:   20/09/2018 17:08
> Subject:RE: [VOTE] KIP-302 - Enable Kafka clients to use all DNS
> resolved IP addresses
>
>
>
> Ok thanks.
>
> +1 (non-binding)
>
> The only thing I'm not too sure about is the naming around configuration
> entries for this, both for KIP-235 and KIP-302.
>
> KIP-235 expands DNS A records for bootstrap :
> resolve.canonical.bootstrap.servers.only
> KIP-302 expands DNS A records for advertised.listeners : use.all.dns.ips
>
I'm a bit concerned that those don't easily explain what this does.
Documentation helps obviously, but do we have suggestions for better
naming?
I'm fine if we go for those, but it's worth thinking about, I think.

Also, we probably want a third option to have both? That's why we
initially put in ".only" for KIP-235's parameter.
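
Whatever the final names, both options come down to iterating over every
resolved address for a hostname instead of only the first one; a minimal
sketch of that expansion (hypothetical hostname):

{code}
import java.net.InetAddress;

public class DnsExpansionExample {
    public static void main(String[] args) throws Exception {
        // "Use all DNS resolved IP addresses": try every A record for the
        // host when connecting, instead of only the first one returned.
        String bootstrapHost = "kafka.example.com";
        for (InetAddress address : InetAddress.getAllByName(bootstrapHost)) {
            System.out.println(address.getHostAddress());
        }
    }
}
{code}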
>
> Jonathan Skrzypek
>
>
> -Original Message-
> From: Edoardo Comar [mailto:eco...@uk.ibm.com]
> Sent: 20 September 2018 09:55
> To: dev@kafka.apache.org
> Subject: RE: [VOTE] KIP-302 - Enable Kafka clients to use all DNS resolved
> IP addresses
>
> Hi Jonathan
> we'll update the PR for KIP-302 soon. We do not need KIP-235 actually,
> they only share the name of the configuration entry.
>
> thanks
> Edo
>
> PS - we need votes :-)
>
> --
>
> Edoardo Comar
>
> IBM Message Hub
>
> IBM UK Ltd, Hursley Park, SO21 2JN
>
>
>
> From:   "Skrzypek, Jonathan" 
> To: "dev@kafka.apache.org" 
> Date:   19/09/2018 16:12
> Subject:***UNCHECKED*** RE: [VOTE] KIP-302 - Enable Kafka clients
> to use all  DNS resolved IP addresses
>
>
>
> I'm assuming this needs KIP-235 to be merged.
> Unfortunately I've tripped over some merge issues with git and struggled
> to fix them.
> Hopefully this is fixed now, but any help is appreciated:
> https://github.com/apache/kafka/pull/4485
>
>
>
> Jonathan Skrzypek
>
>
>
> -Original Message-
> From: Eno Thereska [mailto:eno.there...@gmail.com]
> Sent: 19 September 2018 11:01
> To: dev@kafka.apache.org
> Subject: Re: [VOTE] KIP-302 - Enable Kafka clients to use all DNS resolved
> IP addresses
>
> +1 (non-binding).
>
> Thanks
> Eno
>
> On Wed, Sep 19, 2018 at 10:09 AM, Rajini Sivaram 
> wrote:
>
> > Hi Edo,
> >
> > Thanks for the KIP!
> >
> > +1 (binding)
> >
> > On Tue, Sep 18, 2018 at 3:51 PM, Edoardo Comar 
> wrote:
> >
> > > Hi All,
> > >
> > > I'd like to start the vote on KIP-302:
> > >
> > >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
>
>
> > > 302+-+Enable+Kafka+clients+to+use+all+DNS+resolved+IP+addresses
> > >
> > > We'd love to get this in 2.1.0
> > > Kip freeze is just a few days away ... please cast your votes  :-):-)
> > >
> > > Thanks!!
> > > Edo
> > >
> > > --
> > >
> > > Edoardo Comar
> > >
> > > IBM Message Hub
> > >
> > > IBM UK Ltd, Hursley Park, SO21 2JN
> > > Unless stated otherwise above:
> > > IBM United Kingdom Limited - Registered in England and Wales with
> number
> > > 741598.
> > > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6
> > 3AU
> > >
> >
>
> 
>
> Your Personal Data: We may collect and process information about you that
> may be subject to data protection laws. For more information about how we
> use and disclose your personal data, how we protect your information, our
> legal basis to use your information, your rights and who you can contact,
> please refer to:
> http://www.gs.com/privacy-notices
>
>
>
>
>
> Unless stated otherwise above:
> IBM United Kingdom Limited - Registered in England and Wales with number
> 741598.
> Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6 3AU
>
>
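
As an aside for readers following the naming debate above, a minimal sketch of
the client configuration, assuming the candidate property names discussed in
this thread (both names could still change before the KIPs land):

    import java.util.Properties;
    import org.apache.kafka.clients.CommonClientConfigs;

    public class DnsLookupConfigSketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(CommonClientConfigs.BOOTSTRAP_SERVERS_CONFIG,
                    "kafka.example.com:9092");
            // KIP-235 candidate: expand the bootstrap hostname into its DNS A records
            props.put("resolve.canonical.bootstrap.servers.only", "true");
            // KIP-302 candidate: try every IP address a broker hostname resolves to
            props.put("use.all.dns.ips", "true");
            System.out.println(props);
        }
    }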


[jira] [Created] (KAFKA-7435) Consider standardizing the config object pattern on interface/implementation.

2018-09-24 Thread John Roesler (JIRA)
John Roesler created KAFKA-7435:
---

 Summary: Consider standardizing the config object pattern on 
interface/implementation.
 Key: KAFKA-7435
 URL: https://issues.apache.org/jira/browse/KAFKA-7435
 Project: Kafka
  Issue Type: Improvement
  Components: streams
Reporter: John Roesler
 Fix For: 3.0.0


Currently, the majority of Streams's config objects are structured as an
"external" builder class (with protected state) and an "internal" subclass
exposing getters to the state. This is serviceable, but there is an alternative 
we can consider: to use an interface for the external API and the 
implementation class for the internal one.

Advantages:
 * we could use private state, which improves maintainability
 * the setters and getters would all be defined in the same class, improving 
readability
 * users browsing the public API would be able to look at an interface that 
contains less extraneous internal details than the current class
 * there is more flexibility in implementation

Alternatives
 * instead of external-class/internal-subclass, we could use an external 
*final* class with package-protected state and an internal accessor class (not 
a subclass, obviously). This would make it impossible for users to create
custom subclasses of our config objects, which is generally not allowed
already but today fails only at runtime with a ClassCastException.

Example implementation: [https://github.com/apache/kafka/pull/5677]

This change would break binary, but not source, compatibility, so the earliest 
we could consider it is 3.0.

To be clear, I'm *not* saying this *should* be done, just calling for a 
discussion. Otherwise, I'd make a KIP.

Thoughts?
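
To make the two patterns concrete, a minimal sketch with hypothetical names
(loosely modeled on Streams config objects, not actual Kafka code):

    // Today: an "external" builder class with protected state...
    class ExampleConfig<K, V> {
        protected String processorName;

        public ExampleConfig<K, V> withName(final String name) {
            this.processorName = name;
            return this;
        }
    }

    // ...and an "internal" subclass that exposes getters and is cast to at runtime.
    class ExampleConfigInternal<K, V> extends ExampleConfig<K, V> {
        String name() {
            return processorName;
        }
    }

    // Proposed: a public interface for users...
    interface ExampleConfigApi<K, V> {
        ExampleConfigApi<K, V> withName(String name);
    }

    // ...and a single internal implementation: private state, with the setter
    // and getter defined side by side.
    class ExampleConfigImpl<K, V> implements ExampleConfigApi<K, V> {
        private String processorName;

        @Override
        public ExampleConfigImpl<K, V> withName(final String name) {
            this.processorName = name;
            return this;
        }

        String name() {
            return processorName;
        }
    }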



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


Re: [VOTE] KIP-371: Add a configuration to build custom SSL principal name

2018-09-24 Thread Manikumar
Bump. This KIP requires one more binding vote.
Please take a look.

Thanks,

On Sun, Sep 23, 2018 at 9:14 PM Satish Duggana 
wrote:

> +1 (non binding)
>
> Thanks,
> Satish.
>
> On Fri, Sep 21, 2018 at 3:26 PM, Rajini Sivaram 
> wrote:
> > Hi Manikumar,
> >
> > Thanks for the KIP!
> >
> > +1 (binding)
> >
> > On Thu, Sep 20, 2018 at 8:53 PM, Priyank Shah 
> wrote:
> >
> >> +1(non-binding)
> >>
> >> On 9/20/18, 9:18 AM, "Harsha Chintalapani"  wrote:
> >>
> >> +1 (binding).
> >>
> >> Thanks,
> >> Harsha
> >>
> >>
> >> On September 19, 2018 at 5:19:51 AM, Manikumar (
> >> manikumar.re...@gmail.com) wrote:
> >>
> >> Hi All,
> >>
> >> I would like to start voting on KIP-371, which adds a configuration
> >> option
> >> for building custom SSL principal names.
> >>
> >> KIP:
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> >> 371%3A+Add+a+configuration+to+build+custom+SSL+principal+name
> >>
> >> Discussion Thread:
> >> https://lists.apache.org/thread.html/e346f5e3e3dd1feb863594e40eac1e
> >> d54138613a667f319b99344710@%3Cdev.kafka.apache.org%3E
> >>
> >> Thanks,
> >> Manikumar
> >>
> >>
> >>
>
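
For illustration, a hedged sketch of the proposed broker setting: the key name
is taken from the KIP, while the rule itself (extracting the CN from the
certificate's distinguished name) is only an example, not a recommendation:

    import java.util.Properties;

    public class SslPrincipalMappingSketch {
        public static void main(String[] args) {
            Properties brokerProps = new Properties();
            // Map e.g. "CN=kafkaclient,OU=ServiceUsers,O=Example" to "kafkaclient";
            // DEFAULT keeps the full DN for anything the rule does not match.
            brokerProps.put("ssl.principal.mapping.rules",
                    "RULE:^CN=(.*?),OU=ServiceUsers.*$/$1/,DEFAULT");
            System.out.println(brokerProps);
        }
    }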


Re: [DISCUSS] KIP-158: Kafka Connect should allow source connectors to set topic-specific settings for new topics

2018-09-24 Thread Randall Hauch
Hi, Ryanne. My apologies for not responding earlier, as I was on a long
holiday.

Thanks for your feedback and questions about this KIP. You've raised
several points in the discussion so far, so let me try to address most of
them.

IIUC, one of your major concerns is that this KIP introduces a new way to
define configurations for topics. That is true, and the whole reason is to
simplify the user experience for people using source connectors. You still
have the freedom to manually pre-create topics before running a connector,
or to rely upon the broker automatically creating topics for the connectors
when those topics don't yet exist -- in both cases, don't include anything
about topic creation in your connector configurations. In fact, when you do
this, Connect uses the current behavior by assuming the topics exist or
will be autocreated with the proper configurations.

But for many environments, this existing approach is not enough. First, if
you're relying upon the broker to autocreate topics, then the brokers
single set of default topic settings must match the requirements of your
new topics. This can be difficult when your running multiple kinds of
connectors with differing expectations. Consider using a CDC connector that
expects compaction, a high-volume web service connector that should not use
compaction but expects deletion after 7 days, and another connector with
lower volume that uses 30 day retention. Or, consider connectors that are
producing to topics that have very different message characteristics:
different sizes, different throughputs, different partitions, etc. The only
way to work around this is to pre-create the topics, but this adds more
complexity and room for errors, especially when a single instance of some
source connectors can write to dozens (or even hundreds) of topics.

Second, many operators prefer (or are required) to disable topic
autocreation, since simple mistakes with command line tools can result in
unwanted new topics. In such cases, users have no choice but to manually
precreate the topics, which complicates the process of running a connector
and, as mentioned above, increases the risk that something goes wrong.

Third, the reason this KIP introduces a way for connector
implementations to override some topic settings is that some source
connectors have very specific requirements. When I wrote the first Debezium
CDC connectors, many first-time users didn't precreate the topics as
recommended in the documentation, and didn't change their brokers' default
topic settings. Only after a few days when they tried reconsuming the full
streams did they realize that Kafka had deleted messages older than the
default retention period. Debezium expects / requires compacted topics, so
all kinds of things went wrong. Connect is often one of the first ways in
which people get introduced to Kafka, and they simply don't yet have an
understanding of many of the details that you or I wouldn't think twice
about.

You suggested that maybe Connect should just expose the Admin API. That's
possible, but IMO it's very heavyweight and complex. The whole point of
Connect's design is to abstract the connector developer away from most of
the details of Kafka -- it doesn't even expose the producer and consumer
APIs, which are much simpler. IMO it would be a mistake to require source
connector developers to deal with the Admin API -- I even have trouble
writing code that uses it to properly create topics, especially around
properly handling all of the potential error conditions.

You also mention that topic settings in a connector configuration might not
reflect the actual topic's settings. This is true, especially if the topic
already existed with different settings before the connector was run.
However, this is also very true of the broker's default topic settings,
which very often don't reflect the actual settings for all of the topics --
the defaults may have been changed, or topics are created manually with
very different settings. The only way to know the settings of a particular
topic is to get them for that topic.

The use of naming rules in the topic creation settings is intentional, and
it allows connector users to define topic settings for topics based upon
the names. In some cases this may require several rules to handle the
different topics, but most of the time a single rule may be all that's
required. I also don't agree that users will start naming topics to
simplify their rules -- many source connectors that write to more than one
topic often don't allow the user to specify the full name of the topics
anyway, and when they do they often only write to one topic.

I still think that the proposed KIP provides a simple way for most source
connector users to control (via configuration) the settings of the topics
that will be created by Connect for that connector, which works with all
existing source connectors out of the box and does not add additional
complexities for source connector developers.
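
To make this concrete, a hedged sketch of what per-connector topic-creation
settings could look like; the property names are illustrative of the KIP's
direction rather than final, and the connector class is a placeholder:

    import java.util.HashMap;
    import java.util.Map;

    public class TopicCreationConfigSketch {
        public static void main(String[] args) {
            Map<String, String> connectorConfig = new HashMap<>();
            connectorConfig.put("name", "inventory-cdc"); // placeholder
            connectorConfig.put("connector.class",
                    "com.example.CdcSourceConnector"); // placeholder
            // Settings applied to any topic Connect creates for this connector:
            connectorConfig.put("topic.creation.default.replication.factor", "3");
            connectorConfig.put("topic.creation.default.partitions", "1");
            // A CDC connector like the one described above needs compaction:
            connectorConfig.put("topic.creation.default.cleanup.policy", "compact");
            connectorConfig.forEach((k, v) -> System.out.println(k + "=" + v));
        }
    }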

Re: KIP-213 - Scalable/Usable Foreign-Key KTable joins - Rebooted.

2018-09-24 Thread Adam Bellemare
@Guozhang

Thanks for the information. This is indeed something that will be extremely
useful for this KIP.

@Jan
Thanks for your explanations. That being said, I will not be moving ahead
with an implementation using the reshuffle/groupBy solution as you propose.
That said, if you wish to implement it yourself off of my current PR
and submit it as a competitive alternative, I would be more than happy to
help vet that as an alternate solution. As it stands right now, I do not
really have more time to invest into alternatives without there being a
strong indication from the binding voters which they would prefer.


I will look at finishing up my PR with the windowed state store in the next
week or so, exercising it via tests, and then I will come back for final
discussions. In the meantime, I hope that any of the binding voters could
take a look at the KIP in the wiki. I have updated it according to the
latest plan:

https://cwiki.apache.org/confluence/display/KAFKA/KIP-213+Support+non-key+joining+in+KTable

I have also updated the KIP PR to use a windowed store. This could be
replaced by the results of KIP-258 whenever they are completed.
https://github.com/apache/kafka/pull/5527

Thanks,

Adam



On Fri, Sep 14, 2018 at 2:24 PM, Guozhang Wang  wrote:

> Correction on my previous email: KAFKA-5533 is the wrong link, as it is for
> corresponding changelog mechanisms. But as part of KIP-258 we do want to
> have "handling out-of-order data for source KTable" such that instead of
> blindly apply the updates to the materialized store, i.e. following offset
> ordering, we will reject updates that are older than the current key's
> timestamps, i.e. following timestamp ordering.
>
>
> Guozhang
>
> On Fri, Sep 14, 2018 at 11:21 AM, Guozhang Wang 
> wrote:
>
> > Hello Adam,
> >
> > Thanks for the explanation. Regarding the final step (i.e. the high
> > watermark store, now altered to be replaced with a window store), I think
> > another current on-going KIP may actually help:
> >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > 258%3A+Allow+to+Store+Record+Timestamps+in+RocksDB
> >
> >
> > This is for adding the timestamp into a key-value store (i.e. only for
> > non-windowed KTable), and then one of its usage, as described in
> > https://issues.apache.org/jira/browse/KAFKA-5533, is that we can then
> > "reject" updates from the source topics if its timestamp is smaller than
> > the current key's latest update timestamp. I think it is very similar to
> > what you have in mind for high watermark based filtering, while you only
> > need to make sure that the timestamps of the joining records are
> correctly
> > inherited though the whole topology to the final stage.
> >
> > Note that this KIP is for key-value store and hence non-windowed KTables
> > only, but for windowed KTables we do not really have a good support for
> > their joins anyways (https://issues.apache.org/jira/browse/KAFKA-7107) I
> > think we can just consider non-windowed KTable-KTable non-key joins for
> > now. In which case, KIP-258 should help.
> >
> >
> >
> > Guozhang
> >
> >
> >
> > On Wed, Sep 12, 2018 at 9:20 PM, Jan Filipiak 
> > wrote:
> >
> >>
> >> On 11.09.2018 18:00, Adam Bellemare wrote:
> >>
> >>> Hi Guozhang
> >>>
> >>> Current highwater mark implementation would grow endlessly based on
> >>> primary key of original event. It is a pair of (<primary key>,
> >>> <highest offset seen>). This is used to differentiate between
> >>> late arrivals and new updates. My newest proposal would be to replace it
> >>> with a Windowed state store of Duration N. This would allow the same
> >>> behaviour, but cap the size based on time. This should allow for all
> >>> late-arriving events to be processed, and should be customizable by the
> >>> user to tailor to their own needs (ie: perhaps just 10 minutes of window,
> >>> or perhaps 7 days...).
> >>>
> >> Hi Adam, using time-based retention can do the trick here, even if I
> >> would still like to see the automatic repartitioning optional, since I
> >> would just reshuffle again. With a windowed store I am a little bit
> >> sceptical about how to determine the window, so essentially one could
> >> run into problems when a rapid change happens near a window border. I
> >> will check your implementation in detail; if it's problematic, we could
> >> still check _all_ windows on read with not too bad a performance impact,
> >> I guess. Will let you know whether the implementation would be correct
> >> as is. I would not like to assume that: offset(A) < offset(B) =>
> >> timestamp(A) < timestamp(B). I think we can't expect that.
> >>
> >>>
> >>>
> >>> @Jan
> >>> I believe I understand what you mean now - thanks for the diagram, it
> >>> really did help. You are correct that I do not have the original primary
> >>> key available, and I can see that if it were available then you would be
> >>> able to add and remove events from the Map. That being said, I encourage
> >>> you to finish your diagrams / charts just 
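
To make the timestamp-ordering idea from this thread concrete, a minimal,
store-agnostic sketch (all names hypothetical): apply an update only if it is
not older than the latest update already materialized for the same key. A real
implementation would keep this state in a windowed store so that retention
caps its size, which is exactly the growth concern discussed above:

    import java.util.HashMap;
    import java.util.Map;

    public class TimestampOrderingFilter<K> {
        private final Map<K, Long> latestTimestamps = new HashMap<>();

        public boolean shouldApply(final K key, final long recordTimestamp) {
            final Long current = latestTimestamps.get(key);
            if (current != null && recordTimestamp < current) {
                return false; // late arrival: reject instead of blindly applying
            }
            latestTimestamps.put(key, recordTimestamp);
            return true;
        }
    }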

[DISCUSS] KIP-375: TopicCommand to use AdminClient

2018-09-24 Thread Viktor Somogyi-Vass
Hi All,

I wrote up a relatively simple KIP about improving the Kafka protocol and
the TopicCommand tool to support the new Java based AdminClient and
hopefully to deprecate the Zookeeper side of it.

I would be happy to receive some opinions about this. In general I think
this would be an important addition, as this is one of the few remaining
important tools that still uses a direct Zookeeper connection.

Here is the link for the KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-375%3A+TopicCommand+to+use+AdminClient

Thanks,
Viktor
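
For context, a minimal sketch of the AdminClient calls a broker-backed
TopicCommand could build on; the topic name, partition and replication counts,
and bootstrap address are placeholders:

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AdminClientConfig;
    import org.apache.kafka.clients.admin.NewTopic;

    public class TopicCommandSketch {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            try (AdminClient admin = AdminClient.create(props)) {
                // --create equivalent: 3 partitions, replication factor 2
                admin.createTopics(Collections.singleton(
                        new NewTopic("my-topic", 3, (short) 2)))
                     .all().get();
                // --list equivalent
                admin.listTopics().names().get().forEach(System.out::println);
            }
        }
    }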


Re: [VOTE] KIP 368: Allow SASL Connections to Periodically Re-Authenticate

2018-09-24 Thread Ron Dagostino
Hi Everyone.  This KIP still requires 1 more binding up-vote to be
considered for the 2.1.0 release.  **Please vote before today's end-of-day
deadline.**

The current vote is 2 binding +1 votes (Rajini and Harsha) and 7
non-binding +1 votes (myself, Mike, Konstantin, Boerge, Edoardo, Stanislav,
and Mickael).

Ron

On Fri, Sep 21, 2018 at 11:48 AM Mickael Maison 
wrote:

> +1 (non-binding)
> Thanks for the KIP, this is a very nice feature.
> On Fri, Sep 21, 2018 at 4:56 PM Stanislav Kozlovski
>  wrote:
> >
> > Thanks for the KIP, Ron!
> > +1 (non-binding)
> >
> > On Fri, Sep 21, 2018 at 5:26 PM Ron Dagostino  wrote:
> >
> > > Hi Everyone.  This KIP requires 1 more binding up-vote to be
> considered for
> > > the 2.1.0 release; please vote before the Monday deadline.
> > >
> > > The current vote is 2 binding +1 votes (Rajini and Harsha) and 5
> > > non-binding +1 votes (myself, Mike, Konstantin, Boerge, and Edoardo).
> > >
> > > Ron
> > >
> > > On Wed, Sep 19, 2018 at 12:40 PM Harsha  wrote:
> > >
> > > > KIP looks good. +1 (binding)
> > > >
> > > > Thanks,
> > > > Harsha
> > > >
> > > > On Wed, Sep 19, 2018, at 7:44 AM, Rajini Sivaram wrote:
> > > > > Hi Ron,
> > > > >
> > > > > Thanks for the KIP!
> > > > >
> > > > > +1 (binding)
> > > > >
> > > > > On Tue, Sep 18, 2018 at 6:24 PM, Konstantin Chukhlomin <
> > > > chuhlo...@gmail.com>
> > > > > wrote:
> > > > >
> > > > > > +1 (non binding)
> > > > > >
> > > > > > > On Sep 18, 2018, at 1:18 PM, michael.kamin...@nytimes.com
> wrote:
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > On 2018/09/18 14:59:09, Ron Dagostino 
> wrote:
> > > > > > >> Hi everyone.  I would like to start the vote for KIP-368:
> > > > > > >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-
> > > > > > 368%3A+Allow+SASL+Connections+to+Periodically+Re-Authenticate
> > > > > > >>
> > > > > > >> This KIP proposes adding the ability for SASL clients (and
> brokers
> > > > when
> > > > > > a
> > > > > > >> SASL mechanism is the inter-broker protocol) to
> re-authenticate
> > > > their
> > > > > > >> connections to brokers and for brokers to close connections
> that
> > > > > > continue
> > > > > > >> to use expired sessions.
> > > > > > >>
> > > > > > >> Ron
> > > > > > >>
> > > > > > >
> > > > > > > +1 (non binding)
> > > > > >
> > > > > >
> > > >
> > >
> >
> >
> > --
> > Best,
> > Stanislav
>
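
For reference, the broker-side knob this KIP is built around can be sketched
as below; the key name comes from the KIP, while the one-hour value is purely
illustrative:

    import java.util.Properties;

    public class ReauthConfigSketch {
        public static void main(String[] args) {
            Properties brokerProps = new Properties();
            // Bound how long a SASL session may be used before the client must
            // re-authenticate; the broker closes connections that keep using an
            // expired session.
            brokerProps.put("connections.max.reauth.ms", "3600000");
            System.out.println(brokerProps);
        }
    }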


[jira] [Created] (KAFKA-7434) DeadLetterQueueReporter throws NPE if transform throws NPE

2018-09-24 Thread Michal Borowiecki (JIRA)
Michal Borowiecki created KAFKA-7434:


 Summary: DeadLetterQueueReporter throws NPE if transform throws NPE
 Key: KAFKA-7434
 URL: https://issues.apache.org/jira/browse/KAFKA-7434
 Project: Kafka
  Issue Type: Bug
  Components: KafkaConnect
Affects Versions: 2.0.0
 Environment: jdk 8
Reporter: Michal Borowiecki


An NPE thrown from a transform in a connector configured with

errors.deadletterqueue.context.headers.enable=true

causes DeadLetterQueueReporter itself to break with an NPE.
Executing stage 'TRANSFORMATION' with class
'org.apache.kafka.connect.transforms.Flatten$Value', where consumed record is
{topic='', partition=1, offset=0, timestamp=1537370573366,
timestampType=CreateTime}. (org.apache.kafka.connect.runtime.errors.LogReporter)
java.lang.NullPointerException
Task threw an uncaught and unrecoverable exception
(org.apache.kafka.connect.runtime.WorkerTask)
java.lang.NullPointerException
    at org.apache.kafka.connect.runtime.errors.DeadLetterQueueReporter.toBytes(DeadLetterQueueReporter.java:202)
    at org.apache.kafka.connect.runtime.errors.DeadLetterQueueReporter.populateContextHeaders(DeadLetterQueueReporter.java:172)
    at org.apache.kafka.connect.runtime.errors.DeadLetterQueueReporter.report(DeadLetterQueueReporter.java:146)
    at org.apache.kafka.connect.runtime.errors.ProcessingContext.report(ProcessingContext.java:137)
    at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:108)
    at org.apache.kafka.connect.runtime.TransformationChain.apply(TransformationChain.java:44)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:532)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:490)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:321)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:225)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:193)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
 

This is caused by populateContextHeaders only checking if the Throwable is not 
null, but not checking that the message in the Throwable is not null before 
trying to serialize the message:

[https://github.com/apache/kafka/blob/cfd33b313c9856ae2b4b45ed3d4aac41d6ef5a6b/connect/runtime/src/main/java/org/apache/kafka/connect/runtime/errors/DeadLetterQueueReporter.java#L170-L177]

if (context.error() != null) {
    headers.add(ERROR_HEADER_EXCEPTION,
            toBytes(context.error().getClass().getName()));
    headers.add(ERROR_HEADER_EXCEPTION_MESSAGE,
            toBytes(context.error().getMessage()));



toBytes throws an NPE if passed null as the parameter.
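
A minimal sketch of a null-safe variant, reusing the names from the stack
trace above; this is only an illustration of the missing guard, not the
committed fix:

    void populateContextHeadersSafely(ProcessingContext context, Headers headers) {
        if (context.error() != null) {
            headers.add(ERROR_HEADER_EXCEPTION,
                    toBytes(context.error().getClass().getName()));
            // Throwable.getMessage() may legitimately return null, so guard it
            // before serializing:
            final String message = context.error().getMessage();
            if (message != null) {
                headers.add(ERROR_HEADER_EXCEPTION_MESSAGE, toBytes(message));
            }
        }
    }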

 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[DISCUSS] Make org.apache.kafka.clients.Metadata#TOPIC_EXPIRY_MS configurable

2018-09-24 Thread Pavel Moukhataev
I'd like to introduce a new feature for the Kafka client:
making org.apache.kafka.clients.Metadata#TOPIC_EXPIRY_MS configurable.
Here is the KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-375%3A+Make+org.apache.kafka.clients.Metadata%23TOPIC_EXPIRY_MS+configurable

The problem is: if an application sends records to some topic only rarely,
the topic metadata expires and the sending thread blocks waiting for the
metadata to be refreshed.

The easy fix is to make TOPIC_EXPIRY_MS configurable.

-- 
Pavel
+7-903-258-5544
skype://pavel.moukhataev
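
To illustrate, a sketch of how the proposed knob might be used; the property
name below is hypothetical (the KIP does not fix it yet), and today the expiry
is the hard-coded five-minute Metadata#TOPIC_EXPIRY_MS:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.ProducerConfig;

    public class TopicExpirySketch {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            // Raise the client-side topic metadata expiry to one hour so that a
            // topic produced to once an hour does not block on a metadata refresh.
            props.put("metadata.topic.expiry.ms", "3600000"); // illustrative name
            System.out.println(props);
        }
    }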


[jira] [Created] (KAFKA-7433) Introduce broker options in TopicCommand to use AdminClient

2018-09-24 Thread Viktor Somogyi (JIRA)
Viktor Somogyi created KAFKA-7433:
-

 Summary: Introduce broker options in TopicCommand to use 
AdminClient
 Key: KAFKA-7433
 URL: https://issues.apache.org/jira/browse/KAFKA-7433
 Project: Kafka
  Issue Type: Improvement
  Components: core
Affects Versions: 2.1.0
Reporter: Viktor Somogyi
Assignee: Viktor Somogyi






--
This message was sent by Atlassian JIRA
(v7.6.3#76005)