Jenkins build is back to normal : kafka-trunk-jdk8 #3877

2019-08-29 Thread Apache Jenkins Server
See 

Build failed in Jenkins: kafka-2.3-jdk8 #98

2019-08-29 Thread Apache Jenkins Server
See 


Changes:

[wangguoz] HOTFIX: AssignedStreamsTasksTest lacks one parameter

--
Started by an SCM change
[EnvInject] - Loading node environment variables.
Building remotely on H27 (ubuntu xenial) in workspace 

[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/apache/kafka.git
 > git init  # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git --version # timeout=10
 > git fetch --tags --progress https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
 > git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > git config remote.origin.url https://github.com/apache/kafka.git # timeout=10
Fetching upstream changes from https://github.com/apache/kafka.git
 > git fetch --tags --progress https://github.com/apache/kafka.git +refs/heads/*:refs/remotes/origin/*
 > git rev-parse refs/remotes/origin/2.3^{commit} # timeout=10
 > git rev-parse refs/remotes/origin/origin/2.3^{commit} # timeout=10
Checking out Revision 2a38ae7c492292282ed4c42845d4348e2eb166d5 (refs/remotes/origin/2.3)
 > git config core.sparsecheckout # timeout=10
 > git checkout -f 2a38ae7c492292282ed4c42845d4348e2eb166d5
Commit message: "HOTFIX: AssignedStreamsTasksTest lacks one parameter"
 > git rev-list --no-walk f1244e508d6e25c2ee603578a0c897af235fc93a # timeout=10
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
[kafka-2.3-jdk8] $ /bin/bash -xe /tmp/jenkins9166934260545446155.sh
+ rm -rf 
+ /home/jenkins/tools/gradle/4.8.1/bin/gradle
/tmp/jenkins9166934260545446155.sh: line 4: /home/jenkins/tools/gradle/4.8.1/bin/gradle: No such file or directory
Build step 'Execute shell' marked build as failure
[FINDBUGS] Collecting findbugs analysis files...
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
[FINDBUGS] Searching for all files in  that match the pattern **/build/reports/findbugs/*.xml
[FINDBUGS] No files found. Configuration error?
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
No credentials specified
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
 Using GitBlamer to create author and commit information for all warnings.
 GIT_COMMIT=2a38ae7c492292282ed4c42845d4348e2eb166d5, workspace=
[FINDBUGS] Computing warning deltas based on reference build #94
Recording test results
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
ERROR: Step 'Publish JUnit test result report' failed: No test report files were found. Configuration error?
Setting GRADLE_4_8_1_HOME=/home/jenkins/tools/gradle/4.8.1
Not sending mail to unregistered user g...@confluent.io
Not sending mail to unregistered user wangg...@gmail.com


Re: [DISCUSS] KIP-486 Support for pluggable KeyStore and TrustStore

2019-08-29 Thread Maulin Vasavada
I thought about it more. I feel that no matter how we refactor the code
(with or without KIP-383 integrated), the need to customize loading of
keys and certs will ultimately remain. Whenever that need arises, we might
end up at the solution suggested by our KIP-486. Hence, regardless of the
other KIPs and configurations, "if we do need to customize loading of
keys/certs, we will need the code changes suggested by this KIP".

Let me know what you guys think.

Harsha, we are working on changing the interfaces for key/trust store
loaders with Certificate and PrivateKey objects. Will probably be able to
update it later today or tomorrow.

Thanks
Maulin





On Thu, Aug 29, 2019 at 2:30 PM Maulin Vasavada 
wrote:

> On that, I actually looked at KIP-383 before, briefly. However, that
> sounded like a lot of suggested changes.
>
> One "key" thing we have to keep in mind is: IF we need a lot of
> customization, Kafka already allows you to use your own SslProvider via
> ssl.providers, and with the changes done by KIP-492 the
> SSLContext.getInstance(protocol, provider) call allows us to return an
> SSLContext with "ALL" the details we would like to customize. Hence I am
> not sure the customization suggested by KIP-383 would be worth the effort.
> We also have similar SSLContext customization outside of Kafka.
>
> Thanks
> Maulin
>
>
>
>
>
> On Thu, Aug 29, 2019 at 12:47 PM Pellerin, Clement <
> clement_pelle...@ibi.com> wrote:
>
>> KIP-383 in its present form was vetoed because it was not possible to add
>> validation of custom properties in a future KIP. The solution to that is
>> the first proposal I wrote for KIP-383 which made the whole SslFactory
>> pluggable. That first solution was also vetoed hence the deadlock.
>>
>> Replacing the whole factory was a much nicer solution. It was vetoed
>> because doing this almost invariably meant the replacement lost all the
>> complex validation code in the default SslFactory.
>>
>> My current idea is to extract the validation code into another public API
>> that SslFactory would call. I did not look at the newly refactored code and
>> I did not study how to do this yet. KIP-383 was not popular at the time and
>> designing a new solution is a lot of work.
>>
>> Is there interest from 3 binding voters for something like this?
>>
>> -Original Message-
>> From: Rajini Sivaram [mailto:rajinisiva...@gmail.com]
>> Sent: Thursday, August 29, 2019 2:57 PM
>> To: dev
>> Subject: Re: [DISCUSS] KIP-486 Support for pluggable KeyStore and
>> TrustStore
>>
>> Hi Maulin,
>>
>> In SSL scenarios, I imagine security providers introduced by KIP-492 are
>> likely to be most useful when you want to use third party providers. The
>> biggest advantage of the config from that KIP is that you don't need to
>> write much code to integrate existing security providers into Kafka
>> brokers
>> or clients. As I understand it, KIP-486 is a more convenient option for
>> the
>> specific problem of loading keystores/truststores differently. It can be
>> achieved in theory with KIP-492, but KIP-486 is a much simpler option for
>> this case.
>>
>> My concern about KIP-486 is that it introduces yet another interface into
>> our already complex security code, while only solving one particular use
>> case. Have you looked at
>>
>> https://cwiki.apache.org/confluence/display/KAFKA/KIP-383%3A++Pluggable+interface+for+SSL+Factory
>> ?
>> The goal was to make
>> org.apache.kafka.common.security.ssl.SslEngineBuilder pluggable.
>> The code has already been refactored by Colin after that KIP was written,
>> making it easier to implement KIP-383. This should enable you to load your
>> keystores and truststores differently. Using a pluggable SslEngineBuilder
>> will also solve several other use cases at the same time. KIP-383 hasn't
>> been voted through yet, but perhaps you could take a look and we could
>> revive that instead if it solves your use case as well?
>>
>> Regards,
>>
>> Rajini
>>
>>
>> On Thu, Aug 29, 2019 at 6:42 PM Maulin Vasavada <
>> maulin.vasav...@gmail.com>
>> wrote:
>>
>> > Hi Harsha
>> >
>> > Thank you. Appreciate your time and support on this. Let me go back do
>> some
>> > more research and get back to you on the KeyStore interface part.
>> > Basically, if we return certs and keys in the interface then Kafka code
>> > will have to build KeyStore object - which is also reasonable.
>> >
>> > Thanks
>> > Maulin
>> >
>> > On Thu, Aug 29, 2019 at 10:01 AM Harsha Chintalapani 
>> > wrote:
>> >
>> > > Hi Maulin,
>> > > Use cases are clear now. I am +1 for moving
>> forward
>> > > with the discussions on having such configurable option for users. But
>> > the
>> > > interfaces is proposed doesn't look right to me. We are still talking
>> > about
>> > > keystore interfaces.  Given keystore's are used as filebased way of
>> > > transporting certificates I am not sure it will help the rest of the
>> > > user-base.
>> > >   In short, I am +1 on the KIP's 

Re: KIP-382: MirrorMaker 2.0 progress to delivery?

2019-08-29 Thread Ryanne Dolan
Andrew, thanks for your continued interest in MM2 :)

My plan is to bug the committers after this US holiday weekend.

Ryanne



On Wed, Aug 28, 2019 at 10:57 AM Andrew Schofield 
wrote:

> Hi,
> KIP-382 (MirrorMaker 2.0) has been approved for a while now but the code
> hasn’t yet made it into Kafka. If my memory serves me well, it looked like
> a candidate for 2.3 and it’s now a candidate for 2.4.
>
> For such a significant feature, I guess it’s going to take a little time
> to mature and that’s easiest with a broad range of people using it, and
> that’s only going to happen once it’s in a release. Does anyone have a view
> on the likelihood of it making 2.4?
>
> Thanks,
> Andrew Schofield
>


Re: [VOTE] KIP-401: TransformerSupplier/ProcessorSupplier StateStore connecting

2019-08-29 Thread Paul Whalen
Thanks for the votes all! With two binding votes we’re in need of one more for 
the KIP to be accepted. With the 2.4 release coming in September, it would be 
great to get another committer to take a look soon so I could set aside some 
time to get implementation/documentation done to make it into the release.

Thanks,
Paul

> On Aug 20, 2019, at 5:47 PM, Bill Bejeck  wrote:
> 
> Thanks for the KIP.
> 
> +1 (binding)
> 
> On Tue, Aug 20, 2019 at 6:28 PM Matthias J. Sax 
> wrote:
> 
>> +1 (binding)
>> 
>> 
>>> On 6/17/19 2:32 PM, John Roesler wrote:
>>> I'm +1 (nonbinding) on the current iteration of the proposal.
>>> 
 On Mon, May 27, 2019 at 1:58 PM Paul Whalen  wrote:
 
 I spoke too early a month ago, but I believe the proposal is finalized
>> now
 and ready for voting.
 
 KIP:
 
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97553756
 
 Discussion:
 
>> https://lists.apache.org/thread.html/600996d83d485f2b8daf45037de64a60cebdfac9b234bf3449b6b753@%3Cdev.kafka.apache.org%3E
 
 Pull request (still a WIP, obviously):
 https://github.com/apache/kafka/pull/6824
 
 Thanks,
 Paul
 
> On Wed, Apr 24, 2019 at 8:00 PM Paul Whalen  wrote:
> 
> Hi all,
> 
> After some good discussion on and adjustments to KIP-401 (which I
>> renamed
> slightly for clarity), chatter has died down so I figured I may as well
> start a vote.
> 
> KIP:
> TransformerSupplier/ProcessorSupplier StateStore connecting
> <
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97553756>
> Discussion:
> 
> 
>> https://lists.apache.org/thread.html/600996d83d485f2b8daf45037de64a60cebdfac9b234bf3449b6b753@%3Cdev.kafka.apache.org%3E
> 
> Thanks!
> Paul
> 
>> 
>> 


[jira] [Resolved] (KAFKA-8828) [BC Break] Global store returns a TimestampedKeyValueStore in 2.3

2019-08-29 Thread Marcos Passos (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8828?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Marcos Passos resolved KAFKA-8828.
--
Resolution: Invalid

> [BC Break] Global store returns a TimestampedKeyValueStore in 2.3
> -
>
> Key: KAFKA-8828
> URL: https://issues.apache.org/jira/browse/KAFKA-8828
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.3.0
>Reporter: Marcos Passos
>Priority: Major
>
> Since 2.3, {{ProcessorContext}} returns a {{TimestampedKeyValueStore}} for 
> global stores, which is backward incompatible. This change makes the upgrade 
> path a lot more painful and involves creating a non-trivial adapter to hide the 
> timestamp-related functionality in cases where it is not needed.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


Re: [DISCUSS] KIP-486 Support for pluggable KeyStore and TrustStore

2019-08-29 Thread Maulin Vasavada
On that, I actually looked at KIP-383 before, briefly. However, that
sounded like a lot of suggested changes.

One "key" thing we have to keep in mind is: IF we need a lot of
customization, Kafka already allows you to use your own SslProvider via
ssl.providers, and with the changes done by KIP-492 the
SSLContext.getInstance(protocol, provider) call allows us to return an
SSLContext with "ALL" the details we would like to customize. Hence I am
not sure the customization suggested by KIP-383 would be worth the effort.
We also have similar SSLContext customization outside of Kafka.
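
For illustration, a minimal sketch of that JSSE mechanism (the provider
name "MySslProvider" and the SSLContextSpi class com.example.MySslContextSpi
are hypothetical; Kafka would be pointed at such a provider via its
security provider configuration):

    import java.security.Provider;
    import java.security.Security;
    import javax.net.ssl.SSLContext;

    public class CustomSslContextSketch {
        // Hypothetical provider that maps the "TLSv1.2" SSLContext algorithm
        // to a custom SSLContextSpi (implementation not shown here).
        static final class MySslProvider extends Provider {
            MySslProvider() {
                super("MySslProvider", 1.0, "custom SSLContext provider (sketch)");
                put("SSLContext.TLSv1.2", "com.example.MySslContextSpi");
            }
        }

        public static void main(String[] args) throws Exception {
            Security.addProvider(new MySslProvider());
            // JSSE builds the SSLContext entirely from the named provider, so
            // the provider controls all key/trust material behind the context.
            SSLContext ctx = SSLContext.getInstance("TLSv1.2", "MySslProvider");
            System.out.println(ctx.getProvider().getName());
        }
    }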

Thanks
Maulin





On Thu, Aug 29, 2019 at 12:47 PM Pellerin, Clement 
wrote:

> KIP-383 in its present form was vetoed because it was not possible to add
> validation of custom properties in a future KIP. The solution to that is
> the first proposal I wrote for KIP-383 which made the whole SslFactory
> pluggable. That first solution was also vetoed hence the deadlock.
>
> Replacing the whole factory was a much nicer solution. It was vetoed
> because doing this almost invariably meant the replacement lost all the
> complex validation code in the default SslFactory.
>
> My current idea is to extract the validation code into another public API
> that SslFactory would call. I did not look at the newly refactored code and
> I did not study how to do this yet. KIP-383 was not popular at the time and
> designing a new solution is a lot of work.
>
> Is there interest from 3 binding voters for something like this?
>
> -Original Message-
> From: Rajini Sivaram [mailto:rajinisiva...@gmail.com]
> Sent: Thursday, August 29, 2019 2:57 PM
> To: dev
> Subject: Re: [DISCUSS] KIP-486 Support for pluggable KeyStore and
> TrustStore
>
> Hi Maulin,
>
> In SSL scenarios, I imagine security providers introduced by KIP-492 are
> likely to be most useful when you want to use third party providers. The
> biggest advantage of the config from that KIP is that you don't need to
> write much code to integrate existing security providers into Kafka brokers
> or clients. As I understand it, KIP-486 is a more convenient option for the
> specific problem of loading keystores/truststores differently. It can be
> achieved in theory with KIP-492, but KIP-486 is a much simpler option for
> this case.
>
> My concern about KIP-486 is that it introduces yet another interface into
> our already complex security code, while only solving one particular use
> case. Have you looked at
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-383%3A++Pluggable+interface+for+SSL+Factory
> ?
> The goal was to make
> org.apache.kafka.common.security.ssl.SslEngineBuilder pluggable.
> The code has already been refactored by Colin after that KIP was written,
> making it easier to implement KIP-383. This should enable you to load your
> keystores and truststores differently. Using a pluggable SslEngineBuilder
> will also solve several other use cases at the same time. KIP-383 hasn't
> been voted through yet, but perhaps you could take a look and we could
> revive that instead if it solves your use case as well?
>
> Regards,
>
> Rajini
>
>
> On Thu, Aug 29, 2019 at 6:42 PM Maulin Vasavada  >
> wrote:
>
> > Hi Harsha
> >
> > Thank you. Appreciate your time and support on this. Let me go back do
> some
> > more research and get back to you on the KeyStore interface part.
> > Basically, if we return certs and keys in the interface then Kafka code
> > will have to build KeyStore object - which is also reasonable.
> >
> > Thanks
> > Maulin
> >
> > On Thu, Aug 29, 2019 at 10:01 AM Harsha Chintalapani 
> > wrote:
> >
> > > Hi Maulin,
> > > Use cases are clear now. I am +1 for moving forward
> > > with the discussions on having such configurable option for users. But
> > the
> > > interfaces is proposed doesn't look right to me. We are still talking
> > about
> > > keystore interfaces.  Given keystore's are used as filebased way of
> > > transporting certificates I am not sure it will help the rest of the
> > > user-base.
> > >   In short, I am +1 on the KIP's motivation and only
> have
> > > questions around returning keystores instead of returning certs,
> private
> > > keys etc. . If others in the community are ok with such interface we
> can
> > > move forward.
> > >
> > > Thanks,
> > > Harsha
> > >
> > >
> > > On Wed, Aug 28, 2019 at 1:51 PM, Maulin Vasavada <
> > > maulin.vasav...@gmail.com>
> > > wrote:
> > >
> > > > Hi Harsha
> > > >
> > > > As we synced-up offline on this topic, we hope you don't have any
> more
> > > > clarifications that you are seeking. If that is the case, can you
> > please
> > > > help us move this forward and discuss what changes you would expect
> on
> > > the
> > > > KIP design in order to make it valuable contribution?
> > > >
> > > > Just FYI - we verified our primary design change with the author of
> > Sun's
> > > > X509 Trustmanager implementation and the outcome is that what we are
> > > > proposing makes sense at the 

RE: [DISCUSS] KIP-486 Support for pluggable KeyStore and TrustStore

2019-08-29 Thread Pellerin, Clement
KIP-383 in its present form was vetoed because it was not possible to add 
validation of custom properties in a future KIP. The solution to that is the 
first proposal I wrote for KIP-383, which made the whole SslFactory pluggable. 
That first solution was also vetoed, hence the deadlock.

Replacing the whole factory was a much nicer solution. It was vetoed because 
doing this almost invariably meant the replacement lost all the complex 
validation code in the default SslFactory.

My current idea is to extract the validation code into another public API that 
SslFactory would call. I did not look at the newly refactored code and I did 
not study how to do this yet. KIP-383 was not popular at the time and designing 
a new solution is a lot of work.

Is there interest from 3 binding voters for something like this?

-Original Message-
From: Rajini Sivaram [mailto:rajinisiva...@gmail.com] 
Sent: Thursday, August 29, 2019 2:57 PM
To: dev
Subject: Re: [DISCUSS] KIP-486 Support for pluggable KeyStore and TrustStore

Hi Maulin,

In SSL scenarios, I imagine security providers introduced by KIP-492 are
likely to be most useful when you want to use third party providers. The
biggest advantage of the config from that KIP is that you don't need to
write much code to integrate existing security providers into Kafka brokers
or clients. As I understand it, KIP-486 is a more convenient option for the
specific problem of loading keystores/truststores differently. It can be
achieved in theory with KIP-492, but KIP-486 is a much simpler option for
this case.

My concern about KIP-486 is that it introduces yet another interface into
our already complex security code, while only solving one particular use
case. Have you looked at
https://cwiki.apache.org/confluence/display/KAFKA/KIP-383%3A++Pluggable+interface+for+SSL+Factory?
The goal was to make
org.apache.kafka.common.security.ssl.SslEngineBuilder pluggable.
The code has already been refactored by Colin after that KIP was written,
making it easier to implement KIP-383. This should enable you to load your
keystores and truststores differently. Using a pluggable SslEngineBuilder
will also solve several other use cases at the same time. KIP-383 hasn't
been voted through yet, but perhaps you could take a look and we could
revive that instead if it solves your use case as well?

Regards,

Rajini


On Thu, Aug 29, 2019 at 6:42 PM Maulin Vasavada 
wrote:

> Hi Harsha
>
> Thank you. Appreciate your time and support on this. Let me go back do some
> more research and get back to you on the KeyStore interface part.
> Basically, if we return certs and keys in the interface then Kafka code
> will have to build KeyStore object - which is also reasonable.
>
> Thanks
> Maulin
>
> On Thu, Aug 29, 2019 at 10:01 AM Harsha Chintalapani 
> wrote:
>
> > Hi Maulin,
> > Use cases are clear now. I am +1 for moving forward
> > with the discussions on having such configurable option for users. But
> the
> > interfaces is proposed doesn't look right to me. We are still talking
> about
> > keystore interfaces.  Given keystore's are used as filebased way of
> > transporting certificates I am not sure it will help the rest of the
> > user-base.
> >   In short, I am +1 on the KIP's motivation and only have
> > questions around returning keystores instead of returning certs, private
> > keys etc. . If others in the community are ok with such interface we can
> > move forward.
> >
> > Thanks,
> > Harsha
> >
> >
> > On Wed, Aug 28, 2019 at 1:51 PM, Maulin Vasavada <
> > maulin.vasav...@gmail.com>
> > wrote:
> >
> > > Hi Harsha
> > >
> > > As we synced-up offline on this topic, we hope you don't have any more
> > > clarifications that you are seeking. If that is the case, can you
> please
> > > help us move this forward and discuss what changes you would expect on
> > the
> > > KIP design in order to make it valuable contribution?
> > >
> > > Just FYI - we verified our primary design change with the author of
> Sun's
> > > X509 Trustmanager implementation and the outcome is that what we are
> > > proposing makes sense at the heart of it - "Instead of writing
> > TrustManager
> > > just plugin the Trust store". We are open to discuss additional changes
> > > that you/anybody else would like to see on the functionality however.
> > >
> > > Thanks
> > > Maulin
> > >
> > > On Thu, Aug 22, 2019 at 9:12 PM Maulin Vasavada <
> > maulin.vasav...@gmail.com>
> > > wrote:
> > >
> > > Hi Harsha
> > >
> > > Any response on my question? I feel this KIP is worth accommodating.
> Your
> > > help is much appreciated.
> > >
> > > Thanks
> > > Maulin
> > >
> > > On Tue, Aug 20, 2019 at 11:52 PM Maulin Vasavada <
> maulin.vasavada@gmail.
> > > com> wrote:
> > >
> > > Hi Harsha
> > >
> > > I've examined the SPIFFE provider more and have one question -
> > >
> > > If SPIFFE didn't have a need to do checkSpiffeId() call at the below
> > > location, would you really still write the 

Re: [DISCUSS] KIP-486 Support for pluggable KeyStore and TrustStore

2019-08-29 Thread Rajini Sivaram
Hi Maulin,

In SSL scenarios, I imagine security providers introduced by KIP-492 are
likely to be most useful when you want to use third party providers. The
biggest advantage of the config from that KIP is that you don't need to
write much code to integrate existing security providers into Kafka brokers
or clients. As I understand it, KIP-486 is a more convenient option for the
specific problem of loading keystores/truststores differently. It can be
achieved in theory with KIP-492, but KIP-486 is a much simpler option for
this case.

My concern about KIP-486 is that it introduces yet another interface into
our already complex security code, while only solving one particular use
case. Have you looked at
https://cwiki.apache.org/confluence/display/KAFKA/KIP-383%3A++Pluggable+interface+for+SSL+Factory?
The goal was to make
org.apache.kafka.common.security.ssl.SslEngineBuilder pluggable.
The code has already been refactored by Colin after that KIP was written,
making it easier to implement KIP-383. This should enable you to load your
keystores and truststores differently. Using a pluggable SslEngineBuilder
will also solve several other use cases at the same time. KIP-383 hasn't
been voted through yet, but perhaps you could take a look and we could
revive that instead if it solves your use case as well?

Regards,

Rajini


On Thu, Aug 29, 2019 at 6:42 PM Maulin Vasavada 
wrote:

> Hi Harsha
>
> Thank you. Appreciate your time and support on this. Let me go back do some
> more research and get back to you on the KeyStore interface part.
> Basically, if we return certs and keys in the interface then Kafka code
> will have to build KeyStore object - which is also reasonable.
>
> Thanks
> Maulin
>
> On Thu, Aug 29, 2019 at 10:01 AM Harsha Chintalapani 
> wrote:
>
> > Hi Maulin,
> > Use cases are clear now. I am +1 for moving forward
> > with the discussions on having such configurable option for users. But
> the
> > interfaces is proposed doesn't look right to me. We are still talking
> about
> > keystore interfaces.  Given keystore's are used as filebased way of
> > transporting certificates I am not sure it will help the rest of the
> > user-base.
> >   In short, I am +1 on the KIP's motivation and only have
> > questions around returning keystores instead of returning certs, private
> > keys etc. . If others in the community are ok with such interface we can
> > move forward.
> >
> > Thanks,
> > Harsha
> >
> >
> > On Wed, Aug 28, 2019 at 1:51 PM, Maulin Vasavada <
> > maulin.vasav...@gmail.com>
> > wrote:
> >
> > > Hi Harsha
> > >
> > > As we synced-up offline on this topic, we hope you don't have any more
> > > clarifications that you are seeking. If that is the case, can you
> please
> > > help us move this forward and discuss what changes you would expect on
> > the
> > > KIP design in order to make it valuable contribution?
> > >
> > > Just FYI - we verified our primary design change with the author of
> Sun's
> > > X509 Trustmanager implementation and the outcome is that what we are
> > > proposing makes sense at the heart of it - "Instead of writing
> > TrustManager
> > > just plugin the Trust store". We are open to discuss additional changes
> > > that you/anybody else would like to see on the functionality however.
> > >
> > > Thanks
> > > Maulin
> > >
> > > On Thu, Aug 22, 2019 at 9:12 PM Maulin Vasavada <
> > maulin.vasav...@gmail.com>
> > > wrote:
> > >
> > > Hi Harsha
> > >
> > > Any response on my question? I feel this KIP is worth accommodating.
> Your
> > > help is much appreciated.
> > >
> > > Thanks
> > > Maulin
> > >
> > > On Tue, Aug 20, 2019 at 11:52 PM Maulin Vasavada <
> maulin.vasavada@gmail.
> > > com> wrote:
> > >
> > > Hi Harsha
> > >
> > > I've examined the SPIFFE provider more and have one question -
> > >
> > > If SPIFFE didn't have a need to do checkSpiffeId() call at the below
> > > location, would you really still write the Provider? *OR* Would you
> just
> > > use TrustManagerFactory.init(KeyStore) signature to pass the KeyStore
> > from
> > > set of certs returned by spiffeIdManager. getTrustedCerts()?
> > >
> > >
> https://github.com/spiffe/java-spiffe/blob/master/src/main/java/spiffe/
> > > provider/CertificateUtils.java#L100
> > >
> > > /**
> > >
> > > * Validates that the SPIFFE ID is present and matches the SPIFFE ID
> > > configured in
> > > * the java.security property ssl.spiffe.accept
> > > *
> > > * If the authorized spiffe ids list is empty any spiffe id is
> authorized
> > > *
> > > * @param chain an array of X509Certificate that contains the Peer's
> SVID
> > > to be validated
> > > * @throws CertificateException when either the certificates doesn't
> have
> > a
> > > SPIFFE ID or the SPIFFE ID is not authorized
> > > */
> > > static void checkSpiffeId(X509Certificate[] chain) throws
> > > CertificateException {
> > >
> > > Thanks
> > > Maulin
> > >
> > > On Tue, Aug 20, 2019 at 4:49 PM Harsha Chintalapani 
> > > wrote:
> > >
> 

[DISCUSS] KIP-515: Enable ZK client to use the new TLS supported authentication

2019-08-29 Thread Pere Urbón Bayes
Hi,
 this is my first KIP for a change in Apache Kafka, so I'm really new to
the process. Looking forward to hearing from you and learning the ropes
here.

I would like to propose this KIP-515 to enable the Zookeeper clients to take
full advantage of the TLS communication in the new Zookeeper 3.5.5.
Especially interesting is the Zookeeper Security Migration, which without
this change will not work with TLS, preventing users from using ACLs when
the Zookeeper cluster uses TLS.

link:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-515%3A+Enable+ZK+client+to+use+the+new+TLS+supported+authentication
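
For context, the client-side TLS support in ZooKeeper 3.5.5 that this KIP
builds on is driven by system properties along these lines (standard
ZooKeeper property names; the paths and passwords are placeholders):

    // Sketch: enabling the Netty-based secure client socket in ZooKeeper 3.5.5.
    System.setProperty("zookeeper.clientCnxnSocket",
        "org.apache.zookeeper.ClientCnxnSocketNetty");
    System.setProperty("zookeeper.client.secure", "true");
    System.setProperty("zookeeper.ssl.keyStore.location", "/path/to/keystore.jks");
    System.setProperty("zookeeper.ssl.keyStore.password", "keystore-password");
    System.setProperty("zookeeper.ssl.trustStore.location", "/path/to/truststore.jks");
    System.setProperty("zookeeper.ssl.trustStore.password", "truststore-password");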

Looking forward to hearing from you on this,

/cheers

-- 
Pere Urbon-Bayes
Software Architect
http://www.purbon.com
https://twitter.com/purbon
https://www.linkedin.com/in/purbon/


Re: [ DISCUSS ] KIP-512:Adding headers to RecordMetaData

2019-08-29 Thread Renuka M
Hi Colin,

yes, we agree, but RecordMetadata in interceptors and callbacks will not
have headers, which means there is no context on which record the metadata
belongs to. To fill that gap, we are proposing these changes.


Thanks
Renuka M

On Thu, Aug 29, 2019 at 10:20 AM Colin McCabe  wrote:

> As Gwen commented earlier, the client already has the record that it sent,
> including all the headers.
>
> >
> > Future<RecordMetadata> future = producer.send(myRecord, null);
> > future.get();
> > System.out.println("I sent myRecord with headers " + myRecord.headers());
> >
>
> best,
> Colin
>
>
> On Tue, Aug 27, 2019, at 17:06, Renuka M wrote:
> > Hi  Gwen/Team
> >
> > Can you please review the KIP. Hope we have clarified the question you
> have
> > regarding proposal.
> >
> > Thanks
> > Renuka M
> >
> > On Mon, Aug 26, 2019 at 3:35 PM Renuka M  wrote:
> >
> > > Hi Eric,
> > >
> > > We thought about that but we didn't find the strong  enough reason for
> > > having record itself in Acknowledgement.
> > > Headers are supposed to carry metadata and that is the reason headers
> are
> > > added to producer/consumer records.
> > > Also we feel having headers information in record metadata is good
> enough
> > > to bridge the gap and link the record to its metadata.
> > > Its simple change since we are not adding any new method signatures.
> > > Adding new method signatures requires adoption and deprecation of old
> ones
> > > to reduce duplication.
> > > If we get enough votes on adding new method signature, we are open to
> add
> > > it.
> > >
> > > Thanks
> > > Renuka M
> > >
> > > On Mon, Aug 26, 2019 at 10:54 AM Eric Azama 
> wrote:
> > >
> > >> Have you considered adding a new onAcknowledgement method to the
> > >> ProducerInterceptor with the signature
> onAcknowledgement(RecordMetadata
> > >> metadata, Exception exception, ProducerRecord record)? I would also
> > >> consider adding this to Producer Callbacks as well, since linking a
> > >> Callback to a specific record currently requires creating a new
> Callback
> > >> for every ProducerRecord sent.
> > >>
> > >> This seems like a more robust strategy compared to using Headers.
> Headers
> > >> don't necessarily contain anything that connects them to the original
> > >> ProducerRecord, and forcibly including information in the Headers
> seems
> > >> like unnecessary bloat. If your goal is to link a RecordMetadata to a
> > >> specific ProducerRecord, it seems simpler to make sure the original
> > >> ProducerRecord is accessible at the same time as the RecordMetadata
> > >>
> > >> On Mon, Aug 26, 2019 at 10:26 AM Renuka M 
> wrote:
> > >>
> > >> > Hi Gwen,
> > >> >
> > >> > 1.We are not doing any changes on the broker side. This change is
> only
> > >> on
> > >> > Kafka clients library.
> > >> > 2. RecordMetaData is created by client library while appending
> record to
> > >> > ProducerBatch where offset alone returned by broker. Here we are
> adding
> > >> > headers to RecordMetaData while creating FutureRecordMetaData to
> create
> > >> > context between record and its metadata. I have updated the snippet
> in
> > >> KIP
> > >> > proposed changes in step 3.
> > >> > 3. As we mentioned in alternatives, client side we can link record
> and
> > >> its
> > >> > metadata using callback, but Interceptors having same RecordMetadata
> > >> will
> > >> > not have context on for which record this MetaData belongs to. To
> fill
> > >> that
> > >> > Gap, we are proposing these changes.
> > >> > Please let us know if we are not clear.
> > >> >
> > >> > Thanks
> > >> > Renuka M
> > >> >
> > >> >
> > >> >
> > >> >
> > >> > On Fri, Aug 23, 2019 at 7:08 PM Gwen Shapira 
> wrote:
> > >> >
> > >> > > I am afraid I don't understand the proposal. The RecordMetadata is
> > >> > > information returned from the broker regarding the record. The
> > >> > > producer already has the record (including the headers), so why
> would
> > >> > > the broker need to send the headers back as part of the metadata?
> > >> > >
> > >> > > On Fri, Aug 23, 2019 at 4:22 PM Renuka M 
> > >> wrote:
> > >> > > >
> > >> > > > Hi All,
> > >> > > >
> > >> > > > I am starting this thread to discuss
> > >> > > >
> > >> > >
> > >> >
> > >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-512%3AAdding+headers+to+RecordMetaData
> > >> > > > .
> > >> > > >
> > >> > > > Please provide the feedback.
> > >> > > >
> > >> > > > Thanks
> > >> > > > Renuka M
> > >> > >
> > >> > >
> > >> > >
> > >> > > --
> > >> > > Gwen Shapira
> > >> > > Product Manager | Confluent
> > >> > > 650.450.2760 | @gwenshap
> > >> > > Follow us: Twitter | blog
> > >> > >
> > >> >
> > >>
> > >
> >
>


Re: [DISCUSS] KIP-486 Support for pluggable KeyStore and TrustStore

2019-08-29 Thread Maulin Vasavada
Hi Harsha

Thank you. Appreciate your time and support on this. Let me go back and do
some more research and get back to you on the KeyStore interface part.
Basically, if we return certs and keys in the interface then Kafka code
will have to build the KeyStore object - which is also reasonable.
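
For what it's worth, building an in-memory KeyStore from a returned set of
certs is only a few lines; a sketch, where trustedCerts is assumed to come
from the custom loader:

    // Sketch: wrap trusted certs in an in-memory (non file-backed) KeyStore.
    KeyStore ks = KeyStore.getInstance(KeyStore.getDefaultType());
    ks.load(null, null); // initializes an empty keystore
    int i = 0;
    for (java.security.cert.X509Certificate cert : trustedCerts) {
        ks.setCertificateEntry("trusted-" + i++, cert);
    }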

Thanks
Maulin

On Thu, Aug 29, 2019 at 10:01 AM Harsha Chintalapani 
wrote:

> Hi Maulin,
> Use cases are clear now. I am +1 for moving forward
> with the discussions on having such configurable option for users. But the
> interfaces is proposed doesn't look right to me. We are still talking about
> keystore interfaces.  Given keystore's are used as filebased way of
> transporting certificates I am not sure it will help the rest of the
> user-base.
>   In short, I am +1 on the KIP's motivation and only have
> questions around returning keystores instead of returning certs, private
> keys etc. . If others in the community are ok with such interface we can
> move forward.
>
> Thanks,
> Harsha
>
>
> On Wed, Aug 28, 2019 at 1:51 PM, Maulin Vasavada <
> maulin.vasav...@gmail.com>
> wrote:
>
> > Hi Harsha
> >
> > As we synced-up offline on this topic, we hope you don't have any more
> > clarifications that you are seeking. If that is the case, can you please
> > help us move this forward and discuss what changes you would expect on
> the
> > KIP design in order to make it valuable contribution?
> >
> > Just FYI - we verified our primary design change with the author of Sun's
> > X509 Trustmanager implementation and the outcome is that what we are
> > proposing makes sense at the heart of it - "Instead of writing
> TrustManager
> > just plugin the Trust store". We are open to discuss additional changes
> > that you/anybody else would like to see on the functionality however.
> >
> > Thanks
> > Maulin
> >
> > On Thu, Aug 22, 2019 at 9:12 PM Maulin Vasavada <
> maulin.vasav...@gmail.com>
> > wrote:
> >
> > Hi Harsha
> >
> > Any response on my question? I feel this KIP is worth accommodating. Your
> > help is much appreciated.
> >
> > Thanks
> > Maulin
> >
> > On Tue, Aug 20, 2019 at 11:52 PM Maulin Vasavada < maulin.vasavada@gmail.
> > com> wrote:
> >
> > Hi Harsha
> >
> > I've examined the SPIFFE provider more and have one question -
> >
> > If SPIFFE didn't have a need to do checkSpiffeId() call at the below
> > location, would you really still write the Provider? *OR* Would you just
> > use TrustManagerFactory.init(KeyStore) signature to pass the KeyStore
> from
> > set of certs returned by spiffeIdManager. getTrustedCerts()?
> >
> > https://github.com/spiffe/java-spiffe/blob/master/src/main/java/spiffe/
> > provider/CertificateUtils.java#L100
> >
> > /**
> >
> > * Validates that the SPIFFE ID is present and matches the SPIFFE ID
> > configured in
> > * the java.security property ssl.spiffe.accept
> > *
> > * If the authorized spiffe ids list is empty any spiffe id is authorized
> > *
> > * @param chain an array of X509Certificate that contains the Peer's SVID
> > to be validated
> > * @throws CertificateException when either the certificates doesn't have
> a
> > SPIFFE ID or the SPIFFE ID is not authorized
> > */
> > static void checkSpiffeId(X509Certificate[] chain) throws
> > CertificateException {
> >
> > Thanks
> > Maulin
> >
> > On Tue, Aug 20, 2019 at 4:49 PM Harsha Chintalapani 
> > wrote:
> >
> > Maulin,
> > The code parts you are pointing are specific for Spiffe and if
> > you are talking about validate method which uses PKIX check like any
> other
> > provider does.
> > If you want to default to SunJSSE everywhere you can do so by delegating
> > the calls in these methods to SunJSSE provider.
> >
> > TrustManagerFactory tmf = TrustManagerFactory
> > .getInstance(TrustManagerFactory.getDefaultAlgorithm()); and use
> > tmf.checkServerTrusted()
> > or use
> > https://docs.oracle.com/javase/7/docs/api/javax/net/ssl/
> > TrustManagerFactory.html#getInstance(java.lang.String) if you want a
> > specific provider.
> >
> > -Harsha
> >
> > On Tue, Aug 20, 2019 at 4:26 PM, Maulin Vasavada < maulin.vasavada@gmail.
> > com>
> > wrote:
> >
> > Okay, so I take that you guys agree that I have to write a 'custom'
> > algorithm and a provider to make it work , correct?
> >
> > Now, for Harsha's comment "Here the 'Custom' Algorithm is not an
> > implementation per say , ..." , I disagree. You can refer to https://
> >
> > github.com/spiffe/java-spiffe/blob/master/src/main/java/spiffe/provider/
> >
> > SpiffeTrustManager.java#L91  and
> >
> > https://github.com/spiffe/java-spiffe/blob/master/src/main/java/spiffe/
> >
> > provider/CertificateUtils.java#L100
> >
> > "that code" is the customization you have for the custom way to check
> > something on top of regular checks. That method is NOT doing custom
> > truststore loading. It is validating/verifying something in the
> >
> > "custom"
> >
> > way with spiffeId.
> > I bet that without that you won't have a need of the 

Re: Kafka Connect Schema from JSON

2019-08-29 Thread Andrew Otto
In case this is helpful, I wrote a (WIP) version that does what you say,
but using JSONSchema instead of avro schema.
https://github.com/ottomata/kafka-connect-jsonschema
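
For reference, the manual SchemaBuilder route mentioned below looks roughly
like this (a sketch; the struct and field names are illustrative):

    import org.apache.kafka.connect.data.Schema;
    import org.apache.kafka.connect.data.SchemaBuilder;

    // Sketch: hand-building a Connect Schema equivalent to a simple Avro record.
    Schema valueSchema = SchemaBuilder.struct()
        .name("com.example.User")
        .field("id", Schema.INT64_SCHEMA)
        .field("name", Schema.OPTIONAL_STRING_SCHEMA)
        .build();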

On Thu, Aug 29, 2019 at 12:48 PM Josef Hak  wrote:

> Hi,
>  please, is it possible to create a Kafka Connect Schema from JSON, e.g. an
> Avro definition JSON? I need it for writing a custom Single Message
> Transformation and cannot find how to do that. I am missing something like a
> method Schema.loadFromJson(String avroDefJson). Without it I will probably
> have to write a custom parser which creates the Schema for me using SchemaBuilder.
>
>
> https://kafka.apache.org/0110/javadoc/org/apache/kafka/connect/data/SchemaBuilder.html
>
> Thanks and Regards
>
> Josef
>


Re: [ DISCUSS ] KIP-512:Adding headers to RecordMetaData

2019-08-29 Thread Colin McCabe
As Gwen commented earlier, the client already has the record that it sent, 
including all the headers.

>
> Future<RecordMetadata> future = producer.send(myRecord, null);
> future.get();
> System.out.println("I sent myRecord with headers " + myRecord.headers());
>

best,
Colin
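
A self-contained variant of the same point, using a per-record Callback so
the record and its metadata are visible together (a sketch; the topic and
the existing KafkaProducer<String, String> are assumed):

    // Sketch: the Callback closure ties each RecordMetadata back to its record.
    ProducerRecord<String, String> myRecord =
        new ProducerRecord<>("my-topic", "key", "value");
    producer.send(myRecord, (metadata, exception) -> {
        if (exception == null) {
            System.out.println("offset " + metadata.offset()
                + " for record with headers " + myRecord.headers());
        }
    });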


On Tue, Aug 27, 2019, at 17:06, Renuka M wrote:
> Hi  Gwen/Team
> 
> Can you please review the KIP. Hope we have clarified the question you have
> regarding proposal.
> 
> Thanks
> Renuka M
> 
> On Mon, Aug 26, 2019 at 3:35 PM Renuka M  wrote:
> 
> > Hi Eric,
> >
> > We thought about that but we didn't find the strong  enough reason for
> > having record itself in Acknowledgement.
> > Headers are supposed to carry metadata and that is the reason headers are
> > added to producer/consumer records.
> > Also we feel having headers information in record metadata is good enough
> > to bridge the gap and link the record to its metadata.
> > Its simple change since we are not adding any new method signatures.
> > Adding new method signatures requires adoption and deprecation of old ones
> > to reduce duplication.
> > If we get enough votes on adding new method signature, we are open to add
> > it.
> >
> > Thanks
> > Renuka M
> >
> > On Mon, Aug 26, 2019 at 10:54 AM Eric Azama  wrote:
> >
> >> Have you considered adding a new onAcknowledgement method to the
> >> ProducerInterceptor with the signature onAcknowledgement(RecordMetadata
> >> metadata, Exception exception, ProducerRecord record)? I would also
> >> consider adding this to Producer Callbacks as well, since linking a
> >> Callback to a specific record currently requires creating a new Callback
> >> for every ProducerRecord sent.
> >>
> >> This seems like a more robust strategy compared to using Headers. Headers
> >> don't necessarily contain anything that connects them to the original
> >> ProducerRecord, and forcibly including information in the Headers seems
> >> like unnecessary bloat. If your goal is to link a RecordMetadata to a
> >> specific ProducerRecord, it seems simpler to make sure the original
> >> ProducerRecord is accessible at the same time as the RecordMetadata
> >>
> >> On Mon, Aug 26, 2019 at 10:26 AM Renuka M  wrote:
> >>
> >> > Hi Gwen,
> >> >
> >> > 1.We are not doing any changes on the broker side. This change is only
> >> on
> >> > Kafka clients library.
> >> > 2. RecordMetaData is created by client library while appending record to
> >> > ProducerBatch where offset alone returned by broker. Here we are adding
> >> > headers to RecordMetaData while creating FutureRecordMetaData to create
> >> > context between record and its metadata. I have updated the snippet in
> >> KIP
> >> > proposed changes in step 3.
> >> > 3. As we mentioned in alternatives, client side we can link record and
> >> its
> >> > metadata using callback, but Interceptors having same RecordMetadata
> >> will
> >> > not have context on for which record this MetaData belongs to. To fill
> >> that
> >> > Gap, we are proposing these changes.
> >> > Please let us know if we are not clear.
> >> >
> >> > Thanks
> >> > Renuka M
> >> >
> >> >
> >> >
> >> >
> >> > On Fri, Aug 23, 2019 at 7:08 PM Gwen Shapira  wrote:
> >> >
> >> > > I am afraid I don't understand the proposal. The RecordMetadata is
> >> > > information returned from the broker regarding the record. The
> >> > > producer already has the record (including the headers), so why would
> >> > > the broker need to send the headers back as part of the metadata?
> >> > >
> >> > > On Fri, Aug 23, 2019 at 4:22 PM Renuka M 
> >> wrote:
> >> > > >
> >> > > > Hi All,
> >> > > >
> >> > > > I am starting this thread to discuss
> >> > > >
> >> > >
> >> >
> >> https://cwiki.apache.org/confluence/display/KAFKA/KIP-512%3AAdding+headers+to+RecordMetaData
> >> > > > .
> >> > > >
> >> > > > Please provide the feedback.
> >> > > >
> >> > > > Thanks
> >> > > > Renuka M
> >> > >
> >> > >
> >> > >
> >> > > --
> >> > > Gwen Shapira
> >> > > Product Manager | Confluent
> >> > > 650.450.2760 | @gwenshap
> >> > > Follow us: Twitter | blog
> >> > >
> >> >
> >>
> >
>


Re: [DISCUSS] KIP-495: Dynamically Adjust Log Levels in Connect

2019-08-29 Thread Colin McCabe
On Mon, Aug 26, 2019, at 14:03, Jason Gustafson wrote:
> Hi Arjun,
> 
> From a high level, I feel like we are making light of the JMX api because
> it's convenient and the broker already has it. Personally I would take the
> broker out of the picture. The JMX endpoint is not something we were happy
> with, hence KIP-412. Ultimately I think we will deprecate and remove it and
> there's no point trying to standardize on a deprecated mechanism. Thinking
> just about connect, we already have an HTTP endpoint. The default position
> should be to add new APIs to it rather than introducing other mechanisms.
> The fewer ways you have to interact with a system, the better, right?
> 
> I think the main argument against a REST endpoint is basically that
> adjusting log levels is an administrative operation and connect is lacking
> an authorization framework to enforce administrative access. The same
> argument applies to JMX, but it has the benefit that you can specify
> different credentials and it is easier to isolate since it is running on a
> separate port. As you suggested, I think the same benefits could be
> achieved by having a separate /admin endpoint which is exposed (perhaps
> optionally) on another listener. This is a pretty standard pattern. If
> memory serves, dropwizard has something like this out of the box. We should
> think hard whether there are additional administrative capabilities that we
> would ultimately need. The answer is probably yes, so unless we want to
> double down on JMX, it might be worth thinking through the implications of
> an admin endpoint now so that we're not left with odd compatibility baggage
> in the future.

Hi Jason,

I agree... I think Connect needs a REST admin API.  There will probably be a 
lot of other stuff that we'll want to add to it.

best,
Colin
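
For concreteness, the JMX route discussed above looks roughly like this
from a client; a sketch that assumes the broker's registered
Log4jController MBean name:

    import javax.management.MBeanServerConnection;
    import javax.management.ObjectName;
    import javax.management.remote.JMXConnector;
    import javax.management.remote.JMXConnectorFactory;
    import javax.management.remote.JMXServiceURL;

    // Sketch: setting a logger's level over JMX (MBean name assumed).
    JMXServiceURL url = new JMXServiceURL(
        "service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi");
    try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
        MBeanServerConnection conn = jmxc.getMBeanServerConnection();
        ObjectName logCtl = new ObjectName("kafka:type=kafka.Log4jController");
        conn.invoke(logCtl, "setLogLevel",
            new Object[] {"kafka.request.logger", "DEBUG"},
            new String[] {String.class.getName(), String.class.getName()});
    }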

> 

> Thanks,
> Jason
> 
> 
> 
> 
> On Fri, Aug 23, 2019 at 5:38 PM Arjun Satish  wrote:
> 
> > Jason,
> >
> > Thanks for your comments!
> >
> > I understand the usability issues with JMX that you mention. But it was
> > chosen for the following reasons:
> >
> > 1. Cross-cutting functionality across different components (Kafka brokers,
> > Connect workers and even with Streams jobs). If we go down the REST route,
> > then brokers don't get this feature.
> > 2. Adding this to existing REST servers adds the whole-or-nothing problem.
> > It's hard to disable an endpoint if the functionality is not desired or
> > needs to be protected from users (Connect doesn't have ACLs which makes
> > this even harder to manage). Adding endpoints to different listeners makes
> > configuring Connect harder (and it's already a hard problem as it is). A
> > lot of the existing functionality there is driven around the connector data
> > model (connectors, plugins, their statuses and so on). Adding an '/admin'
> > endpoint may be a way to go, but that has tremendous implications (we are
> > effectively adding an administration endpoint similar to the admin one in
> > brokers), and probably requires a KIP of its own with discussions catered
> > around just that.
> > 3. JMX is currently AK's default way to report metrics and perform other
> > operations. Changing log levels is typically a system level/admin
> > operation, and fits better there, instead of REST APIs (which is more user
> > facing).
> >
> > Having said that, I'm happy to consider alternatives. JMX seemed to be the
> > lowest hanging fruit. But if there are better ideas, we can consider them.
> > At the end of the day, when we download and run Kafka, there should be one
> > way to achieve the same functionality among its components.
> >
> > Finally, I hope I didn't convey that we are reverting/changing the changes
> > made in KIP-412. The proposed changes would be an addition to it. It will
> > give brokers multiple ways of changing log levels. and there is still a
> > consistent way of achieving cross component goals of the KIP.
> >
> > Best,
> >
> >
> > On Fri, Aug 23, 2019 at 4:12 PM Jason Gustafson 
> > wrote:
> >
> > > Let me elaborate a little bit. We made the decision early on for Connect
> > to
> > > use HTTP instead of Kafka's custom RPC protocol. In exchange for losing
> > > some hygienic consistency with Kafka, we took easier integration with
> > > management tools. The scope of the connect REST APIs is really managing
> > the
> > > connect cluster. It has endpoints for creating connectors, changing
> > > configs, seeing their health, etc. Doesn't debugging fit in with that? I
> > am
> > > not sure I see why we would treat this as an exceptional case.
> > >
> > > I personally see JMX as a necessary evil in Kafka because most metrics
> > > agents have native support. But it is particularly painful when it comes
> > to
> > > use as an RPC mechanism. This was the central motivation behind KIP-412,
> > > which makes it very odd to see a new proposal which suggests
> > standardizing
> > > on JMX for log level adjustment. I actually see this as something we'd
> > want
> > > 

[jira] [Created] (KAFKA-8848) Update system test to use new authorizer

2019-08-29 Thread Rajini Sivaram (Jira)
Rajini Sivaram created KAFKA-8848:
-

 Summary: Update system test to use new authorizer
 Key: KAFKA-8848
 URL: https://issues.apache.org/jira/browse/KAFKA-8848
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram
 Fix For: 2.4.0


We should run system tests with the new authorizer.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


Re: Permissions to create a KIP for KAFKA-8843

2019-08-29 Thread Pere Urbón Bayes
Working on it,
  looking forward to doing my best with my first KIP,

I will send details for review as soon as I have them.

-- Pere

Missatge de Colin McCabe  del dia dj., 29 d’ag. 2019 a
les 19:06:

> On Thu, Aug 29, 2019, at 09:27, Pere Urbón Bayes wrote:
> > Thanks,
> >   yes, I know about KIP-500. How realistic is it to have it land in 2.4? If
> > not, at the end of the day we will need something like KAFKA-8843 for this
> > version.
> >
> > Looking forward to help out.
> >
>
> Hi Pere,
>
> Thanks for contributing.  KIP-500 will not be implemented in 2.4.
>
> KAFKA-8843 seems like a good improvement.  Since it involves changing
> command-line arguments, I think a KIP will be needed.  However, it should
> be a relatively short one if the change is straightforward.
>
> best,
> Colin
>
>
> > -- Pere
> >
> > Missatge de Guozhang Wang  del dia dj., 29 d’ag.
> 2019 a
> > les 17:35:
> >
> > > Hello Pere,
> > >
> > > Thanks for your interest in contributing to Kafka. I've added you to the
> > > contributors list and you should be able to create wiki pages now.
> > >
> > > BTW there's an on-going KIP-500 which aims at removing ZK dependency of
> > > Kafka recently; depending when it would be voted and be adopted, I
> think
> > > it's still worth doing KAFKA-8843.
> > >
> > > Guozhang
> > >
> > >
> > > On Thu, Aug 29, 2019 at 3:56 AM Pere Urbón Bayes  >
> > > wrote:
> > >
> > >> Hi,
> > >>   I would like to create, and start the process to fix KAFKA-8843, so
> as I
> > >> understand it I should first create a KIP.
> > >>
> > >> I registered to the wiki:
> > >>   email: pere.ur...@gmail.com
> > >>   user: pere.urbon
> > >>
> > >> would you be so nice to provide permissions to create the KIP?
> > >>
> > >> thanks a lot,
> > >>
> > >> --
> > >> Pere Urbon-Bayes
> > >> Software Architect
> > >> http://www.purbon.com
> > >> https://twitter.com/purbon
> > >> https://www.linkedin.com/in/purbon/
> > >>
> > >
> > >
> > > --
> > > -- Guozhang
> > >
> >
> >
> > --
> > Pere Urbon-Bayes
> > Software Architect
> > http://www.purbon.com
> > https://twitter.com/purbon
> > https://www.linkedin.com/in/purbon/
> >
>


-- 
Pere Urbon-Bayes
Software Architect
http://www.purbon.com
https://twitter.com/purbon
https://www.linkedin.com/in/purbon/


[jira] [Created] (KAFKA-8847) Deprecate and remove usage of supporting classes in kafka.security.auth

2019-08-29 Thread Rajini Sivaram (Jira)
Rajini Sivaram created KAFKA-8847:
-

 Summary: Deprecate and remove usage of supporting classes in 
kafka.security.auth
 Key: KAFKA-8847
 URL: https://issues.apache.org/jira/browse/KAFKA-8847
 Project: Kafka
  Issue Type: Sub-task
  Components: security
Reporter: Rajini Sivaram
Assignee: Rajini Sivaram
 Fix For: 2.4.0


Deprecate Acl, Resource etc. from `kafka.security.auth` and replace references 
to these with the equivalent Java classes.



--
This message was sent by Atlassian Jira
(v8.3.2#803003)


Re: Permissions to create a KIP for KAFKA-8843

2019-08-29 Thread Colin McCabe
On Thu, Aug 29, 2019, at 09:27, Pere Urbón Bayes wrote:
> Thanks,
>   yes, I know about KIP-500. How realistic is it to have it land in 2.4? If
> not, at the end of the day we will need something like KAFKA-8843 for this
> version.
> 
> Looking forward to help out.
> 

Hi Pere,

Thanks for contributing.  KIP-500 will not be implemented in 2.4.

KAFKA-8843 seems like a good improvement.  Since it involves changing 
command-line arguments, I think a KIP will be needed.  However, it should be a 
relatively short one if the change is straightforward.

best,
Colin


> -- Pere
> 
> Missatge de Guozhang Wang  del dia dj., 29 d’ag. 2019 a
> les 17:35:
> 
> > Hello Pere,
> >
> > Thanks for your interest in contributing to Kafka. I've added you to the
> > contributors list and you should be able to create wiki pages now.
> >
> > BTW there's an on-going KIP-500 which aims at removing ZK dependency of
> > Kafka recently; depending when it would be voted and be adopted, I think
> > it's still worth doing KAFKA-8843.
> >
> > Guozhang
> >
> >
> > On Thu, Aug 29, 2019 at 3:56 AM Pere Urbón Bayes 
> > wrote:
> >
> >> Hi,
> >>   I would like to create, and start the process to fix KAFKA-8843, so as I
> >> understand it I should first create a KIP.
> >>
> >> I registered to the wiki:
> >>   email: pere.ur...@gmail.com
> >>   user: pere.urbon
> >>
> >> would you be so nice to provide permissions to create the KIP?
> >>
> >> thanks a lot,
> >>
> >> --
> >> Pere Urbon-Bayes
> >> Software Architect
> >> http://www.purbon.com
> >> https://twitter.com/purbon
> >> https://www.linkedin.com/in/purbon/
> >>
> >
> >
> > --
> > -- Guozhang
> >
> 
> 
> -- 
> Pere Urbon-Bayes
> Software Architect
> http://www.purbon.com
> https://twitter.com/purbon
> https://www.linkedin.com/in/purbon/
>


Re: [DISCUSS] KIP-486 Support for pluggable KeyStore and TrustStore

2019-08-29 Thread Harsha Chintalapani
Hi Maulin,
Use cases are clear now. I am +1 for moving forward
with the discussions on having such a configurable option for users. But the
interface as proposed doesn't look right to me. We are still talking about
keystore interfaces.  Given keystores are used as a file-based way of
transporting certificates, I am not sure it will help the rest of the
user base.
  In short, I am +1 on the KIP's motivation and only have
questions around returning keystores instead of returning certs, private
keys etc. If others in the community are ok with such an interface we can
move forward.

Thanks,
Harsha


On Wed, Aug 28, 2019 at 1:51 PM, Maulin Vasavada 
wrote:

> Hi Harsha
>
> As we synced-up offline on this topic, we hope you don't have any more
> clarifications that you are seeking. If that is the case, can you please
> help us move this forward and discuss what changes you would expect on the
> KIP design in order to make it valuable contribution?
>
> Just FYI - we verified our primary design change with the author of Sun's
> X509 Trustmanager implementation and the outcome is that what we are
> proposing makes sense at the heart of it - "Instead of writing TrustManager
> just plugin the Trust store". We are open to discuss additional changes
> that you/anybody else would like to see on the functionality however.
>
> Thanks
> Maulin
>
> On Thu, Aug 22, 2019 at 9:12 PM Maulin Vasavada 
> wrote:
>
> Hi Harsha
>
> Any response on my question? I feel this KIP is worth accommodating. Your
> help is much appreciated.
>
> Thanks
> Maulin
>
> On Tue, Aug 20, 2019 at 11:52 PM Maulin Vasavada < maulin.vasavada@gmail.
> com> wrote:
>
> Hi Harsha
>
> I've examined the SPIFFE provider more and have one question -
>
> If SPIFFE didn't have a need to do checkSpiffeId() call at the below
> location, would you really still write the Provider? *OR* Would you just
> use TrustManagerFactory.init(KeyStore) signature to pass the KeyStore from
> set of certs returned by spiffeIdManager. getTrustedCerts()?
>
> https://github.com/spiffe/java-spiffe/blob/master/src/main/java/spiffe/
> provider/CertificateUtils.java#L100
>
> /**
>
> * Validates that the SPIFFE ID is present and matches the SPIFFE ID
> configured in
> * the java.security property ssl.spiffe.accept
> *
> * If the authorized spiffe ids list is empty any spiffe id is authorized
> *
> * @param chain an array of X509Certificate that contains the Peer's SVID
> to be validated
> * @throws CertificateException when either the certificates doesn't have a
> SPIFFE ID or the SPIFFE ID is not authorized
> */
> static void checkSpiffeId(X509Certificate[] chain) throws
> CertificateException {
>
> Thanks
> Maulin
>
> On Tue, Aug 20, 2019 at 4:49 PM Harsha Chintalapani 
> wrote:
>
> Maulin,
> The code parts you are pointing are specific for Spiffe and if
> you are talking about validate method which uses PKIX check like any other
> provider does.
> If you want to default to SunJSSE everywhere you can do so by delegating
> the calls in these methods to SunJSSE provider.
>
> TrustManagerFactory tmf = TrustManagerFactory
> .getInstance(TrustManagerFactory.getDefaultAlgorithm()); and use
> tmf.checkServerTrusted()
> or use
> https://docs.oracle.com/javase/7/docs/api/javax/net/ssl/
> TrustManagerFactory.html#getInstance(java.lang.String) if you want a
> specific provider.
>
> -Harsha
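
A sketch of that delegation, with the trust store, certificate chain and
auth type assumed to come from the caller:

    // Sketch: delegate trust checks to the default (SunJSSE) trust manager.
    TrustManagerFactory tmf = TrustManagerFactory
        .getInstance(TrustManagerFactory.getDefaultAlgorithm());
    tmf.init(trustStore); // KeyStore holding the trusted certs
    for (TrustManager tm : tmf.getTrustManagers()) {
        if (tm instanceof X509TrustManager) {
            ((X509TrustManager) tm).checkServerTrusted(chain, authType);
        }
    }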
>
> On Tue, Aug 20, 2019 at 4:26 PM, Maulin Vasavada < maulin.vasavada@gmail.
> com>
> wrote:
>
> Okay, so I take that you guys agree that I have to write a 'custom'
> algorithm and a provider to make it work , correct?
>
> Now, for Harsha's comment "Here the 'Custom' Algorithm is not an
> implementation per say , ..." , I disagree. You can refer to https://
>
> github.com/spiffe/java-spiffe/blob/master/src/main/java/spiffe/provider/
>
> SpiffeTrustManager.java#L91  and
>
> https://github.com/spiffe/java-spiffe/blob/master/src/main/java/spiffe/
>
> provider/CertificateUtils.java#L100
>
> "that code" is the customization you have for the custom way to check
> something on top of regular checks. That method is NOT doing custom
> truststore loading. It is validating/verifying something in the
>
> "custom"
>
> way with spiffeId.
> I bet that without that you won't have a need of the custom algorithm
>
> in
>
> the first place.
>
> Let me know if you agree to this.
>
> Thanks
> Maulin
>
> On Tue, Aug 20, 2019 at 2:08 PM Sandeep Mopuri 
>
> wrote:
>
> Hi Maulin, thanks for the discussion. As Harsha pointed out, to use
> KIP-492 you need to create a new provider and register a *new* custom
> algorithm for your KeyManager and TrustManager factory implementations.
> After this, the Kafka server configuration can be done as given below:
>
> # Register the provider class with custom algorithm, say CUSTOM
>
> security.provider.classes=com.company.security.CustomProvider
> 
> 

[VOTE] KIP-486: Support custom way to load KeyStore and TrustStore

2019-08-29 Thread Maulin Vasavada
Hi all

After a good discussion on the KIP at
https://www.mail-archive.com/dev@kafka.apache.org/msg99126.html I think we
are ready to start voting.

KIP:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-486%3A+Support+custom+way+to+load+KeyStore+and+TrustStore

The KIP proposes supporting a custom way to load the key/trust store,
instead of only being able to keep the key/trust store on the file system.

Thanks
Maulin


Re: [DISCUSS] KIP-511: Collect and Expose Client's Name and Version in the Brokers

2019-08-29 Thread Colin McCabe
On Fri, Aug 23, 2019, at 00:07, Magnus Edenhill wrote:
> Great proposal, this feature is well overdue!
> 
> 1)
> From an operator's perspective I don't think the kafka client
> implementation name and version are sufficient;
> I also believe the application name and version are of interest.
> You could have all applications in your cluster run the same kafka client
> and version, but only one type or
> version of an application misbehaving and needing to be tracked down.

Hi Magnus,

I think it might be better to leave this out of scope for now, and think about 
it in the context of more generalized request tracing.  This is a very deep 
rabbit hole, and I could easily see it delaying this KIP for a long time.  For 
example, if you have multiple Spark jobs producing to Kafka, just knowing that 
a client is being used by Spark may not be that helpful.  So maybe you want a 
third set of fields to describe the spark application ID and version, etc?  And 
then maybe that, itself, was created by some framework... etc. Probably better 
to defer this discussion for now and see how version tracking works out.

> 
> While the application and client name and version could be combined in the
> ClientName/ClientVersion fields by
> the user (e.g. like User-Agent), it would not be in a generalized format
> and hard for generic monitoring tools to parse correctly.
> 
> So I'd suggest keeping ClientName and ClientVersion as the client
> implementation name ("java" or "org.apache.kafka...") and version,
> which can't be changed by the user/app developer, and providing two
> optional fields for the application counterpart:
> ApplicationName and ApplicationVersion, which are backed by corresponding
> configuration properties (application.name, application.version).
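>
> As a rough illustration of that suggestion, an application could then set
> something like the following (these two properties are only a proposal in
> this thread, not existing Kafka configs):
>
> import java.util.Properties;
> import org.apache.kafka.clients.producer.KafkaProducer;
> import org.apache.kafka.common.serialization.StringSerializer;
>
> Properties props = new Properties();
> props.put("bootstrap.servers", "broker:9092");
> // hypothetical, per the proposal above -- not implemented configs:
> props.put("application.name", "billing-service");
> props.put("application.version", "3.4.1");
> KafkaProducer<String, String> producer =
>         new KafkaProducer<>(props, new StringSerializer(), new StringSerializer());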
> 
> 2)
> Do ..Name and ..Version need to be two separate fields, seeing how the two
> fields are ambiguous when separated?
> If we're looking to identify unique versions, combining the two fields
> would be sufficient (e.g., "java-2.3.1", "librdkafka/1.2.0", "sarama@1.2.3")
> and perhaps easier to work with.
> The actual format or content of the name-version string is irrelevant as
> long as it identifies a unique name+version.
> 

Hmm.  Wouldn't the same arguments you made above about a combined name+version 
field being "hard for generic monitoring tools to parse correctly" apply here?  
In any case, there seems to be no reason not to just have two fields.  It 
avoids string parsing.

> 
> 3)
> As for allowed characters, will the broker fail the ApiVersionResponse if
> any of these fields contain invalid characters,
> or will the broker sanitize the strings?
> For future backwards compatibility (when the broker constraints change but
> clients are not updated) I suggest the latter.
> 

I would argue that we should be strict about the characters that we accept, and 
just close the connection if the string is bad.  There's no reason to let 
clients troll us with "librdkafka  " (two spaces at the end) or a version 
string with slashes or control characters in it.  In fact, I would argue we 
should just allow something like ([.a-zA-Z0-9])+.  This ensures that JMX will 
work well.  We can always loosen the restrictions later if there is a real 
need.
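
To make that concrete, a tiny sketch of the kind of strict check this
implies (the exact pattern here is an assumption, not a settled choice; a
real implementation might also want to allow '-'):

import java.util.regex.Pattern;

final class ClientInfoValidator {
    // strict allow-list for client software name/version strings
    private static final Pattern VALID = Pattern.compile("[.a-zA-Z0-9]+");

    // rejects "librdkafka  " (trailing spaces), slashes, control chars, etc.
    static boolean isValid(String s) {
        return s != null && VALID.matcher(s).matches();
    }
}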

> 4)
> And while we're at it, can we add the broker name and version to the
> ApiVersionResponse?
> While an application must not use this information to detect features (Hi
> Jay!), it is good for troubleshooting
> and providing more meaningful logs to the client user in case a feature
> (based on the actual api versions) is not available.

I can think of several cases where people tried to set up client-side hacks 
based on the broker version, and were only stopped by the fact that we don't 
expose this information.  I agree with Jay that we should think very carefully 
before exposing it.  In any case, this seems out of scope...

best,
Colin

> 
> /Magnus
> 
> 
> On Thu, Aug 22, 2019 at 10:09, David Jacot wrote:
> 
> > Hi Satish,
> >
> > Thank you for your feedback!
> >
> > Please find my answers below.
> >
> > >> Did you consider taking version property by loading
> > “kafka/kafka-version.properties” as a resource while java client is
> > initialized?  “kafka/kafka-version.properties” is shipped with
> > kafka-clients jar.
> >
> > I wasn't aware of the property file. It is exactly what I need. Thanks for
> > pointing that out!
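> >
> > For reference, a rough sketch of reading it as a classpath resource
> > (assuming, as noted above, that kafka-clients ships
> > kafka/kafka-version.properties):
> >
> > import java.io.InputStream;
> > import java.util.Properties;
> >
> > Properties props = new Properties();
> > try (InputStream in = Thread.currentThread().getContextClassLoader()
> >         .getResourceAsStream("kafka/kafka-version.properties")) {
> >     if (in != null) props.load(in); // leave defaults if missing
> > }
> > String version = props.getProperty("version", "unknown");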
> >
> > >> I assume this metric value will be the total no of clients connected
> > to a broker irrespective of whether name and version follow the
> > expected pattern ([-.\w]+) or not.
> >
> > That is correct.
> >
> > >> It seems client name and client version are treated as tags for
> > `ConnectedClients` metric. If so, you may implement this metric
> > similar to `BrokerTopicMetrics` with topic tag as mentioned here[1].
> > When is the metric removed for a specific client-name and
> > client-version?
> >
> > That is correct. Client name and version are treated as tags like in
> > 

Kafka Connect Schema from JSON

2019-08-29 Thread Josef Hak
Hi,
 please, is it possible to create a Kafka Connect Schema using JSON, e.g. an
Avro definition JSON? I need it for writing a custom Single Message
Transformation and cannot find out how to do that. I am missing something
like a method in Schema, loadFromJson(String avroDefJson). Without it I will
probably have to write a custom parser which creates the Schema for me using
SchemaBuilder.

https://kafka.apache.org/0110/javadoc/org/apache/kafka/connect/data/SchemaBuilder.html
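
For reference, the hand-built SchemaBuilder version of a simple record looks
roughly like this (the record and field names are made up for illustration):

import org.apache.kafka.connect.data.Schema;
import org.apache.kafka.connect.data.SchemaBuilder;

// hand-built equivalent of a simple Avro record schema
Schema userSchema = SchemaBuilder.struct()
        .name("com.example.User")              // record name
        .field("id", Schema.INT64_SCHEMA)      // required long field
        .field("email", SchemaBuilder.string().optional().build())
        .build();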

Thanks and Regards

Josef


Re: Permissions to create a KIP for KAFKA-8843

2019-08-29 Thread Pere Urbón Bayes
Thanks,
  yes, I know about KIP-500; how realistic is it to have it land in 2.4? If
not, at the end of the day we will need something like KAFKA-8843 for this
version.

Looking forward to help out.

-- Pere

Message from Guozhang Wang on Thu., Aug 29, 2019 at 17:35:

> Hello Pere,
>
> Thanks for your interest in contributing to Kafka, I've added you to the
> contributors list and you should be able to create wiki pages now.
>
> BTW, there's an ongoing KIP-500 which aims at removing the ZK dependency of
> Kafka; depending on when it gets voted on and adopted, I think
> it's still worth doing KAFKA-8843.
>
> Guozhang
>
>
> On Thu, Aug 29, 2019 at 3:56 AM Pere Urbón Bayes 
> wrote:
>
>> Hi,
>>   I would like to create, and start the process to fix KAFKA-8843, so as I
>> understand it I should first create a KIP.
>>
>> I registered to the wiki:
>>   email: pere.ur...@gmail.com
>>   user: pere.urbon
>>
>> would you be so kind as to provide permissions to create the KIP?
>>
>> thanks a lot,
>>
>> --
>> Pere Urbon-Bayes
>> Software Architect
>> http://www.purbon.com
>> https://twitter.com/purbon
>> https://www.linkedin.com/in/purbon/
>>
>
>
> --
> -- Guozhang
>


-- 
Pere Urbon-Bayes
Software Architect
http://www.purbon.com
https://twitter.com/purbon
https://www.linkedin.com/in/purbon/


Re: Permissions to create a KIP for KAFKA-8843

2019-08-29 Thread Guozhang Wang
Hello Pere,

Thanks for your interest in contributing to Kafka, I've added you to the
contributors list and you should be able to create wiki pages now.

BTW, there's an ongoing KIP-500 which aims at removing the ZK dependency of
Kafka; depending on when it gets voted on and adopted, I think
it's still worth doing KAFKA-8843.

Guozhang


On Thu, Aug 29, 2019 at 3:56 AM Pere Urbón Bayes 
wrote:

> Hi,
>   I would like to create, and start the process to fix KAFKA-8843, so as I
> understand it I should first create a KIP.
>
> I registered to the wiki:
>   email: pere.ur...@gmail.com
>   user: pere.urbon
>
> would you be so kind as to provide permissions to create the KIP?
>
> thanks a lot,
>
> --
> Pere Urbon-Bayes
> Software Architect
> http://www.purbon.com
> https://twitter.com/purbon
> https://www.linkedin.com/in/purbon/
>


-- 
-- Guozhang


Permissions to create a KIP for KAFKA-8843

2019-08-29 Thread Pere Urbón Bayes
Hi,
  I would like to create, and start the process to fix KAFKA-8843, so as I
understand it I should first create a KIP.

I registered to the wiki:
  email: pere.ur...@gmail.com
  user: pere.urbon

would you be so kind as to provide permissions to create the KIP?

thanks a lot,

-- 
Pere Urbon-Bayes
Software Architect
http://www.purbon.com
https://twitter.com/purbon
https://www.linkedin.com/in/purbon/


Re: [DISCUSS] KIP-486 Support for pluggable KeyStore and TrustStore

2019-08-29 Thread Maulin Vasavada
Hi Rajan

Your email format doesn't show up correctly. Can you repost to make it more
readable?

Thanks
Maulin

On Wed, Aug 28, 2019 at 2:32 PM Rajan Dhabalia  wrote:

> Hi Harsha/Maulin,
>
> I am following KIP-486 and KIP-492 and it seems
> https://github.com/apache/kafka/pull/7090 is the right solution when one
> wants to register a custom factory class for KeyManager and TrustManager.
> A user can easily configure custom implementations of TrustManager and
> KeyManager using a factory Provider class. Configuration of the provider
> is also simple and straightforward:
>
> 1. Create a custom provider which defines the factory classes for
> KeyManager and TrustManager:
>
> public class CustomProvider extends Provider {
>     public CustomProvider() {
>         super("NEW_CUSTOM_PROVIDER", 0.1, "Custom KeyStore and TrustStore");
>         super.put("KeyManagerFactory.CUSTOM", "customKeyManagerFactory");
>         super.put("TrustManagerFactory.CUSTOM", "customTrustManagerFactory");
>     }
> }
>
> 2. Register the provider at broker startup:
>
> java.security.Security.addProvider(new CustomProvider());
>
> However, this approach is useful when one wants to implement a custom
> TrustManager for X509 certs by extending X509ExtendedTrustManager and
> implementing various abstract methods such as checkClientTrusted,
> checkServerTrusted, etc. In the JDK, the default implementation class of
> X509ExtendedTrustManager is X509TrustManagerImpl, and one can't extend or
> delegate calls to this class because it's final; the same is applicable
> for other available providers such as BouncyCastleProvider.
>
> TrustManager/KeyManager mainly serve two purposes:
> 1. Provide certs/keys
> 2. Perform validation
>
> X509TrustManagerImpl performs various RFC-specified validations. #7090 can
> be helpful when the user has both of the above asks. However, the problem
> defined in KIP-486 has a different ask, where the user wants to provide
> certs/keys without copying/implementing a Manager class, because all the
> available Manager classes are final and can't be extended or delegated to.
> And the security team in most companies doesn't allow custom/copied
> providers, in order to stay up to date with the various RFC validations
> provided in the standard JDK provider. Many users manage keys and certs in
> a KMS, and sometimes it's not feasible to copy them to the file system
> instead of using them directly from memory. So, KIP-486 provides a custom
> way to load keys/certs without implementing a security provider.
>
> Thanks,
> Rajan
>
> On Wed, Aug 28, 2019 at 2:18 PM Maulin Vasavada  >
> wrote:
>
> >
> >
> > Hi Harsha
> >
> > As we synced up offline on this topic, we hope that addressed the
> > clarifications you were seeking. If that is the case, can you please
> > help us move this forward and discuss what changes you would expect on
> > the KIP design in order to make it a valuable contribution?
> >
> > Just FYI - we verified our primary design change with the author of Sun's
> > X509 TrustManager implementation and the outcome is that what we are
> > proposing makes sense at the heart of it - "Instead of writing a
> > TrustManager, just plug in the trust store". We are open to discussing
> > additional changes that you/anybody else would like to see on the
> > functionality, however.
> >
> >
> > Thanks
> > Maulin
> >
> > On Thu, Aug 22, 2019 at 9:12 PM Maulin Vasavada <
> maulin.vasav...@gmail.com>
> > wrote:
> >
> >> Hi Harsha
> >>
> >> Any response on my question? I feel this KIP is worth accommodating.
> Your
> >> help is much appreciated.
> >>
> >> Thanks
> >> Maulin
> >>
> >> On Tue, Aug 20, 2019 at 11:52 PM Maulin Vasavada <
> >> maulin.vasav...@gmail.com> wrote:
> >>
> >>> Hi Harsha
> >>>
> >>> I've examined the SPIFFE provider more and have one question -
> >>>
> >>> If SPIFFE didn't have a need to do checkSpiffeId() call at the below
> >>> location, would you really still write the Provider? *OR*
> >>> Would you just use TrustManagerFactory.init(KeyStore) signature to pass
> >>> the KeyStore from set of certs returned by spiffeIdManager.
> >>> getTrustedCerts()?
> >>>
> >>>
> >>>
> https://github.com/spiffe/java-spiffe/blob/master/src/main/java/spiffe/provider/CertificateUtils.java#L100
> >>>
> >>>
> >>> /**
> >>>  * Validates that the SPIFFE ID is present and matches the SPIFFE ID
> >>>  * configured in the java.security property ssl.spiffe.accept
> >>>  *
> >>>  * If the authorized spiffe ids list is empty any spiffe id is authorized
> >>>  *
> >>>  * @param chain an array of X509Certificate that contains the Peer's
> >>>  *              SVID to be validated
> >>>  * @throws CertificateException when either the certificates doesn't
> >>>  *              have a SPIFFE ID or the SPIFFE ID is not authorized
> >>>  */
> >>> static void checkSpiffeId(X509Certificate[] chain) throws
> >>>         CertificateException {
> >>>
> >>>
> >>>
> >>> Thanks
> >>> Maulin
> >>>
> >>>
> >>> On Tue, Aug 20, 2019 at 4:49 PM Harsha Chintalapani 
> >>> wrote:
> >>>
>  Maulin,
>   The code parts you are pointing are specific for 

Re: [DISCUSS] KIP-448: Add State Stores Unit Test Support to Kafka Streams Test Utils

2019-08-29 Thread Sophie Blee-Goldman
Hey Yishun! Glad to see this is in the works :)

Within the past month or so, needing state stores for unit tests has been
brought up multiple times. Unfortunately, before now some people had to
rely on internal APIs to get a store for their tests, which is unsafe as
they can (and in this case, did) change. While there is an unstable
workaround for KV stores, there is unfortunately no good way to get a
window or session store for your tests. This ticket explains that
particular issue, plus some ways to resolve it that could get kind of messy.

I think that ticket would likely be subsumed by your KIP (and much
cleaner), but I just wanted to point to some use cases and make sure we
have them covered within this KIP. We definitely have a gap here and I
think it's pretty clear many users would benefit from state store support
in unit tests!
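
As a concrete reference point for that gap: today the public way to
exercise a store in a test is to wire a real store into a
TopologyTestDriver, roughly as in the sketch below (the processor logic
and all names are made up for illustration). Testing a Processor in
isolation, without building a topology at all, is where mock-store support
would come in.

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.TopologyTestDriver;
import org.apache.kafka.streams.processor.AbstractProcessor;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.Stores;

public class CountingProcessorTest {
    public static void main(String[] args) {
        Topology topology = new Topology();
        topology.addSource("source", "input-topic");
        // hypothetical processor that counts keys into the "counts" store
        topology.addProcessor("proc", () -> new AbstractProcessor<String, String>() {
            @SuppressWarnings("unchecked")
            @Override
            public void process(String key, String value) {
                KeyValueStore<String, Long> store =
                        (KeyValueStore<String, Long>) context().getStateStore("counts");
                Long old = store.get(key);
                store.put(key, old == null ? 1L : old + 1);
            }
        }, "source");
        topology.addStateStore(
                Stores.keyValueStoreBuilder(
                        Stores.inMemoryKeyValueStore("counts"),
                        Serdes.String(), Serdes.Long()),
                "proc");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "test");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234");
        try (TopologyTestDriver driver = new TopologyTestDriver(topology, props)) {
            // pipe records in, then assert on the store's contents
            KeyValueStore<String, Long> store = driver.getKeyValueStore("counts");
        }
    }
}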

Cheers,
Sophie

On Tue, Aug 27, 2019 at 1:11 PM Yishun Guan  wrote:

> Hi All,
>
> I have finally worked on this KIP again and want to discuss with you
> all before this KIP goes dormant.
>
> Recap: https://issues.apache.org/jira/browse/KAFKA-6460
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-448%3A+Add+State+Stores+Unit+Test+Support+to+Kafka+Streams+Test+Utils
>
> I have updated my KIP.
> 1. Provided an example of how the test will look.
> 2. Allow the tester to use their StateStore of choice as a backend
> store when testing.
> 3. Argument against EasyMock: for now, I don't really have a strong
> point against EasyMock. If people are comfortable with EasyMock and
> think building a full tracking/capturing stateStore is heavyweight,
> this makes sense to me too, and we can put this KIP as `won't
> implement`.
>
>
> I also provided a proof of concept PR for review:
> https://github.com/apache/kafka/pull/7261/files
>
> Thanks,
> Yishun
>
> On Tue, Apr 30, 2019 at 4:03 AM Matthias J. Sax 
> wrote:
> >
> > I just re-read the discussion on the original Jira.
> >
> > It's still a little unclear to me how this should work end-to-end. It
> > would be good to describe some test patterns that we want to support
> > first, maybe using some examples that show how a test would be written.
> >
> > I don't think that we should build a whole mocking framework similar to
> > EasyMock (or others); why re-invent the wheel? I think the goal should
> > be, to allow people to use their mocking framework of choice, and to
> > easily integrate it with `TopologyTestDriver`, without the need to
> > rewrite the code under test.
> >
> >
> > For the currently internal `KeyValueStoreTestDriver`, it seems to be a
> > little different, as the purpose of this driver is to test a store
> > implementation. Hence, most users won't need this, because they use the
> > built-in stores anyway, i.e., this driver would be for advanced users that
> > build their own stores.
> > I think it's actually two orthogonal things and it might even be good to
> > split both into two KIPs.
> >
> >
> >
> > -Matthias
> >
> >
> > On 4/30/19 7:52 AM, Yishun Guan wrote:
> > > Sounds good! Let me work on this more and add some more information to
> > > this KIP before we continue.
> > >
> > > On Tue, Apr 30, 2019, 00:45 Bruno Cadonna  wrote:
> > >
> > >> Hi Yishun,
> > >>
> > >> Thank you for continuing with this KIP. IMO, this KIP is very
> > >> important to develop robust code.
> > >>
> > >> I think a good approach is to do some research on mock development
> > >> on the internet and in the literature, and then try to prototype the
> > >> mocks. These activities should yield you a list of pros and cons that
> > >> you can add to the KIP. With this information it is simpler for
> > >> everybody to discuss this KIP.
> > >>
> > >> Does this make sense to you?
> > >>
> > >> Best,
> > >> Bruno
> > >>
> > >> On Mon, Apr 29, 2019 at 7:11 PM Yishun Guan 
> wrote:
> > >>
> > >>> Hi,
> > >>>
> > >>> Sorry for the late reply, I have read through all your valuable
> > >>> comments. The KIP still needs work at this point.
> > >>>
> > >>> I think at this point, one question that comes up is how we should
> > >>> implement the mock stores - as Sophie suggested, should we be open to
> > >>> any Store backend and just wrap around the Store class type which the
> > >>> user will be providing - or, as Bruno suggested, we shouldn't have a
> > >>> production backend store wrapped in a mock store, and should just
> > >>> keep track of the state of each method call; even EasyMock could be
> > >>> one of the options too.
> > >>>
> > >>> Personally, EasyMock would make the implementation easier, but building
> > >>> from scratch provides extra functionality and expandability
> > >>> (though I am not sure what kind of extra functionality we want in the
> > >>> future).
> > >>>
> > >>> What do you guys think?
> > >>>
> > >>> Best,