Re: [ANNOUNCE] New Kafka PMC Member: David Arthur

2023-03-13 Thread Rajini Sivaram
Congratulations, David!

Regards,

Rajini

On Mon, Mar 13, 2023 at 9:06 AM Bruno Cadonna  wrote:

> Congrats, David!
>
> Bruno
>
> On 10.03.23 01:36, Matthias J. Sax wrote:
> > Congrats!
> >
> > On 3/9/23 2:59 PM, José Armando García Sancio wrote:
> >> Congrats David!
> >>
> >> On Thu, Mar 9, 2023 at 2:00 PM Kowshik Prakasam 
> >> wrote:
> >>>
> >>> Congrats David!
> >>>
> >>> On Thu, Mar 9, 2023 at 12:09 PM Lucas Brutschy
> >>>  wrote:
> >>>
>  Congratulations!
> 
>  On Thu, Mar 9, 2023 at 8:37 PM Manikumar 
>  wrote:
> >
> > Congrats David!
> >
> >
> > On Fri, Mar 10, 2023 at 12:24 AM Josep Prat
> >  >
> > wrote:
> >>
> >> Congrats David!
> >>
> >> ———
> >> Josep Prat
> >>
> >> Aiven Deutschland GmbH
> >>
> >> Alexanderufer 3-7, 10117 Berlin
> >>
> >> Amtsgericht Charlottenburg, HRB 209739 B
> >>
> >> Geschäftsführer: Oskari Saarenmaa & Hannu Valtonen
> >>
> >> m: +491715557497
> >>
> >> w: aiven.io
> >>
> >> e: josep.p...@aiven.io
> >>
> >> On Thu, Mar 9, 2023, 19:22 Mickael Maison  >
> > wrote:
> >>
> >>> Congratulations David!
> >>>
> >>> On Thu, Mar 9, 2023 at 7:20 PM Chris Egerton
> >>>  >
> >>> wrote:
> 
>  Congrats David!
> 
>  On Thu, Mar 9, 2023 at 1:17 PM Bill Bejeck 
>  wrote:
> 
> > Congratulations David!
> >
> > On Thu, Mar 9, 2023 at 1:12 PM Jun Rao  >
> >>> wrote:
> >
> >> Hi, Everyone,
> >>
> >> David Arthur has been a Kafka committer since 2013. He has been
> > very
> >> instrumental to the community since becoming a committer. It's
>  my
> > pleasure
> >> to announce that David is now a member of Kafka PMC.
> >>
> >> Congratulations David!
> >>
> >> Jun
> >> on behalf of Apache Kafka PMC
> >>
> >
> >>>
> 
> >>
> >>
> >>
>


Re: [VOTE] 3.2.1 RC3

2022-07-26 Thread Rajini Sivaram
Hi David,

+1 (binding)

Verified signatures, ran quickstart with binaries, built from source and
verified with quickstart, checked some javadocs.

Thanks for the RC, David!

Regards,

Rajini




On Tue, Jul 26, 2022 at 4:32 PM Randall Hauch  wrote:

> Thanks for the RC, David.
>
> I was able to successfully complete the following:
>
> - Installed 3.2.1 RC3 and performed quickstart for broker and
> Connect (using Java 17)
> - Verified signatures and checksums
> - Verified the tag
> - Manually compared the release notes to JIRA
> - Built the release archive from the tag, installed it locally, and ran a portion
> of quickstart
> - Manually spotchecked the Javadocs and release notes linked above
> - The site docs at https://kafka.apache.org/32/documentation.html still
> reference the 3.2.0 version (as expected), but I verified that after putting
> the contents of
>
> https://home.apache.org/~davidarthur/kafka-3.2.1-rc3/kafka_2.12-3.2.1-site-docs.tgz
> into the "32" directory of a local Apache server running
> https://github.com/apache/kafka-site, the proper 3.2.1 version was
> referenced.
>
> So I'm +1 (binding)
>
> Best regards,
>
> Randall
>
> On Thu, Jul 21, 2022 at 8:15 PM David Arthur 
> wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the first release candidate of Apache Kafka 3.2.1.
> >
> > This is a bugfix release with several fixes since the release of 3.2.0. A
> > few of the major issues include:
> >
> > * KAFKA-14062 OAuth client token refresh fails with SASL extensions
> > * KAFKA-14079 Memory leak in connectors using errors.tolerance=all
> > * KAFKA-14024 Cooperative rebalance regression causing clients to get
> stuck
> >
> >
> > Release notes for the 3.2.1 release:
> > https://home.apache.org/~davidarthur/kafka-3.2.1-rc3/RELEASE_NOTES.html
> >
> >
> >
> > *** Please download, test and vote by Wednesday, July 27, 2022 at 17:00
> PT. ***
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > https://kafka.apache.org/KEYS
> >
> > Release artifacts to be voted upon (source and binary):
> > https://home.apache.org/~davidarthur/kafka-3.2.1-rc3/
> >
> > Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > Javadoc: https://home.apache.org/~davidarthur/kafka-3.2.1-rc3/javadoc/
> >
> > Tag to be voted upon (off 3.2 branch) is the 3.2.1 tag:
> > https://github.com/apache/kafka/releases/tag/3.2.1-rc3
> >
> > Documentation: https://kafka.apache.org/32/documentation.html
> >
> > Protocol: https://kafka.apache.org/32/protocol.html
> >
> >
> > The past few builds have had flaky test failures. I will update this
> thread
> > with passing build links soon.
> >
> > Unit/Integration test job:
> > https://ci-builds.apache.org/job/Kafka/job/kafka/job/3.2/
> > System test job:
> > https://jenkins.confluent.io/job/system-test-kafka/job/3.2/
> >
> >
> > Thanks!
> > David Arthur
> >
>


Re: Accessing TLS certs and keys from Vault into Kafka

2021-11-18 Thread Rajini Sivaram
You can add a Vault provider for externalized configs by implementing a `
org.apache.kafka.common.config.provider.ConfigProvider`. Details are in
https://cwiki.apache.org/confluence/display/KAFKA/KIP-297%3A+Externalizing+Secrets+for+Connect+Configurations
and
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=100829515.
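
For illustration, here is a minimal sketch of such a provider in Java.
ConfigProvider and ConfigData are Kafka's own types (KIP-297); VaultClient
and its readSecrets/close methods are hypothetical stand-ins for whatever
vault client library is in use.

    import java.util.HashMap;
    import java.util.Map;
    import java.util.Set;

    import org.apache.kafka.common.config.ConfigData;
    import org.apache.kafka.common.config.provider.ConfigProvider;

    public class VaultConfigProvider implements ConfigProvider {

        private VaultClient vault;  // hypothetical client for the vault in use

        @Override
        public void configure(Map<String, ?> configs) {
            // Receives the config.providers.vault.param.* settings with the
            // prefix stripped; "url" is an illustrative parameter name.
            vault = new VaultClient((String) configs.get("url"));
        }

        // Return all secrets stored under the given path.
        @Override
        public ConfigData get(String path) {
            return new ConfigData(vault.readSecrets(path));  // hypothetical call
        }

        // Return only the requested keys stored under the given path.
        @Override
        public ConfigData get(String path, Set<String> keys) {
            Map<String, String> all = vault.readSecrets(path);  // hypothetical call
            Map<String, String> data = new HashMap<>();
            for (String key : keys) {
                if (all.containsKey(key))
                    data.put(key, all.get(key));
            }
            return new ConfigData(data);
        }

        @Override
        public void close() {
            vault.close();  // hypothetical: release vault connections
        }
    }

The provider is then referenced from broker or client properties using the
${provider:path:key} placeholder syntax of KIP-297; the class and parameter
names below are illustrative:

    config.providers=vault
    config.providers.vault.class=com.example.VaultConfigProvider
    config.providers.vault.param.url=https://vault.example:8200
    ssl.keystore.password=${vault:secret/kafka/broker:keystore-password}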

Regards,

Rajini

On Thu, Nov 18, 2021 at 9:26 AM sai chandra mouli 
wrote:

> Hello,
>
> I have a use case where I am using a vault like ansible vault to encrypt
> and store my SSL related files (certs and Keys) for other existing
> applications. And I would like to know if it's possible to use the same
> vault with Kafka SSL without creating jks, pkcs12 or pem files outside the
> vault or additionally in the server.
>
> Does the KIP 519 and related provide any help in this case?
> If not, any suggestions on how I can achieve this?
>
> Thank you for your time.
>
> Regards,
> Sai chandra mouli
>


Re: [VOTE] 2.6.1 RC3

2020-12-15 Thread Rajini Sivaram
+1 (binding)

Verified signatures, ran tests from source build (one flaky test failed but
passed on rerun), ran Kafka quick start with the binary with both Scala
2.12 and Scala 2.13.

Thanks for running the release, Mickael!

Regards,

Rajini

On Fri, Dec 11, 2020 at 3:23 PM Mickael Maison  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the fourth candidate for release of Apache Kafka 2.6.1.
>
> Since RC2, the following JIRAs have been fixed: KAFKA-10811, KAFKA-10802
>
> Release notes for the 2.6.1 release:
> https://home.apache.org/~mimaison/kafka-2.6.1-rc3/RELEASE_NOTES.html
>
> *** Please download, test and vote by Friday, December 18, 12 PM ET ***
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> https://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~mimaison/kafka-2.6.1-rc3/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> https://home.apache.org/~mimaison/kafka-2.6.1-rc3/javadoc/
>
> * Tag to be voted upon (off 2.6 branch) is the 2.6.1 tag:
> https://github.com/apache/kafka/releases/tag/2.6.1-rc3
>
> * Documentation:
> https://kafka.apache.org/26/documentation.html
>
> * Protocol:
> https://kafka.apache.org/26/protocol.html
>
> * Successful Jenkins builds for the 2.6 branch:
> Unit/integration tests:
> https://ci-builds.apache.org/job/Kafka/job/kafka-2.6-jdk8/62/
>
> /**
>
> Thanks,
> Mickael
>


Re: [ANNOUNCE] New committer: David Jacot

2020-10-16 Thread Rajini Sivaram
Congratulations, David!

Regards,

Rajini

On Fri, Oct 16, 2020 at 5:45 PM Matthias J. Sax  wrote:

> Congrats!
>
> On 10/16/20 9:25 AM, Tom Bentley wrote:
> > Congratulations David!
> >
> > On Fri, Oct 16, 2020 at 5:10 PM Bill Bejeck  wrote:
> >
> >> Congrats David! Well deserved.
> >>
> >> -Bill
> >>
> >> On Fri, Oct 16, 2020 at 12:01 PM Gwen Shapira 
> wrote:
> >>
> >>> The PMC for Apache Kafka has invited David Jacot as a committer, and
> >>> we are excited to say that he accepted!
> >>>
> >>> David Jacot has been contributing to Apache Kafka since July 2015 (!)
> >>> and has been very active since August 2019. He contributed several
> >>> notable KIPs:
> >>>
> >>> KIP-511: Collect and Expose Client Name and Version in Brokers
> >>> KIP-559: Make the Kafka Protocol Friendlier with L7 Proxies
> >>> KIP-570: Add leader epoch in StopReplicaRequest
> >>> KIP-599: Throttle Create Topic, Create Partition and Delete Topic
> >>> Operations
> >>> KIP-496: Add an API for the deletion of consumer offsets
> >>>
> >>> In addition, David Jacot reviewed many community contributions and
> >>> showed great technical and architectural taste. Great reviews are hard
> >>> and often thankless work - but this is what makes Kafka a great
> >>> product and helps us grow our community.
> >>>
> >>> Thanks for all the contributions, David! Looking forward to more
> >>> collaboration in the Apache Kafka community.
> >>>
> >>> --
> >>> Gwen Shapira
> >>>
> >>
> >
>


Re: [VOTE] 2.6.0 RC2

2020-07-31 Thread Rajini Sivaram
Thanks Randall, +1 (binding)

Built from source and ran tests, had a quick look through some Javadoc
changes, ran quickstart and some tests with Java 11 TLSv1.3 on the binary.

Regards,

Rajini


On Tue, Jul 28, 2020 at 10:50 PM Randall Hauch  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the third candidate for release of Apache Kafka 2.6.0. This is a
> major release that includes many new features, including:
>
> * TLSv1.3 has been enabled by default for Java 11 or newer.
> * Smooth scaling out of Kafka Streams applications
> * Kafka Streams support for emit on change
> * New metrics for better operational insight
> * Kafka Connect can automatically create topics for source connectors
> * Improved error reporting options for sink connectors in Kafka Connect
> * New Filter and conditional SMTs in Kafka Connect
> * The default value for the `client.dns.lookup` configuration is
> now `use_all_dns_ips`
> * Upgrade Zookeeper to 3.5.8
>
> This release also includes a few other features, 74 improvements, 175 bug
> fixes, plus other fixes.
>
> Release notes for the 2.6.0 release:
> https://home.apache.org/~rhauch/kafka-2.6.0-rc2/RELEASE_NOTES.html
>
> *** Please download, test and vote by Monday, August 3, 9am PT
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> https://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~rhauch/kafka-2.6.0-rc2/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> https://home.apache.org/~rhauch/kafka-2.6.0-rc2/javadoc/
>
> * Tag to be voted upon (off 2.6 branch) is the 2.6.0 tag:
> https://github.com/apache/kafka/releases/tag/2.6.0-rc2
>
> * Documentation:
> https://kafka.apache.org/26/documentation.html
>
> * Protocol:
> https://kafka.apache.org/26/protocol.html
>
> * Successful Jenkins builds for the 2.6 branch:
> Unit/integration tests: https://builds.apache.org/job/kafka-2.6-jdk8/101/
> System tests: (link to follow)
>
>
> Thanks,
> Randall Hauch
>


Re: [ANNOUNCE] New committer: Mickael Maison

2019-11-08 Thread Rajini Sivaram
Congratulations, Mickael, well deserved!!

Regards,

Rajini

On Fri, Nov 8, 2019 at 9:08 AM David Jacot  wrote:

> Congrats Mickael, well deserved!
>
> On Fri, Nov 8, 2019 at 8:56 AM Tom Bentley  wrote:
>
> > Congratulations Mickael!
> >
> > On Fri, Nov 8, 2019 at 6:41 AM Vahid Hashemian <
> vahid.hashem...@gmail.com>
> > wrote:
> >
> > > Congrats Mickael,
> > >
> > > Well deserved!
> > >
> > > --Vahid
> > >
> > > On Thu, Nov 7, 2019 at 9:10 PM Maulin Vasavada <
> > maulin.vasav...@gmail.com>
> > > wrote:
> > >
> > > > Congratulations Mickael!
> > > >
> > > > On Thu, Nov 7, 2019 at 8:27 PM Manikumar 
> > > > wrote:
> > > >
> > > > > Congrats Mickael!
> > > > >
> > > > > On Fri, Nov 8, 2019 at 9:05 AM Dong Lin 
> wrote:
> > > > >
> > > > > > Congratulations Mickael!
> > > > > >
> > > > > > On Thu, Nov 7, 2019 at 1:38 PM Jun Rao  wrote:
> > > > > >
> > > > > > > Hi, Everyone,
> > > > > > >
> > > > > > > The PMC of Apache Kafka is pleased to announce a new Kafka
> > > committer
> > > > > > > Mickael
> > > > > > > Maison.
> > > > > > >
> > > > > > > Mickael has been contributing to Kafka since 2016. He proposed
> > and
> > > > > > > implemented multiple KIPs. He has also been promoting Kafka
> > > through
> > > > > > blogs
> > > > > > > and public talks.
> > > > > > >
> > > > > > > Congratulations, Mickael!
> > > > > > >
> > > > > > > Thanks,
> > > > > > >
> > > > > > > Jun (on behalf of the Apache Kafka PMC)
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> > >
> > > --
> > >
> > > Thanks!
> > > --Vahid
> > >
> >
>


Re: [VOTE] 2.3.1 RC2

2019-10-24 Thread Rajini Sivaram
+1 (binding)

Verified signatures, built source and ran tests, verified binary using
broker, producer and consumer with security enabled.

Regards,

Rajini



On Wed, Oct 23, 2019 at 11:37 PM Matthias J. Sax 
wrote:

> +1 (binding)
>
> - downloaded and compiled source code
> - verified signatures for source code and Scala 2.11 binary
> - run core/connect/streams quickstart using Scala 2.11 binaries
>
>
> -Matthias
>
>
> On 10/23/19 2:43 PM, Colin McCabe wrote:
> > + d...@kafka.apache.org
> >
> > On Tue, Oct 22, 2019, at 15:48, Colin McCabe wrote:
> >> +1.  I ran the broker, producer, consumer, etc.
> >>
> >> best,
> >> Colin
> >>
> >> On Tue, Oct 22, 2019, at 13:32, Guozhang Wang wrote:
> >>> +1. I've ran the quick start and unit tests.
> >>>
> >>>
> >>> Guozhang
> >>>
> >>> On Tue, Oct 22, 2019 at 12:57 PM David Arthur 
> wrote:
> >>>
>  Thanks, Jonathon and Jason. I've updated the release notes along with
> the
>  signature and checksums. KAFKA-9053 was also missing.
> 
>  On Tue, Oct 22, 2019 at 3:47 PM Jason Gustafson 
>  wrote:
> 
> > +1
> >
> > I ran the basic quickstart on the 2.12 artifact and verified
> > signatures/checksums.
> >
> > I also looked over the release notes. I see that KAFKA-8950 is
> included,
>  so
> > maybe they just need to be refreshed.
> >
> > Thanks for running the release!
> >
> > -Jason
> >
> > On Fri, Oct 18, 2019 at 5:23 AM David Arthur 
> wrote:
> >
> >> We found a few more critical issues and so have decided to do one
> more
>  RC
> >> for 2.3.1. Please review the release notes:
> >>
> 
> https://home.apache.org/~davidarthur/kafka-2.3.1-rc2/RELEASE_NOTES.html
> >>
> >>
> >> *** Please download, test and vote by Tuesday, October 22, 9pm PDT
> >>
> >>
> >> Kafka's KEYS file containing PGP keys we use to sign the release:
> >>
> >> https://kafka.apache.org/KEYS
> >>
> >>
> >> * Release artifacts to be voted upon (source and binary):
> >>
> >> https://home.apache.org/~davidarthur/kafka-2.3.1-rc2/
> >>
> >>
> >> * Maven artifacts to be voted upon:
> >>
> >>
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >>
> >>
> >> * Javadoc:
> >>
> >> https://home.apache.org/~davidarthur/kafka-2.3.1-rc2/javadoc/
> >>
> >>
> >> * Tag to be voted upon (off 2.3 branch) is the 2.3.1 tag:
> >>
> >> https://github.com/apache/kafka/releases/tag/2.3.1-rc2
> >>
> >>
> >> * Documentation:
> >>
> >> https://kafka.apache.org/23/documentation.html
> >>
> >>
> >> * Protocol:
> >>
> >> https://kafka.apache.org/23/protocol.html
> >>
> >>
> >> * Successful Jenkins builds to follow
> >>
> >>
> >> Thanks!
> >>
> >> David
> >>
> >
> 
> 
>  --
>  David Arthur
> 
> >>>
> >>>
> >>> --
> >>> -- Guozhang
> >>>
> >>
>
>


Re: customising security across the whole of Confluent platform

2019-09-17 Thread Rajini Sivaram
Hi Joris,

I have forwarded your mail to secur...@confluent.io since it is about
security in the Confluent Platform rather than in Apache Kafka.

Regards,

Rajini

On Tue, Sep 17, 2019 at 11:35 AM Joris Peeters 
wrote:

> Hello,
>
> I am trying to come up with a good security approach for a Kafka project
> inside our company. I see there's a variety of options available
> (ACL/RBAC/certificates/...), and I'm hoping someone can suggest a few
> possibilities.
> Some points of interest,
>
> - Kafka will be used both by end-users/researchers (inside the company) and
> by some microservices. The users are on a wide variety of platforms
> (Windows/Linux, JVM/C++/Python/R/Javascript/MATLAB/...). The microservices
> are probably JVM/Python.
> - We'd like to make heavy use of both the Schema Registry and REST proxy,
> for all users,
> - Users & groups are in LDAP,
> - Permissioning would need to be granular and at the level of Kafka
> (topics, consumer groups, ...), the Schema Registry, the REST Proxy, ... -
> based on LDAP group membership.
> - Ideally we'd like to use user/password authentication throughout, which
> is the most widely supported mechanism (as we use many different
> platforms). The user is the LDAP username, but the password is an API key
> that users manage separately. For security reasons we cannot use the LDAP
> password, as it would be too likely to leak (e.g. in git etc). (We already
> use the LDAP user + API key combo for authentication in other places).
>
> For Kafka, I've played around with an approach similar to
> https://github.com/navikt/kafka-plain-saslserver-2-ad, i.e. writing
> classes
> that extend
> org.apache.kafka.common.security.auth.AuthenticateCallbackHandler and
> kafka.security.auth.Authorizer for providing our own rules, which works
> perfectly fine. In this case,
> - the authenticator checks the username against our own database of (user
> -> [apiKeys]),
> - the authoriser meaningfully overrides only the "authorize(..)" function
> (leaving all the *Acl ones as no-ops) and leverages LDAP (in particular
> group membership) to decide upon authorization, using an additional
> database encoding group permissions.
> This is a bit hacky - and more a proof-of-concept than anything else, but
> it does seem to cover our needs, as far as Kafka goes.
>
> However, I believe this is only available for Kafka itself, and a similar
> approach wouldn't extend to the REST proxy and the Schema Registry (and
> Connectors and the Control Center). Ideally, we want a centralized auth
> service.
> In that context, I've been looking at Role-Based Access Control. I think it
> would suit the majority of our purposes, except for the fact that it seems
> to authenticate using LDAP only (using LDAP user/pass for authentication),
> which we cannot do. I was wondering,
>
> - Is it possible to extend the RBAC with a custom authenticator, that maps
> (user+passw) -> bool? We could probably still use LDAP directly for
> authorisation. If not, though - is a similar class extension possible for
> authorisation as well? If I understood correctly, this would need to be
> done inside the Metadata Service.
> - An alternative could be to run a separate LDAP server, identical to our
> company-central one, but where the user passwords are set to their API
> keys. That would be quite a faff, though.
>
> If this is not possible - what is it that makes our use case different from
> the norm? Is it that,
> - generally people are OK with passing LDAP passwords?
> - generally "external" users don't need the schema registry or REST proxy
> (i.e. they are only used by internal microservices), so they can be
> firewalled off?
> - generally all enterprise consumers are on the JVM, making Kerberos auth
> more plausible than in our case?
> - ... ?
>
> Note that I do not have a ton of experience with securing Kafka, so the
> above may well contain numerous mistaken/out-of-date assumptions. In any
> case, thanks for any suggestions.
>
> Best,
> -Joris.
>
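
As an illustration of the Kafka-side approach described in the message above,
here is a minimal sketch of a SASL/PLAIN server callback handler that
validates API keys instead of passwords. AuthenticateCallbackHandler and
PlainAuthenticateCallback are Kafka's own types (KIP-86); ApiKeyStore is a
hypothetical stand-in for the (user -> [apiKeys]) database mentioned above.

    import java.util.List;
    import java.util.Map;

    import javax.security.auth.callback.Callback;
    import javax.security.auth.callback.NameCallback;
    import javax.security.auth.callback.UnsupportedCallbackException;
    import javax.security.auth.login.AppConfigurationEntry;

    import org.apache.kafka.common.security.auth.AuthenticateCallbackHandler;
    import org.apache.kafka.common.security.plain.PlainAuthenticateCallback;

    public class ApiKeyCallbackHandler implements AuthenticateCallbackHandler {

        private ApiKeyStore apiKeys;  // hypothetical user -> [apiKeys] lookup

        @Override
        public void configure(Map<String, ?> configs, String saslMechanism,
                              List<AppConfigurationEntry> jaasConfigEntries) {
            apiKeys = new ApiKeyStore(configs);  // hypothetical
        }

        @Override
        public void handle(Callback[] callbacks) throws UnsupportedCallbackException {
            String username = null;
            for (Callback callback : callbacks) {
                if (callback instanceof NameCallback) {
                    // Authentication id supplied by the client
                    username = ((NameCallback) callback).getDefaultName();
                } else if (callback instanceof PlainAuthenticateCallback) {
                    PlainAuthenticateCallback plain = (PlainAuthenticateCallback) callback;
                    // Treat the SASL/PLAIN password field as one of the user's API keys
                    boolean valid = username != null
                            && apiKeys.isValid(username, new String(plain.password()));
                    plain.authenticated(valid);
                } else {
                    throw new UnsupportedCallbackException(callback);
                }
            }
        }

        @Override
        public void close() {}
    }

Such a handler is registered per listener on the broker, for example (class
name illustrative):

    listener.name.sasl_plaintext.plain.sasl.server.callback.handler.class=com.example.ApiKeyCallbackHandler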


TSU NOTIFICATION - Encryption

2019-02-25 Thread Rajini Sivaram
SUBMISSION TYPE:  TSU


SUBMITTED BY: Rajini Sivaram


SUBMITTED FOR:The Apache Software Foundation


POINT OF CONTACT: Secretary, The Apache Software Foundation


FAX:  +1-919-573-9199


MANUFACTURER(S):  The Apache Software Foundation, Oracle


PRODUCT NAME/MODEL #: Apache Kafka


ECCN: 5D002


NOTIFICATION: http://www.apache.org/licenses/exports/


Re: [ANNOUNCE] New Committer: Randall Hauch

2019-02-15 Thread Rajini Sivaram
Congratulations, Randall!

On Fri, Feb 15, 2019 at 11:56 AM Daniel Hanley  wrote:

> Congratulations Randall!
>
> On Fri, Feb 15, 2019 at 9:35 AM Viktor Somogyi-Vass <
> viktorsomo...@gmail.com>
> wrote:
>
> > Congrats Randall! :)
> >
> > On Fri, Feb 15, 2019 at 10:15 AM Satish Duggana <
> satish.dugg...@gmail.com>
> > wrote:
> >
> > > Congratulations Randall!
> > >
> > > On Fri, Feb 15, 2019 at 1:51 PM Mickael Maison <
> mickael.mai...@gmail.com
> > >
> > > wrote:
> > > >
> > > > Congrats Randall!
> > > >
> > > > On Fri, Feb 15, 2019 at 6:37 AM James Cheng 
> > > wrote:
> > > > >
> > > > > Congrats, Randall! Well deserved!
> > > > >
> > > > > -James
> > > > >
> > > > > Sent from my iPhone
> > > > >
> > > > > > On Feb 14, 2019, at 6:16 PM, Guozhang Wang 
> > > wrote:
> > > > > >
> > > > > > Hello all,
> > > > > >
> > > > > > The PMC of Apache Kafka is happy to announce another new
> committer
> > > joining
> > > > > > the project today: we have invited Randall Hauch as a project
> > > committer and
> > > > > > he has accepted.
> > > > > >
> > > > > > Randall has been participating in the Kafka community for the
> past
> > 3
> > > years,
> > > > > > and is well known as the founder of the Debezium project, a
> popular
> > > project
> > > > > > for database change-capture streams using Kafka (
> > https://debezium.io).
> > > More
> > > > > > recently he has become the main person keeping Kafka Connect
> moving
> > > > > > forward, participated in nearly all KIP discussions and QAs on
> the
> > > mailing
> > > > > > list. He's authored 6 KIPs, submitted 50 pull requests, and
> > > conducted over
> > > > > > a hundred reviews around Kafka Connect, and has also been
> > > evangelizing
> > > > > > Kafka Connect at several Kafka Summit venues.
> > > > > >
> > > > > >
> > > > > > Thank you very much for your contributions to the Connect
> community
> > > Randall
> > > > > > ! And looking forward to many more :)
> > > > > >
> > > > > >
> > > > > > Guozhang, on behalf of the Apache Kafka PMC
> > >
> >
>


Re: [ANNOUNCE] New Committer: Vahid Hashemian

2019-01-15 Thread Rajini Sivaram
Congratulations, Vahid! Well deserved!!

Regards,

Rajini

On Tue, Jan 15, 2019 at 10:45 PM Jason Gustafson  wrote:

> Hi All,
>
> The PMC for Apache Kafka has invited Vahid Hashemian as a project
> committer and we are pleased to announce that he has accepted!
>
> Vahid has made numerous contributions to the Kafka community over the past
> few years. He has authored 13 KIPs with core improvements to the consumer
> and the tooling around it. He has also contributed nearly 100 patches
> affecting all parts of the codebase. Additionally, Vahid puts a lot of
> effort into community engagement, helping others on the mail lists and
> sharing his experience at conferences and meetups.
>
> We appreciate the contributions and we are looking forward to more.
> Congrats Vahid!
>
> Jason, on behalf of the Apache Kafka PMC
>


Re: [VOTE] 2.0.1 RC0

2018-11-07 Thread Rajini Sivaram
+1 (binding)

Checked source build and unit tests. Ran quickstart with source and binary.

Thank you for managing the release, Manikumar!

Regards,

Rajini

On Wed, Nov 7, 2018 at 6:18 PM Gwen Shapira  wrote:

> +1 (binding)
>
> Checked signatures, build and quickstart.
>
> Thank you for managing the release, Mani!
>
>
> On Thu, Oct 25, 2018 at 7:29 PM Manikumar 
> wrote:
> >
> > Hello Kafka users, developers and client-developers,
> >
> > This is the first candidate for release of Apache Kafka 2.0.1.
> >
> > This is a bug fix release closing 49 tickets:
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.0.1
> >
> > Release notes for the 2.0.1 release:
> > http://home.apache.org/~manikumar/kafka-2.0.1-rc0/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by  Tuesday, October 30, end of day
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~manikumar/kafka-2.0.1-rc0/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * Javadoc:
> > http://home.apache.org/~manikumar/kafka-2.0.1-rc0/javadoc/
> >
> > * Tag to be voted upon (off 2.0 branch) is the 2.0.1 tag:
> > https://github.com/apache/kafka/releases/tag/2.0.1-rc0
> >
> > * Documentation:
> > http://kafka.apache.org/20/documentation.html
> >
> > * Protocol:
> > http://kafka.apache.org/20/protocol.html
> >
> > * Successful Jenkins builds for the 2.0 branch:
> > Unit/integration tests:
> https://builds.apache.org/job/kafka-2.0-jdk8/177/
> >
> > /**
> >
> > Thanks,
> > Manikumar
>
>
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


Re: [VOTE] 2.0.1 RC0

2018-11-07 Thread Rajini Sivaram
+1 (binding)

Checked source build and unit tests. Ran quickstart with source and binary.

Thank you for managing the release, Manikumar!

Regards,

Rajini

On Wed, Nov 7, 2018 at 6:18 PM Gwen Shapira  wrote:

> +1 (binding)
>
> Checked signatures, build and quickstart.
>
> Thank you for managing the release, Mani!
>
>
> On Thu, Oct 25, 2018 at 7:29 PM Manikumar 
> wrote:
> >
> > Hello Kafka users, developers and client-developers,
> >
> > This is the first candidate for release of Apache Kafka 2.0.1.
> >
> > This is a bug fix release closing 49 tickets:
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+2.0.1
> >
> > Release notes for the 2.0.1 release:
> > http://home.apache.org/~manikumar/kafka-2.0.1-rc0/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by  Tuesday, October 30, end of day
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~manikumar/kafka-2.0.1-rc0/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * Javadoc:
> > http://home.apache.org/~manikumar/kafka-2.0.1-rc0/javadoc/
> >
> > * Tag to be voted upon (off 2.0 branch) is the 2.0.1 tag:
> > https://github.com/apache/kafka/releases/tag/2.0.1-rc0
> >
> > * Documentation:
> > http://kafka.apache.org/20/documentation.html
> >
> > * Protocol:
> > http://kafka.apache.org/20/protocol.html
> >
> > * Successful Jenkins builds for the 2.0 branch:
> > Unit/integration tests:
> https://builds.apache.org/job/kafka-2.0-jdk8/177/
> >
> > /**
> >
> > Thanks,
> > Manikumar
>
>
>
> --
> Gwen Shapira
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter | blog
>


Re: [ANNOUNCE] New Committer: Manikumar Reddy

2018-10-11 Thread Rajini Sivaram
Congratulations, Manikumar!

On Thu, Oct 11, 2018 at 6:57 PM Suman B N  wrote:

> Congratulations Manikumar!
>
> On Thu, Oct 11, 2018 at 11:09 PM Jason Gustafson 
> wrote:
>
> > Hi all,
> >
> > The PMC for Apache Kafka has invited Manikumar Reddy as a committer
> > and we are pleased to announce that he has accepted!
> >
> > Manikumar has contributed 134 commits including significant work to add
> > support for delegation tokens in Kafka:
> >
> > KIP-48:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-48+Delegation+token+support+for+Kafka
> > KIP-249:
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-249%3A+Add+Delegation+Token+Operations+to+KafkaAdminClient
> >
> > He has broad experience working with many of the core components in Kafka
> > and he has reviewed over 80 PRs. He has also made huge progress
> addressing
> > some of our technical debt.
> >
> > We appreciate the contributions and we are looking forward to more.
> > Congrats Manikumar!
> >
> > Jason, on behalf of the Apache Kafka PMC
> >
>
>
> --
> *Suman*
> *OlaCabs*
>


Re: [ANNOUNCE] New committer: Colin McCabe

2018-09-25 Thread Rajini Sivaram
Congratulations, Colin! Well deserved!

Regards,

Rajini

On Tue, Sep 25, 2018 at 9:39 AM, Ismael Juma  wrote:

> Hi all,
>
> The PMC for Apache Kafka has invited Colin McCabe as a committer and we are
> pleased to announce that he has accepted!
>
> Colin has contributed 101 commits and 8 KIPs including significant
> improvements to replication, clients, code quality and testing. A few
> highlights were KIP-97 (Improved Clients Compatibility Policy), KIP-117
> (AdminClient), KIP-227 (Incremental FetchRequests to Increase Partition
> Scalability), the introduction of findBugs and adding Trogdor (fault
> injection and benchmarking tool).
>
> In addition, Colin has reviewed 38 pull requests and participated in more
> than 50 KIP discussions.
>
> Thank you for your contributions Colin! Looking forward to many more. :)
>
> Ismael, for the Apache Kafka PMC
>


[ANNOUNCE] Apache Kafka 2.0.0 Released

2018-07-30 Thread Rajini Sivaram
** The Connector API allows building and running reusable producers or
consumers that connect Kafka topics to existing applications or data

systems. For example, a connector to a relational database might

capture every change to a table.





With these APIs, Kafka can be used for two broad classes of application:



** Building real-time streaming data pipelines that reliably get data

between systems or applications.



** Building real-time streaming applications that transform or react

to the streams of data.







Apache Kafka is in use at large and small companies worldwide, including

Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,

Target, The New York Times, Uber, Yelp, and Zalando, among others.







A big thank you for the following 131 contributors to this release!



Adem Efe Gencer, Alex D, Alex Dunayevsky, Allen Wang, Andras Beni,

Andy Bryant, Andy Coates, Anna Povzner, Arjun Satish, asutosh936,

Attila Sasvari, bartdevylder, Benedict Jin, Bill Bejeck, Blake Miller,

Boyang Chen, cburroughs, Chia-Ping Tsai, Chris Egerton, Colin P. Mccabe,

Colin Patrick McCabe, ConcurrencyPractitioner, Damian Guy, dan norwood,

Daniel Shuy, Daniel Wojda, Dark, David Glasser, Debasish Ghosh, Detharon,

Dhruvil Shah, Dmitry Minkovsky, Dong Lin, Edoardo Comar, emmanuel Harel,

Eugene Sevastyanov, Ewen Cheslack-Postava, Fedor Bobin, fedosov-alexander,

Filipe Agapito, Florian Hussonnois, fredfp, Gilles Degols, gitlw, Gitomain,

Guangxian, Gunju Ko, Gunnar Morling, Guozhang Wang, hmcl, huxi, huxihx,

Igor Kostiakov, Ismael Juma, Jacek Laskowski, Jagadesh Adireddi,

Jarek Rudzinski, Jason Gustafson, Jeff Klukas, Jeremy Custenborder,

Jiangjie (Becket) Qin, Jiangjie Qin, JieFang.He, Jimin Hsieh, Joan Goyeau,

Joel Hamill, John Roesler, Jon Lee, Jorge Quilcate Otoya, Jun Rao,

Kamal C, khairy, Koen De Groote, Konstantine Karantasis, Lee Dongjin,

Liju John, Liquan Pei, lisa2lisa, Lucas Wang, Magesh Nandakumar,

Magnus Edenhill, Magnus Reftel, Manikumar Reddy, Manikumar Reddy O,

manjuapu, Mats Julian Olsen, Matthias J. Sax, Max Zheng, maytals,

Michael Arndt, Michael G. Noll, Mickael Maison, nafshartous, Nick Travers,

nixsticks, Paolo Patierno, parafiend, Patrik Erdes, Radai Rosenblatt,

Rajini Sivaram, Randall Hauch, ro7m, Robert Yokota, Roman Khlebnov,

Ron Dagostino, Sandor Murakozi, Sasaki Toru, Sean Glover,

Sebastian Bauersfeld, Siva Santhalingam, Stanislav Kozlovski, Stephane Maarek,

Stuart Perks, Surabhi Dixit, Sönke Liebau, taekyung, tedyu, Thomas Leplus,

UVN, Vahid Hashemian, Valentino Proietti, Viktor Somogyi, Vitaly Pushkar,

Wladimir Schmidt, wushujames, Xavier Léauté, xin, yaphet,

Yaswanth Kumar, ying-zheng, Yu







We welcome your help and feedback. For more information on how to

report problems, and to get involved, visit the project website at

https://kafka.apache.org/





Thank you!





Regards,



Rajini


[RESULTS] [VOTE] Release Kafka version 2.0.0

2018-07-28 Thread Rajini Sivaram
This vote passes with 7 +1 votes (4 binding) and no 0 or -1 votes.

+1 votes
PMC Members:
* Guozhang Wang

* Gwen Shapira

* Jason Gustafson

* Rajini Sivaram

Committers
* No votes

Community:
* Vahid Hashemian

* Ted Yu

* Ron Dagostino


0 votes
* No votes

-1 votes
* No votes

Vote thread:
https://lists.apache.org/thread.html/ec0d409b03d02df4ec8a6ce5632612724a959bc5842b3e41721f115f@%3Cdev.kafka.apache.org%3E

I'll continue with the release process and the release announcement
will follow in the next few days.

Rajini


CVE-2018-1288: Authenticated Kafka clients may interfere with data replication

2018-07-26 Thread Rajini Sivaram
CVE-2018-1288: Authenticated Kafka clients may interfere with data
replication



Severity: Moderate



Vendor: The Apache Software Foundation



Versions Affected:

Apache Kafka 0.9.0.0 to 0.9.0.1, 0.10.0.0 to 0.10.2.1, 0.11.0.0 to
0.11.0.2, 1.0.0



Description:

Authenticated Kafka users may perform an action reserved for the broker via a
manually created fetch request, interfering with data replication and
resulting in data loss.



Mitigation:

Apache Kafka users should upgrade to one of the following versions where
this vulnerability has been fixed.


   - 0.10.2.2 or higher
   - 0.11.0.3 or higher
   - 1.0.1 or higher
   - 1.1.0 or higher



Acknowledgements:

We would like to thank Edoardo Comar and Mickael Maison for reporting this
issue and providing a resolution.



Regards,


Rajini


CVE-2017-12610: Authenticated Kafka clients may impersonate other users

2018-07-26 Thread Rajini Sivaram
CVE-2017-12610: Authenticated Kafka clients may impersonate other users


Severity: Moderate



Vendor: The Apache Software Foundation



Versions Affected:

Apache Kafka 0.10.0.0 to 0.10.2.1, 0.11.0.0 to 0.11.0.1



Description:

Authenticated Kafka clients may use impersonation via a manually crafted
protocol message with SASL/PLAIN or SASL/SCRAM authentication when using
the built-in PLAIN or SCRAM server implementations in Apache Kafka.



Mitigation:

Apache Kafka users should upgrade to one of the following versions where
this vulnerability has been fixed:


   - 0.10.2.2 or higher
   - 0.11.0.2 or higher
   - 1.0.0 or higher



Acknowledgements:

This issue was reported by Rajini Sivaram.



Regards,


Rajini


[VOTE] 2.0.0 RC3

2018-07-24 Thread Rajini Sivaram
Hello Kafka users, developers and client-developers,


This is the fourth candidate for release of Apache Kafka 2.0.0.


This is a major version release of Apache Kafka. It includes 40 new KIPs and

several critical bug fixes. Please see the 2.0.0 release plan for more
details:

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80448820


A few notable highlights:

   - Prefixed wildcard ACLs (KIP-290), Fine grained ACLs for CreateTopics
   (KIP-277)
   - SASL/OAUTHBEARER implementation (KIP-255)
   - Improved quota communication and customization of quotas (KIP-219,
   KIP-257)
   - Efficient memory usage for down conversion (KIP-283)
   - Fix log divergence between leader and follower during fast leader
   failover (KIP-279)
   - Drop support for Java 7 and remove deprecated code including old scala
   clients
   - Connect REST extension plugin, support for externalizing secrets and
   improved error handling (KIP-285, KIP-297, KIP-298 etc.)
   - Scala API for Kafka Streams and other Streams API improvements
   (KIP-270, KIP-150, KIP-245, KIP-251 etc.)


Release notes for the 2.0.0 release:

http://home.apache.org/~rsivaram/kafka-2.0.0-rc3/RELEASE_NOTES.html


*** Please download, test and vote by Friday July 27, 4pm PT.


Kafka's KEYS file containing PGP keys we use to sign the release:

http://kafka.apache.org/KEYS


* Release artifacts to be voted upon (source and binary):

http://home.apache.org/~rsivaram/kafka-2.0.0-rc3/


* Maven artifacts to be voted upon:

https://repository.apache.org/content/groups/staging/


* Javadoc:

http://home.apache.org/~rsivaram/kafka-2.0.0-rc3/javadoc/


* Tag to be voted upon (off 2.0 branch) is the 2.0.0 tag:

https://github.com/apache/kafka/releases/tag/2.0.0-rc3

* Documentation:

http://kafka.apache.org/20/documentation.html


* Protocol:

http://kafka.apache.org/20/protocol.html


* Successful Jenkins builds for the 2.0 branch:

Unit/integration tests: https://builds.apache.org/job/kafka-2.0-jdk8/90/

System tests: https://jenkins.confluent.io/job/system-test-kafka/job/2.0/41/


/**


Thanks,



Rajini


Re: [VOTE] 2.0.0 RC2

2018-07-19 Thread Rajini Sivaram
Hi all,

We found a blocker in 2.0.0 which is a bug in the newly added OAuth
protocol implementation (https://issues.apache.org/jira/browse/KAFKA-7182).
Since the current implementation doesn't conform to the SASL/OAUTHBEARER
spec in RFC-7628, we need to fix this before the release to conform to the
spec and avoid compatibility issues later. A fix is currently being
reviewed and I will try and create RC3 later today.

Many thanks to everyone who tested and voted for RC2. Please help test RC3
if you have time.

Regards,

Rajini

On Wed, Jul 18, 2018 at 4:03 PM, Guozhang Wang  wrote:

> +1. Verified the following:
>
> - javadocs
> - web docs
> - maven staging repository
>
> Besides what Ismael mentioned on the upgrade guide, some of the latest doc
> fixes in 2.0 seem not to be reflected in
> http://kafka.apache.org/20/documentation.html yet (this does not need a
> new
> RC, we can just re-copy-and-paste to kafka-site again).
>
>
> Thanks Rajini!
>
>
> Guozhang
>
>
>
> On Wed, Jul 18, 2018 at 7:48 AM, Ismael Juma  wrote:
>
> > Thanks Rajini! A documentation issue that we must fix before the release
> > (but does not require another RC), 1.2 (which became 2.0) is mentioned in
> > the upgrade notes:
> >
> > http://kafka.apache.org/20/documentation.html#upgrade
> >
> > Ismael
> >
> > On Sun, Jul 15, 2018 at 9:25 AM Rajini Sivaram 
> > wrote:
> >
> > > Hi Ismael,
> > >
> > > Thank you for pointing that out. I have re-uploaded the RC2 artifacts
> to
> > > maven including streams-scala_2.12. Also submitted a PR to update
> build &
> > > release scripts to include this.
> > >
> > > Thank you,
> > >
> > > Rajini
> > >
> > >
> > >
> > > On Fri, Jul 13, 2018 at 7:19 AM, Ismael Juma 
> wrote:
> > >
> > > > Hi Rajini,
> > > >
> > > > Thanks for generating the RC. It seems like the kafka-streams-scala
> > 2.12
> > > > artifact is missing from the Maven repository:
> > > >
> > > > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > > >
> > > > Since this is the first time we are publishing this artifact, it is
> > > > possible that this never worked properly.
> > > >
> > > > Ismael
> > > >
> > > > On Tue, Jul 10, 2018 at 10:17 AM Rajini Sivaram <
> > rajinisiva...@gmail.com
> > > >
> > > > wrote:
> > > >
> > > > > Hello Kafka users, developers and client-developers,
> > > > >
> > > > >
> > > > > This is the third candidate for release of Apache Kafka 2.0.0.
> > > > >
> > > > >
> > > > > This is a major version release of Apache Kafka. It includes 40 new KIPs and
> > > > >
> > > > > several critical bug fixes. Please see the 2.0.0 release plan for
> > more
> > > > > details:
> > > > >
> > > > > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80448820
> > > > >
> > > > >
> > > > > A few notable highlights:
> > > > >
> > > > >- Prefixed wildcard ACLs (KIP-290), Fine grained ACLs for
> > > CreateTopics
> > > > >(KIP-277)
> > > > >- SASL/OAUTHBEARER implementation (KIP-255)
> > > > >- Improved quota communication and customization of quotas
> > (KIP-219,
> > > > >KIP-257)
> > > > >- Efficient memory usage for down conversion (KIP-283)
> > > > >- Fix log divergence between leader and follower during fast
> > leader
> > > > >failover (KIP-279)
> > > > >- Drop support for Java 7 and remove deprecated code including
> old
> > > > scala
> > > > >clients
> > > > >- Connect REST extension plugin, support for externalizing
> secrets
> > > and
> > > > >improved error handling (KIP-285, KIP-297, KIP-298 etc.)
> > > > >- Scala API for Kafka Streams and other Streams API improvements
> > > > >(KIP-270, KIP-150, KIP-245, KIP-251 etc.)
> > > > >
> > > > >
> > > > > Release notes for the 2.0.0 release:
> > > > >
> > > > > http://home.apache.org/~rsivaram/kafka-2.0.0-rc2/RELEASE_NOTES.html
> > > > >
> > > > >

Re: [VOTE] 2.0.0 RC2

2018-07-15 Thread Rajini Sivaram
Hi Ismael,

Thank you for pointing that out. I have re-uploaded the RC2 artifacts to
maven including streams-scala_2.12. Also submitted a PR to update build &
release scripts to include this.

Thank you,

Rajini



On Fri, Jul 13, 2018 at 7:19 AM, Ismael Juma  wrote:

> Hi Rajini,
>
> Thanks for generating the RC. It seems like the kafka-streams-scala 2.12
> artifact is missing from the Maven repository:
>
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> Since this is the first time we are publishing this artifact, it is
> possible that this never worked properly.
>
> Ismael
>
> On Tue, Jul 10, 2018 at 10:17 AM Rajini Sivaram 
> wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> >
> > This is the third candidate for release of Apache Kafka 2.0.0.
> >
> >
> > This is a major version release of Apache Kafka. It includes 40 new KIPs and
> >
> > several critical bug fixes. Please see the 2.0.0 release plan for more
> > details:
> >
> > https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80448820
> >
> >
> > A few notable highlights:
> >
> >- Prefixed wildcard ACLs (KIP-290), Fine grained ACLs for CreateTopics
> >(KIP-277)
> >- SASL/OAUTHBEARER implementation (KIP-255)
> >- Improved quota communication and customization of quotas (KIP-219,
> >KIP-257)
> >- Efficient memory usage for down conversion (KIP-283)
> >- Fix log divergence between leader and follower during fast leader
> >failover (KIP-279)
> >- Drop support for Java 7 and remove deprecated code including old
> scala
> >clients
> >- Connect REST extension plugin, support for externalizing secrets and
> >improved error handling (KIP-285, KIP-297, KIP-298 etc.)
> >- Scala API for Kafka Streams and other Streams API improvements
> >(KIP-270, KIP-150, KIP-245, KIP-251 etc.)
> >
> >
> > Release notes for the 2.0.0 release:
> >
> > http://home.apache.org/~rsivaram/kafka-2.0.0-rc2/RELEASE_NOTES.html
> >
> >
> > *** Please download, test and vote by Friday, July 13, 4pm PT
> >
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> >
> > http://kafka.apache.org/KEYS
> >
> >
> > * Release artifacts to be voted upon (source and binary):
> >
> > http://home.apache.org/~rsivaram/kafka-2.0.0-rc2/
> >
> >
> > * Maven artifacts to be voted upon:
> >
> > https://repository.apache.org/content/groups/staging/
> >
> >
> > * Javadoc:
> >
> > http://home.apache.org/~rsivaram/kafka-2.0.0-rc2/javadoc/
> >
> >
> > * Tag to be voted upon (off 2.0 branch) is the 2.0.0 tag:
> >
> > https://github.com/apache/kafka/tree/2.0.0-rc2
> >
> >
> >
> > * Documentation:
> >
> > http://kafka.apache.org/20/documentation.html
> >
> >
> > * Protocol:
> >
> > http://kafka.apache.org/20/protocol.html
> >
> >
> > * Successful Jenkins builds for the 2.0 branch:
> >
> > Unit/integration tests: https://builds.apache.org/job/kafka-2.0-jdk8/72/
> >
> > System tests:
> > https://jenkins.confluent.io/job/system-test-kafka/job/2.0/27/
> >
> >
> > /**
> >
> >
> > Thanks,
> >
> >
> > Rajini
> >
>


[VOTE] 2.0.0 RC2

2018-07-10 Thread Rajini Sivaram
Hello Kafka users, developers and client-developers,


This is the third candidate for release of Apache Kafka 2.0.0.


This is a major version release of Apache Kafka. It includes 40 new KIPs and

several critical bug fixes. Please see the 2.0.0 release plan for more
details:

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80448820


A few notable highlights:

   - Prefixed wildcard ACLs (KIP-290), Fine grained ACLs for CreateTopics
   (KIP-277)
   - SASL/OAUTHBEARER implementation (KIP-255)
   - Improved quota communication and customization of quotas (KIP-219,
   KIP-257)
   - Efficient memory usage for down conversion (KIP-283)
   - Fix log divergence between leader and follower during fast leader
   failover (KIP-279)
   - Drop support for Java 7 and remove deprecated code including old scala
   clients
   - Connect REST extension plugin, support for externalizing secrets and
   improved error handling (KIP-285, KIP-297, KIP-298 etc.)
   - Scala API for Kafka Streams and other Streams API improvements
   (KIP-270, KIP-150, KIP-245, KIP-251 etc.)


Release notes for the 2.0.0 release:

http://home.apache.org/~rsivaram/kafka-2.0.0-rc2/RELEASE_NOTES.html


*** Please download, test and vote by Friday, July 13, 4pm PT


Kafka's KEYS file containing PGP keys we use to sign the release:

http://kafka.apache.org/KEYS


* Release artifacts to be voted upon (source and binary):

http://home.apache.org/~rsivaram/kafka-2.0.0-rc2/


* Maven artifacts to be voted upon:

https://repository.apache.org/content/groups/staging/


* Javadoc:

http://home.apache.org/~rsivaram/kafka-2.0.0-rc2/javadoc/


* Tag to be voted upon (off 2.0 branch) is the 2.0.0 tag:

https://github.com/apache/kafka/tree/2.0.0-rc2



* Documentation:

http://kafka.apache.org/20/documentation.html


* Protocol:

http://kafka.apache.org/20/protocol.html


* Successful Jenkins builds for the 2.0 branch:

Unit/integration tests: https://builds.apache.org/job/kafka-2.0-jdk8/72/

System tests: https://jenkins.confluent.io/job/system-test-kafka/job/2.0/27/


/**


Thanks,


Rajini


Re: [ANNOUNCE] Apache Kafka 0.10.2.2 Released

2018-07-04 Thread Rajini Sivaram
Thanks for driving the release, Matthias!

On Tue, Jul 3, 2018 at 8:48 PM, Matthias J. Sax  wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA512
>
> The Apache Kafka community is pleased to announce the release for
> Apache Kafka 0.10.2.2.
>
>
> This is a bug fix release and it includes fixes and improvements from
> 29 JIRAs, including a few critical bugs.
>
>
> All of the changes in this release can be found in the release notes:
>
>
> https://dist.apache.org/repos/dist/release/kafka/0.10.2.2/RELEASE_NOTES.html
>
>
>
> You can download the source release from:
>
>
> https://www.apache.org/dyn/closer.cgi?path=/kafka/0.10.2.2/kafka-0.10.2.2-src.tgz
>
>
> and binary releases from:
>
>
> https://www.apache.org/dyn/closer.cgi?path=/kafka/0.10.2.2/kafka_2.11-0.10.2.2.tgz
> (Scala 2.11)
>
> https://www.apache.org/dyn/closer.cgi?path=/kafka/0.10.2.2/kafka_2.12-0.10.2.2.tgz
> (Scala 2.12)
>
>
> ---
>
>
> Apache Kafka is a distributed streaming platform with four core APIs:
>
>
> ** The Producer API allows an application to publish a stream of records
> to one or more Kafka topics.
>
>
> ** The Consumer API allows an application to subscribe to one or more
> topics and process the stream of records produced to them.
>
>
> ** The Streams API allows an application to act as a stream processor,
> consuming an input stream from one or more topics and producing an
> output stream to one or more output topics, effectively transforming
> the input streams to output streams.
>
>
> ** The Connector API allows building and running reusable producers or
> consumers that connect Kafka topics to existing applications or data
> systems. For example, a connector to a relational database might
> capture every change to a table.
>
>
>
> With these APIs, Kafka can be used for two broad classes of application:
>
>
> ** Building real-time streaming data pipelines that reliably get data
> between systems or applications.
>
>
> ** Building real-time streaming applications that transform or react
> to the streams of data.
>
>
>
> Apache Kafka is in use at large and small companies worldwide,
> including Capital One, Goldman Sachs, ING, LinkedIn, Netflix,
> Pinterest, Rabobank, Target, The New York Times, Uber, Yelp, and
> Zalando, among others.
>
>
>
> A big thank you for the following 30 contributors to this release!
>
>
> Ewen Cheslack-Postava, Matthias J. Sax, Randall Hauch, Eno Thereska,
> Damian Guy, Rajini Sivaram, Colin P. Mccabe, Kelvin Rutt, Kyle
> Winkelman, Max Zheng, Guozhang Wang, Xavier Léauté, Konstantine
> Karantasis, Paolo Patierno, Robert Yokota, Tommy Becker, Arjun Satish,
> Xi Hu, Armin Braun, Edoardo Comar, Gunnar Morling, Gwen Shapira,
> Hooman Broujerdi, Ismael Juma, Jaikiran Pai, Jarek Rudzinski, Jason
> Gustafson, Jun Rao, Manikumar Reddy, Maytee Chinavanichkit
>
>
> We welcome your help and feedback. For more information on how to
> report problems, and to get involved, visit the project website at
> http://kafka.apache.org/
>
>
> Thank you!
>
>
> Regards,
>  -Matthias
>
>
> -BEGIN PGP SIGNATURE-
> Comment: GPGTools - https://gpgtools.org
>
> iQIzBAEBCgAdFiEEeiQdEa0SVXokodP3DccxaWtLg18FAls70woACgkQDccxaWtL
> g1+Xzw//Rb7K691p0R2qPOixZfllEuO926C9dIjiq9XA+dZrabgC4tMgAtE07Pf4
> i6ZUeIqVLH3IDYIKji92K+JUIWpu6fdmCc999bJUOJG+zABMbO0uRYm7/4LwfMPR
> kfjxRhxu31ewvafs3crE4Kfkekw4FLFIwHiaz3i/mKC1Ty6V4oiJcwHP4PZizE2r
> rTNbt0ZHzviiBH3klOoDh+ZZFwbDZn7EHUXm8o9fiiC52o/7TIqVWwmNzZJlNGRc
> bxC3boGXAXjgBwm7iqxBgkPku/kTTWpxj6jkHbS2NQfCZE5V7INQC2HlnynPHc7j
> m2F2plSvKOm4gi54q6SSiXkjcXA2dBJDe3y/jNpckXSQ31sNXsTi6vbRMkMPj8dJ
> j0SKhFoSCDpWejgLkUMg6hZgepgz7G1uYHA9K8SfCyCooqxsEY4I3dClNOySORly
> 4brdjZWpclhCn+zpekqBFZ9Sn3ipG4MOvH64chPEvYnysHkRH26FqXNPOK185V0Z
> Czl0dL0aEoJWZ3LxLTSoFkncKgqrcE00q4VknK3zGW65tlQ1DqTXtK3Ta1q8vX98
> PCCR4Tjhu0RcBAV2L4o43itKzIaLCp9lElA1341oQUB+tiPRA0GvWGg36EomehzF
> 1qdbjBug91CLyefZVVeEfTiqmNAYNyR1Zmx99rryx+Fp+5Ek9YI=
> =yjnJ
> -END PGP SIGNATURE-
>


Re: [ANNOUNCE] Apache Kafka 0.11.0.3 Released

2018-07-04 Thread Rajini Sivaram
Thanks for driving the release, Matthias!

On Tue, Jul 3, 2018 at 10:08 PM, Jason Gustafson  wrote:

> Awesome. Thanks Matthias!
>
> On Tue, Jul 3, 2018 at 12:44 PM, Yishun Guan  wrote:
>
> > Nice! Thanks~
> >
> > On Tue, Jul 3, 2018, 12:16 PM Ismael Juma  wrote:
> >
> > > Thanks Matthias!
> > >
> > > On Tue, 3 Jul 2018, 11:31 Matthias J. Sax,  wrote:
> > >
> > > > -BEGIN PGP SIGNED MESSAGE-
> > > > Hash: SHA512
> > > >
> > > > The Apache Kafka community is pleased to announce the release for
> > > > Apache Kafka 0.11.0.3.
> > > >
> > > >
> > > > This is a bug fix release and it includes fixes and improvements from
> > > > 27 JIRAs, including a few critical bugs.
> > > >
> > > >
> > > > All of the changes in this release can be found in the release notes:
> > > >
> > > >
> > > > https://dist.apache.org/repos/dist/release/kafka/0.11.0.3/RELEASE_NOTES.html
> > > >
> > > >
> > > >
> > > >
> > > > You can download the source release from:
> > > >
> > > >
> > > > https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.3/kafka-0.11.0.3-src.tgz
> > > >
> > > >
> > > >
> > > > and binary releases from:
> > > >
> > > >
> > > > https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.3/kafka_2.11-0.11.0.3.tgz
> > > > (Scala 2.11)
> > > >
> > > > https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.3/kafka_2.12-0.11.0.3.tgz
> > > > (Scala 2.12)
> > > >
> > > >
> > > > ---
> > > >
> > > >
> > > > Apache Kafka is a distributed streaming platform with four core APIs:
> > > >
> > > >
> > > > ** The Producer API allows an application to publish a stream of records
> > > > to one or more Kafka topics.
> > > >
> > > >
> > > > ** The Consumer API allows an application to subscribe to one or more
> > > > topics and process the stream of records produced to them.
> > > >
> > > >
> > > > ** The Streams API allows an application to act as a stream
> processor,
> > > > consuming an input stream from one or more topics and producing an
> > > > output stream to one or more output topics, effectively transforming
> > > > the input streams to output streams.
> > > >
> > > >
> > > > ** The Connector API allows building and running reusable producers
> or
> > > > consumers that connect Kafka topics to existing applications or data
> > > > systems. For example, a connector to a relational database might
> > > > capture every change to a table.
> > > >
> > > >
> > > >
> > > > With these APIs, Kafka can be used for two broad classes of
> > application:
> > > >
> > > >
> > > > ** Building real-time streaming data pipelines that reliably get data
> > > > between systems or applications.
> > > >
> > > >
> > > > ** Building real-time streaming applications that transform or react
> > > > to the streams of data.
> > > >
> > > >
> > > >
> > > > Apache Kafka is in use at large and small companies worldwide,
> > > > including Capital One, Goldman Sachs, ING, LinkedIn, Netflix,
> > > > Pinterest, Rabobank, Target, The New York Times, Uber, Yelp, and
> > > > Zalando, among others.
> > > >
> > > >
> > > >
> > > > A big thank you for the following 26 contributors to this release!
> > > >
> > > >
> > > > Matthias J. Sax, Ewen Cheslack-Postava, Konstantine Karantasis,
> > > > Guozhang Wang, Rajini Sivaram, Randall Hauch, tedyu, Jagadesh
> > > > Adireddi, Jarek Rudzin

Re: [kafka-clients] [VOTE] 1.0.2 RC1

2018-07-03 Thread Rajini Sivaram
Hi Matthias,

+1 (binding)

Thank you for running the release.

Ran quick start with binary, tests with source, checked javadocs.

Regards,

Rajini

On Mon, Jul 2, 2018 at 9:34 PM, Harsha  wrote:

> +1.
>
> 1) Ran unit tests
> 2) 3 node cluster , tested basic operations.
>
> Thanks,
> Harsha
>
> On Mon, Jul 2nd, 2018 at 11:57 AM, Jun Rao  wrote:
>
> >
> >
> >
> > Hi, Matthias,
> >
> > Thanks for the running the release. Verified quickstart on scala 2.12
> > binary. +1
> >
> > Jun
> >
> > On Fri, Jun 29, 2018 at 10:02 PM, Matthias J. Sax <
> matth...@confluent.io >
> >
> > wrote:
> >
> > > Hello Kafka users, developers and client-developers,
> > >
> > > This is the second candidate for release of Apache Kafka 1.0.2.
> > >
> > > This is a bug fix release addressing 27 tickets:
> > > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.0.2
> > >
> > > Release notes for the 1.0.2 release:
> > > http://home.apache.org/~mjsax/kafka-1.0.2-rc1/RELEASE_NOTES.html
> > >
> > > *** Please download, test and vote by end of next week (7/6/18).
> > >
> > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > http://kafka.apache.org/KEYS
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > http://home.apache.org/~mjsax/kafka-1.0.2-rc1/
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging/
> > >
> > > * Javadoc:
> > > http://home.apache.org/~mjsax/kafka-1.0.2-rc1/javadoc/
> > >
> > > * Tag to be voted upon (off 1.0 branch) is the 1.0.2 tag:
> > > https://github.com/apache/kafka/releases/tag/1.0.2-rc1
> > >
> > > * Documentation:
> > > http://kafka.apache.org/10/documentation.html
> > >
> > > * Protocol:
> > > http://kafka.apache.org/10/protocol.html
> > >
> > > * Successful Jenkins builds for the 1.0 branch:
> > > Unit/integration tests: https://builds.apache.org/job/kafka-1.0-jdk7/214/
> >
> > > System tests:
> > > https://jenkins.confluent.io/job/system-test-kafka/job/1.0/225/
> > >
> > > /**
> > >
> > > Thanks,
> > > -Matthias
> > >
> > >
> > > --
> > > You received this message because you are subscribed to the Google
> > Groups
> > > "kafka-clients" group.
> > > To unsubscribe from this group and stop receiving emails from it, send
> > an
> > > email to kafka-clients+unsubscr...@googlegroups.com.
> > > To post to this group, send email to kafka-clie...@googlegroups.com.
> > > Visit this group at https://groups.google.com/group/kafka-clients.
> > > To view this discussion on the web visit
> > > https://groups.google.com/d/msgid/kafka-clients/ca183ad4-9285-e423-3850-261f9dfec044%40confluent.io.
> >
> > > For more options, visit https://groups.google.com/d/optout.
> > >
> >
> >
> >
> >
>


Re: [kafka-clients] [VOTE] 2.0.0 RC1

2018-06-30 Thread Rajini Sivaram
Hi Manikumar,

Thank you for pointing that out, I had forgotten to drop the old artifacts.
New artifacts should be there now.

Regards,

Rajini

On Sat, Jun 30, 2018 at 7:44 AM, Manikumar 
wrote:

> looks like maven artifacts are not updated in the staging repo. They are
> still at old timestamp.
> https://repository.apache.org/content/groups/staging/org/apache/kafka/kafka_2.11/2.0.0/
>
> On Sat, Jun 30, 2018 at 12:06 AM Rajini Sivaram 
> wrote:
>
>> Hello Kafka users, developers and client-developers,
>>
>>
>> This is the second candidate for release of Apache Kafka 2.0.0.
>>
>>
>> This is a major version release of Apache Kafka. It includes 40 new KIPs and
>>
>> several critical bug fixes. Please see the 2.0.0 release plan for more
>> details:
>>
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80448820
>>
>>
>> A few notable highlights:
>>
>>- Prefixed wildcard ACLs (KIP-290), Fine grained ACLs for
>>CreateTopics (KIP-277)
>>- SASL/OAUTHBEARER implementation (KIP-255)
>>- Improved quota communication and customization of quotas (KIP-219,
>>KIP-257)
>>- Efficient memory usage for down conversion (KIP-283)
>>- Fix log divergence between leader and follower during fast leader
>>failover (KIP-279)
>>- Drop support for Java 7 and remove deprecated code including old
>>scala clients
>>- Connect REST extension plugin, support for externalizing secrets
>>and improved error handling (KIP-285, KIP-297, KIP-298 etc.)
>>- Scala API for Kafka Streams and other Streams API improvements
>>(KIP-270, KIP-150, KIP-245, KIP-251 etc.)
>>
>> Release notes for the 2.0.0 release:
>>
>> http://home.apache.org/~rsivaram/kafka-2.0.0-rc1/RELEASE_NOTES.html
>>
>>
>>
>> *** Please download, test and vote by Tuesday, July 3rd, 4pm PT
>>
>>
>> Kafka's KEYS file containing PGP keys we use to sign the release:
>>
>> http://kafka.apache.org/KEYS
>>
>>
>> * Release artifacts to be voted upon (source and binary):
>>
>> http://home.apache.org/~rsivaram/kafka-2.0.0-rc1/
>>
>>
>> * Maven artifacts to be voted upon:
>>
>> https://repository.apache.org/content/groups/staging/
>>
>>
>> * Javadoc:
>>
>> http://home.apache.org/~rsivaram/kafka-2.0.0-rc1/javadoc/
>>
>>
>> * Tag to be voted upon (off 2.0 branch) is the 2.0.0 tag:
>>
>> https://github.com/apache/kafka/tree/2.0.0-rc1
>>
>>
>> * Documentation:
>>
>> http://kafka.apache.org/20/documentation.html
>>
>>
>> * Protocol:
>>
>> http://kafka.apache.org/20/protocol.html
>>
>>
>> * Successful Jenkins builds for the 2.0 branch:
>>
>> Unit/integration tests: https://builds.apache.org/job/kafka-2.0-jdk8/66/
>>
>> System tests: https://jenkins.confluent.io/job/system-test-
>> kafka/job/2.0/15/
>>
>>
>>
>> Please test and verify the release artifacts and submit a vote for this RC
>> or report any issues so that we can fix them and roll out a new RC ASAP!
>>
>> Although this release vote requires PMC votes to pass, testing, votes,
>> and bug
>> reports are valuable and appreciated from everyone.
>>
>>
>> Thanks,
>>
>>
>> Rajini
>>
>> --
>> You received this message because you are subscribed to the Google Groups
>> "kafka-clients" group.
>> To unsubscribe from this group and stop receiving emails from it, send an
>> email to kafka-clients+unsubscr...@googlegroups.com.
>> To post to this group, send email to kafka-clie...@googlegroups.com.
>> Visit this group at https://groups.google.com/group/kafka-clients.
>> To view this discussion on the web visit https://groups.google.com/d/
>> msgid/kafka-clients/CAOJcB39GdTWOaK4qysvyPyGU8Ldm8
>> 2t_TA364x1MP8a8OAod6A%40mail.gmail.com
>> <https://groups.google.com/d/msgid/kafka-clients/CAOJcB39GdTWOaK4qysvyPyGU8Ldm82t_TA364x1MP8a8OAod6A%40mail.gmail.com?utm_medium=email_source=footer>
>> .
>> For more options, visit https://groups.google.com/d/optout.
>>
>


Re: [kafka-clients] [VOTE] 0.11.0.3 RC0

2018-06-29 Thread Rajini Sivaram
Hi Matthias,

+1 (binding)

Verified binary using quick start, verified source by building and running
tests, checked release notes.

Thanks for running the release!

Regards,

Rajini


On Fri, Jun 29, 2018 at 11:07 PM, Jun Rao  wrote:

> Hi, Matthias,
>
> Thanks for running the release. Verified quickstart on scala 2.12 binary.
> +1
>
> Jun
>
> On Fri, Jun 22, 2018 at 3:14 PM, Matthias J. Sax 
> wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the first candidate for release of Apache Kafka 0.11.0.3.
> >
> > This is a bug fix release closing 27 tickets:
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.11.0.3
> >
> > Release notes for the 0.11.0.3 release:
> > http://home.apache.org/~mjsax/kafka-0.11.0.3-rc0/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Tuesday, 6/26/18 end-of-day, so we
> > can close the vote on Wednesday.
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~mjsax/kafka-0.11.0.3-rc0/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * Javadoc:
> > http://home.apache.org/~mjsax/kafka-0.11.0.3-rc0/javadoc/
> >
> > * Tag to be voted upon (off 0.11.0 branch) is the 0.11.0.3 tag:
> > https://github.com/apache/kafka/releases/tag/0.11.0.3-rc0
> >
> > * Documentation:
> > http://kafka.apache.org/0110/documentation.html
> >
> > * Protocol:
> > http://kafka.apache.org/0110/protocol.html
> >
> > * Successful Jenkins builds for the 0.11.0 branch:
> > Unit/integration tests: https://builds.apache.org/job/
> > kafka-0.11.0-jdk7/385/
> > System tests:
> > https://jenkins.confluent.io/job/system-test-kafka/job/0.11.0/217/
> >
> >
> > Thanks,
> >   -Matthias
> >
> > --
> > You received this message because you are subscribed to the Google Groups
> > "kafka-clients" group.
> > To unsubscribe from this group and stop receiving emails from it, send an
> > email to kafka-clients+unsubscr...@googlegroups.com.
> > To post to this group, send email to kafka-clie...@googlegroups.com.
> > Visit this group at https://groups.google.com/group/kafka-clients.
> > To view this discussion on the web visit https://groups.google.com/d/
> > msgid/kafka-clients/2f54734a-8e8d-3cd7-060b-5f2b3010a20e%40confluent.io.
> > For more options, visit https://groups.google.com/d/optout.
> >
>


Re: [kafka-clients] [VOTE] 1.1.1 RC2

2018-06-29 Thread Rajini Sivaram
Hi Dong,

+1 (binding)

Verified binary using quick start, ran tests from source, checked release
notes.

Thanks for running the release!

Regards,

Rajini

On Fri, Jun 29, 2018 at 11:11 PM, Jun Rao  wrote:

> Hi, Dong,
>
> Thanks for running the release. Verified quickstart on scala 2.12 binary.
> +1
>
> Jun
>
> On Thu, Jun 28, 2018 at 6:12 PM, Dong Lin  wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the second candidate for release of Apache Kafka 1.1.1.
> >
> > Apache Kafka 1.1.1 is a bug-fix release for the 1.1 branch that was first
> > released with 1.1.0 about 3 months ago. We have fixed about 25 issues
> since
> > that release. A few of the more significant fixes include:
> >
> > KAFKA-6925  - Fix
> > memory leak in StreamsMetricsThreadImpl
> > KAFKA-6937  - In-sync
> > replica delayed during fetch if replica throttle is exceeded
> > KAFKA-6917  - Process
> > txn completion asynchronously to avoid deadlock
> > KAFKA-6893  - Create
> > processors before starting acceptor to avoid ArithmeticException
> > KAFKA-6870  -
> > Fix ConcurrentModificationException in SampledStat
> > KAFKA-6878  - Fix
> > NullPointerException when querying global state store
> > KAFKA-6879  - Invoke
> > session init callbacks outside lock to avoid Controller deadlock
> > KAFKA-6857  - Prevent
> > follower from truncating to the wrong offset if undefined leader epoch is
> > requested
> > KAFKA-6854  - Log
> > cleaner fails with transaction markers that are deleted during clean
> > KAFKA-6747  - Check
> > whether there is in-flight transaction before aborting transaction
> > KAFKA-6748  - Double
> > check before scheduling a new task after the punctuate call
> > KAFKA-6739  -
> > Fix IllegalArgumentException when down-converting from V2 to V0/V1
> > KAFKA-6728  -
> > Fix NullPointerException when instantiating the HeaderConverter
> >
> > Kafka 1.1.1 release plan:
> > https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+1.1.1
> >
> > Release notes for the 1.1.1 release:
> > http://home.apache.org/~lindong/kafka-1.1.1-rc2/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by Thursday, July 3, 12pm PT ***
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~lindong/kafka-1.1.1-rc2/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * Javadoc:
> > http://home.apache.org/~lindong/kafka-1.1.1-rc2/javadoc/
> >
> > * Tag to be voted upon (off 1.1 branch) is the 1.1.1-rc2 tag:
> > https://github.com/apache/kafka/tree/1.1.1-rc2
> >
> > * Documentation:
> > http://kafka.apache.org/11/documentation.html
> >
> > * Protocol:
> > http://kafka.apache.org/11/protocol.html
> >
> > * Successful Jenkins builds for the 1.1 branch:
> > Unit/integration tests: *https://builds.apache.org/
> job/kafka-1.1-jdk7/157/
> > *
> > System tests: https://jenkins.confluent.io/job/system-test-kafka-br
> > anch-builder/1817
> >
> >
> > Please test and verify the release artifacts and submit a vote for this
> RC,
> > or report any issues so we can fix them and get a new RC out ASAP.
> Although
> > this release vote requires PMC votes to pass, testing, votes, and bug
> > reports are valuable and appreciated from everyone.
> >
> >
> > Regards,
> > Dong
> >
> >
> > --
> > You received this message because you are subscribed to the Google Groups
> > "kafka-clients" group.
> > To unsubscribe from this group and stop receiving emails from it, send an
> > email to kafka-clients+unsubscr...@googlegroups.com.
> > To post to this group, send email to kafka-clie...@googlegroups.com.
> > Visit this group at https://groups.google.com/group/kafka-clients.
> > To view this discussion on the web visit https://groups.google.com/d/
> > msgid/kafka-clients/CAAaarBb1KsyD_KLuz6V4pfKQiUNQFLb9Lb_eNU%
> > 2BsWjd7Vr%2B_%2Bw%40mail.gmail.com
> >  KLuz6V4pfKQiUNQFLb9Lb_eNU%2BsWjd7Vr%2B_%2Bw%40mail.
> gmail.com?utm_medium=email_source=footer>
> > .
> > For more options, visit https://groups.google.com/d/optout.
> >
>


[VOTE] 2.0.0 RC1

2018-06-29 Thread Rajini Sivaram
Hello Kafka users, developers and client-developers,


This is the second candidate for release of Apache Kafka 2.0.0.


This is a major version release of Apache Kafka. It includes 40 new KIPs and
several critical bug fixes. Please see the 2.0.0 release plan for more
details:

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80448820


A few notable highlights:

   - Prefixed wildcard ACLs (KIP-290), Fine grained ACLs for CreateTopics
   (KIP-277)
   - SASL/OAUTHBEARER implementation (KIP-255)
   - Improved quota communication and customization of quotas (KIP-219,
   KIP-257)
   - Efficient memory usage for down conversion (KIP-283)
   - Fix log divergence between leader and follower during fast leader
   failover (KIP-279)
   - Drop support for Java 7 and remove deprecated code including old scala
   clients
   - Connect REST extension plugin, support for externalizing secrets and
   improved error handling (KIP-285, KIP-297, KIP-298 etc.)
   - Scala API for Kafka Streams and other Streams API improvements
   (KIP-270, KIP-150, KIP-245, KIP-251 etc.)

Release notes for the 2.0.0 release:

http://home.apache.org/~rsivaram/kafka-2.0.0-rc1/RELEASE_NOTES.html



*** Please download, test and vote by Tuesday, July 3rd, 4pm PT


Kafka's KEYS file containing PGP keys we use to sign the release:

http://kafka.apache.org/KEYS


* Release artifacts to be voted upon (source and binary):

http://home.apache.org/~rsivaram/kafka-2.0.0-rc1/


* Maven artifacts to be voted upon:

https://repository.apache.org/content/groups/staging/


* Javadoc:

http://home.apache.org/~rsivaram/kafka-2.0.0-rc1/javadoc/


* Tag to be voted upon (off 2.0 branch) is the 2.0.0 tag:

https://github.com/apache/kafka/tree/2.0.0-rc1


* Documentation:

http://kafka.apache.org/20/documentation.html


* Protocol:

http://kafka.apache.org/20/protocol.html


* Successful Jenkins builds for the 2.0 branch:

Unit/integration tests: https://builds.apache.org/job/kafka-2.0-jdk8/66/

System tests: https://jenkins.confluent.io/job/system-test-kafka/job/2.0/15/



Please test and verify the release artifacts and submit a vote for this RC
or report any issues so that we can fix them and roll out a new RC ASAP!

Although this release vote requires PMC votes to pass, testing, votes, and bug
reports are valuable and appreciated from everyone.


Thanks,


Rajini


Re: [VOTE] 2.0.0 RC0

2018-06-22 Thread Rajini Sivaram
Any and all testing is welcome, but testing in the following areas would be
particularly helpful:

   1. Performance and stress testing. Heroku and LinkedIn have helped with this
   in the past (and issues have been found and fixed).
   2. Client developers can verify that their clients can produce/consume
   compressed/uncompressed data to/from 2.0.0 brokers.
   3. End users can verify that their apps work correctly with the new
   release.

Thank you!

Rajini

On Thu, Jun 21, 2018 at 12:24 PM, Rajini Sivaram 
wrote:

> Sorry, the documentation does go live with the RC (thanks to Ismael for
> pointing this out), so here are the links:
>
> * Documentation:
>
> http://kafka.apache.org/20/documentation.html
>
>
> * Protocol:
>
> http://kafka.apache.org/20/protocol.html
>
>
>
> Regards,
>
>
> Rajini
>
>
> On Wed, Jun 20, 2018 at 9:08 PM, Rajini Sivaram 
> wrote:
>
>> Hello Kafka users, developers and client-developers,
>>
>>
>> This is the first candidate for release of Apache Kafka 2.0.0.
>>
>>
>> This is a major version release of Apache Kafka. It includes 40 new  KIPs
>> and
>>
>> several critical bug fixes. Please see the 2.0.0 release plan for more
>> details:
>>
>> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80448820
>>
>>
>> A few notable highlights:
>>
>>- Prefixed wildcard ACLs (KIP-290), Fine grained ACLs for
>>CreateTopics (KIP-277)
>>- SASL/OAUTHBEARER implementation (KIP-255)
>>- Improved quota communication and customization of quotas (KIP-219,
>>KIP-257)
>>- Efficient memory usage for down conversion (KIP-283)
>>- Fix log divergence between leader and follower during fast leader
>>failover (KIP-279)
>>- Drop support for Java 7 and remove deprecated code including old
>>scala clients
>>- Connect REST extension plugin, support for externalizing secrets
>>and improved error handling (KIP-285, KIP-297, KIP-298 etc.)
>>- Scala API for Kafka Streams and other Streams API improvements
>>(KIP-270, KIP-150, KIP-245, KIP-251 etc.)
>>
>>
>> Release notes for the 2.0.0 release:
>>
>> http://home.apache.org/~rsivaram/kafka-2.0.0-rc0/RELEASE_NOTES.html
>>
>>
>> *** Please download, test and vote by Monday, June 25, 4pm PT
>>
>>
>> Kafka's KEYS file containing PGP keys we use to sign the release:
>>
>> http://kafka.apache.org/KEYS
>>
>>
>> * Release artifacts to be voted upon (source and binary):
>>
>> http://home.apache.org/~rsivaram/kafka-2.0.0-rc0/
>>
>>
>> * Maven artifacts to be voted upon:
>>
>> https://repository.apache.org/content/groups/staging/
>>
>>
>> * Javadoc:
>>
>> http://home.apache.org/~rsivaram/kafka-2.0.0-rc0/javadoc/
>>
>>
>> * Tag to be voted upon (off 2.0 branch) is the 2.0.0 tag:
>>
>> https://github.com/apache/kafka/tree/2.0.0-rc0
>>
>>
>> * Documentation:
>>
>> http://home.apache.org/~rsivaram/kafka-2.0.0-rc0/kafka_2.11-
>> 2.0.0-site-docs.tgz
>>
>> (Since documentation cannot go live until 2.0.0 is released, please
>> download and verify)
>>
>>
>> * Successful Jenkins builds for the 2.0 branch:
>>
>> Unit/integration tests: https://builds.apache.org/job/kafka-2.0-jdk8/48/
>>
>> System tests: https://jenkins.confluent.io/job/system-test-kafka/jo
>> b/2.0/6/ (2 failures are known flaky tests)
>>
>>
>>
>> Please test and verify the release artifacts and submit a vote for this RC
>> or report any issues so that we can fix them and roll out a new RC ASAP!
>>
>> Although this release vote requires PMC votes to pass, testing, votes,
>> and bug
>> reports are valuable and appreciated from everyone.
>>
>>
>> Thanks,
>>
>>
>> Rajini
>>
>>
>>
>


Re: [VOTE] 2.0.0 RC0

2018-06-21 Thread Rajini Sivaram
Sorry, the documentation does go live with the RC (thanks to Ismael for
pointing this out), so here are the links:

* Documentation:

http://kafka.apache.org/20/documentation.html


* Protocol:

http://kafka.apache.org/20/protocol.html



Regards,


Rajini


On Wed, Jun 20, 2018 at 9:08 PM, Rajini Sivaram 
wrote:

> Hello Kafka users, developers and client-developers,
>
>
> This is the first candidate for release of Apache Kafka 2.0.0.
>
>
> This is a major version release of Apache Kafka. It includes 40 new  KIPs
> and
>
> several critical bug fixes. Please see the 2.0.0 release plan for more
> details:
>
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80448820
>
>
> A few notable highlights:
>
>- Prefixed wildcard ACLs (KIP-290), Fine grained ACLs for CreateTopics
>(KIP-277)
>- SASL/OAUTHBEARER implementation (KIP-255)
>- Improved quota communication and customization of quotas (KIP-219,
>KIP-257)
>- Efficient memory usage for down conversion (KIP-283)
>- Fix log divergence between leader and follower during fast leader
>failover (KIP-279)
>- Drop support for Java 7 and remove deprecated code including old
>scala clients
>- Connect REST extension plugin, support for externalizing secrets and
>improved error handling (KIP-285, KIP-297, KIP-298 etc.)
>- Scala API for Kafka Streams and other Streams API improvements
>(KIP-270, KIP-150, KIP-245, KIP-251 etc.)
>
>
> Release notes for the 2.0.0 release:
>
> http://home.apache.org/~rsivaram/kafka-2.0.0-rc0/RELEASE_NOTES.html
>
>
> *** Please download, test and vote by Monday, June 25, 4pm PT
>
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
>
> http://kafka.apache.org/KEYS
>
>
> * Release artifacts to be voted upon (source and binary):
>
> http://home.apache.org/~rsivaram/kafka-2.0.0-rc0/
>
>
> * Maven artifacts to be voted upon:
>
> https://repository.apache.org/content/groups/staging/
>
>
> * Javadoc:
>
> http://home.apache.org/~rsivaram/kafka-2.0.0-rc0/javadoc/
>
>
> * Tag to be voted upon (off 2.0 branch) is the 2.0.0 tag:
>
> https://github.com/apache/kafka/tree/2.0.0-rc0
>
>
> * Documentation:
>
> http://home.apache.org/~rsivaram/kafka-2.0.0-rc0/kafka_2.11-
> 2.0.0-site-docs.tgz
>
> (Since documentation cannot go live until 2.0.0 is released, please
> download and verify)
>
>
> * Successful Jenkins builds for the 2.0 branch:
>
> Unit/integration tests: https://builds.apache.org/job/kafka-2.0-jdk8/48/
>
> System tests: https://jenkins.confluent.io/job/system-test-kafka/jo
> b/2.0/6/ (2 failures are known flaky tests)
>
>
>
> Please test and verify the release artifacts and submit a vote for this RC
> or report any issues so that we can fix them and roll out a new RC ASAP!
>
> Although this release vote requires PMC votes to pass, testing, votes,
> and bug
> reports are valuable and appreciated from everyone.
>
>
> Thanks,
>
>
> Rajini
>
>
>


[VOTE] 2.0.0 RC0

2018-06-20 Thread Rajini Sivaram
Hello Kafka users, developers and client-developers,


This is the first candidate for release of Apache Kafka 2.0.0.


This is a major version release of Apache Kafka. It includes 40 new KIPs and
several critical bug fixes. Please see the 2.0.0 release plan for more
details:

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=80448820


A few notable highlights:

   - Prefixed wildcard ACLs (KIP-290), Fine grained ACLs for CreateTopics
   (KIP-277)
   - SASL/OAUTHBEARER implementation (KIP-255)
   - Improved quota communication and customization of quotas (KIP-219,
   KIP-257)
   - Efficient memory usage for down conversion (KIP-283)
   - Fix log divergence between leader and follower during fast leader
   failover (KIP-279)
   - Drop support for Java 7 and remove deprecated code including old scala
   clients
   - Connect REST extension plugin, support for externalizing secrets and
   improved error handling (KIP-285, KIP-297, KIP-298 etc.)
   - Scala API for Kafka Streams and other Streams API improvements
   (KIP-270, KIP-150, KIP-245, KIP-251 etc.)


Release notes for the 2.0.0 release:

http://home.apache.org/~rsivaram/kafka-2.0.0-rc0/RELEASE_NOTES.html


*** Please download, test and vote by Monday, June 25, 4pm PT


Kafka's KEYS file containing PGP keys we use to sign the release:

http://kafka.apache.org/KEYS


* Release artifacts to be voted upon (source and binary):

http://home.apache.org/~rsivaram/kafka-2.0.0-rc0/
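
A typical signature check looks like this (the artifact name below is
illustrative; substitute whichever archive you downloaded):

gpg --import KEYS
gpg --verify kafka_2.11-2.0.0.tgz.asc kafka_2.11-2.0.0.tgz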


* Maven artifacts to be voted upon:

https://repository.apache.org/content/groups/staging/


* Javadoc:

http://home.apache.org/~rsivaram/kafka-2.0.0-rc0/javadoc/


* Tag to be voted upon (off 2.0 branch) is the 2.0.0 tag:

https://github.com/apache/kafka/tree/2.0.0-rc0


* Documentation:

http://home.apache.org/~rsivaram/kafka-2.0.0-rc0/
kafka_2.11-2.0.0-site-docs.tgz

(Since documentation cannot go live until 2.0.0 is released, please
download and verify)


* Successful Jenkins builds for the 2.0 branch:

Unit/integration tests: https://builds.apache.org/job/kafka-2.0-jdk8/48/

System tests: https://jenkins.confluent.io/job/system-test-kafka/job/2.0/6/ (2
failures are known flaky tests)



Please test and verify the release artifacts and submit a vote for this RC
or report any issues so that we can fix them and roll out a new RC ASAP!

Although this release vote requires PMC votes to pass, testing, votes, and bug
reports are valuable and appreciated from everyone.


Thanks,


Rajini


Re: KIP-226 - Dynamic Broker Configuration

2018-04-19 Thread Rajini Sivaram
Hi Darshan,

We currently allow only keystores to be updated dynamically, and you need to
use kafka-configs.sh to update the keystore config. See
https://kafka.apache.org/documentation/#dynamicbrokerconfigs.
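
For example, a keystore for a listener named EXTERNAL on broker 0 can be
updated along these lines (the broker id, listener name, paths and passwords
below are illustrative, not taken from your setup):

bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type brokers --entity-name 0 --alter \
  --add-config 'listener.name.external.ssl.keystore.location=/path/to/new.keystore.jks,listener.name.external.ssl.keystore.password=new-keystore-password'

Since truststores are not dynamically reconfigurable, a new CA certificate
added to the truststore still requires a broker restart to take effect.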

On Thu, Apr 19, 2018 at 6:51 AM, Darshan 
wrote:

> Hi
>
> KIP-226 is released in 1.1. I had a questions about it.
>
> If we add a new certificate (programmatically) to the truststore that the
> Kafka broker is using, do we need to issue any CLI or other command for the
> Kafka broker to pick up the new certificate, or does everything happen
> automatically with KIP-226?
>
> Thanks.
>
>
>


Fwd: [ANNOUNCE] Apache Kafka 1.1.0 Released

2018-03-29 Thread Rajini Sivaram
Resending to the kafka-clients group:

-- Forwarded message --
From: Rajini Sivaram <rsiva...@apache.org>
Date: Thu, Mar 29, 2018 at 10:27 AM
Subject: [ANNOUNCE] Apache Kafka 1.1.0 Released
To: annou...@apache.org, Users <users@kafka.apache.org>, dev <
d...@kafka.apache.org>, kafka-clients <kafka-clie...@googlegroups.com>


The Apache Kafka community is pleased to announce the release for

Apache Kafka 1.1.0.


Kafka 1.1.0 includes a number of significant new features.

Here is a summary of some notable changes:


** Kafka 1.1.0 includes significant improvements to the Kafka Controller

   that speed up controlled shutdown. ZooKeeper session expiration edge cases

   have also been fixed as part of this effort.


** Controller improvements also enable more partitions to be supported on a

   single cluster. KIP-227 introduced incremental fetch requests, providing

   more efficient replication when the number of partitions is large.


** KIP-113 added support for replica movement between log directories to

   enable data balancing with JBOD.


** Some of the broker configuration options like SSL keystores can now be

   updated dynamically without restarting the broker. See KIP-226 for details

   and the full list of dynamic configs.


** Delegation token based authentication (KIP-48) has been added to Kafka

   brokers to support a large number of clients without overloading Kerberos

   KDCs or other authentication servers.


** Several new features have been added to Kafka Connect, including header

   support (KIP-145), SSL and Kafka cluster identifiers in the Connect REST

   interface (KIP-208 and KIP-238), validation of connector names (KIP-212)

   and support for topic regex in sink connectors (KIP-215). Additionally,

   the default maximum heap size for Connect workers was increased to 2GB.


** Several improvements have been added to the Kafka Streams API, including

   reducing repartition topic partitions footprint, customizable error

   handling for produce failures and enhanced resilience to broker

   unavailability.  See KIPs 205, 210, 220, 224 and 239 for details.


All of the changes in this release can be found in the release notes:



https://dist.apache.org/repos/dist/release/kafka/1.1.0/RELEASE_NOTES.html




You can download the source release from:



https://www.apache.org/dyn/closer.cgi?path=/kafka/1.1.0/kafka-1.1.0-src.tgz



and binary releases from:



https://www.apache.org/dyn/closer.cgi?path=/kafka/1.1.0/kafka_2.11-1.1.0.tgz

(Scala 2.11)


https://www.apache.org/dyn/closer.cgi?path=/kafka/1.1.0/kafka_2.12-1.1.0.tgz

(Scala 2.12)



--



Apache Kafka is a distributed streaming platform with four core APIs:



** The Producer API allows an application to publish a stream of records to

one or more Kafka topics.



** The Consumer API allows an application to subscribe to one or more

topics and process the stream of records produced to them.



** The Streams API allows an application to act as a stream processor,

consuming an input stream from one or more topics and producing an output

stream to one or more output topics, effectively transforming the input

streams to output streams.



** The Connector API allows building and running reusable producers or

consumers that connect Kafka topics to existing applications or data

systems. For example, a connector to a relational database might capture

every change to a table.




With these APIs, Kafka can be used for two broad classes of application:

** Building real-time streaming data pipelines that reliably get data

between systems or applications.



** Building real-time streaming applications that transform or react to the

streams of data.




Apache Kafka is in use at large and small companies worldwide, including

Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,

Target, The New York Times, Uber, Yelp, and Zalando, among others.




A big thank you to the following 120 contributors to this release!


Adem Efe Gencer, Alex Good, Andras Beni, Andy Bryant, Antony Stubbs,

Apurva Mehta, Arjun Satish, bartdevylder, Bill Bejeck, Charly Molter,

Chris Egerton, Clemens Valiente, cmolter, Colin P. Mccabe,

Colin Patrick McCabe, ConcurrencyPractitioner, Damian Guy, dan norwood,

Daniel Wojda, Derrick Or, Dmitry Minkovsky, Dong Lin, Edoardo Comar,

ekenny, Elyahou, Eugene Sevastyanov, Ewen Cheslack-Postava, Filipe Agapito,

fredfp, Gavrie Philipson, Gunnar Morling, Guozhang Wang, hmcl, Hugo Louro,

huxi, huxihx, Igor Kostiakov, Ismael Juma, Ivan Babrou, Jacek Laskowski,

Jakub Scholz, Jason Gustafson, Jeff Klukas, Jeff Widman, Jeremy
Custenborder,

Jeyhun Karimov, Jiangjie (Becket) Qin, Jiangjie Qin, Jimin Hsieh, Joel
Hamill,

John Roesler, Jorge Quilcate Otoya, Jun Rao, Kamal C, Kamil Szymański,

Koen De Groote, Konstantine Karantasis, lisa2lisa, Logan Buckley,

[ANNOUNCE] Apache Kafka 1.1.0 Released

2018-03-29 Thread Rajini Sivaram
 Rosenblatt,

Rajini Sivaram, Randall Hauch, Richard Yu, RichardYuSTUG, Robert Yokota,

Rohan, Rohan Desai, Romain Hardouin, Ron Dagostino, sachinbhalekar,

Sagar Chavan, Sandor Murakozi, Satish Duggana, Scott, Sean McCauliff,

Siva Santhalingam, siva santhalingam, Soenke Liebau, Steven Aerts, Study,

Tanvi Jaywant, tedyu, Tobias Gies, Tom Bentley, Tommy Becker, Travis
Wellman,

umesh chaudhary, Vahid Hashemian, Viktor Somogyi, Wladimir Schmidt,

wushujames, Xavier Léauté, Xin Li, Yaswanth Kumar, ying-zheng, Yu, Yu-Jhe



Many thanks to Damian Guy for driving this release.


We welcome your help and feedback. For more information on how to

report problems, and to get involved, visit the project website at

http://kafka.apache.org/



Thank you!


Rajini


Re: [VOTE] 1.1.0 RC4

2018-03-28 Thread Rajini Sivaram
This vote passes with 9 +1 votes (4 binding) and no 0 or -1 votes.

+1 votes
PMC Members:
* Jason Gustafson
* Jun Rao
* Gwen Shapira
* Rajini Sivaram

Committers:
* No votes

Community:
* Ted Yu
* Manikumar
* Jeff Chao
* Vahid Hashemian
* Brett Rann

0 votes
* No votes

-1 votes
* No votes

Vote thread: https://markmail.org/message/trlhjyebmidsamuu

I'll continue with the release process and the release announcement will follow.

Thanks,


Rajini




On Wed, Mar 28, 2018 at 6:34 AM, Gwen Shapira <g...@confluent.io> wrote:

> +1
>
> Checked keys, built, ran quickstart. LGTM.
>
> On Fri, Mar 23, 2018 at 4:37 PM, Rajini Sivaram <rajinisiva...@gmail.com>
> wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the fifth candidate for release of Apache Kafka 1.1.0.
> >
> > https://cwiki.apache.org/confluence/pages/viewpage.action?
> pageId=75957546
> >
> > A few highlights:
> >
> > * Significant Controller improvements (much faster and session expiration
> > edge
> > cases fixed)
> > * Data balancing across log directories (JBOD)
> > * More efficient replication when the number of partitions is large
> > * Dynamic Broker Configs
> > * Delegation tokens (KIP-48)
> > * Kafka Streams API improvements (KIP-205 / 210 / 220 / 224 / 239)
> >
> > Release notes for the 1.1.0 release:
> >
> > http://home.apache.org/~rsivaram/kafka-1.1.0-rc4/RELEASE_NOTES.html
> >
> >
> > *** Please download, test and vote by Tuesday March 27th 4pm PT.
> >
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> >
> > http://kafka.apache.org/KEYS
> >
> >
> > * Release artifacts to be voted upon (source and binary):
> >
> > http://home.apache.org/~rsivaram/kafka-1.1.0-rc4/
> >
> >
> > * Maven artifacts to be voted upon:
> >
> > https://repository.apache.org/content/groups/staging/
> >
> >
> > * Javadoc:
> >
> > http://home.apache.org/~rsivaram/kafka-1.1.0-rc4/javadoc/
> >
> >
> > * Tag to be voted upon (off 1.1 branch) is the 1.1.0 tag:
> >
> > https://github.com/apache/kafka/tree/1.1.0-rc4
> >
> >
> >
> > * Documentation:
> >
> > http://kafka.apache.org/11/documentation.html
> >
> >
> > * Protocol:
> >
> > http://kafka.apache.org/11/protocol.html
> >
> >
> >
> > Thanks,
> >
> >
> > Rajini
> >
>
>
>
> --
> *Gwen Shapira*
> Product Manager | Confluent
> 650.450.2760 | @gwenshap
> Follow us: Twitter <https://twitter.com/ConfluentInc> | blog
> <http://www.confluent.io/blog>
>


Re: [VOTE] 1.1.0 RC4

2018-03-27 Thread Rajini Sivaram
Can we get some more votes for this RC so that the release can be rolled
out soon?

Many thanks,

Rajini

On Sat, Mar 24, 2018 at 6:54 PM, Ted Yu <yuzhih...@gmail.com> wrote:

> I wasn't able to reproduce the test failure when it is run alone.
>
> This seems to be flaky test.
>
> +1 from me.
>
> On Sat, Mar 24, 2018 at 11:49 AM, Rajini Sivaram <rajinisiva...@gmail.com>
> wrote:
>
> > Hi Ted,
> >
> > Thank you for testing the RC. I haven't been able to recreate that
> failure
> > after running the test a 100 times. Was it a one-off transient failure or
> > does it fail consistently for you?
> >
> >
> > On Sat, Mar 24, 2018 at 2:51 AM, Ted Yu <yuzhih...@gmail.com> wrote:
> >
> > > When I ran test suite, I got one failure:
> > >
> > > kafka.api.PlaintextConsumerTest > testAsyncCommit FAILED
> > > java.lang.AssertionError: expected:<5> but was:<1>
> > > at org.junit.Assert.fail(Assert.java:88)
> > > at org.junit.Assert.failNotEquals(Assert.java:834)
> > > at org.junit.Assert.assertEquals(Assert.java:645)
> > > at org.junit.Assert.assertEquals(Assert.java:631)
> > > at
> > > kafka.api.BaseConsumerTest.awaitCommitCallback(
> > BaseConsumerTest.scala:214)
> > >     at
> > > kafka.api.PlaintextConsumerTest.testAsyncCommit(
> > > PlaintextConsumerTest.scala:513)
> > >
> > > Not sure if anyone else saw similar error.
> > >
> > > Cheers
> > >
> > > On Fri, Mar 23, 2018 at 4:37 PM, Rajini Sivaram <
> rajinisiva...@gmail.com
> > >
> > > wrote:
> > >
> > > > Hello Kafka users, developers and client-developers,
> > > >
> > > > This is the fifth candidate for release of Apache Kafka 1.1.0.
> > > >
> > > > https://cwiki.apache.org/confluence/pages/viewpage.
> > > action?pageId=75957546
> > > >
> > > > A few highlights:
> > > >
> > > > * Significant Controller improvements (much faster and session
> > expiration
> > > > edge
> > > > cases fixed)
> > > > * Data balancing across log directories (JBOD)
> > > > * More efficient replication when the number of partitions is large
> > > > * Dynamic Broker Configs
> > > > * Delegation tokens (KIP-48)
> > > > * Kafka Streams API improvements (KIP-205 / 210 / 220 / 224 / 239)
> > > >
> > > > Release notes for the 1.1.0 release:
> > > >
> > > > http://home.apache.org/~rsivaram/kafka-1.1.0-rc4/RELEASE_NOTES.html
> > > >
> > > >
> > > > *** Please download, test and vote by Tuesday March 27th 4pm PT.
> > > >
> > > >
> > > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > >
> > > > http://kafka.apache.org/KEYS
> > > >
> > > >
> > > > * Release artifacts to be voted upon (source and binary):
> > > >
> > > > http://home.apache.org/~rsivaram/kafka-1.1.0-rc4/
> > > >
> > > >
> > > > * Maven artifacts to be voted upon:
> > > >
> > > > https://repository.apache.org/content/groups/staging/
> > > >
> > > >
> > > > * Javadoc:
> > > >
> > > > http://home.apache.org/~rsivaram/kafka-1.1.0-rc4/javadoc/
> > > >
> > > >
> > > > * Tag to be voted upon (off 1.1 branch) is the 1.1.0 tag:
> > > >
> > > > https://github.com/apache/kafka/tree/1.1.0-rc4
> > > >
> > > >
> > > >
> > > > * Documentation:
> > > >
> > > > http://kafka.apache.org/11/documentation.html
> > > >
> > > >
> > > > * Protocol:
> > > >
> > > > http://kafka.apache.org/11/protocol.html
> > > >
> > > >
> > > >
> > > > Thanks,
> > > >
> > > >
> > > > Rajini
> > > >
> > >
> >
>


[VOTE] 1.1.0 RC4

2018-03-23 Thread Rajini Sivaram
Hello Kafka users, developers and client-developers,

This is the fifth candidate for release of Apache Kafka 1.1.0.

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=75957546

A few highlights:

* Significant Controller improvements (much faster and session expiration edge
cases fixed)
* Data balancing across log directories (JBOD)
* More efficient replication when the number of partitions is large
* Dynamic Broker Configs
* Delegation tokens (KIP-48)
* Kafka Streams API improvements (KIP-205 / 210 / 220 / 224 / 239)

Release notes for the 1.1.0 release:

http://home.apache.org/~rsivaram/kafka-1.1.0-rc4/RELEASE_NOTES.html


*** Please download, test and vote by Tuesday March 27th 4pm PT.


Kafka's KEYS file containing PGP keys we use to sign the release:

http://kafka.apache.org/KEYS


* Release artifacts to be voted upon (source and binary):

http://home.apache.org/~rsivaram/kafka-1.1.0-rc4/


* Maven artifacts to be voted upon:

https://repository.apache.org/content/groups/staging/


* Javadoc:

http://home.apache.org/~rsivaram/kafka-1.1.0-rc4/javadoc/


* Tag to be voted upon (off 1.1 branch) is the 1.1.0 tag:

https://github.com/apache/kafka/tree/1.1.0-rc4



* Documentation:

http://kafka.apache.org/11/documentation.html


* Protocol:

http://kafka.apache.org/11/protocol.html



Thanks,


Rajini


Re: [VOTE] 1.0.1 RC1

2018-02-15 Thread Rajini Sivaram
+1

Ran quickstart with binaries, built source and ran tests,

Thank you for running the release, Ewen.

Regards,

Rajini

On Thu, Feb 15, 2018 at 2:31 AM, Guozhang Wang  wrote:

> +1
>
> Ran tests, verified web docs.
>
> On Wed, Feb 14, 2018 at 6:00 PM, Satish Duggana 
> wrote:
>
> > +1 (non-binding)
> >
> > - Ran testAll/releaseTarGzAll on 1.0.1-rc1
> >  tag
> > - Ran through quickstart of core/streams
> >
> > Thanks,
> > Satish.
> >
> >
> > On Tue, Feb 13, 2018 at 11:30 PM, Damian Guy 
> wrote:
> >
> > > +1
> > >
> > > Ran tests, verified streams quickstart works
> > >
> > > On Tue, 13 Feb 2018 at 17:52 Damian Guy  wrote:
> > >
> > > > Thanks Ewen - i had the staging repo set up as profile that i forgot
> to
> > > > add to my maven command. All good.
> > > >
> > > > On Tue, 13 Feb 2018 at 17:41 Ewen Cheslack-Postava <
> e...@confluent.io>
> > > > wrote:
> > > >
> > > >> Damian,
> > > >>
> > > >> Which quickstart are you referring to? The streams quickstart only
> > > >> executes
> > > >> pre-built stuff afaict.
> > > >>
> > > >> In any case, if you're building a maven streams project, did you
> > modify
> > > it
> > > >> to point to the staging repository at
> > > >> https://repository.apache.org/content/groups/staging/ in addition
> to
> > > the
> > > >> default repos? During rc it wouldn't fetch from maven central since
> it
> > > >> hasn't been published there yet.
> > > >>
> > > >> If that is configured, more compete maven output would be helpful to
> > > track
> > > >> down where it is failing to resolve the necessary archetype.
> > > >>
> > > >> -Ewen
> > > >>
> > > >> On Tue, Feb 13, 2018 at 3:03 AM, Damian Guy 
> > > wrote:
> > > >>
> > > >> > Hi Ewen,
> > > >> >
> > > >> > I'm trying to run the streams quickstart and I'm getting:
> > > >> > [ERROR] Failed to execute goal
> > > >> > org.apache.maven.plugins:maven-archetype-plugin:3.0.1:generate
> > > >> > (default-cli) on project standalone-pom: The desired archetype
> does
> > > not
> > > >> > exist (org.apache.kafka:streams-quickstart-java:1.0.1)
> > > >> >
> > > >> > Something i'm missing?
> > > >> >
> > > >> > Thanks,
> > > >> > Damian
> > > >> >
> > > >> > On Tue, 13 Feb 2018 at 10:16 Manikumar  >
> > > >> wrote:
> > > >> >
> > > >> > > +1 (non-binding)
> > > >> > >
> > > >> > > ran quick-start, unit tests on the src.
> > > >> > >
> > > >> > >
> > > >> > >
> > > >> > > On Tue, Feb 13, 2018 at 5:31 AM, Ewen Cheslack-Postava <
> > > >> > e...@confluent.io>
> > > >> > > wrote:
> > > >> > >
> > > >> > > > Thanks for the heads up, I forgot to drop the old ones, I've
> > done
> > > >> that
> > > >> > > and
> > > >> > > > rc1 artifacts should be showing up now.
> > > >> > > >
> > > >> > > > -Ewen
> > > >> > > >
> > > >> > > >
> > > >> > > > On Mon, Feb 12, 2018 at 12:57 PM, Ted Yu  >
> > > >> wrote:
> > > >> > > >
> > > >> > > > > +1
> > > >> > > > >
> > > >> > > > > Ran test suite which passed.
> > > >> > > > >
> > > >> > > > > BTW it seems the staging repo hasn't been updated yet:
> > > >> > > > >
> > > >> > > > > https://repository.apache.org/content/groups/staging/org/
> > > >> > > > > apache/kafka/kafka-clients/
> > > >> > > > >
> > > >> > > > > On Mon, Feb 12, 2018 at 10:16 AM, Ewen Cheslack-Postava <
> > > >> > > > e...@confluent.io
> > > >> > > > > >
> > > >> > > > > wrote:
> > > >> > > > >
> > > >> > > > > > And of course I'm +1 since I've already done normal
> release
> > > >> > > validation
> > > >> > > > > > before posting this.
> > > >> > > > > >
> > > >> > > > > > -Ewen
> > > >> > > > > >
> > > >> > > > > > On Mon, Feb 12, 2018 at 10:15 AM, Ewen Cheslack-Postava <
> > > >> > > > > e...@confluent.io
> > > >> > > > > > >
> > > >> > > > > > wrote:
> > > >> > > > > >
> > > >> > > > > > > Hello Kafka users, developers and client-developers,
> > > >> > > > > > >
> > > >> > > > > > > This is the second candidate for release of Apache Kafka
> > > >> 1.0.1.
> > > >> > > > > > >
> > > >> > > > > > > This is a bugfix release for the 1.0 branch that was
> first
> > > >> > released
> > > >> > > > > with
> > > >> > > > > > > 1.0.0 about 3 months ago. We've fixed 49 significant
> > issues
> > > >> since
> > > >> > > > that
> > > >> > > > > > > release. Most of these are non-critical, but in
> aggregate
> > > >> these
> > > >> > > fixes
> > > >> > > > > > will
> > > >> > > > > > > have significant impact. A few of the more significant
> > fixes
> > > >> > > include:
> > > >> > > > > > >
> > > >> > > > > > > * KAFKA-6277: Make loadClass thread-safe for class
> loaders
> > > of
> > > >> > > Connect
> > > >> > > > > > > plugins
> > > >> > > > > > > * KAFKA-6185: Selector memory leak with high likelihood
> of
> > > >> OOM in
> > > >> > > > case
> > > >> > > > > of
> > > >> > > > > > > down conversion
> > > >> > > > > > > * KAFKA-6269: KTable 

Re: [ANNOUNCE] New Kafka PMC Member: Rajini Sivaram

2018-01-18 Thread Rajini Sivaram
Thanks everyone!

Regards,

Rajini

On Thu, Jan 18, 2018 at 8:53 AM, Damian Guy <damian@gmail.com> wrote:

> Congratulations Rajini!
>
> On Thu, 18 Jan 2018 at 00:57 Hu Xi <huxi...@hotmail.com> wrote:
>
> > Congratulations, Rajini Sivaram.  Very well deserved!
> >
> >
> > 
> > From: Konstantine Karantasis <konstant...@confluent.io>
> > Sent: January 18, 2018 6:23
> > To: d...@kafka.apache.org
> > Cc: users@kafka.apache.org
> > Subject: Re: [ANNOUNCE] New Kafka PMC Member: Rajini Sivaram
> >
> > Congrats Rajini!
> >
> > -Konstantine
> >
> > On Wed, Jan 17, 2018 at 2:18 PM, Becket Qin <becket@gmail.com>
> wrote:
> >
> > > Congratulations, Rajini!
> > >
> > > On Wed, Jan 17, 2018 at 1:52 PM, Ismael Juma <ism...@juma.me.uk>
> wrote:
> > >
> > > > Congratulations Rajini!
> > > >
> > > > On 17 Jan 2018 10:49 am, "Gwen Shapira" <g...@confluent.io> wrote:
> > > >
> > > > Dear Kafka Developers, Users and Fans,
> > > >
> > > > Rajini Sivaram became a committer in April 2017.  Since then, she
> > > remained
> > > > active in the community and contributed major patches, reviews and
> KIP
> > > > discussions. I am glad to announce that Rajini is now a member of the
> > > > Apache Kafka PMC.
> > > >
> > > > Congratulations, Rajini and looking forward to your future
> > contributions.
> > > >
> > > > Gwen, on behalf of Apache Kafka PMC
> > > >
> > >
> >
>


Re: [ANNOUNCE] New committer: Matthias J. Sax

2018-01-14 Thread Rajini Sivaram
Congratulations Matthias!

On Sat, Jan 13, 2018 at 11:34 AM, Mickael Maison 
wrote:

> Congratulations Matthias !
>
> On Sat, Jan 13, 2018 at 7:01 AM, Paolo Patierno 
> wrote:
> > Congratulations Matthias ! Very well deserved !
> > 
> > From: Guozhang Wang 
> > Sent: Friday, January 12, 2018 11:59:21 PM
> > To: d...@kafka.apache.org; users@kafka.apache.org
> > Subject: [ANNOUNCE] New committer: Matthias J. Sax
> >
> > Hello everyone,
> >
> > The PMC of Apache Kafka is pleased to announce Matthias J. Sax as our
> > newest Kafka committer.
> >
> > Matthias has made tremendous contributions to Kafka Streams API since
> early
> > 2016. His footprint has been all over the places in Streams: in the past
> > two years he has been the main driver on improving the join semantics
> > inside Streams DSL, summarizing all their shortcomings and bridging the
> > gaps; he has also been largely working on the exactly-once semantics of
> > Streams by leveraging the transaction messaging feature in 0.11.0. In
> > addition, Matthias has been very active in community activity that goes
> > beyond the mailing list: he's getting close to 1000 up votes and 100
> > helpful flags on SO for answering almost all questions about Kafka
> Streams.
> >
> > Thank you for your contribution and welcome to Apache Kafka, Matthias!
> >
> >
> >
> > Guozhang, on behalf of the Apache Kafka PMC
>


Fwd: [ANNOUNCE] Apache Kafka 0.11.0.2 Released

2017-11-17 Thread Rajini Sivaram
Resending to kafka-clients...


The Apache Kafka community is pleased to announce the release for Apache Kafka
0.11.0.2.


This is a bug fix release and it includes fixes and improvements from 16 JIRAs,
including a few critical bugs.


All of the changes in this release can be found in the release notes:


https://dist.apache.org/repos/dist/release/kafka/0.11.0.2/RELEASE_NOTES.html



You can download the source release from:


https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.2/kafka-0.11.0.2-src.tgz


and binary releases from:


https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.2/kafka_2.11-0.11.0.2.tgz (Scala 2.11)

https://www.apache.org/dyn/closer.cgi?path=/kafka/0.11.0.2/kafka_2.12-0.11.0.2.tgz (Scala 2.12)



---


Apache Kafka is a distributed streaming platform with four core APIs:


** The Producer API allows an application to publish a stream of records to
one or more Kafka topics.


** The Consumer API allows an application to subscribe to one or more
topics and process the stream of records produced to them.


** The Streams API allows an application to act as a stream processor,
consuming an input stream from one or more topics and producing an output
stream to one or more output topics, effectively transforming the input
streams to output streams.


** The Connector API allows building and running reusable producers or
consumers that connect Kafka topics to existing applications or data
systems. For example, a connector to a relational database might capture
every change to a table.



With these APIs, Kafka can be used for two broad classes of application:


** Building real-time streaming data pipelines that reliably get data
between systems or applications.


** Building real-time streaming applications that transform or react to the
streams of data.



Apache Kafka is in use at large and small companies worldwide, including
Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
Target, The New York Times, Uber, Yelp, and Zalando, among others.



A big thank you to the following 20 contributors to this release!


Alex Good, Apurva Mehta, bartdevylder, Colin P. Mccabe, Damian Guy, Erkan
Unal, Ewen Cheslack-Postava, Guozhang Wang, Hugo Louro, Jason Gustafson,
Konstantine Karantasis, Manikumar Reddy, manjuapu, Mickael Maison, oleg,
Onur Karaman, Rajini Sivaram, siva santhalingam, Xavier Léauté, Xin Li


We welcome your help and feedback. For more information on how to report
problems, and to get involved, visit the project website at
http://kafka.apache.org/


Thank you!


Regards,


Rajini


Re: [VOTE] 0.11.0.2 RC0

2017-11-16 Thread Rajini Sivaram
Correction from previous note:

Vote closed with 3 binding PMC votes (Gwen, Guozhang, Ismael) and 4
non-binding votes.

On Thu, Nov 16, 2017 at 10:03 AM, Rajini Sivaram <rajinisiva...@gmail.com>
wrote:

> +1 from me
>
> The vote has passed with 4 binding votes (Gwen, Guozhang, Ismael and
> Rajini) and 3 non-binding votes (Ted, Satish and Tim). I will close the
> voting thread and complete the release process.
>
> Many thanks to everyone for voting.
>
> Regards,
>
> Rajini
>
> On Thu, Nov 16, 2017 at 3:01 AM, Ismael Juma <ism...@juma.me.uk> wrote:
>
>> +1 (binding). Tested the quickstart with the source and binary (Scala
>> 2.12)
>> artifacts, ran the tests on the source artifact and verified some
>> signatures and hashes on source and binary (Scala 2.12) artifacts.
>>
>> Thanks for managing this release Rajini!
>>
>> On Sat, Nov 11, 2017 at 12:37 AM, Rajini Sivaram <rajinisiva...@gmail.com
>> >
>> wrote:
>>
>> > Hello Kafka users, developers and client-developers,
>> >
>> >
>> > This is the first candidate for release of Apache Kafka 0.11.0.2.
>> >
>> >
>> > This is a bug fix release and it includes fixes and improvements from 16
>> > JIRAs,
>> > including a few critical bugs.
>> >
>> >
>> > Release notes for the 0.11.0.2 release:
>> >
>> > http://home.apache.org/~rsivaram/kafka-0.11.0.2-rc0/RELEASE_NOTES.html
>> >
>> >
>> > *** Please download, test and vote by Wednesday the 15th of November,
>> 8PM
>> > PT
>> >
>> >
>> > Kafka's KEYS file containing PGP keys we use to sign the release:
>> >
>> > http://kafka.apache.org/KEYS
>> >
>> >
>> > * Release artifacts to be voted upon (source and binary):
>> >
>> > http://home.apache.org/~rsivaram/kafka-0.11.0.2-rc0/
>> >
>> >
>> > * Maven artifacts to be voted upon:
>> >
>> > https://repository.apache.org/content/groups/staging/
>> >
>> >
>> > * Javadoc:
>> >
>> > http://home.apache.org/~rsivaram/kafka-0.11.0.2-rc0/javadoc/
>> >
>> >
>> > * Tag to be voted upon (off 0.11.0 branch) is the 0.11.0.2 tag:
>> >
>> > https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
>> > 25639822d6e23803c599cba35ad3dc1a2817b404
>> >
>> >
>> >
>> > * Documentation:
>> >
>> > Note the documentation can't be pushed live due to changes that will
>> > not go live
>> > until the release. You can manually verify by downloading
>> >
>> > http://home.apache.org/~rsivaram/kafka-0.11.0.2-rc0/kafka_2.
>> > 11-0.11.0.2-site-docs.tgz
>> >
>> >
>> >
>> > * Protocol:
>> >
>> > http://kafka.apache.org/0110/protocol.html
>> >
>> >
>> > * Successful Jenkins builds for the 0.11.0 branch:
>> >
>> > Unit/integration tests: https://builds.apache.org/job/
>> > kafka-0.11.0-jdk7/333/
>> >
>> >
>> >
>> >
>> > Thanks,
>> >
>> >
>> > Rajini
>> >
>>
>
>


Re: [VOTE] 0.11.0.2 RC0

2017-11-16 Thread Rajini Sivaram
+1 from me

The vote has passed with 4 binding votes (Gwen, Guozhang, Ismael and
Rajini) and 3 non-binding votes (Ted, Satish and Tim). I will close the
voting thread and complete the release process.

Many thanks to everyone for voting.

Regards,

Rajini

On Thu, Nov 16, 2017 at 3:01 AM, Ismael Juma <ism...@juma.me.uk> wrote:

> +1 (binding). Tested the quickstart with the source and binary (Scala 2.12)
> artifacts, ran the tests on the source artifact and verified some
> signatures and hashes on source and binary (Scala 2.12) artifacts.
>
> Thanks for managing this release Rajini!
>
> On Sat, Nov 11, 2017 at 12:37 AM, Rajini Sivaram <rajinisiva...@gmail.com>
> wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> >
> > This is the first candidate for release of Apache Kafka 0.11.0.2.
> >
> >
> > This is a bug fix release and it includes fixes and improvements from 16
> > JIRAs,
> > including a few critical bugs.
> >
> >
> > Release notes for the 0.11.0.2 release:
> >
> > http://home.apache.org/~rsivaram/kafka-0.11.0.2-rc0/RELEASE_NOTES.html
> >
> >
> > *** Please download, test and vote by Wednesday the 15th of November, 8PM
> > PT
> >
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> >
> > http://kafka.apache.org/KEYS
> >
> >
> > * Release artifacts to be voted upon (source and binary):
> >
> > http://home.apache.org/~rsivaram/kafka-0.11.0.2-rc0/
> >
> >
> > * Maven artifacts to be voted upon:
> >
> > https://repository.apache.org/content/groups/staging/
> >
> >
> > * Javadoc:
> >
> > http://home.apache.org/~rsivaram/kafka-0.11.0.2-rc0/javadoc/
> >
> >
> > * Tag to be voted upon (off 0.11.0 branch) is the 0.11.0.2 tag:
> >
> > https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> > 25639822d6e23803c599cba35ad3dc1a2817b404
> >
> >
> >
> > * Documentation:
> >
> > Note the documentation can't be pushed live due to changes that will
> > not go live
> > until the release. You can manually verify by downloading
> >
> > http://home.apache.org/~rsivaram/kafka-0.11.0.2-rc0/kafka_2.
> > 11-0.11.0.2-site-docs.tgz
> >
> >
> >
> > * Protocol:
> >
> > http://kafka.apache.org/0110/protocol.html
> >
> >
> > * Successful Jenkins builds for the 0.11.0 branch:
> >
> > Unit/integration tests: https://builds.apache.org/job/
> > kafka-0.11.0-jdk7/333/
> >
> >
> >
> >
> > Thanks,
> >
> >
> > Rajini
> >
>


Re: [VOTE] 0.11.0.2 RC0

2017-11-10 Thread Rajini Sivaram
Resending to include kafka-clients.

On Sat, Nov 11, 2017 at 12:37 AM, Rajini Sivaram <rajinisiva...@gmail.com>
wrote:

> Hello Kafka users, developers and client-developers,
>
>
> This is the first candidate for release of Apache Kafka 0.11.0.2.
>
>
> This is a bug fix release and it includes fixes and improvements from 16 
> JIRAs,
> including a few critical bugs.
>
>
> Release notes for the 0.11.0.2 release:
>
> http://home.apache.org/~rsivaram/kafka-0.11.0.2-rc0/RELEASE_NOTES.html
>
>
> *** Please download, test and vote by Wednesday the 15th of November, 8PM
> PT
>
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
>
> http://kafka.apache.org/KEYS
>
>
> * Release artifacts to be voted upon (source and binary):
>
> http://home.apache.org/~rsivaram/kafka-0.11.0.2-rc0/
>
>
> * Maven artifacts to be voted upon:
>
> https://repository.apache.org/content/groups/staging/
>
>
> * Javadoc:
>
> http://home.apache.org/~rsivaram/kafka-0.11.0.2-rc0/javadoc/
>
>
> * Tag to be voted upon (off 0.11.0 branch) is the 0.11.0.2 tag:
>
> https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=
> 25639822d6e23803c599cba35ad3dc1a2817b404
>
>
>
> * Documentation:
>
> Note the documentation can't be pushed live due to changes that will not
> go live until the release. You can manually verify by downloading
>
> http://home.apache.org/~rsivaram/kafka-0.11.0.2-rc0/
> kafka_2.11-0.11.0.2-site-docs.tgz
>
>
>
> * Protocol:
>
> http://kafka.apache.org/0110/protocol.html
>
>
> * Successful Jenkins builds for the 0.11.0 branch:
>
> Unit/integration tests: https://builds.apache.
> org/job/kafka-0.11.0-jdk7/333/
>
>
>
>
> Thanks,
>
>
> Rajini
>
>


[VOTE] 0.11.0.2 RC0

2017-11-10 Thread Rajini Sivaram
Hello Kafka users, developers and client-developers,


This is the first candidate for release of Apache Kafka 0.11.0.2.


This is a bug fix release and it includes fixes and improvements from 16 JIRAs,
including a few critical bugs.


Release notes for the 0.11.0.2 release:

http://home.apache.org/~rsivaram/kafka-0.11.0.2-rc0/RELEASE_NOTES.html


*** Please download, test and vote by Wednesday the 15th of November, 8PM PT


Kafka's KEYS file containing PGP keys we use to sign the release:

http://kafka.apache.org/KEYS


* Release artifacts to be voted upon (source and binary):

http://home.apache.org/~rsivaram/kafka-0.11.0.2-rc0/


* Maven artifacts to be voted upon:

https://repository.apache.org/content/groups/staging/


* Javadoc:

http://home.apache.org/~rsivaram/kafka-0.11.0.2-rc0/javadoc/


* Tag to be voted upon (off 0.11.0 branch) is the 0.11.0.2 tag:

https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=25639822d6e23803c599cba35ad3dc1a2817b404



* Documentation:

Note the documentation can't be pushed live due to changes that will not go
live until the release. You can manually verify by downloading

http://home.apache.org/~rsivaram/kafka-0.11.0.2-rc0/kafka_2.11-0.11.0.2-site-docs.tgz



* Protocol:

http://kafka.apache.org/0110/protocol.html


* Successful Jenkins builds for the 0.11.0 branch:

Unit/integration tests: https://builds.apache.org/job/kafka-0.11.0-jdk7/333/




Thanks,


Rajini


Re: [ANNOUNCE] New committer: Onur Karaman

2017-11-06 Thread Rajini Sivaram
Congratulations, Onur!

On Mon, Nov 6, 2017 at 8:10 PM, Dong Lin  wrote:

> Congratulations Onur!
>
> On Mon, Nov 6, 2017 at 9:24 AM, Jun Rao  wrote:
>
> > Hi, everyone,
> >
> > The PMC of Apache Kafka is pleased to announce a new Kafka committer Onur
> > Karaman.
> >
> > Onur's most significant work is the improvement of Kafka controller,
> which
> > is the brain of a Kafka cluster. Over time, we have accumulated quite a
> few
> > correctness and performance issues in the controller. There have been
> > attempts to fix controller issues in isolation, which would make the code
> > base more complicated without a clear path of solving all problems. Onur
> is
> > the one who took a holistic approach, by first documenting all known
> > issues, writing down a new design, coming up with a plan to deliver the
> > changes in phases and executing on it. At this point, Onur has completed
> > the two most important phases: making the controller single threaded and
> > changing the controller to use the async ZK api. The former fixed
> multiple
> > deadlocks and race conditions. The latter significantly improved the
> > performance when there are many partitions. Experimental results show
> that
> > Onur's work reduced the controlled shutdown time by a factor of 100 times
> > and the controller failover time by a factor of 3 times.
> >
> > Congratulations, Onur!
> >
> > Thanks,
> >
> > Jun (on behalf of the Apache Kafka PMC)
> >
>


Re: Spring release using apache clients 11

2017-07-20 Thread Rajini Sivaram
David,

The release plans are here:
https://github.com/spring-projects/spring-kafka/milestone/20?closed=1

We have already included TX and headers support in the current M3, which is
planned just after the next SF 5.0 RC3, expected tomorrow.

Regards,

Rajini

On Thu, Jul 20, 2017 at 5:01 PM, David Espinosa  wrote:

> Hi, does somebody know if we will have any Spring Integration/Kafka release
> soon using Apache clients 0.11?
>


Re: How to perform keytool operation using Java code

2017-07-13 Thread Rajini Sivaram
Hi Raghav,

You could take a look at
https://github.com/apache/kafka/blob/trunk/clients/src/test/java/org/apache/kafka/test/TestSslUtils.java
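
For your second question, importing a CA certificate into an existing store
needs nothing beyond the standard java.security.KeyStore API. A minimal
sketch, with illustrative file names, alias and password:

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.security.KeyStore;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;

public class AddCaCert {
    public static void main(String[] args) throws Exception {
        char[] storePass = "truststore-password".toCharArray(); // illustrative
        // Load the existing JKS store from disk
        KeyStore store = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream("client.truststore.jks")) {
            store.load(in, storePass);
        }
        // Parse the CA certificate (DER- or PEM-encoded X.509)
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate caCert;
        try (FileInputStream certIn = new FileInputStream("ca-cert")) {
            caCert = (X509Certificate) cf.generateCertificate(certIn);
        }
        // Add it as a trusted certificate entry and write the store back out
        store.setCertificateEntry("ca-root", caCert);
        try (FileOutputStream out = new FileOutputStream("client.truststore.jks")) {
            store.store(out, storePass);
        }
    }
}

For the first question, the JDK has no supported public API for generating a
CSR, so most projects use Bouncy Castle (e.g.
JcaPKCS10CertificationRequestBuilder) for that part.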

Regards,

Rajini

On Wed, Jul 12, 2017 at 10:23 PM, Raghav  wrote:

> Guys, Would anyone know about it ?
>
> On Tue, Jul 11, 2017 at 6:20 AM, Raghav  wrote:
>
>> Hi
>>
>> I followed https://kafka.apache.org/documentation/#security to create
>> keystore and trust store using Java Keytool. Now, I am looking to do the
>> same stuff programmatically using Java. I am struggling to find the right
>> Java classes to perform following operations:
>>
>> 1. How to extract CSR from a keystore using Java classes ?
>>
>> 2. How to add a CA cert to a keystore using Java classes ?
>>
>> I tried following
>> http://docs.oracle.com/javase/6/docs/api/java/security/KeyStore.html#load%28java.io.InputStream,%20char%5B%5D%29
>> but could not get answers.
>>
>> Any help here is greatly appreciated.
>>
>> Thanks.
>>
>> --
>> Raghav
>>
>
>
>
> --
> Raghav
>


Re: Kafka Authorization and ACLs Broken

2017-07-05 Thread Rajini Sivaram
Hi Raghav,

Yes, you should be able to use AdminClient from 0.11.0. Take a look at the
Javadocs (
https://kafka.apache.org/0110/javadoc/org/apache/kafka/clients/admin/package-summary.html).
The integration tests may be useful too (
https://github.com/apache/kafka/blob/trunk/core/src/test/scala/integration/kafka/api/AdminClientIntegrationTest.scala
,
https://github.com/apache/kafka/blob/trunk/core/src/test/scala/integration/kafka/api/SaslSslAdminClientIntegrationTest.scala
).
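
For example, creating a producer ACL programmatically looks roughly like
this against the 0.11.0 API (a sketch only - the principal, topic and
bootstrap address are placeholders taken from earlier in this thread, and
these admin classes were still new in 0.11.0, so double-check against the
Javadocs above):

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.acl.AccessControlEntry;
import org.apache.kafka.common.acl.AclBinding;
import org.apache.kafka.common.acl.AclOperation;
import org.apache.kafka.common.acl.AclPermissionType;
import org.apache.kafka.common.resource.Resource;
import org.apache.kafka.common.resource.ResourceType;

public class CreateAclExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Allow the client principal to write to testtopic from any host
            AclBinding binding = new AclBinding(
                new Resource(ResourceType.TOPIC, "testtopic"),
                new AccessControlEntry("User:CN=KafkaClient,O=Pivotal,C=UK",
                    "*", AclOperation.WRITE, AclPermissionType.ALLOW));
            admin.createAcls(Collections.singleton(binding)).all().get();
        }
    }
}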

Regards,

Rajini

On Wed, Jul 5, 2017 at 4:10 PM, Raghav <raghavas...@gmail.com> wrote:

> Hi Rajini
>
> Now that 0.11.0 is out, can we use the Admin client ? Are there some
> example code for these ?
>
> Thanks.
>
> On Wed, May 24, 2017 at 9:06 PM, Rajini Sivaram <rajinisiva...@gmail.com>
> wrote:
>
>> Hi Raghav,
>>
>> Yes, you can create ACLs programmatically. Take a look at the use of
>> AclCommand.main in
>> https://github.com/apache/kafka/blob/trunk/core/src/test/scala/integration/kafka/api/EndToEndAuthorizationTest.scala
>>
>> If you can wait for the next release 0.11.0 that will be out next month,
>> you can use the new Java AdminClient, which allows you to do this in a much
>> neater way. Take a look at the interface
>> https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/admin/AdminClient.java
>>
>> If your release is not imminent, then you could build Kafka from the
>> 0.11.0 branch and use the new AdminClient. When the release is out, you can
>> switch over to the binary release.
>>
>> Regards,
>>
>> Rajini
>>
>>
>>
>> On Wed, May 24, 2017 at 4:13 PM, Raghav <raghavas...@gmail.com> wrote:
>>
>>> Hi Rajini
>>>
>>> Quick question on Configuring ACLs: We used bin/kafka-acls.sh to
>>> configure ACL rules, which internally uses Kafka Admin APIs to configure
>>> the ACLs.
>>>
>>> Can I add, remove and list ACLs via zk client libraries ? I want to be
>>> able to add, remove, list ACLs via my code rather than using kafka-acls.sh.
>>> Is there a guideline for recommended set of libraries to use to do such
>>> operations ?
>>>
>>> As always thanks so much.
>>>
>>>
>>>
>>> On Wed, May 24, 2017 at 7:04 AM, Rajini Sivaram <rajinisiva...@gmail.com
>>> > wrote:
>>>
>>>> Raghav/Darshan,
>>>>
>>>> Can you try these steps on a clean installation of Kafka? It works for
>>>> me, so hopefully it will work for you. And then you can adapt to your
>>>> scenario.
>>>>
>>>> *Create keystores and truststores:*
>>>>
>>>> keytool -genkey -alias kafka -keystore server.keystore.jks -dname
>>>> "CN=KafkaBroker,O=Pivotal,C=UK" -storepass server-keystore-password
>>>> -keypass server-key-password
>>>>
>>>> keytool -exportcert -file server-cert-file -keystore
>>>> server.keystore.jks -alias kafka -storepass server-keystore-password
>>>>
>>>> keytool -importcert -file server-cert-file -keystore
>>>> server.truststore.jks -alias kafka -storepass server-truststore-password
>>>> -noprompt
>>>>
>>>> keytool -importcert -file server-cert-file -keystore
>>>> client.truststore.jks -alias kafkaclient -storepass
>>>> client-truststore-password -noprompt
>>>>
>>>>
>>>> keytool -genkey -alias kafkaclient -keystore client.keystore.jks -dname
>>>> "CN=KafkaClient,O=Pivotal,C=UK" -storepass client-keystore-password
>>>> -keypass client-key-password
>>>>
>>>> keytool -exportcert -file client-cert-file -keystore
>>>> client.keystore.jks -alias kafkaclient -storepass client-keystore-password
>>>>
>>>> keytool -importcert -file client-cert-file -keystore
>>>> server.truststore.jks -alias kafkaclient -storepass
>>>> server-truststore-password -noprompt
>>>>
>>>> *Configure broker: Add these lines at the end of your server.properties*
>>>>
>>>> listeners=SSL://:9093
>>>>
>>>> advertised.listeners=SSL://127.0.0.1:9093
>>>>
>>>> ssl.keystore.location=/tmp/acl/server.keystore.jks
>>>>
>>>> ssl.keystore.password=server-keystore-password
>>>>
>>>> ssl.key.password=server-key-p

Re: advertised.listeners

2017-05-31 Thread Rajini Sivaram
If you want to use different interfaces with the same security protocol,
you can specify listener names. You can then also configure different
security properties for internal/external if you need.

listeners=INTERNAL://1.x.x.x:9092,EXTERNAL://172.x.x.x:9093

advertised.listeners=INTERNAL://1.x.x.x:9092,EXTERNAL://172.x.x.x:9093

listener.security.protocol.map=INTERNAL:SSL,EXTERNAL:SSL

inter.broker.listener.name=INTERNAL
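
Clients then just bootstrap against whichever listener is reachable from
their network, e.g. (a sketch using the addresses above):

# internal clients
bootstrap.servers=1.x.x.x:9092

# external clients
bootstrap.servers=172.x.x.x:9093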

On Wed, May 31, 2017 at 6:22 PM, Raghav  wrote:

> Hello Darshan
>
> Have you tried SSL://0.0.0.0:9093 ?
>
> Rajini had suggested something similar to me a week back while I was
> trying to get a ACL based setup.
>
> Thanks.
>
> On Wed, May 31, 2017 at 8:58 AM, Darshan 
> wrote:
>
>> Hi
>>
>> Our Kafka broker has two IPs on two different interfaces.
>>
>> eth0 has 172.x.x.x for external leg
>> eth1 has 1.x.x.x for internal leg
>>
>>
>> Kafka Producer is on 172.x.x.x subnet, and Kafka Consumer is on 1.x.x.x
>> subnet.
>>
>> If we use advertised.listeners=SSL://172.x.x.x:9093, then Producer can
>> producer the message, but Consumer cannot receive the message.
>>
>> What value should we use for advertised.listeners so that Producer can
>> write and Consumers can read ?
>>
>> Thanks.
>>
>
>
>
> --
> Raghav
>


Re: Kafka Authorization and ACLs Broken

2017-05-24 Thread Rajini Sivaram
Raghav/Darshan,

Can you try these steps on a clean installation of Kafka? It works for me,
so hopefully it will work for you. And then you can adapt to your scenario.

*Create keystores and truststores:*

keytool -genkey -alias kafka -keystore server.keystore.jks -dname
"CN=KafkaBroker,O=Pivotal,C=UK" -storepass server-keystore-password
-keypass server-key-password

keytool -exportcert -file server-cert-file -keystore server.keystore.jks
-alias kafka -storepass server-keystore-password

keytool -importcert -file server-cert-file -keystore server.truststore.jks
-alias kafka -storepass server-truststore-password -noprompt

keytool -importcert -file server-cert-file -keystore client.truststore.jks
-alias kafkaclient -storepass client-truststore-password -noprompt


keytool -genkey -alias kafkaclient -keystore client.keystore.jks -dname
"CN=KafkaClient,O=Pivotal,C=UK" -storepass client-keystore-password
-keypass client-key-password

keytool -exportcert -file client-cert-file -keystore client.keystore.jks
-alias kafkaclient -storepass client-keystore-password

keytool -importcert -file client-cert-file -keystore server.truststore.jks
-alias kafkaclient -storepass server-truststore-password -noprompt

*Configure broker: Add these lines at the end of your server.properties*

listeners=SSL://:9093

advertised.listeners=SSL://127.0.0.1:9093

ssl.keystore.location=/tmp/acl/server.keystore.jks

ssl.keystore.password=server-keystore-password

ssl.key.password=server-key-password

ssl.truststore.location=/tmp/acl/server.truststore.jks

ssl.truststore.password=server-truststore-password

security.inter.broker.protocol=SSL

security.protocol=SSL

ssl.client.auth=required

allow.everyone.if.no.acl.found=false

authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer

super.users=User:CN=KafkaBroker,O=Pivotal,C=UK

*Configure producer: producer.properties*

security.protocol=SSL

ssl.truststore.location=/tmp/acl/client.truststore.jks

ssl.truststore.password=client-truststore-password

ssl.keystore.location=/tmp/acl/client.keystore.jks

ssl.keystore.password=client-keystore-password

ssl.key.password=client-key-password


*Configure consumer: consumer.properties*

security.protocol=SSL

ssl.truststore.location=/tmp/acl/client.truststore.jks

ssl.truststore.password=client-truststore-password

ssl.keystore.location=/tmp/acl/client.keystore.jks

ssl.keystore.password=client-keystore-password

ssl.key.password=client-key-password

group.id=testgroup

*Create topic:*

bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic testtopic
--replication-factor 1 --partitions 1


*Configure ACLs:*

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181
--add --allow-principal "User:CN=KafkaClient,O=Pivotal,C=UK" --producer
--topic testtopic

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181
--add --allow-principal "User:CN=KafkaClient,O=Pivotal,C=UK" --consumer
--topic testtopic --group testgroup


*Run console producer and type in some messages:*

bin/kafka-console-producer.sh  --producer.config
/tmp/acl/producer.properties --topic testtopic --broker-list 127.0.0.1:9093


*Run console consumer, you should see messages from above:*

bin/kafka-console-consumer.sh  --consumer.config
/tmp/acl/consumer.properties --topic testtopic --bootstrap-server
127.0.0.1:9093 --from-beginning
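
The same client settings work from Java as well. A minimal producer sketch
using the properties above (topic, paths and passwords are the ones from
this example setup):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SslAclProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "127.0.0.1:9093");
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", "/tmp/acl/client.truststore.jks");
        props.put("ssl.truststore.password", "client-truststore-password");
        props.put("ssl.keystore.location", "/tmp/acl/client.keystore.jks");
        props.put("ssl.keystore.password", "client-keystore-password");
        props.put("ssl.key.password", "client-key-password");
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send() is asynchronous; get() blocks until the broker acks
            producer.send(new ProducerRecord<>("testtopic", "key", "value")).get();
        }
    }
}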



On Tue, May 23, 2017 at 12:57 PM, Raghav  wrote:

> Darshan,
>
> I have not yet successfully gotten the ACLs to work in Kafka. I am still
> looking for help. I will update this email thread if I do find. In case you
> get it working, please let me know.
>
> Thanks.
>
> R
>
> On Tue, May 23, 2017 at 8:49 AM, Darshan Purandare <
> purandare.dars...@gmail.com> wrote:
>
> > Raghav
> >
> > I saw few posts of yours around Kafka ACLs and the problems. I have seen
> > similar issues where Writer has not been able to write to any topic. I
> have
> > seen "leader not available" and sometimes "unknown topic or partition",
> and
> > "topic_authorization_failed" error.
> >
> > Let me know if you find a valid config that works.
> >
> > Thanks.
> >
> >
> >
> > On Tue, May 23, 2017 at 8:44 AM, Raghav  wrote:
> >
> >> Hello Kafka Users
> >>
> >> I am a new Kafka user and trying to make Kafka SSL work with
> Authorization
> >> and ACLs. I followed posts from Kafka and Confluent docs exactly to the
> >> point but my producer cannot write to kafka broker. I get
> >> "LEADER_NOT_FOUND" errors. And even Consumer throws the same errors.
> >>
> >> Can someone please share their config which worked with ACLs.
> >>
> >> Here is my config. Please help.
> >>
> >> server.properties config
> >> 
> >> 
> >> broker.id=0
> >> auto.create.topics.enable=true
> >> delete.topic.enable=true
> >>
> >> listeners=PLAINTEXT://kafka1.example.com:9092
> >> 

Re: ACL with SSL is not working

2017-05-22 Thread Rajini Sivaram
If you are using auto-create of topics, you also need to grant Create
access on the cluster resource (kafka-cluster).
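
For example (a sketch - adjust the principal and host to your setup):

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 \
  --add --allow-principal "User:CN=kafka2" --allow-host 10.10.0.23 \
  --operation Create --cluster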

On Mon, May 22, 2017 at 9:51 AM, Raghav <raghavas...@gmail.com> wrote:

> Hi Rajini
>
> I tried again with IP addresses this time, and I get the following error
> log for the given ACLs. Is there something wrong in the way I am giving the
> user name ?
>
> *List of ACL*
>
> [root@kafka-dev1 KAFKA]# bin/kafka-acls --authorizer-properties
> zookeeper.connect=localhost:2181 --add --allow-principal User:CN=kafka2
> --allow-host 10.10.0.23 --operation Read --operation Write --topic
> kafka-testtopic
> Adding ACLs for resource `Topic:kafka-testtopic`:
> User:CN=kafka2 has Allow permission for operations: Read from
> hosts: 10.10.0.23
> User:CN=kafka2 has Allow permission for operations: Write from
> hosts: 10.10.0.23
> [root@kafka-dev1 KAFKA]#
>
> *Authorizer LOGS*
>
> [2017-05-22 06:45:44,520] DEBUG No acl found for resource
> Cluster:kafka-cluster, authorized = false (kafka.authorizer.logger)
> [2017-05-22 06:45:44,520] DEBUG Principal = User:CN=kafka2 is Denied
> Operation = Create from host = 10.10.0.23 on resource =
> Cluster:kafka-cluster (kafka.authorizer.logger)
>
> On Mon, May 22, 2017 at 6:34 AM, Rajini Sivaram <rajinisiva...@gmail.com>
> wrote:
>
> > Raghav,
> >
> > I don't believe we do reverse DNS lookup for matching ACL hosts. Have you
> > tried defining ACLs with host IP address?
> >
> > On Mon, May 22, 2017 at 9:19 AM, Raghav <raghavas...@gmail.com> wrote:
> >
> > > Hi
> > >
> > > I enabled the DEBUG logs on Kafka authorizer, and I see the following
> > logs
> > > for the given ACLs. Am I missing something in my config here ? Any help
> > is
> > > greatly appreciated. Thanks.
> > >
> > >
> > > *List of ACL*
> > >
> > > [root@kafka1 KAFKA]# bin/kafka-acls.sh --authorizer-properties
> > > zookeeper.connect=localhost:2181 --list --topic kafka-testtopic
> > > Current ACLs for resource `Topic:kafka-testtopic`:
> > > User:* has Allow permission for operations: Read from hosts:
> bin
> > > User:CN=kafka2 has Allow permission for operations: Write from
> > > hosts: kafka2.example.com
> > > User:CN=kafka2 has Allow permission for operations: Read from
> > > hosts: kafka2.example.com
> > > [root@kafka1 KAFKA]#
> > >
> > >
> > > *Authorizer LOGS*
> > >
> > > [2017-05-22 06:10:16,635] DEBUG Principal = User:CN=kafka2 is Denied
> > > Operation = Describe from host = 10.10.0.23 on resource =
> > > Topic:kafka-testtopic (kafka.authorizer.logger)
> > > [2017-05-22 06:10:16,736] DEBUG Principal = User:CN=kafka2 is Denied
> > > Operation = Describe from host = 10.10.0.23 on resource =
> > > Topic:kafka-testtopic (kafka.authorizer.logger)
> > > [2017-05-22 06:10:16,839] DEBUG Principal = User:CN=kafka2 is Denied
> > > Operation = Describe from host = 10.10.0.23 on resource =
> > > Topic:kafka-testtopic (kafka.authorizer.logger)
> > > [2017-05-22 06:10:16,942] DEBUG Principal = User:CN=kafka2 is Denied
> > > Operation = Describe from host = 10.10.0.23 on resource =
> > > Topic:kafka-testtopic (kafka.authorizer.logger)
> > >
> > >
> > > Thanks.
> > >
> > >
> > > On Sun, May 21, 2017 at 10:52 PM, Raghav <raghavas...@gmail.com>
> wrote:
> > >
> > > > I tried all possible ways (including the way you suggested Michael),
> > but
> > > I
> > > > still get the same error.
> > > >
> > > > Is there a step by step guide to get ACLs working in Kafka with SSL ?
> > > >
> > > > Thanks.
> > > >
> > > > On Fri, May 19, 2017 at 11:40 AM, Michael Rauter <
> > mrau...@anexia-it.com>
> > > > wrote:
> > > >
> > > >> Hi,
> > > >>
> > > >> with SSL client authentication the user identifier is the dname of
> the
> > > >> certificate
> > > >>
> > > >> in your case “CN=Bob,O=FB,OU=MA,L=MP,ST=CA,C=US”
> > > >>
> > > >> for example when you want to set an ACL rule (read and write for
> topic
> > > >> TOPICNAME from every host):
> > > >>
> > > >> $ kafka-acls --authorizer-properties zookeeper.connect=zookeeper:2181
> > > >> --add --allow-principal User:CN=Bob,O=FB,OU=MA,L=MP,ST=CA,C=US
> > > >> --allow-host "*" --oper

Re: ACL with SSL is not working

2017-05-22 Thread Rajini Sivaram
Raghav,

I don't believe we do reverse DNS lookup for matching ACL hosts. Have you
tried defining ACLs with host IP address?

On Mon, May 22, 2017 at 9:19 AM, Raghav  wrote:

> Hi
>
> I enabled the DEBUG logs on Kafka authorizer, and I see the following logs
> for the given ACLs. Am I missing something in my config here ? Any help is
> greatly appreciated. Thanks.
>
>
> *List of ACL*
>
> [root@kafka1 KAFKA]# bin/kafka-acls.sh --authorizer-properties
> zookeeper.connect=localhost:2181 --list --topic kafka-testtopic
> Current ACLs for resource `Topic:kafka-testtopic`:
> User:* has Allow permission for operations: Read from hosts: bin
> User:CN=kafka2 has Allow permission for operations: Write from
> hosts: kafka2.example.com
> User:CN=kafka2 has Allow permission for operations: Read from
> hosts: kafka2.example.com
> [root@kafka1 KAFKA]#
>
>
> *Authorizer LOGS*
>
> [2017-05-22 06:10:16,635] DEBUG Principal = User:CN=kafka2 is Denied
> Operation = Describe from host = 10.10.0.23 on resource =
> Topic:kafka-testtopic (kafka.authorizer.logger)
> [2017-05-22 06:10:16,736] DEBUG Principal = User:CN=kafka2 is Denied
> Operation = Describe from host = 10.10.0.23 on resource =
> Topic:kafka-testtopic (kafka.authorizer.logger)
> [2017-05-22 06:10:16,839] DEBUG Principal = User:CN=kafka2 is Denied
> Operation = Describe from host = 10.10.0.23 on resource =
> Topic:kafka-testtopic (kafka.authorizer.logger)
> [2017-05-22 06:10:16,942] DEBUG Principal = User:CN=kafka2 is Denied
> Operation = Describe from host = 10.10.0.23 on resource =
> Topic:kafka-testtopic (kafka.authorizer.logger)
>
>
> Thanks.
>
>
> On Sun, May 21, 2017 at 10:52 PM, Raghav  wrote:
>
> > I tried all possible ways (including the way you suggested Michael), but
> I
> > still get the same error.
> >
> > Is there a step by step guide to get ACLs working in Kafka with SSL ?
> >
> > Thanks.
> >
> > On Fri, May 19, 2017 at 11:40 AM, Michael Rauter 
> > wrote:
> >
> >> Hi,
> >>
> >> with SSL client authentication the user identifier is the dname of the
> >> certificate
> >>
> >> in your case “CN=Bob,O=FB,OU=MA,L=MP,ST=CA,C=US”
> >>
> >> for example when you want to set an ACL rule (read and write for topic
> >> TOPICNAME from every host):
> >>
> >> $ kafka-acls --authorizer-properties zookeeper.connect=zookeeper:2181
> >> --add --allow-principal User:CN=Bob,O=FB,OU=MA,L=MP,ST=CA,C=US
> >> --allow-host "*" --operation Read --operation Write --topic TOPICNAME
> >>
> >>
> >> Am 19.05.17, 20:02 schrieb "Raghav" :
> >>
> If it helps, this is how I generated the keystore for my client
> >>
> >> $ keytool -alias kafka-dev2 -validity 365 -keystore
> >> kafka-dev2.keystore.jks
> >> -dname "CN=Bob,O=FB,OU=MA,L=MP,ST=CA,C=US" -genkey -ext SAN=DNS:
> >> kafka-dev2.example.com -storepass 123456
> >>
> >> Anything wrong here ?
> >>
> >> On Fri, May 19, 2017 at 10:32 AM, Raghav 
> >> wrote:
> >>
> >> > Hi
> >> >
> >> > I have a SSL setup with Kafka Broker, Producer and Consumer, and
> it
> >> works
> >> > fine. I tried to setup ACLs as given on the website. When I start
> my
> >> > producer, I am getting this error:
> >> >
> >> >
> >> > [root@kafka-dev2 KAFKA]# bin/kafka-console-producer --broker-list
> >> > kafka-dev1.example.com:9093 --topic test --producer.config
> >> > ./etc/kafka/producer.properties
> >> >
> >> > HelloWorld
> >> >
> >> > [2017-05-19 10:24:42,437] WARN Error while fetching metadata with
> >> > correlation id 1 : {test=UNKNOWN_TOPIC_OR_PARTITION}
> >> > (org.apache.kafka.clients.NetworkClient)
> >> > [root@kafka-dev2 KAFKA]#
> >> >
> >> >
> >> > server config has the following entries
> >> > 
> >> > authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> >> > super.users=User:Bob
> >> > 
> >> >
> >> > When the certificate was being generated for the Producer (Bob was used
> >> > in the CN.)
> >> >
> >> >
> >> > Am I missing something here ? Please help
> >> >
> >> > Thanks.
> >> >
> >> > Raghav
> >> >
> >>
> >>
> >>
> >> --
> >> Raghav
> >>
> >>
> >>
> >
> >
> > --
> > Raghav
> >
>
>
>
> --
> Raghav
>


Re: Securing Kafka - Keystore and Truststore question

2017-05-22 Thread Rajini Sivaram
Raghav,

*My guess about the problem is that I was generating a csr (certificate
signing request), which is different from actually extracting a certificate.
Please correct me if I am wrong.*

Yes, that is correct. Use "keytool -exportcert" to extract the certificate.


*To actually address our problem of minimizing key exchanges between our
Kafka Clients (customers) and us (Kafka Brokers), I experimented that if we
generate a keystore and truststore for them, and then ask them to use it
in their client, it works fine. It reduces the number of round trips. Let
me know if something like this is ok or can there be a security breach ?*

The issue with this approach is that you also have access to the customer's
private key. And you need a secure way of transferring this key to the
customer. The standard way of customer generating the key-pair and giving
you only the public certificate avoids these issues.


On Fri, May 19, 2017 at 1:19 PM, Raghav <raghavas...@gmail.com> wrote:

> Rajini
>
> I was generating a certificate (using keytool by first doing -genkey and
> creating a keystore, and then subsequently extracting the certificate using
> -certreq) for the Kafka client (Producer). Once this certificate was available,
> I was trying to add this certificate to the Kafka Broker's trust store. While
> doing this, keytool would not allow adding this certificate to the trust store
> of the Kafka broker.
>
> My guess about the problem is that I was generating a csr (certificate
> signing request), which is different from actually extracting a certificate.
> Please correct me if I am wrong.
>
> To actually address our problem of minimizing key exchanges between our
> Kafka Clients (customers) and us (Kafka Brokers), I experimented that if we
> generate a keystore and truststore for them, and then ask them to use it
> in their client, it works fine. It reduces the number of round trips. Let
> me know if something like this is ok or can there be a security breach ?
>
> Thanks.
>
> Raghav
>
>
>
> On Thu, May 18, 2017 at 10:26 AM, Rajini Sivaram <rajinisiva...@gmail.com>
> wrote:
>
>> Raghav,
>>
>> If you send me the full command sequence, I can take a look. Also, which
>> JRE are you using?
>>
>> Regards,
>>
>> Rajini
>>
>> On Thu, May 18, 2017 at 12:19 PM, Raghav <raghavas...@gmail.com> wrote:
>>
>>> Rajini
>>>
>>> I just tried this. It turns out that I can't import cert-file by itself
>>> in trust store until it is signed by a CA. Could be because of the format ?
>>> Any idea here ...
>>>
>>> In the above steps, if I sign the server-cert-file and client-cert-file
>>> by a private CA then I can add them to trust store and key store. In this
>>> test, I did not add the CA cert in either keystore or trust store.
>>>
>>> Thanks for all your help.
>>>
>>>
>>>
>>>
>>> On Thu, May 18, 2017 at 8:26 AM, Rajini Sivaram <rajinisiva...@gmail.com
>>> > wrote:
>>>
>>>> Raghav,
>>>>
>>>> Perhaps what you want to do is:
>>>>
>>>> *You do (for the brokers):*
>>>>
>>>> Generate key-pair for broker:
>>>>
>>>> keytool -keystore kafka.server.keystore.jks -alias localhost -validity
>>>> 365 -genkey
>>>>
>>>> Export certificate to a file to send to your customers:
>>>>
>>>> keytool -exportcert -file server-cert-file -keystore
>>>> kafka.server.keystore.jks -alias localhost
>>>>
>>>>
>>>> And you send server-cert-file to your customers.
>>>>
>>>> Once you get your customer's client-cert-file, you do:
>>>>
>>>> keytool -importcert -file client-cert-file -keystore
>>>> kafka.server.truststore.jks -alias customerA
>>>>
>>>> If you are using SSL for inter-broker communication, your broker
>>>> certificate also needs to be in the server truststore:
>>>>
>>>> keytool -importcert -file server-cert-file -keystore
>>>> kafka.server.truststore.jks -alias broker
>>>>
>>>>
>>>> *Your customers do (for the clients):*
>>>>
>>>> Generate key-pair for client:
>>>>
>>>> keytool -keystore kafka.client.keystore.jks -alias localhost -validity
>>>> 365 -genkey
>>>>
>>>> Export certificate to a file to send to you:
>>>>
>>>> keytool -exportcert -file client-cert-file -keystore
>>>> kafka.client.keystore.jks -alias localhost
>>>>
>&g

Re: Securing Kafka - Keystore and Truststore question

2017-05-18 Thread Rajini Sivaram
Raghav,

If you send me the full command sequence, I can take a look. Also, which
JRE are you using?

Regards,

Rajini

On Thu, May 18, 2017 at 12:19 PM, Raghav <raghavas...@gmail.com> wrote:

> Rajini
>
> I just tried this. It turns out that I can't import cert-file by itself in
> trust store until it is signed by a CA. Could be because of the format ?
> Any idea here ...
>
> In the above steps, if I sign the server-cert-file and client-cert-file by
> a private CA then I can add them to trust store and key store. In this
> test, I did not add the CA cert in either keystore or trust store.
>
> Thanks for all your help.
>
>
>
>
> On Thu, May 18, 2017 at 8:26 AM, Rajini Sivaram <rajinisiva...@gmail.com>
> wrote:
>
>> Raghav,
>>
>> Perhaps what you want to do is:
>>
>> *You do (for the brokers):*
>>
>> Generate key-pair for broker:
>>
>> keytool -keystore kafka.server.keystore.jks -alias localhost -validity
>> 365 -genkey
>>
>> Export certificate to a file to send to your customers:
>>
>> keytool -exportcert -file server-cert-file -keystore
>> kafka.server.keystore.jks -alias localhost
>>
>>
>> And you send server-cert-file to your customers.
>>
>> Once you get your customer's client-cert-file, you do:
>>
>> keytool -importcert -file client-cert-file -keystore
>> kafka.server.truststore.jks -alias customerA
>>
>> If you are using SSL for inter-broker communication, your broker
>> certificate also needs to be in the server truststore:
>>
>> keytool -importcert -file server-cert-file -keystore
>> kafka.server.truststore.jks -alias broker
>>
>>
>> *Your customers do (for the clients):*
>>
>> Generate key-pair for client:
>>
>> keytool -keystore kafka.client.keystore.jks -alias localhost -validity
>> 365 -genkey
>>
>> Export certificate to a file to send to to you:
>>
>> keytool -exportcert -file client-cert-file -keystore
>> kafka.client.keystore.jks -alias localhost
>>
>>
>> Your customers send you their client-cert-file
>>
>> Your customers create their truststore using the broker certificate
>> server-cert-file that you send to them:
>>
>> keytool -importcert -file server-cert-file -keystore
>> kafka.client.truststore.jks -alias broker
>>
>>
>>
>> You then configure your brokers with (kafka.server.keystore.jks,
>> kafka.server.truststore.jks). Your customers configure their clients with
>> (kafka.client.keystore.jks, kafka.client.truststore.jks).
>>
>>
>> Hope that helps.
>>
>> Regards,
>>
>> Rajini
>>
>>
>>
>> On Thu, May 18, 2017 at 10:33 AM, Raghav <raghavas...@gmail.com> wrote:
>>
>>> Rajini,
>>>
>>> Sure, will submit a PR shortly.
>>>
>>> Your answer is very helpful, but I think I did not put the question
>>> correctly. Pardon my ignorance but I am still trying to find my way around
>>> Kafka security.
>>>
>>> I was trying to understand, can we (Kafka Broker) just add the
>>> certificate (unsigned or signed) from customer to our trust store without
>>> adding the CA cert to trust store... could that work ?
>>>
>>> 1. Let's say Kafka broker (there is only 1 for simplicity) generates a
>>> keystore and generates a key using the command below
>>>
>>> keytool -keystore kafka.server.keystore.jks -alias localhost -validity 
>>> 365 -genkey
>>>
>>> keytool -keystore kafka.server.keystore.jks -alias localhost -certreq -file 
>>> server-cert-file
>>>
>>> 2. Similarly, Kafka Client (Producer) does the same
>>>
>>> keytool -keystore kafka.client.keystore.jks -alias localhost -validity 
>>> 365 -genkey
>>>
>>> keytool -keystore kafka.client.keystore.jks -alias localhost -certreq -file 
>>> client-cert-file
>>>
>>>
>>> 3. Now, we add *client-cert-file* into the trust store of server, and
>>> *server-cert-file* into the trust store of client. Given that each
>>> trust store has the other party's certificate in their trust store, does the CA
>>> certificate come into the picture ?
>>>
>>> On Thu, May 18, 2017 at 6:26 AM, Rajini Sivaram <rajinisiva...@gmail.com
>>> > wrote:
>>>
>>>> Raghav,
>>>>
>>>> Yes, you can create a truststore with your customers' certificates and
>>>> vice-versa. It will be best to give your CA certificate to your customers
&g

Re: Securing Kafka - Keystore and Truststore question

2017-05-18 Thread Rajini Sivaram
Raghav,

Perhaps what you want to do is:

*You do (for the brokers):*

Generate key-pair for broker:

keytool -keystore kafka.server.keystore.jks -alias localhost -validity 365
-genkey

Export certificate to a file to send to your customers:

keytool -exportcert -file server-cert-file -keystore
kafka.server.keystore.jks -alias localhost


And you send server-cert-file to your customers.

Once you get your customer's client-cert-file, you do:

keytool -importcert -file client-cert-file -keystore
kafka.server.truststore.jks -alias customerA

If you are using SSL for inter-broker communication, your broker
certificate also needs to be in the server truststore:

keytool -importcert -file server-cert-file -keystore
kafka.server.truststore.jks -alias broker


*Your customers do (for the clients):*

Generate key-pair for client:

keytool -keystore kafka.client.keystore.jks -alias localhost -validity 365
-genkey

Export certificate to a file to send to you:

keytool -exportcert -file client-cert-file -keystore
kafka.client.keystore.jks -alias localhost


Your customers send you their client-cert-file

Your customers create their truststore using the broker certificate
server-cert-file that you send to them:

keytool -importcert -file server-cert-file -keystore
kafka.client.truststore.jks -alias broker



You then configure your brokers with (kafka.server.keystore.jks,
kafka.server.truststore.jks). Your customers configure their clients with
(kafka.client.keystore.jks, kafka.client.truststore.jks).


Hope that helps.

Regards,

Rajini



On Thu, May 18, 2017 at 10:33 AM, Raghav <raghavas...@gmail.com> wrote:

> Rajini,
>
> Sure, will submit a PR shortly.
>
> Your answer is very helpful, but I think I did not put the question
> correctly. Pardon my ignorance but I am still trying to find my way around
> Kafka security.
>
> I was trying to understand, can we (Kafka Broker) just add the certificate
> (unsigned or signed) from customer to our trust store without adding the CA
> cert to trust store... could that work ?
>
> 1. Let's say Kafka broker (there is only 1 for simplicity) generates a
> keystore and generates a key using the command below
>
> keytool -keystore kafka.server.keystore.jks -alias localhost -validity 365
> -genkey
>
> keytool -keystore kafka.server.keystore.jks -alias localhost -certreq -file 
> server-cert-file
>
> 2. Similarly, Kafka Client (Producer) does the same
>
> keytool -keystore kafka.client.keystore.jks -alias localhost -validity 365
> -genkey
>
> keytool -keystore kafka.client.keystore.jks -alias localhost -certreq -file 
> client-cert-file
>
>
> 3. Now, we add *client-cert-file* into the trust store of server, and
> *server-cert-file* into the trust store of client. Given that each trust
> store has the other party's certificate in their trust store, does the CA
> certificate come into the picture ?
>
> On Thu, May 18, 2017 at 6:26 AM, Rajini Sivaram <rajinisiva...@gmail.com>
> wrote:
>
>> Raghav,
>>
>> Yes, you can create a truststore with your customers' certificates and
>> vice-versa. It will be best to give your CA certificate to your customers
>> and get the CA certificate from each of your customers and add them to your
>> broker's truststore. You can both then create additional certificates if
>> you need without any changes to your truststore as long as the CA
>> certificates are valid. Unlike certificates signed by a trusted authority,
>> you will need to add the CAs of every customer to your truststore. Kafka
>> brokers don't reload certificates, so if you wanted to add another
>> customer's certificate to your truststore, you will need to restart your
>> broker.
>>
>> Would you like to submit a PR with the information that is missing in the
>> Apache Kafka documentation that you think may be useful?
>>
>> Regards,
>>
>> Rajini
>>
>> On Wed, May 17, 2017 at 6:21 PM, Raghav <raghavas...@gmail.com> wrote:
>>
>>> Another quick question:
>>>
>>> Say we chose to add our customer's certificates directly to our brokers
>>> trust store and vice verse, could that work ? There is no documentation on
>>> Kafka or Confluent site for this ?
>>>
>>> Thanks.
>>>
>>>
>>> On Wed, May 17, 2017 at 1:56 PM, Rajini Sivaram <rajinisiva...@gmail.com
>>> > wrote:
>>>
>>>> Raghav,
>>>>
>>>> 1. Yes, your customers can use certificates signed by a trusted
>>>> authority. You can simply omit the truststore configuration for your broker
>>>> in server.properties, and Kafka would use the default, which will trust the
>>>> client certificates. If your brokers are

Re: Securing Kafka - Keystore and Truststore question

2017-05-18 Thread Rajini Sivaram
Raghav,

Yes, you can create a truststore with your customers' certificates and
vice-versa. It will be best to give your CA certificate to your customers
and get the CA certificate from each of your customers and add them to your
broker's truststore. You can both then create additional certificates if
you need without any changes to your truststore as long as the CA
certificates are valid. Unlike certificates signed by a trusted authority,
you will need to add the CAs of every customer to your truststore. Kafka
brokers don't reload certificates, so if you wanted to add another
customer's certificate to your truststore, you will need to restart your
broker.

Would you like to submit a PR with the information that is missing in the
Apache Kafka documentation that you think may be useful?

Regards,

Rajini

On Wed, May 17, 2017 at 6:21 PM, Raghav <raghavas...@gmail.com> wrote:

> Another quick question:
>
> Say we chose to add our customers' certificates directly to our brokers'
> trust store and vice versa, could that work ? There is no documentation on
> Kafka or Confluent site for this ?
>
> Thanks.
>
>
> On Wed, May 17, 2017 at 1:56 PM, Rajini Sivaram <rajinisiva...@gmail.com>
> wrote:
>
>> Raghav,
>>
>> 1. Yes, your customers can use certificates signed by a trusted
>> authority. You can simply omit the truststore configuration for your broker
>> in server.properties, and Kafka would use the default, which will trust the
>> client certificates. If your brokers are using SSL for inter-broker
>> communication and you are still using your private CA for broker's
>> keystore, then you will need two separate endpoints in your listener
>> configuration, one for your customer's clients and another for inter-broker
>> communication so that you can specify a truststore with your private
>> ca-cert for your broker connections.
>>
>> 2. Yes, all the commands can specify password on the command line, so you
>> should be able to generate all the stores using a script without any
>> interactions.
>>
>> Regards,
>>
>> Rajini
>>
>>
>> On Wed, May 17, 2017 at 2:49 PM, Raghav <raghavas...@gmail.com> wrote:
>>
>>> One follow up questions Rajini:
>>>
>>> 1. Can we use some other mechanism, like having our customers use a well
>>> known CA which JKS understands, and in that case we don't have to ask our
>>> customers to do this certificate-in and certificate-out thing ? I am just
>>> trying to understand if we can make our customer's workflow easier.
>>> Anything else that you can suggest here
>>>
>>> 2. Can we automate the key gen steps mentioned on the apache website, and
>>> the adding to keystore and truststore, so that we don't have to manually supply
>>> a password ? Currently, every time I try the steps mentioned in
>>> https://kafka.apache.org/documentation/#security I have to manually
>>> give password. It would be great if we can automate this process either
>>> through script or Java code. Any suggestions ...
>>>
>>>
>>> Many thanks.
>>>
>>> On Tue, May 16, 2017 at 10:58 AM, Raghav <raghavas...@gmail.com> wrote:
>>>
>>>> Many thanks, Rajini.
>>>>
>>>> On Tue, May 16, 2017 at 8:43 AM, Rajini Sivaram <
>>>> rajinisiva...@gmail.com> wrote:
>>>>
>>>>> Hi Raghav,
>>>>>
>>>>> If your Kafka broker is configured with *ssl.client.auth=required,* your
>>>>> customer's clients need to provide a keystore. In any case, they need a
>>>>> truststore since your broker is using SSL. For the truststore, you can
>>>>> give them ca-cert, as you mentioned. The client keystore contains a
>>>>> certificate and a private key.
>>>>>
>>>>> In the round-trip you described, customers generate the keys and give
>>>>> you the certificate signing request, keeping their private key private. 
>>>>> You
>>>>> then send them back a signed certificate that goes into their keystore.
>>>>> This is the standard way of signing and is secure.
>>>>>
>>>>> In the single step scenario that you described, you generate the
>>>>> customer's key-pair consisting of certificate and private key. You then
>>>>> need to send them both the signed certificate and the private key. This is
>>>>> less secure. Unlike the round-trip, you now have the private key of the
>>>>> customer.
>>>>>
>>>>> Regards,
>>>>>
>&g

Re: Securing Kafka - Keystore and Truststore question

2017-05-17 Thread Rajini Sivaram
Raghav,

1. Yes, your customers can use certificates signed by a trusted authority.
You can simply omit the truststore configuration for your broker in
server.properties, and Kafka would use the default, which will trust the
client certificates. If your brokers are using SSL for inter-broker
communication and you are still using your private CA for broker's
keystore, then you will need two separate endpoints in your listener
configuration, one for your customer's clients and another for inter-broker
communication so that you can specify a truststore with your private
ca-cert for your broker connections.

2. Yes, all the commands can specify password on the command line, so you
should be able to generate all the stores using a script without any
interactions.
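
For example, something along these lines (a sketch with placeholder names
and passwords):

#!/bin/bash
STOREPASS=server-keystore-password
KEYPASS=server-key-password

# Generate the broker key-pair without any prompts
keytool -genkey -alias kafka -keystore server.keystore.jks \
  -dname "CN=broker.example.com,O=Example,C=US" \
  -storepass "$STOREPASS" -keypass "$KEYPASS" -validity 365

# Export the certificate
keytool -exportcert -file server-cert-file -keystore server.keystore.jks \
  -alias kafka -storepass "$STOREPASS"

# Import it into a truststore; -noprompt suppresses the trust prompt
keytool -importcert -file server-cert-file -keystore client.truststore.jks \
  -alias kafka -storepass client-truststore-password -noprompt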

Regards,

Rajini


On Wed, May 17, 2017 at 2:49 PM, Raghav <raghavas...@gmail.com> wrote:

> One follow up questions Rajini:
>
> 1. Can we use some other mechanism, like having our customers use a well
> known CA which JKS understands, and in that case we don't have to ask our
> customers to do this certificate-in and certificate-out thing ? I am just
> trying to understand if we can make our customer's workflow easier.
> Anything else that you can suggest here
>
> 2. Can we automate the key gen steps mentioned on the apache website, and
> the adding to keystore and truststore, so that we don't have to manually supply
> a password ? Currently, every time I try the steps mentioned in
> https://kafka.apache.org/documentation/#security I have to manually give
> password. It would be great if we can automate this process either through
> script or Java code. Any suggestions ...
>
>
> Many thanks.
>
> On Tue, May 16, 2017 at 10:58 AM, Raghav <raghavas...@gmail.com> wrote:
>
>> Many thanks, Rajini.
>>
>> On Tue, May 16, 2017 at 8:43 AM, Rajini Sivaram <rajinisiva...@gmail.com>
>> wrote:
>>
>>> Hi Raghav,
>>>
>>> If your Kafka broker is configured with *ssl.client.auth=required,* your
>>> customer's clients need to provide a keystore. In any case, they need a
>>> truststore since your broker is using SSL. For the truststore, you can
>>> give them ca-cert, as you mentioned. The client keystore contains a
>>> certificate and a private key.
>>>
>>> In the round-trip you described, customers generate the keys and give
>>> you the certificate signing request, keeping their private key private. You
>>> then send them back a signed certificate that goes into their keystore.
>>> This is the standard way of signing and is secure.
>>>
>>> In the single step scenario that you described, you generate the
>>> customer's key-pair consisting of certificate and private key. You then
>>> need to send them both the signed certificate and the private key. This is
>>> less secure. Unlike the round-trip, you now have the private key of the
>>> customer.
>>>
>>> Regards,
>>>
>>> Rajini
>>>
>>>
>>> On Tue, May 16, 2017 at 10:47 AM, Raghav <raghavas...@gmail.com> wrote:
>>>
>>>> Hi Rajini
>>>>
>>>> This was very helpful. I have another questions on similar lines.
>>>>
>>>> We host Kafka Broker, and we also have our own private CA. We want our
>>>> customers to setup their Kafka Clients (Producer and Consumer) using SSL
>>>> using *ssl.client.auth=required*.
>>>>
>>>> Is there a way we can generate a certificate for our clients, sign it
>>>> using our private CA, and then hand over to our customers these two
>>>> certificates (1. ca-cert 2. cert-signed), which if they add to their
>>>> keystore and truststore, they can send messages to our Kafka brokers while
>>>> keeping *ssl.client.auth=required*.
>>>>
>>>> We are looking to minimize our customers' pre-setup steps. For example,
>>>> in a normal scenario, customers will need to generate a certificate and hand
>>>> over their certificate request to our private CA, which we then sign
>>>> and send back along with the private CA's certificate. So there is
>>>> one round trip. Just wondering if we can reduce these 2 steps into 1 step.
>>>>
>>>> Thanks.
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Fri, May 12, 2017 at 8:53 AM, Rajini Sivaram <
>>>> rajinisiva...@gmail.com> wrote:
>>>>
>>>>> Raqhav,
>>>>>
>>>>> 1. Clients

Re: Securing Kafka - Keystore and Truststore question

2017-05-16 Thread Rajini Sivaram
Hi Raghav,

If your Kafka broker is configured with *ssl.client.auth=required,* your
customer's clients need to provide a keystore. In any case, they need a
truststore since your broker is using SSL. For the truststore, you can
give them ca-cert, as you mentioned. The client keystore contains a
certificate and a private key.

In the round-trip you described, customers generate the keys and give you
the certificate signing request, keeping their private key private. You
then send them back a signed certificate that goes into their keystore.
This is the standard way of signing and is secure.

In the single step scenario that you described, you generate the customer's
key-pair consisting of certificate and private key. You then need to send
them both the signed certificate and the private key. This is less secure.
Unlike the round-trip, you now have the private key of the customer.
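
For reference, the round trip looks roughly like this on the command line
(a sketch following the pattern in the Kafka security docs; file names are
placeholders):

# Customer: generate key-pair and a signing request
keytool -keystore kafka.client.keystore.jks -alias localhost -validity 365 -genkey
keytool -keystore kafka.client.keystore.jks -alias localhost -certreq -file cert-file

# You: sign the request with the private CA
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed \
  -days 365 -CAcreateserial

# Customer: import the CA cert and the signed cert into the keystore
keytool -keystore kafka.client.keystore.jks -alias CARoot -importcert -file ca-cert
keytool -keystore kafka.client.keystore.jks -alias localhost -importcert -file cert-signed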

Regards,

Rajini


On Tue, May 16, 2017 at 10:47 AM, Raghav <raghavas...@gmail.com> wrote:

> Hi Rajini
>
> This was very helpful. I have another questions on similar lines.
>
> We host Kafka Broker, and we also have our own private CA. We want our
> customers to setup their Kafka Clients (Producer and Consumer) using SSL
> using *ssl.client.auth=required*.
>
> Is there a way we can generate a certificate for our clients, sign it using
> our private CA, and then hand over to our customers these two certificates
> (1. ca-cert 2. cert-signed), which if they add to their keystore and
> truststore, they can send messages to our Kafka brokers while keeping
> *ssl.client.auth=required*.
>
> We are looking to minimize our customer's pre-setup steps. For example in
> normal scenario, customers will need to generate certificate, and hand over
> their certificate request to our private CA, which we then sign it, and
> send them signed certificate and private CA's certificate. So there is one
> round trip. Just wondering if we can reduce this 2 step into 1 step.
>
> Thanks.
>
>
>
>
>
>
>
>
>
>
>
> On Fri, May 12, 2017 at 8:53 AM, Rajini Sivaram <rajinisiva...@gmail.com>
> wrote:
>
>> Raghav,
>>
>> 1. Clients need a keystore if you are using TLS client authentication. To
>> enable client authentication, you need to configure ssl.client.auth in
>> server.properties. This can be set to required|requested|none. If you
>> don't
>> enable client authentication, any client will be able to connect to your
>> broker. You could alternatively use SASL for client authentication.
>> .
>> 2. Client keystore is mandatory if ssl.client.auth=required, optional for
>> requested and not used for none. The truststore configured on the client
>> is
>> used to authenticate the server. So you have to provide it unless your
>> broker is using certificates signed by a trusted authority.
>>
>> Hope that helps.
>>
>> Rajini
>>
>> On Fri, May 12, 2017 at 11:35 AM, Raghav <raghavas...@gmail.com> wrote:
>>
>> > Hi
>> >
>> > I read the documentation here:
>> > https://kafka.apache.org/documentation/#security_ssl
>> >
>> > I have a few questions about trust store and keystore based on this
>> scenario:
>> >
>> > We have 5 Kafka Brokers in our cluster. We want our clients to write to
>> our
>> > Kafka brokers in a secure way. Suppose, we also host a private CA as
>> > mentioned in the documentation above, and provide our clients the
>> *ca-cert*
>> > file, which they add it to their trust store.
>> >
>> > 1. Do we require our clients to generate their certificate and have it
>> > signed by our private CA, and add it to their keystore?
>> >
>> > 2. When is keystore used by clients, and when is truststore used by
>> clients
>> > ?
>> >
>> >
>> > Thanks.
>> >
>> > --
>> > R
>> >
>>
>
>
>
> --
> Raghav
>


Re: Securing Kafka - Keystore and Truststore question

2017-05-12 Thread Rajini Sivaram
Raghav,

1. Clients need a keystore if you are using TLS client authentication. To
enable client authentication, you need to configure ssl.client.auth in
server.properties. This can be set to required|requested|none. If you don't
enable client authentication, any client will be able to connect to your
broker. You could alternatively use SASL for client authentication.
.
2. Client keystore is mandatory if ssl.client.auth=required, optional for
requested and not used for none. The truststore configured on the client is
used to authenticate the server. So you have to provide it unless your
broker is using certificates signed by a trusted authority.
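
For example, the relevant broker settings would look something like this in
server.properties (a sketch; paths and passwords are placeholders):

listeners=SSL://:9093
ssl.keystore.location=/var/private/ssl/server.keystore.jks
ssl.keystore.password=server-keystore-password
ssl.key.password=server-key-password
ssl.truststore.location=/var/private/ssl/server.truststore.jks
ssl.truststore.password=server-truststore-password
# one of: required, requested, none
ssl.client.auth=required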

Hope that helps.

Rajini

On Fri, May 12, 2017 at 11:35 AM, Raghav  wrote:

> Hi
>
> I read the documentation here:
> https://kafka.apache.org/documentation/#security_ssl
>
> I have a few questions about trust store and keystore based on this scenario:
>
> We have 5 Kafka Brokers in our cluster. We want our clients to write to our
> Kafka brokers in a secure way. Suppose, we also host a private CA as
> mentioned in the documentation above, and provide our clients the *ca-cert*
> file, which they add it to their trust store.
>
> 1. Do we require our clients to generate their certificate and have it
> signed by our private CA, and add it to their keystore?
>
> 2. When is keystore used by clients, and when is truststore used by clients
> ?
>
>
> Thanks.
>
> --
> R
>


Re: [ANNOUNCE] New committer: Rajini Sivaram

2017-04-25 Thread Rajini Sivaram
Thanks everyone!

It has been a pleasure working with all of you in the Kafka community. Many
thanks to the PMC for this exciting opportunity.

Regards,

Rajini

On Tue, Apr 25, 2017 at 10:51 AM, Damian Guy <damian@gmail.com> wrote:

> Congrats
> On Tue, 25 Apr 2017 at 09:57, Mickael Maison <mickael.mai...@gmail.com>
> wrote:
>
> > Congratulation Rajini !
> > Great news
> >
> > On Tue, Apr 25, 2017 at 8:54 AM, Edoardo Comar <eco...@uk.ibm.com>
> wrote:
> > > Congratulations Rajini !!!
> > > Well deserved
> > > --
> > > Edoardo Comar
> > > IBM MessageHub
> > > eco...@uk.ibm.com
> > > IBM UK Ltd, Hursley Park, SO21 2JN
> > >
> > > IBM United Kingdom Limited Registered in England and Wales with number
> > > 741598 Registered office: PO Box 41, North Harbour, Portsmouth, Hants.
> > PO6
> > > 3AU
> > >
> > >
> > >
> > > From:   Gwen Shapira <g...@confluent.io>
> > > To: d...@kafka.apache.org, Users <users@kafka.apache.org>,
> > > priv...@kafka.apache.org
> > > Date:   24/04/2017 22:07
> > > Subject:[ANNOUNCE] New committer: Rajini Sivaram
> > >
> > >
> > >
> > > The PMC for Apache Kafka has invited Rajini Sivaram as a committer and
> we
> > > are pleased to announce that she has accepted!
> > >
> > > Rajini contributed 83 patches, 8 KIPs (all security and quota
> > > improvements) and a significant number of reviews. She is also on the
> > > conference committee for Kafka Summit, where she helped select content
> > > for our community event. Through her contributions she's shown good
> > > judgement, good coding skills, willingness to work with the community
> on
> > > finding the best
> > > solutions and very consistent follow through on her work.
> > >
> > > Thank you for your contributions, Rajini! Looking forward to many more
> :)
> > >
> > > Gwen, for the Apache Kafka PMC
> > >
> > >
> > >
> > > Unless stated otherwise above:
> > > IBM United Kingdom Limited - Registered in England and Wales with
> number
> > > 741598.
> > > Registered office: PO Box 41, North Harbour, Portsmouth, Hampshire PO6
> > 3AU
> >
>


Re: Consumption on an explicitly (dynamically) created topic has a 5 minute delay

2017-03-02 Thread Rajini Sivaram
This issue is being addressed in KAFKA-4631. See
https://issues.apache.org/jira/browse/KAFKA-4631 and the discussion in the
PR https://github.com/apache/kafka/pull/2622 for details.
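
For anyone hitting this in the meantime: the five minutes correspond to the
consumer's metadata.max.age.ms, which defaults to 300000 ms. One possible
workaround (a sketch, not the actual fix) is to lower that interval so a
newly created topic is picked up sooner:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class EagerMetadataConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-app-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
            "org.apache.kafka.common.serialization.StringDeserializer");
        // Default is 300000 (5 minutes) - the delay described in this thread
        props.put(ConsumerConfig.METADATA_MAX_AGE_CONFIG, "10000");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("foo-bar"));
            consumer.poll(1000); // triggers a metadata refresh and rebalance
        }
    }
}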

Regards,

Rajini

On Thu, Mar 2, 2017 at 4:35 AM, Jaikiran Pai 
wrote:

> For future reference - I asked this question on dev mailing list and based
> on the discussion there was able to come up with a workaround to get this
> working. Details here
> https://www.mail-archive.com/de...@kafka.apache.org/msg67613.html
>
> -Jaikiran
>
>
> On Wednesday 22 February 2017 01:16 PM, Jaikiran Pai wrote:
>
>> We are on Kafka 0.10.0.1 (server and client) and use Java
>> consumer/producer APIs. We have an application where we create Kafka topics
>> dynamically (using the AdminUtils Java API) and then start
>> producing/consuming on those topics. The issue we frequently run into is
>> this:
>>
>> 1. Application process creates a topic "foo-bar" via
>> AdminUtils.createTopic. This is successfully completed.
>> 2. Same application process then creates a consumer (using new Java
>> consumer API) on that foo-bar topic as a next step.
>> 3. The consumer that gets created in step#2 however, doesn't seem to be
>> enrolled in consumer group for this topic because of this (notice the last
>> line in the log):
>>
>> 2017-02-21 00:58:43,359 [Thread-6] DEBUG 
>> org.apache.kafka.clients.consumer.KafkaConsumer
>> - Kafka consumer created
>> 2017-02-21 00:58:43,360 [Thread-6] DEBUG 
>> org.apache.kafka.clients.consumer.KafkaConsumer
>> - Subscribed to topic(s): foo-bar
>> 2017-02-21 00:58:43,543 [Thread-6] DEBUG org.apache.kafka.clients.consu
>> mer.internals.AbstractCoordinator - Received group coordinator response
>> ClientResponse(receivedTimeMs=1487667523542, disconnected=false,
>> request=ClientRequest(expectResponse=true, callback=org.apache.kafka.clie
>> nts.consumer.internals.ConsumerNetworkClient$RequestFutureCo
>> mpletionHandler@50aad50f, request=RequestSend(header={ap
>> i_key=10,api_version=0,correlation_id=0,client_id=consumer-1},
>> body={group_id=my-app-group}), createdTimeMs=1487667523378,
>> sendTimeMs=1487667523529), responseBody={error_code=0,coo
>> rdinator={node_id=0,host=localhost,port=9092}})
>> 2017-02-21 00:58:43,543 [Thread-6] INFO org.apache.kafka.clients.consu
>> mer.internals.AbstractCoordinator - Discovered coordinator
>> localhost:9092 (id: 2147483647 rack: null) for group my-app-group.
>> 2017-02-21 00:58:43,545 [Thread-6] INFO org.apache.kafka.clients.consu
>> mer.internals.ConsumerCoordinator - Revoking previously assigned
>> partitions [] for group my-app-group
>> 2017-02-21 00:58:43,545 [Thread-6] INFO org.apache.kafka.clients.consu
>> mer.internals.AbstractCoordinator - (Re-)joining group my-app-group
>> 2017-02-21 00:58:43,548 [Thread-6] DEBUG org.apache.kafka.clients.consu
>> mer.internals.AbstractCoordinator - Sending JoinGroup
>> ({group_id=my-app-group,session_timeout=3,member_id=,
>> protocol_type=consumer,group_protocols=[{protocol_name=
>> range,protocol_metadata=java.nio.HeapByteBuffer[pos=0 lim=59 cap=59]}]})
>> to coordinator localhost:9092 (id: 2147483647 rack:
>> null)
>> 2017-02-21 00:58:43,548 [Thread-6] DEBUG 
>> org.apache.kafka.common.metrics.Metrics
>> - Added sensor with name node-2147483647.bytes-sent
>> 2017-02-21 00:58:43,549 [Thread-6] DEBUG 
>> org.apache.kafka.common.metrics.Metrics
>> - Added sensor with name node-2147483647.bytes-received
>> 2017-02-21 00:58:43,549 [Thread-6] DEBUG 
>> org.apache.kafka.common.metrics.Metrics
>> - Added sensor with name node-2147483647.latency
>> 2017-02-21 00:58:43,552 [Thread-6] DEBUG org.apache.kafka.clients.consu
>> mer.internals.AbstractCoordinator - Received successful join group
>> response for group my-app-group: {error_code=0,generation_id=1,
>> group_protocol=range,leader_id=consumer-1-1453e523-402a-43fe
>> -87e8-795ae4c68c5d,member_id=consumer-1-1453e523-402a-43fe-
>> 87e8-795ae4c68c5d,members=[{member_id=consumer-1-1453e523-
>> 402a-43fe-87e8-795ae4c68c5d,member_metadata=java.nio.HeapByteBuffer[pos=0
>> lim=59 cap=59]}]}
>> 2017-02-21 00:58:43,552 [Thread-6] DEBUG org.apache.kafka.clients.consu
>> mer.internals.ConsumerCoordinator - Performing assignment for group
>> my-app-group using strategy range with subscriptions
>> {consumer-1-1453e523-402a-43fe-87e8-795ae4c68c5d=Subscriptio
>> n(topics=[foo-bar])}
>> *2017-02-21 00:58:43,552 [Thread-6] DEBUG org.apache.kafka.clients.consu
>> mer.internals.AbstractPartitionAssignor - Skipping assignment for topic
>> foo-bar since no metadata is available*
>>
>>
>> 4. A few seconds later, a separate process, produces (via Java producer
>> API) on the foo-bar topic, some messages.
>> 5. The consumer created in step#2 (although is waiting for messages) on
>> the foo-bar topic, _doesn't_ consume these messages.
>> 6. *5 minutes later* the Kafka server triggers a consumer rebalance which
>> then successfully assigns partition(s) of this foo-bar topic to 

Re: [kafka-clients] [VOTE] 0.10.2.0 RC2

2017-02-16 Thread Rajini Sivaram
+1 (non-binding)

Ran quick start and some security tests on binary, checked source build and
tests.

Thank you,

Rajini

On Thu, Feb 16, 2017 at 2:04 AM, Jun Rao  wrote:

> Hi, Ewen,
>
> Thanks for running the release. +1. Verified quickstart on 2.10 binary.
>
> Jun
>
> On Tue, Feb 14, 2017 at 10:39 AM, Ewen Cheslack-Postava  >
> wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the third candidate for release of Apache Kafka 0.10.2.0.
> >
> > This is a minor version release of Apache Kafka. It includes 19 new KIPs.
> > See the release notes and release plan
> > (https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+0.10.2.0)
> > for more details. A few
> > feature highlights: SASL-SCRAM support, improved client compatibility to
> > allow use of clients newer than the broker, session windows and global
> > tables in the Kafka Streams API, single message transforms in the Kafka
> > Connect framework.
> >
> > Important note: in addition to the artifacts generated using JDK7 for
> > Scala 2.10 and 2.11, this release also includes experimental artifacts
> > built using JDK8 for Scala 2.12.
> >
> > Important code changes since RC1 (non-docs, non system tests):
> >
> > KAFKA-4756; The auto-generated broker id should be passed to
> > MetricReporter.configure
> > KAFKA-4761; Fix producer regression handling small or zero batch size
> >
> > Release notes for the 0.10.2.0 release:
> > http://home.apache.org/~ewencp/kafka-0.10.2.0-rc2/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by February 17th 5pm ***
> >
> > Kafka's KEYS file containing PGP keys we use to sign the release:
> > http://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > http://home.apache.org/~ewencp/kafka-0.10.2.0-rc2/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/
> >
> > * Javadoc:
> > http://home.apache.org/~ewencp/kafka-0.10.2.0-rc2/javadoc/
> >
> > * Tag to be voted upon (off 0.10.2 branch) is the 0.10.2.0 tag:
> > https://git-wip-us.apache.org/repos/asf?p=kafka.git;a=tag;h=5712b489038b71ed8d5a679856d1dfaa925eadc1
> >
> >
> > * Documentation:
> > http://kafka.apache.org/0102/documentation.html
> >
> > * Protocol:
> > http://kafka.apache.org/0102/protocol.html
> >
> > * Successful Jenkins builds for the 0.10.2 branch:
> > Unit/integration tests: https://builds.apache.org/job/kafka-0.10.2-jdk7/77/
> > System tests: https://jenkins.confluent.io/job/system-test-kafka-0.10.2/29/
> >
> > /**
> >
> > Thanks,
> > Ewen
> >
> >
>


Re: Passing SSL client principal to custom JAAS module with SSL or SASL_SSL

2017-02-13 Thread Rajini Sivaram
Christopher,

It is definitely worth writing this up and starting a discussion on the dev
list. A KIP is required if there are changes to public interfaces or
configuration. I imagine this will require some config changes and hence if
you can write up a small KIP, that will be useful for discussion.
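
In the meantime, for the SSL-only case the principal can already be
customised via principal.builder.class. A rough sketch against the
PrincipalBuilder interface as it exists today (treat it as an outline from
memory - the mapping logic is the part you would fill in):

import java.security.Principal;
import java.util.Map;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.network.Authenticator;
import org.apache.kafka.common.network.TransportLayer;
import org.apache.kafka.common.security.auth.PrincipalBuilder;

public class DnMappingPrincipalBuilder implements PrincipalBuilder {
    @Override
    public void configure(Map<String, ?> configs) {
        // no-op; custom settings could be read here
    }

    @Override
    public Principal buildPrincipal(TransportLayer transportLayer,
                                    Authenticator authenticator) throws KafkaException {
        try {
            // On an SSL channel this is the client certificate's subject DN
            Principal peer = transportLayer.peerPrincipal();
            // Custom mapping or lookup would go here; this sketch returns it as-is
            return peer;
        } catch (Exception e) {
            throw new KafkaException("Failed to build principal", e);
        }
    }

    @Override
    public void close() throws KafkaException {
        // nothing to clean up
    }
}

It is configured on the broker with principal.builder.class set to the
fully qualified class name, and the class must be on the broker classpath.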

Regards,

Rajini

On Mon, Feb 13, 2017 at 1:17 PM, Christopher Shannon <
christopher.l.shan...@gmail.com> wrote:

> Thanks for the response Rajini.
>
> It might be nice to support both but really I just need a mechanism to get
> hold of the client credentials when using SSL and then to do some extra
> custom authentication processing with the credentials.   I was thinking
> that to do this it would make sense to optionally allow the configuration
> of a custom JAAS LoginModule to be used when authenticating with SSL so
> that authentication logic could be plugged in. (just like the SASL SSL
> channel allows a configurable LoginModule) The credentials could then be
> obtained with the help of a X509 CallbackHandler.  Also if a login module
> is configured then it could return the principal instead of having to write
> a custom principal builder class.
>
> I am happy to work on a pull request for this change.  I'm not sure if a
> change like this would require a KIP but I can start a dev list thread to
> see what others think.
>
>
> On Mon, Feb 13, 2017 at 7:10 AM, Rajini Sivaram <rajinisiva...@gmail.com>
> wrote:
>
> > Christopher,
> >
> > SSL client authentication is currently disabled when SASL_SSL is used, so
> > it is not possible to use client certificate credentials with SASL_SSL.
> Are
> > you expecting to authenticate clients using certificates as well as using
> > SASL? Or do you just need some mechanism to get hold of the client
> > credentials with SSL?
> >
> > Regards,
> >
> > Rajini
> >
> > On Fri, Feb 10, 2017 at 5:46 PM, Christopher Shannon <
> > christopher.l.shan...@gmail.com> wrote:
> >
> > > I need to create a custom JAAS module for authentication but I need to
> > pass
> > > client certificate credentials as the principal.  SASL_SSL mode has
> > support
> > > for a JAAS module but from looking at the source code there doesn't
> > appear
> > > to be a way to pass SSL client credentials to the module.  The only
> > > callback handlers are for username/password and for kerberos.  However,
> > the
> > > SSL mode can extract a principal from the client certificate but when
> > using
> > > SSL without SASL there appears to be no way to plug in a JAAS module.
> > >
> > > So it seems that I am looking for kind of a combination of SSL and
> > SASL_SSL
> > > modes.  Is there any way to configure out of the box what I am trying to do
> > or
> > > is this going to require a code change? I can work on a pull request if
> > > necessary.
> > >
> >
>
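
For reference, a minimal sketch of the principal-customisation route discussed
above, written against the PrincipalBuilder interface that 0.9/0.10 brokers
load via principal.builder.class. The class name and the CN-extraction logic
here are illustrative, not part of the thread:

import java.security.Principal;
import java.util.Map;

import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.network.Authenticator;
import org.apache.kafka.common.network.TransportLayer;
import org.apache.kafka.common.security.auth.KafkaPrincipal;
import org.apache.kafka.common.security.auth.PrincipalBuilder;

// Illustrative: build a short "User:<cn>" principal from the SSL client
// certificate instead of the full distinguished name.
public class CnPrincipalBuilder implements PrincipalBuilder {

    @Override
    public void configure(Map<String, ?> configs) {
        // nothing to configure in this sketch
    }

    @Override
    public Principal buildPrincipal(TransportLayer transportLayer,
                                    Authenticator authenticator) throws KafkaException {
        try {
            // For SSL channels this is the X500Principal of the peer certificate.
            Principal peer = transportLayer.peerPrincipal();
            for (String part : peer.getName().split(",")) {
                if (part.trim().startsWith("CN="))
                    return new KafkaPrincipal(KafkaPrincipal.USER_TYPE, part.trim().substring(3));
            }
            return peer; // fall back to the full DN
        } catch (Exception e) {
            throw new KafkaException("Failed to build principal", e);
        }
    }

    @Override
    public void close() throws KafkaException {
        // no resources to release
    }
}

With such a class on the broker classpath, setting
principal.builder.class=CnPrincipalBuilder would make ACLs refer to
User:writeuser rather than the full certificate DN. The JAAS LoginModule hook
that Christopher describes would still need the KIP discussed above.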


Re: Passing SSL client principal to custom JAAS module with SSL or SASL_SSL

2017-02-13 Thread Rajini Sivaram
Christopher,

SSL client authentication is currently disabled when SASL_SSL is used, so
it is not possible to use client certificate credentials with SASL_SSL. Are
you expecting to authenticate clients using certificates as well as using
SASL? Or do you just need some mechanism to get hold of the client
credentials with SSL?

Regards,

Rajini

On Fri, Feb 10, 2017 at 5:46 PM, Christopher Shannon <
christopher.l.shan...@gmail.com> wrote:

> I need to create a custom JAAS module for authentication but I need to pass
> client certificate credentials as the principal.  SASL_SSL mode has support
> for a JAAS module but from looking at the source code there doesn't appear
> to be a way to pass SSL client credentials to the module.  The only
> callback handlers are for username/password and for kerberos.  However, the
> SSL mode can extract a principal from the client certificate but when using
> SSL without SASL there appears to be no way to plug in a JAAS module.
>
> So it seems that I am looking for kind of a combination of SSL and SASL_SSL
> modes.  Is there any way to configure out of the box what I am trying to do or
> is this going to require a code change? I can work on a pull request if
> necessary.
>


Re: Kafka SSL encryption plus external CA

2016-12-21 Thread Rajini Sivaram
Stephane,

I believe that should work, though I haven't tried it myself.

On Wed, Dec 21, 2016 at 12:11 AM, Stephane Maarek <
steph...@simplemachines.com.au> wrote:

> Thanks Rajini.
>
> I used a CNAME broker-bootstrap-A.example.com that round robins to the
> actual brokers broker-1.example.com, broker-2.example.com (etc etc).
> Therefore no broker advertises the bootstrap DNS name we’re using. Is
> that an issue? The SSL certificate wildcard will match both the bootstrap
> CNAME and the advertised hostnames.
>
> We basically have the CNAME in order to cover all the brokers only using 3
> DNS records, but the bootstrap CNAME is never advertised by any of the
> brokers. Is that an issue?
>
> Kind regards,
> Stephane
>
> Stephane Maarek | Developer
> +61 416 575 980
> steph...@simplemachines.com.au
> simplemachines.com.au
> Level 2, 145 William Street, Sydney NSW 2010
>
> On 21 December 2016 at 12:22:54 am, Rajini Sivaram (
> rajinisiva...@gmail.com) wrote:
>
> Stephane,
>
> Bootstrap brokers are also verified by the client in exactly the same way,
> so they should also match the wildcard of their certificate. Basically,
> clients need to make a secure SSL connection to one of the bootstrap
> brokers to obtain advertised hostnames of brokers, so they need to complete
> hostname verification of the bootstrap brokers.
>
>
> On Tue, Dec 20, 2016 at 12:21 AM, Stephane Maarek <
> steph...@simplemachines.com.au> wrote:
>
>> Thanks Rajini!
>>
>> Also, I currently have each broker advertising as broker1.mydomain.com,
>> broker2.mydomain.com, broker6.mydomain.com, etc…
>> I have set up CNAMEs in round-robin fashion to group brokers by
>> availability zone, i.e. broker-a.mydomain.com, broker-b.mydomain.com,
>> broker-c.mydomain.com. I use them for setting up the bootstrap so that I
>> get high resiliency and don’t need to change the client code if I add,
>> remove, or change brokers.
>>
>> Do I need the bootstrap servers to match the wildcard of the certificate,
>> or is the SSL verification happening after we get the advertised hostnames
>> from the brokers?
>>
>> Kind regards,
>> Stephane
>>
>> Stephane Maarek | Developer
>> +61 416 575 980
>> steph...@simplemachines.com.au
>> simplemachines.com.au
>> Level 2, 145 William Street, Sydney NSW 2010
>>
>> On 20 December 2016 at 4:27:28 am, Rajini Sivaram (
>> rajinisiva...@gmail.com) wrote:
>>
>> Stephane,
>>
>> If you are using a trusted CA like Verisign, clients don't need to specify
>> a truststore. The host names specified in advertised.listeners in the
>> broker must match the wildcard DNS names in the certificates if clients
>> configure ssl.endpoint.identification.algorithm=https. If
>> ssl.endpoint.identification.algorithm is not specified, by default
>> hostname
>> is not validated. It should be set to https however to prevent
>> man-in-the-middle attacks. There is an open JIRA to make this the default
>> in Kafka.
>>
>> It makes sense to enable SSL in dev and prod to ensure that the code path
>> being run in dev is the same as in prod.
>>
>>
>>
>> On Mon, Dec 19, 2016 at 3:50 AM, Stephane Maarek <
>> steph...@simplemachines.com.au> wrote:
>>
>> > Hi,
>> >
>> > I have read the docs extensively but there are still a few answers I can’t
>> > find. It has to do with external CA
>> > Please confirm my understanding if possible:
>> >
>> > I can create my own CA to sign all the brokers’ and clients’ certificates.
>> > Pros:
>> > - cheap, easy, automated. I need to find a way to access that CA
>> > programmatically for new brokers if I want to automate their deployment,
>> > but I could use something like credstash or vault for that.
>> > Cons:
>> > - all of my clients need to trust the CA. That means somehow finding a way
>> > for my clients to get access to the CA using ca-cert and adding it to their
>> > truststore… correct?
>> >
>> > I don’t really like the fact that I need to provide the CA cert file to
>> > every client. That seems quite hard to achieve, and prevents my users
>> from
>> > using the Kafka cluster directly. What’s the best way for the Kafka
>> clients
>> > to get access to the CA, while my users are doing dev, etc? Most of our
>> > applications run in Docker, which means we usually pass stuff around
>> using
>>

Re: Kafka SSL encryption plus external CA

2016-12-20 Thread Rajini Sivaram
Stephane,

Bootstrap brokers are also verified by the client in exactly the same way,
so they should also match the wildcard of their certificate. Basically,
clients need to make a secure SSL connection to one of the bootstrap
brokers to obtain advertised hostnames of brokers, so they need to complete
hostname verification of the bootstrap brokers.


On Tue, Dec 20, 2016 at 12:21 AM, Stephane Maarek <
steph...@simplemachines.com.au> wrote:

> Thanks Rajini!
>
> Also, I currently have each broker advertising as broker1.mydomain.com,
> broker2.mydomain.com, broker6.mydomain.com, etc…
> I have set up CNAMEs in round-robin fashion to group brokers by
> availability zone, i.e. broker-a.mydomain.com, broker-b.mydomain.com,
> broker-c.mydomain.com. I use them for setting up the bootstrap so that I
> get high resiliency and don’t need to change the client code if I add,
> remove, or change brokers.
>
> Do I need the bootstrap servers to match the wildcard of the certificate,
> or is the SSL verification happening after we get the advertised hostnames
> from the brokers?
>
> Kind regards,
> Stephane
>
> Stephane Maarek | Developer
> +61 416 575 980
> steph...@simplemachines.com.au
> simplemachines.com.au
> Level 2, 145 William Street, Sydney NSW 2010
>
> On 20 December 2016 at 4:27:28 am, Rajini Sivaram (rajinisiva...@gmail.com)
> wrote:
>
> Stephane,
>
> If you are using a trusted CA like Verisign, clients don't need to specify
> a truststore. The host names specified in advertised.listeners in the
> broker must match the wildcard DNS names in the certificates if clients
> configure ssl.endpoint.identification.algorithm=https. If
> ssl.endpoint.identification.algorithm is not specified, by default
> hostname
> is not validated. It should be set to https however to prevent
> man-in-the-middle attacks. There is an open JIRA to make this the default
> in Kafka.
>
> It makes sense to enable SSL in dev and prod to ensure that the code path
> being run in dev is the same as in prod.
>
>
>
> On Mon, Dec 19, 2016 at 3:50 AM, Stephane Maarek <
> steph...@simplemachines.com.au> wrote:
>
> > Hi,
> >
> > I have read the docs extensively but there are still a few answers I can’t
> > find. It has to do with external CA
> > Please confirm my understanding if possible:
> >
> > I can create my own CA to sign all the brokers’ and clients’ certificates.
> > Pros:
> > - cheap, easy, automated. I need to find a way to access that CA
> > programmatically for new brokers if I want to automate their deployment,
> > but I could use something like credstash or vault for that.
> > Cons:
> > - all of my clients need to trust the CA. That means somehow finding a way
> > for my clients to get access to the CA using ca-cert and adding it to their
> > truststore… correct?
> >
> > I don’t really like the fact that I need to provide the CA cert file to
> > every client. That seems quite hard to achieve, and prevents my users
> from
> > using the Kafka cluster directly. What’s the best way for the Kafka
> clients
> > to get access to the CA, while my users are doing dev, etc? Most of our
> > applications run in Docker, which means we usually pass stuff around
> using
> > environment variables.
> >
> >
> > My next idea was to use an external CA (like Verisign) to sign my
> > certificate with a wildcard *.kafka.mydomain.com (A records pointing to
> > internal IPs - the DNS name would be the advertised kafka hostname). My
> > goal was then for the clients not to have to trust the CA because it
> > would be automatically trusted? Do I have the correct understanding? Or
> do
> > I still need to add the external CA to the truststore of my clients?
> > (basically I’m trying to reproduce the behaviour of what a web browser
> > does).
> >
> >
> > Finally, is it recommended to enable SSL in my dev Kafka cluster vs my
> prod
> > Kafka cluster, or to have SSL on each cluster?
> >
> > Thanks!
> >
> > Kind regards,
> > Stephane
> >
>
>
>
> --
> Regards,
>
> Rajini
>
>


Re: Kafka ACL's with SSL Protocol is not working

2016-12-20 Thread Rajini Sivaram
Raghu,

Only the principal used for inter broker communication needs to be a super
user. For other users, you can set ACLs based on their role. You will need
different keystores for broker and clients with different principals so
that you can configure different permissions. You can configure User:Broker
as superuser and User:User_1 with produce permissions and User:User_2 with
consume permissions.

On Mon, Dec 19, 2016 at 8:10 PM, Raghu B <raghu98...@gmail.com> wrote:

> Thanks Rajini for the above info, but I want to restrict a user from
> performing all the operations (I think that defines ACLs); I just want
> User_1 to produce messages and User_2 to consume messages.
>
> How can we achieve that?
>
> Thanks in advance
>
> On Mon, Dec 19, 2016 at 3:13 AM, Rajini Sivaram <rajinisiva...@gmail.com>
> wrote:
>
>> Raghu,
>>
>> It could be because the principal used for inter broker communication
>> doesn't have all the necessary permissions. If you are using PLAINTEXT for
>> inter-broker, the principal is ANONYMOUS; if using SSL, it would be similar
>> to the one you are setting for the client. You can configure the broker
>> principal as super.users to give full access.
>>
>> On Fri, Dec 16, 2016 at 10:16 PM, Raghu B <raghu98...@gmail.com> wrote:
>>
>> > Thank you Rajini, your suggestion is really helpful.
>> >
>> >
>> > [2016-12-16 21:55:36,720] DEBUG Principal =
>> > User:CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown
>> is
>> > Allowed Operation = Create from host = 172.28.89.63 on resource =
>> > Cluster:kafka-cluster (kafka.authorizer.logger)
>> >
>> > Finally I am getting the user as exactly what I set in my SSL-Cert (Not
>> > Anonymous).
>> >
>> > But, I am getting another Error i.e
>> >
>> >
>> > [2016-12-16 13:55:36,449] WARN Error while fetching metadata with
>> > correlation id 45 : {my-ssl-topic=LEADER_NOT_AVAILABLE}
>> > (org.apache.kafka.clients.NetworkClient)
>> > [2016-12-16 13:55:36,609] WARN Error while fetching metadata with
>> > correlation id 46 : {my-ssl-topic=LEADER_NOT_AVAILABLE}
>> > (org.apache.kafka.clients.NetworkClient)
>> > [2016-12-16 13:55:36,766] WARN Error while fetching metadata with
>> > correlation id 47 : {my-ssl-topic=LEADER_NOT_AVAILABLE}
>> > (org.apache.kafka.clients.NetworkClient)
>> >
>> >
>> > I created the topic and my Kafka node is working without any issues (I
>> > restarted several times)
>> >
>> > [raghu@Kafka-238343-1-33109167 kafka_2.11-0.10.1.0]$ bin/kafka-topics.sh
>> > --describe --zookeeper localhost:2181 --topic my-ssl-topic
>> >
>> > Topic:my-ssl-topic PartitionCount:1 ReplicationFactor:1 Configs:
>> > Topic: my-ssl-topic Partition: 0 Leader: 0 Replicas: 0 Isr: 0
>> >
>> > Thanks in advance,
>> > Raghu
>> >
>> >
>> > On Fri, Dec 16, 2016 at 1:30 AM, Rajini Sivaram <rsiva...@pivotal.io>
>> > wrote:
>> >
>> > > You need to set ssl.client.auth="required" in server.properties.
>> > >
>> > > Regards,
>> > >
>> > > Rajini
>> > >
>> > > On Wed, Dec 14, 2016 at 12:12 AM, Raghu B <raghu98...@gmail.com>
>> wrote:
>> > >
>> > > > Hi All,
>> > > >
>> > > > I am trying to enable ACLs in my Kafka cluster along with the SSL
>> > > > protocol.
>> > > >
>> > > > I tried each and every parameter but no luck, so I need help
>> to
>> > > > enable SSL (without Kerberos) and I am attaching all the
>> > configuration
>> > > > details in this.
>> > > >
>> > > > Kindly Help me.
>> > > >
>> > > >
>> > > > *I tested SSL without ACL, it worked fine
>> > > > (listeners=SSL://10.247.195.122:9093 <http://10.247.195.122:9093>)*
>> > > >
>> > > >
>> > > > *This is my Kafka server properties file:*
>> > > >
>> > > > *# ACL SETTINGS
>> > > #*
>> > > >
>> > > > *auto.create.topics.enable=true*
>> > > >
>> > > > *authorizer.class.name
>> > > > <http://authorizer.class.name>=kafka.security.auth.
>> > SimpleAclAuthorizer*
>> > > >
>> > > > *secu

Re: Kafka ACL's with SSL Protocol is not working

2016-12-19 Thread Rajini Sivaram
Raghu,

It could be because the principal used for inter broker communication
doesn't have all the necessary permissions. If you are using PLAINTEXT for
inter-broker, the principal is ANONYMOUS; if using SSL, it would be similar
to the one you are setting for the client. You can configure the broker
principal as super.users to give full access.

On Fri, Dec 16, 2016 at 10:16 PM, Raghu B <raghu98...@gmail.com> wrote:

> Thank you Rajini, your suggestion is really helpful.
>
>
> [2016-12-16 21:55:36,720] DEBUG Principal =
> User:CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown is
> Allowed Operation = Create from host = 172.28.89.63 on resource =
> Cluster:kafka-cluster (kafka.authorizer.logger)
>
> Finally I am getting the user as exactly what I set in my SSL-Cert (Not
> Anonymous).
>
> But, I am getting another Error i.e
>
>
> [2016-12-16 13:55:36,449] WARN Error while fetching metadata with
> correlation id 45 : {my-ssl-topic=LEADER_NOT_AVAILABLE}
> (org.apache.kafka.clients.NetworkClient)
> [2016-12-16 13:55:36,609] WARN Error while fetching metadata with
> correlation id 46 : {my-ssl-topic=LEADER_NOT_AVAILABLE}
> (org.apache.kafka.clients.NetworkClient)
> [2016-12-16 13:55:36,766] WARN Error while fetching metadata with
> correlation id 47 : {my-ssl-topic=LEADER_NOT_AVAILABLE}
> (org.apache.kafka.clients.NetworkClient)
>
>
> I created the topic and my Kafka node is working without any issues (I
> restarted several times)
>
> [raghu@Kafka-238343-1-33109167 kafka_2.11-0.10.1.0]$ bin/kafka-topics.sh
> --describe --zookeeper localhost:2181 --topic my-ssl-topic
>
> Topic:my-ssl-topic PartitionCount:1 ReplicationFactor:1 Configs:
> Topic: my-ssl-topic Partition: 0 Leader: 0 Replicas: 0 Isr: 0
>
> Thanks in advance,
> Raghu
>
>
> On Fri, Dec 16, 2016 at 1:30 AM, Rajini Sivaram <rsiva...@pivotal.io>
> wrote:
>
> > You need to set ssl.client.auth="required" in server.properties.
> >
> > Regards,
> >
> > Rajini
> >
> > On Wed, Dec 14, 2016 at 12:12 AM, Raghu B <raghu98...@gmail.com> wrote:
> >
> > > Hi All,
> > >
> > > I am trying to enable ACLs in my Kafka cluster along with the SSL
> > > protocol.
> > >
> > > I tried each and every parameter but no luck, so I need help to
> > > enable SSL (without Kerberos) and I am attaching all the
> configuration
> > > details in this.
> > >
> > > Kindly Help me.
> > >
> > >
> > > *I tested SSL without ACL, it worked fine
> > > (listeners=SSL://10.247.195.122:9093 <http://10.247.195.122:9093>)*
> > >
> > >
> > > *This is my Kafka server properties file:*
> > >
> > > *# ACL SETTINGS
> > #*
> > >
> > > *auto.create.topics.enable=true*
> > >
> > > *authorizer.class.name
> > > <http://authorizer.class.name>=kafka.security.auth.
> SimpleAclAuthorizer*
> > >
> > > *security.inter.broker.protocol=SSL*
> > >
> > > *#allow.everyone.if.no.acl.found=true*
> > >
> > > *#principal.builder.class=CustomizedPrincipalBuilderClass*
> > >
> > > *#super.users=User:"CN=writeuser,OU=Unknown,O=
> > > Unknown,L=Unknown,ST=Unknown,C=Unknown"*
> > >
> > > *#super.users=User:Raghu;User:Admin*
> > >
> > > *#offsets.storage=kafka*
> > >
> > > *#dual.commit.enabled=true*
> > >
> > > *listeners=SSL://10.247.195.122:9093 <http://10.247.195.122:9093>*
> > >
> > > *#listeners=PLAINTEXT://10.247.195.122:9092 <
> http://10.247.195.122:9092
> > >*
> > >
> > > *#listeners=PLAINTEXT://10.247.195.122:9092
> > > <http://10.247.195.122:9092>,SSL://10.247.195.122:9093
> > > <http://10.247.195.122:9093>*
> > >
> > > *#advertised.listeners=PLAINTEXT://10.247.195.122:9092
> > > <http://10.247.195.122:9092>*
> > >
> > >
> > > *
> > > ssl.keystore.location=/home/raghu/kafka/security/server.keystore.jks*
> > >
> > > *ssl.keystore.password=123456*
> > >
> > > *ssl.key.password=123456*
> > >
> > > *
> > > ssl.truststore.location=/home/raghu/kafka/security/server.
> > truststore.jks*
> > >
> > > *ssl.truststore.password=123456*
> > >
> > >
> > >
> > > *Set the ACL from Authorizer CLI:*
> > >
> > > > *bin/kafka-acls.sh --

Re: Kafka SSL encryption plus external CA

2016-12-19 Thread Rajini Sivaram
Stephane,

If you are using a trusted CA like Verisign, clients don't need to specify
a truststore. The host names specified in advertised.listeners in the
broker must match the wildcard DNS names in the certificates if clients
configure ssl.endpoint.identification.algorithm=https. If
ssl.endpoint.identification.algorithm is not specified, by default hostname
is not validated. It should be set to  https however to prevent
man-in-the-middle attacks. There is an open JIRA to make this the default
in Kafka.

It makes sense to enable SSL in dev and prod to ensure that the code path
being run in dev is the same as in prod.



On Mon, Dec 19, 2016 at 3:50 AM, Stephane Maarek <
steph...@simplemachines.com.au> wrote:

> Hi,
>
> I have read the docs extensively but there are still a few answers I can’t
> find. It has to do with external CA
> Please confirm my understanding if possible:
>
> I can create my own CA to sign all the brokers’ and clients’ certificates.
> Pros:
> - cheap, easy, automated. I need to find a way to access that CA
> programmatically for new brokers if I want to automate their deployment,
> but I could use something like credstash or vault for that.
> Cons:
> - all of my clients need to trust the CA. That means somehow finding a way
> for my clients to get access to the CA using ca-cert and adding it to their
> truststore… correct?
>
> I don’t really like the fact that I need to provide the CA cert file to
> every client. That seems quite hard to achieve, and prevents my users from
> using the Kafka cluster directly. What’s the best way for the Kafka clients
> to get access to the CA, while my users are doing dev, etc? Most of our
> applications run in Docker, which means we usually pass stuff around using
> environment variables.
>
>
> My next idea was to use an external CA (like Verisign) to sign my
> certificate with a wildcard *.kafka.mydomain.com (A records pointing to
> internal IPs - the DNS name would be the advertised kafka hostname). My
> goal was then for the clients not to have to trust the CA because it
> would be automatically trusted? Do I have the correct understanding? Or do
> I still need to add the external CA to the truststore of my clients?
> (basically I’m trying to reproduce the behaviour of what a web browser
> does).
>
>
> Finally, is it recommended to enable SSL in my dev Kafka cluster vs my prod
> Kafka cluster, or to have SSL on each cluster?
>
> Thanks!
>
> Kind regards,
> Stephane
>



-- 
Regards,

Rajini
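
As a concrete illustration of the hostname-verification points in this thread,
a client configuration along these lines would exercise them (the property
names are the standard ones; the host name and paths are placeholders):

# client-ssl.properties (illustrative)
security.protocol=SSL
bootstrap.servers=broker-a.mydomain.com:9093
# Check that the broker certificate (e.g. a wildcard *.mydomain.com cert)
# matches every host the client connects to, bootstrap hosts included:
ssl.endpoint.identification.algorithm=https
# Only needed when the signing CA is not in the JVM's default truststore:
#ssl.truststore.location=/path/to/client.truststore.jks
#ssl.truststore.password=changeit

With a certificate from a trusted CA such as Verisign, the last two lines stay
commented out, which is exactly the browser-like behaviour Stephane is after.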


Re: Kafka ACL's with SSL Protocol is not working

2016-12-16 Thread Rajini Sivaram
You need to set ssl.client.auth="required" in server.properties.

Regards,

Rajini

On Wed, Dec 14, 2016 at 12:12 AM, Raghu B  wrote:

> Hi All,
>
> I am trying to enable ACLs in my Kafka cluster along with the SSL
> protocol.
>
> I tried each and every parameter but no luck, so I need help to
> enable SSL (without Kerberos) and I am attaching all the configuration
> details in this.
>
> Kindly Help me.
>
>
> I tested SSL without ACL, it worked fine
> (listeners=SSL://10.247.195.122:9093)
>
>
> This is my Kafka server properties file:
>
> # ACL SETTINGS #
>
> auto.create.topics.enable=true
> authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
> security.inter.broker.protocol=SSL
> #allow.everyone.if.no.acl.found=true
> #principal.builder.class=CustomizedPrincipalBuilderClass
> #super.users=User:"CN=writeuser,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown"
> #super.users=User:Raghu;User:Admin
> #offsets.storage=kafka
> #dual.commit.enabled=true
> listeners=SSL://10.247.195.122:9093
> #listeners=PLAINTEXT://10.247.195.122:9092
> #listeners=PLAINTEXT://10.247.195.122:9092,SSL://10.247.195.122:9093
> #advertised.listeners=PLAINTEXT://10.247.195.122:9092
>
> ssl.keystore.location=/home/raghu/kafka/security/server.keystore.jks
> ssl.keystore.password=123456
> ssl.key.password=123456
> ssl.truststore.location=/home/raghu/kafka/security/server.truststore.jks
> ssl.truststore.password=123456
>
>
>
> Set the ACL from the Authorizer CLI:
>
> > bin/kafka-acls.sh --authorizer-properties zookeeper.connect=10.247.195.122:2181 --list --topic ssltopic
>
> Current ACLs for resource `Topic:ssltopic`:
>   User:CN=writeuser, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown,
> C=Unknown has Allow permission for operations: Write from hosts: *
>
>
> XXXWMXXX-7:kafka_2.11-0.10.1.0 rbaddam$ bin/kafka-console-producer.sh
> --broker-list 10.247.195.122:9093 --topic ssltopic --producer.config client-ssl.properties
>
> [2016-12-13 14:53:45,839] WARN Error while fetching metadata with
> correlation id 0 : {ssltopic=UNKNOWN_TOPIC_OR_PARTITION}
> (org.apache.kafka.clients.NetworkClient)
> [2016-12-13 14:53:45,984] WARN Error while fetching metadata with
> correlation id 1 : {ssltopic=UNKNOWN_TOPIC_OR_PARTITION}
> (org.apache.kafka.clients.NetworkClient)
>
>
> XXXWMXXX-7:kafka_2.11-0.10.1.0 rbaddam$ cat client-ssl.properties
>
> #group.id=sslgroup
> security.protocol=SSL
> ssl.truststore.location=/Users/rbaddam/Desktop/Dev/kafka_2.11-0.10.1.0/ssl/client.truststore.jks
> ssl.truststore.password=123456
> # Configure below if you use client auth
> ssl.keystore.location=/Users/rbaddam/Desktop/Dev/kafka_2.11-0.10.1.0/ssl/client.keystore.jks
> ssl.keystore.password=123456
> ssl.key.password=123456
>
>
> XXXWMXXX-7:kafka_2.11-0.10.1.0 rbaddam$ bin/kafka-console-consumer.sh
> --bootstrap-server 10.247.195.122:9093 --new-consumer --consumer.config client-ssl.properties
> --topic ssltopic --from-beginning
>
> [2016-12-13 14:53:28,817] WARN Error while fetching metadata with
> correlation id 1 : {ssltopic=UNKNOWN_TOPIC_OR_PARTITION}
> (org.apache.kafka.clients.NetworkClient)
> [2016-12-13 14:53:28,819] ERROR Unknown error when running consumer:
> (kafka.tools.ConsoleConsumer$)
> org.apache.kafka.common.errors.GroupAuthorizationException: Not authorized
> to access group: console-consumer-52826
>
>
> Thanks in advance,
>
> Raghu - raghu98...@gmail.com
>
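
To make Rajini's suggestion concrete, the broker principal goes into
super.users and the per-user ACLs are added with kafka-acls.sh. A sketch,
with principals shortened for readability; with SSL the principal is the full
certificate DN, as in the DEBUG line earlier in this thread, and User:User_1,
User:User_2, the broker DN, and the group name are placeholders:

# server.properties
ssl.client.auth=required
super.users=User:CN=broker,OU=Unknown,O=Unknown,L=Unknown,ST=Unknown,C=Unknown

# User_1 may produce to the topic:
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=10.247.195.122:2181 \
  --add --allow-principal User:User_1 --producer --topic ssltopic

# User_2 may consume from the topic; the --group ACL is what avoids the
# GroupAuthorizationException shown above (the console consumer used the
# auto-generated group console-consumer-52826):
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=10.247.195.122:2181 \
  --add --allow-principal User:User_2 --consumer --topic ssltopic --group sslgroup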


Re: Detecting when all the retries are expired for a message

2016-12-07 Thread Rajini Sivaram
If you just want to test retries, you could restart Kafka while the
producer is running and you should see the producer retry while Kafka is
down/leader is being elected after Kafka restarts. If you specifically want
a TimeoutException to trigger all retries, I am not sure how you can. I
would suggest that you raise a JIRA since the current behaviour is not very
intuitive.


On Wed, Dec 7, 2016 at 6:51 AM, Mevada, Vatsal <mev...@sky.optymyze.com>
wrote:

> @Asaf
>
>
>
> Do I need to raise a new bug for this?
>
>
>
> @Rajini
>
>
>
> Please suggest the configuration with which retries should work,
> according to you. The code is already there in the mail chain. I am adding
> it here again:
>
>
>
> public void produce(String topicName, String filePath, String bootstrapServers, String encoding) {
>     try (BufferedReader bf = getBufferedReader(filePath, encoding);
>             KafkaProducer<Object, String> producer = initKafkaProducer(bootstrapServers)) {
>         String line;
>         while ((line = bf.readLine()) != null) {
>             producer.send(new ProducerRecord<>(topicName, line), (metadata, e) -> {
>                 if (e != null) {
>                     e.printStackTrace();
>                 }
>             });
>         }
>         producer.flush();
>     } catch (IOException e) {
>         Throwables.propagate(e);
>     }
> }
>
> private static KafkaProducer<Object, String> initKafkaProducer(String bootstrapServer) {
>     Properties properties = new Properties();
>     properties.put("bootstrap.servers", bootstrapServer);
>     properties.put("key.serializer", StringSerializer.class.getCanonicalName());
>     properties.put("value.serializer", StringSerializer.class.getCanonicalName());
>     properties.put("acks", "-1");
>     properties.put("retries", 5);
>     properties.put("request.timeout.ms", 10000);
>     return new KafkaProducer<>(properties);
> }
>
> private BufferedReader getBufferedReader(String filePath, String encoding)
>         throws UnsupportedEncodingException, FileNotFoundException {
>     return new BufferedReader(new InputStreamReader(new FileInputStream(filePath),
>             Optional.ofNullable(encoding).orElse("UTF-8")));
> }
>
>
>
> Regards,
>
> Vatsal
>
>
>
> -Original Message-
> From: Rajini Sivaram [mailto:rajinisiva...@googlemail.com]
> Sent: 06 December 2016 17:27
> To: users@kafka.apache.org
> Subject: Re: Detecting when all the retries are expired for a message
>
>
>
> I believe batches in RecordAccumulator are expired after
> request.timeout.ms, so they wouldn't get retried in this case. I think
> the config options are quite confusing, making it hard to figure out the
> behavior without looking into the code.
>
>
>
> On Tue, Dec 6, 2016 at 10:10 AM, Asaf Mesika <asaf.mes...@gmail.com
> <mailto:asaf.mes...@gmail.com>> wrote:
>
>
>
> > Vatsal:
>
> >
>
> > I don't think they merged the fix for this bug (retries doesn't work)
>
> > in 0.9.x to 0.10.0.1: https://github.com/apache/kafka/pull/1547
>
> >
>
> >
>
> > On Tue, Dec 6, 2016 at 10:19 AM Mevada, Vatsal
>
> > <mev...@sky.optymyze.com<mailto:mev...@sky.optymyze.com>>
>
> > wrote:
>
> >
>
> > > Hello,
>
> > >
>
> > > Bumping up this thread in case anyone of you have any say on this
> issue.
>
> > >
>
> > > Regards,
>
> > > Vatsal
>
> > >
>
> > > -Original Message-
>
> > > From: Mevada, Vatsal
>
> > > Sent: 02 December 2016 16:16
>
> > > To: Kafka Users <users@kafka.apache.org<mailto:users@kafka.apache.org>
> >
>
> > > Subject: RE: Detecting when all the retries are expired for a
>
> > > message
>
> > >
>
> > > I executed the same producer code for a single record file with
>
> > > following
>
> > > confi

Re: Detecting when all the retries are expired for a message

2016-12-06 Thread Rajini Sivaram
I believe batches in RecordAccumulator are expired after request.timeout.ms,
so they wouldn't get retried in this case. I think the config options are
quite confusing, making it hard to figure out the behavior without looking
into the code.

On Tue, Dec 6, 2016 at 10:10 AM, Asaf Mesika  wrote:

> Vatsal:
>
> I don't think they merged the fix for this bug (retries doesn't work) in
> 0.9.x to 0.10.0.1: https://github.com/apache/kafka/pull/1547
>
>
> On Tue, Dec 6, 2016 at 10:19 AM Mevada, Vatsal 
> wrote:
>
> > Hello,
> >
> > Bumping up this thread in case anyone of you have any say on this issue.
> >
> > Regards,
> > Vatsal
> >
> > -Original Message-
> > From: Mevada, Vatsal
> > Sent: 02 December 2016 16:16
> > To: Kafka Users 
> > Subject: RE: Detecting when all the retries are expired for a message
> >
> > I executed the same producer code for a single record file with following
> > config:
> >
> > properties.put("bootstrap.servers", bootstrapServer);
> > properties.put("key.serializer",
> > StringSerializer.class.getCanonicalName());
> > properties.put("value.serializer",
> > StringSerializer.class.getCanonicalName());
> > properties.put("acks", "-1");
> > properties.put("retries", 5);
> > properties.put("request.timeout.ms", 1);
> >
> > I have kept request.timeout.ms=1 to make sure that message delivery will
> > fail with TimeoutException. Since the retries are 5 then the program
> > should take at-least 5 ms (50 seconds) to complete for single record.
> > However the program is completing almost instantly with only one callback
> > with TimeoutException. I suspect that producer is not going for any
> > retries. Or am I missing something in my code?
> >
> > My Kafka version is 0.10.0.1.
> >
> > Regards,
> > Vatsal
> > Am I missing any configuration or
> > -Original Message-
> > From: Ismael Juma [mailto:isma...@gmail.com]
> > Sent: 02 December 2016 13:30
> > To: Kafka Users 
> > Subject: RE: Detecting when all the retries are expired for a message
> >
> > The callback is called after the retries have been exhausted.
> >
> > Ismael
> >
> > On 2 Dec 2016 3:34 am, "Mevada, Vatsal"  wrote:
> >
> > > @Ismael:
> > >
> > > I can handle TimeoutException in the callback. However as per the
> > > documentation of Callback(link: https://kafka.apache.org/0100/
> > > javadoc/org/apache/kafka/clients/producer/Callback.html),
> > > TimeoutException is a retriable exception and it says that it "may be
> > > covered by increasing #.retries". So even if I get TimeoutException in
> > > callback, wouldn't it try to send message again until all the retries
> > > are done? Would it be safe to assume that message delivery is failed
> > > permanently just by encountering TimeoutException in callback?
> > >
> > > Here is a snippet from above mentioned documentation:
> > > "exception - The exception thrown during processing of this record.
> > > Null if no error occurred. Possible thrown exceptions include:
> > > Non-Retriable exceptions (fatal, the message will never be sent):
> > > InvalidTopicException OffsetMetadataTooLargeException
> > > RecordBatchTooLargeException RecordTooLargeException
> > > UnknownServerException Retriable exceptions (transient, may be covered
> > > by increasing #.retries): CorruptRecordException
> > > InvalidMetadataException NotEnoughReplicasAfterAppendException
> > > NotEnoughReplicasException OffsetOutOfRangeException TimeoutException
> > > UnknownTopicOrPartitionException"
> > >
> > > @Asaf: My Kafka API version is 0.10.0.1. So I think I should not
> > > face the issue that you are mentioning. I mentioned the documentation link
> > > of 0.9 by mistake.
> > >
> > > Regards,
> > > Vatsal
> > > -Original Message-
> > > From: Asaf Mesika [mailto:asaf.mes...@gmail.com]
> > > Sent: 02 December 2016 00:32
> > > To: Kafka Users 
> > > Subject: Re: Detecting when all the retries are expired for a message
> > >
> > > There's a critical bug in that section that has only been fixed in
> > > 0.9.0.2, which has not been released yet. Without the fix it doesn't
> > really retry.
> > > I forked the kafka repo, applied the fix, built it and placed it in
> > > our own Nexus Maven repository until 0.9.0.2 will be released.
> > >
> > > https://github.com/logzio/apache-kafka/commits/0.9.0.1-logzio
> > >
> > > Feel free to use it.
> > >
> > > On Thu, Dec 1, 2016 at 4:52 PM Ismael Juma  wrote:
> > >
> > > > The callback should give you what you are asking for. Has it not
> > > > worked as you expect when you tried it?
> > > >
> > > > Ismael
> > > >
> > > > On Thu, Dec 1, 2016 at 1:22 PM, Mevada, Vatsal
> > > > 
> > > > wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > >
> > > > >
> > > > > I am reading a file and dumping each record on Kafka. Here is my
> > 
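
Pulling the advice in this thread together, a minimal sketch of a producer
whose callback reports only permanent failure (after retries are exhausted,
or after the batch expires in the accumulator, depending on the version
behaviour discussed above). The topic name and settings are illustrative:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class RetryAwareProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getCanonicalName());
        props.put("value.serializer", StringSerializer.class.getCanonicalName());
        props.put("acks", "all");
        props.put("retries", 5);
        props.put("retry.backoff.ms", 1000);    // pause between retry attempts
        props.put("request.timeout.ms", 10000); // per-request timeout

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", "hello"), (metadata, exception) -> {
                if (exception != null) {
                    // Invoked once the producer has given up on the record.
                    System.err.println("Delivery failed permanently: " + exception);
                } else {
                    System.out.println("Delivered to " + metadata.topic() + "-"
                            + metadata.partition() + " @ offset " + metadata.offset());
                }
            });
            producer.flush();
        }
    }
}

Restarting the broker while this runs, as Rajini suggests, is the simplest way
to watch the retries happen before the callback finally fires.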

Re: Can Kafka/SSL be terminated at a load balancer?

2016-11-21 Thread Rajini Sivaram
Ignore the comment about lookups. Your client is finding mybalancer01 since
it was working earlier, and Kafka doesn't need to look up mybalancer01. It
would be good to check the jaas config and then run with debug logging.

On Mon, Nov 21, 2016 at 5:16 PM, Rajini Sivaram <
rajinisiva...@googlemail.com> wrote:

> Not sure if this is the exact jaas.conf that you have, because this has
> mismatched passwords:
>
> KafkaServer {
>org.apache.kafka.common.security.plain.PlainLoginModule required
>username="someuser"
>user_kafka="somePassword"
>password="kafka-password";
> };
>
>
> To use username "someuser", password "somePassword", the config should be:
>
> KafkaServer {
>org.apache.kafka.common.security.plain.PlainLoginModule required
>username="someuser"
>user_*someuser*="somePassword"
>password="*somePassword*";
> };
>
>
> But I would have expected to see an error in the Kafka logs if the
> inter-broker config was incorrect.
>
> I am assuming all your hostnames can be found from the different machines
> since PLAINTEXT was working earlier. But it would be worth checking that
> mykafka01 can look up mybalancer01. It would be worth running Kafka and a
> console producer with debug logging turned on. Kafka uses
> *config/log4j.properties* and console producer uses
> *config/tools-log4j.properties.* Since you are testing with PLAINTEXT, it
> should be easy to run a console producer with just standard arguments with
> bootstrap server set to mybalancer01:9093.
>
>
>
> On Mon, Nov 21, 2016 at 4:37 PM, Zac Harvey <zac.har...@welltok.com>
> wrote:
>
>> Thanks again. So this might be very telling of the underlying problem:
>>
>>
>> I did what you suggested:
>>
>>
>> 1) I disabled (actually deleted) the first rule; then
>>
>> 2) I changed the load balancer's second (which is now its only) rule to
>> accept TCP:9093 and to translate that to TCP:9093, making the connection
>> PLAINTEXT all the way through to Kafka; then
>>
>> 3) I tried connecting a Scala consumer to the load balancer URL (
>> mybalancer01.example.com) and I'm getting that ClosedChannelException
>>
>>
>> For now there is only one Kafka broker sitting behind the load balancer.
>> Its server.properties looks like:
>>
>>
>> listeners=PLAINTEXT://:9093,SASL_PLAINTEXT://:9092
>>
>> advertised.listeners=PLAINTEXT://mybalancer01.example.com:9093,SASL_PLAINTEXT://mykafka01.example.com:9092
>>
>> advertised.host.name=mykafka01.example.com
>>
>> security.inter.broker.protocol=SASL_PLAINTEXT
>>
>> sasl.enabled.mechanisms=PLAIN
>>
>> sasl.mechanism.inter.broker.protocol=PLAIN
>>
>> broker.id=1
>>
>> num.partitions=4
>>
>> zookeeper.connect=zkA:2181,zkB:2181,zkC:2181
>>
>> num.network.threads=3
>>
>> num.io.threads=8
>>
>> socket.send.buffer.bytes=102400
>>
>> socket.receive.buffer.bytes=102400
>>
>> log.dirs=/tmp/kafka-logs
>>
>> num.recovery.threads.per.data.dir=1
>>
>> log.retention.hours=168
>>
>> log.segment.bytes=1073741824
>>
>> log.retention.check.interval.ms=300000
>>
>> zookeeper.connection.timeout.ms=6000
>>
>> offset.metadata.max.bytes=4096
>>
>>
>> Above, 'zkA', 'zkB' and 'zkC' are defined inside `/etc/hosts` and are
>> valid server names.
>>
>>
>> And then inside the kafka-run-class.sh script, instead of the default:
>>
>>
>> if [ -z "$KAFKA_OPTS" ]; then
>>
>>   KAFKA_OPTS=""
>>
>> fi
>>
>>
>> I have:
>>
>>
>> if [ -z "$KAFKA_OPTS" ]; then
>>
>>   KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/con
>> fig/jaas.conf"
>>
>> fi
>>
>>
>> I also added the /opt/kafka/config/jaas.conf file like you suggested, and
>> only changed the names of users and passwords:
>>
>>
>> KafkaServer {
>>
>>   org.apache.kafka.common.security.plain.PlainLoginModule required
>>
>>   username="someuser"
>>
>>   user_kafka="somePassword"
>>
>>   password="kafka-password";
>>
>> };
>>
>>
>> The fact that I can no longer even consume from a topic over PLAINTEXT
>> (which is a regression of where I was before we started trying to add SSL)
>> tells me there is something wrong in either server.properties or jaas.conf.
>> I've c

Re: Can Kafka/SSL be terminated at a load balancer?

2016-11-21 Thread Rajini Sivaram
Not sure if this is the exact jaas.conf that you have, because this has
mismatched passwords:

KafkaServer {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="someuser"
   user_kafka="somePassword"
   password="kafka-password";
};


To use username "someuser", password "somePassword", the config should be:

KafkaServer {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="someuser"
   user_*someuser*="somePassword"
   password="*somePassword*";
};


But I would have expected to see an error in the Kafka logs if the
inter-broker config was incorrect.

I am assuming all your hostnames can be found from the different machines
since PLAINTEXT was working earlier. But it would be worth checking that
mykafka01 can look up mybalancer01. It would be worth running Kafka and a
console producer with debug logging turned on. Kafka uses
*config/log4j.properties* and console producer uses
*config/tools-log4j.properties.* Since you are testing with PLAINTEXT, it
should be easy to run a console producer with just standard arguments with
bootstrap server set to mybalancer01:9093.



On Mon, Nov 21, 2016 at 4:37 PM, Zac Harvey <zac.har...@welltok.com> wrote:

> Thanks again. So this might be very telling of the underlying problem:
>
>
> I did what you suggested:
>
>
> 1) I disabled (actually deleted) the first rule; then
>
> 2) I changed the load balancer's second (which is now its only) rule to
> accept TCP:9093 and to translate that to TCP:9093, making the conneciton
> PLAINTEXT all the way through to Kafka; then
>
> 3) I tried connecting a Scala consumer to the load balancer URL (
> mybalancer01.example.com) and I'm getting that ClosedChannelException
>
>
> For now there is only one Kafka broker sitting behind the load balancer.
> Its server.properties looks like:
>
>
> listeners=PLAINTEXT://:9093,SASL_PLAINTEXT://:9092
>
> advertised.listeners=PLAINTEXT://mybalancer01.example.com:9093,SASL_PLAINTEXT://mykafka01.example.com:9092
>
> advertised.host.name=mykafka01.example.com
>
> security.inter.broker.protocol=SASL_PLAINTEXT
>
> sasl.enabled.mechanisms=PLAIN
>
> sasl.mechanism.inter.broker.protocol=PLAIN
>
> broker.id=1
>
> num.partitions=4
>
> zookeeper.connect=zkA:2181,zkB:2181,zkC:2181
>
> num.network.threads=3
>
> num.io.threads=8
>
> socket.send.buffer.bytes=102400
>
> socket.receive.buffer.bytes=102400
>
> log.dirs=/tmp/kafka-logs
>
> num.recovery.threads.per.data.dir=1
>
> log.retention.hours=168
>
> log.segment.bytes=1073741824
>
> log.retention.check.interval.ms=300000
>
> zookeeper.connection.timeout.ms=6000
>
> offset.metadata.max.bytes=4096
>
>
> Above, 'zkA', 'zkB' and 'zkC' are defined inside `/etc/hosts` and are
> valid server names.
>
>
> And then inside the kafka-run-class.sh script, instead of the default:
>
>
> if [ -z "$KAFKA_OPTS" ]; then
>
>   KAFKA_OPTS=""
>
> fi
>
>
> I have:
>
>
> if [ -z "$KAFKA_OPTS" ]; then
>
>   KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/
> config/jaas.conf"
>
> fi
>
>
> I also added the /opt/kafka/config/jaas.conf file like you suggested, and
> only changed the names of users and passwords:
>
>
> KafkaServer {
>
>   org.apache.kafka.common.security.plain.PlainLoginModule required
>
>   username="someuser"
>
>   user_kafka="somePassword"
>
>   password="kafka-password";
>
> };
>
>
> The fact that I can no longer even consume from a topic over PLAINTEXT
> (which is a regression of where I was before we started trying to add SSL)
> tells me there is something wrong in either server.properties or jaas.conf.
> I've checked the Kafka broker logs (server.log) each time I try connecting
> and this is the only line that gets printed:
>
>
> [2016-11-21 15:18:14,859] INFO [Group Metadata Manager on Broker 2]:
> Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.
> GroupMetadataManager)
>
>
> Not sure if that means anything. Any idea where I might be going wrong?
> Thanks again!
>
> 
> From: Rajini Sivaram <rajinisiva...@googlemail.com>
> Sent: Monday, November 21, 2016 11:03:14 AM
> To: users@kafka.apache.org
> Subject: Re: Can Kafka/SSL be terminated at a load balancer?
>
> Rule #1 and Rule #2 cannot co-exist. You are basically configuring your LB
> to point to a Kafka broker and you are pointing each Kafka broker to point
> to a LB. So you need a pair of ports with a security protocol for the
> connection to work. With two ru

Re: Can Kafka/SSL be terminated at a load balancer?

2016-11-21 Thread Rajini Sivaram
Rule #1 and Rule #2 cannot co-exist. You are basically configuring your LB
to point to a Kafka broker and you are pointing each Kafka broker to point
to a LB. So you need a pair of ports with a security protocol for the
connection to work. With two rules, Kafka picks up the wrong LB port for
one of the security protocols.

If you want to try without SSL first, the simplest way to try it out would
be to disable Rule #1 and change Rule #2 to use port 9093 instead of 9095.
Then you should be able to connect using PLAINTEXT (the test that is
currently not working).

I think you have the configuration:

advertised.listeners=PLAINTEXT://mybalancer01.example.com:9093,SASL_PLAINTEXT://mykafka01.example.com:9092

And you have a client connecting with PLAINTEXT on mybalancer01:*9095*. The
first connection would work, but subsequent connections would use the
address provided by Kafka from advertised.listeners. The client  will start
connecting with PLAINTEXT on mybalancer01:*9093*, which is expecting SSL.
If you disable Rule #1 and change Rule #2 to use port 9093, you should be
able to test PLAINTEXT without changing Kafka config.

On Mon, Nov 21, 2016 at 3:32 PM, Zac Harvey  wrote:

> In the last email I should have mentioned: don't pay too much attention to
> the code snippet; after reviewing it, I can see it is actually incomplete
> (I forgot to include the section where I configure the topics and broker
> configs to talk to Kafka!).
>
>
> What I'm really concerned about is that before we added all these SSL
> configs, I had plaintext (plaintext:9092 in/out of the load balancer
> to/from Kafka) working fine. Now my consumer code can't even connect to the
> load balancer/Kafka.
>
>
> So I guess what I was really asking was: does that exception
> (ClosedChannelException) indicate bad configs on the Kafka broker?
>
> 
> From: Zac Harvey 
> Sent: Thursday, November 17, 2016 4:44:06 PM
> To: users@kafka.apache.org
> Subject: Can Kafka/SSL be terminated at a load balancer?
>
> We have two Kafka nodes and for reasons outside of this question, would
> like to set up a load balancer to terminate SSL with producers (clients).
> The SSL cert hosted by the load balancer will be signed by trusted/root CA
> that clients should natively trust.
>
>
> Is this possible to do, or does Kafka somehow require SSL to be setup
> directly on the Kafka servers themselves?
>
>
> Thanks!
>



-- 
Regards,

Rajini


Re: Can Kafka/SSL be terminated at a load balancer?

2016-11-21 Thread Rajini Sivaram
A load balancer that balances the load across the brokers wouldn't work,
but here the LB is being used as a terminating SSL proxy. That should work
if each Kafka broker is configured with its own proxy.

On Mon, Nov 21, 2016 at 2:57 PM, tao xiao <xiaotao...@gmail.com> wrote:

> I doubt the LB solution will work for Kafka. Client needs to connect to the
> leader of a partition to produce/consume messages. If we put an LB in front
> of all brokers, which means all brokers share the same LB, how does the LB
> figure out the leader?
> On Mon, Nov 21, 2016 at 10:26 PM Martin Gainty <mgai...@hotmail.com>
> wrote:
>
> >
> >
> >
> >
> > 
> > From: Zac Harvey <zac.har...@welltok.com>
> > Sent: Monday, November 21, 2016 8:59 AM
> > To: users@kafka.apache.org
> > Subject: Re: Can Kafka/SSL be terminated at a load balancer?
> >
> > Thanks again Rajini,
> >
> >
> > Using these configs, would clients connect to the load balancer over
> > SSL/9093? And then would I configure the load balancer to forward traffic
> > from SSL/9093 to plaintext/9093?
> >
> > MG>Zach
> >
> > MG>i could be wrong but SSL port != plaintext port ..but consider:
> >
> > MG>consider recent testcase where all traffic around a certain location
> > gets bogged with DOS attacks
> >
> > MG>what are the legitimate role(s) of the LB when SSL Traffic and HTTP1.1
> > Traffic and FTP Traffic are ALL blocked?
> >
> > MG>LB should never be stripping SSL headers to redirect to PlainText
> > because you are not rerouting to a faster route
> >
> > MG>most net engineers worth their salt will configure their routers to
> > static routes to loop around bogged-down routers
> >
> > MG>WDYT?
> >
> > Thanks again, just still a little uncertain about the traffic/ports
> coming
> > into the load balancer!
> >
> >
> > Best,
> >
> > Zac
> >
> > 
> > From: Rajini Sivaram <rajinisiva...@googlemail.com>
> > Sent: Monday, November 21, 2016 8:48:41 AM
> > To: users@kafka.apache.org
> > Subject: Re: Can Kafka/SSL be terminated at a load balancer?
> >
> > Zac,
> >
> > Yes, that is correct. Ruby clients will not be authenticated by Kafka.
> They
> > talk SSL to the load balancer and the load balancer uses PLAINTEXT
> without
> > authentication to talk to Kafka.
> >
> > On Mon, Nov 21, 2016 at 1:29 PM, Zac Harvey <zac.har...@welltok.com>
> > wrote:
> >
> > > *Awesome* explanation Rajini - thank you!
> > >
> > >
> > > Just to confirm: the SASL/PLAIN configs would only be for the
> interbroker
> > > communication, correct? Meaning, beyond your recommended changes to
> > > server.properties, and the addition of the new jaas.conf file, the
> > > producers (Ruby clients) wouldn't need to authenticate, correct?
> > >
> > >
> > > Thanks again for all the great help so far, you've already helped me
> more
> > > than you know!
> > >
> > >
> > > Zac
> > >
> > > 
> > > From: Rajini Sivaram <rajinisiva...@googlemail.com>
> > > Sent: Monday, November 21, 2016 3:53:47 AM
> > > To: users@kafka.apache.org
> > > Subject: Re: Can Kafka/SSL be terminated at a load balancer?
> > >
> > > Zac,
> > >
> > > *advertised.listeners* is used to make client connections from
> > > producers/consumers as well as for client-side connections for
> > inter-broker
> > > communication. In your scenario, setting it to *PLAINTEXT://mykafka01*
> > > would work for inter-broker, bypassing the load balancer, but clients
> > would
> > > also then attempt to connect directly to *mykafka01*.  Setting it to
> > > *SSL://mybalancer01* would work for producers/consumers, but brokers
> > would
> > > try to connect to *mybalancer01* using PLAINTEXT. Unfortunately neither
> > > works for both. You need two endpoints, one for inter-broker that
> > bypasses
> > > *mybalancer01* and another for clients that uses *mybalancer01*. With
> the
> > > current Kafka configuration, you would require two security protocols
> to
> > > enable two endpoints.
> > >
> > > You could enable SSL in Kafka (using self-signed certificates if you
> > need)
> > > for one of the two endpoints to overcome this limitation. But
> presumably
> > > you have a secure inte

Re: Can Kafka/SSL be terminated at a load balancer?

2016-11-21 Thread Rajini Sivaram
Zac,

Yes, that is correct.

With the configuration:

listeners=PLAINTEXT://:9093,SASL_PLAINTEXT://:9092

advertised.listeners=PLAINTEXT://mybalancer01.example.com:9093,SASL_PLAINTEXT://mykafka01.example.com:9092



   - Clients talk to port 9093 on load balancer using SSL.
   - Load balancer talks to port 9093 on Kafka brokers using PLAINTEXT
   (that is the config you need to add on the load balancer)
   - Brokers talk to each other for inter-broker comms on port 9092 using
   SASL_PLAINTEXT

The connections for the two cases are:
RubyClient <=== SSL ===> Load balancer (mybalancer01:9093) <=== PLAINTEXT ===> KafkaBroker (mykafka01:9093)
KafkaBroker (mykafka02:9092) <=== SASL_PLAINTEXT ===> KafkaBroker (mykafka01:9092)

You can use different ports on Kafka if you find that using 9093 for SSL on
one side and PLAINTEXT on the other is confusing.


On Mon, Nov 21, 2016 at 1:59 PM, Zac Harvey <zac.har...@welltok.com> wrote:

> Thanks again Rajini,
>
>
> Using these configs, would clients connect to the load balancer over
> SSL/9093? And then would I configure the load balancer to forward traffic
> from SSL/9093 to plaintext/9093?
>
>
> Thanks again, just still a little uncertain about the traffic/ports coming
> into the load balancer!
>
>
> Best,
>
> Zac
>
> 
> From: Rajini Sivaram <rajinisiva...@googlemail.com>
> Sent: Monday, November 21, 2016 8:48:41 AM
> To: users@kafka.apache.org
> Subject: Re: Can Kafka/SSL be terminated at a load balancer?
>
> Zac,
>
> Yes, that is correct. Ruby clients will not be authenticated by Kafka. They
> talk SSL to the load balancer and the load balancer uses PLAINTEXT without
> authentication to talk to Kafka.
>
> On Mon, Nov 21, 2016 at 1:29 PM, Zac Harvey <zac.har...@welltok.com>
> wrote:
>
> > *Awesome* explanation Rajini - thank you!
> >
> >
> > Just to confirm: the SASL/PLAIN configs would only be for the interbroker
> > communication, correct? Meaning, beyond your recommended changes to
> > server.properties, and the addition of the new jaas.conf file, the
> > producers (Ruby clients) wouldn't need to authenticate, correct?
> >
> >
> > Thanks again for all the great help so far, you've already helped me more
> > than you know!
> >
> >
> > Zac
> >
> > 
> > From: Rajini Sivaram <rajinisiva...@googlemail.com>
> > Sent: Monday, November 21, 2016 3:53:47 AM
> > To: users@kafka.apache.org
> > Subject: Re: Can Kafka/SSL be terminated at a load balancer?
> >
> > Zac,
> >
> > *advertised.listeners* is used to make client connections from
> > producers/consumers as well as for client-side connections for
> inter-broker
> > communication. In your scenario, setting it to *PLAINTEXT://mykafka01*
> > would work for inter-broker, bypassing the load balancer, but clients
> would
> > also then attempt to connect directly to *mykafka01*.  Setting it to
> > *SSL://mybalancer01* would work for producers/consumers, but brokers
> would
> > try to connect to *mybalancer01* using PLAINTEXT. Unfortunately neither
> > works for both. You need two endpoints, one for inter-broker that
> bypasses
> > *mybalancer01* and another for clients that uses *mybalancer01*. With the
> > current Kafka configuration, you would require two security protocols to
> > enable two endpoints.
> >
> > You could enable SSL in Kafka (using self-signed certificates if you
> need)
> > for one of the two endpoints to overcome this limitation. But presumably
> > you have a secure internal network running Kafka and want to avoid the
> cost
> > of encryption in Kafka. The simplest solution I can think of is to use
> > SASL_PLAINTEXT using SASL/PLAIN for inter-broker as a workaround. The
> > configuration options in server.properties would look like:
> >
> > listeners=PLAINTEXT://:9093,SASL_PLAINTEXT://:9092
> >
> > advertised.listeners=PLAINTEXT://mybalancer01.example.com:9093,SASL_PLAINTEXT://mykafka01.example.com:9092
> >
> > security.inter.broker.protocol=SASL_PLAINTEXT
> >
> > sasl.enabled.mechanisms=PLAIN
> >
> > sasl.mechanism.inter.broker.protocol=PLAIN
> >
> >
> > You also need a JAAS configuration file configured for the broker JVM (
> > *KAFKA_OPTS="-Djava.security.auth.login.config=/kafka/jaas.conf"*) . See
> > https://kafka.apache.org/documentation#security_sasl for configuring
> > SASL.*
> > jaas.conf* would look something like:
> >
> > KafkaServer {
> >
> > org.apache.kafka.comm
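
For completeness, the client side of the terminated-SSL setup above would be
configured roughly as follows (a sketch: the client speaks SSL to the
balancer, which forwards plaintext to the broker):

# client.properties (illustrative)
bootstrap.servers=mybalancer01.example.com:9093
security.protocol=SSL
# No truststore settings are needed if the balancer's certificate is signed
# by a root CA the client JVM already trusts, as discussed earlier.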

Re: Can Kafka/SSL be terminated at a load balancer?

2016-11-21 Thread Rajini Sivaram
Zac,

Yes, that is correct. Ruby clients will not be authenticated by Kafka. They
talk SSL to the load balancer and the load balancer uses PLAINTEXT without
authentication to talk to Kafka.

On Mon, Nov 21, 2016 at 1:29 PM, Zac Harvey <zac.har...@welltok.com> wrote:

> *Awesome* explanation Rajini - thank you!
>
>
> Just to confirm: the SASL/PLAIN configs would only be for the interbroker
> communication, correct? Meaning, beyond your recommended changes to
> server.properties, and the addition of the new jaas.conf file, the
> producers (Ruby clients) wouldn't need to authenticate, correct?
>
>
> Thanks again for all the great help so far, you've already helped me more
> than you know!
>
>
> Zac
>
> ____
> From: Rajini Sivaram <rajinisiva...@googlemail.com>
> Sent: Monday, November 21, 2016 3:53:47 AM
> To: users@kafka.apache.org
> Subject: Re: Can Kafka/SSL be terminated at a load balancer?
>
> Zac,
>
> *advertised.listeners* is used to make client connections from
> producers/consumers as well as for client-side connections for inter-broker
> communication. In your scenario, setting it to *PLAINTEXT://mykafka01*
> would work for inter-broker, bypassing the load balancer, but clients would
> also then attempt to connect directly to *mykafka01*.  Setting it to
> *SSL://mybalancer01* would work for producers/consumers, but brokers would
> try to connect to *mybalancer01* using PLAINTEXT. Unfortunately neither
> works for both. You need two endpoints, one for inter-broker that bypasses
> *mybalancer01* and another for clients that uses *mybalancer01*. With the
> current Kafka configuration, you would require two security protocols to
> enable two endpoints.
>
> You could enable SSL in Kafka (using self-signed certificates if you need)
> for one of the two endpoints to overcome this limitation. But presumably
> you have a secure internal network running Kafka and want to avoid the cost
> of encryption in Kafka. The simplest solution I can think of is to use
> SASL_PLAINTEXT using SASL/PLAIN for inter-broker as a workaround. The
> configuration options in server.properties would look like:
>
> listeners=PLAINTEXT://:9093,SASL_PLAINTEXT://:9092
>
> advertised.listeners=PLAINTEXT://mybalancer01.example.com:9093,SASL_PLAINTEXT://mykafka01.example.com:9092
>
> security.inter.broker.protocol=SASL_PLAINTEXT
>
> sasl.enabled.mechanisms=PLAIN
>
> sasl.mechanism.inter.broker.protocol=PLAIN
>
>
> You also need a JAAS configuration file configured for the broker JVM (
> *KAFKA_OPTS="-Djava.security.auth.login.config=/kafka/jaas.conf"*) . See
> https://kafka.apache.org/documentation#security_sasl for configuring
> SASL. *jaas.conf* would look something like:
>
> KafkaServer {
>
> org.apache.kafka.common.security.plain.PlainLoginModule required
>
> username="kafka"
>
> user_kafka="kafka-password"
>
> password="kafka-password";
>
> };
>
>
> Hope that helps.
>
>
> On Fri, Nov 18, 2016 at 6:39 PM, Zac Harvey <zac.har...@welltok.com>
> wrote:
>
> > Thanks again Rajini!
> >
> >
> > One last followup question, if you don't mind. You said that my
> > server.properties file should look something like this:
> >
> >
> > listeners=SSL://:9093
> > advertised.listeners=SSL://mybalancer01.example.com:9093
> > security.inter.broker.protocol=SSL
> >
> > However, please remember that I'm looking for the load balancer to
> > terminate SSL, meaning that (my desired) communication between the load
> > balancer and Kafka would be over plaintext (not SSL).  In other words:
> >
> > Ruby Producers/Clients <-- SSL:9093 --> Load Balancer <-- Plaintext:9092 --> Kafka
> >
> > So producers/client connect to the load balancer over SSL and port 9093,
> > but then the load balancer communicates with Kafka over plaintext and
> port
> > 9092.
> >
> > I also don't need inter broker communication to be SSL; it can be
> > plaintext.
> >
> > If this is the case, do I still need to change server.properties, or can
> I
> > leave it like so:
> >
> > listeners=plaintext://:9092
> > advertised.listeners=plaintext://mybalancer01.example.com:9092
> >
> > Or could it just be:
> >
> > listeners=plaintext://:9092
> > advertised.listeners=plaintext://mykafka01.example.com:9092
> >
> > Thanks again!
> > Zac
> >
> >
> >
> >
> >
> > 
> > From: Rajini Sivaram <rajinisiva...@googlemail.co

Re: Can Kafka/SSL be terminated at a load balancer?

2016-11-21 Thread Rajini Sivaram
Zac,

*advertised.listeners* is used to make client connections from
producers/consumers as well as for client-side connections for inter-broker
communication. In your scenario, setting it to *PLAINTEXT://mykafka01*
would work for inter-broker, bypassing the load balancer, but clients would
also then attempt to connect directly to *mykafka01*.  Setting it to
*SSL://mybalancer01* would work for producers/consumers, but brokers would
try to connect to *mybalancer01* using PLAINTEXT. Unfortunately neither
works for both. You need two endpoints, one for inter-broker that bypasses
*mybalancer01* and another for clients that uses *mybalancer01*. With the
current Kafka configuration, you would require two security protocols to
enable two endpoints.

You could enable SSL in Kafka (using self-signed certificates if you need)
for one of the two endpoints to overcome this limitation. But presumably
you have a secure internal network running Kafka and want to avoid the cost
of encryption in Kafka. The simplest solution I can think of is to use
SASL_PLAINTEXT using SASL/PLAIN for inter-broker as a workaround. The
configuration options in server.properties would look like:

listeners=PLAINTEXT://:9093,SASL_PLAINTEXT://:9092

advertised.listeners=PLAINTEXT://mybalancer01.example.com:9093
,SASL_PLAINTEXT://mykafka01.example.com:9092

security.inter.broker.protocol=SASL_PLAINTEXT

sasl.enabled.mechanisms=PLAIN

sasl.mechanism.inter.broker.protocol=PLAIN


You also need a JAAS configuration file configured for the broker JVM (
*KAFKA_OPTS="-Djava.security.auth.login.config=/kafka/jaas.conf"*) . See
https://kafka.apache.org/documentation#security_sasl for configuring SASL.
*jaas.conf* would look something like:

KafkaServer {

org.apache.kafka.common.security.plain.PlainLoginModule required

username="kafka"

user_kafka="kafka-password"

password="kafka-password";

};


Hope that helps.
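
If you want to sanity-check the SASL_PLAINTEXT listener on its own, a Java
client along the lines of the sketch below could be pointed at it. This is
only a sketch: it assumes a client-side jaas.conf with a KafkaClient section
for one of the users above, supplied via -Djava.security.auth.login.config
(newer clients can use the sasl.jaas.config property instead).

import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class SaslCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "mykafka01.example.com:9092");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // Listing topics is enough to prove that SASL/PLAIN
            // authentication succeeded on this listener.
            System.out.println(consumer.listTopics().keySet());
        }
    }
}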


On Fri, Nov 18, 2016 at 6:39 PM, Zac Harvey <zac.har...@welltok.com> wrote:

> Thanks again Rajini!
>
>
> One last followup question, if you don't mind. You said that my
> server.properties file should look something like this:
>
>
> listeners=SSL://:9093
> advertised.listeners=SSL://mybalancer01.example.com:9093
> security.inter.broker.protocol=SSL
>
> However, please remember that I'm looking for the load balancer to
> terminate SSL, meaning that (my desired) communication between the load
> balancer and Kafka would be over plaintext (not SSL).  In other words:
>
> Ruby Producers/Clients <-- SSL:9093 --> Load Balancer <-- Plaintext:9092 --> Kafka
>
> So producers/client connect to the load balancer over SSL and port 9093,
> but then the load balancer communicates with Kafka over plaintext and port
> 9092.
>
> I also don't need inter broker communication to be SSL; it can be
> plaintext.
>
> If this is the case, do I still need to change server.properties, or can I
> leave it like so:
>
> listeners=plaintext://:9092
> advertised.listeners=plaintext://mybalancer01.example.com:9092
>
> Or could it just be:
>
> listeners=plaintext://:9092
> advertised.listeners=plaintext://mykafka01.example.com:9092
>
> Thanks again!
> Zac
>
>
>
>
>
> 
> From: Rajini Sivaram <rajinisiva...@googlemail.com>
> Sent: Friday, November 18, 2016 9:57:22 AM
> To: users@kafka.apache.org
> Subject: Re: Can Kafka/SSL be terminated at a load balancer?
>
> You should set advertised.listeners rather than the older
> advertised.host.name property in server.properties:
>
>
>- listeners=SSL://:9093
>- advertised.listeners=SSL://mybalancer01.example.com:9093
>- security.inter.broker.protocol=SSL
>
>
> If your listeners are on particular interfaces, you can set address in the
> 'listeners' property too.
>
>
> If you want inter-broker communication to bypass the SSL proxy, you would
> need another security protocol that can be used for inter-broker
> communication (PLAINTEXT in the example below).
>
>
>
>- listeners=SSL://:9093,PLAINTEXT://:9092
>- advertised.listeners=SSL://mybalancer01.example.com:9093,PLAINTEXT://
>mykafka01.example.com:9092
>- security.inter.broker.protocol=PLAINTEXT
>
>  I haven't used the Ruby clients, so I am not sure about client
> configuration. With Java clients, if you don't specify truststore, the
> default trust stores are used, so with trusted CA-signed certificates, no
> additional client configuration is required. You can test your installation
> using the console producer and consumer that are shipped with Kafka to make
> sure it is working before you run with Ruby clients.
>
>
>
> On Fri, Nov 18, 2016 at 1:23 PM, Zac Harvey <zac.ha

Re: Can Kafka/SSL be terminated at a load balancer?

2016-11-18 Thread Rajini Sivaram
You should set advertised.listeners rather than the older
advertised.host.name property in server.properties:


   - listeners=SSL://:9093
   - advertised.listeners=SSL://mybalancer01.example.com:9093
   - security.inter.broker.protocol=SSL


If your listeners are on particular interfaces, you can set address in the
'listeners' property too.


If you want inter-broker communication to bypass the SSL proxy, you would
need another security protocol that can be used for inter-broker
communication (PLAINTEXT in the example below).



   - listeners=SSL://:9093,PLAINTEXT://:9092
   - advertised.listeners=SSL://mybalancer01.example.com:9093,PLAINTEXT://
   mykafka01.example.com:9092
   - security.inter.broker.protocol=PLAINTEXT

 I haven't used the Ruby clients, so I am not sure about client
configuration. With Java clients, if you don't specify truststore, the
default trust stores are used, so with trusted CA-signed certificates, no
additional client configuration is required. You can test your installation
using the console producer and consumer that are shipped with Kafka to make
sure it is working before you run with Ruby clients.
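
For example, with a client properties file that contains nothing but the
protocol setting, the bundled console producer can exercise the path through
the balancer. The file name and topic are placeholders, and this assumes the
console producer's --producer.config option in your Kafka version:

client-ssl.properties:

security.protocol=SSL

bin/kafka-console-producer.sh --broker-list mybalancer01.example.com:9093 \
  --producer.config client-ssl.properties --topic test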



On Fri, Nov 18, 2016 at 1:23 PM, Zac Harvey <zac.har...@welltok.com> wrote:

>
> Thanks Rajini,
>
>
> So currently one of our Kafka nodes is 'mykafka01.example.com', and in
> its server.properties file, I have advertised.host.name=mykafka01
> .example.com. Our load balancer lives at mybalancer01.example.com, and
> this what producers will connect to (over SSL) to send messages to Kafka.
>
>
> It sounds like you're saying I need to change my Kafka node's
> server.properties to have advertised.host.name=mybalancer01.example.com,
> yes? If not, can you perhaps provide a quick snippet of the changes I would
> need to make to server.properties?
>
>
> Again, the cert served by the balancer will be a highly-trusted (root
> CA-signed) certificate that all clients will natively trust. Interestingly
> enough, most (if not all) the Kafka producers/clients will be written in
> Ruby (using the zendesk Kafka-Ruby gem<https://github.com/
> zendesk/ruby-kafka>), so there wont be any JKS configuration options
> available for those Ruby clients.
>
>
> Besides making the change to server.properties that I mentioned above, are
> there any other client-side configs that will need to be made for the Ruby
> clients to connect over SSL?
>
>
> Thank you enormously here!
>
>
> Best,
>
> Zac
>
>
> 
> From: Rajini Sivaram <rajinisiva...@googlemail.com>
> Sent: Friday, November 18, 2016 5:15:13 AM
> To: users@kafka.apache.org
> Subject: Re: Can Kafka/SSL be terminated at a load balancer?
>
> Zac,
>
> Kafka has its own built-in load-balancing mechanism based on partition
> assignment. Requests are processed by partition leaders, distributing load
> across the brokers in the cluster. If you want to put a proxy like HAProxy
> with SSL termination in front of your brokers for added security, you can
> do that. You can have completely independent trust chain between
> clients->proxy and proxy->broker. You need to configure Kafka brokers with
> the proxy host as the host in the advertised listeners for the security
> protocol used by clients.
>
> On Thu, Nov 17, 2016 at 9:44 PM, Zac Harvey <zac.har...@welltok.com>
> wrote:
>
> > We have two Kafka nodes and for reasons outside of this question, would
> > like to set up a load balancer to terminate SSL with producers (clients).
> > The SSL cert hosted by the load balancer will be signed by trusted/root
> CA
> > that clients should natively trust.
> >
> >
> > Is this possible to do, or does Kafka somehow require SSL to be setup
> > directly on the Kafka servers themselves?
> >
> >
> > Thanks!
> >
>
>
>
> --
> Regards,
>
> Rajini
>



-- 
Regards,

Rajini


Re: Massive SSL performance degradation

2016-11-18 Thread Rajini Sivaram
You can use the tools shipped with Kafka to measure latency.

For latency at low load, run:


   - bin/kafka-run-class.sh kafka.tools.EndToEndLatency


You may also find it useful to run producer performance test at different
throughputs. The tool prints out latency as well:


   - bin/kafka-producer-perf-test.sh
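
For a crude single-message round-trip check, not a substitute for the tools
above, a hypothetical snippet like this can time acknowledged sends after a
warm-up message (so connection setup and the metadata fetch are excluded):

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SendLatency {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("acks", "1");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Warm up so the (possibly SSL) handshake is not part of the timing.
            producer.send(new ProducerRecord<>("test", "warmup")).get();
            long start = System.nanoTime();
            int n = 1000;
            for (int i = 0; i < n; i++) {
                // Waiting on each future measures per-message ack latency.
                producer.send(new ProducerRecord<>("test", "msg-" + i)).get();
            }
            System.out.printf("avg ack latency: %.3f ms%n",
                    (System.nanoTime() - start) / (double) n / 1_000_000.0);
        }
    }
}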


On Fri, Nov 18, 2016 at 1:25 AM, Hans Jespersen  wrote:

> Publish lots of messages and measure in seconds or minutes. Otherwise you
> are just benchmarking the initial SSL handshake setup time which should
> normally be a one-time overhead, not a per-message overhead. If you just
> send one message then of course SSL is much slower.
>
> -hans
>
> > On Nov 18, 2016, at 1:07 AM, Aaron Wilkinson 
> wrote:
> >
> > Hi, Hans.  I was able to get the command line producer / consumer working
> > with SSL but I'm not sure how to measure millisecond resolution latency
> > with them.  I thought maybe the '--property print.timestamp=true'
> argument
> > would help, but it only has second resolution.  Do you know of any way to
> get
> > the consumer to print out a receipt time-stamp with millisecond
> > resolution?  Or of any extended documentation on the command line tools
> in
> > general?
> >
> > Oh also, a couple other tidbits that may help:
> > Ubuntu 16.04
> > Kafka 0.10.1.0
> > openjdk version "1.8.0_111"
> > TLS 1.2
> >
> > I was wondering if maybe this could be my problem:
> > http://stackoverflow.com/questions/25992131/slow-aes-
> gcm-encryption-and-decryption-with-java-8u20
> >
> > I didn't specify any cipher suites in either the broker or the client
> > config which I gather leaves it up to the broker/client to decide during
> > TLS handshaking.  I'm not sure if there is an easy way to figure out
> which
> > one they ended up with...  I'll work on specifying which cipher suite I
> > want and try to pick something with which java is simpatico.
> >
> >
> >> On Thu, Nov 17, 2016 at 4:04 PM, Hans Jespersen 
> wrote:
> >>
> >> What is the difference using the bin/kafka-console-producer and
> >> kafka-console-consumer as pub/sub clients?
> >>
> >> see http://docs.confluent.io/3.1.0/kafka/ssl.html
> >>
> >> -hans
> >>
> >> /**
> >> * Hans Jespersen, Principal Systems Engineer, Confluent Inc.
> >> * h...@confluent.io (650)924-2670
> >> */
> >>
> >> On Thu, Nov 17, 2016 at 11:56 PM, Aaron Wilkinson <
> aa...@modopayments.com>
> >> wrote:
> >>
> >>> Pardon if this is a oft repeated issue, but all the information I could
> >>> find said I should expect a 20-50% performance hit when using SSL with
> >>> kafka, and I am seeing closer to 2000-3000%
> >>>
> >>> I'm trying to get kafka to behave like a fast, secured message bus.
> So I
> >>> am sending small messages, one at a time.  I have set up a simple, 2
> >>> machine experiment in AWS with 1 client machine and 1 zookeeper/broker
> >>> machine and I'm running a very linear test.
> >>>
> >>> There are 2 topics: "request" and "response" and 2 threads on the
> client
> >>> machine each of which connects to those 2 topics.  Thread 1 produces a
> >>> "request", thread 2 consumes it and then produces a "response" which
> >> thread
> >>> 1 then consumes.  At that point thread 1 proceeds to send the next
> >>> "request" and the process repeats.
> >>>
> >>> So there are a total of 4 connections to the broker.
> >>>
> >>> I can run a sustained test without SSL and see 1 to 1.5 ms per message
> >> hop
> >>> (where a "hop" means the message has traveled across 1 of the 4
> >>> connections- either a production or a consumption of either the request
> >> or
> >>> the response).
> >>>
> >>> Each connection for which I turn on SSL increases the hop time by 35 to 45
> >> ms.
> >>>
> >>> Now, the problem could be with the stack I'm using (PHP 7 talking to
> the
> >>> broker via the librdkafka C library).  But before I go about trying to
> >>> reproduce this with a java client (which is not my forte) I was
> wondering
> >>> if anyone else has run into a similar issue either with PHP or any
> other
> >>> language / library.  Or does anyone know a direct way to figure out
> >> whether
> >>> this slow down is at the broker or at the client?
> >>>
> >>> Thanks in advance for your help!
> >>> Aaron
> >>>
> >>
>



-- 
Regards,

Rajini


Re: Can Kafka/SSL be terminated at a load balancer?

2016-11-18 Thread Rajini Sivaram
Zac,

Kafka has its own built-in load-balancing mechanism based on partition
assignment. Requests are processed by partition leaders, distributing load
across the brokers in the cluster. If you want to put a proxy like HAProxy
with SSL termination in front of your brokers for added security, you can
do that. You can have completely independent trust chain between
clients->proxy and proxy->broker. You need to configure Kafka brokers with
the proxy host as the host in the advertised listeners for the security
protocol used by clients.

On Thu, Nov 17, 2016 at 9:44 PM, Zac Harvey  wrote:

> We have two Kafka nodes and for reasons outside of this question, would
> like to set up a load balancer to terminate SSL with producers (clients).
> The SSL cert hosted by the load balancer will be signed by trusted/root CA
> that clients should natively trust.
>
>
> Is this possible to do, or does Kafka somehow require SSL to be setup
> directly on the Kafka servers themselves?
>
>
> Thanks!
>



-- 
Regards,

Rajini


Re: connection closed by kafka

2016-11-02 Thread Rajini Sivaram
Broker closes client connections that are idle for a configurable period of
time (broker property connections.max.idle.ms). The default idle time is 10
minutes, which matches the close time in the logs.
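
For example, keeping idle connections open for 30 minutes instead would be
(the value is only an illustration):

connections.max.idle.ms=1800000

The Java clients also have a client-side connections.max.idle.ms, 9 minutes
by default, deliberately shorter than the broker's, so they normally close
idle connections before the broker does; other clients, such as the
kafka-python client in this trace, need to handle the broker-initiated close
themselves.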

On Wed, Nov 2, 2016 at 2:43 PM, Jaikiran Pai 
wrote:

> Which exact version of Kafka installation and Kafka client is this? And
> which language/library of Kafka client? Also, are you describing this
> situation in the context of producing messages? Can you post your relevant
> code from the application where you deal with this?
>
> Connection management is an internal detail of Kafka client libraries and
> usually won't end up outside of it, so you shouldn't really notice any of
> these issues.
>
>
> -Jaikiran
>
>
> On Friday 28 October 2016 05:50 AM, Jianbin Wei wrote:
>
>> In our environment we notice that sometimes Kafka would close the
>> connection after one message is sent over.  The client does not detect that
>> and tries to send another message again.  That triggers a RST packet.
>>
>> Any idea why the Kafka broker would close the connection?
>>
>> Attached you can find the packets between our client and kafka broker.
>>
>>
>> 20:55:40.834543 IP 172.18.69.194.34445 > 172.18.69.180.9092: Flags [S],
>> seq 31787730, win 14600, options [mss 1460,nop,nop,sackOK,nop,wscale 9],
>> length 0
>> 0x:  4500 0034 8cc1 4000 4006 ca67 ac12 45c2  E..4..@
>> .@..g..E.
>> 0x0010:  ac12 45b4 868d 2384 01e5 0ad2    ..E...#.
>> 0x0020:  8002 3908 e3c1  0204 05b4 0101 0402  ..9.
>> 0x0030:  0103 0309
>> 20:55:40.834744 IP 172.18.69.180.9092 > 172.18.69.194.34445: Flags [S.],
>> seq 1238329644, ack 31787731, win 14600, options [mss
>> 1460,nop,nop,sackOK,nop,wscale 1], length 0
>> 0x:  4500 0034  4000 4006 5729 ac12 45b4  E..4..@
>> .@.W)..E.
>> 0x0010:  ac12 45c2 2384 868d 49cf 692c 01e5 0ad3  ..E.#...I.i,
>> 0x0020:  8012 3908 e89e  0204 05b4 0101 0402  ..9.
>> 0x0030:  0103 0301
>> 20:55:40.834787 IP 172.18.69.194.34445 > 172.18.69.180.9092: Flags [.],
>> ack 1, win 29, length 0
>> 0x:  4500 0028 8cc2 4000 4006 ca72 ac12 45c2  E..(..@.@..r..E.
>> 0x0010:  ac12 45b4 868d 2384 01e5 0ad3 49cf 692d  ..E...#.I.i-
>> 0x0020:  5010 001d e3b5   P...
>> 20:55:40.834921 IP 172.18.69.194.34445 > 172.18.69.180.9092: Flags [P.],
>> seq 1:691, ack 1, win 29, length 690
>> 0x:  4500 02da 8cc3 4000 4006 c7bf ac12 45c2  E.@
>> .@.E.
>> 0x0010:  ac12 45b4 868d 2384 01e5 0ad3 49cf 692d  ..E...#.I.i-
>> 0x0020:  5018 001d e667   02ae    Pg..
>> 0x0030:   0003 000c 6b61 666b 612d 7079 7468  ..kafka-pyth
>> 0x0040:  6f6e 0001  03e8  0001 000e 6576  onev
>> 0x0050:  656e 745f 6e73 706f 6c69 6379  0001  ent_nspolicy
>> 0x0060:     0272      ...r
>> 0x0070:   0266 4ff3 bd11   0004 3131  ...fO.11
>> 0x0080:  3238  0254 5b30 2c7b 2261 7022 3a22  28...T[0,{"ap":"
>>
>> 20:55:40.835297 IP 172.18.69.180.9092 > 172.18.69.194.34445: Flags [.],
>> ack 691, win 7990, length 0
>> 0x:  4500 0028 e872 4000 4006 6ec2 ac12 45b4  E..(.r@
>> .@.n...E.
>> 0x0010:  ac12 45c2 2384 868d 49cf 692d 01e5 0d85  ..E.#...I.i-
>> 0x0020:  5010 1f36 408b       P..6@.
>> 20:55:40.837837 IP 172.18.69.180.9092 > 172.18.69.194.34445: Flags [P.],
>> seq 1:47, ack 691, win 7990, length 46
>> 0x:  4500 0056 e873 4000 4006 6e93 ac12 45b4  E..V.s@
>> .@.n...E.
>> 0x0010:  ac12 45c2 2384 868d 49cf 692d 01e5 0d85  ..E.#...I.i-
>> 0x0020:  5018 1f36 ece3   002a  0003  P..6...*
>> 0x0030:   0001 000e 6576 656e 745f 6e73 706f  ..event_nspo
>> 0x0040:  6c69 6379  0001      licy
>> 0x0050:   0003 6527   e'
>> 20:55:40.837853 IP 172.18.69.194.34445 > 172.18.69.180.9092: Flags [.],
>> ack 47, win 29, length 0
>> 0x:  4500 0028 8cc4 4000 4006 ca70 ac12 45c2  E..(..@.@..p..E.
>> 0x0010:  ac12 45b4 868d 2384 01e5 0d85 49cf 695b  ..E...#.I.i[
>> 0x0020:  5010 001d e3b5   P...
>>
>> Closed here
>
>> 21:05:40.839440 IP 172.18.69.180.9092 > 172.18.69.194.34445: Flags
>> [F.], seq 47, ack 691, win 7990, length 0
>> 0x:  4500 0028 e874 4000 4006 6ec0 ac12 45b4  E..(.t@
>> .@.n...E.
>> 0x0010:  ac12 45c2 2384 868d 49cf 695b 01e5 0d85  ..E.#...I.i[
>> 0x0020:  5011 1f36 405c       P..6@\
>> 21:05:40.876047 IP 172.18.69.194.34445 > 172.18.69.180.9092: Flags [.],
>> ack 48, win 

Re: [ANNOUNCE] New committer: Jiangjie (Becket) Qin

2016-10-31 Thread Rajini Sivaram
Congratulations, Becket!

On Mon, Oct 31, 2016 at 8:38 PM, Matthias J. Sax 
wrote:

>
> Congrats!
>
> On 10/31/16 11:01 AM, Renu Tewari wrote:
> > Congratulations Becket!! Absolutely thrilled to hear this. Well
> > deserved!
> >
> > regards renu
> >
> >
> > On Mon, Oct 31, 2016 at 10:35 AM, Joel Koshy 
> > wrote:
> >
> >> The PMC for Apache Kafka has invited Jiangjie (Becket) Qin to
> >> join as a committer and we are pleased to announce that he has
> >> accepted!
> >>
> >> Becket has made significant contributions to Kafka over the last
> >> two years. He has been deeply involved in a broad range of KIP
> >> discussions and has contributed several major features to the
> >> project. He recently completed the implementation of a series of
> >> improvements (KIP-31, KIP-32, KIP-33) to Kafka’s message format
> >> that address a number of long-standing issues such as avoiding
> >> server-side re-compression, better accuracy for time-based log
> >> retention, log roll and time-based indexing of messages.
> >>
> >> Congratulations Becket! Thank you for your many contributions. We
> >> are excited to have you on board as a committer and look forward
> >> to your continued participation!
> >>
> >> Joel
> >>
> >
>



-- 
Regards,

Rajini


Re: difficulty to delete a topic because of its syntax

2016-10-06 Thread Rajini Sivaram
Hamza,

Can you raise a JIRA with details on how the topic was created by Kafka
with an invalid name? Sounds like there might be a missing validation
somewhere.
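
In the meantime a client-side guard is straightforward, since legal topic
names are limited to ASCII alphanumerics, '.', '_' and '-'. A hypothetical
pre-flight check in Java (249 is Kafka's maximum topic name length):

// Hypothetical check approximating Kafka's topic name rules.
public class TopicNames {
    public static boolean isLegal(String name) {
        return name != null && name.matches("[a-zA-Z0-9._-]{1,249}");
    }
}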

Regards,

Rajini

On Thu, Oct 6, 2016 at 10:12 AM, Hamza HACHANI 
wrote:

> Thanks Todd,
>
>
> I've resolved it by using what you told me.
>
> Thanks very much. But I think there is a problem with Kafka in that it
> lets topic and log names be saved with a space in them, as I showed in
> the images.
>
> Have a good day to you all.
>
>
> Hamza
>
> 
> From: Hamza HACHANI 
> Sent: Wednesday, October 5, 2016 19:23:00
> To: users@kafka.apache.org
> Subject: RE: difficulty to delete a topic because of its syntax
>
>
> Hi,
>
> Attached are the files showing what I'm talking about.
>
>
> Hamza
>
> 
> From: Todd S 
> Sent: Wednesday, October 5, 2016 07:25:48
> To: users@kafka.apache.org
> Subject: Re: difficulty to delete a topic because of its syntax
>
> You *could* go in to zookeeper and nuke the topic, then delete the files on
> disk
>
> Slightly more risky but it should work
>
> On Wednesday, 5 October 2016, Manikumar  wrote:
>
> > Kafka doesn't support whitespace in topic names.  Only '.', '_'
> > and '-' are allowed.
> > Not sure how you got white space in topic name.
> >
> > On Wed, Oct 5, 2016 at 8:19 PM, Hamza HACHANI  > >
> > wrote:
> >
> > > Well, awkwardly, when I list the topics I find it, but when I do delete
> > > it, it says that this topic does not exist.
> > >
> > > 
> > > From: Ben Davison >
> > > Sent: Wednesday, October 5, 2016 02:37:14
> > > To: users@kafka.apache.org 
> > > Subject: Re: difficulty to delete a topic because of its syntax
> > >
> > > Try putting "" or '' around the string when running the command.
> > >
> > > On Wed, Oct 5, 2016 at 3:29 PM, Hamza HACHANI  > >
> > > wrote:
> > >
> > > > It's between "the" and "metric"
> > > >
> > > > 
> > > > From: Ali Akhtar >
> > > > Sent: Wednesday, October 5, 2016 02:16:33
> > > > To: users@kafka.apache.org 
> > > > Subject: Re: difficulty to delete a topic because of its syntax
> > > >
> > > > I don't see a space in that topic name
> > > >
> > > > On Wed, Oct 5, 2016 at 6:42 PM, Hamza HACHANI <
> hamza.hach...@supcom.tn
> > >
> > > > wrote:
> > > >
> > > > > Hi,
> > > > >
> > > > > I created a topic called device-connection-invert-key-value-the
> > > > > metric-changelog.
> > > > >
> > > > > I insist that there is a space in it.
> > > > >
> > > > >
> > > > >
> > > > > Now that I want to delete it because my cluster can no longer work
> > > > > correctly, I can't do it, as it only reads the first part of it
> > > > > (device-connection-invert-key-value-the), which obviously it doesn't
> > > > > find.
> > > > >
> > > > > Does somebody have a solution to delete it?
> > > > >
> > > > > Thanks in advance.
> > > > >
> > > > >
> > > > > Hamza
> > > > >
> > > > >
> > > >
> > >
> > >
> >
>



-- 
Regards,

Rajini


Re: SASL_PLAINTEXT Authentication/Connection failure

2016-09-16 Thread Rajini Sivaram
Max,

You need to use the new consumer since the old consumer does not support
security features. For console-consumer, you need to add the option
--new-consumer.
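
With the Java client, the equivalent new-consumer setup would look roughly
like the sketch below. Note that bootstrap.servers points at the broker
rather than ZooKeeper, and the JAAS file is still supplied through the same
system property used for the console tools:

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NewConsumer {
    public static void main(String[] args) {
        // Run with -Djava.security.auth.login.config=/home/kafka/kafka_client_jaas.conf
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.mechanism", "PLAIN");
        props.put("group.id", "test-consumer-group");
        props.put("auto.offset.reset", "earliest"); // rough analogue of --from-beginning
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test3"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(1000);
                for (ConsumerRecord<String, String> record : records)
                    System.out.println(record.value());
            }
        }
    }
}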

On Fri, Sep 16, 2016 at 10:14 AM, Max Bridgewater <max.bridgewa...@gmail.com
> wrote:

> Thanks Rajini. That was the issue. Now I am facing another one. I am not
> sure why my consumer is trying to access the topic over PLAINTEXT. The consumer
> config is:
>
> security.protocol=SASL_PLAINTEXT
> sasl.mechanism=PLAIN
>
>
> KAFKA_OPTS is set to /home/kafka/kafka_client_jaas.conf. I can confirm
> that
> this file is being read because if I change the file name to something
> non-existing, I get file not found exception.
>
> The content of this jaas file:
>
> KafkaClient {
>   org.apache.kafka.common.security.plain.PlainLoginModule required
>   username="alice"
>   password="alice-secret";
> };
>
>
> I launch the consumer with:
> bin/kafka-console-consumer.sh  --zookeeper localhost:2181 --topic test3
> --from-beginning --consumer.config=config/consumer.properties
>
> The server config:
>
> listeners=SASL_PLAINTEXT://localhost:9092
> security.inter.broker.protocol=SASL_PLAINTEXT
> sasl.mechanism.inter.broker.protocol=PLAIN
> sasl.enabled.mechanisms=PLAIN
>
> The producer config:
>
> security.protocol=SASL_PLAINTEXT
> sasl.mechanism=PLAIN
>
> Now, when I launch the consumer, I get the following error:
>
> [2016-09-16 05:09:11,908] WARN
> [test-consumer-group_pascalvm-1474016950388-699882ba-leader-
> finder-thread],
> Failed to find leader for Set([test3,0])
> (kafka.consumer.ConsumerFetcherManager$LeaderFinderThread)
> kafka.common.BrokerEndPointNotAvailableException: End point with security
> protocol PLAINTEXT not found for broker 0
> at kafka.cluster.Broker$$anonfun$5.apply(Broker.scala:131)
> at kafka.cluster.Broker$$anonfun$5.apply(Broker.scala:131)
> at scala.collection.MapLike$class.getOrElse(MapLike.scala:128)
> at scala.collection.AbstractMap.getOrElse(Map.scala:58)
> at kafka.cluster.Broker.getBrokerEndPoint(Broker.scala:130)
> at
> kafka.utils.ZkUtils$$anonfun$getAllBrokerEndPointsForChanne
> l$1.apply(ZkUtils.scala:166)
> at
> kafka.utils.ZkUtils$$anonfun$getAllBrokerEndPointsForChanne
> l$1.apply(ZkUtils.scala:166)
> at
> scala.collection.TraversableLike$$anonfun$map$
> 1.apply(TraversableLike.scala:244)
> at
> scala.collection.TraversableLike$$anonfun$map$
> 1.apply(TraversableLike.scala:244)
>
> What am I missing?
>
>
>
>
> On Fri, Sep 16, 2016 at 3:57 AM, Rajini Sivaram <
> rajinisiva...@googlemail.com> wrote:
>
> > Max,
> >
> > I think there is a typo in your configuration. You intended admin
> password
> > to be admin-secret?
> >
> > KafkaServer {
> >org.apache.kafka.common.security.plain.PlainLoginModule required
> >username="admin"
> >password="admin-secret"
> >user_admin="alice-secret"  *=> Change to **"admin-secret"*
> >user_alice="alice-secret";
> > };
> >
> >
> > Since your inter-broker security protocol is SASL_PLAINTEXT, the
> controller
> > uses SASL with the username "admin" and that connection is failing since
> > the server thinks the expected password is "alice-secret".
> >
> >
> >
> > On Fri, Sep 16, 2016 at 8:43 AM, Max Bridgewater <
> > max.bridgewa...@gmail.com>
> > wrote:
> >
> > > Hi,
> > >
> > > I am trying to get SASL_PLAINTEXT or SASL_SSL to work. So far I am not
> > > successful. I posted the full story on SO:
> > > http://stackoverflow.com/questions/39521691/kafka-
> > authentication-producer-
> > > unable-to-connect-producer
> > >
> > > Bottom line is, when I start the server in SASL_PLAINTEXT mode, the
> below
> > > exception keeps popping up in the logs. The first issue is that you see
> > it
> > > only when you change log level to DEBUG, while in reality the server
> > isn't
> > > in a functioning state. Should the error be printed at error level?
> > >
> > > Now, the real issue is I don't understand why this is happening. It
> seems
> > > the server is connecting to itself and trying to authenticate against
> > > itself and failing to do so. What is wrong in my configuration?
> > >
> > > In  server.properties, I have:
> > >
> > > listeners=SASL_PLAINTEXT://0.0.0.0:9092
> > > security.inter.broker.protocol=SASL_PLAINTEXT
> > > sasl.mechanism.inter.broker.protocol=PLAIN
> > &

Re: SASL_PLAINTEXT Authentication/Connection failure

2016-09-16 Thread Rajini Sivaram
Max,

I think there is a typo in your configuration. You intended admin password
to be admin-secret?

KafkaServer {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="admin"
   password="admin-secret"
   user_admin="alice-secret"  *=> Change to **"admin-secret"*
   user_alice="alice-secret";
};


Since your inter-broker security protocol is SASL_PLAINTEXT, the controller
uses SASL with the username "admin" and that connection is failing since
the server thinks the expected password is "alice-secret".
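
In other words, the user_<name> entry for the broker's own login must carry
the broker's password. The corrected file would read:

KafkaServer {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="admin"
   password="admin-secret"
   user_admin="admin-secret"
   user_alice="alice-secret";
};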



On Fri, Sep 16, 2016 at 8:43 AM, Max Bridgewater 
wrote:

> Hi,
>
> I am trying to get SASL_PLAINTEXT or SASL_SSL to work. So far I am not
> successful. I posted the full story on SO:
> http://stackoverflow.com/questions/39521691/kafka-authentication-producer-
> unable-to-connect-producer
>
> Bottom line is, when I start the server in SASL_PLAINTEXT mode, the below
> exception keeps popping up in the logs. The first issue is that you see it
> only when you change log level to DEBUG, while in reality the server isn't
> in a functioning state. Should the error be printed at error level?
>
> Now, the real issue is I don't understand why this is happening. It seems
> the server is connecting to itself and trying to authenticate against
> itself and failing to do so. What is wrong in my configuration?
>
> In  server.properties, I have:
>
> listeners=SASL_PLAINTEXT://0.0.0.0:9092
> security.inter.broker.protocol=SASL_PLAINTEXT
> sasl.mechanism.inter.broker.protocol=PLAIN
> sasl.enabled.mechanisms=PLAIN
>
> Replacing 0.0.0.0 with localhost and 127.0.0.1 produces the same result.
>
> I also have KAFKA_OPTS set to /home/kafka/kafka_client_jaas.conf. And the
> content of kafka_client_jaas.conf is:
>
> KafkaServer {
>org.apache.kafka.common.security.plain.PlainLoginModule required
>username="admin"
>password="admin-secret"
>user_admin="alice-secret"
>user_alice="alice-secret";
> };
>
> No client is up. The only things I have up are ZK and the Kafka server.
> Here is the stack trace:
>
> 2016-09-15 22:06:09 DEBUG NetworkClient:496 - Initiating connection to node
> 0 at 0.0.0.0:9092.
> 2016-09-15 22:06:09 DEBUG Acceptor:52 - Accepted connection from /
> 127.0.0.1
> on /127.0.1.1:9092. sendBufferSize [actual|requested]: [102400|102400]
> recvBufferSize [actual|requested]: [102400|102400]
> 2016-09-15 22:06:09 DEBUG Processor:52 - Processor 0 listening to new
> connection from /127.0.0.1:59669
> 2016-09-15 22:06:09 DEBUG SaslClientAuthenticator:204 - Set SASL client
> state to SEND_HANDSHAKE_REQUEST
> 2016-09-15 22:06:09 DEBUG SaslClientAuthenticator:133 - Creating
> SaslClient: client=null;service=kafka;serviceHostname=0.0.0.0;mechs=
> [PLAIN]
> 2016-09-15 22:06:09 DEBUG SaslClientAuthenticator:204 - Set SASL client
> state to RECEIVE_HANDSHAKE_RESPONSE
> 2016-09-15 22:06:09 DEBUG NetworkClient:476 - Completed connection to node
> 0
> 2016-09-15 22:06:09 DEBUG SaslServerAuthenticator:269 - Set SASL server
> state to HANDSHAKE_REQUEST
> 2016-09-15 22:06:09 DEBUG SaslServerAuthenticator:310 - Handle Kafka
> request SASL_HANDSHAKE
> 2016-09-15 22:06:09 DEBUG SaslServerAuthenticator:354 - Using SASL
> mechanism 'PLAIN' provided by client
> 2016-09-15 22:06:09 DEBUG SaslServerAuthenticator:269 - Set SASL server
> state to AUTHENTICATE
> 2016-09-15 22:06:09 DEBUG SaslClientAuthenticator:204 - Set SASL client
> state to INITIAL
> 2016-09-15 22:06:09 DEBUG SaslClientAuthenticator:204 - Set SASL client
> state to INTERMEDIATE
> 2016-09-15 22:06:09 DEBUG SaslServerAuthenticator:269 - Set SASL server
> state to FAILED
> 2016-09-15 22:06:09 DEBUG Selector:345 - Connection with /127.0.0.1
> disconnected
> java.io.IOException: javax.security.sasl.SaslException: Authentication
> failed: Invalid JAAS configuration [Caused by
> javax.security.sasl.SaslException: Authentication failed: Invalid username
> or password]
> at
> org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.
> authenticate(SaslServerAuthenticator.java:243)
> at
> org.apache.kafka.common.network.KafkaChannel.prepare(KafkaChannel.java:64)
> at
> org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.
> java:318)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:283)
> at kafka.network.Processor.poll(SocketServer.scala:472)
> at kafka.network.Processor.run(SocketServer.scala:412)
> at java.lang.Thread.run(Thread.java:745)
> Caused by: javax.security.sasl.SaslException: Authentication failed:
> Invalid JAAS configuration [Caused by javax.security.sasl.SaslException:
> Authentication failed: Invalid username or password]
> at
> org.apache.kafka.common.security.plain.PlainSaslServer.evaluateResponse(
> PlainSaslServer.java:101)
> at
> org.apache.kafka.common.security.authenticator.SaslServerAuthenticator.
> authenticate(SaslServerAuthenticator.java:228)
> ... 6 more
> Caused by: javax.security.sasl.SaslException: 

Re: Building API to make Kafka reactive

2016-06-29 Thread Rajini Sivaram
Hi Shekar,

We are working on a reactive streams API for Kafka. It is in its very early
experimental stage, but if you want to take a look, the code is in github (
https://github.com/reactor/reactor-kafka). I think you can add a session id
without making it part of the Kafka API. In the coming weeks, we will be
trying out some examples to improve the API. We welcome any feedback.
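
As a purely illustrative sketch of carrying a session id without any
broker-side support (the topic name is hypothetical), the API could stamp
each event and use the id as the record key, so every stage of the pipeline
can copy it through and a consumer on the final topic can complete the
matching request:

import java.util.Properties;
import java.util.UUID;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class SessionTaggedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String sessionId = UUID.randomUUID().toString();
            // The session id travels as the record key through every stage.
            producer.send(new ProducerRecord<>("ingest", sessionId,
                    "{\"status\":\"submitted\"}"));
        }
    }
}

A consumer on the final topic can then look each key up in a map of pending
requests and mark the matching one successful.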

Regards,

Rajini

On Wed, Jun 29, 2016 at 7:45 AM, Lohith Samaga M 
wrote:

> Hi Shekar,
> Alternatively, you could make each stage of your pipeline to write
> to a Cassandra (or other DB) and your API will read from it. With Cassandra
> TTL, the row will be deleted after TTL is passed. No manual cleanup is
> required.
>
> Best regards / Mit freundlichen Grüßen / Sincères salutations
> M. Lohith Samaga
>
>
>
> -Original Message-
> From: Shekar Tippur [mailto:ctip...@gmail.com]
> Sent: Wednesday, June 29, 2016 12.10
> To: users
> Subject: Building API to make Kafka reactive
>
> I am looking at building a reactive API on top of Kafka.
> This API produces events to a Kafka topic. I want to add a unique session id
> into the payload.
> The data gets transformed as it goes through different stages of a
> pipeline. I want to specify a final topic from which the API can tell that
> the processing was successful.
> The API should report a different status at each stage of the pipeline.
> At the ingestion, the API responds with "submitted"
> During the progression, the API returns "in progress"
> After successful completion, the API returns "Success"
>
> Couple of questions:
> 1. Is this feasible?
> 2. I was looking at project reactor (https://projectreactor.io) where the
> docs talk about event bus. I wanted to see if I can implement a consumer
> that points to the "end" topic and throws an event into the event bus.
> Since I would know the session ID, I can process the request accordingly.
>
> Appreciate your inputs.
>
> - Shekar
>


Re: Quotas feature Kafka 0.9.0.1

2016-06-08 Thread Rajini Sivaram
Liju,

Quotas are not applied to the replica fetch followers.

Regards,

Rajini

On Fri, Jun 3, 2016 at 7:25 PM, Liju John  wrote:

> Hi ,
>
> We are exploring the new quotas feature with Kafka 0.9.0.1.
> Could you please let me know if the quotas feature works for the fetch
> follower as well?
> We see that when a broker is down for a long time and brought back, the
> replica catches up aggressively, impacting the whole cluster.
> Would it be possible to throttle the fetch follower as well with quotas?
>
>
> Regards,
> Liju John
>


Re: Broker replication error “Not authorized to access topics: [Topic authorization failed.] ”

2016-06-01 Thread Rajini Sivaram
The server configuration in
http://stackoverflow.com/questions/37536259/broker-replication-error-not-authorized-to-access-topics-topic-authorization
 specifies security.inter.broker.protocol=PLAINTEXT. This would result in
the principal "anonymous" being used for inter-broker communication. Looks
like you are expecting to use the username "admin" for the broker, so you
should set security.inter.broker.protocol=SASL_PLAINTEXT. There is also a
missing entry in the KafkaServer section of jaas.conf. You need to add
user_admin="welcome1".

Hope that helps.
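
Concretely, reusing the names from the question and assuming the KafkaServer
section already logs in with username "admin" and password "welcome1", the
two changes would be:

security.inter.broker.protocol=SASL_PLAINTEXT

KafkaServer {
   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="admin"
   password="welcome1"
   user_admin="welcome1";
};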

On Wed, Jun 1, 2016 at 7:23 AM, Gerard Klijs 
wrote:

> What do you have configured, do you have the brokers set as super users,
> with the right certificate?
>
> On Wed, Jun 1, 2016 at 6:43 AM 换个头像  wrote:
>
> > Hi Kafka Experts,
> >
> >
> > I setup a secured kafka cluster(slal-plain authentication). But when I
> try
> > to add ACLs for some existing topics, all three brokers  output errors
> like
> > "Not authorized to access topics: [Topic authorization failed.]".
> >
> >
> > I checked my configuration several times according to official
> > document(security section), but still not able to figure out why this
> error
> > caused.
> >
> >
> > Please help.
> >
> >
> > Broker replication error “Not authorized to access topics: [Topic
> > authorization failed.] ”
> >
> >
> http://stackoverflow.com/questions/37536259/broker-replication-error-not-authorized-to-access-topics-topic-authorization
> >
> >
> > Regards
> > Shawn
>



-- 
Regards,

Rajini


Re: Using SSL with KafkaConsumer w/o client certificates

2016-04-21 Thread Rajini Sivaram
Have you configured a truststore in server.properties? You don't need this
when using security.inter.broker.protocol=PLAINTEXT and client-auth is
disabled, but you do need to set truststore for the client-mode connections
made by the broker when security.inter.broker.protocol=SSL. If that still
doesn't help, running the broker with the JVM option -Djavax.net.debug=ssl would
give more debug info.
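
The relevant additions to server.properties would be along these lines; the
path and password are placeholders:

security.inter.broker.protocol=SSL
ssl.truststore.location=/var/private/ssl/broker.truststore.jks
ssl.truststore.password=test1234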

On Wed, Apr 20, 2016 at 11:15 PM,  wrote:

> After making the suggested change, I see this error during startup
>
> [2016-04-20 18:03:10,522] INFO [Kafka Server 0], started
> (kafka.server.KafkaServer)
> [2016-04-20 18:03:11,093] WARN Failed to send SSL Close message
> (org.apache.kafka.common.network.SslTransportLayer)
> java.io.IOException: Broken pipe
> at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
> at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
> at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
> at sun.nio.ch.IOUtil.write(IOUtil.java:65)
> at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
> at
>
> org.apache.kafka.common.network.SslTransportLayer.flush(SslTransportLayer.java:194)
> at
>
> org.apache.kafka.common.network.SslTransportLayer.close(SslTransportLayer.java:161)
> at
> org.apache.kafka.common.network.KafkaChannel.close(KafkaChannel.java:50)
> at org.apache.kafka.common.network.Selector.close(Selector.java:442)
> at org.apache.kafka.common.network.Selector.poll(Selector.java:310)
> at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:270)
> at
>
> kafka.utils.NetworkClientBlockingOps$.recurse$1(NetworkClientBlockingOps.scala:128)
> at
>
> kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntilFound$extension(NetworkClientBlockingOps.scala:139)
> at
>
> kafka.utils.NetworkClientBlockingOps$.kafka$utils$NetworkClientBlockingOps$$pollUntil$extension(NetworkClientBlockingOps.scala:105)
> at
>
> kafka.utils.NetworkClientBlockingOps$.blockingReady$extension(NetworkClientBlockingOps.scala:58)
> at
>
> kafka.controller.RequestSendThread.brokerReady(ControllerChannelManager.scala:225)
> at
>
> kafka.controller.RequestSendThread.liftedTree1$1(ControllerChannelManager.scala:172)
> at
>
> kafka.controller.RequestSendThread.doWork(ControllerChannelManager.scala:171)
> at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:63)
>
> and also errors during shutdown
>
> [2016-04-20 18:09:15,293] INFO [Kafka Server 0], Starting controlled
> shutdown (kafka.server.KafkaServer)
> [2016-04-20 18:09:15,330] WARN [Kafka Server 0], Error during controlled
> shutdown, possibly because leader movement took longer than the configured
> socket.timeout.ms: Connection to Node(0, debian, 9094) failed
> (kafka.server.KafkaServer)
>
> the relevant configs are
>
>
> listeners=SSL://:9094
> security.inter.broker.protocol=SSL
> port=9094
>
> Marko
>
> > If your only listener is SSL, you should set
> > security.inter.broker.protocol
> > to SSL even for single-broker cluster since it is used by the controller.
> > I
> > would have expected an error in the logs though if this was not
> configured
> > correctly.
> >
> > On Wed, Apr 20, 2016 at 1:34 AM,  wrote:
> >
> >> There is only one broker in this case. There are no errors (besides the
> >> warning below) on either the broker or the client side. It just returns
> >> an
> >> empty topic list if plaintext is not configured, even though client is
> >> using SSL in both cases.
> >>
> >> marko
> >>
> >> > Hi,
> >> >
> >> > That warning is harmless. Personally, I think it may be a good idea to
> >> > remove as it confuses people in cases such as this.
> >> >
> >> > Do you have multiple brokers? Are the brokers configured to use SSL
> >> for
> >> > inter-broker communication (security.inter.broker.protocol)? This is
> >> > required if the only listener is for SSL.
> >> >
> >> > Ismael
> >> >
> >> > On Wed, Apr 20, 2016 at 12:42 AM,  wrote:
> >> >
> >> >> What is the correct way of using SSL between the client and brokers
> >> if
> >> >> client certificates are not used? The broker (0.9.0.0) reports the
> >> >> following in the log
> >> >>
> >> >> WARN SSL peer is not authenticated, returning ANONYMOUS instead
> >> >>
> >> >> as a result of this (I belive) KafkaConsumer.listTopics() returns an
> >> >> empty
> >> >> map. Does this require a custom Authenticator on the broker side? If
> >> so,
> >> >> are there examples on how to do that?
> >> >>
> >> >> Interestingly enough, modifying (no other changes)
> >> >>
> >> >> listeners=SSL://:9094
> >> >>
> >> >> to
> >> >>
> >> >> listeners=PLAINTEXT://:9093,SSL://:9094
> >> >>
> >> >> makes the listTopics() method to return the topics. If SSL is used by
> >> >> the
> >> >> consumer in both cases, I'm not sure why having the plaintext port
> >> would
> >> >> affect the SSL behavior.
> >> >>
> >> >> --
> >> >> Best 

Re: Using SSL with KafkaConsumer w/o client certificates

2016-04-20 Thread Rajini Sivaram
If your only listener is SSL, you should set security.inter.broker.protocol
to SSL even for single-broker cluster since it is used by the controller. I
would have expected an error in the logs though if this was not configured
correctly.

On Wed, Apr 20, 2016 at 1:34 AM,  wrote:

> There is only one broker in this case. There are no errors (besides the
> warning below) on either the broker or the client side. It just returns an
> empty topic list if plaintext is not configured, even though client is
> using SSL in both cases.
>
> marko
>
> > Hi,
> >
> > That warning is harmless. Personally, I think it may be a good idea to
> > remove as it confuses people in cases such as this.
> >
> > Do you have multiple brokers? Are the brokers configured to use SSL for
> > inter-broker communication (security.inter.broker.protocol)? This is
> > required if the only listener is for SSL.
> >
> > Ismael
> >
> > On Wed, Apr 20, 2016 at 12:42 AM,  wrote:
> >
> >> What is the correct way of using SSL between the client and brokers if
> >> client certificates are not used? The broker (0.9.0.0) reports the
> >> following in the log
> >>
> >> WARN SSL peer is not authenticated, returning ANONYMOUS instead
> >>
> >> as a result of this (I belive) KafkaConsumer.listTopics() returns an
> >> empty
> >> map. Does this require a custom Authenticator on the broker side? If so,
> >> are there examples on how to do that?
> >>
> >> Interestingly enough, modifying (no other changes)
> >>
> >> listeners=SSL://:9094
> >>
> >> to
> >>
> >> listeners=PLAINTEXT://:9093,SSL://:9094
> >>
> >> makes the listTopics() method to return the topics. If SSL is used by
> >> the
> >> consumer in both cases, I'm not sure why having the plaintext port would
> >> affect the SSL behavior.
> >>
> >> --
> >> Best regards,
> >> Marko
> >> www.kafkatool.com
> >>
> >>
> >
>
>
>


-- 
Regards,

Rajini


Re: [DISCUSS] KIP-12 - Kafka Sasl/Kerberos implementation

2015-04-22 Thread Rajini Sivaram
When we were working on the client-side SSL implementation for Kafka, we
found that returning selection interest from handshake() method wasn't
sufficient to handle some of the SSL sequences. We resorted to managing the
selection key and interest state within SSLChannel to avoid SSL-specific
knowledge escaping out of SSL classes into protocol-independent network
code. The current server-side SSL patch doesn't address these scenarios
yet, but we may want to take these into account while designing the common
Channel class/interface.

   1. *Support for running potentially long-running delegated tasks outside
   the network thread*: It is recommended that delegated tasks indicated by
   a handshake status of NEED_TASK are run on a separate thread since they may
   block (
   http://docs.oracle.com/javase/7/docs/api/javax/net/ssl/SSLEngine.html).
   It is easier to encapsulate this in SSLChannel without any changes to
   common code if selection keys are managed within the Channel.
   2. *Renegotiation handshake*: During a read operation, handshake status
   may indicate that renegotiation is required. It will be good to encapsulate
   this state change (and any knowledge of these SSL-specific state
   transitions) within SSLChannel. Our experience was that managing keys and
   state within the SSLChannel rather than in Selector made this code neater.
   3. *Graceful shutdown of the SSL connections*: Our experience was that
   we could encapsulate all of the logic for shutting down SSLEngine
   gracefully within SSLChannel when the selection key and state are owned and
   managed by SSLChannel.
   4. *And finally a minor point:* We found that by managing selection key
   and selection interests within SSLChannel, protocol-independent Selector
   didn't need the concept of handshake at all and all channel state
   management and handshake related code could be held in protocol-specific
   classes. This may be worth taking into consideration since it makes it
   easier for common network layer code to be maintained without any
   understanding of the details of individual security protocols.

The channel classes we used are included in the patch in
https://issues.apache.org/jira/browse/KAFKA-1690. The patch contains unit
tests to validate these scenarios as well as other buffer overflow
conditions which may be useful for server-side code when the scenarios
described above are implemented.
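
To make the shape of that encapsulation concrete, an illustrative channel
interface (not Kafka's actual classes) might look like the sketch below, with
the selection key and all handshake state hidden behind it:

import java.io.IOException;
import java.nio.ByteBuffer;

// Illustrative only. The point is that SSL-specific states (NEED_TASK,
// renegotiation, close_notify) stay inside the channel, which owns its
// SelectionKey and interest ops; the Selector never sees them.
public interface SecureChannel {
    // Advances the handshake state machine; may schedule delegated tasks on
    // a separate thread and adjusts its own selection interest as needed.
    void handshake() throws IOException;
    // True once the handshake has completed.
    boolean ready();
    // May transparently drive a renegotiation handshake mid-read.
    int read(ByteBuffer dst) throws IOException;
    int write(ByteBuffer src) throws IOException;
    // Sends close_notify and shuts the engine down gracefully.
    void close() throws IOException;
}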
Regards,

Rajini



On Tue, Apr 21, 2015 at 11:13 PM, Sriharsha Chintalapani 
harsh...@fastmail.fm wrote:

 Hi Jay,
   Thanks for the review.

1. Isn't the blocking handshake going to be a performance concern? Can
 we
 do the handshake non-blocking instead? If anything that causes connections
 to drop can incur blocking network roundtrips won't that eat up all the
 network threads immediately? I guess I would have to look at that code to
 know...
 I've implemented the non-blocking handshake on the server side as well as
 for the new producer client. A blocking handshake is done only for
 BlockingChannel.scala, and it just loops over the non-blocking handshake
 until the context is established. On the server side (SocketServer.scala)
 it goes through the steps and returns a "READ" or "WRITE" signal for the
 next step. For BlockingChannel the worst case I look at is the connection
 timeout, but most times this handshake will finish up much quicker. I am
 cleaning up the code and will send a patch in the next few days.

 2. Do we need to support blocking channel at all? That is just for the old
 clients, and I think we should probably just leave those be to reduce
 scope
 here.
 The blocking channel is used not only by the simple consumer but also by
 ControllerChannelManager and controlled shutdown. Are we planning on
 deprecating it? I think at least for ControllerChannelManager it makes
 sense to have a blocking channel. If users want to lock down the cluster,
 i.e. no PLAINTEXT channels are allowed, then all communication has to go
 through either SSL or KERBEROS, so in this case we need to add this
 capability to BlockingChannel.



 3. Can we change the APIs to drop the getters when that is not required by
 the API being implemented. In general we don't use setters and getters as
 a
 naming convention.

 My bad on adding getters and setters :). I'll work on removing them and
 change the KIP accordingly. I still need some accessor methods though.

 Thanks,

 Harsha



 On April 21, 2015 at 2:51:15 PM, Jay Kreps (jay.kr...@gmail.com) wrote:

 Hey Sriharsha,

 Thanks for the excellent write-up.

 Couple of minor questions:

 1. Isn't the blocking handshake going to be a performance concern? Can we
 do the handshake non-blocking instead? If anything that causes connections
 to drop can incur blocking network roundtrips won't that eat up all the
 network threads immediately? I guess I would have to look at that code to
 know...

 2. Do we need to support blocking channel at all? That is just for the old
 clients, and I think we should