Re: SSL setup in Kafka 2.10.0.10.2.1 for keystore and truststore files

2017-10-03 Thread Jakub Scholz
> Regarding host name validation, must the FQDN of the host always be present
> in the CN (common name) of the certificate? What if I want to use some free
> form text in the CN field of the CSR to make it work for multiple hosts?

You have two options. Either you can use wildcard certificates as suggested
by Martin, or you can add more hostnames into the Subject Alternative
Names (SAN). These will also be checked during hostname verification. Unlike
a wildcard certificate, the SAN entries can even be for completely different
domains. But if you need the certificate signed by a CA, it is up to you to
check with your CA whether they will sign it or not.
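
For illustration, a CSR carrying several Subject Alternative Names can be
produced with keytool alone. A minimal sketch - the keystore name, alias and
host names below are placeholders:

  # generate the broker key pair; the CN is free-form, the hosts go into the SAN
  keytool -genkeypair -keystore kafka.server.keystore.jks -alias broker \
    -keyalg RSA -validity 365 -dname "CN=kafka-cluster" \
    -ext SAN=dns:broker1.example.com,dns:broker2.example.com

  # carry the same SAN extension into the CSR that is sent to the CA
  keytool -certreq -keystore kafka.server.keystore.jks -alias broker \
    -file broker.csr -ext SAN=dns:broker1.example.com,dns:broker2.example.com

Keep in mind that the SAN/CN is only checked by clients which set
ssl.endpoint.identification.algorithm=HTTPS, since hostname verification is
disabled by default in this Kafka version.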

> I am not sure if keytool-generated self-signed certificates need to be
> imported into both the client and server applications every time?
> Is this also valid for a Verisign or other standard CA generated
> certificate?

If you use a self-signed certificate, its identity can be verified only
against its own public key. So you always have to copy the public keys around
and load them into the counterpart truststores. With certificates signed by a
public CA such as Verisign you don't need to do this. You just need to make
sure that the application you are using has the Verisign root keys, which
don't change often.
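
For the self-signed case, the copying boils down to an export/import pair. A
minimal sketch - aliases and file names are placeholders:

  # on the broker: export the broker's self-signed public certificate
  keytool -export -keystore kafka.server.keystore.jks -alias broker -file broker-cert.cer

  # on each client: import it into the client truststore (and repeat in the
  # other direction for the client certificate if the broker authenticates clients)
  keytool -import -keystore kafka.client.truststore.jks -alias broker -file broker-cert.cer

With a CA-signed certificate you instead import the CA's root certificate once
into each truststore.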

Jakub



On Tue, Oct 3, 2017 at 7:44 PM, Awadhesh Gupta 
wrote:

> Hi,
>
> I validated the client chain in the server log after enabling the SSL log and
> it was showing entries of both the certificates in the chain.
>
> I imported server csr (ca-cert file generated from command openssl req
> -new -x509 -keyout ca-key -out ca-cert -days $VALIDITY) to Client trust
> store and client csr to Server trust store and then found no error in
> Server/Client SSL communication. I could see the publisher can produce the
> messages and consumer can consume the messages without any error.
>
> I am not sure if keytool-generated self-signed certificates need
> to be imported into both the client and server applications every time?
> Is this also valid for a Verisign or other standard CA generated certificate?
>
> Regarding host name validation, must the FQDN of the host always be present in
> the CN (common name) of the certificate? What if I want to use some free form
> text in the CN field of the CSR to make it work for multiple hosts?
>
>
> Thanks
> Awadhesh
>
> On Fri, Sep 29, 2017 at 5:59 PM, Jakub Scholz  wrote:
>
>> This normally means that the truststore in your producer doesn't contain
>> a)
>> the public key of your broker or b) the public keys of the CA which signed
>> the broker key. With this error it didn't even get to the verification of
>> the client certificate yet. Looking at the blog post it looks like there
>> is
>> something wrong with your kafka.client.truststore.jks. What you can try is
>> to run these two commands and compare the output - whether they talk about
>> the same certificates. On the host where you run the client:
>>   keytool -list -v -keystore kafka.client.truststore.jks
>> And this one on the broker:
>>   keytool -list -v -keystore kafka.server.keystore.jks
>>
>> You can also compare the certificates in the SSL debug log. Section
>> starting with "adding as trusted cert:" lists what is in your client
>> truststore. Section called "*** Certificate chain" shows the certificates
>> which are used by the broker.
>>
>> When using SSL between different hosts you normally should not need
>> anything special, since the hostname validation
>> (ssl.endpoint.identification.algorithm) is AFAIK disabled by default. If
>> you enable the hostname verification, the hostname (CN or alternative DNS
>> names from the broker key) needs to match the hostname which you use to
>> connect to. But this is not your case - the error would be different.
>>
>> Jakub
>>
>> On Fri, Sep 29, 2017 at 1:05 PM, Awadhesh Gupta > >
>> wrote:
>>
>> > Thanks M Manna.
>> >
>> > I followed the steps to recreate the keystore & truststore for SSL setup
>> > on both client and server machines, and it is working fine if I run the
>> > client and broker on the same Linux host.
>> >
>> > The problem starts when I publish messages from a Kafka client deployed on
>> > a different Linux machine.
>> >
>> > I enabled SSL log in kafka-run-class.sh to see the handshake traces.
>> >
>> > I am getting the following error in the producer log for the Kafka broker
>> > certificates - should the client application have access to the server
>> > certificates as well?
>> > Exception traces:
>> >
>> > kafka-producer-network-thread | console-producer, fatal error: 46:
>> General
>> > SSLEngine problem
>> > Caused by: sun.security.validator.ValidatorException: PKIX path
>> building
>> > failed: sun.security.provider.certpath.SunCertPathBuilderException:
>> unable
>> > to find valid certification path to requested target
>> >
>> > kafka-producer-network-thread | console-producer, SEND TLSv1.2, Alert:
>> > fatal, description= certificate_unknown
>> >
>> > Want to understand if we need to consider any specific 

Re: Kafka Streams Avro SerDe version/id caching

2017-10-03 Thread Svante Karlsson
I've implemented the same logic for a C++ client - caching is the only way
to go since the performance impact of not doing it would be too big. So bet
on caching on all clients.

2017-10-03 18:12 GMT+02:00 Damian Guy :

> If you are using the confluent schema registry then they will be cached by
> the SchemaRegistryClient.
>
> Thanks,
> Damian
>
> On Tue, 3 Oct 2017 at 09:00 Ted Yu  wrote:
>
> > I did a quick search in the code base - there doesn't seem to be caching
> as
> > you described.
> >
> > On Tue, Oct 3, 2017 at 6:36 AM, Kristopher Kane 
> > wrote:
> >
> > > If using a Byte SerDe and schema registry in the consumer configs of a
> > > Kafka streams application, does it cache the Avro schemas by ID and
> > version
> > > after fetching from the registry once?
> > >
> > > Thanks,
> > >
> > > Kris
> > >
> >
>


Re: Using Kafka on DC/OS + Marathon

2017-10-03 Thread Sean Glover
No, I don't.  I help others that do :)

On Tue, Oct 3, 2017 at 1:12 PM, Valentin Forst  wrote:

> Hi Sean,
>
> Thanks a lot for this info !
> Are you running DC/OS in prod?
>
> Regards
> Valentin
>
> > Am 03.10.2017 um 15:29 schrieb Sean Glover :
> >
> > Hi Valentin,
> >
> > Kafka is available on DC/OS in the Catalog (aka Universe) as part of the
> > `kafka` package.  Mesosphere has put a lot of effort into making Kafka
> work
> > on DC/OS.  Since Kafka requires persistent disk it's required to make
> sure
> > after initial deployment brokers stay put on their assigned Mesos agents.
> > Deployment and common ops tasks are supported with the help of the Kafka
> > scheduler developed in the mesosphere/dcos-commons repo.  For example,
> > configuration changes to brokers can be made through the DC/OS Kafka
> > service (through the UI or the CLI) and deployed out to brokers as a
> > rolling upgrade, where each broker's server.config is updated one at a
> > time and the server is cleanly bounced.  The Kafka scheduler also
> > supports other features such as upgrades for when Mesosphere releases a
> new
> > scheduler update or when a new version of Kafka is available.  Common ops
> > tasks like replacing a failed broker or adding more brokers are supported
> by
> > using the DC/OS CLI and Kafka scheduler configuration changes.  In short,
> > most of the ops tasks are handled by the Kafka scheduler, but all
> other
> > tasks are just Kafka as usual.
> >
> > The biggest thing to watch out for is that running Kafka in DC/OS
> implies a
> > shared mixed-use environment.  It's possible other services could be
> > running on the Mesos agents brokers are installed on, which could have
> > resource conflicts, etc.  By default DC/OS Kafka shares the ZooKeeper
> > instances with Mesos and other services, so you may want to consider a
> > standalone cluster for Kafka.  All these concerns can be mitigated with
> > configuration, but you'll need to get familiar with DC/OS and the Kafka
> > scheduler before you run anything in prod.
> >
> > Latest DC/OS Kafka release:
> > https://docs.mesosphere.com/service-docs/kafka/2.0.1-0.11.0/
> >
> > Regards,
> > Sean
> >
> > On Tue, Oct 3, 2017 at 5:20 AM, Valentin Forst 
> wrote:
> >
> >> Hi Avinash,
> >>
> >> Thanks for this hint.
> >>
> >> It would have been great, if someone could share experience using this
> >> framework on the production environment.
> >>
> >> Thanks in advance
> >> Valentin
> >>
> >>> Am 02.10.2017 um 19:39 schrieb Avinash Shahdadpuri <
> >> avinashp...@gmail.com>:
> >>>
> >>> There is a native kafka framework which runs on top of DC/OS.
> >>>
> >>> https://docs.mesosphere.com/service-docs/kafka/
> >>>
> >>> This will most likely be a better way to run kafka on DC/OS rather than
> >>> running it as a marathon framework.
> >>>
> >>>
> >>
> >>
> >
> >
> > --
> > Senior Software Engineer, Lightbend, Inc.
> >
> > 
> >
> > @seg1o 
>
>


-- 
Senior Software Engineer, Lightbend, Inc.



@seg1o 


Re: SSL setup in Kafka 2.10.0.10.2.1 for keystore and truststore files

2017-10-03 Thread Martin Gainty




From: Awadhesh Gupta 
Sent: Tuesday, October 3, 2017 1:44 PM
To: users@kafka.apache.org; ja...@scholz.cz
Subject: Re: SSL setup in Kafka 2.10.0.10.2.1 for keystore and truststore files

Hi,

I validated the client chain in the server log after enabling the SSL log and
it was showing entries of both the certificates in the chain.

I imported server csr (ca-cert file generated from command openssl req -new
-x509 -keyout ca-key -out ca-cert -days $VALIDITY) to Client trust store
and client csr to Server trust store and then found no error in
Server/Client SSL communication. I could see the publisher can produce the
messages and consumer can consume the messages without any error.

I am not sure if keytool-generated self-signed certificates need
to be imported into both the client and server applications every time?
Is this also valid for a Verisign or other standard CA generated certificate?

Regarding host name validation, must the FQDN of the host always be present in
the CN (common name) of the certificate? What if I want to use some free form
text in the CN field of the CSR to make it work for multiple hosts?

MG>DigiCert certificates support multiple subdomains with a wildcard in the CN
MG>https://www.digicert.com/faq-general.htm#wildcard
MG>remember it's your CA provider that ultimately determines which certificate
passes validation or not





Thanks
Awadhesh

On Fri, Sep 29, 2017 at 5:59 PM, Jakub Scholz  wrote:

> This normally means that the truststore in your producer doesn't contain a)
> the public key of your broker or b) the public keys of the CA which signed
> the broker key. With this error it didn't even get to the verification of
> the client certificate yet. Looking at the blog post it looks like there is
> something wrong with your kafka.client.truststore.jks. What you can try is
> to run these two commands and compare the output - whether they talk about
> the same certificates. On the host where you run the client:
>   keytool -list -v -keystore kafka.client.truststore.jks
> And this one on the broker:
>   keytool -list -v -keystore kafka.server.keystore.jks
>
> You can also compare the certificates in the SSL debug log. Section
> starting with "adding as trusted cert:" lists what is in your client
> truststore. Section called "*** Certificate chain" shows the certificates
> which are used by the broker.
>
> When using SSL between different hosts you normally should not need
> anything special, since the hostname validation
> (ssl.endpoint.identification.algorithm) is AFAIK disabled by default. If
> you enable the hostname verification, the hostname (CN or alternative DNS
> names from the broker key) needs to match the hostname which you use to
> connect to. But this is not your case - the error would be different.
>
> Jakub
>
> On Fri, Sep 29, 2017 at 1:05 PM, Awadhesh Gupta 
> wrote:
>
> > Thanks M Manna.
> >
> > I followed the steps to recreate the keystore & truststore for SSL setup
> > on both client and server machines, and it is working fine if I run the
> > client and broker on the same Linux host.
> >
> > The problem starts when I publish messages from a Kafka client deployed on
> > a different Linux machine.
> >
> > I enabled SSL log in kafka-run-class.sh to see the handshake traces.
> >
> > I am getting the following error in the producer log for the Kafka broker
> > certificates - should the client application have access to the server
> > certificates as well?
> > Exception traces:
> >
> > kafka-producer-network-thread | console-producer, fatal error: 46:
> General
> > SSLEngine problem
> > Caused by: sun.security.validator.ValidatorException: PKIX path building
> > failed: sun.security.provider.certpath.SunCertPathBuilderException:
> unable
> > to find valid certification path to requested target
> >
> > kafka-producer-network-thread | console-producer, SEND TLSv1.2, Alert:
> > fatal, description= certificate_unknown
> >
> > Want to understand if we need to consider any specific configuration for
> > the publisher if it is sending messages to a Kafka broker deployed on
> another
> > host. Please note that I had already created client certificate with
> steps
> > as mentioned in Confluent 101
> >  > authorization-authentication-encryption/>
> > page.
> >
> > I have also imported signed client certificates to JDK provided
> certificate
> > file ($JAVA_HOME/jre/lib/security/cacerts) but no luck.
> >
> > Thanks
> > Awadhesh
> >
> > On Thu, Sep 28, 2017 at 2:02 PM, M. Manna  wrote:
> >
> > > Hi Awadhesh,
> > >
> > > This seems like your certificate import order (intermediate - root) is
> > > jumbled up. Could you 

Hostname validation in Kafka for SSL authentication

2017-10-03 Thread Awadhesh Gupta
Hi,

I have enabled SSL certificates between my Kafka server and client
communication, with the correct host name in the CSR of the certificates.

I've multiple hosts (100-200) that can produce messages on the broker
and would like to share the same certificate across all of these hosts.

For this, I want to know if we can create certificates with some free-form
description (say "client-cert") in the CN field of the CSR.

Can this certificate be used in Kafka authentication, and will it be
accepted by the Kafka server for all the client requests?

If not, what is the way to enable security for multiple hosts?

Thanks in Advance
Awadhesh


Re: SSL setup in Kafka 2.10.0.10.2.1 for keystore and truststore files

2017-10-03 Thread Awadhesh Gupta
Hi,

I validated the client chain in the server log after enabling the SSL log and
it was showing entries of both the certificates in the chain.

I imported server csr (ca-cert file generated from command openssl req -new
-x509 -keyout ca-key -out ca-cert -days $VALIDITY) to Client trust store
and client csr to Server trust store and then found no error in
Server/Client SSL communication. I could see the publisher can produce the
messages and consumer can consume the messages without any error.

I am not sure if keytool-generated self-signed certificates need
to be imported into both the client and server applications every time?
Is this also valid for a Verisign or other standard CA generated certificate?

Regarding host name validation, must the FQDN of the host always be present in
the CN (common name) of the certificate? What if I want to use some free form
text in the CN field of the CSR to make it work for multiple hosts?


Thanks
Awadhesh

On Fri, Sep 29, 2017 at 5:59 PM, Jakub Scholz  wrote:

> This normally means that the truststore in your producer doesn't contain a)
> the public key of your broker or b) the public keys of the CA which signed
> the broker key. With this error it didn't even get to the verification of
> the client certificate yet. Looking at the blog post it looks like there is
> something wrong with your kafka.client.truststore.jks. What you can try is
> to run these two commands and compare the output - whether they talk about
> the same certificates. On the host where you run the client:
>   keytool -list -v -keystore kafka.client.truststore.jks
> And this one on the broker:
>   keytool -list -v -keystore kafka.server.keystore.jks
>
> You can also compare the certificates in the SSL debug log. Section
> starting with "adding as trusted cert:" lists what is in your client
> truststore. Section called "*** Certificate chain" shows the certificates
> which are used by the broker.
>
> When using SSL between different hosts you normally should not need
> anything special, since the hostname validation
> (ssl.endpoint.identification.algorithm) is AFAIK disabled by default. If
> you enable the hostname verification, the hostname (CN or alternative DNS
> names from the broker key) needs to match the hostname which you use to
> connect to. But this is not your case - the error would be different.
>
> Jakub
>
> On Fri, Sep 29, 2017 at 1:05 PM, Awadhesh Gupta 
> wrote:
>
> > Thanks M Manna.
> >
> > I followed the steps to recreate the keystore & truststore for SSL setup
> > on both client and server machines, and it is working fine if I run the
> > client and broker on the same Linux host.
> >
> > The problem starts when I publish messages from a Kafka client deployed on
> > a different Linux machine.
> >
> > I enabled SSL log in kafka-run-class.sh to see the handshake traces.
> >
> > I am getting the following error in the producer log for the Kafka broker
> > certificates - should the client application have access to the server
> > certificates as well?
> > Exception traces:
> >
> > kafka-producer-network-thread | console-producer, fatal error: 46:
> General
> > SSLEngine problem
> > Caused by: sun.security.validator.ValidatorException: PKIX path building
> > failed: sun.security.provider.certpath.SunCertPathBuilderException:
> unable
> > to find valid certification path to requested target
> >
> > kafka-producer-network-thread | console-producer, SEND TLSv1.2, Alert:
> > fatal, description= certificate_unknown
> >
> > Want to understand if we need to consider any specific configuration for
> > the publisher if it is sending messages to a Kafka broker deployed on
> another
> > host. Please note that I had already created client certificate with
> steps
> > as mentioned in Confluent 101
> >  > authorization-authentication-encryption/>
> > page.
> >
> > I have also imported signed client certificates to JDK provided
> certificate
> > file ($JAVA_HOME/jre/lib/security/cacerts) but no luck.
> >
> > Thanks
> > Awadhesh
> >
> > On Thu, Sep 28, 2017 at 2:02 PM, M. Manna  wrote:
> >
> > > Hi Awadhesh,
> > >
> > > This seems like your certificate import order (intermediate - root) is
> > > jumbled up. Could you kindly follow the instructions on confluent.io
> > where
> > > Ismael Juma has provided a nice set of steps to follow for SSL setup.
> > >
> > > https://www.confluent.io/blog/apache-kafka-security-
> > > authorization-authentication-encryption/
> > >
> > > Kindest Regards,
> > >
> > > On 28 September 2017 at 09:10, Awadhesh Gupta <
> awadhesh.in...@gmail.com>
> > > wrote:
> > >
> > > > Hello,
> > > >
> > > > I am trying to setup Kafka SSL using certificates on my windows
> machine
> > > > using reference of security_overview section of Kafka documents. I
> have
> > > > created server.keystore.jks, client.keystore.jks and respective trust
> > > store
> > > > file and signed it using keytool command. I followed 

Re: Using Kafka on DC/OS + Marathon

2017-10-03 Thread Valentin Forst
Hi Sean,

Thanks a lot for this info ! 
Are you running DC/OS in prod? 

Regards
Valentin

> Am 03.10.2017 um 15:29 schrieb Sean Glover :
> 
> Hi Valentin,
> 
> Kafka is available on DC/OS in the Catalog (aka Universe) as part of the
> `kafka` package.  Mesosphere has put a lot of effort into making Kafka work
> on DC/OS.  Since Kafka requires persistent disk it's required to make sure
> after initial deployment brokers stay put on their assigned Mesos agents.
> Deployment and common ops tasks are supported with the help of the Kafka
> scheduler developed in the mesosphere/dcos-commons repo.  For example,
> configuration changes to brokers can be made through the DC/OS Kafka
> service (through the UI or the CLI) and deployed out to brokers as a
> rolling upgrade, where each broker's server.config is updated one at a
> time and the server is cleanly bounced.  The Kafka scheduler also
> supports other features such as upgrades for when Mesosphere releases a new
> scheduler update or when a new version of Kafka is available.  Common ops
> tasks like replacing a failed broker or adding more brokers are supported by
> using the DC/OS CLI and Kafka scheduler configuration changes.  In short,
> most of the ops tasks are handled by the Kafka scheduler, but all other
> tasks are just Kafka as usual.
> 
> The biggest thing to watch out for is that running Kafka in DC/OS implies a
> shared mixed-use environment.  It's possible other services could be
> running on the Mesos agents brokers are installed on, which could have
> resource conflicts, etc.  By default DC/OS Kafka shares the ZooKeeper
> instances with Mesos and other services, so you may want to consider a
> standalone cluster for Kafka.  All these concerns can be mitigated with
> configuration, but you'll need to get familiar with DC/OS and the Kafka
> scheduler before you run anything in prod.
> 
> Latest DC/OS Kafka release:
> https://docs.mesosphere.com/service-docs/kafka/2.0.1-0.11.0/
> 
> Regards,
> Sean
> 
> On Tue, Oct 3, 2017 at 5:20 AM, Valentin Forst  wrote:
> 
>> Hi Avinash,
>> 
>> Thanks for this hint.
>> 
>> It would have been great, if someone could share experience using this
>> framework on the production environment.
>> 
>> Thanks in advance
>> Valentin
>> 
>>> Am 02.10.2017 um 19:39 schrieb Avinash Shahdadpuri <
>> avinashp...@gmail.com>:
>>> 
>>> There is a native kafka framework which runs on top of DC/OS.
>>> 
>>> https://docs.mesosphere.com/service-docs/kafka/
>>> 
>>> This will most likely be a better way to run kafka on DC/OS rather than
>>> running it as a marathon framework.
>>> 
>>> 
>> 
>> 
> 
> 
> -- 
> Senior Software Engineer, Lightbend, Inc.
> 
> 
> 
> @seg1o 



Re: Kafka Streams Avro SerDe version/id caching

2017-10-03 Thread Damian Guy
If you are using the confluent schema registry then they will be cached by
the SchemaRegistryClient.
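
To make the caching concrete, here is a minimal sketch of using the client
directly - the registry URL and schema id are placeholders, and the
constructor/method names are taken from the Confluent schema-registry-client
of this era, so double-check them against your version:

  import io.confluent.kafka.schemaregistry.client.CachedSchemaRegistryClient;
  import io.confluent.kafka.schemaregistry.client.SchemaRegistryClient;
  import org.apache.avro.Schema;

  public class SchemaCacheSketch {
      public static void main(String[] args) throws Exception {
          // 100 is the identity-map capacity, i.e. how many schemas are cached
          SchemaRegistryClient client =
              new CachedSchemaRegistryClient("http://localhost:8081", 100);
          Schema first = client.getByID(42);  // first call hits the registry
          Schema again = client.getByID(42);  // served from the in-memory cache
      }
  }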

Thanks,
Damian

On Tue, 3 Oct 2017 at 09:00 Ted Yu  wrote:

> I did a quick search in the code base - there doesn't seem to be caching as
> you described.
>
> On Tue, Oct 3, 2017 at 6:36 AM, Kristopher Kane 
> wrote:
>
> > If using a Byte SerDe and schema registry in the consumer configs of a
> > Kafka streams application, does it cache the Avro schemas by ID and
> version
> > after fetching from the registry once?
> >
> > Thanks,
> >
> > Kris
> >
>


Re: Kafka Streams Avro SerDe version/id caching

2017-10-03 Thread Ted Yu
I did a quick search in the code base - there doesn't seem to be caching as
you described.

On Tue, Oct 3, 2017 at 6:36 AM, Kristopher Kane 
wrote:

> If using a Byte SerDe and schema registry in the consumer configs of a
> Kafka streams application, does it cache the Avro schemas by ID and version
> after fetching from the registry once?
>
> Thanks,
>
> Kris
>


Re: kafka-console-producer.sh --security-protocol

2017-10-03 Thread Pekka Sarnila

Ah! Of course.

That was it: a missing ';' after the last option (the parser expected more
options instead of '};' at line 5). Silly me.
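
For anyone hitting the same error, a well-formed JAAS entry looks like this (a
minimal sketch - the login module and credentials are placeholders; note the
';' after the last option and after the closing brace):

  KafkaClient {
    org.apache.kafka.common.security.plain.PlainLoginModule required
    username="alice"
    password="alice-secret";
  };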


Thanks a lot.

Pekka


On 10/03/17 17:42, Manikumar wrote:

Looks like a syntax issue with the "sasl.jaas.config" config property.

On Tue, Oct 3, 2017 at 8:06 PM, Pekka Sarnila  wrote:


The output below is actually from having

  security.protocol=SASL_PLAINTEXT

in producer.properties.

Actual error point I believe is:

Caused by: java.lang.SecurityException: java.io.IOException: Configuration
Error:
 Line 5: expected [option key]
 at sun.security.provider.ConfigFile$Spi.(ConfigFile.java:137)

Pekka


On 10/03/17 17:21, Ted Yu wrote:


I think in producer.properties you should use:

security.protocol=SASL_PLAINTEXT

FYI

On Tue, Oct 3, 2017 at 7:17 AM, Pekka Sarnila  wrote:

Hi,


kafka_2.11-0.11.0.0

If I try to give --security-protocol xyz (xyz any value e.g.
SASL_PLAINTEXT, PLAINTEXTSASL, SASL_SSL) I get error

  security-protocol is not a recognized option

Also having security.protocol=xyz in producer.properties gives error

org.apache.kafka.common.KafkaException: Failed to construct kafka
producer
at org.apache.kafka.clients.producer.KafkaProducer.(Kafka
Producer.java:415)
at org.apache.kafka.clients.producer.KafkaProducer.(Kafka
Producer.java:287)
at kafka.producer.NewShinyProducer.(BaseProducer.scala:40)
at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:48)
at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
Caused by: java.lang.SecurityException: java.io.IOException:
Configuration
Error:
Line 5: expected [option key]
at sun.security.provider.ConfigFile$Spi.(ConfigFile.java:
137)
at sun.security.provider.ConfigFile.(ConfigFile.java:102)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Native
ConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(De
legatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:4
23)
at java.lang.Class.newInstance(Class.java:442)
at javax.security.auth.login.Configuration$2.run(Configuration.
java:255)
at javax.security.auth.login.Configuration$2.run(Configuration.
java:247)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.Configuration.getConfiguration(Con
figuration.java:246)
at org.apache.kafka.common.security.JaasContext.defaultContext(
JaasContext.java:112)
at org.apache.kafka.common.security.JaasContext.load(JaasContex
t.java:96)
at org.apache.kafka.common.security.JaasContext.load(JaasContex
t.java:78)
at org.apache.kafka.common.network.ChannelBuilders.create(
ChannelBuilders.java:100)
at org.apache.kafka.common.network.ChannelBuilders.clientChanne
lBuilder(ChannelBuilders.java:58)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(Cl
ientUtils.java:88)
at org.apache.kafka.clients.producer.KafkaProducer.(Kafka
Producer.java:374)
... 4 more
Caused by: java.io.IOException: Configuration Error:
Line 5: expected [option key]
at sun.security.provider.ConfigFile$Spi.ioException(ConfigFile.
java:666)
at sun.security.provider.ConfigFile$Spi.match(ConfigFile.java:
562)
at sun.security.provider.ConfigFile$Spi.parseLoginEntry(
ConfigFile.java:477)
at sun.security.provider.ConfigFile$Spi.readConfig(ConfigFile.
java:427)
at sun.security.provider.ConfigFile$Spi.init(ConfigFile.java:
329)
at sun.security.provider.ConfigFile$Spi.init(ConfigFile.java:
271)
at sun.security.provider.ConfigFile$Spi.(ConfigFile.java:
135)
... 21 more

Pekka








Re: kafka-console-producer.sh --security-protocol

2017-10-03 Thread Manikumar
Looks like a syntax issue with the "sasl.jaas.config" config property or the
JAAS conf file.

On Tue, Oct 3, 2017 at 8:12 PM, Manikumar  wrote:

> Looks like a syntax issue with the "sasl.jaas.config" config property.
>
> On Tue, Oct 3, 2017 at 8:06 PM, Pekka Sarnila  wrote:
>
>> The output below is actually from having
>>
>>   security.protocol=SASL_PLAINTEXT
>>
>> in producer.properties.
>>
>> Actual error point I believe is:
>>
>> Caused by: java.lang.SecurityException: java.io.IOException:
>> Configuration Error:
>>  Line 5: expected [option key]
>>  at sun.security.provider.ConfigFile$Spi.(ConfigFile.java:137)
>>
>> Pekka
>>
>>
>> On 10/03/17 17:21, Ted Yu wrote:
>>
>>> I think in producer.properties you should use:
>>>
>>> security.protocol=SASL_PLAINTEXT
>>>
>>> FYI
>>>
>>> On Tue, Oct 3, 2017 at 7:17 AM, Pekka Sarnila  wrote:
>>>
>>> Hi,

 kafka_2.11-0.11.0.0

 If I try to give --security-protocol xyz (xyz any value e.g.
 SASL_PLAINTEXT, PLAINTEXTSASL, SASL_SSL) I get error

   security-protocol is not a recognized option

 Also having security.protocol=xyz in producer.properties gives error

 org.apache.kafka.common.KafkaException: Failed to construct kafka
 producer
 at org.apache.kafka.clients.producer.KafkaProducer.(Kafka
 Producer.java:415)
 at org.apache.kafka.clients.producer.KafkaProducer.(Kafka
 Producer.java:287)
 at kafka.producer.NewShinyProducer.(BaseProducer.scala:40
 )
 at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:48)
 at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
 Caused by: java.lang.SecurityException: java.io.IOException:
 Configuration
 Error:
 Line 5: expected [option key]
 at sun.security.provider.ConfigFile$Spi.(ConfigFile.java:
 137)
 at sun.security.provider.ConfigFile.(ConfigFile.java:102)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Nativ
 e
 Method)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance(Native
 ConstructorAccessorImpl.java:62)
 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(De
 legatingConstructorAccessorImpl.java:45)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:4
 23)
 at java.lang.Class.newInstance(Class.java:442)
 at javax.security.auth.login.Configuration$2.run(Configuration.
 java:255)
 at javax.security.auth.login.Configuration$2.run(Configuration.
 java:247)
 at java.security.AccessController.doPrivileged(Native Method)
 at javax.security.auth.login.Configuration.getConfiguration(Con
 figuration.java:246)
 at org.apache.kafka.common.security.JaasContext.defaultContext(
 JaasContext.java:112)
 at org.apache.kafka.common.security.JaasContext.load(JaasContex
 t.java:96)
 at org.apache.kafka.common.security.JaasContext.load(JaasContex
 t.java:78)
 at org.apache.kafka.common.network.ChannelBuilders.create(
 ChannelBuilders.java:100)
 at org.apache.kafka.common.network.ChannelBuilders.clientChanne
 lBuilder(ChannelBuilders.java:58)
 at org.apache.kafka.clients.ClientUtils.createChannelBuilder(Cl
 ientUtils.java:88)
 at org.apache.kafka.clients.producer.KafkaProducer.(Kafka
 Producer.java:374)
 ... 4 more
 Caused by: java.io.IOException: Configuration Error:
 Line 5: expected [option key]
 at sun.security.provider.ConfigFile$Spi.ioException(ConfigFile.
 java:666)
 at sun.security.provider.ConfigFile$Spi.match(ConfigFile.java:5
 62)
 at sun.security.provider.ConfigFile$Spi.parseLoginEntry(
 ConfigFile.java:477)
 at sun.security.provider.ConfigFile$Spi.readConfig(ConfigFile.
 java:427)
 at sun.security.provider.ConfigFile$Spi.init(ConfigFile.java:32
 9)
 at sun.security.provider.ConfigFile$Spi.init(ConfigFile.java:27
 1)
 at sun.security.provider.ConfigFile$Spi.(ConfigFile.java:
 135)
 ... 21 more

 Pekka


>>>
>


Re: kafka-console-producer.sh --security-protocol

2017-10-03 Thread Manikumar
Looks like a syntax issue with the "sasl.jaas.config" config property.

On Tue, Oct 3, 2017 at 8:06 PM, Pekka Sarnila  wrote:

> The output below is actually from having
>
>   security.protocol=SASL_PLAINTEXT
>
> in producer.properties.
>
> Actual error point I believe is:
>
> Caused by: java.lang.SecurityException: java.io.IOException: Configuration
> Error:
>  Line 5: expected [option key]
>  at sun.security.provider.ConfigFile$Spi.(ConfigFile.java:137)
>
> Pekka
>
>
> On 10/03/17 17:21, Ted Yu wrote:
>
>> I think in producer.properties you should use:
>>
>> security.protocol=SASL_PLAINTEXT
>>
>> FYI
>>
>> On Tue, Oct 3, 2017 at 7:17 AM, Pekka Sarnila  wrote:
>>
>> Hi,
>>>
>>> kafka_2.11-0.11.0.0
>>>
>>> If I try to give --security-protocol xyz (xyz any value e.g.
>>> SASL_PLAINTEXT, PLAINTEXTSASL, SASL_SSL) I get error
>>>
>>>   security-protocol is not a recognized option
>>>
>>> Also having security.protocol=xyz in producer.properties gives error
>>>
>>> org.apache.kafka.common.KafkaException: Failed to construct kafka
>>> producer
>>> at org.apache.kafka.clients.producer.KafkaProducer.(Kafka
>>> Producer.java:415)
>>> at org.apache.kafka.clients.producer.KafkaProducer.(Kafka
>>> Producer.java:287)
>>> at kafka.producer.NewShinyProducer.(BaseProducer.scala:40)
>>> at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:48)
>>> at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
>>> Caused by: java.lang.SecurityException: java.io.IOException:
>>> Configuration
>>> Error:
>>> Line 5: expected [option key]
>>> at sun.security.provider.ConfigFile$Spi.(ConfigFile.java:
>>> 137)
>>> at sun.security.provider.ConfigFile.(ConfigFile.java:102)
>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
>>> Method)
>>> at sun.reflect.NativeConstructorAccessorImpl.newInstance(Native
>>> ConstructorAccessorImpl.java:62)
>>> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(De
>>> legatingConstructorAccessorImpl.java:45)
>>> at java.lang.reflect.Constructor.newInstance(Constructor.java:4
>>> 23)
>>> at java.lang.Class.newInstance(Class.java:442)
>>> at javax.security.auth.login.Configuration$2.run(Configuration.
>>> java:255)
>>> at javax.security.auth.login.Configuration$2.run(Configuration.
>>> java:247)
>>> at java.security.AccessController.doPrivileged(Native Method)
>>> at javax.security.auth.login.Configuration.getConfiguration(Con
>>> figuration.java:246)
>>> at org.apache.kafka.common.security.JaasContext.defaultContext(
>>> JaasContext.java:112)
>>> at org.apache.kafka.common.security.JaasContext.load(JaasContex
>>> t.java:96)
>>> at org.apache.kafka.common.security.JaasContext.load(JaasContex
>>> t.java:78)
>>> at org.apache.kafka.common.network.ChannelBuilders.create(
>>> ChannelBuilders.java:100)
>>> at org.apache.kafka.common.network.ChannelBuilders.clientChanne
>>> lBuilder(ChannelBuilders.java:58)
>>> at org.apache.kafka.clients.ClientUtils.createChannelBuilder(Cl
>>> ientUtils.java:88)
>>> at org.apache.kafka.clients.producer.KafkaProducer.(Kafka
>>> Producer.java:374)
>>> ... 4 more
>>> Caused by: java.io.IOException: Configuration Error:
>>> Line 5: expected [option key]
>>> at sun.security.provider.ConfigFile$Spi.ioException(ConfigFile.
>>> java:666)
>>> at sun.security.provider.ConfigFile$Spi.match(ConfigFile.java:
>>> 562)
>>> at sun.security.provider.ConfigFile$Spi.parseLoginEntry(
>>> ConfigFile.java:477)
>>> at sun.security.provider.ConfigFile$Spi.readConfig(ConfigFile.
>>> java:427)
>>> at sun.security.provider.ConfigFile$Spi.init(ConfigFile.java:
>>> 329)
>>> at sun.security.provider.ConfigFile$Spi.init(ConfigFile.java:
>>> 271)
>>> at sun.security.provider.ConfigFile$Spi.(ConfigFile.java:
>>> 135)
>>> ... 21 more
>>>
>>> Pekka
>>>
>>>
>>


Re: kafka-console-producer.sh --security-protocol

2017-10-03 Thread Pekka Sarnila

The output below is actually from having

  security.protocol=SASL_PLAINTEXT

in producer.properties.

Actual error point I believe is:

Caused by: java.lang.SecurityException: java.io.IOException: 
Configuration Error:

 Line 5: expected [option key]
 at sun.security.provider.ConfigFile$Spi.(ConfigFile.java:137)

Pekka

On 10/03/17 17:21, Ted Yu wrote:

I think in producer.properties you should use:

security.protocol=SASL_PLAINTEXT

FYI

On Tue, Oct 3, 2017 at 7:17 AM, Pekka Sarnila  wrote:


Hi,

kafka_2.11-0.11.0.0

If I try to give --security-protocol xyz (xyz any value e.g.
SASL_PLAINTEXT, PLAINTEXTSASL, SASL_SSL) I get error

  security-protocol is not a recognized option

Also having security.protocol=xyz in producer.properties gives error

org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.(Kafka
Producer.java:415)
at org.apache.kafka.clients.producer.KafkaProducer.(Kafka
Producer.java:287)
at kafka.producer.NewShinyProducer.(BaseProducer.scala:40)
at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:48)
at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
Caused by: java.lang.SecurityException: java.io.IOException: Configuration
Error:
Line 5: expected [option key]
at sun.security.provider.ConfigFile$Spi.(ConfigFile.java:
137)
at sun.security.provider.ConfigFile.(ConfigFile.java:102)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(Native
ConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(De
legatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at javax.security.auth.login.Configuration$2.run(Configuration.
java:255)
at javax.security.auth.login.Configuration$2.run(Configuration.
java:247)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.login.Configuration.getConfiguration(Con
figuration.java:246)
at org.apache.kafka.common.security.JaasContext.defaultContext(
JaasContext.java:112)
at org.apache.kafka.common.security.JaasContext.load(JaasContex
t.java:96)
at org.apache.kafka.common.security.JaasContext.load(JaasContex
t.java:78)
at org.apache.kafka.common.network.ChannelBuilders.create(
ChannelBuilders.java:100)
at org.apache.kafka.common.network.ChannelBuilders.clientChanne
lBuilder(ChannelBuilders.java:58)
at org.apache.kafka.clients.ClientUtils.createChannelBuilder(Cl
ientUtils.java:88)
at org.apache.kafka.clients.producer.KafkaProducer.(Kafka
Producer.java:374)
... 4 more
Caused by: java.io.IOException: Configuration Error:
Line 5: expected [option key]
at sun.security.provider.ConfigFile$Spi.ioException(ConfigFile.
java:666)
at sun.security.provider.ConfigFile$Spi.match(ConfigFile.java:562)
at sun.security.provider.ConfigFile$Spi.parseLoginEntry(
ConfigFile.java:477)
at sun.security.provider.ConfigFile$Spi.readConfig(ConfigFile.
java:427)
at sun.security.provider.ConfigFile$Spi.init(ConfigFile.java:329)
at sun.security.provider.ConfigFile$Spi.init(ConfigFile.java:271)
at sun.security.provider.ConfigFile$Spi.(ConfigFile.java:
135)
... 21 more

Pekka





Re: kafka-console-producer.sh --security-protocol

2017-10-03 Thread Ted Yu
I think in producer.properties you should use:

security.protocol=SASL_PLAINTEXT

FYI
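
The rest of the SASL client config can live in the same file. A fuller
producer.properties sketch (mechanism and credentials are placeholders; the
sasl.jaas.config property is available since Kafka 0.10.2):

  security.protocol=SASL_PLAINTEXT
  sasl.mechanism=PLAIN
  sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
    username="alice" \
    password="alice-secret";

which is then passed to the console producer via:

  bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test \
    --producer.config producer.properties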

On Tue, Oct 3, 2017 at 7:17 AM, Pekka Sarnila  wrote:

> Hi,
>
> kafka_2.11-0.11.0.0
>
> If I try to give --security-protocol xyz (xyz any value e.g.
> SASL_PLAINTEXT, PLAINTEXTSASL, SASL_SSL) I get error
>
>   security-protocol is not a recognized option
>
> Also having security.protocol=xyz in producer.properties gives error
>
> org.apache.kafka.common.KafkaException: Failed to construct kafka producer
> at org.apache.kafka.clients.producer.KafkaProducer.(Kafka
> Producer.java:415)
> at org.apache.kafka.clients.producer.KafkaProducer.(Kafka
> Producer.java:287)
> at kafka.producer.NewShinyProducer.(BaseProducer.scala:40)
> at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:48)
> at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
> Caused by: java.lang.SecurityException: java.io.IOException: Configuration
> Error:
> Line 5: expected [option key]
> at sun.security.provider.ConfigFile$Spi.(ConfigFile.java:
> 137)
> at sun.security.provider.ConfigFile.(ConfigFile.java:102)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native
> Method)
> at sun.reflect.NativeConstructorAccessorImpl.newInstance(Native
> ConstructorAccessorImpl.java:62)
> at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(De
> legatingConstructorAccessorImpl.java:45)
> at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
> at java.lang.Class.newInstance(Class.java:442)
> at javax.security.auth.login.Configuration$2.run(Configuration.
> java:255)
> at javax.security.auth.login.Configuration$2.run(Configuration.
> java:247)
> at java.security.AccessController.doPrivileged(Native Method)
> at javax.security.auth.login.Configuration.getConfiguration(Con
> figuration.java:246)
> at org.apache.kafka.common.security.JaasContext.defaultContext(
> JaasContext.java:112)
> at org.apache.kafka.common.security.JaasContext.load(JaasContex
> t.java:96)
> at org.apache.kafka.common.security.JaasContext.load(JaasContex
> t.java:78)
> at org.apache.kafka.common.network.ChannelBuilders.create(
> ChannelBuilders.java:100)
> at org.apache.kafka.common.network.ChannelBuilders.clientChanne
> lBuilder(ChannelBuilders.java:58)
> at org.apache.kafka.clients.ClientUtils.createChannelBuilder(Cl
> ientUtils.java:88)
> at org.apache.kafka.clients.producer.KafkaProducer.(Kafka
> Producer.java:374)
> ... 4 more
> Caused by: java.io.IOException: Configuration Error:
> Line 5: expected [option key]
> at sun.security.provider.ConfigFile$Spi.ioException(ConfigFile.
> java:666)
> at sun.security.provider.ConfigFile$Spi.match(ConfigFile.java:562)
> at sun.security.provider.ConfigFile$Spi.parseLoginEntry(
> ConfigFile.java:477)
> at sun.security.provider.ConfigFile$Spi.readConfig(ConfigFile.
> java:427)
> at sun.security.provider.ConfigFile$Spi.init(ConfigFile.java:329)
> at sun.security.provider.ConfigFile$Spi.init(ConfigFile.java:271)
> at sun.security.provider.ConfigFile$Spi.(ConfigFile.java:
> 135)
> ... 21 more
>
> Pekka
>


kafka-console-producer.sh --security-protocol

2017-10-03 Thread Pekka Sarnila

Hi,

kafka_2.11-0.11.0.0

If I try to give --security-protocol xyz (xyz any value e.g. 
SASL_PLAINTEXT, PLAINTEXTSASL, SASL_SSL) I get error


  security-protocol is not a recognized option

Also having security.protocol=xyz in producer.properties gives error

org.apache.kafka.common.KafkaException: Failed to construct kafka producer
	at 
org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:415)
	at 
org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:287)

at kafka.producer.NewShinyProducer.(BaseProducer.scala:40)
at kafka.tools.ConsoleProducer$.main(ConsoleProducer.scala:48)
at kafka.tools.ConsoleProducer.main(ConsoleProducer.scala)
Caused by: java.lang.SecurityException: java.io.IOException: 
Configuration Error:

Line 5: expected [option key]
at sun.security.provider.ConfigFile$Spi.(ConfigFile.java:137)
at sun.security.provider.ConfigFile.(ConfigFile.java:102)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
	at 
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
	at 
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)

at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at javax.security.auth.login.Configuration$2.run(Configuration.java:255)
at javax.security.auth.login.Configuration$2.run(Configuration.java:247)
at java.security.AccessController.doPrivileged(Native Method)
	at 
javax.security.auth.login.Configuration.getConfiguration(Configuration.java:246)
	at 
org.apache.kafka.common.security.JaasContext.defaultContext(JaasContext.java:112)

at 
org.apache.kafka.common.security.JaasContext.load(JaasContext.java:96)
at 
org.apache.kafka.common.security.JaasContext.load(JaasContext.java:78)
	at 
org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:100)
	at 
org.apache.kafka.common.network.ChannelBuilders.clientChannelBuilder(ChannelBuilders.java:58)
	at 
org.apache.kafka.clients.ClientUtils.createChannelBuilder(ClientUtils.java:88)
	at 
org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:374)

... 4 more
Caused by: java.io.IOException: Configuration Error:
Line 5: expected [option key]
at sun.security.provider.ConfigFile$Spi.ioException(ConfigFile.java:666)
at sun.security.provider.ConfigFile$Spi.match(ConfigFile.java:562)
	at 
sun.security.provider.ConfigFile$Spi.parseLoginEntry(ConfigFile.java:477)

at sun.security.provider.ConfigFile$Spi.readConfig(ConfigFile.java:427)
at sun.security.provider.ConfigFile$Spi.init(ConfigFile.java:329)
at sun.security.provider.ConfigFile$Spi.init(ConfigFile.java:271)
at sun.security.provider.ConfigFile$Spi.(ConfigFile.java:135)
... 21 more

Pekka


Kafka Streams Avro SerDe version/id caching

2017-10-03 Thread Kristopher Kane
If using a Byte SerDe and schema registry in the consumer configs of a
Kafka streams application, does it cache the Avro schemas by ID and version
after fetching from the registry once?

Thanks,

Kris


Re: Using Kafka on DC/OS + Marathon

2017-10-03 Thread Sean Glover
Hi Valentin,

Kafka is available on DC/OS in the Catalog (aka Universe) as part of the
`kafka` package.  Mesosphere has put a lot of effort into making Kafka work
on DC/OS.  Since Kafka requires persistent disk it's required to make sure
after initial deployment brokers stay put on their assigned Mesos agents.
Deployment and common ops tasks are supported with the help of the Kafka
scheduler developed in the mesosphere/dcos-commons repo.  For example,
configuration changes to brokers can be made through the DC/OS Kafka
service (through the UI or the CLI) and deployed out to brokers as a
rolling upgrade, where each broker's server.config is updated one at a
time and the server is cleanly bounced.  The Kafka scheduler also
supports other features such as upgrades for when Mesosphere releases a new
scheduler update or when a new version of Kafka is available.  Common ops
tasks like replacing a failed broker or adding more brokers are supported by
using the DC/OS CLI and Kafka scheduler configuration changes.  In short,
most of the ops tasks are handled by the Kafka scheduler, but all other
tasks are just Kafka as usual.
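
To give a flavour of the workflow, typical commands look like the following.
This is a sketch only - `dcos package install` is standard, but the exact
service subcommands are assumptions based on dcos-commons CLI conventions and
may differ between versions:

  # install the Kafka package from the Catalog
  dcos package install kafka

  # inspect the rollout of a configuration change
  dcos kafka plan show deploy

  # replace a permanently failed broker
  dcos kafka pod replace kafka-0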

The biggest thing to watch out for is that running Kafka in DC/OS implies a
shared mixed-use environment.  It's possible other services could be
running on the Mesos agents brokers are installed on, which could have
resource conflicts, etc.  By default DC/OS Kafka shares the ZooKeeper
instances with Mesos and other services, so you may want to consider a
standalone cluster for Kafka.  All these concerns can be mitigated with
configuration, but you'll need to get familiar with DC/OS and the Kafka
scheduler before you run anything in prod.

Latest DC/OS Kafka release:
https://docs.mesosphere.com/service-docs/kafka/2.0.1-0.11.0/

Regards,
Sean

On Tue, Oct 3, 2017 at 5:20 AM, Valentin Forst  wrote:

> Hi Avinash,
>
> Thanks for this hint.
>
> It would have been great, if someone could share experience using this
> framework on the production environment.
>
> Thanks in advance
> Valentin
>
> > Am 02.10.2017 um 19:39 schrieb Avinash Shahdadpuri <
> avinashp...@gmail.com>:
> >
> > There is a native kafka framework which runs on top of DC/OS.
> >
> > https://docs.mesosphere.com/service-docs/kafka/
> >
> > This will most likely be a better way to run kafka on DC/OS rather than
> > running it as a marathon framework.
> >
> >
>
>


-- 
Senior Software Engineer, Lightbend, Inc.



@seg1o 


Re: Using Kafka on DC/OS + Marathon

2017-10-03 Thread Valentin Forst
Hi Avinash,

Thanks for this hint. 

It would have been great, if someone could share experience using this 
framework on the production environment.

Thanks in advance
Valentin

> Am 02.10.2017 um 19:39 schrieb Avinash Shahdadpuri :
> 
> There is a native kafka framework which runs on top of DC/OS.
> 
> https://docs.mesosphere.com/service-docs/kafka/
> 
> This will most likely be a better way to run kafka on DC/OS rather than
> running it as a marathon framework.
> 
> 



Re: Using Kafka on DC/OS + Marathon

2017-10-03 Thread Valentin Forst
Hi David,

Thank you for your reply! Presumably I wasn't clear in my previous post. Here
is an example to visualize what I'm trying to figure out:

Imagine we have a data flow propagating messages through a Kafka cluster which
happens to consist of 3 brokers (3 partitions, 3 replicas). If one of those
brokers goes down, Kafka does two things:
- Broker rebalancing
- Rebalancing the consumers within a group

Now when Marathon starts the failed broker again, some messages could get
duplicated or missed… That is exactly what I would like to avoid (a hard
requirement). Make sense?

Does someone have experience with Kafka on DC/OS + Marathon in a production
environment with exactly-once semantics?

Which case would you recommend?
1. Kafka on DC/OS + Marathon using Mesos private nodes (+ microservices on the
public nodes)
2. Kafka on a separate DC/OS cluster, i.e. the microservices have a different
DC/OS cluster
3. Kafka cluster on its own

Cheers,
Valentin


> Am 02.10.2017 um 16:35 schrieb David Garcia :
> 
> I’m not sure how your requirements of Kafka are related to your requirements 
> for marathon.  Kafka is a streaming-log system and marathon is a scheduler.  
> Mesos, as your resource manager, simply “manages” resources.  Are you asking 
> about multitenancy?  If so, I highly recommend that you separate your Kafka 
> cluster (and zookeeper) from your other services.  Kafka leverages the OS 
> page cache to optimize read performance and it seems likely this would 
> interfere with Mesos resource management policy.
> 
> -David 
> 
> On 10/2/17, 6:39 AM, "Valentin Forst"  wrote:
> 
>Hi there,
> 
>Working in a huge company we are about to install Kafka on DC/OS (Mesos) 
> and intend to use Marathon as a Scheduler. Since I am new to DC/OS and 
> Marathon, I was wondering if this is a recommended way of using Kafka in the 
> production environment.
> 
>My doubts are:
>- Kafka manages Broker rebalancing (e.g. Failover, etc.) using its own 
> semantic. Can I trust Marathon that it will match the requirements here?
>- Since our Container Platform - DC/OS is going to be used by other „micro 
> services“ - sooner or later this is going to raise a performance issue. Should 
> we better use a dedicated DC/OS instance for our Kafka-Cluster? Or 
> Kafka-Cluster on its own?
>- Is there something else we should consider important if using Kafka on 
> DC/OS + Marathon?
> 
> 
>Thanks in advance for your time.
>Valentin
> 
> 
> 



Re: Is anyone using MX4J loader?

2017-10-03 Thread Dong Lin
Thanks for the information Ted. Yes, it does appear that the issue has
already been reported in KAFKA-4946. I will comment on that ticket and see
if Ralph can open a git pull request instead. Or we can simply commit
Ralph's patch in his name.

Thanks,
Dong

On Fri, Sep 29, 2017 at 4:56 AM, Ted Yu  wrote:

> Looks like Ralph logged KAFKA-4946 for this already.
>
> On Fri, Sep 29, 2017 at 12:40 AM, Dong Lin  wrote:
>
> > Hi Kafka users,
> >
> > I am wondering if anyone is currently using the MX4J loader feature. This
> > feature is currently enabled by default. But if kafka_mx4jenable is
> > explicitly set to true in the broker config, then the broker will disable
> > the MX4J loader. And if kafka_mx4jenable is explicitly set to false in the
> > broker config, the broker will enable the MX4J loader, which is
> > counterintuitive.
> >
> > I am going to submit a patch to make the use of kafka_mx4jenable more
> > intuitive, e.g. the broker should enable the MX4J loader if
> > kafka_mx4jenable is set to true. I am writing this email to see if anyone
> > is using it or has any comments on this.
> >
> > Thanks,
> > Dong
> >
>


Re: out of order sequence number in exactly once streams

2017-10-03 Thread Sameer Kumar
I had enabled EOS through the streams config and, as explained in the
documentation, I have not added anything else other than the following config.

  streamsConfiguration.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG,
StreamsConfig.EXACTLY_ONCE);

As explained by you, I think producer idempotence and retries would be
automatically picked up. I was wondering if there is some way I can print the
configs picked up by the streams app, e.g. the current retries.
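
For reference, the embedded producer logs its effective settings at startup,
so one option is to look for the "ProducerConfig values:" block at INFO level.
A hedged sketch for printing them programmatically - getProducerConfigs is
public in the 0.11 StreamsConfig, and the application id, bootstrap servers
and client id below are placeholders:

  import java.util.Properties;
  import org.apache.kafka.streams.StreamsConfig;

  Properties props = new Properties();
  props.put(StreamsConfig.APPLICATION_ID_CONFIG, "my-streams-app");
  props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
  props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);

  // prints the producer-level configs after the EOS-enforced overrides
  // (idempotence, retries, etc.) have been applied
  StreamsConfig config = new StreamsConfig(props);
  System.out.println(config.getProducerConfigs("my-streams-app-client"));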

Do I need to add any other configuration?

-Sameer.

On Tue, Oct 3, 2017 at 11:59 AM, Sameer Kumar 
wrote:

> I had enabled eos through streams config.
>
>
> On Fri, Sep 29, 2017 at 11:12 PM, Matthias J. Sax 
> wrote:
>
>> That's correct: If EOS is enabled, we enforce some producer configs:
>>
>> https://github.com/apache/kafka/blob/0.11.0.1/streams/src/
>> main/java/org/apache/kafka/streams/StreamsConfig.java#L678-L688
>>
>> https://github.com/apache/kafka/blob/0.11.0.1/streams/src/
>> main/java/org/apache/kafka/streams/StreamsConfig.java#L691
>>
>> https://github.com/apache/kafka/blob/0.11.0.1/streams/src/
>> main/java/org/apache/kafka/streams/StreamsConfig.java#L493-L496
>>
>>
>> Note, that by default we set retries to Integer.MAX_VALUE but we do not
>> enforce this setting (as pointed out by Damian already). So you could
>> overwrite it with a smaller value (what is of course not recommended).
>>
>> I was not sure though, if you enabled EOS or just enabled idempotency
>> only for the producer -- what you can easily do by providing the
>> corresponding producer configs.
>>
>> If you did enable EOS in StreamsConfig, Producer will take care of
>> OutOfOrderSequenceException in general.
>>
>> However, there are scenarios which the Producer cannot handle itself. For
>> those cases, OutOfOrderSequenceException indicates that there has been
>> data loss on the broker, i.e., a previously acknowledged message no longer
>> exists. For the most part, this should only occur in rare situations
>> (simultaneous power outages, multiple disk losses, software bugs
>> resulting in data corruption, etc.).
>>
>>
>> -Matthias
>>
>> On 9/29/17 7:55 AM, Damian Guy wrote:
>> > You can set ProducerConfig.RETRIES_CONFIG in your StreamsConfig, i.e,
>> >
>> > Properties props = new Properties();
>> > props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
>> > ...
>> >
>> > On Fri, 29 Sep 2017 at 13:17 Sameer Kumar 
>> wrote:
>> >
>> >> I guess once stream apps have exactly-once enabled, producer idempotence
>> >> gets enabled by default and so do the retries. I guess producer retries
>> >> are managed internally and not exposed through StreamsConfig.
>> >>
>> >> https://kafka.apache.org/0110/documentation/#streamsconfigs
>> >>
>> >> -Sameer.
>> >>
>> >> On Thu, Sep 28, 2017 at 12:12 AM, Matthias J. Sax <
>> matth...@confluent.io>
>> >> wrote:
>> >>
>> >>> An OutOfOrderSequenceException should only occur if an idempotent
>> >>> producer gets out of sync with the broker. If you set
>> >>> `enable.idempotence = true` on your producer, you might want to set
>> >>> `retries = Integer.MAX_VALUE`.
>> >>>
>> >>> -Matthias
>> >>>
>> >>> On 9/26/17 11:30 PM, Sameer Kumar wrote:
>>  Hi,
>> 
>>  I again received this exception while running my streams app. I am
>> >> using
>>  Kafka 11.0.1. After restarting my app, this error got fixed.
>> 
>>  I guess this might be due to bad network. Any pointers. Any config
>>  wherein I can configure it for retries.
>> 
>>  Exception trace is attached.
>> 
>>  Regards,
>>  -Sameer.
>> >>>
>> >>>
>> >>
>> >
>>
>>
>


Re: out of order sequence number in exactly once streams

2017-10-03 Thread Sameer Kumar
I had enabled eos through streams config.


On Fri, Sep 29, 2017 at 11:12 PM, Matthias J. Sax 
wrote:

> That's correct: If EOS is enabled, we enforce some producer configs:
>
> https://github.com/apache/kafka/blob/0.11.0.1/streams/
> src/main/java/org/apache/kafka/streams/StreamsConfig.java#L678-L688
>
> https://github.com/apache/kafka/blob/0.11.0.1/streams/
> src/main/java/org/apache/kafka/streams/StreamsConfig.java#L691
>
> https://github.com/apache/kafka/blob/0.11.0.1/streams/
> src/main/java/org/apache/kafka/streams/StreamsConfig.java#L493-L496
>
>
> Note, that by default we set retries to Integer.MAX_VALUE but we do not
> enforce this setting (as pointed out by Damian already). So you could
> overwrite it with a smaller value (what is of course not recommended).
>
> I was not sure though, if you enabled EOS or just enabled idempotency
> only for the producer -- what you can easily do by providing the
> corresponding producer configs.
>
> If you did enable EOS in StreamsConfig, Producer will take care of
> OutOfOrderSequenceException in general.
>
> However, there are scenarios which the Producer cannot handle itself. For
> those cases, OutOfOrderSequenceException indicates that there has been
> data loss on the broker, i.e., a previously acknowledged message no longer
> exists. For the most part, this should only occur in rare situations
> (simultaneous power outages, multiple disk losses, software bugs
> resulting in data corruption, etc.).
>
>
> -Matthias
>
> On 9/29/17 7:55 AM, Damian Guy wrote:
> > You can set ProducerConfig.RETRIES_CONFIG in your StreamsConfig, i.e,
> >
> > Properties props = new Properties();
> > props.put(ProducerConfig.RETRIES_CONFIG, Integer.MAX_VALUE);
> > ...
> >
> > On Fri, 29 Sep 2017 at 13:17 Sameer Kumar 
> wrote:
> >
> >> I guess once stream apps have exactly-once enabled, producer idempotence
> >> gets enabled by default and so do the retries. I guess producer retries
> >> are managed internally and not exposed through StreamsConfig.
> >>
> >> https://kafka.apache.org/0110/documentation/#streamsconfigs
> >>
> >> -Sameer.
> >>
> >> On Thu, Sep 28, 2017 at 12:12 AM, Matthias J. Sax <
> matth...@confluent.io>
> >> wrote:
> >>
> >>> An OutOfOrderSequenceException should only occur if an idempotent
> >>> producer gets out of sync with the broker. If you set
> >>> `enable.idempotence = true` on your producer, you might want to set
> >>> `retries = Integer.MAX_VALUE`.
> >>>
> >>> -Matthias
> >>>
> >>> On 9/26/17 11:30 PM, Sameer Kumar wrote:
>  Hi,
> 
>  I again received this exception while running my streams app. I am
> >> using
>  Kafka 11.0.1. After restarting my app, this error got fixed.
> 
>  I guess this might be due to bad network. Any pointers. Any config
>  wherein I can configure it for retries.
> 
>  Exception trace is attached.
> 
>  Regards,
>  -Sameer.
> >>>
> >>>
> >>
> >
>
>