Re: COORDINATOR_NOT_AVAILABLE exception on the broker side and Disconnection Exception on the consumer side breaks the entire cluster

2019-02-15 Thread Ankur Rana
Any comments anyone?

On Fri, Feb 15, 2019 at 6:08 PM Ankur Rana  wrote:

> Hi everyone,
>
> We have a Kafka cluster with 5 brokers, with all topics having a
> replication factor of at least 2. We have multiple Kafka consumer
> applications running on this cluster. Most of these consumers are built
> using the consumer APIs, and quite recently we have started using Streams
> applications.
>
> We are facing a really weird issue. Sometimes our Kafka cluster breaks
> down. By breaking down I mean that consumers and producers start throwing
> disconnection exceptions and all of them just stop.
>
> We use the Debezium connector to push Postgres events to Kafka topics.
> Debezium throws the error below:
> [image: image.png]
>
>
> Kafka broker throws the error below:
> COORDINATOR_NOT_AVAILABLE
> [image: image.png]
>
>
> Error on the consumer side :
>
> [image: image.png]
>
>
> To fix it, I stop the disconnected broker and everything recovers by
> itself. Debezium starts flushing messages and all consumers start working
> normally. I bring the disconnected broker back up and everything works as
> before without any problem.
>
> I don't understand a few things here:
>
>
>1. What could be the reason behind this disconnection exception? Even
>if one of the brokers was somehow disconnected, isn't Kafka supposed to
>handle it in a cluster where all topics have a replication factor of 2?
>2. It appears that the malfunctioning broker was in a state where it
>was neither disconnected from nor connected to the cluster. I could still
>see the broker visible in Kafka manager with zero bytes in, while it was
>disconnected from all the producers and consumers.
>3. Weirdly, I have noticed that this situation usually occurs when I
>start multiple consumers of the stream application. I am not sure about
>this, as this error has only occurred a few times. It happened twice today,
>and both times I had started 3 consumers of the same stream application.
>
>
> Can anyone help me debug this problem? I don't know where to look for
> possible issues with our cluster or stream application. I am attaching the
> streams config and stream application code for your reference.
> Please feel free to ask for any more details.
>
>
> Stream config :
> [image: image.png]
>
>
> Stream application code : https://codeshare.io/Gq6pLB
>
> --
> Thanks,
>
> Ankur Rana
> Software Developer
> FarEye
>


-- 
Thanks,

Ankur Rana
Software Developer
FarEye


Re: DSL - Deliver through a table and then to a stream?

2019-02-15 Thread John Roesler
Hi Trey,

I think there is a ticket open requesting to be able to re-use the source
topic, so I don't think it's an intentional restriction, just a consequence
of the way the code is structured at the moment.

Is it sufficient to send the update to "calls" and "answered-calls" at the
same time? You could do something like:

val answeredCalls =
 actions.filter { _, action -> action == Actions.ANSWER }
  .join(callsTable) { id, call -> call }  // now a KTable
  .mapValues { call -> doAnswer(call) } // actual answer implementation

answeredCalls.to("calls")
answeredCalls.to("answered-calls")

Does that help?

-John


On Fri, Feb 15, 2019 at 4:18 PM Trey Hutcheson 
wrote:

> For context, imagine I'm building an IVR simulator. Desired workflow:
>
> IVR knows about a ringing call. IVR receives an IPC instruction to answer
> the call. That instruction is realized by sending a message {action=ANSWER}
> to the "actions" topic.
>
> At this point, the system needs to do two things: actually answer the call,
> and then start a recording of the call, in that order. Because of
> implementation peculiarities external to the system, assume that these two
> things cannot be executed together atomically.
>
> So this is what I'd *like* to do (warning, kotlin code, types omitted for
> brevity):
>
> val callsTable = builder.table("calls", ...)
> val actions = builder.stream("actions", ..)
>
> actions.filter { _, action -> action == Actions.ANSWER }
>   .join(callsTable) { id, call -> call }  // now a KTable
>   .mapValues { call -> doAnswer(call) } // actual answer implementation
>   .through("calls") // persist in state store
>   .to("answered-calls") // let other actors in the system know the call was
> answered, such as start the recording process
>
> Now in the current version of the streams library (2.1.0), that little bit
> of topology throws an exception when trying to build it, with a message
> that a source has already been defined for the "calls" topic. So apparently
> the call to .through materializes a view and defines a source, which was
> already defined in the call to builder.table("calls")?
>
> So how do I do what I want? This sequence needs to happen in order. I have
> tried .branch, but that just ends up in a race condition (the thing doing
> the recording has to join to the calls table and filter that the call has been
> answered).
>
> I could create a custom processor that forwards to both sinks - but does
> that really solve the problem? And if it did, how do I create a
> KafkaStreams instance from a combination of StreamsBuilder and Topology?
>
> Thanks for the insight
> Trey
>


DSL - Deliver through a table and then to a stream?

2019-02-15 Thread Trey Hutcheson
For context, imagine I'm building an IVR simulator. Desired workflow:

IVR knows about a ringing call. IVR receives an IPC instruction to answer
the call. That instruction is realized by sending a message {action=ANSWER}
to the "actions" topic.

At this point, the system needs to do two things: actually answer the call,
and then start a recording of the call, in that order. Because of
implementation peculiarities external to the system, assume that these two
things cannot be executed together atomically.

So this is what I'd *like* to do (warning, kotlin code, types omitted for
brevity):

val callsTable = builder.table("calls", ...)
val actions = builder.stream("actions", ..)

actions.filter { _, action -> action == Actions.ANSWER }
  .join(callsTable) { id, call -> call }  // now a KTable
  .mapValues { call -> doAnswer(call) } // actual answer implementation
  .through("calls") // persist in state store
  .to("answered-calls") // let other actors in the system know the call was
answered, such as start the recording process

Now in the current version of the streams library (2.1.0), that little bit
of topology throws an exception when trying to build it, with a message
that a source has already been defined for the "calls" topic. So apparently
the call to .through materializes a view and defines a source, which was
already defined in the call to builder.table("calls")?

So how do I do what I want? This sequence needs to happen in order. I have
tried .branch, but that just ends up in a race condition (the thing doing
the recording has to join to the calls table and filter that the call has been
answered).

I could create a custom processor that forwards to both sinks - but does
that really solve the problem? And if it did, how do I create a
KafkaStreams instance from a combination of StreamsBuilder and Topology?

Thanks for the insight
Trey
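For reference, a custom processor that forwards to two named sinks can be sketched with the Processor API, using `ProcessorContext#forward` with `To.child(...)` to target each sink in a fixed order. This is only an illustrative sketch, assuming Kafka Streams 2.1; `Call` and `doAnswer` stand in for the real types from the snippet above, all node names are made up, and the ANSWER filter is omitted for brevity:

```kotlin
import org.apache.kafka.streams.Topology
import org.apache.kafka.streams.processor.Processor
import org.apache.kafka.streams.processor.ProcessorContext
import org.apache.kafka.streams.processor.ProcessorSupplier
import org.apache.kafka.streams.processor.To

class Call                                // placeholder for the real call type
fun doAnswer(call: Call): Call = call     // placeholder for the real answer logic

// Answers the call, then forwards the result to both sinks, "calls" first.
class AnswerProcessor : Processor<String, Call> {
    private lateinit var context: ProcessorContext

    override fun init(context: ProcessorContext) { this.context = context }

    override fun process(id: String, call: Call) {
        val answered = doAnswer(call)
        context.forward(id, answered, To.child("calls-sink"))
        context.forward(id, answered, To.child("answered-calls-sink"))
    }

    override fun close() {}
}

val topology = Topology()
    .addSource("actions-source", "actions")
    .addProcessor("answer", ProcessorSupplier { AnswerProcessor() }, "actions-source")
    .addSink("calls-sink", "calls", "answer")
    .addSink("answered-calls-sink", "answered-calls", "answer")
```

On the last question: `StreamsBuilder#build()` returns a `Topology`, so DSL-built parts and hand-added processors/sinks can end up in the same `Topology` object handed to the `KafkaStreams` constructor.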


Re: [ANNOUNCE] New Committer: Randall Hauch

2019-02-15 Thread Matthias J. Sax
Congrats Randall!


-Matthias

On 2/14/19 6:16 PM, Guozhang Wang wrote:
> Hello all,
> 
> The PMC of Apache Kafka is happy to announce another new committer joining
> the project today: we have invited Randall Hauch as a project committer and
> he has accepted.
> 
> Randall has been participating in the Kafka community for the past 3 years,
> and is well known as the founder of the Debezium project, a popular project
> for database change-capture streams using Kafka (https://debezium.io). More
> recently he has become the main person keeping Kafka Connect moving
> forward, having participated in nearly all KIP discussions and Q&As on the
> mailing list. He's authored 6 KIPs, authored 50 pull requests, conducted
> over a hundred reviews around Kafka Connect, and has also been evangelizing
> Kafka Connect at several Kafka Summit venues.
> 
> 
> Thank you very much for your contributions to the Connect community,
> Randall! And looking forward to many more :)
> 
> 
> Guozhang, on behalf of the Apache Kafka PMC
> 





Re: Accessing Kafka stream's KTable underlying RocksDB memory usage

2019-02-15 Thread Matthias J. Sax
Cross posted on SO:
https://stackoverflow.com/questions/54701449/accessing-kafka-streams-ktable-underlying-rocksdb-memory-usage

On 2/14/19 9:24 PM, P. won wrote:
> Hi,
> 
> I have a Kafka Streams app that currently takes 3 topics and aggregates
> them into a KTable. This app resides inside a microservice which has
> been allocated 512 MB of memory to work with. After implementing this,
> I've noticed that the docker container running the microservice
> eventually runs out of memory, and I was trying to debug the cause.
> 
> My current theory (whilst reading the sizing guide
> https://docs.confluent.io/current/streams/sizing.html) is that over
> time, the increasing number of records stored in the KTable and, by
> extension, the underlying RocksDB, is causing the OOM for the
> microservice. Does Kafka provide any way to find out the memory used
> by the underlying default RocksDB implementation?
> 
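Kafka Streams 2.1 does not expose RocksDB's memory usage as a Streams metric, but the stores' footprint can be bounded instead through the `rocksdb.config.setter` property. A minimal sketch, assuming Kafka Streams 2.1 with its bundled RocksDB; the class name and the byte limits below are illustrative, not a tuned recommendation:

```kotlin
import org.apache.kafka.streams.StreamsConfig
import org.apache.kafka.streams.state.RocksDBConfigSetter
import org.rocksdb.BlockBasedTableConfig
import org.rocksdb.Options
import java.util.Properties

// Applied once per state store; limits here are per store, so the total
// budget is roughly (stores x partitions) x (cache + memtables).
class BoundedRocksDBConfig : RocksDBConfigSetter {
    override fun setConfig(storeName: String, options: Options, configs: MutableMap<String, Any>) {
        val tableConfig = BlockBasedTableConfig()
        tableConfig.setBlockCacheSize(16 * 1024 * 1024L)  // cap the block cache
        options.setTableFormatConfig(tableConfig)
        options.setWriteBufferSize(8 * 1024 * 1024L)      // smaller memtables
        options.setMaxWriteBufferNumber(2)                // fewer memtables in memory
    }
}

val props = Properties()
props.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, BoundedRocksDBConfig::class.java)
```

Note that RocksDB allocates off-heap, so these limits have to be subtracted from the container budget alongside the JVM heap.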





COORDINATOR_NOT_AVAILABLE exception on the broker side and Disconnection Exception on the consumer side breaks the entire cluster

2019-02-15 Thread Ankur Rana
Hi everyone,

We have a Kafka cluster with 5 brokers, with all topics having a
replication factor of at least 2. We have multiple Kafka consumer
applications running on this cluster. Most of these consumers are built
using the consumer APIs, and quite recently we have started using Streams
applications.

We are facing a really weird issue. Sometimes our Kafka cluster breaks
down. By breaking down I mean that consumers and producers start throwing
disconnection exceptions and all of them just stop.

We use the Debezium connector to push Postgres events to Kafka topics. Debezium
throws the error below:
[image: image.png]


Kafka broker throws the error below:
COORDINATOR_NOT_AVAILABLE
[image: image.png]


Error on the consumer side :

[image: image.png]


To fix it, I stop the disconnected broker and everything recovers by
itself. Debezium starts flushing messages and all consumers start working
normally. I bring the disconnected broker back up and everything works as
before without any problem.

I don't understand a few things here:


   1. What could be the reason behind this disconnection exception? Even if
   one of the brokers was somehow disconnected, isn't Kafka supposed to handle
   it in a cluster where all topics have a replication factor of 2?
   2. It appears that the malfunctioning broker was in a state where it was
   neither disconnected from nor connected to the cluster. I could still see
   the broker visible in Kafka manager with zero bytes in, while it was
   disconnected from all the producers and consumers.
   3. Weirdly, I have noticed that this situation usually occurs when I
   start multiple consumers of the stream application. I am not sure about
   this, as this error has only occurred a few times. It happened twice today,
   and both times I had started 3 consumers of the same stream application.


Can anyone help me debug this problem? I don't know where to look for
possible issues with our cluster or stream application. I am attaching the
streams config and stream application code for your reference.
Please feel free to ask for any more details.


Stream config :
[image: image.png]


Stream application code : https://codeshare.io/Gq6pLB

-- 
Thanks,

Ankur Rana
Software Developer
FarEye


Re: [ANNOUNCE] New Committer: Randall Hauch

2019-02-15 Thread Martin Gainty
willkommen randall

Martin


From: Daniel Hanley 
Sent: Friday, February 15, 2019 6:55 AM
To: users@kafka.apache.org
Cc: dev
Subject: Re: [ANNOUNCE] New Committer: Randall Hauch

Congratulations Randall!

On Fri, Feb 15, 2019 at 9:35 AM Viktor Somogyi-Vass 
wrote:

> Congrats Randall! :)
>
> On Fri, Feb 15, 2019 at 10:15 AM Satish Duggana 
> wrote:
>
> > Congratulations Randall!
> >
> > On Fri, Feb 15, 2019 at 1:51 PM Mickael Maison  >
> > wrote:
> > >
> > > Congrats Randall!
> > >
> > > On Fri, Feb 15, 2019 at 6:37 AM James Cheng 
> > wrote:
> > > >
> > > > Congrats, Randall! Well deserved!
> > > >
> > > > -James
> > > >
> > > > Sent from my iPhone
> > > >


Re: [ANNOUNCE] New Committer: Randall Hauch

2019-02-15 Thread Rajini Sivaram
Congratulations, Randall!

On Fri, Feb 15, 2019 at 11:56 AM Daniel Hanley  wrote:

> Congratulations Randall!
>
> On Fri, Feb 15, 2019 at 9:35 AM Viktor Somogyi-Vass <
> viktorsomo...@gmail.com>
> wrote:
>
> > Congrats Randall! :)
> >
> > On Fri, Feb 15, 2019 at 10:15 AM Satish Duggana <
> satish.dugg...@gmail.com>
> > wrote:
> >
> > > Congratulations Randall!
> > >
> > > On Fri, Feb 15, 2019 at 1:51 PM Mickael Maison <
> mickael.mai...@gmail.com
> > >
> > > wrote:
> > > >
> > > > Congrats Randall!
> > > >
> > > > On Fri, Feb 15, 2019 at 6:37 AM James Cheng 
> > > wrote:
> > > > >
> > > > > Congrats, Randall! Well deserved!
> > > > >
> > > > > -James
> > > > >
> > > > > Sent from my iPhone
> > > > >


Re: [ANNOUNCE] New Committer: Randall Hauch

2019-02-15 Thread Daniel Hanley
Congratulations Randall!

On Fri, Feb 15, 2019 at 9:35 AM Viktor Somogyi-Vass 
wrote:

> Congrats Randall! :)
>
> On Fri, Feb 15, 2019 at 10:15 AM Satish Duggana 
> wrote:
>
> > Congratulations Randall!
> >
> > On Fri, Feb 15, 2019 at 1:51 PM Mickael Maison  >
> > wrote:
> > >
> > > Congrats Randall!
> > >
> > > On Fri, Feb 15, 2019 at 6:37 AM James Cheng 
> > wrote:
> > > >
> > > > Congrats, Randall! Well deserved!
> > > >
> > > > -James
> > > >
> > > > Sent from my iPhone
> > > >


Re: [ANNOUNCE] New Committer: Randall Hauch

2019-02-15 Thread Viktor Somogyi-Vass
Congrats Randall! :)

On Fri, Feb 15, 2019 at 10:15 AM Satish Duggana 
wrote:

> Congratulations Randall!
>
> On Fri, Feb 15, 2019 at 1:51 PM Mickael Maison 
> wrote:
> >
> > Congrats Randall!
> >
> > On Fri, Feb 15, 2019 at 6:37 AM James Cheng 
> wrote:
> > >
> > > Congrats, Randall! Well deserved!
> > >
> > > -James
> > >
> > > Sent from my iPhone
> > >


Re: [ANNOUNCE] New Committer: Randall Hauch

2019-02-15 Thread Satish Duggana
Congratulations Randall!

On Fri, Feb 15, 2019 at 1:51 PM Mickael Maison  wrote:
>
> Congrats Randall!
>
> On Fri, Feb 15, 2019 at 6:37 AM James Cheng  wrote:
> >
> > Congrats, Randall! Well deserved!
> >
> > -James
> >
> > Sent from my iPhone
> >


Re: [kafka-clients] [VOTE] 2.1.1 RC2

2019-02-15 Thread Colin McCabe
P.S. I have added KAFKA-7897 to the release notes. Good catch, Jason.

best,
Colin

On Fri, Feb 15, 2019, at 00:49, Colin McCabe wrote:
> Hi all,
> 
> With 7 non-binding +1 votes, 3 binding +1 votes, no +0 votes, and no -1 
> votes, the vote passes.
> 
> Thanks, all!
> 
> cheers,
> Colin
> 
> 
> On Fri, Feb 15, 2019, at 00:07, Jonathan Santilli wrote:
>> 
>> 
>> Hello,
>> 
>> I have downloaded the source and executed integration and unit tests 
>> successfully.
>> Ran kafka-monitor for about 1 hour without any issues.
>> 
>> +1
>> 
>> Thanks for the release Colin.
>> --
>> Jonathan Santilli
>> 
>> 
>> 
>> On Fri, Feb 15, 2019 at 6:16 AM Jason Gustafson  wrote:
>>> Ran the quickstart against the 2.11 artifact and checked the release notes.
>>> For some reason, KAFKA-7897 is not included in the notes, though I
>>> definitely see it in the tagged version. The RC was probably created before
>>> the JIRA was resolved. I think we can regenerate without another RC, so +1
>>> from me.
>>> 
>>> Thanks Colin!
>>> 
>>> On Thu, Feb 14, 2019 at 3:32 PM Jun Rao  wrote:
>>> 
>>> > Hi, Colin,
>>> >
>>> > Thanks for running the release. Verified the quickstart for 2.12 binary. 
>>> > +1
>>> > from me.
>>> >
>>> > Jun
>>> >
>>> > On Fri, Feb 8, 2019 at 12:02 PM Colin McCabe  wrote:
>>> >
>>> > > Hi all,
>>> > >
>>> > > This is the third candidate for release of Apache Kafka 2.1.1. This
>>> > > release includes many bug fixes for Apache Kafka 2.1.
>>> > >
>>> > > Compared to rc1, this release includes the following changes:
>>> > > * MINOR: release.py: fix some compatibility problems.
>>> > > * KAFKA-7897; Disable leader epoch cache when older message formats are
>>> > > used
>>> > > * KAFKA-7902: Replace original loginContext if SASL/OAUTHBEARER refresh
>>> > > login fails
>>> > > * MINOR: Fix more places where the version should be bumped from 2.1.0 
>>> > > ->
>>> > > 2.1.1
>>> > > * KAFKA-7890: Invalidate ClusterConnectionState cache for a broker if 
>>> > > the
>>> > > hostname of the broker changes.
>>> > > * KAFKA-7873; Always seek to beginning in KafkaBasedLog
>>> > > * MINOR: Correctly set dev version in version.py
>>> > >
>>> > > Check out the release notes here:
>>> > > http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/RELEASE_NOTES.html
>>> > >
>>> > > The vote will go until Wednesday, February 13th.
>>> > >
>>> > > * Release artifacts to be voted upon (source and binary):
>>> > > http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/
>>> > >
>>> > > * Maven artifacts to be voted upon:
>>> > > https://repository.apache.org/content/groups/staging/
>>> > >
>>> > > * Javadoc:
>>> > > http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/javadoc/
>>> > >
>>> > > * Tag to be voted upon (off 2.1 branch) is the 2.1.1 tag:
>>> > > https://github.com/apache/kafka/releases/tag/2.1.1-rc2
>>> > >
>>> > > * Jenkins builds for the 2.1 branch:
>>> > > Unit/integration tests: https://builds.apache.org/job/kafka-2.1-jdk8/
>>> > >
>>> > > Thanks to everyone who tested the earlier RCs.
>>> > >
>>> > > cheers,
>>> > > Colin
>>> > >
>>> > > --
>>> > > You received this message because you are subscribed to the Google 
>>> > > Groups
>>> > > "kafka-clients" group.
>>> > > To unsubscribe from this group and stop receiving emails from it, send 
>>> > > an
>>> > > email to kafka-clients+unsubscr...@googlegroups.com 
>>> > > .
>>> > > To post to this group, send email to kafka-clie...@googlegroups.com.
>>> > > Visit this group at https://groups.google.com/group/kafka-clients.
>>> > > To view this discussion on the web visit
>>> > >
>>> > https://groups.google.com/d/msgid/kafka-clients/ea314ca1-d23a-47c4-8fc7-83b9b1c792db%40www.fastmail.com
>>> > > .
>>> > > For more options, visit https://groups.google.com/d/optout.
>>> > >
>>> >
>> 
>> 
>> -- 
>> Santilli Jonathan
> 
> 




Re: [kafka-clients] [VOTE] 2.1.1 RC2

2019-02-15 Thread Colin McCabe
Hi all,

With 7 non-binding +1 votes, 3 binding +1 votes, no +0 votes, and no -1 votes, 
the vote passes.

Thanks, all!

cheers,
Colin


On Fri, Feb 15, 2019, at 00:07, Jonathan Santilli wrote:
> 
> 
> Hello,
> 
> I have downloaded the source and executed integration and unit tests 
> successfully.
> Ran kafka-monitor for about 1 hour without any issues.
> 
> +1
> 
> Thanks for the release Colin.
> --
> Jonathan Santilli
> 
> 
> 
> On Fri, Feb 15, 2019 at 6:16 AM Jason Gustafson  wrote:
>> Ran the quickstart against the 2.11 artifact and checked the release notes.
>> For some reason, KAFKA-7897 is not included in the notes, though I
>> definitely see it in the tagged version. The RC was probably created before
>> the JIRA was resolved. I think we can regenerate without another RC, so +1
>> from me.
>>  
>> Thanks Colin!
>>  
>> On Thu, Feb 14, 2019 at 3:32 PM Jun Rao  wrote:
>>  
>> > Hi, Colin,
>> >
>> > Thanks for running the release. Verified the quickstart for 2.12 binary. +1
>> > from me.
>> >
>> > Jun
>> >
>> > On Fri, Feb 8, 2019 at 12:02 PM Colin McCabe  wrote:
>> >
>> > > Hi all,
>> > >
>> > > This is the third candidate for release of Apache Kafka 2.1.1. This
>> > > release includes many bug fixes for Apache Kafka 2.1.
>> > >
>> > > Compared to rc1, this release includes the following changes:
>> > > * MINOR: release.py: fix some compatibility problems.
>> > > * KAFKA-7897; Disable leader epoch cache when older message formats are
>> > > used
>> > > * KAFKA-7902: Replace original loginContext if SASL/OAUTHBEARER refresh
>> > > login fails
>> > > * MINOR: Fix more places where the version should be bumped from 2.1.0 ->
>> > > 2.1.1
>> > > * KAFKA-7890: Invalidate ClusterConnectionState cache for a broker if the
>> > > hostname of the broker changes.
>> > > * KAFKA-7873; Always seek to beginning in KafkaBasedLog
>> > > * MINOR: Correctly set dev version in version.py
>> > >
>> > > Check out the release notes here:
>> > > http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/RELEASE_NOTES.html
>> > >
>> > > The vote will go until Wednesday, February 13th.
>> > >
>> > > * Release artifacts to be voted upon (source and binary):
>> > > http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/
>> > >
>> > > * Maven artifacts to be voted upon:
>> > > https://repository.apache.org/content/groups/staging/
>> > >
>> > > * Javadoc:
>> > > http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/javadoc/
>> > >
>> > > * Tag to be voted upon (off 2.1 branch) is the 2.1.1 tag:
>> > > https://github.com/apache/kafka/releases/tag/2.1.1-rc2
>> > >
>> > > * Jenkins builds for the 2.1 branch:
>> > > Unit/integration tests: https://builds.apache.org/job/kafka-2.1-jdk8/
>> > >
>> > > Thanks to everyone who tested the earlier RCs.
>> > >
>> > > cheers,
>> > > Colin
>> > >
>> > > --
>> > > You received this message because you are subscribed to the Google Groups
>> > > "kafka-clients" group.
>> > > To unsubscribe from this group and stop receiving emails from it, send an
>> > > email to kafka-clients+unsubscr...@googlegroups.com 
>> > > .
>> > > To post to this group, send email to kafka-clie...@googlegroups.com.
>> > > Visit this group at https://groups.google.com/group/kafka-clients.
>> > > To view this discussion on the web visit
>> > >
>> > https://groups.google.com/d/msgid/kafka-clients/ea314ca1-d23a-47c4-8fc7-83b9b1c792db%40www.fastmail.com
>> > > .
>> > > For more options, visit https://groups.google.com/d/optout.
>> > >
>> >
> 
> 
> -- 
> Santilli Jonathan


Re: [ANNOUNCE] New Committer: Randall Hauch

2019-02-15 Thread Mickael Maison
Congrats Randall!

On Fri, Feb 15, 2019 at 6:37 AM James Cheng  wrote:
>
> Congrats, Randall! Well deserved!
>
> -James
>
> Sent from my iPhone
>


Re: [kafka-clients] [VOTE] 2.1.1 RC2

2019-02-15 Thread Jonathan Santilli
Hello,

I have downloaded the source and executed integration and unit tests
successfully.
Ran kafka-monitor for about 1 hour without any issues.

+1

Thanks for the release Colin.
--
Jonathan Santilli



On Fri, Feb 15, 2019 at 6:16 AM Jason Gustafson  wrote:

> Ran the quickstart against the 2.11 artifact and checked the release notes.
> For some reason, KAFKA-7897 is not included in the notes, though I
> definitely see it in the tagged version. The RC was probably created before
> the JIRA was resolved. I think we can regenerate without another RC, so +1
> from me.
>
> Thanks Colin!
>
> On Thu, Feb 14, 2019 at 3:32 PM Jun Rao  wrote:
>
> > Hi, Colin,
> >
> > Thanks for running the release. Verified the quickstart for 2.12 binary.
> +1
> > from me.
> >
> > Jun
> >
> > On Fri, Feb 8, 2019 at 12:02 PM Colin McCabe  wrote:
> >
> > > Hi all,
> > >
> > > This is the third candidate for release of Apache Kafka 2.1.1.  This
> > > release includes many bug fixes for Apache Kafka 2.1.
> > >
> > > Compared to rc1, this release includes the following changes:
> > > * MINOR: release.py: fix some compatibility problems.
> > > * KAFKA-7897; Disable leader epoch cache when older message formats are
> > > used
> > > * KAFKA-7902: Replace original loginContext if SASL/OAUTHBEARER refresh
> > > login fails
> > > * MINOR: Fix more places where the version should be bumped from 2.1.0
> ->
> > > 2.1.1
> > > * KAFKA-7890: Invalidate ClusterConnectionState cache for a broker if
> the
> > > hostname of the broker changes.
> > > * KAFKA-7873; Always seek to beginning in KafkaBasedLog
> > > * MINOR: Correctly set dev version in version.py
> > >
> > > Check out the release notes here:
> > > http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/RELEASE_NOTES.html
> > >
> > > The vote will go until Wednesday, February 13th.
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging/
> > >
> > > * Javadoc:
> > > http://home.apache.org/~cmccabe/kafka-2.1.1-rc2/javadoc/
> > >
> > > * Tag to be voted upon (off 2.1 branch) is the 2.1.1 tag:
> > > https://github.com/apache/kafka/releases/tag/2.1.1-rc2
> > >
> > > * Jenkins builds for the 2.1 branch:
> > > Unit/integration tests: https://builds.apache.org/job/kafka-2.1-jdk8/
> > >
> > > Thanks to everyone who tested the earlier RCs.
> > >
> > > cheers,
> > > Colin
> > >
> > > --
> > > You received this message because you are subscribed to the Google
> Groups
> > > "kafka-clients" group.
> > > To unsubscribe from this group and stop receiving emails from it, send
> an
> > > email to kafka-clients+unsubscr...@googlegroups.com.
> > > To post to this group, send email to kafka-clie...@googlegroups.com.
> > > Visit this group at https://groups.google.com/group/kafka-clients.
> > > To view this discussion on the web visit
> > >
> >
> https://groups.google.com/d/msgid/kafka-clients/ea314ca1-d23a-47c4-8fc7-83b9b1c792db%40www.fastmail.com
> > > .
> > > For more options, visit https://groups.google.com/d/optout.
> > >
> >
>


-- 
Santilli Jonathan