Re: [ANNOUNCE] New Kafka PMC Member: Randall Hauch

2021-04-16 Thread Luke Chen
Congratulations Randall!

Luke

Bill Bejeck  於 2021年4月17日 週六 上午11:33 寫道:

> Congratulations Randall!
>
> -Bill
>
> On Fri, Apr 16, 2021 at 11:10 PM lobo xu  wrote:
>
> > Congrats Randall
> >
>


Re: [ANNOUNCE] New Kafka PMC Member: Randall Hauch

2021-04-16 Thread Bill Bejeck
Congratulations Randall!

-Bill

On Fri, Apr 16, 2021 at 11:10 PM lobo xu  wrote:

> Congrats Randall
>


Re: [ANNOUNCE] New Kafka PMC Member: Randall Hauch

2021-04-16 Thread lobo xu
Congrats Randall 


Re: [ANNOUNCE] New Kafka PMC Member: Randall Hauch

2021-04-16 Thread Satish Duggana
Congratulations Randall!!

On Sat, 17 Apr, 2021, 5:29 AM Raymond Ng,  wrote:

> Congrats Randall!
>
> On Fri, Apr 16, 2021 at 4:45 PM Matthias J. Sax  wrote:
>
> > Hi,
> >
> > It's my pleasure to announce that Randall Hauch is now a member of the
> > Kafka PMC.
> >
> > Randall has been a Kafka committer since Feb 2019. He has remained
> > active in the community since becoming a committer.
> >
> >
> >
> > Congratulations Randall!
> >
> >  -Matthias, on behalf of Apache Kafka PMC
> >
>


Re: [VOTE] 2.8.0 RC2

2021-04-16 Thread Randall Hauch
Hey, John.

+1 (binding)

I've performed the following:
  1. Validated the signatures and checksums
  2. Built the project from the src tgz file, and ran some of the unit tests
  3. Built from the tag and ran a subset of the tests
  4. Went through the quickstart for the broker and the Connect quickstart
(plus ran a custom connector)
  5. Spot reviewed the site and JavaDocs
  6. Spot checked the issues in the release notes match the Jira reports

Thanks again, John!

Best regards,

Randall
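Step 1 of the checklist above can be partly scripted. This is a hedged sketch: the artifact names and the gpg-style layout of the published .sha512 files are assumptions based on typical Apache release layouts, not part of the vote thread.

```shell
# Sketch of checksum validation for a release candidate artifact.
# Assumes the published .sha512 uses the gpg "--print-md" layout
# ("<file>: <grouped uppercase hex>"); signatures would additionally be
# checked with: gpg --import KEYS && gpg --verify <file>.asc <file>
verify_checksum() {
  artifact="$1"
  computed=$(sha512sum "$artifact" | awk '{print $1}')
  # Keep only the hex digest from the .sha512 file and normalize case.
  expected=$(tr -d ' \t\n' < "${artifact}.sha512" | sed 's/^.*://' | tr 'A-F' 'a-f')
  if [ "$computed" = "$expected" ]; then
    echo "OK: $artifact"
  else
    echo "MISMATCH: $artifact"
  fi
}
```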

On Thu, Apr 15, 2021 at 11:30 AM Bill Bejeck  wrote:

> Hi John,
>
> Validation steps taken:
>
>1. Validated the signatures and checksums,
>2. Built the project from the src tgz file, and ran the unit tests
>3. Went through the quickstart and the Kafka Streams quickstart
>4. Ran the quickstart for the KRaft module
>   1. Created a topic
>   2. Produced and consumed messages
>   3. Ran the Metadata shell
>
> +1 (binding)
>
> Thanks for running this release!
>
> Bill
>
>
> On Wed, Apr 14, 2021 at 4:03 PM John Roesler  wrote:
>
> > Hello Kafka users, developers and client-developers,
> >
> > This is the third candidate for release of Apache Kafka
> > 2.8.0. This is a major release that includes many new
> > features, including:
> >
> >  * Early-access release of replacing Zookeeper with a
> >self-managed quorum
> >  * Add Describe Cluster API
> >  * Support mutual TLS authentication on SASL_SSL listeners
> >  * Ergonomic improvements to Streams TopologyTestDriver
> >  * Logger API improvement to respect the hierarchy
> >  * Request/response trace logs are now JSON-formatted
> >  * New API to add and remove Streams threads while running
> >  * New REST API to expose Connect task configurations
> >  * Fixed the TimeWindowDeserializer to be able to
> >deserialize
> >keys outside of Streams (such as in the console consumer)
> >  * Streams resilience improvement: new uncaught exception
> >handler
> >  * Streams resilience improvement: automatically recover
> >from transient timeout exceptions
> >
> > Release notes for the 2.8.0 release:
> > https://home.apache.org/~vvcephei/kafka-2.8.0-rc2/RELEASE_NOTES.html
> >
> > *** Please download, test and vote by 19 April 2021 ***
> >
> > Kafka's KEYS file containing PGP keys we use to sign the
> > release:
> > https://kafka.apache.org/KEYS
> >
> > * Release artifacts to be voted upon (source and binary):
> > https://home.apache.org/~vvcephei/kafka-2.8.0-rc2/
> >
> > * Maven artifacts to be voted upon:
> > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >
> > * Javadoc:
> > https://home.apache.org/~vvcephei/kafka-2.8.0-rc2/javadoc/
> >
> > * Tag to be voted upon (off 2.8 branch) is the 2.8.0 tag:
> > https://github.com/apache/kafka/releases/tag/2.8.0-rc2
> >
> > * Documentation:
> > https://kafka.apache.org/28/documentation.html
> >
> > * Protocol:
> > https://kafka.apache.org/28/protocol.html
> >
> > * Successful Jenkins builds for the 2.8 branch:
> > Unit/integration tests:
> > https://ci-builds.apache.org/job/Kafka/job/kafka/job/2.8/
> > (still flaky)
> >
> > System tests:
> >
> > https://jenkins.confluent.io/job/system-test-kafka/job/2.8/6
> > 0/
> >
> >
> >
> http://confluent-kafka-2-8-system-test-results.s3-us-west-2.amazonaws.com/2021-04-14--001.1618407001--confluentinc--2.8--1b61272d45/report.html
> >
> > /**
> >
> > Thanks,
> > John
> >
> >
> >
>


Re: [ANNOUNCE] New Kafka PMC Member: Randall Hauch

2021-04-16 Thread Raymond Ng
Congrats Randall!

On Fri, Apr 16, 2021 at 4:45 PM Matthias J. Sax  wrote:

> Hi,
>
> It's my pleasure to announce that Randall Hauch is now a member of the
> Kafka PMC.
>
> Randall has been a Kafka committer since Feb 2019. He has remained
> active in the community since becoming a committer.
>
>
>
> Congratulations Randall!
>
>  -Matthias, on behalf of Apache Kafka PMC
>


[ANNOUNCE] New Kafka PMC Member: Randall Hauch

2021-04-16 Thread Matthias J. Sax
Hi,

It's my pleasure to announce that Randall Hauch is now a member of the
Kafka PMC.

Randall has been a Kafka committer since Feb 2019. He has remained
active in the community since becoming a committer.



Congratulations Randall!

 -Matthias, on behalf of Apache Kafka PMC


Re: [DISCUSS] KIP-730: Producer ID generation in KRaft mode

2021-04-16 Thread Ron Dagostino
Thanks, David.  Yeah, I agree.  I was more bringing it up to make sure we 
explicitly discussed it.

Ron

> On Apr 16, 2021, at 2:15 PM, David Arthur  wrote:
> 
> Guozhang / Ismael, yes agreed on the plurality of the naming. I've updated
> the KIP.
> 
> Ron, idempotent allocations are certainly possible, but as you pointed out
> it might not be needed. It would require some additional book-keeping by
> the controller to recall what was the last producer ID block allocated for
> each broker. It would also open us up to bugs where the same ID block could
> be given out repeatedly in the case of a broker providing some wrong
> information in its request. It also doesn't help in the case of a broker
> restarting since the broker won't know what its last block was (unless it
> adds some local state management). Generally, I think the added complexity
> is not worth it. The RPC rate limiting should be sufficient to protect us
> from exhausting the ID space. If you agree, I can update the discussion
> section of the KIP with this conclusion.
> 
> Thanks for the feedback so far, everyone!
> 
> -David
> 
>> On Wed, Apr 14, 2021 at 4:37 PM Ismael Juma  wrote:
>> 
>> Hi Guozhang,
>> 
>> That was my original suggestion, so I am naturally +1 :)
>> 
>> Ismael
>> 
>>> On Wed, Apr 14, 2021 at 11:44 AM Guozhang Wang  wrote:
>>> 
>>> Hi David,
>>> 
>>> Just putting my paranoid hat here :) Could we name the req/resp name as
>>> "AllocateProducerIds" instead of "AllocateProducerId"? Otherwise, LGTM!
>>> 
>>> Guozhang
>>> 
 On Thu, Apr 8, 2021 at 2:23 PM Ron Dagostino  wrote:
 
 Hi David.  I'm wondering if it might be a good idea to have the broker
 send information about the last block it successfully received when it
 requests a new block.  As the RPC stands right now it can't be
 idempotent -- it just tells the controller "provide me a new block,
 please".  One case where it might be useful for the RPC to be
 idempotent is if the broker never receives the response from the
 controller such that it asks again.  That would result in the burning
 of the block that the controller provided but that the broker never
 received.  Now, granted, the ID space is 64 bits, so we would have to
 make ~2^54 requests to burn the entire space, and that isn't going to
 happen.  So whether this is actually needed is questionable.  And it
 might not be worth it to write the controller side code to make it act
 idempotently even if we added the request field to make it possible.
 But I figured this is worth mentioning even if we explicitly decide to
 reject it.
 
 Ron
 
 On Thu, Apr 8, 2021 at 3:16 PM Ron Dagostino 
>> wrote:
> 
> Oh, I see.  Yes, my mistake -- I read it wrong.  You are right that
> all we need in the metadata log is the latest value allocated.
> 
> Ron
> 
> On Thu, Apr 8, 2021 at 11:21 AM David Arthur 
>> wrote:
>> 
>> Ron -- I considered making the RPC response and record use the same
>>> (or
>> very similar) fields, but the use case is slightly different. A
>>> broker
>> handling the RPC needs to know the bounds of the block since it has
>>> no
 idea
>> what the block size is. Also, the brokers will normally see
 non-contiguous
>> blocks.
>> 
>> For the metadata log, we can just keep track of the latest producer
>>> Id
 that
>> was allocated. It's kind of like a high watermark for producer IDs.
 This
>> actually saves us from needing an extra field in the record (the
>> KIP
 has
>> just ProducerIdEnd => int64 in the record).
>> 
>> Does that make sense?
>> 
>> On Wed, Apr 7, 2021 at 8:44 AM Ron Dagostino 
 wrote:
>> 
>>> Thanks for the KIP, David.
>>> 
>>> With the RPC returning a start and length, should the record in
>> the
>>> metadata log do the same thing for consistency and to save the
>> byte
>>> per record?
>>> 
>>> Ron
>>> 
>>> 
>>> On Tue, Apr 6, 2021 at 11:06 PM Ismael Juma 
 wrote:
 
 Great, thanks. Instead of calling it "bridge release", can we
>> say
 3.0?
 
 Ismael
 
 On Tue, Apr 6, 2021 at 7:48 PM David Arthur 
 wrote:
 
> Thanks for the feedback, Ismael. Renaming the RPC and using
 start+len
> instead of start+end sounds fine.
> 
> And yes, the controller will allocate the IDs in ZK mode for
>>> the
 bridge
> release.
> 
> I'll update the KIP to reflect these points.
> 
> Thanks!
> 
> On Tue, Apr 6, 2021 at 7:30 PM Ismael Juma <
>> ism...@juma.me.uk>
 wrote:
> 
>> Sorry, one more question: the allocation of ids will be
>> done
 by the
>> controller even in ZK mode, right?
>> 
>> Ismael
>> 
>> On Tue, Apr 6, 2021 at 4:26 PM Ismael Juma <
>>> 

Re: [DISCUSS] KIP-726: Make the CooperativeStickyAssignor as the default assignor

2021-04-16 Thread Guozhang Wang
1) From the user's perspective, it is always possible in practice that a
commit within onPartitionsRevoked throws (e.g. if the member missed the
previous rebalance and its assigned partitions have already been
re-assigned) -- and onPartitionsLost was introduced for that exact reason,
i.e. it is primarily for optimization, not for correctness guarantees -- on
the other hand, it would be surprising for users to see the commit return
and only later find it did not go through. Given that, I'd suggest we still
throw the exception right away. Regarding the flag itself, though, I agree
that keeping it set until the next successful join group makes sense to be
safer.

2) That's crystal, thank you for the clarification.
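The flag semantics under discussion (block commit attempts for lost partitions until the next successful join group) can be modeled with a small sketch. The class and method names below are illustrative stand-ins, not the actual consumer internals:

```java
// Illustrative model of the proposed "blockCommit" flag: once partitions
// are lost, locally refuse commit attempts until the next successful
// rebalance, since the broker may not yet know that ownership changed.
public class CommitGuard {
    private boolean blockCommit = false;

    public void onPartitionsLost() {
        blockCommit = true; // set when the member detects lost partitions
    }

    public void onRebalanceComplete() {
        blockCommit = false; // cleared only after the next successful join group
    }

    // Returns true if a commit may be sent to the broker; a real
    // implementation might instead swallow a CommitFailedException here.
    public boolean mayCommit() {
        return !blockCommit;
    }
}
```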

On Wed, Apr 14, 2021 at 6:46 PM Sophie Blee-Goldman
 wrote:

> 1) Once the short-circuit is triggered, the member will downgrade to the
> EAGER protocol, but
> won't necessarily try to rejoin the group right away.
>
> In the "happy path", the user has implemented #onPartitionsLost correctly
> and will not attempt
> to commit partitions that are lost. And since these partitions have indeed
> been revoked, the user
> application should not attempt to commit those partitions after this point.
> In this case, there's no
> reason for the consumer to immediately rejoin the group. Since a
> non-cooperative assignor was
> selected, we know that all partitions have been assigned. This member can
> continue on as usual,
> processing the remaining un-revoked partitions and will follow the EAGER
> protocol in the next
> rebalance. There's no user-facing impact or handling required; all that
> happens is that the work
> since the last commit on those revoked partitions has been lost.
>
> In the less-happy path, the user has implemented #onPartitionsLost
> incorrectly or not implemented
> it at all, falling back on the default which invokes #onPartitionsRevoked
> which in turn will attempt to
> commit those partitions during the rebalance callback. In this case we rely
> on the flag to prevent
> this commit request from being sent to the broker.
>
> Originally I was thinking we should throw a CommitFailedException up
> through the #onPartitionsLost
> callback, and eventually up through poll(), then rejoin the group. But now
> I'm wondering if this is really
> necessary -- the important point in all cases is just to prevent the
> commit, but there's no reason the
> consumer should not be allowed to continue processing its other partitions,
> and it hasn't dropped out
> of the group. What do you think about this slight amendment to my original
> proposal: if a user does end up
> calling commit for whatever reason when invoking #onPartitionsLost, we'll
> just swallow the resulting
> CommitFailedException. So the user application wouldn't see anything, and
> the only impact would be
> that these partitions were not able to commit those last set of offsets on
> the revoked partitions.
>
> WDYT? My only concern there is that the user might have some implicit
> assumption that unless a
> CommitFailedException was thrown, the offsets of revoked partitions were
> successfully committed
> and they may have some downstream logic that should trigger only in this
> case. If that's a concern,
> then I would keep the original proposal which says a CommitFailedException
> will be thrown up through
> poll(), and leave it up to the user to decide if they want to trigger a new
> rebalance/rejoin the group or not.
>
> Regarding the flag which prevents committing the revoked partitions, this
> will need to continue
> blocking such commit attempts until the next time the consumer rejoins the
> group, ie until the end
> of the next successful rebalance. Technically this shouldn't matter, since
> the consumer no longer
> owns those partitions this member shouldn't attempt to commit them anyways.
> Usually we can
> rely on the broker rejecting commit attempts on partitions that are not
> owned, in which case the
> consumer will throw a CommitFailedException. This is similar, except that
> we can't rely on the
> broker having been informed of the change in ownership before this consumer
> might attempt to
> commit. So to avoid this race condition, we'll keep the "blockCommit" flag
> until the next rebalance
> when we can be certain that the broker is clear on this
> partition's ownership.
>
> 2) I guess maybe the wording here is unclear -- what I meant is that all
> 3.0 applications will *eventually*
> enable cooperative rebalancing in the stable state. This doesn't mean that
> it will select COOPERATIVE
> when it first starts up, and in order for this dynamic protocol upgrade to
> be safe we do indeed need to
> start off with EAGER and only upgrade once the selected assignor indicates
> that it's safe to do so.
> (This only applies if multiple assignors are used, if the assignors are
> "cooperative-sticky" only then it
> will just start out and forever remain on COOPERATIVE, like in Streams)
>
> Since it's just the first rebalance, the 

Jenkins build is still unstable: Kafka » Kafka Branch Builder » 2.8 #17

2021-04-16 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-730: Producer ID generation in KRaft mode

2021-04-16 Thread David Arthur
Guozhang / Ismael, yes agreed on the plurality of the naming. I've updated
the KIP.

Ron, idempotent allocations are certainly possible, but as you pointed out
it might not be needed. It would require some additional book-keeping by
the controller to recall what was the last producer ID block allocated for
each broker. It would also open us up to bugs where the same ID block could
be given out repeatedly in the case of a broker providing some wrong
information in its request. It also doesn't help in the case of a broker
restarting since the broker won't know what its last block was (unless it
adds some local state management). Generally, I think the added complexity
is not worth it. The RPC rate limiting should be sufficient to protect us
from exhausting the ID space. If you agree, I can update the discussion
section of the KIP with this conclusion.

Thanks for the feedback so far, everyone!

-David
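The "high watermark" scheme described in this thread can be sketched as follows. The names and the block size are illustrative; the KIP only specifies a ProducerIdEnd => int64 field in the record:

```java
// Sketch of controller-side block allocation: each AllocateProducerIds
// request hands out the next contiguous block, and only the new end of
// the allocated range needs to be persisted to the metadata log.
public class ProducerIdBlockManager {
    static final long BLOCK_SIZE = 1000L; // illustrative block size

    private long producerIdEnd = 0L; // the "high watermark" replayed from the log

    static final class Block {
        final long start;
        final long length;
        Block(long start, long length) { this.start = start; this.length = length; }
    }

    // Not idempotent by design: a lost response "burns" the block, which the
    // discussion concluded is acceptable given the 64-bit ID space.
    synchronized Block allocate() {
        long start = producerIdEnd;
        producerIdEnd += BLOCK_SIZE; // this single value is written as ProducerIdEnd
        return new Block(start, BLOCK_SIZE);
    }
}
```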

On Wed, Apr 14, 2021 at 4:37 PM Ismael Juma  wrote:

> Hi Guozhang,
>
> That was my original suggestion, so I am naturally +1 :)
>
> Ismael
>
> On Wed, Apr 14, 2021 at 11:44 AM Guozhang Wang  wrote:
>
> > Hi David,
> >
> > Just putting my paranoid hat here :) Could we name the req/resp name as
> > "AllocateProducerIds" instead of "AllocateProducerId"? Otherwise, LGTM!
> >
> > Guozhang
> >
> > On Thu, Apr 8, 2021 at 2:23 PM Ron Dagostino  wrote:
> >
> > > Hi David.  I'm wondering if it might be a good idea to have the broker
> > > send information about the last block it successfully received when it
> > > requests a new block.  As the RPC stands right now it can't be
> > > idempotent -- it just tells the controller "provide me a new block,
> > > please".  One case where it might be useful for the RPC to be
> > > idempotent is if the broker never receives the response from the
> > > controller such that it asks again.  That would result in the burning
> > > of the block that the controller provided but that the broker never
> > > received.  Now, granted, the ID space is 64 bits, so we would have to
> > > make ~2^54 requests to burn the entire space, and that isn't going to
> > > happen.  So whether this is actually needed is questionable.  And it
> > > might not be worth it to write the controller side code to make it act
> > > idempotently even if we added the request field to make it possible.
> > > But I figured this is worth mentioning even if we explicitly decide to
> > > reject it.
> > >
> > > Ron
> > >
> > > On Thu, Apr 8, 2021 at 3:16 PM Ron Dagostino 
> wrote:
> > > >
> > > > Oh, I see.  Yes, my mistake -- I read it wrong.  You are right that
> > > > all we need in the metadata log is the latest value allocated.
> > > >
> > > > Ron
> > > >
> > > > On Thu, Apr 8, 2021 at 11:21 AM David Arthur 
> wrote:
> > > > >
> > > > > Ron -- I considered making the RPC response and record use the same
> > (or
> > > > > very similar) fields, but the use case is slightly different. A
> > broker
> > > > > handling the RPC needs to know the bounds of the block since it has
> > no
> > > idea
> > > > > what the block size is. Also, the brokers will normally see
> > > non-contiguous
> > > > > blocks.
> > > > >
> > > > > For the metadata log, we can just keep track of the latest producer
> > Id
> > > that
> > > > > was allocated. It's kind of like a high watermark for producer IDs.
> > > This
> > > > > actually saves us from needing an extra field in the record (the
> KIP
> > > has
> > > > > just ProducerIdEnd => int64 in the record).
> > > > >
> > > > > Does that make sense?
> > > > >
> > > > > On Wed, Apr 7, 2021 at 8:44 AM Ron Dagostino 
> > > wrote:
> > > > >
> > > > > > Thanks for the KIP, David.
> > > > > >
> > > > > > With the RPC returning a start and length, should the record in
> the
> > > > > > metadata log do the same thing for consistency and to save the
> byte
> > > > > > per record?
> > > > > >
> > > > > > Ron
> > > > > >
> > > > > >
> > > > > > On Tue, Apr 6, 2021 at 11:06 PM Ismael Juma 
> > > wrote:
> > > > > > >
> > > > > > > Great, thanks. Instead of calling it "bridge release", can we
> say
> > > 3.0?
> > > > > > >
> > > > > > > Ismael
> > > > > > >
> > > > > > > On Tue, Apr 6, 2021 at 7:48 PM David Arthur 
> > > wrote:
> > > > > > >
> > > > > > > > Thanks for the feedback, Ismael. Renaming the RPC and using
> > > start+len
> > > > > > > > instead of start+end sounds fine.
> > > > > > > >
> > > > > > > > And yes, the controller will allocate the IDs in ZK mode for
> > the
> > > bridge
> > > > > > > > release.
> > > > > > > >
> > > > > > > > I'll update the KIP to reflect these points.
> > > > > > > >
> > > > > > > > Thanks!
> > > > > > > >
> > > > > > > > On Tue, Apr 6, 2021 at 7:30 PM Ismael Juma <
> ism...@juma.me.uk>
> > > wrote:
> > > > > > > >
> > > > > > > > > Sorry, one more question: the allocation of ids will be
> done
> > > by the
> > > > > > > > > controller even in ZK mode, right?
> > > > > > > > >
> > > > > > > > > Ismael
> > > > > > > > >
> > > > > > > > > On Tue, Apr 6, 2021 at 4:26 PM 

Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #47

2021-04-16 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-12679) Rebalancing a restoring or running task may cause directory livelocking with newly created task

2021-04-16 Thread Peter Nahas (Jira)
Peter Nahas created KAFKA-12679:
---

 Summary: Rebalancing a restoring or running task may cause 
directory livelocking with newly created task
 Key: KAFKA-12679
 URL: https://issues.apache.org/jira/browse/KAFKA-12679
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.6.1
 Environment: Broker and client version 2.6.1
Multi-node broker cluster
Multi-node, auto scaling streams app instances

Reporter: Peter Nahas
 Attachments: Backoff-between-directory-lock-attempts.patch

If a task that uses a state store is in the restoring state or in a running 
state and the task gets rebalanced to a separate thread on the same instance, 
the newly created task will attempt to lock the state store directory while the 
first thread is continuing to use it. This is totally normal and expected 
behavior when the first thread is not yet aware of the rebalance. However, that 
newly created task is effectively running a while loop with no backoff waiting 
to lock the directory:
 # TaskManager tells the task to restore in `tryToCompleteRestoration`
 # The task attempts to lock the directory
 # The lock attempt fails and throws a 
`org.apache.kafka.streams.errors.LockException`
 # TaskManager catches the exception, stops further processing on the task and 
reports that not all tasks have restored
 # The StreamThread `runLoop` continues to run.

I've seen some documentation indicate that there is supposed to be a backoff 
when this condition occurs, but there does not appear to be any in the code. 
The result is that if this goes on for long enough, the lock-loop may dominate 
CPU usage in the process and starve out the old stream thread task processing.

 

When in this state, the DEBUG level logging for TaskManager will produce a 
steady stream of messages like the following:
{noformat}
2021-03-30 20:59:51,098 DEBUG --- [StreamThread-10] o.a.k.s.p.i.TaskManager 
: stream-thread [StreamThread-10] Could not initialize 0_34 due to 
the following exception; will retry
org.apache.kafka.streams.errors.LockException: stream-thread [StreamThread-10] 
standby-task [0_34] Failed to lock the state directory for task 0_34
{noformat}
 

 

I've attached a git formatted patch to resolve the issue. Simply detect the 
scenario and sleep for the backoff time in the appropriate StreamThread.
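The approach the attached patch suggests can be sketched like this; the StateDirectory interface and backoff value below are stand-ins for the real Streams internals:

```java
// Sketch of the proposed fix: instead of retrying the directory lock in a
// tight loop on every runLoop pass, back off between attempts so the old
// thread has a chance to release the state directory.
public class LockBackoffSketch {
    static final long BACKOFF_MS = 100L; // illustrative backoff interval

    interface StateDirectory {
        boolean lock(String taskId); // stand-in for the real lock attempt
    }

    // Returns once the lock is acquired, sleeping between failed attempts
    // rather than spinning (the livelock described above).
    static void lockWithBackoff(StateDirectory dir, String taskId)
            throws InterruptedException {
        while (!dir.lock(taskId)) {
            Thread.sleep(BACKOFF_MS);
        }
    }
}
```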

 

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] 2.8.0 RC2

2021-04-16 Thread Ismael Juma
Hi Gwen,

This is a bit different than the MirrorMaker 2 case. KRaft is "early
access" and we decided not to include the KRaft configs in the default
config docs. They will be there in 3.0.

Ismael

On Thu, Apr 15, 2021 at 2:31 PM Gwen Shapira 
wrote:

> Historically, "beta" features were not added to the documentation. I
> complained that MirrorMaker 2 was missing and I was pointed to the
> readme...
>
> I can't say I love this, but it seems to be The Kafka Way of doing things.
> So not a release blocker IMO.
>
> On Thu, Apr 15, 2021 at 1:46 PM Israel Ekpo  wrote:
>
> > I just checked the documentation page and could not find any reference to
> > the KIP-631 configurations like process.roles, node.id and any of the
> > controller.quorum.* configs
> > https://kafka.apache.org/28/documentation.html
> >
> > Were these left out on purpose or should they be included in the 2.8
> > documentation page as well?
> >
> > In the README.md file for KIP-500 (KRaft) mode, I see these configs:
> > https://github.com/apache/kafka/blob/2.8/config/kraft/README.md
> >
> > However, I don't see them in the documentation page where most users will
> > check first
> > https://kafka.apache.org/28/documentation.html
> >
> > A lot of users looking to try out KRaft mode are going to be very
> confused
> > on how to set this up without Zookeeper to check things out.
> >
> > When you have a moment, please share your thoughts regarding if these
> > configs/settings need to be in this 2.8 documentation for this release.
> >
> > Thanks.
> >
> >
> > Additional References:
> >
> >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-631%3A+The+Quorum-based+Kafka+Controller#KIP631:TheQuorumbasedKafkaController-Configurations
> >
> >
> >
> https://github.com/apache/kafka/blob/2.8/raft/src/main/java/org/apache/kafka/raft/RaftConfig.java#L57
> >
> >
> >
> https://github.com/apache/kafka/blob/2.8/core/src/main/scala/kafka/server/KafkaConfig.scala#L277
> >
> >
> >
> https://github.com/apache/kafka/blob/2.8/core/src/main/scala/kafka/server/KafkaConfig.scala#L373
> >
> > On Thu, Apr 15, 2021 at 12:30 PM Bill Bejeck  wrote:
> >
> > > Hi John,
> > >
> > > Validation steps taken:
> > >
> > >1. Validated the signatures and checksums,
> > >2. Built the project from the src tgz file, and ran the unit tests
> > >3. Went through the quickstart and the Kafka Streams quickstart
> > >4. Ran the quickstart for the KRaft module
> > >   1. Created a topic
> > >   2. Produced and consumed messages
> > >   3. Ran the Metadata shell
> > >
> > > +1 (binding)
> > >
> > > Thanks for running this release!
> > >
> > > Bill
> > >
> > >
> > > On Wed, Apr 14, 2021 at 4:03 PM John Roesler 
> wrote:
> > >
> > > > Hello Kafka users, developers and client-developers,
> > > >
> > > > This is the third candidate for release of Apache Kafka
> > > > 2.8.0. This is a major release that includes many new
> > > > features, including:
> > > >
> > > >  * Early-access release of replacing Zookeeper with a
> > > >self-managed quorum
> > > >  * Add Describe Cluster API
> > > >  * Support mutual TLS authentication on SASL_SSL listeners
> > > >  * Ergonomic improvements to Streams TopologyTestDriver
> > > >  * Logger API improvement to respect the hierarchy
> > > >  * Request/response trace logs are now JSON-formatted
> > > >  * New API to add and remove Streams threads while running
> > > >  * New REST API to expose Connect task configurations
> > > >  * Fixed the TimeWindowDeserializer to be able to
> > > >deserialize
> > > >keys outside of Streams (such as in the console consumer)
> > > >  * Streams resilience improvement: new uncaught exception
> > > >handler
> > > >  * Streams resilience improvement: automatically recover
> > > >from transient timeout exceptions
> > > >
> > > > Release notes for the 2.8.0 release:
> > > > https://home.apache.org/~vvcephei/kafka-2.8.0-rc2/RELEASE_NOTES.html
> > > >
> > > > *** Please download, test and vote by 19 April 2021 ***
> > > >
> > > > Kafka's KEYS file containing PGP keys we use to sign the
> > > > release:
> > > > https://kafka.apache.org/KEYS
> > > >
> > > > * Release artifacts to be voted upon (source and binary):
> > > > https://home.apache.org/~vvcephei/kafka-2.8.0-rc2/
> > > >
> > > > * Maven artifacts to be voted upon:
> > > >
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > > >
> > > > * Javadoc:
> > > > https://home.apache.org/~vvcephei/kafka-2.8.0-rc2/javadoc/
> > > >
> > > > * Tag to be voted upon (off 2.8 branch) is the 2.8.0 tag:
> > > > https://github.com/apache/kafka/releases/tag/2.8.0-rc2
> > > >
> > > > * Documentation:
> > > > https://kafka.apache.org/28/documentation.html
> > > >
> > > > * Protocol:
> > > > https://kafka.apache.org/28/protocol.html
> > > >
> > > > * Successful Jenkins builds for the 2.8 branch:
> > > > Unit/integration tests:
> > > > https://ci-builds.apache.org/job/Kafka/job/kafka/job/2.8/
> > > > (still flaky)
> > > >
> > > 

Re: [VOTE] 2.8.0 RC2

2021-04-16 Thread Ismael Juma
Hi Israel,

This was intentional because KRaft mode is "early access" and not meant for
production usage. We don't want people to use it accidentally. Important
functionality like reassignment is not implemented yet. The configs will be
documented from 3.0 onwards.

Ismael
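For anyone looking for these settings before 3.0, the 2.8 KRaft README configures roughly the following. This is a minimal single-node sketch; the values are illustrative, not authoritative:

```properties
# Minimal combined broker+controller sketch, based on config/kraft/ in the
# 2.8 branch (early access -- not for production)
process.roles=broker,controller
node.id=1
controller.quorum.voters=1@localhost:9093
listeners=PLAINTEXT://localhost:9092,CONTROLLER://localhost:9093
inter.broker.listener.name=PLAINTEXT
controller.listener.names=CONTROLLER
log.dirs=/tmp/kraft-combined-logs
```

Note that 2.8 also requires formatting the storage directory with bin/kafka-storage.sh before first start, per the README.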

On Thu, Apr 15, 2021, 1:46 PM Israel Ekpo  wrote:

> I just checked the documentation page and could not find any reference to
> the KIP-631 configurations like process.roles, node.id and any of the
> controller.quorum.* configs
> https://kafka.apache.org/28/documentation.html
>
> Were these left out on purpose or should they be included in the 2.8
> documentation page as well?
>
> In the README.md file for KIP-500 (KRaft) mode, I see these configs:
> https://github.com/apache/kafka/blob/2.8/config/kraft/README.md
>
> However, I don't see them in the documentation page where most users will
> check first
> https://kafka.apache.org/28/documentation.html
>
> A lot of users looking to try out KRaft mode are going to be very confused
> on how to set this up without Zookeeper to check things out.
>
> When you have a moment, please share your thoughts regarding if these
> configs/settings need to be in this 2.8 documentation for this release.
>
> Thanks.
>
>
> Additional References:
>
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-631%3A+The+Quorum-based+Kafka+Controller#KIP631:TheQuorumbasedKafkaController-Configurations
>
>
> https://github.com/apache/kafka/blob/2.8/raft/src/main/java/org/apache/kafka/raft/RaftConfig.java#L57
>
>
> https://github.com/apache/kafka/blob/2.8/core/src/main/scala/kafka/server/KafkaConfig.scala#L277
>
>
> https://github.com/apache/kafka/blob/2.8/core/src/main/scala/kafka/server/KafkaConfig.scala#L373
>
> On Thu, Apr 15, 2021 at 12:30 PM Bill Bejeck  wrote:
>
> > Hi John,
> >
> > Validation steps taken:
> >
> >1. Validated the signatures and checksums,
> >2. Built the project from the src tgz file, and ran the unit tests
> >3. Went through the quickstart and the Kafka Streams quickstart
> >4. Ran the quickstart for the KRaft module
> >   1. Created a topic
> >   2. Produced and consumed messages
> >   3. Ran the Metadata shell
> >
> > +1 (binding)
> >
> > Thanks for running this release!
> >
> > Bill
> >
> >
> > On Wed, Apr 14, 2021 at 4:03 PM John Roesler  wrote:
> >
> > > Hello Kafka users, developers and client-developers,
> > >
> > > This is the third candidate for release of Apache Kafka
> > > 2.8.0. This is a major release that includes many new
> > > features, including:
> > >
> > >  * Early-access release of replacing Zookeeper with a
> > >self-managed quorum
> > >  * Add Describe Cluster API
> > >  * Support mutual TLS authentication on SASL_SSL listeners
> > >  * Ergonomic improvements to Streams TopologyTestDriver
> > >  * Logger API improvement to respect the hierarchy
> > >  * Request/response trace logs are now JSON-formatted
> > >  * New API to add and remove Streams threads while running
> > >  * New REST API to expose Connect task configurations
> > >  * Fixed the TimeWindowDeserializer to be able to
> > >deserialize
> > >keys outside of Streams (such as in the console consumer)
> > >  * Streams resilience improvement: new uncaught exception
> > >handler
> > >  * Streams resilience improvement: automatically recover
> > >from transient timeout exceptions
> > >
> > > Release notes for the 2.8.0 release:
> > > https://home.apache.org/~vvcephei/kafka-2.8.0-rc2/RELEASE_NOTES.html
> > >
> > > *** Please download, test and vote by 19 April 2021 ***
> > >
> > > Kafka's KEYS file containing PGP keys we use to sign the
> > > release:
> > > https://kafka.apache.org/KEYS
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > https://home.apache.org/~vvcephei/kafka-2.8.0-rc2/
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > >
> > > * Javadoc:
> > > https://home.apache.org/~vvcephei/kafka-2.8.0-rc2/javadoc/
> > >
> > > * Tag to be voted upon (off 2.8 branch) is the 2.8.0 tag:
> > > https://github.com/apache/kafka/releases/tag/2.8.0-rc2
> > >
> > > * Documentation:
> > > https://kafka.apache.org/28/documentation.html
> > >
> > > * Protocol:
> > > https://kafka.apache.org/28/protocol.html
> > >
> > > * Successful Jenkins builds for the 2.8 branch:
> > > Unit/integration tests:
> > > https://ci-builds.apache.org/job/Kafka/job/kafka/job/2.8/
> > > (still flaky)
> > >
> > > System tests:
> > >
> > > https://jenkins.confluent.io/job/system-test-kafka/job/2.8/6
> > > 0/
> > >
> > >
> > >
> >
> http://confluent-kafka-2-8-system-test-results.s3-us-west-2.amazonaws.com/2021-04-14--001.1618407001--confluentinc--2.8--1b61272d45/report.html
> > >
> > > /**
> > >
> > > Thanks,
> > > John
> > >
> > >
> > >
> >
>


[jira] [Created] (KAFKA-12678) Flaky Test CustomQuotaCallbackTest.testCustomQuotaCallback

2021-04-16 Thread Matthias J. Sax (Jira)
Matthias J. Sax created KAFKA-12678:
---

 Summary: Flaky Test CustomQuotaCallbackTest.testCustomQuotaCallback
 Key: KAFKA-12678
 URL: https://issues.apache.org/jira/browse/KAFKA-12678
 Project: Kafka
  Issue Type: Test
  Components: core, unit tests
Reporter: Matthias J. Sax


https://github.com/apache/kafka/pull/10548/checks?check_run_id=2363286324
{quote} {{org.opentest4j.AssertionFailedError: Topic [group1_largeTopic] 
metadata not propagated after 6 ms
at org.junit.jupiter.api.AssertionUtils.fail(AssertionUtils.java:39)
at org.junit.jupiter.api.Assertions.fail(Assertions.java:117)
at 
kafka.utils.TestUtils$.waitForAllPartitionsMetadata(TestUtils.scala:851)
at kafka.utils.TestUtils$.createTopic(TestUtils.scala:410)
at kafka.utils.TestUtils$.createTopic(TestUtils.scala:383)
at 
kafka.api.CustomQuotaCallbackTest.createTopic(CustomQuotaCallbackTest.scala:180)
at 
kafka.api.CustomQuotaCallbackTest.testCustomQuotaCallback(CustomQuotaCallbackTest.scala:135)}}
{{}}{quote}
{{STDOUT}}
{quote}{{}}
 {{[2021-04-16 13:59:37,961] WARN SASL configuration failed: 
javax.security.auth.login.LoginException: No JAAS configuration section named 
'Client' was found in specified JAAS configuration file: 
'/tmp/kafka14612759777396794548.tmp'. Will continue connection to Zookeeper 
server without SASL authentication, if Zookeeper server allows it. 
(org.apache.zookeeper.ClientCnxn:1094)
[2021-04-16 13:59:37,962] ERROR [ZooKeeperClient] Auth failed. 
(kafka.zookeeper.ZooKeeperClient:74)
[2021-04-16 13:59:38,367] WARN SASL configuration failed: 
javax.security.auth.login.LoginException: No JAAS configuration section named 
'Client' was found in specified JAAS configuration file: 
'/tmp/kafka14612759777396794548.tmp'. Will continue connection to Zookeeper 
server without SASL authentication, if Zookeeper server allows it. 
(org.apache.zookeeper.ClientCnxn:1094)
[2021-04-16 13:59:38,367] ERROR [ZooKeeperClient] Auth failed. 
(kafka.zookeeper.ZooKeeperClient:74)
[2021-04-16 13:59:38,514] WARN SASL configuration failed: 
javax.security.auth.login.LoginException: No JAAS configuration section named 
'Client' was found in specified JAAS configuration file: 
'/tmp/kafka14612759777396794548.tmp'. Will continue connection to Zookeeper 
server without SASL authentication, if Zookeeper server allows it. 
(org.apache.zookeeper.ClientCnxn:1094)
[2021-04-16 13:59:38,514] ERROR [ZooKeeperClient Kafka server] Auth failed. 
(kafka.zookeeper.ZooKeeperClient:74)
[2021-04-16 13:59:38,530] WARN No meta.properties file under dir 
/tmp/kafka-3506547187885001632/meta.properties 
(kafka.server.BrokerMetadataCheckpoint:70)
[2021-04-16 13:59:38,838] WARN SASL configuration failed: 
javax.security.auth.login.LoginException: No JAAS configuration section named 
'Client' was found in specified JAAS configuration file: 
'/tmp/kafka14612759777396794548.tmp'. Will continue connection to Zookeeper 
server without SASL authentication, if Zookeeper server allows it. 
(org.apache.zookeeper.ClientCnxn:1094)
[2021-04-16 13:59:38,838] ERROR [ZooKeeperClient Kafka server] Auth failed. 
(kafka.zookeeper.ZooKeeperClient:74)
[2021-04-16 13:59:38,848] WARN No meta.properties file under dir 
/tmp/kafka-15288707564978049709/meta.properties 
(kafka.server.BrokerMetadataCheckpoint:70)
[2021-04-16 14:00:16,029] WARN [RequestSendThread controllerId=0] Controller 0 
epoch 1 fails to send request (type=LeaderAndIsrRequest, controllerId=0, 
controllerEpoch=1, brokerEpoch=35, 
partitionStates=[LeaderAndIsrPartitionState(topicName='group1_largeTopic', 
partitionIndex=6, controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], 
zkVersion=0, replicas=[0], addingReplicas=[], removingReplicas=[], isNew=true), 
LeaderAndIsrPartitionState(topicName='group1_largeTopic', partitionIndex=72, 
controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], 
addingReplicas=[], removingReplicas=[], isNew=true), 
LeaderAndIsrPartitionState(topicName='group1_largeTopic', partitionIndex=39, 
controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], 
addingReplicas=[], removingReplicas=[], isNew=true), 
LeaderAndIsrPartitionState(topicName='group1_largeTopic', partitionIndex=14, 
controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], 
addingReplicas=[], removingReplicas=[], isNew=true), 
LeaderAndIsrPartitionState(topicName='group1_largeTopic', partitionIndex=80, 
controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], 
addingReplicas=[], removingReplicas=[], isNew=true), 
LeaderAndIsrPartitionState(topicName='group1_largeTopic', partitionIndex=47, 
controllerEpoch=1, leader=0, leaderEpoch=0, isr=[0], zkVersion=0, replicas=[0], 
addingReplicas=[], removingReplicas=[], isNew=true), 
LeaderAndIsrPartitionState(topicName='group1_largeTopic', 
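The failure above is a metadata-propagation timeout raised by
TestUtils.waitForAllPartitionsMetadata. The underlying poll-until-deadline
pattern it relies on can be sketched as follows (names are illustrative, not
the actual Scala implementation):

```python
import time

def wait_until(condition, timeout_s=5.0, poll_s=0.1):
    """Poll `condition` until it returns True or the deadline passes.

    Returns True on success and False on timeout; the caller (like the
    test above) turns a False result into an assertion failure.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        if condition():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(poll_s)
```

Flakiness in tests built on this pattern usually means either the timeout is
too short for a loaded CI machine or the condition races with broker/controller
startup.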

Jenkins build is still unstable: Kafka » Kafka Branch Builder » 2.8 #16

2021-04-16 Thread Apache Jenkins Server
See 




Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #46

2021-04-16 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-8863) Add InsertHeader and DropHeaders connect transforms KIP-145

2021-04-16 Thread Tom Bentley (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-8863?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Bentley resolved KAFKA-8863.

Fix Version/s: 3.0.0
 Reviewer: Mickael Maison
 Assignee: Tom Bentley
   Resolution: Fixed

> Add InsertHeader and DropHeaders connect transforms KIP-145
> ---
>
> Key: KAFKA-8863
> URL: https://issues.apache.org/jira/browse/KAFKA-8863
> Project: Kafka
>  Issue Type: New Feature
>  Components: clients, KafkaConnect
>Reporter: Albert Lozano
>Assignee: Tom Bentley
>Priority: Major
> Fix For: 3.0.0
>
>
> [https://cwiki.apache.org/confluence/display/KAFKA/KIP-145+-+Expose+Record+Headers+in+Kafka+Connect]
> Continuing the work done in the PR 
> [https://github.com/apache/kafka/pull/4319] implementing the transforms to 
> work with headers would be awesome.
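As a sketch of how the KIP-145 header transforms look in a connector
configuration (config keys as described by the KIP; treat the header names and
values as illustrative until the 3.0 documentation lands):

```properties
# Add a literal header to every record, then drop a noisy one (KIP-145 SMTs)
transforms=addOrigin,dropNoise
transforms.addOrigin.type=org.apache.kafka.connect.transforms.InsertHeader
transforms.addOrigin.header=app.origin
transforms.addOrigin.value.literal=ingest-pipeline
transforms.dropNoise.type=org.apache.kafka.connect.transforms.DropHeaders
transforms.dropNoise.headers=debug.trace
```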



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-732: Deprecate eos-alpha and replace eos-beta with eos-v2

2021-04-16 Thread Luke Chen
Hi Sophie,
+1 (non-binding)
Thanks for the KIP!

Luke


Matthias J. Sax  於 2021年4月16日 週五 上午11:21 寫道:

> +1 (binding)
>
> On 4/15/21 12:56 PM, Israel Ekpo wrote:
> > +1, I agree.
> >
> > I think that besides just merging the changes, specific attention should be
> > drawn to the KIP in the 3.0 release through blogs and tutorials, to make
> > the community more aware of the change in 3.0, the upcoming removal of
> > deprecated features in 4.0, its prerequisites (broker version, etc.), and
> > its benefits.
> >
> > Thanks for working on this
> >
> > On Thu, Apr 15, 2021 at 2:43 PM Guozhang Wang 
> wrote:
> >
> >> +1 as well (binding). Thanks Sophie!
> >>
> >> On Thu, Apr 15, 2021 at 7:29 AM Bruno Cadonna 
> wrote:
> >>
> >>> Sophie,
> >>>
> >>> Thank you for the KIP!
> >>>
> >>> +1 (binding)
> >>>
> >>> Best,
> >>> Bruno
> >>>
> >>> On 15.04.21 01:59, Sophie Blee-Goldman wrote:
>  Hey all,
> 
>  I'd like to kick off the vote on KIP-732, to deprecate eos-alpha in
> >> Kafka
>  Streams and migrate away from the "eos-beta" terminology by replacing
> >> it
>  with "eos-v2" to shore up user confidence in this feature.
> 
> 
> >>>
> >>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-732%3A+Deprecate+eos-alpha+and+replace+eos-beta+with+eos-v2
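For context, the rename boils down to a new value for the Streams
`processing.guarantee` config; a minimal properties sketch, with value names
as defined by KIP-732:

```properties
# Kafka Streams processing guarantee (KIP-732 naming):
#   exactly_once       - "eos-alpha", deprecated in 3.0
#   exactly_once_beta  - old name, deprecated in favor of the v2 name
#   exactly_once_v2    - new name for the eos-beta semantics
processing.guarantee=exactly_once_v2
```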
> 
>  Please respond on the discussion thread if you have any late-breaking
>  concerns or questions.
> 
>  Thanks!
>  Sophie
> 
> >>>
> >>
> >>
> >> --
> >> -- Guozhang
> >>
> >
>


[jira] [Created] (KAFKA-12677) The raftCluster always send to the wrong active controller and never update

2021-04-16 Thread Luke Chen (Jira)
Luke Chen created KAFKA-12677:
-

 Summary: The raftCluster always send to the wrong active 
controller and never update
 Key: KAFKA-12677
 URL: https://issues.apache.org/jira/browse/KAFKA-12677
 Project: Kafka
  Issue Type: Bug
  Components: core
Reporter: Luke Chen


KIP-500 introduces a self-managed metadata quorum. There should always be 
exactly one active controller, and all RPCs should be sent to it. However, 
there is a chance that the active controller has already changed while RPCs 
are still being sent to the old one.

In the attached log, we can see: 
{code:java}
[Controller 3002] Becoming active at controller epoch 1. 
...
[Controller 3000] Becoming active at controller epoch 2. 
{code}
So, the latest active controller should be 3000, but the create-topic RPCs are 
all being sent to controller 3002:
{code:java}
"errorMessage":"The active controller appears to be node 3000"
{code}
This bug makes the RaftClusterTests flaky.
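A client-side mitigation the report hints at, re-resolving the active
controller when an RPC lands on a stale node, can be sketched as follows
(function names and the error matching are illustrative, not the actual Kafka
client code):

```python
def send_with_controller_refresh(send, resolve_active, max_retries=3):
    """Send an RPC to the active controller, re-resolving and retrying
    when the response indicates the target is no longer active."""
    controller = resolve_active()
    resp = {}
    for _ in range(max_retries):
        resp = send(controller)
        if "active controller appears to be node" not in resp.get("errorMessage", ""):
            return resp  # landed on the real active controller
        controller = resolve_active()  # stale target; look up the new active one
    return resp
```

The bug described above corresponds to the case where the lookup itself keeps
returning the stale node, so the retry never converges.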



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #45

2021-04-16 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 479418 lines...]
[2021-04-16T09:14:14.988Z] [INFO] Parameter: packageInPathFormat, Value: myapps
[2021-04-16T09:14:14.988Z] [INFO] Parameter: package, Value: myapps
[2021-04-16T09:14:14.988Z] [INFO] Parameter: version, Value: 0.1
[2021-04-16T09:14:14.988Z] [INFO] Parameter: groupId, Value: streams.examples
[2021-04-16T09:14:14.988Z] [INFO] Parameter: artifactId, Value: streams.examples
[2021-04-16T09:14:14.988Z] [INFO] Project created from Archetype in dir: 
/home/jenkins/workspace/Kafka_kafka_trunk/streams/quickstart/test-streams-archetype/streams.examples
[2021-04-16T09:14:14.988Z] [INFO] 

[2021-04-16T09:14:14.988Z] [INFO] BUILD SUCCESS
[2021-04-16T09:14:14.988Z] [INFO] 

[2021-04-16T09:14:14.988Z] [INFO] Total time:  2.155 s
[2021-04-16T09:14:14.988Z] [INFO] Finished at: 2021-04-16T09:14:14Z
[2021-04-16T09:14:14.988Z] [INFO] 

[Pipeline] dir
[2021-04-16T09:14:15.508Z] Running in 
/home/jenkins/workspace/Kafka_kafka_trunk/streams/quickstart/test-streams-archetype/streams.examples
[Pipeline] {
[Pipeline] sh
[2021-04-16T09:14:18.336Z] + mvn compile
[2021-04-16T09:14:19.279Z] [INFO] Scanning for projects...
[2021-04-16T09:14:19.279Z] [INFO] 
[2021-04-16T09:14:19.279Z] [INFO] -< 
streams.examples:streams.examples >--
[2021-04-16T09:14:19.279Z] [INFO] Building Kafka Streams Quickstart :: Java 0.1
[2021-04-16T09:14:19.279Z] [INFO] [ jar 
]-
[2021-04-16T09:14:20.227Z] [INFO] 
[2021-04-16T09:14:20.227Z] [INFO] --- maven-resources-plugin:2.6:resources 
(default-resources) @ streams.examples ---
[2021-04-16T09:14:20.227Z] [INFO] Using 'UTF-8' encoding to copy filtered 
resources.
[2021-04-16T09:14:20.227Z] [INFO] Copying 1 resource
[2021-04-16T09:14:20.227Z] [INFO] 
[2021-04-16T09:14:20.227Z] [INFO] --- maven-compiler-plugin:3.1:compile 
(default-compile) @ streams.examples ---
[2021-04-16T09:14:21.168Z] [INFO] Changes detected - recompiling the module!
[2021-04-16T09:14:21.168Z] [INFO] Compiling 3 source files to 
/home/jenkins/workspace/Kafka_kafka_trunk/streams/quickstart/test-streams-archetype/streams.examples/target/classes
[2021-04-16T09:14:21.686Z] [INFO] 

[2021-04-16T09:14:21.686Z] [INFO] BUILD SUCCESS
[2021-04-16T09:14:21.686Z] [INFO] 

[2021-04-16T09:14:21.686Z] [INFO] Total time:  2.350 s
[2021-04-16T09:14:21.686Z] [INFO] Finished at: 2021-04-16T09:14:21Z
[2021-04-16T09:14:21.686Z] [INFO] 

[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // dir
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // withEnv
[Pipeline] }
[Pipeline] // node
[Pipeline] }
[Pipeline] // timestamps
[Pipeline] }
[Pipeline] // timeout
[Pipeline] }
[Pipeline] // stage
[Pipeline] }
[2021-04-16T09:14:36.254Z] > Task :streams:testClasses
[2021-04-16T09:14:36.254Z] > Task :streams:testJar
[2021-04-16T09:14:36.254Z] > Task :streams:testSrcJar
[2021-04-16T09:14:36.254Z] > Task 
:streams:publishMavenJavaPublicationToMavenLocal
[2021-04-16T09:14:36.254Z] > Task :streams:publishToMavenLocal
[2021-04-16T09:14:36.254Z] 
[2021-04-16T09:14:36.254Z] Deprecated Gradle features were used in this build, 
making it incompatible with Gradle 7.0.
[2021-04-16T09:14:36.254Z] Use '--warning-mode all' to show the individual 
deprecation warnings.
[2021-04-16T09:14:36.254Z] See 
https://docs.gradle.org/6.8.3/userguide/command_line_interface.html#sec:command_line_warnings
[2021-04-16T09:14:36.254Z] 
[2021-04-16T09:14:36.254Z] BUILD SUCCESSFUL in 7m 13s
[2021-04-16T09:14:36.254Z] 69 actionable tasks: 37 executed, 32 up-to-date
[Pipeline] sh
[2021-04-16T09:14:39.573Z] + grep ^version= gradle.properties
[2021-04-16T09:14:39.573Z] + cut -d= -f 2
[Pipeline] dir
[2021-04-16T09:14:40.404Z] Running in 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk/streams/quickstart
[Pipeline] {
[Pipeline] sh
[2021-04-16T09:14:43.056Z] + mvn clean install -Dgpg.skip
[2021-04-16T09:14:45.043Z] [INFO] Scanning for projects...
[2021-04-16T09:14:45.043Z] [INFO] 

[2021-04-16T09:14:45.043Z] [INFO] Reactor Build Order:
[2021-04-16T09:14:45.043Z] [INFO] 
[2021-04-16T09:14:45.043Z] [INFO] Kafka Streams :: Quickstart   
 [pom]
[2021-04-16T09:14:45.043Z] [INFO] streams-quickstart-java   
 

Jenkins build is still unstable: Kafka » Kafka Branch Builder » 2.8 #15

2021-04-16 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-12445) Improve the display of ConsumerPerformance indicators

2021-04-16 Thread Wenbing Shen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12445?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenbing Shen resolved KAFKA-12445.
--
Resolution: Won't Do

> Improve the display of ConsumerPerformance indicators
> -
>
> Key: KAFKA-12445
> URL: https://issues.apache.org/jira/browse/KAFKA-12445
> Project: Kafka
>  Issue Type: Improvement
>  Components: tools
>Affects Versions: 2.7.0
>Reporter: Wenbing Shen
>Priority: Minor
> Attachments: image-2021-03-10-13-30-27-734.png
>
>
> The current test indicators are shown below; the user experience is poor, and
> there is no intuitive display of the meaning of each indicator.
> bin/kafka-consumer-perf-test.sh --broker-list localhost:9092 --topic 
> test-perf10 --messages 4 --from-latest
> start.time, end.time, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, 
> nMsg.sec, rebalance.time.ms, fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec
> 2021-03-10 04:32:54:349, 2021-03-10 04:33:45:651, 390.6348, 7.6144, 40001, 
> 779.7162, 3096, 48206, 8.1034, 829.7930
>  
> show-detailed-stats:
> bin/kafka-consumer-perf-test.sh --broker-list localhost:9092 --topic 
> test-perf --messages 1 --show-detailed-stats
> time, threadId, data.consumed.in.MB, MB.sec, data.consumed.in.nMsg, nMsg.sec, 
> rebalance.time.ms, fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec
> 2021-03-10 11:19:00:146, 0, 785.6112, 157.1222, 823773, 164754.6000, 
> 1615346338626, -1615346333626, 0., 0.
> 2021-03-10 11:19:05:146, 0, 4577.7817, 758.4341, 4800152, 795275.8000, 0, 
> 5000, 758.4341, 795275.8000
> 2021-03-10 11:19:10:146, 0, 8556.0875, 795.6612, 8971708, 834311.2000, 0, 
> 5000, 795.6612, 834311.2000
> 2021-03-10 11:19:15:286, 0, 9526.5665, 188.8091, 9989329, 197980.7393, 0, 
> 5140, 188.8091, 197980.7393
> 2021-03-10 11:19:20:310, 0, 9536.3321, 1.9438, 569, 2038.2166, 0, 5024, 
> 1.9438, 2038.2166
>  
> One possible improvement is to display each indicator's name and measured
> value in the form of a table, so that the meaning of each measurement can be
> clearly expressed.
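The table-style display the ticket asks for amounts to pairing each header
field with its value; a rough sketch (not the actual tool code), using the
sample output from the description above:

```python
def tabulate(header_line, value_line, sep=", "):
    """Pair each kafka-consumer-perf-test header with its value and
    render one aligned "name  value" row per indicator."""
    names = header_line.split(sep)
    values = value_line.split(sep)
    width = max(len(n) for n in names)
    return "\n".join(f"{n.ljust(width)}  {v}" for n, v in zip(names, values))

header = ("start.time, end.time, data.consumed.in.MB, MB.sec, "
          "data.consumed.in.nMsg, nMsg.sec, rebalance.time.ms, "
          "fetch.time.ms, fetch.MB.sec, fetch.nMsg.sec")
values = ("2021-03-10 04:32:54:349, 2021-03-10 04:33:45:651, 390.6348, "
          "7.6144, 40001, 779.7162, 3096, 48206, 8.1034, 829.7930")
print(tabulate(header, values))
```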



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] KIP-733: change Kafka Streams default replication factor config

2021-04-16 Thread Bruno Cadonna

Thanks Matthias,

+1 (binding)

Best,
Bruno

On 16.04.21 01:03, Jorge Esteban Quilcate Otoya wrote:

+1

Thanks Matthias!

On Thu, 15 Apr 2021, 20:48 Israel Ekpo,  wrote:


Makes perfect sense to me

+1 as well.

Thanks Matthias.


On Thu, Apr 15, 2021 at 2:41 PM Guozhang Wang  wrote:


+1 as well. Thanks!

On Wed, Apr 14, 2021 at 4:30 PM Bill Bejeck  wrote:


Thanks for the KIP Matthias.

+1 (binding)

-Bill

On Wed, Apr 14, 2021 at 7:06 PM Sophie Blee-Goldman
 wrote:


Thanks Matthias. I'm +1 (binding)

-Sophie

On Wed, Apr 14, 2021 at 3:36 PM Matthias J. Sax 

wrote:



Hi,

Because this KIP is rather small, I would like to skip a dedicated
discussion thread and call for a vote right away. If there are any
concerns, we can just discuss on this vote thread:










https://cwiki.apache.org/confluence/display/KAFKA/KIP-733%3A+change+Kafka+Streams+default+replication+factor+config


Note that we actually backed this change via








https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=113708722

already.

However, I felt it might be worth making this change more explicit, as
KIP-464 is rather old now.

Quote from KIP-464:


To exploit this new feature in KafkaStreams, we update the default value
of Streams configuration parameter `replication.factor` from `1` to `-1`.
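In properties form, the new Streams default is equivalent to the following
(the `-1` sentinel defers to the broker-side `default.replication.factor`,
which per KIP-464 requires brokers on 2.4 or newer):

```properties
# Kafka Streams internal-topic replication (KIP-733 default):
# -1 means "use the broker's default.replication.factor" (KIP-464)
replication.factor=-1
```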





-Matthias








--
-- Guozhang







Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #44

2021-04-16 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 407433 lines...]
[2021-04-16T06:17:25.429Z] 
[2021-04-16T06:17:25.429Z] SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeTopicAutoCreateTopicCreateAcl() STARTED
[2021-04-16T06:17:35.561Z] 
[2021-04-16T06:17:35.561Z] SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeTopicAutoCreateTopicCreateAcl() PASSED
[2021-04-16T06:17:35.561Z] 
[2021-04-16T06:17:35.561Z] SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeWithWildcardAcls() STARTED
[2021-04-16T06:17:46.248Z] 
[2021-04-16T06:17:46.248Z] SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeWithWildcardAcls() PASSED
[2021-04-16T06:17:46.248Z] 
[2021-04-16T06:17:46.248Z] SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe() STARTED
[2021-04-16T06:17:56.337Z] 
[2021-04-16T06:17:56.337Z] SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithDescribeAclViaSubscribe() PASSED
[2021-04-16T06:17:56.337Z] 
[2021-04-16T06:17:56.337Z] SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign() STARTED
[2021-04-16T06:18:06.027Z] 
[2021-04-16T06:18:06.027Z] SaslPlainSslEndToEndAuthorizationTest > 
testNoConsumeWithoutDescribeAclViaAssign() PASSED
[2021-04-16T06:18:06.027Z] 
[2021-04-16T06:18:06.027Z] SaslPlainSslEndToEndAuthorizationTest > 
testNoGroupAcl() STARTED
[2021-04-16T06:18:16.682Z] 
[2021-04-16T06:18:16.682Z] SaslPlainSslEndToEndAuthorizationTest > 
testNoGroupAcl() PASSED
[2021-04-16T06:18:16.682Z] 
[2021-04-16T06:18:16.682Z] SaslPlainSslEndToEndAuthorizationTest > 
testNoProduceWithDescribeAcl() STARTED
[2021-04-16T06:18:26.148Z] 
[2021-04-16T06:18:26.148Z] SaslPlainSslEndToEndAuthorizationTest > 
testNoProduceWithDescribeAcl() PASSED
[2021-04-16T06:18:26.148Z] 
[2021-04-16T06:18:26.148Z] SaslPlainSslEndToEndAuthorizationTest > 
testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl() STARTED
[2021-04-16T06:18:39.464Z] 
[2021-04-16T06:18:39.464Z] SaslPlainSslEndToEndAuthorizationTest > 
testNoDescribeProduceOrConsumeWithoutTopicDescribeAcl() PASSED
[2021-04-16T06:18:39.464Z] 
[2021-04-16T06:18:39.464Z] SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe() STARTED
[2021-04-16T06:18:50.085Z] 
[2021-04-16T06:18:50.085Z] SaslPlainSslEndToEndAuthorizationTest > 
testProduceConsumeViaSubscribe() PASSED
[2021-04-16T06:18:50.085Z] 
[2021-04-16T06:18:50.085Z] SaslPlainSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials() STARTED
[2021-04-16T06:19:00.807Z] 
[2021-04-16T06:19:00.807Z] SaslPlainSslEndToEndAuthorizationTest > 
testTwoConsumersWithDifferentSaslCredentials() PASSED
[2021-04-16T06:19:00.807Z] 
[2021-04-16T06:19:00.807Z] SaslPlainSslEndToEndAuthorizationTest > testAcls() 
STARTED
[2021-04-16T06:19:08.152Z] 
[2021-04-16T06:19:08.152Z] SaslPlainSslEndToEndAuthorizationTest > testAcls() 
PASSED
[2021-04-16T06:19:08.152Z] 
[2021-04-16T06:19:08.152Z] SaslSslAdminIntegrationTest > 
testCreateDeleteTopics() STARTED
[2021-04-16T06:20:00.615Z] 
[2021-04-16T06:20:00.615Z] SaslSslAdminIntegrationTest > 
testCreateDeleteTopics() PASSED
[2021-04-16T06:20:00.615Z] 
[2021-04-16T06:20:00.615Z] SaslSslAdminIntegrationTest > 
testAuthorizedOperations() STARTED
[2021-04-16T06:20:44.734Z] 
[2021-04-16T06:20:44.734Z] SaslSslAdminIntegrationTest > 
testAuthorizedOperations() PASSED
[2021-04-16T06:20:44.734Z] 
[2021-04-16T06:20:44.734Z] SaslSslAdminIntegrationTest > testAclDescribe() 
STARTED
[2021-04-16T06:21:37.412Z] 
[2021-04-16T06:21:37.412Z] SaslSslAdminIntegrationTest > testAclDescribe() 
PASSED
[2021-04-16T06:21:37.412Z] 
[2021-04-16T06:21:37.412Z] SaslSslAdminIntegrationTest > 
testLegacyAclOpsNeverAffectOrReturnPrefixed() STARTED
[2021-04-16T06:22:10.616Z] 
[2021-04-16T06:22:10.616Z] SaslSslAdminIntegrationTest > 
testLegacyAclOpsNeverAffectOrReturnPrefixed() PASSED
[2021-04-16T06:22:10.616Z] 
[2021-04-16T06:22:10.616Z] SaslSslAdminIntegrationTest > 
testCreateTopicsResponseMetadataAndConfig() STARTED
[2021-04-16T06:22:43.637Z] 
[2021-04-16T06:22:43.637Z] SaslSslAdminIntegrationTest > 
testCreateTopicsResponseMetadataAndConfig() PASSED
[2021-04-16T06:22:43.637Z] 
[2021-04-16T06:22:43.637Z] SaslSslAdminIntegrationTest > 
testAttemptToCreateInvalidAcls() STARTED
[2021-04-16T06:23:12.751Z] 
[2021-04-16T06:23:12.751Z] SaslSslAdminIntegrationTest > 
testAttemptToCreateInvalidAcls() PASSED
[2021-04-16T06:23:12.751Z] 
[2021-04-16T06:23:12.751Z] SaslSslAdminIntegrationTest > 
testAclAuthorizationDenied() STARTED
[2021-04-16T06:23:51.919Z] 
[2021-04-16T06:23:51.919Z] SaslSslAdminIntegrationTest > 
testAclAuthorizationDenied() PASSED
[2021-04-16T06:23:51.919Z] 
[2021-04-16T06:23:51.919Z] SaslSslAdminIntegrationTest > testAclOperations() 
STARTED
[2021-04-16T06:24:30.744Z] 
[2021-04-16T06:24:30.744Z] SaslSslAdminIntegrationTest > testAclOperations() 
PASSED
[2021-04-16T06:24:30.744Z]