Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #465

2021-09-10 Thread Apache Jenkins Server


Changes:


--
[...truncated 492198 lines...]
[2021-09-11T02:00:24.074Z] 
[2021-09-11T02:00:24.074Z] PlaintextConsumerTest > 
testConsumeMessagesWithCreateTime() PASSED
[2021-09-11T02:00:24.074Z] 
[2021-09-11T02:00:24.074Z] PlaintextConsumerTest > testAsyncCommit() STARTED
[2021-09-11T02:00:27.676Z] 
[2021-09-11T02:00:27.676Z] PlaintextConsumerTest > testAsyncCommit() PASSED
[2021-09-11T02:00:27.676Z] 
[2021-09-11T02:00:27.676Z] PlaintextConsumerTest > 
testLowMaxFetchSizeForRequestAndPartition() STARTED
[2021-09-11T02:00:58.013Z] 
[2021-09-11T02:00:58.013Z] PlaintextConsumerTest > 
testLowMaxFetchSizeForRequestAndPartition() PASSED
[2021-09-11T02:00:58.013Z] 
[2021-09-11T02:00:58.013Z] PlaintextConsumerTest > 
testMultiConsumerSessionTimeoutOnStopPolling() STARTED
[2021-09-11T02:01:14.348Z] 
[2021-09-11T02:01:14.348Z] PlaintextConsumerTest > 
testMultiConsumerSessionTimeoutOnStopPolling() PASSED
[2021-09-11T02:01:14.348Z] 
[2021-09-11T02:01:14.348Z] PlaintextConsumerTest > 
testMaxPollIntervalMsDelayInRevocation() STARTED
[2021-09-11T02:01:19.002Z] 
[2021-09-11T02:01:19.002Z] PlaintextConsumerTest > 
testMaxPollIntervalMsDelayInRevocation() PASSED
[2021-09-11T02:01:19.002Z] 
[2021-09-11T02:01:19.002Z] PlaintextConsumerTest > 
testPerPartitionLagMetricsCleanUpWithAssign() STARTED
[2021-09-11T02:01:24.801Z] 
[2021-09-11T02:01:24.801Z] PlaintextConsumerTest > 
testPerPartitionLagMetricsCleanUpWithAssign() PASSED
[2021-09-11T02:01:24.801Z] 
[2021-09-11T02:01:24.801Z] PlaintextConsumerTest > 
testPartitionsForInvalidTopic() STARTED
[2021-09-11T02:01:27.455Z] 
[2021-09-11T02:01:27.455Z] PlaintextConsumerTest > 
testPartitionsForInvalidTopic() PASSED
[2021-09-11T02:01:27.455Z] 
[2021-09-11T02:01:27.455Z] PlaintextConsumerTest > 
testPauseStateNotPreservedByRebalance() STARTED
[2021-09-11T02:01:33.252Z] 
[2021-09-11T02:01:33.252Z] PlaintextConsumerTest > 
testPauseStateNotPreservedByRebalance() PASSED
[2021-09-11T02:01:33.252Z] 
[2021-09-11T02:01:33.252Z] PlaintextConsumerTest > 
testFetchHonoursFetchSizeIfLargeRecordNotFirst() STARTED
[2021-09-11T02:01:37.906Z] 
[2021-09-11T02:01:37.906Z] PlaintextConsumerTest > 
testFetchHonoursFetchSizeIfLargeRecordNotFirst() PASSED
[2021-09-11T02:01:37.906Z] 
[2021-09-11T02:01:37.906Z] PlaintextConsumerTest > testSeek() STARTED
[2021-09-11T02:01:44.972Z] 
[2021-09-11T02:01:44.972Z] PlaintextConsumerTest > testSeek() PASSED
[2021-09-11T02:01:44.972Z] 
[2021-09-11T02:01:44.972Z] PlaintextConsumerTest > 
testConsumingWithNullGroupId() STARTED
[2021-09-11T02:01:53.458Z] 
[2021-09-11T02:01:53.458Z] PlaintextConsumerTest > 
testConsumingWithNullGroupId() PASSED
[2021-09-11T02:01:53.458Z] 
[2021-09-11T02:01:53.458Z] PlaintextConsumerTest > testPositionAndCommit() 
STARTED
[2021-09-11T02:01:58.112Z] 
[2021-09-11T02:01:58.112Z] PlaintextConsumerTest > testPositionAndCommit() 
PASSED
[2021-09-11T02:01:58.112Z] 
[2021-09-11T02:01:58.112Z] PlaintextConsumerTest > 
testFetchRecordLargerThanMaxPartitionFetchBytes() STARTED
[2021-09-11T02:02:02.764Z] 
[2021-09-11T02:02:02.764Z] PlaintextConsumerTest > 
testFetchRecordLargerThanMaxPartitionFetchBytes() PASSED
[2021-09-11T02:02:02.764Z] 
[2021-09-11T02:02:02.764Z] PlaintextConsumerTest > testUnsubscribeTopic() 
STARTED
[2021-09-11T02:02:07.483Z] 
[2021-09-11T02:02:07.483Z] PlaintextConsumerTest > testUnsubscribeTopic() PASSED
[2021-09-11T02:02:07.483Z] 
[2021-09-11T02:02:07.483Z] PlaintextConsumerTest > 
testMultiConsumerSessionTimeoutOnClose() STARTED
[2021-09-11T02:02:21.611Z] 
[2021-09-11T02:02:21.611Z] PlaintextConsumerTest > 
testMultiConsumerSessionTimeoutOnClose() PASSED
[2021-09-11T02:02:21.611Z] 
[2021-09-11T02:02:21.611Z] PlaintextConsumerTest > 
testMultiConsumerStickyAssignor() STARTED
[2021-09-11T02:02:57.083Z] 
[2021-09-11T02:02:57.083Z] PlaintextConsumerTest > 
testMultiConsumerStickyAssignor() PASSED
[2021-09-11T02:02:57.083Z] 
[2021-09-11T02:02:57.083Z] PlaintextConsumerTest > 
testFetchRecordLargerThanFetchMaxBytes() STARTED
[2021-09-11T02:02:59.736Z] 
[2021-09-11T02:02:59.736Z] PlaintextConsumerTest > 
testFetchRecordLargerThanFetchMaxBytes() PASSED
[2021-09-11T02:02:59.736Z] 
[2021-09-11T02:02:59.736Z] PlaintextConsumerTest > testAutoCommitOnClose() 
STARTED
[2021-09-11T02:03:05.578Z] 
[2021-09-11T02:03:05.579Z] PlaintextConsumerTest > testAutoCommitOnClose() 
PASSED
[2021-09-11T02:03:05.579Z] 
[2021-09-11T02:03:05.579Z] PlaintextConsumerTest > testListTopics() STARTED
[2021-09-11T02:03:08.233Z] 
[2021-09-11T02:03:08.233Z] PlaintextConsumerTest > testListTopics() PASSED
[2021-09-11T02:03:08.233Z] 
[2021-09-11T02:03:08.233Z] PlaintextConsumerTest > 
testExpandingTopicSubscriptions() STARTED
[2021-09-11T02:03:12.885Z] 
[2021-09-11T02:03:12.885Z] PlaintextConsumerTest > 
testExpandingTopicSubscriptions() PASSED

Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.0 #132

2021-09-10 Thread Apache Jenkins Server




Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #464

2021-09-10 Thread Apache Jenkins Server




[jira] [Resolved] (KAFKA-13290) My timeWindows last aggregated message never emit until a new message coming

2021-09-10 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-13290.
-
Resolution: Not A Problem

> My timeWindows last aggregated message never emit until a new message coming 
> -
>
> Key: KAFKA-13290
> URL: https://issues.apache.org/jira/browse/KAFKA-13290
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.6.2
> Environment: Development
>Reporter: Steve Zhou
>Priority: Major
>
> I have Kafka Streams event-processing code which aggregates 1 minute of data.
> It works as expected while data arrives continuously. If we stop the
> producer, the last aggregated message does not emit until a new message
> arrives.
>
> Following is my sample code:
>
> @Bean
> public KStream<String, AggregateMetrics> kStream(StreamsBuilder streamBuilder) {
>     KStream<String, AggregateMetrics> aggregatedData = streamBuilder
>         .stream(dataTopic, dataConsumed)
>         .groupByKey(Grouped.with(
>             stringSerde,
>             aggregateValueSerde))
>         .windowedBy(TimeWindows.of(windowDuration).grace(Duration.ofMillis(10L)))
>         .aggregate(this::initialize, this::aggregateFields,
>             materializedAsWindowStore(windowedStoreName,
>                 stringSerde,
>                 AggregateMetricsFieldsSerde))
>         .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded())
>             .withName(windowedSuppressNodeName))
>         .toStream()
>         .map((key, aggregateMetrics) -> KeyValue.pair(key.key(), aggregateMetrics));
>     aggregatedData.to(aggregatedDataTopic, aggregateDataProduced);
>     return aggregatedData;
> }
>  
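
The "Not A Problem" resolution follows from how suppression works: untilWindowCloses() releases a window only once stream time (the maximum record timestamp observed so far) passes the window end plus grace, and stream time only advances when a new record arrives. Below is a minimal, self-contained sketch of that mechanic; it is plain Java, not the real Kafka Streams internals, and the constants mirror the 1-minute window and 10 ms grace from the report.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

public class SuppressSketch {
    static final long WINDOW_MS = 60_000L; // 1-minute windows, as in the report
    static final long GRACE_MS = 10L;      // grace(Duration.ofMillis(10L))

    long streamTime = Long.MIN_VALUE;
    // windowStart -> record count (a stand-in for the real aggregate)
    final NavigableMap<Long, Long> suppressed = new TreeMap<>();

    // Observe one record; return the window starts whose suppressed results
    // are released by this observation.
    List<Long> onRecord(long timestamp) {
        streamTime = Math.max(streamTime, timestamp); // only new records move stream time
        suppressed.merge((timestamp / WINDOW_MS) * WINDOW_MS, 1L, Long::sum);
        List<Long> released = new ArrayList<>();
        Iterator<Map.Entry<Long, Long>> it = suppressed.entrySet().iterator();
        while (it.hasNext()) {
            long start = it.next().getKey();
            // untilWindowCloses(): emit only once stream time passes end + grace
            if (start + WINDOW_MS + GRACE_MS <= streamTime) {
                released.add(start);
                it.remove();
            } else {
                break; // TreeMap is ordered; later windows close even later
            }
        }
        return released;
    }
}
```

With records only inside the first minute, nothing is released; any later record, regardless of key, closes the window. That matches the reported behaviour exactly, which is why the issue is resolved as working as designed.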



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Resolved] (KAFKA-13288) Transaction find-hanging command with --broker-id excludes internal topics

2021-09-10 Thread Jason Gustafson (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Gustafson resolved KAFKA-13288.
-
Resolution: Fixed

> Transaction find-hanging command with --broker-id excludes internal topics
> --
>
> Key: KAFKA-13288
> URL: https://issues.apache.org/jira/browse/KAFKA-13288
> Project: Kafka
>  Issue Type: Bug
>Reporter: Jason Gustafson
>Assignee: Jason Gustafson
>Priority: Major
>
> We use the vanilla `Admin.listTopics()` in this command if `--broker-id` is 
> specified. By default, this excludes internal topics.
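
For context, hanging transactions live in internal topics such as __consumer_offsets and __transaction_state, which the default listing filters out; a likely fix direction is passing `new ListTopicsOptions().listInternal(true)` to the Admin client, though that is an inference, not a quote from the patch. The filtering behaviour can be sketched with stdlib-only code (internal topics are modeled here by their double-underscore prefix):

```java
import java.util.Set;
import java.util.stream.Collectors;

public class ListTopicsSketch {
    // Internal topics are modeled by the double-underscore prefix,
    // e.g. __consumer_offsets, __transaction_state.
    static boolean isInternal(String topic) {
        return topic.startsWith("__");
    }

    // Models Admin.listTopics(): with listInternal=false (the default),
    // internal topics are filtered out, so a find-hanging scan based on
    // this listing never inspects __transaction_state.
    static Set<String> listTopics(Set<String> allTopics, boolean listInternal) {
        return allTopics.stream()
                .filter(t -> listInternal || !isInternal(t))
                .collect(Collectors.toSet());
    }
}
```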



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[GitHub] [kafka-site] Dougoc closed pull request #368: MINOR: Update powered-by.html used by globo

2021-09-10 Thread GitBox


Dougoc closed pull request #368:
URL: https://github.com/apache/kafka-site/pull/368


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




Re: [VOTE] 3.0.0 RC2

2021-09-10 Thread Bill Bejeck
Hi Konstantine,

Thanks for that; I can get to the docs now.


I've validated the release by doing the following

   - built from source
   - ran all unit tests
   - verified all checksums and signatures
   - spot-checked the Javadoc
   - worked through the quick start
   - worked through the Kafka Streams quick start application
   - ran KRaft in preview mode
  - created a topic
  - produced and consumed from the topic
  - ran metadata shell
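
For anyone following along, the "verified all checksums and signatures" step above can be sketched as follows. This is a minimal illustration: the kafka-demo.tgz files are created locally just so the checksum commands are runnable as-is, and the RC artifact names in the commented signature step are illustrative stand-ins for the files on the release page.

```shell
# Checksum check, demonstrated on a locally created stand-in artifact.
printf 'demo artifact\n' > kafka-demo.tgz
sha512sum kafka-demo.tgz > kafka-demo.tgz.sha512
sha512sum -c kafka-demo.tgz.sha512        # prints "kafka-demo.tgz: OK"

# Signature check against a real RC artifact (illustrative filenames):
# import the committers' public keys, then verify the detached signature.
# curl -fsSL https://kafka.apache.org/KEYS | gpg --import
# gpg --verify kafka-3.0.0-rc2.tgz.asc kafka-3.0.0-rc2.tgz
```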


I did find some minor errors in the docs (all in quickstart)

   - The beginning of the quickstart still references version 2.8
   - The command presented to create a topic in the quickstart is missing
   the --partitions and --replication-factor params
   - The link for "Kafka Streams demo" and "app development tutorial"
   points to version 2.5


But since we can update the documentation directly and, more importantly,
independently of the code, I don't think these should block the release.


So it's a +1 (binding) for me.


Thanks for running the release!

Bill


On Fri, Sep 10, 2021 at 2:36 AM Konstantine Karantasis <
kkaranta...@apache.org> wrote:

> Hi Bill,
>
> I just added folder 30 to the kafka-site repo. Hadn't realized that this
> separate manual step was part of the RC process and not the official
> release (even though, strangely enough, I was expecting myself to be able
> to read the docs online). I guess I needed a second nudge after Gary's
> first comment on RC1 to see what was missing. I'll update the release doc
> to make this more clear.
>
> Should be accessible now. Please take another look.
>
> Konstantine
>
>
>
> On Fri, Sep 10, 2021 at 12:50 AM Bill Bejeck  wrote:
>
> > Hi Konstantine,
> >
> > I've started to do the validation for the release and the link for docs
> > doesn't work.
> >
> > Thanks,
> > Bill
> >
> > On Wed, Sep 8, 2021 at 5:59 PM Konstantine Karantasis <
> > kkaranta...@apache.org> wrote:
> >
> > > Hello again Kafka users, developers and client-developers,
> > >
> > > This is the third candidate for release of Apache Kafka 3.0.0.
> > > It is a major release that includes many new features, including:
> > >
> > > * The deprecation of support for Java 8 and Scala 2.12.
> > > * Kafka Raft support for snapshots of the metadata topic and other
> > > improvements in the self-managed quorum.
> > > * Deprecation of message formats v0 and v1.
> > > * Stronger delivery guarantees for the Kafka producer enabled by
> default.
> > > * Optimizations in OffsetFetch and FindCoordinator requests.
> > > * More flexible Mirror Maker 2 configuration and deprecation of Mirror
> > > Maker 1.
> > > * Ability to restart a connector's tasks on a single call in Kafka
> > Connect.
> > > * Connector log contexts and connector client overrides are now enabled
> > by
> > > default.
> > > * Enhanced semantics for timestamp synchronization in Kafka Streams.
> > > * Revamped public API for Stream's TaskId.
> > > * Default serde becomes null in Kafka Streams and several other
> > > configuration changes.
> > >
> > > You may read and review a more detailed list of changes in the 3.0.0
> blog
> > > post draft here:
> > >
> >
> https://blogs.apache.org/preview/kafka/?previewEntry=what-s-new-in-apache6
> > >
> > > Release notes for the 3.0.0 release:
> > >
> https://home.apache.org/~kkarantasis/kafka-3.0.0-rc2/RELEASE_NOTES.html
> > >
> > > *** Please download, test and vote by Tuesday, September 14, 2021 ***
> > >
> > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > https://kafka.apache.org/KEYS
> > >
> > > * Release artifacts to be voted upon (source and binary):
> > > https://home.apache.org/~kkarantasis/kafka-3.0.0-rc2/
> > >
> > > * Maven artifacts to be voted upon:
> > > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > >
> > > * Javadoc:
> > > https://home.apache.org/~kkarantasis/kafka-3.0.0-rc2/javadoc/
> > >
> > > * Tag to be voted upon (off 3.0 branch) is the 3.0.0 tag:
> > > https://github.com/apache/kafka/releases/tag/3.0.0-rc2
> > >
> > > * Documentation:
> > > https://kafka.apache.org/30/documentation.html
> > >
> > > * Protocol:
> > > https://kafka.apache.org/30/protocol.html
> > >
> > > * Successful Jenkins builds for the 3.0 branch:
> > > Unit/integration tests:
> > >
> > >
> >
> https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka/detail/3.0/129/
> > > (1 flaky test failure)
> > > System tests:
> > > https://jenkins.confluent.io/job/system-test-kafka/job/3.0/67/
> > > (1 flaky test failure)
> > >
> > > /**
> > >
> > > Thanks,
> > > Konstantine
> > >
> >
>


[VOTE] KIP-760: Minimum value for segment.ms and segment.bytes

2021-09-10 Thread Badai Aqrandista
Hi all

I would like to start a vote on KIP-760
(https://cwiki.apache.org/confluence/display/KAFKA/KIP-760%3A+Minimum+value+for+segment.ms+and+segment.bytes).

I created this KIP because I have seen so many Kafka brokers crash due
to small segment.ms and/or segment.bytes.
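
To make the failure mode concrete, here is a back-of-the-envelope sketch: plain arithmetic with illustrative cluster numbers, not broker code. Each log roll creates a segment, and every retained segment keeps open file handles and an mmap'd index, so a tiny segment.ms multiplies both by orders of magnitude.

```java
public class SegmentMath {
    // Segments retained per partition over a retention window,
    // if the log rolls every segmentMs.
    static long segmentsPerPartition(long retentionMs, long segmentMs) {
        return Math.max(1, retentionMs / segmentMs);
    }

    // Rough open-file count: ~3 files per live segment
    // (.log, .index, .timeindex).
    static long openFiles(long partitions, long retentionMs, long segmentMs) {
        return partitions * segmentsPerPartition(retentionMs, segmentMs) * 3;
    }

    public static void main(String[] args) {
        long retentionMs = 7L * 24 * 60 * 60 * 1000; // 7-day retention
        // Default segment.ms (7 days): one live segment per partition.
        System.out.println(openFiles(1_000, retentionMs, retentionMs));    // 3000
        // segment.ms = 1000 under the same retention:
        System.out.println(segmentsPerPartition(retentionMs, 1_000));      // 604800
    }
}
```

With 1,000 partitions, dropping segment.ms from the default to one second turns a few thousand open files into hundreds of millions of potential segments, which is exactly the kind of load that exhausts file descriptors and memory.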

-- 
Thanks,
Badai


[jira] [Created] (KAFKA-13290) My timeWindows last aggregated message never emit until a new message coming

2021-09-10 Thread Steve Zhou (Jira)
Steve Zhou created KAFKA-13290:
--

 Summary: My timeWindows last aggregated message never emit until a 
new message coming 
 Key: KAFKA-13290
 URL: https://issues.apache.org/jira/browse/KAFKA-13290
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.6.2
 Environment: Development
Reporter: Steve Zhou


I have Kafka Streams code which aggregates 1 minute of data.

It works as expected while data arrives continuously.

If we stop the producer, the last aggregated message does not emit until a new
message arrives, even if the new message has a different key.

Following is my sample code:

@Bean
public KStream<String, AggregateMetrics> kStream(StreamsBuilder streamBuilder) {
    KStream<String, AggregateMetrics> aggregatedData = streamBuilder
        .stream(dataTopic, dataConsumed)
        .groupByKey(Grouped.with(
            stringSerde,
            aggregateValueSerde))
        .windowedBy(TimeWindows.of(windowDuration).grace(Duration.ofMillis(10L)))
        .aggregate(this::initialize, this::aggregateFields,
            materializedAsWindowStore(windowedStoreName,
                stringSerde,
                AggregateMetricsFieldsSerde))
        .suppress(Suppressed.untilWindowCloses(Suppressed.BufferConfig.unbounded())
            .withName(windowedFlowSuppressNodeName))
        .toStream()
        .map((key, aggregateMetrics) -> KeyValue.pair(key.key(), aggregateMetrics));
    aggregatedData.to(aggregatedDataTopic, aggregateDataProduced);
    return aggregatedData;
}

 



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[VOTE] 2.8.1 RC0

2021-09-10 Thread David Jacot
Hello Kafka users, developers and client-developers,

This is the first candidate for release of Apache Kafka 2.8.1.

Apache Kafka 2.8.1 is a bugfix release and fixes 49 issues since the 2.8.0
release. Please see the release notes for more information.

Release notes for the 2.8.1 release:
https://home.apache.org/~dajac/kafka-2.8.1-rc0/RELEASE_NOTES.html

*** Please download, test and vote by Friday, September 17, 9am PT ***

Kafka's KEYS file containing PGP keys we use to sign the release:
https://kafka.apache.org/KEYS

* Release artifacts to be voted upon (source and binary):
https://home.apache.org/~dajac/kafka-2.8.1-rc0/

* Maven artifacts to be voted upon:
https://repository.apache.org/content/groups/staging/org/apache/kafka/

* Javadoc:
https://home.apache.org/~dajac/kafka-2.8.1-rc0/javadoc/

* Tag to be voted upon (off 2.8 branch) is the 2.8.1 tag:
https://github.com/apache/kafka/releases/tag/2.8.1-rc0

* Documentation:
https://kafka.apache.org/28/documentation.html

* Protocol:
https://kafka.apache.org/28/protocol.html

* Successful Jenkins builds for the 2.8 branch:
Unit/integration tests:
https://ci-builds.apache.org/job/Kafka/job/kafka/job/2.8/80/
System tests:
https://jenkins.confluent.io/job/system-test-kafka/job/2.8/214/

/**

Thanks,
David


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 2.8 #80

2021-09-10 Thread Apache Jenkins Server




[jira] [Created] (KAFKA-13289) Bulk processing data through a join with kafka-streams results in `Skipping record for expired segment`

2021-09-10 Thread Matthew Sheppard (Jira)
Matthew Sheppard created KAFKA-13289:


 Summary: Bulk processing data through a join with kafka-streams 
results in `Skipping record for expired segment`
 Key: KAFKA-13289
 URL: https://issues.apache.org/jira/browse/KAFKA-13289
 Project: Kafka
  Issue Type: Bug
  Components: streams
Affects Versions: 2.8.0
Reporter: Matthew Sheppard


When pushing bulk data through a kafka-streams app, I see it log the following
message many times...

`WARN 
org.apache.kafka.streams.state.internals.AbstractRocksDBSegmentedBytesStore - 
Skipping record for expired segment.`

...and data which I expect to have been joined through a leftJoin step appears 
to be lost.

I've seen this in practice either when my application has been shut down for a 
while and then is brought back up, or when I've used something like the 
[app-reset-tool](https://docs.confluent.io/platform/current/streams/developer-guide/app-reset-tool.html)
 in an attempt to have the application reprocess past data.

I was able to reproduce this behaviour in isolation by generating 1000 messages 
to two topics spaced an hour apart (with the original timestamps in order), 
then having kafka streams select a key for them and try to leftJoin the two 
rekeyed streams.

Self contained source code for that reproduction is available at 
https://github.com/mattsheppard/ins14809/blob/main/src/test/java/ins14809/Ins14809Test.java

The actual kafka-streams topology in there looks like this.

```
final StreamsBuilder builder = new StreamsBuilder();
final KStream<String, String> leftStream = builder.stream(leftTopic);
final KStream<String, String> rightStream = builder.stream(rightTopic);

final KStream<String, String> rekeyedLeftStream = leftStream
        .selectKey((k, v) -> v.substring(0, v.indexOf(":")));

final KStream<String, String> rekeyedRightStream = rightStream
        .selectKey((k, v) -> v.substring(0, v.indexOf(":")));

JoinWindows joinWindow = JoinWindows.of(Duration.ofSeconds(5));

final KStream<String, String> joined = rekeyedLeftStream.leftJoin(
        rekeyedRightStream,
        (left, right) -> left + "/" + right,
        joinWindow
);
```

...and the eventual output I produce looks like this...

```
...
523 [523,left/null]
524 [524,left/null, 524,left/524,right]
525 [525,left/525,right]
526 [526,left/null]
527 [527,left/null]
528 [528,left/528,right]
529 [529,left/null]
530 [530,left/null]
531 [531,left/null, 531,left/531,right]
532 [532,left/null]
533 [533,left/null]
534 [534,left/null, 534,left/534,right]
535 [535,left/null]
536 [536,left/null]
537 [537,left/null, 537,left/537,right]
538 [538,left/null]
539 [539,left/null]
540 [540,left/null]
541 [541,left/null]
542 [542,left/null]
543 [543,left/null]
...
```

...whereas, given the input data, I expect to see every row end with the two
values joined, rather than with a null right value.

Note that I understand it's expected that we initially get left/null values
for many keys, since that's the semantics of the kafka-streams left join, at
least until
https://cwiki.apache.org/confluence/display/KAFKA/Kafka+Streams+Join+Semantics#KafkaStreamsJoinSemantics-ImprovedLeft/OuterStream-StreamJoin(v3.1.xandnewer)spurious

I've noticed that if I set a very large grace value on the join window the
problem is solved, but since the input I provide is not out of order I did not
expect to need to do that, and I'm wary of the resource requirements of doing
so in practice on an application with a lot of volume.

My suspicion is that something is happening such that when one partition is 
processed it causes the stream time to be pushed forward to the newest message 
in that partition, meaning when the next partition is then examined it is found 
to contain many records which are 'too old' compared to the stream time. 

I ran across 
https://kafkacommunity.blogspot.com/2020/02/re-skipping-record-for-expired-segment_88.html
 from a year and a half ago which seems to describe the same problem, but I'm 
hoping the self-contained reproduction might make the issue easier to tackle!



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Re: [VOTE] 3.0.0 RC2

2021-09-10 Thread Konstantine Karantasis
Hi Bill,

I just added folder 30 to the kafka-site repo. Hadn't realized that this
separate manual step was part of the RC process and not the official
release (even though, strangely enough, I was expecting myself to be able
to read the docs online). I guess I needed a second nudge after Gary's
first comment on RC1 to see what was missing. I'll update the release doc
to make this more clear.

Should be accessible now. Please take another look.

Konstantine



On Fri, Sep 10, 2021 at 12:50 AM Bill Bejeck  wrote:

> Hi Konstantine,
>
> I've started to do the validation for the release and the link for docs
> doesn't work.
>
> Thanks,
> Bill