Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #525

2021-10-15 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 494919 lines...]
[2021-10-15T18:51:51.173Z] 
[2021-10-15T18:51:51.173Z] 
org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldReduceWindowed PASSED
[2021-10-15T18:51:51.173Z] 
[2021-10-15T18:51:51.173Z] 
org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows STARTED
[2021-10-15T18:51:52.951Z] 
[2021-10-15T18:51:52.951Z] 
org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldCountSessionWindows PASSED
[2021-10-15T18:51:52.951Z] 
[2021-10-15T18:51:52.951Z] 
org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed STARTED
[2021-10-15T18:51:58.767Z] 
[2021-10-15T18:51:58.767Z] 
org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed PASSED
[2021-10-15T18:52:02.390Z] 
[2021-10-15T18:52:02.390Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargePartitionCount PASSED
[2021-10-15T18:52:02.390Z] 
[2021-10-15T18:52:02.390Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyStandbys STARTED
[2021-10-15T18:52:05.283Z] 
[2021-10-15T18:52:05.283Z] > Task :core:integrationTest
[2021-10-15T18:52:05.283Z] 
[2021-10-15T18:52:05.283Z] PlaintextConsumerTest > testAutoCommitOnClose() 
PASSED
[2021-10-15T18:52:05.283Z] 
[2021-10-15T18:52:05.283Z] PlaintextConsumerTest > testListTopics() STARTED
[2021-10-15T18:52:05.283Z] 
[2021-10-15T18:52:05.283Z] PlaintextConsumerTest > testListTopics() PASSED
[2021-10-15T18:52:05.283Z] 
[2021-10-15T18:52:05.283Z] PlaintextConsumerTest > 
testExpandingTopicSubscriptions() STARTED
[2021-10-15T18:52:05.283Z] 
[2021-10-15T18:52:05.283Z] PlaintextConsumerTest > 
testExpandingTopicSubscriptions() PASSED
[2021-10-15T18:52:05.283Z] 
[2021-10-15T18:52:05.283Z] PlaintextConsumerTest > 
testMultiConsumerDefaultAssignor() STARTED
[2021-10-15T18:52:05.283Z] 
[2021-10-15T18:52:05.283Z] ConsumerBounceTest > 
testSeekAndCommitWithBrokerFailures() PASSED
[2021-10-15T18:52:05.283Z] 
[2021-10-15T18:52:05.283Z] ConsumerBounceTest > 
testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize() STARTED
[2021-10-15T18:52:05.283Z] 
[2021-10-15T18:52:05.283Z] ConsumerBounceTest > 
testConsumerReceivesFatalExceptionWhenGroupPassesMaxSize() PASSED
[2021-10-15T18:52:05.283Z] 
[2021-10-15T18:52:05.283Z] ConsumerBounceTest > 
testSubscribeWhenTopicUnavailable() STARTED
[2021-10-15T18:52:06.233Z] 
[2021-10-15T18:52:06.233Z] PlaintextConsumerTest > 
testMultiConsumerDefaultAssignor() PASSED
[2021-10-15T18:52:06.233Z] 
[2021-10-15T18:52:06.233Z] PlaintextConsumerTest > testInterceptors() STARTED
[2021-10-15T18:52:10.904Z] 
[2021-10-15T18:52:10.904Z] PlaintextConsumerTest > testInterceptors() PASSED
[2021-10-15T18:52:10.904Z] 
[2021-10-15T18:52:10.904Z] PlaintextConsumerTest > 
testConsumingWithEmptyGroupId() STARTED
[2021-10-15T18:52:15.744Z] 
[2021-10-15T18:52:15.744Z] PlaintextConsumerTest > 
testConsumingWithEmptyGroupId() PASSED
[2021-10-15T18:52:15.744Z] 
[2021-10-15T18:52:15.744Z] PlaintextConsumerTest > testPatternUnsubscription() 
STARTED
[2021-10-15T18:52:18.410Z] 
[2021-10-15T18:52:18.410Z] ConsumerBounceTest > 
testSubscribeWhenTopicUnavailable() PASSED
[2021-10-15T18:52:18.410Z] 
[2021-10-15T18:52:18.410Z] ConsumerBounceTest > 
testRollingBrokerRestartsWithSmallerMaxGroupSizeConfigDisruptsBigGroup() STARTED
[2021-10-15T18:52:23.190Z] 
[2021-10-15T18:52:23.190Z] PlaintextConsumerTest > testPatternUnsubscription() 
PASSED
[2021-10-15T18:52:23.190Z] 
[2021-10-15T18:52:23.190Z] PlaintextConsumerTest > testGroupConsumption() 
STARTED
[2021-10-15T18:52:30.210Z] 
[2021-10-15T18:52:30.210Z] PlaintextConsumerTest > testGroupConsumption() PASSED
[2021-10-15T18:52:30.210Z] 
[2021-10-15T18:52:30.210Z] PlaintextConsumerTest > testPartitionsFor() STARTED
[2021-10-15T18:52:32.746Z] 
[2021-10-15T18:52:32.746Z] > Task :streams:integrationTest
[2021-10-15T18:52:32.746Z] 
[2021-10-15T18:52:32.746Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyStandbys PASSED
[2021-10-15T18:52:32.746Z] 
[2021-10-15T18:52:32.746Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyStandbys STARTED
[2021-10-15T18:52:32.746Z] 
[2021-10-15T18:52:32.746Z] 
org.apache.kafka.streams.integration.TaskAssignorIntegrationTest > 
shouldProperlyConfigureTheAssignor STARTED
[2021-10-15T18:52:32.746Z] 
[2021-10-15T18:52:32.746Z] 
org.apache.kafka.streams.integration.TaskAssignorIntegrationTest > 
shouldProperlyConfigureTheAssignor PASSED
[2021-10-15T18:52:32.746Z] 
[2021-10-15T18:52:32.746Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorMany

Re: [DISCUSS] KIP-778 KRaft Upgrades

2021-10-15 Thread David Arthur
>
> How does the active controller know what is a valid `metadata.version`
> to persist? Could the active controller learn this from the
> ApiVersions response from all of the inactive controllers?


The active controller should probably validate whatever value is read from
meta.properties against its own range of supported versions (statically
defined in code). If the operator sets a version unsupported by the active
controller, that sounds like a configuration error and we should shut down.
I'm not sure what other validation we could do here without introducing
ordering dependencies (e.g., must have quorum before initializing the
version).

For example, let's say that we have a cluster that only has remote
> controllers, what are the valid metadata.version in that case?


I believe it would be the intersection of supported versions across all
brokers and controllers. This does raise a concern with upgrading the
metadata.version in general. Currently, the active controller only
validates the target version against the brokers' supported versions. We
will need to include the controllers' supported versions here as well
(using ApiVersions, probably).
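
To make that intersection concrete, here is a minimal sketch; VersionRange
is a hypothetical stand-in, not an actual KIP-778 type:

{code:java}
import java.util.List;

// Hypothetical sketch of the "intersection of supported versions" idea
// from the discussion above; VersionRange is illustrative only.
public final class VersionRange {
    public final short min;
    public final short max;

    public VersionRange(short min, short max) {
        this.min = min;
        this.max = max;
    }

    // Intersect two [min, max] ranges; returns null if they are disjoint.
    public static VersionRange intersect(VersionRange a, VersionRange b) {
        short lo = (short) Math.max(a.min, b.min);
        short hi = (short) Math.min(a.max, b.max);
        return lo <= hi ? new VersionRange(lo, hi) : null;
    }

    // The valid metadata.version values for the cluster would be the
    // intersection of the ranges reported (e.g., via ApiVersions) by all
    // brokers and all controllers.
    public static VersionRange validMetadataVersions(List<VersionRange> nodeRanges) {
        VersionRange result = new VersionRange(Short.MIN_VALUE, Short.MAX_VALUE);
        for (VersionRange range : nodeRanges) {
            result = intersect(result, range);
            if (result == null) {
                return null; // no version is supported by every node
            }
        }
        return result;
    }
}
{code}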

On Fri, Oct 15, 2021 at 1:44 PM José Armando García Sancio
 wrote:

> On Fri, Oct 15, 2021 at 7:24 AM David Arthur  wrote:
> > Hmm. So I think you are proposing the following flow:
> > > 1. Cluster metadata partition replicas establish a quorum using
> > > ApiVersions and the KRaft protocol.
> > > 2. Inactive controllers send a registration RPC to the active
> controller.
> > > 3. The active controller persists this information to the metadata log.
> >
> >
> > What happens if the inactive controllers send a metadata.version range
> > > that is not compatible with the metadata.version set for the cluster?
> >
> >
> > As we discussed offline, we don't need the explicit registration step.
> Once
> > a controller has joined the quorum, it will learn about the finalized
> > "metadata.version" level once it reads that record.
>
> How does the active controller know what is a valid `metadata.version`
> to persist? Could the active controller learn this from the
> ApiVersions response from all of the inactive controllers? For
> example, let's say that we have a cluster that only has remote
> controllers, what are the valid metadata.version in that case?
>
> > If it encounters a
> > version it can't support it should probably shutdown since it might not
> be
> > able to process any more records.
>
> I think that makes sense. If a controller cannot replay the metadata
> log, it might as well not be part of the quorum. If the cluster
> continues in this state it won't guarantee availability based on the
> replication factor.
>
> Thanks
> --
> -Jose
>


-- 
David Arthur


Jenkins build is back to normal : Kafka » kafka-2.6-jdk8 #140

2021-10-15 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-778 KRaft Upgrades

2021-10-15 Thread José Armando García Sancio
On Fri, Oct 15, 2021 at 7:24 AM David Arthur  wrote:
> Hmm. So I think you are proposing the following flow:
> > 1. Cluster metadata partition replicas establish a quorum using
> > ApiVersions and the KRaft protocol.
> > 2. Inactive controllers send a registration RPC to the active controller.
> > 3. The active controller persists this information to the metadata log.
>
>
> What happens if the inactive controllers send a metadata.version range
> > that is not compatible with the metadata.version set for the cluster?
>
>
> As we discussed offline, we don't need the explicit registration step. Once
> a controller has joined the quorum, it will learn about the finalized
> "metadata.version" level once it reads that record.

How does the active controller know what is a valid `metadata.version`
to persist? Could the active controller learn this from the ApiVersions
responses from all of the inactive controllers? For example, let's say
that we have a cluster that only has remote controllers; what are the
valid metadata.version values in that case?

> If it encounters a
> version it can't support it should probably shutdown since it might not be
> able to process any more records.

I think that makes sense. If a controller cannot replay the metadata
log, it might as well not be part of the quorum. If the cluster continues
in this state, it won't provide the availability guarantees implied by
the replication factor.

Thanks
-- 
-Jose


Re: [VOTE] KIP-768: Extend SASL/OAUTHBEARER with Support for OIDC

2021-10-15 Thread Ismael Juma
Thanks Kirk, I'm overall +1 (binding). I think the timeouts may need a bit
of tweaking still. We can update the KIP and the thread if we decide to do
that as part of the PR review.

Ismael

On Thu, Oct 14, 2021 at 11:59 AM Kirk True  wrote:

> Hi Ismael,
>
> Thanks for reviewing the KIP. I've made a first pass at updating based on
> your feedback.
>
> Questions/comments inline...
>
> On Thu, Oct 14, 2021, at 6:20 AM, Ismael Juma wrote:
>
> Hi Kirk,
>
> Thanks for the KIP. It looks good overall to me. A few nits:
>
> 1. "sasl.login.retry.wait.ms": these configs are typically called
> `backoff`
> in Kafka. For example "retry.backoff.ms". The default for `
> retry.backoff.ms`
> is 100ms. Is there a reason why we are using a different value for this
> one? The `sasl.login.retry.max.wait.ms` should be renamed accordingly.
>
>
>
> Changed to sasl.login.retry.backoff.ms and sasl.login.retry.backoff.max.ms
> and changed the former to 100 ms.
>
> 2. "sasl.login.attempts": do we need this at all? We have generally moved
> away from number of retries in favor of timeouts for Kafka (the producer
> has a retries config partly for historical reasons, but partly due to
> semantics that are specific to the producer.
>
>
> Removed this option and now we just retry up to
> sasl.login.retry.backoff.max.ms.
>
> 3. "sasl.login.read.timeout.ms" : we have two types of kafka timeouts, "
> default.api.timeout.ms" and "request.timeout.ms". Is this similar to any
> of
> the two or is it different? If similar to one of the existing ones, we
> should name it similarly.
>
>
> This is specifically for the setReadTimeout on java.net.URLConnection when
> making the call to the OAuth/OIDC provider to retrieve the token. So it is
> SASL specific because reading the response from the external OAuth/OIDC
> provider (likely over WAN) may require a longer timeout.
>
> 4. "sasl.login.connect.timeout.ms": is this the equivalent of "
> socket.connection.setup.timeout.ms" in Kafka? I am unsure why we chose
> such
> a long name, "connect.timeout.ms" would have been a lot better. However,
> if
> it is similar, then we may want to follow the same naming convention.
>
>
> This is for the setConnectTimeout on java.net.URLConnection, similar to
> the above.
>
> 5. Should there be a "connect.max.timeout.ms" too?
>
>
> AFAIK, we don't have that level of control per our use of URLConnection.
>
> 6. What are the compatibility guarantees offered by the
> "OAuthCompatibilityTest" CLI tool? Also, can we adjust the name so it's
> clear that it's a Command versus a test suite?
>
>
> I changed the name to OAuthCompatibilityTool. Can you elaborate on what
> compatibility guarantees you'd like to see listed? I may just be
> misunderstanding the request.
>
> Thanks,
> Kirk
>
>
> Thanks!
>
> Ismael
>
> On Mon, Sep 27, 2021 at 10:20 AM Kirk True  wrote:
>
> > Hi all!
> >
> > I'd like to start a vote for KIP-768 that allows Kafka to connect to an
> > OAuth/OIDC identity provider for authentication and token retrieval:
> >
> >
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=186877575
> >
> > Thanks!
> > Kirk
>
>
>
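
For reference, the two SASL login timeouts discussed above correspond to
the standard java.net.URLConnection setters. A minimal sketch with a
placeholder endpoint URL and example timeout values (not the KIP's
defaults):

{code:java}
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import java.net.URLConnection;

public class TokenEndpointProbe {
    public static void main(String[] args) throws IOException {
        // Placeholder; a real OAuth/OIDC token endpoint would go here.
        URLConnection conn =
            new URL("https://example.com/oauth2/token").openConnection();

        // Roughly what sasl.login.connect.timeout.ms governs: how long to
        // wait for the TCP connection to the provider to be established.
        conn.setConnectTimeout(10_000);

        // Roughly what sasl.login.read.timeout.ms governs: how long to
        // wait for the response, which may be slow over a WAN.
        conn.setReadTimeout(30_000);

        try (InputStream in = conn.getInputStream()) {
            System.out.println("read " + in.readAllBytes().length + " bytes");
        }
    }
}
{code}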


[jira] [Resolved] (KAFKA-13377) Fix Resource leak due to Files.list

2021-10-15 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13377?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-13377.
-
Fix Version/s: 3.1.0
   Resolution: Fixed

> Fix Resource leak due to Files.list 
> ---
>
> Key: KAFKA-13377
> URL: https://issues.apache.org/jira/browse/KAFKA-13377
> Project: Kafka
>  Issue Type: Improvement
>Reporter: lujie
>Priority: Major
> Fix For: 3.1.0
>
>
> Files.list will open a directory stream, so we should close it.
>
> See the JDK docs: the try-with-resources construct should be used to
> ensure that the stream's Stream#close method is invoked after the stream
> operations are completed.
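
A minimal sketch of the try-with-resources pattern described above
(illustrative only; the actual patch may differ):

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.Stream;

public class ListDirectory {
    // Files.list opens a directory stream; try-with-resources guarantees
    // the underlying handle is released even if iteration throws.
    static List<Path> entries(Path dir) throws IOException {
        try (Stream<Path> paths = Files.list(dir)) {
            return paths.collect(Collectors.toList());
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(entries(Paths.get(".")));
    }
}
{code}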



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #524

2021-10-15 Thread Apache Jenkins Server
See 




Re: [kafka-clients] [VOTE] 2.6.3 RC0

2021-10-15 Thread Israel Ekpo
Hi Mickael,

I am running my checks right now using the process outlined here

https://github.com/izzyacademy/apache-kafka-release-party#how-to-validate-apache-kafka-release-candidates

I will post my results and vote as soon as they are completed.

On Tue, Oct 12, 2021 at 9:56 AM Mickael Maison  wrote:

> Hello Kafka users, developers and client-developers,
>
> This is the first candidate for release of Apache Kafka 2.6.3.
>
> Apache Kafka 2.6.3 is a bugfix release and 11 issues, as well as
> CVE-2021-38153, have been fixed since 2.6.2.
>
> Release notes for the 2.6.3 release:
> https://home.apache.org/~mimaison/kafka-2.6.3-rc0/RELEASE_NOTES.html
>
> *** Please download, test and vote by Friday, October 15, 5pm CET
>
> Kafka's KEYS file containing PGP keys we use to sign the release:
> https://kafka.apache.org/KEYS
>
> * Release artifacts to be voted upon (source and binary):
> https://home.apache.org/~mimaison/kafka-2.6.3-rc0/
>
> * Maven artifacts to be voted upon:
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
>
> * Javadoc:
> https://home.apache.org/~mimaison/kafka-2.6.3-rc0/javadoc/
>
> * Tag to be voted upon (off 2.6 branch) is the 2.6.3 tag:
> https://github.com/apache/kafka/releases/tag/2.6.3-rc0
>
> * Documentation:
> https://kafka.apache.org/26/documentation.html
>
> * Protocol:
> https://kafka.apache.org/26/protocol.html
>
> * Successful Jenkins builds for the 2.6 branch:
> I'll share a link once the build completes
>
> /**
>
> Thanks,
> Mickael
>
> --
> You received this message because you are subscribed to the Google Groups
> "kafka-clients" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kafka-clients+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/kafka-clients/CA%2BOCqnZNO3io3hm7VDoR9DzoN4kWSAz8nPV%2BCY07hDoSCkfwwg%40mail.gmail.com
> .
>


Re: [kafka-clients] [VOTE] 2.7.2 RC0

2021-10-15 Thread Israel Ekpo
Hi Mickael,

I am pretty surprised that there are no votes so far on the RCs and the
deadline has already passed.

I am running my checks right now using the process outlined here

https://github.com/izzyacademy/apache-kafka-release-party#how-to-validate-apache-kafka-release-candidates

I will post my results and vote as soon as they are completed.

On Fri, Oct 15, 2021 at 9:52 AM Mickael Maison  wrote:

> Successful Jenkins build:
> https://ci-builds.apache.org/job/Kafka/job/kafka-2.7-jdk8/181/
>
> On Wed, Oct 13, 2021 at 6:47 PM Mickael Maison 
> wrote:
> >
> > Hi Israel,
> >
> > Our tooling generates the same template for all types of releases.
> >
> > For bugfix releases, the site docs and javadocs don't typically
> > require extensive validation.
> > It's still a good idea to open them up and check a few pages to
> > validate they look right.
> >
> > For this release, as you've mentioned, site docs have not changed.
> >
> > Thanks
> >
> > On Wed, Oct 13, 2021 at 1:59 AM Israel Ekpo 
> wrote:
> > >
> > > Mickael,
> > >
> > > For patch or bug fix releases like this one, should we exclude the
> Javadocs and site docs if they have not changed?
> > >
> > > https://github.com/apache/kafka-site
> > >
> > > The site docs were last changed about 6 months ago and it appears it
> may not have changed or needs validation
> > >
> > >
> > >
> > > On Tue, Oct 12, 2021 at 2:17 PM Mickael Maison 
> wrote:
> > >>
> > >> Hello Kafka users, developers and client-developers,
> > >>
> > >> This is the first candidate for release of Apache Kafka 2.7.2.
> > >>
> > >> Apache Kafka 2.7.2 is a bugfix release and 26 issues, as well as
> > >> CVE-2021-38153, have been fixed since 2.7.1.
> > >>
> > >> Release notes for the 2.7.2 release:
> > >> https://home.apache.org/~mimaison/kafka-2.7.2-rc0/RELEASE_NOTES.html
> > >>
> > >> *** Please download, test and vote by Friday, October 15, 5pm CET
> > >>
> > >> Kafka's KEYS file containing PGP keys we use to sign the release:
> > >> https://kafka.apache.org/KEYS
> > >>
> > >> * Release artifacts to be voted upon (source and binary):
> > >> https://home.apache.org/~mimaison/kafka-2.7.2-rc0/
> > >>
> > >> * Maven artifacts to be voted upon:
> > >>
> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > >>
> > >> * Javadoc:
> > >> https://home.apache.org/~mimaison/kafka-2.7.2-rc0/javadoc/
> > >>
> > >> * Tag to be voted upon (off 2.7 branch) is the 2.7.2 tag:
> > >> https://github.com/apache/kafka/releases/tag/2.7.2-rc0
> > >>
> > >> * Documentation:
> > >> https://kafka.apache.org/27/documentation.html
> > >>
> > >> * Protocol:
> > >> https://kafka.apache.org/27/protocol.html
> > >>
> > >> * Successful Jenkins builds for the 2.7 branch:
> > >> I'll share a link once the build completes
> > >>
> > >>
> > >> /**
> > >>
> > >> Thanks,
> > >> Mickael
> > >>
> > >> --
> > >> You received this message because you are subscribed to the Google
> Groups "kafka-clients" group.
> > >> To unsubscribe from this group and stop receiving emails from it,
> send an email to kafka-clients+unsubscr...@googlegroups.com.
> > >> To view this discussion on the web visit
> https://groups.google.com/d/msgid/kafka-clients/CA%2BOCqnY9TikXuCjEyr%2BA2bSjG_Zkd-zFvx9_1Bx%3DiOwpWWN1Sg%40mail.gmail.com
> .
>
> --
> You received this message because you are subscribed to the Google Groups
> "kafka-clients" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to kafka-clients+unsubscr...@googlegroups.com.
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/kafka-clients/CA%2BOCqnaojtzhaG5tY%3DvMsqzD8e4jXPMsCktEvf7PtcaRsNH2xg%40mail.gmail.com
> .
>


Re: [DISCUSS] KIP-778 KRaft Upgrades

2021-10-15 Thread David Arthur
Hey everyone, I've updated the KIP with a few items we've discussed so far:

101. Change AllowDowngrade bool to DowngradeType int8 in the
UpgradeFeatureRequest RPC. I'm wondering if we can "cheat" on this
incompatible change: since the field is not currently in use, we could
remove the old field entirely and leave the versions at 0+. Thoughts?

102. Add section on metadata.version initialization. This includes a new
option to kafka-storage.sh to let the operator set the initial version

103. Add section on behavior of controllers and brokers if they encounter
an unsupported version
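
Regarding item 101 above, a purely illustrative sketch of what the
DowngradeType int8 could encode; the names and values are assumptions,
not the KIP's final definition:

{code:java}
// Illustrative only: one possible encoding for the DowngradeType int8
// field; the actual names and values are up to KIP-778.
public enum DowngradeType {
    NONE((byte) 0),             // plain upgrade, no downgrade allowed
    SAFE_DOWNGRADE((byte) 1),   // downgrade only if no metadata is lost
    UNSAFE_DOWNGRADE((byte) 2); // lossy downgrade, operator accepts loss

    private final byte code;

    DowngradeType(byte code) {
        this.code = code;
    }

    public byte code() {
        return code;
    }
}
{code}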

Thanks for the great discussion so far!
-David

On Fri, Oct 15, 2021 at 10:23 AM David Arthur  wrote:

> Jose,
>
> Are you saying that for metadata.version X different software versions
>> could generate different snapshots? If so, I would consider this an
>> implementation bug, no? The format and content of a snapshot is a
>> public API that needs to be supported across software versions.
>
>
> I agree this would be a bug. This is probably a non-issue since we will
> recommend brokers to be at the same software version before performing an
> upgrade or downgrade. That eliminates the possibility that brokers could
> generate different snapshots.
>
> Hmm. So I think you are proposing the following flow:
>> 1. Cluster metadata partition replicas establish a quorum using
>> ApiVersions and the KRaft protocol.
>> 2. Inactive controllers send a registration RPC to the active controller.
>> 3. The active controller persists this information to the metadata log.
>
>
> What happens if the inactive controllers send a metadata.version range
>> that is not compatible with the metadata.version set for the cluster?
>
>
> As we discussed offline, we don't need the explicit registration step.
> Once a controller has joined the quorum, it will learn about the finalized
> "metadata.version" level once it reads that record. If it encounters a
> version it can't support it should probably shutdown since it might not be
> able to process any more records.
>
> However, this does lead to a race condition when a controller is catching
> up on the metadata log. Let's say a controller comes up and fetches the
> following records from the leader:
>
> > [M1, M2, M3, FeatureLevelRecord]
>
> Between the time the controller starts handling requests and it processes
> the FeatureLevelRecord, it could report an old value of metadata.version
> when handling ApiVersionsRequest. In the brokers, I believe we delay
> starting the request handler threads until the broker has been unfenced
> (meaning, it is reasonably caught up). We might need something similar for
> controllers.
>
> -David
>
> On Thu, Oct 14, 2021 at 9:29 PM David Arthur  wrote:
>
>> Kowshik, thanks for the review!
>>
>> 7001. An enum sounds like a good idea here. Especially since setting
>> Upgrade=false and Force=true doesn't really make sense. An enum would avoid
>> confusing/invalid combinations of flags
>>
>> 7002. How about adding --force-downgrade as an alternative to the
>> --downgrade argument? So, it would take the same arguments (list of
>> feature:version), but use the DOWNGRADE_UNSAFE option in the RPC.
>>
>> 7003. Yes we will need the advanced CLI since we will need to only modify
>> the "metadata.version" FF. I was kind of wondering if we might want
>> separate sub-commands for the different operations instead of all under
>> "update". E.g., "kafka-features.sh
>> upgrade|downgrade|force-downgrade|delete|describe".
>>
>> 7004/7005. The intention in this KIP is that we bump the
>> "metadata.version" liberally and that most upgrades are backwards
>> compatible. We're relying on this for feature gating as well as indicating
>> compatibility. The enum is indeed an implementation detail that is enforced
>> by the controller when handling UpdateFeaturesRequest. As for lossy
>> downgrades, this really only applies to the metadata log as we will lose
>> some fields and records when downgrading to an older version. This is
>> useful as an escape hatch for cases when a software upgrade has occurred,
>> the feature flag was increased, and then bugs are discovered. Without the
>> lossy downgrade scenario, we have no path back to a previous software
>> version.
>>
>> As for the min/max finalized version, I'm not totally clear on cases
>> where these would differ. I think for "metadata.version" we just want a
>> single finalized version for the whole cluster (like we do for IBP).
>>
>> -David
>>
>>
>> On Thu, Oct 14, 2021 at 1:59 PM José Armando García Sancio
>>  wrote:
>>
>>> On Tue, Oct 12, 2021 at 10:57 AM Colin McCabe 
>>> wrote:
>>> > > 11. For downgrades, it would be useful to describe how to determine
>>> the
>>> > > downgrade process (generating new snapshot, propagating the
>>> snapshot, etc)
>>> > > has completed. We could block the UpdateFeature request until the
>>> process
>>> > > is completed. However, since the process could take time, the
>>> request could
>>> > > time out. Another way is throu

Re: [DISCUSS] KIP-778 KRaft Upgrades

2021-10-15 Thread David Arthur
Jose,

Are you saying that for metadata.version X different software versions
> could generate different snapshots? If so, I would consider this an
> implementation bug, no? The format and content of a snapshot is a
> public API that needs to be supported across software versions.


I agree this would be a bug. This is probably a non-issue since we will
recommend brokers to be at the same software version before performing an
upgrade or downgrade. That eliminates the possibility that brokers could
generate different snapshots.

Hmm. So I think you are proposing the following flow:
> 1. Cluster metadata partition replicas establish a quorum using
> ApiVersions and the KRaft protocol.
> 2. Inactive controllers send a registration RPC to the active controller.
> 3. The active controller persists this information to the metadata log.


What happens if the inactive controllers send a metadata.version range
> that is not compatible with the metadata.version set for the cluster?


As we discussed offline, we don't need the explicit registration step. Once
a controller has joined the quorum, it will learn the finalized
"metadata.version" level when it reads that record. If it encounters a
version it can't support, it should probably shut down, since it might not
be able to process any more records.

However, this does lead to a race condition when a controller is catching
up on the metadata log. Let's say a controller comes up and fetches the
following records from the leader:

> [M1, M2, M3, FeatureLevelRecord]

Between the time the controller starts handling requests and it processes
the FeatureLevelRecord, it could report an old value of metadata.version
when handling ApiVersionsRequest. In the brokers, I believe we delay
starting the request handler threads until the broker has been unfenced
(meaning, it is reasonably caught up). We might need something similar for
controllers.
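
A hedged sketch of that gating idea for controllers; every name below is
hypothetical rather than actual Kafka internals:

{code:java}
import java.util.concurrent.CountDownLatch;

// Hypothetical illustration: hold ApiVersions handling until metadata log
// replay has processed the FeatureLevelRecord, so the controller never
// reports a stale metadata.version while catching up.
public class ControllerStartupGate {
    private final CountDownLatch caughtUp = new CountDownLatch(1);
    private volatile short finalizedMetadataVersion = -1; // -1 = unknown

    // Called by the metadata log replay thread.
    public void onFeatureLevelRecord(short version) {
        finalizedMetadataVersion = version;
        caughtUp.countDown();
    }

    // Called by a request handler before answering ApiVersionsRequest.
    public short awaitFinalizedVersion() throws InterruptedException {
        caughtUp.await();
        return finalizedMetadataVersion;
    }
}
{code}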

-David

On Thu, Oct 14, 2021 at 9:29 PM David Arthur  wrote:

> Kowshik, thanks for the review!
>
> 7001. An enum sounds like a good idea here. Especially since setting
> Upgrade=false and Force=true doesn't really make sense. An enum would avoid
> confusing/invalid combinations of flags
>
> 7002. How about adding --force-downgrade as an alternative to the
> --downgrade argument? So, it would take the same arguments (list of
> feature:version), but use the DOWNGRADE_UNSAFE option in the RPC.
>
> 7003. Yes we will need the advanced CLI since we will need to only modify
> the "metadata.version" FF. I was kind of wondering if we might want
> separate sub-commands for the different operations instead of all under
> "update". E.g., "kafka-features.sh
> upgrade|downgrade|force-downgrade|delete|describe".
>
> 7004/7005. The intention in this KIP is that we bump the
> "metadata.version" liberally and that most upgrades are backwards
> compatible. We're relying on this for feature gating as well as indicating
> compatibility. The enum is indeed an implementation detail that is enforced
> by the controller when handling UpdateFeaturesRequest. As for lossy
> downgrades, this really only applies to the metadata log as we will lose
> some fields and records when downgrading to an older version. This is
> useful as an escape hatch for cases when a software upgrade has occurred,
> the feature flag was increased, and then bugs are discovered. Without the
> lossy downgrade scenario, we have no path back to a previous software
> version.
>
> As for the min/max finalized version, I'm not totally clear on cases where
> these would differ. I think for "metadata.version" we just want a single
> finalized version for the whole cluster (like we do for IBP).
>
> -David
>
>
> On Thu, Oct 14, 2021 at 1:59 PM José Armando García Sancio
>  wrote:
>
>> On Tue, Oct 12, 2021 at 10:57 AM Colin McCabe  wrote:
>> > > 11. For downgrades, it would be useful to describe how to determine
>> the
>> > > downgrade process (generating new snapshot, propagating the snapshot,
>> etc)
>> > > has completed. We could block the UpdateFeature request until the
>> process
>> > > is completed. However, since the process could take time, the request
>> could
>> > > time out. Another way is through DescribeFeature and the server only
>> > > reports downgraded versions after the process is completed.
>> >
>> > Hmm.. I think we need to avoid blocking, since we don't know how long
>> it will take for all nodes to act on the downgrade request. After all, some
>> nodes may be down.
>> >
>> > But I agree we should have some way of knowing when the upgrade is
>> done. DescribeClusterResponse seems like the natural place to put
>> information about each node's feature level. While we're at it, we should
>> also add a boolean to indicate whether the given node is fenced. (This will
>> always be false for ZK mode, of course...)
>> >
>>
>> I agree. I think from the user's point of view, they would like to
>> know if it is safe to downgrade the software version of a specific
>> broker or controlle

Re: [kafka-clients] [VOTE] 2.7.2 RC0

2021-10-15 Thread Mickael Maison
Successful Jenkins build:
https://ci-builds.apache.org/job/Kafka/job/kafka-2.7-jdk8/181/

On Wed, Oct 13, 2021 at 6:47 PM Mickael Maison  wrote:
>
> Hi Israel,
>
> Our tooling generates the same template for all types of releases.
>
> For bugfix releases, the site docs and javadocs don't typically
> require extensive validation.
> It's still a good idea to open them up and check a few pages to
> validate they look right.
>
> For this release, as you've mentioned, site docs have not changed.
>
> Thanks
>
> On Wed, Oct 13, 2021 at 1:59 AM Israel Ekpo  wrote:
> >
> > Mickael,
> >
> > For patch or bug fix releases like this one, should we exclude the Javadocs 
> > and site docs if they have not changed?
> >
> > https://github.com/apache/kafka-site
> >
> > The site docs were last changed about 6 months ago and it appears it may 
> > not have changed or needs validation
> >
> >
> >
> > On Tue, Oct 12, 2021 at 2:17 PM Mickael Maison  wrote:
> >>
> >> Hello Kafka users, developers and client-developers,
> >>
> >> This is the first candidate for release of Apache Kafka 2.7.2.
> >>
> >> Apache Kafka 2.7.2 is a bugfix release and 26 issues, as well as
> >> CVE-2021-38153, have been fixed since 2.7.1.
> >>
> >> Release notes for the 2.7.2 release:
> >> https://home.apache.org/~mimaison/kafka-2.7.2-rc0/RELEASE_NOTES.html
> >>
> >> *** Please download, test and vote by Friday, October 15, 5pm CET
> >>
> >> Kafka's KEYS file containing PGP keys we use to sign the release:
> >> https://kafka.apache.org/KEYS
> >>
> >> * Release artifacts to be voted upon (source and binary):
> >> https://home.apache.org/~mimaison/kafka-2.7.2-rc0/
> >>
> >> * Maven artifacts to be voted upon:
> >> https://repository.apache.org/content/groups/staging/org/apache/kafka/
> >>
> >> * Javadoc:
> >> https://home.apache.org/~mimaison/kafka-2.7.2-rc0/javadoc/
> >>
> >> * Tag to be voted upon (off 2.7 branch) is the 2.7.2 tag:
> >> https://github.com/apache/kafka/releases/tag/2.7.2-rc0
> >>
> >> * Documentation:
> >> https://kafka.apache.org/27/documentation.html
> >>
> >> * Protocol:
> >> https://kafka.apache.org/27/protocol.html
> >>
> >> * Successful Jenkins builds for the 2.7 branch:
> >> I'll share a link once the build completes
> >>
> >>
> >> /**
> >>
> >> Thanks,
> >> Mickael
> >>
> >> --
> >> You received this message because you are subscribed to the Google Groups 
> >> "kafka-clients" group.
> >> To unsubscribe from this group and stop receiving emails from it, send an 
> >> email to kafka-clients+unsubscr...@googlegroups.com.
> >> To view this discussion on the web visit 
> >> https://groups.google.com/d/msgid/kafka-clients/CA%2BOCqnY9TikXuCjEyr%2BA2bSjG_Zkd-zFvx9_1Bx%3DiOwpWWN1Sg%40mail.gmail.com.


Jenkins build is back to normal : Kafka » kafka-2.7-jdk8 #181

2021-10-15 Thread Apache Jenkins Server
See 




Build failed in Jenkins: Kafka » kafka-2.6-jdk8 #139

2021-10-15 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 3.17 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldReturnCorrectInMemoryStoreTypeOnly[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > shouldThrowForMissingTime[Eos 
enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.

[jira] [Created] (KAFKA-13380) When I migrate nodes from cluster A to cluster B, cluster A reports an error

2021-10-15 Thread donglei (Jira)
donglei created KAFKA-13380:
---

 Summary: When I migrate nodes from cluster A to cluster B, cluster 
A reports an error
 Key: KAFKA-13380
 URL: https://issues.apache.org/jira/browse/KAFKA-13380
 Project: Kafka
  Issue Type: Bug
Reporter: donglei


We have two Kafka clusters:
 * ClusterA version: 0.10.0.1
 * ClusterB version: 0.10.2.1

Recently, due to a shortage of resources in cluster B, three machines were
taken offline from cluster A to be added to cluster B. The offline steps
were as follows:
 * 1. Migrate the topic replicas on the three brokers to other brokers.
 * 2. Run the `bin/kafka-server-stop.sh` command on the three brokers to
stop the Kafka service.
 * 3. Clean the installation directory, data directory and log directory of
the Kafka service on the three brokers.

After the three brokers went offline from ClusterA, ClusterA operated
normally.

Then we added the three brokers to cluster B in turn. Afterwards, the
warning logs in server.log on the three brokers were as follows:
{code:java}
[2021-10-15 11:12:12,140] INFO [Kafka Server 12], started 
(kafka.server.KafkaServer)
[2021-10-15 11:12:12,323] WARN Attempting to send response via channel for 
which there is no open connection, connection id 5 (kafka.network.Processor)
[2021-10-15 11:12:18,445] WARN Attempting to send response via channel for 
which there is no open connection, connection id 1 (kafka.network.Processor)
[2021-10-15 11:12:29,527] WARN Attempting to send response via channel for 
which there is no open connection, connection id 3 (kafka.network.Processor)
[2021-10-15 11:12:31,585] WARN Attempting to send response via channel for 
which there is no open connection, connection id 1 (kafka.network.Processor)
[2021-10-15 11:12:31,728] WARN Attempting to send response via channel for 
which there is no open connection, connection id 10 (kafka.network.Processor)
[2021-10-15 11:12:57,526] WARN Attempting to send response via channel for 
which there is no open connection, connection id 0 
(kafka.network.Processor){code}
At the same time, we found that all brokers in ClusterA also began to
report error logs as follows:
{code:java}
[2021-10-15 11:13:02,187] ERROR Closing socket for xxx:9092-:49691 because 
of error (kafka.network.Processor)
kafka.network.InvalidRequestException: Error getting request for apiKey: 3 and 
apiVersion: 2
at 
kafka.network.RequestChannel$Request.liftedTree2$1(RequestChannel.scala:95)
at kafka.network.RequestChannel$Request.(RequestChannel.scala:87)
at 
kafka.network.Processor$$anonfun$processCompletedReceives$1.apply(SocketServer.scala:488)
at 
kafka.network.Processor$$anonfun$processCompletedReceives$1.apply(SocketServer.scala:483)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
kafka.network.Processor.processCompletedReceives(SocketServer.scala:483)
at kafka.network.Processor.run(SocketServer.scala:413)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: Invalid version for API key 3: 2
at 
org.apache.kafka.common.protocol.ProtoUtils.schemaFor(ProtoUtils.java:31)
at 
org.apache.kafka.common.protocol.ProtoUtils.requestSchema(ProtoUtils.java:44)
at 
org.apache.kafka.common.protocol.ProtoUtils.parseRequest(ProtoUtils.java:60)
at 
org.apache.kafka.common.requests.MetadataRequest.parse(MetadataRequest.java:96)
at 
org.apache.kafka.common.requests.AbstractRequest.getRequest(AbstractRequest.java:48)
at 
kafka.network.RequestChannel$Request.liftedTree2$1(RequestChannel.scala:92)
... 10 more
{code}
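
For context, the "Invalid version for API key 3: 2" failure corresponds to
the kind of range check sketched below (illustrative, not actual 0.10.x
source): API key 3 is the Metadata API, and a 0.10.0.x broker only
understands Metadata request versions 0-1, so whatever is still connecting
to the ClusterA brokers is now sending Metadata v2 requests that they
cannot parse.

{code:java}
import java.util.Map;

// Illustrative sketch (not actual Kafka source) of the per-API version
// check behind the error above. The max versions are examples only.
public class ApiVersionCheck {
    // apiKey -> highest request version this broker understands (excerpt).
    private static final Map<Short, Short> MAX_VERSION = Map.of(
        (short) 0, (short) 2,  // Produce
        (short) 1, (short) 2,  // Fetch
        (short) 3, (short) 1   // Metadata: only v0/v1 on 0.10.0.x
    );

    static void validate(short apiKey, short apiVersion) {
        Short max = MAX_VERSION.get(apiKey);
        if (max == null || apiVersion > max) {
            throw new IllegalArgumentException(
                "Invalid version for API key " + apiKey + ": " + apiVersion);
        }
    }

    public static void main(String[] args) {
        validate((short) 3, (short) 2); // reproduces the failure in the log
    }
}
{code}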
Is there anything wrong with the way the brokers went offline from
ClusterA? Why were there no exceptions after the three brokers went
offline, yet adding them to ClusterB caused errors in the ClusterA service?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Created] (KAFKA-13379) When I migrate nodes from cluster A to cluster B, cluster A reports an error

2021-10-15 Thread donglei (Jira)
donglei created KAFKA-13379:
---

 Summary: When I migrate nodes from cluster A to cluster B, cluster 
A reports an error
 Key: KAFKA-13379
 URL: https://issues.apache.org/jira/browse/KAFKA-13379
 Project: Kafka
  Issue Type: Bug
Reporter: donglei


We have two Kafka clusters:
 * ClusterA version: 0.10.0.1
 * ClusterB version: 0.10.2.1

Recently, due to a shortage of resources in cluster B, three machines were
taken offline from cluster A to be added to cluster B. The offline steps
were as follows:
 * 1. Migrate the topic replicas on the three brokers to other brokers.
 * 2. Run the `bin/kafka-server-stop.sh` command on the three brokers to
stop the Kafka service.
 * 3. Clean the installation directory, data directory and log directory of
the Kafka service on the three brokers.

After the three brokers went offline from ClusterA, ClusterA operated
normally.

Then we added the three brokers to cluster B in turn. Afterwards, the
warning logs in server.log on the three brokers were as follows:
{code:java}
[2021-10-15 11:12:12,140] INFO [Kafka Server 12], started 
(kafka.server.KafkaServer)
[2021-10-15 11:12:12,323] WARN Attempting to send response via channel for 
which there is no open connection, connection id 5 (kafka.network.Processor)
[2021-10-15 11:12:18,445] WARN Attempting to send response via channel for 
which there is no open connection, connection id 1 (kafka.network.Processor)
[2021-10-15 11:12:29,527] WARN Attempting to send response via channel for 
which there is no open connection, connection id 3 (kafka.network.Processor)
[2021-10-15 11:12:31,585] WARN Attempting to send response via channel for 
which there is no open connection, connection id 1 (kafka.network.Processor)
[2021-10-15 11:12:31,728] WARN Attempting to send response via channel for 
which there is no open connection, connection id 10 (kafka.network.Processor)
[2021-10-15 11:12:57,526] WARN Attempting to send response via channel for 
which there is no open connection, connection id 0 
(kafka.network.Processor){code}
At the same time, we found that all brokers in ClusterA also began to
report error logs as follows:
{code:java}
[2021-10-15 11:13:02,187] ERROR Closing socket for xxx:9092-:49691 because 
of error (kafka.network.Processor)
kafka.network.InvalidRequestException: Error getting request for apiKey: 3 and 
apiVersion: 2
at 
kafka.network.RequestChannel$Request.liftedTree2$1(RequestChannel.scala:95)
at kafka.network.RequestChannel$Request.(RequestChannel.scala:87)
at 
kafka.network.Processor$$anonfun$processCompletedReceives$1.apply(SocketServer.scala:488)
at 
kafka.network.Processor$$anonfun$processCompletedReceives$1.apply(SocketServer.scala:483)
at scala.collection.Iterator$class.foreach(Iterator.scala:727)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at 
kafka.network.Processor.processCompletedReceives(SocketServer.scala:483)
at kafka.network.Processor.run(SocketServer.scala:413)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.IllegalArgumentException: Invalid version for API key 3: 2
at 
org.apache.kafka.common.protocol.ProtoUtils.schemaFor(ProtoUtils.java:31)
at 
org.apache.kafka.common.protocol.ProtoUtils.requestSchema(ProtoUtils.java:44)
at 
org.apache.kafka.common.protocol.ProtoUtils.parseRequest(ProtoUtils.java:60)
at 
org.apache.kafka.common.requests.MetadataRequest.parse(MetadataRequest.java:96)
at 
org.apache.kafka.common.requests.AbstractRequest.getRequest(AbstractRequest.java:48)
at 
kafka.network.RequestChannel$Request.liftedTree2$1(RequestChannel.scala:92)
... 10 more
{code}
Is there anything wrong with the way the brokers went offline from
ClusterA? Why were there no exceptions after the three brokers went
offline, yet adding them to ClusterB caused errors in the ClusterA service?



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


Build failed in Jenkins: Kafka » kafka-2.6-jdk8 #138

2021-10-15 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 3.18 MB...]

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCaptureInternalTopicNamesIfWrittenInto[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateOnWallClockTimeDeprecated[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldProcessRecordForTopic[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldForwardRecordsFromSubtopologyToSubtopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldNotUpdateStoreForSmallerValue[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldCreateStateDirectoryForStatefulTopology[Eos enabled = false] PASSED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] STARTED

org.apache.kafka.streams.TopologyTestDriverTest > 
shouldPunctuateIfWallClockTimeAdvances[Eos enabled = false] PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordsFromKeyValuePairs PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKeyAndDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithDefaultTimestamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecordWithOtherTopicNameAndTimestampWithTimetamp 
PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullHeaders PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldNotAllowToCreateTopicWithNullTopicNameWithDefaultTimestamp PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldRequireCustomTopicNameIfNotDefaultFactoryTopicNameWithNullKey PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateNullKeyConsumerRecord PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName STARTED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > 
shouldCreateConsumerRecordWithOtherTopicName PASSED

org.apache.kafka.streams.test.ConsumerRecordFactoryTest > shouldAdvanceTime 
STARTED

Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #523

2021-10-15 Thread Apache Jenkins Server
See 




Re: [DISCUSS] Apache Kafka 3.1.0 release

2021-10-15 Thread David Jacot
Hi folks,

Just a quick reminder that the KIP freeze is today. Don't forget to close
your ongoing votes.

Best,
David

On Thu, Oct 14, 2021 at 5:31 PM David Jacot  wrote:

> Hi Luke,
>
> Added it to the plan.
>
> Thanks,
> David
>
> On Thu, Oct 14, 2021 at 10:09 AM Luke Chen  wrote:
>
>> Hi David,
>> KIP-766 is merged into trunk. Please help add it into the release plan.
>>
>> Thank you.
>> Luke
>>
>> On Mon, Oct 11, 2021 at 10:50 PM David Jacot > >
>> wrote:
>>
>> > Hi Michael,
>> >
>> > Sure. I have updated the release plan to include it. Thanks for the
>> > heads up.
>> >
>> > Best,
>> > David
>> >
>> > On Mon, Oct 11, 2021 at 4:39 PM Mickael Maison <
>> mickael.mai...@gmail.com>
>> > wrote:
>> >
>> > > Hi David,
>> > >
>> > > You can add KIP-690 to the release plan. The vote passed months ago
>> > > and I merged the PR today.
>> > >
>> > > Thanks
>> > >
>> > > On Fri, Oct 8, 2021 at 8:32 AM David Jacot
>> 
>> > > wrote:
>> > > >
>> > > > Hi folks,
>> > > >
>> > > > Just a quick reminder that KIP Freeze is next Friday, October 15th.
>> > > >
>> > > > Cheers,
>> > > > David
>> > > >
>> > > > On Wed, Sep 29, 2021 at 3:52 PM Chris Egerton
>> > > 
>> > > > wrote:
>> > > >
>> > > > > Thanks David!
>> > > > >
>> > > > > On Wed, Sep 29, 2021 at 2:56 AM David Jacot
>> > > 
>> > > > > wrote:
>> > > > >
>> > > > > > Hi Chris,
>> > > > > >
>> > > > > > Sure thing. I have added KIP-618 to the release plan. Thanks for
>> > the
>> > > > > heads
>> > > > > > up.
>> > > > > >
>> > > > > > Best,
>> > > > > > David
>> > > > > >
>> > > > > > On Wed, Sep 29, 2021 at 8:53 AM David Jacot <
>> dja...@confluent.io>
>> > > wrote:
>> > > > > >
>> > > > > > > Hi Kirk,
>> > > > > > >
>> > > > > > > Yes, it is definitely possible if you can get the KIP voted
>> > before
>> > > the
>> > > > > > KIP
>> > > > > > > freeze
>> > > > > > > and the code committed before the feature freeze. Please, let
>> me
>> > > know
>> > > > > > when
>> > > > > > > the
>> > > > > > > KIP is voted and I will add it to the release plan.
>> > > > > > >
>> > > > > > > Thanks,
>> > > > > > > David
>> > > > > > >
>> > > > > > > On Tue, Sep 28, 2021 at 7:05 PM Chris Egerton
>> > > > > > 
>> > > > > > > wrote:
>> > > > > > >
>> > > > > > >> Hi David,
>> > > > > > >>
>> > > > > > >> Wondering if we can get KIP-618 included? The vote passed
>> months
>> > > ago
>> > > > > > and a
>> > > > > > >> PR has been available since mid-June.
>> > > > > > >>
>> > > > > > >> Cheers,
>> > > > > > >>
>> > > > > > >> Chris
>> > > > > > >>
>> > > > > > >> On Tue, Sep 28, 2021 at 12:53 PM Kirk True <
>> > k...@mustardgrain.com
>> > > >
>> > > > > > wrote:
>> > > > > > >>
>> > > > > > >> > Hi David,
>> > > > > > >> >
>> > > > > > >> > Is it possible to try to get KIP-768 in 3.1? I have put it
>> up
>> > > for a
>> > > > > > vote
>> > > > > > >> > and have much of it implemented already.
>> > > > > > >> >
>> > > > > > >> > Thanks,
>> > > > > > >> > Kirk
>> > > > > > >> >
>> > > > > > >> > On Tue, Sep 28, 2021, at 3:11 AM, Israel Ekpo wrote:
>> > > > > > >> > > Ok. Sounds good, David.
>> > > > > > >> > >
>> > > > > > >> > > Let’s forge ahead. The plan looks good.
>> > > > > > >> > >
>> > > > > > >> > > On Tue, Sep 28, 2021 at 4:02 AM David Jacot
>> > > > > > >> > > > > > > >> > >
>> > > > > > >> > > wrote:
>> > > > > > >> > >
>> > > > > > >> > > > Hi Israel,
>> > > > > > >> > > >
>> > > > > > >> > > > Yeah, 3.0 took quite a long time to be released.
>> However,
>> > I
>> > > > > think
>> > > > > > >> > > > that we should stick to our time based release.
>> > > > > > >> > > >
>> > > > > > >> > > > Best,
>> > > > > > >> > > > David
>> > > > > > >> > > >
>> > > > > > >> > > >
>> > > > > > >> > > > On Tue, Sep 28, 2021 at 9:59 AM David Jacot <
>> > > > > dja...@confluent.io>
>> > > > > > >> > wrote:
>> > > > > > >> > > >
>> > > > > > >> > > > > Hi Bruno,
>> > > > > > >> > > > >
>> > > > > > >> > > > > Thanks for the heads up. I have removed it from the
>> > plan.
>> > > > > > >> > > > >
>> > > > > > >> > > > > Best,
>> > > > > > >> > > > > David
>> > > > > > >> > > > >
>> > > > > > >> > > > > On Mon, Sep 27, 2021 at 11:04 AM Bruno Cadonna <
>> > > > > > >> cado...@apache.org>
>> > > > > > >> > > > wrote:
>> > > > > > >> > > > >
>> > > > > > >> > > > >> Hi David,
>> > > > > > >> > > > >>
>> > > > > > >> > > > >> Thank you for the plan!
>> > > > > > >> > > > >>
>> > > > > > >> > > > >> KIP-698 will not make it for 3.1.0. Could you please
>> > > remove
>> > > > > it
>> > > > > > >> from
>> > > > > > >> > the
>> > > > > > >> > > > >> plan?
>> > > > > > >> > > > >>
>> > > > > > >> > > > >> Best,
>> > > > > > >> > > > >> Bruno
>> > > > > > >> > > > >>
>> > > > > > >> > > > >> On 24.09.21 16:22, David Jacot wrote:
>> > > > > > >> > > > >> > Hi all,
>> > > > > > >> > > > >> >
>> > > > > > >> > > > >> > I just published a release plan here:
>> > > > > > >> > > > >> >
>> > > > > > >> >
>> > > > >
>> https://cwiki.apache.org/confluence/display/KAFKA/Release+Plan+3.1.0
>> > > > > > >> > > > >> >