[GitHub] [kafka-site] dongjinleekr opened a new pull request #380: KAFKA-13430: Remove broker-wide quota properties from the documentation

2021-11-03 Thread GitBox


dongjinleekr opened a new pull request #380:
URL: https://github.com/apache/kafka-site/pull/380


   A counterpart of [KAFKA-13430: Remove broker-wide quota properties from the 
documentation](https://github.com/apache/kafka/pull/11463).
   
   cc/ @dajac






Re: Request for permission to assign JIRA ticket (KAFKA-13403) to myself

2021-11-03 Thread Arun Mathew
Thanks Mickael.
--
With Regards,
Arun Mathew

On Thu, Oct 28, 2021 at 8:30 PM Mickael Maison 
wrote:

> Hi Arun,
>
> I granted you permissions.
> Thanks
>
> On Thu, Oct 28, 2021 at 1:01 PM Arun Mathew 
> wrote:
> >
> > Ah! It is arunmathew88.
> > Thank you.
> > --
> > With Regards,
> > Arun Mathew
> >
> > On Thu, Oct 28, 2021 at 2:28 PM Matthias J. Sax 
> wrote:
> >
> > > What is your user name?
> > >
> > > On 10/27/21 6:08 PM, Arun Mathew wrote:
> > > > Hi,
> > > >  Please give me the relevant permissions to take up tickets.
> > > > --
> > > > With Regards,
> > > > Arun Mathew
> > > >
> > >
>


Re: [DISCUSS] KIP-778 KRaft Upgrades

2021-11-03 Thread Colin McCabe
On Tue, Oct 12, 2021, at 10:34, Jun Rao wrote:
> Hi, David,
>
> One more comment.
>
> 16. The main reason why KIP-584 requires finalizing a feature manually is
> that in the ZK world, the controller doesn't know all brokers in a cluster.
> A broker temporarily down is not registered in ZK. In the KRaft world, the
> controller keeps track of all brokers, including those that are temporarily
> down. This makes it possible for the controller to automatically finalize a
> feature---it's safe to do so when all brokers support that feature. This
> will make the upgrade process much simpler since no manual command is
> required to turn on a new feature. Have we considered this?
>
> Thanks,
>
> Jun

Hi Jun,

I guess David commented on this point already, but I'll comment as well. I 
always had the perception that users viewed rolls as potentially risky and were 
looking for ways to reduce the risk. Not enabling features right away after 
installing new software seems like one way to do that. If we had a feature to 
automatically upgrade during a roll, I'm not sure that I would recommend that 
people use it, because if something fails, it makes it harder to tell if the 
new feature is at fault, or something else in the new software.

We already tell users to do a "double roll" when going to a new IBP. (Just to 
give background to people who haven't heard that phrase, the first roll 
installs the new software, and the second roll updates the IBP). So this 
KIP-778 mechanism is basically very similar to that, except the second thing 
isn't a roll, but just an upgrade command. So I think this is consistent with 
what we currently do.

Also, just like David said, we can always add auto-upgrade later if there is 
demand...

best,
Colin


>
> On Thu, Oct 7, 2021 at 5:19 PM Jun Rao  wrote:
>
>> Hi, David,
>>
>> Thanks for the KIP. A few comments below.
>>
>> 10. It would be useful to describe how the controller node determines the
>> RPC version used to communicate to other controller nodes. There seems to
>> be a bootstrap problem. A controller node can't read the log and
>> therefore the feature level until a quorum leader is elected. But leader
>> election requires an RPC.
>>
>> 11. For downgrades, it would be useful to describe how to determine the
>> downgrade process (generating new snapshot, propagating the snapshot, etc)
>> has completed. We could block the UpdateFeature request until the process
>> is completed. However, since the process could take time, the request could
>> time out. Another way is through DescribeFeature and the server only
>> reports downgraded versions after the process is completed.
>>
>> 12. Since we are changing UpdateFeaturesRequest, do we need to change the
>> AdminClient api for updateFeatures too?
>>
>> 13. For the paragraph starting with "In the absence of an operator
>> defined value for metadata.version", in KIP-584, we described how to
>> finalize features with New cluster bootstrap. In that case, it's
>> inconvenient for the users to have to run an admin tool to finalize the
>> version for each feature. Instead, the system detects that the /features
>> path is missing in ZK and thus automatically finalizes every feature with
>> the latest supported version. Could we do something similar in the KRaft
>> mode?
>>
>> 14. After the quorum leader generates a new snapshot, how do we force
>> other nodes to pick up the new snapshot?
>>
>> 15. I agree with Jose that it will be useful to describe when generating a
>> new snapshot is needed. To me, it seems the new snapshot is only needed
>> when incompatible changes are made.
>>
> >> 7. Jose, what control records were you referring to?
>>
>> Thanks,
>>
>> Jun
>>
>>
>> On Tue, Oct 5, 2021 at 8:53 AM David Arthur 
>> wrote:
>>
>>> Jose, thanks for the thorough review and comments!
>>>
>>> I am out of the office until next week, so I probably won't be able to
>>> update the KIP until then. Here are some replies to your questions:
>>>
>>> 1. Generate snapshot on upgrade
>>> > > Metadata snapshot is generated and sent to the other nodes
>>> > Why does the Active Controller need to generate a new snapshot and
>>> > force a snapshot fetch from the replicas (inactive controller and
>>> > brokers) on an upgrade? Isn't writing the FeatureLevelRecord good
>>> > enough to communicate the upgrade to the replicas?
>>>
>>>
>>> You're right, we don't necessarily need to _transmit_ a snapshot, since
>>> each node can generate its own equivalent snapshot
>>>
>>> 2. Generate snapshot on downgrade
>>> > > Metadata snapshot is generated and sent to the other inactive
>>> > controllers and to brokers (this snapshot may be lossy!)
>>> > Why do we need to send this downgraded snapshot to the brokers? The
>>> > replicas have seen the FeatureLevelRecord and noticed the downgrade.
>>> > Can we have the replicas each independently generate a downgraded
>>> > snapshot at the offset for the downgraded FeatureLevelRecord? I assume
>>> > that the active controller will 

Re: [DISCUSS] KIP-778 KRaft Upgrades

2021-11-03 Thread Colin McCabe
Hi David,

Thanks again for the KIP.

David Arthur wrote:
 > 101. Change AllowDowngrade bool to DowngradeType int8 in
 > UpgradeFeatureRequest RPC. I'm wondering if we can kind of "cheat" on this
 > incompatible change since it's not currently in use and totally remove the
 > old field and leave the versions at 0+. Thoughts?

We've been talking about having an automated RPC compatibility checker, and 
doing something like this might make that more complex. On the other hand, I 
suppose it could be annotated as a special case.

David Arthur wrote:
 > The active controller should probably validate whatever value is read from
 > meta.properties against its own range of supported versions (statically
 > defined in code). If the operator sets a version unsupported by the active
 > controller, that sounds like a configuration error and we should shutdown.
 > I'm not sure what other validation we could do here without introducing
 > ordering dependencies (e.g., must have quorum before initializing the
 > version)

It would be nice if the active controller could validate that a majority of the 
quorum could use the proposed metadata.version. The active controller should 
have this information, right? If we don't have recent information from a 
quorum of voters, we wouldn't be active.
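
For illustration, a minimal sketch of such a check (not KIP-778 code; the per-voter 
range map is an assumed input, e.g. derived from ApiVersions responses):

    import java.util.Map;

    final class QuorumVersionCheck {
        static final class Range {
            final short min, max;
            Range(final short min, final short max) { this.min = min; this.max = max; }
            boolean contains(final short v) { return v >= min && v <= max; }
        }

        // True if more than half of the voters report a supported range that
        // contains the proposed metadata.version.
        static boolean majoritySupports(final Map<Integer, Range> voterRanges,
                                        final short proposedVersion) {
            final long supporting = voterRanges.values().stream()
                .filter(r -> r.contains(proposedVersion))
                .count();
            return supporting > voterRanges.size() / 2;
        }
    }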

David Arthur wrote:
 > Hey folks, I just updated the KIP with details on proposed changes to the
 > kafka-features.sh tool. It includes four proposed sub-commands which will
 > provide the Basic and Advanced functions detailed in KIP-584. Please have a
 > look, thanks!

I like the new sub-commands... seems very clean.

Do we need delete as a command separate from downgrade? In KIP-584, we agreed
to keep version 0 reserved for "no such feature flag." So downgrading to
version 0 should be the same as deletion.
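
For reference, deletion-as-downgrade with the existing KIP-584 admin API would look 
roughly like the sketch below. The FeatureUpdate(short, boolean) constructor is the 
pre-KIP-778 signature and may change with this KIP:

    import java.util.Collections;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.Admin;
    import org.apache.kafka.clients.admin.FeatureUpdate;
    import org.apache.kafka.clients.admin.UpdateFeaturesOptions;

    public class DeleteFeatureSketch {
        public static void main(String[] args) throws Exception {
            Properties adminProps = new Properties();
            adminProps.put("bootstrap.servers", "localhost:9092");
            try (Admin admin = Admin.create(adminProps)) {
                // Downgrading a feature's max version level to 0 (with
                // allowDowngrade=true) amounts to deleting it under the
                // "version 0 is reserved" convention.
                Map<String, FeatureUpdate> updates = Collections.singletonMap(
                    "some.feature", new FeatureUpdate((short) 0, true));
                admin.updateFeatures(updates, new UpdateFeaturesOptions()).all().get();
            }
        }
    }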

On another note, it seems like we should spell out that metadata.version begins 
at 1 in KRaft clusters, and that it's always 0 in ZK-based clusters. Maybe this 
is obvious but it would be good to write it down here.

It seems like we never got closure on the issue of simplifying feature levels. 
As I said earlier, I think the min/max thing is not needed for 99% of the 
use-cases we've talked about for feature levels. Certainly, it's not needed for 
metadata.version. Having it be mandatory for every feature level, whether it 
needs it or not, is an extra complication that I think we should get rid of, 
before this interface becomes set in stone. (Use-cases that need this can 
simply have two feature flags, X_min and X_max, as I said before.)

We probably also want an RPC implemented by both brokers and controllers that 
will reveal the min and max supported versions for each feature level supported 
by the server. This is useful for diagnostics. And I suppose we should have a 
command line option to the features command that broadcasts it to everyone and 
returns the results (or lack of result, if the connection couldn't be made...)

best,
Colin


Re: [DISCUSS] KIP-793: Sink Connectors: Support topic-mutating SMTs for async connectors (preCommit users)

2021-11-03 Thread Chris Egerton
Hi Diego,

This is a long time coming and I'm glad to see someone's finally gotten
around to filling in this feature gap for Connect.

It looks like this KIP does not take the SinkTask::open and SinkTask::close
methods into account (
https://kafka.apache.org/30/javadoc/org/apache/kafka/connect/sink/SinkTask.html#open(java.util.Collection)
/
https://kafka.apache.org/30/javadoc/org/apache/kafka/connect/sink/SinkTask.html#close(java.util.Collection)).
Is this intentional? If so, it'd be nice to see a rationale for leaving
this out in the rejected alternatives section; if not, I think we may want to
add this type of support to the KIP so that we can solve the mutating
SMT/asynchronous sink connector problem once and for all, instead of
narrowing but not closing the existing feature gap. We may want to take the
current effort to add support for cooperative consumer groups (
https://issues.apache.org/jira/browse/KAFKA-12487 /
https://github.com/apache/kafka/pull/10563) into account if we opt to add
support for open/close, since the current behavior of Connect (which
involves invoking SinkTask::close for every topic partition every time a
consumer rebalance occurs, then invoking SinkTask::open for all
still-assigned partitions) may be easier to reason about, but is likely
going to change soon (although holding off on that work if this KIP is given
priority is definitely a valid option).

It also looks like we're only exposing the original topic partition to
connector developers. I agree with the rationale for not exposing more of
the original consumer record for the most part, but what about the record's
offset? Although it's not possible to override the Kafka offset for a sink
record via the standard SinkRecord::newRecord methods (
https://kafka.apache.org/30/javadoc/org/apache/kafka/connect/sink/SinkRecord.html#newRecord(java.lang.String,java.lang.Integer,org.apache.kafka.connect.data.Schema,java.lang.Object,org.apache.kafka.connect.data.Schema,java.lang.Object,java.lang.Long)
/
https://kafka.apache.org/30/javadoc/org/apache/kafka/connect/sink/SinkRecord.html#newRecord(java.lang.String,java.lang.Integer,org.apache.kafka.connect.data.Schema,java.lang.Object,org.apache.kafka.connect.data.Schema,java.lang.Object,java.lang.Long,java.lang.Iterable)),
there are still public constructors available for the SinkRecord class that
can be leveraged by SMTs to return new SinkRecord instances that don't have
the same Kafka offset as the one that they've mutated. Do you think it may
be worth the additional maintenance burden and API complexity to
accommodate this case, with something like a SinkRecord::originalKafkaOffset
method?

I'm also wondering about how exactly this method will be implemented. Will
we automatically create a new SinkRecord instance at the end of the
transformation chain in order to provide the correct topic partition (and
possibly offset)? If so, this should be called out since it means that
transformations that return custom subclasses of SinkRecord will no longer
be able to do so (or rather, they will still be able to, but these custom
subclasses will never be visible to sink tasks).

Finally, a small nit: do you think it'd make sense to separate out the
newly-proposed SinkRecord::originalTopicPartition method into separate
SinkRecord::originalTopic and SinkRecord::originalKafkaPartition methods, to
stay in line with the convention that's been loosely set by the existing,
separate SinkRecord::topic and SinkRecord::kafkaPartition methods?
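
For illustration, a sketch of how an asynchronous sink task could use such
accessors. Note that originalTopic()/originalKafkaPartition() are hypothetical
names from this discussion and do not exist in the current API:

    import java.util.Collection;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.OffsetAndMetadata;
    import org.apache.kafka.common.TopicPartition;
    import org.apache.kafka.connect.sink.SinkRecord;
    import org.apache.kafka.connect.sink.SinkTask;

    public abstract class AsyncSinkTaskSketch extends SinkTask {
        private final Map<TopicPartition, OffsetAndMetadata> pending = new HashMap<>();

        @Override
        public void put(final Collection<SinkRecord> records) {
            for (final SinkRecord record : records) {
                writeAsync(record); // deliver using the (possibly transformed) topic
                // Track offsets against the ORIGINAL topic partition, which is what
                // the framework passed to open() and expects back from preCommit().
                final TopicPartition tp = new TopicPartition(
                    record.originalTopic(), record.originalKafkaPartition()); // hypothetical
                pending.put(tp, new OffsetAndMetadata(record.kafkaOffset() + 1));
            }
        }

        @Override
        public Map<TopicPartition, OffsetAndMetadata> preCommit(
                final Map<TopicPartition, OffsetAndMetadata> currentOffsets) {
            // A real task would only report offsets whose writes have completed.
            return new HashMap<>(pending);
        }

        protected abstract void writeAsync(SinkRecord record);
    }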

I'm personally looking forward to leveraging this improvement in the
BigQuery sink connector I help maintain because we recently added a new
write mode that uses asynchronous writes and SinkTask::preCommit, but
encourage users to use SMTs to redirect records to different
datasets/tables in BigQuery, which is currently impossible in that write
mode. Thanks for taking this on!

Cheers,

Chris

On Wed, Nov 3, 2021 at 6:17 PM Diego Erdody  wrote:

> Hello,
>
> I'd like to propose a small KIP to add a new field to SinkRecord in order
> to add support for topic-mutating SMTs (e.g. RegexRouter) to asynchronous
> Sink Connectors (the ones that override preCommit for internal offset
> tracking, like S3
> <
> https://github.com/confluentinc/kafka-connect-storage-cloud/blob/master/kafka-connect-s3/src/main/java/io/confluent/connect/s3/S3SinkTask.java#L274
> >
> ).
>
> Links:
>
> - KIP-793: Sink Connectors: Support topic-mutating SMTs for async
> connectors (preCommit users)
> 
> - PR #11464 
>
> Thanks,
>
> Diego
>


[DISCUSS] KIP-793: Sink Connectors: Support topic-mutating SMTs for async connectors (preCommit users)

2021-11-03 Thread Diego Erdody
Hello,

I'd like to propose a small KIP to add a new field to SinkRecord in order
to add support for topic-mutating SMTs (e.g. RegexRouter) to asynchronous
Sink Connectors (the ones that override preCommit for internal offset
tracking, like S3

).

Links:

- KIP-793: Sink Connectors: Support topic-mutating SMTs for async
connectors (preCommit users)

- PR #11464 

Thanks,

Diego


[jira] [Created] (KAFKA-13432) ApiException should provide a way to capture stacktrace

2021-11-03 Thread Vikas Singh (Jira)
Vikas Singh created KAFKA-13432:
---

 Summary: ApiException should provide a way to capture stacktrace
 Key: KAFKA-13432
 URL: https://issues.apache.org/jira/browse/KAFKA-13432
 Project: Kafka
  Issue Type: Improvement
  Components: core
Reporter: Vikas Singh


ApiException doesn't fill in the stacktrace, it overrides `fillInStacktrace` to 
make it a no-op, here is the code: 
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/common/errors/ApiException.java#L45,L49

However, there are times when the full stacktrace will be helpful in finding out 
what went wrong on the client side. We should provide a way to make this 
behavior configurable, so that if an error is hit multiple times, we can switch 
the behavior and find out what code is causing it.
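
For reference, the current override is an unconditional no-op. Below is a minimal
sketch of the configurable behavior this ticket asks for; the system property name
is a made-up example, not an existing Kafka flag:

{code:java}
import org.apache.kafka.common.KafkaException;

// Sketch of a configurable variant of ApiException (illustration only).
public class ApiException extends KafkaException {
    private static final boolean CAPTURE_STACKTRACE =
        Boolean.getBoolean("org.apache.kafka.common.errors.capture.stacktrace");

    public ApiException(String message) {
        super(message);
    }

    // Today this method unconditionally returns `this` (a no-op, for
    // performance); this sketch restores default capture when the flag is set.
    @Override
    public synchronized Throwable fillInStackTrace() {
        return CAPTURE_STACKTRACE ? super.fillInStackTrace() : this;
    }
}
{code}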





Re: [DISCUSS] KIP-791: Add Record Metadata to State Store Context

2021-11-03 Thread Guozhang Wang
Thanks Patrick,

I looked at the KIP and it looks good to me overall. I think we need to
double check whether the record metadata reflects the "last processed
record" or the "currently processed record", where the latter may not have
been completely processed yet. In `ProcessorContext#recordMetadata` it returns
the latter, but that may not be the preferred choice if you want to build
consistency reasoning on top of it.
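
For reference, a minimal sketch of reading that metadata via the new processor
API, assuming KIP-791 mirrors this accessor on StateStoreContext:

    import java.util.Optional;
    import org.apache.kafka.streams.processor.api.ProcessorContext;
    import org.apache.kafka.streams.processor.api.RecordMetadata;

    final class RecordMetadataSketch {
        // Logs the position of the record currently being processed. Per the
        // concern above, this reflects the in-flight record, which may not
        // have been completely processed yet.
        static void logPosition(final ProcessorContext<?, ?> context) {
            final Optional<RecordMetadata> metadata = context.recordMetadata();
            metadata.ifPresent(m -> System.out.printf(
                "topic=%s partition=%d offset=%d%n",
                m.topic(), m.partition(), m.offset()));
        }
    }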

Otherwise, LGTM.


Guozhang

On Wed, Nov 3, 2021 at 1:44 PM Patrick Stuedi 
wrote:

> Hi everyone,
>
> I would like to start the discussion for KIP-791: Add Record Metadata to
> State Store Context.
>
> The KIP can be found here:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-791:+Add+Record+Metadata+to+State+Store+Context
>
> Any feedback will be highly appreciated.
>
> Many thanks,
>  Patrick
>


-- 
-- Guozhang


[DISCUSS] KIP-791: Add Record Metadata to State Store Context

2021-11-03 Thread Patrick Stuedi
Hi everyone,

I would like to start the discussion for KIP-791: Add Record Metadata to
State Store Context.

The KIP can be found here:
https://cwiki.apache.org/confluence/display/KAFKA/KIP-791:+Add+Record+Metadata+to+State+Store+Context

Any feedback will be highly appreciated.

Many thanks,
 Patrick


Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.1 #2

2021-11-03 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 497588 lines...]
[2021-11-03T20:20:35.219Z] [INFO] --- maven-site-plugin:3.5.1:attach-descriptor 
(attach-descriptor) @ streams-quickstart ---
[2021-11-03T20:20:35.992Z] 
[2021-11-03T20:20:35.992Z] LogCleanerParameterizedIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic(CompressionType) > 
kafka.log.LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic(CompressionType)[2]
 PASSED
[2021-11-03T20:20:35.992Z] 
[2021-11-03T20:20:35.992Z] LogCleanerParameterizedIntegrationTest > 
testCleansCombinedCompactAndDeleteTopic(CompressionType) > 
kafka.log.LogCleanerParameterizedIntegrationTest.testCleansCombinedCompactAndDeleteTopic(CompressionType)[3]
 STARTED
[2021-11-03T20:20:36.170Z] [INFO] 
[2021-11-03T20:20:36.170Z] [INFO] --- maven-gpg-plugin:1.6:sign 
(sign-artifacts) @ streams-quickstart ---
[2021-11-03T20:20:36.170Z] [INFO] 
[2021-11-03T20:20:36.170Z] [INFO] --- maven-install-plugin:2.5.2:install 
(default-install) @ streams-quickstart ---
[2021-11-03T20:20:36.170Z] [INFO] Installing 
/home/jenkins/workspace/Kafka_kafka_3.1/streams/quickstart/pom.xml to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart/3.1.0-SNAPSHOT/streams-quickstart-3.1.0-SNAPSHOT.pom
[2021-11-03T20:20:36.170Z] [INFO] 
[2021-11-03T20:20:36.170Z] [INFO] --< 
org.apache.kafka:streams-quickstart-java >--
[2021-11-03T20:20:36.170Z] [INFO] Building streams-quickstart-java 
3.1.0-SNAPSHOT[2/2]
[2021-11-03T20:20:36.170Z] [INFO] --[ maven-archetype 
]---
[2021-11-03T20:20:36.170Z] [INFO] 
[2021-11-03T20:20:36.170Z] [INFO] --- maven-clean-plugin:3.0.0:clean 
(default-clean) @ streams-quickstart-java ---
[2021-11-03T20:20:36.170Z] [INFO] 
[2021-11-03T20:20:36.170Z] [INFO] --- maven-remote-resources-plugin:1.5:process 
(process-resource-bundles) @ streams-quickstart-java ---
[2021-11-03T20:20:36.170Z] [INFO] 
[2021-11-03T20:20:36.170Z] [INFO] --- maven-resources-plugin:2.7:resources 
(default-resources) @ streams-quickstart-java ---
[2021-11-03T20:20:37.118Z] [INFO] Using 'UTF-8' encoding to copy filtered 
resources.
[2021-11-03T20:20:37.118Z] [INFO] Copying 6 resources
[2021-11-03T20:20:37.118Z] [INFO] Copying 3 resources
[2021-11-03T20:20:37.118Z] [INFO] 
[2021-11-03T20:20:37.118Z] [INFO] --- maven-resources-plugin:2.7:testResources 
(default-testResources) @ streams-quickstart-java ---
[2021-11-03T20:20:37.118Z] [INFO] Using 'UTF-8' encoding to copy filtered 
resources.
[2021-11-03T20:20:37.118Z] [INFO] Copying 2 resources
[2021-11-03T20:20:37.118Z] [INFO] Copying 3 resources
[2021-11-03T20:20:37.118Z] [INFO] 
[2021-11-03T20:20:37.118Z] [INFO] --- maven-archetype-plugin:2.2:jar 
(default-jar) @ streams-quickstart-java ---
[2021-11-03T20:20:37.118Z] [INFO] Building archetype jar: 
/home/jenkins/workspace/Kafka_kafka_3.1/streams/quickstart/java/target/streams-quickstart-java-3.1.0-SNAPSHOT
[2021-11-03T20:20:37.118Z] [INFO] 
[2021-11-03T20:20:37.118Z] [INFO] --- maven-site-plugin:3.5.1:attach-descriptor 
(attach-descriptor) @ streams-quickstart-java ---
[2021-11-03T20:20:37.118Z] [INFO] 
[2021-11-03T20:20:37.118Z] [INFO] --- 
maven-archetype-plugin:2.2:integration-test (default-integration-test) @ 
streams-quickstart-java ---
[2021-11-03T20:20:37.118Z] [INFO] 
[2021-11-03T20:20:37.118Z] [INFO] --- maven-gpg-plugin:1.6:sign 
(sign-artifacts) @ streams-quickstart-java ---
[2021-11-03T20:20:37.118Z] [INFO] 
[2021-11-03T20:20:37.118Z] [INFO] --- maven-install-plugin:2.5.2:install 
(default-install) @ streams-quickstart-java ---
[2021-11-03T20:20:37.118Z] [INFO] Installing 
/home/jenkins/workspace/Kafka_kafka_3.1/streams/quickstart/java/target/streams-quickstart-java-3.1.0-SNAPSHOT.jar
 to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart-java/3.1.0-SNAPSHOT/streams-quickstart-java-3.1.0-SNAPSHOT.jar
[2021-11-03T20:20:37.118Z] [INFO] Installing 
/home/jenkins/workspace/Kafka_kafka_3.1/streams/quickstart/java/pom.xml to 
/home/jenkins/.m2/repository/org/apache/kafka/streams-quickstart-java/3.1.0-SNAPSHOT/streams-quickstart-java-3.1.0-SNAPSHOT.pom
[2021-11-03T20:20:37.118Z] [INFO] 
[2021-11-03T20:20:37.118Z] [INFO] --- 
maven-archetype-plugin:2.2:update-local-catalog (default-update-local-catalog) 
@ streams-quickstart-java ---
[2021-11-03T20:20:37.641Z] [INFO] 

[2021-11-03T20:20:37.641Z] [INFO] Reactor Summary for Kafka Streams :: 
Quickstart 3.1.0-SNAPSHOT:
[2021-11-03T20:20:37.641Z] [INFO] 
[2021-11-03T20:20:37.641Z] [INFO] Kafka Streams :: Quickstart 
 SUCCESS [  2.549 s]
[2021-11-03T20:20:37.641Z] [INFO] streams-quickstart-java 
 SUCCESS [  1.282 s]
[2021-11-03T20:20:37.641Z] [INFO] 

Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #551

2021-11-03 Thread Apache Jenkins Server
See 




Re: [DISCUSS] KIP-778 KRaft Upgrades

2021-11-03 Thread David Arthur
Hey folks, I just updated the KIP with details on proposed changes to the
kafka-features.sh tool. It includes four proposed sub-commands which will
provide the Basic and Advanced functions detailed in KIP-584. Please have a
look, thanks!
https://cwiki.apache.org/confluence/display/KAFKA/KIP-778%3A+KRaft+Upgrades#KIP778:KRaftUpgrades-KIP-584Addendum

Aside from this change, if there isn't any more feedback on the KIP I'd
like to start a vote soon.

Cheers,
David

On Thu, Oct 21, 2021 at 3:09 AM Kowshik Prakasam
 wrote:

> Hi David,
>
> Thanks for the explanations. Few comments below.
>
> 7001. Sounds good.
>
> 7002. Sounds good. The --force-downgrade-all option can be used for the
> basic CLI while the --force-downgrade option can be used for the advanced
> CLI.
>
> 7003. I like your suggestion on separate sub-commands, I agree it's more
> convenient to use.
>
> 7004/7005. Your explanation sounds good to me. Regarding the min finalized
> version level, this becomes useful for feature version deprecation as
> explained here:
>
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-584%3A+Versioning+scheme+for+features#KIP584:Versioningschemeforfeatures-Featureversiondeprecation
> . This is not implemented yet, and the work item is tracked in KAFKA-10622.
>
>
> Cheers,
> Kowshik
>
>
>
> On Fri, Oct 15, 2021 at 11:38 AM David Arthur  wrote:
>
> > >
> > > How does the active controller know what is a valid `metadata.version`
> > > to persist? Could the active controller learn this from the
> > > ApiVersions response from all of the inactive controllers?
> >
> >
> > The active controller should probably validate whatever value is read
> from
> > meta.properties against its own range of supported versions (statically
> > defined in code). If the operator sets a version unsupported by the
> active
> > controller, that sounds like a configuration error and we should
> shutdown.
> > I'm not sure what other validation we could do here without introducing
> > ordering dependencies (e.g., must have quorum before initializing the
> > version)
> >
> > For example, let's say that we have a cluster that only has remote
> > > controllers, what are the valid metadata.version in that case?
> >
> >
> > I believe it would be the intersection of supported versions across all
> > brokers and controllers. This does raise a concern with upgrading the
> > metadata.version in general. Currently, the active controller only
> > validates the target version based on the brokers' support versions. We
> > will need to include controllers supported versions here as well (using
> > ApiVersions, probably).
> >
> > On Fri, Oct 15, 2021 at 1:44 PM José Armando García Sancio
> >  wrote:
> >
> > > On Fri, Oct 15, 2021 at 7:24 AM David Arthur  wrote:
> > > > Hmm. So I think you are proposing the following flow:
> > > > > 1. Cluster metadata partition replicas establish a quorum using
> > > > > ApiVersions and the KRaft protocol.
> > > > > 2. Inactive controllers send a registration RPC to the active
> > > controller.
> > > > > 3. The active controller persists this information to the metadata
> > log.
> > > >
> > > >
> > > > What happens if the inactive controllers send a metadata.version
> range
> > > > > that is not compatible with the metadata.version set for the
> cluster?
> > > >
> > > >
> > > > As we discussed offline, we don't need the explicit registration
> step.
> > > Once
> > > > a controller has joined the quorum, it will learn about the finalized
> > > > "metadata.version" level once it reads that record.
> > >
> > > How does the active controller know what is a valid `metadata.version`
> > > to persist? Could the active controller learn this from the
> > > ApiVersions response from all of the inactive controllers? For
> > > example, let's say that we have a cluster that only has remote
> > > controllers, what are the valid metadata.version in that case?
> > >
> > > > If it encounters a
> > > > version it can't support it should probably shutdown since it might
> not
> > > be
> > > > able to process any more records.
> > >
> > > I think that makes sense. If a controller cannot replay the metadata
> > > log, it might as well not be part of the quorum. If the cluster
> > > continues in this state it won't guarantee availability based on the
> > > replication factor.
> > >
> > > Thanks
> > > --
> > > -Jose
> > >
> >
> >
> > --
> > David Arthur
> >
>


-- 
David Arthur
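
To illustrate the "intersection of supported versions across all brokers and
controllers" point above, a minimal sketch (assumed inputs, not actual
controller code; the ranges would come from ApiVersions responses):

    import java.util.Collection;
    import java.util.Optional;

    final class VersionRangeIntersection {
        static final class Range {
            final short min, max;
            Range(final short min, final short max) { this.min = min; this.max = max; }
        }

        // Intersects the supported [min, max] ranges reported by all brokers and
        // controllers; an empty result means no common metadata.version exists.
        static Optional<Range> intersect(final Collection<Range> supported) {
            short lo = Short.MIN_VALUE;
            short hi = Short.MAX_VALUE;
            for (final Range r : supported) {
                lo = (short) Math.max(lo, r.min);
                hi = (short) Math.min(hi, r.max);
            }
            return lo <= hi ? Optional.of(new Range(lo, hi)) : Optional.empty();
        }
    }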


[jira] [Created] (KAFKA-13431) Sink Connectors: Support topic-mutating SMTs for async connectors (preCommit users)

2021-11-03 Thread Diego Erdody (Jira)
Diego Erdody created KAFKA-13431:


 Summary: Sink Connectors: Support topic-mutating SMTs for async 
connectors (preCommit users)
 Key: KAFKA-13431
 URL: https://issues.apache.org/jira/browse/KAFKA-13431
 Project: Kafka
  Issue Type: Improvement
  Components: KafkaConnect
Reporter: Diego Erdody
Assignee: Diego Erdody


There's currently an incompatibility between Sink connectors overriding the 
{{SinkTask.preCommit}} method (for asynchronous processing) and SMTs that 
mutate the topic field.

The problem has been present since the {{preCommit}} method's inception and is rooted 
in a mismatch between the topic/partition that is passed to {{open/preCommit}} 
(the original topic and partition, before applying any transformations) and the 
topic partition that is present in the SinkRecord that the {{SinkTask.put}} 
method receives (after transformations are applied). Since that is all the 
information the connector has to implement any kind of internal offset 
tracking, the topic/partitions it can return in preCommit will correspond to 
the transformed topic, while the framework actually expects the 
original topic.
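
A minimal sketch of the mismatch described above; the topic names are made up
for illustration:

{code:java}
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.connect.sink.SinkRecord;

public class TopicMutationMismatchSketch {
    public static void main(String[] args) {
        SinkRecord original = new SinkRecord("events", 0, null, "key", null, "value", 42L);
        // A topic-mutating SMT such as RegexRouter effectively does this:
        SinkRecord transformed = original.newRecord("events-routed", 0,
            original.keySchema(), original.key(),
            original.valueSchema(), original.value(), original.timestamp());

        // The framework calls open()/preCommit() with the ORIGINAL partition...
        TopicPartition opened = new TopicPartition("events", 0);
        // ...but put() only sees the transformed record, so offset tracking keyed
        // by transformed.topic() can never line up with what preCommit expects.
        System.out.println(opened + " vs " + transformed.topic() + "-" + transformed.kafkaPartition());
    }
}
{code}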





Re: Wiki Permissions

2021-11-03 Thread Guozhang Wang
Hello Diego,

I saw your id has already been in the contributors list.

Cheers,
Guozhang

On Wed, Nov 3, 2021 at 10:02 AM Diego Erdody  wrote:

> Hello,
>
> Can I please have "permissions to contribute to Apache Kafka".
> Context: propose a new KIP.
> User (both jira and wiki): erdody.
> Thanks!
>
> Diego
>


-- 
-- Guozhang


Re: Wiki Permissions

2021-11-03 Thread Bill Bejeck
Hi Diego,

You're set up in both now.
Thanks for your interest in Apache Kafka.

-Bill

On Wed, Nov 3, 2021 at 1:09 PM Diego Erdody  wrote:

> Hello,
>
> Can I please have "permissions to contribute to Apache Kafka".
> Context: propose a new KIP.
> User (both jira and wiki): erdody.
> Thanks!
>
> Diego
>


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #550

2021-11-03 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 494815 lines...]
[2021-11-03T17:02:09.240Z] SaslSslAdminIntegrationTest > 
testLegacyAclOpsNeverAffectOrReturnPrefixed() PASSED
[2021-11-03T17:02:09.240Z] 
[2021-11-03T17:02:09.240Z] SaslSslAdminIntegrationTest > 
testCreateTopicsResponseMetadataAndConfig() STARTED
[2021-11-03T17:02:44.167Z] 
[2021-11-03T17:02:44.167Z] SaslSslAdminIntegrationTest > 
testCreateTopicsResponseMetadataAndConfig() PASSED
[2021-11-03T17:02:44.167Z] 
[2021-11-03T17:02:44.167Z] SaslSslAdminIntegrationTest > 
testAttemptToCreateInvalidAcls() STARTED
[2021-11-03T17:03:23.141Z] 
[2021-11-03T17:03:23.141Z] SaslSslAdminIntegrationTest > 
testAttemptToCreateInvalidAcls() PASSED
[2021-11-03T17:03:23.141Z] 
[2021-11-03T17:03:23.141Z] SaslSslAdminIntegrationTest > 
testAclAuthorizationDenied() STARTED
[2021-11-03T17:04:10.254Z] 
[2021-11-03T17:04:10.254Z] SaslSslAdminIntegrationTest > 
testAclAuthorizationDenied() PASSED
[2021-11-03T17:04:10.254Z] 
[2021-11-03T17:04:10.254Z] SaslSslAdminIntegrationTest > testAclOperations() 
STARTED
[2021-11-03T17:04:56.050Z] 
[2021-11-03T17:04:56.050Z] SaslSslAdminIntegrationTest > testAclOperations() 
PASSED
[2021-11-03T17:04:56.050Z] 
[2021-11-03T17:04:56.050Z] SaslSslAdminIntegrationTest > testAclOperations2() 
STARTED
[2021-11-03T17:05:36.230Z] 
[2021-11-03T17:05:36.230Z] SaslSslAdminIntegrationTest > testAclOperations2() 
PASSED
[2021-11-03T17:05:36.230Z] 
[2021-11-03T17:05:36.230Z] SaslSslAdminIntegrationTest > testAclDelete() STARTED
[2021-11-03T17:06:29.659Z] 
[2021-11-03T17:06:29.659Z] SaslSslAdminIntegrationTest > testAclDelete() PASSED
[2021-11-03T17:06:29.659Z] 
[2021-11-03T17:06:29.659Z] TransactionsTest > testBumpTransactionalEpoch() 
STARTED
[2021-11-03T17:06:46.888Z] 
[2021-11-03T17:06:46.888Z] TransactionsTest > testBumpTransactionalEpoch() 
PASSED
[2021-11-03T17:06:46.888Z] 
[2021-11-03T17:06:46.888Z] TransactionsTest > 
testSendOffsetsWithGroupMetadata() STARTED
[2021-11-03T17:07:00.152Z] 
[2021-11-03T17:07:00.152Z] TransactionsTest > 
testSendOffsetsWithGroupMetadata() PASSED
[2021-11-03T17:07:00.152Z] 
[2021-11-03T17:07:00.152Z] TransactionsTest > testBasicTransactions() STARTED
[2021-11-03T17:07:10.584Z] 
[2021-11-03T17:07:10.584Z] TransactionsTest > testBasicTransactions() PASSED
[2021-11-03T17:07:10.584Z] 
[2021-11-03T17:07:10.584Z] TransactionsTest > testSendOffsetsWithGroupId() 
STARTED
[2021-11-03T17:07:22.980Z] 
[2021-11-03T17:07:22.980Z] TransactionsTest > testSendOffsetsWithGroupId() 
PASSED
[2021-11-03T17:07:22.980Z] 
[2021-11-03T17:07:22.980Z] TransactionsTest > testFencingOnSendOffsets() STARTED
[2021-11-03T17:07:33.210Z] 
[2021-11-03T17:07:33.210Z] TransactionsTest > testFencingOnSendOffsets() PASSED
[2021-11-03T17:07:33.210Z] 
[2021-11-03T17:07:33.210Z] TransactionsTest > testFencingOnAddPartitions() 
STARTED
[2021-11-03T17:07:44.532Z] 
[2021-11-03T17:07:44.532Z] TransactionsTest > testFencingOnAddPartitions() 
PASSED
[2021-11-03T17:07:44.532Z] 
[2021-11-03T17:07:44.532Z] TransactionsTest > 
testFencingOnTransactionExpiration() STARTED
[2021-11-03T17:07:55.735Z] 
[2021-11-03T17:07:55.735Z] TransactionsTest > 
testFencingOnTransactionExpiration() PASSED
[2021-11-03T17:07:55.735Z] 
[2021-11-03T17:07:55.735Z] TransactionsTest > 
testDelayedFetchIncludesAbortedTransaction() STARTED
[2021-11-03T17:08:02.159Z] 
[2021-11-03T17:08:02.159Z] TransactionsTest > 
testDelayedFetchIncludesAbortedTransaction() PASSED
[2021-11-03T17:08:02.159Z] 
[2021-11-03T17:08:02.159Z] TransactionsTest > 
testOffsetMetadataInSendOffsetsToTransaction() STARTED
[2021-11-03T17:08:10.869Z] 
[2021-11-03T17:08:10.869Z] TransactionsTest > 
testOffsetMetadataInSendOffsetsToTransaction() PASSED
[2021-11-03T17:08:10.869Z] 
[2021-11-03T17:08:10.869Z] TransactionsTest > testInitTransactionsTimeout() 
STARTED
[2021-11-03T17:08:23.929Z] 
[2021-11-03T17:08:23.929Z] TransactionsTest > testInitTransactionsTimeout() 
PASSED
[2021-11-03T17:08:23.929Z] 
[2021-11-03T17:08:23.929Z] TransactionsTest > 
testConsecutivelyRunInitTransactions() STARTED
[2021-11-03T17:08:27.514Z] 
[2021-11-03T17:08:27.514Z] TransactionsTest > 
testConsecutivelyRunInitTransactions() PASSED
[2021-11-03T17:08:27.514Z] 
[2021-11-03T17:08:27.514Z] TransactionsTest > 
testReadCommittedConsumerShouldNotSeeUndecidedData() STARTED
[2021-11-03T17:08:38.283Z] 
[2021-11-03T17:08:38.283Z] TransactionsTest > 
testReadCommittedConsumerShouldNotSeeUndecidedData() PASSED
[2021-11-03T17:08:38.283Z] 
[2021-11-03T17:08:38.283Z] TransactionsTest > 
testSendOffsetsToTransactionTimeout() STARTED
[2021-11-03T17:08:45.474Z] 
kafka.api.TransactionsTest.testSendOffsetsToTransactionTimeout() failed, log 
available in 
/home/jenkins/jenkins-agent/workspace/Kafka_kafka_trunk/core/build/reports/testOutput/kafka.api.TransactionsTest.testSendOffsetsToTransactionTimeout().test.stdout
[2021-11-03T17:08:45.474Z] 

Wiki Permissions

2021-11-03 Thread Diego Erdody
Hello,

Can I please have "permissions to contribute to Apache Kafka".
Context: propose a new KIP.
User (both jira and wiki): erdody.
Thanks!

Diego


Re: [DISCUSS] KIP-786: Emit Metric Client Quota Values

2021-11-03 Thread Mickael Maison
Hi Mason,

Thanks for the KIP. I think it's a good idea to also emit quota limits
as metrics. It certainly simplifies monitoring/graphing if all the
data come from the same source.

The KIP looks good overall, just a couple of questions:
- Have you considered enabling the new metrics by default?
- If you prefer keeping a configuration to enable them, what about
renaming it to "client.quota.value.metric.enable" or even
"quota.value.metric.enable"?

Thanks,
Mickael

On Wed, Oct 27, 2021 at 11:36 PM Mason Legere
 wrote:
>
> Hi All,
>
> Haven't received any feedback on this yet, but as it was a small change I
> have made a PR showing the functional components: pull request
> 
> Will update the related documentation outlining the new metric attributes
> in a bit.
>
> Best,
> Mason Legere
>
> On Sat, Oct 23, 2021 at 4:00 PM Mason Legere 
> wrote:
>
> > Hi All,
> >
> > I would like to start a discussion for my proposed KIP-786
> > 
> >  which
> > aims to allow client quota values to be emitted as a standard jmx MBean
> > attribute - if enabled in the static broker configuration.
> >
> > Please note that I originally misnumbered this KIP and am re-creating this
> > discussion thread for clarity. The original thread can be found at: Original
> > Email Thread
> > 
> >
> > Best,
> > Mason Legere
> >


[jira] [Created] (KAFKA-13430) Remove broker-wide quota properties from the documentation

2021-11-03 Thread Dongjin Lee (Jira)
Dongjin Lee created KAFKA-13430:
---

 Summary: Remove broker-wide quota properties from the documentation
 Key: KAFKA-13430
 URL: https://issues.apache.org/jira/browse/KAFKA-13430
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Dongjin Lee
Assignee: Dongjin Lee


I found this problem while working on 
[KAFKA-13341|https://issues.apache.org/jira/browse/KAFKA-13341].

Broker-wide quota properties ({{quota.producer.default}}, 
{{quota.consumer.default}}) were removed in 3.0, but the documentation has not 
been updated to reflect this yet.





Re: [DISCUSS] KIP-786: Use localhost:9092 as default bootstrap-server/broker-list in client tools

2021-11-03 Thread Mickael Maison
Hi,

I suggest also starting a new DISCUSS thread on the mailing list to
avoid any confusion.

Thanks

On Tue, Nov 2, 2021 at 3:24 AM deng ziming  wrote:
>
> Thank you Maison
>
> I have updated the KIP number to KIP-789; the URL has been updated to:
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=191335433 
> 
>
> Thanks
>
> Deng Ziming
>
> > On Oct 31, 2021, at 10:39 PM, Mickael Maison  
> > wrote:
> >
> > Hi,
> >
> > Can you adjust the KIP number as there is already another KIP using 786?
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-786%3A+Emit+Metric+Client+Quota+Values
> >
> > Thanks
> >
> > On Mon, Oct 25, 2021 at 2:33 PM deng ziming  
> > wrote:
> >>
> >> Hey all,
> >> I’d like to start the discussion for my proposal, KIP-786: Use localhost:9092 
> >> as default bootstrap-server/broker-list in client tools.
> >>
> >> After this KIP, users can use client tools such as 
> >> kafka-console-consumer.sh without specifying a bootstrap server.
> >>
> >> Detailed information can be found here:
> >> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=191335433 
> >> 
> >>
> >> Any comments and feedback are welcome.
> >>
> >> Thank you.
> >> Deng Ziming.
>


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #549

2021-11-03 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 497019 lines...]
[2021-11-03T13:30:37.078Z] 
[2021-11-03T13:30:37.078Z] PlaintextConsumerTest > 
testConsumingWithNullGroupId() STARTED
[2021-11-03T13:30:37.078Z] 
[2021-11-03T13:30:37.078Z] PlaintextConsumerTest > 
testConsumingWithNullGroupId() PASSED
[2021-11-03T13:30:37.078Z] 
[2021-11-03T13:30:37.078Z] PlaintextConsumerTest > testPositionAndCommit() 
STARTED
[2021-11-03T13:30:44.367Z] 
[2021-11-03T13:30:44.367Z] PlaintextConsumerTest > testPositionAndCommit() 
PASSED
[2021-11-03T13:30:44.367Z] 
[2021-11-03T13:30:44.367Z] PlaintextConsumerTest > 
testFetchRecordLargerThanMaxPartitionFetchBytes() STARTED
[2021-11-03T13:30:50.389Z] 
[2021-11-03T13:30:50.389Z] PlaintextConsumerTest > 
testFetchRecordLargerThanMaxPartitionFetchBytes() PASSED
[2021-11-03T13:30:50.389Z] 
[2021-11-03T13:30:50.389Z] PlaintextConsumerTest > testUnsubscribeTopic() 
STARTED
[2021-11-03T13:30:59.327Z] 
[2021-11-03T13:30:59.327Z] PlaintextConsumerTest > testUnsubscribeTopic() PASSED
[2021-11-03T13:30:59.327Z] 
[2021-11-03T13:30:59.327Z] PlaintextConsumerTest > 
testMultiConsumerSessionTimeoutOnClose() STARTED
[2021-11-03T13:31:01.131Z] 
[2021-11-03T13:31:01.131Z] > Task :streams:integrationTest
[2021-11-03T13:31:01.131Z] 
[2021-11-03T13:31:01.131Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorManyStandbys PASSED
[2021-11-03T13:31:01.131Z] 
[2021-11-03T13:31:01.131Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyStandbys STARTED
[2021-11-03T13:31:01.131Z] 
[2021-11-03T13:31:01.131Z] 
org.apache.kafka.streams.integration.KStreamAggregationIntegrationTest > 
shouldAggregateWindowed PASSED
[2021-11-03T13:31:01.131Z] 
[2021-11-03T13:31:01.131Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testHighAvailabilityTaskAssignorManyStandbys PASSED
[2021-11-03T13:31:01.131Z] 
[2021-11-03T13:31:01.131Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers STARTED
[2021-11-03T13:31:02.094Z] 
[2021-11-03T13:31:02.094Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testFallbackPriorTaskAssignorLargeNumConsumers PASSED
[2021-11-03T13:31:02.094Z] 
[2021-11-03T13:31:02.094Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers STARTED
[2021-11-03T13:31:03.057Z] 
[2021-11-03T13:31:03.057Z] 
org.apache.kafka.streams.processor.internals.StreamsAssignmentScaleTest > 
testStickyTaskAssignorLargeNumConsumers PASSED
[2021-11-03T13:31:03.057Z] 
[2021-11-03T13:31:03.057Z] 
org.apache.kafka.streams.processor.internals.HandlingSourceTopicDeletionIntegrationTest
 > shouldThrowErrorAfterSourceTopicDeleted STARTED
[2021-11-03T13:31:05.616Z] 
[2021-11-03T13:31:05.616Z] 
org.apache.kafka.streams.integration.TaskAssignorIntegrationTest > 
shouldProperlyConfigureTheAssignor STARTED
[2021-11-03T13:31:06.579Z] 
[2021-11-03T13:31:06.579Z] 
org.apache.kafka.streams.integration.TaskAssignorIntegrationTest > 
shouldProperlyConfigureTheAssignor PASSED
[2021-11-03T13:31:12.648Z] 
[2021-11-03T13:31:12.648Z] 
org.apache.kafka.streams.processor.internals.HandlingSourceTopicDeletionIntegrationTest
 > shouldThrowErrorAfterSourceTopicDeleted PASSED
[2021-11-03T13:31:17.371Z] streams-6: SMOKE-TEST-CLIENT-CLOSED
[2021-11-03T13:31:17.371Z] streams-3: SMOKE-TEST-CLIENT-CLOSED
[2021-11-03T13:31:17.371Z] streams-0: SMOKE-TEST-CLIENT-CLOSED
[2021-11-03T13:31:17.371Z] streams-7: SMOKE-TEST-CLIENT-CLOSED
[2021-11-03T13:31:17.371Z] streams-9: SMOKE-TEST-CLIENT-CLOSED
[2021-11-03T13:31:17.371Z] streams-4: SMOKE-TEST-CLIENT-CLOSED
[2021-11-03T13:31:17.371Z] streams-8: SMOKE-TEST-CLIENT-CLOSED
[2021-11-03T13:31:17.371Z] streams-1: SMOKE-TEST-CLIENT-CLOSED
[2021-11-03T13:31:17.371Z] streams-2: SMOKE-TEST-CLIENT-CLOSED
[2021-11-03T13:31:17.371Z] streams-5: SMOKE-TEST-CLIENT-CLOSED
[2021-11-03T13:31:21.211Z] 
[2021-11-03T13:31:21.211Z] > Task :core:integrationTest
[2021-11-03T13:31:21.211Z] 
[2021-11-03T13:31:21.211Z] PlaintextConsumerTest > 
testMultiConsumerSessionTimeoutOnClose() PASSED
[2021-11-03T13:31:21.211Z] 
[2021-11-03T13:31:21.211Z] PlaintextConsumerTest > 
testMultiConsumerStickyAssignor() STARTED
[2021-11-03T13:32:03.295Z] 
[2021-11-03T13:32:03.295Z] PlaintextConsumerTest > 
testMultiConsumerStickyAssignor() PASSED
[2021-11-03T13:32:03.295Z] 
[2021-11-03T13:32:03.295Z] PlaintextConsumerTest > 
testFetchRecordLargerThanFetchMaxBytes() STARTED
[2021-11-03T13:32:08.213Z] 
[2021-11-03T13:32:08.213Z] PlaintextConsumerTest > 
testFetchRecordLargerThanFetchMaxBytes() PASSED
[2021-11-03T13:32:08.213Z] 
[2021-11-03T13:32:08.213Z] PlaintextConsumerTest > testAutoCommitOnClose() 
STARTED
[2021-11-03T13:32:15.831Z] 

[jira] [Resolved] (KAFKA-13428) server hang on shutdown

2021-11-03 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13428?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-13428.
-
Resolution: Duplicate

> server hang on shutdown
> ---
>
> Key: KAFKA-13428
> URL: https://issues.apache.org/jira/browse/KAFKA-13428
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Affects Versions: 2.5.0
>Reporter: yuhuo
>Priority: Major
>
> Kafka server startup follows these steps:
>     1. socketServer.startup
>     2. zkClient.registerBroker
>     3. socketServer.startDataPlaneProcessors
> After step 1, the port can already accept connections, but the default
> processor queue size is 20, so with many incoming connections the acceptor
> thread blocks on the processor queue's put. If registerBroker then fails
> (for example, the ZK session has not expired yet, so the broker id still
> exists), the server shuts down without ever starting the network processors.
> The acceptor thread then fails to shut down because it is still blocked on
> the queue, and in the end the server hangs.
> stack:
> {code:java}
> > // code placeholder
> ...
> "data-plane-kafka-socket-acceptor-ListenerName(ING_INSIDE)-PLAINTEXT-9094" 
> #35 prio=5 os_prio=0 tid=0x55fe58048800 nid=0x6c5 runnable 
> [0x7f5f60f8b000]
>java.lang.Thread.State: RUNNABLE
>   at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
>   at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
>   at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
>   at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
>   - locked <0x0006497b86a0> (a sun.nio.ch.Util$3)
>   - locked <0x0006497b8690> (a java.util.Collections$UnmodifiableSet)
>   - locked <0x0006497b86b0> (a sun.nio.ch.EPollSelectorImpl)
>   at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
>   at kafka.network.Acceptor.run(SocketServer.scala:534)
>   at java.lang.Thread.run(Thread.java:748)
> "data-plane-kafka-socket-acceptor-ListenerName(INSIDE)-PLAINTEXT-9092" #34 
> prio=5 os_prio=0 tid=0x55fe55773800 nid=0x6c4 waiting on condition 
> [0x7f5f63315000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x0006497b88d8> (a 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
>   at 
> java.util.concurrent.ArrayBlockingQueue.put(ArrayBlockingQueue.java:353)
>   at kafka.network.Processor.accept(SocketServer.scala:1002)
>   at kafka.network.Acceptor.assignNewConnection(SocketServer.scala:633)
>   at kafka.network.Acceptor.$anonfun$run$1(SocketServer.scala:560)
>   at kafka.network.Acceptor.run(SocketServer.scala:544)
>   at java.lang.Thread.run(Thread.java:748)
> ...
> "main" #1 prio=5 os_prio=0 tid=0x55fe5519a000 nid=0x69f waiting on 
> condition [0x7f5f8a0cf000]
>java.lang.Thread.State: WAITING (parking)
>   at sun.misc.Unsafe.park(Native Method)
>   - parking to wait for  <0x0006497b8df8> (a 
> java.util.concurrent.CountDownLatch$Sync)
>   at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
>   at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
>   at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
>   at kafka.network.AbstractServerThread.shutdown(SocketServer.scala:430)
>   at kafka.network.Acceptor.shutdown(SocketServer.scala:517)
>   at 
> kafka.network.SocketServer.$anonfun$stopProcessingRequests$2(SocketServer.scala:267)
>   at 
> kafka.network.SocketServer.$anonfun$stopProcessingRequests$2$adapted(SocketServer.scala:267)
>   at kafka.network.SocketServer$$Lambda$408/1620459733.apply(Unknown 
> Source)
>   at scala.collection.Iterator.foreach(Iterator.scala:941)
>   at scala.collection.Iterator.foreach$(Iterator.scala:941)
>   at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
>   at 
> scala.collection.MapLike$DefaultValuesIterable.foreach(MapLike.scala:213)
>   at 
> kafka.network.SocketServer.stopProcessingRequests(SocketServer.scala:267)
>   - locked <0x0006497b8e98> (a kafka.network.SocketServer)
>   at kafka.server.KafkaServer.$anonfun$shutdown$4(KafkaServer.scala:617)
>   at kafka.server.KafkaServer$$Lambda$406/1338368149.apply$mcV$sp(Unknown 
> Source)

Re: [DISCUSS] KIP-788: Allow configuring num.network.threads per listener

2021-11-03 Thread Mickael Maison
Hi Israel,

The notation used in this KIP to specify a listener already exists in
Kafka. It will keep the same format and rules as for other
configurations. For example, see the documentation of max.connections:
https://kafka.apache.org/documentation/#brokerconfigs_max.connections
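
For illustration, following that max.connections listener-override pattern, a
per-listener setting would presumably look like this (the exact property name
is subject to the KIP):

    num.network.threads=8
    listener.name.internal.num.network.threads=3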

I've updated the KIP to mention it.

Thanks

On Wed, Nov 3, 2021 at 3:57 AM Mason Legere
 wrote:
>
> Cool, great idea.
>
> Mason
>
> On Tue, Oct 26, 2021 at 10:35 AM Israel Ekpo  wrote:
>
> > Mickael,
> >
> > It will be great to specify if the listener name is case sensitive (i.e. do you
> > need to use upper case in the config)
> >
> > Your examples should reference actual listener names and the case (upper or
> > lower) to make it clear for the users
> >
> > Nevertheless, the KIP is solid and will help configure and scale the
> > different components independently
> >
> > Looks great to me
> >
> >
> >
> > On Tue, Oct 26, 2021 at 1:27 PM Ryanne Dolan 
> > wrote:
> >
> > > Neat! Makes sense to me.
> > >
> > > Ryanne
> > >
> > > On Tue, Oct 26, 2021, 11:02 AM Mickael Maison 
> > > wrote:
> > >
> > > > Hi all,
> > > >
> > > > I wrote a KIP to allow setting the number of network threads per
> > > listener:
> > > >
> > > >
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-788%3A+Allow+configuring+num.network.threads+per+listener
> > > >
> > > > Please let me know if you have any feedback.
> > > > Thanks
> > > >
> > >
> >


[VOTE] KIP-788: Allow configuring num.network.threads per listener

2021-11-03 Thread Mickael Maison
Hi all,

I'd like to start the vote on KIP-788. It will allow setting the
number of network threads per listener.
https://cwiki.apache.org/confluence/display/KAFKA/KIP-788%3A+Allow+configuring+num.network.threads+per+listener

Please let me know if you have any feedback.
Thanks


[jira] [Resolved] (KAFKA-13373) ValueTransformerWithKeySupplier doesn't work with store()

2021-11-03 Thread Aleksandr Sorokoumov (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-13373?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aleksandr Sorokoumov resolved KAFKA-13373.
--
Resolution: Cannot Reproduce

> ValueTransformerWithKeySupplier doesn't work with store()
> -
>
> Key: KAFKA-13373
> URL: https://issues.apache.org/jira/browse/KAFKA-13373
> Project: Kafka
>  Issue Type: Bug
>  Components: streams
>Affects Versions: 2.8.0
>Reporter: Anatoly Tsyganenko
>Assignee: Aleksandr Sorokoumov
>Priority: Minor
>  Labels: newbie
>
> I'm trying to utilize the stores() method in ValueTransformerWithKeySupplier like 
> this:
>  
> {code:java}
> public final class CustomSupplier implements 
> ValueTransformerWithKeySupplier, JsonNode, JsonNode> {
> private final String storeName = "my-store";
> public Set> stores() {
> final Deserializer jsonDeserializer = new 
> JsonDeserializer();
> final Serializer jsonSerializer = new JsonSerializer();
> final Serde jsonSerde = Serdes.serdeFrom(jsonSerializer, 
> jsonDeserializer);
> final Serde stringSerde = Serdes.String();
> final StoreBuilder> store 
> = 
> Stores.timestampedKeyValueStoreBuilder(Stores.inMemoryKeyValueStore(storeName),
> stringSerde, jsonSerde).withLoggingDisabled();
> return Collections.singleton(store);
> }
> @Override
> public ValueTransformerWithKey, JsonNode, JsonNode> 
> get() {
> return new ValueTransformerWithKey, JsonNode, 
> JsonNode>() {
> private ProcessorContext context;
> private TimestampedKeyValueStore store;
> @Override
> public void init(final ProcessorContext context) {
> this.store = context.getStateStore(storeName);
> this.context = context;
> }
> //
> }{code}
>  
> But I got the following error for the line "this.store = context.getStateStore(storeName);" 
> in init():
> {code:java}
> Caused by: org.apache.kafka.streams.errors.StreamsException: Processor 
> KTABLE-TRANSFORMVALUES-08 has no access to StateStore my-store as the 
> store is not connected to the processor. If you add stores manually via 
> '.addStateStore()' make sure to connect the added store to the processor by 
> providing the processor name to '.addStateStore()' or connect them via 
> '.connectProcessorAndStateStores()'. DSL users need to provide the store name 
> to '.process()', '.transform()', or '.transformValues()' to connect the store 
> to the corresponding operator, or they can provide a StoreBuilder by 
> implementing the stores() method on the Supplier itself. If you do not add 
> stores manually, please file a bug report at 
> https://issues.apache.org/jira/projects/KAFKA.{code}
>  
> The same code works perfectly with Transform, or when I add the store to the builder. 
> It looks like something is wrong when ConnectedStoreProvider and 
> ValueTransformerWithKeySupplier are used together.
>  





[jira] [Resolved] (KAFKA-6498) Add RocksDB statistics via Streams metrics

2021-11-03 Thread Bruno Cadonna (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-6498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bruno Cadonna resolved KAFKA-6498.
--
Resolution: Done

> Add RocksDB statistics via Streams metrics
> --
>
> Key: KAFKA-6498
> URL: https://issues.apache.org/jira/browse/KAFKA-6498
> Project: Kafka
>  Issue Type: Improvement
>  Components: metrics, streams
>Reporter: Guozhang Wang
>Assignee: Bruno Cadonna
>Priority: Major
>  Labels: kip
>
> RocksDB's own stats can be programmatically exposed via 
> {{Options.statistics()}}, and the JNI {{Statistics}} class has indeed implemented many 
> useful settings already. However, these stats are not exposed directly via 
> Streams today, and hence any user who wants access to them has 
> to manually interact with the underlying RocksDB directly, not through 
> Streams.
> We should expose such stats via Streams metrics programmatically, so users can 
> investigate them without having to access RocksDB directly.
> [KIP-471: Expose RocksDB Metrics in Kafka 
> Streams|http://cwiki.apache.org/confluence/display/KAFKA/KIP-471%3A+Expose+RocksDB+Metrics+in+Kafka+Streams]





Re: [VOTE] KIP-782: Expandable batch size in producer

2021-11-03 Thread Mickael Maison
Hi Luke,

Thanks for the KIP. It looks like an interesting idea. I like the
concept of dynamically adjusting settings to handle load. I wonder if
other client settings could also benefit from a similar logic.

Just a couple of questions:
- When under load, the producer may allocate extra buffers. Are these
buffers ever released if the load drops?
- Do we really need batch.initial.size? It's not clear that having
this extra setting adds a lot of value.

Thanks,
Mickael
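
For context, a minimal sketch of the soft/hard-limit accumulator behavior
discussed in the quoted thread below (one reading of the proposal, not actual
producer code; batch.size and batch.max.size are the KIP's names):

    // Sketch: batch.size gates readiness for draining, while batch.max.size
    // caps how large a batch can grow while it waits to be drained.
    final class BatchLimitsSketch {
        private final int batchSize;     // soft limit, e.g. 16 KB
        private final int batchMaxSize;  // hard limit, e.g. 256 KB
        private int currentBytes = 0;

        BatchLimitsSketch(final int batchSize, final int batchMaxSize) {
            this.batchSize = batchSize;
            this.batchMaxSize = batchMaxSize;
        }

        // Append-side decision: keep filling the current batch up to the hard limit.
        boolean fitsInCurrentBatch(final int recordBytes) {
            return currentBytes + recordBytes <= batchMaxSize;
        }

        void append(final int recordBytes) {
            currentBytes += recordBytes;
        }

        // Drain-side decision: the batch is "ready" once the soft limit is
        // reached (or linger.ms expires, not modeled here).
        boolean readyToDrain() {
            return currentBytes >= batchSize;
        }
    }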

On Tue, Oct 26, 2021 at 11:12 AM Luke Chen  wrote:
>
> Thank you, Artem!
>
> @devs, welcome to vote for this KIP.
> Key proposal:
> 1. allocate multiple smaller initial batch size buffer in producer, and
> list them together when expansion for better memory usage
> 2. add a max batch size config in producer, so when producer rate is
> suddenly high, we can still have high throughput with batch size larger
> than "batch.size" (and less than "batch.max.size", where "batch.size" is
> soft limit and "batch.max.size" is hard limit)
> Here's the updated KIP:
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-782%3A+Expandable+batch+size+in+producer
>
> And, any comments and feedback are welcome.
>
> Thank you.
> Luke
>
> On Tue, Oct 26, 2021 at 6:35 AM Artem Livshits
>  wrote:
>
> > Hi Luke,
> >
> > I've looked at the updated KIP-782, it looks good to me.
> >
> > -Artem
> >
> > On Sun, Oct 24, 2021 at 1:46 AM Luke Chen  wrote:
> >
> > > Hi Artem,
> > > Thanks for your good suggestion again.
> > > I've combined your idea into this KIP, and updated it.
> > > Note, in the end, I still keep the "batch.initial.size" config (default
> > is
> > > 0, which means "batch.size" will be initial batch size) for better memory
> > > conservation.
> > >
> > > Detailed description can be found here:
> > >
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-782%3A+Expandable+batch+size+in+producer
> > >
> > > Let me know if you have other suggestions.
> > >
> > > Thank you.
> > > Luke
> > >
> > > On Sat, Oct 23, 2021 at 10:50 AM Luke Chen  wrote:
> > >
> > >> Hi Artem,
> > >> Thanks for the suggestion. Let me confirm my understanding is correct.
> > >> So, what you suggest is that the "batch.size" is more like a "soft limit"
> > >> batch size, and the "hard limit" is "batch.max.size". When the buffer
> > >> reaches batch.size, it is "ready" to be sent. But before linger.ms is
> > >> reached, if more data comes in, we can still accumulate it into the same
> > >> buffer, until it reaches "batch.max.size". After it reaches
> > >> "batch.max.size", we'll create another batch for it.
> > >>
> > >> So after your suggestion, we won't need the "batch.initial.size", and we
> > >> can use "batch.size" as the initial batch size. We list each "batch.size"
> > >> buffer together, until it reaches "batch.max.size". Something like this:
> > >>
> > >> [inline image omitted]
> > >> Is my understanding correct?
> > >> If so, that sounds good to me.
> > >> If not, please kindly explain more to me.
> > >>
> > >> Thank you.
> > >> Luke
> > >>
> > >>
> > >>
> > >>
> > >> On Sat, Oct 23, 2021 at 2:13 AM Artem Livshits
> > >>  wrote:
> > >>
> > >>> Hi Luke,
> > >>>
> > >>> Nice suggestion.  It should optimize how memory is used with different
> > >>> production rates, but I wonder if we can take this idea further and
> > >>> improve
> > >>> batching in general.
> > >>>
> > >>> Currently batch.size is used in two conditions:
> > >>>
> > >>> 1. When we append records to a batch in the accumulator, we create a
> > >>> new batch if the current batch would exceed the batch.size.
> > >>> 2. When we drain the batch from the accumulator, a batch becomes
> > >>> 'ready' when it reaches batch.size.
> > >>>
> > >>> The second condition is good with the current batch size, because if
> > >>> linger.ms is greater than 0, the send can be triggered by
> > >>> accomplishing the batching goal.
> > >>>
> > >>> The first condition, though, leads to creating many batches if the
> > >>> network latency or production rate (or both) is high, and with 5
> > >>> in-flight requests and 16KB batches we can only have 80KB of data
> > >>> in-flight per partition. This means that with 50ms latency, we can
> > >>> only push ~1.6MB/sec per partition (and it goes down with higher
> > >>> latencies, e.g. with 100ms we can only push ~0.8MB/sec).
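> > >>>
> > >>> (For illustration, that arithmetic worked out — the variable names are
> > >>> made up; only batch.size and the 5-request in-flight default are real:)
> > >>>
> > >>>   int maxInFlight = 5;        // max.in.flight.requests.per.connection
> > >>>   int batchSizeBytes = 16 * 1024;  // batch.size default
> > >>>   double latencySec = 0.050;       // 50ms round trip
> > >>>   double bytesPerSec = maxInFlight * batchSizeBytes / latencySec;
> > >>>   // 5 * 16384 / 0.05 = 1,638,400 bytes/sec, i.e. ~1.6MB/sec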
> > >>>
> > >>> I think it would be great to separate the two sizes:
> > >>>
> > >>> 1. When appending records to a batch, create a new batch if the
> > >>> current batch would exceed a larger size (we can call it
> > >>> batch.max.size), say 256KB by default.
> > >>> 2. When we drain, consider a batch 'ready' if it exceeds batch.size,
> > >>> which is 16KB by default.
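> > >>>
> > >>> (A sketch of the two separated checks — variable names are made up for
> > >>> illustration, this is not the actual accumulator code:)
> > >>>
> > >>>   int batchBytes = 20 * 1024;    // bytes already in the batch (example)
> > >>>   int recordBytes = 1024;        // incoming record size (example)
> > >>>   boolean lingerExpired = false; // whether linger.ms elapsed (example)
> > >>>   int batchSize = 16 * 1024;     // soft limit: drain threshold
> > >>>   int batchMaxSize = 256 * 1024; // hard limit: append threshold
> > >>>   boolean startNewBatch = batchBytes + recordBytes > batchMaxSize; // 1
> > >>>   boolean batchReady = batchBytes >= batchSize || lingerExpired;   // 2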
> > >>>
> > >>> For memory conservation, we may introduce batch.initial.size if we
> > >>> want the flexibility to make it even smaller than batch.size, or we
> > >>> can just always 

Re: [kafka-clients] [VOTE] 2.7.2 RC0

2021-11-03 Thread Manikumar
Hi,

+1 (binding)

- verified the signatures
- verified the quickstart with binary

Thanks for running the release!

Thanks,
Manikumar

On Tue, Nov 2, 2021 at 11:16 PM Mickael Maison  wrote:

> Bumping the thread.
>
> Contributors, committers and PMC, please take some time to test this
> release candidate and vote.
>
> Thanks,
> Mickael
>
> On Tue, Oct 26, 2021 at 7:38 PM Israel Ekpo  wrote:
> >
> > Thanks Bill. That is greatly appreciated :)
> >
> > We need more PMC members with binding votes to participate.
> >
> > You can do it!
> >
> > On Tue, Oct 26, 2021 at 1:25 PM Bill Bejeck  wrote:
> >>
> >> Hi Mickael,
> >>
> >> Thanks for running the release.
> >>
> >> Steps taken
> >>
> >> Validated checksums
> >> Validated signatures
> >> Built from source
> >> Ran all the unit tests
> >> Spot checked various JavaDocs
> >>
> >>
> >> +1(binding)
> >>
> >> On Tue, Oct 26, 2021 at 4:43 AM Luke Chen  wrote:
> >>>
> >>> Hi Mickael,
> >>>
> >>> Thanks for the release. I did:
> >>> 1. Verified checksums and signatures
> >>> 2. Ran the quick start steps
> >>> 3. Verified that CVE-2021-38153 is indeed fixed in kafka-2.7.2-src.tgz.
> >>>
> >>> +1 (non-binding)
> >>>
> >>> Thank you.
> >>> Luke
> >>>
> >>> On Tue, Oct 26, 2021 at 3:41 PM Tom Bentley 
> wrote:
> >>>
> >>> > Hi Mickael,
> >>> >
> >>> > As with 2.6.3 RC0, I have:
> >>> >
> >>> > * Verified checksums and signatures
> >>> > * Built jars and docs from the source jar
> >>> > * Run the unit and integration tests
> >>> >
> >>> > +1 non-binding
> >>> >
> >>> > Kind regards,
> >>> >
> >>> > Tom
> >>> >
> >>> > On Sun, Oct 24, 2021 at 3:05 PM Israel Ekpo 
> wrote:
> >>> >
> >>> > > Mickael,
> >>> > >
> >>> > > Do we need to do another RC? Were there issues with this release?
> >>> > >
> >>> > > What happens next?
> >>> > >
> >>> > >
> >>> > > On Sat, Oct 16, 2021 at 8:11 PM Israel Ekpo 
> >>> > wrote:
> >>> > >
> >>> > > >
> >>> > > > I have performed the following checks
> >>> > > >
> >>> > > > Validation of Release Artifacts Cryptographic Hashes (ASC, MD5,
> >>> > > > SHA1, SHA512)
> >>> > > > PGP Signatures used to sign the release artifacts
> >>> > > > Javadocs check
> >>> > > > Site docs check was not necessary
> >>> > > > Jenkins build was successful.
> >>> > > >
> >>> > > > I used the steps here for the first two checks
> >>> > > > https://github.com/izzyacademy/apache-kafka-release-party
> >>> > > >
> >>> > > > I vote +1 on this RC
> >>> > > >
> >>> > > >
> >>> > > > On Fri, Oct 15, 2021 at 12:11 PM Israel Ekpo <
> israele...@gmail.com>
> >>> > > wrote:
> >>> > > >
> >>> > > >> Hi Mickael,
> >>> > > >>
> >>> > > >> I am pretty surprised that there are no votes so far on the RCs
> and
> >>> > the
> >>> > > >> deadline has already passed.
> >>> > > >>
> >>> > > >> I am running my checks right now using the process outlined here
> >>> > > >>
> >>> > > >>
> >>> > > >>
> >>> > >
> >>> >
> https://github.com/izzyacademy/apache-kafka-release-party#how-to-validate-apache-kafka-release-candidates
> >>> > > >>
> >>> > > >> I will post my results and vote as soon as they are completed.
> >>> > > >>
> >>> > > >> On Fri, Oct 15, 2021 at 9:52 AM Mickael Maison <
> mimai...@apache.org>
> >>> > > >> wrote:
> >>> > > >>
> >>> > > >>> Successful Jenkins build:
> >>> > > >>> https://ci-builds.apache.org/job/Kafka/job/kafka-2.7-jdk8/181/
> >>> > > >>>
> >>> > > >>> On Wed, Oct 13, 2021 at 6:47 PM Mickael Maison <
> mimai...@apache.org>
> >>> > > >>> wrote:
> >>> > > >>> >
> >>> > > >>> > Hi Israel,
> >>> > > >>> >
> >>> > > >>> > Our tooling generates the same template for all types of
> releases.
> >>> > > >>> >
> >>> > > >>> > For bugfix releases, the site docs and javadocs don't
> typically
> >>> > > >>> > require extensive validation.
> >>> > > >>> > It's still a good idea to open them up and check a few pages
> to
> >>> > > >>> > validate they look right.
> >>> > > >>> >
> >>> > > >>> > For this release, as you've mentioned, site docs have not
> changed.
> >>> > > >>> >
> >>> > > >>> > Thanks
> >>> > > >>> >
> >>> > > >>> > On Wed, Oct 13, 2021 at 1:59 AM Israel Ekpo <
> israele...@gmail.com>
> >>> > > >>> wrote:
> >>> > > >>> > >
> >>> > > >>> > > Mickael,
> >>> > > >>> > >
> >>> > > >>> > > For patch or bug fix releases like this one, should we
> exclude
> >>> > the
> >>> > > >>> Javadocs and site docs if they have not changed?
> >>> > > >>> > >
> >>> > > >>> > > https://github.com/apache/kafka-site
> >>> > > >>> > >
> >>> > > >>> > > The site docs were last changed about 6 months ago, so it
> >>> > > >>> > > appears they have not changed and may not need validation
> >>> > > >>> > >
> >>> > > >>> > >
> >>> > > >>> > >
> >>> > > >>> > > On Tue, Oct 12, 2021 at 2:17 PM Mickael Maison <
> >>> > > mimai...@apache.org>
> >>> > > >>> wrote:
> >>> > > >>> > >>
> >>> > > >>> > >> Hello Kafka users, developers and client-developers,
> >>> > > >>> > >>
> >>> > > >>> > >> This is the first candidate for 

Re: [kafka-clients] [VOTE] 2.6.3 RC0

2021-11-03 Thread Manikumar
Hi,

+1 (binding)

- verified the signatures
- verified the quickstart with binary

Thanks for running the release!

Thanks,
Manikumar

On Tue, Nov 2, 2021 at 11:15 PM Mickael Maison 
wrote:

> Bumping the thread.
>
> Contributors, committers and PMC, please take some time to test this
> release candidate and vote.
>
> Thanks,
> Mickael
>
> On Tue, Oct 26, 2021 at 4:50 PM Bill Bejeck  wrote:
> >
> > Thanks for running the release, Mickael.
> >
> > I did the following:
> >
> >1. Validated signatures
> >2. Validated checksums
> >3. Built from source
> >4. Ran all the unit tests
> >5. Spot checked the Javadocs
> >
> >
> > +1(binding)
> > -Bill
> >
> >
> >
> > On Mon, Oct 25, 2021 at 8:24 PM Israel Ekpo 
> wrote:
> >
> > > Hello Friends
> > >
> > > We are approaching the limit of the grace period for your vote events
> > > to make it into the result stream. Just kidding :) KIP-633 added 24
> > > more weeks to the grace period :)
> > >
> > > All kidding aside, let's take a few moments to validate the RC and
> > > vote yes (+1) or no (-1) so that we can close out the process soon.
> > >
> > > That being said, let's get started with the release party.
> > >
> > > I have simplified the validation process here
> > >
> > > https://github.com/izzyacademy/apache-kafka-release-party
> > >
> > > All you need is Docker in your local environment, and the validation
> > > can be done in a few moments.
> > >
> > > Let’s try to complete the voting process so that we can push this
> release
> > > out and resolve the outstanding vulnerabilities and defects already
> > > resolved.
> > >
> > > Thanks
> > >
> > > On Mon, Oct 25, 2021 at 12:34 PM Tom Bentley 
> wrote:
> > >
> > > > Hi Mickael,
> > > >
> > > > I have:
> > > >
> > > > * Verified checksums and signatures
> > > > * Built jars and docs from the source jar
> > > > * Run the unit and integration tests
> > > >
> > > > +1 non-binding
> > > >
> > > > Kind regards,
> > > >
> > > > Tom
> > > >
> > > > On Mon, Oct 25, 2021 at 10:07 AM Mickael Maison
> > > > wrote:
> > > >
> > > > > Hi Israel,
> > > > >
> > > > > Thanks for checking this RC and voting!
> > > > > The vote is not abandoned, we are just waiting for people to vote.
> > > > >
> > > > >
> > > > >
> > > > >
> > > > >
> > > > > On Sun, Oct 24, 2021 at 3:59 PM Israel Ekpo 
> > > > wrote:
> > > > > >
> > > > > > Was this vote abandoned? If so, why?
> > > > > >
> > > > > >
> > > > > > On Sat, Oct 16, 2021 at 8:12 PM Israel Ekpo <
> israele...@gmail.com>
> > > > > wrote:
> > > > > >
> > > > > > >
> > > > > > > I have performed the following checks for this release
> candidate.
> > > > > > >
> > > > > > > Validation of Release Artifacts Cryptographic Hashes (ASC, MD5,
> > > > > > > SHA1, SHA512)
> > > > > > > PGP Signatures used to sign the release artifacts
> > > > > > > Javadocs check
> > > > > > > Site docs check was not necessary
> > > > > > > Jenkins build was successful.
> > > > > > >
> > > > > > > I used the steps here for the first two checks
> > > > > > > https://github.com/izzyacademy/apache-kafka-release-party
> > > > > > >
> > > > > > > I vote +1 on this RC
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > On Fri, Oct 15, 2021 at 12:11 PM Israel Ekpo <
> israele...@gmail.com
> > > >
> > > > > wrote:
> > > > > > >
> > > > > > >> Hi Mickael,
> > > > > > >>
> > > > > > >> I am pretty surprised that there are no votes so far on the
> RCs
> > > and
> > > > > the
> > > > > > >> deadline has already passed.
> > > > > > >>
> > > > > > >> I am running my checks right now using the process outlined
> here
> > > > > > >>
> > > > > > >>
> > > > > > >>
> > > > >
> > > >
> > >
> https://github.com/izzyacademy/apache-kafka-release-party#how-to-validate-apache-kafka-release-candidates
> > > > > > >>
> > > > > > >> I will post my results and vote as soon as they are completed.
> > > > > > >>
> > > > > > >> On Fri, Oct 15, 2021 at 9:52 AM Mickael Maison <
> > > mimai...@apache.org
> > > > >
> > > > > > >> wrote:
> > > > > > >>
> > > > > > >>> Successful Jenkins build:
> > > > > > >>>
> https://ci-builds.apache.org/job/Kafka/job/kafka-2.7-jdk8/181/
> > > > > > >>>
> > > > > > >>> On Wed, Oct 13, 2021 at 6:47 PM Mickael Maison <
> > > > mimai...@apache.org>
> > > > > > >>> wrote:
> > > > > > >>> >
> > > > > > >>> > Hi Israel,
> > > > > > >>> >
> > > > > > >>> > Our tooling generates the same template for all types of
> > > > releases.
> > > > > > >>> >
> > > > > > >>> > For bugfix releases, the site docs and javadocs don't
> typically
> > > > > > >>> > require extensive validation.
> > > > > > >>> > It's still a good idea to open them up and check a few
> pages to
> > > > > > >>> > validate they look right.
> > > > > > >>> >
> > > > > > >>> > For this release, as you've mentioned, site docs have not
> > > > changed.
> > > > > > >>> >
> > > > > > >>> > Thanks
> > > > > > >>> >
> > > > > > >>> > On Wed, Oct 13, 2021 at 1:59 AM Israel Ekpo <
> > > > israele...@gmail.com>
> > > > > > >>> wrote:
>