Re: [PR] KAFKA-16356 RemoteLogMetadataSerde: Serializer via class-name dispatch removed and replaced with if-elseif-else conditions [kafka]

2024-04-14 Thread via GitHub


gharris1727 commented on PR #15620:
URL: https://github.com/apache/kafka/pull/15620#issuecomment-2055290090

   Hi @linu-shibu @showuon, this still uses raw types, and so is still 
type-unsafe. Fixing that was my motivation for creating the ticket; sorry for 
not emphasizing it more.
   
   I think it should be addressed in this PR rather than a follow-up, because 
it involves changes to the exact same code.
   
   One other thought that occurs to me now is that we should store the metadata 
transform instances, so that we don't incur the cost of constructing them for 
each call.
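
   A minimal sketch of both suggestions together (generic transforms instead 
of raw types, plus transform instances stored once rather than constructed on 
every call). The class and method names below are illustrative stand-ins, not 
the actual RemoteLogMetadataSerde types:

   ```java
   import java.nio.charset.StandardCharsets;

   // Hypothetical stand-ins for the metadata transform types; not the real
   // RemoteLogMetadataSerde classes.
   interface MetadataTransform<T> {
       byte[] toBytes(T metadata);
   }

   class SegmentTransform implements MetadataTransform<String> {
       @Override
       public byte[] toBytes(String metadata) {
           return metadata.getBytes(StandardCharsets.UTF_8);
       }
   }

   public class SerdeSketch {
       // Constructed once and reused, so serialize() incurs no per-call allocation.
       private static final SegmentTransform SEGMENT_TRANSFORM = new SegmentTransform();

       static byte[] serialize(Object metadata) {
           // if-else dispatch on the concrete type; each branch calls a
           // generically typed transform, so no raw types are involved.
           if (metadata instanceof String) {
               return SEGMENT_TRANSFORM.toBytes((String) metadata);
           }
           throw new IllegalArgumentException("Unknown metadata type: " + metadata.getClass());
       }

       public static void main(String[] args) {
           System.out.println(new String(serialize("segment-0"), StandardCharsets.UTF_8));
       }
   }
   ```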
   
   Thanks!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-15265) Remote copy/fetch quotas for tiered storage.

2024-04-14 Thread Abhijeet Kumar (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837065#comment-17837065
 ] 

Abhijeet Kumar commented on KAFKA-15265:


Hi [~h...@pinterest.com]. I have broken down the task into smaller changes and 
published [https://github.com/apache/kafka/pull/15625] as the first PR. It is 
ready for review. I will be raising the remaining PRs soon.

> Remote copy/fetch quotas for tiered storage.
> 
>
> Key: KAFKA-15265
> URL: https://issues.apache.org/jira/browse/KAFKA-15265
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Satish Duggana
>Assignee: Abhijeet Kumar
>Priority: Major
>
> Related KIP: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-956+Tiered+Storage+Quotas



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] MINOR: skip 'zinc' phase from gradle dependency-check plugin [kafka]

2024-04-14 Thread via GitHub


github-actions[bot] commented on PR #15054:
URL: https://github.com/apache/kafka/pull/15054#issuecomment-2054899945

   This PR is being marked as stale since it has not had any activity in 90 
days. If you would like to keep this PR alive, please ask a committer for 
review. If the PR has merge conflicts, please update it with the latest from 
trunk (or the appropriate release branch). If this PR is no longer valid or 
desired, please feel free to close it. If no activity occurs in the next 30 
days, it will be automatically closed.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-16467) Add README to docs folder

2024-04-14 Thread ASF GitHub Bot (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837047#comment-17837047
 ] 

ASF GitHub Bot commented on KAFKA-16467:


showuon commented on code in PR #596:
URL: https://github.com/apache/kafka-site/pull/596#discussion_r1565129468


##
README.md:
##
@@ -10,4 +10,30 @@ You can run it with the following command, note that it 
requires docker:
 
 Then you can open [localhost:8080](http://localhost:8080) on your browser and 
browse the documentation.
 
-To kill the process, just type ctrl + c
\ No newline at end of file
+To kill the process, just type ctrl + c.
+
+## Preview latest kafka document
+
+1. Assume you have [kafka](https://github.com/apache/kafka) and kafka-site 
folder structure like this:
+
+```shell
+.
+├── kafka
+└── kafka-site
+```
+
+2. Generate document in kafka folder:

Review Comment:
   Generate document in kafka folder: -> Generating document from kafka 
repository:



##
README.md:
##
@@ -10,4 +10,30 @@ You can run it with the following command, note that it 
requires docker:
 
 Then you can open [localhost:8080](http://localhost:8080) on your browser and 
browse the documentation.
 
-To kill the process, just type ctrl + c
\ No newline at end of file
+To kill the process, just type ctrl + c.
+
+## Preview latest kafka document
+
+1. Assume you have [kafka](https://github.com/apache/kafka) and kafka-site 
folder structure like this:
+
+```shell
+.
+├── kafka
+└── kafka-site
+```
+

Review Comment:
   I think we don't need to have this assumption.



##
README.md:
##
@@ -10,4 +10,30 @@ You can run it with the following command, note that it 
requires docker:
 
 Then you can open [localhost:8080](http://localhost:8080) on your browser and 
browse the documentation.
 
-To kill the process, just type ctrl + c
\ No newline at end of file
+To kill the process, just type ctrl + c.
+
+## Preview latest kafka document

Review Comment:
   We could be consistent with the L1 title. Ex: 
   `How to preview the latest documentation changes in Kafka repository?`



##
README.md:
##
@@ -10,4 +10,30 @@ You can run it with the following command, note that it 
requires docker:
 
 Then you can open [localhost:8080](http://localhost:8080) on your browser and 
browse the documentation.
 
-To kill the process, just type ctrl + c
\ No newline at end of file
+To kill the process, just type ctrl + c.
+
+## Preview latest kafka document
+
+1. Assume you have [kafka](https://github.com/apache/kafka) and kafka-site 
folder structure like this:
+
+```shell
+.
+├── kafka
+└── kafka-site
+```
+
+2. Generate document in kafka folder:
+
+```shell
+./gradlew clean siteDocTar
+tar zxvf core/build/distributions/kafka_2.13-$(./gradlew properties | grep 
version: | awk '{print $NF}' | head -n 1)-site-docs.tgz
+```
+
+3. Running website in kafka-site folder and open 
[http://localhost:8080/dev/documentation/](http://localhost:8080/dev/documentation/):
+
+```shell
+rm -rf dev
+mkdir dev
+cp -r ../kafka/site-docs/* dev
+./start-preview.sh
+```

Review Comment:
   How about this:
   
   3. Copy the generated documents from the Kafka repository into kafka-site, 
and preview them (note that this requires docker):
   
   ```shell
   # change directory into kafka-site repository
   cd KAFKA_SITE_REPO
   # copy the generated documents into dev folder
   rm -rf dev
   mkdir dev
   cp -r ../kafka/site-docs/* dev
   # preview it
   ./start-preview.sh
   ```
   
   Then you can open 
[http://localhost:8080/dev/documentation/](http://localhost:8080/dev/documentation/)
 on your browser and browse the generated documentation.



##
README.md:
##
@@ -10,4 +10,30 @@ You can run it with the following command, note that it 
requires docker:
 
 Then you can open [localhost:8080](http://localhost:8080) on your browser and 
browse the documentation.
 
-To kill the process, just type ctrl + c
\ No newline at end of file
+To kill the process, just type ctrl + c.
+
+## Preview latest kafka document
+
+1. Assume you have [kafka](https://github.com/apache/kafka) and kafka-site 
folder structure like this:
+
+```shell
+.
+├── kafka
+└── kafka-site
+```
+
+2. Generate document in kafka folder:
+
+```shell
+./gradlew clean siteDocTar
+tar zxvf core/build/distributions/kafka_2.13-$(./gradlew properties | grep 
version: | awk '{print $NF}' | head -n 1)-site-docs.tgz
+```

Review Comment:
   We didn't mention that we are in the kafka repo now. How about this:
   
   
   ```shell
   # change directory into kafka repository
   cd KAFKA_REPO
   ./gradlew clean siteDocTar
   # supposing built with scala 2.13
   tar zxvf core/build/distributions/kafka_2.13-$(./gradlew properties | grep 
version: | awk '{print $NF}' | head -n 1)-site-docs.tgz
   ```





> Add README to docs folder
> -
>
> Key: KAFKA-16467
> URL: 

[PR] JMH Benchmarks for testing the performance of the Server Side Rebalances: KIP_848 [kafka]

2024-04-14 Thread via GitHub


rreddy-22 opened a new pull request, #15717:
URL: https://github.com/apache/kafka/pull/15717

   This PR is for the four benchmarks that are being used to test the 
performance and efficiency of the consumer group rebalance process.
   
   - Client Assignors 
   - Server Assignors
   - Target Assignment Builder
   - Micro Benchmark for assigning partitions
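
   As a rough illustration of the last item, a toy stand-in for a 
partition-assignment micro benchmark is sketched below. The real measurements 
in this PR use JMH; this nanoTime loop only shows the shape of the workload 
being benchmarked, and the assignment logic here is a simplified round-robin, 
not an actual Kafka assignor:

   ```java
   import java.util.ArrayList;
   import java.util.HashMap;
   import java.util.List;
   import java.util.Map;

   public class AssignBench {
       // Round-robin assignment of partition ids to member ids.
       static Map<Integer, List<Integer>> assign(int members, int partitions) {
           Map<Integer, List<Integer>> assignment = new HashMap<>();
           for (int m = 0; m < members; m++) assignment.put(m, new ArrayList<>());
           for (int p = 0; p < partitions; p++) assignment.get(p % members).add(p);
           return assignment;
       }

       public static void main(String[] args) {
           long start = System.nanoTime();
           Map<Integer, List<Integer>> a = assign(50, 10_000);
           long elapsedMicros = (System.nanoTime() - start) / 1_000;
           // 10,000 partitions split evenly across 50 members -> 200 each.
           System.out.println(a.get(0).size() + " partitions for member 0 in " + elapsedMicros + "us");
       }
   }
   ```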
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR : Replaced the while loop with TestUtils.waitForCondition [kafka]

2024-04-14 Thread via GitHub


showuon commented on PR #15678:
URL: https://github.com/apache/kafka/pull/15678#issuecomment-2054458671

   @chiacyu , thanks for the update. LGTM! Comments:
   1. Please resolve the merge conflict.
   2. In L163, we did:
   ```
   if (readyToAssert.getCount() > 0) {
       readyToAssert.countDown();
   }
   ```
   
   Actually, we can remove the if condition, because the javadoc of 
CountDownLatch#countDown says:
   
   > If the current count equals zero then nothing happens.
   
   Same comment applies to other similar places.
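
   The javadoc behavior is easy to verify with a small standalone check 
(plain Java, nothing Kafka-specific):

   ```java
   import java.util.concurrent.CountDownLatch;

   public class CountDownNoOp {
       public static void main(String[] args) {
           CountDownLatch latch = new CountDownLatch(1);
           latch.countDown(); // count goes 1 -> 0
           latch.countDown(); // count already 0: a documented no-op, no exception
           System.out.println(latch.getCount()); // prints 0
       }
   }
   ```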


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-15265) Remote copy/fetch quotas for tiered storage.

2024-04-14 Thread Henry Cai (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15265?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17837025#comment-17837025
 ] 

Henry Cai commented on KAFKA-15265:
---

[~abhijeetkumar] Is the PR getting ready to be reviewed?

> Remote copy/fetch quotas for tiered storage.
> 
>
> Key: KAFKA-15265
> URL: https://issues.apache.org/jira/browse/KAFKA-15265
> Project: Kafka
>  Issue Type: Improvement
>  Components: core
>Reporter: Satish Duggana
>Assignee: Abhijeet Kumar
>Priority: Major
>
> Related KIP: 
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-956+Tiered+Storage+Quotas



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-14507) Add ConsumerGroupPrepareAssignment API

2024-04-14 Thread Phuc Hong Tran (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phuc Hong Tran reassigned KAFKA-14507:
--

Assignee: (was: Phuc Hong Tran)

> Add ConsumerGroupPrepareAssignment API
> --
>
> Key: KAFKA-14507
> URL: https://issues.apache.org/jira/browse/KAFKA-14507
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-14508) Add ConsumerGroupInstallAssignment API

2024-04-14 Thread Phuc Hong Tran (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14508?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phuc Hong Tran reassigned KAFKA-14508:
--

Assignee: (was: Phuc Hong Tran)

> Add ConsumerGroupInstallAssignment API
> --
>
> Key: KAFKA-14508
> URL: https://issues.apache.org/jira/browse/KAFKA-14508
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16516) Fix the controller node provider for broker to control channel

2024-04-14 Thread Jira


 [ 
https://issues.apache.org/jira/browse/KAFKA-16516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

José Armando García Sancio updated KAFKA-16516:
---
Description: 
The broker-to-controller channel gets the set of voters directly from the 
static configuration. This needs to change so that the leader node comes from 
the kraft client/manager.

The code is in KafkaServer, where it constructs the RaftControllerNodeProvider.

  was:The broker to controller channel gets the set of voters directly from the 
static configuration. This needs to change so that the leader nodes comes from 
the kraft client/manager.


> Fix the controller node provider for broker to control channel
> --
>
> Key: KAFKA-16516
> URL: https://issues.apache.org/jira/browse/KAFKA-16516
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: José Armando García Sancio
>Assignee: José Armando García Sancio
>Priority: Major
> Fix For: 3.8.0
>
>
> The broker-to-controller channel gets the set of voters directly from the 
> static configuration. This needs to change so that the leader node comes 
> from the kraft client/manager.
> The code is in KafkaServer, where it constructs the RaftControllerNodeProvider.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Updated] (KAFKA-16515) Fix the ZK Metadata cache use of voter static configuration

2024-04-14 Thread Jira


 [ 
https://issues.apache.org/jira/browse/KAFKA-16515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

José Armando García Sancio updated KAFKA-16515:
---
Description: 
Looks like because of ZK migration to KRaft the ZK Metadata cache was changed 
to read the voter static configuration. This needs to change to use the voter 
nodes reported by  the raft manager or the kraft client.

The injection code is in KafkaServer where it constructs 
MetadataCache.zkMetadata.

  was:Looks like because of ZK migration to KRaft the ZK Metadata cache was 
changed to read the voter static configuration. This needs to change to use the 
voter nodes reported by  the raft manager or the kraft client.


> Fix the ZK Metadata cache use of voter static configuration
> ---
>
> Key: KAFKA-16515
> URL: https://issues.apache.org/jira/browse/KAFKA-16515
> Project: Kafka
>  Issue Type: Sub-task
>  Components: core
>Reporter: José Armando García Sancio
>Assignee: José Armando García Sancio
>Priority: Major
> Fix For: 3.8.0
>
>
> Looks like, because of the ZK migration to KRaft, the ZK Metadata cache was 
> changed to read the voter static configuration. This needs to change to use 
> the voter nodes reported by the raft manager or the kraft client.
> The injection code is in KafkaServer, where it constructs 
> MetadataCache.zkMetadata.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] MINOR: Various cleanups in trogdor [kafka]

2024-04-14 Thread via GitHub


mimaison commented on PR #15708:
URL: https://github.com/apache/kafka/pull/15708#issuecomment-2054155906

   Thanks for the review!
   No need to rebase, you can just log into Jenkins with your Apache 
credentials and rekick the build. I went to 
https://ci-builds.apache.org/job/Kafka/job/kafka-pr/job/PR-15708/2/ and clicked 
the "Rebuild" button on the left side.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-16481: Fixing flaky test kafka.server.ReplicaManagerTest#testRemoteLogReaderMetrics [kafka]

2024-04-14 Thread via GitHub


vamossagar12 commented on PR #15677:
URL: https://github.com/apache/kafka/pull/15677#issuecomment-2054135665

   @showuon , btw I am noticing that this test is failing with a different 
error. I am looking at the test history 
[here](https://ge.apache.org/scans/tests?search.relativeStartTime=P28D=Asia%2FCalcutta=kafka.server.ReplicaManagerTest=testRemoteLogReaderMetrics())
   
   ```
   java.lang.NoClassDefFoundError: Could not initialize class 
kafka.server.KafkaConfig$ |
   ```
   
   Noticed this as well 
   
   ```
   Exception org.apache.kafka.common.config.ConfigException: Invalid value none 
for configuration offsets.topic.compression.codec: Expected value to be a 
32-bit integer, but it was a org.apache.kafka.common.record.CompressionType$1 
[in thread "Test worker"]   
   ```
   
   Noticed it was introduced 
[here](https://github.com/apache/kafka/pull/15158/files#diff-5f72f144ecddda0b7fa3e0ef370a9b487a7c90bcecc2e437173a30555af76776R172)
 but it seems ok. Will try to dig deeper this week.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Resolved] (KAFKA-15729) KRaft support in GetOffsetShellTest

2024-04-14 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15729?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai resolved KAFKA-15729.

Fix Version/s: 3.8.0
   Resolution: Fixed

> KRaft support in GetOffsetShellTest
> ---
>
> Key: KAFKA-15729
> URL: https://issues.apache.org/jira/browse/KAFKA-15729
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Owen C.H. Leung
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
> Fix For: 3.8.0
>
>
> The following tests in GetOffsetShellTest in 
> tools/src/test/java/org/apache/kafka/tools/GetOffsetShellTest.java need to be 
> updated to support KRaft
> 62 : def testNoFilterOptions(): Unit = {
> 68 : def testInternalExcluded(): Unit = {
> 74 : def testTopicNameArg(): Unit = {
> 82 : def testTopicPatternArg(): Unit = {
> 88 : def testPartitionsArg(): Unit = {
> 94 : def testTopicPatternArgWithPartitionsArg(): Unit = {
> 100 : def testTopicPartitionsArg(): Unit = {
> 116 : def testGetLatestOffsets(time: String): Unit = {
> 131 : def testGetEarliestOffsets(time: String): Unit = {
> 146 : def testGetOffsetsByMaxTimestamp(time: String): Unit = {
> 155 : def testGetOffsetsByTimestamp(): Unit = {
> 170 : def testNoOffsetIfTimestampGreaterThanLatestRecord(): Unit = {
> 177 : def testTopicPartitionsArgWithInternalExcluded(): Unit = {
> 192 : def testTopicPartitionsArgWithInternalIncluded(): Unit = {
> 198 : def testTopicPartitionsNotFoundForNonExistentTopic(): Unit = {
> 203 : def testTopicPartitionsNotFoundForExcludedInternalTopic(): Unit = {
> 208 : def testTopicPartitionsNotFoundForNonMatchingTopicPartitionPattern(): 
> Unit = {
> 213 : def testTopicPartitionsFlagWithTopicFlagCauseExit(): Unit = {
> 218 : def testTopicPartitionsFlagWithPartitionsFlagCauseExit(): Unit = {
> Scanned 279 lines. Found 0 KRaft tests out of 19 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] KAFKA-15729: Add KRaft support in GetOffsetShellTest [kafka]

2024-04-14 Thread via GitHub


chia7712 commented on PR #15489:
URL: https://github.com/apache/kafka/pull/15489#issuecomment-2054058198

   The failed tests pass on my local machine. Will merge it.
   ```
   ./gradlew cleanTest :streams:test --tests 
SlidingWindowedKStreamIntegrationTest.shouldRestoreAfterJoinRestart --tests 
StreamsAssignmentScaleTest.testHighAvailabilityTaskAssignorLargeNumConsumers 
:tools:test --tests 
MetadataQuorumCommandTest.testDescribeQuorumReplicationSuccessful --tests 
MetadataQuorumCommandTest.testDescribeQuorumStatusSuccessful --tests 
ReassignPartitionsIntegrationTest.testProduceAndConsumeWithReassignmentInProgress
 --tests ReassignPartitionsIntegrationTest.testReassignment --tests 
ReassignPartitionsIntegrationTest.testHighWaterMarkAfterPartitionReassignment 
:storage:test --tests 
TransactionsWithTieredStoreTest.testAbortTransactionTimeout 
:connect:runtime:test --tests 
org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest.testAddingWorker
 --tests 
org.apache.kafka.connect.integration.RebalanceSourceConnectorsIntegrationTest.testRemovingWorker
 :trogdor:test --tests CoordinatorTest.testTaskRequestWithOldStartMsGetsUpdated 
:connect:mirror:tes
 t --tests MirrorConnectorsIntegrationSSLTest.testSyncTopicConfigs :core:test 
--tests 
DelegationTokenEndToEndAuthorizationWithOwnerTest.testProduceConsumeViaSubscribe
 --tests 
DelegationTokenEndToEndAuthorizationWithOwnerTest.testCreateUserWithDelegationToken
 --tests ConsumerBounceTest.testConsumptionWithBrokerFailures --tests 
ConsumerBounceTest.testSeekAndCommitWithBrokerFailures --tests 
PlaintextConsumerTest.testCoordinatorFailover --tests 
SaslMultiMechanismConsumerTest.testCoordinatorFailover --tests 
SslConsumerTest.testCoordinatorFailover
   ```


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-15729: Add KRaft support in GetOffsetShellTest [kafka]

2024-04-14 Thread via GitHub


chia7712 commented on PR #15489:
URL: https://github.com/apache/kafka/pull/15489#issuecomment-2054058383

   @Owen-CH-Leung thanks for your contribution and effort!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-15729: Add KRaft support in GetOffsetShellTest [kafka]

2024-04-14 Thread via GitHub


chia7712 merged PR #15489:
URL: https://github.com/apache/kafka/pull/15489


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Commented] (KAFKA-15224) Automate version change to snapshot

2024-04-14 Thread Walter Hernandez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15224?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836954#comment-17836954
 ] 

Walter Hernandez commented on KAFKA-15224:
--

It looks like some great headway was made, but the branches have gone stale 
since.

Any update on this? I ask since I am looking for tickets to pick up, and this 
is part of a bigger improvement (KAFKA-15198).

> Automate version change to snapshot 
> 
>
> Key: KAFKA-15224
> URL: https://issues.apache.org/jira/browse/KAFKA-15224
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Divij Vaidya
>Assignee: Tanay Karmarkar
>Priority: Minor
>
> We require changing to SNAPSHOT version as part of the release process [1]. 
> The specific manual steps are:
> Update version on the branch to 0.10.0.1-SNAPSHOT in the following places:
>  * 
>  ** docs/js/templateData.js
>  ** gradle.properties
>  ** kafka-merge-pr.py
>  ** streams/quickstart/java/pom.xml
>  ** streams/quickstart/java/src/main/resources/archetype-resources/pom.xml
>  ** streams/quickstart/pom.xml
>  ** tests/kafkatest/__init__.py (note: this version name can't follow the 
> -SNAPSHOT convention due to python version naming restrictions; instead, 
> update it to 0.10.0.1.dev0)
>  ** tests/kafkatest/version.py
> The diff of the changes look like 
> [https://github.com/apache/kafka/commit/484a86feb562f645bdbec74b18f8a28395a686f7#diff-21a0ab11b8bbdab9930ad18d4bca2d943bbdf40d29d68ab8a96f765bd1f9]
>  
>  
> It would be nice if we could run a script to automatically do it. Note that 
> release.py (line 550) already does something similar, where it replaces 
> SNAPSHOT with the actual version. We need to do the opposite here. We can 
> repurpose that code from release.py and extract it into a new script to 
> perform this operation.
> [1] [https://cwiki.apache.org/confluence/display/KAFKA/Release+Process]
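
A minimal sketch of what such an automation could look like, written here as a 
standalone helper (the real change would more likely live alongside 
release.py; the file list and the regex below are illustrative assumptions, 
not what release.py actually does):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.regex.Pattern;

public class SnapshotBump {
    // Illustrative subset of the files the ticket lists.
    static final List<String> FILES = List.of(
        "docs/js/templateData.js",
        "gradle.properties",
        "streams/quickstart/pom.xml");

    // Replace a bare "3.8.0" with "3.8.0-SNAPSHOT", leaving occurrences that
    // already carry the -SNAPSHOT suffix untouched.
    static String bumpToSnapshot(String text, String version) {
        Pattern p = Pattern.compile(Pattern.quote(version) + "(?!-SNAPSHOT)");
        return p.matcher(text).replaceAll(version + "-SNAPSHOT");
    }

    public static void main(String[] args) throws IOException {
        String repoRoot = args.length > 0 ? args[0] : ".";
        String version = args.length > 1 ? args[1] : "3.8.0";
        for (String rel : FILES) {
            Path path = Path.of(repoRoot, rel);
            Files.writeString(path, bumpToSnapshot(Files.readString(path), version));
        }
    }
}
```

(The python test version files would still need their separate `.dev0` 
handling, as the ticket notes.)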



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Commented] (KAFKA-15201) When git fails, script goes into a loop

2024-04-14 Thread Walter Hernandez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836953#comment-17836953
 ] 

Walter Hernandez commented on KAFKA-15201:
--

This PR was merged, and others developing on forks now use this code.

I see that there is a test failure post-merge: 
[https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-14645/4/pipeline]

Is this something to be alarmed about? If not, it seems like this issue should 
be resolved.

> When git fails, script goes into a loop
> ---
>
> Key: KAFKA-15201
> URL: https://issues.apache.org/jira/browse/KAFKA-15201
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Divij Vaidya
>Assignee: Owen C.H. Leung
>Priority: Major
>
> When the git push to the remote fails (say, with an authentication 
> exception), the script runs into a loop. It should not retry, and should 
> fail gracefully instead.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-15201) When git fails, script goes into a loop

2024-04-14 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15201?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-15201:


Assignee: Owen C.H. Leung

> When git fails, script goes into a loop
> ---
>
> Key: KAFKA-15201
> URL: https://issues.apache.org/jira/browse/KAFKA-15201
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: Divij Vaidya
>Assignee: Owen C.H. Leung
>Priority: Major
>
> When the git push to the remote fails (say, with an authentication 
> exception), the script runs into a loop. It should not retry, and should 
> fail gracefully instead.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-4560) Min / Max Partitions Fetch Records params

2024-04-14 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-4560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez resolved KAFKA-4560.
-
Resolution: Abandoned

> Min / Max Partitions Fetch Records params
> -
>
> Key: KAFKA-4560
> URL: https://issues.apache.org/jira/browse/KAFKA-4560
> Project: Kafka
>  Issue Type: Improvement
>  Components: consumer
>Affects Versions: 0.10.0.1
>Reporter: Stephane Maarek
>Priority: Major
>  Labels: features, newbie
>
> There is currently a `max.partition.fetch.bytes` parameter to limit the total 
> size of the fetch call (also a min).
> Sometimes I'd like to control how many records altogether I'm getting at the 
> time and I'd like to see a `max.partition.fetch.records` (also a min).
> If both are specified the first condition that is met would complete the 
> fetch call. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] (KAFKA-15736) KRaft support in PlaintextConsumerTest

2024-04-14 Thread Walter Hernandez (Jira)


[ https://issues.apache.org/jira/browse/KAFKA-15736 ]


Walter Hernandez deleted comment on KAFKA-15736:
--

was (Author: JIRAUSER305029):
https://github.com/apache/kafka/pull/14295

> KRaft support in PlaintextConsumerTest
> --
>
> Key: KAFKA-15736
> URL: https://issues.apache.org/jira/browse/KAFKA-15736
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
>
> The following tests in PlaintextConsumerTest in 
> core/src/test/scala/integration/kafka/api/PlaintextConsumerTest.scala need to 
> be updated to support KRaft
> 49 : def testHeaders(): Unit = {
> 136 : def testDeprecatedPollBlocksForAssignment(): Unit = {
> 144 : def testHeadersSerializerDeserializer(): Unit = {
> 153 : def testMaxPollRecords(): Unit = {
> 169 : def testMaxPollIntervalMs(): Unit = {
> 194 : def testMaxPollIntervalMsDelayInRevocation(): Unit = {
> 234 : def testMaxPollIntervalMsDelayInAssignment(): Unit = {
> 258 : def testAutoCommitOnClose(): Unit = {
> 281 : def testAutoCommitOnCloseAfterWakeup(): Unit = {
> 308 : def testAutoOffsetReset(): Unit = {
> 319 : def testGroupConsumption(): Unit = {
> 339 : def testPatternSubscription(): Unit = {
> 396 : def testSubsequentPatternSubscription(): Unit = {
> 447 : def testPatternUnsubscription(): Unit = {
> 473 : def testCommitMetadata(): Unit = {
> 494 : def testAsyncCommit(): Unit = {
> 513 : def testExpandingTopicSubscriptions(): Unit = {
> 527 : def testShrinkingTopicSubscriptions(): Unit = {
> 541 : def testPartitionsFor(): Unit = {
> 551 : def testPartitionsForAutoCreate(): Unit = {
> 560 : def testPartitionsForInvalidTopic(): Unit = {
> 566 : def testSeek(): Unit = {
> 621 : def testPositionAndCommit(): Unit = {
> 653 : def testPartitionPauseAndResume(): Unit = {
> 671 : def testFetchInvalidOffset(): Unit = {
> 696 : def testFetchOutOfRangeOffsetResetConfigEarliest(): Unit = {
> 717 : def testFetchOutOfRangeOffsetResetConfigLatest(): Unit = {
> 743 : def testFetchRecordLargerThanFetchMaxBytes(): Unit = {
> 772 : def testFetchHonoursFetchSizeIfLargeRecordNotFirst(): Unit = {
> 804 : def testFetchHonoursMaxPartitionFetchBytesIfLargeRecordNotFirst(): Unit 
> = {
> 811 : def testFetchRecordLargerThanMaxPartitionFetchBytes(): Unit = {
> 819 : def testLowMaxFetchSizeForRequestAndPartition(): Unit = {
> 867 : def testRoundRobinAssignment(): Unit = {
> 903 : def testMultiConsumerRoundRobinAssignor(): Unit = {
> 940 : def testMultiConsumerStickyAssignor(): Unit = {
> 986 : def testMultiConsumerDefaultAssignor(): Unit = {
> 1024 : def testRebalanceAndRejoin(assignmentStrategy: String): Unit = {
> 1109 : def testMultiConsumerDefaultAssignorAndVerifyAssignment(): Unit = {
> 1141 : def testMultiConsumerSessionTimeoutOnStopPolling(): Unit = {
> 1146 : def testMultiConsumerSessionTimeoutOnClose(): Unit = {
> 1151 : def testInterceptors(): Unit = {
> 1210 : def testAutoCommitIntercept(): Unit = {
> 1260 : def testInterceptorsWithWrongKeyValue(): Unit = {
> 1286 : def testConsumeMessagesWithCreateTime(): Unit = {
> 1303 : def testConsumeMessagesWithLogAppendTime(): Unit = {
> 1331 : def testListTopics(): Unit = {
> 1351 : def testUnsubscribeTopic(): Unit = {
> 1367 : def testPauseStateNotPreservedByRebalance(): Unit = {
> 1388 : def testCommitSpecifiedOffsets(): Unit = {
> 1415 : def testAutoCommitOnRebalance(): Unit = {
> 1454 : def testPerPartitionLeadMetricsCleanUpWithSubscribe(): Unit = {
> 1493 : def testPerPartitionLagMetricsCleanUpWithSubscribe(): Unit = {
> 1533 : def testPerPartitionLeadMetricsCleanUpWithAssign(): Unit = {
> 1562 : def testPerPartitionLagMetricsCleanUpWithAssign(): Unit = {
> 1593 : def testPerPartitionLagMetricsWhenReadCommitted(): Unit = {
> 1616 : def testPerPartitionLeadWithMaxPollRecords(): Unit = {
> 1638 : def testPerPartitionLagWithMaxPollRecords(): Unit = {
> 1661 : def testQuotaMetricsNotCreatedIfNoQuotasConfigured(): Unit = {
> 1809 : def testConsumingWithNullGroupId(): Unit = {
> 1874 : def testConsumingWithEmptyGroupId(): Unit = {
> 1923 : def testStaticConsumerDetectsNewPartitionCreatedAfterRestart(): Unit = 
> {
> Scanned 1951 lines. Found 0 KRaft tests out of 61 tests



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Assigned] (KAFKA-16262) Add IQv2 to Kafka Streams documentation

2024-04-14 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-16262:


Assignee: Walter Hernandez

> Add IQv2 to Kafka Streams documentation
> ---
>
> Key: KAFKA-16262
> URL: https://issues.apache.org/jira/browse/KAFKA-16262
> Project: Kafka
>  Issue Type: Task
>  Components: docs, streams
>Reporter: Matthias J. Sax
>Assignee: Walter Hernandez
>Priority: Minor
>  Labels: beginner, newbie
>
> The new IQv2 API was added many releases ago. While it is still not feature 
> complete, we should add it to the docs 
> ([https://kafka.apache.org/documentation/streams/developer-guide/interactive-queries.html])
>  to make users aware of the new API so they can start to try it out, report 
> issues, and provide feedback / feature requests.
> We might still state that IQv2 is not yet feature complete, but should change 
> the docs in a way that positions it as the "new API", and have code examples.





Re: [PR] KAFKA-16484: Support to define per broker/controller property by ClusterConfigProperty [kafka]

2024-04-14 Thread via GitHub


chia7712 commented on code in PR #15715:
URL: https://github.com/apache/kafka/pull/15715#discussion_r1564660916


##
core/src/test/java/kafka/test/junit/ClusterTestExtensions.java:
##
@@ -190,10 +192,10 @@ private void processClusterTest(ExtensionContext context, 
ClusterTest annot, Clu
 
 ClusterConfig config = builder.build();

Review Comment:
   > Sure, but that could require some effort here; there are plenty of places 
that directly invoke ClusterConfig#serverProperties and add server properties 
before cluster start, e.g. KafkaServerKRaftRegistrationTest.
   
   yep, but it is worth the effort. We adopt the builder pattern already, so 
the built object should be immutable. If the refactor includes huge changes, we 
can have a separate PR for that. Or we can refactor them one by one.
   
   1. `ClusterConfig`
   2. `BrokerNode`
   3. `ControllerNode` 
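As an illustration of the builder-pattern point above, an immutable config might look like the following minimal sketch. The class and method names here are hypothetical, not Kafka's actual `ClusterConfig`: the builder owns the mutable map, and `build()` takes a defensive, unmodifiable snapshot, so server properties cannot be added after the cluster starts.

```java
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;

// Hypothetical immutable config: the builder owns the mutable state and
// build() takes a defensive, read-only snapshot of it.
final class ImmutableClusterConfig {
    private final Map<String, String> serverProperties;

    private ImmutableClusterConfig(Map<String, String> serverProperties) {
        // Copy, then wrap: later builder mutations cannot leak in, and
        // callers cannot add properties after the cluster has started.
        this.serverProperties = Collections.unmodifiableMap(new HashMap<>(serverProperties));
    }

    Map<String, String> serverProperties() {
        return serverProperties;
    }

    static Builder builder() {
        return new Builder();
    }

    static final class Builder {
        private final Map<String, String> props = new HashMap<>();

        Builder setServerProperty(String key, String value) {
            props.put(key, value);
            return this;
        }

        ImmutableClusterConfig build() {
            return new ImmutableClusterConfig(props);
        }
    }
}

public class Example {
    public static void main(String[] args) {
        ImmutableClusterConfig config = ImmutableClusterConfig.builder()
                .setServerProperty("broker.id", "0")
                .build();
        try {
            config.serverProperties().put("broker.id", "1");
            throw new AssertionError("mutation should have been rejected");
        } catch (UnsupportedOperationException expected) {
            System.out.println("config is immutable: " + config.serverProperties());
        }
    }
}
```

Under this shape, a test that needs extra properties before a restart would build a new config from a fresh builder rather than mutating the old one.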



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: jira-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] KAFKA-16484: Support to define per broker/controller property by ClusterConfigProperty [kafka]

2024-04-14 Thread via GitHub


brandboat commented on code in PR #15715:
URL: https://github.com/apache/kafka/pull/15715#discussion_r1564660683


##
core/src/test/java/kafka/test/junit/ClusterTestExtensions.java:
##
@@ -190,10 +192,10 @@ private void processClusterTest(ExtensionContext context, 
ClusterTest annot, Clu
 
 ClusterConfig config = builder.build();

Review Comment:
   There are places like 
https://github.com/apache/kafka/blob/0b4e9afee2ace7edf6ff8690e070100b98627836/core/src/test/scala/integration/kafka/server/KafkaServerKRaftRegistrationTest.scala#L74
 
need to add extra properties and then restart the cluster. If we make 
ClusterConfig immutable, this may require more effort to think about how we 
handle this scenario. What I want to say is that the work could be huge and 
overwhelm what we want to address in this JIRA, i.e. defining per 
broker/controller properties.






[jira] [Assigned] (KAFKA-2499) kafka-producer-perf-test should use something more realistic than empty byte arrays

2024-04-14 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-2499:
---

Assignee: (was: Walter Hernandez)

> kafka-producer-perf-test should use something more realistic than empty byte 
> arrays
> ---
>
> Key: KAFKA-2499
> URL: https://issues.apache.org/jira/browse/KAFKA-2499
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ben Stopford
>Priority: Major
>  Labels: newbie
>
> ProducerPerformance.scala (There are two of these, one used by the shell 
> script and one used by the system tests. Both exhibit this problem)
> creates messages from empty byte arrays. 
> This is likely to provide unrealistically fast compression and hence 
> unrealistically fast results. 
> Suggest randomised bytes or more realistic sample messages are used. 
> Thanks to Prabhjot Bharaj for reporting this. 
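The ticket's suggestion, randomised payload bytes so that compression behaves realistically, could be sketched like this. The class and method names are hypothetical, not the actual perf-test tool:

```java
import java.util.concurrent.ThreadLocalRandom;

// Illustrative sketch of the ticket's suggestion: fill payloads with
// random bytes instead of zeros so compression ratios are realistic.
// (PayloadExample and randomPayload are hypothetical names.)
public class PayloadExample {
    static byte[] randomPayload(int size) {
        byte[] payload = new byte[size];
        // Random bytes are essentially incompressible, unlike an
        // all-zero array, which compresses almost completely.
        ThreadLocalRandom.current().nextBytes(payload);
        return payload;
    }

    public static void main(String[] args) {
        byte[] payload = randomPayload(1024);
        long zeros = 0;
        for (byte b : payload) {
            if (b == 0) zeros++;
        }
        System.out.println("payload size=" + payload.length + " zero bytes=" + zeros);
    }
}
```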





[jira] [Assigned] (KAFKA-15748) KRaft support in MetricsDuringTopicCreationDeletionTest

2024-04-14 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-15748:


Assignee: (was: Walter Hernandez)

> KRaft support in MetricsDuringTopicCreationDeletionTest
> ---
>
> Key: KAFKA-15748
> URL: https://issues.apache.org/jira/browse/KAFKA-15748
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
>
> The following tests in MetricsDuringTopicCreationDeletionTest in 
> core/src/test/scala/unit/kafka/integration/MetricsDuringTopicCreationDeletionTest.scala
>  need to be updated to support KRaft
> 71 : def testMetricsDuringTopicCreateDelete(): Unit = {
> Scanned 154 lines. Found 0 KRaft tests out of 1 tests





[jira] [Resolved] (KAFKA-15748) KRaft support in MetricsDuringTopicCreationDeletionTest

2024-04-14 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez resolved KAFKA-15748.
--
Resolution: Fixed

Changes can be found here:
[https://github.com/mannoopj/kafka/blob/6796517858cc2789b61b3419bfbcfc6199ccd43f/core/src/test/scala/unit/kafka/integration/MetricsDuringTopicCreationDeletionTest.scala],
where all tests passed:
https://app.harness.io/ng/#/account/vpCkHKsDSxK9_KYfjCTMKA/ci/orgs/default/projects/TI_ML_Replays/pipelines/Test_Pipeline/executions/EUcKsJjtRYu8b1ei0rBFaA/pipeline?stage=CNacpu3LQT2iZqMKpnxHxQ=PiOPgL8gTlKXYFQJScvk8Q

> KRaft support in MetricsDuringTopicCreationDeletionTest
> ---
>
> Key: KAFKA-15748
> URL: https://issues.apache.org/jira/browse/KAFKA-15748
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Walter Hernandez
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
>
> The following tests in MetricsDuringTopicCreationDeletionTest in 
> core/src/test/scala/unit/kafka/integration/MetricsDuringTopicCreationDeletionTest.scala
>  need to be updated to support KRaft
> 71 : def testMetricsDuringTopicCreateDelete(): Unit = {
> Scanned 154 lines. Found 0 KRaft tests out of 1 tests





[jira] [Commented] (KAFKA-15736) KRaft support in PlaintextConsumerTest

2024-04-14 Thread Walter Hernandez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15736?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836950#comment-17836950
 ] 

Walter Hernandez commented on KAFKA-15736:
--

https://github.com/apache/kafka/pull/14295

> KRaft support in PlaintextConsumerTest
> --
>
> Key: KAFKA-15736
> URL: https://issues.apache.org/jira/browse/KAFKA-15736
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
>
> The following tests in PlaintextConsumerTest in 
> core/src/test/scala/integration/kafka/api/PlaintextConsumerTest.scala need to 
> be updated to support KRaft
> 49 : def testHeaders(): Unit = {
> 136 : def testDeprecatedPollBlocksForAssignment(): Unit = {
> 144 : def testHeadersSerializerDeserializer(): Unit = {
> 153 : def testMaxPollRecords(): Unit = {
> 169 : def testMaxPollIntervalMs(): Unit = {
> 194 : def testMaxPollIntervalMsDelayInRevocation(): Unit = {
> 234 : def testMaxPollIntervalMsDelayInAssignment(): Unit = {
> 258 : def testAutoCommitOnClose(): Unit = {
> 281 : def testAutoCommitOnCloseAfterWakeup(): Unit = {
> 308 : def testAutoOffsetReset(): Unit = {
> 319 : def testGroupConsumption(): Unit = {
> 339 : def testPatternSubscription(): Unit = {
> 396 : def testSubsequentPatternSubscription(): Unit = {
> 447 : def testPatternUnsubscription(): Unit = {
> 473 : def testCommitMetadata(): Unit = {
> 494 : def testAsyncCommit(): Unit = {
> 513 : def testExpandingTopicSubscriptions(): Unit = {
> 527 : def testShrinkingTopicSubscriptions(): Unit = {
> 541 : def testPartitionsFor(): Unit = {
> 551 : def testPartitionsForAutoCreate(): Unit = {
> 560 : def testPartitionsForInvalidTopic(): Unit = {
> 566 : def testSeek(): Unit = {
> 621 : def testPositionAndCommit(): Unit = {
> 653 : def testPartitionPauseAndResume(): Unit = {
> 671 : def testFetchInvalidOffset(): Unit = {
> 696 : def testFetchOutOfRangeOffsetResetConfigEarliest(): Unit = {
> 717 : def testFetchOutOfRangeOffsetResetConfigLatest(): Unit = {
> 743 : def testFetchRecordLargerThanFetchMaxBytes(): Unit = {
> 772 : def testFetchHonoursFetchSizeIfLargeRecordNotFirst(): Unit = {
> 804 : def testFetchHonoursMaxPartitionFetchBytesIfLargeRecordNotFirst(): Unit 
> = {
> 811 : def testFetchRecordLargerThanMaxPartitionFetchBytes(): Unit = {
> 819 : def testLowMaxFetchSizeForRequestAndPartition(): Unit = {
> 867 : def testRoundRobinAssignment(): Unit = {
> 903 : def testMultiConsumerRoundRobinAssignor(): Unit = {
> 940 : def testMultiConsumerStickyAssignor(): Unit = {
> 986 : def testMultiConsumerDefaultAssignor(): Unit = {
> 1024 : def testRebalanceAndRejoin(assignmentStrategy: String): Unit = {
> 1109 : def testMultiConsumerDefaultAssignorAndVerifyAssignment(): Unit = {
> 1141 : def testMultiConsumerSessionTimeoutOnStopPolling(): Unit = {
> 1146 : def testMultiConsumerSessionTimeoutOnClose(): Unit = {
> 1151 : def testInterceptors(): Unit = {
> 1210 : def testAutoCommitIntercept(): Unit = {
> 1260 : def testInterceptorsWithWrongKeyValue(): Unit = {
> 1286 : def testConsumeMessagesWithCreateTime(): Unit = {
> 1303 : def testConsumeMessagesWithLogAppendTime(): Unit = {
> 1331 : def testListTopics(): Unit = {
> 1351 : def testUnsubscribeTopic(): Unit = {
> 1367 : def testPauseStateNotPreservedByRebalance(): Unit = {
> 1388 : def testCommitSpecifiedOffsets(): Unit = {
> 1415 : def testAutoCommitOnRebalance(): Unit = {
> 1454 : def testPerPartitionLeadMetricsCleanUpWithSubscribe(): Unit = {
> 1493 : def testPerPartitionLagMetricsCleanUpWithSubscribe(): Unit = {
> 1533 : def testPerPartitionLeadMetricsCleanUpWithAssign(): Unit = {
> 1562 : def testPerPartitionLagMetricsCleanUpWithAssign(): Unit = {
> 1593 : def testPerPartitionLagMetricsWhenReadCommitted(): Unit = {
> 1616 : def testPerPartitionLeadWithMaxPollRecords(): Unit = {
> 1638 : def testPerPartitionLagWithMaxPollRecords(): Unit = {
> 1661 : def testQuotaMetricsNotCreatedIfNoQuotasConfigured(): Unit = {
> 1809 : def testConsumingWithNullGroupId(): Unit = {
> 1874 : def testConsumingWithEmptyGroupId(): Unit = {
> 1923 : def testStaticConsumerDetectsNewPartitionCreatedAfterRestart(): Unit = 
> {
> Scanned 1951 lines. Found 0 KRaft tests out of 61 tests





[jira] [Commented] (KAFKA-15748) KRaft support in MetricsDuringTopicCreationDeletionTest

2024-04-14 Thread Walter Hernandez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-15748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836949#comment-17836949
 ] 

Walter Hernandez commented on KAFKA-15748:
--

I will verify that the merged PR referenced above does indeed resolve the issue 
with the specified unit test.

> KRaft support in MetricsDuringTopicCreationDeletionTest
> ---
>
> Key: KAFKA-15748
> URL: https://issues.apache.org/jira/browse/KAFKA-15748
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Walter Hernandez
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
>
> The following tests in MetricsDuringTopicCreationDeletionTest in 
> core/src/test/scala/unit/kafka/integration/MetricsDuringTopicCreationDeletionTest.scala
>  need to be updated to support KRaft
> 71 : def testMetricsDuringTopicCreateDelete(): Unit = {
> Scanned 154 lines. Found 0 KRaft tests out of 1 tests





[jira] [Assigned] (KAFKA-15748) KRaft support in MetricsDuringTopicCreationDeletionTest

2024-04-14 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez reassigned KAFKA-15748:


Assignee: Walter Hernandez

> KRaft support in MetricsDuringTopicCreationDeletionTest
> ---
>
> Key: KAFKA-15748
> URL: https://issues.apache.org/jira/browse/KAFKA-15748
> Project: Kafka
>  Issue Type: Task
>  Components: core
>Reporter: Sameer Tejani
>Assignee: Walter Hernandez
>Priority: Minor
>  Labels: kraft, kraft-test, newbie
>
> The following tests in MetricsDuringTopicCreationDeletionTest in 
> core/src/test/scala/unit/kafka/integration/MetricsDuringTopicCreationDeletionTest.scala
>  need to be updated to support KRaft
> 71 : def testMetricsDuringTopicCreateDelete(): Unit = {
> Scanned 154 lines. Found 0 KRaft tests out of 1 tests





Re: [PR] KAFKA-16484: Support to define per broker/controller property by ClusterConfigProperty [kafka]

2024-04-14 Thread via GitHub


brandboat commented on code in PR #15715:
URL: https://github.com/apache/kafka/pull/15715#discussion_r1564654806


##
core/src/test/java/kafka/test/junit/ClusterTestExtensions.java:
##
@@ -190,10 +192,10 @@ private void processClusterTest(ExtensionContext context, 
ClusterTest annot, Clu
 
 ClusterConfig config = builder.build();

Review Comment:
   Sure, but that could require some effort here; there are plenty of places 
that directly invoke `ClusterConfig#serverProperties` and add server properties 
before cluster start, e.g. `KafkaServerKRaftRegistrationTest`.






Re: [PR] MINOR: Cleanup in MetadataShell [kafka]

2024-04-14 Thread via GitHub


wernerdv closed pull request #15672: MINOR: Cleanup in MetadataShell
URL: https://github.com/apache/kafka/pull/15672





[jira] [Resolved] (KAFKA-2499) kafka-producer-perf-test should use something more realistic than empty byte arrays

2024-04-14 Thread Walter Hernandez (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-2499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Walter Hernandez resolved KAFKA-2499.
-
Resolution: Invalid

> kafka-producer-perf-test should use something more realistic than empty byte 
> arrays
> ---
>
> Key: KAFKA-2499
> URL: https://issues.apache.org/jira/browse/KAFKA-2499
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ben Stopford
>Assignee: Walter Hernandez
>Priority: Major
>  Labels: newbie
>
> ProducerPerformance.scala (There are two of these, one used by the shell 
> script and one used by the system tests. Both exhibit this problem)
> creates messages from empty byte arrays. 
> This is likely to provide unrealistically fast compression and hence 
> unrealistically fast results. 
> Suggest randomised bytes or more realistic sample messages are used. 
> Thanks to Prabhjot Bharaj for reporting this. 





[jira] [Commented] (KAFKA-2499) kafka-producer-perf-test should use something more realistic than empty byte arrays

2024-04-14 Thread Walter Hernandez (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-2499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836947#comment-17836947
 ] 

Walter Hernandez commented on KAFKA-2499:
-

After researching, KAFKA-6921 removed all Scala-related code and tests 
regarding the ProducerPerformance.* code.

Tickets that were referenced are about modifying the Producer Performance test 
payloads, but are no longer relevant to any supported code base.

This leads me to close this ticket and begin cleaning up older open tickets 
regarding this feature enhancement.

> kafka-producer-perf-test should use something more realistic than empty byte 
> arrays
> ---
>
> Key: KAFKA-2499
> URL: https://issues.apache.org/jira/browse/KAFKA-2499
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Ben Stopford
>Assignee: Walter Hernandez
>Priority: Major
>  Labels: newbie
>
> ProducerPerformance.scala (There are two of these, one used by the shell 
> script and one used by the system tests. Both exhibit this problem)
> creates messages from empty byte arrays. 
> This is likely to provide unrealistically fast compression and hence 
> unrealistically fast results. 
> Suggest randomised bytes or more realistic sample messages are used. 
> Thanks to Prabhjot Bharaj for reporting this. 





Re: [PR] MINOR: Add test for PartitionMetadataFile [kafka]

2024-04-14 Thread via GitHub


KevinZTW commented on PR #15714:
URL: https://github.com/apache/kafka/pull/15714#issuecomment-2053970919

   > @KevinZTW thanks for enhancing the test coverage. Could you also add unit 
tests for the other methods? thanks
   
   @chia7712 no problem! I have added test cases to cover the other methods; 
could you take another look?





Re: [PR] MINOR: Add test for PartitionMetadataFile [kafka]

2024-04-14 Thread via GitHub


KevinZTW commented on code in PR #15714:
URL: https://github.com/apache/kafka/pull/15714#discussion_r1564569432


##
storage/src/test/java/org/apache/kafka/storage/internals/checkpoint/PartitionMetadataFileTest.java:
##
@@ -0,0 +1,98 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kafka.storage.internals.checkpoint;
+
+import org.apache.kafka.common.Uuid;
+import org.apache.kafka.common.errors.InconsistentTopicIdException;
+
+import org.apache.kafka.storage.internals.log.LogDirFailureChannel;
+import org.apache.kafka.test.TestUtils;
+import org.junit.jupiter.api.Test;
+import org.mockito.Mockito;
+
+import java.io.File;
+import java.nio.file.Files;
+import java.util.List;
+
+import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;
+import static org.junit.jupiter.api.Assertions.assertEquals;
+import static org.junit.jupiter.api.Assertions.assertThrows;
+
+public class PartitionMetadataFileTest  {
+private final File dir = TestUtils.tempDirectory();
+
+@Test
+public void testSetRecordWithDifferentTopicId() {
+File file = PartitionMetadataFile.newFile(dir);
+PartitionMetadataFile partitionMetadataFile = new 
PartitionMetadataFile(file, null);
+Uuid topicId = Uuid.randomUuid();
+assertDoesNotThrow(() -> partitionMetadataFile.record(topicId));
+Uuid differentTopicId = Uuid.randomUuid();
+assertThrows(InconsistentTopicIdException.class, () -> 
partitionMetadataFile.record(differentTopicId));
+}
+
+@Test
+public void testSetRecordWithSameTopicId() {
+File file = PartitionMetadataFile.newFile(dir);
+PartitionMetadataFile partitionMetadataFile = new 
PartitionMetadataFile(file, null);
+Uuid topicId = Uuid.randomUuid();
+assertDoesNotThrow(() -> partitionMetadataFile.record(topicId));
+assertDoesNotThrow(() -> partitionMetadataFile.record(topicId));
+}
+
+@Test
+public void testMaybeFlushWithTopicIdPresent() {
+File file = PartitionMetadataFile.newFile(dir);
+PartitionMetadataFile partitionMetadataFile = new 
PartitionMetadataFile(file, null);
+
+Uuid topicId = Uuid.randomUuid();
+assertDoesNotThrow(() -> partitionMetadataFile.record(topicId));
+assertDoesNotThrow(partitionMetadataFile::maybeFlush);
+
+assertDoesNotThrow(() -> {
+List<String> lines = Files.readAllLines(file.toPath());

Review Comment:
   Since the temp file is created and then swapped with the metadata file, I 
found it a bit hard to verify the related behavior directly, so I test the end 
result instead; I hope that is the right approach.






Re: [PR] MINOR: Add test for PartitionMetadataFile [kafka]

2024-04-14 Thread via GitHub


KevinZTW commented on code in PR #15714:
URL: https://github.com/apache/kafka/pull/15714#discussion_r1564564066


##
storage/src/test/java/org/apache/kafka/storage/internals/checkpoint/PartitionMetadataFileTest.java:
##
@@ -0,0 +1,59 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one or more
+ * contributor license agreements. See the NOTICE file distributed with
+ * this work for additional information regarding copyright ownership.
+ * The ASF licenses this file to You under the Apache License, Version 2.0
+ * (the "License"); you may not use this file except in compliance with
+ * the License. You may obtain a copy of the License at
+ *
+ *http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+
+package org.apache.kafka.storage.internals.checkpoint;
+
+import org.apache.kafka.common.Uuid;
+import org.apache.kafka.common.errors.InconsistentTopicIdException;
+import org.apache.kafka.common.utils.Utils;
+
+import org.junit.jupiter.api.AfterEach;
+import org.junit.jupiter.api.Test;
+
+import java.io.File;
+import java.nio.file.Files;
+
+import static org.junit.jupiter.api.Assertions.assertDoesNotThrow;
+import static org.junit.jupiter.api.Assertions.assertThrows;
+
+public class PartitionMetadataFileTest  {
+private final File dir = assertDoesNotThrow(() -> 
Files.createTempDirectory("tmp")).toFile();

Review Comment:
   @chia7712 Thanks! I did try that one and received the error message, so I 
thought it was intentionally forbidden.
   @brandboat Much thanks for pointing me in the right direction!






Re: [PR] KAFKA-16490: Upgrade gradle from 8.6 to 8.7 [kafka]

2024-04-14 Thread via GitHub


chia7712 commented on PR #15716:
URL: https://github.com/apache/kafka/pull/15716#issuecomment-2053949996

   @chiacyu Could you also update the wrapper link? 
https://raw.githubusercontent.com/gradle/gradle/v8.7.0/gradle/wrapper/gradle-wrapper.jar





Re: [PR] MINOR: Various cleanups in shell [kafka]

2024-04-14 Thread via GitHub


chia7712 merged PR #15712:
URL: https://github.com/apache/kafka/pull/15712





Re: [PR] MINOR: Various cleanups in shell [kafka]

2024-04-14 Thread via GitHub


chia7712 commented on PR #15712:
URL: https://github.com/apache/kafka/pull/15712#issuecomment-2053949370

   ```
   ./gradlew cleanTest :streams:test --tests 
EOSUncleanShutdownIntegrationTest.shouldWorkWithUncleanShutdownWipeOutStateStore
 :storage:test --tests ReassignReplicaMoveTest.executeTieredStorageTest 
:metadata:test --tests QuorumControllerTest.testFenceMultipleBrokers --tests 
QuorumControllerTest.testConfigurationOperations :trogdor:test --tests 
CoordinatorTest.testTaskRequestWithOldStartMsGetsUpdated :connect:mirror:test 
--tests MirrorConnectorsIntegrationBaseTest.testReplicateSourceDefault --tests 
MirrorConnectorsIntegrationExactlyOnceTest.testReplicateSourceDefault --tests 
MirrorConnectorsIntegrationSSLTest.testReplicateSourceDefault --tests 
MirrorConnectorsWithCustomForwardingAdminIntegrationTest.testReplicateSourceDefault
 :core:test --tests 
DelegationTokenEndToEndAuthorizationWithOwnerTest.testCreateUserWithDelegationToken
 --tests SaslSslConsumerTest.testCoordinatorFailover --tests 
ControllerIntegrationTest.testTopicIdPersistsThroughControllerRestart
   ```
   Those failed tests pass on my local machine; will merge this PR.





Re: [PR] MINOR: Various cleanups in trogdor [kafka]

2024-04-14 Thread via GitHub


chia7712 commented on PR #15708:
URL: https://github.com/apache/kafka/pull/15708#issuecomment-2053947267

   Could you rebase the code to trigger QA for `JDK 17 and Scala 2.13`? The 
build did not complete.





[jira] [Assigned] (KAFKA-16547) add test for DescribeConfigsOptions#includeDocumentation

2024-04-14 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-16547:
--

Assignee: Yu-Lin Chen  (was: Yu-Lin Chen)

> add test for DescribeConfigsOptions#includeDocumentation
> 
>
> Key: KAFKA-16547
> URL: https://issues.apache.org/jira/browse/KAFKA-16547
> Project: Kafka
>  Issue Type: Test
>Reporter: Chia-Ping Tsai
>Assignee: Yu-Lin Chen
>Priority: Major
>
> As the title says, we have no tests for the query option.
> If the option is configured to false, `ConfigEntry#documentation` should be 
> null; otherwise, it should return the config documentation.





[jira] [Assigned] (KAFKA-16547) add test for DescribeConfigsOptions#includeDocumentation

2024-04-14 Thread Chia-Ping Tsai (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chia-Ping Tsai reassigned KAFKA-16547:
--

Assignee: Yu-Lin Chen  (was: Chia-Ping Tsai)

> add test for DescribeConfigsOptions#includeDocumentation
> 
>
> Key: KAFKA-16547
> URL: https://issues.apache.org/jira/browse/KAFKA-16547
> Project: Kafka
>  Issue Type: Test
>Reporter: Chia-Ping Tsai
>Assignee: Yu-Lin Chen
>Priority: Major
>
> As the title says, we have no tests for the query option.
> If the option is configured to false, `ConfigEntry#documentation` should be 
> null; otherwise, it should return the config documentation.





[jira] [Commented] (KAFKA-16547) add test for DescribeConfigsOptions#includeDocumentation

2024-04-14 Thread Yu-Lin Chen (Jira)


[ 
https://issues.apache.org/jira/browse/KAFKA-16547?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=17836895#comment-17836895
 ] 

Yu-Lin Chen commented on KAFKA-16547:
-

Hi [~chia7712],
I would like to fix it. Could you assign this Jira to me?

> add test for DescribeConfigsOptions#includeDocumentation
> 
>
> Key: KAFKA-16547
> URL: https://issues.apache.org/jira/browse/KAFKA-16547
> Project: Kafka
>  Issue Type: Test
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
>Priority: Major
>
> As the title says, we have no tests for the query option.
> If the option is configured to false, `ConfigEntry#documentation` should be 
> null; otherwise, it should return the config documentation.


