Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2668

2024-02-22 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 466248 lines...]
[2024-02-23T03:51:16.963Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testTopicAndBrokerConfigsMigrationWithSnapshots() 
PASSED
[2024-02-23T03:51:16.963Z] 
[2024-02-23T03:51:16.963Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testClaimAndReleaseExistingController() STARTED
[2024-02-23T03:51:16.963Z] 
[2024-02-23T03:51:16.963Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testClaimAndReleaseExistingController() PASSED
[2024-02-23T03:51:16.963Z] 
[2024-02-23T03:51:16.963Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testClaimAbsentController() STARTED
[2024-02-23T03:51:18.808Z] 
[2024-02-23T03:51:18.808Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testClaimAbsentController() PASSED
[2024-02-23T03:51:18.808Z] 
[2024-02-23T03:51:18.808Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testIdempotentCreateTopics() STARTED
[2024-02-23T03:51:18.808Z] 
[2024-02-23T03:51:18.808Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testIdempotentCreateTopics() PASSED
[2024-02-23T03:51:18.808Z] 
[2024-02-23T03:51:18.808Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testCreateNewTopic() STARTED
[2024-02-23T03:51:18.808Z] 
[2024-02-23T03:51:18.808Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testCreateNewTopic() PASSED
[2024-02-23T03:51:18.808Z] 
[2024-02-23T03:51:18.808Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testUpdateExistingTopicWithNewAndChangedPartitions() 
STARTED
[2024-02-23T03:51:18.808Z] 
[2024-02-23T03:51:18.808Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZkMigrationClientTest > testUpdateExistingTopicWithNewAndChangedPartitions() 
PASSED
[2024-02-23T03:51:18.808Z] 
[2024-02-23T03:51:18.808Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testZNodeChangeHandlerForDataChange() STARTED
[2024-02-23T03:51:18.808Z] 
[2024-02-23T03:51:18.808Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testZNodeChangeHandlerForDataChange() PASSED
[2024-02-23T03:51:18.808Z] 
[2024-02-23T03:51:18.808Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testZooKeeperSessionStateMetric() STARTED
[2024-02-23T03:51:18.808Z] 
[2024-02-23T03:51:18.808Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testZooKeeperSessionStateMetric() PASSED
[2024-02-23T03:51:18.808Z] 
[2024-02-23T03:51:18.808Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testExceptionInBeforeInitializingSession() STARTED
[2024-02-23T03:51:20.546Z] 
[2024-02-23T03:51:20.546Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testExceptionInBeforeInitializingSession() PASSED
[2024-02-23T03:51:20.546Z] 
[2024-02-23T03:51:20.546Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testGetChildrenExistingZNode() STARTED
[2024-02-23T03:51:20.546Z] 
[2024-02-23T03:51:20.546Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testGetChildrenExistingZNode() PASSED
[2024-02-23T03:51:20.546Z] 
[2024-02-23T03:51:20.546Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testConnection() STARTED
[2024-02-23T03:51:20.546Z] 
[2024-02-23T03:51:20.546Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testConnection() PASSED
[2024-02-23T03:51:20.546Z] 
[2024-02-23T03:51:20.546Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testZNodeChangeHandlerForCreation() STARTED
[2024-02-23T03:51:20.546Z] 
[2024-02-23T03:51:20.546Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testZNodeChangeHandlerForCreation() PASSED
[2024-02-23T03:51:20.546Z] 
[2024-02-23T03:51:20.546Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testGetAclExistingZNode() STARTED
[2024-02-23T03:51:20.546Z] 
[2024-02-23T03:51:20.546Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testGetAclExistingZNode() PASSED
[2024-02-23T03:51:20.546Z] 
[2024-02-23T03:51:20.546Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testSessionExpiryDuringClose() STARTED
[2024-02-23T03:51:22.288Z] 
[2024-02-23T03:51:22.288Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest > testSessionExpiryDuringClose() PASSED
[2024-02-23T03:51:22.288Z] 
[2024-02-23T03:51:22.288Z] Gradle Test Run :core:test > Gradle Test Executor 94 
> ZooKeeperClientTest 

Build failed in Jenkins: Kafka » Kafka Branch Builder » 3.7 #100

2024-02-22 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 457854 lines...]
[2024-02-23T03:10:10.044Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZkMigrationIntegrationTest > 
testPartitionReassignmentInHybridMode(ClusterInstance) > 
testPartitionReassignmentInHybridMode [1] Type=ZK, MetadataVersion=3.7-IV0, 
Security=PLAINTEXT PASSED
[2024-02-23T03:10:10.044Z] 
[2024-02-23T03:10:10.044Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZkMigrationIntegrationTest > testDualWriteScram(ClusterInstance) > 
testDualWriteScram [1] Type=ZK, MetadataVersion=3.5-IV2, Security=PLAINTEXT 
STARTED
[2024-02-23T03:10:20.051Z] 
[2024-02-23T03:10:20.051Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZkMigrationIntegrationTest > testDualWriteScram(ClusterInstance) > 
testDualWriteScram [1] Type=ZK, MetadataVersion=3.5-IV2, Security=PLAINTEXT 
PASSED
[2024-02-23T03:10:20.051Z] 
[2024-02-23T03:10:20.051Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZkMigrationIntegrationTest > 
testNewAndChangedTopicsInDualWrite(ClusterInstance) > 
testNewAndChangedTopicsInDualWrite [1] Type=ZK, MetadataVersion=3.4-IV0, 
Security=PLAINTEXT STARTED
[2024-02-23T03:10:30.454Z] 
[2024-02-23T03:10:30.454Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZkMigrationIntegrationTest > 
testNewAndChangedTopicsInDualWrite(ClusterInstance) > 
testNewAndChangedTopicsInDualWrite [1] Type=ZK, MetadataVersion=3.4-IV0, 
Security=PLAINTEXT PASSED
[2024-02-23T03:10:30.454Z] 
[2024-02-23T03:10:30.454Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZkMigrationIntegrationTest > testDualWriteQuotaAndScram(ClusterInstance) > 
testDualWriteQuotaAndScram [1] Type=ZK, MetadataVersion=3.5-IV2, 
Security=PLAINTEXT STARTED
[2024-02-23T03:10:40.971Z] 
[2024-02-23T03:10:40.971Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZkMigrationIntegrationTest > testDualWriteQuotaAndScram(ClusterInstance) > 
testDualWriteQuotaAndScram [1] Type=ZK, MetadataVersion=3.5-IV2, 
Security=PLAINTEXT PASSED
[2024-02-23T03:10:40.971Z] 
[2024-02-23T03:10:40.971Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZkMigrationIntegrationTest > testMigrate(ClusterInstance) > testMigrate [1] 
Type=ZK, MetadataVersion=3.4-IV0, Security=PLAINTEXT STARTED
[2024-02-23T03:10:45.905Z] 
[2024-02-23T03:10:45.905Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZkMigrationIntegrationTest > testMigrate(ClusterInstance) > testMigrate [1] 
Type=ZK, MetadataVersion=3.4-IV0, Security=PLAINTEXT PASSED
[2024-02-23T03:10:45.905Z] 
[2024-02-23T03:10:45.905Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZkMigrationIntegrationTest > testMigrateAcls(ClusterInstance) > 
testMigrateAcls [1] Type=ZK, MetadataVersion=3.4-IV0, Security=PLAINTEXT STARTED
[2024-02-23T03:10:47.490Z] 
[2024-02-23T03:10:47.490Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZkMigrationIntegrationTest > testMigrateAcls(ClusterInstance) > 
testMigrateAcls [1] Type=ZK, MetadataVersion=3.4-IV0, Security=PLAINTEXT PASSED
[2024-02-23T03:10:47.490Z] 
[2024-02-23T03:10:47.490Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZkMigrationIntegrationTest > testStartZkBrokerWithAuthorizer(ClusterInstance) 
> testStartZkBrokerWithAuthorizer [1] Type=ZK, MetadataVersion=3.4-IV0, 
Security=PLAINTEXT STARTED
[2024-02-23T03:10:59.940Z] 
[2024-02-23T03:10:59.940Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZkMigrationIntegrationTest > testStartZkBrokerWithAuthorizer(ClusterInstance) 
> testStartZkBrokerWithAuthorizer [1] Type=ZK, MetadataVersion=3.4-IV0, 
Security=PLAINTEXT PASSED
[2024-02-23T03:10:59.940Z] 
[2024-02-23T03:10:59.940Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZkMigrationIntegrationTest > testDualWrite(ClusterInstance) > testDualWrite 
[1] Type=ZK, MetadataVersion=3.4-IV0, Security=PLAINTEXT STARTED
[2024-02-23T03:11:12.448Z] 
[2024-02-23T03:11:12.448Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZkMigrationIntegrationTest > testDualWrite(ClusterInstance) > testDualWrite 
[1] Type=ZK, MetadataVersion=3.4-IV0, Security=PLAINTEXT PASSED
[2024-02-23T03:11:12.448Z] 
[2024-02-23T03:11:12.448Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZkMigrationIntegrationTest > testDualWrite(ClusterInstance) > testDualWrite 
[2] Type=ZK, MetadataVersion=3.5-IV2, Security=PLAINTEXT STARTED
[2024-02-23T03:11:22.650Z] 
[2024-02-23T03:11:22.650Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZkMigrationIntegrationTest > testDualWrite(ClusterInstance) > testDualWrite 
[2] Type=ZK, MetadataVersion=3.5-IV2, Security=PLAINTEXT PASSED
[2024-02-23T03:11:22.650Z] 
[2024-02-23T03:11:22.650Z] Gradle Test Run :core:test > Gradle Test Executor 96 
> ZkMigrationIntegrationTest > testDualWrite(ClusterInstance) > testDualWrite 
[3] Type=ZK, MetadataVersion=3.6-IV2, Security=PLAINTEXT STARTED

[jira] [Resolved] (KAFKA-12549) Allow state stores to opt-in transactional support

2024-02-22 Thread Matthias J. Sax (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-12549?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matthias J. Sax resolved KAFKA-12549.
-
Resolution: Duplicate

Closing this ticket in favor of KAFKA-14412.

> Allow state stores to opt-in transactional support
> --
>
> Key: KAFKA-12549
> URL: https://issues.apache.org/jira/browse/KAFKA-12549
> Project: Kafka
>  Issue Type: New Feature
>  Components: streams
>Reporter: Guozhang Wang
>Priority: Major
>
> Right now Kafka Streams' EOS implementation does not make any assumptions 
> about the state store's transactional support. Allowing the state stores to 
> optionally provide transactional support can have multiple benefits. E.g., 
> suppose we add some APIs to the {{StateStore}} interface, like {{beginTxn}}, 
> {{commitTxn}} and {{abortTxn}}. The Streams library can determine if these are 
> supported via an additional {{boolean transactional()}} API, and if so, these 
> APIs can be used under both ALOS and EOS as follows (otherwise it just falls 
> back to the normal processing logic):
> Within thread processing loops:
> 1. store.beginTxn
> 2. store.put // during processing
> 3. streams commit // either through eos protocol or not
> 4. store.commitTxn
> 5. start the next txn by store.beginTxn
> If the state stores allow Streams to do something like above, we can have the 
> following benefits:
> * Reduce the duplicated records upon crashes for ALOS (note this is still 
> not EOS, but some middle ground where uncommitted data within a state store 
> would not be retained if store.commitTxn failed).
> * No need to wipe the state store and re-bootstrap from scratch upon crashes 
> for EOS. E.g., if a crash failure happened between the completion of the 
> Streams commit and store.commitTxn, we can instead just roll forward the 
> transaction by replaying the changelog from the second most recent Streams 
> committed offset to the most recent committed offset.
> * Remote stores that support txn then do not need to support wiping 
> (https://issues.apache.org/jira/browse/KAFKA-12475).
> * We can fix the known issues of emit-on-change 
> (https://cwiki.apache.org/confluence/display/KAFKA/KIP-557%3A+Add+emit+on+change+support+for+Kafka+Streams).
> * We can support "query committed data only" for interactive queries (see 
> below for reasons).
> As for the implementation of these APIs, there are several options:
> * The state store itself has native transaction features (e.g. RocksDB).
> * Use an in-memory buffer for all puts within a transaction, and upon 
> `commitTxn` write the whole buffer as a batch to the underlying state store, 
> or just drop the whole buffer upon aborting. Then for interactive queries, 
> one can optionally query only the underlying store for committed data.
> * Use a separate store as the transient persistent buffer. Upon `beginTxn` 
> create a new empty transient store, and upon `commitTxn` merge the store into 
> the underlying store. The same applies for interactive querying of 
> committed-only data. Compared with the option above, this has the benefit 
> that there is no memory pressure even with long transactions, but it incurs 
> more complexity / performance overhead with the separate persistent store.
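
A minimal Java sketch of the per-loop protocol in steps 1-5 above, assuming 
the hypothetical beginTxn/commitTxn/abortTxn and boolean transactional() 
additions to StateStore proposed in this ticket (none of these exist in the 
released Streams API; all names are illustrative):

```
// Hypothetical extension of org.apache.kafka.streams.processor.StateStore,
// as proposed in this ticket; none of these methods exist today.
interface TransactionalStateStore {
    boolean transactional();        // does this store support transactions?
    void beginTxn();
    void put(byte[] key, byte[] value);
    void commitTxn();
    void abortTxn();
}

class ProcessingLoopSketch {
    void processingLoop(TransactionalStateStore store) {
        if (!store.transactional()) {
            return;                                  // fall back to the normal processing logic
        }
        store.beginTxn();                            // 1. open a store transaction
        store.put("k".getBytes(), "v".getBytes());   // 2. writes during processing
        streamsCommit();                             // 3. Streams commit (EOS protocol or not)
        store.commitTxn();                           // 4. make the buffered writes durable
        store.beginTxn();                            // 5. start the next transaction
    }

    void streamsCommit() { /* offset/changelog commit elided */ }
}
```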



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [DISCUSS] KIP-844: Transactional State Stores

2024-02-22 Thread Matthias J. Sax
To close the loop on this thread: KIP-892 was accepted and is currently 
being implemented. Thus I'll go ahead and mark this KIP as discarded.


Thanks a lot, Alex, for spending so much time on this very important 
feature! Without your groundwork, we would not have KIP-892, and your 
contributions have not gone unnoticed!


-Matthias


On 11/21/22 5:12 AM, Nick Telford wrote:

Hi Alex,

Thanks for getting back to me. I actually have most of a working
implementation already. I'm going to write it up as a new KIP, so that it
can be reviewed independently of KIP-844.

Hopefully, working together we can have it ready sooner.

I'll keep you posted on my progress.

Regards,
Nick

On Mon, 21 Nov 2022 at 11:25, Alexander Sorokoumov
 wrote:


Hey Nick,

Thank you for the prototype testing and benchmarking, and sorry for the
late reply!

I agree that it is worth revisiting the WriteBatchWithIndex approach. I
will implement a fork of the current prototype that uses that mechanism to
ensure transactionality and let you know when it is ready for
review/testing in this ML thread.

As for time estimates, I might not have enough time to finish the prototype
in December, so it will probably be ready for review in January.

Best,
Alex

On Fri, Nov 11, 2022 at 4:24 PM Nick Telford 
wrote:


Hi everyone,

Sorry to dredge this up again. I've had a chance to start doing some
testing with the WIP Pull Request, and it appears as though the secondary
store solution performs rather poorly.

In our testing, we had a non-transactional state store that would restore
(from scratch), at a rate of nearly 1,000,000 records/second. When we
switched it to a transactional store, it restored at a rate of less than
40,000 records/second.

I suspect the key issues here are having to copy the data out of the
temporary store and into the main store on-commit, and to a lesser extent,
the extra memory copies during writes.

I think it's worth re-visiting the WriteBatchWithIndex solution, as it's
clear from the RocksDB post[1] on the subject that it's the recommended
way to achieve transactionality.
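
For reference, here is a minimal sketch of the WriteBatchWithIndex mechanism
under discussion, using the RocksDB Java API directly (the path and keys are
illustrative; this is not the actual Streams integration):

```
import org.rocksdb.*;

public class WriteBatchWithIndexSketch {
    public static void main(String[] args) throws RocksDBException {
        RocksDB.loadLibrary();
        try (Options options = new Options().setCreateIfMissing(true);
             RocksDB db = RocksDB.open(options, "/tmp/wbwi-demo");
             // true = overwrite duplicate keys within the batch
             WriteBatchWithIndex batch = new WriteBatchWithIndex(true);
             ReadOptions readOptions = new ReadOptions();
             WriteOptions writeOptions = new WriteOptions()) {

            // Uncommitted write: buffered (and indexed) in memory only.
            batch.put("key".getBytes(), "uncommitted".getBytes());

            // Read-your-own-writes: checks the batch first, then the DB.
            byte[] value = batch.getFromBatchAndDB(db, readOptions, "key".getBytes());
            System.out.println(new String(value));

            // Commit: atomically apply the whole batch to the DB...
            db.write(writeOptions, batch);
            // ...or abort by discarding the buffered writes instead:
            // batch.clear();
        }
    }
}
```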

The only issue you identified with this solution was that uncommitted
writes are required to entirely fit in-memory, and RocksDB recommends they
don't exceed 3-4MiB. If we do some back-of-the-envelope calculations, I
think we'll find that this will be a non-issue for all but the most extreme
cases, and for those, I think I have a fairly simple solution.

Firstly, when EOS is enabled, the default commit.interval.ms is set to
100ms, which provides fairly short intervals that uncommitted writes need
to be buffered in-memory. If we assume a worst case of 1024 byte records
(and for most cases, they should be much smaller), then 4MiB would hold
~4096 records, which with 100ms commit intervals is a throughput of
approximately 40,960 records/second. This seems quite reasonable.

For use cases that wouldn't reasonably fit in-memory, my suggestion is that
we have a mechanism that tracks the number/size of uncommitted records in
stores, and prematurely commits the Task when this size exceeds a
configured threshold.
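
A rough sketch of that suggested mechanism, assuming a hypothetical hook
into the task's commit path (the class and callbacks are invented for
illustration; nothing like this exists in Streams today):

```
// Hypothetical per-task tracker for uncommitted store writes.
class UncommittedWriteTracker {
    private final long maxUncommittedBytes;
    private long uncommittedBytes = 0;

    UncommittedWriteTracker(long maxUncommittedBytes) {
        this.maxUncommittedBytes = maxUncommittedBytes;
    }

    // Called on every store write with the serialized record size.
    void onWrite(int recordBytes) {
        uncommittedBytes += recordBytes;
    }

    // Checked between records: commit the Task early if the buffered
    // uncommitted state would outgrow the in-memory batch budget.
    boolean shouldCommitEarly() {
        return uncommittedBytes >= maxUncommittedBytes;
    }

    // Called after a successful commit to reset the counter.
    void onCommit() {
        uncommittedBytes = 0;
    }
}
```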

Thanks for your time, and let me know what you think!
--
Nick

1: https://rocksdb.org/blog/2015/02/27/write-batch-with-index.html

On Thu, 6 Oct 2022 at 19:31, Alexander Sorokoumov
 wrote:


Hey Nick,

It is going to be option c. Existing state is considered to be committed
and there will be an additional RocksDB for uncommitted writes.

I am out of office until October 24. I will update the KIP and make sure
that we have an upgrade test for that after coming back from vacation.

Best,
Alex

On Thu, Oct 6, 2022 at 5:06 PM Nick Telford 
wrote:


Hi everyone,

I realise this has already been voted on and accepted, but it occurred to
me today that the KIP doesn't define the migration/upgrade path for
existing non-transactional StateStores that *become* transactional, i.e.
by adding the transactional boolean to the StateStore constructor.

What would be the result, when such a change is made to a Topology,
without explicitly wiping the application state?
a) An error.
b) Local state is wiped.
c) Existing RocksDB database is used as committed writes and a new
RocksDB database is created for uncommitted writes.
d) Something else?

Regards,

Nick

On Thu, 1 Sept 2022 at 12:16, Alexander Sorokoumov
 wrote:


Hey Guozhang,

Sounds good. I annotated all added StateStore methods (commit, recover,
transactional) with @Evolving.

Best,
Alex



On Wed, Aug 31, 2022 at 7:32 PM Guozhang Wang 

wrote:



Hello Alex,

Thanks for the detailed replies, I think that makes sense, and in the long
run we would need some public indicators from StateStore to determine if
checkpoints can really be used to indicate clean snapshots.

As for the @Evolving label, I think we can still keep it but for a
different reason, since as we add more state management functionalities in
the near future we may need to revisit the public APIs again and hence
keeping it as @Evolving 

[jira] [Created] (KAFKA-16303) Add upgrade notes about recent MM2 offset translation changes

2024-02-22 Thread Greg Harris (Jira)
Greg Harris created KAFKA-16303:
---

 Summary: Add upgrade notes about recent MM2 offset translation 
changes
 Key: KAFKA-16303
 URL: https://issues.apache.org/jira/browse/KAFKA-16303
 Project: Kafka
  Issue Type: Task
  Components: mirrormaker
Reporter: Greg Harris
Assignee: Greg Harris






--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-16302) Builds failing due to streams test execution failures

2024-02-22 Thread Justine Olshan (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-16302?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Justine Olshan resolved KAFKA-16302.

Resolution: Fixed

> Builds failing due to streams test execution failures
> -
>
> Key: KAFKA-16302
> URL: https://issues.apache.org/jira/browse/KAFKA-16302
> Project: Kafka
>  Issue Type: Task
>  Components: streams, unit tests
>Reporter: Justine Olshan
>Assignee: Justine Olshan
>Priority: Major
>
> I'm seeing this on master and many PR builds for all versions:
>  
> {code:java}
> [2024-02-22T14:37:07.076Z] * What went wrong:
> [2024-02-22T14:37:07.076Z] Execution failed for task ':streams:test'.
> [2024-02-22T14:37:07.076Z] > The following test methods could not be retried, which is unexpected. Please file a bug report at https://github.com/gradle/test-retry-gradle-plugin/issues
> [2024-02-22T14:37:07.076Z] org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest#shouldLogAndMeasureExpiredRecords[org.apache.kafka.streams.state.internals.SessionKeySchema@78d39a69]
> [2024-02-22T14:37:07.076Z] org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest#shouldLogAndMeasureExpiredRecords[org.apache.kafka.streams.state.internals.WindowKeySchema@3c818ac4]
> [2024-02-22T14:37:07.076Z] org.apache.kafka.streams.state.internals.RocksDBTimestampedSegmentedBytesStoreTest#shouldLogAndMeasureExpiredRecords[org.apache.kafka.streams.state.internals.WindowKeySchema@251f7d26]
> [2024-02-22T14:37:07.076Z] org.apache.kafka.streams.state.internals.RocksDBTimestampedSegmentedBytesStoreTest#shouldLogAndMeasureExpiredRecords[org.apache.kafka.streams.state.internals.SessionKeySchema@52c8295b]
> {code}
> (Full build log: https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline)
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] 37: Add latest apache/kafka/3.7 site-docs [kafka-site]

2024-02-22 Thread via GitHub


stanislavkozlovski commented on PR #587:
URL: https://github.com/apache/kafka-site/pull/587#issuecomment-1960520431

   @mimaison @divijvaidya these are the upgrade notes to be merged into the 
kafka-site repo


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [WIP] 37: Add latest apache/kafka/3.7 site-docs [kafka-site]

2024-02-22 Thread via GitHub


stanislavkozlovski commented on code in PR #587:
URL: https://github.com/apache/kafka-site/pull/587#discussion_r1500054892


##
37/streams/developer-guide/config-streams.html:
##
@@ -730,7 +713,6 @@ rack.aware.assignment.strategy
   none. This is the default value which means rack 
aware task assignment will be disabled.
   min_traffic. This settings means that the rack aware 
task assigner will compute an assignment which tries to minimize cross rack 
traffic.
-  balance_subtopology. This settings means that the 
rack aware task assigner will compute an assignment which will try to balance 
tasks from same subtopology to different clients and minimize cross rack 
traffic on top of that.

Review Comment:
   need to double check this and the above



##
37/security.html:
##
@@ -54,7 +54,7 @@ The LISTENER_NAME is usually a descriptive name which 
defines the purpose of
   the listener. For example, many configurations use a separate listener 
for client traffic,
-  so they might refer to the corresponding listener as CLIENT 
in the configuration:
+  so they might refer to the corresponding listener as CLIENT 
in the configuration: - need to fix in apache/kafka



##
37/generated/kafka_config.html:
##
@@ -1357,7 +1357,7 @@ The LISTENER_NAME is usually a descriptive name which 
defines the purpose of
   the listener. For example, many configurations use a separate listener 
for client traffic,
-  so they might refer to the corresponding listener as CLIENT 
in the configuration:
+  so they might refer to the corresponding listener as CLIENT 
in the configuration: - need to fix in apache/kafka



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] 3.7: Add blog post for Kafka 3.7 [kafka-site]

2024-02-22 Thread via GitHub


stanislavkozlovski commented on code in PR #578:
URL: https://github.com/apache/kafka-site/pull/578#discussion_r1500071774


##
blog.html:
##
@@ -22,6 +22,128 @@
 
 
 Blog
+  
+
+
+Apache Kafka 3.7.0 Release Announcement
+
+February 2024 - Stanislav Kozlovski (<a href="https://twitter.com/BdKozlovski">@BdKozlovski</a>)
+We are proud to announce the release of Apache Kafka 3.7.0. This release contains many new features and improvements. This blog post will 
highlight some of the more prominent features. For a full list of changes, be 
sure to check the <a href="https://downloads.apache.org/kafka/3.7.0/RELEASE_NOTES.html">release notes</a>.
+See the <a href="https://kafka.apache.org/documentation.html#upgrade_3_7_0">Upgrading to 
3.7.0 from any version 0.8.x through 3.6.x</a> section in the documentation for 
the list of notable changes and detailed upgrade steps.
+
+In the last release, 3.6,
+<a href="https://kafka.apache.org/documentation/#kraft_zk_migration">the ability 
to migrate Kafka clusters from a ZooKeeper metadata system to a KRaft metadata 
system</a> was ready for usage in production environments with one caveat -- 
JBOD was not yet available for KRaft clusters.
+In this release, we are shipping an early access release 
of JBOD in KRaft. (See <a href="https://cwiki.apache.org/confluence/display/KAFKA/KIP-858%3A+Handle+JBOD+broker+disk+failure+in+KRaft">KIP-858</a>
 and the <a href="https://cwiki.apache.org/confluence/display/KAFKA/Kafka+JBOD+in+KRaft+Early+Access+Release+Notes">release
 notes</a> for details).

Review Comment:
   added more detail here



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] 3.7: Add blog post for Kafka 3.7 [kafka-site]

2024-02-22 Thread via GitHub


stanislavkozlovski commented on PR #578:
URL: https://github.com/apache/kafka-site/pull/578#issuecomment-1960516750

   I rebased the branch to have just the blog.html changes, but that closed the 
PR... I'm not sure why. I will open a new one


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] 3.7: Add blog post for Kafka 3.7 [kafka-site]

2024-02-22 Thread via GitHub


stanislavkozlovski opened a new pull request, #578:
URL: https://github.com/apache/kafka-site/pull/578

   This patch adds the blog post for the 3.7 release


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] 3.7: Add blog post for Kafka 3.7 [kafka-site]

2024-02-22 Thread via GitHub


stanislavkozlovski closed pull request #578: 3.7: Add blog post for Kafka 3.7
URL: https://github.com/apache/kafka-site/pull/578


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [WIP] 37: Add latest apache/kafka/3.7 site-docs [kafka-site]

2024-02-22 Thread via GitHub


stanislavkozlovski opened a new pull request, #587:
URL: https://github.com/apache/kafka-site/pull/587

   WIP
   
   This patch adds the latest Apache Kafka site-docs to the kafka-site repo


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Copy over apache/kafka/3.6 docs into here [kafka-site]

2024-02-22 Thread via GitHub


stanislavkozlovski commented on PR #586:
URL: https://github.com/apache/kafka-site/pull/586#issuecomment-1960469437

   This is still WIP, I should have marked it as such - sorry


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Copy over apache/kafka/3.6 docs into here [kafka-site]

2024-02-22 Thread via GitHub


stanislavkozlovski commented on code in PR #586:
URL: https://github.com/apache/kafka-site/pull/586#discussion_r1500034979


##
36/documentation.html:
##
@@ -54,12 +54,10 @@ Kafka 3.6 Documentation
 2.6.X, 
 2.7.X,
 2.8.X,
-3.0.X,
-3.1.X,
-3.2.X,
-3.3.X,
-3.4.X,
-3.5.X.
+3.0.X.
+3.1.X.
+3.2.X.
+3.3.X.

Review Comment:
   yes, I don't intend to - I captured it in 
https://github.com/apache/kafka-site/pull/586/files#r1499488695 - that commit 
is missing. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Copy over apache/kafka/3.6 docs into here [kafka-site]

2024-02-22 Thread via GitHub


stanislavkozlovski commented on code in PR #586:
URL: https://github.com/apache/kafka-site/pull/586#discussion_r1500034521


##
36/upgrade.html:
##
@@ -214,10 +214,8 @@ Upgrading KRaft-based cl
 ./bin/kafka-features.sh upgrade --metadata 3.5
 
 
-Note that cluster metadata downgrade is not supported in this 
version since it has metadata changes.
-Every https://github.com/apache/kafka/blob/trunk/server-common/src/main/java/org/apache/kafka/server/common/MetadataVersion.java;>MetadataVersion
-after 3.2.x has a boolean parameter that indicates if there are 
metadata changes (i.e. IBP_3_3_IV3(7, "3.3", "IV3", true) means 
this version has metadata changes).
-Given your current and target versions, a downgrade is only 
possible if there are no metadata changes in the versions between.
+Note that the cluster metadata version cannot be downgraded to a 
pre-production 3.0.x, 3.1.x, or 3.2.x version once it has been upgraded.

Review Comment:
   OK, so this was fixed from 3.7 onwards for the old versions with 
https://github.com/apache/kafka/commit/aec07f76d763068feb6c1d19e4fc326cffd9c620
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [DISCUSS] KIP-853: KRaft Controller Membership Changes

2024-02-22 Thread José Armando García Sancio
Thanks for the additional feedback Jun. Comments below.

On Fri, Feb 16, 2024 at 4:09 PM Jun Rao  wrote:
> 10. "The controller state machine will instead push the brokers'
> kraft.version information to the KRaft client". If we do that, why do we
> need KRaftVersionRecord?

I am doing this as a reasonable compromise. Let me better explain why
we need KRaftVersionRecord and VotersRecord for voters as control
records.

First the controller state machine (QuorumController in the code)
operates on committed data (records that have an offset smaller than
the HWM). That means that to read committed data the HWM needs to be
established. The HWM is discovered by the KRaft leader. To establish
the KRaft leader the voters need to send RPCs to other voters. To be
able to send RPCs to other voters the replicas need to be able to read
and process the locally uncommitted KRaftVersionRecord and
VotersRecord.

In short, the metadata layer (quorum controller) reads and processes
committed data while the KRaft layer reads and processes uncommitted
data. KRaft needs to read and process uncommitted data because that
data is required to establish a majority (consensus) and a leader.

I am relaxing this for observers (brokers) for two reasons:
1. Observers are dynamic and unknown to the voters (leader). Voters
only need to handle Fetch and FetchSnapshot requests from observers
(brokers). Their information is not persisted to disk and it is only
tracked in-memory for reporting purposes (DescribeQuorum) while they
continue to Fetch from the leader.
2. The voters don't need to read the uncommitted information about the
brokers (observers) to establish a majority and the leader. So there
is no strict requirement to include this information as a control
record in the log and snapshot.

> 15. Hmm, I thought controller.listener.names already provides the listener
> name. It's a list so that we could support changing security protocols.

Not sure if I fully understand the comment but here is an example that
maybe illustrates why we need all of the information included in the
KIP (VotersRecord). Let's assume the following local configuration:
controller.listener.names=CONTROLLER_SSL,CONTROLLER_PLAINTEXT

With this configuration the voter (controller) prefers connecting
through CONTROLLER_SSL first and CONTROLLER_PLAINTEXT second. To
establish consensus and leadership the voters need to send the Vote
request to other voters. Which host and endpoint should the voter use?
Let's assume the following VotersRecord:

{ "VoterId": 0, "VoterUuid": "...", "Endpoints": [
    { "name": "CONTROLLER_SSL", "host": "controller-0", "port": 1234 },
    { "name": "CONTROLLER_PLAINTEXT", ... } ] }
{ "VoterId": 1, "VoterUuid": "...", "Endpoints": [
    { "name": "CONTROLLER_SSL", "host": "controller-1", "port": 1234 },
    { "name": "CONTROLLER_PLAINTEXT", ... } ] }
{ "VoterId": 2, "VoterUuid": "...", "Endpoints": [
    { "name": "CONTROLLER_SSL", "host": "controller-2", "port": 1234 },
    { "name": "CONTROLLER_PLAINTEXT", ... } ] }

In this configuration, the local replica can use CONTROLLER_SSL and
look up the host and port, because that is the preferred (first)
listener and it is supported by all of the voters.

Now let's assume the following VotersRecord:

{ "VoterId": 0, "VoterUuid": "...", "Endpoints": [ {"name":
"CONTROLLER_SSL", "host": "controller-0", "port": 1234}, {"name":
"CONTROLLER_PLAINTEXT", ... } ]
{ "VoterId": 1, "VoterUuid": "...", "Endpoints": [ {"name":
"CONTROLLER_PLAINTEXT", ... } ]
{ "VoterId": 2, "VoterUuid": "...", "Endpoints": [ {"name":
"CONTROLLER_SSL", "host": "controller-2", "port": 1234}, {"name":
"CONTROLLER_PLAINTEXT", ... } ]

In this configuration, the local replica needs to use
CONTROLLER_PLAINTEXT because that is what is supported by all of the
voters.
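
A small sketch of that selection rule, i.e. pick the first listener in the
local preference order (controller.listener.names) that every voter
advertises an endpoint for (a hypothetical helper, not actual KRaft code):

```
import java.util.*;

class ListenerSelection {
    static Optional<String> pickListener(List<String> localPreference,
                                         List<Set<String>> votersListeners) {
        for (String name : localPreference) {
            // Take the first preferred listener supported by every voter.
            if (votersListeners.stream().allMatch(l -> l.contains(name))) {
                return Optional.of(name);
            }
        }
        return Optional.empty(); // no common listener: cannot reach all voters
    }

    public static void main(String[] args) {
        List<String> preference = List.of("CONTROLLER_SSL", "CONTROLLER_PLAINTEXT");
        // Second VotersRecord example above: voter 1 lacks CONTROLLER_SSL.
        List<Set<String>> voters = List.of(
            Set.of("CONTROLLER_SSL", "CONTROLLER_PLAINTEXT"),
            Set.of("CONTROLLER_PLAINTEXT"),
            Set.of("CONTROLLER_SSL", "CONTROLLER_PLAINTEXT"));
        System.out.println(pickListener(preference, voters)); // Optional[CONTROLLER_PLAINTEXT]
    }
}
```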

> 17.1 "1. They are implemented at two different layers of the protocol. The
> Kafka controller is an application of the KRaft protocol. I wanted to
> keep this distinction in this design. The controller API is going to
> forward ControllerRegistrationRequest to the QuorumController and it
> is going to forward UpdateVoter to the KafkaRaftClient."
> Hmm, but the controller already pushes the brokers' kraft.version
> information to the KRaft client.

Right but only for brokers (observers). Voters are different in that
their information is required to establish consensus. KRaft needs to
read this information as uncommitted data because committed data (HWM)
can only be established after leadership has been established.

> "2. If the voter getting updated is not part of the
> voter set the leader will reject the update."
> Would it be simpler to just relax that? The KIP already relaxed some of the
> checks during the vote.

We could, but in this KIP replicas only store information about
voters. If we relax that, replicas will start storing information
about observers (brokers), and we would then have to add a mechanism
for deleting this data, similar to UnregisterBrokerRequest.

> "3. The other semantic difference is 

[jira] [Created] (KAFKA-16302) Builds failing due to streams test execution failures

2024-02-22 Thread Justine Olshan (Jira)
Justine Olshan created KAFKA-16302:
--

 Summary: Builds failing due to streams test execution failures
 Key: KAFKA-16302
 URL: https://issues.apache.org/jira/browse/KAFKA-16302
 Project: Kafka
  Issue Type: Task
Reporter: Justine Olshan


I'm seeing this on master and many PR builds for all versions:

```
[2024-02-22T14:37:07.076Z] * What went wrong:
[2024-02-22T14:37:07.076Z] Execution failed for task ':streams:test'.
[2024-02-22T14:37:07.076Z] > The following test methods could not be retried, which is unexpected. Please file a bug report at https://github.com/gradle/test-retry-gradle-plugin/issues
[2024-02-22T14:37:07.076Z] org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest#shouldLogAndMeasureExpiredRecords[org.apache.kafka.streams.state.internals.SessionKeySchema@78d39a69]
[2024-02-22T14:37:07.076Z] org.apache.kafka.streams.state.internals.RocksDBSegmentedBytesStoreTest#shouldLogAndMeasureExpiredRecords[org.apache.kafka.streams.state.internals.WindowKeySchema@3c818ac4]
[2024-02-22T14:37:07.076Z] org.apache.kafka.streams.state.internals.RocksDBTimestampedSegmentedBytesStoreTest#shouldLogAndMeasureExpiredRecords[org.apache.kafka.streams.state.internals.WindowKeySchema@251f7d26]
[2024-02-22T14:37:07.076Z] org.apache.kafka.streams.state.internals.RocksDBTimestampedSegmentedBytesStoreTest#shouldLogAndMeasureExpiredRecords[org.apache.kafka.streams.state.internals.SessionKeySchema@52c8295b]
```

(Full build log: https://ci-builds.apache.org/blue/organizations/jenkins/Kafka%2Fkafka-pr/detail/PR-15417/1/pipeline)



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2667

2024-02-22 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 468732 lines...]
[2024-02-22T21:17:22.796Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZkMigrationClientTest > testUpdateExistingTopicWithNewAndChangedPartitions() 
PASSED
[2024-02-22T21:17:22.796Z] 
[2024-02-22T21:17:22.797Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testZNodeChangeHandlerForDataChange() STARTED
[2024-02-22T21:17:22.797Z] 
[2024-02-22T21:17:22.797Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testZNodeChangeHandlerForDataChange() PASSED
[2024-02-22T21:17:22.797Z] 
[2024-02-22T21:17:22.797Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testZooKeeperSessionStateMetric() STARTED
[2024-02-22T21:17:24.230Z] 
[2024-02-22T21:17:24.230Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testZooKeeperSessionStateMetric() PASSED
[2024-02-22T21:17:24.230Z] 
[2024-02-22T21:17:24.230Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testExceptionInBeforeInitializingSession() STARTED
[2024-02-22T21:17:24.230Z] 
[2024-02-22T21:17:24.230Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testExceptionInBeforeInitializingSession() PASSED
[2024-02-22T21:17:24.230Z] 
[2024-02-22T21:17:24.230Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testGetChildrenExistingZNode() STARTED
[2024-02-22T21:17:24.230Z] 
[2024-02-22T21:17:24.230Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testGetChildrenExistingZNode() PASSED
[2024-02-22T21:17:24.230Z] 
[2024-02-22T21:17:24.230Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testConnection() STARTED
[2024-02-22T21:17:25.701Z] 
[2024-02-22T21:17:25.701Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testConnection() PASSED
[2024-02-22T21:17:25.701Z] 
[2024-02-22T21:17:25.701Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testZNodeChangeHandlerForCreation() STARTED
[2024-02-22T21:17:25.701Z] 
[2024-02-22T21:17:25.701Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testZNodeChangeHandlerForCreation() PASSED
[2024-02-22T21:17:25.701Z] 
[2024-02-22T21:17:25.701Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testGetAclExistingZNode() STARTED
[2024-02-22T21:17:25.701Z] 
[2024-02-22T21:17:25.701Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testGetAclExistingZNode() PASSED
[2024-02-22T21:17:25.701Z] 
[2024-02-22T21:17:25.701Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testSessionExpiryDuringClose() STARTED
[2024-02-22T21:17:25.701Z] 
[2024-02-22T21:17:25.701Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testSessionExpiryDuringClose() PASSED
[2024-02-22T21:17:25.701Z] 
[2024-02-22T21:17:25.701Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testReinitializeAfterAuthFailure() STARTED
[2024-02-22T21:17:28.586Z] 
[2024-02-22T21:17:28.586Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testReinitializeAfterAuthFailure() PASSED
[2024-02-22T21:17:28.586Z] 
[2024-02-22T21:17:28.586Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testSetAclNonExistentZNode() STARTED
[2024-02-22T21:17:28.586Z] 
[2024-02-22T21:17:28.586Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testSetAclNonExistentZNode() PASSED
[2024-02-22T21:17:28.586Z] 
[2024-02-22T21:17:28.586Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testConnectionLossRequestTermination() STARTED
[2024-02-22T21:17:38.543Z] 
[2024-02-22T21:17:38.543Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testConnectionLossRequestTermination() PASSED
[2024-02-22T21:17:38.543Z] 
[2024-02-22T21:17:38.543Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testExistsNonExistentZNode() STARTED
[2024-02-22T21:17:38.543Z] 
[2024-02-22T21:17:38.543Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testExistsNonExistentZNode() PASSED
[2024-02-22T21:17:38.543Z] 
[2024-02-22T21:17:38.543Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testGetDataNonExistentZNode() STARTED
[2024-02-22T21:17:38.543Z] 
[2024-02-22T21:17:38.543Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testGetDataNonExistentZNode() PASSED
[2024-02-22T21:17:38.543Z] 
[2024-02-22T21:17:38.543Z] Gradle Test Run :core:test > Gradle Test Executor 97 
> ZooKeeperClientTest > testConnectionTimeout() STARTED

[jira] [Created] (KAFKA-16301) Review fenced member unsubscribe/subscribe callbacks interaction

2024-02-22 Thread Lianet Magrans (Jira)
Lianet Magrans created KAFKA-16301:
--

 Summary: Review fenced member unsubscribe/subscribe callbacks 
interaction
 Key: KAFKA-16301
 URL: https://issues.apache.org/jira/browse/KAFKA-16301
 Project: Kafka
  Issue Type: Sub-task
  Components: clients, consumer
Reporter: Lianet Magrans


When a member gets fenced, it triggers the onPartitionsLost callback, if any, 
and then rejoins the group. If, while the callback is completing, the member 
attempts to leave the group (e.g. unsubscribe), the leave operation detects that 
the member has already been removed from the group (fenced), just aligns the 
client state with the current broker state, and marks the client as UNSUBSCRIBED 
(the client-side state for not being in the group).

This means that the member could attempt to rejoin the group if the user calls 
subscribe, get an assignment, and trigger onPartitionsAssigned while the 
onPartitionsLost callback may not have completed yet.

This approach keeps the client state machine simple, given that it does not need 
to block the new member (it is effectively a new member because the old one got 
fenced). The new member can rejoin, get an assignment, and make progress. The 
downside is that it would potentially allow overlapped callback executions (lost 
and assign) in the above edge case, which is not the behaviour of the old 
coordinator. Review and validate. The alternative would require more complex 
logic on the client to ensure that we do not allow a new member to rejoin until 
the fenced one completes the callback.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.7 #99

2024-02-22 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-16300) Wrong documentation for producer config retries

2024-02-22 Thread Fede (Jira)
Fede created KAFKA-16300:


 Summary: Wrong documentation for producer config retries
 Key: KAFKA-16300
 URL: https://issues.apache.org/jira/browse/KAFKA-16300
 Project: Kafka
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.4.1, 3.2.3, 3.1.2, 3.3
Reporter: Fede


In the documentation from version 3.1 to version 3.4, it looks like the retries 
explanation has a bug involving the related max.in.flight.requests.per.connection 
parameter and possible message reordering.

[https://kafka.apache.org/31/documentation.html#producerconfigs_retries]

[https://kafka.apache.org/32/documentation.html#producerconfigs_retries]

[https://kafka.apache.org/33/documentation.html#producerconfigs_retries]

[https://kafka.apache.org/34/documentation.html#producerconfigs_retries]

 

In particular, the section


Allowing retries while setting enable.idempotence to false and 
max.in.flight.requests.per.connection to 1 will potentially change the ordering 
of records because if two batches are sent to a single partition, and the first 
fails and is retried but the second succeeds, then the records in the second 
batch may appear first.

 

states 

max.in.flight.requests.per.connection to 1

 

It should say

max.in.flight.requests.per.connection to *greater than*  1

 

This bug has been fixed in the latest versions, but it still confuses users 
of the affected versions, as the meaning is the opposite of what it should be.
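
To illustrate the corrected claim, here is a minimal producer configuration
under which retries can reorder records (standard kafka-clients API; the
broker address and values are illustrative):

```
import java.util.Properties;
import org.apache.kafka.clients.producer.ProducerConfig;

public class RetriesReorderingExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, "false");
        props.put(ProducerConfig.RETRIES_CONFIG, "3");
        // Reordering under retries is possible only when MORE THAN ONE
        // batch can be in flight per connection:
        props.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, "5");
        // Setting this to "1" (or enabling idempotence) preserves ordering
        // even when a batch fails and is retried.
    }
}
```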

 

I created a PR (https://github.com/apache/kafka/pull/15413) for version 3.2, 
but the build failed. Not sure why.

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16299) Classic group error responses should contain no generation id and an empty members list

2024-02-22 Thread Jeff Kim (Jira)
Jeff Kim created KAFKA-16299:


 Summary: Classic group error responses should contain no 
generation id and an empty members list
 Key: KAFKA-16299
 URL: https://issues.apache.org/jira/browse/KAFKA-16299
 Project: Kafka
  Issue Type: Sub-task
Reporter: Jeff Kim
Assignee: Jeff Kim


In the new coordinator, the classic group response handling is not consistent 
with the old coordinator.

 

The old coordinator responds with a NoGeneration id (-1) and an empty members 
list whereas the new coordinator responds with an empty generation id (0) and a 
null members list.

 

We should have the new coordinator respond with the same values.
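
For illustration, a sketch of building a classic JoinGroup error response
with the old coordinator's semantics, using Kafka's generated message
classes (hedged: where exactly the new coordinator builds this response
differs, and the error code chosen here is just an example):

```
import java.util.Collections;
import org.apache.kafka.common.message.JoinGroupResponseData;
import org.apache.kafka.common.protocol.Errors;

public class ErrorResponseSketch {
    static JoinGroupResponseData errorResponse(Errors error) {
        return new JoinGroupResponseData()
            .setErrorCode(error.code())
            .setGenerationId(-1)                  // "no generation", not 0
            .setMembers(Collections.emptyList()); // empty list, not null
    }

    public static void main(String[] args) {
        System.out.println(errorResponse(Errors.UNKNOWN_MEMBER_ID));
    }
}
```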



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Created] (KAFKA-16298) Ensure user callbacks exceptions are propagated to the user on consumer poll

2024-02-22 Thread Lianet Magrans (Jira)
Lianet Magrans created KAFKA-16298:
--

 Summary: Ensure user callbacks exceptions are propagated to the 
user on consumer poll
 Key: KAFKA-16298
 URL: https://issues.apache.org/jira/browse/KAFKA-16298
 Project: Kafka
  Issue Type: Sub-task
  Components: clients, consumer
Affects Versions: 3.7.0
Reporter: Lianet Magrans


When user-defined callbacks fail with an exception, the expectation is that the 
error should be propagated to the user as a KafkaException and break the poll 
loop (the behaviour in the legacy coordinator). The new coordinator executes 
callbacks in the application thread, and sends an event to the background thread 
with the callback result 
([here|https://github.com/apache/kafka/blob/98a658f871fc2c533b16fb5fd567a5ceb1c340b7/clients/src/main/java/org/apache/kafka/clients/consumer/internals/AsyncKafkaConsumer.java#L252]),
 but does not seem to propagate the exception to the user. 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] MINOR: Copy over apache/kafka/3.6 docs into here [kafka-site]

2024-02-22 Thread via GitHub


mimaison commented on code in PR #586:
URL: https://github.com/apache/kafka-site/pull/586#discussion_r1499499148


##
36/documentation.html:
##
@@ -33,7 +33,7 @@
 
 
 Documentation
-Kafka 3.6 Documentation
+Kafka 3.4 Documentation

Review Comment:
   This does not seem right. This is the 3.6 documentation so it should be 
`Kafka 3.6`. 



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Copy over apache/kafka/3.6 docs into here [kafka-site]

2024-02-22 Thread via GitHub


mimaison commented on code in PR #586:
URL: https://github.com/apache/kafka-site/pull/586#discussion_r1499499955


##
36/documentation.html:
##
@@ -54,12 +54,10 @@ Kafka 3.6 Documentation
 2.6.X, 
 2.7.X,
 2.8.X,
-3.0.X,
-3.1.X,
-3.2.X,
-3.3.X,
-3.4.X,
-3.5.X.
+3.0.X.
+3.1.X.
+3.2.X.
+3.3.X.

Review Comment:
   This is the 3.6 docs, so it should point to all previous releases including 
3.4 and 3.5. Why are we removing them?



##
36/generated/connect_rest.yaml:
##
@@ -8,7 +8,7 @@ info:
 name: Apache 2.0
 url: https://www.apache.org/licenses/LICENSE-2.0.html
   title: Kafka Connect REST API
-  version: 3.6.1
+  version: 3.6.2-SNAPSHOT

Review Comment:
   Again we don't want this change. The docs should cover the last released 
version for 3.6, hence 3.6.1.



##
36/ops.html:
##
@@ -3984,95 +3984,27 @@ Quick Start 
Example
-
-Apache Kafka doesn't provide an out-of-the-box RemoteStorageManager 
implementation. To have a preview of the tiered storage
-  feature, the https://github.com/apache/kafka/blob/trunk/storage/src/test/java/org/apache/kafka/server/log/remote/storage/LocalTieredStorage.java;>LocalTieredStorage
-  implemented for integration test can be used, which will create a temporary 
directory in local storage to simulate the remote storage.
-
-
-To adopt the `LocalTieredStorage`, the test library needs to be built 
locally
-# please checkout to the specific version tag you're using before 
building it
-# ex: `git checkout 3.6.1`
-./gradlew clean :storage:testJar
-After build successfully, there should be a `kafka-storage-x.x.x-test.jar` 
file under `storage/build/libs`.
-Next, setting configurations in the broker side to enable tiered storage 
feature.
+Configurations 
Example
 
+Here is a sample configuration to enable tiered storage feature in broker 
side:
 
 # Sample Zookeeper/Kraft broker server.properties listening on 
PLAINTEXT://:9092
 remote.log.storage.system.enable=true
-
-# Setting the listener for the clients in RemoteLogMetadataManager to talk to 
the brokers.
+# Please provide the implementation for remoteStorageManager. This is the 
mandatory configuration for tiered storage.
+# 
remote.log.storage.manager.class.name=org.apache.kafka.server.log.remote.storage.NoOpRemoteStorageManager
+# Using the "PLAINTEXT" listener for the clients in RemoteLogMetadataManager 
to talk to the brokers.
 remote.log.metadata.manager.listener.name=PLAINTEXT
-
-# Please provide the implementation info for remoteStorageManager.
-# This is the mandatory configuration for tiered storage.
-# Here, we use the `LocalTieredStorage` built above.
-remote.log.storage.manager.class.name=org.apache.kafka.server.log.remote.storage.LocalTieredStorage
-remote.log.storage.manager.class.path=/PATH/TO/kafka-storage-x.x.x-test.jar
-
-# These 2 prefix are default values, but customizable
-remote.log.storage.manager.impl.prefix=rsm.config.
-remote.log.metadata.manager.impl.prefix=rlmm.config.
-
-# Configure the directory used for `LocalTieredStorage`
-# Note, please make sure the brokers need to have access to this directory
-rsm.config.dir=/tmp/kafka-remote-storage
-
-# This needs to be changed if number of brokers in the cluster is more than 1
-rlmm.config.remote.log.metadata.topic.replication.factor=1
-
-# Try to speed up the log retention check interval for testing
-log.retention.check.interval.ms=1000
 
 
 
-Following quick start guide to start 
up the kafka environment.
-  Then, create a topic with tiered storage enabled with configs:
-
-
-# remote.storage.enable=true -> enables tiered storage on the topic
-# local.retention.ms=1000 -> The number of milliseconds to keep the local log 
segment before it gets deleted.
-  Note that a local log segment is eligible for deletion only after it gets 
uploaded to remote.
-# retention.ms=360 -> when segments exceed this time, the segments in 
remote storage will be deleted
-# segment.bytes=1048576 -> for test only, to speed up the log segment rolling 
interval
-# file.delete.delay.ms=1 -> for test only, to speed up the local-log 
segment file delete delay
-
-bin/kafka-topics.sh --create --topic tieredTopic --bootstrap-server 
localhost:9092 \
---config remote.storage.enable=true --config local.retention.ms=1000 --config 
retention.ms=360 \
---config segment.bytes=1048576 --config file.delete.delay.ms=1000
+After broker is started, creating a topic with tiered storage enabled, and 
a small log time retention value to try this feature:
+bin/kafka-topics.sh --create --topic tieredTopic --bootstrap-server 
localhost:9092 --config remote.storage.enable=true --config 
local.retention.ms=1000
 
 
 
-Try to send messages to the `tieredTopic` topic to roll the 

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2666

2024-02-22 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 466055 lines...]
[2024-02-22T15:36:46.231Z] 
[2024-02-22T15:36:46.231Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationClientTest > testClaimAbsentController() STARTED
[2024-02-22T15:36:46.231Z] 
[2024-02-22T15:36:46.231Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationClientTest > testClaimAbsentController() PASSED
[2024-02-22T15:36:46.231Z] 
[2024-02-22T15:36:46.231Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationClientTest > testIdempotentCreateTopics() STARTED
[2024-02-22T15:36:46.231Z] 
[2024-02-22T15:36:46.231Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationClientTest > testIdempotentCreateTopics() PASSED
[2024-02-22T15:36:46.231Z] 
[2024-02-22T15:36:46.231Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationClientTest > testCreateNewTopic() STARTED
[2024-02-22T15:36:46.231Z] 
[2024-02-22T15:36:46.231Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationClientTest > testCreateNewTopic() PASSED
[2024-02-22T15:36:46.231Z] 
[2024-02-22T15:36:46.231Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationClientTest > testUpdateExistingTopicWithNewAndChangedPartitions() 
STARTED
[2024-02-22T15:36:46.231Z] 
[2024-02-22T15:36:46.231Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZkMigrationClientTest > testUpdateExistingTopicWithNewAndChangedPartitions() 
PASSED
[2024-02-22T15:36:46.231Z] 
[2024-02-22T15:36:46.231Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testZNodeChangeHandlerForDataChange() STARTED
[2024-02-22T15:36:46.231Z] 
[2024-02-22T15:36:46.231Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testZNodeChangeHandlerForDataChange() PASSED
[2024-02-22T15:36:46.231Z] 
[2024-02-22T15:36:46.231Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testZooKeeperSessionStateMetric() STARTED
[2024-02-22T15:36:47.636Z] 
[2024-02-22T15:36:47.636Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testZooKeeperSessionStateMetric() PASSED
[2024-02-22T15:36:47.636Z] 
[2024-02-22T15:36:47.636Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testExceptionInBeforeInitializingSession() STARTED
[2024-02-22T15:36:47.636Z] 
[2024-02-22T15:36:47.636Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testExceptionInBeforeInitializingSession() PASSED
[2024-02-22T15:36:47.636Z] 
[2024-02-22T15:36:47.636Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testGetChildrenExistingZNode() STARTED
[2024-02-22T15:36:47.636Z] 
[2024-02-22T15:36:47.636Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testGetChildrenExistingZNode() PASSED
[2024-02-22T15:36:47.636Z] 
[2024-02-22T15:36:47.636Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testConnection() STARTED
[2024-02-22T15:36:49.392Z] 
[2024-02-22T15:36:49.392Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testConnection() PASSED
[2024-02-22T15:36:49.392Z] 
[2024-02-22T15:36:49.392Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testZNodeChangeHandlerForCreation() STARTED
[2024-02-22T15:36:49.392Z] 
[2024-02-22T15:36:49.392Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testZNodeChangeHandlerForCreation() PASSED
[2024-02-22T15:36:49.392Z] 
[2024-02-22T15:36:49.392Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testGetAclExistingZNode() STARTED
[2024-02-22T15:36:49.392Z] 
[2024-02-22T15:36:49.392Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testGetAclExistingZNode() PASSED
[2024-02-22T15:36:49.392Z] 
[2024-02-22T15:36:49.392Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testSessionExpiryDuringClose() STARTED
[2024-02-22T15:36:49.392Z] 
[2024-02-22T15:36:49.392Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testSessionExpiryDuringClose() PASSED
[2024-02-22T15:36:49.392Z] 
[2024-02-22T15:36:49.392Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testReinitializeAfterAuthFailure() STARTED
[2024-02-22T15:36:52.205Z] 
[2024-02-22T15:36:52.206Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testReinitializeAfterAuthFailure() PASSED
[2024-02-22T15:36:52.206Z] 
[2024-02-22T15:36:52.206Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testSetAclNonExistentZNode() STARTED
[2024-02-22T15:36:52.206Z] 
[2024-02-22T15:36:52.206Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > 

Re: [PR] MINOR: Copy over apache/kafka/3.6 docs into here [kafka-site]

2024-02-22 Thread via GitHub


stanislavkozlovski commented on code in PR #586:
URL: https://github.com/apache/kafka-site/pull/586#discussion_r1499490542


##
36/ops.html:
##
@@ -3984,95 +3984,27 @@ Quick Start 
Example
-
-Apache Kafka doesn't provide an out-of-the-box RemoteStorageManager implementation. To have a preview of the tiered storage
-  feature, the LocalTieredStorage class (https://github.com/apache/kafka/blob/trunk/storage/src/test/java/org/apache/kafka/server/log/remote/storage/LocalTieredStorage.java)
-  implemented for integration tests can be used; it creates a temporary directory in local storage to simulate remote storage.
-
-
-To adopt the `LocalTieredStorage`, the test library needs to be built 
locally
-# please checkout to the specific version tag you're using before 
building it
-# ex: `git checkout 3.6.1`
-./gradlew clean :storage:testJar
-After a successful build, there should be a `kafka-storage-x.x.x-test.jar` file under `storage/build/libs`.
-Next, set the broker-side configurations to enable the tiered storage feature.
+Configurations 
Example

Review Comment:
   
https://github.com/apache/kafka-site/commit/81773f6d1afe2ce8e6305299a085f8dd559d8140
 was not done in AK 3.6



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Copy over apache/kafka/3.6 docs into here [kafka-site]

2024-02-22 Thread via GitHub


stanislavkozlovski commented on code in PR #586:
URL: https://github.com/apache/kafka-site/pull/586#discussion_r1499488695


##
36/documentation.html:
##
@@ -33,7 +33,7 @@
 
 
 Documentation
-Kafka 3.6 Documentation
+Kafka 3.4 Documentation

Review Comment:
   
https://github.com/apache/kafka/commit/4302653d9efa0906f4b9a2f59798dd55b8510ef9 
is missing in AK/3.6.x



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Copy over apache/kafka/3.6 docs into here [kafka-site]

2024-02-22 Thread via GitHub


stanislavkozlovski commented on code in PR #586:
URL: https://github.com/apache/kafka-site/pull/586#discussion_r1499482011


##
36/upgrade.html:
##
@@ -214,10 +214,8 @@ Upgrading KRaft-based cl
 ./bin/kafka-features.sh upgrade --metadata 3.5
 
 
-Note that cluster metadata downgrade is not supported in this 
version since it has metadata changes.
-Every MetadataVersion (https://github.com/apache/kafka/blob/trunk/server-common/src/main/java/org/apache/kafka/server/common/MetadataVersion.java)
-after 3.2.x has a boolean parameter that indicates if there are 
metadata changes (i.e. IBP_3_3_IV3(7, "3.3", "IV3", true) means 
this version has metadata changes).
-Given your current and target versions, a downgrade is only 
possible if there are no metadata changes in the versions between.
+Note that the cluster metadata version cannot be downgraded to a 
pre-production 3.0.x, 3.1.x, or 3.2.x version once it has been upgraded.
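
For anyone checking downgrade behaviour by hand, the features tool can validate a metadata downgrade without applying it; a hedged sketch (target version is illustrative; verify the flags against your build's kafka-features.sh --help):

./bin/kafka-features.sh --bootstrap-server localhost:9092 downgrade --metadata 3.4 --dry-run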

Review Comment:
   not sure about the source of truth here either



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Copy over apache/kafka/3.6 docs into here [kafka-site]

2024-02-22 Thread via GitHub


stanislavkozlovski commented on code in PR #586:
URL: https://github.com/apache/kafka-site/pull/586#discussion_r1499478729


##
36/ops.html:
##
@@ -3984,95 +3984,27 @@ Quick Start 
Example
-
-Apache Kafka doesn't provide an out-of-the-box RemoteStorageManager implementation. To have a preview of the tiered storage
-  feature, the LocalTieredStorage class (https://github.com/apache/kafka/blob/trunk/storage/src/test/java/org/apache/kafka/server/log/remote/storage/LocalTieredStorage.java)
-  implemented for integration tests can be used; it creates a temporary directory in local storage to simulate remote storage.
-
-
-To adopt the `LocalTieredStorage`, the test library needs to be built 
locally
-# please checkout to the specific version tag you're using before 
building it
-# ex: `git checkout 3.6.1`
-./gradlew clean :storage:testJar
-After a successful build, there should be a `kafka-storage-x.x.x-test.jar` file under `storage/build/libs`.
-Next, set the broker-side configurations to enable the tiered storage feature.
+Configurations 
Example
 
+Here is a sample configuration to enable the tiered storage feature on the broker side:
 
 # Sample Zookeeper/Kraft broker server.properties listening on 
PLAINTEXT://:9092
 remote.log.storage.system.enable=true
-
-# Setting the listener for the clients in RemoteLogMetadataManager to talk to 
the brokers.
+# Please provide the implementation for remoteStorageManager. This is the 
mandatory configuration for tiered storage.
+# 
remote.log.storage.manager.class.name=org.apache.kafka.server.log.remote.storage.NoOpRemoteStorageManager
+# Using the "PLAINTEXT" listener for the clients in RemoteLogMetadataManager 
to talk to the brokers.
 remote.log.metadata.manager.listener.name=PLAINTEXT
-
-# Please provide the implementation info for remoteStorageManager.
-# This is the mandatory configuration for tiered storage.
-# Here, we use the `LocalTieredStorage` built above.
-remote.log.storage.manager.class.name=org.apache.kafka.server.log.remote.storage.LocalTieredStorage
-remote.log.storage.manager.class.path=/PATH/TO/kafka-storage-x.x.x-test.jar
-
-# These 2 prefixes are the default values, but are customizable
-remote.log.storage.manager.impl.prefix=rsm.config.
-remote.log.metadata.manager.impl.prefix=rlmm.config.
-
-# Configure the directory used for `LocalTieredStorage`
-# Note: please make sure the brokers have access to this directory
-rsm.config.dir=/tmp/kafka-remote-storage
-
-# This needs to be changed if number of brokers in the cluster is more than 1
-rlmm.config.remote.log.metadata.topic.replication.factor=1
-
-# Try to speed up the log retention check interval for testing
-log.retention.check.interval.ms=1000
 
 
 
-Follow the quick start guide to start up the Kafka environment.
-  Then, create a topic with tiered storage enabled with the following configs:
-
-
-# remote.storage.enable=true -> enables tiered storage on the topic
-# local.retention.ms=1000 -> The number of milliseconds to keep the local log 
segment before it gets deleted.
-  Note that a local log segment is eligible for deletion only after it gets 
uploaded to remote.
-# retention.ms=360 -> when segments exceed this time, the segments in 
remote storage will be deleted
-# segment.bytes=1048576 -> for test only, to speed up the log segment rolling 
interval
-# file.delete.delay.ms=1 -> for test only, to speed up the local-log 
segment file delete delay
-
-bin/kafka-topics.sh --create --topic tieredTopic --bootstrap-server 
localhost:9092 \
---config remote.storage.enable=true --config local.retention.ms=1000 --config 
retention.ms=360 \
---config segment.bytes=1048576 --config file.delete.delay.ms=1000
+After the broker is started, create a topic with tiered storage enabled and a small log retention time to try out the feature:
+bin/kafka-topics.sh --create --topic tieredTopic --bootstrap-server 
localhost:9092 --config remote.storage.enable=true --config 
local.retention.ms=1000
 
 
 
-Try to send messages to the `tieredTopic` topic to roll the log segment:

Review Comment:
   Need to find the commit that introduced this



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Copy over apache/kafka/3.6 docs into here [kafka-site]

2024-02-22 Thread via GitHub


stanislavkozlovski commented on code in PR #586:
URL: https://github.com/apache/kafka-site/pull/586#discussion_r1499478265


##
36/ops.html:
##
@@ -3984,95 +3984,27 @@ Quick Start 
Example
-
-Apache Kafka doesn't provide an out-of-the-box RemoteStorageManager implementation. To have a preview of the tiered storage
-  feature, the LocalTieredStorage class (https://github.com/apache/kafka/blob/trunk/storage/src/test/java/org/apache/kafka/server/log/remote/storage/LocalTieredStorage.java)
-  implemented for integration tests can be used; it creates a temporary directory in local storage to simulate remote storage.
-
-
-To adopt the `LocalTieredStorage`, the test library needs to be built 
locally
-# please checkout to the specific version tag you're using before 
building it
-# ex: `git checkout 3.6.1`
-./gradlew clean :storage:testJar
-After a successful build, there should be a `kafka-storage-x.x.x-test.jar` file under `storage/build/libs`.
-Next, set the broker-side configurations to enable the tiered storage feature.
+Configurations 
Example
 
+Here is a sample configuration to enable the tiered storage feature on the broker side:
 
 # Sample Zookeeper/Kraft broker server.properties listening on 
PLAINTEXT://:9092
 remote.log.storage.system.enable=true
-
-# Setting the listener for the clients in RemoteLogMetadataManager to talk to 
the brokers.
+# Please provide the implementation for remoteStorageManager. This is the 
mandatory configuration for tiered storage.
+# 
remote.log.storage.manager.class.name=org.apache.kafka.server.log.remote.storage.NoOpRemoteStorageManager
+# Using the "PLAINTEXT" listener for the clients in RemoteLogMetadataManager 
to talk to the brokers.
 remote.log.metadata.manager.listener.name=PLAINTEXT
-
-# Please provide the implementation info for remoteStorageManager.
-# This is the mandatory configuration for tiered storage.
-# Here, we use the `LocalTieredStorage` built above.
-remote.log.storage.manager.class.name=org.apache.kafka.server.log.remote.storage.LocalTieredStorage
-remote.log.storage.manager.class.path=/PATH/TO/kafka-storage-x.x.x-test.jar
-
-# These 2 prefixes are the default values, but are customizable
-remote.log.storage.manager.impl.prefix=rsm.config.
-remote.log.metadata.manager.impl.prefix=rlmm.config.
-
-# Configure the directory used for `LocalTieredStorage`
-# Note: please make sure the brokers have access to this directory
-rsm.config.dir=/tmp/kafka-remote-storage
-
-# This needs to be changed if number of brokers in the cluster is more than 1
-rlmm.config.remote.log.metadata.topic.replication.factor=1
-
-# Try to speed up the log retention check interval for testing
-log.retention.check.interval.ms=1000
 
 
 
-Follow the quick start guide to start up the Kafka environment.
-  Then, create a topic with tiered storage enabled with the following configs:
-
-
-# remote.storage.enable=true -> enables tiered storage on the topic
-# local.retention.ms=1000 -> The number of milliseconds to keep the local log 
segment before it gets deleted.
-  Note that a local log segment is eligible for deletion only after it gets 
uploaded to remote.
-# retention.ms=360 -> when segments exceed this time, the segments in 
remote storage will be deleted
-# segment.bytes=1048576 -> for test only, to speed up the log segment rolling 
interval
-# file.delete.delay.ms=1 -> for test only, to speed up the local-log 
segment file delete delay
-
-bin/kafka-topics.sh --create --topic tieredTopic --bootstrap-server 
localhost:9092 \
---config remote.storage.enable=true --config local.retention.ms=1000 --config 
retention.ms=360 \
---config segment.bytes=1048576 --config file.delete.delay.ms=1000
+After the broker is started, create a topic with tiered storage enabled and a small log retention time to try out the feature:
+bin/kafka-topics.sh --create --topic tieredTopic --bootstrap-server 
localhost:9092 --config remote.storage.enable=true --config 
local.retention.ms=1000
 
 
 
-Try to send messages to the `tieredTopic` topic to roll the log segment:

Review Comment:
   Seems like this should be kept?



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Copy over apache/kafka/3.6 docs into here [kafka-site]

2024-02-22 Thread via GitHub


stanislavkozlovski commented on code in PR #586:
URL: https://github.com/apache/kafka-site/pull/586#discussion_r1499477330


##
36/js/templateData.js:
##
@@ -19,6 +19,6 @@ limitations under the License.
 var context={
 "version": "36",
 "dotVersion": "3.6",
-"fullDotVersion": "3.6.1",
+"fullDotVersion": "3.6.2-SNAPSHOT",

Review Comment:
   shouldn't be snapshot, I should check out 3.6.1 and build from there I think



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] MINOR: Copy over apache/kafka/3.6 docs into here [kafka-site]

2024-02-22 Thread via GitHub


stanislavkozlovski commented on code in PR #586:
URL: https://github.com/apache/kafka-site/pull/586#discussion_r1499476790


##
36/documentation.html:
##
@@ -33,7 +33,7 @@
 
 
 Documentation
-Kafka 3.6 Documentation
+Kafka 3.4 Documentation

Review Comment:
   this is stale



##
36/generated/connect_rest.yaml:
##
@@ -8,7 +8,7 @@ info:
 name: Apache 2.0
 url: https://www.apache.org/licenses/LICENSE-2.0.html
   title: Kafka Connect REST API
-  version: 3.6.1
+  version: 3.6.2-SNAPSHOT

Review Comment:
   shouldn't be snapshot



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] MINOR: Copy over apache/kafka/3.6 docs into here [kafka-site]

2024-02-22 Thread via GitHub


stanislavkozlovski opened a new pull request, #586:
URL: https://github.com/apache/kafka-site/pull/586

   (no comment)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.7 #98

2024-02-22 Thread Apache Jenkins Server
See 




Jenkins build is still unstable: Kafka » Kafka Branch Builder » 3.6 #145

2024-02-22 Thread Apache Jenkins Server
See 




[jira] [Created] (KAFKA-16297) Race condition while promoting future replica can lead to partition unavailability.

2024-02-22 Thread Igor Soarez (Jira)
Igor Soarez created KAFKA-16297:
---

 Summary: Race condition while promoting future replica can lead to 
partition unavailability.
 Key: KAFKA-16297
 URL: https://issues.apache.org/jira/browse/KAFKA-16297
 Project: Kafka
  Issue Type: Sub-task
Reporter: Igor Soarez


KIP-858 proposed that when a directory failure occurs after changing the 
assignment of a replica being moved between two directories in the same 
broker, but before the future replica promotion completes, the broker should 
reassign the replica to inform the controller of its correct status. This 
hasn't yet been implemented, and without it such a failure may lead to 
indefinite partition unavailability.

Example scenario:
 # A broker which leads partition P receives a request to move the replica 
from directory A to directory B.
 # The broker creates a future replica in directory B and starts a replica 
fetcher.
 # Once the future replica first catches up, the broker queues a reassignment 
to inform the controller of the directory change.
 # The next time the replica catches up, the broker briefly blocks appends and 
promotes the replica. However, before the promotion is attempted, directory A 
fails.
 # The controller was informed that P is now in directory B before it received 
the notification that directory A has failed, so it does not elect a new 
leader, and as long as the broker is online, partition P remains unavailable.

As per KIP-858, the broker should detect this scenario and queue a reassignment 
of P into directory ID {{{}DirectoryId.LOST{}}}.
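
A rough sketch of that rule in illustrative Java (all type and method names here are hypothetical, not the actual broker code):

import java.util.Set;

// Hypothetical guard around future-replica promotion, per the KIP-858 rule above.
class FuturePromotionGuard {
    private final Set<String> failedDirs;      // directories this broker has marked offline
    private final ReassignmentQueue queue;     // hypothetical queue of directory reassignments

    FuturePromotionGuard(Set<String> failedDirs, ReassignmentQueue queue) {
        this.failedDirs = failedDirs;
        this.queue = queue;
    }

    // Called just before promoting the future replica of topicPartition out of sourceDir.
    void beforePromotion(String topicPartition, String sourceDir) {
        if (failedDirs.contains(sourceDir)) {
            // The source directory died between "caught up" and promotion:
            // report the replica as being in DirectoryId.LOST so the controller
            // can elect a new leader instead of waiting on this broker.
            queue.enqueue(topicPartition, "DirectoryId.LOST");
        }
    }
}

interface ReassignmentQueue {
    void enqueue(String topicPartition, String directoryId);
}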

 



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] KIP-1019: Expose method to determine Metric Measurability

2024-02-22 Thread Apoorv Mittal
Thanks everyone for reviewing and voting. The voting for this KIP is now
complete.

+4 binding (Manikumar, Matthias, Jun, Jason)
+1 non-binding (Andrew)


Regards,
Apoorv Mittal
+44 7721681581


On Wed, Feb 21, 2024 at 9:22 PM Jason Gustafson 
wrote:

> +1 Thanks for the KIP!
>
> On Wed, Feb 21, 2024 at 9:15 AM Jun Rao  wrote:
>
> > Hi, Apoorv,
> >
> > Thanks for the KIP. +1
> >
> > Jun
> >
> > On Mon, Feb 19, 2024 at 2:32 PM Apoorv Mittal 
> > wrote:
> >
> > > Hi,
> > > I’d like to start the voting for KIP-1019: Expose method to determine
> > > Metric Measurability.
> > >
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/KAFKA/KIP-1019%3A+Expose+method+to+determine+Metric+Measurability
> > >
> > > Regards,
> > > Apoorv Mittal
> > >
> >
>
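
For readers following along, the KIP proposes an isMeasurable() accessor on KafkaMetric; a minimal usage sketch, assuming the method lands as named in the KIP:

import org.apache.kafka.common.metrics.KafkaMetric;

public final class MetricValues {
    private MetricValues() {}

    // Returns the numeric value for measurable metrics and NaN otherwise,
    // instead of calling metric.measurable() and catching IllegalStateException.
    public static double valueOrNaN(KafkaMetric metric) {
        return metric.isMeasurable() ? (Double) metric.metricValue() : Double.NaN;
    }
}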


[jira] [Created] (KAFKA-16296) Broker shrinks ISR when restarting

2024-02-22 Thread Colin Leroy (Jira)
Colin Leroy created KAFKA-16296:
---

 Summary: Broker shrinks ISR when restarting
 Key: KAFKA-16296
 URL: https://issues.apache.org/jira/browse/KAFKA-16296
 Project: Kafka
  Issue Type: Bug
  Components: core
Affects Versions: 3.6.1
Reporter: Colin Leroy


We have a rolling-restart problem we don't understand on a 3-node cluster.

When stopping a broker, everything goes fine and the partitions are reassigned 
to the other brokers.

When that broker restarts, it shrinks ISR because of "Out of sync replicas":
{code:java}
[2024-02-22 10:18:02,069] INFO [Partition OSS.PREPROD.Monitoring.Metric-5 
broker=3] Shrinking ISR from 2,1,3 to 3. Leader: (highWatermark: 704389542, 
endOffset: 704395843). Out of sync replicas: (brokerId: 2, endOffset: -1, 
lastCaughtUpTimeMs: 1708593437335) (brokerId: 1, endOffset: -1, 
lastCaughtUpTimeMs: 1708593437335). (kafka.cluster.Partition)

[2024-02-22 10:18:02,124] INFO [Partition OSS.PREPROD.Monitoring.Metric-5 
broker=3] ISR updated to 3 (under-min-isr) and version updated to 1075 
(kafka.cluster.Partition) {code}
I do not understand why brokers 1 and 2 would be out of sync; given that 
brokers 1 and 2 were not restarted, they should still be in sync.

This, of course, causes problems, as producers reconnect to broker 3 only to 
find the min ISR requirement is not fulfilled.
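
A quick way to watch the ISR during such a restart is the standard describe tool (the bootstrap server here is illustrative):

bin/kafka-topics.sh --describe --topic OSS.PREPROD.Monitoring.Metric --bootstrap-server localhost:9092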

Thanks in advance,

Colin



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [PR] 3.7: Add blog post for Kafka 3.7 [kafka-site]

2024-02-22 Thread via GitHub


stanislavkozlovski commented on code in PR #578:
URL: https://github.com/apache/kafka-site/pull/578#discussion_r1498967918


##
37/upgrade.html:
##
@@ -18,8 +18,85 @@
 
 
 

Re: [PR] 3.7: Add blog post for Kafka 3.7 [kafka-site]

2024-02-22 Thread via GitHub


stanislavkozlovski commented on code in PR #578:
URL: https://github.com/apache/kafka-site/pull/578#discussion_r1498946795


##
blog.html:
##
@@ -22,6 +22,125 @@
 
 
 Blog
+
+
+
+Apache 
Kafka 3.7.0 Release Announcement
+
+February 2024 - Stanislav Kozlovski (@BdKozlovski, https://twitter.com/BdKozlovski)
+We are proud to announce the release of Apache Kafka 3.7.0. This release contains many new features and improvements. This blog post will highlight some of the more prominent features. For a full list of changes, be sure to check the release notes (https://downloads.apache.org/kafka/3.7.0/RELEASE_NOTES.html).
+See the "Upgrading to 3.7.0 from any version 0.8.x through 3.6.x" section of the documentation (https://kafka.apache.org/36/documentation.html#upgrade_3_7_0) for the list of notable changes and detailed upgrade steps.
+
+In the last release, 3.6, the ability to migrate Kafka clusters from a ZooKeeper metadata system to a KRaft metadata system (https://kafka.apache.org/documentation/#kraft_zk_migration) was ready for usage in production environments, with one caveat -- JBOD was not yet available for KRaft clusters.
+In this release, we are shipping an early access release of JBOD in KRaft. (See KIP-858 for details: https://cwiki.apache.org/confluence/display/KAFKA/KIP-858%3A+Handle+JBOD+broker+disk+failure+in+KRaft)
+
+Additionally, client APIs released prior to Apache Kafka 2.1 are now marked deprecated in 3.7 and will be removed in Apache Kafka 4.0. See KIP-896 (https://cwiki.apache.org/confluence/display/KAFKA/KIP-896%3A+Remove+old+client+protocol+API+versions+in+Kafka+4.0) for details and the RPC versions that are now deprecated.
+
+Java 11 support for the Kafka broker is also marked deprecated in 3.7, and is planned to be removed in Kafka 4.0. See KIP-1013 (https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=284789510) for more details.

Review Comment:
   renamed to JDK 11 for now



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] 3.7: Add blog post for Kafka 3.7 [kafka-site]

2024-02-22 Thread via GitHub


stanislavkozlovski commented on code in PR #578:
URL: https://github.com/apache/kafka-site/pull/578#discussion_r1498945271


##
blog.html:
##
@@ -22,6 +22,125 @@
 
 
 Blog
+
+
+
+Apache 
Kafka 3.7.0 Release Announcement
+
+February 2024 - Stanislav Kozlovski (@BdKozlovski, https://twitter.com/BdKozlovski)
+We are proud to announce the release of Apache Kafka 3.7.0. This release contains many new features and improvements. This blog post will highlight some of the more prominent features. For a full list of changes, be sure to check the release notes (https://downloads.apache.org/kafka/3.7.0/RELEASE_NOTES.html).
+See the "Upgrading to 3.7.0 from any version 0.8.x through 3.6.x" section of the documentation (https://kafka.apache.org/36/documentation.html#upgrade_3_7_0) for the list of notable changes and detailed upgrade steps.
+
+In the last release, 3.6, the ability to migrate Kafka clusters from a ZooKeeper metadata system to a KRaft metadata system (https://kafka.apache.org/documentation/#kraft_zk_migration) was ready for usage in production environments, with one caveat -- JBOD was not yet available for KRaft clusters.
+In this release, we are shipping an early access release of JBOD in KRaft. (See KIP-858 for details: https://cwiki.apache.org/confluence/display/KAFKA/KIP-858%3A+Handle+JBOD+broker+disk+failure+in+KRaft)
+
+Additionally, client APIs released prior to Apache Kafka 2.1 are now marked deprecated in 3.7 and will be removed in Apache Kafka 4.0. See KIP-896 (https://cwiki.apache.org/confluence/display/KAFKA/KIP-896%3A+Remove+old+client+protocol+API+versions+in+Kafka+4.0) for details and the RPC versions that are now deprecated.
+
+Java 11 support for the Kafka broker is also marked deprecated in 3.7, and is planned to be removed in Kafka 4.0. See KIP-1013 (https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=284789510) for more details.
+
+Note: ZooKeeper is marked as deprecated since the 3.5.0 release. ZooKeeper is planned to be removed in Apache Kafka 4.0. For more information, please see the documentation for ZooKeeper Deprecation (https://kafka.apache.org/documentation/#zk_depr).
+
+Kafka Broker, Controller, Producer, Consumer and Admin Client
+
+(Early Access) KIP-858 Handle JBOD broker disk failure in KRaft (https://cwiki.apache.org/confluence/display/KAFKA/KIP-858%3A+Handle+JBOD+broker+disk+failure+in+KRaft):
+This update closes the gap on one of the last major missing features in KRaft by adding JBOD support in KRaft-based clusters.
+
+KIP-714 Client metrics and observability (https://cwiki.apache.org/confluence/display/KAFKA/KIP-714%3A+Client+metrics+and+observability):
+With KIP-714, operators get better visibility into the clients connecting to their cluster with broker-side support of client-level metrics via a standardized telemetry interface.
+
+KIP-1000 List Client Metrics Configuration Resources (https://cwiki.apache.org/confluence/display/KAFKA/KIP-1000%3A+List+Client+Metrics+Configuration+Resources):
+KIP-1000 supports KIP-714 by introducing a way to create, read, update, and delete the client metrics configuration resources using the existing RPCs and the kafka-configs.sh tool; a usage sketch follows this excerpt.
+
+(Early Access) KIP-848 The Next Generation of the Consumer Rebalance Protocol (https://cwiki.apache.org/confluence/display/KAFKA/KIP-848%3A+The+Next+Generation+of+the+Consumer+Rebalance+Protocol):
+The new simplified Consumer Rebalance Protocol moves complexity away from the consumer and into the Group Coordinator within the broker and completely revamps the protocol to be incremental in nature. It provides the same guarantees as the current protocol -- but better and more efficient, including no longer relying on a global synchronization barrier. See the early access release notes for more information (https://cwiki.apache.org/confluence/display/KAFKA/The+Next+Generation+of+the+Consumer+Rebalance+Protocol+%28KIP-848%29+-+Early+Access+Release+Notes).
+
+KIP-951 Leader discovery optimisations for the client (https://cwiki.apache.org/confluence/display/KAFKA/KIP-951%3A+Leader+discovery+optimisations+for+the+client):
+KIP-951 optimizes the time it takes for a client to discover the new leader of a 
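
A hedged sketch of the KIP-1000 workflow referenced in the excerpt above (the subscription name and config values are illustrative; verify the exact syntax with kafka-configs.sh --help on a 3.7 build):

bin/kafka-configs.sh --bootstrap-server localhost:9092 --alter \
  --entity-type client-metrics --entity-name basic-producer-metrics \
  --add-config "metrics=org.apache.kafka.producer.,interval.ms=60000"

bin/kafka-configs.sh --bootstrap-server localhost:9092 --describe --entity-type client-metrics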

Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2665

2024-02-22 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 460933 lines...]
[2024-02-22T05:37:09.353Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testSetDataExistingZNode() STARTED
[2024-02-22T05:37:09.353Z] 
[2024-02-22T05:37:09.353Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testSetDataExistingZNode() PASSED
[2024-02-22T05:37:09.353Z] 
[2024-02-22T05:37:09.353Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChangeNotTriggered() 
STARTED
[2024-02-22T05:37:09.353Z] 
[2024-02-22T05:37:09.353Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testZNodeChildChangeHandlerForChildChangeNotTriggered() 
PASSED
[2024-02-22T05:37:09.353Z] 
[2024-02-22T05:37:09.353Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testMixedPipeline() STARTED
[2024-02-22T05:37:10.461Z] 
[2024-02-22T05:37:10.461Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testMixedPipeline() PASSED
[2024-02-22T05:37:10.461Z] 
[2024-02-22T05:37:10.461Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testGetDataExistingZNode() STARTED
[2024-02-22T05:37:10.461Z] 
[2024-02-22T05:37:10.461Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testGetDataExistingZNode() PASSED
[2024-02-22T05:37:10.461Z] 
[2024-02-22T05:37:10.461Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testDeleteExistingZNode() STARTED
[2024-02-22T05:37:10.461Z] 
[2024-02-22T05:37:10.461Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testDeleteExistingZNode() PASSED
[2024-02-22T05:37:10.461Z] 
[2024-02-22T05:37:10.461Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testSessionExpiry() STARTED
[2024-02-22T05:37:12.555Z] 
[2024-02-22T05:37:12.555Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testSessionExpiry() PASSED
[2024-02-22T05:37:12.555Z] 
[2024-02-22T05:37:12.555Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testSetDataNonExistentZNode() STARTED
[2024-02-22T05:37:12.555Z] 
[2024-02-22T05:37:12.555Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testSetDataNonExistentZNode() PASSED
[2024-02-22T05:37:12.555Z] 
[2024-02-22T05:37:12.555Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testConnectionViaNettyClient() STARTED
[2024-02-22T05:37:13.663Z] 
[2024-02-22T05:37:13.663Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testConnectionViaNettyClient() PASSED
[2024-02-22T05:37:13.663Z] 
[2024-02-22T05:37:13.663Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testDeleteNonExistentZNode() STARTED
[2024-02-22T05:37:13.663Z] 
[2024-02-22T05:37:13.663Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testDeleteNonExistentZNode() PASSED
[2024-02-22T05:37:13.663Z] 
[2024-02-22T05:37:13.663Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testExistsExistingZNode() STARTED
[2024-02-22T05:37:13.663Z] 
[2024-02-22T05:37:13.663Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testExistsExistingZNode() PASSED
[2024-02-22T05:37:13.663Z] 
[2024-02-22T05:37:13.663Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics() STARTED
[2024-02-22T05:37:14.768Z] 
[2024-02-22T05:37:14.768Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testZooKeeperStateChangeRateMetrics() PASSED
[2024-02-22T05:37:14.768Z] 
[2024-02-22T05:37:14.768Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testZNodeChangeHandlerForDeletion() STARTED
[2024-02-22T05:37:14.768Z] 
[2024-02-22T05:37:14.768Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testZNodeChangeHandlerForDeletion() PASSED
[2024-02-22T05:37:14.768Z] 
[2024-02-22T05:37:14.768Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testGetAclNonExistentZNode() STARTED
[2024-02-22T05:37:14.768Z] 
[2024-02-22T05:37:14.768Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testGetAclNonExistentZNode() PASSED
[2024-02-22T05:37:14.768Z] 
[2024-02-22T05:37:14.769Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testStateChangeHandlerForAuthFailure() STARTED
[2024-02-22T05:37:15.875Z] 
[2024-02-22T05:37:15.875Z] Gradle Test Run :core:test > Gradle Test Executor 95 
> ZooKeeperClientTest > testStateChangeHandlerForAuthFailure() PASSED
[2024-02-22T05:37:15.875Z]