Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2465

2023-12-08 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 348526 lines...]

Gradle Test Run :streams:test > Gradle Test Executor 90 > TasksTest > 
shouldDrainPendingTasksToCreate() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 90 > TasksTest > 
onlyRemovePendingTaskToRecycleShouldRemoveTaskFromPendingUpdateActions() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 90 > TasksTest > 
onlyRemovePendingTaskToRecycleShouldRemoveTaskFromPendingUpdateActions() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 90 > TasksTest > 
shouldAddAndRemovePendingTaskToCloseClean() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 90 > TasksTest > 
shouldAddAndRemovePendingTaskToCloseClean() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 90 > TasksTest > 
shouldKeepAddedTasks() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 90 > TasksTest > 
shouldKeepAddedTasks() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 90 > 
RocksDBBlockCacheMetricsTest > shouldRecordCorrectBlockCacheUsage(RocksDBStore, 
StateStoreContext) > [1] 
org.apache.kafka.streams.state.internals.RocksDBStore@6d686a2a, 
org.apache.kafka.test.MockInternalProcessorContext@6c237a56 STARTED

Gradle Test Run :streams:test > Gradle Test Executor 90 > 
RocksDBBlockCacheMetricsTest > shouldRecordCorrectBlockCacheUsage(RocksDBStore, 
StateStoreContext) > [1] 
org.apache.kafka.streams.state.internals.RocksDBStore@6d686a2a, 
org.apache.kafka.test.MockInternalProcessorContext@6c237a56 PASSED

Gradle Test Run :streams:test > Gradle Test Executor 90 > 
RocksDBBlockCacheMetricsTest > shouldRecordCorrectBlockCacheUsage(RocksDBStore, 
StateStoreContext) > [2] 
org.apache.kafka.streams.state.internals.RocksDBTimestampedStore@59816ffa, 
org.apache.kafka.test.MockInternalProcessorContext@51cf7b5d STARTED

Gradle Test Run :streams:test > Gradle Test Executor 90 > 
RocksDBBlockCacheMetricsTest > shouldRecordCorrectBlockCacheUsage(RocksDBStore, 
StateStoreContext) > [2] 
org.apache.kafka.streams.state.internals.RocksDBTimestampedStore@59816ffa, 
org.apache.kafka.test.MockInternalProcessorContext@51cf7b5d PASSED

Gradle Test Run :streams:test > Gradle Test Executor 90 > 
RocksDBBlockCacheMetricsTest > 
shouldRecordCorrectBlockCachePinnedUsage(RocksDBStore, StateStoreContext) > [1] 
org.apache.kafka.streams.state.internals.RocksDBStore@5cb1511e, 
org.apache.kafka.test.MockInternalProcessorContext@66ea2739 STARTED

Gradle Test Run :streams:test > Gradle Test Executor 90 > 
RocksDBBlockCacheMetricsTest > 
shouldRecordCorrectBlockCachePinnedUsage(RocksDBStore, StateStoreContext) > [1] 
org.apache.kafka.streams.state.internals.RocksDBStore@5cb1511e, 
org.apache.kafka.test.MockInternalProcessorContext@66ea2739 PASSED

Gradle Test Run :streams:test > Gradle Test Executor 90 > 
RocksDBBlockCacheMetricsTest > 
shouldRecordCorrectBlockCachePinnedUsage(RocksDBStore, StateStoreContext) > [2] 
org.apache.kafka.streams.state.internals.RocksDBTimestampedStore@1899c3ea, 
org.apache.kafka.test.MockInternalProcessorContext@4ddc6f02 STARTED

Gradle Test Run :streams:test > Gradle Test Executor 90 > 
RocksDBBlockCacheMetricsTest > 
shouldRecordCorrectBlockCachePinnedUsage(RocksDBStore, StateStoreContext) > [2] 
org.apache.kafka.streams.state.internals.RocksDBTimestampedStore@1899c3ea, 
org.apache.kafka.test.MockInternalProcessorContext@4ddc6f02 PASSED

Gradle Test Run :streams:test > Gradle Test Executor 90 > 
RocksDBBlockCacheMetricsTest > 
shouldRecordCorrectBlockCacheCapacity(RocksDBStore, StateStoreContext) > [1] 
org.apache.kafka.streams.state.internals.RocksDBStore@7be6d3b6, 
org.apache.kafka.test.MockInternalProcessorContext@3d209a53 STARTED

Gradle Test Run :streams:test > Gradle Test Executor 90 > 
RocksDBBlockCacheMetricsTest > 
shouldRecordCorrectBlockCacheCapacity(RocksDBStore, StateStoreContext) > [1] 
org.apache.kafka.streams.state.internals.RocksDBStore@7be6d3b6, 
org.apache.kafka.test.MockInternalProcessorContext@3d209a53 PASSED

Gradle Test Run :streams:test > Gradle Test Executor 90 > 
RocksDBBlockCacheMetricsTest > 
shouldRecordCorrectBlockCacheCapacity(RocksDBStore, StateStoreContext) > [2] 
org.apache.kafka.streams.state.internals.RocksDBTimestampedStore@8da692b, 
org.apache.kafka.test.MockInternalProcessorContext@226e6195 STARTED

Gradle Test Run :streams:test > Gradle Test Executor 90 > 
RocksDBBlockCacheMetricsTest > 
shouldRecordCorrectBlockCacheCapacity(RocksDBStore, StateStoreContext) > [2] 
org.apache.kafka.streams.state.internals.RocksDBTimestampedStore@8da692b, 
org.apache.kafka.test.MockInternalProcessorContext@226e6195 PASSED

Gradle Test Run :streams:test > Gradle Test Executor 90 > 
RocksDBMetricsRecorderTest > 
shouldThrowIfDbToAddWasAlreadyAddedForOtherSegment() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 90 > 
RocksDBMetricsRecorderTest > 

Re: [DISCUSS] KIP-939: Support Participation in 2PC

2023-12-08 Thread Jun Rao
Hi, Artem,

Thanks for the KIP. A few comments below.

10. For the two new fields Enable2Pc and KeepPreparedTxn in
InitProducerId, it would be useful to document in a bit more detail which
values are set in which cases. For example, are all four combinations
valid?

11. InitProducerIdResponse: If there is no ongoing txn, what will
OngoingTxnProducerId and OngoingTxnEpoch be set to?

12. ListTransactionsRequest related changes: It seems those are already
covered by KIP-994?

13. TransactionalLogValue: Could we name TransactionProducerId and
ProducerId better? It's not clear from the names which is which.

14. "Note that the (producerId, epoch) pair that corresponds to the ongoing
transaction is going to be written instead of the existing ProducerId and
ProducerEpoch fields (which are renamed to reflect the semantics) to
support downgrade.": I am a bit confused about that. Are we writing different
values to the existing fields? Then we can't downgrade, right?

15. active-transaction-total-time-max : Would
active-transaction-open-time-max be more intuitive? Also, could we include
the full name (group, tags, etc)?

16. "transaction.two.phase.commit.enable The default would be ‘false’.  If
it’s ‘false’, 2PC functionality is disabled even if the ACL is set, clients
that attempt to use this functionality would receive
TRANSACTIONAL_ID_AUTHORIZATION_FAILED error."
TRANSACTIONAL_ID_AUTHORIZATION_FAILED seems unintuitive; it doesn't help the
client understand what the actual cause is.

17. completeTransaction(). We expect this to be only used during recovery.
Could we document this clearly? Could we prevent it from being used
incorrectly (e.g. throw an exception if the producer has called other
methods like send())?

18. "either prepareTransaction was called or initTransaction(true) was
called": "either" should be "neither"?

19. Since InitProducerId always bumps up the epoch, it creates a situation
where there could be multiple outstanding txns. The following is an example
of a potential problem during recovery.
   The last txn epoch in the external store is 41 when the app dies.
   Instance1 is created for recovery.
 1. (instance1) InitProducerId(keepPreparedTxn=true), epoch=42,
ongoingEpoch=41
 2. (instance1) dies before completeTxn(41) can be called.
   Instance2 is created for recovery.
 3. (instance2) InitProducerId(keepPreparedTxn=true), epoch=43,
ongoingEpoch=42
 4. (instance2) completeTxn(41) => abort
   The first problem is that txn 41 is now aborted when it should be committed.
The second problem is that it's not clear who could abort epoch 42, which is
still open.
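
To make the race concrete, here is a minimal sketch of the recovery logic in
Java, written against the producer API proposed in this KIP
(initTransactions(keepPreparedTxn) and completeTransaction()); the
ExternalTxnStore accessor is a hypothetical stand-in for the external
transaction store:

    // Sketch only; assumes the producer methods proposed in this KIP.
    void recover(KafkaProducer<byte[], byte[]> producer,
                 ExternalTxnStore store) throws Exception {
        // Always bumps the epoch (e.g. 41 -> 42) while keeping the
        // prepared transaction, per this KIP.
        producer.initTransactions(true);

        // The externally stored state still names epoch 41.
        PreparedTxnState stored = store.readPreparedTxnState();

        // Commits only if the stored state matches the ongoing txn,
        // otherwise aborts. If an earlier recovery instance already bumped
        // the epoch to 42 and died, this aborts txn 41 (which should have
        // been committed) and leaves the epoch-42 txn with nobody to abort
        // it -- exactly the problem described in 19.
        producer.completeTransaction(stored);
    }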

Jun


On Thu, Dec 7, 2023 at 2:43 PM Justine Olshan 
wrote:

> Hey Artem,
>
> Thanks for the updates. I think what you say makes sense. I just updated my
> KIP so I want to reconcile some of the changes we made especially with
> respect to the TransactionLogValue.
>
> Firstly, I believe tagged fields require a default value so that if they
> are not filled, we return the default (and know that they were empty). For
> my KIP, I proposed the default for producer ID tagged fields should be -1.
> I was wondering if we could update the KIP to include the default values
> for producer ID and epoch.
>
> Next, I noticed we decided to rename the fields. I guess that the field
> "NextProducerId" in my KIP correlates to "ProducerId" in this KIP. Is that
> correct? So we would have "TransactionProducerId" for the non-tagged field
> and have "ProducerId" (NextProducerId) and "PrevProducerId" as tagged
> fields in the final version after KIP-890 and KIP-936 are implemented. Is this
> correct? I think the tags will need updating, but that is trivial.
>
> The final question I had was with respect to storing the new epoch. In
> KIP-890 part 2 (epoch bumps) I think we concluded that we don't need to
> store the epoch since we can interpret the previous epoch based on the
> producer ID. But here we could call InitProducerId multiple times and
> we only want the producer with the correct epoch to be able to commit the
> transaction. Is that the correct reasoning for why we need the epoch here
> but not in the Prepare/Commit state?
>
> Thanks,
> Justine
>
> On Wed, Nov 22, 2023 at 9:48 AM Artem Livshits
>  wrote:
>
> > Hi Justine,
> >
> > After thinking a bit about supporting atomic dual writes for Kafka + NoSQL
> > database, I came to the conclusion that we do need to bump the epoch even
> > with InitProducerId(keepPreparedTxn=true).  As I described in my previous
> > email, we wouldn't need to bump the epoch to protect from zombies, so that
> > reasoning is still true.  But we cannot protect from split-brain scenarios
> > when two or more instances of a producer with the same transactional id try
> > to produce at the same time.  The dual-write example for SQL databases (
> > https://github.com/apache/kafka/pull/14231/files) doesn't have a
> > split-brain problem because execution is protected by the update lock on
> > the transaction state record; however NoSQL 

[jira] [Created] (KAFKA-15991) Flaky new consumer test testGroupIdNotNullAndValid

2023-12-08 Thread Lianet Magrans (Jira)
Lianet Magrans created KAFKA-15991:
--

 Summary: Flaky new consumer test testGroupIdNotNullAndValid
 Key: KAFKA-15991
 URL: https://issues.apache.org/jira/browse/KAFKA-15991
 Project: Kafka
  Issue Type: Task
  Components: clients, consumer
Reporter: Lianet Magrans


Fails locally when running it in a loop with its latest changes from
[https://github.com/apache/kafka/commit/6df192b6cb1397a6e6173835bbbd8a3acb7e3988].
It failed the build, so it has been temporarily disabled.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Jenkins build is still unstable: Kafka » Kafka Branch Builder » trunk #2464

2023-12-08 Thread Apache Jenkins Server
See 




Re: [VOTE] KIP-996: Pre-Vote

2023-12-08 Thread Jun Rao
Hi, Alyssa,

Thanks for the KIP. +1

Jun

On Fri, Dec 8, 2023 at 10:52 AM José Armando García Sancio
 wrote:

> +1.
>
> Thanks for the KIP. Looking forward to the implementation!
>
> --
> -José
>


Re: [VOTE] KIP-996: Pre-Vote

2023-12-08 Thread José Armando García Sancio
+1.

Thanks for the KIP. Looking forward to the implementation!

-- 
-José


Jenkins build is unstable: Kafka » Kafka Branch Builder » trunk #2463

2023-12-08 Thread Apache Jenkins Server
See 




[jira] [Resolved] (KAFKA-15985) Mirrormaker 2 offset sync is incomplete

2023-12-08 Thread Philipp Dallig (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15985?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Philipp Dallig resolved KAFKA-15985.

Resolution: Fixed

> Mirrormaker 2 offset sync is incomplete
> ---
>
> Key: KAFKA-15985
> URL: https://issues.apache.org/jira/browse/KAFKA-15985
> Project: Kafka
>  Issue Type: Bug
>Affects Versions: 3.5.1
>Reporter: Philipp Dallig
>Priority: Major
>
> We are currently trying to migrate between two Kafka clusters using 
> Mirrormaker2
> new kafka cluster version: 7.5.2-ccs
> old kafka cluster version: kafka_2.13-2.8.0
> The Mirrormaker 2 process runs on the new cluster (target cluster).
> My main problem: The lag in the target cluster is not the same as in the 
> source cluster.
> I have set up a producer and a consumer against the old Kafka cluster. If I 
> stop both, the lag in the old Kafka cluster is 0, while it is > 0 in the new 
> Kafka cluster.
> target cluster
> {code}
> GROUP        TOPIC                     PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID
> test-sync-5  kafka-replication-test-5  0          3637            3668            31   -            -     -
> {code}
> source cluster
> {code}
> GROUP        TOPIC                     PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID
> test-sync-5  kafka-replication-test-5  0          3668            3668            0    -            -     -
> {code}
> MM2 configuration without connection properties.
> {code}
> t-kafka->t-extkafka.enabled = true
> t-kafka->t-extkafka.topics = ops_filebeat, kafka-replication-.*
> t-kafka->t-extkafka.sync.topic.acls.enabled = false
> t-kafka->t-extkafka.sync.group.offsets.enabled = true
> t-kafka->t-extkafka.sync.group.offsets.interval.seconds = 30
> t-kafka->t-extkafka.refresh.groups.interval.seconds = 30
> t-kafka->t-extkafka.offset-syncs.topic.location = target
> t-kafka->t-extkafka.emit.checkpoints.interval.seconds = 30
> t-kafka->t-extkafka.replication.policy.class = 
> org.apache.kafka.connect.mirror.IdentityReplicationPolicy
> {code}



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-14516) Implement static membership

2023-12-08 Thread David Jacot (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-14516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David Jacot resolved KAFKA-14516.
-
Fix Version/s: 3.7.0
   Resolution: Fixed

> Implement static membership
> --
>
> Key: KAFKA-14516
> URL: https://issues.apache.org/jira/browse/KAFKA-14516
> Project: Kafka
>  Issue Type: Sub-task
>Reporter: David Jacot
>Assignee: Sagar Rao
>Priority: Major
>  Labels: kip-848-preview
> Fix For: 3.7.0
>
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15979) Add KIP-1001 CurrentControllerId metric

2023-12-08 Thread Luke Chen (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15979?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Luke Chen resolved KAFKA-15979.
---
Resolution: Duplicate

Duplicate of KAFKA-15980. Closing this one. cc [~cmccabe]

> Add KIP-1001 CurrentControllerId metric
> ---
>
> Key: KAFKA-15979
> URL: https://issues.apache.org/jira/browse/KAFKA-15979
> Project: Kafka
>  Issue Type: Improvement
>Reporter: Colin McCabe
>Assignee: Colin McCabe
>Priority: Major
>




--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Build failed in Jenkins: Kafka » Kafka Branch Builder » trunk #2462

2023-12-08 Thread Apache Jenkins Server
See 


Changes:


--
[...truncated 344387 lines...]
Gradle Test Run :streams:test > Gradle Test Executor 91 > 
RocksDBMetricsRecorderTest > 
shouldSetStatsLevelToExceptDetailedTimersWhenValueProvidersWithStatisticsAreAdded()
 PASSED

Gradle Test Run :streams:test > Gradle Test Executor 91 > 
RocksDBMetricsRecorderTest > shouldRecordStatisticsBasedMetrics() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 91 > 
RocksDBMetricsRecorderTest > shouldRecordStatisticsBasedMetrics() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 91 > 
RocksDBMetricsRecorderTest > 
shouldThrowIfMetricRecorderIsReInitialisedWithDifferentStreamsMetrics() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 91 > 
RocksDBMetricsRecorderTest > 
shouldThrowIfMetricRecorderIsReInitialisedWithDifferentStreamsMetrics() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 91 > 
RocksDBMetricsRecorderTest > shouldInitMetricsRecorder() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 91 > 
RocksDBMetricsRecorderTest > shouldInitMetricsRecorder() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 91 > 
RocksDBMetricsRecorderTest > 
shouldThrowIfMetricRecorderIsReInitialisedWithDifferentTask() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 91 > 
RocksDBMetricsRecorderTest > 
shouldThrowIfMetricRecorderIsReInitialisedWithDifferentTask() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 91 > 
RocksDBMetricsRecorderTest > 
shouldCorrectlyHandleAvgRecordingsWithZeroSumAndCount() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 91 > 
RocksDBMetricsRecorderTest > 
shouldCorrectlyHandleAvgRecordingsWithZeroSumAndCount() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 91 > 
RocksDBMetricsRecorderTest > 
shouldThrowIfStatisticsToAddIsNullButExistingStatisticsAreNotNull() STARTED

Gradle Test Run :streams:test > Gradle Test Executor 91 > 
RocksDBMetricsRecorderTest > 
shouldThrowIfStatisticsToAddIsNullButExistingStatisticsAreNotNull() PASSED

Gradle Test Run :streams:test > Gradle Test Executor 91 > 
RocksDBMetricsRecorderTest > shouldNotAddItselfToRecordingTriggerWhenNotEmpty() 
STARTED

Gradle Test Run :streams:test > Gradle Test Executor 91 > 
RocksDBMetricsRecorderTest > shouldNotAddItselfToRecordingTriggerWhenNotEmpty() 
PASSED

streams-0: SMOKE-TEST-CLIENT-CLOSED
streams-2: SMOKE-TEST-CLIENT-CLOSED

> Task :core:test

Gradle Test Run :core:test > Gradle Test Executor 102 > 
DelegationTokenEndToEndAuthorizationWithOwnerTest > 
testProduceConsumeWithWildcardAcls(String) > 
testProduceConsumeWithWildcardAcls(String).quorum=zk PASSED

Gradle Test Run :core:test > Gradle Test Executor 102 > 
PlaintextAdminIntegrationTest > testAlterReplicaLogDirs(String) > 
testAlterReplicaLogDirs(String).quorum=zk STARTED

streams-4: SMOKE-TEST-CLIENT-CLOSED
streams-6: SMOKE-TEST-CLIENT-CLOSED
streams-5: SMOKE-TEST-CLIENT-CLOSED
streams-2: SMOKE-TEST-CLIENT-CLOSED
streams-3: SMOKE-TEST-CLIENT-CLOSED
streams-8: SMOKE-TEST-CLIENT-CLOSED
streams-1: SMOKE-TEST-CLIENT-CLOSED
streams-5: SMOKE-TEST-CLIENT-CLOSED
streams-4: SMOKE-TEST-CLIENT-CLOSED
streams-0: SMOKE-TEST-CLIENT-CLOSED
streams-2: SMOKE-TEST-CLIENT-CLOSED
streams-4: SMOKE-TEST-CLIENT-CLOSED
streams-6: SMOKE-TEST-CLIENT-CLOSED
streams-1: SMOKE-TEST-CLIENT-CLOSED
streams-7: SMOKE-TEST-CLIENT-CLOSED
streams-7: SMOKE-TEST-CLIENT-CLOSED
streams-1: SMOKE-TEST-CLIENT-CLOSED
streams-0: SMOKE-TEST-CLIENT-CLOSED
streams-3: SMOKE-TEST-CLIENT-CLOSED
streams-3: SMOKE-TEST-CLIENT-CLOSED
streams-5: SMOKE-TEST-CLIENT-CLOSED
streams-7: SMOKE-TEST-CLIENT-CLOSED
streams-6: SMOKE-TEST-CLIENT-CLOSED

> Task :core:test

Gradle Test Run :core:test > Gradle Test Executor 102 > 
PlaintextAdminIntegrationTest > testAlterReplicaLogDirs(String) > 
testAlterReplicaLogDirs(String).quorum=zk PASSED

Gradle Test Run :core:test > Gradle Test Executor 102 > 
PlaintextAdminIntegrationTest > testAlterReplicaLogDirs(String) > 
testAlterReplicaLogDirs(String).quorum=kraft STARTED

Gradle Test Run :core:test > Gradle Test Executor 103 > 
ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [1] Type=ZK, MetadataVersion=3.4-IV0, 
Security=PLAINTEXT SKIPPED

Gradle Test Run :core:test > Gradle Test Executor 103 > 
ZkMigrationIntegrationTest > testMigrateTopicDeletions(ClusterInstance) > 
testMigrateTopicDeletions [2] Type=ZK, MetadataVersion=3.5-IV2, 
Security=PLAINTEXT STARTED

Failed to map supported failure 'org.opentest4j.AssertionFailedError: timed out 
waiting for replica movement' with mapper 
'org.gradle.api.internal.tasks.testing.failure.mappers.OpenTestAssertionFailedMapper@3c3a767':
 Cannot invoke "Object.getClass()" because "obj" is null

> Task :streams:test

Gradle Test Run :streams:test > Gradle Test Executor 104 > 
EosV2UpgradeIntegrationTest > [false] > 

Re: [ANNOUNCE] Apache Kafka 3.6.1

2023-12-08 Thread Luke Chen
Hi Mickael,

Thanks for running this release!

Luke

On Thu, Dec 7, 2023 at 7:13 PM Mickael Maison  wrote:

> The Apache Kafka community is pleased to announce the release for
> Apache Kafka 3.6.1
>
> This is a bug fix release and it includes fixes and improvements from 30
> JIRAs.
>
> All of the changes in this release can be found in the release notes:
> https://www.apache.org/dist/kafka/3.6.1/RELEASE_NOTES.html
>
> You can download the source and binary release (Scala 2.12 and Scala 2.13)
> from:
> https://kafka.apache.org/downloads#3.6.1
>
>
> ---
>
> Apache Kafka is a distributed streaming platform with four core APIs:
>
> ** The Producer API allows an application to publish a stream of records to
> one or more Kafka topics.
>
> ** The Consumer API allows an application to subscribe to one or more
> topics and process the stream of records produced to them.
>
> ** The Streams API allows an application to act as a stream processor,
> consuming an input stream from one or more topics and producing an
> output stream to one or more output topics, effectively transforming the
> input streams to output streams.
>
> ** The Connector API allows building and running reusable producers or
> consumers that connect Kafka topics to existing applications or data
> systems. For example, a connector to a relational database might
> capture every change to a table.
>
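> As a quick illustration of the Producer API, a minimal Java sketch (assumes
> a broker on localhost:9092 and a topic named "demo"):
>
>   import java.util.Properties;
>   import org.apache.kafka.clients.producer.KafkaProducer;
>   import org.apache.kafka.clients.producer.ProducerRecord;
>
>   public class DemoProducer {
>       public static void main(String[] args) {
>           Properties props = new Properties();
>           props.put("bootstrap.servers", "localhost:9092");
>           props.put("key.serializer",
>               "org.apache.kafka.common.serialization.StringSerializer");
>           props.put("value.serializer",
>               "org.apache.kafka.common.serialization.StringSerializer");
>           // close() flushes any pending sends.
>           try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
>               producer.send(new ProducerRecord<>("demo", "key", "value"));
>           }
>       }
>   }
>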
>
> With these APIs, Kafka can be used for two broad classes of application:
>
> ** Building real-time streaming data pipelines that reliably get data
> between systems or applications.
>
> ** Building real-time streaming applications that transform or react
> to the streams of data.
>
>
> Apache Kafka is in use at large and small companies worldwide, including
> Capital One, Goldman Sachs, ING, LinkedIn, Netflix, Pinterest, Rabobank,
> Target, The New York Times, Uber, Yelp, and Zalando, among others.
>
> A big thank you to the following 39 contributors to this release!
> (Please report an unintended omission)
>
> Anna Sophie Blee-Goldman, Arpit Goyal, atu-sharm, Bill Bejeck, Chris
> Egerton, Colin P. McCabe, David Arthur, David Jacot, Divij Vaidya,
> Federico Valeri, Greg Harris, Guozhang Wang, Hao Li, hudeqi,
> iit2009060, Ismael Juma, Jorge Esteban Quilcate Otoya, Josep Prat,
> Jotaniya Jeel, Justine Olshan, Kamal Chandraprakash, kumarpritam863,
> Levani Kokhreidze, Lucas Brutschy, Luke Chen, Manikumar Reddy,
> Matthias J. Sax, Mayank Shekhar Narula, Mickael Maison, Nick Telford,
> Philip Nee, Qichao Chu, Rajini Sivaram, Robert Wagner, Sagar Rao,
> Satish Duggana, Walker Carlson, Xiaobing Fang, Yash Mayya
>
> We welcome your help and feedback. For more information on how to
> report problems, and to get involved, visit the project website at
> https://kafka.apache.org/
>
> Thank you!
>
> Regards,
> Mickael
>


[jira] [Created] (KAFKA-15990) Add additional parameter which allows parameter remote.enabled and its relatives to be wiped if server tiering is disabled

2023-12-08 Thread Viktor Nikitash (Jira)
Viktor Nikitash created KAFKA-15990:
---

 Summary: Add additional parameter which allows parameter 
remote.enabled and its relatives to be wiped if server tiering is disabled
 Key: KAFKA-15990
 URL: https://issues.apache.org/jira/browse/KAFKA-15990
 Project: Kafka
  Issue Type: Improvement
  Components: Tiered-Storage
Reporter: Viktor Nikitash


h2. Background


There is a gap in Kafka versions [3.1.x; 3.6.0): tiering can be enabled even 
though it does not work at all in those versions. When we then upgrade Kafka to 
version 3.6.0, the broker enters a crash-restart loop with the message "You 
have to delete all topics with the property remote.storage.enable=true before 
disabling tiered storage cluster-wide".
https://github.com/apache/kafka/blob/43c635f3a48fcb16dd34eac16def379141912e77/storage/src/main/java/org/apache/kafka/storage/internals/log/LogConfig.java#L564
h3. The Idea

The idea is to create a new parameter that allows wiping the remote-storage 
configurations at the topic level. Its default value would be false (which 
keeps the current behavior), but when it is manually set to true, those remote 
topic parameters are wiped.


Additional checks are of course required, e.g. doing this only on Kafka startup 
and ensuring that there is no data left in remote storage.
h3. Motivation

Since tiering does not work before 3.6.0, there is no reason to keep these 
configs when migrating from an older 3.x version to 3.6.0+. When managing a 
large number of clusters this becomes a real problem, so enabling the new 
parameter before the migration to 3.6.0+ and disabling it afterwards makes the 
migration easier.
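
For illustration, the proposed switch might look like this in server.properties 
during the upgrade; the parameter name below is hypothetical, invented here 
only to show the intent:
{code}
# Hypothetical parameter (name not final): when true, wipe topic-level
# remote.storage.enable and related remote-storage configs on broker startup.
remote.storage.configs.wipe.enable=true

# Existing 3.6.0+ setting; the crash-restart loop happens when this is false
# but topics still carry remote.storage.enable=true.
remote.log.storage.system.enable=false
{code}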



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[jira] [Resolved] (KAFKA-15986) New consumer group protocol integration test failures

2023-12-08 Thread Andrew Schofield (Jira)


 [ 
https://issues.apache.org/jira/browse/KAFKA-15986?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Schofield resolved KAFKA-15986.
--
Resolution: Fixed

> New consumer group protocol integration test failures
> -
>
> Key: KAFKA-15986
> URL: https://issues.apache.org/jira/browse/KAFKA-15986
> Project: Kafka
>  Issue Type: Bug
>  Components: clients
>Affects Versions: 3.7.0
>Reporter: Andrew Schofield
>Assignee: Andrew Schofield
>Priority: Major
>  Labels: CTR
> Fix For: 3.7.0
>
>
> A recent change in `AsyncKafkaConsumer.updateFetchPositions` has made 
> fetching fail without returning records in some situations.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


[PR] MINOR: update doc and javadoc for release v3.5.2 [kafka-site]

2023-12-08 Thread via GitHub


showuon opened a new pull request, #571:
URL: https://github.com/apache/kafka-site/pull/571

   update doc and javadoc for release v3.5.2


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: dev-unsubscr...@kafka.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[jira] [Created] (KAFKA-15989) Upgrade existing generic group to consumer group

2023-12-08 Thread Emanuele Sabellico (Jira)
Emanuele Sabellico created KAFKA-15989:
--

 Summary: Upgrade existing generic group to consumer group
 Key: KAFKA-15989
 URL: https://issues.apache.org/jira/browse/KAFKA-15989
 Project: Kafka
  Issue Type: Sub-task
Reporter: Emanuele Sabellico


It should be possible to upgrade an existing generic group to a new consumer 
group, whether it was using the previous generic protocol or manual partition 
assignment and commit.



--
This message was sent by Atlassian Jira
(v8.20.10#820010)


Re: [VOTE] 3.5.2 RC1

2023-12-08 Thread Luke Chen
Hi all,

Thanks for the votes. I've got:
3 binding +1 votes (Mickael, Justine, Tom)
3 non-binding +1 votes (Josep, Federico, Jakub)

I will close this vote thread and go ahead to complete the release process.

Thanks.
Luke


On Fri, Dec 8, 2023 at 3:30 PM Tom Bentley  wrote:

> Hi,
>
> I have validated signatures, checked the Java docs, built from source and
> run tests. I had a few unit test failures, but I note that others saw them
> pass and the CI was green too, so I think this is a problem with my system
> rather than the release.
>
> +1 (binding).
>
> Thanks!
>
> On Wed, 6 Dec 2023 at 12:17, Justine Olshan 
> wrote:
>
> > Hey all,
> >
> > I've built from source, ran unit tests, and ran a produce bench test on a
> > running server.
> > I've also scanned the various release components. Given the test results
> > and the validations, +1 (binding) from me.
> >
> > Thanks,
> > Justine
> >
> > On Tue, Dec 5, 2023 at 3:59 AM Luke Chen  wrote:
> >
> > > Hi all,
> > >
> > > Thanks for helping validate the RC1 build.
> > > I've got 1 binding, and 3 non-binding votes.
> > > Please help validate it when available.
> > >
> > > Update for the system test results:
> > >
> > >
> >
> https://drive.google.com/file/d/1gLt5hTFCVnpoKZ_I5KmUvnowVGtzfip_/view?usp=sharing
> > >
> > > The run failed in 2 groups of tests:
> > > 1. The quota_test suite failed with "ValueError: max() arg is an empty
> > > sequence". This is a known issue and these tests pass after a re-run.
> > > 2. zookeeper_migration_test failed with
> > >   2.1. "Kafka server didn't finish startup in 60 seconds": This is
> > > because we added a constraint that migrating from ZK to KRaft is not
> > > supported while JBOD is in use. These system tests are fixed in this PR
> > > in trunk:
> > >
> > >
> >
> https://github.com/apache/kafka/pull/14654/files#diff-17b8c06d37fe43a3bd6ba5b89e08ff8f988ad5f4e5f7eda87844d51f7e5a5b96R61
> > >   2.2. "Zookeeper node failed to start": This is because the ZK is
> > > pointing to version 3.4.0, which should be 3.4.1. These system tests are
> > > fixed in this PR in trunk:
> > >
> > >
> >
> https://github.com/apache/kafka/pull/14208/files#diff-17b8c06d37fe43a3bd6ba5b89e08ff8f988ad5f4e5f7eda87844d51f7e5a5b96R143
> > >
> > > I've confirmed that after applying this patch, the system tests pass now.
> > > The PR to backport this fix to the 3.5 branch is open:
> > > https://github.com/apache/kafka/pull/14927
> > > But that doesn't block the 3.5.2 release because these are test-only
> > > problems.
> > >
> > > Thank you.
> > > Luke
> > >
> > > On Mon, Nov 27, 2023 at 2:13 AM Mickael Maison <
> mickael.mai...@gmail.com
> > >
> > > wrote:
> > >
> > > > Hi Luke,
> > > >
> > > > I ran the following checks:
> > > > - Verified signatures and checksums
> > > > - Ran the KRaft and ZooKeeper quickstarts with the 2.13 binaries
> > > > - Built sources and ran unit/integration tests with Java 17
> > > >
> > > > +1 (binding)
> > > >
> > > > Thanks,
> > > > Mickael
> > > >
> > > >
> > > > On Fri, Nov 24, 2023 at 10:41 AM Jakub Scholz 
> wrote:
> > > > >
> > > > > +1 non-binding. I used the staged Scala 2.13 binaries and the
> staged
> > > > Maven
> > > > > repo to run my tests and all seems to work fine.
> > > > >
> > > > > Thanks & Regards
> > > > > Jakub
> > > > >
> > > > > On Tue, Nov 21, 2023 at 11:09 AM Luke Chen 
> > wrote:
> > > > >
> > > > > > Hello Kafka users, developers and client-developers,
> > > > > >
> > > > > > This is the first candidate for release of Apache Kafka 3.5.2.
> > > > > >
> > > > > > This is a bugfix release with several fixes since the release of
> > > 3.5.1,
> > > > > > including dependency version bumps for CVEs.
> > > > > >
> > > > > > Release notes for the 3.5.2 release:
> > > > > >
> > https://home.apache.org/~showuon/kafka-3.5.2-rc1/RELEASE_NOTES.html
> > > > > >
> > > > > > *** Please download, test and vote by Nov. 28.
> > > > > >
> > > > > > Kafka's KEYS file containing PGP keys we use to sign the release:
> > > > > > https://kafka.apache.org/KEYS
> > > > > >
> > > > > > * Release artifacts to be voted upon (source and binary):
> > > > > > https://home.apache.org/~showuon/kafka-3.5.2-rc1/
> > > > > >
> > > > > > * Maven artifacts to be voted upon:
> > > > > >
> > > https://repository.apache.org/content/groups/staging/org/apache/kafka/
> > > > > >
> > > > > > * Javadoc:
> > > > > > https://home.apache.org/~showuon/kafka-3.5.2-rc1/javadoc/
> > > > > >
> > > > > > * Tag to be voted upon (off 3.5 branch) is the 3.5.2 tag:
> > > > > > https://github.com/apache/kafka/releases/tag/3.5.2-rc1
> > > > > >
> > > > > > * Documentation:
> > > > > > https://kafka.apache.org/35/documentation.html
> > > > > >
> > > > > > * Protocol:
> > > > > > https://kafka.apache.org/35/protocol.html
> > > > > >
> > > > > > * Successful Jenkins builds for the 3.5 branch:
> > > > > > Unit/integration tests:
> > > > > > https://ci-builds.apache.org/job/Kafka/job/kafka/job/3.5/98/
> > > > > > There are some flaky tests, including the 

Re: [VOTE] KIP-996: Pre-Vote

2023-12-08 Thread Luke Chen
Hi Alyssa,

+1 from me.
Thanks for the improvements!

Luke

On Fri, Dec 8, 2023 at 10:36 AM Jason Gustafson 
wrote:

> +1 Thanks for the KIP! Nice to see progress with the raft protocol.
>
> On Thu, Dec 7, 2023 at 5:10 PM Alyssa Huang 
> wrote:
>
> > Hey folks,
> >
> > I would like to start a vote on KIP-996: Pre-Vote. Thank you Jose, Jason,
> > Luke, and Jun for your comments on the discussion thread!
> >
> > Here's the link to the proposal -
> > https://cwiki.apache.org/confluence/display/KAFKA/KIP-996%3A+Pre-Vote
> > 
> > Here's the link to the discussion -
> > https://lists.apache.org/thread/pqj9f1r3rk83oqtxxtg6y5h7m7cf56r2
> >
> > Best,
> > Alyssa
> >
>


Re: [DISCUSS] KIP-996: Pre-Vote

2023-12-08 Thread Luke Chen
Hi Alyssa,

Thanks for the update.
LGTM now.

Luke

On Fri, Dec 8, 2023 at 10:03 AM José Armando García Sancio
 wrote:

> Hi Alyssa,
>
> Thanks for the answers and the updates to the KIP. I took a look at
> the latest version and it looks good to me.
>
> --
> -José
>