[
https://issues.apache.org/jira/browse/KAFKA-18442?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Jonah Hooper updated KAFKA-18442:
---------------------------------
Description:
Downgrading from 3.6.x and above to 3.3.x will fail with the following error:
{code:java}
kafka.common.InconsistentBrokerMetadataException: BrokerMetadata is not
consistent across log.dirs. This could happen if multiple brokers shared a log
directory (log.dirs) or partial data was manually copied from another broker.
...{code}
This is only broken in the 3.3.2 version of Kafka; see
[BrokerMetadataCheckpoint.scala|https://github.com/apache/kafka/blob/c9c03dd7ef9ff4edf2596e905cabececc72a9e9d/core/src/main/scala/kafka/server/BrokerMetadataCheckpoint.scala#L186].
In 3.3.2, Kafka loads information about metadata directories from the
{{meta.properties}} file in each log directory and expects the properties to be
identical across all log directories. It crashes with a fatal error if they
differ, which at that time would have meant that another Kafka instance was
using the same log directories.
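To make the failure mode concrete, here is a minimal sketch of the 3.3.2-style check (not the actual {{BrokerMetadataCheckpoint.scala}} code; the object and helper names are made up for illustration): the {{meta.properties}} loaded from every log directory must be identical, otherwise startup aborts.
{code:scala}
import java.io.FileInputStream
import java.nio.file.Paths
import java.util.Properties
import scala.jdk.CollectionConverters._

// Illustrative only: mirrors the behaviour described above, where any
// difference between the per-directory meta.properties files is fatal.
object MetaPropertiesConsistencySketch {

  // Hypothetical helper: read <logDir>/meta.properties into a plain Map.
  def loadMetaProperties(logDir: String): Map[String, String] = {
    val props = new Properties()
    val in = new FileInputStream(Paths.get(logDir, "meta.properties").toFile)
    try props.load(in) finally in.close()
    props.asScala.toMap
  }

  def verifyConsistent(logDirs: Seq[String]): Map[String, String] = {
    val perDir = logDirs.map(dir => dir -> loadMetaProperties(dir))
    val distinct = perDir.map(_._2).distinct
    if (distinct.size > 1) {
      // A per-directory directory.id written by a newer broker makes the
      // files differ, so this check fires when downgrading to 3.3.2.
      throw new RuntimeException(
        "BrokerMetadata is not consistent across log.dirs. Found: " +
          perDir.map { case (dir, p) => s"$dir -> $p" }.mkString(", "))
    }
    distinct.head
  }
}
{code}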
In [#14291|https://github.com/apache/kafka/pull/14291] we dropped the
requirement that the {{meta.properties}} files be identical in every directory,
since each file now contains a per-directory {{directory.id}} field. This has
no effect on versions of Kafka >= 3.4.x running in {{kraft}} mode (they only
care that {{node.id}} and {{cluster.id}} are consistent), but it does affect
3.3.2 and lower (versions that include
[#9967|https://github.com/apache/kafka/pull/9967]), since those versions expect
every {{meta.properties}} file to be the same.
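For contrast, here is a sketch of the relaxed behaviour after [#14291|https://github.com/apache/kafka/pull/14291] (again illustrative, not the real implementation): only fields that must match across log directories, such as {{node.id}} and {{cluster.id}}, are compared, while the per-directory {{directory.id}} is ignored. The sample values are taken from the stack trace quoted in the previous description below.
{code:scala}
// Illustrative only: a relaxed, >= 3.4.x-style check that tolerates a
// distinct directory.id per log directory.
object RelaxedMetaPropertiesCheckSketch {

  def verifyConsistent(perDir: Map[String, Map[String, String]]): Unit = {
    val sharedKeys = Seq("node.id", "cluster.id")
    sharedKeys.foreach { key =>
      val values = perDir.values.map(_.get(key)).toSet
      if (values.size > 1)
        throw new RuntimeException(
          s"$key is not consistent across log.dirs: " +
            perDir.map { case (dir, p) => s"$dir -> ${p.getOrElse(key, "?")}" }
              .mkString(", "))
    }
    // directory.id is intentionally not compared: it is expected to differ.
  }

  def main(args: Array[String]): Unit = {
    // Same node.id and cluster.id everywhere, different directory.id per dir:
    // accepted here, but fatal under the 3.3.2 check sketched earlier.
    val perDir = Map(
      "/mnt/kafka/kafka-metadata-logs" -> Map(
        "version" -> "1", "node.id" -> "1",
        "cluster.id" -> "I2eXt9rvSnyhct8BYmW6-w",
        "directory.id" -> "ItAoMTrsidYVfoRnX3gsAA"),
      "/mnt/kafka/kafka-data-logs-1" -> Map(
        "version" -> "1", "node.id" -> "1",
        "cluster.id" -> "I2eXt9rvSnyhct8BYmW6-w",
        "directory.id" -> "F1m5lsdOIsGtTpTYT0Ao9g"))
    verifyConsistent(perDir)
  }
}
{code}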
was:
Downgrading from 3.6.x and above to 3.3.x will fail with the following error:
{code:java}
kafka.common.InconsistentBrokerMetadataException: BrokerMetadata is not
consistent across log.dirs. This could happen if multiple brokers shared a log
directory (log.dirs) or partial data was manually copied from another broker.
Found:
- /mnt/kafka/kafka-metadata-logs -> {node.id=1, directory.id=ItAoMTrsidYVfoRnX3gsAA, version=1, cluster.id=I2eXt9rvSnyhct8BYmW6-w}
- /mnt/kafka/kafka-data-logs-2 -> {node.id=1, directory.id=MiQDnIX6WuYL0NdMaLOsRQ, version=1, cluster.id=I2eXt9rvSnyhct8BYmW6-w}
- /mnt/kafka/kafka-data-logs-1 -> {node.id=1, directory.id=F1m5lsdOIsGtTpTYT0Ao9g, version=1, cluster.id=I2eXt9rvSnyhct8BYmW6-w}
    at kafka.server.BrokerMetadataCheckpoint$.getBrokerMetadataAndOfflineDirs(BrokerMetadataCheckpoint.scala:194)
    at kafka.server.KafkaRaftServer$.initializeLogDirs(KafkaRaftServer.scala:184)
    at kafka.server.KafkaRaftServer.<init>(KafkaRaftServer.scala:61)
    at kafka.Kafka$.buildServer(Kafka.scala:79)
    at kafka.Kafka$.main(Kafka.scala:87)
    at kafka.Kafka.main(Kafka.scala){code}
This is only broken in the 3.3.2 version of Kafka; see
[BrokerMetadataCheckpoint.scala|https://github.com/apache/kafka/blob/c9c03dd7ef9ff4edf2596e905cabececc72a9e9d/core/src/main/scala/kafka/server/BrokerMetadataCheckpoint.scala#L186].
In 3.3.2, Kafka loads information about metadata directories from the
{{meta.properties}} file in each log directory and expects the properties to be
identical across all log directories. It crashes with a fatal error if they
differ, which at that time would have meant that another Kafka instance was
using the same log directories.
In [#14291|https://github.com/apache/kafka/pull/14291] we dropped the
requirement that the {{meta.properties}} files be identical in every directory,
since each file now contains a per-directory {{directory.id}} field. This has
no effect on versions of Kafka >= 3.4.x running in {{kraft}} mode (they only
care that {{node.id}} and {{cluster.id}} are consistent), but it does affect
3.3.2 and lower (versions that include
[#9967|https://github.com/apache/kafka/pull/9967]), since those versions expect
every {{meta.properties}} file to be the same.
> Downgrades to 3.3.2 will fail from versions 3.6.0 and above
> -----------------------------------------------------------
>
> Key: KAFKA-18442
> URL: https://issues.apache.org/jira/browse/KAFKA-18442
> Project: Kafka
> Issue Type: Bug
> Reporter: Jonah Hooper
> Priority: Minor
>
> Downgrading from 3.6.x and above to 3.3.x will fail with the following error:
> {code:java}
> kafka.common.InconsistentBrokerMetadataException: BrokerMetadata is not
> consistent across log.dirs. This could happen if multiple brokers shared a
> log directory (log.dirs) or partial data was manually copied from another
> broker.
> ...{code}
> This is only broken in the 3.3.2 version of Kafka; see
> [BrokerMetadataCheckpoint.scala|https://github.com/apache/kafka/blob/c9c03dd7ef9ff4edf2596e905cabececc72a9e9d/core/src/main/scala/kafka/server/BrokerMetadataCheckpoint.scala#L186].
> In 3.3.2, Kafka loads information about metadata directories from the
> {{meta.properties}} file in each log directory and expects the properties to
> be identical across all log directories. It crashes with a fatal error if
> they differ, which at that time would have meant that another Kafka instance
> was using the same log directories.
> In [#14291|https://github.com/apache/kafka/pull/14291] we dropped the
> requirement that the {{meta.properties}} files be identical in every
> directory, since each file now contains a per-directory {{directory.id}}
> field. This has no effect on versions of Kafka >= 3.4.x running in {{kraft}}
> mode (they only care that {{node.id}} and {{cluster.id}} are consistent), but
> it does affect 3.3.2 and lower (versions that include
> [#9967|https://github.com/apache/kafka/pull/9967]), since those versions
> expect every {{meta.properties}} file to be the same.
--
This message was sent by Atlassian Jira
(v8.20.10#820010)