brandboat commented on code in PR #17921:
URL: https://github.com/apache/kafka/pull/17921#discussion_r1864153616


##########
docs/ops.html:
##########
@@ -1366,8 +1366,9 @@ <h4 class="anchor-heading"><a id="replace_disk" class="anchor-link"></a><a href=
   <p>If the data in the cluster metadata directory is lost either because of 
hardware failure or the hardware needs to be replaced, care should be taken 
when provisioning the new controller node. The new controller node should not 
be formatted and started until the majority of the controllers have all of the 
committed data. To determine if the majority of the controllers have the 
committed data, run the kafka-metadata-quorum.sh tool to describe the 
replication status:</p>
 
  <pre><code class="language-bash">$ bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 describe --replication
-NodeId  LogEndOffset    Lag     LastFetchTimestamp      LastCaughtUpTimestamp   Status
-1       25806           0       1662500992757           1662500992757           Leader
+NodeId  DirectoryId             LogEndOffset    Lag     LastFetchTimestamp      LastCaughtUpTimestamp   Status
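
A quick way to confirm that no controller is still lagging before formatting the replacement node — a shell sketch, not part of the documented tool output (it locates the Lag column from the header row, so it works with or without the new DirectoryId column):

$ bin/kafka-metadata-quorum.sh --bootstrap-server localhost:9092 describe --replication \
    | awk 'NR == 1 { for (i = 1; i <= NF; i++) if ($i == "Lag") col = i; next }
           $col != 0 { print "NodeId " $1 " is lagging by " $col }'

If this prints nothing, every replica reports Lag 0, which more than satisfies the majority condition described above.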

Review Comment:
   Interesting issue... I saw what you mentioned in `testDescribeQuorumReplicationSuccessful`: the DirectoryId is always `AAAAAAAAAAAAAAAAAAAAAA` even though the replicas live in different directories, and that string is exactly `Uuid.ZERO_UUID`. I'm still trying to find the root cause and will file a PR later. Thanks for bringing this up!
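
   For reference, the all-`A` value is exactly what a zero-filled 16-byte Uuid looks like under Kafka's URL-safe, unpadded base64 rendering. A quick standalone check (plain shell on a Unix environment, not Kafka code):

   $ head -c 16 /dev/zero | base64 | tr -d '=' | tr '+/' '-_'
   AAAAAAAAAAAAAAAAAAAAAA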


