[ https://issues.apache.org/jira/browse/HDFS-17463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17837689#comment-17837689 ]
ASF GitHub Bot commented on HDFS-17463:
---------------------------------------

XbaoWu commented on code in PR #6736:
URL: https://github.com/apache/hadoop/pull/6736#discussion_r1567230524


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SerialNumberManager.java:
##########

@@ -115,6 +119,36 @@ public static StringTable getStringTable() {
     return map;
   }
 
+  // returns serial snapshot of current values for a save.
+  public static StringTable getSerialStringTable() {
+    // approximate size for capacity.
+    int size = 0;
+    for (final SerialNumberManager snm : values) {
+      size += snm.size();
+    }
+
+    int tableMaskBits = getMaskBits();
+    StringTable map = new StringTable(size, 0);
+    Set<String> entrySet = new HashSet<String>();
+    AtomicInteger index = new AtomicInteger();
+
+    Map<Integer, String> entryMap = new TreeMap<Integer, String>();
+    for (final SerialNumberManager snm : values) {
+      final int mask = snm.getMask(tableMaskBits);
+      for (Entry<Integer, String> entry : snm.entrySet()) {
+        entryMap.put(entry.getKey() | mask, entry.getValue());
+      }
+    }
+
+    entryMap.forEach((key, value) -> {
+      if (entrySet.add(value)) {
+        map.put(index.incrementAndGet(), value);

Review Comment:
   The incrementAndGet() call is executed while the fsimage file is being persisted, to ensure that the index stays continuous.


> Support the switch StringTable Split ID feature
> -----------------------------------------------
>
>                 Key: HDFS-17463
>                 URL: https://issues.apache.org/jira/browse/HDFS-17463
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>    Affects Versions: 3.2.0, 3.3.5, 3.3.3, 3.3.4
>            Reporter: wangzhihui
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: Image_struct.png, error.png
>
> desc:
> * Hadoop 3.2 introduced an optimization for the HDFS StringTable
>   (b60ca37914b22550e3630fa02742d40697decb3). As a result, clusters upgraded
>   from lower Hadoop versions to 3.2 or later no longer support downgrade
>   operations.
>   !error.png!
> * This issue was also discussed in HDFS-14831, where reverting the feature
>   was suggested, but that does not fundamentally solve the problem.
> * Therefore, we have added an option to support downgrading.
>
> Solution:
> * First, we add the "dfs.image.save.splitId.stringTable" configuration
>   switch to control whether the "StringTable optimization feature" is
>   enabled.
> * When the value is false, an image file compatible with lower versions of
>   HDFS is generated, so downgrading is supported.
> * The difference in the HDFS image file format between Hadoop 3.1.1 and
>   Hadoop 3.2 is shown in the following figure.
> * With the sub-sections feature introduced in HDFS-14617, Protobuf can read
>   the image in a compatible way.
> * The data structure causing the incompatible differences is mainly the
>   StringTable.
>   !Image_struct.png|width=396,height=163!
> * With "dfs.image.save.splitId.stringTable = false", the StringTable ids
>   are generated in order from 0 up to Integer.MAX_VALUE. When true, the id
>   value range follows the latest split-ID rules (illustrated in the
>   sketches below).
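
For readers less familiar with the split-ID layout, here is a small standalone illustration (not NameNode code) of why the masked ids break older readers. It mimics the entry.getKey() | mask line in the diff above; the shift formula, bit width, and ordinal are assumptions made for the example.

```java
// Standalone illustration, not NameNode code: shows how a split-ID global id
// could be formed by OR-ing a per-manager mask into the local serial number,
// mirroring the entry.getKey() | mask line in the diff above. The shift
// formula, bit width, and ordinal below are assumptions for this example.
public class SplitIdIllustration {
  public static void main(String[] args) {
    int tableMaskBits = 2;   // bits assumed reserved at the top of the int for the manager
    int managerOrdinal = 1;  // e.g. the second SerialNumberManager constant
    int localId = 5;         // serial number assigned within that manager

    int mask = managerOrdinal << (Integer.SIZE - tableMaskBits);
    int globalId = localId | mask;  // 0x40000005

    // A pre-3.2 NameNode expects one continuous id range starting near 0, so a
    // value like 0x40000005 in the saved StringTable is what breaks downgrade
    // unless the table is renumbered first (which getSerialStringTable() does).
    System.out.println("continuous-style id: " + localId);
    System.out.println("split-style id     : 0x" + Integer.toHexString(globalId));
  }
}
```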
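And a minimal sketch, not the actual fsimage saver code, of how a saver could honor the proposed "dfs.image.save.splitId.stringTable" switch by choosing between the existing split-ID table and the continuous-ID table added in this PR. The constant names, default value, and the chooseStringTable() helper are hypothetical; the sketch assumes it lives in the org.apache.hadoop.hdfs.server.namenode package next to SerialNumberManager and that getSerialStringTable() from the PR is available.

```java
// Hypothetical selection sketch; only the configuration key itself comes from
// the JIRA description, everything else is assumed for illustration.
package org.apache.hadoop.hdfs.server.namenode;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hdfs.server.namenode.SerialNumberManager.StringTable;

final class StringTableSaverSketch {
  // Key from the JIRA description; constant name and default are assumptions.
  static final String DFS_IMAGE_SAVE_SPLIT_ID_STRING_TABLE =
      "dfs.image.save.splitId.stringTable";
  static final boolean DFS_IMAGE_SAVE_SPLIT_ID_STRING_TABLE_DEFAULT = true;

  static StringTable chooseStringTable(Configuration conf) {
    boolean splitId = conf.getBoolean(
        DFS_IMAGE_SAVE_SPLIT_ID_STRING_TABLE,
        DFS_IMAGE_SAVE_SPLIT_ID_STRING_TABLE_DEFAULT);
    // true  -> keep the HDFS 3.2+ split layout (ids carry per-manager mask bits)
    // false -> renumber into one continuous range so pre-3.2 NameNodes can
    //          load the image (the downgrade path this JIRA adds)
    return splitId
        ? SerialNumberManager.getStringTable()
        : SerialNumberManager.getSerialStringTable();
  }
}
```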