[ https://issues.apache.org/jira/browse/HDFS-17463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17837566#comment-17837566 ]

ASF GitHub Bot commented on HDFS-17463:
---------------------------------------

hiwangzhihui commented on code in PR #6736:
URL: https://github.com/apache/hadoop/pull/6736#discussion_r1566889645


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SerialNumberManager.java:
##########
@@ -115,6 +119,36 @@ public static StringTable getStringTable() {
     return map;
   }
 
+  // Returns a serial snapshot of the current values for a save.
+  public static StringTable getSerialStringTable() {
+    // approximate size for capacity.
+    int size = 0;
+    for (final SerialNumberManager snm : values) {
+      size += snm.size();
+    }
+
+    int tableMaskBits = getMaskBits();
+    StringTable map = new StringTable(size, 0);
+    Set<String> entrySet = new HashSet<String>();
+    AtomicInteger index = new AtomicInteger();
+
+    Map<Integer, String> entryMap = new TreeMap<Integer, String>();
+    for (final SerialNumberManager snm : values) {
+      final int mask = snm.getMask(tableMaskBits);
+      for (Entry<Integer, String> entry : snm.entrySet()) {
+        entryMap.put(entry.getKey() | mask, entry.getValue());
+      }
+    }
+
+    entryMap.forEach((key, value) -> {
+      if (entrySet.add(value)) {
+        map.put(index.incrementAndGet(), value);

Review Comment:
   Why not execute the incrementAndGet method when persistently collecting the information?
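
   One possible reading of this: incrementAndGet() returns the value after
adding one, so the first id written here is 1, while the legacy (pre-split)
StringTable numbers its entries from 0. A minimal sketch of the 0-based
variant, reusing the names from this hunk and assuming the surrounding code
is otherwise unchanged:

   entryMap.forEach((key, value) -> {
     if (entrySet.add(value)) {
       // getAndIncrement() returns the current value before incrementing,
       // so ids come out as 0, 1, 2, ... matching the legacy numbering.
       map.put(index.getAndIncrement(), value);
     }
   });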



##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/FSImageFormatPBINode.java:
##########
@@ -628,25 +638,59 @@ private void loadRootINode(INodeSection.INode p) {
   public final static class Saver {
     private long numImageErrors;
 
-    private static long buildPermissionStatus(INodeAttributes n) {
-      return n.getPermissionLong();
+    private static long buildPermissionStatus(INodeAttributes n,
+        boolean enableSaveSplitIdStringTable) {
+      if (enableSaveSplitIdStringTable) {
+        return n.getPermissionLong();
+      } else {
+        StringTable serialStringTable =
+            SerialNumberManager.getSerialStringTable();
+        Map<String, Integer> stringMap = StreamSupport
+            .stream(serialStringTable.spliterator(), false)
+            .collect(Collectors.toMap(Map.Entry::getValue, Map.Entry::getKey));
+
+        long permission = n.getPermissionLong();
+        permission =
+            USER.BITS.combine(stringMap.get(n.getUserName()), permission);
+        permission =
+            GROUP.BITS.combine(stringMap.get(n.getGroupName()), permission);
+        return permission;
+      }
     }
 
-    private static AclFeatureProto.Builder buildAclEntries(AclFeature f) {
+    private static AclFeatureProto.Builder buildAclEntries(AclFeature f,
+        boolean enableSaveSplitIdStringTable) {
       AclFeatureProto.Builder b = AclFeatureProto.newBuilder();
+      Map<String, Integer> stringMap = null;
+      if (!enableSaveSplitIdStringTable) {
+        StringTable serialStringTable =
+            SerialNumberManager.getSerialStringTable();
+        stringMap = StreamSupport
+            .stream(serialStringTable.spliterator(), false)
+            .collect(Collectors.toMap(Map.Entry::getValue, Map.Entry::getKey));
+      }
+
       for (int pos = 0, e; pos < f.getEntriesSize(); pos++) {
         e = f.getEntryAt(pos);
+        if (!enableSaveSplitIdStringTable) {
+          e = (int) NAME.BITS
+              .combine(stringMap.get(AclEntryStatusFormat.getName(e)), e);
+        }
         b.addEntries(e);
       }
       return b;
     }
 
-    private static XAttrFeatureProto.Builder buildXAttrs(XAttrFeature f) {
+    private static XAttrFeatureProto.Builder buildXAttrs(XAttrFeature f,
+        boolean enableSaveSplitIdStringTable) {
       XAttrFeatureProto.Builder b = XAttrFeatureProto.newBuilder();
+      Map<String, Integer> stringMap = null;
+      if (!enableSaveSplitIdStringTable) {
+        StringTable serialStringTable =
+            SerialNumberManager.getSerialStringTable();

Review Comment:
   When enableSaveSplitIdStringTable=false, different StringMaps should be obtained to store the data.
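
   Related to this: each of these builders re-derives the reverse map from
SerialNumberManager.getSerialStringTable() on every call, i.e. once per
inode being saved. A sketch of hoisting that work into a helper built once
per save (the name buildLegacyStringMap is illustrative, not part of the
patch; imports are the ones this hunk already uses):

   private static Map<String, Integer> buildLegacyStringMap() {
     // Invert the id -> string snapshot into string -> id, once per save.
     StringTable serialStringTable =
         SerialNumberManager.getSerialStringTable();
     return StreamSupport.stream(serialStringTable.spliterator(), false)
         .collect(Collectors.toMap(Map.Entry::getValue, Map.Entry::getKey));
   }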



##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/AclEntryStatusFormat.java:
##########
@@ -47,7 +47,7 @@ public enum AclEntryStatusFormat implements LongBitFormat.Enum {
   private static final AclEntryType[] ACL_ENTRY_TYPE_VALUES =
       AclEntryType.values();
 
-  private final LongBitFormat BITS;

Review Comment:
   Why remove the “private”?
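
   If wider access is the goal, a package-private accessor would keep the
field itself encapsulated; a sketch (the getter is a suggestion, not what
the patch does):

   private final LongBitFormat BITS;

   /** Package-private accessor so other namenode classes can reach BITS. */
   LongBitFormat getBits() {
     return BITS;
   }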



##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/namenode/SerialNumberManager.java:
##########
@@ -115,6 +119,36 @@ public static StringTable getStringTable() {
     return map;
   }
 
+  // Returns a serial snapshot of the current values for a save.
+  public static StringTable getSerialStringTable() {

Review Comment:
   Please add test cases.
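
   A minimal shape such a test could take, in the JUnit 4 style used
elsewhere under hadoop-hdfs. The 0-based, gap-free numbering asserted here
is an assumption about the intended legacy format, and the test is assumed
to live in the org.apache.hadoop.hdfs.server.namenode package (alongside
SerialNumberManager) so it can reach package-private members:

   package org.apache.hadoop.hdfs.server.namenode;

   import static org.junit.Assert.assertTrue;

   import java.util.Map;
   import org.junit.Test;

   public class TestSerialStringTable {
     @Test
     public void testSerialIdsAreDenseAndZeroBased() {
       // Register at least one name so the snapshot is non-empty.
       SerialNumberManager.USER.getSerialNumber("alice");
       SerialNumberManager.StringTable table =
           SerialNumberManager.getSerialStringTable();
       // A legacy-compatible image numbers its entries sequentially from
       // 0, so every id must fall in [0, size) with no mask bits set.
       for (Map.Entry<Integer, String> entry : table) {
         assertTrue(entry.getKey() >= 0 && entry.getKey() < table.size());
       }
     }
   }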





> Support the switch StringTable Split ID feature
> -----------------------------------------------
>
>                 Key: HDFS-17463
>                 URL: https://issues.apache.org/jira/browse/HDFS-17463
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>    Affects Versions: 3.2.0, 3.3.5, 3.3.3, 3.3.4
>            Reporter: wangzhihui
>            Priority: Major
>              Labels: pull-request-available
>         Attachments: Image_struct.png, error.png
>
>
> desc:
>  * Hadoop 3.2 introduced an optimization for the HDFS StringTable
> (b60ca37914b22550e3630fa02742d40697decb3). As a result, clusters upgraded
> from a lower version of Hadoop to 3.2 or later no longer support downgrade
> operations.
> !error.png!
>  * This issue was also discussed in HDFS-14831, where reverting the
> feature was recommended, but that cannot fundamentally solve the problem.
>  * Therefore, we have added a switch to support downgrading.
>  
> Solution:
>  * First, we add the "dfs.image.save.splitId.stringTable" conf switch that
> controls whether the "StringTable optimization feature" is enabled.
>  * When the conf value is false, an Image file compatible with lower
> versions of HDFS is generated, to support downgrading.
>  * The difference in HDFS Image file format between Hadoop 3.1.1 and
> Hadoop 3.2 is shown in the following figure.
>  * With the sub-sections feature introduced in HDFS-14617, Protobuf can
> read the two formats compatibly.
>  * The data structure causing the incompatible difference is mainly the
> StringTable.
> !Image_struct.png|width=396,height=163!
>  * With "dfs.image.save.splitId.stringTable = false", StringTable Ids are
> generated sequentially from 0 up to Integer.MAX_VALUE; when true, the Id
> value range follows the latest (split Id) rules. See the sketch below.
>  
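> A compact sketch of the difference between the two Id layouts follows;
> the mask arithmetic is illustrative only, not the exact
> SerialNumberManager code:
> 
> public class IdLayoutDemo {
>   public static void main(String[] args) {
>     int tableMaskBits = 2;   // high bits reserved to tag the manager
>     int managerOrdinal = 1;  // e.g. GROUP
>     int localId = 5;
> 
>     // dfs.image.save.splitId.stringTable = true: the manager tag lives in
>     // the Id's high bits, so different managers never collide, but the
>     // values are sparse and large.
>     int splitId =
>         localId | (managerOrdinal << (Integer.SIZE - tableMaskBits));
> 
>     // dfs.image.save.splitId.stringTable = false: one dense 0-based
>     // sequence shared by all managers, as lower HDFS versions expect.
>     int legacyId = 5;
> 
>     System.out.println("split=" + splitId + " legacy=" + legacyId);
>   }
> }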


