liuml07 commented on a change in pull request #2189:
URL: https://github.com/apache/hadoop/pull/2189#discussion_r476214970



##########
File path: 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockStoragePolicySuite.java
##########
@@ -63,6 +63,12 @@ public static BlockStoragePolicySuite createDefaultSuite(
         new StorageType[]{StorageType.DISK},
         new StorageType[]{StorageType.DISK},
         true);    // Cannot be changed on regular files, but inherited.
+    final byte allNVDIMMId = HdfsConstants.StoragePolicy.ALL_NVDIMM.value();

Review comment:
       Yes, I agree the checksum calculation and read are built into Hadoop. I'm 
thinking more about: do we need a checksum for this storage type? For example, 
RAM_DISK does not need a checksum, as far as I remember. NVDIMM is RAM, but it 
also survives service restarts, so a checksum makes sense for data integrity. 
The code looks good in this regard so far.
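The distinction the comment draws can be sketched as a small, self-contained Java example. This is only an illustration of the reasoning, not Hadoop's actual API: the `StorageKind` enum and `needsChecksum()` method are hypothetical names, and the idea is simply that a checksum pays off only when data outlives the process (DISK, NVDIMM) and adds little for volatile media (RAM_DISK).

```java
// Hypothetical sketch (not Hadoop code): checksum need keyed off persistence.
public class ChecksumPolicySketch {
    enum StorageKind {
        RAM_DISK(false),   // volatile: data is lost on restart, checksum adds little
        DISK(true),        // persistent: checksum guards against on-media corruption
        NVDIMM(true);      // persistent RAM: survives restarts, so checksum still helps

        private final boolean persistent;

        StorageKind(boolean persistent) {
            this.persistent = persistent;
        }

        /** A checksum is worthwhile only when the data outlives the process. */
        boolean needsChecksum() {
            return persistent;
        }
    }

    public static void main(String[] args) {
        for (StorageKind kind : StorageKind.values()) {
            System.out.println(kind + " needsChecksum=" + kind.needsChecksum());
        }
    }
}
```

Under this framing, NVDIMM groups with DISK rather than RAM_DISK, which matches the comment's conclusion that keeping checksums for ALL_NVDIMM is the right call.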




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]



---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
