[ 
https://issues.apache.org/jira/browse/HDFS-16147?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17391605#comment-17391605
 ] 

liuyongpan edited comment on HDFS-16147 at 8/3/21, 11:26 AM:
-------------------------------------------------------------

[~sodonnell], I have carefully tested the points you raised; here are my answers.
 1. On careful examination, oiv does indeed work normally, although I cannot explain why it works.

You can simply verify as follows:

In class TestOfflineImageViewer, method createOriginalFSImage, add and then remove the following code to compare the two behaviours:

{color:#de350b}note:{color} first apply my patch HDFS-16147.002.patch.
{code:java}
// turn on both parallelization and compression
conf.setBoolean(DFSConfigKeys.DFS_IMAGE_COMPRESS_KEY, true);
conf.set(DFSConfigKeys.DFS_IMAGE_COMPRESSION_CODEC_KEY,
    "org.apache.hadoop.io.compress.GzipCodec");
conf.set(DFSConfigKeys.DFS_IMAGE_PARALLEL_LOAD_KEY, "true");
conf.set(DFSConfigKeys.DFS_IMAGE_PARALLEL_INODE_THRESHOLD_KEY, "2");
conf.set(DFSConfigKeys.DFS_IMAGE_PARALLEL_TARGET_SECTIONS_KEY, "2");
conf.set(DFSConfigKeys.DFS_IMAGE_PARALLEL_THREADS_KEY, "2");
{code}
Then run the unit test {color:#ffc66d}testPBDelimitedWriter{color}; you will see the answer.
 2. If I create a parallel, compressed image with this patch and then try to load it on a NameNode without this patch and with parallel loading disabled, the NameNode is still able to load it.

{color:#de350b}note:{color} first, you must merge the patch HDFS-14617.

You can simply verify as follows:

In class TestFSImageWithSnapshot, method setUp, add the following code so that the fsimage is saved with parallelization and compression enabled:
{code:java}
  public void setUp() throws Exception {
    conf = new Configuration();
    //*************add**************
    conf.setBoolean(DFSConfigKeys.DFS_IMAGE_COMPRESS_KEY, true);
    conf.set(DFSConfigKeys.DFS_IMAGE_COMPRESSION_CODEC_KEY,
            "org.apache.hadoop.io.compress.GzipCodec");
    conf.set(DFSConfigKeys.DFS_IMAGE_PARALLEL_LOAD_KEY, "true");
    conf.set(DFSConfigKeys.DFS_IMAGE_PARALLEL_INODE_THRESHOLD_KEY, "3");
    conf.set(DFSConfigKeys.DFS_IMAGE_PARALLEL_TARGET_SECTIONS_KEY, "3");
    conf.set(DFSConfigKeys.DFS_IMAGE_PARALLEL_THREADS_KEY, "3");
    //*************add**************
    cluster = new MiniDFSCluster.Builder(conf).numDataNodes(REPLICATION)
        .build();
    cluster.waitActive();
    fsn = cluster.getNamesystem();
    hdfs = cluster.getFileSystem();
  }
{code}
In class FSImageFormatProtobuf, method loadInternal, change the INODE case as follows to force a single-threaded load:
{code:java}
// class FSImageFormatProtobuf, method loadInternal
case INODE: {
          currentStep = new Step(StepType.INODES);
          prog.beginStep(Phase.LOADING_FSIMAGE, currentStep);
          stageSubSections = getSubSectionsOfName(
              subSections, SectionName.INODE_SUB);
//          if (loadInParallel && (stageSubSections.size() > 0)) {
//            inodeLoader.loadINodeSectionInParallel(executorService,
//                stageSubSections, summary.getCodec(), prog, currentStep);
//          } else {
//            inodeLoader.loadINodeSection(in, prog, currentStep);
//          }
           inodeLoader.loadINodeSection(in, prog, currentStep);
        }
{code}
 Then run the unit test {color:#ffc66d}testSaveLoadImage{color}; you will see the answer.
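As a side note on why a sequential (non-parallel) reader can still consume an image whose sub-sections were compressed independently: if each sub-section is written as its own gzip member, a plain sequential decompressor reads the concatenation as one stream. The sketch below is only an illustration with JDK classes, not the actual HDFS code path (which goes through Hadoop's CompressionCodec):

```java
import java.io.*;
import java.util.zip.*;

// Sketch: two independently gzip-compressed "sections", written back to back,
// can be read sequentially as one stream, because GZIPInputStream handles
// concatenated gzip members.
public class ConcatenatedGzipDemo {

    // Compress one section on its own (standing in for one sub-section).
    static byte[] gzip(String s) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(s.getBytes("UTF-8"));
        }
        return bos.toByteArray();
    }

    // Read the whole concatenation with a single sequential decompressor.
    static String readAll(byte[] concatenated) throws IOException {
        try (GZIPInputStream gz =
                 new GZIPInputStream(new ByteArrayInputStream(concatenated))) {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = gz.read(buf)) > 0) {
                bos.write(buf, 0, n);
            }
            return bos.toString("UTF-8");
        }
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream image = new ByteArrayOutputStream();
        image.write(gzip("section-A;"));   // first compressed sub-section
        image.write(gzip("section-B;"));   // second compressed sub-section
        System.out.println(readAll(image.toByteArray()));
        // prints "section-A;section-B;"
    }
}
```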

 3. On the 50% improvement measured for a compressed single-threaded load versus parallel compressed loading:

An fsimage was generated: 128M before compression, 27.18M after compression. A simple comparison of average loading times is shown in the table below:
||state||average loading time||
|compress and parallel|7.5sec|
|compress and unparallel|9.5sec|
|uncompress and parallel|6.5sec|
In fact, loading an uncompressed fsimage in parallel is faster than loading a compressed one in parallel. However, as discussed in HDFS-1435, a compressed fsimage is necessary.
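For reference, the core idea behind the numbers above, decompressing independently compressed sections on several threads instead of one, can be sketched with plain JDK classes. This is an illustration only, not the HDFS implementation; the class and method names here are hypothetical:

```java
import java.io.*;
import java.util.*;
import java.util.concurrent.*;
import java.util.zip.*;

// Sketch: each "section" is gzip-compressed independently, so worker threads
// can decompress different sections concurrently, then the results are
// concatenated in section order.
public class ParallelSectionDemo {

    // Compress one section's bytes on its own (like one INODE_SUB section).
    static byte[] compressSection(byte[] data) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(data);
        }
        return bos.toByteArray();
    }

    // Decompress one section; safe to call from any thread.
    static byte[] decompressSection(byte[] compressed) throws IOException {
        try (GZIPInputStream gz =
                 new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = gz.read(buf)) > 0) {
                bos.write(buf, 0, n);
            }
            return bos.toByteArray();
        }
    }

    // Decompress all sections in parallel; preserve section order in the output.
    static byte[] loadInParallel(List<byte[]> sections, int threads)
            throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        try {
            List<Future<byte[]>> futures = new ArrayList<>();
            for (byte[] s : sections) {
                futures.add(pool.submit(() -> decompressSection(s)));
            }
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            for (Future<byte[]> f : futures) {
                out.write(f.get());
            }
            return out.toByteArray();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        List<byte[]> sections = new ArrayList<>();
        StringBuilder expected = new StringBuilder();
        for (int i = 0; i < 3; i++) {
            String part = "inode-section-" + i + ";";
            expected.append(part);
            sections.add(compressSection(part.getBytes("UTF-8")));
        }
        String result = new String(loadInParallel(sections, 3), "UTF-8");
        System.out.println(result.equals(expected.toString()) ? "match" : "mismatch");
        // prints "match"
    }
}
```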
  
 4. HDFS-16147.002.patch fixes the failing unit test org.apache.hadoop.hdfs.server.namenode.TestFSImage#testNoParallelSectionsWithCompressionEnabled.
  


> load fsimage with parallelization and compression
> -------------------------------------------------
>
>                 Key: HDFS-16147
>                 URL: https://issues.apache.org/jira/browse/HDFS-16147
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>    Affects Versions: 3.3.0
>            Reporter: liuyongpan
>            Priority: Minor
>         Attachments: HDFS-16147.001.patch, HDFS-16147.002.patch, 
> subsection.svg
>
>




--
This message was sent by Atlassian Jira
(v8.3.4#803005)
