[ https://issues.apache.org/jira/browse/HDFS-15569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17201450#comment-17201450 ]

Stephen O'Donnell commented on HDFS-15569:
------------------------------------------

If there are multiple restarts of the DN, you would get current.tmp after the 
first restart, and the second restart would then need to wait for it to be 
deleted.

Do you think it would be better to rename the folder to current.<timestamp>, 
and then have the async delete thread simply delete all folders matching that 
pattern, one by one?
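To illustrate the idea, here is a minimal sketch of the rename-then-async-delete scheme: rename "current" to "current.<millis>" so a fresh "current" can be created immediately, and let a background sweep delete every directory matching the pattern. The class and method names are illustrative only, not the actual HDFS Storage code.

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Comparator;
import java.util.stream.Stream;

public class TrashRenameSketch {

    /** Rename current -> current.<millis>; the rename is cheap, so the
     *  caller can recreate "current" right away. Returns the renamed path. */
    static Path markForAsyncDelete(Path current) throws IOException {
        Path trash = current.resolveSibling("current." + System.currentTimeMillis());
        Files.move(current, trash);
        return trash;
    }

    /** Async-delete pass: remove every sibling directory named
     *  current.<digits>, one by one, so multiple restarts just queue up
     *  more timestamped folders instead of blocking on a single delete. */
    static void deleteMarked(Path storageDir) throws IOException {
        try (Stream<Path> entries = Files.list(storageDir)) {
            entries.filter(p -> p.getFileName().toString().matches("current\\.\\d+"))
                   .forEach(TrashRenameSketch::deleteRecursively);
        }
    }

    /** Depth-first delete: walk the tree and remove children before parents. */
    static void deleteRecursively(Path dir) {
        try (Stream<Path> walk = Files.walk(dir)) {
            walk.sorted(Comparator.reverseOrder())
                .forEach(p -> p.toFile().delete());
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
{code}

With this shape, a second restart would find no current.tmp to wait on; it would only add another current.<timestamp> folder for the sweep to pick up.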

This delete may have an impact on the disks while the upgrade step is 
attempting to create the hardlinks in the new directory, as the delete will be 
fighting for disk bandwidth too. I wonder if the delete would be better 
delayed until after the hard link creation has completed? One possible 
negative of this is that the overhead of the delete is postponed until the DN 
is actually in service, which might impact workloads.

> Speed up the Storage#doRecover during datanode rolling upgrade 
> ---------------------------------------------------------------
>
>                 Key: HDFS-15569
>                 URL: https://issues.apache.org/jira/browse/HDFS-15569
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>            Reporter: Hemanth Boyina
>            Assignee: Hemanth Boyina
>            Priority: Major
>         Attachments: HDFS-15569.001.patch, HDFS-15569.002.patch, 
> HDFS-15569.003.patch
>
>
> When upgrading a datanode from Hadoop 2.7.2 to 3.1.1, the upgrade failed 
> because the JVM did not have enough memory. After adjusting the memory 
> configurations, we re-upgraded the datanode.
> The second upgrade took much longer. On analysis, we found that 
> Storage#deleteDir was taking most of the time in the RECOVER_UPGRADE state:
> {code:java}
> "Thread-28" #270 daemon prio=5 os_prio=0 tid=0x00007fed5a9b8000 nid=0x2b5c runnable [0x00007fdcdad2a000]
>    java.lang.Thread.State: RUNNABLE
>         at java.io.UnixFileSystem.delete0(Native Method)
>         at java.io.UnixFileSystem.delete(UnixFileSystem.java:265)
>         at java.io.File.delete(File.java:1041)
>         at org.apache.hadoop.fs.FileUtil.deleteImpl(FileUtil.java:229)
>         at org.apache.hadoop.fs.FileUtil.fullyDeleteContents(FileUtil.java:270)
>         at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:182)
>         at org.apache.hadoop.fs.FileUtil.fullyDeleteContents(FileUtil.java:285)
>         at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:182)
>         at org.apache.hadoop.fs.FileUtil.fullyDeleteContents(FileUtil.java:285)
>         at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:182)
>         at org.apache.hadoop.fs.FileUtil.fullyDeleteContents(FileUtil.java:285)
>         at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:182)
>         at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:153)
>         at org.apache.hadoop.hdfs.server.common.Storage.deleteDir(Storage.java:1348)
>         at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.doRecover(Storage.java:782)
>         at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadStorageDirectory(BlockPoolSliceStorage.java:174)
>         at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadBpStorageDirectories(BlockPoolSliceStorage.java:224)
>         at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.recoverTransitionRead(BlockPoolSliceStorage.java:253)
>         at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadBlockPoolSliceStorage(DataStorage.java:455)
>         at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:389)
>         - locked <0x00007fdf08ec7548> (a org.apache.hadoop.hdfs.server.datanode.DataStorage)
>         at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:557)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1761)
>         - locked <0x00007fdf08ec7598> (a org.apache.hadoop.hdfs.server.datanode.DataNode)
>         at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1697)
>         at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:392)
>         at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:282)
>         at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:822)
>         at java.lang.Thread.run(Thread.java:748)
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)
