[
https://issues.apache.org/jira/browse/HDFS-15569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
]
Hemanth Boyina updated HDFS-15569:
----------------------------------
Fix Version/s: 3.4.0
Resolution: Fixed
Status: Resolved (was: Patch Available)
> Speed up the Storage#doRecover during datanode rolling upgrade
> ---------------------------------------------------------------
>
> Key: HDFS-15569
> URL: https://issues.apache.org/jira/browse/HDFS-15569
> Project: Hadoop HDFS
> Issue Type: Improvement
> Reporter: Hemanth Boyina
> Assignee: Hemanth Boyina
> Priority: Major
> Fix For: 3.4.0
>
> Attachments: HDFS-15569.001.patch, HDFS-15569.002.patch,
> HDFS-15569.003.patch
>
>
> When upgrading a datanode from Hadoop 2.7.2 to 3.1.1, the upgrade failed
> because the JVM did not have enough memory. After adjusting the memory
> configuration, the datanode was upgraded again.
> This second upgrade took much longer; on analyzing, we found that
> Storage#deleteDir spends most of the time in the RECOVER_UPGRADE state:
> {code:java}
> "Thread-28" #270 daemon prio=5 os_prio=0 tid=0x00007fed5a9b8000 nid=0x2b5c runnable [0x00007fdcdad2a000]
>    java.lang.Thread.State: RUNNABLE
>     at java.io.UnixFileSystem.delete0(Native Method)
>     at java.io.UnixFileSystem.delete(UnixFileSystem.java:265)
>     at java.io.File.delete(File.java:1041)
>     at org.apache.hadoop.fs.FileUtil.deleteImpl(FileUtil.java:229)
>     at org.apache.hadoop.fs.FileUtil.fullyDeleteContents(FileUtil.java:270)
>     at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:182)
>     at org.apache.hadoop.fs.FileUtil.fullyDeleteContents(FileUtil.java:285)
>     at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:182)
>     at org.apache.hadoop.fs.FileUtil.fullyDeleteContents(FileUtil.java:285)
>     at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:182)
>     at org.apache.hadoop.fs.FileUtil.fullyDeleteContents(FileUtil.java:285)
>     at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:182)
>     at org.apache.hadoop.fs.FileUtil.fullyDelete(FileUtil.java:153)
>     at org.apache.hadoop.hdfs.server.common.Storage.deleteDir(Storage.java:1348)
>     at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.doRecover(Storage.java:782)
>     at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadStorageDirectory(BlockPoolSliceStorage.java:174)
>     at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.loadBpStorageDirectories(BlockPoolSliceStorage.java:224)
>     at org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceStorage.recoverTransitionRead(BlockPoolSliceStorage.java:253)
>     at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadBlockPoolSliceStorage(DataStorage.java:455)
>     at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:389)
>     - locked <0x00007fdf08ec7548> (a org.apache.hadoop.hdfs.server.datanode.DataStorage)
>     at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:557)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1761)
>     - locked <0x00007fdf08ec7598> (a org.apache.hadoop.hdfs.server.datanode.DataNode)
>     at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1697)
>     at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:392)
>     at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:282)
>     at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:822)
>     at java.lang.Thread.run(Thread.java:748)
> {code}
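The trace shows a single BPServiceActor thread blocked in FileUtil#fullyDelete, which removes the recovery directory one file at a time, so startup time grows with the number of block files. One way to avoid blocking recovery on the delete is to rename the directory first (an O(1) metadata operation on the same filesystem) and reclaim the space on a background thread. The sketch below illustrates that technique only; the class and method names are hypothetical and this is not the committed HDFS-15569 patch:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.util.Comparator;
import java.util.stream.Stream;

/**
 * Illustrative sketch (not the actual HDFS-15569 patch): instead of
 * blocking doRecover on a recursive delete, atomically rename the
 * directory out of the way so recovery can proceed immediately, then
 * delete the renamed directory on a background thread.
 */
public class AsyncDirCleanup {

    /**
     * Renames {@code dir} to a sibling ".trash" name and starts a daemon
     * thread that deletes it. Returns the cleaner thread so callers can
     * join it if they need to wait for the space to be reclaimed.
     */
    public static Thread renameAndDeleteAsync(Path dir) throws IOException {
        Path trash = dir.resolveSibling(
                dir.getFileName() + ".trash." + System.nanoTime());
        // Rename within the same filesystem is a single metadata update,
        // unlike fullyDelete which walks every file in the tree.
        Files.move(dir, trash, StandardCopyOption.ATOMIC_MOVE);

        Thread cleaner = new Thread(() -> {
            try (Stream<Path> walk = Files.walk(trash)) {
                // Delete children before parents (reverse depth order).
                walk.sorted(Comparator.reverseOrder()).forEach(p -> {
                    try {
                        Files.delete(p);
                    } catch (IOException ignored) {
                        // Best effort: leftover trash can be swept later.
                    }
                });
            } catch (IOException ignored) {
                // Best effort: leftover trash can be swept later.
            }
        }, "trash-cleaner");
        cleaner.setDaemon(true);
        cleaner.start();
        return cleaner;
    }
}
```

With this shape, the recovery path only pays for one rename; the recursive deletion still happens, but off the startup-critical thread.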
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]