[ https://issues.apache.org/jira/browse/HDFS-8578?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15048664#comment-15048664 ]
Allen Wittenauer commented on HDFS-8578:
----------------------------------------

You're misinterpreting the output entirely.

a. The first message means the value of JAVA_HOME that was inherited from pre-Docker doesn't work in Docker. Since Yetus is running under Docker and wasn't told to use anything different, it's going to find a new one to use instead.

b. Oh, --jenkins was passed, so let's turn on all the specific bits that are enabled when running under Jenkins *regardless* of whether this is Docker or not.

You'll note:

{code}
apache-yetus-201a378/shelldocs/
apache-yetus-201a378/shelldocs/shelldocs.py
apache-yetus-201a378/yetus-project/
apache-yetus-201a378/yetus-project/pom.xml
Running in Jenkins mode
Processing: HDFS-8578
HDFS-8578 patch is being downloaded at Tue Dec 8 10:29:37 UTC 2015 from
https://issues.apache.org/jira/secure/attachment/12776283/HDFS-8578-13.patch
{code}

... that Yetus tells you it is running in Jenkins mode immediately after untarring, too. Jenkins and Docker are not mutually exclusive modes.

> On upgrade, Datanode should process all storage/data dirs in parallel
> ---------------------------------------------------------------------
>
>                 Key: HDFS-8578
>                 URL: https://issues.apache.org/jira/browse/HDFS-8578
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: datanode
>            Reporter: Raju Bairishetti
>            Assignee: Vinayakumar B
>            Priority: Critical
>         Attachments: HDFS-8578-01.patch, HDFS-8578-02.patch, HDFS-8578-03.patch, HDFS-8578-04.patch, HDFS-8578-05.patch, HDFS-8578-06.patch, HDFS-8578-07.patch, HDFS-8578-08.patch, HDFS-8578-09.patch, HDFS-8578-10.patch, HDFS-8578-11.patch, HDFS-8578-12.patch, HDFS-8578-13.patch, HDFS-8578-14.patch, HDFS-8578-branch-2.6.0.patch, HDFS-8578-branch-2.7-001.patch, HDFS-8578-branch-2.7-002.patch
>
> Right now, during upgrades the datanode processes all the storage dirs sequentially.
> Assume it takes ~20 minutes to process a single storage dir; a datanode with ~10 disks will then take around 3 hours to come up.
> *BlockPoolSliceStorage.java*
> {code}
> for (int idx = 0; idx < getNumStorageDirs(); idx++) {
>   doTransition(datanode, getStorageDir(idx), nsInfo, startOpt);
>   assert getCTime() == nsInfo.getCTime()
>       : "Data-node and name-node CTimes must be the same.";
> }
> {code}
> It would save a lot of time during major upgrades if the datanode processed all storage dirs/disks in parallel.
> Can we make the datanode process all storage dirs in parallel?

--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
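The parallelization proposed in the quoted description could be sketched roughly as below. This is a hypothetical illustration, not the actual HDFS-8578 patch: the thread pool, the simulated per-directory doTransition, and the directory names are all assumptions, standing in for BlockPoolSliceStorage's real logic.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelUpgradeSketch {

    // Hypothetical stand-in for doTransition() on one storage dir;
    // the sleep simulates the per-directory upgrade work.
    static String doTransition(String storageDir) throws InterruptedException {
        Thread.sleep(10);
        return storageDir + ":upgraded";
    }

    public static void main(String[] args) throws Exception {
        List<String> dirs = List.of("/data1", "/data2", "/data3", "/data4");

        // One worker per storage dir, so all dirs are processed concurrently
        // instead of one after another.
        ExecutorService pool = Executors.newFixedThreadPool(dirs.size());
        List<Future<String>> results = new ArrayList<>();
        for (String dir : dirs) {
            results.add(pool.submit(() -> doTransition(dir)));
        }

        // Wait for every dir; get() rethrows any per-directory failure,
        // so a broken dir still fails the upgrade as in the sequential loop.
        for (Future<String> f : results) {
            System.out.println(f.get());
        }
        pool.shutdown();
    }
}
```

With N dirs and roughly equal per-dir cost, wall-clock time drops from about N×t to about t, which is the whole point of the issue: ~10 disks at ~20 minutes each come up in ~20 minutes instead of ~3 hours.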