[ https://issues.apache.org/jira/browse/HDFS-6233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13966275#comment-13966275 ]

Hadoop QA commented on HDFS-6233:
---------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12639738/HDFS-6233.01.patch
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

    {color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
                        Please justify why no new tests are needed for this 
patch.
                        Also please list what manual steps were performed to 
verify this patch.

    {color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

    {color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

    {color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

                  org.apache.hadoop.fs.TestHardLink

    {color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/6649//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/6649//console

This message is automatically generated.

> Datanode upgrade in Windows from 1.x to 2.4 fails with symlink error.
> ---------------------------------------------------------------------
>
>                 Key: HDFS-6233
>                 URL: https://issues.apache.org/jira/browse/HDFS-6233
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: datanode, tools
>    Affects Versions: 2.4.0
>         Environment: Windows
>            Reporter: Huan Huang
>            Assignee: Arpit Agarwal
>         Attachments: HDFS-6233.01.patch
>
>
> I tried to upgrade Hadoop from 1.x to 2.4, but the DataNode failed to start due to 
> a hard link exception.
> Repro steps:
> * Install Hadoop 1.x
> * hadoop dfsadmin -safemode enter
> * hadoop dfsadmin -saveNamespace
> * hadoop namenode -finalize
> * Stop all services
> * Uninstall Hadoop 1.x
> * Install Hadoop 2.4
> * Start the namenode with the -upgrade option
> * Try to start the datanode; the hard link exception below appears in the datanode 
> service log.
> {code}
> 2014-04-10 22:47:11,655 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 8010: starting
> 2014-04-10 22:47:11,656 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
> 2014-04-10 22:47:11,999 INFO org.apache.hadoop.hdfs.server.common.Storage: Data-node version: -55 and name-node layout version: -56
> 2014-04-10 22:47:12,008 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on d:\hadoop\data\hdfs\dn\in_use.lock acquired by nodename 7268@myhost
> 2014-04-10 22:47:12,011 INFO org.apache.hadoop.hdfs.server.common.Storage: Recovering storage directory D:\hadoop\data\hdfs\dn from previous upgrade
> 2014-04-10 22:47:12,017 INFO org.apache.hadoop.hdfs.server.common.Storage: Upgrading storage directory d:\hadoop\data\hdfs\dn.
>    old LV = -44; old CTime = 0.
>    new LV = -55; new CTime = 1397168400373
> 2014-04-10 22:47:12,021 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting block pool BP-39008719-10.0.0.1-1397168400092 directory d:\hadoop\data\hdfs\dn\current\BP-39008719-10.0.0.1-1397168400092\current
> 2014-04-10 22:47:12,254 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool <registering> (Datanode Uuid unassigned) service to myhost/10.0.0.1:8020
> java.io.IOException: Usage: hardlink create [LINKNAME] [FILENAME] |Incorrect command line arguments.
>       at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:479)
>       at org.apache.hadoop.fs.HardLink.createHardLinkMult(HardLink.java:416)
>       at org.apache.hadoop.hdfs.server.datanode.DataStorage.linkBlocks(DataStorage.java:816)
>       at org.apache.hadoop.hdfs.server.datanode.DataStorage.linkAllBlocks(DataStorage.java:759)
>       at org.apache.hadoop.hdfs.server.datanode.DataStorage.doUpgrade(DataStorage.java:566)
>       at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:486)
>       at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:225)
>       at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:249)
>       at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:929)
>       at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:900)
>       at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
>       at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
>       at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
>       at java.lang.Thread.run(Thread.java:722)
> 2014-04-10 22:47:12,258 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to myhost/10.0.0.1:8020
> 2014-04-10 22:47:12,359 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool ID needed, but service not yet registered with NN
> java.lang.Exception: trace
>       at org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
>       at org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.remove(BlockPoolManager.java:91)
>       at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:859)
>       at org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
>       at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
>       at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
>       at java.lang.Thread.run(Thread.java:722)
> 2014-04-10 22:47:12,359 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
> 2014-04-10 22:47:12,360 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool ID needed, but service not yet registered with NN
> java.lang.Exception: trace
>       at org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:143)
>       at org.apache.hadoop.hdfs.server.datanode.DataNode.shutdownBlockPool(DataNode.java:861)
>       at org.apache.hadoop.hdfs.server.datanode.BPOfferService.shutdownActor(BPOfferService.java:350)
>       at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.cleanUp(BPServiceActor.java:619)
>       at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:837)
>       at java.lang.Thread.run(Thread.java:722)
> 2014-04-10 22:47:14,360 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
> 2014-04-10 22:47:14,361 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
> 2014-04-10 22:47:14,362 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
> /************************************************************
> SHUTDOWN_MSG: Shutting down DataNode at myhost/10.0.0.1
> ************************************************************/
> {code}
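
Editor's note on the failure above: the usage text inside the IOException ("hardlink create [LINKNAME] [FILENAME]") suggests that, on Windows, the hard link is created by invoking an external command from HardLink.createHardLinkMult, and that invocation is being rejected. For orientation only, here is a minimal, hedged Java sketch of the platform-independent way to create a hard link via java.nio.file.Files.createLink (Java 7+). The class name and paths below are hypothetical, and this sketch is not the content of HDFS-6233.01.patch.

{code}
// Illustrative sketch only (hypothetical class name and paths; not the patch).
// Files.createLink behaves the same on Windows and Unix and does not depend on
// the exit status or usage text of an external "hardlink" tool.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class HardLinkSketch {
  public static void main(String[] args) throws IOException {
    // Hypothetical source block file and the new location that should share
    // its data after the layout upgrade. Both parent directories must exist.
    Path existing = Paths.get("d:\\hadoop\\data\\hdfs\\dn\\previous\\blk_1");
    Path link = Paths.get("d:\\hadoop\\data\\hdfs\\dn\\current\\blk_1");

    // createLink(link, existing) makes "link" refer to the same file data as
    // "existing"; failures surface directly as IOException.
    Files.createLink(link, existing);
  }
}
{code}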



--
This message was sent by Atlassian JIRA
(v6.2#6252)
