After the namenode leaves safe mode, you can manually validate that your data is indeed there. Then you can finalize the upgrade so that the disk space held by the previous state is reclaimed.
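For example, once the namenode reports that safe mode is off, a quick sanity check before finalizing could look like this (a sketch, assuming the usual 0.15-era shell commands):

bin/hadoop dfsadmin -safemode get     # should print "Safe mode is OFF"
bin/hadoop fsck /                     # check that the filesystem reports HEALTHY
bin/hadoop dfs -ls /                  # spot-check that your directories are listed
bin/hadoop dfsadmin -finalizeUpgrade  # only after you are satisfied the data is intact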

Peter Thygesen wrote:
I guess you are right. :) I don't recall upgrading... but I did play around a while back when I started working with Hadoop. I managed to roll back and got the cluster up and running again with 0.14.2, then I ran the finalize command, stopped the cluster, and upgraded.
Now I've started dfs with the -upgrade option. But for how long should I expect 
dfs to stay in safe mode? I have noticed that all the datanode logs write 
something like:
"Upgrading storage directory /mnt/data/.... Old LV = -7; old CTime = 0
  New LV = -10; new CTime = 1196429533506"
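
One way to see whether the namenode is still in safe mode, assuming the dfsadmin safemode options in this Hadoop line, is:

bin/hadoop dfsadmin -safemode get    # prints "Safe mode is ON" or "Safe mode is OFF"
bin/hadoop dfsadmin -safemode wait   # blocks until the namenode leaves safe mode

The namenode leaves safe mode once enough datanodes have upgraded their storage, registered, and reported their blocks.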

Thank you for your help
\Peter

-----Original Message-----
From: Enis Soztutar [mailto:[EMAIL PROTECTED]]
Sent: 30 November 2007 13:57
To: [email protected]
Subject: Re: Upgrade Problem (0.14.2-> 0.15.1)

It seems that you have not finalized your previous upgrade. When an upgrade is not finalized, the "previous" directories are kept intact. Did you upgrade to 0.14.2 from an earlier version? If so, please first run
bin/hadoop dfsadmin -finalizeUpgrade
so that your upgrade to 0.14.2 is finalized. Then upgrade to 0.15.1.
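Put together, the sequence would look roughly like this (a sketch, assuming a standard install with the stock start/stop scripts):

# on the running 0.14.2 cluster: finalize the earlier upgrade, then stop dfs
bin/hadoop dfsadmin -finalizeUpgrade
bin/stop-dfs.sh

# switch the installation to 0.15.1, then start dfs in upgrade mode
bin/start-dfs.sh -upgrade

# later, once you have verified your data, finalize the 0.15.1 upgrade as well
bin/hadoop dfsadmin -finalizeUpgrade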


Peter Thygesen wrote:
I'm upgrading from 0.14.2 to 0.15.1. I followed the upgrade guide, but
when I started dfs (start-dfs.sh -upgrade) I got the following error:

2007-11-30 13:07:32,515 INFO org.apache.hadoop.dfs.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoopmaster/192.168.0.129
STARTUP_MSG:   args = [-upgrade]
************************************************************/
2007-11-30 13:07:32,888 INFO org.apache.hadoop.dfs.NameNode: Namenode up at: hadoopmaster/192.168.0.129:54310
2007-11-30 13:07:32,899 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=NameNode, sessionId=null
2007-11-30 13:07:33,132 INFO org.apache.hadoop.ipc.Server: Stopping server on 54310
2007-11-30 13:07:33,135 ERROR org.apache.hadoop.dfs.NameNode: org.apache.hadoop.dfs.InconsistentFSStateException: Directory /usr/local/hadoop-datastore/hadoop-hadoop/dfs/name is in an inconsistent state: previous fs state should not exist during upgrade. Finalize or rollback first.
        at org.apache.hadoop.dfs.FSImage.doUpgrade(FSImage.java:243)
        at org.apache.hadoop.dfs.FSImage.recoverTransitionRead(FSImage.java:216)
        at org.apache.hadoop.dfs.FSDirectory.loadFSImage(FSDirectory.java:76)
        at org.apache.hadoop.dfs.FSNamesystem.<init>(FSNamesystem.java:221)
        at org.apache.hadoop.dfs.NameNode.init(NameNode.java:130)
        at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:168)
        at org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:804)
        at org.apache.hadoop.dfs.NameNode.main(NameNode.java:813)

2007-11-30 13:07:33,144 INFO org.apache.hadoop.dfs.NameNode: SHUTDOWN_MSG:
/***************************************
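
For context, the "previous fs state" in the exception is kept under the namenode storage directory. Assuming the standard current/previous layout of dfs.name.dir, one can check for it using the path from the log above:

ls /usr/local/hadoop-datastore/hadoop-hadoop/dfs/name
# a leftover "previous" subdirectory alongside "current" is what makes
# doUpgrade refuse to proceed until the earlier upgrade is finalized or rolled back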

I saw that the datanodes started and the secondarynamenode also started,
but they could of course not connect to the namenode.

Hope someone can help me get my cluster started again.

Thx.

\Peter Thygesen


