[ https://issues.apache.org/jira/browse/HADOOP-4773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Focus resolved HADOOP-4773.
---------------------------

       Resolution: Fixed
    Fix Version/s: 0.17.2
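
For anyone hitting the same startup failure: the "Permission denied" on /tmp/hadoop-user-namenode.pid quoted below comes from hadoop-daemon.sh, which writes its pid file to HADOOP_PID_DIR and falls back to /tmp when that variable is unset, so a stale pid file owned by another account blocks the write. A minimal sketch of the usual workaround follows; the pid directory path is only an example and is not taken from this report.

  # conf/hadoop-env.sh -- point the daemons at a pid directory the hadoop user owns
  export HADOOP_PID_DIR=/home/user/hadoop/pids

  # create it on every node before restarting the daemons
  mkdir -p /home/user/hadoop/pids

  # or, if permissions allow, remove the stale pid file left in /tmp
  sudo rm -f /tmp/hadoop-user-namenode.pid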

> namenode startup error, hadoop-user-namenode.pid permission denied.
> -------------------------------------------------------------------
>
>                 Key: HADOOP-4773
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4773
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.17.2
>         Environment: CentOS 5.2,
> hadoop-site.xml settings:
> fs.default.name hdfs://master.cloud:9000  
> mapred.job.tracker hdfs://master.cloud:9001  
> hadoop.tmp.dir /home/user/hadoop/tmp/  
> mapred.child.java.opts -Xmx512M 
>            Reporter: Focus
>            Priority: Critical
>             Fix For: 0.17.2
>
>
> When I run start-dfs.sh, it shows:
> $ start-dfs.sh 
> starting namenode, logging to 
> /home/user/hadoop-0.17.2.1/bin/../logs/hadoop-user-namenode-master.cloud.out
> /home/user/hadoop-0.17.2.1/bin/hadoop-daemon.sh: line 117: 
> /tmp/hadoop-user-namenode.pid: Permission denied
> slave3.cloud: starting datanode, logging to 
> /home/user/hadoop-0.17.2.1/bin/../logs/hadoop-user-datanode-slave3.cloud.out
> slave2.cloud: starting datanode, logging to 
> /home/user/hadoop-0.17.2.1/bin/../logs/hadoop-user-datanode-slave2.cloud.out
> slave4.cloud: starting datanode, logging to 
> /home/user/hadoop-0.17.2.1/bin/../logs/hadoop-user-datanode-slave4.cloud.out
> master.cloud: starting secondarynamenode, logging to 
> /home/user/hadoop-0.17.2.1/bin/../logs/hadoop-user-secondarynamenode-master.cloud.out
> The namenode log shows:
> 2008-12-04 17:59:10,696 INFO org.apache.hadoop.dfs.NameNode: STARTUP_MSG: 
> /************************************************************
> STARTUP_MSG: Starting NameNode
> STARTUP_MSG:   host = master.cloud/10.100.4.226
> STARTUP_MSG:   args = []
> STARTUP_MSG:   version = 0.17.2.1
> STARTUP_MSG:   build = 
> https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.17 -r 684969; 
> compiled by 'oom' on Wed Aug 20 22:29:32 UTC 2008
> ************************************************************/
> 2008-12-04 17:59:10,823 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: 
> Initializing RPC Metrics with hostName=NameNode, port=9000
> 2008-12-04 17:59:10,830 INFO org.apache.hadoop.dfs.NameNode: Namenode up at: 
> master.cloud/10.100.4.226:9000
> 2008-12-04 17:59:10,834 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: 
> Initializing JVM Metrics with processName=NameNode, sessionId=null
> 2008-12-04 17:59:10,838 INFO org.apache.hadoop.dfs.NameNodeMetrics: 
> Initializing NameNodeMeterics using context 
> object:org.apache.hadoop.metrics.spi.NullContext
> 2008-12-04 17:59:10,924 INFO org.apache.hadoop.fs.FSNamesystem: 
> fsOwner=user,users
> 2008-12-04 17:59:10,924 INFO org.apache.hadoop.fs.FSNamesystem: 
> supergroup=supergroup
> 2008-12-04 17:59:10,924 INFO org.apache.hadoop.fs.FSNamesystem: 
> isPermissionEnabled=true
> 2008-12-04 17:59:10,983 INFO org.apache.hadoop.fs.FSNamesystem: Finished 
> loading FSImage in 102 msecs
> 2008-12-04 17:59:10,985 INFO org.apache.hadoop.dfs.StateChange: STATE* 
> Leaving safe mode after 0 secs.
> 2008-12-04 17:59:10,986 INFO org.apache.hadoop.dfs.StateChange: STATE* 
> Network topology has 0 racks and 0 datanodes
> 2008-12-04 17:59:10,986 INFO org.apache.hadoop.dfs.StateChange: STATE* 
> UnderReplicatedBlocks has 0 blocks
> 2008-12-04 17:59:10,993 INFO org.apache.hadoop.fs.FSNamesystem: Registered 
> FSNamesystemStatusMBean
> 2008-12-04 17:59:11,066 INFO org.mortbay.util.Credential: Checking Resource 
> aliases
> 2008-12-04 17:59:11,176 INFO org.mortbay.http.HttpServer: Version Jetty/5.1.4
> 2008-12-04 17:59:11,178 INFO org.mortbay.util.Container: Started 
> HttpContext[/static,/static]
> 2008-12-04 17:59:11,178 INFO org.mortbay.util.Container: Started 
> HttpContext[/logs,/logs]
> 2008-12-04 17:59:11,557 INFO org.mortbay.util.Container: Started [EMAIL 
> PROTECTED]
> 2008-12-04 17:59:11,618 INFO org.mortbay.util.Container: Started 
> WebApplicationContext[/,/]
> 2008-12-04 17:59:11,619 WARN org.mortbay.util.ThreadedServer: Failed to 
> start: [EMAIL PROTECTED]:50070
> 2008-12-04 17:59:11,619 WARN org.apache.hadoop.fs.FSNamesystem: 
> ReplicationMonitor thread received 
> InterruptedException.java.lang.InterruptedException: sleep interrupted
> 2008-12-04 17:59:11,620 ERROR org.apache.hadoop.fs.FSNamesystem: 
> java.lang.InterruptedException
>       at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.reportInterruptAfterWait(AbstractQueuedSynchronizer.java:1899)
>       at 
> java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1934)
>       at 
> java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
>       at 
> org.apache.hadoop.dfs.FSNamesystem$ResolutionMonitor.run(FSNamesystem.java:1931)
>       at java.lang.Thread.run(Thread.java:619)
> 2008-12-04 17:59:11,621 INFO org.apache.hadoop.fs.FSNamesystem: Number of 
> transactions: 0 Total time for transactions(ms): 0 Number of syncs: 0 
> SyncTimes(ms): 0 
> 2008-12-04 17:59:11,674 INFO org.apache.hadoop.ipc.Server: Stopping server on 
> 9000
> 2008-12-04 17:59:11,730 ERROR org.apache.hadoop.dfs.NameNode: 
> java.net.BindException: Address already in use
>       at java.net.PlainSocketImpl.socketBind(Native Method)
>       at java.net.PlainSocketImpl.bind(PlainSocketImpl.java:359)
>       at java.net.ServerSocket.bind(ServerSocket.java:319)
>       at java.net.ServerSocket.<init>(ServerSocket.java:185)
>       at 
> org.mortbay.util.ThreadedServer.newServerSocket(ThreadedServer.java:391)
>       at org.mortbay.util.ThreadedServer.open(ThreadedServer.java:477)
>       at org.mortbay.util.ThreadedServer.start(ThreadedServer.java:503)
>       at org.mortbay.http.SocketListener.start(SocketListener.java:203)
>       at org.mortbay.http.HttpServer.doStart(HttpServer.java:761)
>       at org.mortbay.util.Container.start(Container.java:72)
>       at 
> org.apache.hadoop.mapred.StatusHttpServer.start(StatusHttpServer.java:207)
>       at org.apache.hadoop.dfs.FSNamesystem.initialize(FSNamesystem.java:335)
>       at org.apache.hadoop.dfs.FSNamesystem.<init>(FSNamesystem.java:255)
>       at org.apache.hadoop.dfs.NameNode.initialize(NameNode.java:133)
>       at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:178)
>       at org.apache.hadoop.dfs.NameNode.<init>(NameNode.java:164)
>       at org.apache.hadoop.dfs.NameNode.createNameNode(NameNode.java:846)
>       at org.apache.hadoop.dfs.NameNode.main(NameNode.java:855)
> 2008-12-04 17:59:11,733 INFO org.apache.hadoop.dfs.NameNode: SHUTDOWN_MSG: 
> /************************************************************
> SHUTDOWN_MSG: Shutting down NameNode at master.cloud/10.100.4.226
> ************************************************************/
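
The java.net.BindException ("Address already in use") in the quoted log is the NameNode's embedded web server failing to bind its HTTP port (50070 by default). The most likely explanation, though only a guess from this log, is that an earlier NameNode is still running and holding the port, and without a readable pid file stop-dfs.sh has no way to shut it down. A minimal sketch of how to check, assuming standard CentOS tools:

  # see which process is listening on the NameNode ports (run as root so -p shows owners)
  netstat -nlp | grep -E ':(50070|9000) '

  # confirm it is a leftover NameNode JVM
  ps -ef | grep [N]ameNode

  # stop the stale process, then rerun start-dfs.sh
  kill <stale-namenode-pid>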

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
