Yes, I have configured a multi-node setup: 1 master and 2 slaves.

I formatted the namenode and then ran the start-dfs.sh and
start-mapred.sh scripts.

When I run the bin/hadoop fs -put input input command, I get the following
error on my terminal.

hduser@md-trngpoc1:/usr/local/hadoop_dir/hadoop$ bin/hadoop fs -put input
input
Warning: $HADOOP_HOME is deprecated.
put: org.apache.hadoop.security.AccessControlException: Permission denied:
user=hduser, access=WRITE, inode="":root:supergroup:rwxr-xr-x
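
If I read this error right, hduser has no write access to the HDFS root,
which is owned by root (maybe because I formatted or started HDFS as root at
some point). I am guessing the fix is to create an HDFS home directory for
hduser and chown it, roughly like this (run as the HDFS superuser, which I
assume is root here):

bin/hadoop fs -mkdir /user/hduser
bin/hadoop fs -chown -R hduser:supergroup /user/hduser

since the relative path "input" should then resolve to /user/hduser/input.
Is that the right approach?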
I also executed the command below, and it returned /hadoop-install/hadoop;
I cannot figure out what I am doing wrong.


hduser@md-trngpoc1:/usr/local/hadoop_dir/hadoop$ echo $HADOOP_HOME
/hadoop-install/hadoop
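
One thing I notice: $HADOOP_HOME points to /hadoop-install/hadoop, but I am
actually running Hadoop from /usr/local/hadoop_dir/hadoop. If that stale
value matters, I suppose I should either point it at the real install
directory or suppress it (1.0.3 deprecates the variable anyway), for example
in hduser's ~/.bashrc:

export HADOOP_HOME=/usr/local/hadoop_dir/hadoop
# or, since the variable is deprecated in 1.0.x:
export HADOOP_HOME_WARN_SUPPRESS=1

Could this mismatch be related to the problem?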

*Namenode log:*
==========

java.lang.InterruptedException: sleep interrupted
        at java.lang.Thread.sleep(Native Method)
        at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
        at java.lang.Thread.run(Thread.java:662)
2012-07-09 19:02:12,696 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Problem binding to md-trngpoc1/10.5.114.110:54310 : Address already in use
        at org.apache.hadoop.ipc.Server.bind(Server.java:227)
        at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
        at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
        at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
        at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:294)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
Caused by: java.net.BindException: Address already in use
        at sun.nio.ch.Net.bind(Native Method)
        at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
        at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
        at org.apache.hadoop.ipc.Server.bind(Server.java:225)
        ... 8 more
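
As Nitin suggested, it looks like something is already listening on port
54310, probably a NameNode left over from an earlier start. Before
formatting again I plan to find and stop whatever holds the port, with
something like this (a guess on my side, assuming a Linux box with
netstat/lsof available):

sudo netstat -tlnp | grep 54310    # show the PID listening on 54310
sudo lsof -i :54310                # alternative way to find it
bin/stop-all.sh                    # then stop all daemons cleanly
bin/start-dfs.sh
bin/start-mapred.sh

Is that the right way to clear the "Address already in use" error?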
*Datanode log:*
=========================================
2012-07-09 18:44:39,949 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = md-trngpoc3/10.5.114.168
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.3
STARTUP_MSG:   build =
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
1335192; compiled by 'hortonfo' on Tue May  8 20:31:25 UTC 2012
************************************************************/
2012-07-09 18:44:40,039 INFO org.apache.hadoop.metrics2.impl.MetricsConfig:
loaded properties from hadoop-metrics2.properties
2012-07-09 18:44:40,047 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
MetricsSystem,sub=Stats registered.
2012-07-09 18:44:40,048 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
period at 10 second(s).
2012-07-09 18:44:40,048 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system
started
2012-07-09 18:44:40,125 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
registered.
2012-07-09 18:44:40,163 WARN
org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in
dfs.data.dir: can not create directory: /app/hadoop_dir/hadoop/tmp/dfs/data
2012-07-09 18:44:40,163 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode: All directories in
dfs.data.dir are invalid.
2012-07-09 18:44:40,163 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2012-07-09 18:44:40,164 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at md-trngpoc3/10.5.114.168
************************************************************/
2012-07-09 18:46:09,586 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = md-trngpoc3/10.5.114.168
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.3
STARTUP_MSG:   build =
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
1335192; compiled by 'hortonfo' on Tue May  8 20:31:25 UTC 2012
************************************************************/
2012-07-09 18:46:09,676 INFO org.apache.hadoop.metrics2.impl.MetricsConfig:
loaded properties from hadoop-metrics2.properties
2012-07-09 18:46:09,684 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
MetricsSystem,sub=Stats registered.
2012-07-09 18:46:09,684 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
period at 10 second(s).
2012-07-09 18:46:09,684 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system
started
2012-07-09 18:46:09,737 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
registered.
2012-07-09 18:46:09,758 WARN
org.apache.hadoop.hdfs.server.datanode.DataNode: Invalid directory in
dfs.data.dir: can not create directory: /app/hadoop_dir/hadoop/tmp/dfs/data
2012-07-09 18:46:09,758 ERROR
org.apache.hadoop.hdfs.server.datanode.DataNode: All directories in
dfs.data.dir are invalid.
2012-07-09 18:46:09,758 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2012-07-09 18:46:09,758 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at md-trngpoc3/10.5.114.168
************************************************************/
2012-07-09 19:02:34,942 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = md-trngpoc3/10.5.114.168
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 1.0.3
STARTUP_MSG:   build =
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
1335192; compiled by 'hortonfo' on Tue May  8 20:31:25 UTC 2012
************************************************************/
2012-07-09 19:02:35,033 INFO org.apache.hadoop.metrics2.impl.MetricsConfig:
loaded properties from hadoop-metrics2.properties
2012-07-09 19:02:35,041 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
MetricsSystem,sub=Stats registered.
2012-07-09 19:02:35,041 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
period at 10 second(s).
2012-07-09 19:02:35,046 INFO
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system
started
2012-07-09 19:02:35,102 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi
registered.
2012-07-09 19:02:35,124 INFO org.apache.hadoop.util.NativeCodeLoader:
Loaded the native-hadoop library
2012-07-09 19:02:35,220 INFO org.apache.hadoop.hdfs.server.common.Storage:
Storage directory /app/hadoop/tmp/dfs/data is not formatted.
2012-07-09 19:02:35,220 INFO org.apache.hadoop.hdfs.server.common.Storage:
Formatting ...
2012-07-09 19:02:35,439 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
FSDatasetStatusMBean
2012-07-09 19:02:35,445 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010
2012-07-09 19:02:35,447 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
1048576 bytes/s
2012-07-09 19:02:40,489 INFO org.mortbay.log: Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
2012-07-09 19:02:40,526 INFO org.apache.hadoop.http.HttpServer: Added
global filtersafety
(class=org.apache.hadoop.http.HttpServer$QuotingInputFilter)
2012-07-09 19:02:40,533 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: dfs.webhdfs.enabled = false
2012-07-09 19:02:40,533 INFO org.apache.hadoop.http.HttpServer: Port
returned by webServer.getConnectors()[0].getLocalPort() before open() is
-1. Opening the listener on 50075
2012-07-09 19:02:40,534 INFO org.apache.hadoop.http.HttpServer:
listener.getLocalPort() returned 50075
webServer.getConnectors()[0].getLocalPort() returned 50075
2012-07-09 19:02:40,534 INFO org.apache.hadoop.http.HttpServer: Jetty bound
to port 50075
2012-07-09 19:02:40,534 INFO org.mortbay.log: jetty-6.1.26
2012-07-09 19:02:40,705 INFO org.mortbay.log: Started
SelectChannelConnector@0.0.0.0:50075
2012-07-09 19:02:40,708 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm
registered.
2012-07-09 19:02:40,709 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
DataNode registered.
2012-07-09 19:02:45,849 INFO org.apache.hadoop.ipc.Server: Starting
SocketReader
2012-07-09 19:02:45,852 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
RpcDetailedActivityForPort50020 registered.
2012-07-09 19:02:45,852 INFO
org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
RpcActivityForPort50020 registered.
2012-07-09 19:02:45,855 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration =
DatanodeRegistration(md-trngpoc3:50010, storageID=, infoPort=50075, ipcPort=50020)
2012-07-09 19:03:14,690 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: New storage id
DS-648442314-10.5.114.168-50010-1341840794658 is assigned to data-node 10.5.114.168:50010
2012-07-09 19:03:14,690 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Starting asynchronous
block report scan
2012-07-09 19:03:14,690 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(
10.5.114.168:50010, storageID=DS-648442314-10.5.114.168-50010-1341840794658, infoPort=50075, ipcPort=50020)In DataNode.run, data = FSDataset{dirpath='/app/hadoop/tmp/dfs/data/current'}
2012-07-09 19:03:14,690 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Finished asynchronous
block report scan in 0ms
2012-07-09 19:03:14,691 INFO org.apache.hadoop.ipc.Server: IPC Server
Responder: starting
2012-07-09 19:03:14,691 INFO org.apache.hadoop.ipc.Server: IPC Server
listener on 50020: starting
2012-07-09 19:03:14,693 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 0 on 50020: starting
2012-07-09 19:03:14,693 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 1 on 50020: starting
2012-07-09 19:03:14,693 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL
of 3600000msec Initial delay: 0msec
2012-07-09 19:03:14,693 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 2 on 50020: starting
2012-07-09 19:03:14,696 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Reconciled asynchronous
block report against current state in 0 ms
2012-07-09 19:03:14,698 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks
took 1 msec to generate and 2 msecs for RPC and NN processing
2012-07-09 19:03:14,698 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block
scanner.
2012-07-09 19:03:14,699 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Generated rough (lockless)
block report in 0 ms
2012-07-09 19:03:14,699 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Reconciled asynchronous
block report against current state in 0 ms
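
Looking at the datanode log above, the first two startups died because
dfs.data.dir pointed at /app/hadoop_dir/hadoop/tmp/dfs/data, which the
datanode could not create, while the third attempt used
/app/hadoop/tmp/dfs/data and came up fine. If the earlier path is the one my
config actually intends, I assume I just have to create it with the right
ownership on each slave, something like (assuming hduser belongs to a hadoop
group; otherwise substitute the correct group):

sudo mkdir -p /app/hadoop_dir/hadoop/tmp/dfs/data
sudo chown -R hduser:hadoop /app/hadoop_dir/hadoop/tmp
chmod -R 755 /app/hadoop_dir/hadoop/tmp/dfs/data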


Please help me out on this issue.

On Mon, Jul 9, 2012 at 6:53 PM, Nitin Pawar <nitinpawar...@gmail.com> wrote:

> from the error it looks like the port is already in use.
>
> can you please confirm that all of the below have a different port to
> operate
> namenode
> datanode
> jobtracker
> tasktracker
> secondary namenode
>
> there should not be any common port used by any of these services
>
> On Mon, Jul 9, 2012 at 6:51 PM, prabhu K <prabhu.had...@gmail.com> wrote:
>
> > do you have any idea about the inline issue?
> >
> > On Mon, Jul 9, 2012 at 5:29 PM, prabhu K <prabhu.had...@gmail.com> wrote:
> >
> > > Hi users,
> > >
> > > I have installed Hadoop 1.0.3 and completed the single-node setup,
> > > and then ran the start-all.sh script.
> > >
> > > I am getting the following output:
> > >
> > >
> > > hduser@md-trngpoc1:/usr/local/hadoop_dir/hadoop/bin$ ./start-all.sh
> > > *Warning: $HADOOP_HOME is deprecated.*
> > >
> > > starting namenode, logging to
> > > /usr/local/hadoop_dir/hadoop/libexec/../logs/hadoop-hduser-namenode-md-trngpoc1.out
> > > localhost: starting datanode, logging to
> > > /usr/local/hadoop_dir/hadoop/libexec/../logs/hadoop-hduser-datanode-md-trngpoc1.out
> > > localhost: starting secondarynamenode, logging to
> > > /usr/local/hadoop_dir/hadoop/libexec/../logs/hadoop-hduser-secondarynamenode-md-trngpoc1.out
> > > starting jobtracker, logging to
> > > /usr/local/hadoop_dir/hadoop/libexec/../logs/hadoop-hduser-jobtracker-md-trngpoc1.out
> > > localhost: starting tasktracker, logging to
> > > /usr/local/hadoop_dir/hadoop/libexec/../logs/hadoop-hduser-tasktracker-md-trngpoc1.out
> > >
> > >
> > > and when I run the jps command I get the following output; I am not
> > > getting the namenode, datanode, or jobtracker in the jps list.
> > >
> > >
> > > hduser@md-trngpoc1:/usr/local/hadoop_dir/hadoop/bin$ jps
> > > 20620 TaskTracker
> > > 20670 Jps
> > > 20347 SecondaryNameNode
> > >
> > >
> > >
> > > when I look at the namenode log file, I see the following output:
> > >
> > > hduser@md-trngpoc1:/usr/local/hadoop_dir/hadoop/logs$ more
> > > hadoop-hduser-namenode-md-trngpoc1.log
> > > 2012-07-09 17:05:42,989 INFO
> > > org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
> > > /************************************************************
> > > STARTUP_MSG: Starting NameNode
> > > STARTUP_MSG:   host = md-trngpoc1/10.5.114.110
> > > STARTUP_MSG:   args = []
> > > STARTUP_MSG:   version = 1.0.3
> > > STARTUP_MSG:   build =
> > > https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.0 -r
> > > 1335192; compiled by 'hortonfo' on Tue May  8 20:31:25 UTC 2012
> > > ************************************************************/
> > > 2012-07-09 17:05:43,082 INFO
> > > org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from
> > > hadoop-metrics2.properties
> > > 2012-07-09 17:05:43,089 INFO
> > > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
> > > MetricsSystem,sub=Stats registered.
> > > 2012-07-09 17:05:43,090 INFO
> > > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot
> > > period at 10 second(s).
> > > 2012-07-09 17:05:43,090 INFO
> > > org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
> > > 2012-07-09 17:05:43,169 INFO
> > > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
> > > 2012-07-09 17:05:43,174 INFO
> > > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
> > > 2012-07-09 17:05:43,175 INFO
> > > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
> > > NameNode registered.
> > > 2012-07-09 17:05:43,193 INFO org.apache.hadoop.hdfs.util.GSet: VM
> > > type       = 32-bit
> > > 2012-07-09 17:05:43,193 INFO org.apache.hadoop.hdfs.util.GSet: 2% max
> > > memory = 17.77875 MB
> > > 2012-07-09 17:05:43,193 INFO org.apache.hadoop.hdfs.util.GSet:
> > > capacity      = 2^22 = 4194304 entries
> > > 2012-07-09 17:05:43,193 INFO org.apache.hadoop.hdfs.util.GSet:
> > > recommended=4194304, actual=4194304
> > > 2012-07-09 17:05:43,211 INFO
> > > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hduser
> > > 2012-07-09 17:05:43,211 INFO
> > > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
> > > 2012-07-09 17:05:43,211 INFO
> > > org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> > > isPermissionEnabled=true
> > > 2012-07-09 17:05:43,216 INFO
> > > org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> > > dfs.block.invalidate.limit=100
> > > 2012-07-09 17:05:43,216 INFO
> > > org.apache.hadoop.hdfs.server.namenode.FSNamesystem:
> > > isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s),
> > > accessTokenLifetime=0 min(s)
> > > 2012-07-09 17:05:43,352 INFO
> > > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered
> > > FSNamesystemStateMBean and NameNodeMXBean
> > > 2012-07-09 17:05:43,365 INFO
> > > org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names
> > > occuring more than 10 times
> > > 2012-07-09 17:05:43,372 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files = 1
> > > 2012-07-09 17:05:43,375 INFO org.apache.hadoop.hdfs.server.common.Storage: Number of files under construction = 0
> > > 2012-07-09 17:05:43,375 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 loaded in 0 seconds.
> > > 2012-07-09 17:05:43,375 INFO org.apache.hadoop.hdfs.server.common.Storage: Edits file /app/hadoop_dir/hadoop/tmp/dfs/name/current/edits of size 4 edits # 0 loaded in 0 seconds.
> > > 2012-07-09 17:05:43,376 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 saved in 0 seconds.
> > > 2012-07-09 17:05:43,614 INFO org.apache.hadoop.hdfs.server.common.Storage: Image file of size 112 saved in 0 seconds.
> > > 2012-07-09 17:05:43,844 INFO
> > > org.apache.hadoop.hdfs.server.namenode.NameCache: initialized with 0
> > > entries 0 lookups
> > > 2012-07-09 17:05:43,844 INFO
> > > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Finished loading
> > > FSImage in 637 msecs
> > > 2012-07-09 17:05:43,857 INFO
> > > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Total number of blocks = 0
> > > 2012-07-09 17:05:43,857 INFO
> > > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of invalid
> > > blocks = 0
> > > 2012-07-09 17:05:43,857 INFO
> > > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> > > under-replicated blocks = 0
> > > 2012-07-09 17:05:43,857 INFO
> > > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of
> > > over-replicated blocks = 0
> > > 2012-07-09 17:05:43,857 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> > > Safe mode termination scan for invalid, over- and under-replicated blocks completed in 12 msec
> > > 2012-07-09 17:05:43,857 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> > > Leaving safe mode after 0 secs.
> > > 2012-07-09 17:05:43,858 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> > > Network topology has 0 racks and 0 datanodes
> > > 2012-07-09 17:05:43,858 INFO org.apache.hadoop.hdfs.StateChange: STATE*
> > > UnderReplicatedBlocks has 0 blocks
> > > 2012-07-09 17:05:43,863 INFO org.apache.hadoop.util.HostsFileReader:
> > > Refreshing hosts (include/exclude) list
> > > 2012-07-09 17:05:43,867 INFO
> > > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue
> > > QueueProcessingStatistics: First cycle completed 0 blocks in 3 msec
> > > 2012-07-09 17:05:43,867 INFO
> > > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicateQueue
> > > QueueProcessingStatistics: Queue flush completed 0 blocks in 3 msec processing time, 3 msec clock time, 1 cycles
> > > 2012-07-09 17:05:43,867 INFO
> > > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue
> > > QueueProcessingStatistics: First cycle completed 0 blocks in 0 msec
> > > 2012-07-09 17:05:43,867 INFO
> > > org.apache.hadoop.hdfs.server.namenode.FSNamesystem: InvalidateQueue
> > > QueueProcessingStatistics: Queue flush completed 0 blocks in 0 msec processing time, 0 msec clock time, 1 cycles
> > > 2012-07-09 17:05:43,867 INFO
> > > org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source
> > > FSNamesystemMetrics registered.
> > > 2012-07-09 17:05:43,874 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: ReplicationMonitor thread received InterruptedException.java.lang.InterruptedException: sleep interrupted
> > > 2012-07-09 17:05:43,874 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Number of transactions: 0 Total time for transactions(ms): 0Number of transactions batched in Syncs: 0 Number of syncs: 0 SyncTimes(ms): 0
> > > 2012-07-09 17:05:43,875 INFO
> > > org.apache.hadoop.hdfs.server.namenode.DecommissionManager: Interrupted
> > > Monitor
> > > java.lang.InterruptedException: sleep interrupted
> > >         at java.lang.Thread.sleep(Native Method)
> > >         at org.apache.hadoop.hdfs.server.namenode.DecommissionManager$Monitor.run(DecommissionManager.java:65)
> > >         at java.lang.Thread.run(Thread.java:662)
> > > 2012-07-09 17:05:43,907 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: java.net.BindException: Problem binding to md-trngpoc1/10.5.114.110:54310 : Address already in use
> > >         at org.apache.hadoop.ipc.Server.bind(Server.java:227)
> > >         at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:301)
> > >         at org.apache.hadoop.ipc.Server.<init>(Server.java:1483)
> > >         at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:545)
> > >         at org.apache.hadoop.ipc.RPC.getServer(RPC.java:506)
> > >         at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:294)
> > >         at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:496)
> > >         at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1279)
> > >         at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1288)
> > > Caused by: java.net.BindException: Address already in use
> > >         at sun.nio.ch.Net.bind(Native Method)
> > >         at sun.nio.ch.ServerSocketChannelImpl.bind(ServerSocketChannelImpl.java:126)
> > >         at sun.nio.ch.ServerSocketAdaptor.bind(ServerSocketAdaptor.java:59)
> > >         at org.apache.hadoop.ipc.Server.bind(Server.java:225)
> > >         ... 8 more
> > >
> > > 2012-07-09 17:05:43,908 INFO
> > > org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
> > > /************************************************************
> > > SHUTDOWN_MSG: Shutting down NameNode at md-trngpoc1/10.5.114.110
> > > ************************************************************/
> > >
> > >
> > > Please advise on this issue. What am I doing wrong?
> > >
> > > Thanks,
> > > Prabhu
> > >
> >
>
>
>
> --
> Nitin Pawar
>
