On 06/17/2011 09:51 AM, Lemon Cheng wrote:
Hi,
Thanks for your reply.
I am not sure about that. How can I verify it?
What are your dfs.tmp.dir and dfs.data.dir values?
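Note that when dfs.data.dir is not set, it defaults to a directory under hadoop.tmp.dir, which normally lives in /tmp and can be wiped on reboot. A minimal hdfs-site.xml override would look something like this (the path is just an illustration):

<property>
  <name>dfs.data.dir</name>
  <value>/var/hadoop/dfs/data</value>
</property>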
You can check the DataNodes' health with: bin/slaves.sh jps | grep DataNode | sort
What is the output of bin/hadoop dfsadmin -report?
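On a healthy pseudo-distributed setup, the report should show your DataNode as live, roughly in this shape (all figures here are just illustrations):

Datanodes available: 1 (1 total, 0 dead)

Name: 127.0.0.1:50010
Decommission Status : Normal
Configured Capacity: 10568916992 (9.84 GB)
DFS Used: 24576 (24 KB)
DFS Remaining: 8000000000 (7.45 GB)
Last contact: Fri Jun 17 21:00:00 CST 2011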
One recommendation I can give you is to have at least one NameNode and two DataNodes.
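In 0.20 the DataNode hosts come from conf/slaves, one host per line, so a two-DataNode cluster would look something like this (the hostnames are just illustrations):

datanode1.example.com
datanode2.example.com

Keep dfs.replication at or below the number of live DataNodes; otherwise blocks stay under-replicated.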
regards
I checked localhost:50070; it shows 1 live node and 0 dead nodes.
And the log "hadoop-appuser-datanode-localhost.localdomain.log" shows:
************************************************************/
2011-06-17 19:59:38,658 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: host = localhost.localdomain/127.0.0.1
STARTUP_MSG: args = []
STARTUP_MSG: version = 0.20.2
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2011-06-17 19:59:46,738 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Registered FSDatasetStatusMBean
2011-06-17 19:59:46,749 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010
2011-06-17 19:59:46,752 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2011-06-17 19:59:46,812 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2011-06-17 19:59:46,870 INFO org.apache.hadoop.http.HttpServer: Port returned by webServer.getConnectors()[0].getLocalPort() before open() is -1. Opening the listener on 50075
2011-06-17 19:59:46,871 INFO org.apache.hadoop.http.HttpServer: listener.getLocalPort() returned 50075 webServer.getConnectors()[0].getLocalPort() returned 50075
2011-06-17 19:59:46,871 INFO org.apache.hadoop.http.HttpServer: Jetty bound to port 50075
2011-06-17 19:59:46,875 INFO org.mortbay.log: jetty-6.1.14
2011-06-17 20:01:45,702 INFO org.mortbay.log: Started SelectChannelConnector@0.0.0.0:50075
2011-06-17 20:01:45,709 INFO org.apache.hadoop.metrics.jvm.JvmMetrics: Initializing JVM Metrics with processName=DataNode, sessionId=null
2011-06-17 20:01:45,743 INFO org.apache.hadoop.ipc.metrics.RpcMetrics: Initializing RPC Metrics with hostName=DataNode, port=50020
2011-06-17 20:01:45,751 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration = DatanodeRegistration(localhost.localdomain:50010, storageID=DS-993704729-127.0.0.1-50010-1308296320968, infoPort=50075, ipcPort=50020)
2011-06-17 20:01:45,751 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2011-06-17 20:01:45,753 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2011-06-17 20:01:45,754 INFO org.apache.hadoop.ipc.Server: IPC Server handler 2 on 50020: starting
2011-06-17 20:01:45,754 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50020: starting
2011-06-17 20:01:45,754 INFO org.apache.hadoop.ipc.Server: IPC Server handler 1 on 50020: starting
2011-06-17 20:01:45,795 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(127.0.0.1:50010, storageID=DS-993704729-127.0.0.1-50010-1308296320968, infoPort=50075, ipcPort=50020)In DataNode.run, data = FSDataset{dirpath='/tmp/hadoop-appuser/dfs/data/current'}
2011-06-17 20:01:45,799 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
2011-06-17 20:01:45,828 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 11 msecs
2011-06-17 20:01:45,833 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block scanner.
2011-06-17 20:56:02,945 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 1 msecs
2011-06-17 21:56:02,248 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got processed in 1 msecs
On Fri, Jun 17, 2011 at 9:42 PM, Marcos Ortiz <mlor...@uci.cu> wrote:
On 06/17/2011 07:41 AM, Lemon Cheng wrote:
Hi,
I am using hadoop-0.20.2. After calling ./start-all.sh, I can run "hadoop dfs -ls".
However, when I run "hadoop dfs -cat /usr/lemon/wordcount/input/file01", the error shown below appears.
I have searched for this problem on the web, but I couldn't find a solution.
Can anyone give a suggestion?
Many thanks.
11/06/17 19:27:12 INFO hdfs.DFSClient: No node available for block: blk_7095683278339921538_1029 file=/usr/lemon/wordcount/input/file01
11/06/17 19:27:12 INFO hdfs.DFSClient: Could not obtain block blk_7095683278339921538_1029 from any node: java.io.IOException: No live nodes contain current block
11/06/17 19:27:15 INFO hdfs.DFSClient: No node available for block: blk_7095683278339921538_1029 file=/usr/lemon/wordcount/input/file01
11/06/17 19:27:15 INFO hdfs.DFSClient: Could not obtain block blk_7095683278339921538_1029 from any node: java.io.IOException: No live nodes contain current block
11/06/17 19:27:18 INFO hdfs.DFSClient: No node available for block: blk_7095683278339921538_1029 file=/usr/lemon/wordcount/input/file01
11/06/17 19:27:18 INFO hdfs.DFSClient: Could not obtain block blk_7095683278339921538_1029 from any node: java.io.IOException: No live nodes contain current block
11/06/17 19:27:21 WARN hdfs.DFSClient: DFS Read: java.io.IOException: Could not obtain block: blk_7095683278339921538_1029 file=/usr/lemon/wordcount/input/file01
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.chooseDataNode(DFSClient.java:1812)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.blockSeekTo(DFSClient.java:1638)
        at org.apache.hadoop.hdfs.DFSClient$DFSInputStream.read(DFSClient.java:1767)
        at java.io.DataInputStream.read(DataInputStream.java:83)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:47)
        at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:85)
        at org.apache.hadoop.fs.FsShell.printToStdout(FsShell.java:114)
        at org.apache.hadoop.fs.FsShell.access$100(FsShell.java:49)
        at org.apache.hadoop.fs.FsShell$1.process(FsShell.java:352)
        at org.apache.hadoop.fs.FsShell$DelayedExceptionThrowing.globAndProcess(FsShell.java:1898)
        at org.apache.hadoop.fs.FsShell.cat(FsShell.java:346)
        at org.apache.hadoop.fs.FsShell.doall(FsShell.java:1543)
        at org.apache.hadoop.fs.FsShell.run(FsShell.java:1761)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
        at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
        at org.apache.hadoop.fs.FsShell.main(FsShell.java:1880)
Regards,
Lemon
Are you sure that all your DataNodes are online?
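A quick way to check from the NameNode's side is to ask which DataNodes are supposed to hold the file's blocks, e.g.:

bin/hadoop fsck /usr/lemon/wordcount/input/file01 -files -blocks -locations

If the block shows up with no locations (or fsck marks it MISSING), then no live DataNode has reported a replica of it.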
--
Marcos Luís Ortíz Valmaseda
Software Engineer (UCI)
http://marcosluis2186.posterous.com
http://twitter.com/marcosluis2186