Hi,
Out-of-memory exceptions can also be caused by having too many files
open at once. What does 'ulimit -n' show?
29491
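(For comparing that limit against what a daemon's JVM actually has open, the sketch below reads the open and maximum descriptor counts through the Sun-specific UnixOperatingSystemMXBean. It is only a rough illustration: it works only on Sun JVMs on Unix, and the class name is made up here.)

import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

// Rough sketch: print how many file descriptors this JVM currently has open
// and the per-process maximum, via the Sun-specific UnixOperatingSystemMXBean.
// Sun/Unix JVMs only; the class name is illustrative, not from this thread.
public class FdCheck {
  public static void main(String[] args) {
    OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
    if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
      com.sun.management.UnixOperatingSystemMXBean unix =
          (com.sun.management.UnixOperatingSystemMXBean) os;
      System.out.println("open file descriptors: "
          + unix.getOpenFileDescriptorCount()
          + " of max " + unix.getMaxFileDescriptorCount());
    } else {
      System.out.println("descriptor counts not exposed by this JVM");
    }
  }
}

The same attributes are exposed over JMX, so if remote JMX is enabled on a running daemon they can be checked without adding any code.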
You presented an excerpt from a jobtracker log, right? What do the
tasktracker logs show?
I saw the following warning in the tasktracker log:
2007-12-06 12:23:41,604 WARN org.apache.hadoop.ipc.Server: IPC Server handler 0 on 50050, call progress(task_200712031900_0014_m_000058_0, 9.126612E-12, hdfs:///usr/ruish/400.gz:0+9528361, MAP, [EMAIL PROTECTED]) from: output error
java.nio.channels.ClosedChannelException
    at sun.nio.ch.SocketChannelImpl.ensureWriteOpen(SocketChannelImpl.java:125)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:294)
    at org.apache.hadoop.ipc.SocketChannelOutputStream.flushBuffer(SocketChannelOutputStream.java:108)
    at org.apache.hadoop.ipc.SocketChannelOutputStream.write(SocketChannelOutputStream.java:89)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
    at java.io.DataOutputStream.flush(DataOutputStream.java:106)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:585)
And in the datanode logs:
2007-12-06 14:42:20,831 ERROR org.apache.hadoop.dfs.DataNode: DataXceiver: java.io.IOException: Block blk_-8176614602638949879 is valid, and cannot be written to.
    at org.apache.hadoop.dfs.FSDataset.writeToBlock(FSDataset.java:515)
    at org.apache.hadoop.dfs.DataNode$DataXceiver.writeBlock(DataNode.java:822)
    at org.apache.hadoop.dfs.DataNode$DataXceiver.run(DataNode.java:727)
    at java.lang.Thread.run(Thread.java:595)
Also, can you please provide more details about your application? E.g., what is your InputFormat, your map function, etc.?
Very simple stuff: projecting certain fields as the key and sorting. The input is
gzipped files in which each line contains fields separated by a delimiter.
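A mapper of that shape might look roughly like the sketch below, using the classic org.apache.hadoop.mapred API; the delimiter and key field index are placeholders, not the actual job's values.

import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Sketch only: split each input line on a delimiter, emit one field as the
// key and the whole line as the value. Delimiter and field index are
// illustrative placeholders.
public class FieldProjectionMapper extends MapReduceBase
    implements Mapper<LongWritable, Text, Text, Text> {

  private static final String DELIMITER = "\t"; // assumed delimiter
  private static final int KEY_FIELD = 0;       // assumed key field

  public void map(LongWritable offset, Text line,
                  OutputCollector<Text, Text> output,
                  Reporter reporter) throws IOException {
    String[] fields = line.toString().split(DELIMITER, -1);
    if (fields.length > KEY_FIELD) {
      output.collect(new Text(fields[KEY_FIELD]), line);
    }
  }
}

Emitting the whole line as the value keeps the projection step trivial, and the framework's sort on the emitted keys then provides the sorting.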
Doug