[ http://issues.apache.org/jira/browse/HADOOP-66?page=comments#action_12369308 ]
Doug Cutting commented on HADOOP-66:
------------------------------------

It's hard to resume writing a block when a connection fails, since you don't know how much of the previous write succeeded. Currently the block is streamed over TCP connections. We could instead write it as a series of length-prefixed buffers, and query the remote datanode on reconnect about which buffers it had received, etc. But that seems like reinventing a lot of TCP.

If the datanode goes down, the entire block is currently in a temp file, so that it can instead be written to a different datanode. Thus if datanodes die during, e.g., a reduce, the reduce task does not have to restart. But if reduce tasks are running on the same pool of machines as datanodes, then, when a node fails, some reduce tasks will need to be restarted anyway. So I agree that this may not be helping us much.

I think throwing an exception when the connection to the datanode fails would be fine.

> dfs client writes all data for a chunk to /tmp
> ----------------------------------------------
>
>          Key: HADOOP-66
>          URL: http://issues.apache.org/jira/browse/HADOOP-66
>      Project: Hadoop
>         Type: Bug
>   Components: dfs
>     Versions: 0.1
>     Reporter: Sameer Paranjpye
>      Fix For: 0.1
>
> The dfs client writes all the data for the current chunk to a file in /tmp; when the chunk is complete, it is shipped out to the Datanodes. This can cause /tmp to fill up quickly when a lot of files are being written. A potentially better scheme is to buffer the written data in RAM (application code can set the buffer size) and flush it to the Datanodes when the buffer fills up.
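For illustration, here is a minimal sketch of the buffering scheme proposed in the description: written data is held in an in-RAM buffer of application-chosen size and flushed to the datanode connection whenever the buffer fills, instead of being staged in a file under /tmp. The class name, and the plain OutputStream standing in for the datanode connection, are hypothetical assumptions rather than the actual DFSClient code; a connection failure simply surfaces as an IOException to the caller, per the "throw an exception" approach in the comment above.

    import java.io.IOException;
    import java.io.OutputStream;

    /**
     * Sketch only: buffers written bytes in RAM and ships them to the
     * datanode connection when the buffer fills, rather than staging the
     * whole chunk in a temp file under /tmp.  Names are illustrative.
     */
    public class BufferedBlockOutputStream extends OutputStream {

        private final byte[] buffer;          // in-RAM buffer replacing the /tmp file
        private int count;                    // number of valid bytes currently buffered
        private final OutputStream datanode;  // hypothetical stream to the datanode

        public BufferedBlockOutputStream(OutputStream datanode, int bufferSize) {
            this.datanode = datanode;
            this.buffer = new byte[bufferSize];  // size chosen by application code
        }

        @Override
        public void write(int b) throws IOException {
            if (count == buffer.length) {
                flushBuffer();                // buffer is full: ship it out
            }
            buffer[count++] = (byte) b;
        }

        @Override
        public void write(byte[] b, int off, int len) throws IOException {
            while (len > 0) {
                if (count == buffer.length) {
                    flushBuffer();
                }
                int n = Math.min(len, buffer.length - count);
                System.arraycopy(b, off, buffer, count, n);
                count += n;
                off += n;
                len -= n;
            }
        }

        /** Send the buffered bytes to the datanode.  No retry or resume is
         *  attempted; an IOException propagates to the caller. */
        private void flushBuffer() throws IOException {
            if (count > 0) {
                datanode.write(buffer, 0, count);
                count = 0;
            }
        }

        @Override
        public void flush() throws IOException {
            flushBuffer();
            datanode.flush();
        }

        @Override
        public void close() throws IOException {
            flush();
            datanode.close();
        }
    }

Nothing here tries to resume a partially written block; as noted above, that would amount to reimplementing much of TCP, so a failed connection just fails the write.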