[ http://issues.apache.org/jira/browse/HADOOP-66?page=all ]
     
Doug Cutting resolved HADOOP-66:
--------------------------------

    Resolution: Fixed
     Assign To: Doug Cutting

I just committed a fix for this and some problems that it was hiding.

> dfs client writes all data for a chunk to /tmp
> ----------------------------------------------
>
>          Key: HADOOP-66
>          URL: http://issues.apache.org/jira/browse/HADOOP-66
>      Project: Hadoop
>         Type: Bug
>   Components: dfs
>     Versions: 0.1
>     Reporter: Sameer Paranjpye
>     Assignee: Doug Cutting
>      Fix For: 0.1
>  Attachments: no-tmp.patch
>
> The dfs client writes all the data for the current chunk to a file in /tmp; 
> when the chunk is complete, it is shipped out to the Datanodes. This can cause 
> /tmp to fill up quickly when many files are being written. A potentially 
> better scheme is to buffer the written data in RAM (application code can set 
> the buffer size) and flush it to the Datanodes when the buffer fills up.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira
