[ https://issues.apache.org/jira/browse/HDFS-3696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13422727#comment-13422727 ]
Tsz Wo (Nicholas), SZE commented on HDFS-3696:
----------------------------------------------

> Any idea why it performed best at 32KB?

Since TCP has a max packet size of 64kB, I thought the best buffer size would be somewhere close to 64kB. I was surprised that 32kB performed better than 48kB in my experiment. Perhaps it was due to some implementation detail in the Java library.

> Create files with WebHdfsFileSystem goes OOM when file size is big
> ------------------------------------------------------------------
>
>                 Key: HDFS-3696
>                 URL: https://issues.apache.org/jira/browse/HDFS-3696
>             Project: Hadoop HDFS
>          Issue Type: Bug
>    Affects Versions: 2.0.0-alpha
>            Reporter: Kihwal Lee
>            Assignee: Tsz Wo (Nicholas), SZE
>            Priority: Critical
>             Fix For: 0.23.3, 3.0.0, 2.2.0-alpha
>
>         Attachments: h3696_20120724.patch
>
>
> When doing "fs -put" to a WebHdfsFileSystem (webhdfs://), the FsShell goes
> OOM if the file is large. In my testing, 20MB files were fine, but 200MB
> files were not.
> I also tried reading a large file by issuing "-cat" and piping to a slow sink
> in order to force buffering. The read path did not have this problem: memory
> consumption stayed flat regardless of progress.
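For context on the fix being discussed: by default, java.net.HttpURLConnection buffers the entire request body in memory so it can compute a Content-Length header, which is what makes a 200MB "fs -put" go OOM on the write path. The usual remedy, and the general shape of the change here, is to enable chunked transfer encoding so the body is streamed in fixed-size chunks (the exact contents of h3696_20120724.patch may differ). Below is a minimal, self-contained sketch in plain Java; the URL and file path are placeholder arguments, and the 32kB chunk size is the value reported as best in the experiment above.

{code:java}
import java.io.FileInputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class ChunkedPut {
  // 32kB: the buffer size that performed best in the experiment above.
  private static final int CHUNK_SIZE = 32 * 1024;

  public static void main(String[] args) throws Exception {
    // args[0]: target URL (e.g. a webhdfs CREATE redirect URL, hypothetical here)
    // args[1]: local file to upload
    URL url = new URL(args[0]);
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("PUT");
    conn.setDoOutput(true);

    // Without this call, HttpURLConnection buffers the whole request body
    // in memory to compute Content-Length -- the OOM described in this issue.
    // Chunked streaming mode sends the body in CHUNK_SIZE pieces instead,
    // so memory use stays constant regardless of file size. Must be called
    // before the connection is opened (i.e. before getOutputStream()).
    conn.setChunkedStreamingMode(CHUNK_SIZE);

    byte[] buf = new byte[CHUNK_SIZE];
    try (InputStream in = new FileInputStream(args[1]);
         OutputStream out = conn.getOutputStream()) {
      int n;
      while ((n = in.read(buf)) != -1) {
        out.write(buf, 0, n);
      }
    }
    System.out.println("HTTP " + conn.getResponseCode());
  }
}
{code}

The trade-off is that with chunked encoding the client cannot set Content-Length up front, so a server that requires it would reject the request; WebHDFS accepts chunked PUTs, which is why this approach applies here.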