[ https://issues.apache.org/jira/browse/HADOOP-1338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12649271#action_12649271 ]
Devaraj Das commented on HADOOP-1338:
-------------------------------------

Matei, the problem with pulling a large number of segments (amounting to a large total size) is that it would interfere with the in-memory shuffle. Note that we want to use the memory buffer for the shuffle as much as possible to avoid disk IO. We probably need to base the maximum size we pull (in the case where we pull multiple segments) on the buffer available for the shuffle...

Raghu, that's an interesting suggestion. Worth trying out.

> Improve the shuffle phase by using the "connection: keep-alive" and doing batch transfers of files
> --------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-1338
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1338
>             Project: Hadoop Core
>          Issue Type: Improvement
>          Components: mapred
>            Reporter: Devaraj Das
>
> We should do transfers of map outputs at the granularity of *total-bytes-transferred* rather than the current way of transferring a single file and then closing the connection to the server. A single TaskTracker might have several map output files for a given reduce, and we should transfer multiple of them (up to a certain total size) over a single connection to the TaskTracker. Using HTTP/1.1's keep-alive connections would help, since a connection would stay open for more than one file transfer. We should limit each batch to a certain size so that we don't hold up a Jetty thread indefinitely (and cause timeouts for other clients). Overall, this should give us improved performance.

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
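
To make the proposal concrete, here is a minimal Java sketch of the batched fetch under the assumptions discussed above. This is not the actual TaskTracker/ReduceTask code: the MapOutputLocation class, the /mapOutput URL layout, and the fetchBatch entry point are hypothetical stand-ins invented for the example, and the maxBatchBytes cap is where the free space in the in-memory shuffle buffer would plug in.

{code:java}
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.List;

public class BatchedFetchSketch {

  /** Hypothetical handle on one map output hosted by a TaskTracker. */
  static class MapOutputLocation {
    final String hostPort;      // TaskTracker HTTP endpoint, e.g. "tt1:50060"
    final String mapId;         // map task whose output this reduce needs
    final long estimatedSize;   // size hint used to enforce the batch cap
    MapOutputLocation(String hostPort, String mapId, long estimatedSize) {
      this.hostPort = hostPort;
      this.mapId = mapId;
      this.estimatedSize = estimatedSize;
    }
  }

  /**
   * Pull several map outputs from one TaskTracker over a single (reused)
   * HTTP/1.1 connection, stopping once the next segment would push the
   * batch past maxBatchBytes -- the cap that, per the comment above, would
   * be derived from the memory still free in the in-memory shuffle buffer.
   */
  static void fetchBatch(List<MapOutputLocation> segments, long maxBatchBytes)
      throws Exception {
    long batchedBytes = 0;
    for (MapOutputLocation loc : segments) {
      // Always fetch at least one segment; otherwise stop before the cap
      // is exceeded, so we neither overflow the shuffle buffer nor pin a
      // server-side Jetty thread indefinitely.
      if (batchedBytes > 0
          && batchedBytes + loc.estimatedSize > maxBatchBytes) {
        break;
      }
      URL url = new URL("http://" + loc.hostPort
          + "/mapOutput?map=" + loc.mapId);  // hypothetical URL layout
      HttpURLConnection conn = (HttpURLConnection) url.openConnection();
      InputStream in = conn.getInputStream();
      try {
        byte[] buf = new byte[64 * 1024];
        int n;
        while ((n = in.read(buf)) > 0) {
          batchedBytes += n;  // the real shuffle would feed these bytes
                              // into the in-memory buffer (or spill them)
        }
      } finally {
        // Close the stream, not the connection: the JDK keeps the
        // underlying socket alive for the next request to this host.
        in.close();
      }
    }
  }
}
{code}

Note that with java.net.HttpURLConnection, keep-alive is already the HTTP/1.1 default: the JDK transparently reuses the socket for consecutive requests to the same host as long as each response body is fully read and only the stream is closed (calling disconnect() would tear the socket down).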