[ https://issues.apache.org/jira/browse/HADOOP-10048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15316672#comment-15316672 ]

Junping Du commented on HADOOP-10048:
-------------------------------------

Thanks [~jlowe] for updating the patch.
bq.  All or most of the threads could end up theoretically clustering on the 
same disk which is less than ideal. Attaching a new patch that uses an 
AtomicInteger to make sure that simultaneous threads won't get the same 
starting point when searching the directories.
Makes sense. This approach looks like a better way to solve the problem.
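
For future readers, a minimal sketch of the rotating-start idea (the names below are illustrative and not taken from the 006 patch):
{code:java}
import java.util.concurrent.atomic.AtomicInteger;

// Sketch only: a shared counter hands each caller a distinct starting
// directory index, so concurrent threads begin probing different disks
// instead of clustering on the same one.
class RoundRobinStart {
  private final String[] localDirs;
  private final AtomicInteger dirIndexer = new AtomicInteger();

  RoundRobinStart(String[] localDirs) {
    this.localDirs = localDirs;
  }

  // Mask off the sign bit so the index stays non-negative even after the
  // counter wraps around Integer.MAX_VALUE.
  int nextStartIndex() {
    return (dirIndexer.getAndIncrement() & Integer.MAX_VALUE) % localDirs.length;
  }
}
{code}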

bq. An alternative approach would be to use a random starting location like is 
done when the size is not specified.
Agreed. That could be a nice improvement to make later. However, in the 
size-not-specified case, creating a Random object per call may not be 
necessary. Maybe that is also something we can improve later?
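
For example, something along these lines would avoid the per-call allocation (ThreadLocalRandom is just one option; the method name and numDirs parameter are placeholders, not a proposal for the current patch):
{code:java}
import java.util.concurrent.ThreadLocalRandom;

// Reuse a per-thread generator rather than constructing "new Random()"
// on every call when no size is specified.
static int randomStartIndex(int numDirs) {
  return ThreadLocalRandom.current().nextInt(numDirs);
}
{code}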

The latest (006) patch looks pretty good to me. I would like to get it in within 
the next 24 hours if there are no further comments from others.

> LocalDirAllocator should avoid holding locks while accessing the filesystem
> ---------------------------------------------------------------------------
>
>                 Key: HADOOP-10048
>                 URL: https://issues.apache.org/jira/browse/HADOOP-10048
>             Project: Hadoop Common
>          Issue Type: Improvement
>    Affects Versions: 2.3.0
>            Reporter: Jason Lowe
>            Assignee: Jason Lowe
>         Attachments: HADOOP-10048.003.patch, HADOOP-10048.004.patch, 
> HADOOP-10048.005.patch, HADOOP-10048.006.patch, HADOOP-10048.patch, 
> HADOOP-10048.trunk.patch
>
>
> As noted in MAPREDUCE-5584 and HADOOP-7016, LocalDirAllocator can be a 
> bottleneck for multithreaded setups like the ShuffleHandler.  We should 
> consider moving to a lockless design or minimizing the critical sections to a 
> very small amount of time that does not involve I/O operations.



