[ 
https://issues.apache.org/jira/browse/MAPREDUCE-6923?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16120829#comment-16120829
 ] 

Ravi Prakash commented on MAPREDUCE-6923:
-----------------------------------------

Oh, and sorry about neglecting your questions earlier. Apologies also if this 
goes too deep into the details; maybe a better understanding of how the pieces 
fit together will help.

The Hadoop project has tried to make a clear distinction between YARN (the 
resource-management layer) and the frameworks that can run on top of YARN 
(e.g. MapReduce, Tez, Slider, etc.). Even so, some dependencies have stuck around.

bq. I see that some 1.5 GiB is spent on reading the mapreduce jar files (in 
Yarn), and another 1.2 GiB is spent reading jar files in /usr/lib/jvm.
I'm not entirely sure what you mean when you say Yarn here; I'm guessing you 
mean the NodeManager. _Technically_ the NodeManager shouldn't really even be 
loading the MapReduce jars (because they are separate projects). However, 
there's a MapReduce auxiliary shuffle service: if you look at your 
yarn-site.xml, {{yarn.nodemanager.aux-services}} probably lists the 
{{mapreduce_shuffle}} service, backed by {{org.apache.hadoop.mapred.ShuffleHandler}}, 
and I'm sure that pulls all sorts of MapReduce code into the NodeManager JVM. 
This happens only when you start the cluster (the auxiliary ShuffleService is a 
long-running service in the NodeManager).
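
For reference, the relevant wiring in yarn-site.xml typically looks something 
like the following (exact values may differ per cluster or distribution; shown 
only to illustrate where the ShuffleHandler comes from):
{code:xml}
<!-- Typical MapReduce shuffle aux-service wiring; exact values may vary per cluster. -->
<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>
<property>
  <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
{code}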

{quote}
I have instrumented reading zip and jar files separately, and over the course 
of all map tasks (TeraGen + TeraSort), my instrumentation gives a total of 638 
GiB / (2048 + 2048) = 159.5 MiB per mapper, and 337 GiB / 2048 = 168.5 MiB per 
reducer. However I wouldn't rely too much on these numbers, because if I added 
them to the regular I/O induced by reading/writing the input/output, shuffle 
and spill, then my numbers wouldn't agree any longer with the XFS counters.
{quote}
Hmm.. without knowing exactly what your instrumentation does, I will choose to 
share your skepticism of these numbers :-)

bq. Do you mean that Yarn should exhibit this I/O, or would I see this in the 
map and reduce JVMs (as explained above)?
Again, I'm guessing that by "Yarn" here you mean the NodeManager. To launch any 
YARN container (a MapTask, ReduceTask, TezChild, etc.) the NodeManager does a 
[lot of 
things|https://github.com/apache/hadoop/blob/ac7d0604bc73c0925eff240ad9837e14719d57b7/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/main/java/org/apache/hadoop/yarn/server/nodemanager/containermanager/launcher/ContainerLaunch.java#L161].
One of them is localizing the resources: for this, a separate process called the 
Localizer is usually run, and under certain circumstances it downloads things 
from HDFS to the local machine (although if the job jars are already in the 
DistributedCache, that step may be skipped). However, I was referring to the 
MapTask and ReduceTask JVMs themselves loading the jar files; one way to observe 
that is sketched below.
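
If you want to see exactly which jars the task JVMs load, one simple (if 
verbose) option is to enable JVM class-loading logging in the task options. 
This is just a sketch of one possible setup, not something from this issue:
{code:xml}
<!-- Sketch: append -verbose:class to the task JVM options so each task's stdout
     shows every class it loads and the jar it came from. Merge this with your
     existing opts; the heap size below is only a placeholder. -->
<property>
  <name>mapreduce.map.java.opts</name>
  <value>-Xmx1024m -verbose:class</value>
</property>
<property>
  <name>mapreduce.reduce.java.opts</name>
  <value>-Xmx1024m -verbose:class</value>
</property>
{code}
The output lands in each task's stdout log, which you can retrieve with 
{{yarn logs -applicationId <appId>}} or through the JobHistory UI.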


> Optimize MapReduce Shuffle I/O for small partitions
> ---------------------------------------------------
>
>                 Key: MAPREDUCE-6923
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-6923
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>         Environment: Observed in Hadoop 2.7.3 and above (judging from the 
> source code of future versions), and Ubuntu 16.04.
>            Reporter: Robert Schmidtke
>            Assignee: Robert Schmidtke
>             Fix For: 2.9.0, 3.0.0-beta1
>
>         Attachments: MAPREDUCE-6923.00.patch, MAPREDUCE-6923.01.patch
>
>
> When a job configuration results in small partitions read by each reducer 
> from each mapper (e.g. 65 kilobytes as in my setup: a 
> [TeraSort|https://github.com/apache/hadoop/blob/branch-2.7.3/hadoop-mapreduce-project/hadoop-mapreduce-examples/src/main/java/org/apache/hadoop/examples/terasort/TeraSort.java]
>  of 256 gigabytes using 2048 mappers and reducers each), and setting
> {code:xml}
> <property>
>   <name>mapreduce.shuffle.transferTo.allowed</name>
>   <value>false</value>
> </property>
> {code}
> then the default setting of
> {code:xml}
> <property>
>   <name>mapreduce.shuffle.transfer.buffer.size</name>
>   <value>131072</value>
> </property>
> {code}
> results in almost 100% overhead in reads during shuffle in YARN, because for 
> each 65K needed, 128K are read.
> I propose a fix in 
> [FadvisedFileRegion.java|https://github.com/apache/hadoop/blob/trunk/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-shuffle/src/main/java/org/apache/hadoop/mapred/FadvisedFileRegion.java#L114]
>  as follows:
> {code:java}
> ByteBuffer byteBuffer = ByteBuffer.allocate(Math.min(this.shuffleBufferSize,
>     trans > Integer.MAX_VALUE ? Integer.MAX_VALUE : (int) trans));
> {code}
> e.g. 
> [here|https://github.com/apache/hadoop/compare/branch-2.7.3...robert-schmidtke:adaptive-shuffle-buffer].
>  This sets the shuffle buffer size to the minimum of the buffer size 
> specified in the configuration (128K by default) and the actual 
> partition size (65K on average in my setup). In my benchmarks this reduced 
> the read overhead in YARN from about 100% (255 additional gigabytes as 
> described above) down to about 18% (an additional 45 gigabytes). The runtime 
> of the job remained the same in my setup.
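>
> As a rough back-of-the-envelope check of the ~100% figure (an illustration 
> only, using the numbers above; the class is just a throwaway sketch):
> {code:java}
> // Illustration only: estimates shuffle read volume for the TeraSort setup above
> // (256 GiB of data, 2048 mappers x 2048 reducers, 128 KiB transfer buffer).
> public class ShuffleReadEstimate {
>   public static void main(String[] args) {
>     long totalBytes = 256L << 30;                 // 256 GiB of shuffle data
>     long partitions = 2048L * 2048L;              // one partition per (mapper, reducer) pair
>     long partitionSize = totalBytes / partitions; // ~64 KiB per partition
>     long bufferSize = 128L << 10;                 // default transfer buffer: 128 KiB
>
>     // Old behaviour: each partition is read in full buffer-sized chunks.
>     long chunksPerPartition = (partitionSize + bufferSize - 1) / bufferSize;
>     long oldBytesRead = partitions * chunksPerPartition * bufferSize;       // ~512 GiB
>
>     // With the buffer clamped to the partition size, reads roughly match the data
>     // volume (the ~18% overhead still observed in practice comes from elsewhere,
>     // e.g. readahead and metadata, and is not modelled here).
>     long newBytesRead = partitions * Math.min(partitionSize, bufferSize);   // ~256 GiB
>
>     System.out.printf("old: %d GiB read (%.0f%% overhead), new: %d GiB read%n",
>         oldBytesRead >> 30, 100.0 * (oldBytesRead - totalBytes) / totalBytes,
>         newBytesRead >> 30);
>   }
> }
> {code}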



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
