[ https://issues.apache.org/jira/browse/MAPREDUCE-2841?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13093534#comment-13093534 ]

Binglin Chang commented on MAPREDUCE-2841:
------------------------------------------

Some updated test results:

1. Terasort, 10G input, 40 maps, 40 reduces, on a 9-node cluster with
   7 map / 7 reduce slots per node
   io.sort.mb = 500MB
   Results from the job history:
   ||               ||  Total || AverageMap || AverageShuffle || AverageReduce ||
   || java           |  54s    |  14s        |  14s            |  10s           |
   || native         |  39s    |   7s        |  15s            |   9s           |
   || java-snappy    |  36s    |  15s        |   9s            |   8s           |
   || native-snappy  |  27s    |   7s        |   7s            |   8s           |
   speedup without compression: 1.38
   speedup with compression (snappy): 1.33
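
   The speedup figures are just ratios of the Total column above; a trivial
   check (assuming Total is the wall-clock job time):

{code:java}
// Speedup = Java total time / native total time, using the Total column.
public class SpeedupCheck {
    public static void main(String[] args) {
        System.out.printf("speedup without compression: %.2f%n", 54.0 / 39.0); // ~1.38
        System.out.printf("speedup with snappy:         %.2f%n", 36.0 / 27.0); // ~1.33
    }
}
{code}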
   
2. I did another test with a bigger data set:
   Terasort, 100G input, 400 maps, 400 reduces, on the same 9-node cluster
   with 7 map / 7 reduce slots per node
   ||               ||  Total || AverageMap || AverageShuffle || AverageReduce ||
   || java-snappy    |  277s   |  17s        |  28s            |  10s           |
   || native-snappy  |  234s   |  10s        |  22s            |  10s           |
   speedup: 1.18
   When the cluster is under heavy load, the bottleneck shifts to the page
   cache and the shuffle, so the optimizations in sort & spill play a smaller
   role.

3. I tested the dual-pivot quicksort patch provided by Chris, using the same
   setup as test No.1.
   There is no observable difference compared to the old QuickSort: the
   average map task time for java-snappy is the same as before (15s). Perhaps
   the data set is too small, or the bottleneck is dominated by other factors,
   such as random memory access.
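
   For reference, the dual-pivot scheme partitions around two pivots into
   three ranges instead of two. Below is a generic sketch on int[] of the
   textbook algorithm (the same idea as JDK 7's Arrays.sort); it is only an
   illustration, not the attached dualpivot patch, which presumably works
   against Hadoop's IndexedSortable interface instead of a plain array.

{code:java}
// Generic dual-pivot quicksort (illustration only, not the patch).
public final class DualPivotQuickSortSketch {

    public static void sort(int[] a) {
        sort(a, 0, a.length - 1);
    }

    private static void sort(int[] a, int lo, int hi) {
        if (lo >= hi) {
            return;
        }
        if (a[lo] > a[hi]) {          // ensure pivot1 <= pivot2
            swap(a, lo, hi);
        }
        int p1 = a[lo];
        int p2 = a[hi];

        // Partition into three ranges: < p1, [p1, p2], > p2.
        int lt = lo + 1;              // a[lo+1 .. lt-1] < p1
        int gt = hi - 1;              // a[gt+1 .. hi-1] > p2
        int i  = lo + 1;              // a[lt .. i-1] is in [p1, p2]
        while (i <= gt) {
            if (a[i] < p1) {
                swap(a, i++, lt++);
            } else if (a[i] > p2) {
                while (i < gt && a[gt] > p2) {
                    gt--;
                }
                swap(a, i, gt--);     // re-check the swapped-in element next pass
            } else {
                i++;
            }
        }
        swap(a, lo, --lt);            // move pivot1 into its final position
        swap(a, hi, ++gt);            // move pivot2 into its final position

        sort(a, lo, lt - 1);
        sort(a, lt + 1, gt - 1);
        sort(a, gt + 1, hi);
    }

    private static void swap(int[] a, int i, int j) {
        int t = a[i];
        a[i] = a[j];
        a[j] = t;
    }
}
{code}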

 




> Task level native optimization
> ------------------------------
>
>                 Key: MAPREDUCE-2841
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2841
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: task
>         Environment: x86-64 Linux
>            Reporter: Binglin Chang
>            Assignee: Binglin Chang
>         Attachments: MAPREDUCE-2841.v1.patch, MAPREDUCE-2841.v2.patch, 
> dualpivot-0.patch, dualpivotv20-0.patch
>
>
> I'm currently working on native optimization for MapTask based on JNI.
> The basic idea is to add a NativeMapOutputCollector to handle the k/v pairs
> emitted by the mapper, so that sort, spill and IFile serialization can all be
> done in native code. A preliminary test (on Xeon E5410, jdk6u24) showed
> promising results:
> 1. Sort is about 3x-10x as fast as Java (only binary string comparison is
> supported).
> 2. IFile serialization is about 3x as fast as Java, about 500MB/s; if hardware
> CRC32C is used, it gets much faster (1GB/s).
> 3. The merge code is not complete yet, so the test uses enough io.sort.mb to
> prevent mid-spills.
> This leads to a total speedup of 2x~3x for the whole MapTask if an
> IdentityMapper (a mapper that does nothing) is used.
> There are limitations, of course: currently only Text and BytesWritable are
> supported, and I have not thought through many things yet, such as how to
> support map-side combine. I had some discussion with somebody familiar with
> Hive; it seems that these limitations won't be much of a problem for Hive, at
> least, to benefit from these optimizations. Advice or discussion about
> improving compatibility is most welcome :)
> Currently NativeMapOutputCollector has a static method called canEnable(),
> which checks whether the key/value types, comparator type and combiner are all
> compatible; MapTask can then choose to enable NativeMapOutputCollector.
> This is only a preliminary test; more work needs to be done. I expect better
> final results, and I believe similar optimizations can be applied to the
> reduce task and shuffle too.
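
As a rough illustration of the collector-selection hook described above: only
the canEnable() name comes from the description; every other class, method and
property name below is made up for the sketch, so this is not the attached
patch.

{code:java}
// Illustrative sketch only -- shows how a map task could fall back to the
// pure-Java collector whenever the native one cannot handle the job's
// key/value types, comparator or combiner.
import java.util.HashMap;
import java.util.Map;

public class CollectorSelectionSketch {

    /** Minimal stand-in for the collector contract used by the map task. */
    interface MapOutputCollector {
        void collect(byte[] key, byte[] value, int partition);
        void flush();
    }

    /** Stand-in for the JNI-backed collector. */
    static class NativeMapOutputCollector implements MapOutputCollector {
        // The description says canEnable() checks key/value types, comparator
        // and combiner; this toy version checks only types and combiner,
        // using a plain Map in place of JobConf.
        static boolean canEnable(Map<String, String> conf) {
            String key = conf.get("map.output.key.class");
            String value = conf.get("map.output.value.class");
            boolean keyOk = "org.apache.hadoop.io.Text".equals(key)
                || "org.apache.hadoop.io.BytesWritable".equals(key);
            boolean valueOk = "org.apache.hadoop.io.Text".equals(value)
                || "org.apache.hadoop.io.BytesWritable".equals(value);
            boolean noCombiner = conf.get("combiner.class") == null;
            return keyOk && valueOk && noCombiner;
        }
        public void collect(byte[] key, byte[] value, int partition) { /* JNI call */ }
        public void flush() { /* native sort + spill + IFile write */ }
    }

    /** Stand-in for the existing in-JVM MapOutputBuffer. */
    static class JavaMapOutputBuffer implements MapOutputCollector {
        public void collect(byte[] key, byte[] value, int partition) { }
        public void flush() { }
    }

    static MapOutputCollector chooseCollector(Map<String, String> conf) {
        return NativeMapOutputCollector.canEnable(conf)
            ? new NativeMapOutputCollector()
            : new JavaMapOutputBuffer();
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<String, String>();
        conf.put("map.output.key.class", "org.apache.hadoop.io.Text");
        conf.put("map.output.value.class", "org.apache.hadoop.io.BytesWritable");
        System.out.println("native collector usable: "
            + (chooseCollector(conf) instanceof NativeMapOutputCollector));
    }
}
{code}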

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
