[ https://issues.apache.org/jira/browse/HDFS-4070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13483440#comment-13483440 ]

Hadoop QA commented on HDFS-4070:
---------------------------------

{color:red}-1 overall{color}.  Here are the results of testing the latest attachment
  http://issues.apache.org/jira/secure/attachment/12549678/HDFS-4090-dfs%2Bpacketsize.patch
  against trunk revision .

    {color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

    {color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
                        Please justify why no new tests are needed for this 
patch.
                        Also please list what manual steps were performed to 
verify this patch.

    {color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

    {color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

    {color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

    {color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

    {color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

    {color:red}-1 core tests{color}.  The patch failed these unit tests in hadoop-hdfs-project/hadoop-hdfs:

                  org.apache.hadoop.hdfs.server.blockmanagement.TestUnderReplicatedBlocks

    {color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-HDFS-Build/3395//testReport/
Console output: https://builds.apache.org/job/PreCommit-HDFS-Build/3395//console

This message is automatically generated.
                
> DFSClient ignores bufferSize argument & always performs small writes
> --------------------------------------------------------------------
>
>                 Key: HDFS-4070
>                 URL: https://issues.apache.org/jira/browse/HDFS-4070
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: hdfs client
>    Affects Versions: 1.0.3, 2.0.3-alpha
>         Environment: RHEL 5.5 x86_64 (ec2)
>            Reporter: Gopal V
>            Priority: Minor
>              Labels: optimization
>         Attachments: 
> gistfe319436b880026cbad4-aad495d50e0d6b538831327752b984e0fdcc74db.tar.gz, 
> HDFS-4090-dfs+packetsize.patch
>
>
> The following code illustrates the issue at hand:
> {code}
> protected void map(LongWritable offset, Text value, Context context)
>     throws IOException, InterruptedException {
>   // fs and buffer are initialized elsewhere in the task;
>   // buffer is a byte array of at least 1 KB.
>   OutputStream out = fs.create(new Path("/tmp/benchmark/", value.toString()),
>       true, 1024 * 1024);  // bufferSize argument: 1 MB
>   int i;
>   for (i = 0; i < 1024 * 1024; i++) {
>     out.write(buffer, 0, 1024);  // 1 KB writes, 1 GB per file
>   }
>   out.close();
>   context.write(value, new IntWritable(i));
> }
> {code}
> This code is run as a single map-only task with an input file on disk and 
> map-output to disk.
> {{# su - hdfs -c 'hadoop jar /tmp/dfs-test-1.0-SNAPSHOT-job.jar file:///tmp/list file:///grid/0/hadoop/hdfs/tmp/benchmark'}}
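> For completeness, the surrounding driver is a plain map-only job; a minimal sketch, assuming hypothetical class names ({{WriteBenchmark}}, and {{BenchmarkMapper}} wrapping the map() above):
> {code}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.Path;
> import org.apache.hadoop.io.IntWritable;
> import org.apache.hadoop.io.Text;
> import org.apache.hadoop.mapreduce.Job;
> import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
> import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
> 
> public class WriteBenchmark {
>   public static void main(String[] args) throws Exception {
>     Job job = new Job(new Configuration(), "dfs-write-benchmark");
>     job.setJarByClass(WriteBenchmark.class);
>     job.setMapperClass(BenchmarkMapper.class);  // runs the map() shown above
>     job.setNumReduceTasks(0);                   // map-only
>     job.setOutputKeyClass(Text.class);
>     job.setOutputValueClass(IntWritable.class);
>     FileInputFormat.addInputPath(job, new Path(args[0]));    // file:///tmp/list
>     FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output dir
>     System.exit(job.waitForCompletion(true) ? 0 : 1);
>   }
> }
> {code}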
> In the datanode's disk access patterns, the following pattern was observed 
> consistently, irrespective of the bufferSize provided.
> {code}
> 21119 read(58,  <unfinished ...>
> 21119 <... read resumed> "\0\1\0\0\0\0\0\0\0034\212\0\0\0\0\0\0\0+\220\0\0\0\376\0\262\252ux\262\252u"..., 65557) = 65557
> 21119 lseek(107, 0, SEEK_CUR <unfinished ...>
> 21119 <... lseek resumed> )             = 53774848
> 21119 write(107, "\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0"..., 65024 <unfinished ...>
> 21119 <... write resumed> )             = 65024
> 21119 write(108, "\262\252ux\262\252ux\262\252ux\262\252ux\262\252ux\262\252ux\262\252ux\262\252ux"..., 508 <unfinished ...>
> 21119 <... write resumed> )             = 508
> {code}
> Here fd 58 is the incoming socket, fd 107 is the block file, and fd 108 is the 
> .meta file.
> The DFS packet sizing ignores the bufferSize argument, and syscall & disk 
> performance suffer because of the default 64 KB packet value, as is evident 
> from the interrupted read/write operations above.
> Changing the packet size to a more optimal 1056405 bytes results in a solid 
> performance gain, by cutting down on disk & network IOPS.
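> Today that change can only be made globally, through the client configuration, not per stream. A minimal sketch of the workaround, assuming the 2.x key {{dfs.client-write-packet-size}} (branch-1 uses {{dfs.write.packet.size}}):
> {code}
> import org.apache.hadoop.conf.Configuration;
> import org.apache.hadoop.fs.FileSystem;
> 
> public class PacketSizeWorkaround {
>   public static void main(String[] args) throws Exception {
>     Configuration conf = new Configuration();
>     // Raises the packet size for every stream this client opens;
>     // there is no per-stream knob. Key name assumed from 2.x.
>     conf.setInt("dfs.client-write-packet-size", 1056252);
>     FileSystem fs = FileSystem.get(conf);
>     System.out.println("write packet size = "
>         + conf.getInt("dfs.client-write-packet-size", 65536));
>   }
> }
> {code}
> The benchmark below compares the default against that enlarged global setting.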
> h3. Average time (milliseconds) for a 10 GB write as 10 files in a single map task
> ||timestamp||65536 B packets (ms)||1056252 B packets (ms)||
> |1350469614|88530|78662|
> |1350469827|88610|81680|
> |1350470042|92632|78277|
> |1350470261|89726|79225|
> |1350470476|92272|78265|
> |1350470696|89646|81352|
> |1350470913|92311|77281|
> |1350471132|89632|77601|
> |1350471345|89302|81530|
> |1350471564|91844|80413|
> On average that is an increase from ~115 MB/s (10 GB in a ~90.5 s mean) to 
> ~130 MB/s (~79.4 s mean), obtained by modifying the global packet size setting.
> This suggests that there is value in adapting the user-provided buffer sizes 
> to Hadoop packet sizing, on a per-stream basis.
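> One possible shape for that per-stream adaptation, purely as an illustrative sketch (the helper name is hypothetical; the 512-byte chunk / 4-byte CRC arithmetic mirrors how the client assembles packets from checksummed chunks):
> {code}
> // Hypothetical helper: derive a per-stream packet size from the caller's
> // bufferSize, instead of always using the global packet-size default.
> static int adaptPacketSize(int bufferSize, int maxPacketSize) {
>   final int chunkSize = 512;      // bytes of data per checksum chunk
>   final int checksumSize = 4;     // CRC32 bytes per chunk
>   int chunks = Math.max(1, bufferSize / chunkSize);
>   int packetSize = chunks * (chunkSize + checksumSize);
>   return Math.min(packetSize, maxPacketSize);
> }
> {code}
> For scale, 2047 chunks at 516 bytes each is exactly 1056252 bytes, the larger packet size benchmarked above.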

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
