[jira] [Created] (HDFS-7380) unsteady and slow performance when writing to file with block size > 2GB

2014-11-07 Thread Adam Fuchs (JIRA)
Adam Fuchs created HDFS-7380:


 Summary: unsteady and slow performance when writing to file with 
block size > 2GB
 Key: HDFS-7380
 URL: https://issues.apache.org/jira/browse/HDFS-7380
 Project: Hadoop HDFS
  Issue Type: Bug
Affects Versions: 2.4.0
Reporter: Adam Fuchs
 Attachments: BenchmarkWrites.java

Appending to a large file with block size > 2GB can lead to periods of very 
poor performance (4x slower than optimal). I found this issue when looking at 
Accumulo write performance in ACCUMULO-3303. I wrote a small test application 
to isolate the performance problem to a few basic API calls (attached as 
BenchmarkWrites.java). A description of the execution can be found here: 
https://issues.apache.org/jira/browse/ACCUMULO-3303?focusedCommentId=14202830&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14202830
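
For context, the shape of such a benchmark is sketched below. This is a minimal sketch, not the attached BenchmarkWrites.java: the class name, output path, block size, total size, replication factor, and reporting interval are all illustrative. It creates a file whose block size exceeds 2GB and prints write throughput per 256MB interval, which is what makes periodic slowdowns visible (an end-to-end average would hide them):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteThroughputSketch {
  public static void main(String[] args) throws Exception {
    FileSystem fs = FileSystem.get(new Configuration());

    long blockSize = 4L * 1024 * 1024 * 1024;    // 4GB block, above the 2GB threshold
    long bytesToWrite = 8L * 1024 * 1024 * 1024; // total data to write
    byte[] buffer = new byte[1024 * 1024];       // write in 1MB chunks

    // create(path, overwrite, bufferSize, replication, blockSize)
    try (FSDataOutputStream out = fs.create(
        new Path("/tmp/benchmark-writes"), true, 4096, (short) 3, blockSize)) {
      long written = 0, intervalBytes = 0, intervalStart = System.nanoTime();
      while (written < bytesToWrite) {
        out.write(buffer);
        written += buffer.length;
        intervalBytes += buffer.length;
        if (intervalBytes >= 256L * 1024 * 1024) { // report every 256MB
          double secs = (System.nanoTime() - intervalStart) / 1e9;
          System.out.printf("%.1f MB/s%n", intervalBytes / 1e6 / secs);
          intervalStart = System.nanoTime();
          intervalBytes = 0;
        }
      }
    }
  }
}
{code}

Sampling throughput per interval rather than once at the end is the point of the loop: steady writes print a flat series of rates, while the behavior reported here shows up as stretches of intervals running well below the rest.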

The specific Hadoop version was as follows:
{code}
[root@n1 ~]# hadoop version
Hadoop 2.4.0.2.1.2.0-402
Subversion git@github.com:hortonworks/hadoop.git -r 9e5db004df1a751e93aa89b42956c5325f3a4482
Compiled by jenkins on 2014-04-27T22:28Z
Compiled with protoc 2.5.0
From source with checksum 9e788148daa5dd7934eb468e57e037b5
This command was run using /usr/lib/hadoop/hadoop-common-2.4.0.2.1.2.0-402.jar
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HDFS-7380) unsteady and slow performance when writing to file with block size > 2GB

2014-11-07 Thread Adam Fuchs (JIRA)

 [ 
https://issues.apache.org/jira/browse/HDFS-7380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Fuchs updated HDFS-7380:
-----------------------------
Attachment: BenchmarkWrites.java
