[ https://issues.apache.org/jira/browse/HDFS-14481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16838312#comment-16838312 ]
Jinglun commented on HDFS-14481:
--------------------------------

Hi [~dustinday], BUFFER_SIZE has a default value, and anyone who wants a different value should take care of the new value and make sure it is reasonable. So in my view it's not a bug and we can leave it the way it is :)

> BenchmarkThroughput.writeLocalFile hangs with misconfigured BUFFER_SIZE
> -----------------------------------------------------------------------
>
>                 Key: HDFS-14481
>                 URL: https://issues.apache.org/jira/browse/HDFS-14481
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: test
>    Affects Versions: 2.5.0
>            Reporter: John Doe
>            Priority: Major
>
> When the configuration file is corrupted, reading BUFFER_SIZE from the
> corrupted conf can return 0. The "for" loop in the
> BenchmarkThroughput.writeLocalFile function then hangs endlessly,
> because size never advances. Here is the code snippet:
> {code:java}
> BUFFER_SIZE = conf.getInt("dfsthroughput.buffer.size", 4 * 1024);
>
> private Path writeLocalFile(String name, Configuration conf,
>     long total) throws IOException {
>   Path path = dir.getLocalPathForWrite(name, total, conf);
>   System.out.print("Writing " + name);
>   resetMeasurements();
>   OutputStream out = new FileOutputStream(new File(path.toString()));
>   byte[] data = new byte[BUFFER_SIZE];
>   for (long size = 0; size < total; size += BUFFER_SIZE) { // Bug: loops forever when BUFFER_SIZE == 0
>     System.out.println("inside for loop...size = " + size);
>     out.write(data);
>   }
>   out.close();
>   printMeasurements();
>   return path;
> }
> {code}
> This configuration error also affects HDFS-13513 and HDFS-13514.

--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
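For readers who do want to guard against a corrupted configuration, a minimal defensive sketch is below. The class and method names (BufferSizeGuard, safeBufferSize) are hypothetical illustrations, not part of the actual BenchmarkThroughput code; the idea is simply to fall back to the documented default (4 * 1024) when the configured value is zero or negative, since a non-positive BUFFER_SIZE would make {{size += BUFFER_SIZE}} never advance.

```java
// Hypothetical guard sketch (not part of BenchmarkThroughput itself):
// fall back to the default buffer size when the configured value is
// zero or negative, which would otherwise make the write loop spin forever.
public class BufferSizeGuard {
    static final int DEFAULT_BUFFER_SIZE = 4 * 1024;

    static int safeBufferSize(int configured) {
        // A value <= 0 means "size += BUFFER_SIZE" never advances,
        // so the for loop in writeLocalFile would never terminate.
        return configured > 0 ? configured : DEFAULT_BUFFER_SIZE;
    }

    public static void main(String[] args) {
        System.out.println(safeBufferSize(0));     // misconfigured: falls back to 4096
        System.out.println(safeBufferSize(8192));  // valid value passes through
    }
}
```

In the real code this check would sit right after the {{conf.getInt("dfsthroughput.buffer.size", 4 * 1024)}} call, before BUFFER_SIZE is used to size the byte array or drive the loop.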