[
https://issues.apache.org/jira/browse/HDFS-14501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16915963#comment-16915963
]
hemanthboyina commented on HDFS-14501:
--------------------------------------
[~csun] can you have a look at the patch?
> BenchmarkThroughput.writeFile hangs with misconfigured BUFFER_SIZE
> ------------------------------------------------------------------
>
> Key: HDFS-14501
> URL: https://issues.apache.org/jira/browse/HDFS-14501
> Project: Hadoop HDFS
> Issue Type: Bug
> Affects Versions: 2.5.0
> Reporter: John Doe
> Assignee: hemanthboyina
> Priority: Major
> Attachments: HDFS-14501.001.patch
>
>
> When the configuration file is corrupted, reading BUFFER_SIZE from the
> corrupted conf can return 0.
> The "for" loop in the BenchmarkThroughput.writeLocalFile function then hangs
> forever, because size += BUFFER_SIZE never advances.
> Here is the code snippet.
> {code:java}
> BUFFER_SIZE = conf.getInt("dfsthroughput.buffer.size", 4 * 1024);
>
> private Path writeFile(FileSystem fs,
>                        String name,
>                        Configuration conf,
>                        long total
>                        ) throws IOException {
>   Path f = dir.getLocalPathForWrite(name, total, conf);
>   System.out.print("Writing " + name);
>   resetMeasurements();
>   OutputStream out = fs.create(f);
>   byte[] data = new byte[BUFFER_SIZE];
>   for(long size = 0; size < total; size += BUFFER_SIZE) { // Bug: loops forever when BUFFER_SIZE == 0
>     out.write(data);
>   }
>   out.close();
>   printMeasurements();
>   return f;
> }
> {code}
> This configuration error also affects HDFS-13513, HDFS-13514.
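One way to harden the benchmark against a corrupted conf is to clamp the configured buffer size to the default whenever a non-positive value is read, so the write loop always makes progress. This is only a sketch of the idea (the class and method names below are illustrative, not the attached patch):

```java
public class BufferSizeGuard {
  static final int DEFAULT_BUFFER_SIZE = 4 * 1024;

  // Return the configured value if it is usable, otherwise fall back
  // to the default so that size += BUFFER_SIZE always advances.
  static int safeBufferSize(int configured) {
    return configured > 0 ? configured : DEFAULT_BUFFER_SIZE;
  }

  public static void main(String[] args) {
    System.out.println(safeBufferSize(0));    // corrupted conf: falls back to 4096
    System.out.println(safeBufferSize(8192)); // sane conf: kept as-is
  }
}
```

Applied at the conf.getInt call site, this would turn a misconfigured dfsthroughput.buffer.size into a warning-worthy fallback instead of an infinite loop.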
--
This message was sent by Atlassian Jira
(v8.3.2#803003)
---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]