[ https://issues.apache.org/jira/browse/HDFS-13513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18047847#comment-18047847 ]
ASF GitHub Bot commented on HDFS-13513:
---------------------------------------
github-actions[bot] commented on PR #375:
URL: https://github.com/apache/hadoop/pull/375#issuecomment-3694325984
We're closing this stale PR because it has been open for 100 days with no
activity. This isn't a judgement on the merit of the PR in any way. It's just a
way of keeping the PR queue manageable.
If you feel like this was a mistake, or you would like to continue working
on it, please feel free to re-open it and ask for a committer to remove the
stale tag and review again.
Thanks all for your contribution.
> BenchmarkThroughput.readFile hangs with misconfigured BUFFER_SIZE
> -----------------------------------------------------------------
>
> Key: HDFS-13513
> URL: https://issues.apache.org/jira/browse/HDFS-13513
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: test
> Reporter: John Doe
> Priority: Minor
> Fix For: 2.5.0
>
>
> When BUFFER_SIZE is configured to be 0, the while loop in the
> BenchmarkThroughput.readFile function hangs endlessly.
> This is because when data.length (i.e., BUFFER_SIZE) is 0, in.read(data)
> always returns 0 rather than -1, so val never becomes negative.
> Here is the code snippet.
> {code:java}
> // Hangs when dfsthroughput.buffer.size is configured to be 0.
> BUFFER_SIZE = conf.getInt("dfsthroughput.buffer.size", 4 * 1024);
>
> private void readFile(FileSystem fs, Path f, String name, Configuration conf)
>     throws IOException {
>   System.out.print("Reading " + name);
>   resetMeasurements();
>   InputStream in = fs.open(f);
>   byte[] data = new byte[BUFFER_SIZE];
>   long val = 0;
>   while (val >= 0) {
>     val = in.read(data); // returns 0 forever when data.length == 0
>   }
>   in.close();
>   printMeasurements();
> }
> {code}
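> For reference, java.io.InputStream.read(byte[]) is specified to return 0, not
> -1, when the destination array has length zero, so the {{val >= 0}} test can
> never fail once BUFFER_SIZE is 0. Below is a minimal standalone sketch of that
> behaviour using only the JDK (no Hadoop classes; the class name
> ZeroLengthBufferReadDemo is made up for illustration):
> {code:java}
> import java.io.ByteArrayInputStream;
> import java.io.IOException;
> import java.io.InputStream;
>
> public class ZeroLengthBufferReadDemo {
>   public static void main(String[] args) throws IOException {
>     InputStream in = new ByteArrayInputStream(new byte[] {1, 2, 3});
>
>     // Zero-length buffer: while data remains, read() returns 0 on every call
>     // and never -1, so a loop like "while (val >= 0)" cannot terminate.
>     byte[] empty = new byte[0];
>     System.out.println(in.read(empty)); // 0
>     System.out.println(in.read(empty)); // 0, and so on forever
>
>     // Non-empty buffer: the stream is drained and read() eventually
>     // returns -1, which is the only way the readFile() loop can exit.
>     byte[] data = new byte[2];
>     long val = 0;
>     while (val >= 0) {
>       val = in.read(data);
>     }
>     System.out.println(val); // -1: end of stream
>   }
> }
> {code}
> A straightforward guard, e.g. rejecting a non-positive
> dfsthroughput.buffer.size before entering the loop, would avoid the hang.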
> A similar case is
> [HDFS-13514|https://issues.apache.org/jira/browse/HDFS-13514].