[ https://issues.apache.org/jira/browse/MAPREDUCE-7089?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16730431#comment-16730431 ]

Sharath Chalya nagaraju commented on MAPREDUCE-7089:
----------------------------------------------------

I would rather put the check outside the while loop.

PR: https://github.com/apache/hadoop/pull/454
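
For illustration, a minimal sketch of that approach (assuming the guard sits at the top of doIO, before the read loop; the exception message is hypothetical and the actual change in the PR may differ):

{code:java}
public Long doIO(Reporter reporter, String name, long totalSize) throws IOException {
  // Hypothetical guard: reject a non-positive bufferSize up front instead of
  // letting zero-length reads spin the while loop forever.
  if (bufferSize <= 0) {
    throw new IOException("bufferSize must be positive, but was " + bufferSize);
  }

  InputStream in = (InputStream) this.stream;
  long actualSize = 0;
  while (actualSize < totalSize) {
    int curSize = in.read(buffer, 0, bufferSize);
    if (curSize < 0) break;
    actualSize += curSize;
    reporter.setStatus("reading " + name + "@" + actualSize + "/" + totalSize
        + " ::host = " + hostName);
  }
  return Long.valueOf(actualSize);
}
{code}

Validating once before the loop keeps the hot path unchanged; the same check could equally be done at argument-parsing time, so the job fails before any mappers start.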

> ReadMapper.doIO hangs with user misconfigured inputs
> ----------------------------------------------------
>
>                 Key: MAPREDUCE-7089
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-7089
>             Project: Hadoop Map/Reduce
>          Issue Type: Bug
>          Components: client, test
>    Affects Versions: 2.5.0
>            Reporter: John Doe
>            Priority: Minor
>
> When a user configures the bufferSize to be 0, the while loop in the
> TestDFSIO$ReadMapper.doIO function hangs endlessly.
> This is because the loop stride, curSize, is always 0, so the loop index
> actualSize stays forever below the upper bound totalSize.
> Here is the code snippet:
> {code:java}
> int bufferSize = DEFAULT_BUFFER_SIZE;
> for(int i = 0; i < args.length; i++) { // parse command line
>   ...
>   else if (args[i].equals("-bufferSize")) {
>     bufferSize = Integer.parseInt(args[++i]);
>   }
>   ...
> }
> public Long doIO(Reporter reporter, String name, long totalSize) throws IOException {
>   InputStream in = (InputStream)this.stream;
>   long actualSize = 0;
>   while (actualSize < totalSize) {
>     int curSize = in.read(buffer, 0, bufferSize);
>     if(curSize < 0) break;
>     actualSize += curSize;
>     reporter.setStatus("reading " + name + "@" + actualSize + "/" + totalSize + " ::host = " + hostName);
>   }
>   return Long.valueOf(actualSize);
> }
> {code}


