[ https://issues.apache.org/jira/browse/MAPREDUCE-2243?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12982466#action_12982466 ]

Laxman commented on MAPREDUCE-2243:
-----------------------------------

@Owen

bq. In most cases, the exceptions outside of IOException don't matter much 
because they will bring down.
bq. this leaves the nominal case simple. Note that this is the worst case, if 
we get an Error every system in Hadoop should shutdown.
bq. There is no point in continuing and worrying about lost file handles at 
that point is too extreme. 

Yes, I agree with your point in the *Error* scenarios. But what about runtime
exceptions, which need not be handled in the positive flow?

Handling unexpected generic exceptions and errors would force a catch-and-rethrow
pattern. So I prefer to handle the stream closure in the try block as well as in
the finally block.

As per your initial comments, Kamesh has already corrected the code to close the
streams in the try block as well as in the finally block.
Do you still see an issue with this approach?
How is handling the stream close in a catch block better than handling it in the
try and finally blocks?

My opinion on this issue is: "Handling stream closures in the try and finally
blocks is foolproof, and it avoids some code duplication."
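To make the pattern under discussion concrete, here is a minimal sketch (not the actual Hadoop code; the `writeEvents` method, the int-array payload, and the in-memory buffer are illustrative stand-ins) of closing a stream in the try block on the normal path, with the finally block acting only as a safety net when an exception skips that close:

```java
import java.io.ByteArrayOutputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class StreamCloseSketch {

    // Writes a length-prefixed list of ints and returns the byte count written.
    static int writeEvents(int[] events) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        DataOutputStream dataOut = new DataOutputStream(buffer);
        try {
            dataOut.writeInt(events.length);
            for (int event : events) {
                dataOut.writeInt(event);
            }
            dataOut.close();          // normal path: close in the try block,
            dataOut = null;           // so close() errors surface to the caller
        } finally {
            if (dataOut != null) {    // reached only if an exception occurred
                try {
                    dataOut.close();  // best-effort close; don't mask the
                } catch (IOException ignored) {  // original exception
                }
            }
        }
        return buffer.size();
    }

    public static void main(String[] args) throws IOException {
        // 1 length int + 3 event ints, 4 bytes each
        System.out.println(writeEvents(new int[] {1, 2, 3}));
    }
}
```

On the normal path the close happens exactly once, inside the try block, and any failure in close() propagates; the finally block fires only when an earlier exception left the stream open, so no catch-and-rethrow is needed.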

> Close all the file streams properly in a finally block to avoid their leakage.
> -----------------------------------------------------------------------------
>
>                 Key: MAPREDUCE-2243
>                 URL: https://issues.apache.org/jira/browse/MAPREDUCE-2243
>             Project: Hadoop Map/Reduce
>          Issue Type: Improvement
>          Components: jobtracker, tasktracker
>    Affects Versions: 0.20.1, 0.22.0
>         Environment: NA
>            Reporter: Bhallamudi Venkata Siva Kamesh
>            Priority: Minor
>   Original Estimate: 72h
>  Remaining Estimate: 72h
>
> In the following classes, streams should be closed in a finally block to avoid
> leaking them in exceptional cases.
> CompletedJobStatusStore.java
> ------------------------------------------
>         dataOut.writeInt(events.length);
>         for (TaskCompletionEvent event : events) {
>           event.write(dataOut);
>         }
>         dataOut.close();
> EventWriter.java
> ----------------------
>    encoder.flush();
>    out.close();
> MapTask.java
> -------------------
>     splitMetaInfo.write(out);
>      out.close();
> TaskLog
> ------------
>  1) str = fis.readLine();
>       fis.close();
> 2) dos.writeBytes(Long.toString(new File(logLocation, LogName.SYSLOG
>       .toString()).length() - prevLogLength) + "\n");
>     dos.close();
> TotalOrderPartitioner.java
> -----------------------------------
>         while (reader.next(key, value)) {
>           parts.add(key);
>           key = ReflectionUtils.newInstance(keyClass, conf);
>         }
>         reader.close();

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
