[ 
https://issues.apache.org/jira/browse/HDFS-9494?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15057377#comment-15057377
 ] 

GAO Rui commented on HDFS-9494:
-------------------------------

[~rakeshr], thank you very much for your detailed comments! I have addressed 
points 1, 3, and 4 in my previous comment. For the second one, I drafted the 
following code:
{code}
    // Wait for the flush task of every healthy streamer to complete.
    for (int i = 0; i < healthyStreamerCount; i++) {
      try {
        executorCompletionService.take().get();
      } catch (InterruptedException ie) {
        throw DFSUtilClient.toInterruptedIOException(
            "Interrupted while waiting for all streamers to flush. ", ie);
      } catch (ExecutionException ee) {
        LOG.warn("Caught ExecutionException while waiting for all streamers "
            + "to flush", ee);
      }
    }
{code}
I think this should be enough to handle {{ExecutionException}}. What do you 
think?
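
For context, here is a minimal self-contained sketch of how the flush tasks 
might be submitted before the wait loop above. The {{Streamer}} interface, the 
{{flushAllInternals}} signature, and the executor setup are illustrative 
assumptions for this sketch only; they are not taken from the actual patch:
{code}
import java.io.IOException;
import java.io.InterruptedIOException;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ParallelFlushSketch {

  /** Illustrative stand-in for DFSStripedOutputStream's per-block streamers. */
  public interface Streamer {
    /** Flush and wait for acks, i.e. flushInternal() + waitForAckedSeqno(). */
    void flushInternal() throws IOException;
    boolean isHealthy();
  }

  public void flushAllInternals(List<Streamer> streamers)
      throws InterruptedIOException {
    ExecutorService pool =
        Executors.newFixedThreadPool(Math.max(1, streamers.size()));
    ExecutorCompletionService<Void> executorCompletionService =
        new ExecutorCompletionService<>(pool);
    int healthyStreamerCount = 0;
    try {
      // Trigger flushInternal() on every healthy streamer concurrently.
      for (final Streamer s : streamers) {
        if (!s.isHealthy()) {
          continue;
        }
        healthyStreamerCount++;
        executorCompletionService.submit(new Callable<Void>() {
          @Override
          public Void call() throws IOException {
            s.flushInternal();
            return null;
          }
        });
      }
      // Wait for all submitted flush tasks, as in the drafted loop above.
      for (int i = 0; i < healthyStreamerCount; i++) {
        try {
          executorCompletionService.take().get();
        } catch (InterruptedException ie) {
          Thread.currentThread().interrupt();
          throw new InterruptedIOException(
              "Interrupted while waiting for all streamers to flush.");
        } catch (ExecutionException ee) {
          // A failed streamer flush is logged and the wait continues,
          // matching the handling proposed above (LOG.warn in real code).
          System.err.println("Streamer flush failed: " + ee.getCause());
        }
      }
    } finally {
      pool.shutdown();
    }
  }
}
{code}
Using one {{ExecutorCompletionService}} keeps the wait loop simple: it only 
needs to know how many tasks were submitted, not which streamer each future 
belongs to.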

> Parallel optimization of DFSStripedOutputStream#flushAllInternals( )
> --------------------------------------------------------------------
>
>                 Key: HDFS-9494
>                 URL: https://issues.apache.org/jira/browse/HDFS-9494
>             Project: Hadoop HDFS
>          Issue Type: Sub-task
>            Reporter: GAO Rui
>            Assignee: GAO Rui
>            Priority: Minor
>         Attachments: HDFS-9494-origin-trunk.00.patch, 
> HDFS-9494-origin-trunk.01.patch
>
>
> Currently, in DFSStripedOutputStream#flushAllInternals( ), we trigger and 
> wait for flushInternal( ) on each streamer in sequence, so the runtime flow 
> looks like this:
> {code}
> Streamer0#flushInternal( )
> Streamer0#waitForAckedSeqno( )
> Streamer1#flushInternal( )
> Streamer1#waitForAckedSeqno( )
> …
> Streamer8#flushInternal( )
> Streamer8#waitForAckedSeqno( )
> {code}
> It would be better to trigger flushInternal( ) on all the streamers in 
> parallel, wait for all of them to return from waitForAckedSeqno( ), and only 
> then return from flushAllInternals( ).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
