[
https://issues.apache.org/jira/browse/HDFS-9079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14952804#comment-14952804
]
Walter Su commented on HDFS-9079:
---------------------------------
In the 01 patch, {{DNFailureEvent}} is emitted by the failed streamer itself,
and {{BlockMetadataCoordinator.run()}} has a blocking wait. Consider a situation
where a streamer dies for an unknown reason: it never reaches
{{setupPipelineForAppendOrRecovery}}, so how do you detect the failed streamer?
I therefore think {{DNFailureEvent}} should be emitted by {{writeChunk(..)}} via
{{handleStreamerFailure()}}.
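A minimal sketch of that detection path, with the health check on the write path rather than inside the streamer; all class, method, and field names below are hypothetical stand-ins, not the patch's actual code:

```java
import java.util.ArrayList;
import java.util.List;

// Hedged sketch: writeChunk(..) checks streamer health on the write path, so a
// dead streamer gets reported even though it never reaches
// setupPipelineForAppendOrRecovery itself. All names here are hypothetical.
class OutputStreamSketch {
  interface Streamer { boolean isFailed(); }

  private final List<Streamer> streamers;
  final List<Streamer> reported = new ArrayList<>();

  OutputStreamSketch(List<Streamer> streamers) {
    this.streamers = streamers;
  }

  /** Stand-in for the striped output stream's writeChunk(..). */
  void writeChunk(byte[] chunk, int streamerIndex) {
    Streamer s = streamers.get(streamerIndex);
    if (s.isFailed()) {
      handleStreamerFailure(s); // report the failure from the write path
      return;
    }
    // ... enqueue the chunk on the healthy streamer ...
  }

  /** Stand-in for handleStreamerFailure(): would emit a DNFailureEvent. */
  private void handleStreamerFailure(Streamer s) {
    reported.add(s);
  }
}
```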
Consider another situation where a streamer dies for an unknown reason while
bumping the GS. The received {{DNAcceptedGSEvent}}s never trigger
{{updatePipeline}} because one is still missing. So I think
{{BlockMetadataCoordinator.run()}} should use a timed wait instead, so it can
check streamer health periodically, just like {{waitCreatingNewStreams}}.
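The timed wait could look roughly like this; {{CoordinatorSketch}}, {{pollOnce}}, and the poll interval are assumptions for illustration, not code from the patch:

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Hedged sketch of a coordinator loop with a timed wait: even when no event
// arrives (e.g. a DNAcceptedGSEvent is missing because its streamer died), the
// loop still wakes up and checks streamer health. All names are hypothetical.
class CoordinatorSketch {
  interface Streamer { boolean isHealthy(); }

  final BlockingQueue<String> events = new LinkedBlockingQueue<>();
  int failuresDetected = 0;
  private final List<Streamer> streamers;

  CoordinatorSketch(List<Streamer> streamers) {
    this.streamers = streamers;
  }

  /** One iteration of run(): a timed poll instead of a blocking take(). */
  void pollOnce() throws InterruptedException {
    String event = events.poll(100, TimeUnit.MILLISECONDS); // timed wait
    if (event != null) {
      // dispatch DNFailureEvent / DNAcceptedGSEvent here
    }
    // The health check runs on every wakeup, whether or not an event arrived.
    for (Streamer s : streamers) {
      if (!s.isHealthy()) {
        failuresDetected++;
        events.offer("DNFailureEvent"); // synthesize failure for a dead streamer
      }
    }
  }
}
```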
bq. Limiting the lifespan of StripedDataStreamer to a single block. This is to
simplify the logic.
That's an interesting idea. Instead of replacing failed streamers, we replace
all streamers.
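The preallocation idea from the issue description (reserve {{NUM_PARITY_BLOCKS}} generation stamps when the NN creates the block group, then consume one locally on each failure) could be sketched as follows; the class name, fields, and the RS-6-3 value of 3 are all assumptions:

```java
// Hedged sketch of GS preallocation: FSN#createNewBlock would reserve
// NUM_PARITY_BLOCKS consecutive generation stamps for a new striped block
// group, so a failure consumes one locally without an extra NN round trip.
class ReservedGenerationStamps {
  static final int NUM_PARITY_BLOCKS = 3; // assumes an RS-6-3 layout

  private final long firstReservedGS;
  private int used = 0;

  ReservedGenerationStamps(long currentGS) {
    // Reserve the contiguous range [currentGS+1, currentGS+NUM_PARITY_BLOCKS].
    this.firstReservedGS = currentGS + 1;
  }

  /** Returns the next preallocated GS on failure, or -1 once the budget is
   *  exhausted; more than NUM_PARITY_BLOCKS failures means recovery should
   *  stop anyway. */
  long nextGSOnFailure() {
    if (used >= NUM_PARITY_BLOCKS) {
      return -1;
    }
    return firstReservedGS + used++;
  }
}
```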
> Erasure coding: preallocate multiple generation stamps and serialize updates
> from data streamers
> ------------------------------------------------------------------------------------------------
>
> Key: HDFS-9079
> URL: https://issues.apache.org/jira/browse/HDFS-9079
> Project: Hadoop HDFS
> Issue Type: Sub-task
> Components: erasure-coding
> Affects Versions: HDFS-7285
> Reporter: Zhe Zhang
> Assignee: Zhe Zhang
> Attachments: HDFS-9079-HDFS-7285.00.patch, HDFS-9079.01.patch
>
>
> A non-striped DataStreamer goes through the following steps in error handling:
> {code}
> 1) Finds error => 2) Asks NN for new GS => 3) Gets new GS from NN => 4)
> Applies new GS to DN (createBlockOutputStream) => 5) Ack from DN => 6)
> Updates block on NN
> {code}
> To simplify the above we can preallocate GSs when the NN creates a new striped
> block group ({{FSN#createNewBlock}}). For each new striped block group we can
> reserve {{NUM_PARITY_BLOCKS}} GSs. Then steps 1~3 in the above sequence can
> be skipped. If more than {{NUM_PARITY_BLOCKS}} errors have occurred we
> shouldn't attempt further recovery anyway.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)