[
https://issues.apache.org/jira/browse/HBASE-14790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15033245#comment-15033245
]
Zhe Zhang commented on HBASE-14790:
-----------------------------------
At a high level, the key requirements here are very similar to what we need
for {{DFSOutputStream}} in HDFS erasure coding:
# *Fan-out write to multiple DNs*
# *Simpler logic without hairy pipeline recovery*
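The two requirements above can be sketched together: write each packet to all replicas in parallel, and on any failure abort the whole writer instead of running pipeline recovery. This is an illustrative sketch only, not the HDFS API; {{FanOutWriter}} and its method names are hypothetical.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch: fan out each packet to every replica concurrently
// and fail-stop on the first error (no pipeline recovery).
class FanOutWriter {
    private final List<OutputStream> replicas;   // one stream per DataNode
    private final ExecutorService pool;

    FanOutWriter(List<OutputStream> replicas) {
        this.replicas = replicas;
        this.pool = Executors.newFixedThreadPool(replicas.size());
    }

    // Send the packet to every replica in parallel; any failure aborts the
    // whole writer, and the caller closes it and opens a new file.
    void writePacket(byte[] packet) throws IOException {
        List<Future<?>> acks = new ArrayList<>();
        for (OutputStream r : replicas) {
            acks.add(pool.submit(() -> { r.write(packet); return null; }));
        }
        for (Future<?> ack : acks) {
            try {
                ack.get();                       // wait for all "acks"
            } catch (ExecutionException e) {
                throw new IOException("replica failed, fail-stop", e.getCause());
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                throw new IOException(e);
            }
        }
    }

    void close() { pool.shutdown(); }
}
```

Because there is no recovery path, the error handling collapses to a single throw, which is exactly what makes the logic simpler than the pipelined {{DataStreamer}}.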
In HDFS-EC we have spent a lot of time trying to fit the above requirements
into the existing {{DFSOutputStream}} and {{DataStreamer}}. [~jingzhao] and
[~walter.k.su] did a great job on HDFS-9040, and I'm working on HDFS-9079 to
further simplify the logic. HDFS-9079 also tries to create a *simpler
single-block writing logic* in {{StripedDataStreamer}}, which I guess is also a
goal of this JIRA. It's actually also based on an *event-driven* model, but I'm
not sure yet whether the events collected by {{BlockMetadataCoordinator}} are
the same type of events needed here.
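The event-driven model mentioned above can be illustrated with a minimal sketch: streamer threads post events to a queue, and one coordinator thread applies them in order, so block metadata is updated without fine-grained locking. The {{EventCoordinator}} class and event names below are illustrative assumptions, not the actual {{BlockMetadataCoordinator}} API.

```java
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CopyOnWriteArrayList;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch of an event-driven coordinator: producers post
// events, a single worker thread consumes them in FIFO order.
class EventCoordinator {
    static final class Event {
        final String kind;       // e.g. "BLOCK_FINISHED", "DN_FAILED"
        final int streamerId;
        Event(String kind, int streamerId) {
            this.kind = kind;
            this.streamerId = streamerId;
        }
    }

    private final BlockingQueue<Event> queue = new LinkedBlockingQueue<>();
    private final List<String> log = new CopyOnWriteArrayList<>();
    private final Thread worker;

    EventCoordinator() {
        worker = new Thread(() -> {
            try {
                while (true) {
                    Event e = queue.take();
                    if ("STOP".equals(e.kind)) break;
                    // All state changes happen on this one thread,
                    // so no locking is needed on the metadata itself.
                    log.add(e.kind + ":" + e.streamerId);
                }
            } catch (InterruptedException ignored) { }
        });
        worker.start();
    }

    void post(String kind, int streamerId) {
        queue.add(new Event(kind, streamerId));
    }

    // Drain the queue, stop the worker, and return the ordered event log.
    List<String> shutdown() throws InterruptedException {
        queue.add(new Event("STOP", -1));
        worker.join();
        return log;
    }
}
```

The appeal of this model is that failure handling becomes just another event in the queue rather than a synchronized recovery protocol.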
I'm still reading through Duo's patch and haven't fleshed out the full details
of how to fulfill the requirements from both sides, but at a high level this
looks like a potential synergy. Maybe we can consider:
# In the HDFS project, implement a single-block fail-stop {{DataStreamer}} so
that at the very least that part can be shared by both efforts. The new
{{DataStreamer}} will stop after writing a block, and won't attempt to recover
from DN failures. [~Apache9], let me know if your patch could use such a
streamer and focus only on the {{OutputStream}} logic. Actually, the
{{StripedDataStreamer}} in the latest HDFS-9079 patch is close to this
requirement, but it does have additional logic to report and handle failure
events; I guess that's unavoidable if we want to set block generation stamps
correctly.
# If that goes well, we can explore whether/how to share the fan-out logic
with {{DFSStripedOutputStream}}. The outcome might be a
{{DFSParallelOutputStream}}, subclassed by {{DFSParallelStripedOutputStream}}
and {{DFSParallelContiguousOutputStream}} (with better names).
# We can further consider whether it's possible to pass the streamer events
collected by {{BlockMetadataCoordinator}} up to HBase.
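The single-block fail-stop streamer proposed in step 1 can be sketched as follows. It writes at most one block and never attempts recovery: on any error, or once the block is full, the caller simply closes it and starts a new block (or a new file). {{SingleBlockStreamer}} and its contract are hypothetical, stand-ins for whatever the shared {{DataStreamer}} would look like.

```java
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical sketch: a streamer that writes exactly one block and
// fail-stops on error instead of running pipeline recovery.
class SingleBlockStreamer {
    private final OutputStream block;     // stands in for the DN pipeline
    private final long blockSize;
    private long written;
    private boolean closed;

    SingleBlockStreamer(OutputStream block, long blockSize) {
        this.block = block;
        this.blockSize = blockSize;
    }

    // Returns true while there is still room in the block; returns false
    // (without writing) once the data would overflow it, at which point
    // the caller must create a fresh streamer for the next block.
    boolean write(byte[] data) throws IOException {
        if (closed) throw new IOException("streamer is fail-stop: already closed");
        if (written + data.length > blockSize) return false;
        try {
            block.write(data);
        } catch (IOException e) {
            closed = true;                // fail-stop: no recovery attempt
            throw e;
        }
        written += data.length;
        return written < blockSize;
    }
}
```

Keeping the "one block, no recovery" contract this narrow is what would let a WAL writer in HBase and {{DFSStripedOutputStream}} in HDFS share the same streamer.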
> Implement a new DFSOutputStream for logging WAL only
> ----------------------------------------------------
>
> Key: HBASE-14790
> URL: https://issues.apache.org/jira/browse/HBASE-14790
> Project: HBase
> Issue Type: Improvement
> Reporter: Duo Zhang
>
> The original {{DFSOutputStream}} is very powerful and aims to serve all
> purposes. But in fact, we do not need most of the features if we only want to
> log WAL. For example, we do not need pipeline recovery since we could just
> close the old logger and open a new one. And also, we do not need to write
> multiple blocks since we could also open a new logger if the old file is too
> large.
> And the most important thing is that, it is hard to handle all the corner
> cases to avoid data loss or data inconsistency(such as HBASE-14004) when
> using original DFSOutputStream due to its complicated logic. And the
> complicated logic also force us to use some magical tricks to increase
> performance. For example, we need to use multiple threads to call {{hflush}}
> when logging, and now we use 5 threads. But why 5 not 10 or 100?
> So here, I propose we should implement our own {{DFSOutputStream}} when
> logging WAL. For correctness, and also for performance.
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)