[ 
https://issues.apache.org/jira/browse/HBASE-20952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16617773#comment-16617773
 ] 

Josh Elser commented on HBASE-20952:
------------------------------------

{quote}First, WALSplitter is not a separate topic; it is core to HBase. 
You can disable replication, but you cannot disable WAL splitting...
{quote}
I think this got lost in the text: Sergey's point was that this _could_ be made 
optional. In the current HDFS-based style, you are obviously correct that this 
is not optional. However, when we think about hypothetical log systems, we 
could see an optimization where the filtering of _just one Region's_ edits from 
a log is pushed down into the log system itself. Such a feature would remove 
the need for WAL splitting to happen universally in HBase.
{quote}So I think we should add another method to the WAL system, which is used 
to get the recovered edits for a region when opening it
{quote}
Yup! We're on the same page. In other words, WAL splitting is a function of 
getting the recovered edits for a region, not necessarily an invariant for 
HBase.
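To make that framing concrete, here is a toy sketch of what such a method could look like, with the per-region filtering handled by the log system itself. All names here ({{RegionWAL}}, {{getRecoveredEdits}}, {{InMemoryWAL}}) are hypothetical illustrations, not actual or proposed HBase API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical WAL abstraction: recovering edits for a region is a method on
// the log system, not a separate universal "splitting" step.
interface RegionWAL {
    /** Returns the edits that must be replayed for this region on open. */
    List<String> getRecoveredEdits(String regionName);
}

// Toy in-memory implementation: the "log system" filters one region's edits
// itself, which is the push-down optimization described above.
class InMemoryWAL implements RegionWAL {
    private final List<String[]> entries = new ArrayList<>(); // [region, edit]

    void append(String region, String edit) {
        entries.add(new String[] {region, edit});
    }

    @Override
    public List<String> getRecoveredEdits(String regionName) {
        List<String> result = new ArrayList<>();
        for (String[] e : entries) {
            if (e[0].equals(regionName)) {
                result.add(e[1]);
            }
        }
        return result;
    }
}

public class RecoveredEditsDemo {
    public static void main(String[] args) {
        InMemoryWAL wal = new InMemoryWAL();
        wal.append("region-a", "put:r1");
        wal.append("region-b", "put:r2");
        wal.append("region-a", "delete:r1");
        // Opening region-a only needs region-a's edits back.
        System.out.println(wal.getRecoveredEdits("region-a"));
    }
}
```

The point of the sketch is only the shape of the call: a region open asks the log system for its edits, and whether that is answered by HDFS-style splitting or by a log-system push-down becomes an implementation detail behind the interface.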
{quote}working on deploying HBase on S3 and was wrestling with whether the 
recovered edits directory should be on S3 or HDFS; I do not know the final 
solution, but after the discussion here, I think it should be on HDFS, not S3?
{quote}
Yeah, I think they were co-locating the recovered edits on the same FS that 
WALs were put on.
{quote}The replication is just another subscriber of the wals, right?
{quote}
Yup. We put a simple "Listener" style interface into the Ratis LogService 
specifically for this. I think the difficult part for HBase will be getting the 
implementation right, but the architecture is definitely simple.
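As a rough illustration of the "replication is just another subscriber" idea, here is a minimal listener-style sketch. The names ({{WalListener}}, {{TailableWal}}) are hypothetical and not the actual Ratis LogService or HBase API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical listener hook: invoked for every edit appended to the log.
interface WalListener {
    void onAppend(String edit);
}

// Toy WAL that notifies subscribers on append; replication would simply be
// one more subscriber tailing the stream.
class TailableWal {
    private final List<WalListener> listeners = new ArrayList<>();

    void subscribe(WalListener l) {
        listeners.add(l);
    }

    void append(String edit) {
        // A real system would durably persist the edit first, then notify.
        for (WalListener l : listeners) {
            l.onAppend(edit);
        }
    }
}

public class TailDemo {
    public static void main(String[] args) {
        TailableWal wal = new TailableWal();
        List<String> replicated = new ArrayList<>();
        wal.subscribe(replicated::add); // replication as just another subscriber
        wal.append("put:row1");
        wal.append("put:row2");
        System.out.println(replicated);
    }
}
```

The architecture really is that small; the hard part alluded to above (ordering, durability, back-pressure, failure handling) all lives inside a production implementation of {{append}} and the subscriber.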
{quote}And for sync replication, I think we should make it work with different 
wal implementations. This is another story and I will keep tracking it. To be 
honest I do not know the solution yet, but I'm optimistic.
{quote}
Ok! Sounds good for now. This is obviously a long way from hitting any "main" 
branches (still a long way from a feature branch). I'll spin off an issue on 
the parent issue to revisit sync-replication.
{quote}the line is too low, we should cut it at a higher place, where fencing, 
log splitting, and reading recovered edits should all be included in it, but 
now lots of the code is outside the wal system
{quote}
+1 that's a nice phrasing. Let me work with Ted and get some of this doc'ed.

> Re-visit the WAL API
> --------------------
>
>                 Key: HBASE-20952
>                 URL: https://issues.apache.org/jira/browse/HBASE-20952
>             Project: HBase
>          Issue Type: Sub-task
>          Components: wal
>            Reporter: Josh Elser
>            Priority: Major
>         Attachments: 20952.v1.txt
>
>
> Take a step back from the current WAL implementations and think about what an 
> HBase WAL API should look like. What are the primitive calls that we require 
> to guarantee durability of writes with a high degree of performance?
> The API needs to take the current implementations into consideration. We 
> should also have a mind for what is happening in the Ratis LogService (but 
> the LogService should not dictate what HBase's WAL API looks like; see 
> RATIS-272).
> Other "systems" inside of HBase that use WALs are replication and 
> backup&restore. Replication has the use-case for "tail"'ing the WAL which we 
> should provide via our new API. B&R doesn't do anything fancy (IIRC). We 
> should make sure all consumers are generally going to be OK with the API we 
> create.
> The API may be "OK" (or OK in part). We need to also consider other methods 
> which were "bolted" on, such as {{AbstractFSWAL}} and 
> {{WALFileLengthProvider}}. Other corners of "WAL use" (like {{WALSplitter}}) 
> should also be looked at to use WAL APIs only.
> We also need to make sure that adequate interface audience and stability 
> annotations are chosen.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
