[
https://issues.apache.org/jira/browse/HBASE-24679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17237633#comment-17237633
]
Andrew Kyle Purtell commented on HBASE-24679:
---------------------------------------------
This should only be scheduled against trunk (3.0) until there is an actual
implementation.
> HBase on Cloud Blob FS : Provide config to skip HFile archival while table
> deletion
> ------------------------------------------------------------------------------------
>
> Key: HBASE-24679
> URL: https://issues.apache.org/jira/browse/HBASE-24679
> Project: HBase
> Issue Type: Improvement
> Reporter: Anoop Sam John
> Assignee: Anoop Sam John
> Priority: Major
> Fix For: 3.0.0-alpha-1
>
>
> When we delete a table, the FS-side deletion does the following:
> 1. Rename the table directory to move it under /hbase/.tmp. This is an atomic
> rename op.
> 2. Go through each HFile under every region:cf and archive them one by one
> (rename each file from the .tmp path to /hbase/archive).
> 3. Delete the table dir under the .tmp dir.
> On HDFS this is not a big deal, as every rename is just a metadata op (though
> the HFile archival is still costly, since the number of NameNode calls scales
> with the table's region count and total store file count). But on a cloud
> blob store based FS implementation this is a concerning operation: every
> rename is a blob copy, and we are doing it twice per HFile in this table!
> The proposal here is to provide a config option (defaulting to false) to
> skip this archival step.
> We could provide another config to avoid the .tmp rename as well. The
> atomicity of the table delete can be achieved by the HMaster-side procedure
> and the procedure WAL; in a table delete, the first step is to remove the
> table from META anyway.
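The cost argument above can be sketched as a quick count of blob-copy operations. This is an illustrative model only (not HBase code), and it assumes that a directory "rename" on a blob store is implemented as one copy per object underneath it, so each HFile is copied once for the .tmp move and once for archival:

```java
// Illustrative only, not HBase code: counts the blob-copy operations implied
// by each variant of the table-delete path on a blob store, assuming a
// directory "rename" is really one copy per object underneath it.
public class DeleteCostModel {

    /** Current path: dir rename into .tmp (1 copy/HFile) + archival rename (1 copy/HFile). */
    static long copyOpsWithArchival(long regions, long storeFilesPerRegion) {
        long hfiles = regions * storeFilesPerRegion;
        return hfiles * 2;
    }

    /** Proposed config: skip archival, but still rename the table dir under .tmp. */
    static long copyOpsSkippingArchival(long regions, long storeFilesPerRegion) {
        return regions * storeFilesPerRegion; // only the .tmp dir-rename copies remain
    }

    /** Further config: skip the .tmp rename too and delete blobs in place. */
    static long copyOpsSkippingBoth(long regions, long storeFilesPerRegion) {
        return 0; // deletes only, no blob copies
    }

    public static void main(String[] args) {
        long regions = 1000, filesPerRegion = 10;
        System.out.println("current path:        " + copyOpsWithArchival(regions, filesPerRegion));
        System.out.println("skip archival:       " + copyOpsSkippingArchival(regions, filesPerRegion));
        System.out.println("skip .tmp + archive: " + copyOpsSkippingBoth(regions, filesPerRegion));
    }
}
```

For a 1000-region table with 10 store files per region, the current path implies 20,000 blob copies; skipping archival halves that, and skipping both steps removes the copies entirely, leaving only delete calls.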