[ https://issues.apache.org/jira/browse/HBASE-24632?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17150481#comment-17150481 ]

Anoop Sam John commented on HBASE-24632:
----------------------------------------

Yes Stack, mostly we would compact these files, since after region open we issue 
a compact request.  But there can be cases where we do not.
Assume there were some small files from flushes that never got compacted before 
the RS went down.  Compaction selection looks for candidates starting from the 
oldest files, and in all likelihood the very old files would get excluded by the 
size math, while the newly flushed files could get selected.  We also have the 
max-files-to-compact config, which is 10 by default.  The count of these small 
files alone might be >10: if there are, say, 15 WAL files to split, we will for 
sure have at least 15 small HFiles.
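
For reference, a minimal standalone sketch (plain Java, not the actual HBase 
selection code) of how a per-compaction cap like hbase.hstore.compaction.max 
(default 10) bounds a single selection, so 15 small recovered files cannot all 
be picked up in one request:

    import java.util.ArrayList;
    import java.util.List;

    public class SelectionCapSketch {
        public static void main(String[] args) {
            // Default value of hbase.hstore.compaction.max
            int maxFilesPerCompaction = 10;
            // 15 WAL files split -> at least 15 small flush-sized HFiles
            List<Long> candidateSizes = new ArrayList<>();
            for (int i = 0; i < 15; i++) {
                candidateSizes.add(4L * 1024 * 1024);  // e.g. ~4 MB each
            }
            // Selection stops at the cap, leaving the remaining small files behind
            List<Long> selected = candidateSizes.subList(0,
                Math.min(maxFilesPerCompaction, candidateSizes.size()));
            System.out.println("candidates=" + candidateSizes.size()
                + " selected=" + selected.size());  // candidates=15 selected=10
        }
    }
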
My thinking was this: after the region open, we have to make sure these small 
files are compacted in one go, and we should not even apply the max-files limit 
for this compaction.  Also note that these files might not even have the DBE 
(data block encoding)/compression etc. applied.  Code-wise I am not sure how 
clean it will come out.  Let us see.  Extremely busy these days; once out of 
that, I will have a look at this.
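
A rough sketch of that idea, with hypothetical names (StoreFileRef, 
SMALL_FILE_THRESHOLD and selectAllSmallRecoveredFiles are illustrative, not 
existing HBase APIs): after region open, pick every small post-split file 
regardless of the usual per-compaction file cap, so one rewrite also applies the 
family's DBE/compression settings:

    import java.util.ArrayList;
    import java.util.List;

    public class PostOpenCompactionSketch {
        // Hypothetical stand-in for a store file reference
        static class StoreFileRef {
            final String path;
            final long sizeBytes;
            StoreFileRef(String path, long sizeBytes) {
                this.path = path;
                this.sizeBytes = sizeBytes;
            }
        }

        // Assumed threshold for "small" files produced by WAL-split recovery flushes
        static final long SMALL_FILE_THRESHOLD = 16L * 1024 * 1024;

        // Select every small file, ignoring the usual max-files-per-compaction cap
        static List<StoreFileRef> selectAllSmallRecoveredFiles(List<StoreFileRef> storeFiles) {
            List<StoreFileRef> selected = new ArrayList<>();
            for (StoreFileRef f : storeFiles) {
                if (f.sizeBytes <= SMALL_FILE_THRESHOLD) {
                    selected.add(f);
                }
            }
            return selected;
        }

        public static void main(String[] args) {
            List<StoreFileRef> files = new ArrayList<>();
            for (int i = 0; i < 15; i++) {
                files.add(new StoreFileRef("recovered-" + i, 4L * 1024 * 1024));
            }
            // All 15 small files go into one request; a real implementation would
            // hand this selection to the store's compaction machinery.
            System.out.println("files selected in one go: "
                + selectAllSmallRecoveredFiles(files).size());  // 15
        }
    }
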

> Enable procedure-based log splitting as default in hbase3
> ---------------------------------------------------------
>
>                 Key: HBASE-24632
>                 URL: https://issues.apache.org/jira/browse/HBASE-24632
>             Project: HBase
>          Issue Type: Sub-task
>          Components: wal
>            Reporter: Michael Stack
>            Priority: Major
>             Fix For: 3.0.0-alpha-1
>
>
> Means changing this value in HConstants to false:
>    public static final boolean DEFAULT_HBASE_SPLIT_COORDINATED_BY_ZK = true;
> Should probably also deprecate the current zk distributed split too so we can 
> clear out those classes too.


