[ https://issues.apache.org/jira/browse/HBASE-20800?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

stack updated HBASE-20800:
--------------------------
    Summary: Master-orchestrated compactions  (was: Master orchestrated 
compactions)

> Master-orchestrated compactions
> -------------------------------
>
>                 Key: HBASE-20800
>                 URL: https://issues.apache.org/jira/browse/HBASE-20800
>             Project: HBase
>          Issue Type: Umbrella
>          Components: Compaction
>            Reporter: stack
>            Assignee: Mohit Goel
>            Priority: Major
>
> An umbrella issue for having compactions go via the Master so we can have a 
> centralized arbiter of cluster I/O. If we put the Master in the way, we can do 
> stuff like:
>  * Ask the Master for current cluster compaction state: what is running, what 
> is blocked.
>  * Master can manage cluster-wide compaction policy and/or throttling or 
> blocking of compaction I/O.
>  * Master can schedule when and where compactions run, so we can guard against 
> the pathological case where all RegionServers decide now is the time to major 
> compact, bringing on a compaction storm.
> Other side-benefits might include being able to farm out the compaction work 
> to another process -- e.g. the Splice Machine model of having Spark run the 
> compactions -- or just to a separate compactor process that we might i/o-nice.
>  * We'll need to figure out how to externalize the CompactionRequest so it can 
> be passed over RPC.
>  * We'll need something like a CompactionManager in the Master process 
> that keeps up-to-date cluster state.
> MOB needs a compaction fabric it can use. Its compactions are currently 
> Master-based only and so don't scale. It could make use of this mechanism to 
> ask the Master to farm out its compaction requests.
> This is an umbrella issue. I thought I'd filed one already on this topic but 
> can't find it. Will shut it down if I trip over it.
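The CompactionManager bullet above could look something like the following minimal sketch. All names here (CompactionManager, requestCompaction, the State enum, the maxConcurrent throttle) are invented for illustration, not proposed API; the point is just that a Master-side table of per-region compaction state is enough to answer the "what is running, what is blocked" query and to enforce a cluster-wide cap that guards against a compaction storm.

```java
// Hypothetical sketch of a Master-side CompactionManager; none of these
// names are existing HBase APIs.
import java.util.HashMap;
import java.util.Map;

class CompactionManager {

    enum State { RUNNING, BLOCKED }

    // region name -> compaction state, as reported over RPC by RegionServers
    private final Map<String, State> regionStates = new HashMap<>();

    // crude cluster-wide throttle: cap on concurrently running compactions
    private final int maxConcurrent;

    CompactionManager(int maxConcurrent) {
        this.maxConcurrent = maxConcurrent;
    }

    /** A RegionServer asks the Master for permission to compact a region. */
    synchronized boolean requestCompaction(String region) {
        long running = regionStates.values().stream()
                .filter(s -> s == State.RUNNING).count();
        if (running >= maxConcurrent) {
            regionStates.put(region, State.BLOCKED); // visible to operators
            return false; // caller backs off and retries later
        }
        regionStates.put(region, State.RUNNING);
        return true;
    }

    /** RegionServer reports that a compaction completed (or was abandoned). */
    synchronized void reportFinished(String region) {
        regionStates.remove(region);
    }

    /** Answers "what is running, what is blocked" for the whole cluster. */
    synchronized Map<String, State> snapshot() {
        return new HashMap<>(regionStates);
    }
}
```

A real version would key on region/store identifiers rather than strings and would have to survive Master failover, but even this toy shape shows why the state has to live in one place: the throttle decision needs a global view no single RegionServer has.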



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)