[ https://issues.apache.org/jira/browse/HBASE-4755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13589095#comment-13589095 ]

Jonathan Hsieh commented on HBASE-4755:
---------------------------------------

I know work has already started here, but I want to make sure we consider 
multiple designs and have thought through where they may end up and the tradeoffs.

I posted a [long post|https://issues.apache.org/jira/browse/HDFS-2576?focusedCommentId=13589059&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13589059] over on the HDFS-2576 jira; I probably should have posted it here instead. 

I'm assuming that HBase provides the list of DNs, likely selected at region 
creation time?
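
To make that question concrete, here is a minimal sketch of what I imagine the flush/compaction write path would look like, assuming the HDFS side ends up exposing a create() overload that takes favored-node hints (per the HDFS-2576 discussion). The signature and surrounding names are my guesses, not the wip patch:

{code:java}
// Sketch only: assumes a DistributedFileSystem.create() overload that accepts
// favored-node hints, along the lines discussed on HDFS-2576. Everything here
// is illustrative, not taken from the attached patch.
import java.io.IOException;
import java.net.InetSocketAddress;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class FavoredNodeWriteSketch {
  static FSDataOutputStream createWithFavoredNodes(DistributedFileSystem fs,
      Path storeFile, InetSocketAddress[] favoredNodes) throws IOException {
    // favoredNodes would be the datanodes co-located with the region's
    // primary/secondary/tertiary regionservers, chosen at region creation.
    return fs.create(storeFile,
        FsPermission.getDefault(),
        true,                                              // overwrite
        fs.getConf().getInt("io.file.buffer.size", 4096),  // buffer size
        (short) 3,                                         // replication
        fs.getDefaultBlockSize(),
        null,                                              // progressable
        favoredNodes);                                     // the placement hint
  }
}
{code}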

bq. The balancers run independently, and if the HDFS rebalancer damages the 
block placements, it'd be repaired at the time compactions in hbase happen.

So they will potentially "fight" -- but we'll just have a perf penalty upon 
recovery.  Does the HDFS balancer run automatically by default, or is it only 
triggered manually?  

bq. HBase should be able to handle these cases (and the new patches will handle 
these). In addition, there would be a tool (that would be a subtask of this 
jira) that should be run periodically that would look at the block placements 
and update region maps (region -> favored nodes) in the meta table in HBase to 
keep the mapping more optimal in terms of locality of data.

Ok, so when we attempt to write during a compaction or flush, maybe we'd check 
that all N HDFS replica targets are alive?  Would the tool be the only 
process/thread that selects datanode targets?  Is this the only mechanism we 
have to force block replicas onto different datanodes?
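
To make the "are the targets still alive" check concrete, here is a rough helper of the sort I have in mind; nothing below comes from the wip patch, and the live-datanode set would presumably come from whatever cluster state the regionserver already tracks:

{code:java}
// Illustrative only: keep the favored nodes that are still live, and top the
// list back up to the desired replication from the currently live datanodes.
import java.net.InetSocketAddress;
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;

public class FavoredNodeRefreshSketch {
  static List<InetSocketAddress> refreshFavoredNodes(
      List<InetSocketAddress> favored,
      Collection<InetSocketAddress> liveDataNodes,
      int replication) {
    List<InetSocketAddress> result = new ArrayList<InetSocketAddress>();
    for (InetSocketAddress node : favored) {
      if (liveDataNodes.contains(node) && result.size() < replication) {
        result.add(node);          // still usable as a placement hint
      }
    }
    for (InetSocketAddress node : liveDataNodes) {
      if (result.size() >= replication) {
        break;
      }
      if (!result.contains(node)) {
        result.add(node);          // replace a dead favored node
      }
    }
    return result;
  }
}
{code}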

My spidey sense says that putting HDFS-specific info in HBase means we need to 
either go all in, or push the HDFS stuff into HDFS and interact with it through 
a placement policy.

                
> HBase based block placement in DFS
> ----------------------------------
>
>                 Key: HBASE-4755
>                 URL: https://issues.apache.org/jira/browse/HBASE-4755
>             Project: HBase
>          Issue Type: New Feature
>    Affects Versions: 0.94.0
>            Reporter: Karthik Ranganathan
>            Assignee: Christopher Gist
>            Priority: Critical
>         Attachments: 4755-wip-1.patch
>
>
> As is, the feature is only useful for HBase clusters that care about data 
> locality on regionservers, but it can also enable a lot of nice features 
> down the road.
> The basic idea is as follows: instead of letting HDFS determine where to 
> replicate data (r=3) by placing blocks on various nodes, it is better to let 
> HBase do so by providing hints to HDFS through the DFS client. That way, 
> instead of replicating data at the block level, we can replicate data at a 
> per-region level (each region owned by a primary, a secondary and a tertiary 
> regionserver). This is better for two reasons:
> - Can make region failover faster on clusters which benefit from data affinity
> - On large clusters with random block placement policy, this helps reduce the 
> probability of data loss
> The algo is as follows:
> - Each region in META will have 3 columns which are the preferred 
> regionservers for that region (primary, secondary and tertiary)
> - Preferred assignment can be controlled by a config knob
> - Upon cluster start, HMaster will enter a mapping from each region to 3 
> regionservers (random hash, could use current locality, etc)
> - The load balancer would assign out regions preferring region assignments to 
> primary over secondary over tertiary over any other node
> - Periodically (say weekly, configurable) the HMaster would run a locality 
> check and make sure the map it has from regions to regionservers is optimal.
> Down the road, this can be enhanced to control region placement in the 
> following cases:
> - Mixed hardware SKU where some regionservers can hold fewer regions
> - Load balancing across tables, where we don't want multiple regions of a table 
> to get assigned to the same regionservers
> - Multi-tenancy, where we can restrict the assignment of the regions of some 
> table to a subset of regionservers, so an abusive app cannot take down the 
> whole HBase cluster.
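
As a tiny illustration of the preference order described in the algo above (primary over secondary over tertiary over any other node), here is a sketch; the types and names are stand-ins, not from the attached patch:

{code:java}
// Illustrative only: pick a target server for a region, preferring the
// primary, then the secondary, then the tertiary, then any other live server.
import java.util.List;

public class FavoredAssignmentSketch {
  static String pickServer(List<String> favoredServers,  // primary, secondary, tertiary
                           List<String> liveServers) {
    for (String candidate : favoredServers) {
      if (liveServers.contains(candidate)) {
        return candidate;
      }
    }
    // None of the favored servers is live; fall back to any live server.
    return liveServers.isEmpty() ? null : liveServers.get(0);
  }
}
{code}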
