[ https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15817527#comment-15817527 ]
Hudson commented on HBASE-14061:
--------------------------------
FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #2297 (See
[https://builds.apache.org/job/HBase-Trunk_matrix/2297/])
HBASE-14061 Support CF-level Storage Policy (addendum) (liyu: rev
36eeb2c569c574b299f8628bed6b8dd20fb900e2)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java
* (edit) hbase-common/src/main/java/org/apache/hadoop/hbase/util/ReflectionUtils.java
* (edit) hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionFileSystem.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileWriter.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java
> Support CF-level Storage Policy
> -------------------------------
>
> Key: HBASE-14061
> URL: https://issues.apache.org/jira/browse/HBASE-14061
> Project: HBase
> Issue Type: Sub-task
> Components: HFile, regionserver
> Environment: hadoop-2.6.0
> Reporter: Victor Xu
> Assignee: Yu Li
> Fix For: 2.0.0
>
> Attachments: HBASE-14061-master-v1.patch, HBASE-14061.addendum.patch,
> HBASE-14061.addendum.patch, HBASE-14061.v2.patch, HBASE-14061.v3.patch,
> HBASE-14061.v4.patch
>
>
> After reading [HBASE-12848|https://issues.apache.org/jira/browse/HBASE-12848]
> and [HBASE-12934|https://issues.apache.org/jira/browse/HBASE-12934], I wrote
> a patch to implement CF-level storage policy.
> My main purpose is to improve random-read performance for some really hot
> data, which usually resides in a certain column family of a big table.
> Usage:
> $ hbase shell
> > alter 'TABLE_NAME', METADATA => {'hbase.hstore.block.storage.policy' => 'POLICY_NAME'}
> > alter 'TABLE_NAME', {NAME=>'CF_NAME', METADATA => {'hbase.hstore.block.storage.policy' => 'POLICY_NAME'}}
> HDFS's setStoragePolicy only takes effect on hfiles newly created under a
> directory that has the policy configured, so I had to create sub-directories
> (one per CF) in the region's .tmp directory and set the storage policy on them.
> Besides, I had to upgrade the hadoop version to 2.6.0 because
> dfs.getStoragePolicy cannot easily be invoked via reflection, and I needed
> this API to finish my unit test.
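
For reference, the CF-level alter quoted above can also be issued through the Java
client. The sketch below is illustrative only, assuming the HBase 1.x / early-2.0
Admin API (getTableDescriptor / modifyColumn, both deprecated later); the class
name, table name, family name and the ONE_SSD policy value are placeholders.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.util.Bytes;

// Hypothetical example class, not part of the patch.
public class SetCfStoragePolicy {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Admin admin = conn.getAdmin()) {
      TableName table = TableName.valueOf("TABLE_NAME");               // placeholder
      HTableDescriptor htd = admin.getTableDescriptor(table);
      HColumnDescriptor hcd = htd.getFamily(Bytes.toBytes("CF_NAME")); // placeholder
      // Same key/value pair the shell 'alter' stores in the CF metadata.
      hcd.setValue("hbase.hstore.block.storage.policy", "ONE_SSD");
      admin.modifyColumn(table, hcd);
    }
  }
}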
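
The per-CF .tmp sub-directories described above only help if the storage policy
can be applied to each directory before hfiles are written into it. Below is a
minimal sketch of doing that reflectively, so the caller still compiles against
Hadoop versions that lack the method; the helper class and the silent
fall-through on failure are assumptions for illustration, not the actual
HFileSystem/ReflectionUtils code touched by the addendum.

import java.lang.reflect.Method;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper, not from the patch.
public final class StoragePolicyUtil {
  /**
   * Best-effort: call fs.setStoragePolicy(dir, policy) if the running Hadoop
   * exposes it (DistributedFileSystem has had it since 2.6.0).
   */
  public static void trySetStoragePolicy(FileSystem fs, Path dir, String policy) {
    try {
      Method m = fs.getClass().getMethod("setStoragePolicy", Path.class, String.class);
      m.invoke(fs, dir, policy);
    } catch (Exception e) {
      // Older Hadoop or a non-HDFS filesystem: the policy is only a hint, so skip it.
    }
  }
}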