[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2017-01-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15818640#comment-15818640
 ] 

Hudson commented on HBASE-14061:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #2299 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2299/])
HBASE-14061 Support CF-level Storage Policy (addendum2) (liyu: rev 
953416eb3411f7361f39283fabd4a555bfc873f0)
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java


> Support CF-level Storage Policy
> -------------------------------
>
> Key: HBASE-14061
> URL: https://issues.apache.org/jira/browse/HBASE-14061
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile, regionserver
> Environment: hadoop-2.6.0
> Reporter: Victor Xu
> Assignee: Yu Li
> Fix For: 2.0.0
>
> Attachments: HBASE-14061-master-v1.patch, HBASE-14061.addendum.patch, 
> HBASE-14061.addendum.patch, HBASE-14061.addendum2.patch, 
> HBASE-14061.addendum2.patch, HBASE-14061.v2.patch, HBASE-14061.v3.patch, 
> HBASE-14061.v4.patch
>
>
> After reading [HBASE-12848|https://issues.apache.org/jira/browse/HBASE-12848] 
> and [HBASE-12934|https://issues.apache.org/jira/browse/HBASE-12934], I wrote 
> a patch to implement CF-level storage policy. 
> My main purpose is to improve random-read performance for some really hot 
> data, which is usually located in a certain column family of a big table.
> Usage:
> $ hbase shell
> > alter 'TABLE_NAME', METADATA => {'hbase.hstore.block.storage.policy' => 
> > 'POLICY_NAME'}
> > alter 'TABLE_NAME', {NAME => 'CF_NAME', METADATA => 
> > {'hbase.hstore.block.storage.policy' => 'POLICY_NAME'}}
> HDFS's setStoragePolicy can only take effect when a new hfile is created in a 
> configured directory, so I had to make sub-directories (one per CF) in the 
> region's .tmp directory and set the storage policy on them.
> Besides, I had to upgrade the hadoop version to 2.6.0 because 
> dfs.getStoragePolicy cannot easily be invoked via reflection, and I needed 
> this API to finish my unit test.
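A minimal sketch (not from the patch) of the HDFS call the description relies on, assuming Hadoop 2.6+ where {{DistributedFileSystem#setStoragePolicy(Path, String)}} is available; the path and policy name below are illustrative only:
{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class CfStoragePolicySketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);
    // Illustrative per-CF sub-directory under a region's .tmp directory.
    Path cfTmpDir = new Path("/hbase/data/default/TABLE_NAME/REGION/.tmp/CF_NAME");
    fs.mkdirs(cfTmpDir);
    if (fs instanceof DistributedFileSystem) {
      // New hfiles written under this directory are placed according to the
      // configured policy, e.g. ALL_SSD.
      ((DistributedFileSystem) fs).setStoragePolicy(cfTmpDir, "ALL_SSD");
    }
  }
}
{code}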





[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2017-01-11 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15818504#comment-15818504
 ] 

Sean Busbey commented on HBASE-14061:
-

Thanks for the quick follow-ups, [~carp84] and [~zghaobac]!



[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2017-01-11 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15817929#comment-15817929
 ] 

Yu Li commented on HBASE-14061:
---

Thanks for the review fella, just pushed to master.



[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2017-01-11 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15817922#comment-15817922
 ] 

Guanghao Zhang commented on HBASE-14061:


Tested the 2nd addendum patch locally and TestPartitionedMobCompactor passed. +1.



[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2017-01-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15817885#comment-15817885
 ] 

Hadoop QA commented on HBASE-14061:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
11s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 43s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 84m 22s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 122m 16s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12846773/HBASE-14061.addendum2.patch
 |
| JIRA Issue | HBASE-14061 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux cb1ed84feacd 3.13.0-103-generic #150-Ubuntu SMP Thu Nov 24 
10:34:17 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 36eeb2c |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5232/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5232/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.




[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2017-01-11 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15817598#comment-15817598
 ] 

Yu Li commented on HBASE-14061:
---

The same issue is reported by this post-commit check, as Guanghao pointed out 
above. Will commit the 2nd addendum to resolve it after the HadoopQA check.



[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2017-01-11 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15817581#comment-15817581
 ] 

Yu Li commented on HBASE-14061:
---

Yep, somehow I neglected this one... Thanks for pointing this out.

This could be fixed by simply changing the code below in 
{{HFileSystem#newInstanceFileSystem}}
{code}
if (clazz != null) {
  // This will be true for Hadoop 1.0, or 0.20.
  fs = (FileSystem)ReflectionUtils.newInstance(clazz, conf);
  fs.initialize(uri, conf);
} else {
{code}
to
{code}
if (clazz != null) {
  // This will be true for Hadoop 1.0, or 0.20.
  fs = (FileSystem) org.apache.hadoop.util.ReflectionUtils.newInstance(clazz, conf);
  fs.initialize(uri, conf);
} else {
{code}

Will push another addendum soon.
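A minimal sketch (not from the patch) of why the fully qualified Hadoop helper matters here: {{org.apache.hadoop.util.ReflectionUtils#newInstance(Class, Configuration)}} goes through the no-arg constructor and then injects the configuration via {{setConf}}, while HBase's own {{ReflectionUtils#newInstance(Class, Object...)}} looks for a constructor matching the supplied arguments, which is what fails in the stack trace quoted further down this thread. Class and method names below are illustrative, not the committed code:
{code}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class NewInstanceFileSystemSketch {
  // Sketch only: instantiate a FileSystem subclass that declares no
  // (Configuration) constructor. Hadoop's ReflectionUtils uses the default
  // constructor and then calls setConf(), so a class like the test's
  // FaultyDistributedFileSystem can still be created.
  static FileSystem newFs(Configuration conf, URI uri) throws Exception {
    Class<? extends FileSystem> clazz =
        conf.getClass("fs.hdfs.impl", DistributedFileSystem.class, FileSystem.class);
    FileSystem fs = org.apache.hadoop.util.ReflectionUtils.newInstance(clazz, conf);
    fs.initialize(uri, conf);
    return fs;
  }
}
{code}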



[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2017-01-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15817527#comment-15817527
 ] 

Hudson commented on HBASE-14061:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #2297 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2297/])
HBASE-14061 Support CF-level Storage Policy (addendum) (liyu: rev 
36eeb2c569c574b299f8628bed6b8dd20fb900e2)
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestHFileOutputFormat2.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java
* (edit) 
hbase-common/src/main/java/org/apache/hadoop/hbase/util/ReflectionUtils.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionFileSystem.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileWriter.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java




[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2017-01-10 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15817410#comment-15817410
 ] 

Guanghao Zhang commented on HBASE-14061:


[~carp84] Is this related to this failed UT?

{code}
Unable to find suitable constructor for class org.apache.hadoop.hbase.mob.compactions.TestPartitionedMobCompactor$FaultyDistributedFileSystem
Stacktrace

java.lang.UnsupportedOperationException: Unable to find suitable constructor for class org.apache.hadoop.hbase.mob.compactions.TestPartitionedMobCompactor$FaultyDistributedFileSystem
  at org.apache.hadoop.hbase.util.ReflectionUtils.findConstructor(ReflectionUtils.java:103)
  at org.apache.hadoop.hbase.util.ReflectionUtils.newInstance(ReflectionUtils.java:73)
  at org.apache.hadoop.hbase.fs.HFileSystem.newInstanceFileSystem(HFileSystem.java:260)
  at org.apache.hadoop.hbase.fs.HFileSystem.<init>(HFileSystem.java:110)
  at org.apache.hadoop.hbase.fs.HFileSystem.get(HFileSystem.java:476)
  at org.apache.hadoop.hbase.HBaseTestingUtility.getTestFileSystem(HBaseTestingUtility.java:2951)
  at org.apache.hadoop.hbase.HBaseTestingUtility.getNewDataTestDirOnTestFS(HBaseTestingUtility.java:565)
  at org.apache.hadoop.hbase.HBaseTestingUtility.setupDataTestDirOnTestFS(HBaseTestingUtility.java:554)
  at org.apache.hadoop.hbase.HBaseTestingUtility.getDataTestDirOnTestFS(HBaseTestingUtility.java:527)
  at org.apache.hadoop.hbase.HBaseTestingUtility.getDefaultRootDirPath(HBaseTestingUtility.java:1228)
  at org.apache.hadoop.hbase.HBaseTestingUtility.createRootDir(HBaseTestingUtility.java:1259)
  at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniHBaseCluster(HBaseTestingUtility.java:1085)
  at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:1057)
  at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:929)
  at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:911)
  at org.apache.hadoop.hbase.HBaseTestingUtility.startMiniCluster(HBaseTestingUtility.java:898)
  at org.apache.hadoop.hbase.mob.compactions.TestPartitionedMobCompactor.setUpBeforeClass(TestPartitionedMobCompactor.java:87)
{code}



[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2017-01-10 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15816859#comment-15816859
 ] 

Yu Li commented on HBASE-14061:
---

Thanks for the review [~busbey], will commit soon.

bq. I'd recommend updating references to "Hadoop 2.8.0+" to say "Hadoop 2.8.0+ / 3.0.0-alpha1+"
Ok, will do when committing.



[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2017-01-10 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815237#comment-15815237
 ] 

Sean Busbey commented on HBASE-14061:
-

Addendum looks good. I can confirm that the Hadoop 3 profile works locally for 
me with it, as well as Hadoop 2.6.1 and 2.7.1.

I'd recommend updating references to "Hadoop 2.8.0+" to say "Hadoop 2.8.0+ / 
3.0.0-alpha1+", because the latter came out before Hadoop 2.8.0 and future 
releases on the Hadoop 3.y line don't have any particular time relationship to 
future Hadoop 2.w releases.



[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2017-01-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15815178#comment-15815178
 ] 

Hadoop QA commented on HBASE-14061:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
57s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 56s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
19s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 27s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 46s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 82m 58s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 126m 44s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.mob.compactions.TestPartitionedMobCompactor 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12846587/HBASE-14061.addendum.patch
 |
| JIRA Issue | HBASE-14061 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux a72ebadc77ce 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / ac3b1c9 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5215/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/5215/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 

[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2017-01-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15810901#comment-15810901
 ] 

Hudson commented on HBASE-14061:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #2285 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/2285/])
HBASE-14061 Support CF-level Storage Policy (liyu: rev 
f92a14ade635e4b081f3938620979b5864ac261f)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegion.java
* (edit) hbase-shell/src/main/ruby/hbase/admin.rb
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/StoreFileWriter.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHRegionFileSystem.java
* (edit) hbase-server/src/main/java/org/apache/hadoop/hbase/fs/HFileSystem.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionFileSystem.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCompaction.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
* (edit) 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HStore.java




[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2017-01-07 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15808102#comment-15808102
 ] 

Yu Li commented on HBASE-14061:
---

Thanks for the review [~tedyu] and [~ashish singhi], will commit this soon.



[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2017-01-07 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807677#comment-15807677
 ] 

Ted Yu commented on HBASE-14061:


+1



[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2017-01-07 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807447#comment-15807447
 ] 

Ashish Singhi commented on HBASE-14061:
---

lgtm



[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2017-01-06 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15807052#comment-15807052
 ] 

Yu Li commented on HBASE-14061:
---

The failed UT case is unrelated to the change here and is confirmed to pass 
locally.

[~tedyu] and [~ashish singhi], mind taking another look at the latest patch? 
Thanks.



[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2017-01-06 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15804662#comment-15804662
 ] 

Hadoop QA commented on HBASE-14061:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 0s 
{color} | {color:blue} rubocop was not available. {color} |
| {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 0s 
{color} | {color:blue} Ruby-lint was not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
39s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
40s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
47s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 54s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 12s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 10s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
30m 8s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 4s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 56s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 4s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 102m 32s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 5m 3s 
{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
42s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 159m 43s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.balancer.TestStochasticLoadBalancer2 
|
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845996/HBASE-14061.v4.patch |
| JIRA Issue | HBASE-14061 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  rubocop  ruby_lint  |
| uname | Linux 9e5bb944d768 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2017-01-05 Thread Ashish Singhi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15801374#comment-15801374
 ] 

Ashish Singhi commented on HBASE-14061:
---

{code}
/**
 * Return the encryption algorithm in use by this family
 * 
 * Not using {@code enum} here because HDFS is not using {@code enum} for storage policy,
 * see org.apache.hadoop.hdfs.server.blockmanagement.BlockStoragePolicySuite for more details
 */
public String getStoragePolicy() {
  return getValue(STORAGE_POLICY);
}

/**
 * Set the encryption algorithm for use with this family
 * @param policy
 */
public HColumnDescriptor setStoragePolicy(String policy) {
  setValue(STORAGE_POLICY, policy);
  return this;
}
{code}
That javadoc is for HCD#getEncryptionType and needs to be corrected.
Otherwise LGTM.
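A possible corrected wording (a sketch only, not the committed text; the null note follows Ted Yu's earlier review comment in this thread):
{code}
/**
 * @return The storage policy in use by this family; may be null if no policy has been set.
 *
 * Not using {@code enum} here because HDFS is not using {@code enum} for storage policy,
 * see org.apache.hadoop.hdfs.server.blockmanagement.BlockStoragePolicySuite for more details
 */
public String getStoragePolicy() {
  return getValue(STORAGE_POLICY);
}

/**
 * Set the storage policy for use with this family.
 * @param policy the HDFS storage policy name, e.g. "ALL_SSD"
 */
public HColumnDescriptor setStoragePolicy(String policy) {
  setValue(STORAGE_POLICY, policy);
  return this;
}
{code}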



[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2017-01-04 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15800526#comment-15800526
 ] 

Hadoop QA commented on HBASE-14061:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 9s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} rubocop {color} | {color:blue} 0m 0s 
{color} | {color:blue} rubocop was not available. {color} |
| {color:blue}0{color} | {color:blue} ruby-lint {color} | {color:blue} 0m 0s 
{color} | {color:blue} Ruby-lint was not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 20s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
49s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
2s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
25s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 10s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
8s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 2s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 1m 
3s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
36s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 36s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 58s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 85m 12s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 4m 27s 
{color} | {color:green} hbase-shell in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
39s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 134m 13s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845702/HBASE-14061.v3.patch |
| JIRA Issue | HBASE-14061 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  rubocop  ruby_lint  |
| uname | Linux 578c40254371 3.13.0-105-generic #152-Ubuntu SMP Fri Dec 2 
15:37:11 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 20a7ae2 |
| Default 

[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2017-01-02 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15793319#comment-15793319
 ] 

Ted Yu commented on HBASE-14061:


lgtm
{code}
 * @return Storage policy name.
{code}
Add a note that the returned policy name may be null.



[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2016-12-30 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15789076#comment-15789076
 ] 

Hadoop QA commented on HBASE-14061:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
25m 51s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
53s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 80m 27s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 118m 11s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12845161/HBASE-14061.v2.patch |
| JIRA Issue | HBASE-14061 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux fce82658b153 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 0e48665 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5099/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/5099/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Support CF-level Storage Policy
> ---
>
> Key: HBASE-14061
> URL: https://issues.apache.org/jira/browse/HBASE-14061
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile, regionserver
> Environment: hadoop-2.6.0
>Reporter: Victor Xu
>Assignee: Yu Li
> Attachments: HBASE-14061-master-v1.patch, HBASE-14061.v2.patch
>
>
> After reading 

[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2016-01-26 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15117348#comment-15117348
 ] 

Yu Li commented on HBASE-14061:
---

After an offline discussion with [~victorunique], I'll take over this task (let 
me know if you change your mind by any chance, Victor :-))

Adding links to two related HDFS JIRAs, which resolve issues in the HDFS layer 
in a heterogeneous environment (some machines have SSDs while others don't).

Will refine the patch according to the review comments later.

> Support CF-level Storage Policy
> ---
>
> Key: HBASE-14061
> URL: https://issues.apache.org/jira/browse/HBASE-14061
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile, regionserver
> Environment: hadoop-2.6.0
>Reporter: Victor Xu
>Assignee: Victor Xu
> Attachments: HBASE-14061-master-v1.patch
>
>
> After reading [HBASE-12848|https://issues.apache.org/jira/browse/HBASE-12848] 
> and [HBASE-12934|https://issues.apache.org/jira/browse/HBASE-12934], I wrote 
> a patch to implement cf-level storage policy. 
> My main purpose is to improve random-read performance for some really hot 
> data, which usually locates in certain column family of a big table.
> Usage:
> $ hbase shell
> > alter 'TABLE_NAME', METADATA => {'hbase.hstore.block.storage.policy' => 
> > 'POLICY_NAME'}
> > alter 'TABLE_NAME', {NAME=>'CF_NAME', METADATA => 
> > {'hbase.hstore.block.storage.policy' => 'POLICY_NAME'}}
> HDFS's setStoragePolicy can only take effect when new hfile is created in a 
> configured directory, so I had to make sub directories(for each cf) in 
> region's .tmp directory and set storage policy for them.
> Besides, I had to upgrade hadoop version to 2.6.0 because 
> dfs.getStoragePolicy cannot be easily written in reflection, and I needed 
> this api to finish my unit test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14061) Support CF-level Storage Policy

2015-07-28 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14645371#comment-14645371
 ] 

Andrew Purtell commented on HBASE-14061:


bq. 'hbase.hstore.block.storage.policy'

Consider making this a first class CF schema attribute. 'STORAGE_POLICY' 
perhaps?
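
As a hedged illustration of that suggestion, setting such an attribute from the 
Java client could look roughly like the sketch below; the 'STORAGE_POLICY' key 
and the 'ONE_SSD' value are assumptions for the example, not a committed API.

{code}
import org.apache.hadoop.hbase.HColumnDescriptor;

public class StoragePolicyAttributeSketch {
  public static void main(String[] args) {
    // Hypothetical: expose the policy as a dedicated schema attribute instead of
    // the generic 'hbase.hstore.block.storage.policy' metadata entry.
    HColumnDescriptor cf = new HColumnDescriptor("hot_cf");
    cf.setValue("STORAGE_POLICY", "ONE_SSD");
    System.out.println(cf.getValue("STORAGE_POLICY"));
  }
}
{code}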

bq. fs.getStoragePolicy cannot be easily written in reflection

Agreed, this is going to be a bit messy but I had a look at the code in the 
patch and it seems doable at first glance.

We could only commit this to a branch where we're willing to set the minimum 
supported Hadoop version to 2.6.0. Alternatively, if the storage policy API is 
called through reflection, we can port the result to more places.
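
A minimal sketch of the reflection route, assuming the underlying FileSystem 
(e.g. DistributedFileSystem on Hadoop 2.6+) exposes setStoragePolicy(Path, 
String); this is illustration only, not the committed implementation:

{code}
import java.lang.reflect.Method;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class ReflectiveStoragePolicy {
  private ReflectiveStoragePolicy() {}

  /**
   * Best-effort attempt to set a storage policy on a directory. Degrades to a
   * no-op when the running Hadoop version does not expose the API.
   */
  public static void trySetStoragePolicy(FileSystem fs, Path dir, String policy) {
    try {
      Method m = fs.getClass().getMethod("setStoragePolicy", Path.class, String.class);
      m.invoke(fs, dir, policy);
    } catch (NoSuchMethodException e) {
      // Older Hadoop (< 2.6.0): the method does not exist, keep the default policy.
    } catch (Exception e) {
      // Reflection or permission failure: skip, the writes themselves still succeed.
    }
  }
}
{code}

With this approach the policy call silently becomes a no-op on older Hadoop, 
which is what would allow shipping the feature to branches that keep a lower 
minimum supported Hadoop version.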

> Support CF-level Storage Policy
> ---
>
> Key: HBASE-14061
> URL: https://issues.apache.org/jira/browse/HBASE-14061
> Project: HBase
>  Issue Type: Sub-task
>  Components: HFile, regionserver
> Environment: hadoop-2.6.0
>Reporter: Victor Xu
>Assignee: Victor Xu
> Attachments: HBASE-14061-master-v1.patch
>
>
> After reading [HBASE-12848|https://issues.apache.org/jira/browse/HBASE-12848] 
> and [HBASE-12934|https://issues.apache.org/jira/browse/HBASE-12934], I wrote 
> a patch to implement cf-level storage policy. 
> My main purpose is to improve random-read performance for some really hot 
> data, which usually locates in certain column family of a big table.
> Usage:
> $ hbase shell
> > alter 'TABLE_NAME', METADATA => {'hbase.hstore.block.storage.policy' => 
> > 'POLICY_NAME'}
> > alter 'TABLE_NAME', {NAME=>'CF_NAME', METADATA => 
> > {'hbase.hstore.block.storage.policy' => 'POLICY_NAME'}}
> HDFS's setStoragePolicy can only take effect when new hfile is created in a 
> configured directory, so I had to make sub directories(for each cf) in 
> region's .tmp directory and set storage policy for them.
> Besides, I had to upgrade hadoop version to 2.6.0 because 
> dfs.getStoragePolicy cannot be easily written in reflection, and I needed 
> this api to finish my unit test.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)