[jira] [Updated] (HBASE-11157) [hbck] NotServingRegionException: Received close for regionName but we are not serving it

2014-05-22 Thread dailidong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dailidong updated HBASE-11157:
--

Fix Version/s: (was: 0.94.19)

 [hbck] NotServingRegionException: Received close for regionName but we are 
 not serving it
 ---

 Key: HBASE-11157
 URL: https://issues.apache.org/jira/browse/HBASE-11157
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.13
Reporter: dailidong
Priority: Trivial
 Attachments: HBASE-11157-v2.patch, HBASE-11157.patch


 If hbck closes a region and hits a NotServingRegionException, hbck hangs: 
 we ask the regionserver to close the region, but that regionserver is not 
 serving the region, so hbck should catch this exception instead of aborting.
 Trying to fix unassigned region...
 Exception in thread "main" org.apache.hadoop.ipc.RemoteException: 
 org.apache.hadoop.hbase.NotServingRegionException: Received close for 
 regionName but we are not serving it
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.closeRegion(HRegionServer.java:3204)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.closeRegion(HRegionServer.java:3185)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:323)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)
 at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1012)
 at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:87)
 at com.sun.proxy.$Proxy7.closeRegion(Unknown Source)
 at 
 org.apache.hadoop.hbase.util.HBaseFsckRepair.closeRegionSilentlyAndWait(HBaseFsckRepair.java:150)
 at 
 org.apache.hadoop.hbase.util.HBaseFsck.closeRegion(HBaseFsck.java:1565)
 at 
 org.apache.hadoop.hbase.util.HBaseFsck.checkRegionConsistency(HBaseFsck.java:1704)
 at 
 org.apache.hadoop.hbase.util.HBaseFsck.checkAndFixConsistency(HBaseFsck.java:1406)
 at 
 org.apache.hadoop.hbase.util.HBaseFsck.onlineConsistencyRepair(HBaseFsck.java:419)
 at 
 org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:438)
 at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3670)
 at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3489)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
 at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:3483)
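
The fix the report proposes — catch the exception so hbck keeps going — can be sketched like this (class and method names are hypothetical stand-ins; the real call site is HBaseFsckRepair.closeRegionSilentlyAndWait):

```java
// Sketch only: shows the try/catch pattern the report proposes, using a
// stand-in exception class so the example is self-contained.
class NotServingRegionException extends RuntimeException {
    NotServingRegionException(String msg) { super(msg); }
}

class RegionCloser {
    /**
     * Ask a regionserver to close a region. If the server reports it is not
     * serving the region, there is nothing to close, so log and continue
     * instead of letting the exception abort the whole hbck run.
     *
     * @param closeRpc stand-in for the closeRegion RPC
     * @return true if the region is closed (or was never open on that server)
     */
    static boolean closeRegionSilently(Runnable closeRpc) {
        try {
            closeRpc.run();
            return true;
        } catch (NotServingRegionException e) {
            // Region is not on this server; treat it as already closed.
            System.out.println("Region not on this server, skipping close: "
                + e.getMessage());
            return true;
        }
    }
}
```

With this pattern, hbck can continue repairing the remaining regions instead of exiting on the first stale assignment.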



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11157) [hbck] NotServingRegionException: Received close for regionName but we are not serving it

2014-05-22 Thread dailidong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11157?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

dailidong updated HBASE-11157:
--

Affects Version/s: 0.94.19

 [hbck] NotServingRegionException: Received close for regionName but we are 
 not serving it
 ---

 Key: HBASE-11157
 URL: https://issues.apache.org/jira/browse/HBASE-11157
 Project: HBase
  Issue Type: Bug
  Components: hbck
Affects Versions: 0.94.13, 0.94.19
Reporter: dailidong
Priority: Trivial
 Attachments: HBASE-11157-v2.patch, HBASE-11157.patch


 If hbck closes a region and hits a NotServingRegionException, hbck hangs: 
 we ask the regionserver to close the region, but that regionserver is not 
 serving the region, so hbck should catch this exception instead of aborting.
 Trying to fix unassigned region...
 Exception in thread "main" org.apache.hadoop.ipc.RemoteException: 
 org.apache.hadoop.hbase.NotServingRegionException: Received close for 
 regionName but we are not serving it
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.closeRegion(HRegionServer.java:3204)
 at 
 org.apache.hadoop.hbase.regionserver.HRegionServer.closeRegion(HRegionServer.java:3185)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
 at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
 at java.lang.reflect.Method.invoke(Method.java:606)
 at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Server.call(WritableRpcEngine.java:323)
 at 
 org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1426)
 at org.apache.hadoop.hbase.ipc.HBaseClient.call(HBaseClient.java:1012)
 at 
 org.apache.hadoop.hbase.ipc.WritableRpcEngine$Invoker.invoke(WritableRpcEngine.java:87)
 at com.sun.proxy.$Proxy7.closeRegion(Unknown Source)
 at 
 org.apache.hadoop.hbase.util.HBaseFsckRepair.closeRegionSilentlyAndWait(HBaseFsckRepair.java:150)
 at 
 org.apache.hadoop.hbase.util.HBaseFsck.closeRegion(HBaseFsck.java:1565)
 at 
 org.apache.hadoop.hbase.util.HBaseFsck.checkRegionConsistency(HBaseFsck.java:1704)
 at 
 org.apache.hadoop.hbase.util.HBaseFsck.checkAndFixConsistency(HBaseFsck.java:1406)
 at 
 org.apache.hadoop.hbase.util.HBaseFsck.onlineConsistencyRepair(HBaseFsck.java:419)
 at 
 org.apache.hadoop.hbase.util.HBaseFsck.onlineHbck(HBaseFsck.java:438)
 at org.apache.hadoop.hbase.util.HBaseFsck.exec(HBaseFsck.java:3670)
 at org.apache.hadoop.hbase.util.HBaseFsck.run(HBaseFsck.java:3489)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
 at org.apache.hadoop.hbase.util.HBaseFsck.main(HBaseFsck.java:3483)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10835) DBE encode path improvements

2014-05-22 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-10835:
---

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to master. Thanks for the reviews.

 DBE encode path improvements
 

 Key: HBASE-10835
 URL: https://issues.apache.org/jira/browse/HBASE-10835
 Project: HBase
  Issue Type: Improvement
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.99.0

 Attachments: HBASE-10835.patch, HBASE-10835_V2.patch, 
 HBASE-10835_V3.patch, HBASE-10835_V4.patch, HBASE-10835_V5.patch


 Currently we first write KVs (Cells) into a buffer and then pass that 
 buffer to the DBE encoder. The encoder reads the KVs back one by one from 
 the buffer, encodes them, and creates a new buffer.
 There is no need for this model now. Previously we had the option of no 
 encoding on disk and encoding only in the cache; at that time the buffer 
 read from an HFile block was passed in and encoded.
 So we can now encode cell by cell. Making this change requires a NoOp DBE 
 implementation that simply writes a cell as-is, without any encoding.
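
The NoOp DBE implementation described above would write a cell through unchanged. A minimal sketch, with hypothetical types standing in for HBase's Cell and DataBlockEncoder API:

```java
import java.nio.ByteBuffer;

// Hypothetical stand-in for an HBase Cell: just key and value bytes.
class SimpleCell {
    final byte[] key, value;
    SimpleCell(byte[] key, byte[] value) { this.key = key; this.value = value; }
}

// "NoOp" encoder: copies the cell through unchanged (4-byte key length,
// 4-byte value length, then the raw bytes). With such an encoder the write
// path can encode cell by cell, with no intermediate buffer pass.
class NoOpEncoder {
    byte[] encode(SimpleCell cell) {
        ByteBuffer buf = ByteBuffer.allocate(8 + cell.key.length + cell.value.length);
        buf.putInt(cell.key.length).putInt(cell.value.length);
        buf.put(cell.key).put(cell.value);
        return buf.array();
    }
}
```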



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10831) IntegrationTestIngestWithACL is not setting up LoadTestTool correctly

2014-05-22 Thread Vandana Ayyalasomayajula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vandana Ayyalasomayajula updated HBASE-10831:
-

Attachment: (was: HBASE-10831_v1.patch)

 IntegrationTestIngestWithACL is not setting up LoadTestTool correctly
 -

 Key: HBASE-10831
 URL: https://issues.apache.org/jira/browse/HBASE-10831
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.1
Reporter: Andrew Purtell
Assignee: Vandana Ayyalasomayajula
 Fix For: 0.99.0, 0.98.3

 Attachments: HBASE-10831_98_v1.patch, HBASE-10831_trunk_v2.patch


 IntegrationTestIngestWithACL is not setting up LoadTestTool correctly.
 {noformat}
 Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 601.709 sec 
 <<< FAILURE!
 testIngest(org.apache.hadoop.hbase.IntegrationTestIngestWithACL)  Time 
 elapsed: 601.489 sec  <<< FAILURE!
 java.lang.AssertionError: Failed to initialize LoadTestTool expected:<0> but 
 was:<1>
 at org.junit.Assert.fail(Assert.java:88)
 at org.junit.Assert.failNotEquals(Assert.java:743)
 at org.junit.Assert.assertEquals(Assert.java:118)
 at org.junit.Assert.assertEquals(Assert.java:555)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngest.initTable(IntegrationTestIngest.java:74)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngest.setUpCluster(IntegrationTestIngest.java:69)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngestWithACL.setUpCluster(IntegrationTestIngestWithACL.java:58)
 at 
 org.apache.hadoop.hbase.IntegrationTestBase.setUp(IntegrationTestBase.java:89)
 {noformat}
 Could be related to HBASE-10675?
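
The failing check is the initTable assertion that the tool initializes with exit code 0. A generic sketch of that pattern (the real test drives Hadoop's ToolRunner and LoadTestTool; the interface below is a stand-in):

```java
// Generic sketch of the failing assertion: run a command-line tool and
// require a zero exit code, so bad setup surfaces as a test failure.
class ToolInitDemo {
    interface Tool { int run(String[] args); }  // stand-in for Hadoop's Tool

    static void initTool(Tool tool, String[] args) {
        int ret = tool.run(args);
        if (ret != 0) {
            throw new AssertionError(
                "Failed to initialize LoadTestTool expected:<0> but was:<" + ret + ">");
        }
    }
}
```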



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11156) Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use io.native.lib.available

2014-05-22 Thread Jiten (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11156?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14005696#comment-14005696
 ] 

Jiten commented on HBASE-11156:
---

Hello, I sent a mail to u...@hbase.apache.org with the details. By the way, I 
just want a solution; my intention is not to file a JIRA for this issue. 
Kindly reply by mail as soon as possible.

  Configuration.deprecation: hadoop.native.lib is deprecated. Instead, use 
 io.native.lib.available
 -

 Key: HBASE-11156
 URL: https://issues.apache.org/jira/browse/HBASE-11156
 Project: HBase
  Issue Type: Bug
  Components: Admin
Affects Versions: 0.96.1.1
Reporter: Jiten
Priority: Critical

 # hbase shell
 2014-05-13 14:51:41,582 INFO  [main] Configuration.deprecation: 
 hadoop.native.lib is deprecated. Instead, use io.native.lib.available
 HBase Shell; enter 'help<RETURN>' for list of supported commands.
 Type "exit<RETURN>" to leave the HBase Shell
 Version 0.96.1.1-cdh5.0.0, rUnknown, Thu Mar 27 23:01:59 PDT 2014.
 Not able to create a table in HBase. Please help.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10831) IntegrationTestIngestWithACL is not setting up LoadTestTool correctly

2014-05-22 Thread Vandana Ayyalasomayajula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vandana Ayyalasomayajula updated HBASE-10831:
-

Attachment: HBASE-10831_trunk_v3.patch
HBASE-10831_98_v3.patch

Addressed comments.

 IntegrationTestIngestWithACL is not setting up LoadTestTool correctly
 -

 Key: HBASE-10831
 URL: https://issues.apache.org/jira/browse/HBASE-10831
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.1
Reporter: Andrew Purtell
Assignee: Vandana Ayyalasomayajula
 Fix For: 0.99.0, 0.98.3

 Attachments: HBASE-10831_98_v1.patch, HBASE-10831_98_v3.patch, 
 HBASE-10831_trunk_v2.patch, HBASE-10831_trunk_v3.patch


 IntegrationTestIngestWithACL is not setting up LoadTestTool correctly.
 {noformat}
 Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 601.709 sec 
 <<< FAILURE!
 testIngest(org.apache.hadoop.hbase.IntegrationTestIngestWithACL)  Time 
 elapsed: 601.489 sec  <<< FAILURE!
 java.lang.AssertionError: Failed to initialize LoadTestTool expected:<0> but 
 was:<1>
 at org.junit.Assert.fail(Assert.java:88)
 at org.junit.Assert.failNotEquals(Assert.java:743)
 at org.junit.Assert.assertEquals(Assert.java:118)
 at org.junit.Assert.assertEquals(Assert.java:555)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngest.initTable(IntegrationTestIngest.java:74)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngest.setUpCluster(IntegrationTestIngest.java:69)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngestWithACL.setUpCluster(IntegrationTestIngestWithACL.java:58)
 at 
 org.apache.hadoop.hbase.IntegrationTestBase.setUp(IntegrationTestBase.java:89)
 {noformat}
 Could be related to HBASE-10675?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11234) FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result

2014-05-22 Thread chunhui shen (JIRA)
chunhui shen created HBASE-11234:


 Summary: FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong 
result
 Key: HBASE-11234
 URL: https://issues.apache.org/jira/browse/HBASE-11234
 Project: HBase
  Issue Type: Bug
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.99.0


As Ted found, 
{format}
With this change:

Index: 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
===
--- 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
 (revision 1596579)
+++ 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
 (working copy)
@@ -51,6 +51,7 @@
 import org.apache.hadoop.hbase.filter.FilterList.Operator;
 import org.apache.hadoop.hbase.filter.PageFilter;
 import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
+import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
 import org.apache.hadoop.hbase.io.hfile.CacheConfig;
 import org.apache.hadoop.hbase.io.hfile.HFileContext;
 import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
@@ -90,6 +91,7 @@
 CacheConfig cacheConf = new CacheConfig(TEST_UTIL.getConfiguration());
 HFileContextBuilder hcBuilder = new HFileContextBuilder();
 hcBuilder.withBlockSize(2 * 1024);
+hcBuilder.withDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
 HFileContext hFileContext = hcBuilder.build();
 StoreFile.Writer writer = new StoreFile.WriterBuilder(
 TEST_UTIL.getConfiguration(), cacheConf, fs).withOutputDir(

I got:

java.lang.AssertionError: 
expected:<testRow0197/testCf:testQual/1400712260004/Put/vlen=13/mvcc=5> but 
was:<testRow0198/testCf:testQual/1400712260004/Put/vlen=13/mvcc=0>
  at org.junit.Assert.fail(Assert.java:88)
  at org.junit.Assert.failNotEquals(Assert.java:743)
  at org.junit.Assert.assertEquals(Assert.java:118)
  at org.junit.Assert.assertEquals(Assert.java:144)
  at 
org.apache.hadoop.hbase.regionserver.TestReversibleScanners.seekTestOfReversibleKeyValueScanner(TestReversibleScanners.java:533)
  at 
org.apache.hadoop.hbase.regionserver.TestReversibleScanners.testReversibleStoreFileScanner(TestReversibleScanners.java:108)
{format}


After debugging, it seems FastDiffDeltaEncoder#getFirstKeyInBlock is broken, 
which causes HFileScanner#seekBefore to return the wrong result.


The solution is simple, see the patch.
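
The contract at issue is that seekBefore positions the scanner on the last key strictly before the requested key, which is why the test expects testRow0197 when seeking near testRow0198. A generic illustration over a sorted array (not HBase's HFileScanner API):

```java
import java.util.Arrays;

class SeekBeforeDemo {
    /**
     * Return the last key strictly before {@code target}, or null if none
     * exists. This is the behavior HFileScanner#seekBefore should have; the
     * bug report says FAST_DIFF's getFirstKeyInBlock made it land on the
     * wrong key.
     */
    static String seekBefore(String[] sortedKeys, String target) {
        int idx = Arrays.binarySearch(sortedKeys, target);
        if (idx < 0) idx = -(idx + 1);  // insertion point when key is absent
        return idx == 0 ? null : sortedKeys[idx - 1];
    }
}
```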



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11234) FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result

2014-05-22 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-11234:
-

Description: 
As Ted found, 
{noformat}
With this change:

Index: 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
===
--- 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
 (revision 1596579)
+++ 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
 (working copy)
@@ -51,6 +51,7 @@
 import org.apache.hadoop.hbase.filter.FilterList.Operator;
 import org.apache.hadoop.hbase.filter.PageFilter;
 import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
+import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
 import org.apache.hadoop.hbase.io.hfile.CacheConfig;
 import org.apache.hadoop.hbase.io.hfile.HFileContext;
 import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
@@ -90,6 +91,7 @@
 CacheConfig cacheConf = new CacheConfig(TEST_UTIL.getConfiguration());
 HFileContextBuilder hcBuilder = new HFileContextBuilder();
 hcBuilder.withBlockSize(2 * 1024);
+hcBuilder.withDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
 HFileContext hFileContext = hcBuilder.build();
 StoreFile.Writer writer = new StoreFile.WriterBuilder(
 TEST_UTIL.getConfiguration(), cacheConf, fs).withOutputDir(

I got:

java.lang.AssertionError: 
expected:<testRow0197/testCf:testQual/1400712260004/Put/vlen=13/mvcc=5> but 
was:<testRow0198/testCf:testQual/1400712260004/Put/vlen=13/mvcc=0>
  at org.junit.Assert.fail(Assert.java:88)
  at org.junit.Assert.failNotEquals(Assert.java:743)
  at org.junit.Assert.assertEquals(Assert.java:118)
  at org.junit.Assert.assertEquals(Assert.java:144)
  at 
org.apache.hadoop.hbase.regionserver.TestReversibleScanners.seekTestOfReversibleKeyValueScanner(TestReversibleScanners.java:533)
  at 
org.apache.hadoop.hbase.regionserver.TestReversibleScanners.testReversibleStoreFileScanner(TestReversibleScanners.java:108)
{noformat}


After debugging, it seems FastDiffDeltaEncoder#getFirstKeyInBlock is broken, 
which causes HFileScanner#seekBefore to return the wrong result.


The solution is simple, see the patch.

  was:
As Ted found, 
{format}
With this change:

Index: 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
===
--- 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
 (revision 1596579)
+++ 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
 (working copy)
@@ -51,6 +51,7 @@
 import org.apache.hadoop.hbase.filter.FilterList.Operator;
 import org.apache.hadoop.hbase.filter.PageFilter;
 import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
+import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
 import org.apache.hadoop.hbase.io.hfile.CacheConfig;
 import org.apache.hadoop.hbase.io.hfile.HFileContext;
 import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
@@ -90,6 +91,7 @@
 CacheConfig cacheConf = new CacheConfig(TEST_UTIL.getConfiguration());
 HFileContextBuilder hcBuilder = new HFileContextBuilder();
 hcBuilder.withBlockSize(2 * 1024);
+hcBuilder.withDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
 HFileContext hFileContext = hcBuilder.build();
 StoreFile.Writer writer = new StoreFile.WriterBuilder(
 TEST_UTIL.getConfiguration(), cacheConf, fs).withOutputDir(

I got:

java.lang.AssertionError: 
expected:<testRow0197/testCf:testQual/1400712260004/Put/vlen=13/mvcc=5> but 
was:<testRow0198/testCf:testQual/1400712260004/Put/vlen=13/mvcc=0>
  at org.junit.Assert.fail(Assert.java:88)
  at org.junit.Assert.failNotEquals(Assert.java:743)
  at org.junit.Assert.assertEquals(Assert.java:118)
  at org.junit.Assert.assertEquals(Assert.java:144)
  at 
org.apache.hadoop.hbase.regionserver.TestReversibleScanners.seekTestOfReversibleKeyValueScanner(TestReversibleScanners.java:533)
  at 
org.apache.hadoop.hbase.regionserver.TestReversibleScanners.testReversibleStoreFileScanner(TestReversibleScanners.java:108)
{format}


After debugging, it seems FastDiffDeltaEncoder#getFirstKeyInBlock is broken, 
which causes HFileScanner#seekBefore to return the wrong result.


The solution is simple, see the patch.


 FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result
 

 Key: HBASE-11234
 URL: https://issues.apache.org/jira/browse/HBASE-11234
 Project: HBase
  Issue Type: Bug
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 

[jira] [Updated] (HBASE-11234) FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result

2014-05-22 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-11234:
-

Attachment: HBASE-11234.patch

 FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result
 

 Key: HBASE-11234
 URL: https://issues.apache.org/jira/browse/HBASE-11234
 Project: HBase
  Issue Type: Bug
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.99.0

 Attachments: HBASE-11234.patch


 As Ted found, 
 {format}
 With this change:
 Index: 
 hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
 ===
 --- 
 hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
(revision 1596579)
 +++ 
 hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
(working copy)
 @@ -51,6 +51,7 @@
  import org.apache.hadoop.hbase.filter.FilterList.Operator;
  import org.apache.hadoop.hbase.filter.PageFilter;
  import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
 +import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
  import org.apache.hadoop.hbase.io.hfile.CacheConfig;
  import org.apache.hadoop.hbase.io.hfile.HFileContext;
  import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
 @@ -90,6 +91,7 @@
  CacheConfig cacheConf = new CacheConfig(TEST_UTIL.getConfiguration());
  HFileContextBuilder hcBuilder = new HFileContextBuilder();
  hcBuilder.withBlockSize(2 * 1024);
 +hcBuilder.withDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
  HFileContext hFileContext = hcBuilder.build();
  StoreFile.Writer writer = new StoreFile.WriterBuilder(
  TEST_UTIL.getConfiguration(), cacheConf, fs).withOutputDir(
 I got:
 java.lang.AssertionError: 
 expected:<testRow0197/testCf:testQual/1400712260004/Put/vlen=13/mvcc=5> 
 but was:<testRow0198/testCf:testQual/1400712260004/Put/vlen=13/mvcc=0>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:144)
   at 
 org.apache.hadoop.hbase.regionserver.TestReversibleScanners.seekTestOfReversibleKeyValueScanner(TestReversibleScanners.java:533)
   at 
 org.apache.hadoop.hbase.regionserver.TestReversibleScanners.testReversibleStoreFileScanner(TestReversibleScanners.java:108)
 {format}
 After debugging, it seems FastDiffDeltaEncoder#getFirstKeyInBlock is 
 broken, which causes HFileScanner#seekBefore to return the wrong result.
 The solution is simple, see the patch.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11234) FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result

2014-05-22 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14005734#comment-14005734
 ] 

ramkrishna.s.vasudevan commented on HBASE-11234:


Patch looks good to me.

 FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result
 

 Key: HBASE-11234
 URL: https://issues.apache.org/jira/browse/HBASE-11234
 Project: HBase
  Issue Type: Bug
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.99.0

 Attachments: HBASE-11234.patch


 As Ted found, 
 With this change:
 {noformat}
 Index: 
 hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
 ===
 --- 
 hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
(revision 1596579)
 +++ 
 hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
(working copy)
 @@ -51,6 +51,7 @@
  import org.apache.hadoop.hbase.filter.FilterList.Operator;
  import org.apache.hadoop.hbase.filter.PageFilter;
  import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
 +import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
  import org.apache.hadoop.hbase.io.hfile.CacheConfig;
  import org.apache.hadoop.hbase.io.hfile.HFileContext;
  import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
 @@ -90,6 +91,7 @@
  CacheConfig cacheConf = new CacheConfig(TEST_UTIL.getConfiguration());
  HFileContextBuilder hcBuilder = new HFileContextBuilder();
  hcBuilder.withBlockSize(2 * 1024);
 +hcBuilder.withDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
  HFileContext hFileContext = hcBuilder.build();
  StoreFile.Writer writer = new StoreFile.WriterBuilder(
  TEST_UTIL.getConfiguration(), cacheConf, fs).withOutputDir(
 {noformat}
 I got:
 java.lang.AssertionError: 
 expected:<testRow0197/testCf:testQual/1400712260004/Put/vlen=13/mvcc=5> 
 but was:<testRow0198/testCf:testQual/1400712260004/Put/vlen=13/mvcc=0>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:144)
   at 
 org.apache.hadoop.hbase.regionserver.TestReversibleScanners.seekTestOfReversibleKeyValueScanner(TestReversibleScanners.java:533)
   at 
 org.apache.hadoop.hbase.regionserver.TestReversibleScanners.testReversibleStoreFileScanner(TestReversibleScanners.java:108)
 After debugging, it seems FastDiffDeltaEncoder#getFirstKeyInBlock is 
 broken, which causes HFileScanner#seekBefore to return the wrong result.
 The solution is simple, see the patch.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10831) IntegrationTestIngestWithACL is not setting up LoadTestTool correctly

2014-05-22 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14005742#comment-14005742
 ] 

ramkrishna.s.vasudevan commented on HBASE-10831:


+1 on patch.

 IntegrationTestIngestWithACL is not setting up LoadTestTool correctly
 -

 Key: HBASE-10831
 URL: https://issues.apache.org/jira/browse/HBASE-10831
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.1
Reporter: Andrew Purtell
Assignee: Vandana Ayyalasomayajula
 Fix For: 0.99.0, 0.98.3

 Attachments: HBASE-10831_98_v1.patch, HBASE-10831_98_v3.patch, 
 HBASE-10831_trunk_v2.patch, HBASE-10831_trunk_v3.patch


 IntegrationTestIngestWithACL is not setting up LoadTestTool correctly.
 {noformat}
 Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 601.709 sec 
 <<< FAILURE!
 testIngest(org.apache.hadoop.hbase.IntegrationTestIngestWithACL)  Time 
 elapsed: 601.489 sec  <<< FAILURE!
 java.lang.AssertionError: Failed to initialize LoadTestTool expected:<0> but 
 was:<1>
 at org.junit.Assert.fail(Assert.java:88)
 at org.junit.Assert.failNotEquals(Assert.java:743)
 at org.junit.Assert.assertEquals(Assert.java:118)
 at org.junit.Assert.assertEquals(Assert.java:555)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngest.initTable(IntegrationTestIngest.java:74)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngest.setUpCluster(IntegrationTestIngest.java:69)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngestWithACL.setUpCluster(IntegrationTestIngestWithACL.java:58)
 at 
 org.apache.hadoop.hbase.IntegrationTestBase.setUp(IntegrationTestBase.java:89)
 {noformat}
 Could be related to HBASE-10675?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11234) FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result

2014-05-22 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14005748#comment-14005748
 ] 

Anoop Sam John commented on HBASE-11234:


+1

 FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result
 

 Key: HBASE-11234
 URL: https://issues.apache.org/jira/browse/HBASE-11234
 Project: HBase
  Issue Type: Bug
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.99.0

 Attachments: HBASE-11234.patch


 As Ted found, 
 With this change:
 {noformat}
 Index: 
 hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
 ===
 --- 
 hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
(revision 1596579)
 +++ 
 hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
(working copy)
 @@ -51,6 +51,7 @@
  import org.apache.hadoop.hbase.filter.FilterList.Operator;
  import org.apache.hadoop.hbase.filter.PageFilter;
  import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
 +import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
  import org.apache.hadoop.hbase.io.hfile.CacheConfig;
  import org.apache.hadoop.hbase.io.hfile.HFileContext;
  import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
 @@ -90,6 +91,7 @@
  CacheConfig cacheConf = new CacheConfig(TEST_UTIL.getConfiguration());
  HFileContextBuilder hcBuilder = new HFileContextBuilder();
  hcBuilder.withBlockSize(2 * 1024);
 +hcBuilder.withDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
  HFileContext hFileContext = hcBuilder.build();
  StoreFile.Writer writer = new StoreFile.WriterBuilder(
  TEST_UTIL.getConfiguration(), cacheConf, fs).withOutputDir(
 {noformat}
 I got:
 java.lang.AssertionError: 
 expected:<testRow0197/testCf:testQual/1400712260004/Put/vlen=13/mvcc=5> 
 but was:<testRow0198/testCf:testQual/1400712260004/Put/vlen=13/mvcc=0>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:144)
   at 
 org.apache.hadoop.hbase.regionserver.TestReversibleScanners.seekTestOfReversibleKeyValueScanner(TestReversibleScanners.java:533)
   at 
 org.apache.hadoop.hbase.regionserver.TestReversibleScanners.testReversibleStoreFileScanner(TestReversibleScanners.java:108)
 After debugging, it seems FastDiffDeltaEncoder#getFirstKeyInBlock is 
 broken, which causes HFileScanner#seekBefore to return the wrong result.
 The solution is simple, see the patch.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10070) HBase read high-availability using timeline-consistent region replicas

2014-05-22 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14005780#comment-14005780
 ] 

Mikhail Antonov commented on HBASE-10070:
-

[~devaraj],  [~stack],  [~enis]

Yeah, the reason I brought it up is that unlike changes in, for example, the 
LB, this is a public (yet evolving) API, so I just wanted to double-check 
that we don't expose to client code details which would limit us later.

bq. Even within a given Consistency model, you may want different execution 
strategies I think (like for TIMELINE consistency, parallel and parallel with 
delay, or go to first replica, then second, then third, etc). In the committed 
code in branch, the consistency model implies hard coded execution model.
Sure, any consistency model (except the current behavior, I guess) would 
benefit from being customizable.

bq. So, rather than have the client ask for level of 'consistency' in the API, 
instead, the replica interaction would be set on client construction dependent 
on the plugin supplied?
Either, or both, I guess. If the level of consistency (strong, timeline, or 
quorum-strong) could be defined in config files per client (not per 
operation), we would be able to avoid having this enum. But we consider 
being able to define the consistency level per operation mandatory, right?

In that case I'm thinking of the following model:
 - deploy a pluggable policy on the client side which decides on RPC 
requests; this policy would be used globally as the default for all requests
 - consider the Consistency enum (and point it out in both user- and 
dev-level docs) a hint, used only to customize individual scans or gets, and 
probably add a note in the class documentation that the cluster may ignore 
the flag if the feature isn't available
 - the current timeline consistency model doesn't assume quorums for writes, 
so I think it makes sense to add QUORUM_STRONG to the enum.
 
Thoughts?
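
The model sketched above — a client-wide default plus a per-operation hint — might look like this (hypothetical names, not the committed branch API):

```java
// Per-operation consistency hint on top of a client-wide default.
// QUORUM_STRONG is the proposed addition; the enum is a hint the cluster
// may ignore if the feature isn't available.
enum Consistency { STRONG, TIMELINE, QUORUM_STRONG }

class GetRequest {
    private Consistency consistency;

    GetRequest(Consistency clientDefault) { this.consistency = clientDefault; }

    // Override the client-wide default for this one operation.
    GetRequest withConsistency(Consistency hint) {
        this.consistency = hint;
        return this;
    }

    Consistency consistency() { return consistency; }
}
```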


 HBase read high-availability using timeline-consistent region replicas
 --

 Key: HBASE-10070
 URL: https://issues.apache.org/jira/browse/HBASE-10070
 Project: HBase
  Issue Type: New Feature
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Attachments: HighAvailabilityDesignforreadsApachedoc.pdf


 In the present HBase architecture, it is hard, probably impossible, to 
 satisfy constraints like 99th percentile of the reads will be served under 10 
 ms. One of the major factors that affects this is the MTTR for regions. There 
 are three phases in the MTTR process - detection, assignment, and recovery. 
 Of these, the detection is usually the longest and is presently in the order 
 of 20-30 seconds. During this time, the clients would not be able to read the 
 region data.
 However, some clients will be better served if regions are available for 
 eventually consistent reads during recovery. This will help satisfy low 
 latency guarantees for the class of applications which can work with stale 
 reads.
 For improving read availability, we propose a replicated read-only region 
 serving design, also referred to as secondary regions, or region shadows. 
 Extending the current model of a region being opened for reads and writes in 
 a single region server, the region will also be opened for reading in other 
 region servers. The region server which hosts the region for reads and writes 
 (as in the current case) will be declared as PRIMARY, while 0 or more region 
 servers might be hosting the region as SECONDARY. There may be more than one 
 secondary (replica count > 2).
 Will attach a design doc shortly which contains most of the details and some 
 thoughts about development approaches. Reviews are more than welcome. 
 We also have a proof of concept patch, which includes the master and regions 
 server side of changes. Client side changes will be coming soon as well. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10831) IntegrationTestIngestWithACL is not setting up LoadTestTool correctly

2014-05-22 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14005781#comment-14005781
 ] 

Anoop Sam John commented on HBASE-10831:


{code}
+if (cmd.hasOption(OPT_AUTHN)) {
+  authnFileName = cmd.getOptionValue(OPT_AUTHN);
+  if (LoadTestTool.isSecure(getConf()) && (StringUtils.isEmpty(authnFileName))) {
+super.printUsage();
{code}
When LoadTestTool.isSecure() is true, we have to ensure that OPT_AUTHN is 
provided and has a non-empty value. The above check will not happen when the 
option is not provided at all. We need to change the check.
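For illustration, a hedged sketch (a hypothetical helper, not the patch itself) of a check that enforces the secure-mode rule regardless of whether the option was supplied, instead of only inside the hasOption branch:

```java
public class AuthnCheck {
    // Hypothetical restructuring of the quoted check: validate the
    // secure-mode requirement even when the authn option was never
    // supplied, not only when cmd.hasOption(OPT_AUTHN) is true.
    static boolean authnArgsValid(boolean secure, String authnFileName) {
        // In secure mode the authn file is mandatory and must be non-empty.
        if (secure && (authnFileName == null || authnFileName.isEmpty())) {
            return false; // caller would printUsage() and abort
        }
        return true;
    }
}
```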

{code}
+String keyTableFileLocation = conf.get(keyTabFileConfKey);
{code}
Rename to keyTabFileLocation (?)

Else the patch looks good to me.

 IntegrationTestIngestWithACL is not setting up LoadTestTool correctly
 -

 Key: HBASE-10831
 URL: https://issues.apache.org/jira/browse/HBASE-10831
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.1
Reporter: Andrew Purtell
Assignee: Vandana Ayyalasomayajula
 Fix For: 0.99.0, 0.98.3

 Attachments: HBASE-10831_98_v1.patch, HBASE-10831_98_v3.patch, 
 HBASE-10831_trunk_v2.patch, HBASE-10831_trunk_v3.patch


 IntegrationTestIngestWithACL is not setting up LoadTestTool correctly.
 {noformat}
 Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 601.709 sec 
  FAILURE!
 testIngest(org.apache.hadoop.hbase.IntegrationTestIngestWithACL)  Time 
 elapsed: 601.489 sec   FAILURE!
 java.lang.AssertionError: Failed to initialize LoadTestTool expected:<0> but 
 was:<1>
 at org.junit.Assert.fail(Assert.java:88)
 at org.junit.Assert.failNotEquals(Assert.java:743)
 at org.junit.Assert.assertEquals(Assert.java:118)
 at org.junit.Assert.assertEquals(Assert.java:555)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngest.initTable(IntegrationTestIngest.java:74)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngest.setUpCluster(IntegrationTestIngest.java:69)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngestWithACL.setUpCluster(IntegrationTestIngestWithACL.java:58)
 at 
 org.apache.hadoop.hbase.IntegrationTestBase.setUp(IntegrationTestBase.java:89)
 {noformat}
 Could be related to HBASE-10675?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11229) Change block cache percentage metrics to be doubles rather than ints

2014-05-22 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14005797#comment-14005797
 ] 

Nick Dimiduk commented on HBASE-11229:
--

+1

 Change block cache percentage metrics to be doubles rather than ints
 

 Key: HBASE-11229
 URL: https://issues.apache.org/jira/browse/HBASE-11229
 Project: HBase
  Issue Type: Sub-task
  Components: metrics
Reporter: stack
Assignee: stack
 Fix For: 0.99.0

 Attachments: 11229.txt


 See parent issue.  Small changes in the hit percentage can have large 
 implications, even when movement is inside a single percent: i.e. going from 
 99.11 to 99.87 percent.  As is, percents are ints.  Make them doubles.
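A minimal sketch (class and method names are mine, not HBase's) of the truncation the change addresses: reported as ints, 99.11% and 99.87% hit ratios are indistinguishable, even though the miss rate dropped roughly sevenfold:

```java
public class HitRatio {
    // Integer percentage: truncates everything below a whole percent.
    static int asIntPercent(long hits, long total) {
        return (int) (hits * 100 / total);
    }

    // Double percentage: preserves sub-percent movement.
    static double asDoublePercent(long hits, long total) {
        return hits * 100.0 / total;
    }

    public static void main(String[] args) {
        System.out.println(asIntPercent(9911, 10000));    // 99
        System.out.println(asIntPercent(9987, 10000));    // 99 -- indistinguishable
        System.out.println(asDoublePercent(9987, 10000)); // 99.87
    }
}
```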



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11223) Limit the actions number of a call in the batch

2014-05-22 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11223?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14005802#comment-14005802
 ] 

Nicolas Liochon commented on HBASE-11223:
-

bq. right?
For sure :-)
But as it needs to be done server side anyway, and it should remain an 
extreme case (i.e. a bug in the client code) that we don't need to optimize, 
IMHO there is not much value in doing it client side.

 Limit the actions number of a call in the batch 
 

 Key: HBASE-11223
 URL: https://issues.apache.org/jira/browse/HBASE-11223
 Project: HBase
  Issue Type: Bug
  Components: Client
Affects Versions: 0.99.0
Reporter: Liu Shaohui
Assignee: Liu Shaohui

 A huge batch operation can make the regionserver crash due to GC.
 The extreme case looks like this:
 {code}
 final List<Delete> deletes = new ArrayList<Delete>();
 final long rows = 400;
 for (long i = 0; i < rows; ++i) {
   deletes.add(new Delete(Bytes.toBytes(i)));
 }
 table.delete(deletes);
 {code}
 We should limit the actions number of a call in the batch. 
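As an illustration of what such a limit means, a minimal client-side sketch (class and method names are hypothetical, not HBase API) that splits an oversized action list into bounded chunks; as noted in the discussion, the authoritative guard still belongs on the server:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchLimiter {
    // Hypothetical client-side mitigation: split a large action list
    // into chunks of at most maxPerCall before submitting, so no
    // single multi-call carries an unbounded number of actions.
    static <T> List<List<T>> partition(List<T> actions, int maxPerCall) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < actions.size(); i += maxPerCall) {
            chunks.add(actions.subList(i, Math.min(i + maxPerCall, actions.size())));
        }
        return chunks;
    }
}
```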



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10818) Add integration test for bulkload with replicas

2014-05-22 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10818?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14005809#comment-14005809
 ] 

Nick Dimiduk commented on HBASE-10818:
--

Ah, of course. Interdiff gives me eyes. Yes, this is looking good, +1 from me. 
Perhaps someone else should provide review, given that I wrote the original. 

[~enis], [~tedyu] mind taking another look?

 Add integration test for bulkload with replicas
 ---

 Key: HBASE-10818
 URL: https://issues.apache.org/jira/browse/HBASE-10818
 Project: HBase
  Issue Type: Sub-task
Affects Versions: hbase-10070
Reporter: Nick Dimiduk
Assignee: Nick Dimiduk
 Fix For: hbase-10070

 Attachments: 10818-7.txt, HBASE-10818.00.patch, HBASE-10818.01.patch, 
 HBASE-10818.02.patch, IntegrationTestBulkLoad_replicas.log


 Should verify bulkload is not affected by region replicas.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Comment Edited] (HBASE-11026) Provide option to filter out all rows in PerformanceEvaluation tool

2014-05-22 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14005846#comment-14005846
 ] 

ramkrishna.s.vasudevan edited comment on HBASE-11026 at 5/22/14 11:58 AM:
--

bq.Any chance to get this backported to 0.96 and 0.94?
We can. 
I have a suggestion: can we move the new filter to the hbase-client main 
package itself? Currently it is in the hbase-server test package, and while 
trying to use it with YCSB the test package contents are not getting added. 
Maybe I am not able to write the correct pom.xml for YCSB; I am not sure about 
that. But anyway, moving it to the client main package would resolve this.
One concern: adding this filter to the main code may not be correct from a 
user's perspective. What do others say?


was (Author: ram_krish):
bq.Any chance to get this backported to 0.96 and 0.94?
We can. 
I have a suggestion.  can we move the new filter to hbase-client main package 
itself. Currently it is in hbase-server test package and while trying to use it 
with YCSB the test package contents are not getting added.  May be am not able 
to write the correct pom.xml for YCSB? am not sure on that.  But any way moving 
to client in the main package would resolve this.
But one thing is adding this filter in the main code may not be correct in 
terms of user's perspective?  What say?

 Provide option to filter out all rows in PerformanceEvaluation tool
 ---

 Key: HBASE-11026
 URL: https://issues.apache.org/jira/browse/HBASE-11026
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.99.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0, 0.98.2

 Attachments: HBASE-11026_1.patch, HBASE-11026_2.patch, 
 HBASE-11026_4-0.98.patch, HBASE-11026_4.patch


 PerformanceEvaluation could also be used to check the actual performance of 
 the scans on the server side by passing filters that filter out all the 
 rows.  We can create a test filter, add it to Filter.proto, and set this 
 filter based on input params.  Could be helpful in testing.
 If you feel this is not needed, please feel free to close this issue.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11026) Provide option to filter out all rows in PerformanceEvaluation tool

2014-05-22 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14005846#comment-14005846
 ] 

ramkrishna.s.vasudevan commented on HBASE-11026:


bq.Any chance to get this backported to 0.96 and 0.94?
We can. 
I have a suggestion: can we move the new filter to the hbase-client main 
package itself? Currently it is in the hbase-server test package, and while 
trying to use it with YCSB the test package contents are not getting added. 
Maybe I am not able to write the correct pom.xml for YCSB; I am not sure about 
that. But anyway, moving it to the client main package would resolve this.
One concern: adding this filter to the main code may not be correct from a 
user's perspective. What do you say?

 Provide option to filter out all rows in PerformanceEvaluation tool
 ---

 Key: HBASE-11026
 URL: https://issues.apache.org/jira/browse/HBASE-11026
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.99.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0, 0.98.2

 Attachments: HBASE-11026_1.patch, HBASE-11026_2.patch, 
 HBASE-11026_4-0.98.patch, HBASE-11026_4.patch


 PerformanceEvaluation could also be used to check the actual performance of 
 the scans on the server side by passing filters that filter out all the 
 rows.  We can create a test filter, add it to Filter.proto, and set this 
 filter based on input params.  Could be helpful in testing.
 If you feel this is not needed, please feel free to close this issue.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10801) Ensure DBE interfaces can work with Cell

2014-05-22 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10801?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14005848#comment-14005848
 ] 

ramkrishna.s.vasudevan commented on HBASE-10801:


I am able to get the above results consistently, but I am not able to exercise 
that part of the code specifically. Unless there are objections, I will commit 
this patch later today or tomorrow.

 Ensure DBE interfaces can work with Cell
 

 Key: HBASE-10801
 URL: https://issues.apache.org/jira/browse/HBASE-10801
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0

 Attachments: HBASE-10801.patch, HBASE-10801_1.patch, 
 HBASE-10801_2.patch, HBASE-10801_3.patch, HBASE-10801_4.patch, 
 HBASE-10801_5.patch, HBASE-10801_6.patch


 Some changes to the interfaces may be needed for DBEs or may be the way it 
 works currently may be need to be modified inorder to make DBEs work with 
 Cells. Suggestions and ideas welcome.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10974) Improve DBEs read performance by avoiding byte array deep copies for key[] and value[]

2014-05-22 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10974?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14005849#comment-14005849
 ] 

ramkrishna.s.vasudevan commented on HBASE-10974:


Can we keep the existing way of comparison for gets, and have scans take the 
new code path? We could try creating an interface that gets instantiated based 
on get/scan.

 Improve DBEs read performance by avoiding byte array deep copies for key[] 
 and value[]
 --

 Key: HBASE-10974
 URL: https://issues.apache.org/jira/browse/HBASE-10974
 Project: HBase
  Issue Type: Improvement
  Components: Scanners
Affects Versions: 0.99.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0

 Attachments: HBASE-10974_1.patch


 As part of HBASE-10801, we tried to reduce the copying of the value[] when 
 forming the KV from the DBEs. 
 The keys still required copying, and this was restricting us from using 
 Cells, as a copy always had to be done.
 The idea here is to replace the key byte[] with a ByteBuffer and create a 
 consecutive stream of the keys (currently the same byte[] is reused, hence 
 the copy), using offset and length to track each key in the bytebuffer.
 The copy from the encoded format to the normal key format is definitely 
 needed and can't be avoided, but we could always avoid the deep copy of the 
 bytes to form a KV and thus use cells effectively. Working on a patch, will 
 post it soon.
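A small sketch of the (offset, length) idea using java.nio.ByteBuffer (names are illustrative, not the patch's API): duplicating and slicing yields a zero-copy window onto a shared backing array, rather than a fresh byte[] per key:

```java
import java.nio.ByteBuffer;

public class KeySlices {
    // Sketch of the description's idea: decode keys into one backing
    // buffer and track each key as (offset, length) into it, instead
    // of deep-copying a fresh byte[] per key.
    static ByteBuffer keyView(ByteBuffer keys, int offset, int length) {
        ByteBuffer view = keys.duplicate(); // no data copy; storage is shared
        view.position(offset);
        view.limit(offset + length);
        return view.slice();                // zero-copy window onto the key
    }
}
```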



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11225) Backport fix for HBASE-10417 'index is not incremented in PutSortReducer#reduce()'

2014-05-22 Thread Gustavo Anatoly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gustavo Anatoly updated HBASE-11225:


Status: Patch Available  (was: Open)

 Backport fix for HBASE-10417 'index is not incremented in 
 PutSortReducer#reduce()'
 --

 Key: HBASE-11225
 URL: https://issues.apache.org/jira/browse/HBASE-11225
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Gustavo Anatoly
Priority: Minor
 Fix For: 0.94.20

 Attachments: HBASE-11225.patch


 The problem reported in HBASE-10417 exists in 0.94 code base.
 {code}
   for (KeyValue kv : map) {
 context.write(row, kv);
 if (index > 0 && index % 100 == 0)
   context.setStatus("Wrote " + index);
   }
 {code}
 This JIRA backports the fix to 0.94.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11225) Backport fix for HBASE-10417 'index is not incremented in PutSortReducer#reduce()'

2014-05-22 Thread Gustavo Anatoly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gustavo Anatoly updated HBASE-11225:


Attachment: HBASE-11225.patch

 Backport fix for HBASE-10417 'index is not incremented in 
 PutSortReducer#reduce()'
 --

 Key: HBASE-11225
 URL: https://issues.apache.org/jira/browse/HBASE-11225
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Gustavo Anatoly
Priority: Minor
 Fix For: 0.94.20

 Attachments: HBASE-11225.patch


 The problem reported in HBASE-10417 exists in 0.94 code base.
 {code}
   for (KeyValue kv : map) {
 context.write(row, kv);
 if (index > 0 && index % 100 == 0)
   context.setStatus("Wrote " + index);
   }
 {code}
 This JIRA backports the fix to 0.94.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11235) Backport fix for HBASE-11212 - Fix increment index in KeyValueSortReducer

2014-05-22 Thread Gustavo Anatoly (JIRA)
Gustavo Anatoly created HBASE-11235:
---

 Summary: Backport fix for HBASE-11212 - Fix increment index in 
KeyValueSortReducer
 Key: HBASE-11235
 URL: https://issues.apache.org/jira/browse/HBASE-11235
 Project: HBase
  Issue Type: Bug
Reporter: Gustavo Anatoly
Assignee: Gustavo Anatoly
Priority: Minor
 Fix For: 0.94.20


Backport the increment index fix reported in HBASE-11212 to version 0.94: 
[https://issues.apache.org/jira/browse/HBASE-11212] 

{code}
 int index = 0;
for (KeyValue kv: map) {
  context.write(row, kv);
  if (index > 0 && index % 100 == 0) context.setStatus("Wrote " + index);
}
{code}
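To make the bug concrete, a small self-contained simulation (hypothetical names, not the HBase reducer itself) showing that without a per-row `index++` the `index % 100 == 0` status condition never fires, while the increment the fix adds restores it:

```java
public class StatusCounter {
    // Simulates the reducer loop above. Without the index++ at the
    // bottom, the condition (index > 0 && index % 100 == 0) can never
    // become true, so setStatus("Wrote N") is never called.
    static int statusUpdates(int rows) {
        int updates = 0;
        int index = 0;
        for (int i = 0; i < rows; i++) {
            // context.write(row, kv) happens here in the real reducer
            if (index > 0 && index % 100 == 0) {
                updates++; // stands in for context.setStatus("Wrote " + index)
            }
            index++; // the per-row increment the fix introduces
        }
        return updates;
    }
}
```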





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11235) Backport fix for HBASE-11212 - Fix increment index in KeyValueSortReducer

2014-05-22 Thread Gustavo Anatoly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gustavo Anatoly updated HBASE-11235:


Description: 
Fix the increment index bug reported in: 
[https://issues.apache.org/jira/browse/HBASE-11212] 

{code}
 int index = 0;
for (KeyValue kv: map) {
  context.write(row, kv);
  if (index > 0 && index % 100 == 0) context.setStatus("Wrote " + index);
}
{code}


  was:
Fix increment index reported on to version 0.94: 
[https://issues.apache.org/jira/browse/HBASE-11212] 

{code}
 int index = 0;
for (KeyValue kv: map) {
  context.write(row, kv);
  if (index > 0 && index % 100 == 0) context.setStatus("Wrote " + index);
}
{code}




 Backport fix for HBASE-11212 - Fix increment index in KeyValueSortReducer
 -

 Key: HBASE-11235
 URL: https://issues.apache.org/jira/browse/HBASE-11235
 Project: HBase
  Issue Type: Bug
Reporter: Gustavo Anatoly
Assignee: Gustavo Anatoly
Priority: Minor
 Fix For: 0.94.20


 Fix increment index reported on : 
 [https://issues.apache.org/jira/browse/HBASE-11212] 
 {code}
  int index = 0;
 for (KeyValue kv: map) {
   context.write(row, kv);
   if (index > 0 && index % 100 == 0) context.setStatus("Wrote " + index);
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11235) Backport fix for HBASE-11212 - Fix increment index in KeyValueSortReducer

2014-05-22 Thread Gustavo Anatoly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gustavo Anatoly updated HBASE-11235:


Attachment: HBASE-11235.patch

 Backport fix for HBASE-11212 - Fix increment index in KeyValueSortReducer
 -

 Key: HBASE-11235
 URL: https://issues.apache.org/jira/browse/HBASE-11235
 Project: HBase
  Issue Type: Bug
Reporter: Gustavo Anatoly
Assignee: Gustavo Anatoly
Priority: Minor
 Fix For: 0.94.20

 Attachments: HBASE-11235.patch


 Fix increment index reported on : 
 [https://issues.apache.org/jira/browse/HBASE-11212] 
 {code}
  int index = 0;
 for (KeyValue kv: map) {
   context.write(row, kv);
   if (index > 0 && index % 100 == 0) context.setStatus("Wrote " + index);
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11235) Backport fix for HBASE-11212 - Fix increment index in KeyValueSortReducer

2014-05-22 Thread Gustavo Anatoly (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gustavo Anatoly updated HBASE-11235:


Status: Patch Available  (was: Open)

 Backport fix for HBASE-11212 - Fix increment index in KeyValueSortReducer
 -

 Key: HBASE-11235
 URL: https://issues.apache.org/jira/browse/HBASE-11235
 Project: HBase
  Issue Type: Bug
Reporter: Gustavo Anatoly
Assignee: Gustavo Anatoly
Priority: Minor
 Fix For: 0.94.20

 Attachments: HBASE-11235.patch


 Fix increment index reported on : 
 [https://issues.apache.org/jira/browse/HBASE-11212] 
 {code}
  int index = 0;
 for (KeyValue kv: map) {
   context.write(row, kv);
   if (index > 0 && index % 100 == 0) context.setStatus("Wrote " + index);
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11234) FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result

2014-05-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11234?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006065#comment-14006065
 ] 

Hadoop QA commented on HBASE-11234:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12646203/HBASE-11234.patch
  against trunk revision .
  ATTACHMENT ID: 12646203

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 3 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestHCM

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9566//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9566//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9566//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9566//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9566//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9566//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9566//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9566//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9566//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9566//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/9566//console

This message is automatically generated.

 FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result
 

 Key: HBASE-11234
 URL: https://issues.apache.org/jira/browse/HBASE-11234
 Project: HBase
  Issue Type: Bug
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.99.0

 Attachments: HBASE-11234.patch


 As Ted found, 
 With this change:
 {noformat}
 Index: 
 hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
 ===
 --- 
 hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
(revision 1596579)
 +++ 
 hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
(working copy)
 @@ -51,6 +51,7 @@
  import org.apache.hadoop.hbase.filter.FilterList.Operator;
  import org.apache.hadoop.hbase.filter.PageFilter;
  import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
 +import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
  import org.apache.hadoop.hbase.io.hfile.CacheConfig;
  import org.apache.hadoop.hbase.io.hfile.HFileContext;
  import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
 @@ -90,6 +91,7 @@
  CacheConfig cacheConf = new CacheConfig(TEST_UTIL.getConfiguration());
  HFileContextBuilder hcBuilder = new HFileContextBuilder();
  hcBuilder.withBlockSize(2 * 1024);
 +hcBuilder.withDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
  HFileContext hFileContext = hcBuilder.build();
  StoreFile.Writer writer = new StoreFile.WriterBuilder(
  TEST_UTIL.getConfiguration(), cacheConf, fs).withOutputDir(
 {noformat}
 I got:
 java.lang.AssertionError: 
 expected:<testRow0197/testCf:testQual/1400712260004/Put/vlen=13/mvcc=5> 
 but was:<testRow0198/testCf:testQual/1400712260004/Put/vlen=13/mvcc=0>
   

[jira] [Updated] (HBASE-10818) Add integration test for bulkload with replicas

2014-05-22 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-10818:
-

Assignee: Devaraj Das  (was: Nick Dimiduk)

Assigning to Devaraj as he's picking this one up.

 Add integration test for bulkload with replicas
 ---

 Key: HBASE-10818
 URL: https://issues.apache.org/jira/browse/HBASE-10818
 Project: HBase
  Issue Type: Sub-task
Affects Versions: hbase-10070
Reporter: Nick Dimiduk
Assignee: Devaraj Das
 Fix For: hbase-10070

 Attachments: 10818-7.txt, HBASE-10818.00.patch, HBASE-10818.01.patch, 
 HBASE-10818.02.patch, IntegrationTestBulkLoad_replicas.log


 Should verify bulkload is not affected by region replicas.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HBASE-11236) Last flushed sequence id is ignored by ServerManager

2014-05-22 Thread Jimmy Xiang (JIRA)
Jimmy Xiang created HBASE-11236:
---

 Summary: Last flushed sequence id is ignored by ServerManager
 Key: HBASE-11236
 URL: https://issues.apache.org/jira/browse/HBASE-11236
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang


HRegion.lastFlushSeqId is set to -1 at the beginning, so the first value the 
master gets is really a huge number, since the wire field is a uint64. That's 
why all valid last flushed sequence ids are ignored by the server manager.
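A quick sketch (hypothetical names) of the sign issue: a long initialized to -1, reinterpreted as the uint64 carried on the wire, becomes 2^64 - 1, which is larger than any real sequence id, so every genuine value afterwards looks like it went backwards:

```java
public class SeqIdSigns {
    // HRegion's initial sentinel is -1; viewed as a uint64 it is
    // 2^64 - 1 = 18446744073709551615, dwarfing any valid sequence id.
    static String asUint64(long seqId) {
        return Long.toUnsignedString(seqId);
    }

    public static void main(String[] args) {
        System.out.println(asUint64(-1L)); // 18446744073709551615
    }
}
```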



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11236) Last flushed sequence id is ignored by ServerManager

2014-05-22 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-11236:


Description: 
I got lots of error messages like this:

{quote}
2014-05-22 08:58:59,793 DEBUG [RpcServer.handler=1,port=20020] 
master.ServerManager: RegionServer a2428.halxg.cloudera.com,20020,1400742071109 
indicates a last flushed sequence id (numberOfStores=9, numberOfStorefiles=2, 
storefileUncompressedSizeMB=517, storefileSizeMB=517, compressionRatio=1., 
memstoreSizeMB=0, storefileIndexSizeMB=0, readRequestsCount=0, 
writeRequestsCount=0, rootIndexSizeKB=34, totalStaticIndexSizeKB=381, 
totalStaticBloomSizeKB=0, totalCompactingKVs=0, currentCompactedKVs=0, 
compactionProgressPct=NaN) that is less than the previous last flushed sequence 
id (605446) for region IntegrationTestBigLinkedList, 
�A��*t�^FU�2��0,1400740489477.a44d3e309b5a7e29355f6faa0d3a4095. Ignoring.
{quote}

RegionLoad.toString doesn't print out the last flushed sequence id passed in. 
Why is it less than the previous one?

  was:HRegion.lastFlushSeqId is set to -1 at the beginning. So the first value 
master gets is a really a huge number instead since it is a uint64. That's why 
all valid last flushed sequence ids are ignored by the server manager.


 Last flushed sequence id is ignored by ServerManager
 

 Key: HBASE-11236
 URL: https://issues.apache.org/jira/browse/HBASE-11236
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang
Assignee: Jimmy Xiang

 I got lots of error messages like this:
 {quote}
 2014-05-22 08:58:59,793 DEBUG [RpcServer.handler=1,port=20020] 
 master.ServerManager: RegionServer 
 a2428.halxg.cloudera.com,20020,1400742071109 indicates a last flushed 
 sequence id (numberOfStores=9, numberOfStorefiles=2, 
 storefileUncompressedSizeMB=517, storefileSizeMB=517, 
 compressionRatio=1., memstoreSizeMB=0, storefileIndexSizeMB=0, 
 readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=34, 
 totalStaticIndexSizeKB=381, totalStaticBloomSizeKB=0, totalCompactingKVs=0, 
 currentCompactedKVs=0, compactionProgressPct=NaN) that is less than the 
 previous last flushed sequence id (605446) for region 
 IntegrationTestBigLinkedList, 
 �A��*t�^FU�2��0,1400740489477.a44d3e309b5a7e29355f6faa0d3a4095. Ignoring.
 {quote}
 RegionLoad.toString doesn't print out the last flushed sequence id passed in. 
 Why is it less than the previous one?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-7456) Stargate's HTablePool maxSize is hard-coded at 10, too small for heavy loads

2014-05-22 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-7456:
--

Affects Version/s: (was: 0.95.2)
   0.94.19

 Stargate's HTablePool maxSize is hard-coded at 10, too small for heavy loads
 

 Key: HBASE-7456
 URL: https://issues.apache.org/jira/browse/HBASE-7456
 Project: HBase
  Issue Type: Bug
  Components: REST
Affects Versions: 0.94.19
Reporter: Chip Salzenberg
Priority: Minor
 Attachments: HBASE-7456-0.94.patch, HBASE-7456-trunk.patch


 Please allow the Configuration to override the hard-coded maxSize of 10 for 
 its HTablePool.  Under high loads, 10 is too small.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-7456) Stargate's HTablePool maxSize is hard-coded at 10, too small for heavy loads

2014-05-22 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-7456:
--

Fix Version/s: (was: 0.99.0)

 Stargate's HTablePool maxSize is hard-coded at 10, too small for heavy loads
 

 Key: HBASE-7456
 URL: https://issues.apache.org/jira/browse/HBASE-7456
 Project: HBase
  Issue Type: Bug
  Components: REST
Affects Versions: 0.94.19
Reporter: Chip Salzenberg
Priority: Minor
 Attachments: HBASE-7456-0.94.patch, HBASE-7456-trunk.patch


 Please allow the Configuration to override the hard-coded maxSize of 10 for 
 its HTablePool.  Under high loads, 10 is too small.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-7456) Stargate's HTablePool maxSize is hard-coded at 10, too small for heavy loads

2014-05-22 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-7456:
--

Affects Version/s: (was: 0.94.4)

 Stargate's HTablePool maxSize is hard-coded at 10, too small for heavy loads
 

 Key: HBASE-7456
 URL: https://issues.apache.org/jira/browse/HBASE-7456
 Project: HBase
  Issue Type: Bug
  Components: REST
Affects Versions: 0.94.19
Reporter: Chip Salzenberg
Priority: Minor
 Attachments: HBASE-7456-0.94.patch, HBASE-7456-trunk.patch


 Please allow the Configuration to override the hard-coded maxSize of 10 for 
 its HTablePool.  Under high loads, 10 is too small.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-7456) Stargate's HTablePool maxSize is hard-coded at 10, too small for heavy loads

2014-05-22 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-7456:
--

Fix Version/s: (was: 0.98.3)

 Stargate's HTablePool maxSize is hard-coded at 10, too small for heavy loads
 

 Key: HBASE-7456
 URL: https://issues.apache.org/jira/browse/HBASE-7456
 Project: HBase
  Issue Type: Bug
  Components: REST
Affects Versions: 0.94.19
Reporter: Chip Salzenberg
Priority: Minor
 Attachments: HBASE-7456-0.94.patch, HBASE-7456-trunk.patch


 Please allow the Configuration to override the hard-coded maxSize of 10 for 
 its HTablePool.  Under high loads, 10 is too small.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-11214) Fixes for scans on a replicated table

2014-05-22 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das resolved HBASE-11214.
-

Resolution: Fixed

Committed to hbase-10070 branch. Thanks for the review, Enis.

 Fixes for scans on a replicated table
 -

 Key: HBASE-11214
 URL: https://issues.apache.org/jira/browse/HBASE-11214
 Project: HBase
  Issue Type: Sub-task
Reporter: Devaraj Das
Assignee: Devaraj Das
 Fix For: hbase-10070

 Attachments: 11214-1.txt, 11214-2.txt


 During testing with the IT in HBASE-10818, found an issue with how the 
 close of scanners was handled. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-7456) Stargate's HTablePool maxSize is hard-coded at 10, too small for heavy loads

2014-05-22 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-7456?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006132#comment-14006132
 ] 

Andrew Purtell commented on HBASE-7456:
---

Change is stale for trunk/0.98 but still relevant for 0.94 branch.

 Stargate's HTablePool maxSize is hard-coded at 10, too small for heavy loads
 

 Key: HBASE-7456
 URL: https://issues.apache.org/jira/browse/HBASE-7456
 Project: HBase
  Issue Type: Bug
  Components: REST
Affects Versions: 0.94.19
Reporter: Chip Salzenberg
Priority: Minor
 Attachments: HBASE-7456-0.94.patch, HBASE-7456-trunk.patch


 Please allow the Configuration to override the hard-coded maxSize of 10 for 
 its HTablePool.  Under high loads, 10 is too small.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11234) FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result

2014-05-22 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11234?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-11234:
---

Fix Version/s: 0.98.3

+1 for 0.98

 FastDiffDeltaEncoder#getFirstKeyInBlock returns wrong result
 

 Key: HBASE-11234
 URL: https://issues.apache.org/jira/browse/HBASE-11234
 Project: HBase
  Issue Type: Bug
Reporter: chunhui shen
Assignee: chunhui shen
Priority: Critical
 Fix For: 0.99.0, 0.98.3

 Attachments: HBASE-11234.patch


 As Ted found, 
 With this change:
 {noformat}
 Index: 
 hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
 ===================================================================
 --- 
 hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
(revision 1596579)
 +++ 
 hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestReversibleScanners.java
(working copy)
 @@ -51,6 +51,7 @@
  import org.apache.hadoop.hbase.filter.FilterList.Operator;
  import org.apache.hadoop.hbase.filter.PageFilter;
  import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
 +import org.apache.hadoop.hbase.io.encoding.DataBlockEncoding;
  import org.apache.hadoop.hbase.io.hfile.CacheConfig;
  import org.apache.hadoop.hbase.io.hfile.HFileContext;
  import org.apache.hadoop.hbase.io.hfile.HFileContextBuilder;
 @@ -90,6 +91,7 @@
  CacheConfig cacheConf = new CacheConfig(TEST_UTIL.getConfiguration());
  HFileContextBuilder hcBuilder = new HFileContextBuilder();
  hcBuilder.withBlockSize(2 * 1024);
 +hcBuilder.withDataBlockEncoding(DataBlockEncoding.FAST_DIFF);
  HFileContext hFileContext = hcBuilder.build();
  StoreFile.Writer writer = new StoreFile.WriterBuilder(
  TEST_UTIL.getConfiguration(), cacheConf, fs).withOutputDir(
 {noformat}
 I got:
 java.lang.AssertionError: 
 expected:<testRow0197/testCf:testQual/1400712260004/Put/vlen=13/mvcc=5> 
 but was:<testRow0198/testCf:testQual/1400712260004/Put/vlen=13/mvcc=0>
   at org.junit.Assert.fail(Assert.java:88)
   at org.junit.Assert.failNotEquals(Assert.java:743)
   at org.junit.Assert.assertEquals(Assert.java:118)
   at org.junit.Assert.assertEquals(Assert.java:144)
   at 
 org.apache.hadoop.hbase.regionserver.TestReversibleScanners.seekTestOfReversibleKeyValueScanner(TestReversibleScanners.java:533)
   at 
 org.apache.hadoop.hbase.regionserver.TestReversibleScanners.testReversibleStoreFileScanner(TestReversibleScanners.java:108)
 After debugging, it seems that 
 FastDiffDeltaEncoder#getFirstKeyInBlock has become broken, which causes 
 HFileScanner#seekBefore to return a wrong result.
 The solution is simple, see the patch.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11236) Last flushed sequence id is ignored by ServerManager

2014-05-22 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11236?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-11236:


Assignee: (was: Jimmy Xiang)

 Last flushed sequence id is ignored by ServerManager
 

 Key: HBASE-11236
 URL: https://issues.apache.org/jira/browse/HBASE-11236
 Project: HBase
  Issue Type: Bug
Reporter: Jimmy Xiang

 I got lots of error messages like this:
 {quote}
 2014-05-22 08:58:59,793 DEBUG [RpcServer.handler=1,port=20020] 
 master.ServerManager: RegionServer 
 a2428.halxg.cloudera.com,20020,1400742071109 indicates a last flushed 
 sequence id (numberOfStores=9, numberOfStorefiles=2, 
 storefileUncompressedSizeMB=517, storefileSizeMB=517, 
 compressionRatio=1., memstoreSizeMB=0, storefileIndexSizeMB=0, 
 readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=34, 
 totalStaticIndexSizeKB=381, totalStaticBloomSizeKB=0, totalCompactingKVs=0, 
 currentCompactedKVs=0, compactionProgressPct=NaN) that is less than the 
 previous last flushed sequence id (605446) for region 
 IntegrationTestBigLinkedList, 
 �A��*t�^FU�2��0,1400740489477.a44d3e309b5a7e29355f6faa0d3a4095. Ignoring.
 {quote}
 RegionLoad.toString doesn't print out the last flushed sequence id passed in. 
 Why is it less than the previous one?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-9857) Blockcache prefetch option

2014-05-22 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9857?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-9857:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to trunk and 0.98. Thanks for the reviews!

 Blockcache prefetch option
 --

 Key: HBASE-9857
 URL: https://issues.apache.org/jira/browse/HBASE-9857
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 0.99.0, 0.98.3

 Attachments: 9857.patch, 9857.patch, HBASE-9857-0.98.patch, 
 HBASE-9857-trunk.patch, HBASE-9857-trunk.patch


 Attached patch implements a prefetching function for HFile (v3) blocks, if 
 indicated by a column family or regionserver property. The purpose of this 
 change is to as rapidly after region open as reasonable warm the blockcache 
 with all the data and index blocks of (presumably also in-memory) table data, 
 without counting those block loads as cache misses. Great for fast reads and 
 keeping the cache hit ratio high. Can tune the IO impact versus time until 
 all data blocks are in cache. Works a bit like CompactSplitThread. Makes some 
 effort not to stampede.
 I have been using this for setting up various experiments and thought I'd 
 polish it up a bit and throw it out there. If the data to be preloaded will 
 not fit in blockcache, or if as a percentage of blockcache it is large, this 
 is not a good idea, will just blow out the cache and trigger a lot of useless 
 GC activity. Might be useful as an expert tuning option though. Or not.
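The "don't count prefetch loads as cache misses" idea above can be sketched in miniature. This is a toy model with an invented block/cache shape, not the patch's code: prefetch reads populate the cache silently, so only demand reads that miss are counted against the hit ratio.

```java
import java.util.HashMap;
import java.util.Map;

public class PrefetchSketch {
    static int misses = 0;
    static final Map<Integer, byte[]> cache = new HashMap<>();

    // Loads a block, counting a miss only for demand (non-prefetch) reads.
    static byte[] readBlock(int id, boolean prefetch) {
        byte[] b = cache.get(id);
        if (b == null) {
            if (!prefetch) misses++;       // prefetch loads are not misses
            b = new byte[]{(byte) id};     // stand-in for a disk read
            cache.put(id, b);
        }
        return b;
    }

    public static void main(String[] args) {
        for (int i = 0; i < 5; i++) readBlock(i, true);   // warm after "open"
        for (int i = 0; i < 5; i++) readBlock(i, false);  // all demand reads hit
        System.out.println(misses);  // 0
    }
}
```

As the description warns, this only pays off when the warmed data actually fits in the cache; otherwise the prefetch just evicts useful blocks.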



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10507) Proper filter tests for TestImportExport

2014-05-22 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10507:
---

Fix Version/s: (was: 0.98.3)
   0.98.4

 Proper filter tests for TestImportExport
 

 Key: HBASE-10507
 URL: https://issues.apache.org/jira/browse/HBASE-10507
 Project: HBase
  Issue Type: Sub-task
Reporter: Lars Hofhansl
 Fix For: 0.99.0, 0.96.3, 0.94.20, 0.98.4


 See parent. TestImportExport.testWithFilter used to pass by accident (until 
 parent is fixed and until very recently also in trunk).
 This is as simple as just added some non-matching rows to the tests. Other 
 than parent that should be added to all branches.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10536) ImportTsv should fail fast if any of the column family passed to the job is not present in the table

2014-05-22 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10536:
---

Fix Version/s: (was: 0.98.3)
   0.98.4

 ImportTsv should fail fast if any of the column family passed to the job is 
 not present in the table
 

 Key: HBASE-10536
 URL: https://issues.apache.org/jira/browse/HBASE-10536
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.98.0
Reporter: rajeshbabu
Assignee: rajeshbabu
 Fix For: 0.99.0, 0.96.3, 0.94.20, 0.98.4


 While checking the 0.98 RC and running bulkload tools, I passed a wrong 
 column family to importtsv by mistake. LoadIncrementalHFiles failed with the 
 following exception
 {code}
 Exception in thread "main" java.io.IOException: Unmatched family names found: 
 unmatched family names in HFiles to be bulkloaded: [f1]; valid family names 
 of table test are: [f]
 at 
 org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:241)
 at 
 org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.run(LoadIncrementalHFiles.java:823)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
 at 
 org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.main(LoadIncrementalHFiles.java:828)
 {code}
  
 It's better to fail fast if any of the passed column families is not present 
 in the table.
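The fail-fast check being asked for amounts to comparing the requested families against the table's families before the MapReduce job is submitted. A hedged, self-contained sketch (names and shapes are illustrative, not the committed ImportTsv code):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class FamilyCheck {
    // Returns the requested column families that the table does not define.
    // In ImportTsv the table's families would come from its HTableDescriptor.
    static List<String> unknownFamilies(Set<String> tableFamilies,
                                        List<String> requested) {
        List<String> missing = new ArrayList<>();
        for (String f : requested) {
            if (!tableFamilies.contains(f)) missing.add(f);
        }
        return missing;
    }

    public static void main(String[] args) {
        Set<String> table = new HashSet<>(Arrays.asList("f"));
        List<String> missing = unknownFamilies(table, Arrays.asList("f", "f1"));
        if (!missing.isEmpty()) {
            // abort here, before running the whole MR job and bulkload
            System.out.println("Unknown column families: " + missing);
        }
    }
}
```

Running the check up front turns the late LoadIncrementalHFiles failure shown above into an immediate, cheap rejection.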



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10664) TestImportExport runs too long

2014-05-22 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10664:
---

Fix Version/s: (was: 0.98.3)
   0.98.4

 TestImportExport runs too long
 --

 Key: HBASE-10664
 URL: https://issues.apache.org/jira/browse/HBASE-10664
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0
Reporter: Andrew Purtell
 Fix For: 0.99.0, 0.98.4


 Debugging with -Dsurefire.firstPartForkMode=always 
 -Dsurefire.secondPartForkMode=always looking for a hanging test. 
 388 seconds.
 {noformat}
 Forking command line: /bin/sh -c cd /data/src/hbase/hbase-server  
 /usr/lib/jvm/java-1.7.0.45-oracle-amd64/jre/bin/java -enableassertions 
 -Xmx1900m -XX:MaxPermSize=100m -Djava.security.egd=file:/dev/./urandom 
 -Djava.net.preferIPv4Stack=true -Djava.awt.headless=true -jar 
 /data/src/hbase/hbase-server/target/surefire/surefirebooter7637958208277391169.jar
  /data/src/hbase/hbase-server/target/surefire/surefire6877889026110956843tmp 
 /data/src/hbase/hbase-server/target/surefire/surefire_1907837210788480451831tmp
 Running org.apache.hadoop.hbase.mapreduce.TestImportExport
 Tests run: 11, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 388.246 sec
 {noformat}
 Slim down or break it up.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10820) Bulkload: something is going bananas with '_tmp'

2014-05-22 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10820?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10820:
---

Fix Version/s: (was: 0.98.3)
   0.98.4

 Bulkload: something is going bananas with '_tmp'
 

 Key: HBASE-10820
 URL: https://issues.apache.org/jira/browse/HBASE-10820
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Affects Versions: 0.98.0, 0.99.0
Reporter: Nick Dimiduk
Priority: Minor
 Fix For: 0.99.0, 0.98.4


 While working on HBASE-10818, I noted the following in my logs
 {noformat}
 2014-03-24 15:26:25,283 INFO  [RpcServer.handler=24,port=52056] 
 regionserver.HStore: Successfully loaded store file 
 file:/Users/ndimiduk/repos/hbase/target/test-data/8e203abb-90b4-4284-9816-963ce3461d5f/IntegrationTestBulkLoad-2/L/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/_tmp/IntegrationTestBulkLoad,119.bottom
  into store L (new location: 
 file:/var/folders/b8/n5n91drd7xg0rlt5n6fgsjtwgn/T/hbase-ndimiduk/hbase/data/default/IntegrationTestBulkLoad/42e004b8909307e2d983ddbe59276638/L/6443c445f00d453dbdb0b41f925efdb5_SeqId_9_)
 {noformat}
 Something is going overboard with the temp directory path. Worth 
 investigating.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10761) StochasticLoadBalancer still uses SimpleLoadBalancer's needBalance logic

2014-05-22 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10761:
---

Fix Version/s: (was: 0.98.3)

Unscheduling from 0.98. Put back when ready.

 StochasticLoadBalancer still uses SimpleLoadBalancer's needBalance logic
 

 Key: HBASE-10761
 URL: https://issues.apache.org/jira/browse/HBASE-10761
 Project: HBase
  Issue Type: Bug
  Components: Balancer
Affects Versions: 0.98.0
Reporter: Victor Xu
 Fix For: 0.99.0

 Attachments: HBASE_10761.patch, HBASE_10761_v2.patch


 StochasticLoadBalancer has become the default balancer since 0.98.0. But its 
 balanceCluster method still uses the BaseLoadBalancer.needBalance() which is 
 originally designed for SimpleLoadBalancer. It's all based on the number of 
 regions on the regionservers.
 This can cause such a problem: when the cluster has some Hot Spot Region, the 
 balance process may not be triggered because the numbers of regions on the 
 RegionServers are averaged.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11110) Ability to load FilterList class is dependent on context classloader

2014-05-22 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006169#comment-14006169
 ] 

Lars Hofhansl commented on HBASE-11110:
---

Looks good to me.

 Ability to load FilterList class is dependent on context classloader
 

 Key: HBASE-11110
 URL: https://issues.apache.org/jira/browse/HBASE-11110
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.19
Reporter: Gabriel Reid
Assignee: Gabriel Reid
 Fix For: 0.94.20

 Attachments: HBASE-11110.patch


 In the 0.94 branch, the FilterList class contains a static call to 
 HBaseConfiguration.create(). This create call in turn adds the needed hbase 
 resources to the Configuration object, and sets the classloader of the 
 Configuration object to be the context classloader of the current thread (if 
 it isn't null).
 This approach causes issues if the FilterList class is loaded from a thread 
 that has a custom context classloader that doesn't run back up to the main 
 application classloader. In this case, 
 HBaseConfiguration.checkDefaultsVersion fails because the 
 hbase.defaults.for.version configuration value can't be found (because 
 hbase-default.xml can't be found by the custom context classloader).
 This is a concrete issue that was discovered via Apache Phoenix within a 
 commercial tool, when a (JDBC) connection is opened via a pool, and then 
 passed off to a UI thread that has a custom context classloader. The UI 
 thread is then the first thing to load FilterList, leading to this issue.
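The failure mode described above can be reproduced in miniature without HBase: a resource lookup that goes through the thread's context classloader (the way `hbase-default.xml` is located) succeeds on a normal thread but fails when the context classloader does not delegate to the application classpath. This is a stand-in illustration, not HBase code.

```java
import java.net.URL;
import java.net.URLClassLoader;
import java.util.concurrent.atomic.AtomicReference;

public class ContextClassLoaderPitfall {
    // Resolves a classpath resource through the current thread's context
    // classloader, mirroring how Configuration finds hbase-default.xml.
    static URL findResource(String name) {
        return Thread.currentThread().getContextClassLoader().getResource(name);
    }

    // Runs the same lookup on a thread whose context classloader is 'cl'.
    static URL lookupWith(ClassLoader cl, String name) {
        AtomicReference<URL> result = new AtomicReference<>();
        Thread t = new Thread(() -> result.set(findResource(name)));
        t.setContextClassLoader(cl);
        t.start();
        try {
            t.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return result.get();
    }

    public static void main(String[] args) {
        String res = "ContextClassLoaderPitfall.class";  // lives on the app classpath
        // Default context classloader delegates to the app classpath: found.
        System.out.println(findResource(res) != null);
        // An isolated loader with no delegation to the app loader: not found,
        // the same failure mode checkDefaultsVersion hit for hbase-default.xml.
        ClassLoader isolated = new URLClassLoader(new URL[0], null);
        System.out.println(lookupWith(isolated, res) == null);
    }
}
```

This is why capturing the context classloader at static-initialization time (as the 0.94 FilterList did) is fragile: whichever thread happens to load the class first decides how resources get resolved.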



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10832) IntegrationTestIngestStripeCompactions timed out

2014-05-22 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10832?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell updated HBASE-10832:
---

Fix Version/s: (was: 0.98.3)
   0.98.4

 IntegrationTestIngestStripeCompactions timed out
 

 Key: HBASE-10832
 URL: https://issues.apache.org/jira/browse/HBASE-10832
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.1
Reporter: Andrew Purtell
Priority: Minor
 Fix For: 0.99.0, 0.98.4


 IntegrationTestIngestStripeCompactions timed out when executing in local 
 mode, failed to shut down cleanly (LoadTestTool worker threads were trying to 
 finish, master's catalog scanner was trying to scan a meta table that had 
 gone away, etc.), and became a zombie. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10831) IntegrationTestIngestWithACL is not setting up LoadTestTool correctly

2014-05-22 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10831?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006170#comment-14006170
 ] 

Andrew Purtell commented on HBASE-10831:


Deadline for commit for 0.98.3 is tomorrow, otherwise let's push to .4.

 IntegrationTestIngestWithACL is not setting up LoadTestTool correctly
 -

 Key: HBASE-10831
 URL: https://issues.apache.org/jira/browse/HBASE-10831
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.1
Reporter: Andrew Purtell
Assignee: Vandana Ayyalasomayajula
 Fix For: 0.99.0, 0.98.3

 Attachments: HBASE-10831_98_v1.patch, HBASE-10831_98_v3.patch, 
 HBASE-10831_trunk_v2.patch, HBASE-10831_trunk_v3.patch


 IntegrationTestIngestWithACL is not setting up LoadTestTool correctly.
 {noformat}
 Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 601.709 sec 
 <<< FAILURE!
 testIngest(org.apache.hadoop.hbase.IntegrationTestIngestWithACL)  Time 
 elapsed: 601.489 sec  <<< FAILURE!
 java.lang.AssertionError: Failed to initialize LoadTestTool expected:<0> but 
 was:<1>
 at org.junit.Assert.fail(Assert.java:88)
 at org.junit.Assert.failNotEquals(Assert.java:743)
 at org.junit.Assert.assertEquals(Assert.java:118)
 at org.junit.Assert.assertEquals(Assert.java:555)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngest.initTable(IntegrationTestIngest.java:74)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngest.setUpCluster(IntegrationTestIngest.java:69)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngestWithACL.setUpCluster(IntegrationTestIngestWithACL.java:58)
 at 
 org.apache.hadoop.hbase.IntegrationTestBase.setUp(IntegrationTestBase.java:89)
 {noformat}
 Could be related to HBASE-10675?



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11048) Support setting custom priority per client RPC

2014-05-22 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006198#comment-14006198
 ] 

Andrew Purtell commented on HBASE-11048:


So this is all you need to achieve your objectives [~jesse_yates] I assume.

I looked at the 0.98 patch. The changes are additions to existing interfaces or 
changes to classes with an internal audience. I will compare the next RC 
against earlier releases looking for variations in per op latencies. We can 
sink the RC and back out the change if there is a significant regression. 

Any objections to committing? 

 Support setting custom priority per client RPC
 --

 Key: HBASE-11048
 URL: https://issues.apache.org/jira/browse/HBASE-11048
 Project: HBase
  Issue Type: Improvement
  Components: Client
Affects Versions: 0.99.0, 0.98.2
Reporter: Jesse Yates
Assignee: Jesse Yates
  Labels: Phoenix
 Fix For: 0.99.0, 0.98.3

 Attachments: hbase-11048-0.98-v0.patch, hbase-11048-trunk-v0.patch, 
 hbase-11048-trunk-v1.patch


 Servers have the ability to handle custom rpc priority levels, but currently 
 we are only using it to differentiate META/ROOT updates from replication and 
 other 'priority' updates (as specified by annotation tags per RS method). 
 However, some clients need the ability to create custom handlers (e.g. 
 PHOENIX-938) which can really only be cleanly tied together to requests by 
 the request priority. The disconnect is in that there is no way for the 
 client to overwrite the priority per table - the PayloadCarryingRpcController 
 will always just set priority per ROOT/META and otherwise just use the 
 generic priority.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11120) Update documentation about major compaction algorithm

2014-05-22 Thread Sergey Shelukhin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006215#comment-14006215
 ] 

Sergey Shelukhin commented on HBASE-11120:
--

This may need correcting?
{noformat}
+<entry />
+<entry />
+<entry>1.2F</entry>
+  </row>
{noformat}

{noformat}
+<entry>hbase.hstore.compaction.max.size</entry>
+<entry>The maximum size for a file to be eligible for 
compaction, expressed in
+  bytes.</entry>
+<entry>1000</entry>
{noformat}
This is not a very good description imho...

It would be good to insert link to effect of major compaction on read results 
into the section about deletes. There's description somewhere in the book 
already.

 Update documentation about major compaction algorithm
 -

 Key: HBASE-11120
 URL: https://issues.apache.org/jira/browse/HBASE-11120
 Project: HBase
  Issue Type: Bug
  Components: Compaction, documentation
Affects Versions: 0.98.2
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
 Attachments: HBASE-11120.patch


 [14:20:38]  jdcryans seems that there's 
 http://hbase.apache.org/book.html#compaction and 
 http://hbase.apache.org/book.html#managed.compactions
 [14:20:56]  jdcryans the latter doesn't say much, except that you 
 should manage them
 [14:21:44]  jdcryans the former gives a good description of the 
 _old_ selection algo
 [14:45:25]  jdcryans this is the new selection algo since C5 / 
 0.96.0: https://issues.apache.org/jira/browse/HBASE-7842



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-10070) HBase read high-availability using timeline-consistent region replicas

2014-05-22 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006238#comment-14006238
 ] 

Enis Soztutar commented on HBASE-10070:
---

bq.  But we consider that being able to define consistency level per-operation 
is mandatory, right?
Yes. 
bq. deploy pluggable policy at client side which decides on RPC requests, 
this policy would be used globally for all requests as default
Right now there is no alternate implementation for normal RPCs, and only 1 
model for TIMELINE RPCs. When we have alternate implementations, we can make it 
pluggable (and configurable per operation). 
bq. current timeline consistency model doesn't assume quorums for write, so I 
think it makes sense to add QUORUM_STRONG in enum.
I don't like adding this now. Once we have a corresponding implementation for 
the proposed quorum write we can add it to the enum later. 
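The "STRONG by default, TIMELINE opt-in per operation" model discussed above can be sketched as follows. This is a self-contained model for illustration; the real client API (the `Consistency` enum and `Get.setConsistency`) may differ in names and placement.

```java
public class ConsistencyLevelSketch {
    // Per-operation consistency choice: strong reads by default, with an
    // explicit opt-in to timeline (possibly stale) reads from replicas.
    enum Consistency { STRONG, TIMELINE }

    static class Get {
        private Consistency consistency = Consistency.STRONG;  // safe default

        Get setConsistency(Consistency c) {
            this.consistency = c;
            return this;
        }

        Consistency getConsistency() {
            return consistency;
        }
    }

    public static void main(String[] args) {
        Get g = new Get();
        System.out.println(g.getConsistency());  // STRONG unless overridden
        g.setConsistency(Consistency.TIMELINE);  // opt in for this read only
        System.out.println(g.getConsistency());
    }
}
```

Keeping the default STRONG means existing applications see no behavior change; only reads explicitly marked TIMELINE may be served by a secondary replica.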


 HBase read high-availability using timeline-consistent region replicas
 --

 Key: HBASE-10070
 URL: https://issues.apache.org/jira/browse/HBASE-10070
 Project: HBase
  Issue Type: New Feature
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Attachments: HighAvailabilityDesignforreadsApachedoc.pdf


 In the present HBase architecture, it is hard, probably impossible, to 
 satisfy constraints like 99th percentile of the reads will be served under 10 
 ms. One of the major factors that affects this is the MTTR for regions. There 
 are three phases in the MTTR process - detection, assignment, and recovery. 
 Of these, the detection is usually the longest and is presently in the order 
 of 20-30 seconds. During this time, the clients would not be able to read the 
 region data.
 However, some clients will be better served if regions will be available for 
 reads during recovery for doing eventually consistent reads. This will help 
 with satisfying low latency guarantees for some class of applications which 
 can work with stale reads.
 For improving read availability, we propose a replicated read-only region 
 serving design, also referred as secondary regions, or region shadows. 
 Extending current model of a region being opened for reads and writes in a 
 single region server, the region will be also opened for reading in region 
 servers. The region server which hosts the region for reads and writes (as in 
 current case) will be declared as PRIMARY, while 0 or more region servers 
 might be hosting the region as SECONDARY. There may be more than one 
 secondary (replica count > 2).
 Will attach a design doc shortly which contains most of the details and some 
 thoughts about development approaches. Reviews are more than welcome. 
 We also have a proof of concept patch, which includes the master and regions 
 server side of changes. Client side changes will be coming soon as well. 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-9857) Blockcache prefetch option

2014-05-22 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006247#comment-14006247
 ] 

Vladimir Rodionov commented on HBASE-9857:
--

[~apurtell], do you take into account that all new blocks are cached in young 
gen space, which is 25% of overall cache? If you do not read block immediately 
after write (into cache) it will never get promoted into multi-bucket (50% of a 
cache) and you will be trashing bottom 25% of a block cache?

 Blockcache prefetch option
 --

 Key: HBASE-9857
 URL: https://issues.apache.org/jira/browse/HBASE-9857
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 0.99.0, 0.98.3

 Attachments: 9857.patch, 9857.patch, HBASE-9857-0.98.patch, 
 HBASE-9857-trunk.patch, HBASE-9857-trunk.patch


 Attached patch implements a prefetching function for HFile (v3) blocks, if 
 indicated by a column family or regionserver property. The purpose of this 
 change is to as rapidly after region open as reasonable warm the blockcache 
 with all the data and index blocks of (presumably also in-memory) table data, 
 without counting those block loads as cache misses. Great for fast reads and 
 keeping the cache hit ratio high. Can tune the IO impact versus time until 
 all data blocks are in cache. Works a bit like CompactSplitThread. Makes some 
 effort not to stampede.
 I have been using this for setting up various experiments and thought I'd 
 polish it up a bit and throw it out there. If the data to be preloaded will 
 not fit in blockcache, or if as a percentage of blockcache it is large, this 
 is not a good idea, will just blow out the cache and trigger a lot of useless 
 GC activity. Might be useful as an expert tuning option though. Or not.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HBASE-11110) Ability to load FilterList class is dependent on context classloader

2014-05-22 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved HBASE-11110.
---

  Resolution: Fixed
Hadoop Flags: Reviewed

Committed to 0.94. Thanks Gabriel.

 Ability to load FilterList class is dependent on context classloader
 

 Key: HBASE-11110
 URL: https://issues.apache.org/jira/browse/HBASE-11110
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.94.19
Reporter: Gabriel Reid
Assignee: Gabriel Reid
 Fix For: 0.94.20

 Attachments: HBASE-11110.patch


 In the 0.94 branch, the FilterList class contains a static call to 
 HBaseConfiguration.create(). This create call in turn adds the needed hbase 
 resources to the Configuration object, and sets the classloader of the 
 Configuration object to be the context classloader of the current thread (if 
 it isn't null).
 This approach causes issues if the FilterList class is loaded from a thread 
 that has a custom context classloader that doesn't run back up to the main 
 application classloader. In this case, 
 HBaseConfiguration.checkDefaultsVersion fails because the 
 hbase.defaults.for.version configuration value can't be found (because 
 hbase-default.xml can't be found by the custom context classloader).
 This is a concrete issue that was discovered via Apache Phoenix within a 
 commercial tool, when a (JDBC) connection is opened via a pool, and then 
 passed off to a UI thread that has a custom context classloader. The UI 
 thread is then the first thing to load FilterList, leading to this issue.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11235) Backport fix for HBASE-11212 - Fix increment index in KeyValueSortReducer

2014-05-22 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11235?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-11235:
--

   Resolution: Duplicate
Fix Version/s: (was: 0.94.20)
   Status: Resolved  (was: Patch Available)

Let's commit as part of HBASE-11212.

 Backport fix for HBASE-11212 - Fix increment index in KeyValueSortReducer
 -

 Key: HBASE-11235
 URL: https://issues.apache.org/jira/browse/HBASE-11235
 Project: HBase
  Issue Type: Bug
Reporter: Gustavo Anatoly
Assignee: Gustavo Anatoly
Priority: Minor
 Attachments: HBASE-11235.patch


 Fix increment index reported on : 
 [https://issues.apache.org/jira/browse/HBASE-11212] 
 {code}
  int index = 0;
 for (KeyValue kv: map) {
   context.write(row, kv);
   if (index > 0 && index % 100 == 0) context.setStatus("Wrote " + index);
 }
 {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-11212) Fix increment index in KeyValueSortReducer

2014-05-22 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-11212:
--

Fix Version/s: 0.94.20

 Fix increment index in KeyValueSortReducer
 --

 Key: HBASE-11212
 URL: https://issues.apache.org/jira/browse/HBASE-11212
 Project: HBase
  Issue Type: Bug
Reporter: Gustavo Anatoly
Assignee: Gustavo Anatoly
Priority: Minor
 Fix For: 0.99.0, 0.94.20, 0.98.3

 Attachments: HBASE-11212.patch


 The index is never incremented inside the loop, therefore context.setStatus 
 is never called.
 {code}
 int index = 0;
 for (KeyValue kv: map) {
   context.write(row, kv);
   if (index > 0 && index % 100 == 0) context.setStatus("Wrote " + index);
 }
 {code}
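The fix is simply to increment the counter each pass so the periodic status update can fire. A self-contained sketch of the corrected loop (plain counters stand in for the MapReduce `KeyValue`/`Context` types, so this illustrates the fix rather than reproducing the committed patch):

```java
public class SortReducerFix {
    // Returns how many status updates fire for n written key-values, using
    // the corrected loop: the counter is incremented on every iteration, so
    // setStatus("Wrote " + index) actually triggers every 100 writes.
    static int statusUpdates(int n) {
        int index = 0;
        int updates = 0;
        for (int i = 0; i < n; i++) {
            // context.write(row, kv) would happen here
            if (index > 0 && index % 100 == 0) updates++;  // context.setStatus(...)
            index++;  // the increment missing from the original loop
        }
        return updates;
    }

    public static void main(String[] args) {
        System.out.println(statusUpdates(250));  // fires at index 100 and 200
    }
}
```

With the original code, `index` stays 0 forever and the condition never becomes true, which is exactly the behavior the issue describes.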



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11212) Fix increment index in KeyValueSortReducer

2014-05-22 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006267#comment-14006267
 ] 

Lars Hofhansl commented on HBASE-11212:
---

+1 on 0.94 as well.
[~stack] 0.96? Or are we letting that one die?

 Fix increment index in KeyValueSortReducer
 --

 Key: HBASE-11212
 URL: https://issues.apache.org/jira/browse/HBASE-11212
 Project: HBase
  Issue Type: Bug
Reporter: Gustavo Anatoly
Assignee: Gustavo Anatoly
Priority: Minor
 Fix For: 0.99.0, 0.94.20, 0.98.3

 Attachments: HBASE-11212.patch


 The index is never incremented inside the loop, so context.setStatus is never 
 called.
 {code}
 int index = 0;
 for (KeyValue kv: map) {
   context.write(row, kv);
   if (index > 0 && index % 100 == 0) context.setStatus("Wrote " + index);
 }
 {code}





[jira] [Commented] (HBASE-11212) Fix increment index in KeyValueSortReducer

2014-05-22 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006274#comment-14006274
 ] 

stack commented on HBASE-11212:
---

[~lhofhansl] Too trivial...

 Fix increment index in KeyValueSortReducer
 --

 Key: HBASE-11212
 URL: https://issues.apache.org/jira/browse/HBASE-11212
 Project: HBase
  Issue Type: Bug
Reporter: Gustavo Anatoly
Assignee: Gustavo Anatoly
Priority: Minor
 Fix For: 0.99.0, 0.94.20, 0.98.3

 Attachments: HBASE-11212.patch


 The index is never incremented inside the loop, so context.setStatus is never 
 called.
 {code}
 int index = 0;
 for (KeyValue kv: map) {
   context.write(row, kv);
   if (index > 0 && index % 100 == 0) context.setStatus("Wrote " + index);
 }
 {code}





[jira] [Commented] (HBASE-11225) Backport fix for HBASE-10417 'index is not incremented in PutSortReducer#reduce()'

2014-05-22 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11225?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006277#comment-14006277
 ] 

Lars Hofhansl commented on HBASE-11225:
---

+1

[~stack], 0.96?

 Backport fix for HBASE-10417 'index is not incremented in 
 PutSortReducer#reduce()'
 --

 Key: HBASE-11225
 URL: https://issues.apache.org/jira/browse/HBASE-11225
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Gustavo Anatoly
Priority: Minor
 Fix For: 0.94.20

 Attachments: HBASE-11225.patch


 The problem reported in HBASE-10417 exists in 0.94 code base.
 {code}
   for (KeyValue kv : map) {
 context.write(row, kv);
 if (index  0  index % 100 == 0)
   context.setStatus(Wrote  + index);
   }
 {code}
 This JIRA backports the fix to 0.94.





[jira] [Updated] (HBASE-10831) IntegrationTestIngestWithACL is not setting up LoadTestTool correctly

2014-05-22 Thread Vandana Ayyalasomayajula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vandana Ayyalasomayajula updated HBASE-10831:
-

Attachment: HBASE-10831_98_v4.patch
HBASE-10831_trunk_v4.patch

Addressed review comments. Thanks for the prompt reviews.

 IntegrationTestIngestWithACL is not setting up LoadTestTool correctly
 -

 Key: HBASE-10831
 URL: https://issues.apache.org/jira/browse/HBASE-10831
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.1
Reporter: Andrew Purtell
Assignee: Vandana Ayyalasomayajula
 Fix For: 0.99.0, 0.98.3

 Attachments: HBASE-10831_98_v1.patch, HBASE-10831_98_v3.patch, 
 HBASE-10831_98_v4.patch, HBASE-10831_trunk_v2.patch, 
 HBASE-10831_trunk_v3.patch, HBASE-10831_trunk_v4.patch


 IntegrationTestIngestWithACL is not setting up LoadTestTool correctly.
 {noformat}
 Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 601.709 sec 
  FAILURE!
 testIngest(org.apache.hadoop.hbase.IntegrationTestIngestWithACL)  Time 
 elapsed: 601.489 sec   FAILURE!
 java.lang.AssertionError: Failed to initialize LoadTestTool expected:<0> but 
 was:<1>
 at org.junit.Assert.fail(Assert.java:88)
 at org.junit.Assert.failNotEquals(Assert.java:743)
 at org.junit.Assert.assertEquals(Assert.java:118)
 at org.junit.Assert.assertEquals(Assert.java:555)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngest.initTable(IntegrationTestIngest.java:74)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngest.setUpCluster(IntegrationTestIngest.java:69)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngestWithACL.setUpCluster(IntegrationTestIngestWithACL.java:58)
 at 
 org.apache.hadoop.hbase.IntegrationTestBase.setUp(IntegrationTestBase.java:89)
 {noformat}
 Could be related to HBASE-10675?





[jira] [Updated] (HBASE-11217) Race between SplitLogManager task creation + TimeoutMonitor

2014-05-22 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11217?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-11217:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I have committed v2 patch to master and v1 to 0.98. Thanks for reviews. 

 Race between SplitLogManager task creation + TimeoutMonitor
 ---

 Key: HBASE-11217
 URL: https://issues.apache.org/jira/browse/HBASE-11217
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Critical
 Fix For: 0.99.0, 0.98.3

 Attachments: hbase-11217_v1.patch, hbase-11217_v2.patch


 Some time ago, we reported a test failure in HBASE-11036, which resulted in 
 already-split and merged regions coming back to life, causing split brain for 
 region boundaries and resulting in data loss. 
 It turns out that the root cause was not concurrent online schema change + 
 region split/merge, but meta log splitting failing and the meta updates 
 getting lost. This in turn causes the region split/merge information and 
 assignment to be lost causing large scale data loss. 
 Logs below shows that the split task for meta log is created, but before the 
 znode is created, the timeout thread kicks in and sees the unassigned task. 
 Then it does a get on znode which fails with NoNode (because the znode is not 
 created yet). This causes the task to be marked complete (setDone(path, 
 SUCCESS)) which means that the logs are lost. Meta is assigned elsewhere (and 
 opened with the same seqId as previous) confirming data loss in meta. 
 {code}
 2014-04-16 18:31:26,267 INFO  
 [MASTER_META_SERVER_OPERATIONS-hor13n02:6-2] 
 handler.MetaServerShutdownHandler: Splitting hbase:meta logs for 
 hor13n03.gq1.ygridcore.net,60020,1397672668647
 2014-04-16 18:31:26,274 DEBUG 
 [MASTER_META_SERVER_OPERATIONS-hor13n02:6-2] master.MasterFileSystem: 
 Renamed region directory: 
 hdfs://hor13n01.gq1.ygridcore.net:8020/apps/hbase/data/WALs/hor13n03.gq1.ygridcore.net,60020,1397672668647-splitting
 2014-04-16 18:31:26,274 INFO  
 [MASTER_META_SERVER_OPERATIONS-hor13n02:6-2] master.SplitLogManager: dead 
 splitlog workers [hor13n03.gq1.ygridcore.net,60020,1397672668647]
 2014-04-16 18:31:26,276 DEBUG 
 [MASTER_META_SERVER_OPERATIONS-hor13n02:6-2] master.SplitLogManager: 
 Scheduling batch of logs to split
 2014-04-16 18:31:26,276 INFO  
 [MASTER_META_SERVER_OPERATIONS-hor13n02:6-2] master.SplitLogManager: 
 started splitting 1 logs in 
 [hdfs://hor13n01.gq1.ygridcore.net:8020/apps/hbase/data/WALs/hor13n03.gq1.ygridcore.net,60020,1397672668647-splitting]
 2014-04-16 18:31:26,276 INFO  
 [hor13n02.gq1.ygridcore.net,6,1397672191204.splitLogManagerTimeoutMonitor]
  master.SplitLogManager: total tasks = 1 unassigned = 1 
 tasks={/hbase/splitWAL/WALs%2Fhor13n03.gq1.ygridcore.net%2C60020%2C1397672668647-splitting%2Fhor13n03.gq1.ygridcore.net%252C60020%252C1397672668647.1397672681632.meta=last_update
  = -1 last_version = -
 2014-04-16 18:31:26,276 DEBUG 
 [hor13n02.gq1.ygridcore.net,6,1397672191204.splitLogManagerTimeoutMonitor]
  master.SplitLogManager: resubmitting unassigned task(s) after timeout
 2014-04-16 18:31:26,277 WARN  [main-EventThread] 
 master.SplitLogManager$GetDataAsyncCallback: task znode 
 /hbase/splitWAL/WALs%2Fhor13n03.gq1.ygridcore.net%2C60020%2C1397672668647-splitting%2Fhor13n03.gq1.ygridcore.net%252C60020%252C1397672668647.1397672681632.meta
  vanished.
 2014-04-16 18:31:26,277 INFO  [main-EventThread] master.SplitLogManager: Done 
 splitting 
 /hbase/splitWAL/WALs%2Fhor13n03.gq1.ygridcore.net%2C60020%2C1397672668647-splitting%2Fhor13n03.gq1.ygridcore.net%252C60020%252C1397672668647.1397672681632.meta
 2014-04-16 18:31:26,282 DEBUG [main-EventThread] master.SplitLogManager: put 
 up splitlog task at znode 
 /hbase/splitWAL/WALs%2Fhor13n03.gq1.ygridcore.net%2C60020%2C1397672668647-splitting%2Fhor13n03.gq1.ygridcore.net%252C60020%252C1397672668647.1397672681632.meta
   
 
 2014-04-16 18:31:26,286 WARN  
 [MASTER_META_SERVER_OPERATIONS-hor13n02:6-2] master.SplitLogManager: 
 returning success without actually splitting and deleting all the log files 
 in path 
 hdfs://hor13n01.gq1.ygridcore.net:8020/apps/hbase/data/WALs/hor13n03.gq1.ygridcore.net,60020,1397672668647-splitting
 2014-04-16 18:31:26,286 INFO  
 [MASTER_META_SERVER_OPERATIONS-hor13n02:6-2] master.SplitLogManager: 
 finished splitting (more than or equal to) 9 bytes in 1 log files in 
 [hdfs://hor13n01.gq1.ygridcore.net:8020/apps/hbase/data/WALs/hor13n03.gq1.ygridcore.net,60020,1397672668647-splitting]
  in 10ms
 2014-04-16 18:31:26,290 DEBUG [main-EventThread] 
 

[jira] [Commented] (HBASE-10070) HBase read high-availability using timeline-consistent region replicas

2014-05-22 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006293#comment-14006293
 ] 

Vladimir Rodionov commented on HBASE-10070:
---

{quote}
In the present HBase architecture, it is hard, probably impossible, to satisfy 
constraints like 99th percentile of the reads will be served under 10 ms
{quote}

I did some quick math. For RS-down events to account for the bottom 1% of 
requests, with a 1-minute MTTR we would need to see, on average, ~14-15 RS-down 
events per cluster per day (out of 1440 minutes). I think that is well above 
what we see in real life. I am not saying this is not worth doing, but it will 
not give us 10 ms at 99% (of that I am pretty sure). And this covers only the 
RS-down type of failure. We all know an HBase cluster may experience other 
kinds of temporary disability that affect read request latency:

* blocked writes under heavy load (probably reads as well?) - not sure. 
Solution: tune configuration and throttle incoming requests
* blocked reads due to blocked writes (no available handlers to serve incoming 
requests). Solution: have different pools for writes/reads, or use priorities 
on RPC requests (new feature, correct?)
* excessive GC (sometimes). Solution: off heap, off heap, off heap.
* something else I forgot or am not aware of?

but all of these can and must be avoided in a properly configured and tuned 
cluster.

So this is basically to mitigate serious events (RS down), not transient ones. 
To improve the read request latency distribution there is one classic solution 
that works for sure: cache, cache, cache.
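The arithmetic in the comment can be checked directly (a sketch; the percentile and MTTR figures are the ones quoted above, not measured values):

```java
public class MttrMathSketch {
    // Number of RS-down events per day whose recovery windows together
    // cover the worst (1 - percentile) fraction of a 1440-minute day.
    static double eventsPerDay(double percentile, double mttrMinutes) {
        double minutesPerDay = 24 * 60;  // 1440
        double badMinutes = minutesPerDay * (1 - percentile);
        return badMinutes / mttrMinutes;
    }

    public static void main(String[] args) {
        // 99th percentile with 1-minute MTTR: ~14.4 events/day,
        // i.e. the ~14-15 figure from the comment above.
        System.out.println(eventsPerDay(0.99, 1.0));
    }
}
```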






 HBase read high-availability using timeline-consistent region replicas
 --

 Key: HBASE-10070
 URL: https://issues.apache.org/jira/browse/HBASE-10070
 Project: HBase
  Issue Type: New Feature
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Attachments: HighAvailabilityDesignforreadsApachedoc.pdf


 In the present HBase architecture, it is hard, probably impossible, to 
 satisfy constraints like 99th percentile of the reads will be served under 10 
 ms. One of the major factors that affects this is the MTTR for regions. There 
 are three phases in the MTTR process - detection, assignment, and recovery. 
 Of these, the detection is usually the longest and is presently in the order 
 of 20-30 seconds. During this time, the clients would not be able to read the 
 region data.
 However, some clients will be better served if regions will be available for 
 reads during recovery for doing eventually consistent reads. This will help 
 with satisfying low latency guarantees for some class of applications which 
 can work with stale reads.
 For improving read availability, we propose a replicated read-only region 
 serving design, also referred as secondary regions, or region shadows. 
 Extending current model of a region being opened for reads and writes in a 
 single region server, the region will be also opened for reading in region 
 servers. The region server which hosts the region for reads and writes (as in 
 current case) will be declared as PRIMARY, while 0 or more region servers 
 might be hosting the region as SECONDARY. There may be more than one 
 secondary (replica count > 2).
 Will attach a design doc shortly which contains most of the details and some 
 thoughts about development approaches. Reviews are more than welcome. 
 We also have a proof of concept patch, which includes the master and regions 
 server side of changes. Client side changes will be coming soon as well. 





[jira] [Updated] (HBASE-11229) Change block cache percentage metrics to be doubles rather than ints

2014-05-22 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11229?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11229:
--

  Resolution: Fixed
Hadoop Flags: Incompatible change,Reviewed  (was: Incompatible change)
  Status: Resolved  (was: Patch Available)

Committed to master.  Thanks for the review [~ndimiduk].

 Change block cache percentage metrics to be doubles rather than ints
 

 Key: HBASE-11229
 URL: https://issues.apache.org/jira/browse/HBASE-11229
 Project: HBase
  Issue Type: Sub-task
  Components: metrics
Reporter: stack
Assignee: stack
 Fix For: 0.99.0

 Attachments: 11229.txt


 See parent issue.  Small changes in the hit percentage can have large 
 implications, even when movement is inside a single percent: i.e. going from 
 99.11 to 99.87 percent.  As is, percents are ints.  Make them doubles.
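The effect is easy to demonstrate (a sketch with made-up hit counts, not the actual metrics code):

```java
public class HitRatioSketch {
    // Truncating to an int hides movement inside a single percent.
    static int intPercent(long hits, long total) {
        return (int) (100.0 * hits / total);
    }

    static double doublePercent(long hits, long total) {
        return 100.0 * hits / total;
    }

    public static void main(String[] args) {
        System.out.println(intPercent(9911, 10000));     // 99
        System.out.println(intPercent(9987, 10000));     // 99 -- the change is invisible
        System.out.println(doublePercent(9987, 10000));  // 99.87
    }
}
```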





[jira] [Updated] (HBASE-11225) Backport fix for HBASE-10417 'index is not incremented in PutSortReducer#reduce()'

2014-05-22 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11225?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-11225:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to 0.94. Can add 0.96 if needed.

 Backport fix for HBASE-10417 'index is not incremented in 
 PutSortReducer#reduce()'
 --

 Key: HBASE-11225
 URL: https://issues.apache.org/jira/browse/HBASE-11225
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Gustavo Anatoly
Priority: Minor
 Fix For: 0.94.20

 Attachments: HBASE-11225.patch


 The problem reported in HBASE-10417 exists in 0.94 code base.
 {code}
   for (KeyValue kv : map) {
 context.write(row, kv);
 if (index > 0 && index % 100 == 0)
   context.setStatus("Wrote " + index);
   }
 {code}
 This JIRA backports the fix to 0.94.





[jira] [Commented] (HBASE-10070) HBase read high-availability using timeline-consistent region replicas

2014-05-22 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006296#comment-14006296
 ] 

Mikhail Antonov commented on HBASE-10070:
-

[~enis] yep, sounds good to me. Adding new value in enum should be 
backward-compatible.

 HBase read high-availability using timeline-consistent region replicas
 --

 Key: HBASE-10070
 URL: https://issues.apache.org/jira/browse/HBASE-10070
 Project: HBase
  Issue Type: New Feature
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Attachments: HighAvailabilityDesignforreadsApachedoc.pdf


 In the present HBase architecture, it is hard, probably impossible, to 
 satisfy constraints like 99th percentile of the reads will be served under 10 
 ms. One of the major factors that affects this is the MTTR for regions. There 
 are three phases in the MTTR process - detection, assignment, and recovery. 
 Of these, the detection is usually the longest and is presently in the order 
 of 20-30 seconds. During this time, the clients would not be able to read the 
 region data.
 However, some clients will be better served if regions will be available for 
 reads during recovery for doing eventually consistent reads. This will help 
 with satisfying low latency guarantees for some class of applications which 
 can work with stale reads.
 For improving read availability, we propose a replicated read-only region 
 serving design, also referred as secondary regions, or region shadows. 
 Extending current model of a region being opened for reads and writes in a 
 single region server, the region will be also opened for reading in region 
 servers. The region server which hosts the region for reads and writes (as in 
 current case) will be declared as PRIMARY, while 0 or more region servers 
 might be hosting the region as SECONDARY. There may be more than one 
 secondary (replica count > 2).
 Will attach a design doc shortly which contains most of the details and some 
 thoughts about development approaches. Reviews are more than welcome. 
 We also have a proof of concept patch, which includes the master and regions 
 server side of changes. Client side changes will be coming soon as well. 





[jira] [Commented] (HBASE-11212) Fix increment index in KeyValueSortReducer

2014-05-22 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006300#comment-14006300
 ] 

Lars Hofhansl commented on HBASE-11212:
---

Does that mean it is too trivial to bother with? Or too trivial in terms of 
risk, and you do want it? :)

 Fix increment index in KeyValueSortReducer
 --

 Key: HBASE-11212
 URL: https://issues.apache.org/jira/browse/HBASE-11212
 Project: HBase
  Issue Type: Bug
Reporter: Gustavo Anatoly
Assignee: Gustavo Anatoly
Priority: Minor
 Fix For: 0.99.0, 0.94.20, 0.98.3

 Attachments: HBASE-11212.patch


 The index is never incremented inside the loop, so context.setStatus is never 
 called.
 {code}
 int index = 0;
 for (KeyValue kv: map) {
   context.write(row, kv);
   if (index > 0 && index % 100 == 0) context.setStatus("Wrote " + index);
 }
 {code}





[jira] [Commented] (HBASE-11016) Remove Filter#filterRow(List)

2014-05-22 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006309#comment-14006309
 ] 

stack commented on HBASE-11016:
---

What does this patch do for HBASE-10965 and HBASE-11093?

We seem to still have hasFilterRow in the Interface yet we remove filterRow 
here.  Is that right?

 Remove Filter#filterRow(List)
 -

 Key: HBASE-11016
 URL: https://issues.apache.org/jira/browse/HBASE-11016
 Project: HBase
  Issue Type: Task
  Components: Filters
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Fix For: 0.99.0

 Attachments: 11016-v1.txt


 In 0.96+ the filterRow(List) method is deprecated:
 {code}
* WARNING: please do not override this method.  Instead override {@link 
 #filterRowCells(List)}.
* This is for transition from 0.94 -> 0.96
**/
   @Deprecated
   abstract public void filterRow(List<KeyValue> kvs) throws IOException;
 {code}
 This method should be removed from Filter classes for 1.0
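A migration sketch with stand-in types (not the real HBase classes): custom filters that override the deprecated filterRow(List) would override filterRowCells(List) instead before the old method is removed in 1.0.

```java
import java.util.List;

public class FilterMigrationSketch {
    interface Cell {}

    // Stand-in for the filter base class.
    static abstract class FilterBase {
        // The replacement hook that survives into 1.0; no-op by default.
        public void filterRowCells(List<Cell> cells) {}
    }

    // A custom filter migrated off the deprecated filterRow(List).
    static class MyFilter extends FilterBase {
        int cellsSeen;

        @Override
        public void filterRowCells(List<Cell> cells) {
            cellsSeen = cells.size();  // a real filter would mutate the list here
        }
    }

    public static void main(String[] args) {
        MyFilter f = new MyFilter();
        f.filterRowCells(List.of(new Cell() {}, new Cell() {}));
        System.out.println(f.cellsSeen);  // 2
    }
}
```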





[jira] [Commented] (HBASE-10070) HBase read high-availability using timeline-consistent region replicas

2014-05-22 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006312#comment-14006312
 ] 

Vladimir Rodionov commented on HBASE-10070:
---

Maybe my point was not clear ... I agree that read HA (how many 9's, by the 
way?) is a good feature to have, but it won't give us what the developer 
declared in the description section. The relatively high MTTR is not the major 
source of bad 90%+ request latency in HBase.

 HBase read high-availability using timeline-consistent region replicas
 --

 Key: HBASE-10070
 URL: https://issues.apache.org/jira/browse/HBASE-10070
 Project: HBase
  Issue Type: New Feature
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Attachments: HighAvailabilityDesignforreadsApachedoc.pdf


 In the present HBase architecture, it is hard, probably impossible, to 
 satisfy constraints like 99th percentile of the reads will be served under 10 
 ms. One of the major factors that affects this is the MTTR for regions. There 
 are three phases in the MTTR process - detection, assignment, and recovery. 
 Of these, the detection is usually the longest and is presently in the order 
 of 20-30 seconds. During this time, the clients would not be able to read the 
 region data.
 However, some clients will be better served if regions will be available for 
 reads during recovery for doing eventually consistent reads. This will help 
 with satisfying low latency guarantees for some class of applications which 
 can work with stale reads.
 For improving read availability, we propose a replicated read-only region 
 serving design, also referred as secondary regions, or region shadows. 
 Extending current model of a region being opened for reads and writes in a 
 single region server, the region will be also opened for reading in region 
 servers. The region server which hosts the region for reads and writes (as in 
 current case) will be declared as PRIMARY, while 0 or more region servers 
 might be hosting the region as SECONDARY. There may be more than one 
 secondary (replica count > 2).
 Will attach a design doc shortly which contains most of the details and some 
 thoughts about development approaches. Reviews are more than welcome. 
 We also have a proof of concept patch, which includes the master and regions 
 server side of changes. Client side changes will be coming soon as well. 





[jira] [Commented] (HBASE-11212) Fix increment index in KeyValueSortReducer

2014-05-22 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006319#comment-14006319
 ] 

stack commented on HBASE-11212:
---

[~lhofhansl] Too trivial to bother.

 Fix increment index in KeyValueSortReducer
 --

 Key: HBASE-11212
 URL: https://issues.apache.org/jira/browse/HBASE-11212
 Project: HBase
  Issue Type: Bug
Reporter: Gustavo Anatoly
Assignee: Gustavo Anatoly
Priority: Minor
 Fix For: 0.99.0, 0.94.20, 0.98.3

 Attachments: HBASE-11212.patch


 The index is never incremented inside the loop, so context.setStatus is never 
 called.
 {code}
 int index = 0;
 for (KeyValue kv: map) {
   context.write(row, kv);
   if (index > 0 && index % 100 == 0) context.setStatus("Wrote " + index);
 }
 {code}





[jira] [Commented] (HBASE-10070) HBase read high-availability using timeline-consistent region replicas

2014-05-22 Thread Konstantin Boudnik (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006317#comment-14006317
 ] 

Konstantin Boudnik commented on HBASE-10070:


bq. but it will not give us 10ms 99%
In reality, nobody with skin in the game is seriously considering 99% 
availability. Even 99.999% isn't suitable for some. Would you consider 5.7 
minutes annualized to be a plausible danger?

 HBase read high-availability using timeline-consistent region replicas
 --

 Key: HBASE-10070
 URL: https://issues.apache.org/jira/browse/HBASE-10070
 Project: HBase
  Issue Type: New Feature
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Attachments: HighAvailabilityDesignforreadsApachedoc.pdf


 In the present HBase architecture, it is hard, probably impossible, to 
 satisfy constraints like 99th percentile of the reads will be served under 10 
 ms. One of the major factors that affects this is the MTTR for regions. There 
 are three phases in the MTTR process - detection, assignment, and recovery. 
 Of these, the detection is usually the longest and is presently in the order 
 of 20-30 seconds. During this time, the clients would not be able to read the 
 region data.
 However, some clients will be better served if regions will be available for 
 reads during recovery for doing eventually consistent reads. This will help 
 with satisfying low latency guarantees for some class of applications which 
 can work with stale reads.
 For improving read availability, we propose a replicated read-only region 
 serving design, also referred as secondary regions, or region shadows. 
 Extending current model of a region being opened for reads and writes in a 
 single region server, the region will be also opened for reading in region 
 servers. The region server which hosts the region for reads and writes (as in 
 current case) will be declared as PRIMARY, while 0 or more region servers 
 might be hosting the region as SECONDARY. There may be more than one 
 secondary (replica count > 2).
 Will attach a design doc shortly which contains most of the details and some 
 thoughts about development approaches. Reviews are more than welcome. 
 We also have a proof of concept patch, which includes the master and regions 
 server side of changes. Client side changes will be coming soon as well. 





[jira] [Updated] (HBASE-11212) Fix increment index in KeyValueSortReducer

2014-05-22 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11212?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-11212:
--

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to 0.94, 0.98, master for now.

 Fix increment index in KeyValueSortReducer
 --

 Key: HBASE-11212
 URL: https://issues.apache.org/jira/browse/HBASE-11212
 Project: HBase
  Issue Type: Bug
Reporter: Gustavo Anatoly
Assignee: Gustavo Anatoly
Priority: Minor
 Fix For: 0.99.0, 0.94.20, 0.98.3

 Attachments: HBASE-11212.patch


 The index is never incremented inside the loop, so context.setStatus is never 
 called.
 {code}
 int index = 0;
 for (KeyValue kv: map) {
   context.write(row, kv);
   if (index > 0 && index % 100 == 0) context.setStatus("Wrote " + index);
 }
 {code}





[jira] [Updated] (HBASE-10924) [region_mover]: Adjust region_mover script to retry unloading a server a configurable number of times in case of region splits/merges

2014-05-22 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10924?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10924:
--

Fix Version/s: (was: 0.94.20)
   0.94.21

 [region_mover]: Adjust region_mover script to retry unloading a server a 
 configurable number of times in case of region splits/merges
 -

 Key: HBASE-10924
 URL: https://issues.apache.org/jira/browse/HBASE-10924
 Project: HBase
  Issue Type: Bug
  Components: Region Assignment
Affects Versions: 0.94.15
Reporter: Aleksandr Shulman
Assignee: Aleksandr Shulman
  Labels: region_mover, rolling_upgrade
 Fix For: 0.94.21

 Attachments: HBASE-10924-0.94-v2.patch, HBASE-10924-0.94-v3.patch


 Observed behavior:
 In about 5% of cases, my rolling upgrade tests fail because of stuck regions 
 during a region server unload. My theory is that this occurs when region 
 assignment information changes between the time the region list is generated, 
 and the time when the region is to be moved.
 An example of such a region information change is a split or merge.
 Example:
 Regionserver A has 100 regions (#0-#99). The balancer is turned off and the 
 regionmover script is called to unload this regionserver. The regionmover 
 script will generate the list of 100 regions to be moved and then proceed 
 down that list, moving the regions off in series. However, there is a region, 
 #84, that has split into two daughter regions while regions 0-83 were moved. 
 The script will be stuck trying to move #84, timeout, and then the failure 
 will bubble up (attempt 1 failed).
 Proposed solution:
 This specific failure mode should be caught and the region_mover script 
 should now attempt to move off all the regions. Now, it will have 16+1 (due 
 to split) regions to move. There is a good chance that it will be able to 
 move all 17 off without issues. However, should it encounter this same issue 
 (attempt 2 failed), it will retry again. This process will continue until the 
 maximum number of unload retry attempts has been reached.
 This is not foolproof, but let's say for the sake of argument that 5% of 
 unload attempts hit this issue, then with a retry count of 3, it will reduce 
 the unload failure probability from 0.05 to 0.000125 (0.05^3).
 Next steps:
 I am looking for feedback on this approach. If it seems like a sensible 
 approach, I will create a strawman patch and test it.
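The proposed retry loop can be sketched as follows (hypothetical names; each attempt regenerates the region list, so regions created by a mid-unload split are picked up):

```java
public class UnloadRetrySketch {
    // Stand-in for one full unload pass; the real script would regenerate
    // the region list at the start of each attempt.
    interface Unloader {
        boolean tryUnload();
    }

    static boolean unloadWithRetries(Unloader u, int maxAttempts) {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            if (u.tryUnload()) {
                return true;  // all regions moved off
            }
            // else: a split/merge changed the region list; retry from scratch
        }
        return false;  // bubble the failure up after the configured cap
    }

    public static void main(String[] args) {
        int[] calls = {0};
        // Fails twice (stuck on a split region), succeeds on the third pass.
        Unloader flaky = () -> ++calls[0] >= 3;
        System.out.println(unloadWithRetries(flaky, 3));  // true
        System.out.println(calls[0]);                     // 3
    }
}
```

With a per-attempt failure probability of 0.05 and 3 attempts, the overall failure probability drops to 0.05^3 = 0.000125, matching the estimate above.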





[jira] [Resolved] (HBASE-10507) Proper filter tests for TestImportExport

2014-05-22 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10507?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved HBASE-10507.
---

   Resolution: Won't Fix
Fix Version/s: (was: 0.98.4)
   (was: 0.94.20)
   (was: 0.96.3)
   (was: 0.99.0)

Keep pushing it. Marking as Won't fix unless somebody wants to just do it.

 Proper filter tests for TestImportExport
 

 Key: HBASE-10507
 URL: https://issues.apache.org/jira/browse/HBASE-10507
 Project: HBase
  Issue Type: Sub-task
Reporter: Lars Hofhansl

 See parent. TestImportExport.testWithFilter used to pass by accident (until 
 the parent is fixed, and until very recently also in trunk).
 This is as simple as just adding some non-matching rows to the tests. Unlike 
 the parent, this should be added to all branches.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HBASE-10536) ImportTsv should fail fast if any of the column family passed to the job is not present in the table

2014-05-22 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10536?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-10536:
--

Fix Version/s: (was: 0.94.20)
   0.94.21

 ImportTsv should fail fast if any of the column family passed to the job is 
 not present in the table
 

 Key: HBASE-10536
 URL: https://issues.apache.org/jira/browse/HBASE-10536
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Affects Versions: 0.98.0
Reporter: rajeshbabu
Assignee: rajeshbabu
 Fix For: 0.99.0, 0.96.3, 0.94.21, 0.98.4


 While checking the 0.98 RC and running the bulkload tools, I passed the wrong 
 column family to importtsv by mistake. LoadIncrementalHFiles failed with the 
 following exception:
 {code}
 Exception in thread main java.io.IOException: Unmatched family names found: 
 unmatched family names in HFiles to be bulkloaded: [f1]; valid family names 
 of table test are: [f]
 at 
 org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.doBulkLoad(LoadIncrementalHFiles.java:241)
 at 
 org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.run(LoadIncrementalHFiles.java:823)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
 at 
 org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles.main(LoadIncrementalHFiles.java:828)
 {code}
  
 It's better to fail fast if any of the passed column families is not present 
 in the table.
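A minimal, self-contained sketch of the fail-fast idea (plain Java with stand-in sets; the real patch would compare the requested families against the table's HTableDescriptor before submitting the job):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class FailFastCheck {
    // Throws before any work is scheduled if a requested family is unknown.
    static void validateFamilies(Set<String> tableFamilies, Set<String> requested) {
        Set<String> unknown = new HashSet<>(requested);
        unknown.removeAll(tableFamilies);
        if (!unknown.isEmpty()) {
            throw new IllegalArgumentException(
                "Unmatched family names: " + unknown + "; valid families: " + tableFamilies);
        }
    }

    public static void main(String[] args) {
        Set<String> valid = new HashSet<>(Arrays.asList("f"));
        validateFamilies(valid, new HashSet<>(Arrays.asList("f"))); // passes
        try {
            validateFamilies(valid, new HashSet<>(Arrays.asList("f1")));
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

With this check up front, the job fails immediately instead of after the map phase inside LoadIncrementalHFiles.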





[jira] [Updated] (HBASE-11096) stop method of Master and RegionServer coprocessor is not invoked

2014-05-22 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11096?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-11096:
--

Fix Version/s: (was: 0.94.20)
   0.94.21

Lemme push to 0.94.21, so that I can get an RC out.

 stop method of Master and RegionServer coprocessor  is not invoked
 --

 Key: HBASE-11096
 URL: https://issues.apache.org/jira/browse/HBASE-11096
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.96.2, 0.98.1, 0.94.19
Reporter: Qiang Tian
Assignee: Qiang Tian
 Fix For: 0.99.0, 0.96.3, 0.98.3, 0.94.21

 Attachments: HBASE-11096-0.94.patch, HBASE-11096-0.96.patch, 
 HBASE-11096-0.98.patch, HBASE-11096-trunk-v0.patch, 
 HBASE-11096-trunk-v0.patch, HBASE-11096-trunk-v1.patch, 
 HBASE-11096-trunk-v2.patch, HBASE-11096-trunk-v3.patch


 The stop method of coprocessors specified by 
 hbase.coprocessor.master.classes and 
 hbase.coprocessor.regionserver.classes is not invoked.
 If a coprocessor allocates OS resources, this can leak master/regionserver 
 resources or cause a hang during exit.
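A self-contained sketch of the lifecycle issue (stand-in types, not the real CoprocessorHost API): whatever host starts the coprocessors must also call stop() on them at shutdown, otherwise resources they hold are never released.

```java
import java.util.ArrayList;
import java.util.List;

public class CoprocessorLifecycle {
    interface Coprocessor { void start(); void stop(); }

    static class Host {
        private final List<Coprocessor> loaded = new ArrayList<>();
        void load(Coprocessor c) { c.start(); loaded.add(c); }
        // The point of the fix: invoke stop() for every loaded coprocessor.
        void shutdown() { for (Coprocessor c : loaded) c.stop(); }
    }

    public static void main(String[] args) {
        final boolean[] stopped = {false};
        Host host = new Host();
        host.load(new Coprocessor() {
            public void start() { System.out.println("started"); }
            public void stop()  { stopped[0] = true; System.out.println("stopped"); }
        });
        host.shutdown(); // without this call, stop() never runs -> leak
        System.out.println("stopped=" + stopped[0]);
    }
}
```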





[jira] [Commented] (HBASE-10070) HBase read high-availability using timeline-consistent region replicas

2014-05-22 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006335#comment-14006335
 ] 

Vladimir Rodionov commented on HBASE-10070:
---

I apologize, [~enis], I called you "developer" :). 

 HBase read high-availability using timeline-consistent region replicas
 --

 Key: HBASE-10070
 URL: https://issues.apache.org/jira/browse/HBASE-10070
 Project: HBase
  Issue Type: New Feature
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Attachments: HighAvailabilityDesignforreadsApachedoc.pdf


 In the present HBase architecture, it is hard, probably impossible, to 
 satisfy constraints like "99th percentile of the reads will be served under 
 10 ms". One of the major factors that affects this is the MTTR for regions. There 
 are three phases in the MTTR process - detection, assignment, and recovery. 
 Of these, the detection is usually the longest and is presently in the order 
 of 20-30 seconds. During this time, the clients would not be able to read the 
 region data.
 However, some clients will be better served if regions will be available for 
 reads during recovery for doing eventually consistent reads. This will help 
 with satisfying low latency guarantees for some class of applications which 
 can work with stale reads.
 For improving read availability, we propose a replicated read-only region 
 serving design, also referred as secondary regions, or region shadows. 
 Extending current model of a region being opened for reads and writes in a 
 single region server, the region will be also opened for reading in region 
 servers. The region server which hosts the region for reads and writes (as in 
 current case) will be declared as PRIMARY, while 0 or more region servers 
 might be hosting the region as SECONDARY. There may be more than one 
 secondary (replica count > 2).
 Will attach a design doc shortly which contains most of the details and some 
 thoughts about development approaches. Reviews are more than welcome. 
 We also have a proof-of-concept patch, which includes the master and region 
 server side changes. Client side changes will be coming soon as well. 





[jira] [Commented] (HBASE-10070) HBase read high-availability using timeline-consistent region replicas

2014-05-22 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006343#comment-14006343
 ] 

Vladimir Rodionov commented on HBASE-10070:
---

{quote}
In reality, nobody with a skin in the game is seriously considering 99% 
availability. Even 99.999% isn't suitable for some. Would you consider 5.7 
minutes annualized to be a plausible danger?
{quote}

How does this contradict my statement? HA is a great feature, but it has nothing 
to do with improving the read request latency distribution.

 HBase read high-availability using timeline-consistent region replicas
 --

 Key: HBASE-10070
 URL: https://issues.apache.org/jira/browse/HBASE-10070
 Project: HBase
  Issue Type: New Feature
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Attachments: HighAvailabilityDesignforreadsApachedoc.pdf







[jira] [Commented] (HBASE-9857) Blockcache prefetch option

2014-05-22 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006351#comment-14006351
 ] 

Andrew Purtell commented on HBASE-9857:
---

bq.  If you do not read block immediately after write (into cache) it will 
never get promoted into multi-bucket (50% of a cache) and you will be trashing 
bottom 25% of a block cache

We already have a separate schema setting for cache-on-write.

Otherwise, sure, there's no magic here. It's a tuning option.

 Blockcache prefetch option
 --

 Key: HBASE-9857
 URL: https://issues.apache.org/jira/browse/HBASE-9857
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 0.99.0, 0.98.3

 Attachments: 9857.patch, 9857.patch, HBASE-9857-0.98.patch, 
 HBASE-9857-trunk.patch, HBASE-9857-trunk.patch


 Attached patch implements a prefetching function for HFile (v3) blocks, 
 enabled by a column family or regionserver property. The purpose of this 
 change is to warm the blockcache, as rapidly after region open as is 
 reasonable, with all the data and index blocks of (presumably also in-memory) 
 table data, without counting those block loads as cache misses. Great for fast 
 reads and keeping the cache hit ratio high. You can tune the IO impact versus 
 the time until all data blocks are in cache. Works a bit like 
 CompactSplitThread. Makes some effort not to stampede.
 I have been using this for setting up various experiments and thought I'd 
 polish it up a bit and throw it out there. If the data to be preloaded will 
 not fit in the blockcache, or if as a percentage of the blockcache it is 
 large, this is not a good idea: it will just blow out the cache and trigger a 
 lot of useless GC activity. Might be useful as an expert tuning option though. 
 Or not.
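Since the patch exposes this as a column-family property, and assuming the attribute name PREFETCH_BLOCKS_ON_OPEN that this change adds to HColumnDescriptor, enabling prefetch from the hbase shell would look roughly like:

```
# hbase shell: turn on block prefetch for family 'f' of table 't'
# (attribute name assumed from this patch; verify against your release)
alter 't', {NAME => 'f', PREFETCH_BLOCKS_ON_OPEN => 'true'}
```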





[jira] [Commented] (HBASE-10070) HBase read high-availability using timeline-consistent region replicas

2014-05-22 Thread Andrey Stepachev (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006358#comment-14006358
 ] 

Andrey Stepachev commented on HBASE-10070:
--

If you can spread the read load of a hot region across shadows, read latencies 
can go down due to less contention, especially when reads avoid the primary RS. 

 HBase read high-availability using timeline-consistent region replicas
 --

 Key: HBASE-10070
 URL: https://issues.apache.org/jira/browse/HBASE-10070
 Project: HBase
  Issue Type: New Feature
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Attachments: HighAvailabilityDesignforreadsApachedoc.pdf





--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11016) Remove Filter#filterRow(List)

2014-05-22 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006360#comment-14006360
 ] 

Ted Yu commented on HBASE-11016:


There is filterRow() in Filter which hasFilterRow() applies to.

 Remove Filter#filterRow(List)
 -

 Key: HBASE-11016
 URL: https://issues.apache.org/jira/browse/HBASE-11016
 Project: HBase
  Issue Type: Task
  Components: Filters
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Fix For: 0.99.0

 Attachments: 11016-v1.txt


 In 0.96+ the filterRow(List) method is deprecated:
 {code}
* WARNING: please do not override this method.  Instead override {@link 
 #filterRowCells(List)}.
* This is for transition from 0.94 -> 0.96
**/
   @Deprecated
   abstract public void filterRow(List<KeyValue> kvs) throws IOException;
 {code}
 This method should be removed from the Filter classes for 1.0.
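A self-contained sketch of the migration the deprecation notice asks for (stand-in classes, not the real org.apache.hadoop.hbase.filter types): new filters override filterRowCells(List) and never touch the deprecated filterRow(List), so the latter can be deleted in 1.0.

```java
import java.util.ArrayList;
import java.util.List;

public class FilterMigration {
    static class Cell { final String row; Cell(String row) { this.row = row; } }

    // Stand-in for the 0.96+ Filter base class.
    static abstract class FilterBase {
        @Deprecated
        public void filterRow(List<Cell> kvs) { filterRowCells(kvs); } // transition shim
        public void filterRowCells(List<Cell> cells) { }               // override this one
    }

    // New-style filter: overrides filterRowCells, not the deprecated method.
    static class DropMarkedRows extends FilterBase {
        @Override
        public void filterRowCells(List<Cell> cells) {
            cells.removeIf(c -> c.row.startsWith("x"));
        }
    }

    public static void main(String[] args) {
        List<Cell> cells = new ArrayList<>();
        cells.add(new Cell("a"));
        cells.add(new Cell("xb"));
        new DropMarkedRows().filterRowCells(cells);
        System.out.println("remaining=" + cells.size());
    }
}
```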





[jira] [Commented] (HBASE-10070) HBase read high-availability using timeline-consistent region replicas

2014-05-22 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10070?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006367#comment-14006367
 ] 

Mikhail Antonov commented on HBASE-10070:
-

bq. HA is great feature, but it has nothing to do with improving read requests 
latency distribution.

In this jira we're saying that we can't provide good latency in 99(.999?)% of 
cases for the following reason (though not only this one, as there are also GC 
pauses etc.): when a region replica fails (the RS fails), requests time out or 
just take a really long time. This feature addresses that aspect. So this jira 
aims (as I understand it) to give HA (possibly stale) replicas, with the added 
benefit of reduced latency.

 HBase read high-availability using timeline-consistent region replicas
 --

 Key: HBASE-10070
 URL: https://issues.apache.org/jira/browse/HBASE-10070
 Project: HBase
  Issue Type: New Feature
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Attachments: HighAvailabilityDesignforreadsApachedoc.pdf







[jira] [Commented] (HBASE-11217) Race between SplitLogManager task creation + TimeoutMonitor

2014-05-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11217?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006369#comment-14006369
 ] 

Hudson commented on HBASE-11217:


FAILURE: Integrated in HBase-TRUNK #5137 (See 
[https://builds.apache.org/job/HBase-TRUNK/5137/])
HBASE-11217 Race between SplitLogManager task creation + TimeoutMonitor (enis: 
rev 92b2c86776d968c9f44bbf848e56eff753c8950f)
* hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java


 Race between SplitLogManager task creation + TimeoutMonitor
 ---

 Key: HBASE-11217
 URL: https://issues.apache.org/jira/browse/HBASE-11217
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
Priority: Critical
 Fix For: 0.99.0, 0.98.3

 Attachments: hbase-11217_v1.patch, hbase-11217_v2.patch


 Some time ago, we reported a test failure in HBASE-11036, which resulted in 
 already-split and merged regions coming back to life, causing split brain for 
 region boundaries and resulting in data loss. 
 It turns out that the root cause was not concurrent online schema change + 
 region split/merge, but meta log splitting failing and the meta updates 
 getting lost. This in turn causes the region split/merge information and 
 assignment to be lost causing large scale data loss. 
 Logs below shows that the split task for meta log is created, but before the 
 znode is created, the timeout thread kicks in and sees the unassigned task. 
 Then it does a get on znode which fails with NoNode (because the znode is not 
 created yet). This causes the task to be marked complete (setDone(path, 
 SUCCESS)) which means that the logs are lost. Meta is assigned elsewhere (and 
 opened with the same seqId as previous) confirming data loss in meta. 
 {code}
 2014-04-16 18:31:26,267 INFO  
 [MASTER_META_SERVER_OPERATIONS-hor13n02:6-2] 
 handler.MetaServerShutdownHandler: Splitting hbase:meta logs for 
 hor13n03.gq1.ygridcore.net,60020,1397672668647
 2014-04-16 18:31:26,274 DEBUG 
 [MASTER_META_SERVER_OPERATIONS-hor13n02:6-2] master.MasterFileSystem: 
 Renamed region directory: 
 hdfs://hor13n01.gq1.ygridcore.net:8020/apps/hbase/data/WALs/hor13n03.gq1.ygridcore.net,60020,1397672668647-splitting
 2014-04-16 18:31:26,274 INFO  
 [MASTER_META_SERVER_OPERATIONS-hor13n02:6-2] master.SplitLogManager: dead 
 splitlog workers [hor13n03.gq1.ygridcore.net,60020,1397672668647]
 2014-04-16 18:31:26,276 DEBUG 
 [MASTER_META_SERVER_OPERATIONS-hor13n02:6-2] master.SplitLogManager: 
 Scheduling batch of logs to split
 2014-04-16 18:31:26,276 INFO  
 [MASTER_META_SERVER_OPERATIONS-hor13n02:6-2] master.SplitLogManager: 
 started splitting 1 logs in 
 [hdfs://hor13n01.gq1.ygridcore.net:8020/apps/hbase/data/WALs/hor13n03.gq1.ygridcore.net,60020,1397672668647-splitting]
 2014-04-16 18:31:26,276 INFO  
 [hor13n02.gq1.ygridcore.net,6,1397672191204.splitLogManagerTimeoutMonitor]
  master.SplitLogManager: total tasks = 1 unassigned = 1 
 tasks={/hbase/splitWAL/WALs%2Fhor13n03.gq1.ygridcore.net%2C60020%2C1397672668647-splitting%2Fhor13n03.gq1.ygridcore.net%252C60020%252C1397672668647.1397672681632.meta=last_update
  = -1 last_version = -
 2014-04-16 18:31:26,276 DEBUG 
 [hor13n02.gq1.ygridcore.net,6,1397672191204.splitLogManagerTimeoutMonitor]
  master.SplitLogManager: resubmitting unassigned task(s) after timeout
 2014-04-16 18:31:26,277 WARN  [main-EventThread] 
 master.SplitLogManager$GetDataAsyncCallback: task znode 
 /hbase/splitWAL/WALs%2Fhor13n03.gq1.ygridcore.net%2C60020%2C1397672668647-splitting%2Fhor13n03.gq1.ygridcore.net%252C60020%252C1397672668647.1397672681632.meta
  vanished.
 2014-04-16 18:31:26,277 INFO  [main-EventThread] master.SplitLogManager: Done 
 splitting 
 /hbase/splitWAL/WALs%2Fhor13n03.gq1.ygridcore.net%2C60020%2C1397672668647-splitting%2Fhor13n03.gq1.ygridcore.net%252C60020%252C1397672668647.1397672681632.meta
 2014-04-16 18:31:26,282 DEBUG [main-EventThread] master.SplitLogManager: put 
 up splitlog task at znode 
 /hbase/splitWAL/WALs%2Fhor13n03.gq1.ygridcore.net%2C60020%2C1397672668647-splitting%2Fhor13n03.gq1.ygridcore.net%252C60020%252C1397672668647.1397672681632.meta
   
 
 2014-04-16 18:31:26,286 WARN  
 [MASTER_META_SERVER_OPERATIONS-hor13n02:6-2] master.SplitLogManager: 
 returning success without actually splitting and deleting all the log files 
 in path 
 hdfs://hor13n01.gq1.ygridcore.net:8020/apps/hbase/data/WALs/hor13n03.gq1.ygridcore.net,60020,1397672668647-splitting
 2014-04-16 18:31:26,286 INFO  
 [MASTER_META_SERVER_OPERATIONS-hor13n02:6-2] master.SplitLogManager: 
 finished splitting (more than or equal to) 9 bytes in 1 log files in 
 

[jira] [Commented] (HBASE-9857) Blockcache prefetch option

2014-05-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006372#comment-14006372
 ] 

Hudson commented on HBASE-9857:
---

FAILURE: Integrated in HBase-TRUNK #5137 (See 
[https://builds.apache.org/job/HBase-TRUNK/5137/])
HBASE-9857 Blockcache prefetch option (apurtell: rev 
58818496daad0572843eacbeabfb95bc6af816ee)
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CacheConfig.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SingleSizeCache.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFile.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockIndex.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestPrefetch.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlockIndex.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/BlockCache.java
* hbase-client/src/main/java/org/apache/hadoop/hbase/HColumnDescriptor.java
* hbase-shell/src/main/ruby/hbase/admin.rb
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestCacheOnWriteInSchema.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/CombinedBlockCache.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV2.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/LruBlockCache.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/slab/SlabCache.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestHeapMemoryManager.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/DoubleBlockCache.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileReaderV3.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestLruBlockCache.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/util/CompoundBloomFilter.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/CacheTestUtils.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/PrefetchExecutor.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestCacheOnWrite.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/bucket/BucketCache.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/bucket/TestBucketCache.java
Amend HBASE-9857 Blockcache prefetch option; add missing license header 
(apurtell: rev 264725d59274374d7b9c8ee2b47a86713ab1a6b8)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/AbstractHFileReader.java


 Blockcache prefetch option
 --

 Key: HBASE-9857
 URL: https://issues.apache.org/jira/browse/HBASE-9857
 Project: HBase
  Issue Type: Improvement
Reporter: Andrew Purtell
Assignee: Andrew Purtell
Priority: Minor
 Fix For: 0.99.0, 0.98.3

 Attachments: 9857.patch, 9857.patch, HBASE-9857-0.98.patch, 
 HBASE-9857-trunk.patch, HBASE-9857-trunk.patch







[jira] [Commented] (HBASE-11212) Fix increment index in KeyValueSortReducer

2014-05-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006374#comment-14006374
 ] 

Hudson commented on HBASE-11212:


FAILURE: Integrated in HBase-TRUNK #5137 (See 
[https://builds.apache.org/job/HBase-TRUNK/5137/])
HBASE-11212 Fix increment index in KeyValueSortReducer. (Gustavo Anatoly) 
(larsh: rev dd9ac0c0ad33448a46b5e61334bf86975fd9f779)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/KeyValueSortReducer.java


 Fix increment index in KeyValueSortReducer
 --

 Key: HBASE-11212
 URL: https://issues.apache.org/jira/browse/HBASE-11212
 Project: HBase
  Issue Type: Bug
Reporter: Gustavo Anatoly
Assignee: Gustavo Anatoly
Priority: Minor
 Fix For: 0.99.0, 0.94.20, 0.98.3

 Attachments: HBASE-11212.patch


 The index is never incremented inside the loop, so context.setStatus is also 
 never called.
 {code}
 int index = 0;
 for (KeyValue kv: map) {
   context.write(row, kv);
   if (index > 0 && index % 100 == 0) context.setStatus("Wrote " + index);
 }
 {code}
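The fix is simply to increment the counter inside the loop. A stand-alone sketch of the corrected pattern (plain Java, with the reducer context and KeyValue replaced by simple stand-ins):

```java
import java.util.ArrayList;
import java.util.List;

public class SortReducerLoop {
    public static void main(String[] args) {
        List<String> kvs = new ArrayList<>();
        for (int i = 0; i < 250; i++) kvs.add("kv" + i);

        List<String> statuses = new ArrayList<>();
        int index = 0;
        for (String kv : kvs) {
            // context.write(row, kv) would go here
            if (index > 0 && index % 100 == 0) statuses.add("Wrote " + index);
            index++;  // the missing increment reported in HBASE-11212
        }
        System.out.println(statuses); // status set at index 100 and 200
    }
}
```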





[jira] [Commented] (HBASE-10835) DBE encode path improvements

2014-05-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10835?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006371#comment-14006371
 ] 

Hudson commented on HBASE-10835:


FAILURE: Integrated in HBase-TRUNK #5137 (See 
[https://builds.apache.org/job/HBase-TRUNK/5137/])
HBASE-10835 DBE encode path improvements.(Anoop) (anoopsamjohn: rev 
53513dcb452e104bbfd71819054bf4d68808f731)
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/EncodedDataBlock.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockEncodingContext.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/EncodingState.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlockCompatibility.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/CopyKeyDataBlockEncoder.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV3.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/HFileBlockDefaultEncodingContext.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DataBlockEncoder.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/DiffKeyDeltaEncoder.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileWriterV2.java
* 
hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/encode/EncoderFactory.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/PrefixKeyDeltaEncoder.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestDataBlockEncoders.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoder.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestSeekToBlockWithEncoders.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/FastDiffDeltaEncoder.java
* 
hbase-common/src/main/java/org/apache/hadoop/hbase/io/encoding/BufferedDataBlockEncoder.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileDataBlockEncoderImpl.java
* 
hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/PrefixTreeCodec.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileDataBlockEncoder.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/NoOpDataBlockEncoder.java
* 
hbase-prefix-tree/src/main/java/org/apache/hadoop/hbase/codec/prefixtree/encode/EncoderPoolImpl.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/encoding/TestPrefixTreeEncoding.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/io/hfile/TestHFileBlock.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/io/hfile/HFileBlock.java


 DBE encode path improvements
 

 Key: HBASE-10835
 URL: https://issues.apache.org/jira/browse/HBASE-10835
 Project: HBase
  Issue Type: Improvement
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 0.99.0

 Attachments: HBASE-10835.patch, HBASE-10835_V2.patch, 
 HBASE-10835_V3.patch, HBASE-10835_V4.patch, HBASE-10835_V5.patch


 Here we first write KVs (Cells) into a buffer which is then passed to the DBE 
 encoder. The encoder again reads the KVs one by one from the buffer, encodes 
 them, and creates a new buffer.
 There is no need for this model now. Previously we had the option of no 
 encoding on disk and encoding only in the cache. At that time the buffer read 
 from an HFile block was passed to this and encoded.
 So encoding cell by cell can be done now. Making this change will require a 
 NoOp DBE impl which just writes a cell as it is, without any encoding.





[jira] [Commented] (HBASE-11229) Change block cache percentage metrics to be doubles rather than ints

2014-05-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11229?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006370#comment-14006370
 ] 

Hudson commented on HBASE-11229:


FAILURE: Integrated in HBase-TRUNK #5137 (See 
[https://builds.apache.org/job/HBase-TRUNK/5137/])
HBASE-11229 Change block cache percentage metrics to be doubles rather than 
ints (stack: rev 46e53b089a81c2e1606b1616b1abf64277de50a9)
* 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapper.java
* 
hbase-hadoop-compat/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerSource.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperStub.java
* hbase-server/src/main/javadoc/org/apache/hadoop/hbase/io/hfile/package.html
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/MetricsRegionServerWrapperImpl.java


 Change block cache percentage metrics to be doubles rather than ints
 

 Key: HBASE-11229
 URL: https://issues.apache.org/jira/browse/HBASE-11229
 Project: HBase
  Issue Type: Sub-task
  Components: metrics
Reporter: stack
Assignee: stack
 Fix For: 0.99.0

 Attachments: 11229.txt


 See parent issue.  Small changes in the hit percentage can have large 
 implications, even when movement is inside a single percent: i.e. going from 
 99.11 to 99.87 percent.  As is, percents are ints.  Make them doubles.
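A small illustration of why the int metrics hide exactly the movement this issue cares about; the values 99.11 and 99.87 are taken from the description above:

```java
public class HitRatio {
    public static void main(String[] args) {
        long hits1 = 9911, hits2 = 9987, total = 10000;
        // As ints, both hit percentages collapse to the same value...
        int p1 = (int) (hits1 * 100 / total);
        int p2 = (int) (hits2 * 100 / total);
        // ...while doubles preserve the sub-percent distinction.
        double d1 = hits1 * 100.0 / total;
        double d2 = hits2 * 100.0 / total;
        System.out.println(p1 + " " + p2);   // 99 99
        System.out.println(d1 + " " + d2);   // 99.11 99.87
    }
}
```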





[jira] [Updated] (HBASE-10831) IntegrationTestIngestWithACL is not setting up LoadTestTool correctly

2014-05-22 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-10831?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-10831:
---

Status: Patch Available  (was: Open)

 IntegrationTestIngestWithACL is not setting up LoadTestTool correctly
 -

 Key: HBASE-10831
 URL: https://issues.apache.org/jira/browse/HBASE-10831
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.1
Reporter: Andrew Purtell
Assignee: Vandana Ayyalasomayajula
 Fix For: 0.99.0, 0.98.3

 Attachments: HBASE-10831_98_v1.patch, HBASE-10831_98_v3.patch, 
 HBASE-10831_98_v4.patch, HBASE-10831_trunk_v2.patch, 
 HBASE-10831_trunk_v3.patch, HBASE-10831_trunk_v4.patch


 IntegrationTestIngestWithACL is not setting up LoadTestTool correctly.
 {noformat}
 Tests run: 1, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 601.709 sec 
 <<< FAILURE!
 testIngest(org.apache.hadoop.hbase.IntegrationTestIngestWithACL)  Time 
 elapsed: 601.489 sec  <<< FAILURE!
 java.lang.AssertionError: Failed to initialize LoadTestTool expected:<0> but 
 was:<1>
 at org.junit.Assert.fail(Assert.java:88)
 at org.junit.Assert.failNotEquals(Assert.java:743)
 at org.junit.Assert.assertEquals(Assert.java:118)
 at org.junit.Assert.assertEquals(Assert.java:555)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngest.initTable(IntegrationTestIngest.java:74)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngest.setUpCluster(IntegrationTestIngest.java:69)
 at 
 org.apache.hadoop.hbase.IntegrationTestIngestWithACL.setUpCluster(IntegrationTestIngestWithACL.java:58)
 at 
 org.apache.hadoop.hbase.IntegrationTestBase.setUp(IntegrationTestBase.java:89)
 {noformat}
 Could be related to HBASE-10675?





[jira] [Commented] (HBASE-11026) Provide option to filter out all rows in PerformanceEvaluation tool

2014-05-22 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11026?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006383#comment-14006383
 ] 

Andrew Purtell commented on HBASE-11026:


I'd say fix the YCSB POM rather than move a return-nothing filter to 
hbase-client. It's not useful for a user.

 Provide option to filter out all rows in PerformanceEvaluation tool
 ---

 Key: HBASE-11026
 URL: https://issues.apache.org/jira/browse/HBASE-11026
 Project: HBase
  Issue Type: Improvement
Affects Versions: 0.99.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.99.0, 0.98.2

 Attachments: HBASE-11026_1.patch, HBASE-11026_2.patch, 
 HBASE-11026_4-0.98.patch, HBASE-11026_4.patch


 Performance Evaluation could also be used to check the actual performance of 
 the scans on the Server side by passing Filters that filters out all the 
 rows.  We can create a test filter and add it to the Filter.proto and set 
 this filter based on input params.  Could be helpful in testing.
 If you feel this is not needed pls feel free to close this issue.
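A minimal sketch of the idea, using stand-in types rather than the real `org.apache.hadoop.hbase.filter.Filter` API: a filter that rejects every row forces the scan machinery to do all of its server-side work while returning nothing to the client, which isolates scan cost from RPC transfer cost.

```java
import java.util.ArrayList;
import java.util.List;

public class FilterAllSketch {
    // Stand-in for a row filter; the real HBase API lives in
    // org.apache.hadoop.hbase.filter.Filter.
    interface RowFilter {
        boolean filterRowKey(byte[] rowKey); // true = skip the whole row
    }

    // A filter that drops every row: the scan still reads each row,
    // but nothing is shipped back to the client.
    static final RowFilter FILTER_ALL = rowKey -> true;

    // Simulated server-side scan applying the filter.
    static List<byte[]> scan(List<byte[]> storedRows, RowFilter filter) {
        List<byte[]> results = new ArrayList<>();
        for (byte[] row : storedRows) {
            if (filter.filterRowKey(row)) {
                continue; // filtered out; never sent to the client
            }
            results.add(row);
        }
        return results;
    }

    public static void main(String[] args) {
        List<byte[]> rows = List.of("r1".getBytes(), "r2".getBytes());
        System.out.println(scan(rows, FILTER_ALL).size()); // prints 0
    }
}
```

The real implementation would be a protobuf-registered filter selectable from a PerformanceEvaluation input parameter, as the description suggests.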





[jira] [Commented] (HBASE-11048) Support setting custom priority per client RPC

2014-05-22 Thread Jesse Yates (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11048?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006433#comment-14006433
 ] 

Jesse Yates commented on HBASE-11048:
-

Thanks for taking a look [~apurtell]! I'll commit this evening (right under the 
RC deadline :)), unless there are any objections 

 Support setting custom priority per client RPC
 --

 Key: HBASE-11048
 URL: https://issues.apache.org/jira/browse/HBASE-11048
 Project: HBase
  Issue Type: Improvement
  Components: Client
Affects Versions: 0.99.0, 0.98.2
Reporter: Jesse Yates
Assignee: Jesse Yates
  Labels: Phoenix
 Fix For: 0.99.0, 0.98.3

 Attachments: hbase-11048-0.98-v0.patch, hbase-11048-trunk-v0.patch, 
 hbase-11048-trunk-v1.patch


 Servers have the ability to handle custom rpc priority levels, but currently 
 we are only using it to differentiate META/ROOT updates from replication and 
 other 'priority' updates (as specified by annotation tags per RS method). 
 However, some clients need the ability to create custom handlers (e.g. 
 PHOENIX-938) which can really only be cleanly tied together to requests by 
 the request priority. The disconnect is in that there is no way for the 
 client to overwrite the priority per table - the PayloadCarryingRpcController 
 will always just set priority per ROOT/META and otherwise just use the 
 generic priority.
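A self-contained sketch of the mechanism being asked for, with simplified stand-in classes (the names `RpcController`, `dispatch`, and the QOS constants here are illustrative, not the real HBase API): the client attaches a priority to each call, and the server routes the call to a handler pool based on that priority rather than deriving it solely from whether the target is META/ROOT.

```java
public class PrioritySketch {
    static final int NORMAL_QOS = 0;
    static final int HIGH_QOS = 200; // e.g. META-level priority

    // Stand-in for a PayloadCarryingRpcController-like object that
    // carries a caller-chosen priority instead of only a table-derived one.
    static class RpcController {
        private int priority = NORMAL_QOS;
        void setPriority(int priority) { this.priority = priority; }
        int getPriority() { return priority; }
    }

    // Server side: choose a handler pool from the call's priority.
    static String dispatch(RpcController controller) {
        return controller.getPriority() >= HIGH_QOS
            ? "priority-handlers" : "default-handlers";
    }

    public static void main(String[] args) {
        RpcController c = new RpcController();
        // Custom per-request priority, e.g. for Phoenix index updates.
        c.setPriority(HIGH_QOS);
        System.out.println(dispatch(c)); // prints priority-handlers
    }
}
```

This is the disconnect the issue describes: without a per-call (or per-table) override hook, every non-META request falls into the generic pool.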





[jira] [Commented] (HBASE-11120) Update documentation about major compaction algorithm

2014-05-22 Thread Jean-Daniel Cryans (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006480#comment-14006480
 ] 

Jean-Daniel Cryans commented on HBASE-11120:


On top of Sergey's comments:

This could also use a sentence regarding TTLs: expired cells are simply filtered 
out, and tombstones aren't created:

{noformat}
<title>Compaction and Deletions</title>
{noformat}

Same kind of comment for this para: the old versions are filtered out rather 
than deleted:

{noformat}
+<title>Compaction and Versions</title>
{noformat}

One thing bothers me about this para:

{noformat}
+<para>The compaction algorithms used by HBase have evolved over time. HBase 0.96
+  introduced a new algorithm for compaction file selection. To find out about the old
+  algorithm, see <xref
+    linkend="compaction" />. The rest of this section describes the new algorithm, which
+  was implemented in <link
+    xlink:href="https://issues.apache.org/jira/browse/HBASE-7842">HBASE-7842</link>.
{noformat}

What's written next is much more about how to pick the files that will then be 
considered for compaction than about the actual new policy, which is in 
ExploringCompactionPolicy.java. In other words, the text that follows the quote 
above is fine, but it is not really new in 0.96 via HBASE-7842; it seems to me 
that we should explain what HBASE-7842 actually does.

 Update documentation about major compaction algorithm
 -

 Key: HBASE-11120
 URL: https://issues.apache.org/jira/browse/HBASE-11120
 Project: HBase
  Issue Type: Bug
  Components: Compaction, documentation
Affects Versions: 0.98.2
Reporter: Misty Stanley-Jones
Assignee: Misty Stanley-Jones
 Attachments: HBASE-11120.patch


 [14:20:38]  jdcryans seems that there's 
 http://hbase.apache.org/book.html#compaction and 
 http://hbase.apache.org/book.html#managed.compactions
 [14:20:56]  jdcryans the latter doesn't say much, except that you 
 should manage them
 [14:21:44]  jdcryans the former gives a good description of the 
 _old_ selection algo
 [14:45:25]  jdcryans this is the new selection algo since C5 / 
 0.96.0: https://issues.apache.org/jira/browse/HBASE-7842





[jira] [Commented] (HBASE-11165) Scaling so cluster can host 1M regions and beyond (50M regions?)

2014-05-22 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006485#comment-14006485
 ] 

Mikhail Antonov commented on HBASE-11165:
-

bq. Oh I see that looks like a ZK API limitation then, we can't do partial 
enumeration. So this is an opportunity for an alternative Mikhail Antonov
Yep (plus, in this particular case, it may also be a misconfiguration, or the 
default ZooKeeper configuration. Btw, were you able to overcome this error 
with the settings described at the link, if you had a chance to try?)

 Scaling so cluster can host 1M regions and beyond (50M regions?)
 

 Key: HBASE-11165
 URL: https://issues.apache.org/jira/browse/HBASE-11165
 Project: HBase
  Issue Type: Brainstorming
Reporter: stack
 Attachments: HBASE-11165.zip


 This discussion issue comes out of Co-locate Meta And Master HBASE-10569 
 and comments on the doc posted there.
 A user -- our Francis Liu -- needs to be able to scale a cluster to do 1M 
 regions maybe even 50M later.  This issue is about discussing how we will do 
 that (or if not 50M on a cluster, how otherwise we can attain same end).
 More detail to follow.





[jira] [Commented] (HBASE-11016) Remove Filter#filterRow(List)

2014-05-22 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11016?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006492#comment-14006492
 ] 

stack commented on HBASE-11016:
---

So we remove the filterRow batch but not the individual filterRow call?

Is this issue just addressing a '@deprecate' and not the problems raised over 
in issues such as HBASE-10965 and HBASE-11093?

 Remove Filter#filterRow(List)
 -

 Key: HBASE-11016
 URL: https://issues.apache.org/jira/browse/HBASE-11016
 Project: HBase
  Issue Type: Task
  Components: Filters
Reporter: Ted Yu
Assignee: Ted Yu
Priority: Minor
 Fix For: 0.99.0

 Attachments: 11016-v1.txt


 0.96+ the filterRow(List) method is deprecated:
 {code}
 * <p>WARNING: please do not override this method.  Instead override {@link 
 #filterRowCells(List)}.
 * This is for transition from 0.94 -> 0.96
 **/
   @Deprecated
   abstract public void filterRow(List<KeyValue> kvs) throws IOException;
 {code}
 This method should be removed from Filter classes for 1.0
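For illustration, a minimal sketch of the post-removal shape of the API, using simplified stand-in types rather than the real `org.apache.hadoop.hbase.Cell`/`KeyValue` classes: after 1.0, only the Cell-based hook remains, so subclasses override `filterRowCells` directly.

```java
import java.util.ArrayList;
import java.util.List;

public class FilterMigrationSketch {
    // Stand-in for org.apache.hadoop.hbase.Cell; not the real HBase type.
    interface Cell { byte[] value(); }

    // 1.0-style filter: the deprecated filterRow(List<KeyValue>) bridge,
    // kept only for the 0.94 -> 0.96 transition, is gone. Subclasses must
    // override the Cell-based filterRowCells hook instead.
    abstract static class Filter {
        abstract void filterRowCells(List<Cell> cells);
    }

    // Example subclass: drop every cell with an empty value.
    static class NonEmptyValueFilter extends Filter {
        @Override
        void filterRowCells(List<Cell> cells) {
            cells.removeIf(c -> c.value().length == 0);
        }
    }

    public static void main(String[] args) {
        List<Cell> cells = new ArrayList<>();
        cells.add(() -> new byte[0]);       // empty value, filtered out
        cells.add(() -> new byte[] { 1 });  // kept
        new NonEmptyValueFilter().filterRowCells(cells);
        System.out.println(cells.size()); // prints 1
    }
}
```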





[jira] [Created] (HBASE-11237) Bulk load initiated by user other than hbase fails

2014-05-22 Thread Dima Spivak (JIRA)
Dima Spivak created HBASE-11237:
---

 Summary: Bulk load initiated by user other than hbase fails
 Key: HBASE-11237
 URL: https://issues.apache.org/jira/browse/HBASE-11237
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.1
Reporter: Dima Spivak
Assignee: Jimmy Xiang
Priority: Critical


Running TestLoadIncrementalHFiles and TestHFileOutputFormat as a properly 
kinit'd HBase superuser who isn't hbase began to fail last month after a 
patch to fix HBASE-10902 was committed to trunk.





[jira] [Updated] (HBASE-11237) Bulk load initiated by user other than hbase fails

2014-05-22 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-11237:


Attachment: hbase-11237.patch

Attached a patch that doesn't override the token in case it is for the same 
FileSystem. If the token is overridden and the user is different, we may not 
be able to rename files any more.
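The core of that fix can be sketched in a few lines of self-contained Java (the `Token` record and `addTokenIfAbsent` helper here are stand-ins, not the actual Hadoop `Credentials` API): tokens are keyed by the FileSystem they cover, and an existing token is never replaced.

```java
import java.util.HashMap;
import java.util.Map;

public class TokenSketch {
    // Stand-in for a Hadoop delegation token, keyed by the FileSystem
    // service it covers and recording which user obtained it.
    record Token(String service, String owner) {}

    // Only add a token if none is already held for that FileSystem service.
    // Replacing an existing token obtained by a different user could leave
    // the bulk load unable to rename files later on.
    static void addTokenIfAbsent(Map<String, Token> credentials, Token token) {
        credentials.putIfAbsent(token.service(), token);
    }

    public static void main(String[] args) {
        Map<String, Token> creds = new HashMap<>();
        addTokenIfAbsent(creds, new Token("hdfs://nn:8020", "alice"));
        // Same FileSystem: this second token must NOT clobber the first.
        addTokenIfAbsent(creds, new Token("hdfs://nn:8020", "hbase"));
        System.out.println(creds.get("hdfs://nn:8020").owner()); // prints alice
    }
}
```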

 Bulk load initiated by user other than hbase fails
 --

 Key: HBASE-11237
 URL: https://issues.apache.org/jira/browse/HBASE-11237
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.1
Reporter: Dima Spivak
Assignee: Jimmy Xiang
Priority: Critical
 Attachments: hbase-11237.patch


 Running TestLoadIncrementalHFiles and TestHFileOutputFormat as a properly 
 kinit'd HBase superuser who isn't hbase began to fail last month after a 
 patch to fix HBASE-10902 was committed to trunk.





[jira] [Updated] (HBASE-11237) Bulk load initiated by user other than hbase fails

2014-05-22 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11237?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-11237:


Status: Patch Available  (was: Open)

 Bulk load initiated by user other than hbase fails
 --

 Key: HBASE-11237
 URL: https://issues.apache.org/jira/browse/HBASE-11237
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.1
Reporter: Dima Spivak
Assignee: Jimmy Xiang
Priority: Critical
 Attachments: hbase-11237.patch


 Running TestLoadIncrementalHFiles and TestHFileOutputFormat as a properly 
 kinit'd HBase superuser who isn't hbase began to fail last month after a 
 patch to fix HBASE-10902 was committed to trunk.





[jira] [Commented] (HBASE-11165) Scaling so cluster can host 1M regions and beyond (50M regions?)

2014-05-22 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006508#comment-14006508
 ] 

Enis Soztutar commented on HBASE-11165:
---

One other thing to consider is that we want to put even more data per region 
entry in meta. First, we would like to keep a history of RS assignments for the 
region in meta (this is for debugging + the new assignment design). Second, ideally 
we would want to keep the region files in meta as well and get away without 
doing reference files, etc. When these can go in will ultimately depend on the 
new master design, of course. So I would say the current size calculations 
might not be relevant in the longer term.

 Scaling so cluster can host 1M regions and beyond (50M regions?)
 

 Key: HBASE-11165
 URL: https://issues.apache.org/jira/browse/HBASE-11165
 Project: HBase
  Issue Type: Brainstorming
Reporter: stack
 Attachments: HBASE-11165.zip


 This discussion issue comes out of Co-locate Meta And Master HBASE-10569 
 and comments on the doc posted there.
 A user -- our Francis Liu -- needs to be able to scale a cluster to do 1M 
 regions maybe even 50M later.  This issue is about discussing how we will do 
 that (or if not 50M on a cluster, how otherwise we can attain same end).
 More detail to follow.





[jira] [Commented] (HBASE-11108) Split ZKTable into interface and implementation

2014-05-22 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006509#comment-14006509
 ] 

Mikhail Antonov commented on HBASE-11108:
-

If that looks good ([~stack], any feedback?), it'd be great to have it 
committed maybe this week (i.e. before the merge of HBASE-10070 to trunk), as 
the patch, given its size, may rot.

Also, there are a few other patches hanging around that will need to be 
rebased/merged against the changes made in this one.

 Split ZKTable into interface and implementation
 ---

 Key: HBASE-11108
 URL: https://issues.apache.org/jira/browse/HBASE-11108
 Project: HBase
  Issue Type: Sub-task
  Components: Consensus, Zookeeper
Affects Versions: 0.99.0
Reporter: Konstantin Boudnik
Assignee: Mikhail Antonov
 Attachments: HBASE-11108.patch, HBASE-11108.patch, HBASE-11108.patch, 
 HBASE-11108.patch, HBASE-11108.patch, HBASE-11108.patch, HBASE-11108.patch, 
 HBASE-11108.patch


 In HBASE-11071 we are trying to split admin handlers away from ZK. However, a 
 ZKTable instance is being used in multiple places, hence it would be 
 beneficial to hide its implementation behind a well defined interface.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HBASE-11237) Bulk load initiated by user other than hbase fails

2014-05-22 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11237?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006516#comment-14006516
 ] 

Matteo Bertozzi commented on HBASE-11237:
-

+1 looks good to me

 Bulk load initiated by user other than hbase fails
 --

 Key: HBASE-11237
 URL: https://issues.apache.org/jira/browse/HBASE-11237
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.1
Reporter: Dima Spivak
Assignee: Jimmy Xiang
Priority: Critical
 Attachments: hbase-11237.patch


 Running TestLoadIncrementalHFiles and TestHFileOutputFormat as a properly 
 kinit'd HBase superuser who isn't hbase began to fail last month after a 
 patch to fix HBASE-10902 was committed to trunk.





[jira] [Commented] (HBASE-10761) StochasticLoadBalancer still uses SimpleLoadBalancer's needBalance logic

2014-05-22 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006588#comment-14006588
 ] 

Elliott Clark commented on HBASE-10761:
---

The process of computing a balance can be very expensive.  We should be very 
careful to make sure that we can't get into a situation where the balancer does 
a full pass through all options, determines that nothing needs to move, and 
then loops through again next time.

 StochasticLoadBalancer still uses SimpleLoadBalancer's needBalance logic
 

 Key: HBASE-10761
 URL: https://issues.apache.org/jira/browse/HBASE-10761
 Project: HBase
  Issue Type: Bug
  Components: Balancer
Affects Versions: 0.98.0
Reporter: Victor Xu
 Fix For: 0.99.0

 Attachments: HBASE_10761.patch, HBASE_10761_v2.patch


 StochasticLoadBalancer has become the default balancer since 0.98.0.  But its 
 balanceCluster method still uses BaseLoadBalancer.needBalance(), which was 
 originally designed for SimpleLoadBalancer and is based purely on the number 
 of regions on each regionserver.
 This can cause the following problem: when the cluster has a Hot Spot Region, 
 the balance process may not be triggered, because the numbers of regions on 
 the RegionServers are already even.
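The mismatch the description points at can be sketched in self-contained Java (stand-in types; the real logic lives in BaseLoadBalancer/StochasticLoadBalancer): a count-based check sees a perfectly even cluster and skips balancing, while a cost-based check that also looks at load skew still fires.

```java
public class NeedsBalanceSketch {
    // Stand-in for per-server load: region count alone misses hot-spotting.
    record ServerLoad(int regions, double requestRate) {}

    // SimpleLoadBalancer-style check: only region counts matter.
    static boolean needsBalanceByCount(ServerLoad[] servers) {
        int min = Integer.MAX_VALUE, max = 0;
        for (ServerLoad s : servers) {
            min = Math.min(min, s.regions());
            max = Math.max(max, s.regions());
        }
        return max - min > 1;
    }

    // In the spirit of a cost-based check: compare load skew (here just
    // request-rate skew) against a threshold, so a hot-spotted but
    // count-even cluster still triggers balancing.
    static boolean needsBalanceByCost(ServerLoad[] servers, double threshold) {
        double total = 0, max = 0;
        for (ServerLoad s : servers) {
            total += s.requestRate();
            max = Math.max(max, s.requestRate());
        }
        double mean = total / servers.length;
        return (max - mean) / Math.max(mean, 1.0) > threshold;
    }

    public static void main(String[] args) {
        ServerLoad[] cluster = {
            new ServerLoad(10, 9000.0), // hot server, same region count
            new ServerLoad(10, 100.0),
            new ServerLoad(10, 100.0),
        };
        System.out.println(needsBalanceByCount(cluster));     // prints false
        System.out.println(needsBalanceByCost(cluster, 0.5)); // prints true
    }
}
```

As Elliott's comment notes, the cost-based check also needs a guard so a pass that moves nothing doesn't simply re-trigger on the next run.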





[jira] [Updated] (HBASE-11108) Split ZKTable into interface and implementation

2014-05-22 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-11108:
--

   Resolution: Fixed
Fix Version/s: 0.99.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to master branch.  Thanks for the nice patch, [~mantonov].  Sorry for 
the delay (forgot about it for a second there).

 Split ZKTable into interface and implementation
 ---

 Key: HBASE-11108
 URL: https://issues.apache.org/jira/browse/HBASE-11108
 Project: HBase
  Issue Type: Sub-task
  Components: Consensus, Zookeeper
Affects Versions: 0.99.0
Reporter: Konstantin Boudnik
Assignee: Mikhail Antonov
 Fix For: 0.99.0

 Attachments: HBASE-11108.patch, HBASE-11108.patch, HBASE-11108.patch, 
 HBASE-11108.patch, HBASE-11108.patch, HBASE-11108.patch, HBASE-11108.patch, 
 HBASE-11108.patch


 In HBASE-11071 we are trying to split admin handlers away from ZK. However, a 
 ZKTable instance is being used in multiple places, hence it would be 
 beneficial to hide its implementation behind a well defined interface.





[jira] [Commented] (HBASE-11108) Split ZKTable into interface and implementation

2014-05-22 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006596#comment-14006596
 ] 

Mikhail Antonov commented on HBASE-11108:
-

Thanks [~stack] and [~enis] for reviews! 

 Split ZKTable into interface and implementation
 ---

 Key: HBASE-11108
 URL: https://issues.apache.org/jira/browse/HBASE-11108
 Project: HBase
  Issue Type: Sub-task
  Components: Consensus, Zookeeper
Affects Versions: 0.99.0
Reporter: Konstantin Boudnik
Assignee: Mikhail Antonov
 Fix For: 0.99.0

 Attachments: HBASE-11108.patch, HBASE-11108.patch, HBASE-11108.patch, 
 HBASE-11108.patch, HBASE-11108.patch, HBASE-11108.patch, HBASE-11108.patch, 
 HBASE-11108.patch


 In HBASE-11071 we are trying to split admin handlers away from ZK. However, a 
 ZKTable instance is being used in multiple places, hence it would be 
 beneficial to hide its implementation behind a well defined interface.




