[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762772#comment-13762772
 ] 

Hudson commented on HBASE-8930:
---

FAILURE: Integrated in HBase-0.94 #1145 (See 
[https://builds.apache.org/job/HBase-0.94/1145/])
HBASE-8930 REAPPLY with testfix (larsh: rev 1521356)
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/ColumnTracker.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/ExplicitColumnTracker.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
* 
/hbase/branches/0.94/src/main/java/org/apache/hadoop/hbase/regionserver/ScanWildcardColumnTracker.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/filter/TestInvocationRecordFilter.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestExplicitColumnTracker.java
* 
/hbase/branches/0.94/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanWildcardColumnTracker.java


 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Fix For: 0.98.0, 0.94.12, 0.96.1

 Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
 0.94-HBASE-8930-rev2.patch, 0.94-Independent-Test-Execution.patch, 
 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 
 0.96-HBASE-8930-rev2.patch, 0.96-HBASE-8930-rev3.patch, 
 0.96-trunk-Independent-Test-Execution.patch, 8930-0.94.txt, HBASE-8930.patch, 
 HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
 HBASE-8930-rev4.patch, HBASE-8930-rev5.patch, HBASE-8930-rev6.patch


 1- Fill a row with some columns
 2- Get the row, requesting fewer columns than were persisted - use a filter to 
 print the KVs
 3- The filter prints columns that were not requested
 The filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
 and prints each KV's qualifier
 SUFFIX_0 = 0
 SUFFIX_1 = 1
 SUFFIX_4 = 4
 SUFFIX_6 = 6
 P= Persisted
 R= Requested
 E= Evaluated
 X= Returned
 | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 | ...
 |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      | ...
 |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      | ...
 |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      | ...
 |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      | ...
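 The gap the table shows (5601 persisted but not requested, yet evaluated) is an ordering problem: the filter ran before the requested-column check. Below is a minimal plain-Java sketch of the intended gating; the class and method names are hypothetical, not the actual ScanQueryMatcher or ExplicitColumnTracker code.

```java
// Hypothetical sketch: gate filter evaluation on the set of requested columns,
// so persisted-but-not-requested qualifiers (like 5601 above) are never
// handed to the filter at all.
import java.util.Set;
import java.util.TreeSet;

public class RequestedColumnGate {
    private final Set<String> requested = new TreeSet<>();

    public RequestedColumnGate(String... qualifiers) {
        for (String q : qualifiers) {
            requested.add(q);
        }
    }

    /** True only when the scan actually asked for this qualifier. */
    public boolean shouldEvaluate(String qualifier) {
        return requested.contains(qualifier);
    }
}
```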
 {code:title=ExtraColumnTest.java|borderStyle=solid}
 @Test
 public void testFilter() throws Exception {
 Configuration config = HBaseConfiguration.create();
 config.set("hbase.zookeeper.quorum", myZK);
 HTable hTable = new HTable(config, "testTable");
 byte[] cf = Bytes.toBytes("cf");
 byte[] row = Bytes.toBytes("row");
 byte[] col1 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_1));
 byte[] col2 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_1));
 byte[] col3 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 560, (byte) SUFFIX_1));
 byte[] col4 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 561, (byte) SUFFIX_1));
 byte[] col5 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 562, (byte) SUFFIX_1));
 byte[] col6 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 563, (byte) SUFFIX_1));
 byte[] col1g = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_6));
 byte[] col2g = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_6));
 byte[] col1v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_4));
 byte[] col2v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_4));
 byte[] col3v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 560, (byte) SUFFIX_4));
 byte[] col4v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 561, (byte) SUFFIX_4));
 byte[] col5v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 562, (byte) SUFFIX_4));
 byte[] col6v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 563, (byte) SUFFIX_4));
 // === INSERTION =//
 Put put = new Put(row);
 

[jira] [Commented] (HBASE-8884) Pluggable RpcScheduler

2013-09-10 Thread Chao Shi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762774#comment-13762774
 ] 

Chao Shi commented on HBASE-8884:
-

stack, could you please explain a little bit more about pooling buffers across 
requests? I don't quite understand. In fact, the very first rationale for us 
to introduce a pluggable RpcScheduler is that we want to isolate read and write 
ops, so we can simply write an RpcScheduler with two thread-pools. My case is 
pretty easy, and I'm interested to hear about your case.
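For what it's worth, the read/write isolation idea sketches down to two executors keyed by request type. This is illustrative plain Java only; TwoPoolScheduler and dispatch are made-up names, not the RpcScheduler interface from the patch.

```java
// Illustrative sketch (not the actual HBase RpcScheduler API): isolate read
// and write requests into separate executors so a flood of one type cannot
// starve the other.
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class TwoPoolScheduler {
    private final ExecutorService readPool;
    private final ExecutorService writePool;

    public TwoPoolScheduler(int readThreads, int writeThreads) {
        this.readPool = Executors.newFixedThreadPool(readThreads);
        this.writePool = Executors.newFixedThreadPool(writeThreads);
    }

    /** Dispatch a request to the pool matching its type. */
    public void dispatch(Runnable request, boolean isRead) {
        (isRead ? readPool : writePool).execute(request);
    }

    /** Shut both pools down, waiting up to timeoutSeconds for each. */
    public boolean stop(long timeoutSeconds) {
        readPool.shutdown();
        writePool.shutdown();
        try {
            return readPool.awaitTermination(timeoutSeconds, TimeUnit.SECONDS)
                && writePool.awaitTermination(timeoutSeconds, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            return false;
        }
    }
}
```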

 Pluggable RpcScheduler
 --

 Key: HBASE-8884
 URL: https://issues.apache.org/jira/browse/HBASE-8884
 Project: HBase
  Issue Type: Improvement
  Components: IPC/RPC
Reporter: Chao Shi
Assignee: Chao Shi
 Fix For: 0.98.0

 Attachments: hbase-8884.patch, hbase-8884-v2.patch, 
 hbase-8884-v3.patch, hbase-8884-v4.patch, hbase-8884-v5.patch, 
 hbase-8884-v6.patch, hbase-8884-v7.patch, hbase-8884-v8.patch


 Today, the RPC scheduling mechanism is pretty simple: it executes requests in 
 isolated thread-pools based on their priority. In the current implementation, 
 all normal get/put requests are using the same pool. We'd like to add some 
 per-user or per-region level isolation, so that a misbehaved user/region will 
 not saturate the thread-pool and cause DoS to others easily. The idea is 
 similar to FairScheduler in MR. The current scheduling code is not standalone 
 and is mixed with others (Connection#processRequest). This issue is the first 
 step: extract it to an interface, so that people are free to write and test 
 their own implementations.
 This patch doesn't make it completely pluggable yet, as some parameters are 
 passed via the constructor. This is because HMaster and HRegionServer both use 
 RpcServer and they have different thread-pool size configs. Let me know if you 
 have a solution to this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9482) Do not enforce secure Hadoop for secure HBase

2013-09-10 Thread Aditya Kishore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Kishore updated HBASE-9482:
--

Fix Version/s: 0.96.0
Affects Version/s: 0.95.2
   0.94.11
   Status: Patch Available  (was: Open)

 Do not enforce secure Hadoop for secure HBase
 -

 Key: HBASE-9482
 URL: https://issues.apache.org/jira/browse/HBASE-9482
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.94.11, 0.95.2
Reporter: Aditya Kishore
Assignee: Aditya Kishore
  Labels: security
 Fix For: 0.96.0

 Attachments: HBASE-9482-0.94.patch, HBASE-9482.patch


 We should recommend and not enforce secure Hadoop underneath as a requirement 
 to run secure HBase.
 Few of our customers have HBase clusters which expose only HBase services to 
 outside the physical network and no other services (including ssh) are 
 accessible from outside of such cluster.
 However they are forced to setup secure Hadoop and incur the penalty of 
 security overhead at filesystem layer even if they do not need to.
 The following code tests for both secure HBase and secure Hadoop.
 {code:title=org.apache.hadoop.hbase.security.User|borderStyle=solid}
   /**
    * Returns whether or not secure authentication is enabled for HBase.  Note that
    * HBase security requires HDFS security to provide any guarantees, so this requires that
    * both <code>hbase.security.authentication</code> and <code>hadoop.security.authentication</code>
    * are set to <code>kerberos</code>.
    */
   public static boolean isHBaseSecurityEnabled(Configuration conf) {
     return "kerberos".equalsIgnoreCase(conf.get(HBASE_SECURITY_CONF_KEY)) &&
         "kerberos".equalsIgnoreCase(
             conf.get(CommonConfigurationKeys.HADOOP_SECURITY_AUTHENTICATION));
   }
 {code}
 What is worse is that if {{hadoop.security.authentication}} is not set to 
 {{kerberos}} (undocumented at http://hbase.apache.org/book/security.html), 
 all other configurations have no impact and HBase RPCs silently switch back to 
 unsecured mode.
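 A relaxed check along the proposed lines could look like the sketch below, with java.util.Properties standing in for Hadoop's Configuration so the snippet is self-contained. This is an illustration only, not the actual HBASE-9482 patch: it treats the HBase setting as authoritative and merely warns when the Hadoop layer underneath is insecure.

```java
// Illustrative sketch only (not the real HBase patch): check the HBase setting
// alone, and warn rather than fail when Hadoop underneath is not secure.
import java.util.Properties;

public class SecurityCheck {
    static final String HBASE_KEY = "hbase.security.authentication";
    static final String HADOOP_KEY = "hadoop.security.authentication";

    /** Returns true when HBase itself is configured for kerberos; only warns
     *  (rather than returning false) when Hadoop underneath is not secure. */
    public static boolean isHBaseSecurityEnabled(Properties conf) {
        boolean hbaseSecure = "kerberos".equalsIgnoreCase(conf.getProperty(HBASE_KEY));
        boolean hadoopSecure = "kerberos".equalsIgnoreCase(conf.getProperty(HADOOP_KEY));
        if (hbaseSecure && !hadoopSecure) {
            System.err.println("WARN: HBase is secure but Hadoop is not; "
                + "HDFS-level guarantees do not apply.");
        }
        return hbaseSecure;
    }
}
```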



[jira] [Updated] (HBASE-9482) Do not enforce secure Hadoop for secure HBase

2013-09-10 Thread Aditya Kishore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Kishore updated HBASE-9482:
--

Attachment: HBASE-9482.patch

Patch for trunk.

 Do not enforce secure Hadoop for secure HBase
 -

 Key: HBASE-9482
 URL: https://issues.apache.org/jira/browse/HBASE-9482
 Project: HBase
  Issue Type: Bug
  Components: security
Reporter: Aditya Kishore
Assignee: Aditya Kishore
  Labels: security
 Attachments: HBASE-9482-0.94.patch, HBASE-9482.patch


 We should recommend and not enforce secure Hadoop underneath as a requirement 
 to run secure HBase.
 Few of our customers have HBase clusters which expose only HBase services to 
 outside the physical network and no other services (including ssh) are 
 accessible from outside of such cluster.
 However they are forced to setup secure Hadoop and incur the penalty of 
 security overhead at filesystem layer even if they do not need to.
 The following code tests for both secure HBase and secure Hadoop.
 {code:title=org.apache.hadoop.hbase.security.User|borderStyle=solid}
   /**
    * Returns whether or not secure authentication is enabled for HBase.  Note that
    * HBase security requires HDFS security to provide any guarantees, so this requires that
    * both <code>hbase.security.authentication</code> and <code>hadoop.security.authentication</code>
    * are set to <code>kerberos</code>.
    */
   public static boolean isHBaseSecurityEnabled(Configuration conf) {
     return "kerberos".equalsIgnoreCase(conf.get(HBASE_SECURITY_CONF_KEY)) &&
         "kerberos".equalsIgnoreCase(
             conf.get(CommonConfigurationKeys.HADOOP_SECURITY_AUTHENTICATION));
   }
 {code}
 What is worse is that if {{hadoop.security.authentication}} is not set to 
 {{kerberos}} (undocumented at http://hbase.apache.org/book/security.html), 
 all other configurations have no impact and HBase RPCs silently switch back to 
 unsecured mode.



[jira] [Commented] (HBASE-9482) Do not enforce secure Hadoop for secure HBase

2013-09-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762799#comment-13762799
 ] 

Hadoop QA commented on HBASE-9482:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12602297/HBASE-9482.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7112//console

This message is automatically generated.

 Do not enforce secure Hadoop for secure HBase
 -

 Key: HBASE-9482
 URL: https://issues.apache.org/jira/browse/HBASE-9482
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.95.2, 0.94.11
Reporter: Aditya Kishore
Assignee: Aditya Kishore
  Labels: security
 Fix For: 0.96.0

 Attachments: HBASE-9482-0.94.patch, HBASE-9482.patch


 We should recommend and not enforce secure Hadoop underneath as a requirement 
 to run secure HBase.
 Few of our customers have HBase clusters which expose only HBase services to 
 outside the physical network and no other services (including ssh) are 
 accessible from outside of such cluster.
 However they are forced to setup secure Hadoop and incur the penalty of 
 security overhead at filesystem layer even if they do not need to.
 The following code tests for both secure HBase and secure Hadoop.
 {code:title=org.apache.hadoop.hbase.security.User|borderStyle=solid}
   /**
    * Returns whether or not secure authentication is enabled for HBase.  Note that
    * HBase security requires HDFS security to provide any guarantees, so this requires that
    * both <code>hbase.security.authentication</code> and <code>hadoop.security.authentication</code>
    * are set to <code>kerberos</code>.
    */
   public static boolean isHBaseSecurityEnabled(Configuration conf) {
     return "kerberos".equalsIgnoreCase(conf.get(HBASE_SECURITY_CONF_KEY)) &&
         "kerberos".equalsIgnoreCase(
             conf.get(CommonConfigurationKeys.HADOOP_SECURITY_AUTHENTICATION));
   }
 {code}
 What is worse is that if {{hadoop.security.authentication}} is not set to 
 {{kerberos}} (undocumented at http://hbase.apache.org/book/security.html), 
 all other configurations have no impact and HBase RPCs silently switch back to 
 unsecured mode.



[jira] [Commented] (HBASE-9488) Improve performance for small scan

2013-09-10 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762800#comment-13762800
 ] 

chunhui shen commented on HBASE-9488:
-

bq. we instead pass one arg 'boolean shortScan'.
In the method HStore#getScanners,
{code}
storeFilesToScan = this.storeEngine.getStoreFileManager().getFilesForScanOrGet(isGet, startRow, stopRow);
{code}
The arg 'isGet' is already used there, thus we need a new arg to specify whether to use pread.

bq.Is this caching location? Will we cache a location across changes? i.e. 
changes in location for the HRegionInfo?
Sure, it uses the current client region cache mechanism.

bq. Does this have to be public? +public class ClientSmallScanner extends 
AbstractClientScanner {
The existing ClientScanner is also public; the new class keeps the same visibility.

bq. You should instead say that the amount of data should be small and inside 
the one region.
If the scan range is within one data block, it could be considered a small 
scan.


bq. Should the Scan check that the stoprow is inside a single region and fail if 
not?
For now, I hope it is controlled by the user; e.g. if the scan crosses multiple 
regions but only scans two rows, a small scan would still be better.


Improved the javadoc of Scan#small in patch V2.

review board:

https://reviews.apache.org/r/14059/



 Improve performance for small scan
 --

 Key: HBASE-9488
 URL: https://issues.apache.org/jira/browse/HBASE-9488
 Project: HBase
  Issue Type: Improvement
  Components: Client, Performance, Scanners
Reporter: chunhui shen
Assignee: chunhui shen
 Attachments: HBASE-9488-trunk.patch, test results.jpg


 Now, one scan operation makes at least 3 RPC calls:
 openScanner();
 next();
 closeScanner();
 I think we could reduce the RPC calls to one for small scans to get better 
 performance.
 Also, using pread is better than seek+read for small scans (for this point, see 
 more on HBASE-7266).
 The patch implements such a small scan, with the performance test taken as 
 follows:
 a.Environment:
 patched on 0.94 version
 one regionserver; 
 one client with 50 concurrent threads;
 KV size:50/100;
 100% LRU cache hit ratio;
 Random start row of scan
 b.Results:
 See the picture attachment
 *Usage:*
 Scan scan = new Scan(startRow,stopRow);
 scan.setSmall(true);
 ResultScanner scanner = table.getScanner(scan);
 Set the new 'small' attribute to true for the scan; everything else is the same.
  



[jira] [Updated] (HBASE-9488) Improve performance for small scan

2013-09-10 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-9488:


Attachment: HBASE-9488-trunkV2.patch

 Improve performance for small scan
 --

 Key: HBASE-9488
 URL: https://issues.apache.org/jira/browse/HBASE-9488
 Project: HBase
  Issue Type: Improvement
  Components: Client, Performance, Scanners
Reporter: chunhui shen
Assignee: chunhui shen
 Attachments: HBASE-9488-trunk.patch, HBASE-9488-trunkV2.patch, test 
 results.jpg


 Now, one scan operation makes at least 3 RPC calls:
 openScanner();
 next();
 closeScanner();
 I think we could reduce the RPC calls to one for small scans to get better 
 performance.
 Also, using pread is better than seek+read for small scans (for this point, see 
 more on HBASE-7266).
 The patch implements such a small scan, with the performance test taken as 
 follows:
 a.Environment:
 patched on 0.94 version
 one regionserver; 
 one client with 50 concurrent threads;
 KV size:50/100;
 100% LRU cache hit ratio;
 Random start row of scan
 b.Results:
 See the picture attachment
 *Usage:*
 Scan scan = new Scan(startRow,stopRow);
 scan.setSmall(true);
 ResultScanner scanner = table.getScanner(scan);
 Set the new 'small' attribute to true for the scan; everything else is the same.
  



[jira] [Updated] (HBASE-9488) Improve performance for small scan

2013-09-10 Thread chunhui shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

chunhui shen updated HBASE-9488:


Description: 
review board:

https://reviews.apache.org/r/14059/


Now, one scan operation makes at least 3 RPC calls:
openScanner();
next();
closeScanner();

I think we could reduce the RPC calls to one for small scans to get better 
performance.

Also, using pread is better than seek+read for small scans (for this point, see 
more on HBASE-7266).


The patch implements such a small scan, with the performance test taken as 
follows:

a.Environment:
patched on 0.94 version
one regionserver; 
one client with 50 concurrent threads;
KV size:50/100;
100% LRU cache hit ratio;
Random start row of scan


b.Results:
See the picture attachment


*Usage:*
Scan scan = new Scan(startRow,stopRow);
scan.setSmall(true);
ResultScanner scanner = table.getScanner(scan);

Set the new 'small' attribute to true for the scan; everything else is the same.
 

  was:
Now, one scan operation makes at least 3 RPC calls:
openScanner();
next();
closeScanner();

I think we could reduce the RPC calls to one for small scans to get better 
performance.

Also, using pread is better than seek+read for small scans (for this point, see 
more on HBASE-7266).


The patch implements such a small scan, with the performance test taken as 
follows:

a.Environment:
patched on 0.94 version
one regionserver; 
one client with 50 concurrent threads;
KV size:50/100;
100% LRU cache hit ratio;
Random start row of scan


b.Results:
See the picture attachment


*Usage:*
Scan scan = new Scan(startRow,stopRow);
scan.setSmall(true);
ResultScanner scanner = table.getScanner(scan);

Set the new 'small' attribute to true for the scan; everything else is the same.
 


 Improve performance for small scan
 --

 Key: HBASE-9488
 URL: https://issues.apache.org/jira/browse/HBASE-9488
 Project: HBase
  Issue Type: Improvement
  Components: Client, Performance, Scanners
Reporter: chunhui shen
Assignee: chunhui shen
 Attachments: HBASE-9488-trunk.patch, HBASE-9488-trunkV2.patch, test 
 results.jpg


 review board:
 https://reviews.apache.org/r/14059/
 Now, one scan operation makes at least 3 RPC calls:
 openScanner();
 next();
 closeScanner();
 I think we could reduce the RPC calls to one for small scans to get better 
 performance.
 Also, using pread is better than seek+read for small scans (for this point, see 
 more on HBASE-7266).
 The patch implements such a small scan, with the performance test taken as 
 follows:
 a.Environment:
 patched on 0.94 version
 one regionserver; 
 one client with 50 concurrent threads;
 KV size:50/100;
 100% LRU cache hit ratio;
 Random start row of scan
 b.Results:
 See the picture attachment
 *Usage:*
 Scan scan = new Scan(startRow,stopRow);
 scan.setSmall(true);
 ResultScanner scanner = table.getScanner(scan);
 Set the new 'small' attribute to true for the scan; everything else is the same.
  



[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762803#comment-13762803
 ] 

Hudson commented on HBASE-8930:
---

FAILURE: Integrated in HBase-TRUNK #4484 (See 
[https://builds.apache.org/job/HBase-TRUNK/4484/])
HBASE-8930 REAPPLY with testfix (larsh: rev 1521354)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ColumnTracker.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ExplicitColumnTracker.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanWildcardColumnTracker.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestInvocationRecordFilter.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestExplicitColumnTracker.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanWildcardColumnTracker.java


 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Fix For: 0.98.0, 0.94.12, 0.96.1

 Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
 0.94-HBASE-8930-rev2.patch, 0.94-Independent-Test-Execution.patch, 
 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 
 0.96-HBASE-8930-rev2.patch, 0.96-HBASE-8930-rev3.patch, 
 0.96-trunk-Independent-Test-Execution.patch, 8930-0.94.txt, HBASE-8930.patch, 
 HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
 HBASE-8930-rev4.patch, HBASE-8930-rev5.patch, HBASE-8930-rev6.patch


 1- Fill a row with some columns
 2- Get the row, requesting fewer columns than were persisted - use a filter to 
 print the KVs
 3- The filter prints columns that were not requested
 The filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
 and prints each KV's qualifier
 SUFFIX_0 = 0
 SUFFIX_1 = 1
 SUFFIX_4 = 4
 SUFFIX_6 = 6
 P= Persisted
 R= Requested
 E= Evaluated
 X= Returned
 | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 | ...
 |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      | ...
 |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      | ...
 |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      | ...
 |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      | ...
 {code:title=ExtraColumnTest.java|borderStyle=solid}
 @Test
 public void testFilter() throws Exception {
 Configuration config = HBaseConfiguration.create();
 config.set("hbase.zookeeper.quorum", myZK);
 HTable hTable = new HTable(config, "testTable");
 byte[] cf = Bytes.toBytes("cf");
 byte[] row = Bytes.toBytes("row");
 byte[] col1 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_1));
 byte[] col2 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_1));
 byte[] col3 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 560, (byte) SUFFIX_1));
 byte[] col4 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 561, (byte) SUFFIX_1));
 byte[] col5 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 562, (byte) SUFFIX_1));
 byte[] col6 = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 563, (byte) SUFFIX_1));
 byte[] col1g = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_6));
 byte[] col2g = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_6));
 byte[] col1v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 558, (byte) SUFFIX_4));
 byte[] col2v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 559, (byte) SUFFIX_4));
 byte[] col3v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 560, (byte) SUFFIX_4));
 byte[] col4v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 561, (byte) SUFFIX_4));
 byte[] col5v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 562, (byte) SUFFIX_4));
 byte[] col6v = new QualifierConverter().objectToByteArray(new 
 Qualifier((short) 563, (byte) SUFFIX_4));
 // === INSERTION 

[jira] [Updated] (HBASE-9249) Add cp hook before setting PONR in split

2013-09-10 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-9249:
--

Attachment: HBASE-9249_v8.patch

PreCommit build failed with an unexpected Hudson problem. Retrying QA.
{code}
FATAL: Unable to delete script file /tmp/hudson1422193118764676423.sh
hudson.util.IOException2: remote file operation failed: 
/tmp/hudson1422193118764676423.sh at hudson.remoting.Channel@419aad26:hadoop1
at hudson.FilePath.act(FilePath.java:905)
at hudson.FilePath.act(FilePath.java:882)
at hudson.FilePath.delete(FilePath.java:1291)
at hudson.tasks.CommandInterpreter.perform(CommandInterpreter.java:101)
{code}

 Add cp hook before setting PONR in split
 

 Key: HBASE-9249
 URL: https://issues.apache.org/jira/browse/HBASE-9249
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.98.0
Reporter: rajeshbabu
Assignee: rajeshbabu
 Fix For: 0.98.0

 Attachments: HBASE-9249.patch, HBASE-9249_v2.patch, 
 HBASE-9249_v3.patch, HBASE-9249_v4.patch, HBASE-9249_v5.patch, 
 HBASE-9249_v6.patch, HBASE-9249_v7.patch, HBASE-9249_v7.patch, 
 HBASE-9249_v8.patch, HBASE-9249_v8.patch


 This hook helps to perform a split on a user region and its corresponding index 
 region such that either both are split or neither is.
 With this hook, the split for the user and index regions proceeds as follows:
 user region
 ===
 1) Create splitting znode for user region split
 2) Close parent user region
 3) split user region storefiles
 4) instantiate child regions of user region
 Through the new hook we can call index region transitions as below
 index region
 ===
 5) Create splitting znode for index region split
 6) Close parent index region
 7) Split storefiles of index region
 8) instantiate child regions of the index region
 If any of steps 5, 6, 7, 8 fail, roll back those steps and return null; on a null 
 return, throw an exception to roll back steps 1, 2, 3, 4
 9) set PONR
 10) do batch put of offline and split entries for user and index regions
 index region
 
 11) open daughters of the index region and transition the znode to split. We 
 will do this step through the preSplitAfterPONR hook. Opening index regions 
 before opening user regions helps to avoid put failures if there is a 
 colocation mismatch (this can happen if the user regions have finished opening 
 but the index regions are still opening)
 user region
 ===
 12) open daughters of the user region and transition the znode to split.
 Even if the region server crashes, at the end both user and index regions 
 will be split, or neither.
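 The all-or-nothing behaviour of steps 5-8 can be sketched as a generic try-then-rollback loop. This is a plain-Java illustration; splitIndexRegion and the step arrays are hypothetical names, not the HBase split transaction code.

```java
// Plain-Java sketch of the ordering above: run the index-region steps, and on
// any failure undo the completed ones in reverse before the caller sets PONR.
import java.util.ArrayDeque;
import java.util.Deque;

public class CoupledSplit {
    /** Runs the index-region split steps; undoes completed steps on failure. */
    public static boolean splitIndexRegion(Runnable[] steps, Runnable[] undo) {
        Deque<Runnable> done = new ArrayDeque<>();
        for (int i = 0; i < steps.length; i++) {
            try {
                steps[i].run();
                done.push(undo[i]);
            } catch (RuntimeException e) {
                while (!done.isEmpty()) {
                    done.pop().run();   // rollback in reverse order
                }
                return false;           // caller then rolls back user-region steps 1-4
            }
        }
        return true;                    // safe to proceed to setting PONR (step 9)
    }
}
```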



[jira] [Commented] (HBASE-9482) Do not enforce secure Hadoop for secure HBase

2013-09-10 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13762820#comment-13762820
 ] 

Nicolas Liochon commented on HBASE-9482:


I'm +1 on the patch, but
 - a review by Andrew or Gary would be better
 - this needs a release note and a fat warning in the documentation, as 
someone upgrading will have to change their settings to keep hbase running in 
secure mode. Especially for 0.94.

btw, it's strange that it does not impact any unit test.

 Do not enforce secure Hadoop for secure HBase
 -

 Key: HBASE-9482
 URL: https://issues.apache.org/jira/browse/HBASE-9482
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.95.2, 0.94.11
Reporter: Aditya Kishore
Assignee: Aditya Kishore
  Labels: security
 Fix For: 0.96.0

 Attachments: HBASE-9482-0.94.patch, HBASE-9482.patch


 We should recommend and not enforce secure Hadoop underneath as a requirement 
 to run secure HBase.
 Few of our customers have HBase clusters which expose only HBase services to 
 outside the physical network and no other services (including ssh) are 
 accessible from outside of such cluster.
 However they are forced to setup secure Hadoop and incur the penalty of 
 security overhead at filesystem layer even if they do not need to.
 The following code tests for both secure HBase and secure Hadoop.
 {code:title=org.apache.hadoop.hbase.security.User|borderStyle=solid}
   /**
    * Returns whether or not secure authentication is enabled for HBase.  Note that
    * HBase security requires HDFS security to provide any guarantees, so this requires that
    * both <code>hbase.security.authentication</code> and <code>hadoop.security.authentication</code>
    * are set to <code>kerberos</code>.
    */
   public static boolean isHBaseSecurityEnabled(Configuration conf) {
     return "kerberos".equalsIgnoreCase(conf.get(HBASE_SECURITY_CONF_KEY)) &&
         "kerberos".equalsIgnoreCase(
             conf.get(CommonConfigurationKeys.HADOOP_SECURITY_AUTHENTICATION));
   }
 {code}
 What is worse is that if {{hadoop.security.authentication}} is not set to 
 {{kerberos}} (undocumented at http://hbase.apache.org/book/security.html), 
 all other configurations have no impact and HBase RPCs silently switch back to 
 unsecured mode.



[jira] [Commented] (HBASE-9490) Provide independent execution environment for small tests

2013-09-10 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762830#comment-13762830
 ] 

Nicolas Liochon commented on HBASE-9490:


Yes. We should try to have as many real unit tests as possible, a real unit 
test being one that can run in parallel with the other tests. That's a 
continuous effort, difficult to do, but necessary if we want to limit our test 
time. Agreed, there is not much individual & short-term incentive.

Surefire now supports fork reuse; this would allow us to parallelize the small 
tests at no cost. Surefire 2.15 was not working with HBase, but 2.16 is now 
available. If you can give it a try, that would be great (see the old patch in 
HBASE-4955).

-PlocalTests should be used only to run a subset of tests. Even on your local 
machine, you should run the tests in parallel (and usually your dev machine 
will be more powerful than the Apache build machine). This is explained in the 
HBase book, 16.7.3.5, "Running tests faster".
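
The fork-reuse setting mentioned above is plain Surefire configuration; a minimal sketch of what it might look like in a pom (the version and values here are illustrative, not taken from the HBase pom):
{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.16</version>
  <configuration>
    <!-- Reuse forked JVMs across test classes instead of forking per class -->
    <reuseForks>true</reuseForks>
    <forkCount>4</forkCount>
  </configuration>
</plugin>
{code}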



 Provide independent execution environment for small tests
 -

 Key: HBASE-9490
 URL: https://issues.apache.org/jira/browse/HBASE-9490
 Project: HBase
  Issue Type: Improvement
Reporter: Vasu Mariyala
Assignee: Vasu Mariyala
 Attachments: 0.94-Independent-Test-Execution.patch, 
 0.96-trunk-Independent-Test-Execution.patch


 Some of the state related to schema metrics is stored in static variables, and 
 since the small test cases are run in a single JVM, this causes random 
 behavior in the output of the tests.
 An example scenario is the test case failures in HBASE-8930
 {code}
 for (SchemaMetrics cfm : tableAndFamilyToMetrics.values()) {
   if (metricName.startsWith(CF_PREFIX + CF_PREFIX)) {
     throw new AssertionError("Column family prefix used twice: " +
         metricName);
   }
 }
 {code}
 The above code throws an error when the metric name starts with "cf.cf.". It 
 would be helpful if anyone sheds some light on the reason behind checking 
 for "cf.cf.".
 The scenarios in which we would have a metric name start with "cf.cf." are as 
 follows (see the generateSchemaMetricsPrefix method of SchemaMetrics):
 a) The column family name is "cf"
 AND
 b) The table name is either "" (empty) or "use table name globally" is false 
 (the useTableNameGlobally variable of SchemaMetrics).
 The table name is empty only in the case of ALL_SCHEMA_METRICS, which has the 
 column family as "". So we can rule out the
 possibility of the table name being empty.
 Also note that the variables useTableNameGlobally and 
 tableAndFamilyToMetrics of SchemaMetrics are static and are shared across 
 all the tests that run in a single JVM. In our case, the profile runAllTests 
 has the below configuration:
 {code}
 <surefire.firstPartForkMode>once</surefire.firstPartForkMode>
 <surefire.firstPartParallel>none</surefire.firstPartParallel>
 <surefire.firstPartThreadCount>1</surefire.firstPartThreadCount>
 <surefire.firstPartGroups>org.apache.hadoop.hbase.SmallTests</surefire.firstPartGroups>
 {code}
 Hence all of our small tests run in a single JVM and share the 
 variables useTableNameGlobally and tableAndFamilyToMetrics.
 The reasons why the order of execution of the tests caused this failure are 
 as follows:
 a) A bunch of small tests, like TestMemStore and TestSchemaConfigured, set 
 useTableNameGlobally to false. But these tests don't create tables that have 
 the column family name "cf".
 b) If the tests in step (a) run before the tests which create tables/regions 
 with column family 'cf', metric names would start with "cf.cf.".
 c) If any other tests, like the failed tests (TestScannerSelectionUsingTTL, 
 TestHFileReaderV1, TestScannerSelectionUsingKeyRange), validate schema 
 metrics, they would fail as the metric names start with "cf.cf.".
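 The failure mode described here can be sketched in a few lines. This is not the real SchemaMetrics class; the names, the simplified prefix rule, and the two-"test" sequence in main are illustrative stand-ins for the mechanism only:
 {code}
 /**
  * Minimal sketch of the static-state leak: a flag mutated by one test
  * changes the metric prefixes seen by tests that run later in the same JVM.
  */
 public class SharedStaticDemo {
     static final String CF_PREFIX = "cf.";
     // Shared across every test in the JVM, like SchemaMetrics.useTableNameGlobally.
     static boolean useTableNameGlobally = true;

     static String prefix(String table, String cf) {
         // With the table name omitted, a family literally named "cf"
         // produces the suspicious "cf.cf." prefix.
         return (useTableNameGlobally ? "tbl." + table + "." : "") + CF_PREFIX + cf + ".";
     }

     public static void main(String[] args) {
         // Test A (e.g. TestSchemaConfigured) flips the static flag...
         useTableNameGlobally = false;
         // ...so Test B, creating a family named "cf", sees "cf.cf.".
         String p = prefix("testtable", "cf");
         if (!p.startsWith(CF_PREFIX + CF_PREFIX)) throw new AssertionError(p);
     }
 }
 {code}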
 On my local machine, I tried to re-create the failure scenario by 
 changing the surefire test configuration and creating a simple test (TestSimple) 
 which just creates a region for the table 'testtable' and column family 'cf'.
 {code}
 TestSimple.java
 --
   @Before
   public void setUp() throws Exception {
     HTableDescriptor htd = new HTableDescriptor(TABLE_NAME_BYTES);
     htd.addFamily(new HColumnDescriptor(FAMILY_NAME_BYTES));
     HRegionInfo info = new HRegionInfo(TABLE_NAME_BYTES, null, null, false);
     this.region = HRegion.createHRegion(info, TEST_UTIL.getDataTestDir(),
         TEST_UTIL.getConfiguration(), htd);
     Put put = new Put(ROW_BYTES);
     for (int i = 0; i < 10; i += 2) {
       // puts 0, 2, 4, 6 and 8
       put.add(FAMILY_NAME_BYTES, Bytes.toBytes(QUALIFIER_PREFIX + i), i,
           Bytes.toBytes(VALUE_PREFIX + i));
     }

[jira] [Commented] (HBASE-9488) Improve performance for small scan

2013-09-10 Thread Chao Shi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762833#comment-13762833
 ] 

Chao Shi commented on HBASE-9488:
-

Great patch! We planned to do this too. I didn't get how you reduce RPCs to one 
(is it implemented in this patch?).

 Improve performance for small scan
 --

 Key: HBASE-9488
 URL: https://issues.apache.org/jira/browse/HBASE-9488
 Project: HBase
  Issue Type: Improvement
  Components: Client, Performance, Scanners
Reporter: chunhui shen
Assignee: chunhui shen
 Attachments: HBASE-9488-trunk.patch, HBASE-9488-trunkV2.patch, test 
 results.jpg


 review board:
 https://reviews.apache.org/r/14059/
 Now, one scan operation makes at least 3 RPC calls:
 openScanner();
 next();
 closeScanner();
 I think we could reduce the RPC calls to one for a small scan to get better 
 performance.
 Also, using pread is better than seek+read for a small scan (for this point, see 
 more on HBASE-7266).
 The patch implements such a small scan, with a performance test taken as 
 follows:
 a. Environment:
 patched on the 0.94 version
 one regionserver;
 one client with 50 concurrent threads;
 KV size: 50/100;
 100% LRU cache hit ratio;
 random start row of scan
 b. Results:
 see the picture attachment
 *Usage:*
 Scan scan = new Scan(startRow, stopRow);
 scan.setSmall(true);
 ResultScanner scanner = table.getScanner(scan);
 Set the new 'small' attribute to true on the scan; everything else is the same.
  



[jira] [Commented] (HBASE-9359) Convert KeyValue to Cell in hbase-client module - Result/Put/Delete, ColumnInterpreter

2013-09-10 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762834#comment-13762834
 ] 

Nicolas Liochon commented on HBASE-9359:


[~ram_krish] yes.

 Convert KeyValue to Cell in hbase-client module - Result/Put/Delete, 
 ColumnInterpreter
 --

 Key: HBASE-9359
 URL: https://issues.apache.org/jira/browse/HBASE-9359
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Affects Versions: 0.95.2
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.98.0, 0.96.0

 Attachments: hbase-9334-9359.v4.patch, hbase-9359-9334.v5.patch, 
 hbase-9359-9334.v6.patch, hbase-9359.patch, hbase-9359.v2.patch, 
 hbase-9359.v3.patch, hbase-9359.v5.patch, hbase-9359.v6.patch


 This path is the second half of eliminating KeyValue from the client 
 interfaces.  This percolated through quite a bit. 



[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762841#comment-13762841
 ] 

Hudson commented on HBASE-8930:
---

SUCCESS: Integrated in hbase-0.96 #28 (See 
[https://builds.apache.org/job/hbase-0.96/28/])
HBASE-8930 REAPPLY with testfix (larsh: rev 1521355)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ColumnTracker.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ExplicitColumnTracker.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanWildcardColumnTracker.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestInvocationRecordFilter.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestExplicitColumnTracker.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanWildcardColumnTracker.java


 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Fix For: 0.98.0, 0.94.12, 0.96.1

 Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
 0.94-HBASE-8930-rev2.patch, 0.94-Independent-Test-Execution.patch, 
 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 
 0.96-HBASE-8930-rev2.patch, 0.96-HBASE-8930-rev3.patch, 
 0.96-trunk-Independent-Test-Execution.patch, 8930-0.94.txt, HBASE-8930.patch, 
 HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
 HBASE-8930-rev4.patch, HBASE-8930-rev5.patch, HBASE-8930-rev6.patch


 1- Fill a row with some columns
 2- Get the row with fewer columns than the universe - use a filter to print KVs
 3- The filter prints columns that were not requested
 The filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
 and prints the KV's qualifier
 SUFFIX_0 = 0
 SUFFIX_1 = 1
 SUFFIX_4 = 4
 SUFFIX_6 = 6
 P = Persisted
 R = Requested
 E = Evaluated
 X = Returned
 | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 | ...
 |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      | ...
 |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      | ...
 |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      | ...
 |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
 {code:title=ExtraColumnTest.java|borderStyle=solid}
 @Test
 public void testFilter() throws Exception {
   Configuration config = HBaseConfiguration.create();
   config.set("hbase.zookeeper.quorum", "myZK");
   HTable hTable = new HTable(config, "testTable");
   byte[] cf = Bytes.toBytes("cf");
   byte[] row = Bytes.toBytes("row");
   byte[] col1 = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_1));
   byte[] col2 = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_1));
   byte[] col3 = new QualifierConverter().objectToByteArray(new Qualifier((short) 560, (byte) SUFFIX_1));
   byte[] col4 = new QualifierConverter().objectToByteArray(new Qualifier((short) 561, (byte) SUFFIX_1));
   byte[] col5 = new QualifierConverter().objectToByteArray(new Qualifier((short) 562, (byte) SUFFIX_1));
   byte[] col6 = new QualifierConverter().objectToByteArray(new Qualifier((short) 563, (byte) SUFFIX_1));
   byte[] col1g = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_6));
   byte[] col2g = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_6));
   byte[] col1v = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_4));
   byte[] col2v = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_4));
   byte[] col3v = new QualifierConverter().objectToByteArray(new Qualifier((short) 560, (byte) SUFFIX_4));
   byte[] col4v = new QualifierConverter().objectToByteArray(new Qualifier((short) 561, (byte) SUFFIX_4));
   byte[] col5v = new QualifierConverter().objectToByteArray(new Qualifier((short) 562, (byte) SUFFIX_4));
   byte[] col6v = new QualifierConverter().objectToByteArray(new Qualifier((short) 563, (byte) 

[jira] [Commented] (HBASE-9488) Improve performance for small scan

2013-09-10 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762844#comment-13762844
 ] 

Nicolas Liochon commented on HBASE-9488:


Impressive patch. 
Some comments:
 - As Stack said, does it need to be public? But it's strange, 
AbstractClientScanner is public as well...
 - The javadocs mention ShortClientScanner. 
 - hbase.client.smallscanner.caching should be in hbase-default; it's an 
important setting
 - there seems to be some duplication between ClientScanner & this class; maybe 
some stuff could be shared...

 Improve performance for small scan
 --

 Key: HBASE-9488
 URL: https://issues.apache.org/jira/browse/HBASE-9488
 Project: HBase
  Issue Type: Improvement
  Components: Client, Performance, Scanners
Reporter: chunhui shen
Assignee: chunhui shen
 Attachments: HBASE-9488-trunk.patch, HBASE-9488-trunkV2.patch, test 
 results.jpg





[jira] [Commented] (HBASE-9488) Improve performance for small scan

2013-09-10 Thread chunhui shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762842#comment-13762842
 ] 

chunhui shen commented on HBASE-9488:
-

bq. I didn't get how you reduce RPCs to one (is it implemented in this patch?).
Yes, see the ClientSmallScanner in the patch.

With the patch, there is no RPC when calling HTable#getScanner or 
ResultScanner#close; an RPC is made only when calling ResultScanner#next.
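
The lazy, single-round-trip behavior described here can be sketched as follows. This is not ClientSmallScanner from the patch; the class, the String rows, and the counter standing in for a real region server call are all illustrative:
{code}
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

/**
 * Sketch of the single-RPC small-scan idea: constructing and closing the
 * scanner cost nothing; the one round-trip happens on first iteration.
 */
public class LazySmallScanner implements Iterable<String> {
    int rpcCount = 0;          // how many server round-trips we made
    private List<String> rows; // fetched lazily, in one shot

    // Constructing the scanner (like HTable#getScanner) costs no RPC.
    public LazySmallScanner() {}

    public Iterator<String> iterator() {
        if (rows == null) {
            rpcCount++;        // the one RPC: open + next + close combined
            rows = Arrays.asList("row1", "row2");
        }
        return rows.iterator();
    }

    public void close() {}     // closing costs no RPC either

    public static void main(String[] args) {
        LazySmallScanner s = new LazySmallScanner();
        if (s.rpcCount != 0) throw new AssertionError();
        for (String r : s) { /* consume results */ }
        s.close();
        if (s.rpcCount != 1) throw new AssertionError();
    }
}
{code}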

 Improve performance for small scan
 --

 Key: HBASE-9488
 URL: https://issues.apache.org/jira/browse/HBASE-9488
 Project: HBase
  Issue Type: Improvement
  Components: Client, Performance, Scanners
Reporter: chunhui shen
Assignee: chunhui shen
 Attachments: HBASE-9488-trunk.patch, HBASE-9488-trunkV2.patch, test 
 results.jpg





[jira] [Commented] (HBASE-9249) Add cp hook before setting PONR in split

2013-09-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9249?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762858#comment-13762858
 ] 

Hadoop QA commented on HBASE-9249:
--

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12602300/HBASE-9249_v8.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 6 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7113//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7113//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7113//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7113//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7113//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7113//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7113//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7113//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7113//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7113//console

This message is automatically generated.

 Add cp hook before setting PONR in split
 

 Key: HBASE-9249
 URL: https://issues.apache.org/jira/browse/HBASE-9249
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.98.0
Reporter: rajeshbabu
Assignee: rajeshbabu
 Fix For: 0.98.0

 Attachments: HBASE-9249.patch, HBASE-9249_v2.patch, 
 HBASE-9249_v3.patch, HBASE-9249_v4.patch, HBASE-9249_v5.patch, 
 HBASE-9249_v6.patch, HBASE-9249_v7.patch, HBASE-9249_v7.patch, 
 HBASE-9249_v8.patch, HBASE-9249_v8.patch


 This hook helps to perform the split on a user region and the corresponding index 
 region such that both will be split, or none.
 With this hook, the split for the user and index regions proceeds as follows:
 user region
 ===
 1) Create splitting znode for user region split
 2) Close parent user region
 3) Split user region storefiles
 4) Instantiate child regions of user region
 Through the new hook we can call index region transitions as below:
 index region
 ===
 5) Create splitting znode for index region split
 6) Close parent index region
 7) Split storefiles of index region
 8) Instantiate child regions of the index region
 If any failures in 5, 6, 7, 8, roll back those steps and return null; on a null 
 return, throw an exception to roll back 1, 2, 3, 4
 9) Set PONR
 10) Do a batch put of offline and split entries for user and index regions
 index region
 
 11) Open daughters of index regions and transition znode to split. This step 
 we will do through the preSplitAfterPONR hook. Opening index regions before 
 opening user regions helps to avoid put failures if there is a colocation 
 mismatch (this can happen if user region opening has completed but index region 
 opening is in progress)
 user region
 ===
 12) Open daughters of user regions and transition znode to split.
 Even if the region server crashes, at the end both user and index regions 
 will be split, or none.
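
The "both split or none" contract in the steps above can be sketched as a pre-PONR hook. This is a toy illustration, not the HBase coprocessor API: the interface, method names, and step comments are invented to show only the rollback ordering:
{code}
/**
 * Sketch of the contract: before the point of no return (PONR), a failure
 * on the index-region side aborts and rolls back the user-region split too.
 */
public class SplitWithHook {
    interface Hook { boolean beforePONR(); } // e.g. split the index region here

    static boolean split(Hook hook) {
        // steps 1-4: prepare the user-region split (elided)
        if (!hook.beforePONR()) {
            // steps 5-8 failed: roll back 1-4, so nothing is split
            return false;
        }
        // step 9 onward: PONR is set, both splits are committed
        return true;
    }

    public static void main(String[] args) {
        if (split(() -> false)) throw new AssertionError(); // index failed: none split
        if (!split(() -> true)) throw new AssertionError(); // both split
    }
}
{code}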


[jira] [Updated] (HBASE-9482) Do not enforce secure Hadoop for secure HBase

2013-09-10 Thread Aditya Kishore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Kishore updated HBASE-9482:
--

Status: Open  (was: Patch Available)

 Do not enforce secure Hadoop for secure HBase
 -

 Key: HBASE-9482
 URL: https://issues.apache.org/jira/browse/HBASE-9482
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.94.11, 0.95.2
Reporter: Aditya Kishore
Assignee: Aditya Kishore
  Labels: security
 Fix For: 0.96.0

 Attachments: HBASE-9482-0.94.patch, HBASE-9482.patch, HBASE-9482.patch





[jira] [Updated] (HBASE-9482) Do not enforce secure Hadoop for secure HBase

2013-09-10 Thread Aditya Kishore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Kishore updated HBASE-9482:
--

Attachment: HBASE-9482.patch

 Do not enforce secure Hadoop for secure HBase
 -

 Key: HBASE-9482
 URL: https://issues.apache.org/jira/browse/HBASE-9482
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.95.2, 0.94.11
Reporter: Aditya Kishore
Assignee: Aditya Kishore
  Labels: security
 Fix For: 0.96.0

 Attachments: HBASE-9482-0.94.patch, HBASE-9482.patch, HBASE-9482.patch





[jira] [Updated] (HBASE-9482) Do not enforce secure Hadoop for secure HBase

2013-09-10 Thread Aditya Kishore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Kishore updated HBASE-9482:
--

Release Note: Seems that trunk code moved ahead since I generated the 
patch. Resubmitting.
  Status: Patch Available  (was: Open)

 Do not enforce secure Hadoop for secure HBase
 -

 Key: HBASE-9482
 URL: https://issues.apache.org/jira/browse/HBASE-9482
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.94.11, 0.95.2
Reporter: Aditya Kishore
Assignee: Aditya Kishore
  Labels: security
 Fix For: 0.96.0

 Attachments: HBASE-9482-0.94.patch, HBASE-9482.patch, HBASE-9482.patch





[jira] [Commented] (HBASE-8751) Enable peer cluster to choose/change the ColumnFamilies/Tables it really want to replicate from a source cluster

2013-09-10 Thread Feng Honghua (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762885#comment-13762885
 ] 

Feng Honghua commented on HBASE-8751:
-

[~jdcryans] would you please help review this patch? thanks

 Enable peer cluster to choose/change the ColumnFamilies/Tables it really want 
 to replicate from a source cluster
 

 Key: HBASE-8751
 URL: https://issues.apache.org/jira/browse/HBASE-8751
 Project: HBase
  Issue Type: Improvement
  Components: Replication
Reporter: Feng Honghua
 Attachments: HBASE-8751-0.94-V0.patch


 Consider these scenarios (all CFs have replication-scope=1):
 1) Cluster S has 3 tables: table A has cfA,cfB; table B has cfX,cfY; table C 
 has cf1,cf2.
 2) Cluster X wants to replicate table A : cfA, table B : cfX and table C from 
 cluster S.
 3) Cluster Y wants to replicate table B : cfY and table C : cf2 from cluster S.
 The current replication implementation can't achieve this since it pushes the 
 data of all the replicatable column families from cluster S to all its peers, 
 X/Y in this scenario.
 This improvement provides a fine-grained replication scheme which enables a peer 
 cluster to choose the column families/tables it really wants from the source 
 cluster:
 A) Set the table:cf-list for a peer when calling add_peer:
   hbase-shell> add_peer '3', "zk:1100:/hbase", "table1; table2:cf1,cf2; table3:cf2"
 B) View the table:cf-list config for a peer using show_peer_tableCFs:
   hbase-shell> show_peer_tableCFs '1'
 C) Change/set the table:cf-list for a peer using set_peer_tableCFs:
   hbase-shell> set_peer_tableCFs '2', "table1:cfX; table2:cf1; table3:cf1,cf2"
 In this scheme, replication-scope=1 only means a column family CAN be 
 replicated to other clusters; the 'table:cf-list' alone determines 
 WHICH cf/table will actually be replicated to a specific peer.
 For backward compatibility, an empty 'table:cf-list' will replicate all 
 replicatable cf/tables. (This means we don't allow a peer which replicates 
 nothing from a source cluster; we think that's reasonable: if replicating 
 nothing, why bother adding a peer?)
 This improvement addresses the exact problem raised by the first FAQ at 
 http://hbase.apache.org/replication.html:
   "GLOBAL means replicate? Any provision to replicate only to cluster X and 
 not to cluster Y? or is that for later?"
   "Yes, this is for much later."
 I also noticed somebody mentioned making replication-scope an integer rather than 
 a boolean for such fine-grained replication purposes, but I think extending 
 replication-scope can't achieve the same replication-granularity 
 flexibility as the per-peer replication configurations above.
 This improvement has been running smoothly in our production clusters 
 (Xiaomi) for several months.
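
 The per-peer filtering rule described above can be sketched as a small predicate. This is illustrative only, not the patch's classes: the class name, the Map representation of the table:cf-list, and the convention that an empty family set means "all families" are assumptions made for the sketch:
 {code}
 import java.util.*;

 /**
  * Sketch of the rule: an empty table:cf map replicates everything
  * (backward compatibility); otherwise only listed tables/families pass.
  */
 public class PeerTableCfFilter {
     // table -> families; an empty family set means "all families of this table"
     private final Map<String, Set<String>> tableCfs;

     PeerTableCfFilter(Map<String, Set<String>> tableCfs) { this.tableCfs = tableCfs; }

     boolean shouldReplicate(String table, String cf) {
         if (tableCfs.isEmpty()) return true;       // empty list: replicate all
         Set<String> cfs = tableCfs.get(table);
         if (cfs == null) return false;             // table not chosen by this peer
         return cfs.isEmpty() || cfs.contains(cf);  // all families, or listed ones
     }

     public static void main(String[] args) {
         // Peer X from the scenario: table A:cfA, table B:cfX, all of table C.
         Map<String, Set<String>> m = new HashMap<>();
         m.put("A", new HashSet<>(Arrays.asList("cfA")));
         m.put("B", new HashSet<>(Arrays.asList("cfX")));
         m.put("C", new HashSet<>());
         PeerTableCfFilter x = new PeerTableCfFilter(m);
         if (!x.shouldReplicate("A", "cfA")) throw new AssertionError();
         if (x.shouldReplicate("A", "cfB")) throw new AssertionError();
         if (!x.shouldReplicate("C", "cf2")) throw new AssertionError();
     }
 }
 {code}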



[jira] [Updated] (HBASE-9489) Add cp hooks in online merge before and after setting PONR

2013-09-10 Thread rajeshbabu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9489?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

rajeshbabu updated HBASE-9489:
--

Description: As we need to merge the index region along with the user region, we 
need the hooks before and after setting PONR in the region merge transition.  (was: 
As we need to merge index region along with user region we need the hooks in 
before and after setting PONR in region merge transtion.)

 Add cp hooks in online merge before and after setting PONR
 --

 Key: HBASE-9489
 URL: https://issues.apache.org/jira/browse/HBASE-9489
 Project: HBase
  Issue Type: Sub-task
Reporter: rajeshbabu
Assignee: rajeshbabu

 As we need to merge the index region along with the user region, we need the 
 hooks before and after setting the PONR in region merge transition.



[jira] [Commented] (HBASE-9482) Do not enforce secure Hadoop for secure HBase

2013-09-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762938#comment-13762938
 ] 

Hadoop QA commented on HBASE-9482:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12602310/HBASE-9482.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestHCM

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7114//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7114//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7114//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7114//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7114//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7114//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7114//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7114//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7114//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7114//console

This message is automatically generated.

 Do not enforce secure Hadoop for secure HBase
 -

 Key: HBASE-9482
 URL: https://issues.apache.org/jira/browse/HBASE-9482
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.95.2, 0.94.11
Reporter: Aditya Kishore
Assignee: Aditya Kishore
  Labels: security
 Fix For: 0.96.0

 Attachments: HBASE-9482-0.94.patch, HBASE-9482.patch, HBASE-9482.patch


 We should recommend, not enforce, secure Hadoop underneath as a requirement 
 to run secure HBase.
 A few of our customers have HBase clusters which expose only HBase services 
 outside the physical network; no other services (including ssh) are 
 accessible from outside such a cluster.
 However, they are forced to set up secure Hadoop and incur the penalty of 
 security overhead at the filesystem layer even if they do not need to.
 The following code tests for both secure HBase and secure Hadoop.
 {code:title=org.apache.hadoop.hbase.security.User|borderStyle=solid}
   /**
    * Returns whether or not secure authentication is enabled for HBase. Note that
    * HBase security requires HDFS security to provide any guarantees, so this
    * requires that both <code>hbase.security.authentication</code> and
    * <code>hadoop.security.authentication</code> are set to <code>kerberos</code>.
    */
   public static boolean isHBaseSecurityEnabled(Configuration conf) {
     return "kerberos".equalsIgnoreCase(conf.get(HBASE_SECURITY_CONF_KEY)) &&
         "kerberos".equalsIgnoreCase(
             conf.get(CommonConfigurationKeys.HADOOP_SECURITY_AUTHENTICATION));
   }
 {code}
 What is worse, if {{hadoop.security.authentication}} is not set to 
 {{kerberos}} 

[jira] [Commented] (HBASE-9482) Do not enforce secure Hadoop for secure HBase

2013-09-10 Thread Aditya Kishore (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762948#comment-13762948
 ] 

Aditya Kishore commented on HBASE-9482:
---

The test failure in trunk seems to be related. Let me have a look.

 Do not enforce secure Hadoop for secure HBase
 -

 Key: HBASE-9482
 URL: https://issues.apache.org/jira/browse/HBASE-9482
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.95.2, 0.94.11
Reporter: Aditya Kishore
Assignee: Aditya Kishore
  Labels: security
 Fix For: 0.96.0

 Attachments: HBASE-9482-0.94.patch, HBASE-9482.patch, HBASE-9482.patch


 We should recommend, not enforce, secure Hadoop underneath as a requirement 
 to run secure HBase.
 A few of our customers have HBase clusters which expose only HBase services 
 outside the physical network; no other services (including ssh) are 
 accessible from outside such a cluster.
 However, they are forced to set up secure Hadoop and incur the penalty of 
 security overhead at the filesystem layer even if they do not need to.
 The following code tests for both secure HBase and secure Hadoop.
 {code:title=org.apache.hadoop.hbase.security.User|borderStyle=solid}
   /**
    * Returns whether or not secure authentication is enabled for HBase. Note that
    * HBase security requires HDFS security to provide any guarantees, so this
    * requires that both <code>hbase.security.authentication</code> and
    * <code>hadoop.security.authentication</code> are set to <code>kerberos</code>.
    */
   public static boolean isHBaseSecurityEnabled(Configuration conf) {
     return "kerberos".equalsIgnoreCase(conf.get(HBASE_SECURITY_CONF_KEY)) &&
         "kerberos".equalsIgnoreCase(
             conf.get(CommonConfigurationKeys.HADOOP_SECURITY_AUTHENTICATION));
   }
 {code}
 What is worse, if {{hadoop.security.authentication}} is not set to 
 {{kerberos}} (undocumented at http://hbase.apache.org/book/security.html), 
 all other configuration has no impact and HBase RPCs silently switch back to 
 unsecured mode.
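The relaxed check this issue proposes — keying HBase security off {{hbase.security.authentication}} alone — can be illustrated with a small self-contained sketch. A plain Map stands in for Hadoop's Configuration, and the key names mirror the quoted code; this is an illustration, not the patch itself:

```java
import java.util.HashMap;
import java.util.Map;

public class SecurityCheckSketch {
    // Same key name as quoted in the issue description.
    static final String HBASE_SECURITY_CONF_KEY = "hbase.security.authentication";

    // Relaxed check: only the HBase-level setting decides, so secure HBase
    // no longer depends on hadoop.security.authentication.
    public static boolean isHBaseSecurityEnabled(Map<String, String> conf) {
        return "kerberos".equalsIgnoreCase(conf.get(HBASE_SECURITY_CONF_KEY));
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        conf.put(HBASE_SECURITY_CONF_KEY, "kerberos");
        System.out.println(isHBaseSecurityEnabled(conf)); // prints "true"
        conf.put(HBASE_SECURITY_CONF_KEY, "simple");
        System.out.println(isHBaseSecurityEnabled(conf)); // prints "false"
    }
}
```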



[jira] [Commented] (HBASE-9468) Previous active master can still serve RPC requests when it is trying to recover an expired zk session

2013-09-10 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9468?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762964#comment-13762964
 ] 

stack commented on HBASE-9468:
--

How hard would it be, [~fenghh], to make it so we take down rpc during the 
recovery attempt? If recovery succeeds, put the rpc back up again? (I believe 
you even suggested this.) I am thinking about the non-sophisticated user 
running on a small cluster, probably without a backup master. In this case, 
they'd prefer their master keep running if at all possible, if it recovers 
its zk session.

If it complicates or muddles your patch, or if the gap between zk getting the 
callback and being able to shut down rpc is too large, let's not bother. Just 
thought I'd ask.

 Previous active master can still serve RPC requests when it is trying to 
 recover an expired zk session
 ---

 Key: HBASE-9468
 URL: https://issues.apache.org/jira/browse/HBASE-9468
 Project: HBase
  Issue Type: Bug
Reporter: Feng Honghua

 When the active master's zk session expires, it'll try to recover the zk 
 session, but without turning off its RpcServer. What if a previous backup 
 master has already become the now-active master, and some client tries to 
 send requests to this expired master using cached master info? Any problem 
 here?



[jira] [Updated] (HBASE-8610) Introduce interfaces to support MultiWAL

2013-09-10 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-8610:
--

Status: Patch Available  (was: Open)

Ran testcases. I have 2 failures; I need to check if they are because of this 
patch. Will submit it to hadoopQA.

 Introduce interfaces to support MultiWAL
 

 Key: HBASE-8610
 URL: https://issues.apache.org/jira/browse/HBASE-8610
 Project: HBase
  Issue Type: Improvement
  Components: wal
Affects Versions: 0.98.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.98.0

 Attachments: HBASE-8610_firstcut.patch


 As the heading says this JIRA is specific to adding interfaces to support 
 MultiWAL.



[jira] [Updated] (HBASE-8610) Introduce interfaces to support MultiWAL

2013-09-10 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-8610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-8610:
--

Attachment: HBASE-8610_firstcut.patch

Introduces an interface for MultiWAL support. The default implementation 
creates one WAL for meta and one WAL for the RS. 
TableBasedGrouper would group per table. We can also have an implementation 
that groups the regions being opened across a specified number of WALs; I have 
not attached that implementation in this patch. Added testcases for default 
grouping and table-based grouping. Comments and feedback welcome.
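The grouping idea described above can be sketched as a tiny strategy interface. The names here (WALGrouper, groupFor, the string group keys) follow the comment but are illustrative; the patch's actual interfaces and signatures may differ:

```java
// Strategy: map each region (table name + encoded region name) to a WAL group.
interface WALGrouper {
    String groupFor(String tableName, String encodedRegionName);
}

// Default grouping: one WAL group for all user regions.
class DefaultGrouper implements WALGrouper {
    public String groupFor(String tableName, String encodedRegionName) {
        return "default";
    }
}

// Table-based grouping: one WAL group per table.
class TableBasedGrouper implements WALGrouper {
    public String groupFor(String tableName, String encodedRegionName) {
        return tableName;
    }
}

public class WALGrouperSketch {
    public static void main(String[] args) {
        WALGrouper byTable = new TableBasedGrouper();
        System.out.println(byTable.groupFor("t1", "abc"));                 // prints "t1"
        System.out.println(new DefaultGrouper().groupFor("t1", "abc"));    // prints "default"
    }
}
```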

 Introduce interfaces to support MultiWAL
 

 Key: HBASE-8610
 URL: https://issues.apache.org/jira/browse/HBASE-8610
 Project: HBase
  Issue Type: Improvement
  Components: wal
Affects Versions: 0.98.0
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 0.98.0

 Attachments: HBASE-8610_firstcut.patch


 As the heading says this JIRA is specific to adding interfaces to support 
 MultiWAL.



[jira] [Commented] (HBASE-8884) Pluggable RpcScheduler

2013-09-10 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762979#comment-13762979
 ] 

stack commented on HBASE-8884:
--

[~stepinto] I was reading the rpc code so I could better review an incoming rpc 
patch and because I have notions I will likely never get to (see below). While 
reading, I was trying to write up documentation of how it all worked. This is 
where I ran into how opaque and convoluted its operation is, what with unused 
thread locals used for passing messages, and then the stuff added by this patch 
-- complications we can hopefully clean up in subsequent refactorings as you 
suggest. Do you have any pushback on my review comments?

'Pooling of buffers across requests' is the notion that rather than do

  data = ByteBuffer.allocate(dataLength);

inside an rpc Reader thread every time we get a new request, since we have 
read the total rpc size and know the request's size, we could instead go to a 
pool of buffers and ask it for a buffer of the appropriate size. We'd check it 
out for the duration of the request. We'd need to check it back in when done 
(a likely good spot is at the tail of the Handler when it adds the response 
to the Responder queue). This could save us a bunch of allocations (and GC 
load, etc.). I think we could get away with this given how KeyValues are copied 
into the MSLAB when we add them to the MemStore (we'd have to figure out what 
to do about those that are not copied, i.e. KeyValues that are large).
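The check-out/check-in idea above can be sketched with a minimal single-size-class pool. All names here are illustrative, and a real implementation would need multiple size classes, bounds, and accounting:

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;

public class BufferPoolSketch {
    private final ConcurrentLinkedQueue<ByteBuffer> pool = new ConcurrentLinkedQueue<>();
    private final int capacity;

    BufferPoolSketch(int capacity) { this.capacity = capacity; }

    // Reader side: check a buffer out instead of allocating per request.
    ByteBuffer checkOut(int dataLength) {
        if (dataLength > capacity) {
            return ByteBuffer.allocate(dataLength); // too big for this pool
        }
        ByteBuffer b = pool.poll();
        if (b == null) b = ByteBuffer.allocate(capacity); // pool empty: allocate
        b.clear().limit(dataLength);                      // trim to this request's size
        return b;
    }

    // Handler tail: check the buffer back in once the response is queued.
    void checkIn(ByteBuffer b) {
        if (b.capacity() == capacity) pool.offer(b); // oversized ones are dropped
    }

    public static void main(String[] args) {
        BufferPoolSketch p = new BufferPoolSketch(1024);
        ByteBuffer b1 = p.checkOut(100);
        p.checkIn(b1);
        ByteBuffer b2 = p.checkOut(200);
        System.out.println(b1 == b2); // prints "true": the buffer was reused
    }
}
```

The same pool shape is what makes the direct-byte-buffer variant attractive: DBBs allocated once at server start amortize their slow allocation and unpredictable cleanup.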

If the above worked, we could then entertain making the pool a pool of direct 
byte buffers. (Downsides of DBBs are that they take a while to allocate and 
their cleanup is unpredictable -- having them in a pool that we set up on 
server start would skirt some of these downsides.) The copy from the socket 
channel to the DBB would be offheap, making for more savings.

If the memstore implementation was itself offheap... but now I am into 
fantasy, so I will stop.

Thanks

 Pluggable RpcScheduler
 --

 Key: HBASE-8884
 URL: https://issues.apache.org/jira/browse/HBASE-8884
 Project: HBase
  Issue Type: Improvement
  Components: IPC/RPC
Reporter: Chao Shi
Assignee: Chao Shi
 Fix For: 0.98.0

 Attachments: hbase-8884.patch, hbase-8884-v2.patch, 
 hbase-8884-v3.patch, hbase-8884-v4.patch, hbase-8884-v5.patch, 
 hbase-8884-v6.patch, hbase-8884-v7.patch, hbase-8884-v8.patch


 Today, the RPC scheduling mechanism is pretty simple: it executes requests in 
 isolated thread-pools based on their priority. In the current implementation, 
 all normal get/put requests use the same pool. We'd like to add some 
 per-user or per-region level isolation, so that a misbehaving user/region 
 cannot easily saturate the thread-pool and cause a DoS for others. The idea 
 is similar to the FairScheduler in MR. The current scheduling code is not 
 standalone and is mixed with other code (Connection#processRequest). This 
 issue is the first step: extract it into an interface, so that people are 
 free to write and test their own implementations.
 This patch doesn't make it completely pluggable yet, as some parameters are 
 passed via the constructor. This is because HMaster and HRegionServer both 
 use RpcServer and they have different thread-pool size configs. Let me know 
 if you have a solution to this.
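The extraction described above can be sketched as follows. The interface and class names here are illustrative (HBase's actual RpcScheduler API differs), with Runnable standing in for the real call object:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Scheduling pulled out behind an interface so implementations
// (FIFO, per-user fair, per-region) can be swapped.
interface RpcSchedulerSketch {
    void start();
    void stop();
    void dispatch(Runnable call);
}

// Simplest implementation: a single FIFO handler pool, like the current code.
class FifoScheduler implements RpcSchedulerSketch {
    private final ExecutorService pool;
    FifoScheduler(int handlers) { pool = Executors.newFixedThreadPool(handlers); }
    public void start() { /* threads are started lazily by the pool */ }
    public void stop() { pool.shutdown(); }
    public void dispatch(Runnable call) { pool.execute(call); }
}

public class SchedulerDemo {
    public static void main(String[] args) throws Exception {
        RpcSchedulerSketch s = new FifoScheduler(2);
        s.start();
        CountDownLatch done = new CountDownLatch(3);
        for (int i = 0; i < 3; i++) s.dispatch(done::countDown);
        done.await(); // wait until all dispatched calls have run
        s.stop();
        System.out.println("handled 3 calls");
    }
}
```

The thread-pool size concern from the description shows up here as the constructor argument: it is the part that is not yet behind the interface.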



[jira] [Updated] (HBASE-9482) Do not enforce secure Hadoop for secure HBase

2013-09-10 Thread Aditya Kishore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Kishore updated HBASE-9482:
--

Status: Open  (was: Patch Available)

 Do not enforce secure Hadoop for secure HBase
 -

 Key: HBASE-9482
 URL: https://issues.apache.org/jira/browse/HBASE-9482
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.94.11, 0.95.2
Reporter: Aditya Kishore
Assignee: Aditya Kishore
  Labels: security
 Fix For: 0.96.0

 Attachments: HBASE-9482-0.94.patch, HBASE-9482.patch, 
 HBASE-9482.patch, HBASE-9482.patch


 We should recommend, not enforce, secure Hadoop underneath as a requirement 
 to run secure HBase.
 A few of our customers have HBase clusters which expose only HBase services 
 outside the physical network; no other services (including ssh) are 
 accessible from outside such a cluster.
 However, they are forced to set up secure Hadoop and incur the penalty of 
 security overhead at the filesystem layer even if they do not need to.
 The following code tests for both secure HBase and secure Hadoop.
 {code:title=org.apache.hadoop.hbase.security.User|borderStyle=solid}
   /**
    * Returns whether or not secure authentication is enabled for HBase. Note that
    * HBase security requires HDFS security to provide any guarantees, so this
    * requires that both <code>hbase.security.authentication</code> and
    * <code>hadoop.security.authentication</code> are set to <code>kerberos</code>.
    */
   public static boolean isHBaseSecurityEnabled(Configuration conf) {
     return "kerberos".equalsIgnoreCase(conf.get(HBASE_SECURITY_CONF_KEY)) &&
         "kerberos".equalsIgnoreCase(
             conf.get(CommonConfigurationKeys.HADOOP_SECURITY_AUTHENTICATION));
   }
 {code}
 What is worse, if {{hadoop.security.authentication}} is not set to 
 {{kerberos}} (undocumented at http://hbase.apache.org/book/security.html), 
 all other configuration has no impact and HBase RPCs silently switch back to 
 unsecured mode.



[jira] [Updated] (HBASE-9482) Do not enforce secure Hadoop for secure HBase

2013-09-10 Thread Aditya Kishore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Kishore updated HBASE-9482:
--

Attachment: HBASE-9482.patch

 Do not enforce secure Hadoop for secure HBase
 -

 Key: HBASE-9482
 URL: https://issues.apache.org/jira/browse/HBASE-9482
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.95.2, 0.94.11
Reporter: Aditya Kishore
Assignee: Aditya Kishore
  Labels: security
 Fix For: 0.96.0

 Attachments: HBASE-9482-0.94.patch, HBASE-9482.patch, 
 HBASE-9482.patch, HBASE-9482.patch


 We should recommend, not enforce, secure Hadoop underneath as a requirement 
 to run secure HBase.
 A few of our customers have HBase clusters which expose only HBase services 
 outside the physical network; no other services (including ssh) are 
 accessible from outside such a cluster.
 However, they are forced to set up secure Hadoop and incur the penalty of 
 security overhead at the filesystem layer even if they do not need to.
 The following code tests for both secure HBase and secure Hadoop.
 {code:title=org.apache.hadoop.hbase.security.User|borderStyle=solid}
   /**
    * Returns whether or not secure authentication is enabled for HBase. Note that
    * HBase security requires HDFS security to provide any guarantees, so this
    * requires that both <code>hbase.security.authentication</code> and
    * <code>hadoop.security.authentication</code> are set to <code>kerberos</code>.
    */
   public static boolean isHBaseSecurityEnabled(Configuration conf) {
     return "kerberos".equalsIgnoreCase(conf.get(HBASE_SECURITY_CONF_KEY)) &&
         "kerberos".equalsIgnoreCase(
             conf.get(CommonConfigurationKeys.HADOOP_SECURITY_AUTHENTICATION));
   }
 {code}
 What is worse, if {{hadoop.security.authentication}} is not set to 
 {{kerberos}} (undocumented at http://hbase.apache.org/book/security.html), 
 all other configuration has no impact and HBase RPCs silently switch back to 
 unsecured mode.



[jira] [Updated] (HBASE-9482) Do not enforce secure Hadoop for secure HBase

2013-09-10 Thread Aditya Kishore (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Aditya Kishore updated HBASE-9482:
--

Release Note:   (was: Seems that trunk code moved ahead since I generated 
the patch. Resubmitting.)
  Status: Patch Available  (was: Open)

{{org.apache.hadoop.hbase.client.TestHCM.testConnection()}} creates an empty 
configuration (from an HBase perspective) and hence does not have a value set 
for {{hbase.security.authentication}}.

Resubmitting the patch.

Also removing the ill-placed comment from the release note section.

 Do not enforce secure Hadoop for secure HBase
 -

 Key: HBASE-9482
 URL: https://issues.apache.org/jira/browse/HBASE-9482
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.94.11, 0.95.2
Reporter: Aditya Kishore
Assignee: Aditya Kishore
  Labels: security
 Fix For: 0.96.0

 Attachments: HBASE-9482-0.94.patch, HBASE-9482.patch, 
 HBASE-9482.patch, HBASE-9482.patch


 We should recommend, not enforce, secure Hadoop underneath as a requirement 
 to run secure HBase.
 A few of our customers have HBase clusters which expose only HBase services 
 outside the physical network; no other services (including ssh) are 
 accessible from outside such a cluster.
 However, they are forced to set up secure Hadoop and incur the penalty of 
 security overhead at the filesystem layer even if they do not need to.
 The following code tests for both secure HBase and secure Hadoop.
 {code:title=org.apache.hadoop.hbase.security.User|borderStyle=solid}
   /**
    * Returns whether or not secure authentication is enabled for HBase. Note that
    * HBase security requires HDFS security to provide any guarantees, so this
    * requires that both <code>hbase.security.authentication</code> and
    * <code>hadoop.security.authentication</code> are set to <code>kerberos</code>.
    */
   public static boolean isHBaseSecurityEnabled(Configuration conf) {
     return "kerberos".equalsIgnoreCase(conf.get(HBASE_SECURITY_CONF_KEY)) &&
         "kerberos".equalsIgnoreCase(
             conf.get(CommonConfigurationKeys.HADOOP_SECURITY_AUTHENTICATION));
   }
 {code}
 What is worse, if {{hadoop.security.authentication}} is not set to 
 {{kerberos}} (undocumented at http://hbase.apache.org/book/security.html), 
 all other configuration has no impact and HBase RPCs silently switch back to 
 unsecured mode.



[jira] [Commented] (HBASE-9482) Do not enforce secure Hadoop for secure HBase

2013-09-10 Thread Aditya Kishore (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13762999#comment-13762999
 ] 

Aditya Kishore commented on HBASE-9482:
---

[~liochon] This change does not require any modification to the configuration 
of an existing secure HBase cluster. It just does away with the requirement 
that {{hadoop.security.authentication}} must also be set to {{kerberos}} for 
HBase to work securely, facilitating running secure HBase over an unsecured 
filesystem.

 Do not enforce secure Hadoop for secure HBase
 -

 Key: HBASE-9482
 URL: https://issues.apache.org/jira/browse/HBASE-9482
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.95.2, 0.94.11
Reporter: Aditya Kishore
Assignee: Aditya Kishore
  Labels: security
 Fix For: 0.96.0

 Attachments: HBASE-9482-0.94.patch, HBASE-9482.patch, 
 HBASE-9482.patch, HBASE-9482.patch


 We should recommend, not enforce, secure Hadoop underneath as a requirement 
 to run secure HBase.
 A few of our customers have HBase clusters which expose only HBase services 
 outside the physical network; no other services (including ssh) are 
 accessible from outside such a cluster.
 However, they are forced to set up secure Hadoop and incur the penalty of 
 security overhead at the filesystem layer even if they do not need to.
 The following code tests for both secure HBase and secure Hadoop.
 {code:title=org.apache.hadoop.hbase.security.User|borderStyle=solid}
   /**
    * Returns whether or not secure authentication is enabled for HBase. Note that
    * HBase security requires HDFS security to provide any guarantees, so this
    * requires that both <code>hbase.security.authentication</code> and
    * <code>hadoop.security.authentication</code> are set to <code>kerberos</code>.
    */
   public static boolean isHBaseSecurityEnabled(Configuration conf) {
     return "kerberos".equalsIgnoreCase(conf.get(HBASE_SECURITY_CONF_KEY)) &&
         "kerberos".equalsIgnoreCase(
             conf.get(CommonConfigurationKeys.HADOOP_SECURITY_AUTHENTICATION));
   }
 {code}
 What is worse, if {{hadoop.security.authentication}} is not set to 
 {{kerberos}} (undocumented at http://hbase.apache.org/book/security.html), 
 all other configuration has no impact and HBase RPCs silently switch back to 
 unsecured mode.



[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763014#comment-13763014
 ] 

Hudson commented on HBASE-8930:
---

SUCCESS: Integrated in hbase-0.96-hadoop2 #15 (See 
[https://builds.apache.org/job/hbase-0.96-hadoop2/15/])
HBASE-8930 REAPPLY with testfix (larsh: rev 1521355)
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ColumnTracker.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ExplicitColumnTracker.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
* 
/hbase/branches/0.96/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanWildcardColumnTracker.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestInvocationRecordFilter.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestExplicitColumnTracker.java
* 
/hbase/branches/0.96/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanWildcardColumnTracker.java


 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Fix For: 0.98.0, 0.94.12, 0.96.1

 Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
 0.94-HBASE-8930-rev2.patch, 0.94-Independent-Test-Execution.patch, 
 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 
 0.96-HBASE-8930-rev2.patch, 0.96-HBASE-8930-rev3.patch, 
 0.96-trunk-Independent-Test-Execution.patch, 8930-0.94.txt, HBASE-8930.patch, 
 HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
 HBASE-8930-rev4.patch, HBASE-8930-rev5.patch, HBASE-8930-rev6.patch


 1- Fill a row with some columns
 2- Get the row, requesting fewer columns than the full set - use a filter to 
 print kvs
 3- The filter prints columns that were not requested
 The filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL 
 and prints each KV's qualifier
 SUFFIX_0 = 0
 SUFFIX_1 = 1
 SUFFIX_4 = 4
 SUFFIX_6 = 6
 P = Persisted
 R = Requested
 E = Evaluated
 X = Returned
 | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 | ...
 |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      | ...
 |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      | ...
 |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      | ...
 |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
 {code:title=ExtraColumnTest.java|borderStyle=solid}
 @Test
 public void testFilter() throws Exception {
   Configuration config = HBaseConfiguration.create();
   config.set("hbase.zookeeper.quorum", "myZK");
   HTable hTable = new HTable(config, "testTable");
   byte[] cf = Bytes.toBytes("cf");
   byte[] row = Bytes.toBytes("row");
   byte[] col1 = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_1));
   byte[] col2 = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_1));
   byte[] col3 = new QualifierConverter().objectToByteArray(new Qualifier((short) 560, (byte) SUFFIX_1));
   byte[] col4 = new QualifierConverter().objectToByteArray(new Qualifier((short) 561, (byte) SUFFIX_1));
   byte[] col5 = new QualifierConverter().objectToByteArray(new Qualifier((short) 562, (byte) SUFFIX_1));
   byte[] col6 = new QualifierConverter().objectToByteArray(new Qualifier((short) 563, (byte) SUFFIX_1));
   byte[] col1g = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_6));
   byte[] col2g = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_6));
   byte[] col1v = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_4));
   byte[] col2v = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_4));
   byte[] col3v = new QualifierConverter().objectToByteArray(new Qualifier((short) 560, (byte) SUFFIX_4));
   byte[] col4v = new QualifierConverter().objectToByteArray(new Qualifier((short) 561, (byte) SUFFIX_4));
   byte[] col5v = new QualifierConverter().objectToByteArray(new Qualifier((short) 562, (byte) SUFFIX_4));
   byte[] col6v = new QualifierConverter().objectToByteArray(new Qualifier((short) 563, 

[jira] [Commented] (HBASE-9482) Do not enforce secure Hadoop for secure HBase

2013-09-10 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763017#comment-13763017
 ] 

Nicolas Liochon commented on HBASE-9482:


Ok, I read the code again and I agree. Still +1.

 Do not enforce secure Hadoop for secure HBase
 -

 Key: HBASE-9482
 URL: https://issues.apache.org/jira/browse/HBASE-9482
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.95.2, 0.94.11
Reporter: Aditya Kishore
Assignee: Aditya Kishore
  Labels: security
 Fix For: 0.96.0

 Attachments: HBASE-9482-0.94.patch, HBASE-9482.patch, 
 HBASE-9482.patch, HBASE-9482.patch


 We should recommend, not enforce, secure Hadoop underneath as a requirement 
 to run secure HBase.
 A few of our customers have HBase clusters which expose only HBase services 
 outside the physical network; no other services (including ssh) are 
 accessible from outside such a cluster.
 However, they are forced to set up secure Hadoop and incur the penalty of 
 security overhead at the filesystem layer even if they do not need to.
 The following code tests for both secure HBase and secure Hadoop.
 {code:title=org.apache.hadoop.hbase.security.User|borderStyle=solid}
   /**
    * Returns whether or not secure authentication is enabled for HBase. Note that
    * HBase security requires HDFS security to provide any guarantees, so this
    * requires that both <code>hbase.security.authentication</code> and
    * <code>hadoop.security.authentication</code> are set to <code>kerberos</code>.
    */
   public static boolean isHBaseSecurityEnabled(Configuration conf) {
     return "kerberos".equalsIgnoreCase(conf.get(HBASE_SECURITY_CONF_KEY)) &&
         "kerberos".equalsIgnoreCase(
             conf.get(CommonConfigurationKeys.HADOOP_SECURITY_AUTHENTICATION));
   }
 {code}
 What is worse, if {{hadoop.security.authentication}} is not set to 
 {{kerberos}} (undocumented at http://hbase.apache.org/book/security.html), 
 all other configuration has no impact and HBase RPCs silently switch back to 
 unsecured mode.



[jira] [Commented] (HBASE-8930) Filter evaluates KVs outside requested columns

2013-09-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8930?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763025#comment-13763025
 ] 

Hudson commented on HBASE-8930:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #720 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/720/])
HBASE-8930 REAPPLY with testfix (larsh: rev 1521354)
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ColumnTracker.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ExplicitColumnTracker.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanQueryMatcher.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/ScanWildcardColumnTracker.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/filter/TestInvocationRecordFilter.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestExplicitColumnTracker.java
* 
/hbase/trunk/hbase-server/src/test/java/org/apache/hadoop/hbase/regionserver/TestScanWildcardColumnTracker.java


 Filter evaluates KVs outside requested columns
 --

 Key: HBASE-8930
 URL: https://issues.apache.org/jira/browse/HBASE-8930
 Project: HBase
  Issue Type: Bug
  Components: Filters
Affects Versions: 0.94.7
Reporter: Federico Gaule
Assignee: Vasu Mariyala
Priority: Critical
  Labels: filters, hbase, keyvalue
 Fix For: 0.98.0, 0.94.12, 0.96.1

 Attachments: 0.94-HBASE-8930.patch, 0.94-HBASE-8930-rev1.patch, 
 0.94-HBASE-8930-rev2.patch, 0.94-Independent-Test-Execution.patch, 
 0.95-HBASE-8930.patch, 0.95-HBASE-8930-rev1.patch, 
 0.96-HBASE-8930-rev2.patch, 0.96-HBASE-8930-rev3.patch, 
 0.96-trunk-Independent-Test-Execution.patch, 8930-0.94.txt, HBASE-8930.patch, 
 HBASE-8930-rev1.patch, HBASE-8930-rev2.patch, HBASE-8930-rev3.patch, 
 HBASE-8930-rev4.patch, HBASE-8930-rev5.patch, HBASE-8930-rev6.patch


 1- Fill row with some columns
 2- Get row with some columns less than universe - Use filter to print kvs
 3- Filter prints not requested columns
 Filter (AllwaysNextColFilter) always returns ReturnCode.INCLUDE_AND_NEXT_COL and prints the KV's qualifier
 SUFFIX_0 = 0
 SUFFIX_1 = 1
 SUFFIX_4 = 4
 SUFFIX_6 = 6
 P = Persisted
 R = Requested
 E = Evaluated
 X = Returned
 | 5580 | 5581 | 5584 | 5586 | 5590 | 5591 | 5594 | 5596 | 5600 | 5601 | 5604 | 5606 | ...
 |      |  P   |  P   |      |      |  P   |  P   |      |      |  P   |  P   |      | ...
 |      |  R   |  R   |  R   |      |  R   |  R   |  R   |      |      |      |      | ...
 |      |  E   |  E   |      |      |  E   |  E   |      |      | {color:red}E{color} |      |      | ...
 |      |  X   |  X   |      |      |  X   |  X   |      |      |      |      |      |
 {code:title=ExtraColumnTest.java|borderStyle=solid}
 @Test
 public void testFilter() throws Exception {
   Configuration config = HBaseConfiguration.create();
   config.set("hbase.zookeeper.quorum", "myZK");
   HTable hTable = new HTable(config, "testTable");
   byte[] cf = Bytes.toBytes("cf");
   byte[] row = Bytes.toBytes("row");
   byte[] col1 = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_1));
   byte[] col2 = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_1));
   byte[] col3 = new QualifierConverter().objectToByteArray(new Qualifier((short) 560, (byte) SUFFIX_1));
   byte[] col4 = new QualifierConverter().objectToByteArray(new Qualifier((short) 561, (byte) SUFFIX_1));
   byte[] col5 = new QualifierConverter().objectToByteArray(new Qualifier((short) 562, (byte) SUFFIX_1));
   byte[] col6 = new QualifierConverter().objectToByteArray(new Qualifier((short) 563, (byte) SUFFIX_1));
   byte[] col1g = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_6));
   byte[] col2g = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_6));
   byte[] col1v = new QualifierConverter().objectToByteArray(new Qualifier((short) 558, (byte) SUFFIX_4));
   byte[] col2v = new QualifierConverter().objectToByteArray(new Qualifier((short) 559, (byte) SUFFIX_4));
   byte[] col3v = new QualifierConverter().objectToByteArray(new Qualifier((short) 560, (byte) SUFFIX_4));
   byte[] col4v = new QualifierConverter().objectToByteArray(new Qualifier((short) 561, (byte) SUFFIX_4));
   byte[] col5v = new QualifierConverter().objectToByteArray(new Qualifier((short) 562, (byte) SUFFIX_4));
   byte[] col6v = new QualifierConverter().objectToByteArray(new Qualifier((short) 563, (byte) SUFFIX_4));
 // 

[jira] [Commented] (HBASE-9476) Yet more master log cleanup

2013-09-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9476?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763024#comment-13763024
 ] 

Hudson commented on HBASE-9476:
---

SUCCESS: Integrated in HBase-TRUNK-on-Hadoop-2.0.0 #720 (See 
[https://builds.apache.org/job/HBase-TRUNK-on-Hadoop-2.0.0/720/])
HBASE-9476 Yet more master log cleanup (stack: rev 1521315)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/client/ClientScanner.java
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKAssign.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/AssignmentManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/RegionStates.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/ServerManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/SplitLogManager.java
* 
/hbase/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/master/handler/OpenedRegionHandler.java


 Yet more master log cleanup
 ---

 Key: HBASE-9476
 URL: https://issues.apache.org/jira/browse/HBASE-9476
 Project: HBase
  Issue Type: Bug
  Components: Usability
Reporter: stack
Assignee: stack
 Fix For: 0.98.0, 0.96.0

 Attachments: edits.txt


 Even more cleanup, tightening, of log output (was staring at some over the 
 last day..)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9482) Do not enforce secure Hadoop for secure HBase

2013-09-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763044#comment-13763044
 ] 

Hadoop QA commented on HBASE-9482:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12602327/HBASE-9482.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.regionserver.TestAtomicOperation

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7116//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7116//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7116//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7116//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7116//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7116//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7116//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7116//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7116//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7116//console

This message is automatically generated.

 Do not enforce secure Hadoop for secure HBase
 -

 Key: HBASE-9482
 URL: https://issues.apache.org/jira/browse/HBASE-9482
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.95.2, 0.94.11
Reporter: Aditya Kishore
Assignee: Aditya Kishore
  Labels: security
 Fix For: 0.96.0

 Attachments: HBASE-9482-0.94.patch, HBASE-9482.patch, 
 HBASE-9482.patch, HBASE-9482.patch


 We should recommend, and not enforce, secure Hadoop underneath as a requirement 
 to run secure HBase.
 A few of our customers have HBase clusters which expose only HBase services 
 outside the physical network; no other services (including ssh) are 
 accessible from outside such a cluster.
 However, they are forced to set up secure Hadoop and incur the penalty of the 
 security overhead at the filesystem layer even if they do not need it.
 The following code tests for both secure HBase and secure Hadoop.
 {code:title=org.apache.hadoop.hbase.security.User|borderStyle=solid}
   /**
    * Returns whether or not secure authentication is enabled for HBase. Note that
    * HBase security requires HDFS security to provide any guarantees, so this requires that
    * both <code>hbase.security.authentication</code> and <code>hadoop.security.authentication</code>
    * are set to <code>kerberos</code>.
    */
   public static boolean isHBaseSecurityEnabled(Configuration conf) {
     return "kerberos".equalsIgnoreCase(conf.get(HBASE_SECURITY_CONF_KEY)) &&
         "kerberos".equalsIgnoreCase(
             conf.get(CommonConfigurationKeys.HADOOP_SECURITY_AUTHENTICATION));
   }
 {code}
 What is worse is that if {{hadoop.security.authentication}} is 

[jira] [Commented] (HBASE-9338) Test Big Linked List fails on Hadoop 2.1.0

2013-09-10 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763055#comment-13763055
 ] 

Jimmy Xiang commented on HBASE-9338:


I think I know the root cause now: a region is assigned outside of SSH. A region 
server is dead and, before SSH handles it and completes the log splitting, a CM 
action assigns a region from the dead server. That's why it doesn't happen all 
the time. Fortunately, that's not a scenario that normally happens on a real 
cluster.

Let me check if we can prevent the master from assigning a dead server's region 
outside of SSH.

 Test Big Linked List fails on Hadoop 2.1.0
 --

 Key: HBASE-9338
 URL: https://issues.apache.org/jira/browse/HBASE-9338
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Blocker
 Fix For: 0.96.0

 Attachments: HBASE-9338-TESTING-2.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9451) Meta remains unassigned when the meta server crashes with the ClusterStatusListener set

2013-09-10 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9451?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9451:
---

   Resolution: Fixed
Fix Version/s: 0.96.0
   0.98.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed, thanks for the review Jimmy.

 Meta remains unassigned when the meta server crashes with the 
 ClusterStatusListener set
 ---

 Key: HBASE-9451
 URL: https://issues.apache.org/jira/browse/HBASE-9451
 Project: HBase
  Issue Type: Bug
Reporter: Devaraj Das
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.96.0

 Attachments: 9451.v1.patch


 While running the tests described in HBASE-9338, I ran into this problem. The 
 hbase.status.listener.class was set to 
 org.apache.hadoop.hbase.client.ClusterStatusListener$MultiCastListener.
 1. I had the meta server coming down
 2. The metaSSH got triggered. The call chain:
2.1 verifyAndAssignMetaWithRetries
2.2 verifyMetaRegionLocation
2.3 waitForMetaServerConnection
2.4 getMetaServerConnection
2.5 getCachedConnection
2.6 HConnectionManager.getAdmin(serverName, false)
2.7 isDeadServer(serverName) - This is hardcoded to return 'false' when 
 the clusterStatusListener field is null. If clusterStatusListener is not null 
 (as in my test), it can return true in certain cases (and in this case it 
 indeed should, since the server is down). I am trying to 
 understand why it's hardcoded to 'false' in the former case.
 3. When isDeadServer returns true, the method 
 HConnectionManager.getAdmin(ServerName, boolean) throws 
 RegionServerStoppedException.
 4. Finally, after the retries are over verifyAndAssignMetaWithRetries gives 
 up and the master aborts.
 The methods in the above call chain don't handle 
 RegionServerStoppedException. Maybe something to look at... 
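The short-circuit in step 2.7 and its effect on step 3 can be sketched as follows. This is a simplified illustration, not the real HConnectionManager code: names and types are abbreviated, and a plain IllegalStateException stands in for RegionServerStoppedException.

```java
/**
 * Sketch of the behavior described in steps 2.6-2.7: with no
 * ClusterStatusListener the dead-server check is short-circuited to false,
 * so getAdmin never rejects; with a listener it can report the server dead
 * and surface the exception up the retry chain.
 */
public class DeadServerSketch {
    // null stands in for "no clusterStatusListener configured"
    static Boolean listenerSaysDead = null;

    static boolean isDeadServer(String serverName) {
        if (listenerSaysDead == null) {
            return false; // hardcoded when clusterStatusListener is null
        }
        return listenerSaysDead;
    }

    static String getAdmin(String serverName) {
        if (isDeadServer(serverName)) {
            // stands in for RegionServerStoppedException
            throw new IllegalStateException(serverName + " is dead");
        }
        return "admin-for-" + serverName;
    }

    public static void main(String[] args) {
        System.out.println(getAdmin("rs1")); // no listener: always succeeds
        listenerSaysDead = true;             // listener reports server dead
        try {
            getAdmin("rs1");
        } catch (IllegalStateException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

The retry loop in verifyAndAssignMetaWithRetries keeps hitting the "rejected" path once the listener marks the server dead, which is why the retries are exhausted and the master aborts.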

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9101) Addendum to pluggable RpcScheduler

2013-09-10 Thread Chao Shi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763087#comment-13763087
 ] 

Chao Shi commented on HBASE-9101:
-

oops.. sorry for not updating this ticket for a while. (my mailbox doesn't 
receive JIRA updates for unknown reasons). 

bq.
Do you have to pass in an HRegionServer? ... Could it be an instance of Server? 
Or RegionServerServices

The reason is that the RpcScheduler constructor needs QosFunction (which is a 
member of HRegionServer). I see this is not clean, but I don't know if I can add 
it to the RegionServerServices interface. Any ideas?

bq.
Why have this constant ... up in HConstants and not in HRegionServer or in the 
scheduler Interface itself

Fixed.

bq.
We really should add a comment in HConstants that generally constants shouldn't 
go there, unless referenced by a lot of different things.

Fixed.


 Addendum to pluggable RpcScheduler
 --

 Key: HBASE-9101
 URL: https://issues.apache.org/jira/browse/HBASE-9101
 Project: HBase
  Issue Type: Improvement
  Components: IPC/RPC
Reporter: Chao Shi
Assignee: Chao Shi
 Fix For: 0.98.0

 Attachments: hbase-9101.patch, hbase-9101-v2.patch


 This patch fixes the review comments from [~stack] and a small fix:
 - Make RpcScheduler fully pluggable. One can write his/her own implementation, 
 add it to the classpath, and specify it via the config property 
 hbase.region.server.rpc.scheduler.factory.class.
 - Add unit tests and fix that RpcScheduler.stop is not called (discovered by 
 tests)
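The pluggable-factory wiring the patch describes typically looks like the sketch below: the server reads a class name from configuration and instantiates it reflectively. The interface shape and the fallback class here are assumptions for illustration, not the actual HBase API; only the config key name comes from the description above.

```java
import java.util.HashMap;
import java.util.Map;

/**
 * Sketch of a pluggable scheduler factory: the implementation class is
 * named in configuration and created via reflection, so users can drop
 * their own scheduler on the classpath without patching the server.
 */
public class SchedulerFactorySketch {
    interface RpcScheduler { void start(); void stop(); }

    // Stand-in default implementation (hypothetical).
    public static class FifoScheduler implements RpcScheduler {
        public void start() {}
        public void stop() {}
    }

    static RpcScheduler create(Map<String, String> conf) throws Exception {
        // Key mirrors hbase.region.server.rpc.scheduler.factory.class
        String cls = conf.getOrDefault(
            "hbase.region.server.rpc.scheduler.factory.class",
            FifoScheduler.class.getName());
        return (RpcScheduler) Class.forName(cls)
            .getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        Map<String, String> conf = new HashMap<>();
        conf.put("hbase.region.server.rpc.scheduler.factory.class",
                 FifoScheduler.class.getName());
        RpcScheduler s = create(conf);
        s.start();
        s.stop(); // the unit tests mentioned above verify stop() is called
        System.out.println(s.getClass().getSimpleName());
    }
}
```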

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9480) Regions are unexpectedly made offline in certain failure conditions

2013-09-10 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HBASE-9480:
---

Attachment: 9480-1.txt

This is the patch that I have been working with.

 Regions are unexpectedly made offline in certain failure conditions
 ---

 Key: HBASE-9480
 URL: https://issues.apache.org/jira/browse/HBASE-9480
 Project: HBase
  Issue Type: Bug
Reporter: Devaraj Das
Priority: Critical
 Attachments: 9480-1.txt


 Came across this issue (HBASE-9338 test):
 1. Client issues a request to move a region from ServerA to ServerB
 2. ServerA is compacting that region and doesn't close region immediately. In 
 fact, it takes a while to complete the request.
 3. The master in the meantime, sends another close request.
 4. ServerA sends it a NotServingRegionException
 5. Master handles the exception, deletes the znode, and invokes regionOffline 
 for the said region.
 6. ServerA fails to operate on ZK in the CloseRegionHandler since the node is 
 deleted.
 The region is permanently offline.
 There are potentially other situations where, when a RegionServer is offline 
 and the client asks for a region to be moved off that server, the master makes 
 the region offline.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9480) Regions are unexpectedly made offline in certain failure conditions

2013-09-10 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HBASE-9480:
---

Fix Version/s: 0.96.0

 Regions are unexpectedly made offline in certain failure conditions
 ---

 Key: HBASE-9480
 URL: https://issues.apache.org/jira/browse/HBASE-9480
 Project: HBase
  Issue Type: Bug
Reporter: Devaraj Das
Priority: Critical
 Fix For: 0.96.0

 Attachments: 9480-1.txt


 Came across this issue (HBASE-9338 test):
 1. Client issues a request to move a region from ServerA to ServerB
 2. ServerA is compacting that region and doesn't close region immediately. In 
 fact, it takes a while to complete the request.
 3. The master in the meantime, sends another close request.
 4. ServerA sends it a NotServingRegionException
 5. Master handles the exception, deletes the znode, and invokes regionOffline 
 for the said region.
 6. ServerA fails to operate on ZK in the CloseRegionHandler since the node is 
 deleted.
 The region is permanently offline.
 There are potentially other situations where, when a RegionServer is offline 
 and the client asks for a region to be moved off that server, the master makes 
 the region offline.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9101) Addendum to pluggable RpcScheduler

2013-09-10 Thread Chao Shi (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Shi updated HBASE-9101:


Attachment: hbase-9101-v3.patch

 Addendum to pluggable RpcScheduler
 --

 Key: HBASE-9101
 URL: https://issues.apache.org/jira/browse/HBASE-9101
 Project: HBase
  Issue Type: Improvement
  Components: IPC/RPC
Reporter: Chao Shi
Assignee: Chao Shi
 Fix For: 0.98.0

 Attachments: hbase-9101.patch, hbase-9101-v2.patch, 
 hbase-9101-v3.patch


 This patch fixes the review comments from [~stack] and a small fix:
 - Make RpcScheduler fully pluggable. One can write his/her own implementation, 
 add it to the classpath, and specify it via the config property 
 hbase.region.server.rpc.scheduler.factory.class.
 - Add unit tests and fix that RpcScheduler.stop is not called (discovered by 
 tests)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9338) Test Big Linked List fails on Hadoop 2.1.0

2013-09-10 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763113#comment-13763113
 ] 

Devaraj Das commented on HBASE-9338:


With the patches from HBASE-9481, HBASE-9480, HBASE-9456, and the patch on this 
jira, I was finally able to get a successful run. I used the following command:
hbase org.apache.hadoop.hbase.test.IntegrationTestBigLinkedList 
-Dhbase.client.retries.number=3000 -monkey verySlow Loop 1 12 2500 
IntegrationTestBigLinkedList 12 

Note that I increased the retries from a default value of 30 to 3000. This is 
to make sure that the MR jobs ride over the CM chaos.

Now I am running the same test with the slowDeterministic CM.

Yeah, maybe the data loss was caused by the region losses, which are at least 
partially addressed by the HBASE-9481 and HBASE-9480 patches.

 Test Big Linked List fails on Hadoop 2.1.0
 --

 Key: HBASE-9338
 URL: https://issues.apache.org/jira/browse/HBASE-9338
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Blocker
 Fix For: 0.96.0

 Attachments: HBASE-9338-TESTING-2.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (HBASE-9481) Servershutdown handler get aborted with ConcurrentModificationException

2013-09-10 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763095#comment-13763095
 ] 

Jimmy Xiang edited comment on HBASE-9481 at 9/10/13 3:02 PM:
-

[~stack], the patch looks good to me because all access to 
regionsInTransition (and the other region states member variables) is 
synchronized. The gap is widened, but access to in-memory state is also 
delayed by the synchronization. Another way is to use an Iterator, which 
needs a little bit of refactoring. Either way is fine with me.

  was (Author: jxiang):
[~saint@gmail.com], the patch looks good to me because all access 
to regionsInTransition (and the other region states member variables) is 
synchronized. The gap is widened, but access to in-memory state is also 
delayed by the synchronization. Another way is to use an Iterator, which 
needs a little bit of refactoring. Either way is fine with me.
  
 Servershutdown handler get aborted with ConcurrentModificationException
 ---

 Key: HBASE-9481
 URL: https://issues.apache.org/jira/browse/HBASE-9481
 Project: HBase
  Issue Type: Bug
  Components: MTTR
Affects Versions: 0.96.0
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
 Attachments: hbase-9481.patch


 In integration tests, we found SSH got aborted with the following stack trace: 
 {code}
 13/09/07 18:10:00 ERROR executor.EventHandler: Caught throwable while 
 processing event M_SERVER_SHUTDOWN
 java.util.ConcurrentModificationException
 at java.util.HashMap$HashIterator.nextEntry(HashMap.java:793)
 at java.util.HashMap$ValueIterator.next(HashMap.java:822)
 at 
 org.apache.hadoop.hbase.master.RegionStates.serverOffline(RegionStates.java:378)
 at 
 org.apache.hadoop.hbase.master.AssignmentManager.processServerShutdown(AssignmentManager.java:3143)
 at 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:207)
 at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:131)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Comment Edited] (HBASE-9481) Servershutdown handler get aborted with ConcurrentModificationException

2013-09-10 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763095#comment-13763095
 ] 

Jimmy Xiang edited comment on HBASE-9481 at 9/10/13 3:02 PM:
-

[~saint@gmail.com], the patch looks good to me because all access to 
regionsInTransition (and the other region states member variables) is 
synchronized. The gap is widened, but access to in-memory state is also 
delayed by the synchronization. Another way is to use an Iterator, which 
needs a little bit of refactoring. Either way is fine with me.

  was (Author: jxiang):
@stack, the patch looks good to me because all access to 
regionsInTransition (and the other region states member variables) is 
synchronized. The gap is widened, but access to in-memory state is also 
delayed by the synchronization. Another way is to use an Iterator, which 
needs a little bit of refactoring. Either way is fine with me.
  
 Servershutdown handler get aborted with ConcurrentModificationException
 ---

 Key: HBASE-9481
 URL: https://issues.apache.org/jira/browse/HBASE-9481
 Project: HBase
  Issue Type: Bug
  Components: MTTR
Affects Versions: 0.96.0
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
 Attachments: hbase-9481.patch


 In integration tests, we found SSH got aborted with the following stack trace: 
 {code}
 13/09/07 18:10:00 ERROR executor.EventHandler: Caught throwable while 
 processing event M_SERVER_SHUTDOWN
 java.util.ConcurrentModificationException
 at java.util.HashMap$HashIterator.nextEntry(HashMap.java:793)
 at java.util.HashMap$ValueIterator.next(HashMap.java:822)
 at 
 org.apache.hadoop.hbase.master.RegionStates.serverOffline(RegionStates.java:378)
 at 
 org.apache.hadoop.hbase.master.AssignmentManager.processServerShutdown(AssignmentManager.java:3143)
 at 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:207)
 at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:131)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9481) Servershutdown handler get aborted with ConcurrentModificationException

2013-09-10 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763095#comment-13763095
 ] 

Jimmy Xiang commented on HBASE-9481:


@stack, the patch looks good to me because all access to 
regionsInTransition (and the other region states member variables) is 
synchronized. The gap is widened, but access to in-memory state is also 
delayed by the synchronization. Another way is to use an Iterator, which 
needs a little bit of refactoring. Either way is fine with me.
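The two alternatives in the comment above (iterate over a snapshot vs. go through an Iterator) can be sketched generically. This is an illustration of the failure mode and fixes, not the RegionStates code: a plain HashMap of region-to-server stands in for the real state maps.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

/**
 * Illustrates the ConcurrentModificationException fixes discussed: either
 * iterate a snapshot taken under synchronization, or mutate only through
 * the Iterator. Removing from the map directly while a for-each loop walks
 * its values is what throws the CME seen in the SSH stack trace.
 */
public class CmeSketch {
    public static void main(String[] args) {
        Map<String, String> regions = new HashMap<>();
        regions.put("r1", "serverA");
        regions.put("r2", "serverA");
        regions.put("r3", "serverB");

        // Fix 1: iterate over a copy; the live map can then be mutated
        // freely inside the loop without invalidating the iteration.
        for (String server : new ArrayList<>(regions.values())) {
            if ("serverA".equals(server)) { /* safe to touch regions here */ }
        }

        // Fix 2: remove through the Iterator, which keeps it consistent.
        Iterator<Map.Entry<String, String>> it = regions.entrySet().iterator();
        while (it.hasNext()) {
            if ("serverA".equals(it.next().getValue())) {
                it.remove(); // offline the dead server's regions
            }
        }
        System.out.println(regions.size()); // 1 region left, on serverB
    }
}
```

The snapshot approach widens the window during which the copy can go stale, which is the trade-off the comment refers to.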

 Servershutdown handler get aborted with ConcurrentModificationException
 ---

 Key: HBASE-9481
 URL: https://issues.apache.org/jira/browse/HBASE-9481
 Project: HBase
  Issue Type: Bug
  Components: MTTR
Affects Versions: 0.96.0
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
 Attachments: hbase-9481.patch


 In integration tests, we found SSH got aborted with the following stack trace: 
 {code}
 13/09/07 18:10:00 ERROR executor.EventHandler: Caught throwable while 
 processing event M_SERVER_SHUTDOWN
 java.util.ConcurrentModificationException
 at java.util.HashMap$HashIterator.nextEntry(HashMap.java:793)
 at java.util.HashMap$ValueIterator.next(HashMap.java:822)
 at 
 org.apache.hadoop.hbase.master.RegionStates.serverOffline(RegionStates.java:378)
 at 
 org.apache.hadoop.hbase.master.AssignmentManager.processServerShutdown(AssignmentManager.java:3143)
 at 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:207)
 at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:131)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9480) Regions are unexpectedly made offline in certain failure conditions

2013-09-10 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763120#comment-13763120
 ] 

Jimmy Xiang commented on HBASE-9480:


I think a proper fix should be aborting the region server at step 6.

 Regions are unexpectedly made offline in certain failure conditions
 ---

 Key: HBASE-9480
 URL: https://issues.apache.org/jira/browse/HBASE-9480
 Project: HBase
  Issue Type: Bug
Reporter: Devaraj Das
Priority: Critical
 Fix For: 0.96.0

 Attachments: 9480-1.txt


 Came across this issue (HBASE-9338 test):
 1. Client issues a request to move a region from ServerA to ServerB
 2. ServerA is compacting that region and doesn't close region immediately. In 
 fact, it takes a while to complete the request.
 3. The master in the meantime, sends another close request.
 4. ServerA sends it a NotServingRegionException
 5. Master handles the exception, deletes the znode, and invokes regionOffline 
 for the said region.
 6. ServerA fails to operate on ZK in the CloseRegionHandler since the node is 
 deleted.
 The region is permanently offline.
 There are potentially other situations where, when a RegionServer is offline 
 and the client asks for a region to be moved off that server, the master makes 
 the region offline.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9477) Add deprecation compat shim for Result#raw and Result#list for 0.96

2013-09-10 Thread Brock Noland (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763152#comment-13763152
 ] 

Brock Noland commented on HBASE-9477:
-

As a Hadoop user, I do look at the annotations to see what I am using, and I know 
that if something is marked Unstable it's up to me to maintain against that API. 
I'd like the same thing in HBase, along with Public/Stable requiring one major 
release of deprecation before being changed or removed incompatibly. Ideally 
non-public APIs would have .internal. in their package names as well, but that 
would require too many moves to be practical.

 Add deprecation compat shim for Result#raw and Result#list for 0.96
 ---

 Key: HBASE-9477
 URL: https://issues.apache.org/jira/browse/HBASE-9477
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.95.2
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Blocker
 Fix For: 0.98.0, 0.96.0

 Attachments: hbase-9477.patch


 Discussion in HBASE-9359 brought up that applications commonly use the 
 KeyValue[] Result#raw (and similarly Result#list).  Let's rename the 0.96 
 versions to something like #listCells and #rawCells and revert #raw and #list 
 to their old signatures to ease upgrade deprecation issues. 
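The shim the description asks for can be sketched roughly as follows. This is a minimal, hypothetical model — the `Cell` and `Result` types here are stand-ins, not the real `org.apache.hadoop.hbase.client` classes: the new 0.96 names carry the implementation, and the pre-0.96 names are restored as deprecated delegates so existing callers keep compiling.

```java
import java.util.Arrays;
import java.util.List;

public class ResultShimSketch {
    // Hypothetical stand-in for a cell; the real type would be KeyValue/Cell.
    public static class Cell {
        public final String value;
        public Cell(String value) { this.value = value; }
    }

    // Hypothetical stand-in for Result, showing only the shim pattern.
    public static class Result {
        private final Cell[] cells;
        public Result(Cell... cells) { this.cells = cells; }

        // New 0.96-style names carry the actual implementation.
        public Cell[] rawCells() { return cells; }
        public List<Cell> listCells() { return Arrays.asList(cells); }

        // Old names kept as deprecated delegates for compatibility.
        @Deprecated
        public Cell[] raw() { return rawCells(); }
        @Deprecated
        public List<Cell> list() { return listCells(); }
    }
}
```

Callers written against the old signatures compile unchanged (with a deprecation warning), while new code migrates to the `*Cells` names.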

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9101) Addendum to pluggable RpcScheduler

2013-09-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9101?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763156#comment-13763156
 ] 

Hadoop QA commented on HBASE-9101:
--

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12602345/hbase-9101-v3.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:green}+1 hadoop2.0{color}.  The patch compiles against the hadoop 
2.0 profile.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7117//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7117//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7117//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7117//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7117//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7117//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7117//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7117//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7117//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7117//console

This message is automatically generated.

 Addendum to pluggable RpcScheduler
 --

 Key: HBASE-9101
 URL: https://issues.apache.org/jira/browse/HBASE-9101
 Project: HBase
  Issue Type: Improvement
  Components: IPC/RPC
Reporter: Chao Shi
Assignee: Chao Shi
 Fix For: 0.98.0

 Attachments: hbase-9101.patch, hbase-9101-v2.patch, 
 hbase-9101-v3.patch


 This patch fixes the review comments from [~stack] and a small fix:
 - Make RpcScheduler fully pluggable. One can write his/her own implementation 
 and add it to classpath and specify it by config 
 hbase.region.server.rpc.scheduler.factory.class.
 - Add unit tests and fix that RpcScheduler.stop is not called (discovered by 
 tests)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-8884) Pluggable RpcScheduler

2013-09-10 Thread Chao Shi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-8884?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763140#comment-13763140
 ] 

Chao Shi commented on HBASE-8884:
-

I haven't thought the interface through clearly. Your idea sounds similar to using 
a per-request memory pool in the old C days. I will try to do some refactoring as 
you suggested (callable + remove thread local).

 Pluggable RpcScheduler
 --

 Key: HBASE-8884
 URL: https://issues.apache.org/jira/browse/HBASE-8884
 Project: HBase
  Issue Type: Improvement
  Components: IPC/RPC
Reporter: Chao Shi
Assignee: Chao Shi
 Fix For: 0.98.0

 Attachments: hbase-8884.patch, hbase-8884-v2.patch, 
 hbase-8884-v3.patch, hbase-8884-v4.patch, hbase-8884-v5.patch, 
 hbase-8884-v6.patch, hbase-8884-v7.patch, hbase-8884-v8.patch


 Today, the RPC scheduling mechanism is pretty simple: it executes requests in 
 isolated thread-pools based on their priority. In the current implementation, 
 all normal get/put requests use the same pool. We'd like to add some 
 per-user or per-region level isolation, so that a misbehaving user/region cannot 
 easily saturate the thread-pool and cause a DoS for others. The idea is 
 similar to the FairScheduler in MR. The current scheduling code is not standalone 
 and is mixed with other code (Connection#processRequest). This issue is the first 
 step: extract it into an interface, so that people are free to write and test 
 their own implementations.
 This patch doesn't make it completely pluggable yet, as some parameters are 
 passed via the constructor. This is because HMaster and HRegionServer both use 
 RpcServer and they have different thread-pool size configs. Let me know if you 
 have a solution to this.
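The extraction described above can be sketched as a small interface plus a trivial single-pool implementation. This is a hypothetical shape for illustration, not the actual HBASE-8884 API; the interface and class names are invented.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class RpcSchedulerSketch {
    // Hypothetical pluggable-scheduler interface: the RPC server hands every
    // call to a scheduler instead of owning the priority thread-pools itself.
    public interface RpcScheduler {
        void start();
        void stop();
        void dispatch(Runnable callTask);
    }

    // Trivial implementation: one shared pool, i.e. today's behavior for
    // normal get/put requests. A per-user/per-region scheduler would choose a
    // pool (or queue) based on the request's attributes in dispatch().
    public static class SimpleRpcScheduler implements RpcScheduler {
        private final ExecutorService pool = Executors.newFixedThreadPool(4);

        public void start() { }

        public void stop() {
            // Drain and shut down the pool; the issue notes stop() must
            // actually be called, which the unit tests verify.
            pool.shutdown();
            try {
                pool.awaitTermination(5, TimeUnit.SECONDS);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }

        public void dispatch(Runnable callTask) { pool.execute(callTask); }
    }
}
```

A factory class named in configuration (the issue mentions hbase.region.server.rpc.scheduler.factory.class) would then instantiate whichever implementation is on the classpath.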

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9481) ServerShutdownHandler gets aborted with ConcurrentModificationException

2013-09-10 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763162#comment-13763162
 ] 

stack commented on HBASE-9481:
--

Thanks [~jxiang]

+1, because the change is inside the synchronized block.

 ServerShutdownHandler gets aborted with ConcurrentModificationException
 ---

 Key: HBASE-9481
 URL: https://issues.apache.org/jira/browse/HBASE-9481
 Project: HBase
  Issue Type: Bug
  Components: MTTR
Affects Versions: 0.96.0
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
 Attachments: hbase-9481.patch


 In integration tests, we found SSH got aborted with following stack trace:
 {code}
 13/09/07 18:10:00 ERROR executor.EventHandler: Caught throwable while 
 processing event M_SERVER_SHUTDOWN
 java.util.ConcurrentModificationException
 at java.util.HashMap$HashIterator.nextEntry(HashMap.java:793)
 at java.util.HashMap$ValueIterator.next(HashMap.java:822)
 at 
 org.apache.hadoop.hbase.master.RegionStates.serverOffline(RegionStates.java:378)
 at 
 org.apache.hadoop.hbase.master.AssignmentManager.processServerShutdown(AssignmentManager.java:3143)
 at 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:207)
 at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:131)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 {code}
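The failure mode in the stack trace can be reproduced outside HBase in a few lines. This is a self-contained illustration, not the RegionStates code: structurally modifying a HashMap while iterating its values throws ConcurrentModificationException, which is why the fix keeps the mutation inside the same synchronized block (or iterates a snapshot).

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class CmeDemo {
    // Removing an entry from the map while iterating its values() view is a
    // structural modification; the fail-fast iterator detects it and throws
    // ConcurrentModificationException, as seen in RegionStates.serverOffline.
    static boolean unsafeOffline(Map<String, String> regions) {
        try {
            for (String state : regions.values()) {
                if ("OPEN".equals(state)) {
                    regions.remove("r1"); // structural change mid-iteration
                }
            }
            return true;
        } catch (ConcurrentModificationException e) {
            return false; // the handler would have aborted here
        }
    }

    // Fix pattern: iterate a snapshot of the keys (or guard iteration and all
    // mutators with one lock), so the live map is never walked while changing.
    static boolean safeOffline(Map<String, String> regions) {
        for (String key : new ArrayList<>(regions.keySet())) {
            if ("OPEN".equals(regions.get(key))) {
                regions.remove(key);
            }
        }
        return true;
    }

    public static void main(String[] args) {
        Map<String, String> m = new HashMap<>();
        m.put("r1", "OPEN");
        m.put("r2", "OPEN");
        System.out.println(unsafeOffline(new HashMap<>(m))); // false: CME thrown
        System.out.println(safeOffline(m));                  // true: snapshot iteration
    }
}
```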

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9485) TableOutputCommitter should implement recovery if we don't want jobs to start from 0 on RM restart

2013-09-10 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-9485:
--

Attachment: (was: 9485-v1.txt)

 TableOutputCommitter should implement recovery if we don't want jobs to start 
 from 0 on RM restart
 --

 Key: HBASE-9485
 URL: https://issues.apache.org/jira/browse/HBASE-9485
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu
Assignee: Ted Yu

 HBase extends OutputCommitter, which turns recovery off, meaning all completed 
 maps are lost on RM restart and the job starts from scratch. FileOutputCommitter 
 implements recovery, so we should look at it to see what is potentially 
 needed for recovery.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9463) Fix comments around alter tables

2013-09-10 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9463?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9463:
---

  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

 Fix comments around alter tables
 

 Key: HBASE-9463
 URL: https://issues.apache.org/jira/browse/HBASE-9463
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Assignee: Nicolas Liochon
Priority: Minor
 Fix For: 0.98.0, 0.96.0

 Attachments: 9463.v1.patch


 Some are outdated.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9377) Backport HBASE- 9208 ReplicationLogCleaner slow at large scale

2013-09-10 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763169#comment-13763169
 ] 

Dave Latham commented on HBASE-9377:


Didn't notice the InterfaceAudience.Private. With that, I agree with #2.

For your modified patch, it means that not all included log cleaners extend 
BaseLogCleanerDelegate. If you like that better than supplying a default 
implementation for BaseLogCleanerDelegate.isLogDeletable, it doesn't make much 
difference to me.
+0

It does look like the patch is missing BaseFileCleanerDelegate.java, though.

Thanks, Lars, for pushing on it.

 Backport HBASE- 9208 ReplicationLogCleaner slow at large scale
 

 Key: HBASE-9377
 URL: https://issues.apache.org/jira/browse/HBASE-9377
 Project: HBase
  Issue Type: Task
  Components: Replication
Reporter: stack
Assignee: Lars Hofhansl
 Fix For: 0.94.12

 Attachments: 9377.txt


 For [~lhofhansl] to make a call on.  See end where Dave Latham talks about 
 issues w/ the patch in 0.94.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9480) Regions are unexpectedly made offline in certain failure conditions

2013-09-10 Thread Jeffrey Zhong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763197#comment-13763197
 ] 

Jeffrey Zhong commented on HBASE-9480:
--

It's not ideal to abort here, because the abort happens on the master (a single 
point of failure) which handles region assignment & SSH, and it may have other 
chain effects, or the master may keep aborting. 

Since the issue is mostly caused by 
{code}deleteClosingOrClosedNode(region);{code} which stopped the assignment 
state machine, I think we can remove those calls (there are two places in this 
unassign function). 

The longer-term fix should allow unassign to throw an exception so that 
different code paths can handle it differently, and to fast-fail a move-region 
request (either from a user or the balancer) before or during a region move.

 Regions are unexpectedly made offline in certain failure conditions
 ---

 Key: HBASE-9480
 URL: https://issues.apache.org/jira/browse/HBASE-9480
 Project: HBase
  Issue Type: Bug
Reporter: Devaraj Das
Priority: Critical
 Fix For: 0.96.0

 Attachments: 9480-1.txt


 Came across this issue (HBASE-9338 test):
 1. Client issues a request to move a region from ServerA to ServerB.
 2. ServerA is compacting that region and doesn't close it immediately; in 
 fact, it takes a while to complete the request.
 3. The master, in the meantime, sends another close request.
 4. ServerA responds with a NotServingRegionException.
 5. The master handles the exception, deletes the znode, and invokes regionOffline 
 for the said region.
 6. ServerA fails to operate on ZK in the CloseRegionHandler since the node is 
 deleted.
 The region is permanently offline.
 There are potentially other situations where, when a RegionServer is offline 
 and the client asks for a region move off that server, the master makes 
 the region offline.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9488) Improve performance for small scan

2013-09-10 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763245#comment-13763245
 ] 

Lars Hofhansl commented on HBASE-9488:
--

Nit: I should have done that when I broke ClientScanner and 
AbstractClientScanner out, but while you're at it, can you pull {{public 
Result[] next(int nbRows) throws IOException}} up into AbstractClientScanner 
and remove it from ClientScanner and SmallClientScanner?


 Improve performance for small scan
 --

 Key: HBASE-9488
 URL: https://issues.apache.org/jira/browse/HBASE-9488
 Project: HBase
  Issue Type: Improvement
  Components: Client, Performance, Scanners
Reporter: chunhui shen
Assignee: chunhui shen
 Attachments: HBASE-9488-trunk.patch, HBASE-9488-trunkV2.patch, test 
 results.jpg


 review board:
 https://reviews.apache.org/r/14059/
 Now, one scan operation calls at least 3 RPCs:
 openScanner();
 next();
 closeScanner();
 I think we could reduce this to one RPC for a small scan to get better 
 performance.
 Also, using pread is better than seek+read for a small scan (for this point, 
 see more on HBASE-7266).
 The patch implements such a small scan, with the performance test run as 
 follows:
 a. Environment:
 patched on the 0.94 version;
 one regionserver; 
 one client with 50 concurrent threads;
 KV size: 50/100;
 100% LRU cache hit ratio;
 random start row of scan.
 b. Results:
 See the picture attachment.
 *Usage:*
 Scan scan = new Scan(startRow, stopRow);
 scan.setSmall(true);
 ResultScanner scanner = table.getScanner(scan);
 Set the new 'small' attribute to true for the scan; everything else is the same.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9338) Test Big Linked List fails on Hadoop 2.1.0

2013-09-10 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763244#comment-13763244
 ] 

Devaraj Das commented on HBASE-9338:


Thinking about it a bit more, the lost (offlined) regions should be recovered 
when the master is restarted. So whenever the CM restarts the master, we should 
get those lost regions back since they would have entries in the meta. The data 
loss is not permanent in that sense, but there are windows of time when the data 
is not accessible. This may or may not lead to the data loss issue that 
[~eclark] is seeing. My test with slowDeterministic is still running. Let's 
see how that goes.

 Test Big Linked List fails on Hadoop 2.1.0
 --

 Key: HBASE-9338
 URL: https://issues.apache.org/jira/browse/HBASE-9338
 Project: HBase
  Issue Type: Bug
  Components: test
Affects Versions: 0.96.0
Reporter: Elliott Clark
Assignee: Elliott Clark
Priority: Blocker
 Fix For: 0.96.0

 Attachments: HBASE-9338-TESTING-2.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9482) Do not enforce secure Hadoop for secure HBase

2013-09-10 Thread Aditya Kishore (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763231#comment-13763231
 ] 

Aditya Kishore commented on HBASE-9482:
---

This test ({{org.apache.hadoop.hbase.regionserver.TestAtomicOperation}}) 
failure is unrelated to the patch. Passes locally every time.

 Do not enforce secure Hadoop for secure HBase
 -

 Key: HBASE-9482
 URL: https://issues.apache.org/jira/browse/HBASE-9482
 Project: HBase
  Issue Type: Bug
  Components: security
Affects Versions: 0.95.2, 0.94.11
Reporter: Aditya Kishore
Assignee: Aditya Kishore
  Labels: security
 Fix For: 0.96.0

 Attachments: HBASE-9482-0.94.patch, HBASE-9482.patch, 
 HBASE-9482.patch, HBASE-9482.patch


 We should recommend, not enforce, secure Hadoop underneath as a requirement 
 to run secure HBase.
 A few of our customers have HBase clusters which expose only HBase services 
 outside the physical network; no other services (including ssh) are 
 accessible from outside such a cluster.
 However, they are forced to set up secure Hadoop and incur the penalty of the 
 security overhead at the filesystem layer even if they do not need to.
 The following code tests for both secure HBase and secure Hadoop.
 {code:title=org.apache.hadoop.hbase.security.User|borderStyle=solid}
   /**
    * Returns whether or not secure authentication is enabled for HBase.  Note that
    * HBase security requires HDFS security to provide any guarantees, so this requires that
    * both <code>hbase.security.authentication</code> and <code>hadoop.security.authentication</code>
    * are set to <code>kerberos</code>.
    */
   public static boolean isHBaseSecurityEnabled(Configuration conf) {
     return "kerberos".equalsIgnoreCase(conf.get(HBASE_SECURITY_CONF_KEY)) &&
         "kerberos".equalsIgnoreCase(
             conf.get(CommonConfigurationKeys.HADOOP_SECURITY_AUTHENTICATION));
   }
 {code}
 What is worse is that if {{hadoop.security.authentication}} is not set to 
 {{kerberos}} (undocumented at http://hbase.apache.org/book/security.html), 
 all other configuration has no impact and HBase RPCs silently switch back to 
 unsecured mode.
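The relaxation the issue argues for could look roughly like the sketch below — a hypothetical rewrite, not the actual patch: decide HBase security from hbase.security.authentication alone, and warn (rather than silently disable security) when the Hadoop layer is not secured. A plain Map stands in for Hadoop's Configuration so the sketch is self-contained; the two configuration key names are the real ones quoted in the description.

```java
import java.util.Map;

public class SecurityCheckSketch {
    // Hypothetical relaxed check: HBase security no longer requires
    // hadoop.security.authentication=kerberos, it only recommends it.
    public static boolean isHBaseSecurityEnabled(Map<String, String> conf) {
        boolean hbaseSecure = "kerberos".equalsIgnoreCase(
                conf.get("hbase.security.authentication"));
        boolean hadoopSecure = "kerberos".equalsIgnoreCase(
                conf.get("hadoop.security.authentication"));
        if (hbaseSecure && !hadoopSecure) {
            // Recommend, don't enforce: surface the gap loudly instead of
            // silently falling back to unsecured RPC.
            System.err.println("WARN: secure HBase configured without secure "
                    + "Hadoop; HDFS-level security guarantees do not apply");
        }
        return hbaseSecure; // no longer conditioned on the Hadoop setting
    }
}
```

With this shape, a cluster that exposes only HBase to the outside can run secure HBase without paying the filesystem-layer security overhead, while still being warned about the weaker guarantee.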

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HBASE-9477) Add deprecation compat shim for Result#raw and Result#list for 0.96

2013-09-10 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-9477:
--

Attachment: hbase-9477.v2.patch

v2 fixes a cherrypick conflict

 Add deprecation compat shim for Result#raw and Result#list for 0.96
 ---

 Key: HBASE-9477
 URL: https://issues.apache.org/jira/browse/HBASE-9477
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.95.2
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Blocker
 Fix For: 0.98.0, 0.96.0

 Attachments: hbase-9477.patch, hbase-9477.v2.patch


 Discussion in HBASE-9359 brought up that applications commonly use the 
 KeyValue[] Result#raw (and similarly Result#list).  Let's rename the 0.96 
 versions to something like #listCells and #rawCells and revert #raw and #list 
 to their old signatures to ease upgrade deprecation issues. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9480) Regions are unexpectedly made offline in certain failure conditions

2013-09-10 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763211#comment-13763211
 ] 

Devaraj Das commented on HBASE-9480:


Jimmy I think meant aborting the RS in question.

 Regions are unexpectedly made offline in certain failure conditions
 ---

 Key: HBASE-9480
 URL: https://issues.apache.org/jira/browse/HBASE-9480
 Project: HBase
  Issue Type: Bug
Reporter: Devaraj Das
Priority: Critical
 Fix For: 0.96.0

 Attachments: 9480-1.txt


 Came across this issue (HBASE-9338 test):
 1. Client issues a request to move a region from ServerA to ServerB.
 2. ServerA is compacting that region and doesn't close it immediately; in 
 fact, it takes a while to complete the request.
 3. The master, in the meantime, sends another close request.
 4. ServerA responds with a NotServingRegionException.
 5. The master handles the exception, deletes the znode, and invokes regionOffline 
 for the said region.
 6. ServerA fails to operate on ZK in the CloseRegionHandler since the node is 
 deleted.
 The region is permanently offline.
 There are potentially other situations where, when a RegionServer is offline 
 and the client asks for a region move off that server, the master makes 
 the region offline.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9480) Regions are unexpectedly made offline in certain failure conditions

2013-09-10 Thread Jeffrey Zhong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763259#comment-13763259
 ] 

Jeffrey Zhong commented on HBASE-9480:
--

Oh, my bad, I thought we were aborting the master. Aborting the RS may still not 
fully cover the issue, because the exception may be triggered while the current 
RS is already in SSH, and the outer retry loop could still delete the RIT node 
and interfere with SSH handling. 
[~jxiang] Do you see any issue if we remove the following from the unassign 
function? Thanks.
{code}
if (transitionInZK) {
  // delete the node. if no node exists need not bother.
  deleteClosingOrClosedNode(region);
}
if (state != null) {
  regionOffline(region);
}
{code}


 Regions are unexpectedly made offline in certain failure conditions
 ---

 Key: HBASE-9480
 URL: https://issues.apache.org/jira/browse/HBASE-9480
 Project: HBase
  Issue Type: Bug
Reporter: Devaraj Das
Priority: Critical
 Fix For: 0.96.0

 Attachments: 9480-1.txt


 Came across this issue (HBASE-9338 test):
 1. Client issues a request to move a region from ServerA to ServerB.
 2. ServerA is compacting that region and doesn't close it immediately; in 
 fact, it takes a while to complete the request.
 3. The master, in the meantime, sends another close request.
 4. ServerA responds with a NotServingRegionException.
 5. The master handles the exception, deletes the znode, and invokes regionOffline 
 for the said region.
 6. ServerA fails to operate on ZK in the CloseRegionHandler since the node is 
 deleted.
 The region is permanently offline.
 There are potentially other situations where, when a RegionServer is offline 
 and the client asks for a region move off that server, the master makes 
 the region offline.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9451) Meta remains unassigned when the meta server crashes with the ClusterStatusListener set

2013-09-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763212#comment-13763212
 ] 

Hudson commented on HBASE-9451:
---

SUCCESS: Integrated in HBase-TRUNK #4485 (See 
[https://builds.apache.org/job/HBase-TRUNK/4485/])
HBASE-9451  Meta remains unassigned when the meta server crashes with the 
ClusterStatusListener set (nkeywal: rev 1521513)
* 
/hbase/trunk/hbase-client/src/main/java/org/apache/hadoop/hbase/catalog/CatalogTracker.java


 Meta remains unassigned when the meta server crashes with the 
 ClusterStatusListener set
 ---

 Key: HBASE-9451
 URL: https://issues.apache.org/jira/browse/HBASE-9451
 Project: HBase
  Issue Type: Bug
Reporter: Devaraj Das
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.96.0

 Attachments: 9451.v1.patch


 While running tests described in HBASE-9338, ran into this problem. The 
 hbase.status.listener.class was set to 
 org.apache.hadoop.hbase.client.ClusterStatusListener$MultiCastListener.
 1. I had the meta server coming down
 2. The metaSSH got triggered. The call chain:
2.1 verifyAndAssignMetaWithRetries
2.2 verifyMetaRegionLocation
2.3 waitForMetaServerConnection
2.4 getMetaServerConnection
2.5 getCachedConnection
2.6 HConnectionManager.getAdmin(serverName, false)
2.7 isDeadServer(serverName) - This is hardcoded to return 'false' when 
 the clusterStatusListener field is null. If clusterStatusListener is not null 
 (in my test), then it could return true in certain cases (and in this case, 
 indeed it should return true since the server is down). I am trying to 
 understand why it's hardcoded to 'false' in the former case.
 3. When isDeadServer returns true, the method 
 HConnectionManager.getAdmin(ServerName, boolean) throws 
 RegionServerStoppedException.
 4. Finally, after the retries are over verifyAndAssignMetaWithRetries gives 
 up and the master aborts.
 The methods in the above call chain don't handle 
 RegionServerStoppedException. Maybe something to look at... 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-9477) Add deprecation compat shim for Result#raw and Result#list for 0.96

2013-09-10 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763166#comment-13763166
 ] 

Jonathan Hsieh commented on HBASE-9477:
---

bq.  I'd say making non-public things non-public is a better option. That way, 
the only way to get at them is to use tricks explicitly, and I'm ok with 
breaking that 

Actually, this doesn't work.  For example, even in an ideal future world where 
users aren't supposed to use KeyValue, it and its methods must still be Java 
public (though @InterfaceAudience.Private), because code from other packages 
(like o.a.h.h.regionserver) uses them and needs to access the methods.  Users 
generally touch code in the common and client jars, so that's where we should 
start and make it clear that we are enforcing the policy.  (That said, they 
touch other parts too -- the MR jobs, bulk load tool, etc. are in regionserver.)

 Add deprecation compat shim for Result#raw and Result#list for 0.96
 ---

 Key: HBASE-9477
 URL: https://issues.apache.org/jira/browse/HBASE-9477
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.95.2
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Blocker
 Fix For: 0.98.0, 0.96.0

 Attachments: hbase-9477.patch


 Discussion in HBASE-9359 brought up that applications commonly use the 
 KeyValue[] Result#raw (and similarly Result#list).  Let's rename the 0.96 
 versions to something like #listCells and #rawCells and revert #raw and #list 
 to their old signatures to ease upgrade deprecation issues. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services

2013-09-10 Thread kiran (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763304#comment-13763304
 ] 

kiran commented on HBASE-1936:
--

Is this issue 
(http://article.gmane.org/gmane.comp.java.hadoop.hbase.user/37652/match=0.94.7) 
related to HBASE-1936? If so, how can the classes loaded from HDFS read the 
configurations and their values?


 ClassLoader that loads from hdfs; useful adding filters to classpath without 
 having to restart services
 ---

 Key: HBASE-1936
 URL: https://issues.apache.org/jira/browse/HBASE-1936
 Project: HBase
  Issue Type: New Feature
Reporter: stack
Assignee: Jimmy Xiang
  Labels: noob
 Fix For: 0.98.0, 0.94.7, 0.95.1

 Attachments: 0.94-1936.patch, cp_from_hdfs.patch, 
 HBASE-1936-trunk(forReview).patch, trunk-1936.patch, trunk-1936_v2.1.patch, 
 trunk-1936_v2.2.patch, trunk-1936_v2.patch, trunk-1936_v3.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HBASE-4030) LoadIncrementalHFiles fails with FileNotFoundException

2013-09-10 Thread Philip Gladstone (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763305#comment-13763305
 ] 

Philip Gladstone commented on HBASE-4030:
-

This has started to bite us in 0.94.8 -- fairly consistently. I have no idea 
what changed in our environment to trigger this behavior.

 LoadIncrementalHFiles fails with FileNotFoundException
 --

 Key: HBASE-4030
 URL: https://issues.apache.org/jira/browse/HBASE-4030
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.90.1
 Environment: CDH3bu on Ubuntu 4.4.3
Reporter: Adam Phelps
 Fix For: 0.95.0


 -- We've been seeing intermittent failures of calls to LoadIncrementalHFiles. 
  When this happens the node that made the call will see a 
 FileNotFoundException such as this:
 2011-06-23 15:47:34.379566500 java.net.SocketTimeoutException: Call to 
 s8.XXX/67.215.90.38:60020 failed on socket timeout exception: 
 java.net.SocketTimeoutException: 60000 millis timeout while waiting for channel 
 to be ready for read. ch : java.nio.channels.SocketChannel[connected 
 local=/67.215.90.51:51605 remote=s8.XXX/67.215.90.38:60020]
 2011-06-23 15:47:34.379570500 java.io.FileNotFoundException: 
 java.io.FileNotFoundException: File does not exist: 
 /hfiles/2011/06/23/14/domainsranked/TopDomainsRank.r3v5PRvK/handling/3557032074765091256
 2011-06-23 15:47:34.379573500   at 
 org.apache.hadoop.hdfs.DFSClient$DFSInputStream.openInfo(DFSClient.java:1602)
 2011-06-23 15:47:34.379573500   at 
 org.apache.hadoop.hdfs.DFSClient$DFSInputStream.<init>(DFSClient.java:1593)
 -- Over on the regionserver that was loading this we see that it attempted to 
 load and hit a 60 second timeout:
 2011-06-23 15:45:54,634 INFO org.apache.hadoop.hbase.regionserver.Store: 
 Validating hfile at 
 hdfs://namenode.XXX/hfiles/2011/06/23/14/domainsranked/TopDomainsRank.r3v5PRvK/handling/3557032074765091256
  for inclusion in store handling region 
 domainsranked,368449:2011/05/03/23:category::com.zynga.static.fishville.facebook,1305890318961.d4925aca7852bed32613a509215d42b8.
 ...
 2011-06-23 15:46:54,639 INFO org.apache.hadoop.hdfs.DFSClient: Failed to 
 connect to /67.215.90.38:50010, add to deadNodes and continue
 java.net.SocketTimeoutException: 60000 millis timeout while waiting for 
 channel to be ready for read. ch : java.nio.channels.SocketChannel[connected 
 local=/67.215.90.38:42199 remote=/67.215.90.38:50010]
 at 
 org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
 at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
 at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
 at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
 at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
 at java.io.DataInputStream.readShort(DataInputStream.java:295)
 -- We suspect this particular problem is a resource contention issue on our 
 side.  However, the loading process proceeds to rename the file despite the 
 failure:
 2011-06-23 15:46:54,657 INFO org.apache.hadoop.hbase.regionserver.Store: 
 Renaming bulk load file 
 hdfs://namenode.XXX/hfiles/2011/06/23/14/domainsranked/TopDomainsRank.r3v5PRvK/handling/3557032074765091256
  to 
 hdfs://namenode.XXX:8020/hbase/domainsranked/d4925aca7852bed32613a509215d42b8/handling/3615917062821145533
 -- And then the LoadIncrementalHFiles tries to load the hfile again:
 2011-06-23 15:46:55,684 INFO org.apache.hadoop.hbase.regionserver.Store: 
 Validating hfile at 
 hdfs://namenode.XXX/hfiles/2011/06/23/14/domainsranked/TopDomainsRank.r3v5PRvK/handling/3557032074765091256
  for inclusion in store handling region 
 domainsranked,368449:2011/05/03/23:category::com.zynga.static.fishville.facebook,1305890318961.d4925aca7852bed32613a509215d42b8.
 2011-06-23 15:46:55,685 DEBUG org.apache.hadoop.ipc.HBaseServer: IPC Server 
 handler 147 on 60020, call 
 bulkLoadHFile(hdfs://namenode.XXX/hfiles/2011/06/23/14/domainsranked/TopDomainsRank.r3v5PRvK/handling/3557032074765091256,
  [B@4224508b, [B@5e23f799) from 67.215.90.51:51856: error: 
 java.io.FileNotFoundException: File does not exist: 
 /hfiles/2011/06/23/14/domainsranked/TopDomainsRank.r3v5PRvK/handling/3557032074765091256
 -- This eventually leads to the load command failing.



[jira] [Updated] (HBASE-9484) Backport 8534 Fix coverage for org.apache.hadoop.hbase.mapreduce to 0.96

2013-09-10 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9484:


Status: Open  (was: Patch Available)

 Backport 8534 Fix coverage for org.apache.hadoop.hbase.mapreduce to 0.96
 --

 Key: HBASE-9484
 URL: https://issues.apache.org/jira/browse/HBASE-9484
 Project: HBase
  Issue Type: Test
  Components: mapreduce, test
Reporter: Nick Dimiduk
 Attachments: 
 0001-HBASE-9484-backport-8534-Fix-coverage-for-org.apache.patch






[jira] [Commented] (HBASE-9484) Backport 8534 Fix coverage for org.apache.hadoop.hbase.mapreduce to 0.96

2013-09-10 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9484?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763316#comment-13763316
 ] 

Nick Dimiduk commented on HBASE-9484:
-

Integration tests fail consistently on hadoop2 when run in local mode. 
Investigating.

{noformat}
Tests in error:
  
testGenerateAndLoad(org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv):
 File does not exist: 
hdfs://localhost:60649/grid/1/ndimiduk/hbase/hbase-it/target/test-data/a2919dd2-913d-49a1-b2db-794fe8929a9d/hadoop-7259409377972642917.jar
  
testRunFromOutputCommitter(org.apache.hadoop.hbase.mapreduce.IntegrationTestImportTsv):
 File does not exist: 
hdfs://localhost:60649/grid/1/ndimiduk/hbase/hbase-it/target/test-data/a2919dd2-913d-49a1-b2db-794fe8929a9d/hadoop-4503946817659468635.jar
{noformat}

 Backport 8534 Fix coverage for org.apache.hadoop.hbase.mapreduce to 0.96
 --

 Key: HBASE-9484
 URL: https://issues.apache.org/jira/browse/HBASE-9484
 Project: HBase
  Issue Type: Test
  Components: mapreduce, test
Reporter: Nick Dimiduk
 Attachments: 
 0001-HBASE-9484-backport-8534-Fix-coverage-for-org.apache.patch






[jira] [Updated] (HBASE-9484) Backport 8534 Fix coverage for org.apache.hadoop.hbase.mapreduce to 0.96

2013-09-10 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9484:


Fix Version/s: 0.96.0

 Backport 8534 Fix coverage for org.apache.hadoop.hbase.mapreduce to 0.96
 --

 Key: HBASE-9484
 URL: https://issues.apache.org/jira/browse/HBASE-9484
 Project: HBase
  Issue Type: Test
  Components: mapreduce, test
Reporter: Nick Dimiduk
 Fix For: 0.96.0

 Attachments: 
 0001-HBASE-9484-backport-8534-Fix-coverage-for-org.apache.patch






[jira] [Updated] (HBASE-9484) Backport 8534 Fix coverage for org.apache.hadoop.hbase.mapreduce to 0.96

2013-09-10 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9484:


Priority: Minor  (was: Major)

 Backport 8534 Fix coverage for org.apache.hadoop.hbase.mapreduce to 0.96
 --

 Key: HBASE-9484
 URL: https://issues.apache.org/jira/browse/HBASE-9484
 Project: HBase
  Issue Type: Test
  Components: mapreduce, test
Reporter: Nick Dimiduk
Priority: Minor
 Fix For: 0.96.0

 Attachments: 
 0001-HBASE-9484-backport-8534-Fix-coverage-for-org.apache.patch






[jira] [Updated] (HBASE-9485) TableOutputCommitter should implement recovery if we don't want jobs to start from 0 on RM restart

2013-09-10 Thread Nick Dimiduk (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nick Dimiduk updated HBASE-9485:


Status: Open  (was: Patch Available)

Patch appears to have disappeared.

 TableOutputCommitter should implement recovery if we don't want jobs to start 
 from 0 on RM restart
 --

 Key: HBASE-9485
 URL: https://issues.apache.org/jira/browse/HBASE-9485
 Project: HBase
  Issue Type: Bug
  Components: mapreduce
Reporter: Ted Yu
Assignee: Ted Yu

 HBase extends OutputCommitter which turns recovery off. Meaning all completed 
 maps are lost on RM restart and job starts from scratch. FileOutputCommitter 
 implements recovery so we should look at that to see what is potentially 
 needed for recovery.



[jira] [Commented] (HBASE-9377) Backport HBASE-9208 ReplicationLogCleaner slow at large scale

2013-09-10 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763321#comment-13763321
 ] 

Lars Hofhansl commented on HBASE-9377:
--

I do not feel strongly. Also happy to commit your initial 0.94 patch.

 Backport HBASE-9208 ReplicationLogCleaner slow at large scale
 

 Key: HBASE-9377
 URL: https://issues.apache.org/jira/browse/HBASE-9377
 Project: HBase
  Issue Type: Task
  Components: Replication
Reporter: stack
Assignee: Lars Hofhansl
 Fix For: 0.94.12

 Attachments: 9377.txt


 For [~lhofhansl] to make a call on. See end where Dave Latham talks about 
 issues w/ patch in 0.94.



[jira] [Commented] (HBASE-9477) Add deprecation compat shim for Result#raw and Result#list for 0.96

2013-09-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763181#comment-13763181
 ] 

Hadoop QA commented on HBASE-9477:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12602236/hbase-9477.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 111 
new or modified tests.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7118//console

This message is automatically generated.

 Add deprecation compat shim for Result#raw and Result#list for 0.96
 ---

 Key: HBASE-9477
 URL: https://issues.apache.org/jira/browse/HBASE-9477
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.95.2
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Blocker
 Fix For: 0.98.0, 0.96.0

 Attachments: hbase-9477.patch


 Discussion in HBASE-9359 brought up that applications commonly use the 
 Keyvalue[] Result#raw (and similarly Result#list).  Let's rename the 0.96 
 version to something like #listCells and #rawCells and revert #raw and #list 
 to their old signature to ease upgrade deprecation issues. 



[jira] [Updated] (HBASE-9481) Servershutdown handler get aborted with ConcurrentModificationException

2013-09-10 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HBASE-9481:
---

Resolution: Fixed
Fix Version/s: 0.96.0
Status: Resolved  (was: Patch Available)

Committed. Thanks, Jeffrey for the patch.

 Servershutdown handler get aborted with ConcurrentModificationException
 ---

 Key: HBASE-9481
 URL: https://issues.apache.org/jira/browse/HBASE-9481
 Project: HBase
  Issue Type: Bug
  Components: MTTR
Affects Versions: 0.96.0
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
 Fix For: 0.96.0

 Attachments: hbase-9481.patch


 In integration tests, we found SSH got aborted with following stack trace:
 {code}
 13/09/07 18:10:00 ERROR executor.EventHandler: Caught throwable while 
 processing event M_SERVER_SHUTDOWN
 java.util.ConcurrentModificationException
 at java.util.HashMap$HashIterator.nextEntry(HashMap.java:793)
 at java.util.HashMap$ValueIterator.next(HashMap.java:822)
 at 
 org.apache.hadoop.hbase.master.RegionStates.serverOffline(RegionStates.java:378)
 at 
 org.apache.hadoop.hbase.master.AssignmentManager.processServerShutdown(AssignmentManager.java:3143)
 at 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:207)
 at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:131)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 {code}
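The stack trace above is the classic fail-fast iterator failure: `RegionStates.serverOffline` iterates a `HashMap` view while entries are removed from it. The following JDK-only illustration (not the actual HBase patch; class and method names are invented for the demo) reproduces the hazard and shows the usual fix of iterating over a snapshot copy:

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.HashMap;
import java.util.Map;

public class CmeDemo {
    // Removing from a HashMap while iterating its view throws
    // ConcurrentModificationException on standard JDK HashMap implementations.
    static boolean triggersCme(Map<String, String> regionStates) {
        try {
            for (String state : regionStates.values()) {
                if ("OPEN".equals(state)) {
                    regionStates.remove("region-a"); // structural modification mid-iteration
                }
            }
            return false;
        } catch (ConcurrentModificationException e) {
            return true;
        }
    }

    // Fix pattern: iterate over a snapshot of the keys, so removals only
    // touch the live map, never the collection being iterated.
    static void offlineSafely(Map<String, String> regionStates) {
        for (String region : new ArrayList<>(regionStates.keySet())) {
            if ("OPEN".equals(regionStates.get(region))) {
                regionStates.remove(region);
            }
        }
    }

    public static void main(String[] args) {
        Map<String, String> m = new HashMap<>();
        m.put("region-a", "OPEN");
        m.put("region-b", "OPEN");
        System.out.println("CME triggered: " + triggersCme(new HashMap<>(m)));
        Map<String, String> n = new HashMap<>(m);
        offlineSafely(n);
        System.out.println("all offlined: " + n.isEmpty());
    }
}
```

Copying the key set costs one allocation per pass, which is usually acceptable for a shutdown-handler code path that runs rarely.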



[jira] [Commented] (HBASE-9377) Backport HBASE-9208 ReplicationLogCleaner slow at large scale

2013-09-10 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763327#comment-13763327
 ] 

Lars Hofhansl commented on HBASE-9377:
--

Ok... If there are no objections, I'll commit this (Dave's initial patch 
actually).

 Backport HBASE-9208 ReplicationLogCleaner slow at large scale
 

 Key: HBASE-9377
 URL: https://issues.apache.org/jira/browse/HBASE-9377
 Project: HBase
  Issue Type: Task
  Components: Replication
Reporter: stack
Assignee: Lars Hofhansl
 Fix For: 0.94.12

 Attachments: 9377.txt


 For [~lhofhansl] to make a call on. See end where Dave Latham talks about 
 issues w/ patch in 0.94.



[jira] [Updated] (HBASE-9456) Meta doesn't get assigned in a master failure scenario

2013-09-10 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9456?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HBASE-9456:
---

Resolution: Fixed
Status: Resolved  (was: Patch Available)

Committed.

 Meta doesn't get assigned in a master failure scenario
 --

 Key: HBASE-9456
 URL: https://issues.apache.org/jira/browse/HBASE-9456
 Project: HBase
  Issue Type: Bug
Reporter: Devaraj Das
Priority: Critical
 Fix For: 0.96.0

 Attachments: 9456-1.txt, 9456-2.txt


 The flow:
 1. Cluster is up, meta is assigned to some server
 2. Master is killed
 3. Master is brought up, it is initializing. It learns about the Meta server 
 (in assignMeta).
 4. Server holding meta is killed
 5. Meta never gets reassigned since the SSH wasn't enabled



[jira] [Commented] (HBASE-9377) Backport HBASE-9208 ReplicationLogCleaner slow at large scale

2013-09-10 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1376#comment-1376
 ] 

Dave Latham commented on HBASE-9377:


If you want me to choose, I'd say stick with my patch just because it 
already went in like that for 0.96, but if you like it the other way, I'm happy 
too.

 Backport HBASE-9208 ReplicationLogCleaner slow at large scale
 

 Key: HBASE-9377
 URL: https://issues.apache.org/jira/browse/HBASE-9377
 Project: HBase
  Issue Type: Task
  Components: Replication
Reporter: stack
Assignee: Lars Hofhansl
 Fix For: 0.94.12

 Attachments: 9377.txt


 For [~lhofhansl] to make a call on. See end where Dave Latham talks about 
 issues w/ patch in 0.94.



[jira] [Commented] (HBASE-9377) Backport HBASE-9208 ReplicationLogCleaner slow at large scale

2013-09-10 Thread Dave Latham (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763336#comment-13763336
 ] 

Dave Latham commented on HBASE-9377:


(By initial patch, you mean HBASE-9208-0.94-v2.patch - correct?)

 Backport HBASE-9208 ReplicationLogCleaner slow at large scale
 

 Key: HBASE-9377
 URL: https://issues.apache.org/jira/browse/HBASE-9377
 Project: HBase
  Issue Type: Task
  Components: Replication
Reporter: stack
Assignee: Lars Hofhansl
 Fix For: 0.94.12

 Attachments: 9377.txt


 For [~lhofhansl] to make a call on. See end where Dave Latham talks about 
 issues w/ patch in 0.94.



[jira] [Commented] (HBASE-9451) Meta remains unassigned when the meta server crashes with the ClusterStatusListener set

2013-09-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9451?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763306#comment-13763306
 ] 

Hudson commented on HBASE-9451:
---

SUCCESS: Integrated in hbase-0.96 #29 (See 
[https://builds.apache.org/job/hbase-0.96/29/])
HBASE-9451  Meta remains unassigned when the meta server crashes with the 
ClusterStatusListener set (nkeywal: rev 1521526)
* 
/hbase/branches/0.96/hbase-client/src/main/java/org/apache/hadoop/hbase/catalog/CatalogTracker.java


 Meta remains unassigned when the meta server crashes with the 
 ClusterStatusListener set
 ---

 Key: HBASE-9451
 URL: https://issues.apache.org/jira/browse/HBASE-9451
 Project: HBase
  Issue Type: Bug
Reporter: Devaraj Das
Assignee: Nicolas Liochon
 Fix For: 0.98.0, 0.96.0

 Attachments: 9451.v1.patch


 While running tests described in HBASE-9338, ran into this problem. The 
 hbase.status.listener.class was set to 
 org.apache.hadoop.hbase.client.ClusterStatusListener$MultiCastListener.
 1. I had the meta server coming down
 2. The metaSSH got triggered. The call chain:
2.1 verifyAndAssignMetaWithRetries
2.2 verifyMetaRegionLocation
2.3 waitForMetaServerConnection
2.4 getMetaServerConnection
2.5 getCachedConnection
2.6 HConnectionManager.getAdmin(serverName, false)
2.7 isDeadServer(serverName) - This is hardcoded to return 'false' when 
 the clusterStatusListener field is null. If clusterStatusListener is not null 
 (in my test), then it could return true in certain cases (and in this case, 
 indeed it should return true since the server is down). I am trying to 
 understand why it's hardcoded to 'false' in the former case.
 3. When isDeadServer returns true, the method 
 HConnectionManager.getAdmin(ServerName, boolean) throws 
 RegionServerStoppedException.
 4. Finally, after the retries are over verifyAndAssignMetaWithRetries gives 
 up and the master aborts.
 The methods in the above call chain don't handle 
 RegionServerStoppedException. Maybe something to look at... 



[jira] [Commented] (HBASE-9481) Servershutdown handler get aborted with ConcurrentModificationException

2013-09-10 Thread Jeffrey Zhong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763271#comment-13763271
 ] 

Jeffrey Zhong commented on HBASE-9481:
--

Thanks [~jxiang], [~saint@gmail.com] and [~te...@apache.org] for the 
reviews and I'll integrate the fix to 0.96 and trunk today. 

 Servershutdown handler get aborted with ConcurrentModificationException
 ---

 Key: HBASE-9481
 URL: https://issues.apache.org/jira/browse/HBASE-9481
 Project: HBase
  Issue Type: Bug
  Components: MTTR
Affects Versions: 0.96.0
Reporter: Jeffrey Zhong
Assignee: Jeffrey Zhong
 Attachments: hbase-9481.patch


 In integration tests, we found SSH got aborted with following stack trace:
 {code}
 13/09/07 18:10:00 ERROR executor.EventHandler: Caught throwable while 
 processing event M_SERVER_SHUTDOWN
 java.util.ConcurrentModificationException
 at java.util.HashMap$HashIterator.nextEntry(HashMap.java:793)
 at java.util.HashMap$ValueIterator.next(HashMap.java:822)
 at 
 org.apache.hadoop.hbase.master.RegionStates.serverOffline(RegionStates.java:378)
 at 
 org.apache.hadoop.hbase.master.AssignmentManager.processServerShutdown(AssignmentManager.java:3143)
 at 
 org.apache.hadoop.hbase.master.handler.ServerShutdownHandler.process(ServerShutdownHandler.java:207)
 at 
 org.apache.hadoop.hbase.executor.EventHandler.run(EventHandler.java:131)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:662)
 {code}



[jira] [Commented] (HBASE-9477) Add deprecation compat shim for Result#raw and Result#list for 0.96

2013-09-10 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763357#comment-13763357
 ] 

Jonathan Hsieh commented on HBASE-9477:
---

With this update, and with my ycsb pom dependent on my hbase-client 
0.97.0-SNAPSHOT, it seems to build fine. Waiting for a sane hadoopqa build.

 Add deprecation compat shim for Result#raw and Result#list for 0.96
 ---

 Key: HBASE-9477
 URL: https://issues.apache.org/jira/browse/HBASE-9477
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.95.2
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Blocker
 Fix For: 0.98.0, 0.96.0

 Attachments: hbase-9477.patch, hbase-9477.v2.patch


 Discussion in HBASE-9359 brought up that applications commonly use the 
 Keyvalue[] Result#raw (and similarly Result#list).  Let's rename the 0.96 
 version to something like #listCells and #rawCells and revert #raw and #list 
 to their old signature to ease upgrade deprecation issues. 



[jira] [Commented] (HBASE-9359) Convert KeyValue to Cell in hbase-client module - Result/Put/Delete, ColumnInterpreter

2013-09-10 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763355#comment-13763355
 ] 

Jonathan Hsieh commented on HBASE-9359:
---

With the update in HBASE-9477, and with my ycsb pom dependent on my hbase-client 
0.97.0-SNAPSHOT, it seems to build fine.

 Convert KeyValue to Cell in hbase-client module - Result/Put/Delete, 
 ColumnInterpreter
 --

 Key: HBASE-9359
 URL: https://issues.apache.org/jira/browse/HBASE-9359
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Affects Versions: 0.95.2
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.98.0, 0.96.0

 Attachments: hbase-9334-9359.v4.patch, hbase-9359-9334.v5.patch, 
 hbase-9359-9334.v6.patch, hbase-9359.patch, hbase-9359.v2.patch, 
 hbase-9359.v3.patch, hbase-9359.v5.patch, hbase-9359.v6.patch


 This path is the second half of eliminating KeyValue from the client 
 interfaces.  This percolated through quite a bit. 



[jira] [Commented] (HBASE-9359) Convert KeyValue to Cell in hbase-client module - Result/Put/Delete, ColumnInterpreter

2013-09-10 Thread Jonathan Hsieh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763358#comment-13763358
 ] 

Jonathan Hsieh commented on HBASE-9359:
---

[~nkeywal] 
bq. BTW, it seems a KeyValue survived in WALEdit? This class being public 
through the coprocessors

I only looked at hbase-common and hbase-client.  Coprocs are in 
hbase-regionserver -- I didn't get everything in there, and I believe it is 
understood to still be flexible.  (Normal clients shouldn't touch WALEdits.)  
File a follow-on for that?

 Convert KeyValue to Cell in hbase-client module - Result/Put/Delete, 
 ColumnInterpreter
 --

 Key: HBASE-9359
 URL: https://issues.apache.org/jira/browse/HBASE-9359
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Affects Versions: 0.95.2
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
 Fix For: 0.98.0, 0.96.0

 Attachments: hbase-9334-9359.v4.patch, hbase-9359-9334.v5.patch, 
 hbase-9359-9334.v6.patch, hbase-9359.patch, hbase-9359.v2.patch, 
 hbase-9359.v3.patch, hbase-9359.v5.patch, hbase-9359.v6.patch


 This path is the second half of eliminating KeyValue from the client 
 interfaces.  This percolated through quite a bit. 



[jira] [Commented] (HBASE-9375) [REST] Querying row data gives all the available versions of a column

2013-09-10 Thread Vandana Ayyalasomayajula (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9375?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763373#comment-13763373
 ] 

Vandana Ayyalasomayajula commented on HBASE-9375:
-

[~ndimiduk] Can you review this JIRA ?

 [REST] Querying row data gives all the available versions of a column
 -

 Key: HBASE-9375
 URL: https://issues.apache.org/jira/browse/HBASE-9375
 Project: HBase
  Issue Type: Bug
  Components: REST
Affects Versions: 0.94.11
Reporter: Vandana Ayyalasomayajula
Assignee: Vandana Ayyalasomayajula
Priority: Minor
 Attachments: HBASE-9375.00.patch, HBASE-9375_trunk.00.patch


 In the hbase shell, when a user tries to get a value related to a column, 
 hbase returns only the latest value. But using the REST API returns 
 HColumnDescriptor.DEFAULT_VERSIONS versions by default. 
 The behavior should be consistent with the hbase shell.



[jira] [Commented] (HBASE-9488) Improve performance for small scan

2013-09-10 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9488?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763276#comment-13763276
 ] 

Lars Hofhansl commented on HBASE-9488:
--

Would also be nice to have this in 0.94. The patch would be a bit different:
# add {{next(byte[] regionName, Scan scan, int numberOfRows)}} to 
HRegionInterface and HRegion
# in the new next(...) method on HRegionServer call openScanner, followed by 
the actual next, followed by close()
# the smallScan bit would be encoded as a scan attribute

 Improve performance for small scan
 --

 Key: HBASE-9488
 URL: https://issues.apache.org/jira/browse/HBASE-9488
 Project: HBase
  Issue Type: Improvement
  Components: Client, Performance, Scanners
Reporter: chunhui shen
Assignee: chunhui shen
 Attachments: HBASE-9488-trunk.patch, HBASE-9488-trunkV2.patch, test 
 results.jpg


 review board:
 https://reviews.apache.org/r/14059/
 Now, one scan operation makes at least 3 RPC calls:
 openScanner();
 next();
 closeScanner();
 I think we could reduce the RPC calls to one for a small scan to get better 
 performance.
 Also, using pread is better than seek+read for a small scan (for this point, see 
 more in HBASE-7266).
 Implements such a small scan as the patch, and take the performance test as 
 following:
 a.Environment:
 patched on 0.94 version
 one regionserver; 
 one client with 50 concurrent threads;
 KV size:50/100;
 100% LRU cache hit ratio;
 Random start row of scan
 b.Results:
 See the picture attachment
 *Usage:*
 Scan scan = new Scan(startRow,stopRow);
 scan.setSmall(true);
 ResultScanner scanner = table.getScanner(scan);
 Set the new 'small' attribute to true for the scan; everything else is the same.

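The comment above proposes collapsing openScanner/next/closeScanner into a single round trip. This JDK-only mock (the interfaces below are invented for illustration; they are not the real HBase RPC classes) makes the two-RPC saving concrete by counting calls:

```java
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for a region server: every method call is one RPC.
class MockRegionServer {
    int rpcCount = 0;
    private final List<String> rows = Arrays.asList("r1", "r2", "r3");

    long openScanner()               { rpcCount++; return 42L; }  // RPC 1
    List<String> next(long scanner)  { rpcCount++; return rows; } // RPC 2
    void closeScanner(long scanner)  { rpcCount++; }              // RPC 3

    // Small scan: open + next + close executed server-side in a single call.
    List<String> smallScan()         { rpcCount++; return rows; }
}

public class SmallScanDemo {
    static int regularScanRpcs(MockRegionServer rs) {
        long id = rs.openScanner();
        rs.next(id);
        rs.closeScanner(id);
        return rs.rpcCount;
    }

    static int smallScanRpcs(MockRegionServer rs) {
        rs.smallScan();
        return rs.rpcCount;
    }

    public static void main(String[] args) {
        System.out.println("regular scan RPCs: " + regularScanRpcs(new MockRegionServer()));
        System.out.println("small scan RPCs:   " + smallScanRpcs(new MockRegionServer()));
    }
}
```

For a scan whose results fit in one `next()` batch, the round trips drop from three to one, which is where the latency win for short scans comes from.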


[jira] [Commented] (HBASE-9347) Support for enabling servlet filters for REST service

2013-09-10 Thread Vandana Ayyalasomayajula (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9347?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763374#comment-13763374
 ] 

Vandana Ayyalasomayajula commented on HBASE-9347:
-

[~ndimiduk] Can you review the latest patch ? HadoopQA run did not get 
triggered after I attached the patch. 

 Support for enabling servlet filters for REST service
 -

 Key: HBASE-9347
 URL: https://issues.apache.org/jira/browse/HBASE-9347
 Project: HBase
  Issue Type: Improvement
  Components: REST
Affects Versions: 0.94.11
Reporter: Vandana Ayyalasomayajula
Assignee: Vandana Ayyalasomayajula
 Attachments: HBASE-9347_94.00.patch, HBASE-9347_trunk.00.patch, 
 HBASE-9347_trunk.01.patch, HBASE-9347_trunk.02.patch, 
 HBASE-9347_trunk.03.patch, HBASE-9347_trunk.04.patch


 Currently there is no support for specifying filters for filtering client 
 requests. It will be useful if filters can be configured through hbase 
 configuration. 



[jira] [Commented] (HBASE-9467) write can be totally blocked temporarily by a write-heavy region

2013-09-10 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13763383#comment-13763383
 ] 

Nicolas Liochon commented on HBASE-9467:


Yes.
Just that the number of handlers is usually not that high compared to the 
number of regions.
Note that the scheduler is now pluggable on trunk, so it's possible to put its 
own implementation.

In theory, we should have as few handlers as possible to limit the context 
switches. In practice, it depends; I have conflicting results in my tests 
around this. Obviously, if you can have a large number of handlers compared to 
the number of regions, it's easier.

I was thinking about something like:
 - there are 30 handlers total
 - 50% can be used for any task, we don't do any analysis
 - if more than 50% of these handlers are used, then we ensure that the 
remaining handlers are shared fairly (or prioritized).

The advantage of not doing any prioritization on the first 50% is that the 
shared counters can be expensive if the queries hit the cache.

In your case, what is your configuration (number of regions / number of 
handlers / write load vs. read load in cache) currently? 




 write can be totally blocked temporarily by a write-heavy region
 

 Key: HBASE-9467
 URL: https://issues.apache.org/jira/browse/HBASE-9467
 Project: HBase
  Issue Type: Improvement
Reporter: Feng Honghua
Priority: Minor

 Writes to a region can be blocked temporarily if the memstore of that region 
 reaches the threshold (hbase.hregion.memstore.block.multiplier * 
 hbase.hregion.flush.size), until the memstore of that region is flushed.
 For a write-heavy region, if its write requests saturate all the handler 
 threads of that RS when write blocking for that region occurs, requests from 
 other regions/tables to that RS also can't be served due to no available 
 handler threads...until the pending writes of that write-heavy region are 
 served after the flush is done. Hence, during this time period, from the RS 
 perspective it can't serve any request from any table/region, just due to a 
 single write-heavy region.
 This doesn't sound very reasonable, right? Maybe write requests for a region 
 could be served by only a subset of the handler threads, so that write 
 blocking on any single region can't lead to the scenario mentioned above?
 Comment?
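
The blocking threshold mentioned above is just the product of the two 
settings. A quick stand-alone calculation, using example values (the 0.94-era 
defaults are commonly cited as multiplier 2 and flush size 128 MB, but check 
hbase-default.xml / hbase-site.xml for your version):

```java
public class MemstoreBlockThreshold {
    public static void main(String[] args) {
        // Assumed example values; your configuration may differ.
        long flushSize = 128L * 1024 * 1024;  // hbase.hregion.flush.size
        long multiplier = 2;                  // hbase.hregion.memstore.block.multiplier
        long blockThreshold = multiplier * flushSize;
        // Writes to the region block once its memstore exceeds this many
        // bytes, until the flush completes.
        System.out.println(blockThreshold);   // 268435456 (256 MB)
    }
}
```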



[jira] [Commented] (HBASE-9477) Add deprecation compat shim for Result#raw and Result#list for 0.96

2013-09-10 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13763365#comment-13763365
 ] 

Nicolas Liochon commented on HBASE-9477:


Skimmed through the patch, seems ok to me. +1 (and thanks for the hard work, 
Jon!).

 Add deprecation compat shim for Result#raw and Result#list for 0.96
 ---

 Key: HBASE-9477
 URL: https://issues.apache.org/jira/browse/HBASE-9477
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.95.2
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Blocker
 Fix For: 0.98.0, 0.96.0

 Attachments: hbase-9477.patch, hbase-9477.v2.patch


 Discussion in HBASE-9359 brought up that applications commonly use the 
 KeyValue[] Result#raw (and similarly Result#list).  Let's rename the 0.96 
 version to something like #listCells and #rawCells and revert #raw and #list 
 to their old signatures to ease upgrade deprecation issues. 
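
The shim being proposed can be sketched in isolation (these are minimal toy 
stand-ins, not the real HBase Result/Cell/KeyValue types): the old method 
names keep their old signatures, marked @Deprecated, and simply delegate to 
the new cell-based accessors.

```java
import java.util.Arrays;
import java.util.List;

// Minimal stand-ins for illustration only; not the real HBase types.
interface Cell { byte[] value(); }
class KeyValue implements Cell {
    private final byte[] v;
    KeyValue(byte[] v) { this.v = v; }
    public byte[] value() { return v; }
}

class Result {
    private final KeyValue[] cells;
    Result(KeyValue... cells) { this.cells = cells; }

    // New 0.96-style accessors.
    public Cell[] rawCells() { return cells; }
    public List<Cell> listCells() { return Arrays.asList(rawCells()); }

    // Old signatures kept as deprecated shims so 0.94-era client code
    // still compiles against 0.96.
    @Deprecated
    public KeyValue[] raw() { return cells; }
    @Deprecated
    public List<KeyValue> list() { return Arrays.asList(cells); }
}

public class ShimDemo {
    public static void main(String[] args) {
        Result r = new Result(new KeyValue(new byte[] {1}), new KeyValue(new byte[] {2}));
        // Old and new accessors see the same cells.
        System.out.println(r.raw().length + " " + r.listCells().size());
    }
}
```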



[jira] [Created] (HBASE-9491) KeyValue is still visible to client code through the coprocessor API

2013-09-10 Thread Nicolas Liochon (JIRA)
Nicolas Liochon created HBASE-9491:
--

 Summary: KeyValue is still visible to client code through the 
coprocessor API
 Key: HBASE-9491
 URL: https://issues.apache.org/jira/browse/HBASE-9491
 Project: HBase
  Issue Type: Bug
  Components: Client, Coprocessors, regionserver
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Priority: Minor


See objectives of HBASE-9245 and sub jiras...



[jira] [Updated] (HBASE-9491) KeyValue is still visible to client code through the coprocessor API

2013-09-10 Thread Nicolas Liochon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9491?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Nicolas Liochon updated HBASE-9491:
---

Description: See objectives of HBASE-9245 and sub jiras... The culprit is in 
WALEdit.  (was: See objectives of HBASE-9245 and sub jiras...)

 KeyValue is still visible to client code through the coprocessor API
 

 Key: HBASE-9491
 URL: https://issues.apache.org/jira/browse/HBASE-9491
 Project: HBase
  Issue Type: Bug
  Components: Client, Coprocessors, regionserver
Affects Versions: 0.98.0, 0.96.0
Reporter: Nicolas Liochon
Priority: Minor

 See objectives of HBASE-9245 and sub jiras... The culprit is in WALEdit.



[jira] [Commented] (HBASE-9477) Add deprecation compat shim for Result#raw and Result#list for 0.96

2013-09-10 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13763405#comment-13763405
 ] 

stack commented on HBASE-9477:
--

Don't commit this hbase-server/pom.xml.hadoop2

Otherwise, +1

 Add deprecation compat shim for Result#raw and Result#list for 0.96
 ---

 Key: HBASE-9477
 URL: https://issues.apache.org/jira/browse/HBASE-9477
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.95.2
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Blocker
 Fix For: 0.98.0, 0.96.0

 Attachments: hbase-9477.patch, hbase-9477.v2.patch


 Discussion in HBASE-9359 brought up that applications commonly use the 
 KeyValue[] Result#raw (and similarly Result#list).  Let's rename the 0.96 
 version to something like #listCells and #rawCells and revert #raw and #list 
 to their old signatures to ease upgrade deprecation issues. 



[jira] [Commented] (HBASE-9480) Regions are unexpectedly made offline in certain failure conditions

2013-09-10 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9480?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13763160#comment-13763160
 ] 

Devaraj Das commented on HBASE-9480:


Hmm... yeah, the patch I uploaded was mostly to get me out of the woods. We 
should figure out what the right fix is.

 Regions are unexpectedly made offline in certain failure conditions
 ---

 Key: HBASE-9480
 URL: https://issues.apache.org/jira/browse/HBASE-9480
 Project: HBase
  Issue Type: Bug
Reporter: Devaraj Das
Priority: Critical
 Fix For: 0.96.0

 Attachments: 9480-1.txt


 Came across this issue (HBASE-9338 test):
 1. Client issues a request to move a region from ServerA to ServerB.
 2. ServerA is compacting that region and doesn't close the region 
 immediately; in fact, it takes a while to complete the request.
 3. The master, in the meantime, sends another close request.
 4. ServerA sends it a NotServingRegionException.
 5. The master handles the exception, deletes the znode, and invokes 
 regionOffline for the said region.
 6. ServerA fails to operate on ZK in the CloseRegionHandler since the node is 
 deleted.
 The region is permanently offline.
 There are potentially other situations where, when a RegionServer is offline 
 and the client asks for a region move off that server, the master makes 
 the region offline.



[jira] [Commented] (HBASE-9477) Add deprecation compat shim for Result#raw and Result#list for 0.96

2013-09-10 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13763441#comment-13763441
 ] 

stack commented on HBASE-9477:
--

bq. I have an interesting idea, but probably unfeasible at this point.

[~sershe] I like your ideas.

I made a KeyValue Interface.  I'll attach it.  What would we do with methods 
like:

getKeyLength
getKeyOffset
getTimestampOffset
getKey
createKeyOnly
createLastOnRowCol
match*
createFirstOnRowColTS
heapSize

A few preclude implementations that use a format different from the current 
KeyValue -- or we'd have to do contorted implementations for formats other 
than KeyValue's current layout (doable, I suppose).

Let me attach it.  There is not too much difference.  We could choose NOT to 
include stuff like the match* methods and a few others.
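
For illustration only (a speculative sketch, not the attached KVI.java), an 
interface carrying a few of the methods listed above shows the problem: the 
offset accessors assume a flat backing buffer, so a non-KeyValue-layout 
implementation has to contort itself to answer them.

```java
// Sketch only; invented names. The offset accessors are the ones that tie
// implementations to KeyValue's current flat byte[] layout.
interface KeyValueLike {
    int getKeyLength();
    int getKeyOffset();
    int getTimestampOffset();
    long heapSize();
}

// A plain-object implementation has no backing buffer, so "offsets" are
// meaningless -- exactly the contortion described above.
public class PojoCell implements KeyValueLike {
    private final byte[] key;
    PojoCell(byte[] key) { this.key = key; }
    public int getKeyLength() { return key.length; }
    public int getKeyOffset() { return 0; }          // no buffer: made up
    public int getTimestampOffset() { return -1; }   // ditto
    public long heapSize() { return 16 + key.length; }

    public static void main(String[] args) {
        KeyValueLike c = new PojoCell(new byte[] {1, 2, 3});
        System.out.println(c.getKeyLength() + " " + c.getKeyOffset());
    }
}
```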

bq. If 0.98 will come just after 0.96, I propose we also add this patch to 
0.98, and remove them in the one after 0.98. 

[~enis] Sure.

 Add deprecation compat shim for Result#raw and Result#list for 0.96
 ---

 Key: HBASE-9477
 URL: https://issues.apache.org/jira/browse/HBASE-9477
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.95.2
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Blocker
 Fix For: 0.98.0, 0.96.0

 Attachments: hbase-9477.patch, hbase-9477.v2.patch, KVI.java


 Discussion in HBASE-9359 brought up that applications commonly use the 
 KeyValue[] Result#raw (and similarly Result#list).  Let's rename the 0.96 
 version to something like #listCells and #rawCells and revert #raw and #list 
 to their old signatures to ease upgrade deprecation issues. 



[jira] [Updated] (HBASE-9477) Add deprecation compat shim for Result#raw and Result#list for 0.96

2013-09-10 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-9477:
-

Attachment: KVI.java

 Add deprecation compat shim for Result#raw and Result#list for 0.96
 ---

 Key: HBASE-9477
 URL: https://issues.apache.org/jira/browse/HBASE-9477
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.95.2
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Blocker
 Fix For: 0.98.0, 0.96.0

 Attachments: hbase-9477.patch, hbase-9477.v2.patch, KVI.java


 Discussion in HBASE-9359 brought up that applications commonly use the 
 KeyValue[] Result#raw (and similarly Result#list).  Let's rename the 0.96 
 version to something like #listCells and #rawCells and revert #raw and #list 
 to their old signatures to ease upgrade deprecation issues. 



[jira] [Updated] (HBASE-9477) Add deprecation compat shim for Result#raw and Result#list for 0.96

2013-09-10 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-9477:
-

Attachment: hbase-9477.v2

Reattach Jon's last patch (it is missing the .patch extension) so that hadoopqa 
finds this rather than the KVI.java I attached.

 Add deprecation compat shim for Result#raw and Result#list for 0.96
 ---

 Key: HBASE-9477
 URL: https://issues.apache.org/jira/browse/HBASE-9477
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.95.2
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Blocker
 Fix For: 0.98.0, 0.96.0

 Attachments: hbase-9477.patch, hbase-9477.v2, hbase-9477.v2.patch, 
 KVI.java


 Discussion in HBASE-9359 brought up that applications commonly use the 
 KeyValue[] Result#raw (and similarly Result#list).  Let's rename the 0.96 
 version to something like #listCells and #rawCells and revert #raw and #list 
 to their old signatures to ease upgrade deprecation issues. 



[jira] [Commented] (HBASE-9477) Add deprecation compat shim for Result#raw and Result#list for 0.96

2013-09-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9477?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13763440#comment-13763440
 ] 

Hadoop QA commented on HBASE-9477:
--

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12602362/hbase-9477.v2.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 116 
new or modified tests.

{color:green}+1 hadoop1.0{color}.  The patch compiles against the hadoop 
1.0 profile.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7122//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7122//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7122//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7122//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7122//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7122//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7122//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop1-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7122//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7122//artifact/trunk/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/7122//console

This message is automatically generated.

 Add deprecation compat shim for Result#raw and Result#list for 0.96
 ---

 Key: HBASE-9477
 URL: https://issues.apache.org/jira/browse/HBASE-9477
 Project: HBase
  Issue Type: Sub-task
Affects Versions: 0.95.2
Reporter: Jonathan Hsieh
Assignee: Jonathan Hsieh
Priority: Blocker
 Fix For: 0.98.0, 0.96.0

 Attachments: hbase-9477.patch, hbase-9477.v2.patch


 Discussion in HBASE-9359 brought up that applications commonly use the 
 KeyValue[] Result#raw (and similarly Result#list).  Let's rename the 0.96 
 version to something like #listCells and #rawCells and revert #raw and #list 
 to their old signatures to ease upgrade deprecation issues. 



[jira] [Commented] (HBASE-1936) ClassLoader that loads from hdfs; useful adding filters to classpath without having to restart services

2013-09-10 Thread Jimmy Xiang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-1936?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13763447#comment-13763447
 ] 

Jimmy Xiang commented on HBASE-1936:


Most likely it is not related.  Have you changed the code here a little to 
pinpoint which one is null?
{noformat}
props.put("zk.connect", env.getConfiguration().get("a.zk.connect"));
{noformat}


 ClassLoader that loads from hdfs; useful adding filters to classpath without 
 having to restart services
 ---

 Key: HBASE-1936
 URL: https://issues.apache.org/jira/browse/HBASE-1936
 Project: HBase
  Issue Type: New Feature
Reporter: stack
Assignee: Jimmy Xiang
  Labels: noob
 Fix For: 0.98.0, 0.94.7, 0.95.1

 Attachments: 0.94-1936.patch, cp_from_hdfs.patch, 
 HBASE-1936-trunk(forReview).patch, trunk-1936.patch, trunk-1936_v2.1.patch, 
 trunk-1936_v2.2.patch, trunk-1936_v2.patch, trunk-1936_v3.patch






[jira] [Commented] (HBASE-9467) write can be totally blocked temporarily by a write-heavy region

2013-09-10 Thread Nicolas Liochon (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9467?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13763417#comment-13763417
 ] 

Nicolas Liochon commented on HBASE-9467:


Yes. But I can't say how much we would gain.
Ideally, we should have one thread, and any i/o would be put on the queue. And 
we would not use the queue if there is no i/o.

We have Reader -> queue -> ThreadPool executing the 'Call'. It's not ideal to 
have a queue if there is no i/o.

But I've just tested that (removing this queue, the Reader calling 'Call' 
directly, after having removed all the synchronization), and the difference in 
performance was minimal. Maybe 5%.  So it's not our bottleneck today.

This said, I hope it will become our bottleneck one day, hence this idea of a 
50% split between what we do w/o thinking and what we put in a priority list.
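
The Reader -> queue -> handler pipeline under discussion can be modeled in a 
few lines of stand-alone Java (invented names; not the actual HBase RPC 
server). Deleting the queue and having the reader run the call inline is the 
variant the experiment above measured.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.CountDownLatch;

// Toy model: a reader thread parses calls onto a queue; a handler thread
// drains the queue and executes them.
public class RpcPipeline {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Runnable> callQueue = new ArrayBlockingQueue<>(100);
        int calls = 10;
        CountDownLatch done = new CountDownLatch(calls);

        // Handler: drains the queue and runs each call.
        Thread handler = new Thread(() -> {
            try {
                while (true) { callQueue.take().run(); }
            } catch (InterruptedException e) { /* shutdown */ }
        });
        handler.start();

        // Reader: enqueues parsed calls instead of executing them inline.
        for (int i = 0; i < calls; i++) {
            callQueue.put(done::countDown);
        }
        done.await();
        handler.interrupt();
        System.out.println("handled " + calls + " calls");
    }
}
```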



 write can be totally blocked temporarily by a write-heavy region
 

 Key: HBASE-9467
 URL: https://issues.apache.org/jira/browse/HBASE-9467
 Project: HBase
  Issue Type: Improvement
Reporter: Feng Honghua
Priority: Minor

 Writes to a region can be blocked temporarily if the memstore of that region 
 reaches the threshold (hbase.hregion.memstore.block.multiplier * 
 hbase.hregion.flush.size), until the memstore of that region is flushed.
 For a write-heavy region, if its write requests saturate all the handler 
 threads of that RS when write blocking for that region occurs, requests from 
 other regions/tables to that RS also can't be served due to no available 
 handler threads...until the pending writes of that write-heavy region are 
 served after the flush is done. Hence, during this time period, from the RS 
 perspective it can't serve any request from any table/region, just due to a 
 single write-heavy region.
 This doesn't sound very reasonable, right? Maybe write requests for a region 
 could be served by only a subset of the handler threads, so that write 
 blocking on any single region can't lead to the scenario mentioned above?
 Comment?



[jira] [Commented] (HBASE-9364) Get request with multiple columns returns partial results

2013-09-10 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9364?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13763415#comment-13763415
 ] 

Nick Dimiduk commented on HBASE-9364:
-

{noformat}
-  if (split.length == 2 && split[1].length != 0) {
-    get.addColumn(split[0], split[1]);
+  if (split.length == 2) {
+    if (split[1].length != 0) {
+      get.addColumn(split[0], split[1]);
+    } else {
+      get.addColumn(split[0], HConstants.EMPTY_BYTE_ARRAY);
+    }
{noformat}

No need for the special logic around {{split[1].length != 0}}. If its length 
== 0, it is an empty byte[].
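
This point can be checked in isolation: a zero-length array already is an 
empty byte[], so (assuming HConstants.EMPTY_BYTE_ARRAY is just `new byte[0]`, 
modeled with a stand-in constant here) both branches pass an equal qualifier:

```java
import java.util.Arrays;

public class EmptyQualifier {
    // Stand-in for HConstants.EMPTY_BYTE_ARRAY (assumed to be new byte[0]).
    static final byte[] EMPTY_BYTE_ARRAY = new byte[0];

    public static void main(String[] args) {
        byte[] qualifier = new byte[0];  // what split[1] is when its length == 0
        // Passing the zero-length array or the shared constant is equivalent.
        System.out.println(qualifier.length == 0
            && Arrays.equals(qualifier, EMPTY_BYTE_ARRAY));
    }
}
```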

{noformat}
-  if (!s.contains(":")) {
-    this.columns.add(Bytes.toBytes(s + ":"));
-  } else {
{noformat}

Why do you remove the ':' from the rowspec?

{noformat}
+   sb.append(Bytes.toStringBinary((byte[])e.getKey()));
+   sb.append(':');
{noformat}

ws: no tabs please.

{noformat}
-result = remoteTable.get(get);
+result = remoteTable.get(get);
{noformat}

ws: thanks for cleaning these up.

{noformat}
+define_test "parse_column_name should  return empty qualifier for 
family-only column specifiers" do
{noformat}

ws: extra space between should and return.

 Get request with multiple columns returns partial results
 -

 Key: HBASE-9364
 URL: https://issues.apache.org/jira/browse/HBASE-9364
 Project: HBase
  Issue Type: Bug
  Components: REST
Affects Versions: 0.94.11
Reporter: Vandana Ayyalasomayajula
Assignee: Vandana Ayyalasomayajula
Priority: Minor
 Attachments: HBASE-9364.00.patch, HBASE-9364.01.patch, 
 hbase-9364_trunk.00.patch, HBASE-9364_trunk.01.patch


 When a GET request is issued for a table row with multiple columns, and the 
 columns have an empty qualifier like "f1:", results for the empty qualifiers 
 are being ignored. 


