[jira] [Commented] (HBASE-11957) Backport to 0.94 HBASE-5974 Scanner retry behavior with RPC timeout on next() seems incorrect

2014-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150459#comment-14150459
 ] 

Hudson commented on HBASE-11957:


FAILURE: Integrated in HBase-0.94 #1420 (See 
[https://builds.apache.org/job/HBase-0.94/1420/])
HBASE-11957 Addendum; fix TestMetaReaderEditorNoCluster (larsh: rev 
66cfcbe1532261f42524e8e02e762007ef0796a3)
* 
src/test/java/org/apache/hadoop/hbase/catalog/TestMetaReaderEditorNoCluster.java


> Backport to 0.94 HBASE-5974 Scanner retry behavior with RPC timeout on next() 
> seems incorrect
> -
>
> Key: HBASE-11957
> URL: https://issues.apache.org/jira/browse/HBASE-11957
> Project: HBase
>  Issue Type: Bug
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Critical
> Fix For: 0.94.24
>
> Attachments: 11957-addendum.txt, HBASE-5974-0.94-v1.diff, 
> verify-test.patch
>
>
> HBASE-5974: Scanner retry behavior with RPC timeout on next() seems incorrect, 
> which causes data to go missing in HBase scans.
> I think we should fix it in 0.94.
> [~lhofhansl]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-11957) Backport to 0.94 HBASE-5974 Scanner retry behavior with RPC timeout on next() seems incorrect

2014-09-26 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl resolved HBASE-11957.
---
Resolution: Fixed

> Backport to 0.94 HBASE-5974 Scanner retry behavior with RPC timeout on next() 
> seems incorrect
> -
>
> Key: HBASE-11957
> URL: https://issues.apache.org/jira/browse/HBASE-11957
> Project: HBase
>  Issue Type: Bug
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Critical
> Fix For: 0.94.24
>
> Attachments: 11957-addendum.txt, HBASE-5974-0.94-v1.diff, 
> verify-test.patch
>
>
> HBASE-5974: Scanner retry behavior with RPC timeout on next() seems incorrect, 
> which causes data to go missing in HBase scans.
> I think we should fix it in 0.94.
> [~lhofhansl]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-11957) Backport to 0.94 HBASE-5974 Scanner retry behavior with RPC timeout on next() seems incorrect

2014-09-26 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-11957:
--
Attachment: 11957-addendum.txt

Pushed the attached addendum, which fixes the test.
My fault that I let the tests slide for so long.

> Backport to 0.94 HBASE-5974 Scanner retry behavior with RPC timeout on next() 
> seems incorrect
> -
>
> Key: HBASE-11957
> URL: https://issues.apache.org/jira/browse/HBASE-11957
> Project: HBase
>  Issue Type: Bug
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Critical
> Fix For: 0.94.24
>
> Attachments: 11957-addendum.txt, HBASE-5974-0.94-v1.diff, 
> verify-test.patch
>
>
> HBASE-5974: Scanner retry behavior with RPC timeout on next() seems incorrect, 
> which causes data to go missing in HBase scans.
> I think we should fix it in 0.94.
> [~lhofhansl]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12103) Backport HFileV1Detector to 0.94

2014-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150428#comment-14150428
 ] 

Hudson commented on HBASE-12103:


FAILURE: Integrated in HBase-0.94 #1419 (See 
[https://builds.apache.org/job/HBase-0.94/1419/])
HBASE-12103 Backport HFileV1Detector to 0.94. (Jeffrey Zhong) (larsh: rev 
7e4d7f04e14c5197949caeacd289748be9b1bf5b)
* src/main/java/org/apache/hadoop/hbase/util/HFileV1Detector.java


> Backport HFileV1Detector to 0.94
> 
>
> Key: HBASE-12103
> URL: https://issues.apache.org/jira/browse/HBASE-12103
> Project: HBase
>  Issue Type: Task
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Fix For: 0.94.24
>
> Attachments: HBASE-12103.patch
>
>
> The reason for back porting is that 0.96+ HBase doesn't support the HFileV1 format 
> anymore, while the HFileV1Detector in 0.96+, which is supposed to check for v1 hfiles 
> before an upgrade, can't talk to a pre-0.96 cluster because the underlying hadoop 
> isn't compatible either due to the protocol buffer version bump.
> After the back port, a user can just download the latest 0.94 HBase tar ball 
> on a machine and point its config to a 0.94 cluster to check for v1 file existence 
> by running the following command:
> {code}
> ./bin/hbase org.apache.hadoop.hbase.util.HFileV1Detector -p <hdfs path>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12099) TestScannerModel fails if using jackson 1.9.13

2014-09-26 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HBASE-12099:

Attachment: 12099-2.txt

This patch should make the build work with both jackson versions.

> TestScannerModel fails if using jackson 1.9.13
> --
>
> Key: HBASE-12099
> URL: https://issues.apache.org/jira/browse/HBASE-12099
> Project: HBase
>  Issue Type: Bug
>  Components: REST
>Affects Versions: 2.0.0, 0.98.7, 0.99.1
> Environment: hadoop-2.5.0
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
> Attachments: 12099-1.txt, 12099-2.txt, HBASE-12099.v0.txt
>
>
> TestScannerModel fails if jackson 1.9.13 is used. (Hadoop 2.5 now uses that 
> version, see HADOOP-10104):
> {code}
> Failed tests:   
> testToJSON(org.apache.hadoop.hbase.rest.model.TestScannerModel): 
> expected:<{"batch":100,"caching":1000,"cacheBlocks":false,"endRow":"enp5eng=","endTime":1245393318192,"maxVersions":2147483647,"startRow":"YWJyYWNhZGFicmE=","startTime":1245219839331,"column":["Y29sdW1uMQ==","Y29sdW1uMjpmb28="],"labels":["private","public"]}>
>  but 
> was:<{"startRow":"YWJyYWNhZGFicmE=","endRow":"enp5eng=","batch":100,"startTime":1245219839331,"endTime":1245393318192,"maxVersions":2147483647,"caching":1000,"cacheBlocks":false,"column":["Y29sdW1uMQ==","Y29sdW1uMjpmb28="],"label":["private","public"]}>
> {code}
> The problem is the annotation used for the labels element, which is 'label' 
> instead of 'labels'.
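
For illustration, a hypothetical JAXB-annotated field of the kind the description refers to; the class and field names below are made up, and only the 'label' vs 'labels' annotation name is taken from the report. With jackson 1.9.13 the JAXB introspector honors the @XmlElement name when serializing to JSON, so the annotation name shows up as the JSON field name:

{code}
import java.util.ArrayList;
import java.util.List;
import javax.xml.bind.annotation.XmlElement;

// Hypothetical model class; only the annotation name is the point here.
// With jackson 1.9.13 this field serializes as "label" (the reported bug)
// instead of the expected "labels".
public class ScannerModelSketch {
  @XmlElement(name = "label") // should be "labels"
  private List<String> labels = new ArrayList<String>();
}
{code}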



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12103) Backport HFileV1Detector to 0.94

2014-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150417#comment-14150417
 ] 

Hudson commented on HBASE-12103:


FAILURE: Integrated in HBase-0.94-JDK7 #189 (See 
[https://builds.apache.org/job/HBase-0.94-JDK7/189/])
HBASE-12103 Backport HFileV1Detector to 0.94. (Jeffrey Zhong) (larsh: rev 
7e4d7f04e14c5197949caeacd289748be9b1bf5b)
* src/main/java/org/apache/hadoop/hbase/util/HFileV1Detector.java


> Backport HFileV1Detector to 0.94
> 
>
> Key: HBASE-12103
> URL: https://issues.apache.org/jira/browse/HBASE-12103
> Project: HBase
>  Issue Type: Task
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Fix For: 0.94.24
>
> Attachments: HBASE-12103.patch
>
>
> The reason for back porting is that 0.96+ HBase doesn't support the HFileV1 format 
> anymore, while the HFileV1Detector in 0.96+, which is supposed to check for v1 hfiles 
> before an upgrade, can't talk to a pre-0.96 cluster because the underlying hadoop 
> isn't compatible either due to the protocol buffer version bump.
> After the back port, a user can just download the latest 0.94 HBase tar ball 
> on a machine and point its config to a 0.94 cluster to check for v1 file existence 
> by running the following command:
> {code}
> ./bin/hbase org.apache.hadoop.hbase.util.HFileV1Detector -p <hdfs path>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11957) Backport to 0.94 HBASE-5974 Scanner retry behavior with RPC timeout on next() seems incorrect

2014-09-26 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150416#comment-14150416
 ] 

Lars Hofhansl commented on HBASE-11957:
---

[~liushaohui], we need to either fix or remove this (at least for now)

> Backport to 0.94 HBASE-5974 Scanner retry behavior with RPC timeout on next() 
> seems incorrect
> -
>
> Key: HBASE-11957
> URL: https://issues.apache.org/jira/browse/HBASE-11957
> Project: HBase
>  Issue Type: Bug
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Critical
> Fix For: 0.94.24
>
> Attachments: HBASE-5974-0.94-v1.diff, verify-test.patch
>
>
> HBASE-5974: Scanner retry behavior with RPC timeout on next() seems incorrect, 
> which causes data to go missing in HBase scans.
> I think we should fix it in 0.94.
> [~lhofhansl]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HBASE-11957) Backport to 0.94 HBASE-5974 Scanner retry behavior with RPC timeout on next() seems incorrect

2014-09-26 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11957?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reopened HBASE-11957:
---

> Backport to 0.94 HBASE-5974 Scanner retry behavior with RPC timeout on next() 
> seems incorrect
> -
>
> Key: HBASE-11957
> URL: https://issues.apache.org/jira/browse/HBASE-11957
> Project: HBase
>  Issue Type: Bug
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Critical
> Fix For: 0.94.24
>
> Attachments: HBASE-5974-0.94-v1.diff, verify-test.patch
>
>
> HBASE-5974: Scanner retry behavior with RPC timeout on next() seems incorrect, 
> which causes data to go missing in HBase scans.
> I think we should fix it in 0.94.
> [~lhofhansl]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11957) Backport to 0.94 HBASE-5974 Scanner retry behavior with RPC timeout on next() seems incorrect

2014-09-26 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11957?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150415#comment-14150415
 ] 

Lars Hofhansl commented on HBASE-11957:
---

Looks like this breaks: 
TestMetaReaderEditorNoCluster.testRideOverServerNotRunning

> Backport to 0.94 HBASE-5974 Scanner retry behavior with RPC timeout on next() 
> seems incorrect
> -
>
> Key: HBASE-11957
> URL: https://issues.apache.org/jira/browse/HBASE-11957
> Project: HBase
>  Issue Type: Bug
>Reporter: Liu Shaohui
>Assignee: Liu Shaohui
>Priority: Critical
> Fix For: 0.94.24
>
> Attachments: HBASE-5974-0.94-v1.diff, verify-test.patch
>
>
> HBASE-5974: Scanner retry behavior with RPC timeout on next() seems incorrect, 
> which causes data to go missing in HBase scans.
> I think we should fix it in 0.94.
> [~lhofhansl]



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12103) Backport HFileV1Detector to 0.94

2014-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150414#comment-14150414
 ] 

Hudson commented on HBASE-12103:


FAILURE: Integrated in HBase-0.94-security #530 (See 
[https://builds.apache.org/job/HBase-0.94-security/530/])
HBASE-12103 Backport HFileV1Detector to 0.94. (Jeffrey Zhong) (larsh: rev 
7e4d7f04e14c5197949caeacd289748be9b1bf5b)
* src/main/java/org/apache/hadoop/hbase/util/HFileV1Detector.java


> Backport HFileV1Detector to 0.94
> 
>
> Key: HBASE-12103
> URL: https://issues.apache.org/jira/browse/HBASE-12103
> Project: HBase
>  Issue Type: Task
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Fix For: 0.94.24
>
> Attachments: HBASE-12103.patch
>
>
> The reason for back porting is that 0.96+ HBase doesn't support the HFileV1 format 
> anymore, while the HFileV1Detector in 0.96+, which is supposed to check for v1 hfiles 
> before an upgrade, can't talk to a pre-0.96 cluster because the underlying hadoop 
> isn't compatible either due to the protocol buffer version bump.
> After the back port, a user can just download the latest 0.94 HBase tar ball 
> on a machine and point its config to a 0.94 cluster to check for v1 file existence 
> by running the following command:
> {code}
> ./bin/hbase org.apache.hadoop.hbase.util.HFileV1Detector -p <hdfs path>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12108) HBaseConfiguration

2014-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150412#comment-14150412
 ] 

stack commented on HBASE-12108:
---

I like your idea better.  Workaround is ugly.

> HBaseConfiguration
> --
>
> Key: HBASE-12108
> URL: https://issues.apache.org/jira/browse/HBASE-12108
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Aniket Bhatnagar
>Priority: Minor
>
> In a setup where the HBase jars are loaded in a child classloader whose parent 
> loaded the hadoop-common jar, HBaseConfiguration.create() throws an 
> "hbase-default.xml file seems to be for and old version of HBase (null)... " 
> exception. The ClassLoader should be set on the Hadoop conf object before calling 
> the addHbaseResources method.
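
A minimal sketch of the suggested fix, assuming the caller applies it before resource loading; Configuration#setClassLoader and HBaseConfiguration#addHbaseResources are existing API, while the wrapper class and method here are hypothetical:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ChildClassLoaderConf {
  public static Configuration create() {
    Configuration conf = new Configuration();
    // Point the conf at the classloader that actually holds the HBase jars
    // (the child), so hbase-default.xml is found and its version matches.
    conf.setClassLoader(HBaseConfiguration.class.getClassLoader());
    return HBaseConfiguration.addHbaseResources(conf);
  }
}
{code}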



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12110) Fix .arcconfig

2014-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150411#comment-14150411
 ] 

stack commented on HBASE-12110:
---

+1

> Fix .arcconfig
> --
>
> Key: HBASE-12110
> URL: https://issues.apache.org/jira/browse/HBASE-12110
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0, 0.98.7, 0.99.1
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: 0001-HBASE-12110-Fix-.arcconfig.patch
>
>
> Not many people are currently using arc but it's a nice tool for the 
> developers who are used to it. Since it's already there let's make it work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12079) Deprecate KeyValueUtil#ensureKeyValue(s)

2014-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150408#comment-14150408
 ] 

stack commented on HBASE-12079:
---

Great +1

> Deprecate KeyValueUtil#ensureKeyValue(s)
> 
>
> Key: HBASE-12079
> URL: https://issues.apache.org/jira/browse/HBASE-12079
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0, 0.99.1
>
> Attachments: HBASE-12079.patch
>
>
> We can deprecate this in 2.0 and remove later.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12111) Remove deprecated APIs from Mutation(s)

2014-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150410#comment-14150410
 ] 

stack commented on HBASE-12111:
---

+1

> Remove deprecated APIs from Mutation(s)
> ---
>
> Key: HBASE-12111
> URL: https://issues.apache.org/jira/browse/HBASE-12111
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-12111.patch
>
>
> Mutation setWriteToWAL(boolean)
> boolean getWriteToWAL()
> Mutation setFamilyMap(NavigableMap<byte[], List<KeyValue>>)
> NavigableMap<byte[], List<KeyValue>> getFamilyMap()
> To be removed from Mutation and the setters from Put/Delete/Increment/Append 
> as well.
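
For reference, a hedged migration sketch away from the deprecated calls; setDurability and getFamilyCellMap are the documented replacements in the client API, and the row/values below are arbitrary test data:

{code}
import java.util.List;
import java.util.NavigableMap;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class MutationMigration {
  public static void main(String[] args) {
    Put put = new Put(Bytes.toBytes("row1"));
    // Old: put.setWriteToWAL(false);  New:
    put.setDurability(Durability.SKIP_WAL);
    // Old: NavigableMap<byte[], List<KeyValue>> map = put.getFamilyMap();  New:
    NavigableMap<byte[], List<Cell>> map = put.getFamilyCellMap();
    System.out.println(map.isEmpty());
  }
}
{code}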



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12086) Fix bugs in HTableMultiplexer

2014-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150405#comment-14150405
 ] 

Hudson commented on HBASE-12086:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #518 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/518/])
HBASE-12086 Fix bug of HTableMultipliexer (eclark: rev 
3b8d80d5f92c19b9904087bd963e68258931c926)
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHTableMultiplexer.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableMultiplexer.java


> Fix bugs in HTableMultiplexer
> -
>
> Key: HBASE-12086
> URL: https://issues.apache.org/jira/browse/HBASE-12086
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: @deprecated Yi Deng
>Assignee: Yi Deng
>Priority: Minor
>  Labels: client, multiplex
> Fix For: 2.0.0, 0.99.1
>
> Attachments: 0001-Fix-bug-of-HTableMultipliexer.patch, 
> 0001-HBASE-12086-Fix-bug-of-HTableMultipliexer-0.98.patch
>
>
> HTableMultiplexer doesn't write Puts to the correct table if there are multiple 
> tables.
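
For context, a small usage sketch of the multi-table scenario the bug affects; the table and column names are made up, and the calls follow the branch-1-era HTableMultiplexer client API:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HTableMultiplexer;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class MultiplexerTwoTables {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTableMultiplexer mux = new HTableMultiplexer(conf, 1000 /* per-RS queue size */);
    Put p1 = new Put(Bytes.toBytes("r1"));
    p1.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v1"));
    Put p2 = new Put(Bytes.toBytes("r1"));
    p2.addColumn(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v2"));
    // Before the fix, puts queued for different tables could land in the
    // wrong table's buffer; each put below must reach its own table.
    mux.put(TableName.valueOf("tableA"), p1);
    mux.put(TableName.valueOf("tableB"), p2);
  }
}
{code}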



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12052) BulkLoad Failed due to no write permission on input files

2014-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150404#comment-14150404
 ] 

Hudson commented on HBASE-12052:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #518 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/518/])
HBASE-12052: BulkLoad Failed due to no write permission on input files 
(jeffreyz: rev ed5980e0079372ccb978c3d5f09bb0d08fd2853a)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/FsDelegationToken.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFilesUseSecurityEndPoint.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFilesSplitRecovery.java


> BulkLoad Failed due to no write permission on input files
> -
>
> Key: HBASE-12052
> URL: https://issues.apache.org/jira/browse/HBASE-12052
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0, 0.98.6
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Fix For: 0.98.7, 0.99.1
>
> Attachments: HBASE-12052.patch
>
>
> The issue is that HBase bulkload is done by the Region Server, which normally runs 
> as the hbase user, while the input hfile folder and the user who starts the bulkload 
> could belong to any user.
> Below is the error message when user "hrt_qa" bulkloads files on which "hrt_qa" 
> has write permission, yet the bulkload operation still fails with a 
> "Permission denied" error.
> We had similar handling for this issue in a secure env, so the proposed fix is 
> to reuse SecureBulkLoadEndPoint in an un-secure env as well. In the future, we 
> can rename the class to BulkLoadEndPoint.
> {noformat}
> java.io.IOException: Exception in rename
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.rename(HRegionFileSystem.java:947)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:347)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:421)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:723)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3603)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3525)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFile(HRegionServer.java:3276)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28863)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.hadoop.security.AccessControlException: Permission 
> denied: user=hbase, access=WRITE, 
> inode="/tmp/a0f3ee35-4c8f-4077-93d0-94d8e5bae914/0":hrt_qa:hdfs:drwxr-xr-x
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:265)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:251)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:232)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5515)
> {noformat}
>   
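
Operationally, the proposed fix amounts to loading SecureBulkLoadEndpoint even on non-secure clusters. A hedged sketch of the relevant configuration, set programmatically here; the property names are the existing ones, while the wrapper class and the staging path are arbitrary examples:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class SecureBulkLoadConfSketch {
  public static Configuration create() {
    Configuration conf = HBaseConfiguration.create();
    // Load the endpoint on every region so bulkloaded hfiles are moved through
    // a staging dir writable by the submitting user instead of being renamed
    // in place by the hbase user.
    conf.set("hbase.coprocessor.region.classes",
        "org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint");
    conf.set("hbase.bulkload.staging.dir", "/tmp/hbase-staging"); // example path
    return conf;
  }
}
{code}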



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12065) Import tool is not restoring multiple DeleteFamily markers of a row

2014-09-26 Thread Maddineni Sukumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150402#comment-14150402
 ] 

Maddineni Sukumar commented on HBASE-12065:
---

Hi Lars,

Along with ReplicationSink I think the copy table tool also has the same
logic. I will change that and write unit tests for each.

Thanks
Sukumar

On Saturday, September 27, 2014, Lars Hofhansl (JIRA) wrote:

>  Import tool is not restoring multiple DeleteFamily markers of a row
> 
>
> Key: HBASE-12065
> URL: https://issues.apache.org/jira/browse/HBASE-12065
> Project: HBase
>  Issue Type: Bug
>  Components: util
>Affects Versions: 0.98.2
>Reporter: Maddineni Sukumar
>Assignee: Maddineni Sukumar
>Priority: Minor
> Fix For: 2.0.0, 0.98.7, 0.94.24, 0.99.1
>
> Attachments: hbase-12065-fix-2.patch, hbase-12065-fix.patch, 
> hbase-12065-unit-test.patch
>
>
> When a row has more than one DeleteFamily marker, the Import tool does not 
> restore all of the DeleteFamily markers. 
> Scenario: Insert entries into hbase in the order below:
> Put Row1 with Value-A
> Delete Row1 with DeleteFamily Marker
> Put Row1 with Value-B
> Delete Row1 with DeleteFamily Marker
> Export this data using the Export tool and Import it into another table; you will 
> see the entries below:
> Delete Row1 with DeleteFamily Marker
> Put Row1 with Value-B
> Put Row1 with Value-A
> One DeleteFamily marker is missing here... In the Import tool's 
> Importer.writeResult() method we are batching all deletes into a single 
> Delete request and pushing it into hbase. Here we are pushing only one delete 
> family marker into the hbase table.
> I tried the same with a normal HTable.delete command as well. 
> If you pass multiple DeleteFamily markers of a row in a single Delete request 
> to hbase then the table keeps only one. 
> If that is the expected behavior of hbase then we should change the logic in the 
> Import tool to push DeleteFamily markers individually, one by one.
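
A minimal sketch of the one-by-one approach suggested in the last sentence; the helper below is hypothetical, not the actual Import code, and uses only public client API:

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.HTableInterface;
import org.apache.hadoop.hbase.client.Result;

public class ImportDeleteSketch {
  // Write DeleteFamily markers one by one so they don't collapse into a
  // single Delete request; puts and other markers could still be batched.
  static void writeDeleteFamilyMarkers(HTableInterface table, Result row)
      throws IOException {
    for (Cell cell : row.rawCells()) {
      if (CellUtil.isDeleteFamily(cell)) {
        Delete d = new Delete(CellUtil.cloneRow(cell));
        d.deleteFamily(CellUtil.cloneFamily(cell), cell.getTimestamp());
        table.delete(d); // one RPC per marker, preserving every marker
      }
    }
  }
}
{code}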



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12065) Import tool is not restoring multiple DeleteFamily markers of a row

2014-09-26 Thread Maddineni Sukumar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150403#comment-14150403
 ] 

Maddineni Sukumar commented on HBASE-12065:
---

Hi Lars,

Along with ReplicationSink I think the copy table tool also has the same logic. I 
will change that and write unit tests for each.


>  Import tool is not restoring multiple DeleteFamily markers of a row
> 
>
> Key: HBASE-12065
> URL: https://issues.apache.org/jira/browse/HBASE-12065
> Project: HBase
>  Issue Type: Bug
>  Components: util
>Affects Versions: 0.98.2
>Reporter: Maddineni Sukumar
>Assignee: Maddineni Sukumar
>Priority: Minor
> Fix For: 2.0.0, 0.98.7, 0.94.24, 0.99.1
>
> Attachments: hbase-12065-fix-2.patch, hbase-12065-fix.patch, 
> hbase-12065-unit-test.patch
>
>
> When a row has more than one DeleteFamily marker, the Import tool does not 
> restore all of the DeleteFamily markers. 
> Scenario: Insert entries into hbase in the order below:
> Put Row1 with Value-A
> Delete Row1 with DeleteFamily Marker
> Put Row1 with Value-B
> Delete Row1 with DeleteFamily Marker
> Export this data using the Export tool and Import it into another table; you will 
> see the entries below:
> Delete Row1 with DeleteFamily Marker
> Put Row1 with Value-B
> Put Row1 with Value-A
> One DeleteFamily marker is missing here... In the Import tool's 
> Importer.writeResult() method we are batching all deletes into a single 
> Delete request and pushing it into hbase. Here we are pushing only one delete 
> family marker into the hbase table.
> I tried the same with a normal HTable.delete command as well. 
> If you pass multiple DeleteFamily markers of a row in a single Delete request 
> to hbase then the table keeps only one. 
> If that is the expected behavior of hbase then we should change the logic in the 
> Import tool to push DeleteFamily markers individually, one by one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12052) BulkLoad Failed due to no write permission on input files

2014-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150400#comment-14150400
 ] 

Hudson commented on HBASE-12052:


FAILURE: Integrated in HBase-1.0 #235 (See 
[https://builds.apache.org/job/HBase-1.0/235/])
HBASE-12052: BulkLoad Failed due to no write permission on input files - 
Addendum (jeffreyz: rev b22e670b49d71de1688ea22368b9c768d00b25da)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFilesSplitRecovery.java


> BulkLoad Failed due to no write permission on input files
> -
>
> Key: HBASE-12052
> URL: https://issues.apache.org/jira/browse/HBASE-12052
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0, 0.98.6
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Fix For: 0.98.7, 0.99.1
>
> Attachments: HBASE-12052.patch
>
>
> The issue is that HBase bulkload is done by the Region Server, which normally runs 
> as the hbase user, while the input hfile folder and the user who starts the bulkload 
> could belong to any user.
> Below is the error message when user "hrt_qa" bulkloads files on which "hrt_qa" 
> has write permission, yet the bulkload operation still fails with a 
> "Permission denied" error.
> We had similar handling for this issue in a secure env, so the proposed fix is 
> to reuse SecureBulkLoadEndPoint in an un-secure env as well. In the future, we 
> can rename the class to BulkLoadEndPoint.
> {noformat}
> java.io.IOException: Exception in rename
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.rename(HRegionFileSystem.java:947)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:347)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:421)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:723)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3603)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3525)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFile(HRegionServer.java:3276)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28863)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.hadoop.security.AccessControlException: Permission 
> denied: user=hbase, access=WRITE, 
> inode="/tmp/a0f3ee35-4c8f-4077-93d0-94d8e5bae914/0":hrt_qa:hdfs:drwxr-xr-x
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:265)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:251)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:232)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5515)
> {noformat}
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12065) Import tool is not restoring multiple DeleteFamily markers of a row

2014-09-26 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-12065:
--
Fix Version/s: 0.99.1
   0.94.24
   0.98.7
   2.0.0

>  Import tool is not restoring multiple DeleteFamily markers of a row
> 
>
> Key: HBASE-12065
> URL: https://issues.apache.org/jira/browse/HBASE-12065
> Project: HBase
>  Issue Type: Bug
>  Components: util
>Affects Versions: 0.98.2
>Reporter: Maddineni Sukumar
>Assignee: Maddineni Sukumar
>Priority: Minor
> Fix For: 2.0.0, 0.98.7, 0.94.24, 0.99.1
>
> Attachments: hbase-12065-fix-2.patch, hbase-12065-fix.patch, 
> hbase-12065-unit-test.patch
>
>
> When a row has more than one DeleteFamily marker, the Import tool does not 
> restore all of the DeleteFamily markers. 
> Scenario: Insert entries into hbase in the order below:
> Put Row1 with Value-A
> Delete Row1 with DeleteFamily Marker
> Put Row1 with Value-B
> Delete Row1 with DeleteFamily Marker
> Export this data using the Export tool and Import it into another table; you will 
> see the entries below:
> Delete Row1 with DeleteFamily Marker
> Put Row1 with Value-B
> Put Row1 with Value-A
> One DeleteFamily marker is missing here... In the Import tool's 
> Importer.writeResult() method we are batching all deletes into a single 
> Delete request and pushing it into hbase. Here we are pushing only one delete 
> family marker into the hbase table.
> I tried the same with a normal HTable.delete command as well. 
> If you pass multiple DeleteFamily markers of a row in a single Delete request 
> to hbase then the table keeps only one. 
> If that is the expected behavior of hbase then we should change the logic in the 
> Import tool to push DeleteFamily markers individually, one by one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12065) Import tool is not restoring multiple DeleteFamily markers of a row

2014-09-26 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150395#comment-14150395
 ] 

Lars Hofhansl commented on HBASE-12065:
---

BTW. ReplicationSink has the same logic. Wanna fix it there too 
[~sukuna...@gmail.com]? No problem if not, I can fix it there.

>  Import tool is not restoring multiple DeleteFamily markers of a row
> 
>
> Key: HBASE-12065
> URL: https://issues.apache.org/jira/browse/HBASE-12065
> Project: HBase
>  Issue Type: Bug
>  Components: util
>Affects Versions: 0.98.2
>Reporter: Maddineni Sukumar
>Assignee: Maddineni Sukumar
>Priority: Minor
> Fix For: 2.0.0, 0.98.7, 0.94.24, 0.99.1
>
> Attachments: hbase-12065-fix-2.patch, hbase-12065-fix.patch, 
> hbase-12065-unit-test.patch
>
>
> When a row has more than one DeleteFamily marker, the Import tool does not 
> restore all of the DeleteFamily markers. 
> Scenario: Insert entries into hbase in the order below:
> Put Row1 with Value-A
> Delete Row1 with DeleteFamily Marker
> Put Row1 with Value-B
> Delete Row1 with DeleteFamily Marker
> Export this data using the Export tool and Import it into another table; you will 
> see the entries below:
> Delete Row1 with DeleteFamily Marker
> Put Row1 with Value-B
> Put Row1 with Value-A
> One DeleteFamily marker is missing here... In the Import tool's 
> Importer.writeResult() method we are batching all deletes into a single 
> Delete request and pushing it into hbase. Here we are pushing only one delete 
> family marker into the hbase table.
> I tried the same with a normal HTable.delete command as well. 
> If you pass multiple DeleteFamily markers of a row in a single Delete request 
> to hbase then the table keeps only one. 
> If that is the expected behavior of hbase then we should change the logic in the 
> Import tool to push DeleteFamily markers individually, one by one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12065) Import tool is not restoring multiple DeleteFamily markers of a row

2014-09-26 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150394#comment-14150394
 ] 

Lars Hofhansl commented on HBASE-12065:
---

Your fix-2 being on top of the first patch threw me off. Normally we'd only 
attach patches here that apply against the unchanged code.

>  Import tool is not restoring multiple DeleteFamily markers of a row
> 
>
> Key: HBASE-12065
> URL: https://issues.apache.org/jira/browse/HBASE-12065
> Project: HBase
>  Issue Type: Bug
>  Components: util
>Affects Versions: 0.98.2
>Reporter: Maddineni Sukumar
>Assignee: Maddineni Sukumar
>Priority: Minor
> Attachments: hbase-12065-fix-2.patch, hbase-12065-fix.patch, 
> hbase-12065-unit-test.patch
>
>
> When a row has more than one DeleteFamily marker, the Import tool does not 
> restore all of the DeleteFamily markers. 
> Scenario: Insert entries into hbase in the order below:
> Put Row1 with Value-A
> Delete Row1 with DeleteFamily Marker
> Put Row1 with Value-B
> Delete Row1 with DeleteFamily Marker
> Export this data using the Export tool and Import it into another table; you will 
> see the entries below:
> Delete Row1 with DeleteFamily Marker
> Put Row1 with Value-B
> Put Row1 with Value-A
> One DeleteFamily marker is missing here... In the Import tool's 
> Importer.writeResult() method we are batching all deletes into a single 
> Delete request and pushing it into hbase. Here we are pushing only one delete 
> family marker into the hbase table.
> I tried the same with a normal HTable.delete command as well. 
> If you pass multiple DeleteFamily markers of a row in a single Delete request 
> to hbase then the table keeps only one. 
> If that is the expected behavior of hbase then we should change the logic in the 
> Import tool to push DeleteFamily markers individually, one by one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12065) Import tool is not restoring multiple DeleteFamily markers of a row

2014-09-26 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150392#comment-14150392
 ] 

Lars Hofhansl commented on HBASE-12065:
---

Never mind. You write it out immediately in this case. All good. Albeit a bit 
less efficient.

>  Import tool is not restoring multiple DeleteFamily markers of a row
> 
>
> Key: HBASE-12065
> URL: https://issues.apache.org/jira/browse/HBASE-12065
> Project: HBase
>  Issue Type: Bug
>  Components: util
>Affects Versions: 0.98.2
>Reporter: Maddineni Sukumar
>Assignee: Maddineni Sukumar
>Priority: Minor
> Attachments: hbase-12065-fix-2.patch, hbase-12065-fix.patch, 
> hbase-12065-unit-test.patch
>
>
> When a row has more than one DeleteFamily marker, the Import tool does not 
> restore all of the DeleteFamily markers. 
> Scenario: Insert entries into hbase in the order below:
> Put Row1 with Value-A
> Delete Row1 with DeleteFamily Marker
> Put Row1 with Value-B
> Delete Row1 with DeleteFamily Marker
> Export this data using the Export tool and Import it into another table; you will 
> see the entries below:
> Delete Row1 with DeleteFamily Marker
> Put Row1 with Value-B
> Put Row1 with Value-A
> One DeleteFamily marker is missing here... In the Import tool's 
> Importer.writeResult() method we are batching all deletes into a single 
> Delete request and pushing it into hbase. Here we are pushing only one delete 
> family marker into the hbase table.
> I tried the same with a normal HTable.delete command as well. 
> If you pass multiple DeleteFamily markers of a row in a single Delete request 
> to hbase then the table keeps only one. 
> If that is the expected behavior of hbase then we should change the logic in the 
> Import tool to push DeleteFamily markers individually, one by one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12112) Avoid KeyValueUtil#ensureKeyValue some more simple cases

2014-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150391#comment-14150391
 ] 

stack commented on HBASE-12112:
---

Can we avoid creating an iterator here?

  for (Cell cell : rs.rawCells()) {

This cellToStringMap thing is kinda crazy (It's not you, you are just replacing 
what was there). Is that necessary?

I don't get getFlatKey even now.  It is just the key part serialized as kv?  
Should it be in CellUtil?  Can we give it a better name? Should we even be doing 
this getFlatKey serialization?  Are we perpetuating the old KV serialization 
surreptitiously here?  Shall we call it getKey?  Should we make a new object 
Key that has the Key stuff from Cell in it?

ClonedSeekerState is serializing the same way KV did?  That is why the heapsize 
calc works?

In CellComparator is there something that does the below?  If not, should we 
add it?

int result = Bytes.compareTo(cell.getRowArray(), 
    cell.getRowOffset(), cell.getRowLength(),
    row, 0, row.length);


CellComparator I'm pretty sure has means of comparing two Cells.

if (Bytes.compareTo(pCell.getRowArray(), pCell.getRowOffset(), 
    pCell.getRowLength(),
    cell.getRowArray(), cell.getRowOffset(), 
    cell.getRowLength()) > 0) {

Very nice cleanup.
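
For reference, the same row comparison written as a self-contained, hypothetical helper; it uses only the Cell getters and Bytes.compareTo quoted above, both public API, and the class and method names are made up:

{code}
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.util.Bytes;

public class RowCompareSketch {
  // Hypothetical helper: compare a Cell's row to a plain byte[] without
  // materializing a KeyValue; helpers like this are what the comment above
  // suggests hoisting into CellComparator/CellUtil.
  static int compareRowTo(Cell cell, byte[] row) {
    return Bytes.compareTo(cell.getRowArray(), cell.getRowOffset(),
        cell.getRowLength(), row, 0, row.length);
  }
}
{code}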




> Avoid KeyValueUtil#ensureKeyValue some more simple cases
> 
>
> Key: HBASE-12112
> URL: https://issues.apache.org/jira/browse/HBASE-12112
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0, 0.99.1
>
> Attachments: HBASE-12112.patch
>
>
> This include fixes with
> - Replace KeyValue#heapSize() with CellUtil#estimatedHeapSizeOf(Cell)
> - Printing the key portion of a cell (rk+cf+q+ts+type). These are in 
> Exception messages
> - HFilePrettyPrinter - Avoiding ensureKeyValue() calls and calls to 
> cell#getxxx() which involve byte copying. This is not a hot area, but we can 
> still avoid as much usage of deprecated methods as possible in core code. I 
> believe these byte-copying methods are used in many other parts and later we 
> can try fixing those as per area importance
> - Creating CellUtil#createKeyOnlyCell and using that in KeyOnlyFilter
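
As a small illustration of the first bullet, a hedged before/after sketch; CellUtil#estimatedHeapSizeOf is the method named in the description, while the Cell construction here is arbitrary test data:

{code}
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.CellUtil;
import org.apache.hadoop.hbase.KeyValue;
import org.apache.hadoop.hbase.util.Bytes;

public class HeapSizeMigration {
  public static void main(String[] args) {
    Cell cell = new KeyValue(Bytes.toBytes("r"), Bytes.toBytes("f"),
        Bytes.toBytes("q"), Bytes.toBytes("v"));
    // Old: long size = KeyValueUtil.ensureKeyValue(cell).heapSize();
    // New: no KeyValue materialization needed.
    long size = CellUtil.estimatedHeapSizeOf(cell);
    System.out.println(size);
  }
}
{code}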



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12065) Import tool is not restoring multiple DeleteFamily markers of a row

2014-09-26 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12065?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150390#comment-14150390
 ] 

Lars Hofhansl commented on HBASE-12065:
---

So looking at the patch... If we ever get into a situation where we read a 
column delete marker followed by a family delete marker for the same row, we'd 
overwrite the column marker and hence not store it.
Not sure that would happen normally, but it is possible, as we have no 
guarantees about this when reading from a sequence file.

Looks good otherwise.

>  Import tool is not restoring multiple DeleteFamily markers of a row
> 
>
> Key: HBASE-12065
> URL: https://issues.apache.org/jira/browse/HBASE-12065
> Project: HBase
>  Issue Type: Bug
>  Components: util
>Affects Versions: 0.98.2
>Reporter: Maddineni Sukumar
>Assignee: Maddineni Sukumar
>Priority: Minor
> Attachments: hbase-12065-fix-2.patch, hbase-12065-fix.patch, 
> hbase-12065-unit-test.patch
>
>
> When a row has more than one DeleteFamily marker, the Import tool does not 
> restore all of the DeleteFamily markers. 
> Scenario: Insert entries into hbase in the order below:
> Put Row1 with Value-A
> Delete Row1 with DeleteFamily Marker
> Put Row1 with Value-B
> Delete Row1 with DeleteFamily Marker
> Export this data using the Export tool and Import it into another table; you will 
> see the entries below:
> Delete Row1 with DeleteFamily Marker
> Put Row1 with Value-B
> Put Row1 with Value-A
> One DeleteFamily marker is missing here... In the Import tool's 
> Importer.writeResult() method we are batching all deletes into a single 
> Delete request and pushing it into hbase. Here we are pushing only one delete 
> family marker into the hbase table.
> I tried the same with a normal HTable.delete command as well. 
> If you pass multiple DeleteFamily markers of a row in a single Delete request 
> to hbase then the table keeps only one. 
> If that is the expected behavior of hbase then we should change the logic in the 
> Import tool to push DeleteFamily markers individually, one by one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-5622) Improve efficiency of mapred version of RowCounter

2014-09-26 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-5622?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150388#comment-14150388
 ] 

Lars Hofhansl commented on HBASE-5622:
--

HBASE-4657 does a bit more stuff (or is that already incorporated?)

> Improve efficiency of mapred version of RowCounter
> -
>
> Key: HBASE-5622
> URL: https://issues.apache.org/jira/browse/HBASE-5622
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Lars Hofhansl
>Assignee: Ashish Singhi
>Priority: Minor
> Attachments: HBASE-5622.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12052) BulkLoad Failed due to no write permission on input files

2014-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150387#comment-14150387
 ] 

Hudson commented on HBASE-12052:


FAILURE: Integrated in HBase-0.98 #545 (See 
[https://builds.apache.org/job/HBase-0.98/545/])
HBASE-12052: BulkLoad Failed due to no write permission on input files 
(jeffreyz: rev ed5980e0079372ccb978c3d5f09bb0d08fd2853a)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFilesSplitRecovery.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFilesUseSecurityEndPoint.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/FsDelegationToken.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java


> BulkLoad Failed due to no write permission on input files
> -
>
> Key: HBASE-12052
> URL: https://issues.apache.org/jira/browse/HBASE-12052
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0, 0.98.6
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Fix For: 0.98.7, 0.99.1
>
> Attachments: HBASE-12052.patch
>
>
> The issue is that HBase bulkload is done by the Region Server, which normally runs 
> as the hbase user, while the input hfile folder and the user who starts the bulkload 
> could belong to any user.
> Below is the error message when user "hrt_qa" bulkloads files on which "hrt_qa" 
> has write permission, yet the bulkload operation still fails with a 
> "Permission denied" error.
> We had similar handling for this issue in a secure env, so the proposed fix is 
> to reuse SecureBulkLoadEndPoint in an un-secure env as well. In the future, we 
> can rename the class to BulkLoadEndPoint.
> {noformat}
> java.io.IOException: Exception in rename
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.rename(HRegionFileSystem.java:947)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:347)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:421)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:723)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3603)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3525)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFile(HRegionServer.java:3276)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28863)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.hadoop.security.AccessControlException: Permission 
> denied: user=hbase, access=WRITE, 
> inode="/tmp/a0f3ee35-4c8f-4077-93d0-94d8e5bae914/0":hrt_qa:hdfs:drwxr-xr-x
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:265)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:251)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:232)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5515)
> {noformat}
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12039) Lower log level for TableNotFoundException log message when throwing

2014-09-26 Thread James Taylor (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12039?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150386#comment-14150386
 ] 

James Taylor commented on HBASE-12039:
--

Changing the log level to INFO or DEBUG solves the issue for us (as does removing 
the log line altogether). Leaving it as-is doesn't, though :-)

> Lower log level for TableNotFoundException log message when throwing
> 
>
> Key: HBASE-12039
> URL: https://issues.apache.org/jira/browse/HBASE-12039
> Project: HBase
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: stack
>Priority: Minor
> Fix For: 0.98.7, 0.94.25
>
> Attachments: 12039-0.94.txt, 12039.txt
>
>
> Our HBase client tries to get the HTable descriptor for a table that may or 
> may not exist. We catch and ignore the TableNotFoundException if it occurs, 
> but the log message appears regardless, which confuses our users. Would 
> it be possible to lower the log level of this message, since the exception is 
> already being thrown (making it up to the caller how they want to handle it)?
> 14/09/20 20:01:54 WARN client.HConnectionManager$HConnectionImplementation: 
> Encountered problems when prefetch META table: 
> org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for 
> table: _IDX_TEST.TESTING, row=_IDX_TEST.TESTING,,99
> at 
> org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:151)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:1059)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1121)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1001)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:958)
> at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:251)
> at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:243)
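
A hypothetical client-side probe matching the description; the class and method names here are made up, while HConnection#getHTableDescriptor and TableNotFoundException are the era's client API:

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableNotFoundException;
import org.apache.hadoop.hbase.client.HConnection;

public class TableProbe {
  // The exception is caught and ignored, yet the WARN from the META prefetch
  // is still logged by the connection, which is what the report is about.
  static HTableDescriptor descriptorIfExists(HConnection conn, byte[] table)
      throws IOException {
    try {
      return conn.getHTableDescriptor(table);
    } catch (TableNotFoundException e) {
      return null; // expected for tables that may not exist yet
    }
  }
}
{code}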



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12090) Bytes: more Unsafe, more Faster

2014-09-26 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150383#comment-14150383
 ] 

Lars Hofhansl commented on HBASE-12090:
---

bq. Yes, Scan shows improvement in both tables.

For Gets it's probably shadowed by other overhead - also see HBASE-11811. In 
any case this is a good improvement.


> Bytes: more Unsafe, more Faster 
> 
>
> Key: HBASE-12090
> URL: https://issues.apache.org/jira/browse/HBASE-12090
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.94.23, 0.98.6
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 0.98.7, 0.94.24, 0.99.1
>
> Attachments: 12090-v1.1.txt, HBASE-12090.2.patch, HBASE-12090.patch
>
>
> Additional optimizations to *org.apache.hadoop.hbase.util.Bytes*:
> * New version of the compareTo method.
> * New versions of the primitive converters: putXXX/toXXX.
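
To make the compareTo idea concrete, a minimal sketch of word-at-a-time lexicographic comparison; this is illustrative only, with made-up names, and it deliberately uses a plain big-endian read instead of Unsafe so it stays runnable as-is (the real Bytes comparer additionally handles offsets, alignment, and host endianness):

{code}
public class WordCompareSketch {
  // Compare lexicographically, 8 bytes per step, then a byte-by-byte tail.
  static int compare(byte[] a, byte[] b) {
    int minLen = Math.min(a.length, b.length);
    int i = 0;
    for (; i + 8 <= minLen; i += 8) {
      long la = readLongBE(a, i);
      long lb = readLongBE(b, i);
      if (la != lb) {
        // Unsigned long comparison preserves lexicographic byte order.
        return (la + Long.MIN_VALUE) < (lb + Long.MIN_VALUE) ? -1 : 1;
      }
    }
    for (; i < minLen; i++) {
      int d = (a[i] & 0xff) - (b[i] & 0xff);
      if (d != 0) return d;
    }
    return a.length - b.length;
  }

  // Big-endian read so the first differing byte dominates the comparison.
  private static long readLongBE(byte[] x, int off) {
    long v = 0;
    for (int k = 0; k < 8; k++) v = (v << 8) | (x[off + k] & 0xffL);
    return v;
  }
}
{code}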



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12112) Avoid KeyValueUtil#ensureKeyValue some more simple cases

2014-09-26 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-12112:
--
Assignee: Anoop Sam John  (was: Lars Hofhansl)

Whoops... Jira key-press glitch in search overview... Assigned back to Anoop.

> Avoid KeyValueUtil#ensureKeyValue some more simple cases
> 
>
> Key: HBASE-12112
> URL: https://issues.apache.org/jira/browse/HBASE-12112
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0, 0.99.1
>
> Attachments: HBASE-12112.patch
>
>
> This include fixes with
> - Replace KeyValue#heapSize() with CellUtil#estimatedHeapSizeOf(Cell)
> - Printing the key portion of a cell (rk+cf+q+ts+type). These are in 
> Exception messages
> - HFilePrettyPrinter - Avoiding ensureKeyValue() calls and calls to 
> cell#getxxx() which involve byte copying. This is not a hot area, but we can 
> still avoid as much usage of deprecated methods as possible in core code. I 
> believe these byte-copying methods are used in many other parts and later we 
> can try fixing those as per area importance
> - Creating CellUtil#createKeyOnlyCell and using that in KeyOnlyFilter



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-12112) Avoid KeyValueUtil#ensureKeyValue some more simple cases

2014-09-26 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl reassigned HBASE-12112:
-

Assignee: Lars Hofhansl  (was: Anoop Sam John)

> Avoid KeyValueUtil#ensureKeyValue some more simple cases
> 
>
> Key: HBASE-12112
> URL: https://issues.apache.org/jira/browse/HBASE-12112
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: Anoop Sam John
>Assignee: Lars Hofhansl
> Fix For: 2.0.0, 0.99.1
>
> Attachments: HBASE-12112.patch
>
>
> This include fixes with
> - Replace KeyValue#heapSize() with CellUtil#estimatedHeapSizeOf(Cell)
> - Printing the key portion of a cell (rk+cf+q+ts+type). These are in 
> Exception messages
> - HFilePrettyPrinter - Avoiding ensureKeyValue() calls and calls to 
> cell#getxxx() which involve byte copying. This is not a hot area, but we can 
> still avoid as much usage of deprecated methods as possible in core code. I 
> believe these byte-copying methods are used in many other parts and later we 
> can try fixing those as per area importance
> - Creating CellUtil#createKeyOnlyCell and using that in KeyOnlyFilter



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12039) Lower log level for TableNotFoundException log message when throwing

2014-09-26 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12039?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-12039:
--
Fix Version/s: (was: 0.94.24)
   0.94.25

> Lower log level for TableNotFoundException log message when throwing
> 
>
> Key: HBASE-12039
> URL: https://issues.apache.org/jira/browse/HBASE-12039
> Project: HBase
>  Issue Type: Bug
>Reporter: James Taylor
>Assignee: stack
>Priority: Minor
> Fix For: 0.98.7, 0.94.25
>
> Attachments: 12039-0.94.txt, 12039.txt
>
>
> Our HBase client tries to get the HTable descriptor for a table that may or 
> may not exist. We catch and ignore the TableNotFoundException if it occurs, 
> but the log message appears regardless, which confuses our users. Would 
> it be possible to lower the log level of this message, since the exception is 
> already being thrown (making it up to the caller how they want to handle it)?
> 14/09/20 20:01:54 WARN client.HConnectionManager$HConnectionImplementation: 
> Encountered problems when prefetch META table: 
> org.apache.hadoop.hbase.TableNotFoundException: Cannot find row in .META. for 
> table: _IDX_TEST.TESTING, row=_IDX_TEST.TESTING,,99
> at 
> org.apache.hadoop.hbase.client.MetaScanner.metaScan(MetaScanner.java:151)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.prefetchRegionCache(HConnectionManager.java:1059)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegionInMeta(HConnectionManager.java:1121)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:1001)
> at 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation.locateRegion(HConnectionManager.java:958)
> at org.apache.hadoop.hbase.client.HTable.finishSetup(HTable.java:251)
> at org.apache.hadoop.hbase.client.HTable.<init>(HTable.java:243)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12103) Backport HFileV1Detector to 0.94

2014-09-26 Thread Lars Hofhansl (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lars Hofhansl updated HBASE-12103:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to 0.94. Thanks [~jeffreyz].

> Backport HFileV1Detector to 0.94
> 
>
> Key: HBASE-12103
> URL: https://issues.apache.org/jira/browse/HBASE-12103
> Project: HBase
>  Issue Type: Task
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Fix For: 0.94.24
>
> Attachments: HBASE-12103.patch
>
>
> The reason for back porting is that 0.96+ HBase doesn't support the HFileV1 format 
> anymore, while the HFileV1Detector in 0.96+, which is supposed to check for v1 hfiles 
> before an upgrade, can't talk to a pre-0.96 cluster because the underlying hadoop 
> isn't compatible either due to the protocol buffer version bump.
> After the back port, a user can just download the latest 0.94 HBase tar ball 
> on a machine and point its config to a 0.94 cluster to check for v1 file existence 
> by running the following command:
> {code}
> ./bin/hbase org.apache.hadoop.hbase.util.HFileV1Detector -p <hdfs path>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12090) Bytes: more Unsafe, more Faster

2014-09-26 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150380#comment-14150380
 ] 

Vladimir Rodionov commented on HBASE-12090:
---

[~saint@gmail.com]:
{quote}
Your thinking is that TestBytes is coverage enough for this new code
{quote}

I concur with Lars; the new code is exercised during normal operations in the 
tests. If it were broken, 50% of HBase tests would fail.

[~lhofhansl]:
{quote}
Did you expect another result?
{quote}

Yes, Scan shows improvement in both tables.

> Bytes: more Unsafe, more Faster 
> 
>
> Key: HBASE-12090
> URL: https://issues.apache.org/jira/browse/HBASE-12090
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.94.23, 0.98.6
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 0.98.7, 0.94.24, 0.99.1
>
> Attachments: 12090-v1.1.txt, HBASE-12090.2.patch, HBASE-12090.patch
>
>
> Additional optimizations to *org.apache.hadoop.hbase.util.Bytes*:
> * New version of compareTo method.
> * New versions for primitive converters  : putXXX/toXXX.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12103) Backport HFileV1Detector to 0.94

2014-09-26 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12103?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150378#comment-14150378
 ] 

Lars Hofhansl commented on HBASE-12103:
---

Is that because we EOL'ed 0.96?

In any case, +1. Thanks Enis.


> Backport HFileV1Detector to 0.94
> 
>
> Key: HBASE-12103
> URL: https://issues.apache.org/jira/browse/HBASE-12103
> Project: HBase
>  Issue Type: Task
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Fix For: 0.94.24
>
> Attachments: HBASE-12103.patch
>
>
> The reason for the backport is that 0.96+ HBase doesn't support the HFileV1 
> format anymore, while HFileV1Detector in 0.96+, which is supposed to check 
> for v1 hfiles before upgrade, can't talk to a pre-0.96 cluster because the 
> underlying hadoop isn't compatible either, due to the protocol buffer version 
> bump.
> After the backport, a user can just download the latest 0.94 HBase tarball 
> on a machine and point its config at a 0.94 cluster to check for v1 file 
> existence by running the following command:
> {code}
> ./bin/hbase org.apache.hadoop.hbase.util.HFileV1Detector -p <hfile path>
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12112) Avoid KeyValueUtil#ensureKeyValue some more simple cases

2014-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150374#comment-14150374
 ] 

Hadoop QA commented on HBASE-12112:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12671616/HBASE-12112.patch
  against trunk revision .
  ATTACHMENT ID: 12671616

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 1 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestMultiParallel

 {color:red}-1 core zombie tests{color}.  There are 2 zombie test(s):   
at 
org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFilesSplitRecovery.testSplitWhileBulkLoadPhase(TestLoadIncrementalHFilesSplitRecovery.java:339)
at 
org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.testSimpleLoad(TestLoadIncrementalHFiles.java:100)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11108//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11108//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11108//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11108//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11108//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11108//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11108//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11108//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11108//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11108//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11108//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11108//console

This message is automatically generated.

> Avoid KeyValueUtil#ensureKeyValue some more simple cases
> 
>
> Key: HBASE-12112
> URL: https://issues.apache.org/jira/browse/HBASE-12112
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0, 0.99.1
>
> Attachments: HBASE-12112.patch
>
>
> This includes the following fixes:
> - Replace KeyValue#heapSize() with CellUtil#estimatedHeapSizeOf(Cell)
> - Printing the key portion of a cell (rk+cf+q+ts+type); these appear in 
> Exception messages
> - HFilePrettyPrinter - avoiding ensureKeyValue() calls and calls to 
> cell#getxxx(), which involve byte copying. This is not a hot area, but we 
> can still avoid as much usage of deprecated methods as possible in core code. 
> I believe these byte-copying methods are used in many other parts, and later 
> we can try fixing those in order of importance
> - Creating CellUtil#createKeyOnlyCell and using that in KeyOnlyFilter



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12040) Performances issues with FilteredScanTest

2014-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150368#comment-14150368
 ] 

stack commented on HBASE-12040:
---

When I compare on a cluster doing 100 scans, 0.99.0 dev release is 30% faster 
than 0.98.  This is one run of 0.98.6 and two of 0.99.0.

Let me use JMS's script and see what I get over ten runs.




> Performances issues with FilteredScanTest 
> --
>
> Key: HBASE-12040
> URL: https://issues.apache.org/jira/browse/HBASE-12040
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.1
>Reporter: Jean-Marc Spaggiari
>Assignee: stack
>Priority: Blocker
> Fix For: 0.98.7, 0.99.1
>
> Attachments: at-HBASE-11331.html, pre-HBASE-11331.html
>
>
> While testing 0.99.0RC1 release performance compared to 0.98.6, I found that:
> - FilteredScanTest is 100 times slower;
> - RandomReadTest is 1.5 times slower;
> - RandomSeekScanTest is 3.2 times slower;
> - RandomScanWithRange10Test is 1.2 times slower;
> - RandomScanWithRange100Test is 1.3 times slower;
> - RandomScanWithRange1000Test is 4 times slower;
> - SequentialReadTest is 1.7 times slower;
> - SequentialWriteTest is just a bit faster;
> - RandomWriteTest is just a bit faster;
> - GaussianRandomReadBenchmark is just a bit slower;
> - SequentialReadBenchmark is 1.1 times slower;
> - SequentialWriteBenchmark is 1.1 times slower;
> - UniformRandomReadBenchmark crashed;
> - UniformRandomSmallScan is 1.3 times slower.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12090) Bytes: more Unsafe, more Faster

2014-09-26 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150367#comment-14150367
 ] 

Lars Hofhansl commented on HBASE-12090:
---

I can confirm the scan numbers (tested with a table with 5 cols, 1 version). 
Nice!

bq. Your thinking is that TestBytes is coverage enough for this new code 
Vladimir Rodionov?

I think this code is exercised during normal operations in the tests.

Looked through the patch; looks good to me. The only part I am a little fuzzy 
on is the compareTo change, but I can't see anything wrong in it.

+1

[~apurtell], I assume you're cool with this in 0.98. It's a nice speed 
improvement. Will commit to all branches by tomorrow morning unless I hear 
objections.

> Bytes: more Unsafe, more Faster 
> 
>
> Key: HBASE-12090
> URL: https://issues.apache.org/jira/browse/HBASE-12090
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.94.23, 0.98.6
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 0.98.7, 0.94.24, 0.99.1
>
> Attachments: 12090-v1.1.txt, HBASE-12090.2.patch, HBASE-12090.patch
>
>
> Additional optimizations to *org.apache.hadoop.hbase.util.Bytes*:
> * New version of compareTo method.
> * New versions for primitive converters  : putXXX/toXXX.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12052) BulkLoad Failed due to no write permission on input files

2014-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150359#comment-14150359
 ] 

Hudson commented on HBASE-12052:


FAILURE: Integrated in HBase-TRUNK #5566 (See 
[https://builds.apache.org/job/HBase-TRUNK/5566/])
HBASE-12052: BulkLoad Failed due to no write permission on input files - 
Addendum (jeffreyz: rev 4e56a19cf168e69fe002fe97132bed9a3fbcd0f0)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFilesSplitRecovery.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFiles.java


> BulkLoad Failed due to no write permission on input files
> -
>
> Key: HBASE-12052
> URL: https://issues.apache.org/jira/browse/HBASE-12052
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0, 0.98.6
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Fix For: 0.98.7, 0.99.1
>
> Attachments: HBASE-12052.patch
>
>
> The issue is that HBase bulkload is done by the Region Server, which normally 
> runs as the hbase user, while the input hfile folder and the user starting 
> the bulkload could be any user.
> Below is the error message when user "hrt_qa", who has write permission on 
> the input files, bulkloads them and the operation still fails with a 
> "Permission denied" error.
> We had similar handling for this issue in the secure env, so the proposed fix 
> is to reuse SecureBulkLoadEndPoint in the un-secure env as well. In the 
> future, we can rename the class to BulkLoadEndPoint.
> {noformat}
> java.io.IOException: Exception in rename
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.rename(HRegionFileSystem.java:947)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:347)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:421)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:723)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3603)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3525)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFile(HRegionServer.java:3276)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28863)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.hadoop.security.AccessControlException: Permission 
> denied: user=hbase, access=WRITE, 
> inode="/tmp/a0f3ee35-4c8f-4077-93d0-94d8e5bae914/0":hrt_qa:hdfs:drwxr-xr-x
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:265)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:251)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:232)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5515)
> {noformat}
>   
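
A minimal sketch of the client side of such a bulkload, assuming the 0.98-era LoadIncrementalHFiles API; the table name and input path are hypothetical:

{code}
// Bulkload staged hfiles into a table. With the fix, this also works in an
// un-secure env, because SecureBulkLoadEndPoint stages the files so the
// hbase user can move them into place.
Configuration conf = HBaseConfiguration.create();
LoadIncrementalHFiles loader = new LoadIncrementalHFiles(conf);
HTable table = new HTable(conf, "t1");               // hypothetical table
try {
  loader.doBulkLoad(new Path("/tmp/hfiles"), table); // hypothetical input dir
} finally {
  table.close();
}
{code}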



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12075) Preemptive Fast Fail

2014-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12075?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150358#comment-14150358
 ] 

Hadoop QA commented on HBASE-12075:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12671614/0001-Implement-Preemptive-Fast-Fail.patch
  against trunk revision .
  ATTACHMENT ID: 12671614

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 6 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+  private static final RetryingCallerInterceptorContext NO_OP_CONTEXT = 
new NoOpRetryingInterceptorContext();
+  protected final ConcurrentMap repeatedFailuresMap = 
new ConcurrentHashMap();
+  private final ThreadLocal threadRetryingInFastFailMode = new 
ThreadLocal();
+return (fInfo != null && System.currentTimeMillis() > 
(fInfo.timeOfFirstFailureMilliSec + this.fastFailThresholdMilliSec));
+ * The {@link RetryingCallerInterceptor} also acts as a factory for getting a 
new {@link RetryingCallerInterceptorContext}.
+PreemptiveFastFailInterceptor interceptor = 
(PreemptiveFastFailInterceptor) interceptorBeforeCast;
+"We should be getting a FastFailInterceptorContext since we are 
interacting with the PreemptiveFastFailInterceptor",
++ "but it is not assigned to the context yet. It would be assigned 
on the next intercept.",
+new SyncFailedException("Dave is not on the same page"), new 
TimeoutException("Mike is late again"),

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   
org.apache.hadoop.hbase.client.TestFastFailWithoutTestUtil

 {color:red}-1 core zombie tests{color}.  There are 2 zombie test(s):   
at 
org.apache.hadoop.hbase.mapreduce.TestImportTSVWithVisibilityLabels.testMRWithOutputFormat(TestImportTSVWithVisibilityLabels.java:269)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11107//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11107//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11107//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11107//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11107//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11107//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11107//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11107//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11107//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11107//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11107//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11107//console

This message is automatically generated.

> Preemptive Fast Fail
> 
>
> Key: HBASE-12075
> URL: https://issues.apache.org/jira/browse/HBASE-12075
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 1.0.0
>Reporter: Manukranth Kolloju
>Assignee: Manukranth Kolloju
> Fix For: 1.0.0
>
> Attachments: 0001-Add-a-test-case-for-Preemptive-Fast-Fail.patch, 
> 0001-Implement-Preemptive-Fast-Fail.patch, 
> 0001-Implement-Preemptive

[jira] [Commented] (HBASE-12090) Bytes: more Unsafe, more Faster

2014-09-26 Thread Lars Hofhansl (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150357#comment-14150357
 ] 

Lars Hofhansl commented on HBASE-12090:
---

[~vrodionov]
bq. I need to find explanation why patched version does not show significant 
improvement in a GET - Narrow Table test.
Did you expect another result? Or see another result with another version of 
the patch?


> Bytes: more Unsafe, more Faster 
> 
>
> Key: HBASE-12090
> URL: https://issues.apache.org/jira/browse/HBASE-12090
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.94.23, 0.98.6
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 0.98.7, 0.94.24, 0.99.1
>
> Attachments: 12090-v1.1.txt, HBASE-12090.2.patch, HBASE-12090.patch
>
>
> Additional optimizations to *org.apache.hadoop.hbase.util.Bytes*:
> * New version of compareTo method.
> * New versions for primitive converters  : putXXX/toXXX.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12072) We are doing 35 x 35 retries for master operations

2014-09-26 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150356#comment-14150356
 ] 

Ted Yu commented on HBASE-12072:


Correction:
0.99+ would also have this problem (kudos to Enis) since:
{code}
this.operationTimeout = 
this.conf.getInt(HConstants.HBASE_CLIENT_OPERATION_TIMEOUT,
  HConstants.DEFAULT_HBASE_CLIENT_OPERATION_TIMEOUT);
{code}
DEFAULT_HBASE_CLIENT_OPERATION_TIMEOUT is Integer.MAX_VALUE.
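
As a workaround until the nested retries are fixed, callers can bound both knobs explicitly; a sketch, assuming the standard client config keys:

{code}
Configuration conf = HBaseConfiguration.create();
// cap a single master operation (Integer.MAX_VALUE by default on 0.99+)
conf.setInt("hbase.client.operation.timeout", 60000);
// keep the per-call retry count small so 35 x 35 can't happen
conf.setInt("hbase.client.retries.number", 5);
HBaseAdmin admin = new HBaseAdmin(conf);
{code}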

> We are doing 35 x 35 retries for master operations
> --
>
> Key: HBASE-12072
> URL: https://issues.apache.org/jira/browse/HBASE-12072
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.6
>Reporter: Enis Soztutar
>Assignee: Ted Yu
> Attachments: 12072-v1.txt, 12072-v2.txt
>
>
> For master requests, there are two retry mechanisms in effect. The first one 
> is from HBaseAdmin.executeCallable() 
> {code}
>   private  V executeCallable(MasterCallable callable) throws 
> IOException {
> RpcRetryingCaller caller = rpcCallerFactory.newCaller();
> try {
>   return caller.callWithRetries(callable);
> } finally {
>   callable.close();
> }
>   }
> {code}
> And inside, the other one is from StubMaker.makeStub():
> {code}
> /**
>* Create a stub against the master.  Retry if necessary.
>* @return A stub to do intf against the master
>* @throws MasterNotRunningException
>*/
>   @edu.umd.cs.findbugs.annotations.SuppressWarnings 
> (value="SWL_SLEEP_WITH_LOCK_HELD")
>   Object makeStub() throws MasterNotRunningException {
> {code}
> The tests will just hang for 10 min * 35 ~= 6 hours. 
> {code}
> 2014-09-23 16:19:05,151 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 1 of 35 
> failed; retrying after sleep of 100, exception=java.io.IOException: Can't get 
> master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:05,253 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 2 of 35 
> failed; retrying after sleep of 200, exception=java.io.IOException: Can't get 
> master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:05,456 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 3 of 35 
> failed; retrying after sleep of 300, exception=java.io.IOException: Can't get 
> master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:05,759 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 4 of 35 
> failed; retrying after sleep of 500, exception=java.io.IOException: Can't get 
> master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:06,262 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 5 of 35 
> failed; retrying after sleep of 1008, exception=java.io.IOException: Can't 
> get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:07,273 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 6 of 35 
> failed; retrying after sleep of 2011, exception=java.io.IOException: Can't 
> get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:09,286 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 7 of 35 
> failed; retrying after sleep of 4012, exception=java.io.IOException: Can't 
> get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:13,303 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 8 of 35 
> failed; retrying after sleep of 10033, exception=java.io.IOException: Can't 
> get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:23,343 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 9 of 35 
> failed; retrying after sleep of 10089, exception=java.io.IOException: Can't 
> get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:33,439 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 10 of 
> 35 failed; retrying after sleep of 10027, exception=java.io.IOException: 
> Can't get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:43,473 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 11 of 
> 35 failed; retrying after sleep of 10004, exception=java.io.IOException: 
> Can't get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:53,485 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 12 of 
> 35 failed; retrying after sleep of 20160, exception=java.io.IOException: 
> Can't get master address from ZooKeeper; znode data == null
> 2014-09-23 16:20:13,656 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: g

[jira] [Updated] (HBASE-12042) Replace internal uses of HTable(Configuration, String) with HTable(Configuration, TableName)

2014-09-26 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12042?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-12042:
--
Attachment: hbase-12042_v2.patch

It seems that the test failures were relevant. The v2 patch fixes 3 of them.
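
For reference, a sketch of the mechanical change this issue makes to internal callers; the table name is hypothetical:

{code}
// before (String overload, whose internal uses are being replaced):
HTable t1 = new HTable(conf, "mytable");
// after:
HTable t2 = new HTable(conf, TableName.valueOf("mytable"));
{code}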

> Replace internal uses of HTable(Configuration, String) with 
> HTable(Configuration, TableName)
> 
>
> Key: HBASE-12042
> URL: https://issues.apache.org/jira/browse/HBASE-12042
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0, 2.0.0
>Reporter: Solomon Duskis
>Assignee: Solomon Duskis
> Fix For: 2.0.0, 0.99.1
>
> Attachments: HBASE-12042.patch, HBASE-12042.patch, HBASE-12042.patch, 
> hbase-12042_v2.patch
>
>
> Replace internal uses of HTable(Configuration, String) with 
> HTable(Configuration, TableName)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12052) BulkLoad Failed due to no write permission on input files

2014-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150345#comment-14150345
 ] 

Hudson commented on HBASE-12052:


FAILURE: Integrated in HBase-1.0 #234 (See 
[https://builds.apache.org/job/HBase-1.0/234/])
HBASE-12052: BulkLoad Failed due to no write permission on input files 
(jeffreyz: rev ae3e70b6e97f0ec90a4b532a7eb2a480c390803e)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFilesUseSecurityEndPoint.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/FsDelegationToken.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java


> BulkLoad Failed due to no write permission on input files
> -
>
> Key: HBASE-12052
> URL: https://issues.apache.org/jira/browse/HBASE-12052
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0, 0.98.6
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Fix For: 0.98.7, 0.99.1
>
> Attachments: HBASE-12052.patch
>
>
> The issue is that HBase bulkload is done by the Region Server, which normally 
> runs as the hbase user, while the input hfile folder and the user starting 
> the bulkload could be any user.
> Below is the error message when user "hrt_qa", who has write permission on 
> the input files, bulkloads them and the operation still fails with a 
> "Permission denied" error.
> We had similar handling for this issue in the secure env, so the proposed fix 
> is to reuse SecureBulkLoadEndPoint in the un-secure env as well. In the 
> future, we can rename the class to BulkLoadEndPoint.
> {noformat}
> java.io.IOException: Exception in rename
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.rename(HRegionFileSystem.java:947)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:347)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:421)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:723)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3603)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3525)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFile(HRegionServer.java:3276)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28863)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.hadoop.security.AccessControlException: Permission 
> denied: user=hbase, access=WRITE, 
> inode="/tmp/a0f3ee35-4c8f-4077-93d0-94d8e5bae914/0":hrt_qa:hdfs:drwxr-xr-x
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:265)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:251)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:232)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5515)
> {noformat}
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12098) User granted namespace table create permissions can't create a table

2014-09-26 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150344#comment-14150344
 ] 

Anoop Sam John commented on HBASE-12098:


bq.should we consider add a flag
IMHO no, as this is a bug fix. One would expect namespace permissions to work 
for creating tables in that namespace. At most we can mark this jira as an 
incompatible change and clearly say in the release notes what the compatibility 
break is and why we do so.

> User granted namespace table create permissions can't create a table
> 
>
> Key: HBASE-12098
> URL: https://issues.apache.org/jira/browse/HBASE-12098
> Project: HBase
>  Issue Type: Bug
>  Components: Client, security
>Affects Versions: 0.98.6
>Reporter: Dima Spivak
>Assignee: Srikanth Srungarapu
>Priority: Critical
> Fix For: 2.0.0, 0.98.7, 0.99.1
>
> Attachments: 12098-master.txt, HBASE-12098.patch, 
> HBASE-12098_master_v2.patch
>
>
> From the HBase shell and Java API, I am seeing
> {code}ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: 
> Insufficient permissions for user 'dima' (global, action=CREATE){code}
> when I try to create a table in a namespace to which I have been granted 
> RWXCA permissions by a global admin. Interestingly enough, this only seems to 
> extend to table creation; the same user is then allowed to disable and drop a 
> table created by a global admin in that namespace.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12112) Avoid KeyValueUtil#ensureKeyValue some more simple cases

2014-09-26 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-12112:
---
Status: Patch Available  (was: Open)

> Avoid KeyValueUtil#ensureKeyValue some more simple cases
> 
>
> Key: HBASE-12112
> URL: https://issues.apache.org/jira/browse/HBASE-12112
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0, 0.99.1
>
> Attachments: HBASE-12112.patch
>
>
> This includes the following fixes:
> - Replace KeyValue#heapSize() with CellUtil#estimatedHeapSizeOf(Cell)
> - Printing the key portion of a cell (rk+cf+q+ts+type); these appear in 
> Exception messages
> - HFilePrettyPrinter - avoiding ensureKeyValue() calls and calls to 
> cell#getxxx(), which involve byte copying. This is not a hot area, but we 
> can still avoid as much usage of deprecated methods as possible in core code. 
> I believe these byte-copying methods are used in many other parts, and later 
> we can try fixing those in order of importance
> - Creating CellUtil#createKeyOnlyCell and using that in KeyOnlyFilter



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12112) Avoid KeyValueUtil#ensureKeyValue some more simple cases

2014-09-26 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-12112:
---
Attachment: HBASE-12112.patch

> Avoid KeyValueUtil#ensureKeyValue some more simple cases
> 
>
> Key: HBASE-12112
> URL: https://issues.apache.org/jira/browse/HBASE-12112
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0, 0.99.1
>
> Attachments: HBASE-12112.patch
>
>
> This includes the following fixes:
> - Replace KeyValue#heapSize() with CellUtil#estimatedHeapSizeOf(Cell)
> - Printing the key portion of a cell (rk+cf+q+ts+type); these appear in 
> Exception messages
> - HFilePrettyPrinter - avoiding ensureKeyValue() calls and calls to 
> cell#getxxx(), which involve byte copying. This is not a hot area, but we 
> can still avoid as much usage of deprecated methods as possible in core code. 
> I believe these byte-copying methods are used in many other parts, and later 
> we can try fixing those in order of importance
> - Creating CellUtil#createKeyOnlyCell and using that in KeyOnlyFilter



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12040) Performances issues with FilteredScanTest

2014-09-26 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150342#comment-14150342
 ] 

Andrew Purtell commented on HBASE-12040:


Even 15% is enough if we can isolate it to a commit 

> Performances issues with FilteredScanTest 
> --
>
> Key: HBASE-12040
> URL: https://issues.apache.org/jira/browse/HBASE-12040
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.1
>Reporter: Jean-Marc Spaggiari
>Assignee: stack
>Priority: Blocker
> Fix For: 0.98.7, 0.99.1
>
> Attachments: at-HBASE-11331.html, pre-HBASE-11331.html
>
>
> While testing 0.99.0RC1 release performance compared to 0.98.6, I found that:
> - FilteredScanTest is 100 times slower;
> - RandomReadTest is 1.5 times slower;
> - RandomSeekScanTest is 3.2 times slower;
> - RandomScanWithRange10Test is 1.2 times slower;
> - RandomScanWithRange100Test is 1.3 times slower;
> - RandomScanWithRange1000Test is 4 times slower;
> - SequentialReadTest is 1.7 times slower;
> - SequentialWriteTest is just a bit faster;
> - RandomWriteTest is just a bit faster;
> - GaussianRandomReadBenchmark is just a bit slower;
> - SequentialReadBenchmark is 1.1 times slower;
> - SequentialWriteBenchmark is 1.1 times slower;
> - UniformRandomReadBenchmark crashed;
> - UniformRandomSmallScan is 1.3 times slower.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12112) Avoid KeyValueUtil#ensureKeyValue some more simple cases

2014-09-26 Thread Anoop Sam John (JIRA)
Anoop Sam John created HBASE-12112:
--

 Summary: Avoid KeyValueUtil#ensureKeyValue some more simple cases
 Key: HBASE-12112
 URL: https://issues.apache.org/jira/browse/HBASE-12112
 Project: HBase
  Issue Type: Sub-task
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 0.99.1


This includes the following fixes:
- Replace KeyValue#heapSize() with CellUtil#estimatedHeapSizeOf(Cell) (a sketch 
of this item follows the list)
- Printing the key portion of a cell (rk+cf+q+ts+type); these appear in Exception 
messages
- HFilePrettyPrinter - avoiding ensureKeyValue() calls and calls to 
cell#getxxx(), which involve byte copying. This is not a hot area, but we can 
still avoid as much usage of deprecated methods as possible in core code. I 
believe these byte-copying methods are used in many other parts, and later we 
can try fixing those in order of importance
- Creating CellUtil#createKeyOnlyCell and using that in KeyOnlyFilter
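
A sketch of the first item, assuming a Cell in hand; the old path materializes a KeyValue only to ask its heap size:

{code}
// before: forces a KeyValue copy just to measure heap usage
long size1 = KeyValueUtil.ensureKeyValue(cell).heapSize();
// after: works on any Cell implementation, no copy
long size2 = CellUtil.estimatedHeapSizeOf(cell);
{code}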




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12098) User granted namespace table create permissions can't create a table

2014-09-26 Thread Srikanth Srungarapu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12098?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150338#comment-14150338
 ] 

Srikanth Srungarapu commented on HBASE-12098:
-

As this patch will change the existing semantics by allowing table creation by 
non-global admins, should we consider adding a flag, i.e. something like 
hbase.acl.allow.create.if.has.namespace.permission? Or is it not required?


> User granted namespace table create permissions can't create a table
> 
>
> Key: HBASE-12098
> URL: https://issues.apache.org/jira/browse/HBASE-12098
> Project: HBase
>  Issue Type: Bug
>  Components: Client, security
>Affects Versions: 0.98.6
>Reporter: Dima Spivak
>Assignee: Srikanth Srungarapu
>Priority: Critical
> Fix For: 2.0.0, 0.98.7, 0.99.1
>
> Attachments: 12098-master.txt, HBASE-12098.patch, 
> HBASE-12098_master_v2.patch
>
>
> From the HBase shell and Java API, I am seeing
> {code}ERROR: org.apache.hadoop.hbase.security.AccessDeniedException: 
> Insufficient permissions for user 'dima' (global, action=CREATE){code}
> when I try to create a table in a namespace to which I have been granted 
> RWXCA permissions by a global admin. Interestingly enough, this only seems to 
> extend to table creation; the same user is then allowed to disable and drop a 
> table created by a global admin in that namespace.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12075) Preemptive Fast Fail

2014-09-26 Thread Manukranth Kolloju (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Manukranth Kolloju updated HBASE-12075:
---
Attachment: 0001-Implement-Preemptive-Fast-Fail.patch

Added some more standalone unit tests for the interceptor, and documentation on 
all the abstract classes describing how it is used. Not sure at this point how 
reusable the interceptors are, but this feature would make the code messy 
without such a pattern.

> Preemptive Fast Fail
> 
>
> Key: HBASE-12075
> URL: https://issues.apache.org/jira/browse/HBASE-12075
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 1.0.0
>Reporter: Manukranth Kolloju
>Assignee: Manukranth Kolloju
> Fix For: 1.0.0
>
> Attachments: 0001-Add-a-test-case-for-Preemptive-Fast-Fail.patch, 
> 0001-Implement-Preemptive-Fast-Fail.patch, 
> 0001-Implement-Preemptive-Fast-Fail.patch, 
> 0001-Implement-Preemptive-Fast-Fail.patch
>
>
> In multi threaded clients, we use a feature developed on 0.89-fb branch 
> called Preemptive Fast Fail. This allows the client threads which would 
> potentially fail, fail fast. The idea behind this feature is that we allow, 
> among the hundreds of client threads, one thread to try and establish 
> connection with the regionserver and if that succeeds, we mark it as a live 
> node again. Meanwhile, other threads which are trying to establish connection 
> to the same server would ideally go into the timeouts which is effectively 
> unfruitful. We can in those cases return appropriate exceptions to those 
> clients instead of letting them retry.
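
A minimal, hypothetical sketch of the bookkeeping the description implies (names are ours, not the patch's): track servers in fast-fail mode, let exactly one thread probe, and fail the rest fast.

{code}
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicBoolean;

final class FastFailSketch {
  static final class FailureInfo {
    final long firstFailureMs = System.currentTimeMillis();
    final AtomicBoolean probing = new AtomicBoolean(false); // one prober at a time
  }

  private final ConcurrentMap<String, FailureInfo> failures =
      new ConcurrentHashMap<String, FailureInfo>();
  private final long thresholdMs = 60000L; // grace period before fast-failing

  /** Called when an operation against the server fails. */
  void onFailure(String server) {
    failures.putIfAbsent(server, new FailureInfo());
  }

  /** Called when an operation succeeds; the server is live again. */
  void onSuccess(String server) {
    failures.remove(server);
  }

  /** True if the calling thread should fail fast instead of retrying. */
  boolean shouldFailFast(String server) {
    FailureInfo info = failures.get(server);
    if (info == null) {
      return false; // server not in fast-fail mode
    }
    if (System.currentTimeMillis() - info.firstFailureMs < thresholdMs) {
      return false; // still within the grace period; retry normally
    }
    // past the threshold: let exactly one thread through as the prober;
    // everyone else gets a fast failure instead of a doomed retry
    return !info.probing.compareAndSet(false, true);
  }
}
{code}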



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12086) Fix bugs in HTableMultiplexer

2014-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12086?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150336#comment-14150336
 ] 

Hudson commented on HBASE-12086:


FAILURE: Integrated in HBase-0.98 #544 (See 
[https://builds.apache.org/job/HBase-0.98/544/])
HBASE-12086 Fix bug of HTableMultipliexer (eclark: rev 
3b8d80d5f92c19b9904087bd963e68258931c926)
* hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTable.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestHTableMultiplexer.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/HTableMultiplexer.java


> Fix bugs in HTableMultiplexer
> -
>
> Key: HBASE-12086
> URL: https://issues.apache.org/jira/browse/HBASE-12086
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: @deprecated Yi Deng
>Assignee: Yi Deng
>Priority: Minor
>  Labels: client, multiplex
> Fix For: 2.0.0, 0.99.1
>
> Attachments: 0001-Fix-bug-of-HTableMultipliexer.patch, 
> 0001-HBASE-12086-Fix-bug-of-HTableMultipliexer-0.98.patch
>
>
> HTableMultiplexer doesn't write Puts to the correct table if there are 
> multiple tables.
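
For reference, a sketch of the multi-table usage that exposed the bug, assuming the byte[]-keyed HTableMultiplexer API of this era; table names, rows, and values are hypothetical:

{code}
byte[] fam = Bytes.toBytes("f");   // hypothetical family/qualifier/value
byte[] qual = Bytes.toBytes("q");
byte[] val = Bytes.toBytes("v");
HTableMultiplexer mux = new HTableMultiplexer(conf, 1000); // per-RS queue size
Put p1 = new Put(Bytes.toBytes("row1"));
p1.add(fam, qual, val);
Put p2 = new Put(Bytes.toBytes("row2"));
p2.add(fam, qual, val);
// the bug: with more than one table, puts could land in the wrong table
boolean queued1 = mux.put(Bytes.toBytes("t1"), p1);
boolean queued2 = mux.put(Bytes.toBytes("t2"), p2);
{code}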



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12034) If I kill single RS in branch-1, all regions end up on Master!

2014-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150335#comment-14150335
 ] 

Hadoop QA commented on HBASE-12034:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12671585/hbase-12034_2.patch
  against trunk revision .
  ATTACHMENT ID: 12671585

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 24 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.procedure.TestZKProcedure

 {color:red}-1 core zombie tests{color}.  There are 6 zombie test(s):   
at 
org.apache.hadoop.hbase.master.balancer.TestStochasticLoadBalancer.testBalanceCluster(TestStochasticLoadBalancer.java:179)
at 
org.apache.hadoop.hbase.replication.regionserver.TestReplicationHLogReaderManager.test(TestReplicationHLogReaderManager.java:180)
at 
org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFilesSplitRecovery.testSplitWhileBulkLoadPhase(TestLoadIncrementalHFilesSplitRecovery.java:337)
at 
org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.testSimpleLoad(TestLoadIncrementalHFiles.java:98)
at 
org.apache.hadoop.hbase.replication.TestMasterReplication.testCyclicReplication1(TestMasterReplication.java:131)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11103//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11103//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11103//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11103//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11103//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11103//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11103//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11103//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11103//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11103//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11103//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11103//console

This message is automatically generated.

> If I kill single RS in branch-1, all regions end up on Master!
> --
>
> Key: HBASE-12034
> URL: https://issues.apache.org/jira/browse/HBASE-12034
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: stack
>Assignee: Jimmy Xiang
>Priority: Critical
> Fix For: 0.99.1
>
> Attachments: hbase-12034_1.patch, hbase-12034_2.patch
>
>
> This is unexpected.  M should not be carrying regions in branch-1.  Right 
> [~jxiang]?   Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12079) Deprecate KeyValueUtil#ensureKeyValue(s)

2014-09-26 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12079?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-12079:
---
Fix Version/s: 0.99.1

> Deprecate KeyValueUtil#ensureKeyValue(s)
> 
>
> Key: HBASE-12079
> URL: https://issues.apache.org/jira/browse/HBASE-12079
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0, 0.99.1
>
> Attachments: HBASE-12079.patch
>
>
> We can deprecate this in 2.0 and remove later.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12079) Deprecate KeyValueUtil#ensureKeyValue(s)

2014-09-26 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150333#comment-14150333
 ] 

Anoop Sam John commented on HBASE-12079:


From 1.0 itself it can be deprecated so that future patches to 1.0 should not 
use these APIs
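
A sketch of what the deprecation might look like on the existing method; the javadoc wording is ours, not the patch's:

{code}
/**
 * @deprecated without replacement; operate on the Cell interface directly
 *             instead of materializing a KeyValue.
 */
@Deprecated
public static KeyValue ensureKeyValue(final Cell cell) {
  if (cell == null) {
    return null;
  }
  // copy only when the cell is not already a KeyValue
  return cell instanceof KeyValue ? (KeyValue) cell : copyToNewKeyValue(cell);
}
{code}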

> Deprecate KeyValueUtil#ensureKeyValue(s)
> 
>
> Key: HBASE-12079
> URL: https://issues.apache.org/jira/browse/HBASE-12079
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0, 0.99.1
>
> Attachments: HBASE-12079.patch
>
>
> We can deprecate this in 2.0 and remove later.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12052) BulkLoad Failed due to no write permission on input files

2014-09-26 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-12052:
--
Fix Version/s: 0.99.1
   0.98.7

> BulkLoad Failed due to no write permission on input files
> -
>
> Key: HBASE-12052
> URL: https://issues.apache.org/jira/browse/HBASE-12052
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0, 0.98.6
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Fix For: 0.98.7, 0.99.1
>
> Attachments: HBASE-12052.patch
>
>
> The issue is that HBase bulkload is done by the Region Server, which normally 
> runs as the hbase user, while the input hfile folder and the user starting 
> the bulkload could be any user.
> Below is the error message when user "hrt_qa", who has write permission on 
> the input files, bulkloads them and the operation still fails with a 
> "Permission denied" error.
> We had similar handling for this issue in the secure env, so the proposed fix 
> is to reuse SecureBulkLoadEndPoint in the un-secure env as well. In the 
> future, we can rename the class to BulkLoadEndPoint.
> {noformat}
> java.io.IOException: Exception in rename
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.rename(HRegionFileSystem.java:947)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:347)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:421)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:723)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3603)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3525)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFile(HRegionServer.java:3276)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28863)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.hadoop.security.AccessControlException: Permission 
> denied: user=hbase, access=WRITE, 
> inode="/tmp/a0f3ee35-4c8f-4077-93d0-94d8e5bae914/0":hrt_qa:hdfs:drwxr-xr-x
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:265)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:251)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:232)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5515)
> {noformat}
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12052) BulkLoad Failed due to no write permission on input files

2014-09-26 Thread Jeffrey Zhong (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150323#comment-14150323
 ] 

Jeffrey Zhong commented on HBASE-12052:
---

Thanks for all the reviews! I've integrated the fix into master, branch-1 & 
0.98 branch.

> BulkLoad Failed due to no write permission on input files
> -
>
> Key: HBASE-12052
> URL: https://issues.apache.org/jira/browse/HBASE-12052
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0, 0.98.6
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Attachments: HBASE-12052.patch
>
>
> The issue is that HBase bulkload is done by the Region Server, which normally 
> runs as the hbase user, while the input hfile folder and the user starting 
> the bulkload could be any user.
> Below is the error message when user "hrt_qa", who has write permission on 
> the input files, bulkloads them and the operation still fails with a 
> "Permission denied" error.
> We had similar handling for this issue in the secure env, so the proposed fix 
> is to reuse SecureBulkLoadEndPoint in the un-secure env as well. In the 
> future, we can rename the class to BulkLoadEndPoint.
> {noformat}
> java.io.IOException: Exception in rename
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.rename(HRegionFileSystem.java:947)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:347)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:421)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:723)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3603)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3525)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFile(HRegionServer.java:3276)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28863)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.hadoop.security.AccessControlException: Permission 
> denied: user=hbase, access=WRITE, 
> inode="/tmp/a0f3ee35-4c8f-4077-93d0-94d8e5bae914/0":hrt_qa:hdfs:drwxr-xr-x
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:265)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:251)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:232)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5515)
> {noformat}
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12052) BulkLoad Failed due to no write permission on input files

2014-09-26 Thread Jeffrey Zhong (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jeffrey Zhong updated HBASE-12052:
--
  Resolution: Fixed
Release Note: SecureBulkLoadEndPoint can be used in un-secure env to bulk 
load data without hitting "Permission denied" for hbase user.
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

> BulkLoad Failed due to no write permission on input files
> -
>
> Key: HBASE-12052
> URL: https://issues.apache.org/jira/browse/HBASE-12052
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0, 0.98.6
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Attachments: HBASE-12052.patch
>
>
> The issue is that HBase bulkload is done by the Region Server, which normally 
> runs as the hbase user, while the input hfile folder and the user starting 
> the bulkload could be any user.
> Below is the error message when user "hrt_qa", who has write permission on 
> the input files, bulkloads them and the operation still fails with a 
> "Permission denied" error.
> We had similar handling for this issue in the secure env, so the proposed fix 
> is to reuse SecureBulkLoadEndPoint in the un-secure env as well. In the 
> future, we can rename the class to BulkLoadEndPoint.
> {noformat}
> java.io.IOException: Exception in rename
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.rename(HRegionFileSystem.java:947)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:347)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:421)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:723)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3603)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3525)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFile(HRegionServer.java:3276)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28863)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.hadoop.security.AccessControlException: Permission 
> denied: user=hbase, access=WRITE, 
> inode="/tmp/a0f3ee35-4c8f-4077-93d0-94d8e5bae914/0":hrt_qa:hdfs:drwxr-xr-x
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:265)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:251)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:232)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5515)
> {noformat}
>   
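
For reference, enabling the endpoint comes down to two configuration keys. A 
minimal sketch of setting them programmatically; normally they belong in 
hbase-site.xml, and the staging path shown here is only an illustration:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class BulkLoadEndpointConfig {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Load the endpoint on every region server.
    conf.set("hbase.coprocessor.region.classes",
        "org.apache.hadoop.hbase.security.access.SecureBulkLoadEndpoint");
    // Staging dir through which the endpoint takes ownership of incoming
    // hfiles, so the "hbase" user never needs write access to the input dir.
    conf.set("hbase.bulkload.staging.dir", "/tmp/hbase-staging");
  }
}
{code}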



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12111) Remove deprecated APIs from Mutation(s)

2014-09-26 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-12111:
---
Status: Patch Available  (was: Open)

> Remove deprecated APIs from Mutation(s)
> ---
>
> Key: HBASE-12111
> URL: https://issues.apache.org/jira/browse/HBASE-12111
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-12111.patch
>
>
> Mutation setWriteToWAL(boolean)
> boolean getWriteToWAL()
> Mutation setFamilyMap(NavigableMap<byte[], List<KeyValue>>)
> NavigableMap<byte[], List<KeyValue>> getFamilyMap()
> To be removed from Mutation and the setters from Put/Delete/Increment/Append 
> as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12111) Remove deprecated APIs from Mutation(s)

2014-09-26 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-12111:
---
Attachment: HBASE-12111.patch

> Remove deprecated APIs from Mutation(s)
> ---
>
> Key: HBASE-12111
> URL: https://issues.apache.org/jira/browse/HBASE-12111
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0
>
> Attachments: HBASE-12111.patch
>
>
> Mutation setWriteToWAL(boolean)
> boolean getWriteToWAL()
> Mutation setFamilyMap(NavigableMap<byte[], List<KeyValue>>)
> NavigableMap<byte[], List<KeyValue>> getFamilyMap()
> To be removed from Mutation and the setters from Put/Delete/Increment/Append 
> as well.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12111) Remove deprecated APIs from Mutation(s)

2014-09-26 Thread Anoop Sam John (JIRA)
Anoop Sam John created HBASE-12111:
--

 Summary: Remove deprecated APIs from Mutation(s)
 Key: HBASE-12111
 URL: https://issues.apache.org/jira/browse/HBASE-12111
 Project: HBase
  Issue Type: Sub-task
  Components: Client
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0


Mutation setWriteToWAL(boolean)
boolean getWriteToWAL()
Mutation setFamilyMap(NavigableMap<byte[], List<KeyValue>>)
NavigableMap<byte[], List<KeyValue>> getFamilyMap()

To be removed from Mutation and the setters from Put/Delete/Increment/Append as 
well.
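
For callers migrating off these APIs, here is a minimal before/after sketch, 
assuming the usual replacements (Durability for the WAL flag, 
getFamilyCellMap() for the KeyValue-typed map):

{code}
import java.util.List;
import java.util.NavigableMap;

import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class MutationMigration {
  public static void main(String[] args) {
    Put put = new Put(Bytes.toBytes("row1"));
    // Before (to be removed): put.setWriteToWAL(false);
    put.setDurability(Durability.SKIP_WAL);
    // Before (to be removed): NavigableMap<byte[], List<KeyValue>> m = put.getFamilyMap();
    NavigableMap<byte[], List<Cell>> cells = put.getFamilyCellMap();
    System.out.println(cells.size());
  }
}
{code}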





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12110) Fix .arcconfig

2014-09-26 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-12110:
--
Attachment: 0001-HBASE-12110-Fix-.arcconfig.patch

> Fix .arcconfig
> --
>
> Key: HBASE-12110
> URL: https://issues.apache.org/jira/browse/HBASE-12110
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0, 0.98.7, 0.99.1
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: 0001-HBASE-12110-Fix-.arcconfig.patch
>
>
> Not many people are currently using arc but it's a nice tool for the 
> developers who are used to it. Since it's already there let's make it work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12110) Fix .arcconfig

2014-09-26 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-12110:
--
Status: Open  (was: Patch Available)

> Fix .arcconfig
> --
>
> Key: HBASE-12110
> URL: https://issues.apache.org/jira/browse/HBASE-12110
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0, 0.98.7, 0.99.1
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: 0001-HBASE-12110-Fix-.arcconfig.patch
>
>
> Not many people are currently using arc but it's a nice tool for the 
> developers who are used to it. Since it's already there let's make it work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12110) Fix .arcconfig

2014-09-26 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-12110:
--
Status: Patch Available  (was: Open)

> Fix .arcconfig
> --
>
> Key: HBASE-12110
> URL: https://issues.apache.org/jira/browse/HBASE-12110
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0, 0.98.7, 0.99.1
>Reporter: Elliott Clark
>Assignee: Elliott Clark
> Attachments: 0001-HBASE-12110-Fix-.arcconfig.patch
>
>
> Not many people are currently using arc but it's a nice tool for the 
> developers who are used to it. Since it's already there let's make it work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12110) Fix .arcconfig

2014-09-26 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-12110:
--
Affects Version/s: 0.99.1
   0.98.7
   2.0.0
   Status: Patch Available  (was: Open)

> Fix .arcconfig
> --
>
> Key: HBASE-12110
> URL: https://issues.apache.org/jira/browse/HBASE-12110
> Project: HBase
>  Issue Type: Sub-task
>Affects Versions: 2.0.0, 0.98.7, 0.99.1
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>
> Not many people are currently using arc but it's a nice tool for the 
> developers who are used to it. Since it's already there let's make it work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12082) Find a way to set timestamp on Cells on the server

2014-09-26 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150306#comment-14150306
 ] 

Anoop Sam John commented on HBASE-12082:


Thanks Stack.
Ping [~enis] for branch-1

> Find a way to set timestamp on Cells on the server
> --
>
> Key: HBASE-12082
> URL: https://issues.apache.org/jira/browse/HBASE-12082
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Affects Versions: 0.99.0
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
> Fix For: 2.0.0, 0.99.1
>
> Attachments: HBASE-12082.patch
>
>
> On the write path, we have to replace the ts on cells when the incoming ts 
> is HConstants.LATEST_TIMESTAMP. Also, on delete-version cells we have to 
> adjust the ts. In all these places we currently do a Cell-to-KeyValue 
> conversion.
> We can provide a way similar to the one we added for setting the seqId.
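
Roughly, the idea is an optional setter interface on Cell implementations, 
analogous to the seqId one, so the server can stamp the timestamp in place. A 
sketch; the interface name and methods below are illustrative, not the 
committed API:

{code}
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.HConstants;

public class TimestampStamper {
  // Hypothetical interface in the spirit of the existing seqId setter.
  interface SettableTimestamp {
    void setTimestamp(long ts);
  }

  // Server-side write path: replace the placeholder ts in place,
  // avoiding a Cell-to-KeyValue conversion.
  static void stampCells(Iterable<Cell> cells, long now) {
    for (Cell c : cells) {
      if (c.getTimestamp() == HConstants.LATEST_TIMESTAMP
          && c instanceof SettableTimestamp) {
        ((SettableTimestamp) c).setTimestamp(now);
      }
    }
  }
}
{code}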



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12110) Fix .arcconfig

2014-09-26 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150307#comment-14150307
 ] 

Elliott Clark commented on HBASE-12110:
---

https://reviews.facebook.net/D24075

All of the review hooks are broken but that's just something we will have to 
live with.

> Fix .arcconfig
> --
>
> Key: HBASE-12110
> URL: https://issues.apache.org/jira/browse/HBASE-12110
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Elliott Clark
>Assignee: Elliott Clark
>
> Not many people are currently using arc but it's a nice tool for the 
> developers who are used to it. Since it's already there let's make it work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12087) [0.98] Changing the default setting of hbase.security.access.early_out to true

2014-09-26 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150305#comment-14150305
 ] 

Anoop Sam John commented on HBASE-12087:


This should be marked as an incompatible change, since the default value for 
the config is now true in 0.98.7. [~apurtell]?

> [0.98] Changing the default setting of hbase.security.access.early_out to true
> --
>
> Key: HBASE-12087
> URL: https://issues.apache.org/jira/browse/HBASE-12087
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.6
>Reporter: Srikanth Srungarapu
>Assignee: Srikanth Srungarapu
>Priority: Minor
> Fix For: 0.98.7
>
> Attachments: HBASE-12087.patch, HBASE-12087_v2.patch
>
>
> From the mailing list conversation:
> Problem:
> - 98 with default early out = false and hfile v2 will always give the
> "Permission Denied" instead of the "0 rows" that you expect since the early
> out is false
>  - 98 with default early out = false and hfile v3 will always give the "0
> rows"
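
For applications that depend on the old semantics, the default can be flipped 
back. A sketch using the configuration key from the title; in practice this 
would go in hbase-site.xml:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class EarlyOutSetting {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // Restore the pre-0.98.7 behavior: cell-level "Permission denied"
    // instead of an early-out empty result.
    conf.setBoolean("hbase.security.access.early_out", false);
  }
}
{code}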



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12110) Fix .arcconfig

2014-09-26 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-12110:
-

 Summary: Fix .arcconfig
 Key: HBASE-12110
 URL: https://issues.apache.org/jira/browse/HBASE-12110
 Project: HBase
  Issue Type: Sub-task
Reporter: Elliott Clark
Assignee: Elliott Clark


Not many people are currently using arc but it's a nice tool for the developers 
who are used to it. Since it's already there let's make it work.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12086) Fix bugs in HTableMultiplexer

2014-09-26 Thread Yi Deng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12086?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Deng updated HBASE-12086:

Attachment: 0001-HBASE-12086-Fix-bug-of-HTableMultipliexer-0.98.patch

Diff for branch 0.98

> Fix bugs in HTableMultiplexer
> -
>
> Key: HBASE-12086
> URL: https://issues.apache.org/jira/browse/HBASE-12086
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Affects Versions: 2.0.0
>Reporter: Yi Deng
>Assignee: Yi Deng
>Priority: Minor
>  Labels: client, multiplex
> Fix For: 2.0.0, 0.99.1
>
> Attachments: 0001-Fix-bug-of-HTableMultipliexer.patch, 
> 0001-HBASE-12086-Fix-bug-of-HTableMultipliexer-0.98.patch
>
>
> HTableMultiplexer doesn't write Puts to the correct table if there are 
> multiple tables.
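
For anyone reproducing this, a minimal two-table usage sketch; the exact 
put() overloads vary a bit across versions, so treat the TableName variant 
below as an assumption:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HTableMultiplexer;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.util.Bytes;

public class MultiplexerTwoTables {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    HTableMultiplexer mux = new HTableMultiplexer(conf, 1000);
    Put p1 = new Put(Bytes.toBytes("r1"));
    p1.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v1"));
    Put p2 = new Put(Bytes.toBytes("r2"));
    p2.add(Bytes.toBytes("f"), Bytes.toBytes("q"), Bytes.toBytes("v2"));
    // Each put must land in its own table's queue; the bug was puts
    // crossing over to the wrong table when more than one is in play.
    boolean queued1 = mux.put(TableName.valueOf("t1"), p1);
    boolean queued2 = mux.put(TableName.valueOf("t2"), p2);
    System.out.println(queued1 + " " + queued2);
  }
}
{code}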



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12108) HBaseConfiguration

2014-09-26 Thread Aniket Bhatnagar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150283#comment-14150283
 ] 

Aniket Bhatnagar commented on HBASE-12108:
--

Yes @stack, that's exactly the reason. My proposal is to call 
conf.setClassLoader(this.getClass().getClassLoader()) before calling 
addHbaseResources in the static create method.

A workaround is for the user to set the context classloader (using 
Thread.currentThread().setContextClassLoader()) before calling 
HBaseConfiguration.create().
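
Spelled out, that workaround looks like this; a sketch assuming the HBase jars 
live in the caller's (child) classloader:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class HBaseConfLoader {
  public static Configuration load() {
    // The child classloader that actually holds the hbase jars.
    ClassLoader hbaseLoader = HBaseConfLoader.class.getClassLoader();
    Thread current = Thread.currentThread();
    ClassLoader previous = current.getContextClassLoader();
    current.setContextClassLoader(hbaseLoader);
    try {
      // Configuration picks up the context classloader, so
      // hbase-default.xml now resolves from the right jar.
      return HBaseConfiguration.create();
    } finally {
      current.setContextClassLoader(previous);
    }
  }
}
{code}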

> HBaseConfiguration
> --
>
> Key: HBASE-12108
> URL: https://issues.apache.org/jira/browse/HBASE-12108
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Aniket Bhatnagar
>Priority: Minor
>
> In a setup wherein the HBase jars are loaded in a child classloader whose 
> parent has loaded the hadoop-common jar, HBaseConfiguration.create() throws 
> the "hbase-default.xml file seems to be for and old version of HBase 
> (null)..." exception. The ClassLoader should be set on the Hadoop conf 
> object before calling the addHbaseResources method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12099) TestScannerModel fails if using jackson 1.9.13

2014-09-26 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12099?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150268#comment-14150268
 ] 

Esteban Gutierrez commented on HBASE-12099:
---

[~devaraj] Yeah, that works fine with 1.9.13, and that is the right name that 
needs to be used; however, with that fix you can't build against 1.8.8.

> TestScannerModel fails if using jackson 1.9.13
> --
>
> Key: HBASE-12099
> URL: https://issues.apache.org/jira/browse/HBASE-12099
> Project: HBase
>  Issue Type: Bug
>  Components: REST
>Affects Versions: 2.0.0, 0.98.7, 0.99.1
> Environment: hadoop-2.5.0
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
> Attachments: 12099-1.txt, HBASE-12099.v0.txt
>
>
> TestScannerModel fails if jackson 1.9.13 is used. (Hadoop 2.5 now uses that 
> version, see HADOOP-10104):
> {code}
> Failed tests:   
> testToJSON(org.apache.hadoop.hbase.rest.model.TestScannerModel): 
> expected:<{"batch":100,"caching":1000,"cacheBlocks":false,"endRow":"enp5eng=","endTime":1245393318192,"maxVersions":2147483647,"startRow":"YWJyYWNhZGFicmE=","startTime":1245219839331,"column":["Y29sdW1uMQ==","Y29sdW1uMjpmb28="],"labels":["private","public"]}>
>  but 
> was:<{"startRow":"YWJyYWNhZGFicmE=","endRow":"enp5eng=","batch":100,"startTime":1245219839331,"endTime":1245393318192,"maxVersions":2147483647,"caching":1000,"cacheBlocks":false,"column":["Y29sdW1uMQ==","Y29sdW1uMjpmb28="],"label":["private","public"]}>
> {code}
> The problem is the annotation used for the labels element which is 'label' 
> instead of 'labels'.
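
For illustration only, a stripped-down JAXB bean of the shape in question; 
whether the serialized key follows the getter-derived name or the 
annotation's name is exactly the sort of thing that shifted between Jackson 
1.8.8 and 1.9.13's JAXB introspection:

{code}
import java.util.ArrayList;
import java.util.List;
import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

@XmlRootElement(name = "Scanner")
public class ScannerModelSketch {
  private List<String> labels = new ArrayList<String>();

  // Pinning the element name explicitly keeps the JSON key stable
  // across Jackson versions.
  @XmlElement(name = "labels")
  public List<String> getLabels() {
    return labels;
  }

  public void addLabel(String label) {
    labels.add(label);
  }
}
{code}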



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12109) user_permission command for namespace does not return correct result

2014-09-26 Thread Vandana Ayyalasomayajula (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150256#comment-14150256
 ] 

Vandana Ayyalasomayajula commented on HBASE-12109:
--

Sure. I will create a new patch based on AccessControlClient.
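
For what that could look like, a sketch against AccessControlClient; the 
"@ns" prefix is the shell's convention for naming a namespace, and whether 
this overload accepts it is precisely what the patch would need to ensure:

{code}
import java.util.List;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.security.access.AccessControlClient;
import org.apache.hadoop.hbase.security.access.UserPermission;

public class NamespacePerms {
  public static void main(String[] args) throws Throwable {
    Configuration conf = HBaseConfiguration.create();
    // List permissions granted at the namespace level.
    List<UserPermission> perms = AccessControlClient.getUserPermissions(conf, "@ns");
    for (UserPermission p : perms) {
      System.out.println(p);
    }
  }
}
{code}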

> user_permission command for namespace does not return correct result
> 
>
> Key: HBASE-12109
> URL: https://issues.apache.org/jira/browse/HBASE-12109
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 2.0.0
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
> Attachments: HBASE-12109_0.patch
>
>
> The existing user_permission command does not return permissions related to 
> namespace. The permissions exist in the acl table, but the user_permission.rb 
> does not handle namespaces.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12099) TestScannerModel fails if using jackson 1.9.13

2014-09-26 Thread Devaraj Das (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12099?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HBASE-12099:

Attachment: 12099-1.txt

This patch worked for me...

> TestScannerModel fails if using jackson 1.9.13
> --
>
> Key: HBASE-12099
> URL: https://issues.apache.org/jira/browse/HBASE-12099
> Project: HBase
>  Issue Type: Bug
>  Components: REST
>Affects Versions: 2.0.0, 0.98.7, 0.99.1
> Environment: hadoop-2.5.0
>Reporter: Esteban Gutierrez
>Assignee: Esteban Gutierrez
> Attachments: 12099-1.txt, HBASE-12099.v0.txt
>
>
> TestScannerModel fails if jackson 1.9.13 is used. (Hadoop 2.5 now uses that 
> version, see HADOOP-10104):
> {code}
> Failed tests:   
> testToJSON(org.apache.hadoop.hbase.rest.model.TestScannerModel): 
> expected:<{"batch":100,"caching":1000,"cacheBlocks":false,"endRow":"enp5eng=","endTime":1245393318192,"maxVersions":2147483647,"startRow":"YWJyYWNhZGFicmE=","startTime":1245219839331,"column":["Y29sdW1uMQ==","Y29sdW1uMjpmb28="],"labels":["private","public"]}>
>  but 
> was:<{"startRow":"YWJyYWNhZGFicmE=","endRow":"enp5eng=","batch":100,"startTime":1245219839331,"endTime":1245393318192,"maxVersions":2147483647,"caching":1000,"cacheBlocks":false,"column":["Y29sdW1uMQ==","Y29sdW1uMjpmb28="],"label":["private","public"]}>
> {code}
> The problem is the annotation used for the labels element which is 'label' 
> instead of 'labels'.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-12101) region failed to move out of transition within timeout 120000ms

2014-09-26 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12101?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-12101.

Resolution: Invalid

Please mail u...@hbase.apache.org for troubleshooting assistance

> region failed to move out of transition within timeout 120000ms
> ---
>
> Key: HBASE-12101
> URL: https://issues.apache.org/jira/browse/HBASE-12101
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 0.98.1
> Environment: debian7
>Reporter: vishal.rajan
>Priority: Critical
>
> Log snippet from one of the region servers:
> 2014-09-26 18:22:47,793 WARN  [RS_OPEN_REGION-b1-255-07:60020-0] 
> regionserver.HStore: Failed flushing store file, retrying num=0
> 2014-09-26 18:24:58,554 WARN  [DataStreamer for file 
> /hbase/data/default/tsdb/5b2e8261c6f88eb44c861fd24dbc2b42/.tmp/b4cb1ad7892440b6a7c13a7c30053d3f]
>  hdfs.DFSClient: DataStreamer Exception
> 2014-09-26 18:24:58,555 WARN  [RS_OPEN_REGION-b1-255-07:60020-0] 
> regionserver.HStore: Failed flushing store file, retrying num=1
> 2014-09-26 18:48:09,077 WARN  [DataStreamer for file 
> /hbase/data/default/tsdb/5b2e8261c6f88eb44c861fd24dbc2b42/.tmp/a6f6f951de6047b0b691ea42ddd5e036]
>  hdfs.DFSClient: DataStreamer Exception
> 2014-09-26 18:48:09,079 WARN  [RS_OPEN_REGION-b1-255-07:60020-0] 
> regionserver.HStore: Failed flushing store file, retrying num=2
> 2014-09-26 18:48:53,363 WARN  [DataStreamer for file 
> /hbase/data/default/tsdb/5b2e8261c6f88eb44c861fd24dbc2b42/.tmp/7e99efe5981a46ee9ba7ce4b7e50dd26]
>  hdfs.DFSClient: DataStreamer Exception
> Hdfs dir for hbase
> -rwxr-xr-x   3 hbase hbase   41047119 2014-09-23 02:40 
> /hbase/data/default/tsdb/5b2e8261c6f88eb44c861fd24dbc2b42/t/fc0d247e689c40b598e2e4e1dfaa5f0f.
> -rwxr-xr-x   3 hbase hbase 146163105792 2014-09-26 19:21 
> /hbase/data/default/tsdb/5b2e8261c6f88eb44c861fd24dbc2b42/.tmp/2ae6cce7b6ea446991405bc8c7382f03
> -rwxr-xr-x   3 hbase hbase 14602064 2014-09-26 19:21 
> /hbase/data/default/tsdb/5b2e8261c6f88eb44c861fd24dbc2b42/.tmp/4f8dc1f8e06844be8cc1a9d61a77624a
> -rwxr-xr-x   3 hbase hbase 140794396672 2014-09-26 19:21 
> /hbase/data/default/tsdb/5b2e8261c6f88eb44c861fd24dbc2b42/.tmp/5bb43145bab1454d8fe1758cfaa91286
> -rwxr-xr-x   3 hbase hbase 150994944000 2014-09-26 19:21 
> /hbase/data/default/tsdb/5b2e8261c6f88eb44c861fd24dbc2b42/.tmp/66f124990997420eb6001b86bd9cea30
> -rwxr-xr-x   3 hbase hbase 141197049856 2014-09-26 19:21 
> /hbase/data/default/tsdb/5b2e8261c6f88eb44c861fd24dbc2b42/.tmp/88cede4cce514e97973b08ef5d0fc32c
> -rwxr-xr-x   3 hbase hbase 144015622144 2014-09-26 19:21 
> /hbase/data/default/tsdb/5b2e8261c6f88eb44c861fd24dbc2b42/.tmp/aeb55b1f7e214da192c3d21f84a01ceb
> -rwxr-xr-x   3 hbase hbase 152605556736 2014-09-26 19:21 
> /hbase/data/default/tsdb/5b2e8261c6f88eb44c861fd24dbc2b42/.tmp/c58c57e7aef74fe3b2a4332deec426a9
> -rwxr-xr-x   3 hbase hbase 144552493056 2014-09-26 19:21 
> /hbase/data/default/tsdb/5b2e8261c6f88eb44c861fd24dbc2b42/.tmp/ed76cac79cc04adc866670fe261f5cc6
>  
> Hbase master data
> 5b2e8261c6f88eb44c861fd24dbc2b42  
> tsdb,,1411066769435.5b2e8261c6f88eb44c861fd24dbc2b42. state=OPENING, ts=Fri 
> Sep 26 19:21:02 IST 2014 (1311s ago), 
> server=hostname.colo.xyz.com,60020,1411736155186  1311508



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12109) user_permission command for namespace does not return correct result

2014-09-26 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150246#comment-14150246
 ] 

Andrew Purtell commented on HBASE-12109:


Agreed, please consider AccessControlClient [~avandana]

> user_permission command for namespace does not return correct result
> 
>
> Key: HBASE-12109
> URL: https://issues.apache.org/jira/browse/HBASE-12109
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 2.0.0
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
> Attachments: HBASE-12109_0.patch
>
>
> The existing user_permission command does not return permissions related to 
> namespace. The permissions exist in the acl table, but the user_permission.rb 
> does not handle namespaces.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12040) Performances issues with FilteredScanTest

2014-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150243#comment-14150243
 ] 

stack commented on HBASE-12040:
---

I will, [~apurtell]. Trying to find the 100x difference that [~jmspaggi] 
reports above.

> Performances issues with FilteredScanTest 
> --
>
> Key: HBASE-12040
> URL: https://issues.apache.org/jira/browse/HBASE-12040
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.1
>Reporter: Jean-Marc Spaggiari
>Assignee: stack
>Priority: Blocker
> Fix For: 0.98.7, 0.99.1
>
> Attachments: at-HBASE-11331.html, pre-HBASE-11331.html
>
>
> While testing 0.99.0RC1 release performance, compared to 0.98.6, I found 
> that:
> - FilteredScanTest is 100 times slower;
> - RandomReadTest is 1.5 times slower;
> - RandomSeekScanTest is 3.2 times slower;
> - RandomScanWithRange10Test is 1.2 times slower;
> - RandomScanWithRange100Test is 1.3 times slower;
> - RandomScanWithRange1000Test is 4 times slower;
> - SequentialReadTest is 1.7 times slower;
> - SequentialWriteTest is just a bit faster;
> - RandomWriteTest is just a bit faster;
> - GaussianRandomReadBenchmark is just a bit slower;
> - SequentialReadBenchmark is 1.1 times slower;
> - SequentialWriteBenchmark is 1.1 times slower;
> - UniformRandomReadBenchmark crashed;
> - UniformRandomSmallScan is 1.3 times slower.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12040) Performances issues with FilteredScanTest

2014-09-26 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150226#comment-14150226
 ] 

Andrew Purtell commented on HBASE-12040:


bq. 0.98.6 runs about 15% faster.

Are you able to check out 0.98 head, revert the HBASE-11331 commit, and see 
the perf improvement after?

> Performances issues with FilteredScanTest 
> --
>
> Key: HBASE-12040
> URL: https://issues.apache.org/jira/browse/HBASE-12040
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.1
>Reporter: Jean-Marc Spaggiari
>Assignee: stack
>Priority: Blocker
> Fix For: 0.98.7, 0.99.1
>
> Attachments: at-HBASE-11331.html, pre-HBASE-11331.html
>
>
> While testing 0.99.0RC1 release performance, compared to 0.98.6, I found 
> that:
> - FilteredScanTest is 100 times slower;
> - RandomReadTest is 1.5 times slower;
> - RandomSeekScanTest is 3.2 times slower;
> - RandomScanWithRange10Test is 1.2 times slower;
> - RandomScanWithRange100Test is 1.3 times slower;
> - RandomScanWithRange1000Test is 4 times slower;
> - SequentialReadTest is 1.7 times slower;
> - SequentialWriteTest is just a bit faster;
> - RandomWriteTest is just a bit faster;
> - GaussianRandomReadBenchmark is just a bit slower;
> - SequentialReadBenchmark is 1.1 times slower;
> - SequentialWriteBenchmark is 1.1 times slower;
> - UniformRandomReadBenchmark crashed;
> - UniformRandomSmallScan is 1.3 times slower.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11948) graceful_stop.sh should use hbase-daemon.sh when executed on the decomissioned node

2014-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11948?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150224#comment-14150224
 ] 

Hudson commented on HBASE-11948:


SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #517 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/517/])
HBASE-11948 graceful_stop.sh should use hbase-daemon.sh when executed on the 
decomissioned node (Sebastien Barrier) (tedyu: rev 
e7ff09210b84d37d65749a3d86c707c1bc4c6083)
* bin/graceful_stop.sh


> graceful_stop.sh should use hbase-daemon.sh when executed on the 
> decomissioned node
> ---
>
> Key: HBASE-11948
> URL: https://issues.apache.org/jira/browse/HBASE-11948
> Project: HBase
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.99.0, 0.94.23, 0.98.6
> Environment: Linux
>Reporter: Sebastien Barrier
>Assignee: Sebastien Barrier
>Priority: Minor
>  Labels: patch
> Fix For: 2.0.0, 0.98.7, 0.99.1
>
> Attachments: 11948-v2.txt, HBASE-11948.patch
>
>
> The script graceful_stop.sh should use hbase-daemon.sh instead of 
> hbase-daemons.sh when it's executed on the local decommissioned node.
> hbase-daemons.sh uses ssh to perform the stop/start action, which means the 
> local server's public key needs to be added to known_hosts.
> hbase-daemon.sh is more appropriate for stop/start on the local host, as it 
> doesn't need ssh to perform the stop/start action.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12087) [0.98] Changing the default setting of hbase.security.access.early_out to true

2014-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12087?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150223#comment-14150223
 ] 

Hudson commented on HBASE-12087:


SUCCESS: Integrated in HBase-0.98-on-Hadoop-1.1 #517 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/517/])
HBASE-12087 [0.98] [0.98] Changing the default setting of 
hbase.security.access.early_out to true (Srikanth Srungarapu) (apurtell: rev 
c0d0d794554ad15b9a87f4ccf82aac0643419b9a)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
* 
hbase-client/src/main/java/org/apache/hadoop/hbase/security/access/AccessControlConstants.java


> [0.98] Changing the default setting of hbase.security.access.early_out to true
> --
>
> Key: HBASE-12087
> URL: https://issues.apache.org/jira/browse/HBASE-12087
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.6
>Reporter: Srikanth Srungarapu
>Assignee: Srikanth Srungarapu
>Priority: Minor
> Fix For: 0.98.7
>
> Attachments: HBASE-12087.patch, HBASE-12087_v2.patch
>
>
> From the mailing list conversation:
> Problem:
> - 98 with default early out = false and hfile v2 will always give the
> "Permission Denied" instead of the "0 rows" that you expect since the early
> out is false
>  - 98 with default early out = false and hfile v3 will always give the "0
> rows"



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12040) Performances issues with FilteredScanTest

2014-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150214#comment-14150214
 ] 

stack commented on HBASE-12040:
---

Here are [~jmspaggi]'s notes, from the mailing list, on how he ran his tests:

{code}
You were right, this is my "small" cluster. It's 4 nodes. One master, 3 RS.
The "big" cluster (8 nodes) is reserved for Lars (0.94) for now ;)

I run the tests using this command:
for i in {1..10}; do echo; echo -n $i ; /home/hadoop/bin/hadoop fs -rmr
/hbase/*; rm -rf /tmp/*; echo rmr /hbase |
/home/zookeeper/zookeeper-3.4.3/bin/zkCli.sh; bin/start-hbase.sh; sleep 60;
bin/hbase org.apache.hadoop.hbase.PerformanceEvaluation sequentialWrite 1;
echo balancer | bin/hbase shell; sleep 60; bin/hbase
org.apache.hadoop.hbase.PerformanceEvaluation --rows=100 --nomapred
filterScan 1; bin/stop-hbase.sh; done &>> output.txt

Basically, it removes everything, starts HBase, puts some data in it, and
lets things settle down. Then it runs the test. I do that 10 times for each
test and remove the smallest and fastest results.

Nodes are 16GB.

Extract from hbase-env.sh:
export JAVA_HOME=/usr/local/jdk1.7.0_45/

# Extra Java CLASSPATH elements.  Optional.
# export HBASE_CLASSPATH=

# The maximum amount of heap to use, in MB. Default is 1000.
export HBASE_HEAPSIZE=10240



Configured properties:
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hbasetest1:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>hbasetest1.distparser.com</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/home/zookeeper</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hbasetest1:9000/</value>
  </property>
  <property>
    <name>hbase.regionserver.codecs</name>
    <value>gz</value>
  </property>
  <property>
    <name>ipc.server.tcpnodelay</name>
    <value>true</value>
  </property>
  <property>
    <name>ipc.client.tcpnodelay</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.regionserver.region.split.policy</name>
    <value>org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy</value>
  </property>
  <property>
    <name>hbase.hregion.max.filesize</name>
    <value>1073741824000</value>
  </property>


3 disks only per node:
hbase@hbasetest2:~$ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda  8:00   1,8T  0 disk
├─sda1   8:10  93,1G  0 part /
└─sda2   8:20   1,7T  0 part /data1
sdb  8:16   0   1,8T  0 disk
├─sdb1   8:17   0   9,3G  0 part [SWAP]
└─sdb2   8:18   0   1,8T  0 part /data2
sdc  8:32   0   1,8T  0 disk
└─sdc1   8:33   0   1,8T  0 part /data3


data1 to data3 are the datanode partitions.

Drives are pretty empty:
hbase@hbasetest2:~$ df -h
Filesystem       Size    Used Avail Use% Mounted on
/dev/sda1   92G 19G   69G  22% /
udev10M   0   10M   0% /dev
tmpfs  1,6G272K  1,6G   1% /run
tmpfs  5,0M   0  5,0M   0% /run/lock
tmpfs  5,0G   0  5,0G   0% /run/shm
/dev/sda2  1,8T399G  1,4T  24% /data1
/dev/sdb2  1,9T398G  1,4T  22% /data2
/dev/sdc1  1,9T397G  1,4T  22% /data3

RegionServers are SATA, Master is SSD. Only one ZK server hosted on the
master too.

0.94.x tests run with hadoop 1.2.1.
0.98.x+ tests run with hadoop 2.2.0

I'm trying to build 0.99 from source so I can run it and also run specific
revisions. But no success so far (yet) ;)

Just ask me whatever else you might want to know about the cluster. Can
even give you a remote access.

JM
{code}

In standalone mode, master and the tip of 0.98 are about the same; 0.98.6 runs 
about 15% faster. Let me try on a cluster to see if the difference is more 
marked there.

> Performances issues with FilteredScanTest 
> --
>
> Key: HBASE-12040
> URL: https://issues.apache.org/jira/browse/HBASE-12040
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.1
>Reporter: Jean-Marc Spaggiari
>Assignee: stack
>Priority: Blocker
> Fix For: 0.98.7, 0.99.1
>
> Attachments: at-HBASE-11331.html, pre-HBASE-11331.html
>
>
> While testing 0.99.0RC1 release performance, compared to 0.98.6, I found 
> that:
> - FilteredScanTest is 100 times slower;
> - RandomReadTest is 1.5 times slower;
> - RandomSeekScanTest is 3.2 times slower;
> - RandomScanWithRange10Test is 1.2 times slower;
> - RandomScanWithRange100Test is 1.3 times slower;
> - RandomScanWithRange1000Test is 4 times slower;
> - SequentialReadTest is 1.7 times slower;
> - SequentialWriteTest is just a bit faster;
> - RandomWriteTest is just a bit faster;
> - GaussianRandomReadBenchmark is just a bit slower;
> - SequentialReadBenchmark is 1.1 times slower;
> - SequentialWriteBenchmark is 1.1 times slower;
> - UniformRandomReadBenchmark crashed;
> - UniformRandomSmallScan is 1.3 times slower.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HBASE-11970) BlockCache would be instantiated when Master doesn't host regions

2014-09-26 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang resolved HBASE-11970.
-
Resolution: Implemented

Fixed this in HBASE-12034. For a master that doesn't host any region, the 
block cache is not instantiated.

> BlockCache would be instantiated when Master doesn't host regions
> -
>
> Key: HBASE-11970
> URL: https://issues.apache.org/jira/browse/HBASE-11970
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0
>Reporter: Ted Yu
>Assignee: Jimmy Xiang
>
> The HMaster ctor calls the HRegionServer ctor, which calls the CacheConfig 
> ctor, where CacheConfig.instantiateBlockCache() would create the BlockCache.
> This happens even if the Master is configured not to host regions.
> This was observed when I tried to answer a question from the thread titled:
> 'The default hbase.regionserver.info.port doesn't work'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-11970) BlockCache would be instantiated when Master doesn't host regions

2014-09-26 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11970?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang reassigned HBASE-11970:
---

Assignee: Jimmy Xiang

> BlockCache would be instantiated when Master doesn't host regions
> -
>
> Key: HBASE-11970
> URL: https://issues.apache.org/jira/browse/HBASE-11970
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0
>Reporter: Ted Yu
>Assignee: Jimmy Xiang
>
> The HMaster ctor calls the HRegionServer ctor, which calls the CacheConfig 
> ctor, where CacheConfig.instantiateBlockCache() would create the BlockCache.
> This happens even if the Master is configured not to host regions.
> This was observed when I tried to answer a question from the thread titled:
> 'The default hbase.regionserver.info.port doesn't work'



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12106) Move test annotations to test artifact

2014-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12106?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150208#comment-14150208
 ] 

Hadoop QA commented on HBASE-12106:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12671563/hbase-12106_v2.patch
  against trunk revision .
  ATTACHMENT ID: 12671563

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 76 new 
or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.client.TestMultiParallel

 {color:red}-1 core zombie tests{color}.  There are 3 zombie test(s):   
at 
org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFiles.testSimpleLoad(TestLoadIncrementalHFiles.java:98)
at 
org.apache.hadoop.hbase.mapreduce.TestLoadIncrementalHFilesSplitRecovery.testSplitWhileBulkLoadPhase(TestLoadIncrementalHFilesSplitRecovery.java:337)
at 
org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotBlocksMap.testReadRenamedSnapshotFileWithCheckpoint(TestSnapshotBlocksMap.java:273)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11102//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11102//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11102//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11102//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11102//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11102//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11102//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11102//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11102//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11102//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11102//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11102//console

This message is automatically generated.

> Move test annotations to test artifact
> --
>
> Key: HBASE-12106
> URL: https://issues.apache.org/jira/browse/HBASE-12106
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 0.98.7, 0.99.1
>
> Attachments: hbase-12106_v1-0.98+0.99.patch, hbase-12106_v1.patch, 
> hbase-12106_v1.patch, hbase-12106_v2.patch
>
>
> Test annotation interfaces used to be under hbase-common/src/test then moved 
> to hbase-annotations/src/main. We should move them to 
> hbase-annotations/src/test. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12034) If I kill single RS in branch-1, all regions end up on Master!

2014-09-26 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-12034:

Attachment: hbase-12034_2.patch

Also attached a patch for the master branch to get similar behavior. The 
difference is that, for the master branch, meta and namespace are put on the 
active master by default.

> If I kill single RS in branch-1, all regions end up on Master!
> --
>
> Key: HBASE-12034
> URL: https://issues.apache.org/jira/browse/HBASE-12034
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: stack
>Assignee: Jimmy Xiang
>Priority: Critical
> Fix For: 0.99.1
>
> Attachments: hbase-12034_1.patch, hbase-12034_2.patch
>
>
> This is unexpected.  M should not be carrying regions in branch-1.  Right 
> [~jxiang]?   Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12034) If I kill single RS in branch-1, all regions end up on Master!

2014-09-26 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-12034:

Status: Patch Available  (was: Open)

> If I kill single RS in branch-1, all regions end up on Master!
> --
>
> Key: HBASE-12034
> URL: https://issues.apache.org/jira/browse/HBASE-12034
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: stack
>Assignee: Jimmy Xiang
>Priority: Critical
> Fix For: 0.99.1
>
> Attachments: hbase-12034_1.patch, hbase-12034_2.patch
>
>
> This is unexpected.  M should not be carrying regions in branch-1.  Right 
> [~jxiang]?   Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12034) If I kill single RS in branch-1, all regions end up on Master!

2014-09-26 Thread Jimmy Xiang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12034?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jimmy Xiang updated HBASE-12034:

Attachment: hbase-12034_1.patch

Attached a patch for branch-1 that rolls back some changes. Now the backup 
master doesn't host regions and doesn't show up in the master web UI as a 
regionserver. The active master can host regions only if configured to do so; 
otherwise, it doesn't host any region and doesn't show up in the master web 
UI as a regionserver either. Ran unit tests, ITBLL, standalone mode, and 
pseudo-distributed mode with no new config.

> If I kill single RS in branch-1, all regions end up on Master!
> --
>
> Key: HBASE-12034
> URL: https://issues.apache.org/jira/browse/HBASE-12034
> Project: HBase
>  Issue Type: Bug
>  Components: master
>Reporter: stack
>Assignee: Jimmy Xiang
>Priority: Critical
> Fix For: 0.99.1
>
> Attachments: hbase-12034_1.patch
>
>
> This is unexpected.  M should not be carrying regions in branch-1.  Right 
> [~jxiang]?   Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12108) HBaseConfiguration

2014-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150202#comment-14150202
 ] 

stack commented on HBASE-12108:
---

Why is that [~aniket]? Because we look for hbase-default.xml in the parent 
class loader from where we loaded Configuration and are not finding 
hbase-default.xml?  Thanks for looking into this one.

> HBaseConfiguration
> --
>
> Key: HBASE-12108
> URL: https://issues.apache.org/jira/browse/HBASE-12108
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Reporter: Aniket Bhatnagar
>Priority: Minor
>
> In a setup wherein the HBase jars are loaded in a child classloader whose 
> parent has loaded the hadoop-common jar, HBaseConfiguration.create() throws 
> the "hbase-default.xml file seems to be for and old version of HBase 
> (null)..." exception. The ClassLoader should be set on the Hadoop conf 
> object before calling the addHbaseResources method.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12090) Bytes: more Unsafe, more Faster

2014-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150197#comment-14150197
 ] 

stack commented on HBASE-12090:
---

bq. I need to find explanation why patched version does not show significant 
improvement in a GET - Narrow Table test.

Fewer compares per result returned?

Otherwise the patch looks good on a skim. The if (UnsafeComparer.littleEndian) 
{ branching is a little unsettling, but seems fine on a read-through. Your 
thinking is that TestBytes is coverage enough for this new code [~vrodionov]? 
Its tests go in and out of bytes. You don't think we need a test that does 
unsafe and safe and then compares the results?

Deferring to Lars's testing, but this looks good to me.
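
That cross-check could be as small as this sketch (not the project's actual 
test): run the shipped comparator, which may take the Unsafe path, against a 
plain unsigned lexicographic compare and assert the signs agree:

{code}
import java.util.Random;

import org.apache.hadoop.hbase.util.Bytes;

public class CompareToCrossCheck {
  public static void main(String[] args) {
    Random r = new Random(42);
    for (int i = 0; i < 100000; i++) {
      byte[] a = new byte[r.nextInt(32)];
      byte[] b = new byte[r.nextInt(32)];
      r.nextBytes(a);
      r.nextBytes(b);
      int fast = Bytes.compareTo(a, b); // may use the Unsafe comparer
      int slow = naiveCompare(a, b);    // pure-Java reference
      if (Integer.signum(fast) != Integer.signum(slow)) {
        throw new AssertionError("mismatch: " + Bytes.toStringBinary(a)
            + " vs " + Bytes.toStringBinary(b));
      }
    }
  }

  // Unsigned lexicographic comparison, the contract Bytes.compareTo promises.
  static int naiveCompare(byte[] a, byte[] b) {
    int n = Math.min(a.length, b.length);
    for (int i = 0; i < n; i++) {
      int x = a[i] & 0xff;
      int y = b[i] & 0xff;
      if (x != y) {
        return x - y;
      }
    }
    return a.length - b.length;
  }
}
{code}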

> Bytes: more Unsafe, more Faster 
> 
>
> Key: HBASE-12090
> URL: https://issues.apache.org/jira/browse/HBASE-12090
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.94.23, 0.98.6
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 0.98.7, 0.94.24, 0.99.1
>
> Attachments: 12090-v1.1.txt, HBASE-12090.2.patch, HBASE-12090.patch
>
>
> Additional optimizations to *org.apache.hadoop.hbase.util.Bytes*:
> * New version of compareTo method.
> * New versions for primitive converters  : putXXX/toXXX.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12072) We are doing 35 x 35 retries for master operations

2014-09-26 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-12072:
---
Affects Version/s: 0.98.6

Looks like this issue is for 0.98 only

0.99+ wouldn't have this problem.

> We are doing 35 x 35 retries for master operations
> --
>
> Key: HBASE-12072
> URL: https://issues.apache.org/jira/browse/HBASE-12072
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.6
>Reporter: Enis Soztutar
>Assignee: Ted Yu
> Attachments: 12072-v1.txt, 12072-v2.txt
>
>
> For master requests, there are two retry mechanisms in effect. The first one 
> is from HBaseAdmin.executeCallable() 
> {code}
>   private  V executeCallable(MasterCallable callable) throws 
> IOException {
> RpcRetryingCaller caller = rpcCallerFactory.newCaller();
> try {
>   return caller.callWithRetries(callable);
> } finally {
>   callable.close();
> }
>   }
> {code}
> And inside, the other one is from StubMaker.makeStub():
> {code}
> /**
>* Create a stub against the master.  Retry if necessary.
>* @return A stub to do intf against the master
>* @throws MasterNotRunningException
>*/
>   @edu.umd.cs.findbugs.annotations.SuppressWarnings 
> (value="SWL_SLEEP_WITH_LOCK_HELD")
>   Object makeStub() throws MasterNotRunningException {
> {code}
> The tests will just hang for 10 min * 35 ~= 6 hours.
> {code}
> 2014-09-23 16:19:05,151 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 1 of 35 
> failed; retrying after sleep of 100, exception=java.io.IOException: Can't get 
> master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:05,253 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 2 of 35 
> failed; retrying after sleep of 200, exception=java.io.IOException: Can't get 
> master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:05,456 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 3 of 35 
> failed; retrying after sleep of 300, exception=java.io.IOException: Can't get 
> master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:05,759 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 4 of 35 
> failed; retrying after sleep of 500, exception=java.io.IOException: Can't get 
> master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:06,262 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 5 of 35 
> failed; retrying after sleep of 1008, exception=java.io.IOException: Can't 
> get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:07,273 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 6 of 35 
> failed; retrying after sleep of 2011, exception=java.io.IOException: Can't 
> get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:09,286 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 7 of 35 
> failed; retrying after sleep of 4012, exception=java.io.IOException: Can't 
> get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:13,303 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 8 of 35 
> failed; retrying after sleep of 10033, exception=java.io.IOException: Can't 
> get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:23,343 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 9 of 35 
> failed; retrying after sleep of 10089, exception=java.io.IOException: Can't 
> get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:33,439 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 10 of 
> 35 failed; retrying after sleep of 10027, exception=java.io.IOException: 
> Can't get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:43,473 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 11 of 
> 35 failed; retrying after sleep of 10004, exception=java.io.IOException: 
> Can't get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:53,485 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 12 of 
> 35 failed; retrying after sleep of 20160, exception=java.io.IOException: 
> Can't get master address from ZooKeeper; znode data == null
> 2014-09-23 16:20:13,656 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 13 of 
> 35 failed; retrying after sleep of 20006, exception=java.io.IOException: 
> Can't get master address from ZooKeeper; znode data == null
> 2014-09-23 16:20:33,675 INFO  [main] 
> client.ConnectionManager$HConnectionImpl
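
Until the nesting is fixed, one practical mitigation is to shrink the shared 
retry knobs so tests fail fast. A sketch using the standard client 
configuration keys, on the assumption (suggested by the "of 35" in the log) 
that both loops honor the same retry count:

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class FailFastClientConf {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    // With retries nested inside retries, the worst case is roughly
    // retries x retries attempts; a small count bounds the hang.
    conf.setInt("hbase.client.retries.number", 3);
    // Base sleep (ms) used for the backoff between attempts.
    conf.setInt("hbase.client.pause", 100);
  }
}
{code}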

[jira] [Commented] (HBASE-12052) BulkLoad Failed due to no write permission on input files

2014-09-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12052?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150179#comment-14150179
 ] 

Hudson commented on HBASE-12052:


FAILURE: Integrated in HBase-TRUNK #5565 (See 
[https://builds.apache.org/job/HBase-TRUNK/5565/])
HBASE-12052: BulkLoad Failed due to no write permission on input files 
(jeffreyz: rev 8ee39f197157e2c938b12efaca43ad2bea8205ed)
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/mapreduce/TestLoadIncrementalHFilesUseSecurityEndPoint.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/SecureBulkLoadEndpoint.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/LoadIncrementalHFiles.java
* 
hbase-server/src/test/java/org/apache/hadoop/hbase/security/access/TestAccessController.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/token/FsDelegationToken.java


> BulkLoad Failed due to no write permission on input files
> -
>
> Key: HBASE-12052
> URL: https://issues.apache.org/jira/browse/HBASE-12052
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0, 0.98.6
>Reporter: Jeffrey Zhong
>Assignee: Jeffrey Zhong
> Attachments: HBASE-12052.patch
>
>
> The issue is that an HBase bulk load is done by the Region Server, which 
> normally runs as the hbase user, while the input hfile folder and the user 
> who starts the bulk load could be any user.
> Below is the error message from a case where user "hrt_qa" bulk loads files 
> on which "hrt_qa" has write permission, yet the bulk load still fails with a 
> "Permission denied" error.
> We have similar handling for this issue in the secure env, so the proposed 
> fix is to reuse SecureBulkLoadEndPoint in the un-secure env as well. In the 
> future, we can rename the class to BulkLoadEndPoint.
> {noformat}
> java.io.IOException: Exception in rename
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.rename(HRegionFileSystem.java:947)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.commitStoreFile(HRegionFileSystem.java:347)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionFileSystem.bulkLoadStoreFile(HRegionFileSystem.java:421)
> at 
> org.apache.hadoop.hbase.regionserver.HStore.bulkLoadHFile(HStore.java:723)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3603)
> at 
> org.apache.hadoop.hbase.regionserver.HRegion.bulkLoadHFiles(HRegion.java:3525)
> at 
> org.apache.hadoop.hbase.regionserver.HRegionServer.bulkLoadHFile(HRegionServer.java:3276)
> at 
> org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:28863)
> at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2008)
> at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:92)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.consumerLoop(SimpleRpcScheduler.java:160)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.access$000(SimpleRpcScheduler.java:38)
> at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler$1.run(SimpleRpcScheduler.java:110)
> at java.lang.Thread.run(Thread.java:662)
> Caused by: org.apache.hadoop.security.AccessControlException: Permission 
> denied: user=hbase, access=WRITE, 
> inode="/tmp/a0f3ee35-4c8f-4077-93d0-94d8e5bae914/0":hrt_qa:hdfs:drwxr-xr-x
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:265)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:251)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:232)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
> at 
> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5515)
> {noformat}
>   



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12031) Parallel Scanners inside Region

2014-09-26 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150173#comment-14150173
 ] 

stack commented on HBASE-12031:
---

bq. HFileContext carries the information on some of the metadata about the 
HFile - this is per HFile. FileReaderContext is per Reader (Scanner).
bq. Probably, yes but need a place (class) to move the actual read to.

Can it not be done inside existing model via changes to HFileBlock and/or 
changes in scanners rather than doing this 'cross-cut'?

bq.  Not all data from read ahead buffer should be cached in a general case. 
Sharing some data between scanners in RA buffer is not a common case.

Tell me more about the use case then? Many scanners inside the same region, 
but they will not be scanning the same files? RA buffers per scanner will 
need to be accounted for so we can measure memory usage. Since we read in the 
block anyway, why not go via the blockcache unless it is for sure one-shot 
only?

Will readahead be a scanner option?

Good stuff [~vrodionov]




> Parallel Scanners inside Region
> ---
>
> Key: HBASE-12031
> URL: https://issues.apache.org/jira/browse/HBASE-12031
> Project: HBase
>  Issue Type: New Feature
>  Components: Performance, Scanners
>Affects Versions: 0.98.6
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 1.0.0, 2.0.0, 0.98.7, 0.99.1
>
> Attachments: HBASE-12031.2.patch, HBASE-12031.3.patch, 
> HBASE-12031.patch, ParallelScannerDesign.pdf, hbase-12031-tests.tar.gz
>
>
> This JIRA is to improve the performance of multiple scanners running on the 
> same region in parallel. The scenarios where we will get the performance 
> benefits:
> * A new TableInputFormat with input splits smaller than an HBase Region.
> * Scanning during compaction (a compaction scanner and an application 
> scanner over the same Region).
> Some JIRAs related to this one:
> https://issues.apache.org/jira/browse/HBASE-7336
> https://issues.apache.org/jira/browse/HBASE-5979 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12042) Replace internal uses of HTable(Configuration, String) with HTable(Configuration, TableName)

2014-09-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12042?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150147#comment-14150147
 ] 

Hadoop QA commented on HBASE-12042:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12671544/HBASE-12042.patch
  against trunk revision .
  ATTACHMENT ID: 12671544

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 195 
new or modified tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+Table table = TEST_UTIL.createTable(tableName, new byte[][] { FAMILY 
}, conf, Integer.MAX_VALUE);
+HRegion region = 
TEST_UTIL.getRSForFirstRegionInTable(tableName).getFromOnlineRegions(regionName);

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.mapreduce.TestWALPlayer
  org.apache.hadoop.hbase.master.TestMasterTransitions
  org.apache.hadoop.hbase.client.TestMultiParallel
  org.apache.hadoop.hbase.regionserver.wal.TestLogRolling

 {color:red}-1 core zombie tests{color}.  There are 1 zombie test(s):   
at 
org.apache.hadoop.hdfs.server.blockmanagement.TestBlockTokenWithDFS.testEnd2End(TestBlockTokenWithDFS.java:592)

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11099//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11099//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop2-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11099//artifact/patchprocess/newPatchFindbugsWarningshbase-prefix-tree.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11099//artifact/patchprocess/newPatchFindbugsWarningshbase-common.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11099//artifact/patchprocess/newPatchFindbugsWarningshbase-thrift.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11099//artifact/patchprocess/newPatchFindbugsWarningshbase-annotations.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11099//artifact/patchprocess/newPatchFindbugsWarningshbase-examples.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11099//artifact/patchprocess/newPatchFindbugsWarningshbase-client.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11099//artifact/patchprocess/newPatchFindbugsWarningshbase-hadoop-compat.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11099//artifact/patchprocess/newPatchFindbugsWarningshbase-server.html
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11099//artifact/patchprocess/newPatchFindbugsWarningshbase-protocol.html
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/11099//console

This message is automatically generated.
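
For reference, the change under test is a mechanical constructor swap. A 
minimal sketch with illustrative names (both overloads exist; the patch only 
moves internal callers from the first to the second):

{code}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HTable;

Configuration conf = HBaseConfiguration.create();
// Before: the String overload being phased out internally.
HTable before = new HTable(conf, "mytable");
// After: the TableName overload the patch switches to.
HTable after = new HTable(conf, TableName.valueOf("mytable"));
{code}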

> Replace internal uses of HTable(Configuration, String) with 
> HTable(Configuration, TableName)
> 
>
> Key: HBASE-12042
> URL: https://issues.apache.org/jira/browse/HBASE-12042
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.99.0, 2.0.0
>Reporter: Solomon Duskis
>Assignee: Solomon Duskis
> Fix For: 2.0.0, 0.99.1
>
> Attachments: HBASE-12042.patch, HBASE-12042.patch, HBASE-12042.patch
>
>
> Replace internal uses of HTable(Configuration, String) with 
> HTable(Configuration, TableName)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12109) user_permission command for namespace does not return correct result

2014-09-26 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150132#comment-14150132
 ] 

Matteo Bertozzi commented on HBASE-12109:
-

Are we going back to the old code and not using AccessControlClient?

> user_permission command for namespace does not return correct result
> 
>
> Key: HBASE-12109
> URL: https://issues.apache.org/jira/browse/HBASE-12109
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 2.0.0
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
> Attachments: HBASE-12109_0.patch
>
>
> The existing user_permission command does not return permissions related to 
> namespaces. The permissions exist in the acl table, but user_permission.rb 
> does not handle namespaces.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12031) Parallel Scanners inside Region

2014-09-26 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150121#comment-14150121
 ] 

Vladimir Rodionov commented on HBASE-12031:
---

I am going to submit a new patch next Monday.

> Parallel Scanners inside Region
> ---
>
> Key: HBASE-12031
> URL: https://issues.apache.org/jira/browse/HBASE-12031
> Project: HBase
>  Issue Type: New Feature
>  Components: Performance, Scanners
>Affects Versions: 0.98.6
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 1.0.0, 2.0.0, 0.98.7, 0.99.1
>
> Attachments: HBASE-12031.2.patch, HBASE-12031.3.patch, 
> HBASE-12031.patch, ParallelScannerDesign.pdf, hbase-12031-tests.tar.gz
>
>
> This JIRA is to improve the performance of multiple scanners running on the 
> same region in parallel. The scenarios where we will get the performance benefits:
> * New TableInputFormat with input splits smaller than HBase Region.
> * Scanning during compaction (Compaction scanner and application scanner over 
> the same Region).
> Some JIRAs related to this one:
> https://issues.apache.org/jira/browse/HBASE-7336
> https://issues.apache.org/jira/browse/HBASE-5979 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-12072) We are doing 35 x 35 retries for master operations

2014-09-26 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150107#comment-14150107
 ] 

Ted Yu edited comment on HBASE-12072 at 9/26/14 10:36 PM:
--

HConnectionImplementation provides:
{code}
public MasterKeepAliveConnection getKeepAliveMasterService()
{code}
which calls stubMaker.makeStub().

I was thinking about adding getKeepAliveMasterServiceNoRetries() which calls 
stubMaker.makeStubNoRetries().

But HConnection is marked Stable.
On the other hand, the related classes (ConnectionManager, HConnection, etc.) are 
in the hbase-client module, so the addition of getKeepAliveMasterServiceNoRetries() 
would not complicate the rolling upgrade scenario.
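
For context, a minimal self-contained demo (not HBase code) of the 
multiplication in the issue title: an outer caller retrying 35 times around a 
factory that itself retries 35 times performs up to 35 x 35 attempts.

{code}
public class NestedRetries {
  static int attempts = 0;

  // Stands in for StubMaker.makeStub(): retries internally, then gives up.
  static void makeStub() throws Exception {
    for (int i = 0; i < 35; i++) {
      attempts++;  // one real connection attempt per iteration
    }
    throw new Exception("master not running");
  }

  public static void main(String[] args) {
    // Stands in for the outer RpcRetryingCaller.callWithRetries() loop.
    for (int i = 0; i < 35; i++) {
      try { makeStub(); } catch (Exception e) { /* sleep, then retry */ }
    }
    System.out.println(attempts);  // prints 1225 (= 35 * 35)
  }
}
{code}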


was (Author: yuzhih...@gmail.com):
HConnectionImplementation provides:
{code}
public MasterKeepAliveConnection getKeepAliveMasterService()
{code}
which calls stubMaker.makeStub().

I was thinking about adding getKeepAliveMasterServiceNoRetries() which calls 
stubMaker.makeStubNoRetries().

But HConnection is marked Stable.

> We are doing 35 x 35 retries for master operations
> --
>
> Key: HBASE-12072
> URL: https://issues.apache.org/jira/browse/HBASE-12072
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Ted Yu
> Attachments: 12072-v1.txt, 12072-v2.txt
>
>
> For master requests, there are two retry mechanisms in effect. The first one 
> is from HBaseAdmin.executeCallable() 
> {code}
>   private <V> V executeCallable(MasterCallable<V> callable) throws 
> IOException {
> RpcRetryingCaller<V> caller = rpcCallerFactory.<V> newCaller();
> try {
>   return caller.callWithRetries(callable);
> } finally {
>   callable.close();
> }
>   }
> {code}
> And inside, the other one is from StubMaker.makeStub():
> {code}
> /**
>* Create a stub against the master.  Retry if necessary.
>* @return A stub to do intf against the master
>* @throws MasterNotRunningException
>*/
>   @edu.umd.cs.findbugs.annotations.SuppressWarnings 
> (value="SWL_SLEEP_WITH_LOCK_HELD")
>   Object makeStub() throws MasterNotRunningException {
> {code}
> The tests will just hang for 10 min * 35 ~= 6 hours. 
> {code}
> 2014-09-23 16:19:05,151 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 1 of 35 
> failed; retrying after sleep of 100, exception=java.io.IOException: Can't get 
> master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:05,253 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 2 of 35 
> failed; retrying after sleep of 200, exception=java.io.IOException: Can't get 
> master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:05,456 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 3 of 35 
> failed; retrying after sleep of 300, exception=java.io.IOException: Can't get 
> master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:05,759 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 4 of 35 
> failed; retrying after sleep of 500, exception=java.io.IOException: Can't get 
> master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:06,262 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 5 of 35 
> failed; retrying after sleep of 1008, exception=java.io.IOException: Can't 
> get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:07,273 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 6 of 35 
> failed; retrying after sleep of 2011, exception=java.io.IOException: Can't 
> get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:09,286 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 7 of 35 
> failed; retrying after sleep of 4012, exception=java.io.IOException: Can't 
> get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:13,303 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 8 of 35 
> failed; retrying after sleep of 10033, exception=java.io.IOException: Can't 
> get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:23,343 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 9 of 35 
> failed; retrying after sleep of 10089, exception=java.io.IOException: Can't 
> get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:33,439 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 10 of 
> 35 failed; retrying after sleep of 10027, exception=java.io.IOException: 
> Can't get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:43,473 INFO  [main] 
> clien

[jira] [Commented] (HBASE-12090) Bytes: more Unsafe, more Faster

2014-09-26 Thread Vladimir Rodionov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12090?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150120#comment-14150120
 ] 

Vladimir Rodionov commented on HBASE-12090:
---

I need to find an explanation for why the patched version does not show a 
significant improvement in the GET - Narrow Table test.
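
For reference, the kind of conversion being measured on the toXXX path. A 
minimal sketch of an Unsafe-backed toLong(), not the actual patch; it assumes 
a little-endian JVM with sun.misc.Unsafe available:

{code}
import java.lang.reflect.Field;
import sun.misc.Unsafe;

public class UnsafeToLong {
  private static final Unsafe UNSAFE;
  private static final long BYTE_ARRAY_OFFSET;
  static {
    try {
      Field f = Unsafe.class.getDeclaredField("theUnsafe");
      f.setAccessible(true);
      UNSAFE = (Unsafe) f.get(null);
      BYTE_ARRAY_OFFSET = UNSAFE.arrayBaseOffset(byte[].class);
    } catch (Exception e) {
      throw new ExceptionInInitializerError(e);
    }
  }

  // One 8-byte read instead of eight 1-byte reads; the caller is
  // responsible for bounds checking, which is part of the speedup.
  public static long toLong(byte[] bytes, int offset) {
    long v = UNSAFE.getLong(bytes, BYTE_ARRAY_OFFSET + offset);
    // Bytes.toLong() is big-endian; flip on little-endian hardware.
    return Long.reverseBytes(v);
  }
}
{code}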

> Bytes: more Unsafe, more Faster 
> 
>
> Key: HBASE-12090
> URL: https://issues.apache.org/jira/browse/HBASE-12090
> Project: HBase
>  Issue Type: Improvement
>  Components: Performance
>Affects Versions: 0.94.23, 0.98.6
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0, 0.98.7, 0.94.24, 0.99.1
>
> Attachments: 12090-v1.1.txt, HBASE-12090.2.patch, HBASE-12090.patch
>
>
> Additional optimizations to *org.apache.hadoop.hbase.util.Bytes*:
> * New version of compareTo method.
> * New versions for primitive converters  : putXXX/toXXX.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-12109) user_permission command for namespace does not return correct result

2014-09-26 Thread Vandana Ayyalasomayajula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vandana Ayyalasomayajula updated HBASE-12109:
-
Attachment: HBASE-12109_0.patch

Initial patch for review.

> user_permission command for namespace does not return correct result
> 
>
> Key: HBASE-12109
> URL: https://issues.apache.org/jira/browse/HBASE-12109
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 2.0.0
>Reporter: Vandana Ayyalasomayajula
>Assignee: Vandana Ayyalasomayajula
>Priority: Minor
> Attachments: HBASE-12109_0.patch
>
>
> The existing user_permission command does not return permissions related to 
> namespaces. The permissions exist in the acl table, but user_permission.rb 
> does not handle namespaces.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-12109) user_permission command for namespace does not return correct result

2014-09-26 Thread Vandana Ayyalasomayajula (JIRA)
Vandana Ayyalasomayajula created HBASE-12109:


 Summary: user_permission command for namespace does not return 
correct result
 Key: HBASE-12109
 URL: https://issues.apache.org/jira/browse/HBASE-12109
 Project: HBase
  Issue Type: Bug
  Components: shell
Affects Versions: 2.0.0
Reporter: Vandana Ayyalasomayajula
Assignee: Vandana Ayyalasomayajula
Priority: Minor


The existing user_permission command does not return permissions related to 
namespaces. The permissions exist in the acl table, but user_permission.rb 
does not handle namespaces.
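
Expected shell usage once fixed, assuming the fix follows the existing 
'@namespace' convention that grant/revoke already use (output omitted):

{code}
hbase(main):001:0> user_permission '@ns1'   # list grants on namespace ns1
{code}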



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12072) We are doing 35 x 35 retries for master operations

2014-09-26 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12072?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14150107#comment-14150107
 ] 

Ted Yu commented on HBASE-12072:


HConnectionImplementation provides:
{code}
public MasterKeepAliveConnection getKeepAliveMasterService()
{code}
which calls stubMaker.makeStub().

I was thinking about adding getKeepAliveMasterServiceNoRetries() which calls 
stubMaker.makeStubNoRetries().

But HConnection is marked Stable.

> We are doing 35 x 35 retries for master operations
> --
>
> Key: HBASE-12072
> URL: https://issues.apache.org/jira/browse/HBASE-12072
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Ted Yu
> Attachments: 12072-v1.txt, 12072-v2.txt
>
>
> For master requests, there are two retry mechanisms in effect. The first one 
> is from HBaseAdmin.executeCallable() 
> {code}
>   private <V> V executeCallable(MasterCallable<V> callable) throws 
> IOException {
> RpcRetryingCaller<V> caller = rpcCallerFactory.<V> newCaller();
> try {
>   return caller.callWithRetries(callable);
> } finally {
>   callable.close();
> }
>   }
> {code}
> And inside, the other one is from StubMaker.makeStub():
> {code}
> /**
>* Create a stub against the master.  Retry if necessary.
>* @return A stub to do intf against the master
>* @throws MasterNotRunningException
>*/
>   @edu.umd.cs.findbugs.annotations.SuppressWarnings 
> (value="SWL_SLEEP_WITH_LOCK_HELD")
>   Object makeStub() throws MasterNotRunningException {
> {code}
> The tests will just hang for 10 min * 35 ~= 6 hours. 
> {code}
> 2014-09-23 16:19:05,151 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 1 of 35 
> failed; retrying after sleep of 100, exception=java.io.IOException: Can't get 
> master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:05,253 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 2 of 35 
> failed; retrying after sleep of 200, exception=java.io.IOException: Can't get 
> master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:05,456 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 3 of 35 
> failed; retrying after sleep of 300, exception=java.io.IOException: Can't get 
> master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:05,759 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 4 of 35 
> failed; retrying after sleep of 500, exception=java.io.IOException: Can't get 
> master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:06,262 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 5 of 35 
> failed; retrying after sleep of 1008, exception=java.io.IOException: Can't 
> get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:07,273 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 6 of 35 
> failed; retrying after sleep of 2011, exception=java.io.IOException: Can't 
> get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:09,286 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 7 of 35 
> failed; retrying after sleep of 4012, exception=java.io.IOException: Can't 
> get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:13,303 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 8 of 35 
> failed; retrying after sleep of 10033, exception=java.io.IOException: Can't 
> get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:23,343 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 9 of 35 
> failed; retrying after sleep of 10089, exception=java.io.IOException: Can't 
> get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:33,439 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 10 of 
> 35 failed; retrying after sleep of 10027, exception=java.io.IOException: 
> Can't get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:43,473 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 11 of 
> 35 failed; retrying after sleep of 10004, exception=java.io.IOException: 
> Can't get master address from ZooKeeper; znode data == null
> 2014-09-23 16:19:53,485 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 12 of 
> 35 failed; retrying after sleep of 20160, exception=java.io.IOException: 
> Can't get master address from ZooKeeper; znode data == null
> 2014-09-23 16:20:13,656 INFO  [main] 
> client.ConnectionManager$HConnectionImplementation: getMaster attempt 13 of 
> 3
