[jira] [Commented] (HBASE-13496) Make Bytes$LexicographicalComparerHolder$UnsafeComparer::compareTo inlineable

2015-04-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504427#comment-14504427
 ] 

Anoop Sam John commented on HBASE-13496:


Stack
I am using jre1.8.0_45.

I was playing with some other compare tests and noticed the Bytes Unsafe based 
compare is performing lower than an Unsafe comparator we wrote to work with 
BBs (over in HBASE-11425), and then observed that with default settings the 
inlining is not happening.  With a small change it happens with the default setting.
Vladimir
Yes, with the change in inline size we can make the bigger methods get 
inlined (tested also).  I was talking about the default case; otherwise we will have 
to change our script to pass some bigger max inline size while starting the RS 
process.  Similar inline optimization is done in this Jira's siblings.
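The refactor being discussed can be sketched roughly like this (illustrative class and method names, not the actual patch): keep the hot entry method tiny so its bytecode stays under HotSpot's hot-method inline budget (-XX:FreqInlineSize, 325 bytes by default), and push the bulk of the work into a private method the JIT can inline separately.

```java
// Hypothetical sketch of the "split the hot method" refactor; the real
// Bytes$LexicographicalComparerHolder$UnsafeComparer uses Unsafe and is
// more involved.
final class LexicographicCompare {

    // Tiny entry point: only the cheap identity check lives here, so the
    // method body stays far below the inline size threshold.
    static int compare(byte[] a, byte[] b) {
        if (a == b) {
            return 0;
        }
        return compareLoop(a, b);
    }

    // The heavier loop sits in its own method; HotSpot can decide to
    // inline it on its own, and the caller above remains inlineable.
    private static int compareLoop(byte[] a, byte[] b) {
        int minLen = Math.min(a.length, b.length);
        for (int i = 0; i < minLen; i++) {
            // Mask to 0xff so bytes compare as unsigned, matching
            // lexicographic byte order.
            int diff = (a[i] & 0xff) - (b[i] & 0xff);
            if (diff != 0) {
                return diff;
            }
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        if (compare(new byte[]{1, 2}, new byte[]{1, 3}) >= 0) throw new AssertionError();
        if (compare(new byte[]{7}, new byte[]{7}) != 0) throw new AssertionError();
        if (compare(new byte[]{1, 2}, new byte[]{1}) <= 0) throw new AssertionError();
        System.out.println("ok");
    }
}
```

Whether a split actually tips a method under the threshold can be checked with -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining, which is the kind of output the "hot method too big" line below comes from.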

 Make Bytes$LexicographicalComparerHolder$UnsafeComparer::compareTo inlineable
 -

 Key: HBASE-13496
 URL: https://issues.apache.org/jira/browse/HBASE-13496
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 1.2.0


 While testing with some other perf comparisons I have noticed that the above 
 method (which is very hot in the read path) is not getting inlined
 bq.@ 16   
 org.apache.hadoop.hbase.util.Bytes$LexicographicalComparerHolder$UnsafeComparer::compareTo
  (364 bytes)   hot method too big
 We can do minor refactoring to make it inlineable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-11290) Unlock RegionStates

2015-04-21 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504432#comment-14504432
 ] 

Mikhail Antonov commented on HBASE-11290:
-

[~virag] Curious if you had a chance to take the latest patch for a spin on the 
real cluster? Any numbers to think over?

 Unlock RegionStates
 ---

 Key: HBASE-11290
 URL: https://issues.apache.org/jira/browse/HBASE-11290
 Project: HBase
  Issue Type: Sub-task
Reporter: Francis Liu
Assignee: Virag Kothari
 Fix For: 2.0.0, 1.1.0, 0.98.13

 Attachments: HBASE-11290-0.98.patch, HBASE-11290-0.98_v2.patch, 
 HBASE-11290.draft.patch


 RegionStates is a highly accessed data structure in HMaster, yet most 
 of its methods are synchronized, which limits concurrency. Even simply 
 making some of the getters non-synchronized by using concurrent data 
 structures has helped with region assignments. We can go with something as 
 simple as this approach, or create locks per region, or a bucket lock per region bucket.
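The "bucket lock per region bucket" option can be sketched as follows (class and method names are illustrative, not from any actual patch): getters read a ConcurrentHashMap with no lock at all, while writers synchronize only on the stripe their region hashes into.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical sketch of striped locking for region state; the real
// RegionStates tracks much richer state than a String per region.
final class StripedRegionStates {
    private static final int STRIPES = 16;

    private final Map<String, String> states = new ConcurrentHashMap<>();
    private final Object[] locks = new Object[STRIPES];

    StripedRegionStates() {
        for (int i = 0; i < STRIPES; i++) {
            locks[i] = new Object();
        }
    }

    // Getter needs no synchronized: ConcurrentHashMap reads don't block,
    // which is the easy win described above.
    String getState(String regionName) {
        return states.get(regionName);
    }

    // Writer locks only the bucket this region hashes into, so updates
    // to regions in other buckets proceed concurrently.
    void setState(String regionName, String state) {
        synchronized (locks[stripeFor(regionName)]) {
            states.put(regionName, state);
        }
    }

    private int stripeFor(String regionName) {
        return (regionName.hashCode() & 0x7fffffff) % STRIPES;
    }

    public static void main(String[] args) {
        StripedRegionStates rs = new StripedRegionStates();
        rs.setState("region-a", "OPEN");
        rs.setState("region-b", "CLOSED");
        if (!"OPEN".equals(rs.getState("region-a"))) throw new AssertionError();
        if (rs.getState("missing") != null) throw new AssertionError();
        System.out.println("ok");
    }
}
```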





[jira] [Commented] (HBASE-13501) Deprecate/Remove getComparator() in HRegionInfo.

2015-04-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504413#comment-14504413
 ] 

Anoop Sam John commented on HBASE-13501:


It is used only to find and pass the region name, so the public API can 
better take the region name directly.

 Deprecate/Remove getComparator() in HRegionInfo.
 

 Key: HBASE-13501
 URL: https://issues.apache.org/jira/browse/HBASE-13501
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0








[jira] [Commented] (HBASE-13520) NullPointerException in TagRewriteCell

2015-04-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504417#comment-14504417
 ] 

Anoop Sam John commented on HBASE-13520:


I am +1 for V2.   Will wait for QA report.
I can add the assert part on commit.

 NullPointerException in TagRewriteCell
 --

 Key: HBASE-13520
 URL: https://issues.apache.org/jira/browse/HBASE-13520
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: 2.0.0, 1.1.0, 1.0.2

 Attachments: HBASE-13520-v1.patch, HBASE-13520.patch


 Found via running {{IntegrationTestIngestWithVisibilityLabels}} with Kerberos 
 enabled.
 {noformat}
 2015-04-20 18:54:36,712 ERROR 
 [B.defaultRpcServer.handler=17,queue=2,port=16020] ipc.RpcServer: Unexpected 
 throwable object
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.TagRewriteCell.getTagsLength(TagRewriteCell.java:157)
 at 
 org.apache.hadoop.hbase.TagRewriteCell.heapSize(TagRewriteCell.java:186)
 at 
 org.apache.hadoop.hbase.CellUtil.estimatedHeapSizeOf(CellUtil.java:568)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.heapSizeChange(DefaultMemStore.java:1024)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.internalAdd(DefaultMemStore.java:259)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.upsert(DefaultMemStore.java:567)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.upsert(DefaultMemStore.java:541)
 at 
 org.apache.hadoop.hbase.regionserver.HStore.upsert(HStore.java:2154)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:7127)
 at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:504)
 at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2020)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31967)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2106)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
 at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$2.run(RpcExecutor.java:107)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 HBASE-11870 tried to be tricky when only the tags of a {{Cell}} need to be 
 altered in the write-pipeline by creating a {{TagRewriteCell}} which avoided 
 copying all components of the original {{Cell}}. In an attempt to help free 
 the tags on the old cell that we wouldn't be referencing anymore, 
 {{TagRewriteCell}} nulls out the original {{byte[] tags}}.
 This causes a problem in the implementation of {{heapSize()}}, as it calls 
 {{getTagsLength()}} on the original {{Cell}} instead of on {{this}}. 
 Because the tags on the passed-in {{Cell}} (which was also a 
 {{TagRewriteCell}}) were null'ed out in the constructor, this results in an 
 NPE because the byte array is null.
 I believe this isn't observed in normal, insecure deployments because there 
 is only one RegionObserver/Coprocessor loaded that gets invoked via 
 {{postMutationBeforeWAL}}. When there is only one RegionObserver, the 
 TagRewriteCell isn't passed another TagRewriteCell, but instead a cell from 
 the wire/protobuf. This means that the optimization isn't performed. When we 
 have two (or more) observers that a TagRewriteCell passes through (and a new 
 TagRewriteCell is created and the old TagRewriteCell's tags array is nulled), 
 this enables the NPE described above.
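The two-observer failure mode can be reproduced with a toy wrapper (simplified, hypothetical names; the real TagRewriteCell delegates many more Cell methods): the bug only fires once a wrapper wraps another wrapper and nulls the intermediate tags array.

```java
// Toy reproduction of the NPE pattern described above, with hypothetical
// names; not the actual HBase TagRewriteCell.
final class TagCell {
    byte[] tags;
    TagCell wrapped;  // non-null when this cell rewrites another cell's tags

    TagCell(byte[] tags) {
        this.tags = tags;  // e.g. a cell decoded off the wire
    }

    // Rewriting constructor: as in HBASE-11870, free the intermediate
    // copy of the tags when wrapping another rewrite wrapper.
    TagCell(TagCell cell, byte[] newTags) {
        this.wrapped = cell;
        this.tags = newTags;
        if (cell.wrapped != null) {
            cell.tags = null;  // the "help GC" step that sets up the NPE
        }
    }

    // Buggy version: reads the length from the wrapped cell.
    int tagsLengthBuggy() {
        return wrapped.tags.length;  // NPE once two wrappers stack up
    }

    // Fixed version: reads this cell's own tags.
    int tagsLength() {
        return tags.length;
    }

    public static void main(String[] args) {
        TagCell wire = new TagCell(new byte[]{1, 2});      // cell from protobuf
        TagCell first = new TagCell(wire, new byte[]{3});  // one observer: fine
        if (first.tagsLengthBuggy() != 2) throw new AssertionError();
        TagCell second = new TagCell(first, new byte[]{4, 5, 6});  // second observer
        boolean npe = false;
        try {
            second.tagsLengthBuggy();
        } catch (NullPointerException e) {
            npe = true;  // first.tags was nulled by the wrapping constructor
        }
        if (!npe) throw new AssertionError();
        if (second.tagsLength() != 3) throw new AssertionError();  // fix works
        System.out.println("ok");
    }
}
```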





[jira] [Commented] (HBASE-13499) AsyncRpcClient test cases failure in powerpc

2015-04-21 Thread sangamesh (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504419#comment-14504419
 ] 

sangamesh commented on HBASE-13499:
---

All AsyncIPC tests are passing after applying the attached patch.

Running org.apache.hadoop.hbase.ipc.TestAsyncIPC
Tests run: 24, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 4.534 sec - in 
org.apache.hadoop.hbase.ipc.TestAsyncIPC


 AsyncRpcClient test cases failure in powerpc
 

 Key: HBASE-13499
 URL: https://issues.apache.org/jira/browse/HBASE-13499
 Project: HBase
  Issue Type: Bug
  Components: IPC/RPC
Affects Versions: 2.0.0, 1.1.0, 1.2.0
Reporter: sangamesh
Assignee: zhangduo
 Fix For: 2.0.0, 1.1.0, 1.2.0

 Attachments: HBASE-13499.patch


 The new AsyncRpcClient feature added through the jira defect HBASE-12684 
 is causing some test case failures in a powerpc64 environment.
 I am testing it in the master branch.
 Looks like this version of netty (4.0.23) doesn't provide support for 
 non-amd64 platforms, and the suggestion there is to use the pure-Java netty 
 transport. Here is the discussion on that: https://github.com/aphyr/riemann/pull/508
 So the new Async test cases will fail on ppc64 and other non-amd64 platforms too.
 Here is the output of the error.
 Running org.apache.hadoop.hbase.ipc.TestAsyncIPC
 Tests run: 24, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 2.802 sec 
  FAILURE! - in org.apache.hadoop.hbase.ipc.TestAsyncIPC
 testRTEDuringAsyncConnectionSetup[3](org.apache.hadoop.hbase.ipc.TestAsyncIPC)
   Time elapsed: 0.048 sec   ERROR!
 java.lang.UnsatisfiedLinkError: 
 /tmp/libnetty-transport-native-epoll4286512618055650929.so: 
 /tmp/libnetty-transport-native-epoll4286512618055650929.so: cannot open 
 shared object file: No such file or directory (Possible cause: can't load AMD 
 64-bit .so on a Power PC 64-bit platform)
   at java.lang.ClassLoader$NativeLibrary.load(Native Method)
   at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1965)





[jira] [Commented] (HBASE-13521) Create metric for get requests

2015-04-21 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504482#comment-14504482
 ] 

Elliott Clark commented on HBASE-13521:
---

There's already a gets metric (taken from /jmx on a regionserver):
{code}
Get_num_ops: 673,
Get_min: 0,
Get_max: 9,
Get_mean: 0.22585438335809807,
Get_median: 0,
Get_75th_percentile: 0,
Get_95th_percentile: 1,
Get_99th_percentile: 1,
{code}

Get_num_ops is how many gets have been received; the 
min/max/mean/median and 75th/95th/99th percentiles refer to the time taken to 
respond to a get request.

These metrics are available per server and per region.

 Create metric for get requests
 --

 Key: HBASE-13521
 URL: https://issues.apache.org/jira/browse/HBASE-13521
 Project: HBase
  Issue Type: Improvement
  Components: metrics, regionserver
Affects Versions: 1.0.0
Reporter: cuijianwei
Priority: Minor

 Currently, the readRequestsCount records the request count for both get 
 requests (random reads) and scan#next requests (sequential reads). However, the 
 costs of a get request and a scan#next request are different, and usually the 
 get request is much heavier than the scan#next. Is it reasonable to create a 
 metric getRequestsCount to record the get request count specifically? Then 
 we can trigger an alert if getRequestsCount grows too fast, because a large 
 number of random read requests will cause cluster overload more easily. (The 
 readRequestsCount will easily grow fast if there is a scan; however, this may 
 not cause system overload because sequential reads are much faster.) 
 Discussions and suggestions are welcomed! Thanks.





[jira] [Updated] (HBASE-13496) Make Bytes$LexicographicalComparerHolder$UnsafeComparer::compareTo inlineable

2015-04-21 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-13496:
---
Attachment: HBASE-13496.patch

Pls note the movement of statements to other private methods.  Actually we have 
to reverseBytes (for the littleEndian case) only if we hit a non-equal case (only 
while comparing, so inside the if).  Not a big deal, but it still avoids 2 ops.
Also a multiply on the loop variable is avoided.
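The two micro-optimizations described can be sketched without Unsafe (a simplified, hypothetical stand-in for the patched comparator): the loop index advances by 8 instead of being multiplied each iteration, and Long.reverseBytes runs only on the mismatch path.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

// Hypothetical, Unsafe-free stand-in for the patched comparator: reads
// 8 bytes at a time in native order and only byte-swaps on a mismatch.
final class WordCompare {
    private static final boolean LITTLE_ENDIAN =
        ByteOrder.nativeOrder() == ByteOrder.LITTLE_ENDIAN;

    static int compare(byte[] a, byte[] b) {
        int minLen = Math.min(a.length, b.length);
        int wordEnd = minLen - (minLen % Long.BYTES);
        ByteBuffer wa = ByteBuffer.wrap(a).order(ByteOrder.nativeOrder());
        ByteBuffer wb = ByteBuffer.wrap(b).order(ByteOrder.nativeOrder());
        // Index advances by 8 directly: no per-iteration multiply to turn
        // a word index into a byte offset.
        for (int i = 0; i < wordEnd; i += Long.BYTES) {
            long la = wa.getLong(i);
            long lb = wb.getLong(i);
            if (la != lb) {
                // reverseBytes only on the non-equal path: words that
                // match never pay for the swap.
                if (LITTLE_ENDIAN) {
                    la = Long.reverseBytes(la);
                    lb = Long.reverseBytes(lb);
                }
                return Long.compareUnsigned(la, lb) < 0 ? -1 : 1;
            }
        }
        // Tail: fewer than 8 bytes left, compare unsigned byte by byte.
        for (int i = wordEnd; i < minLen; i++) {
            int diff = (a[i] & 0xff) - (b[i] & 0xff);
            if (diff != 0) {
                return diff;
            }
        }
        return a.length - b.length;
    }

    public static void main(String[] args) {
        byte[] x = {0, 0, 0, 1, 0, 0, 0, 0, 5};
        byte[] y = {0, 0, 0, 2, 0, 0, 0, 0, 5};
        if (compare(x, y) >= 0) throw new AssertionError();
        if (compare(y, x) <= 0) throw new AssertionError();
        if (compare(x, x) != 0) throw new AssertionError();
        if (compare(new byte[]{1}, new byte[]{1, 2}) >= 0) throw new AssertionError();
        System.out.println("ok");
    }
}
```

On a big-endian JVM nativeOrder() is already big-endian, so the swap branch is skipped entirely.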

 Make Bytes$LexicographicalComparerHolder$UnsafeComparer::compareTo inlineable
 -

 Key: HBASE-13496
 URL: https://issues.apache.org/jira/browse/HBASE-13496
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 1.2.0

 Attachments: HBASE-13496.patch


 While testing with some other perf comparisons I have noticed that the above 
 method (which is very hot in the read path) is not getting inlined
 bq.@ 16   
 org.apache.hadoop.hbase.util.Bytes$LexicographicalComparerHolder$UnsafeComparer::compareTo
  (364 bytes)   hot method too big
 We can do minor refactoring to make it inlineable.





[jira] [Commented] (HBASE-13520) NullPointerException in TagRewriteCell

2015-04-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504440#comment-14504440
 ] 

Hadoop QA commented on HBASE-13520:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12726788/HBASE-13520-v1.patch
  against master branch at commit eb82b8b3098d6a9ac62aa50189f9d4b289f38472.
  ATTACHMENT ID: 12726788

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
 

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13750//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13750//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13750//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13750//console

This message is automatically generated.

 NullPointerException in TagRewriteCell
 --

 Key: HBASE-13520
 URL: https://issues.apache.org/jira/browse/HBASE-13520
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: 2.0.0, 1.1.0, 1.0.2

 Attachments: HBASE-13520-v1.patch, HBASE-13520.patch


 Found via running {{IntegrationTestIngestWithVisibilityLabels}} with Kerberos 
 enabled.
 {noformat}
 2015-04-20 18:54:36,712 ERROR 
 [B.defaultRpcServer.handler=17,queue=2,port=16020] ipc.RpcServer: Unexpected 
 throwable object
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.TagRewriteCell.getTagsLength(TagRewriteCell.java:157)
 at 
 org.apache.hadoop.hbase.TagRewriteCell.heapSize(TagRewriteCell.java:186)
 at 
 org.apache.hadoop.hbase.CellUtil.estimatedHeapSizeOf(CellUtil.java:568)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.heapSizeChange(DefaultMemStore.java:1024)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.internalAdd(DefaultMemStore.java:259)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.upsert(DefaultMemStore.java:567)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.upsert(DefaultMemStore.java:541)
 at 
 org.apache.hadoop.hbase.regionserver.HStore.upsert(HStore.java:2154)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:7127)
 at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:504)
 at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2020)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31967)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2106)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
 at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$2.run(RpcExecutor.java:107)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 HBASE-11870 tried to be tricky when only the tags of a {{Cell}} need to be 
 altered in the write-pipeline by creating a {{TagRewriteCell}} which avoided 
 copying all components of the original {{Cell}}. In an attempt to help free 
 the tags on the old cell that we wouldn't be referencing anymore, 
 {{TagRewriteCell}} nulls out the original {{byte[] tags}}.
 This causes a problem in the implementation of {{heapSize()}}, as it calls 
 {{getTagsLength()}} on the original 

[jira] [Commented] (HBASE-13520) NullPointerException in TagRewriteCell

2015-04-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504400#comment-14504400
 ] 

Anoop Sam John commented on HBASE-13520:


Ya, me too, in 2 minds.
Actually getTagsArray should never get called on the member 'cell'.  In the 
constructor we can add an assert for tags != null.  That will be better.

 NullPointerException in TagRewriteCell
 --

 Key: HBASE-13520
 URL: https://issues.apache.org/jira/browse/HBASE-13520
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: 2.0.0, 1.1.0, 1.0.2

 Attachments: HBASE-13520-v1.patch, HBASE-13520.patch


 Found via running {{IntegrationTestIngestWithVisibilityLabels}} with Kerberos 
 enabled.
 {noformat}
 2015-04-20 18:54:36,712 ERROR 
 [B.defaultRpcServer.handler=17,queue=2,port=16020] ipc.RpcServer: Unexpected 
 throwable object
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.TagRewriteCell.getTagsLength(TagRewriteCell.java:157)
 at 
 org.apache.hadoop.hbase.TagRewriteCell.heapSize(TagRewriteCell.java:186)
 at 
 org.apache.hadoop.hbase.CellUtil.estimatedHeapSizeOf(CellUtil.java:568)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.heapSizeChange(DefaultMemStore.java:1024)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.internalAdd(DefaultMemStore.java:259)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.upsert(DefaultMemStore.java:567)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.upsert(DefaultMemStore.java:541)
 at 
 org.apache.hadoop.hbase.regionserver.HStore.upsert(HStore.java:2154)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:7127)
 at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:504)
 at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2020)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31967)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2106)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
 at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$2.run(RpcExecutor.java:107)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 HBASE-11870 tried to be tricky when only the tags of a {{Cell}} need to be 
 altered in the write-pipeline by creating a {{TagRewriteCell}} which avoided 
 copying all components of the original {{Cell}}. In an attempt to help free 
 the tags on the old cell that we wouldn't be referencing anymore, 
 {{TagRewriteCell}} nulls out the original {{byte[] tags}}.
 This causes a problem in the implementation of {{heapSize()}}, as it calls 
 {{getTagsLength()}} on the original {{Cell}} instead of on {{this}}. 
 Because the tags on the passed-in {{Cell}} (which was also a 
 {{TagRewriteCell}}) were null'ed out in the constructor, this results in an 
 NPE because the byte array is null.
 I believe this isn't observed in normal, insecure deployments because there 
 is only one RegionObserver/Coprocessor loaded that gets invoked via 
 {{postMutationBeforeWAL}}. When there is only one RegionObserver, the 
 TagRewriteCell isn't passed another TagRewriteCell, but instead a cell from 
 the wire/protobuf. This means that the optimization isn't performed. When we 
 have two (or more) observers that a TagRewriteCell passes through (and a new 
 TagRewriteCell is created and the old TagRewriteCell's tags array is nulled), 
 this enables the NPE described above.





[jira] [Commented] (HBASE-13482) Phoenix is failing to scan tables on secure environments.

2015-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504477#comment-14504477
 ] 

Hudson commented on HBASE-13482:


FAILURE: Integrated in HBase-0.98-on-Hadoop-1.1 #908 (See 
[https://builds.apache.org/job/HBase-0.98-on-Hadoop-1.1/908/])
HBASE-13482. Phoenix is failing to scan tables on secure environments. (Alicia 
Shu) (apurtell: rev 50010ca31ed0587e3bf112a5789ec42185a9b939)
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/access/AccessController.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/ipc/RpcServer.java
* 
hbase-server/src/main/java/org/apache/hadoop/hbase/security/visibility/VisibilityController.java


 Phoenix is failing to scan tables on secure environments. 
 --

 Key: HBASE-13482
 URL: https://issues.apache.org/jira/browse/HBASE-13482
 Project: HBase
  Issue Type: Bug
Reporter: Alicia Ying Shu
Assignee: Alicia Ying Shu
 Fix For: 1.1.0, 0.98.13

 Attachments: Hbase-13482-v1.patch, Hbase-13482.patch


 When executed on secure environments, phoenix query is getting the following 
 exception message:
 java.util.concurrent.ExecutionException: 
 org.apache.hadoop.hbase.security.AccessDeniedException: 
 org.apache.hadoop.hbase.security.AccessDeniedException: User 'null' is not 
 the scanner owner! 
 org.apache.hadoop.hbase.security.access.AccessController.requireScannerOwner(AccessController.java:2048)
 org.apache.hadoop.hbase.security.access.AccessController.preScannerNext(AccessController.java:2022)
 org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$53.call(RegionCoprocessorHost.java:1336)
 org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost$RegionOperation.call(RegionCoprocessorHost.java:1671)
 org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperation(RegionCoprocessorHost.java:1746)
 org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.execOperationWithResult(RegionCoprocessorHost.java:1720)
 org.apache.hadoop.hbase.regionserver.RegionCoprocessorHost.preScannerNext(RegionCoprocessorHost.java:1331)
 org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2227)





[jira] [Created] (HBASE-13521) Create metric for get requests

2015-04-21 Thread cuijianwei (JIRA)
cuijianwei created HBASE-13521:
--

 Summary: Create metric for get requests
 Key: HBASE-13521
 URL: https://issues.apache.org/jira/browse/HBASE-13521
 Project: HBase
  Issue Type: Improvement
  Components: metrics, regionserver
Affects Versions: 1.0.0
Reporter: cuijianwei
Priority: Minor


Currently, the readRequestsCount records the request count for both get 
requests (random reads) and scan#next requests (sequential reads). However, the 
costs of a get request and a scan#next request are different, and usually the get 
request is much heavier than the scan#next. Is it reasonable to create a metric 
getRequestsCount to record the get request count specifically? Then we can 
trigger an alert if getRequestsCount grows too fast, because a large number of 
random read requests will cause cluster overload more easily. (The 
readRequestsCount will easily grow fast if there is a scan; however, this may 
not cause system overload because sequential reads are much faster.) 
Discussions and suggestions are welcomed! Thanks.





[jira] [Commented] (HBASE-13499) AsyncRpcClient test cases failure in powerpc

2015-04-21 Thread zhangduo (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13499?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504426#comment-14504426
 ] 

zhangduo commented on HBASE-13499:
--

Good. Will commit it tomorrow if no objection.

 AsyncRpcClient test cases failure in powerpc
 

 Key: HBASE-13499
 URL: https://issues.apache.org/jira/browse/HBASE-13499
 Project: HBase
  Issue Type: Bug
  Components: IPC/RPC
Affects Versions: 2.0.0, 1.1.0, 1.2.0
Reporter: sangamesh
Assignee: zhangduo
 Fix For: 2.0.0, 1.1.0, 1.2.0

 Attachments: HBASE-13499.patch


 The new AsyncRpcClient feature added through the jira defect HBASE-12684 
 is causing some test case failures in a powerpc64 environment.
 I am testing it in the master branch.
 Looks like this version of netty (4.0.23) doesn't provide support for 
 non-amd64 platforms, and the suggestion there is to use the pure-Java netty 
 transport. Here is the discussion on that: https://github.com/aphyr/riemann/pull/508
 So the new Async test cases will fail on ppc64 and other non-amd64 platforms too.
 Here is the output of the error.
 Running org.apache.hadoop.hbase.ipc.TestAsyncIPC
 Tests run: 24, Failures: 0, Errors: 6, Skipped: 0, Time elapsed: 2.802 sec 
  FAILURE! - in org.apache.hadoop.hbase.ipc.TestAsyncIPC
 testRTEDuringAsyncConnectionSetup[3](org.apache.hadoop.hbase.ipc.TestAsyncIPC)
   Time elapsed: 0.048 sec   ERROR!
 java.lang.UnsatisfiedLinkError: 
 /tmp/libnetty-transport-native-epoll4286512618055650929.so: 
 /tmp/libnetty-transport-native-epoll4286512618055650929.so: cannot open 
 shared object file: No such file or directory (Possible cause: can't load AMD 
 64-bit .so on a Power PC 64-bit platform)
   at java.lang.ClassLoader$NativeLibrary.load(Native Method)
   at java.lang.ClassLoader.loadLibrary1(ClassLoader.java:1965)





[jira] [Commented] (HBASE-13501) Deprecate/Remove getComparator() in HRegionInfo.

2015-04-21 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504458#comment-14504458
 ] 

ramkrishna.s.vasudevan commented on HBASE-13501:


bq.How is it used internally? To find the 'name' or region id?
The closeRegion API taking the HRegionInfo is internally called after the 
HRegionInfo is fetched from the MetaTableAccessor 
{code}
Pair<HRegionInfo, ServerName> pair = MetaTableAccessor.getRegion(connection, 
regionname);
{code}
The actual HBaseAdmin impl for the closeRegion taking HRegionInfo is this
{code}
AdminService.BlockingInterface admin = this.connection.getAdmin(sn);
// Close the region without updating zk state.
ProtobufUtil.closeRegion(admin, sn, hri.getRegionName());
{code}

 Deprecate/Remove getComparator() in HRegionInfo.
 

 Key: HBASE-13501
 URL: https://issues.apache.org/jira/browse/HBASE-13501
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0








[jira] [Commented] (HBASE-13470) High level Integration test for master DDL operations

2015-04-21 Thread Matteo Bertozzi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14504495#comment-14504495
 ] 

Matteo Bertozzi commented on HBASE-13470:
-

looks great, thanks [~fengs]!
can you put it on reviewboard so it is easier to review?

from a quick look there are just two things I noticed.
 - you consider every operation able to complete successfully, but this is more 
or less true only in the case of delete/disable (delete at least can fail if 
waiting for regions in transition takes longer than the timeout). anyway, what 
I'm saying is that you may get false positives if any operation hits an 
exception, e.g. hdfs/zk not responding or similar. in theory you can just put a 
for loop around the admin.operation() and retry N times, if you get an 
exception, before asserting.
 - the assert in the modify/add-family operations doesn't look right. you fetch 
the descriptor in the beginning, you modify it and assert on that descriptor, 
not on a new one fetched from the master.
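The retry idea above can be sketched like this (hypothetical helper, not part of the actual patch): wrap the admin.operation() call and only let the test assert a failure after N consecutive exceptions.

```java
import java.util.concurrent.Callable;

// Hypothetical retry helper for a DDL integration test: retries the
// operation a few times so a transient hdfs/zk hiccup doesn't turn into
// a false test failure.
final class RetryingDdl {

    static <T> T withRetries(Callable<T> op, int maxAttempts) throws Exception {
        Exception last = new IllegalArgumentException("maxAttempts must be >= 1");
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return op.call();
            } catch (Exception e) {
                last = e;  // possibly transient: try again
            }
        }
        throw last;  // N genuine failures: let the test assert/fail now
    }

    public static void main(String[] args) throws Exception {
        int[] attempts = {0};
        // Simulated flaky DDL call: fails twice, then succeeds.
        String result = withRetries(() -> {
            attempts[0]++;
            if (attempts[0] < 3) {
                throw new RuntimeException("transient zk hiccup");
            }
            return "DONE";
        }, 5);
        if (!"DONE".equals(result)) throw new AssertionError();
        if (attempts[0] != 3) throw new AssertionError();
        System.out.println("ok");
    }
}
```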

 High level Integration test for master DDL operations
 -

 Key: HBASE-13470
 URL: https://issues.apache.org/jira/browse/HBASE-13470
 Project: HBase
  Issue Type: Sub-task
  Components: master
Reporter: Enis Soztutar
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-13470-v0.patch


 Our [~fengs] has an integration test which executes DDL operations with a new 
 monkey to kill the active master as a high level test for the proc v2 
 changes. 
 The test does random DDL operations from 20 client threads. The DDL 
 statements are create / delete / modify / enable / disable table and CF 
 operations. It runs HBCK to verify the end state. 
 The test can be run on a single master, or multi master setup. 





[jira] [Created] (HBASE-13524) TestReplicationAdmin fails on JDK 1.8

2015-04-21 Thread Elliott Clark (JIRA)
Elliott Clark created HBASE-13524:
-

 Summary: TestReplicationAdmin fails on JDK 1.8
 Key: HBASE-13524
 URL: https://issues.apache.org/jira/browse/HBASE-13524
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark


{code}
---
 T E S T S
---
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; 
support was removed in 8.0
Running org.apache.hadoop.hbase.client.replication.TestReplicationAdmin
Tests run: 5, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 2.419 sec  
FAILURE! - in org.apache.hadoop.hbase.client.replication.TestReplicationAdmin
testAppendPeerTableCFs(org.apache.hadoop.hbase.client.replication.TestReplicationAdmin)
  Time elapsed: 0.037 sec   FAILURE!
org.junit.ComparisonFailure: expected:<t[2;t1]> but was:<t[1;t2]>
at org.junit.Assert.assertEquals(Assert.java:115)
at org.junit.Assert.assertEquals(Assert.java:144)
at 
org.apache.hadoop.hbase.client.replication.TestReplicationAdmin.testAppendPeerTableCFs(TestReplicationAdmin.java:170)

testEnableDisable(org.apache.hadoop.hbase.client.replication.TestReplicationAdmin)
  Time elapsed: 0.031 sec   ERROR!
java.lang.IllegalArgumentException: Cannot add a peer with id=1 because that id 
already exists.
at 
org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.addPeer(ReplicationPeersZKImpl.java:112)
at 
org.apache.hadoop.hbase.client.replication.ReplicationAdmin.addPeer(ReplicationAdmin.java:202)
at 
org.apache.hadoop.hbase.client.replication.ReplicationAdmin.addPeer(ReplicationAdmin.java:181)
at 
org.apache.hadoop.hbase.client.replication.TestReplicationAdmin.testEnableDisable(TestReplicationAdmin.java:115)

{code}





[jira] [Commented] (HBASE-13339) Update default Hadoop version to 2.6.0

2015-04-21 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14505415#comment-14505415
 ] 

Elliott Clark commented on HBASE-13339:
---

Re-kicking the issue:

So we've been testing branch-1.1 here, and without Hadoop 2.6.0 branch-1 can't 
stay stable for any length of time.
https://issues.apache.org/jira/browse/HDFS-7005 is just too awful. Should we 
ship 2.6.0 on branch-1.1 and 2.7.X on master?

 Update default Hadoop version to 2.6.0
 --

 Key: HBASE-13339
 URL: https://issues.apache.org/jira/browse/HBASE-13339
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0, 1.1.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Attachments: HBASE-13339.patch


 Current default Hadoop version is getting a little long in the tooth. We 
 should update to the latest version. The latest version is backwards 
 compatible with 2.5.1's dfs and mr so this should be painless.





[jira] [Updated] (HBASE-13524) TestReplicationAdmin fails on JDK 1.8

2015-04-21 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-13524:
--
Affects Version/s: 1.1.0
   2.0.0

 TestReplicationAdmin fails on JDK 1.8
 -

 Key: HBASE-13524
 URL: https://issues.apache.org/jira/browse/HBASE-13524
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Attachments: HBASE-13524.patch


 {code}
 ---
  T E S T S
 ---
 Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; 
 support was removed in 8.0
 Running org.apache.hadoop.hbase.client.replication.TestReplicationAdmin
 Tests run: 5, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 2.419 sec <<< 
 FAILURE! - in org.apache.hadoop.hbase.client.replication.TestReplicationAdmin
 testAppendPeerTableCFs(org.apache.hadoop.hbase.client.replication.TestReplicationAdmin)
   Time elapsed: 0.037 sec  <<< FAILURE!
 org.junit.ComparisonFailure: expected:<t[2;t1]> but was:<t[1;t2]>
   at org.junit.Assert.assertEquals(Assert.java:115)
   at org.junit.Assert.assertEquals(Assert.java:144)
   at 
 org.apache.hadoop.hbase.client.replication.TestReplicationAdmin.testAppendPeerTableCFs(TestReplicationAdmin.java:170)
 testEnableDisable(org.apache.hadoop.hbase.client.replication.TestReplicationAdmin)
   Time elapsed: 0.031 sec  <<< ERROR!
 java.lang.IllegalArgumentException: Cannot add a peer with id=1 because that 
 id already exists.
   at 
 org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.addPeer(ReplicationPeersZKImpl.java:112)
   at 
 org.apache.hadoop.hbase.client.replication.ReplicationAdmin.addPeer(ReplicationAdmin.java:202)
   at 
 org.apache.hadoop.hbase.client.replication.ReplicationAdmin.addPeer(ReplicationAdmin.java:181)
   at 
 org.apache.hadoop.hbase.client.replication.TestReplicationAdmin.testEnableDisable(TestReplicationAdmin.java:115)
 {code}





[jira] [Updated] (HBASE-13524) TestReplicationAdmin fails on JDK 1.8

2015-04-21 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-13524:
--
Attachment: HBASE-13524.patch

 TestReplicationAdmin fails on JDK 1.8
 -

 Key: HBASE-13524
 URL: https://issues.apache.org/jira/browse/HBASE-13524
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark
Assignee: Elliott Clark
 Attachments: HBASE-13524.patch


 {code}
 ---
  T E S T S
 ---
 Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; 
 support was removed in 8.0
 Running org.apache.hadoop.hbase.client.replication.TestReplicationAdmin
 Tests run: 5, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 2.419 sec <<< 
 FAILURE! - in org.apache.hadoop.hbase.client.replication.TestReplicationAdmin
 testAppendPeerTableCFs(org.apache.hadoop.hbase.client.replication.TestReplicationAdmin)
   Time elapsed: 0.037 sec  <<< FAILURE!
 org.junit.ComparisonFailure: expected:<t[2;t1]> but was:<t[1;t2]>
   at org.junit.Assert.assertEquals(Assert.java:115)
   at org.junit.Assert.assertEquals(Assert.java:144)
   at 
 org.apache.hadoop.hbase.client.replication.TestReplicationAdmin.testAppendPeerTableCFs(TestReplicationAdmin.java:170)
 testEnableDisable(org.apache.hadoop.hbase.client.replication.TestReplicationAdmin)
   Time elapsed: 0.031 sec  <<< ERROR!
 java.lang.IllegalArgumentException: Cannot add a peer with id=1 because that 
 id already exists.
   at 
 org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.addPeer(ReplicationPeersZKImpl.java:112)
   at 
 org.apache.hadoop.hbase.client.replication.ReplicationAdmin.addPeer(ReplicationAdmin.java:202)
   at 
 org.apache.hadoop.hbase.client.replication.ReplicationAdmin.addPeer(ReplicationAdmin.java:181)
   at 
 org.apache.hadoop.hbase.client.replication.TestReplicationAdmin.testEnableDisable(TestReplicationAdmin.java:115)
 {code}





[jira] [Updated] (HBASE-13496) Make Bytes$LexicographicalComparerHolder$UnsafeComparer::compareTo inlineable

2015-04-21 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13496?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-13496:
---
Attachment: OffheapVsOnHeapCompareTest.java
ByteBufferUtils.java

[~stack]
OffheapVsOnHeapCompareTest is the micro test I was running. Basically I was 
testing byte[] compare vs BB compare, both using Unsafe APIs. As that result is 
not relevant to this Jira, I am not going into it here.

Then I saw the inlining problem, and with the patch I evaluated the on-heap 
compare time.
Avg of 25 runs - comparing the 2 arrays 10 million times
Without patch: 321814965.3 (nano secs)
With patch: 211087522.8
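For reference, a self-contained sketch of this kind of micro test (a plain Java compare loop, no Unsafe; all names here are mine, not the attached test's). Running it with `-XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining` shows whether the JIT inlines the compare; a hot method whose bytecode exceeds the `-XX:FreqInlineSize` limit (325 by default, I believe) is reported as "hot method too big", as with the 364-byte method in this issue.

```java
public class CompareBench {
    // Simplified lexicographic byte[] compare, standing in for
    // Bytes$LexicographicalComparerHolder; illustrative only.
    static int compareTo(byte[] l, byte[] r) {
        int len = Math.min(l.length, r.length);
        for (int i = 0; i < len; i++) {
            int d = (l[i] & 0xff) - (r[i] & 0xff);
            if (d != 0) return d;
        }
        return l.length - r.length;
    }

    public static void main(String[] args) {
        byte[] a = new byte[64];
        byte[] b = new byte[64];
        int acc = 0;
        long start = System.nanoTime();
        for (int i = 0; i < 10_000_000; i++) {
            acc |= compareTo(a, b); // identical arrays -> 0
        }
        long nanos = System.nanoTime() - start;
        // acc stays 0; nanos is the figure the benchmark averages over runs.
        System.out.println(acc);
    }
}
```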

 Make Bytes$LexicographicalComparerHolder$UnsafeComparer::compareTo inlineable
 -

 Key: HBASE-13496
 URL: https://issues.apache.org/jira/browse/HBASE-13496
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 1.2.0

 Attachments: ByteBufferUtils.java, HBASE-13496.patch, 
 OffheapVsOnHeapCompareTest.java


 While testing with some other perf comparisons I have noticed that the above 
 method (which is very hot in read path) is not getting inline
 bq.@ 16   
 org.apache.hadoop.hbase.util.Bytes$LexicographicalComparerHolder$UnsafeComparer::compareTo
  (364 bytes)   hot method too big
 We can do minor refactoring to make it inlineable.





[jira] [Commented] (HBASE-13387) Add ByteBufferedCell an extension to Cell

2015-04-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505461#comment-14505461
 ] 

stack commented on HBASE-13387:
---

Reread the doc. It is a good working summary of what we've learned so far doing 
our back and forth. (Can we throw Unsupported exceptions if the wrong API is used?)

Anyone else up for a read of this direction doc?

 Add ByteBufferedCell an extension to Cell
 -

 Key: HBASE-13387
 URL: https://issues.apache.org/jira/browse/HBASE-13387
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: ByteBufferedCell.docx, WIP_HBASE-13387_V2.patch, 
 WIP_ServerCell.patch


 This came in btw the discussion abt the parent Jira and recently Stack added 
 as a comment on the E2E patch on the parent Jira.
 The idea is to add a new Interface 'ByteBufferedCell'  in which we can add 
 new buffer based getter APIs and getters for position in components in BB.  
 We will keep this interface @InterfaceAudience.Private.   When the Cell is 
 backed by a DBB, we can create an Object implementing this new interface.
 The Comparators has to be aware abt this new Cell extension and has to use 
 the BB based APIs rather than getXXXArray().  Also give util APIs in CellUtil 
 to abstract the checks for new Cell type.  (Like matchingXXX APIs, 
 getValueAstype APIs etc)
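A minimal sketch of the idea in the description, with illustrative names only (the real interface and its getters are still under design in this Jira):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class ByteBufferedCellSketch {
    // Stand-in interfaces; the real HBase Cell API is much larger.
    interface Cell { int getValueLength(); }
    interface ByteBufferedCell extends Cell {
        ByteBuffer getValueByteBuffer();   // buffer-based getter
        int getValuePosition();            // position of the component in the BB
    }

    // A cell backed by a direct ByteBuffer (DBB), as the description proposes.
    static class DBBCell implements ByteBufferedCell {
        private final ByteBuffer buf;
        DBBCell(byte[] value) {
            ByteBuffer b = ByteBuffer.allocateDirect(value.length);
            b.put(value).flip();
            this.buf = b;
        }
        public ByteBuffer getValueByteBuffer() { return buf; }
        public int getValuePosition() { return 0; }
        public int getValueLength() { return buf.remaining(); }
    }

    public static void main(String[] args) {
        ByteBufferedCell c = new DBBCell("v1".getBytes(StandardCharsets.UTF_8));
        byte[] out = new byte[c.getValueLength()];
        // Comparators would read via the BB APIs instead of getXXXArray().
        c.getValueByteBuffer().duplicate().get(out);
        System.out.println(new String(out, StandardCharsets.UTF_8));
    }
}
```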





[jira] [Commented] (HBASE-13387) Add ByteBufferedCell an extension to Cell

2015-04-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505457#comment-14505457
 ] 

Anoop Sam John commented on HBASE-13387:


Oh, sorry about the formatting ugliness, Stack.
I am just waiting for HBASE-10800 to go in, as I have to know what will finally 
be in the CellComparator.
We would also need some changes to Tag (similar to Cell), but that is for 
another Jira, I would say.

 Add ByteBufferedCell an extension to Cell
 -

 Key: HBASE-13387
 URL: https://issues.apache.org/jira/browse/HBASE-13387
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: ByteBufferedCell.docx, WIP_HBASE-13387_V2.patch, 
 WIP_ServerCell.patch


 This came in btw the discussion abt the parent Jira and recently Stack added 
 as a comment on the E2E patch on the parent Jira.
 The idea is to add a new Interface 'ByteBufferedCell'  in which we can add 
 new buffer based getter APIs and getters for position in components in BB.  
 We will keep this interface @InterfaceAudience.Private.   When the Cell is 
 backed by a DBB, we can create an Object implementing this new interface.
 The Comparators has to be aware abt this new Cell extension and has to use 
 the BB based APIs rather than getXXXArray().  Also give util APIs in CellUtil 
 to abstract the checks for new Cell type.  (Like matchingXXX APIs, 
 getValueAstype APIs etc)





[jira] [Updated] (HBASE-13515) Handle FileNotFoundException in region replica replay for flush/compaction events

2015-04-21 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-13515:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I have committed this. Thanks Devaraj for review. 
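The skip logic can be sketched as follows; the method name and path are hypothetical stand-ins, not the committed HBase code:

```java
import java.io.FileNotFoundException;

public class ReplaySkipSketch {
    // Hypothetical replay step that fails because the flushed/compacted file
    // can no longer be found from the secondary replica.
    static void replayFlushCommit(String storeFilePath) throws FileNotFoundException {
        throw new FileNotFoundException(storeFilePath);
    }

    public static void main(String[] args) {
        boolean replayed = true;
        try {
            replayFlushCommit("cf/0001");
        } catch (FileNotFoundException e) {
            // Skip the event marker instead of retrying forever, which would
            // block replication to the secondary.
            replayed = false;
        }
        System.out.println(replayed);
    }
}
```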

 Handle FileNotFoundException in region replica replay for flush/compaction 
 events
 -

 Key: HBASE-13515
 URL: https://issues.apache.org/jira/browse/HBASE-13515
 Project: HBase
  Issue Type: Sub-task
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0, 1.1.0, 1.2.0

 Attachments: hbase-13515_v1.patch, hbase-13515_v1.patch


 I had this patch laying around that somehow dropped from my plate. We should 
 skip replaying compaction / flush and region open event markers if the files 
 (from flush or compaction) can no longer be found from the secondary. If we 
 do not skip, the replay will be retried forever, effectively blocking the 
 replication further. 
 Bulk load already does this, we just need to do it for flush / compaction and 
 region open events as well. 





[jira] [Updated] (HBASE-13515) Handle FileNotFoundException in region replica replay for flush/compaction events

2015-04-21 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13515?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-13515:
--
Fix Version/s: 1.2.0

 Handle FileNotFoundException in region replica replay for flush/compaction 
 events
 -

 Key: HBASE-13515
 URL: https://issues.apache.org/jira/browse/HBASE-13515
 Project: HBase
  Issue Type: Sub-task
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0, 1.1.0, 1.2.0

 Attachments: hbase-13515_v1.patch, hbase-13515_v1.patch


 I had this patch laying around that somehow dropped from my plate. We should 
 skip replaying compaction / flush and region open event markers if the files 
 (from flush or compaction) can no longer be found from the secondary. If we 
 do not skip, the replay will be retried forever, effectively blocking the 
 replication further. 
 Bulk load already does this, we just need to do it for flush / compaction and 
 region open events as well. 





[jira] [Updated] (HBASE-13524) TestReplicationAdmin fails on JDK 1.8

2015-04-21 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-13524:
--
Status: Patch Available  (was: Open)

 TestReplicationAdmin fails on JDK 1.8
 -

 Key: HBASE-13524
 URL: https://issues.apache.org/jira/browse/HBASE-13524
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Attachments: HBASE-13524.patch


 {code}
 ---
  T E S T S
 ---
 Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; 
 support was removed in 8.0
 Running org.apache.hadoop.hbase.client.replication.TestReplicationAdmin
 Tests run: 5, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 2.419 sec <<< 
 FAILURE! - in org.apache.hadoop.hbase.client.replication.TestReplicationAdmin
 testAppendPeerTableCFs(org.apache.hadoop.hbase.client.replication.TestReplicationAdmin)
   Time elapsed: 0.037 sec  <<< FAILURE!
 org.junit.ComparisonFailure: expected:<t[2;t1]> but was:<t[1;t2]>
   at org.junit.Assert.assertEquals(Assert.java:115)
   at org.junit.Assert.assertEquals(Assert.java:144)
   at 
 org.apache.hadoop.hbase.client.replication.TestReplicationAdmin.testAppendPeerTableCFs(TestReplicationAdmin.java:170)
 testEnableDisable(org.apache.hadoop.hbase.client.replication.TestReplicationAdmin)
   Time elapsed: 0.031 sec  <<< ERROR!
 java.lang.IllegalArgumentException: Cannot add a peer with id=1 because that 
 id already exists.
   at 
 org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.addPeer(ReplicationPeersZKImpl.java:112)
   at 
 org.apache.hadoop.hbase.client.replication.ReplicationAdmin.addPeer(ReplicationAdmin.java:202)
   at 
 org.apache.hadoop.hbase.client.replication.ReplicationAdmin.addPeer(ReplicationAdmin.java:181)
   at 
 org.apache.hadoop.hbase.client.replication.TestReplicationAdmin.testEnableDisable(TestReplicationAdmin.java:115)
 {code}





[jira] [Commented] (HBASE-13339) Update default Hadoop version to 2.6.0

2015-04-21 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505465#comment-14505465
 ] 

Sean Busbey commented on HBASE-13339:
-

Our [versions of Hadoop table|http://hbase.apache.org/book.html#hadoop] 
doesn't currently list Hadoop 2.6.0 at all. Can we start by doing whatever work 
is needed to get it added to that table? And quantify the problems with running 
on earlier versions so we can mark them as not-recommended in that same table?

I've started updating the 0.98 Jenkins builds to use the matrix plugin. I can 
probably finish that this week. If we update the builds for 1.1 to similarly 
use the matrix plugin to handle Hadoop 2.4 - 2.6 for at least unit tests, I'd be 
comfortable moving our stated pom versions (but would want a check on how it 
changed our dependencies).

 Update default Hadoop version to 2.6.0
 --

 Key: HBASE-13339
 URL: https://issues.apache.org/jira/browse/HBASE-13339
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0, 1.1.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Attachments: HBASE-13339.patch


 Current default Hadoop version is getting a little long in the tooth. We 
 should update to the latest version. The latest version is backwards 
 compatible with 2.5.1's dfs and mr so this should be painless.





Can't start Hmaster due to zookeeper

2015-04-21 Thread Bo Fu
Hi,

I’m a beginner of HBase. I’m recently deploying HBase 1.0.0 onto Emulab using 
Hadoop 2.6.0
When I type bin/start-hbase.sh, Hbase and HRegionservers starts and then shut 
down. The master log are as follows:

2015-04-21 12:13:58,607 INFO  [main-SendThread(pc439.emulab.net:2181)] 
zookeeper.ClientCnxn: Opening socket connection to server 
pc439.emulab.net/155.98.38.39:2181. Will not attempt to authenticate using 
SASL (unknown error)
2015-04-21 12:13:58,608 INFO  [main-SendThread(pc439.emulab.net:2181)] 
zookeeper.ClientCnxn: Socket connection established to 
pc439.emulab.net/155.98.38.39:2181, initiating session
2015-04-21 12:13:58,609 INFO  [main-SendThread(pc439.emulab.net:2181)] 
zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, 
likely server has closed socket, closing socket connection and attempting 
reconnect
2015-04-21 12:13:59,513 INFO  [main-SendThread(pc440.emulab.net:2181)] 
zookeeper.ClientCnxn: Opening socket connection to server 
pc440.emulab.net/155.98.38.40:2181. Will not attempt to authenticate using 
SASL (unknown error)
2015-04-21 12:13:59,513 INFO  [main-SendThread(pc440.emulab.net:2181)] 
zookeeper.ClientCnxn: Socket connection established to 
pc440.emulab.net/155.98.38.40:2181, initiating session
2015-04-21 12:13:59,514 INFO  [main-SendThread(pc440.emulab.net:2181)] 
zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, 
likely server has closed socket, closing socket connection and attempting 
reconnect
2015-04-21 12:14:01,531 INFO  [main-SendThread(pc439.emulab.net:2181)] 
zookeeper.ClientCnxn: Opening socket connection to server 
pc439.emulab.net/155.98.38.39:2181. Will not attempt to authenticate using 
SASL (unknown error)
2015-04-21 12:14:01,531 INFO  [main-SendThread(pc439.emulab.net:2181)] 
zookeeper.ClientCnxn: Socket connection established to 
pc439.emulab.net/155.98.38.39:2181, initiating session
2015-04-21 12:14:01,532 INFO  [main-SendThread(pc439.emulab.net:2181)] 
zookeeper.ClientCnxn: Unable to read additional data from server sessionid 0x0, 
likely server has closed socket, closing socket connection and attempting 
reconnect
2015-04-21 12:14:01,633 WARN  [main] zookeeper.RecoverableZooKeeper: Possibly 
transient ZooKeeper, quorum=pc439.emulab.net:2181,pc440.emulab.net:2181, 
exception=org.apache.zookeeper.KeeperException$ConnectionLossException: 
KeeperErrorCode = ConnectionLoss for /hbase
2015-04-21 12:14:01,633 ERROR [main] zookeeper.RecoverableZooKeeper: ZooKeeper 
create failed after 4 attempts
2015-04-21 12:14:01,634 ERROR [main] master.HMasterCommandLine: Master exiting
java.lang.RuntimeException: Failed construction of Master: class 
org.apache.hadoop.hbase.master.HMaster
at 
org.apache.hadoop.hbase.master.HMaster.constructMaster(HMaster.java:1982)
at 
org.apache.hadoop.hbase.master.HMasterCommandLine.startMaster(HMasterCommandLine.java:198)
at 
org.apache.hadoop.hbase.master.HMasterCommandLine.run(HMasterCommandLine.java:139)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at 
org.apache.hadoop.hbase.util.ServerCommandLine.doMain(ServerCommandLine.java:126)
at org.apache.hadoop.hbase.master.HMaster.main(HMaster.java:1996)
Caused by: org.apache.zookeeper.KeeperException$ConnectionLossException: 
KeeperErrorCode = ConnectionLoss for /hbase
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
at 
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.createNonSequential(RecoverableZooKeeper.java:512)
at 
org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.create(RecoverableZooKeeper.java:491)
at 
org.apache.hadoop.hbase.zookeeper.ZKUtil.createWithParents(ZKUtil.java:1252)
at 
org.apache.hadoop.hbase.zookeeper.ZKUtil.createWithParents(ZKUtil.java:1230)
at 
org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.createBaseZNodes(ZooKeeperWatcher.java:174)
at 
org.apache.hadoop.hbase.zookeeper.ZooKeeperWatcher.init(ZooKeeperWatcher.java:167)

My hbase-site.xml is:
<configuration>
  <property>
    <name>hbase.master</name>
    <value>10.10.10.2:6</value>
  </property>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoopmaster:9000/hbase</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.dataDir</name>

[jira] [Commented] (HBASE-13502) Deprecate/remove getRowComparator() in TableName

2015-04-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505495#comment-14505495
 ] 

Anoop Sam John commented on HBASE-13502:


bq.private KVComparator getRowComparator(TableName tableName) 
I believe you will change this type to CellComparator in HBASE-10800.

{quote}
I think it clearer if you write it as:
return TableName.META_TABLE_NAME.equals(tableName)? KeyValue.META_COMPARATOR: 
KeyValue.COMPARATOR;
{quote}
I also think we can do this way as Stack suggested.

+1. Can correct on commit.

 Deprecate/remove getRowComparator() in TableName
 

 Key: HBASE-13502
 URL: https://issues.apache.org/jira/browse/HBASE-13502
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-13502.patch, HBASE-13502_1.patch, 
 HBASE-13502_2.patch, HBASE-13502_2.patch, HBASE-13502_3.patch








[jira] [Updated] (HBASE-13339) Update default Hadoop version to 2.6.0

2015-04-21 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-13339:
--
Attachment: HBASE-13339-v1.patch

 Update default Hadoop version to 2.6.0
 --

 Key: HBASE-13339
 URL: https://issues.apache.org/jira/browse/HBASE-13339
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0, 1.1.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Attachments: HBASE-13339-v1.patch, HBASE-13339.patch


 Current default Hadoop version is getting a little long in the tooth. We 
 should update to the latest version. The latest version is backwards 
 compatible with 2.5.1's dfs and mr so this should be painless.





[jira] [Commented] (HBASE-13387) Add ByteBufferedCell an extension to Cell

2015-04-21 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505509#comment-14505509
 ] 

ramkrishna.s.vasudevan commented on HBASE-13387:


I was always for throwing an Exception, but after the discussions I think I am 
ok either way, because we are any way introducing new APIs to handle the 
Buffer's offset.
[~stack]
If you want to throw exceptions, then having offset APIs for BBs won't make 
sense. When we have both getXXXOffset() and getXXXBBOffset(), if the user uses 
the getXXXArray APIs with getXXXOffset and the getXXXBB() APIs with 
getXXXBBOffset(), there is no problem; but if he interchanges these APIs, we 
cannot figure out that he has jumbled the API usage, and in that case we cannot 
throw any exception. That could only be documented.

 Add ByteBufferedCell an extension to Cell
 -

 Key: HBASE-13387
 URL: https://issues.apache.org/jira/browse/HBASE-13387
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: ByteBufferedCell.docx, WIP_HBASE-13387_V2.patch, 
 WIP_ServerCell.patch


 This came in btw the discussion abt the parent Jira and recently Stack added 
 as a comment on the E2E patch on the parent Jira.
 The idea is to add a new Interface 'ByteBufferedCell'  in which we can add 
 new buffer based getter APIs and getters for position in components in BB.  
 We will keep this interface @InterfaceAudience.Private.   When the Cell is 
 backed by a DBB, we can create an Object implementing this new interface.
 The Comparators has to be aware abt this new Cell extension and has to use 
 the BB based APIs rather than getXXXArray().  Also give util APIs in CellUtil 
 to abstract the checks for new Cell type.  (Like matchingXXX APIs, 
 getValueAstype APIs etc)





[jira] [Updated] (HBASE-13078) IntegrationTestSendTraceRequests is a noop

2015-04-21 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-13078:
--
   Resolution: Fixed
Fix Version/s: (was: 0.98.13)
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Pushed to branch-1.0, branch-1.1, and to master. 0.98 has its own issue. Thanks 
for the patch [~elserj]

 IntegrationTestSendTraceRequests is a noop
 --

 Key: HBASE-13078
 URL: https://issues.apache.org/jira/browse/HBASE-13078
 Project: HBase
  Issue Type: Test
  Components: integration tests
Reporter: Nick Dimiduk
Assignee: Josh Elser
Priority: Critical
 Fix For: 2.0.0, 1.1.0, 1.0.2

 Attachments: HBASE-13078-0.98-removal.patch, 
 HBASE-13078-0.98-v1.patch, HBASE-13078-v1.patch, HBASE-13078.patch


 While pair-debugging with [~jeffreyz] on HBASE-13077, we noticed that 
 IntegrationTestSendTraceRequests doesn't actually assert anything. This test 
 should be converted to use a mini cluster, setup a POJOSpanReceiver, and then 
 verify the spans collected.





[jira] [Commented] (HBASE-13450) Purge RawBytescomparator from the writers and readers after HBASE-10800

2015-04-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505328#comment-14505328
 ] 

stack commented on HBASE-13450:
---

Failure looks related.

Please edit the description. I do not follow.

The binarySearch you add in Bytes is the same as the existing binarySearch 
except for the comparator part: i.e. the only diff is 
s/comparator/Bytes.BYTES_RAWCOMPARATOR/. Why reproduce the body? Why not call 
the existing binarySearch, passing Bytes.BYTES_RAWCOMPARATOR as the comparator?
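The delegation suggested above could look like this; the Bytes internals are simplified stand-ins, so treat it as a sketch of the pattern rather than the real HBase code:

```java
import java.util.Arrays;
import java.util.Comparator;

public class DelegateSketch {
    // Stand-in for the existing general-purpose Bytes.binarySearch.
    static int binarySearch(byte[][] arr, byte[] key, Comparator<byte[]> cmp) {
        return Arrays.binarySearch(arr, key, cmp);
    }

    // Stand-in for Bytes.BYTES_RAWCOMPARATOR: unsigned lexicographic order.
    static final Comparator<byte[]> RAW = (a, b) -> {
        int n = Math.min(a.length, b.length);
        for (int i = 0; i < n; i++) {
            int d = (a[i] & 0xff) - (b[i] & 0xff);
            if (d != 0) return d;
        }
        return a.length - b.length;
    };

    // The reviewed overload: delegate instead of duplicating the body.
    static int binarySearch(byte[][] arr, byte[] key) {
        return binarySearch(arr, key, RAW);
    }

    public static void main(String[] args) {
        byte[][] arr = { {1}, {2}, {3} };
        System.out.println(binarySearch(arr, new byte[]{2}));
    }
}
```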

We are replacing the raw comparator for rows. Is this going to be safe for meta 
file compares? (It looks like it is, going by the change in HFileBlockIndex: if 
there is no comparator we use Bytes, but if there is a comparator, we use it.) 
Is what is going on here a replacement of RawBytesComparator with 
Bytes.RAWCOMPARATOR?

Why write 'if(' and then follow it with 'else {'... i.e. sometimes you have a 
space and other times not (In the rest of the code, the style is to have a 
space).

What is RAW_BYTES in the below? Is this Bytes.RAWCOMPARATOR?

// Return null for RAW_BYTES


Is this right?

889 if (bloomType == BloomType.ROW) {
890   res = (Bytes.BYTES_RAWCOMPARATOR.compare(bloomKey, bloomKeyOffset, bloomKeyLen,
891       lastBloomKey, lastBloomKeyOffset, lastBloomKeyLen) >= 0);
892 } else {
893   res = (generalBloomFilterWriter.getComparator().compareFlatKey(bloomKey,
894       bloomKeyOffset, bloomKeyLen, lastBloomKey, lastBloomKeyOffset, lastBloomKeyLen)) >= 0;
895 }

If row bloom, we use bytes... but if this is a meta table file, we should use 
comparator ?

Rather than repeat the >= 0 test twice, why not do it once after you've done 
the compares rather than do

if (res) {

Hmmm... 

  bloomType == BloomType.ROWCOL ? KeyValue.COMPARATOR : null);

but.. it is replacing...

bloomType == BloomType.ROWCOL ? KeyValue.COMPARATOR : 
KeyValue.RAW_COMPARATOR);

... so, raw comparator is 'ok' if a row-only bloom. How does this work w/ meta 
keys?

Rather than do this in a few places

246   if (comparator != null) {
247 Bytes.writeByteArray(out, 
Bytes.toBytes(comparator.getClass().getName()));
248   } else {
249 Bytes.writeByteArray(out, 
Bytes.toBytes(Bytes.BYTES_RAWCOMPARATOR.getClass().getName()));
250 

just pass comparator even if it is null and let the method internally figure 
which comparator to use...






 Purge RawBytescomparator from the writers and readers after HBASE-10800
 ---

 Key: HBASE-13450
 URL: https://issues.apache.org/jira/browse/HBASE-13450
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-13450.patch


 The RawBytesComparator should not be coming in the HFiles, the writers and 
 readers as we will be trying to use the Bytes.RAW_COMPARATOR directly. Also 
 RawBytescomparator can no longer by CellComparator as we would like to deal 
 with CellComparator for all the comparisons going forward.





[jira] [Commented] (HBASE-13501) Deprecate/Remove getComparator() in HRegionInfo.

2015-04-21 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505380#comment-14505380
 ] 

ramkrishna.s.vasudevan commented on HBASE-13501:


What is the deprecation message to be added here? Will we say that getComparator 
will be replaced by getCellComparator? Or will we remove this API itself and 
move it to HRegion?
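For illustration only, since the replacement is exactly what is being asked here: one possible shape of the deprecation, with all names hypothetical:

```java
public class DeprecationSketch {
    // Illustrative stand-in; the real HRegionInfo API and the replacement
    // method name are still being decided in this Jira.
    static class RegionInfoLike {
        /** @deprecated use {@link #getCellComparator()} instead (hypothetical name). */
        @Deprecated
        Object getComparator() { return getCellComparator(); }

        Object getCellComparator() { return "CellComparator"; }
    }

    public static void main(String[] args) {
        // Old callers keep working through the deprecated delegate.
        System.out.println(new RegionInfoLike().getComparator());
    }
}
```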


 Deprecate/Remove getComparator() in HRegionInfo.
 

 Key: HBASE-13501
 URL: https://issues.apache.org/jira/browse/HBASE-13501
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-13501.patch








[jira] [Commented] (HBASE-13387) Add ByteBufferedCell an extension to Cell

2015-04-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505444#comment-14505444
 ] 

stack commented on HBASE-13387:
---

Ok. The writeup helps (the formatting is all wacky). It could do with an edit, 
but it lists out the fruit of our back and forth on the two patches posted. 
Great. (I have reservations about an Interface per server-side feature, but for 
now let's go this route... I think it means less change.) Looking forward to v3.

 Add ByteBufferedCell an extension to Cell
 -

 Key: HBASE-13387
 URL: https://issues.apache.org/jira/browse/HBASE-13387
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: ByteBufferedCell.docx, WIP_HBASE-13387_V2.patch, 
 WIP_ServerCell.patch


 This came in btw the discussion abt the parent Jira and recently Stack added 
 as a comment on the E2E patch on the parent Jira.
 The idea is to add a new Interface 'ByteBufferedCell'  in which we can add 
 new buffer based getter APIs and getters for position in components in BB.  
 We will keep this interface @InterfaceAudience.Private.   When the Cell is 
 backed by a DBB, we can create an Object implementing this new interface.
 The Comparators has to be aware abt this new Cell extension and has to use 
 the BB based APIs rather than getXXXArray().  Also give util APIs in CellUtil 
 to abstract the checks for new Cell type.  (Like matchingXXX APIs, 
 getValueAstype APIs etc)





[jira] [Commented] (HBASE-11758) Meta region location should be cached

2015-04-21 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505500#comment-14505500
 ] 

Enis Soztutar commented on HBASE-11758:
---

Duplicate of HBASE-10785? 

 Meta region location should be cached
 -

 Key: HBASE-11758
 URL: https://issues.apache.org/jira/browse/HBASE-11758
 Project: HBase
  Issue Type: Sub-task
Reporter: Virag Kothari

 The zk less assignment involves only master updating the meta and  this can 
 be faster if we cache the meta location instead of reading the meta znode 
 every time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13502) Deprecate/remove getRowComparator() in TableName

2015-04-21 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505521#comment-14505521
 ] 

ramkrishna.s.vasudevan commented on HBASE-13502:


There is something fishy in the way the patch gets created on my system.  I 
specifically changed that stmt as per Stack's comment.  I saw the change in the 
patch before I saved it.  But here it is not there ;)?  Ok, will change on 
commit. Thanks for the reviews.  Will commit it tomorrow morning.

 Deprecate/remove getRowComparator() in TableName
 

 Key: HBASE-13502
 URL: https://issues.apache.org/jira/browse/HBASE-13502
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-13502.patch, HBASE-13502_1.patch, 
 HBASE-13502_2.patch, HBASE-13502_2.patch, HBASE-13502_3.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13471) Deadlock closing a region

2015-04-21 Thread Rajesh Nishtala (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505541#comment-14505541
 ] 

Rajesh Nishtala commented on HBASE-13471:
-

I just updated the patch based on [~eclark]'s comments in the diff

 Deadlock closing a region
 -

 Key: HBASE-13471
 URL: https://issues.apache.org/jira/browse/HBASE-13471
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0, 0.98.13
Reporter: Elliott Clark
Assignee: Rajesh Nishtala
 Attachments: HBASE-13471-v1.patch, HBASE-13471.patch


 {code}
 Thread 4139 
 (regionserver/hbase412.example.com/10.158.6.53:60020-splits-1429003183537):
   State: WAITING
   Blocked count: 131
   Waited count: 228
   Waiting on 
 java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@50714dc3
   Stack:
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
 
 java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:943)
 org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1371)
 org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1325)
 
 org.apache.hadoop.hbase.regionserver.SplitTransactionImpl.stepsBeforePONR(SplitTransactionImpl.java:352)
 
 org.apache.hadoop.hbase.regionserver.SplitTransactionImpl.createDaughters(SplitTransactionImpl.java:252)
 
 org.apache.hadoop.hbase.regionserver.SplitTransactionImpl.execute(SplitTransactionImpl.java:509)
 
 org.apache.hadoop.hbase.regionserver.SplitRequest.run(SplitRequest.java:84)
 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
 {code}
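The dump shows the split's close path parked in `writeLock().lock()` inside `HRegion.doClose`. As a minimal illustration (plain `ReentrantReadWriteLock`, not the HRegion code) of why that park never ends while a read lock on the same lock is still outstanding:

```java
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class CloseDeadlockSketch {
    // While any read lock on the same ReentrantReadWriteLock is held, the
    // write lock cannot be acquired: tryLock() returns false, and the
    // unbounded lock() used by the close path parks, as in the WAITING frame above.
    static boolean writeLockAvailableDuringRead(ReentrantReadWriteLock lock) {
        lock.readLock().lock();                // stand-in for an in-flight region op
        try {
            return lock.writeLock().tryLock(); // the close path wants exclusivity
        } finally {
            lock.readLock().unlock();
        }
    }

    public static void main(String[] args) {
        ReentrantReadWriteLock regionLock = new ReentrantReadWriteLock();
        System.out.println(writeLockAvailableDuringRead(regionLock)); // prints false
    }
}
```

If the holder of the read lock is itself blocked waiting on the closing thread, neither side can proceed, which is the deadlock reported here.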



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13471) Deadlock closing a region

2015-04-21 Thread Rajesh Nishtala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Nishtala updated HBASE-13471:

Attachment: HBASE-13471-v1.patch

 Deadlock closing a region
 -

 Key: HBASE-13471
 URL: https://issues.apache.org/jira/browse/HBASE-13471
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0, 0.98.13
Reporter: Elliott Clark
Assignee: Rajesh Nishtala
 Attachments: HBASE-13471-v1.patch, HBASE-13471.patch


 {code}
 Thread 4139 
 (regionserver/hbase412.example.com/10.158.6.53:60020-splits-1429003183537):
   State: WAITING
   Blocked count: 131
   Waited count: 228
   Waiting on 
 java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@50714dc3
   Stack:
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
 
 java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:943)
 org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1371)
 org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1325)
 
 org.apache.hadoop.hbase.regionserver.SplitTransactionImpl.stepsBeforePONR(SplitTransactionImpl.java:352)
 
 org.apache.hadoop.hbase.regionserver.SplitTransactionImpl.createDaughters(SplitTransactionImpl.java:252)
 
 org.apache.hadoop.hbase.regionserver.SplitTransactionImpl.execute(SplitTransactionImpl.java:509)
 
 org.apache.hadoop.hbase.regionserver.SplitRequest.run(SplitRequest.java:84)
 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13496) Make Bytes$LexicographicalComparerHolder$UnsafeComparer::compareTo inlineable

2015-04-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505302#comment-14505302
 ] 

Anoop Sam John commented on HBASE-13496:


[~stack]
I have evidence from a micro test case where I compare two byte[]s with 
and without the patched code.
The byte[]s have an equal length of 135 bytes and are equal in content too (I 
want to make sure the loop executes all its cycles). I make the compare calls 
10 million times, testing on Java 8.
This gives an avg 30% improvement.
But I have not done the cluster testing yet.
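A rough reconstruction of the kind of micro test described (the harness below is an assumption; the real test exercised the Unsafe-based `Bytes` comparator, for which the plain loop here is only a stand-in that forces the full 135-byte comparison on each call):

```java
import java.util.Arrays;

public class CompareBench {
    // Stand-in lexicographic unsigned byte[] compare; the method under test in
    // the real benchmark was the Unsafe-based Bytes comparator.
    static int compare(byte[] x, byte[] y) {
        int n = Math.min(x.length, y.length);
        for (int i = 0; i < n; i++) {
            int d = (x[i] & 0xff) - (y[i] & 0xff);
            if (d != 0) return d;
        }
        return x.length - y.length;
    }

    public static void main(String[] args) {
        byte[] a = new byte[135], b = new byte[135];
        Arrays.fill(a, (byte) 7);
        Arrays.fill(b, (byte) 7);          // equal arrays: every call runs all 135 steps
        long start = System.nanoTime();
        int sink = 0;                      // consume results so the JIT can't drop the loop
        for (int i = 0; i < 10_000_000; i++) {
            sink += compare(a, b);
        }
        long ms = (System.nanoTime() - start) / 1_000_000;
        System.out.println("sink=" + sink + ", took " + ms + " ms");
    }
}
```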


 Make Bytes$LexicographicalComparerHolder$UnsafeComparer::compareTo inlineable
 -

 Key: HBASE-13496
 URL: https://issues.apache.org/jira/browse/HBASE-13496
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 1.2.0

 Attachments: HBASE-13496.patch


 While testing with some other perf comparisons I have noticed that the above 
 method (which is very hot in the read path) is not getting inlined:
 bq.@ 16   
 org.apache.hadoop.hbase.util.Bytes$LexicographicalComparerHolder$UnsafeComparer::compareTo
  (364 bytes)   hot method too big
 We can do minor refactoring to make it inlineable.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13418) Regions getting stuck in PENDING_CLOSE state infinitely in high load HA scenarios

2015-04-21 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505410#comment-14505410
 ] 

Esteban Gutierrez commented on HBASE-13418:
---

Are you using the features added in HDFS-3703 to skip stale DNs? If this is 
happening when killing most of the DNs, I think it is expected for the RS to 
enter into a long PENDING_CLOSE (and also other states) until the HDFS 
pipeline can be reconstructed. So depending on how long the DNs were down, this 
should or shouldn't have recovered.
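For reference, the stale-DN avoidance from HDFS-3703 is switched on with NameNode-side settings along these lines (an illustrative hdfs-site.xml fragment; check the Hadoop defaults for your version):

```xml
<property>
  <name>dfs.namenode.avoid.read.stale.datanode</name>
  <value>true</value>
</property>
<property>
  <name>dfs.namenode.avoid.write.stale.datanode</name>
  <value>true</value>
</property>
<property>
  <!-- how long without a heartbeat before a DN is considered stale (ms) -->
  <name>dfs.namenode.stale.datanode.interval</name>
  <value>30000</value>
</property>
```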

 Regions getting stuck in PENDING_CLOSE state infinitely in high load HA 
 scenarios
 -

 Key: HBASE-13418
 URL: https://issues.apache.org/jira/browse/HBASE-13418
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.10
Reporter: Vikas Vishwakarma

 In some heavy data load cases when there are multiple RegionServers going 
 up/down (HA) or when we try to shutdown/restart the entire HBase cluster, we 
 are observing that some regions are getting stuck in PENDING_CLOSE state 
 infinitely. 
 On going through the logs for a particular region stuck in PENDING_CLOSE 
 state, it looks like two memstore flushes got triggered for this region within 
 a few milliseconds, as given below, and after some time there is an Unrecoverable 
 exception while closing the region. I suspect this could be some kind of 
 race condition but need to check further.
 Logs:
 
 ..
 2015-04-06 11:47:33,309 INFO  [2,queue=0,port=60020] 
 regionserver.HRegionServer - Close 884fd5819112370d9a9834895b0ec19c, via 
 zk=yes, znode version=0, on 
 blitzhbase01-dnds1-4-crd.eng.sfdc.net,60020,1428318111711
 2015-04-06 11:47:33,309 DEBUG [-dnds3-4-crd:60020-0] 
 handler.CloseRegionHandler - Processing close of 
 RMHA_1,\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1428318937003.884fd5819112370d9a9834895b0ec19c.
 2015-04-06 11:47:33,319 DEBUG [-dnds3-4-crd:60020-0] regionserver.HRegion - 
 Closing 
 RMHA_1,\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1428318937003.884fd5819112370d9a9834895b0ec19c.:
  disabling compactions  flushes
 2015-04-06 11:47:33,319 INFO  [-dnds3-4-crd:60020-0] regionserver.HRegion - 
 Running close preflush of 
 RMHA_1,\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1428318937003.884fd5819112370d9a9834895b0ec19c.
 2015-04-06 11:47:33,319 INFO  [-dnds3-4-crd:60020-0] regionserver.HRegion - 
 Started memstore flush for 
 RMHA_1,\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1428318937003.884fd5819112370d9a9834895b0ec19c.,
  current region memstore size 70.0 M
 2015-04-06 11:47:33,327 DEBUG [-dnds3-4-crd:60020-0] regionserver.HRegion - 
 Updates disabled for region 
 RMHA_1,\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1428318937003.884fd5819112370d9a9834895b0ec19c.
 2015-04-06 11:47:33,328 INFO  [-dnds3-4-crd:60020-0] regionserver.HRegion - 
 Started memstore flush for 
 RMHA_1,\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1428318937003.884fd5819112370d9a9834895b0ec19c.,
  current region memstore size 70.0 M
 2015-04-06 11:47:33,328 WARN  [-dnds3-4-crd:60020-0] wal.FSHLog - Couldn't 
 find oldest seqNum for the region we are about to flush: 
 [884fd5819112370d9a9834895b0ec19c]
 2015-04-06 11:47:33,328 WARN  [-dnds3-4-crd:60020-0] regionserver.MemStore - 
 Snapshot called again without clearing previous. Doing nothing. Another 
 ongoing flush or did we fail last attempt?
 2015-04-06 11:47:33,334 FATAL [-dnds3-4-crd:60020-0] 
 regionserver.HRegionServer - ABORTING region server 
 blitzhbase01-dnds3-4-crd.eng.sfdc.net,60020,1428318082860: Unrecoverable 
 exception while closing region 
 RMHA_1,\x04\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00,1428318937003.884fd5819112370d9a9834895b0ec19c.,
  still finishing close



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13501) Deprecate/Remove getComparator() in HRegionInfo.

2015-04-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505450#comment-14505450
 ] 

Anoop Sam John commented on HBASE-13501:


You mean add getCellComparator to the Region interface, right?  I would prefer 
that.  It makes more sense than having getCellComparator in HRegionInfo.

 Deprecate/Remove getComparator() in HRegionInfo.
 

 Key: HBASE-13501
 URL: https://issues.apache.org/jira/browse/HBASE-13501
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-13501.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13517) Publish a client artifact with shaded dependencies

2015-04-21 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505539#comment-14505539
 ] 

Elliott Clark commented on HBASE-13517:
---

bq.I'm generally of the opinion that you should relocate absolutely everything 
you can.
K, working on it.

bq.Is Hadoop's Configuration really in our API?
Yeah: as return values from HBaseConfiguration, and as function parameters for 
Connection creation and MR job submission.

 Publish a client artifact with shaded dependencies
 --

 Key: HBASE-13517
 URL: https://issues.apache.org/jira/browse/HBASE-13517
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-13517-v1.patch, HBASE-13517-v2.patch, 
 HBASE-13517.patch


 Guava's moved on. Hadoop has not.
 Jackson moves whenever it feels like it.
 Protobuf moves with breaking changes.
 Shading all of the time would break people who require the transitive 
 dependencies for MR or other things, so let's provide an artifact with our 
 dependencies shaded. Then users can choose between the shaded version 
 and the non-shaded version.
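A sketch of what the shading could look like with the maven-shade-plugin's relocation support (the package patterns and shaded prefix here are illustrative, not the actual hbase-shaded-client pom):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <executions>
    <execution>
      <phase>package</phase>
      <goals><goal>shade</goal></goals>
      <configuration>
        <relocations>
          <!-- rewrite the dependency's packages (and our references to them)
               so they cannot clash with the versions users bring -->
          <relocation>
            <pattern>com.google.common</pattern>
            <shadedPattern>org.apache.hadoop.hbase.shaded.com.google.common</shadedPattern>
          </relocation>
          <relocation>
            <pattern>com.google.protobuf</pattern>
            <shadedPattern>org.apache.hadoop.hbase.shaded.com.google.protobuf</shadedPattern>
          </relocation>
        </relocations>
      </configuration>
    </execution>
  </executions>
</plugin>
```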



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13450) Purge RawBytescomparator from the writers and readers after HBASE-10800

2015-04-21 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505407#comment-14505407
 ] 

ramkrishna.s.vasudevan commented on HBASE-13450:


Thanks for the review.  I will update the patch.  Yes META does not have bloom 
keys as Anoop said.

 Purge RawBytescomparator from the writers and readers after HBASE-10800
 ---

 Key: HBASE-13450
 URL: https://issues.apache.org/jira/browse/HBASE-13450
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-13450.patch


 The RawBytesComparator should not be appearing in the HFiles, the writers, and 
 the readers, as we will be trying to use Bytes.RAW_COMPARATOR directly. Also, 
 RawBytesComparator can no longer be a CellComparator, as we would like to deal 
 with CellComparator for all comparisons going forward.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13450) Purge RawBytescomparator from the writers and readers after HBASE-10800

2015-04-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505342#comment-14505342
 ] 

Anoop Sam John commented on HBASE-13450:


bq.If row bloom, we use bytes... but if this is a meta table file, we should 
use comparator ?
bq.so, raw comparator is 'ok' if a row-only bloom. How does this work w/ meta 
keys?
We are safe here because we don't have bloom for META. :-)

 Purge RawBytescomparator from the writers and readers after HBASE-10800
 ---

 Key: HBASE-13450
 URL: https://issues.apache.org/jira/browse/HBASE-13450
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-13450.patch


 The RawBytesComparator should not be appearing in the HFiles, the writers, and 
 the readers, as we will be trying to use Bytes.RAW_COMPARATOR directly. Also, 
 RawBytesComparator can no longer be a CellComparator, as we would like to deal 
 with CellComparator for all comparisons going forward.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13502) Deprecate/remove getRowComparator() in TableName

2015-04-21 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-13502:
---
Attachment: HBASE-13502_3.patch

Updated patch addressing the comments.  Will commit this unless there are objections.

 Deprecate/remove getRowComparator() in TableName
 

 Key: HBASE-13502
 URL: https://issues.apache.org/jira/browse/HBASE-13502
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-13502.patch, HBASE-13502_1.patch, 
 HBASE-13502_2.patch, HBASE-13502_2.patch, HBASE-13502_3.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HBASE-13524) TestReplicationAdmin fails on JDK 1.8

2015-04-21 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13524?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark reassigned HBASE-13524:
-

Assignee: Elliott Clark

 TestReplicationAdmin fails on JDK 1.8
 -

 Key: HBASE-13524
 URL: https://issues.apache.org/jira/browse/HBASE-13524
 Project: HBase
  Issue Type: Bug
Reporter: Elliott Clark
Assignee: Elliott Clark

 {code}
 ---
  T E S T S
 ---
 Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=256m; 
 support was removed in 8.0
 Running org.apache.hadoop.hbase.client.replication.TestReplicationAdmin
 Tests run: 5, Failures: 1, Errors: 1, Skipped: 0, Time elapsed: 2.419 sec <<< 
 FAILURE! - in org.apache.hadoop.hbase.client.replication.TestReplicationAdmin
 testAppendPeerTableCFs(org.apache.hadoop.hbase.client.replication.TestReplicationAdmin)
   Time elapsed: 0.037 sec  <<< FAILURE!
 org.junit.ComparisonFailure: expected:<t[2;t1]> but was:<t[1;t2]>
   at org.junit.Assert.assertEquals(Assert.java:115)
   at org.junit.Assert.assertEquals(Assert.java:144)
   at 
 org.apache.hadoop.hbase.client.replication.TestReplicationAdmin.testAppendPeerTableCFs(TestReplicationAdmin.java:170)
 testEnableDisable(org.apache.hadoop.hbase.client.replication.TestReplicationAdmin)
   Time elapsed: 0.031 sec  <<< ERROR!
 java.lang.IllegalArgumentException: Cannot add a peer with id=1 because that 
 id already exists.
   at 
 org.apache.hadoop.hbase.replication.ReplicationPeersZKImpl.addPeer(ReplicationPeersZKImpl.java:112)
   at 
 org.apache.hadoop.hbase.client.replication.ReplicationAdmin.addPeer(ReplicationAdmin.java:202)
   at 
 org.apache.hadoop.hbase.client.replication.ReplicationAdmin.addPeer(ReplicationAdmin.java:181)
   at 
 org.apache.hadoop.hbase.client.replication.TestReplicationAdmin.testEnableDisable(TestReplicationAdmin.java:115)
 {code}
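The first failure (t2;t1 expected vs t1;t2 actual) is the classic symptom of a test asserting on the iteration order of an unordered collection, whose order changed for `HashMap`/`HashSet` between JDK 7 and 8. A hedged sketch of the usual fix (not the actual TestReplicationAdmin change): make the comparison order-insensitive, for example by sorting before joining.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class OrderInsensitive {
    // Join the elements in a deterministic (sorted) order so the result does
    // not depend on HashSet iteration order, which differs across JDK versions.
    static String joinSorted(Set<String> tables) {
        List<String> sorted = new ArrayList<>(tables);
        Collections.sort(sorted);
        return String.join(";", sorted);
    }

    public static void main(String[] args) {
        Set<String> tableCFs = new HashSet<>(Arrays.asList("t2", "t1"));
        System.out.println(joinSorted(tableCFs)); // prints t1;t2 regardless of JDK
    }
}
```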



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13387) Add ByteBufferedCell an extension to Cell

2015-04-21 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505494#comment-14505494
 ] 

ramkrishna.s.vasudevan commented on HBASE-13387:


bq.Some times it can be a DBB backed Cell landing in there and if we throw 
Exception it will blast.
I can understand why you don't like exceptions being thrown here, but one thing 
I remember from the discussion is that we thought throwing an exception would 
kill the HRS, but it won't. It will only kill the scan.

 Add ByteBufferedCell an extension to Cell
 -

 Key: HBASE-13387
 URL: https://issues.apache.org/jira/browse/HBASE-13387
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: ByteBufferedCell.docx, WIP_HBASE-13387_V2.patch, 
 WIP_ServerCell.patch


 This came up during the discussion about the parent Jira; Stack recently added 
 it as a comment on the E2E patch on the parent Jira.
 The idea is to add a new interface 'ByteBufferedCell' in which we can add 
 new buffer-based getter APIs and getters for the positions of components in the BB.  
 We will keep this interface @InterfaceAudience.Private.   When the Cell is 
 backed by a DBB, we can create an Object implementing this new interface.
 The comparators have to be aware of this new Cell extension and have to use 
 the BB-based APIs rather than getXXXArray().  Also provide util APIs in CellUtil 
 to abstract the checks for the new Cell type (like matchingXXX APIs, 
 getValueAsType APIs, etc.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13339) Update default Hadoop version to 2.6.0

2015-04-21 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13339?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505510#comment-14505510
 ] 

Elliott Clark commented on HBASE-13339:
---

It's not just not recommended. It's that the client we're shipping by 
default is unable to handle any timeouts. So about twice a day we have 
integration tests fail with regions stuck opening forever.

{code}
Thread 30178 (StoreFileOpenerThread-meta-1):
  State: RUNNABLE
  Blocked count: 4
  Waited count: 9
  Stack:
sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:79)
sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)

org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:335)
org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
java.io.FilterInputStream.read(FilterInputStream.java:83)
org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:1998)

org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(RemoteBlockReader2.java:408)

org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(BlockReaderFactory.java:787)

org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:666)
org.apache.hadoop.hdfs.BlockReaderFactory.build(BlockReaderFactory.java:326)
org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:570)

org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:793)
org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:840)
java.io.DataInputStream.readFully(DataInputStream.java:195)
Thread 30177 (StoreOpener-dbf37ef3cd559c78c591140285715c3f-1):
  State: WAITING
  Blocked count: 4
  Waited count: 9
  Waiting on 
java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject@446083e5
  Stack:
sun.misc.Unsafe.park(Native Method)
java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)

java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:442)

java.util.concurrent.ExecutorCompletionService.take(ExecutorCompletionService.java:193)
org.apache.hadoop.hbase.regionserver.HStore.openStoreFiles(HStore.java:542)
org.apache.hadoop.hbase.regionserver.HStore.loadStoreFiles(HStore.java:510)
org.apache.hadoop.hbase.regionserver.HStore.init(HStore.java:272)

org.apache.hadoop.hbase.regionserver.HRegion.instantiateHStore(HRegion.java:4847)
org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:887)
org.apache.hadoop.hbase.regionserver.HRegion$1.call(HRegion.java:884)
java.util.concurrent.FutureTask.run(FutureTask.java:266)
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
java.util.concurrent.FutureTask.run(FutureTask.java:266)

java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)

java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
java.lang.Thread.run(Thread.java:745)
{code}


The abstract fear of maybe having some incompatibility that users would have to 
deal with doesn't seem to me to justify staying behind when there are bug fixes 
and new features.

 Update default Hadoop version to 2.6.0
 --

 Key: HBASE-13339
 URL: https://issues.apache.org/jira/browse/HBASE-13339
 Project: HBase
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0, 1.1.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Attachments: HBASE-13339-v1.patch, HBASE-13339.patch


 Current default Hadoop version is getting a little long in the tooth. We 
 should update to the latest version. The latest version is backwards 
 compatible with 2.5.1's dfs and mr so this should be painless.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13516) Increase PermSize to 128MB

2015-04-21 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-13516:
--
Attachment: hbase-13516_v2.patch

Added a note about JDK8+. I'll commit this unless further discussion. 
We can remove this once we are JDK8 only. 

 Increase PermSize to 128MB
 --

 Key: HBASE-13516
 URL: https://issues.apache.org/jira/browse/HBASE-13516
 Project: HBase
  Issue Type: Improvement
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0, 1.1.0

 Attachments: hbase-13516_v1.patch, hbase-13516_v2.patch


 HBase uses ~40MB, and with Phoenix ~56MB, of perm space out of the 64MB 
 default. Every Filter and Coprocessor increases that.
 Running out of perm space triggers a stop-the-world full GC of the entire 
 heap. We have seen this in a misconfigured cluster. 
 Should we default to {{-XX:PermSize=128m -XX:MaxPermSize=128m}} out of the 
 box as a convenience for users? 
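If adopted, such defaults would typically land in hbase-env.sh along these lines (an illustrative fragment, not the committed change; the PermGen flags only apply on JDK 7 and earlier, since JDK 8 replaced PermGen with Metaspace and warns on these options):

```shell
# Illustrative hbase-env.sh fragment -- not the committed change.
# On JDK 8+ these flags are ignored (PermGen was replaced by Metaspace).
export HBASE_OPTS="$HBASE_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
```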



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13517) Publish a client artifact with shaded dependencies

2015-04-21 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505533#comment-14505533
 ] 

Sean Busbey commented on HBASE-13517:
-

{quote}
Should we re-locate the commons-* stuff too? I think we should leave the 
logging stuff exposed, as that stuff relies on loading classes by name; however 
the rest can move if it's wanted.
{quote}

I'm generally of the opinion that you should relocate absolutely everything you 
can.

Is Hadoop's Configuration really in our API?

 Publish a client artifact with shaded dependencies
 --

 Key: HBASE-13517
 URL: https://issues.apache.org/jira/browse/HBASE-13517
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-13517-v1.patch, HBASE-13517-v2.patch, 
 HBASE-13517.patch


 Guava's moved on. Hadoop has not.
 Jackson moves whenever it feels like it.
 Protobuf moves with breaking changes.
 Shading all of the time would break people who require the transitive 
 dependencies for MR or other things, so let's provide an artifact with our 
 dependencies shaded. Then users can choose between the shaded version 
 and the non-shaded version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13525) Update test-patch to leverage rewrite in Hadoop

2015-04-21 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-13525:
---

 Summary: Update test-patch to leverage rewrite in Hadoop
 Key: HBASE-13525
 URL: https://issues.apache.org/jira/browse/HBASE-13525
 Project: HBase
  Issue Type: Improvement
  Components: build
Reporter: Sean Busbey
 Fix For: 2.0.0


Once HADOOP-11746 lands over in Hadoop, incorporate its changes into our 
test-patch. The easiest approach is most likely to start with the Hadoop version 
and add in the features we have locally that they don't.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13387) Add ByteBufferedCell an extension to Cell

2015-04-21 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-13387:
---
Attachment: ByteBufferedCell.docx

The design considerations and the list of APIs in the new interface. It also 
mentions the related changes required in the read path.

 Add ByteBufferedCell an extension to Cell
 -

 Key: HBASE-13387
 URL: https://issues.apache.org/jira/browse/HBASE-13387
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: ByteBufferedCell.docx, WIP_HBASE-13387_V2.patch, 
 WIP_ServerCell.patch


 This came up during the discussion about the parent Jira; Stack recently added 
 it as a comment on the E2E patch on the parent Jira.
 The idea is to add a new interface 'ByteBufferedCell' in which we can add 
 new buffer-based getter APIs and getters for the positions of components in the BB.  
 We will keep this interface @InterfaceAudience.Private.   When the Cell is 
 backed by a DBB, we can create an Object implementing this new interface.
 The comparators have to be aware of this new Cell extension and have to use 
 the BB-based APIs rather than getXXXArray().  Also provide util APIs in CellUtil 
 to abstract the checks for the new Cell type (like matchingXXX APIs, 
 getValueAsType APIs, etc.)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13078) IntegrationTestSendTraceRequests is a noop

2015-04-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505389#comment-14505389
 ] 

stack commented on HBASE-13078:
---

[~anoop.hbase] Thanks. You applied it everywhere?

 IntegrationTestSendTraceRequests is a noop
 --

 Key: HBASE-13078
 URL: https://issues.apache.org/jira/browse/HBASE-13078
 Project: HBase
  Issue Type: Test
  Components: integration tests
Reporter: Nick Dimiduk
Assignee: Josh Elser
Priority: Critical
 Fix For: 2.0.0, 1.1.0, 0.98.13, 1.0.2

 Attachments: HBASE-13078-0.98-removal.patch, 
 HBASE-13078-0.98-v1.patch, HBASE-13078-v1.patch, HBASE-13078.patch


 While pair-debugging with [~jeffreyz] on HBASE-13077, we noticed that 
 IntegrationTestSendTraceRequests doesn't actually assert anything. This test 
 should be converted to use a mini cluster, setup a POJOSpanReceiver, and then 
 verify the spans collected.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13387) Add ByteBufferedCell an extension to Cell

2015-04-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505395#comment-14505395
 ] 

Anoop Sam John commented on HBASE-13387:


bq. Is there a writeup anywhere on change in thinking that I can refer too?
[~stack]  attached boss.
[~apurtell]  Do you suggest we close this issue and create a new one, as this 
contains patches for two approaches? Or just remove the attached patches, as they 
will confuse?  I thought of continuing with this Jira as it contains many comments 
about the design considerations and the pros and cons. I am ok with either.

 Add ByteBufferedCell an extension to Cell
 -

 Key: HBASE-13387
 URL: https://issues.apache.org/jira/browse/HBASE-13387
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: ByteBufferedCell.docx, WIP_HBASE-13387_V2.patch, 
 WIP_ServerCell.patch


 This came up during the discussion about the parent Jira; recently Stack added 
 it as a comment on the E2E patch on the parent Jira.
 The idea is to add a new Interface 'ByteBufferedCell' in which we can add 
 new buffer based getter APIs and getters for the position of components in the BB.
 We will keep this interface @InterfaceAudience.Private.  When the Cell is 
 backed by a DBB, we can create an Object implementing this new interface.
 The Comparators have to be aware of this new Cell extension and have to use 
 the BB based APIs rather than getXXXArray().  Also provide util APIs in CellUtil 
 to abstract the checks for the new Cell type (like matchingXXX APIs, 
 getValueAstype APIs, etc.).
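The shape being proposed can be sketched roughly as below. This is only an illustration, not the committed API: the accessor names (getValueByteBuffer, getValuePosition) and the CellUtil-style helper are hypothetical, and the Cell interface is reduced to its value accessors for brevity.

```java
import java.nio.ByteBuffer;

// Reduced stand-in for org.apache.hadoop.hbase.Cell (value accessors only).
interface Cell {
    byte[] getValueArray();
    int getValueOffset();
    int getValueLength();
}

// Hypothetical server-side extension: exposes the backing ByteBuffer and the
// value's position in it, so comparators can avoid getXXXArray() copies.
interface ByteBufferedCell extends Cell {
    ByteBuffer getValueByteBuffer();
    int getValuePosition();
}

class BBBackedCell implements ByteBufferedCell {
    private final ByteBuffer buf;  // could be a DirectByteBuffer in practice
    private final int pos, len;

    BBBackedCell(ByteBuffer buf, int pos, int len) {
        this.buf = buf; this.pos = pos; this.len = len;
    }

    public ByteBuffer getValueByteBuffer() { return buf; }
    public int getValuePosition() { return pos; }
    public int getValueLength() { return len; }
    public int getValueOffset() { return 0; }

    // The old API still works, but pays a copy when the buffer is off-heap.
    public byte[] getValueArray() {
        byte[] copy = new byte[len];
        ByteBuffer dup = buf.duplicate(); // leave the shared buffer's position alone
        dup.position(pos);
        dup.get(copy);
        return copy;
    }
}

// CellUtil-style helper that abstracts the type check away from callers.
final class CellUtilSketch {
    static boolean matchingValue(Cell c, byte[] expected) {
        if (c.getValueLength() != expected.length) return false;
        if (c instanceof ByteBufferedCell) {
            ByteBufferedCell bbc = (ByteBufferedCell) c;
            ByteBuffer b = bbc.getValueByteBuffer();
            int p = bbc.getValuePosition();
            for (int i = 0; i < expected.length; i++) {
                if (b.get(p + i) != expected[i]) return false;
            }
            return true;
        }
        // Fallback: array-based path for on-heap cells.
        byte[] a = c.getValueArray();
        int off = c.getValueOffset();
        for (int i = 0; i < expected.length; i++) {
            if (a[off + i] != expected[i]) return false;
        }
        return true;
    }
}
```

The instanceof check in the helper is exactly the kind of test the proposed CellUtil APIs would hide from callers.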





[jira] [Commented] (HBASE-13450) Purge RawBytescomparator from the writers and readers after HBASE-10800

2015-04-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505393#comment-14505393
 ] 

stack commented on HBASE-13450:
---

bq. We are safe here because we don't have bloom for META. 

I came to that realization on re-review of HBASE-10800 just now. It needs a comment 
then, I'd say, and we should probably make blooms work for meta, but that is 
another issue. Thanks [~anoop.hbase]


 Purge RawBytescomparator from the writers and readers after HBASE-10800
 ---

 Key: HBASE-13450
 URL: https://issues.apache.org/jira/browse/HBASE-13450
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-13450.patch


 The RawBytesComparator should not be coming in the HFiles, the writers and 
 readers, as we will be trying to use the Bytes.RAW_COMPARATOR directly. Also 
 RawBytesComparator can no longer be a CellComparator, as we would like to deal 
 with CellComparator for all the comparisons going forward.





[jira] [Updated] (HBASE-13450) Purge RawBytescomparator from the writers and readers after HBASE-10800

2015-04-21 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13450?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-13450:
---
Description: 
Currently KeyValue.RAW_COMPARATOR is written in the Trailer of the HFiles.  
Ideally this need not be persisted to the trailer of the HFiles, because only 
the ROW bloom and the meta index blocks use this. Currently RAW_COMPARATOR is 
also of type KVComparator.
HBASE-10800 would introduce CellComparator and would expect only cells to be 
compared.  We cannot have RAW_COMPARATOR be a type of CellComparator.  Hence it is 
better to avoid writing the RAW_COMPARATOR to the FFT; wherever we need 
RAW_COMPARATOR we will directly use it as Bytes.BYTES_RAWCOMPARATOR.

  was:The RawBytesComparator should not be coming in the HFiles, the writers 
and readers, as we will be trying to use the Bytes.RAW_COMPARATOR directly. Also 
RawBytesComparator can no longer be a CellComparator, as we would like to deal 
with CellComparator for all the comparisons going forward.


 Purge RawBytescomparator from the writers and readers after HBASE-10800
 ---

 Key: HBASE-13450
 URL: https://issues.apache.org/jira/browse/HBASE-13450
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-13450.patch


 Currently KeyValue.RAW_COMPARATOR is written in the Trailer of the HFiles.  
 Ideally this need not be persisted to the trailer of the HFiles, because only 
 the ROW bloom and the meta index blocks use this. Currently RAW_COMPARATOR 
 is also of type KVComparator.
 HBASE-10800 would introduce CellComparator and would expect only cells to be 
 compared.  We cannot have RAW_COMPARATOR be a type of CellComparator.  Hence it 
 is better to avoid writing the RAW_COMPARATOR to the FFT; wherever we 
 need RAW_COMPARATOR we will directly use it as Bytes.BYTES_RAWCOMPARATOR.
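For reference, the ordering Bytes.BYTES_RAWCOMPARATOR supplies is plain unsigned lexicographic byte comparison; below is a self-contained equivalent (written out rather than importing the HBase class, so the sketch stands alone).

```java
final class RawBytesCompareDemo {
    // Unsigned lexicographic compare: the ordering the ROW bloom and meta
    // index blocks need, without any comparator name persisted in the trailer.
    static int compareTo(byte[] a, int aOff, int aLen,
                         byte[] b, int bOff, int bLen) {
        int n = Math.min(aLen, bLen);
        for (int i = 0; i < n; i++) {
            int ai = a[aOff + i] & 0xff;  // unsigned: 0x80 sorts after 0x7f
            int bi = b[bOff + i] & 0xff;
            if (ai != bi) return ai - bi;
        }
        return aLen - bLen;               // on a common prefix, shorter sorts first
    }
}
```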





[jira] [Commented] (HBASE-13387) Add ByteBufferedCell an extension to Cell

2015-04-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505466#comment-14505466
 ] 

Anoop Sam John commented on HBASE-13387:


I got your point about this:
bq. I have reservations about an Interface per server-side feature

ByteBufferedCell would be a bad name in such a case; ServerCell would have been 
better. We can change it per whatever we decide in discussion. Oh, I am not good at 
all at naming.

 Add ByteBufferedCell an extension to Cell
 -

 Key: HBASE-13387
 URL: https://issues.apache.org/jira/browse/HBASE-13387
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: ByteBufferedCell.docx, WIP_HBASE-13387_V2.patch, 
 WIP_ServerCell.patch


 This came up during the discussion about the parent Jira; recently Stack added 
 it as a comment on the E2E patch on the parent Jira.
 The idea is to add a new Interface 'ByteBufferedCell' in which we can add 
 new buffer based getter APIs and getters for the position of components in the BB.
 We will keep this interface @InterfaceAudience.Private.  When the Cell is 
 backed by a DBB, we can create an Object implementing this new interface.
 The Comparators have to be aware of this new Cell extension and have to use 
 the BB based APIs rather than getXXXArray().  Also provide util APIs in CellUtil 
 to abstract the checks for the new Cell type (like matchingXXX APIs, 
 getValueAstype APIs, etc.).





[jira] [Commented] (HBASE-13387) Add ByteBufferedCell an extension to Cell

2015-04-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505508#comment-14505508
 ] 

Anoop Sam John commented on HBASE-13387:


The Scan can be a compaction-time activity too, so my mind leans 
towards not throwing an Exception.

 Add ByteBufferedCell an extension to Cell
 -

 Key: HBASE-13387
 URL: https://issues.apache.org/jira/browse/HBASE-13387
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: ByteBufferedCell.docx, WIP_HBASE-13387_V2.patch, 
 WIP_ServerCell.patch


 This came up during the discussion about the parent Jira; recently Stack added 
 it as a comment on the E2E patch on the parent Jira.
 The idea is to add a new Interface 'ByteBufferedCell' in which we can add 
 new buffer based getter APIs and getters for the position of components in the BB.
 We will keep this interface @InterfaceAudience.Private.  When the Cell is 
 backed by a DBB, we can create an Object implementing this new interface.
 The Comparators have to be aware of this new Cell extension and have to use 
 the BB based APIs rather than getXXXArray().  Also provide util APIs in CellUtil 
 to abstract the checks for the new Cell type (like matchingXXX APIs, 
 getValueAstype APIs, etc.).





[jira] [Comment Edited] (HBASE-13387) Add ByteBufferedCell an extension to Cell

2015-04-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505508#comment-14505508
 ] 

Anoop Sam John edited comment on HBASE-13387 at 4/21/15 6:52 PM:
-

The Scan can be a compaction-time activity too, which would make the compaction 
fail and cause issues later in the cluster, so my mind leans 
towards not throwing an Exception.


was (Author: anoop.hbase):
The Scan can be a compaction time activity too..  So some how my mind is 
towards not throwing Exception.

 Add ByteBufferedCell an extension to Cell
 -

 Key: HBASE-13387
 URL: https://issues.apache.org/jira/browse/HBASE-13387
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: ByteBufferedCell.docx, WIP_HBASE-13387_V2.patch, 
 WIP_ServerCell.patch


 This came up during the discussion about the parent Jira; recently Stack added 
 it as a comment on the E2E patch on the parent Jira.
 The idea is to add a new Interface 'ByteBufferedCell' in which we can add 
 new buffer based getter APIs and getters for the position of components in the BB.
 We will keep this interface @InterfaceAudience.Private.  When the Cell is 
 backed by a DBB, we can create an Object implementing this new interface.
 The Comparators have to be aware of this new Cell extension and have to use 
 the BB based APIs rather than getXXXArray().  Also provide util APIs in CellUtil 
 to abstract the checks for the new Cell type (like matchingXXX APIs, 
 getValueAstype APIs, etc.).





[jira] [Commented] (HBASE-13516) Increase PermSize to 128MB

2015-04-21 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505536#comment-14505536
 ] 

Sean Busbey commented on HBASE-13516:
-

{quote}
bq. Can someone do a quick check that jdk8 ignores these silently?

It does not ignore them silently, as I stated above.
{quote}

Apologies, a combination of reading and writing too fast. I misread your original 
line as "throw up or give warnings" and meant to make sure it didn't "throw up" 
= fail with errors.

 Increase PermSize to 128MB
 --

 Key: HBASE-13516
 URL: https://issues.apache.org/jira/browse/HBASE-13516
 Project: HBase
  Issue Type: Improvement
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0, 1.1.0

 Attachments: hbase-13516_v1.patch, hbase-13516_v2.patch


 HBase uses ~40MB, and with Phoenix we use ~56MB of Perm space out of the 64MB 
 default. Every Filter and Coprocessor increases that.
 Running out of perm space triggers a stop-the-world full GC of the entire 
 heap. We have seen this in misconfigured clusters. 
 Should we default to  {{-XX:PermSize=128m -XX:MaxPermSize=128m}} out of the 
 box as a convenience for users? 
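If adopted, the change would amount to something like the following line in hbase-env.sh (appending to HBASE_OPTS is an illustrative assumption about where the flags would land; note they only have an effect on JDK 7 and earlier, since JDK 8 removed PermGen and merely warns about these options).

```shell
# Raise PermGen from the 64MB default; Phoenix plus coprocessors easily exceed it.
export HBASE_OPTS="$HBASE_OPTS -XX:PermSize=128m -XX:MaxPermSize=128m"
```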





[jira] [Commented] (HBASE-13387) Add ByteBufferedCell an extension to Cell

2015-04-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505482#comment-14505482
 ] 

Anoop Sam John commented on HBASE-13387:


bq.Can we throw Unsupported exceptions if wrong API is used?
You mean getXXXArray() should throw an Exception when called on a DBB backed Cell? 
I would say we should not; all the old APIs have to work. Agreed that when one is 
called on a DBB backed Cell, it has perf hits. Why I am so particular about that 
is that we pass the Cell type throughout the read code path, and also to our CPs 
and Filters. Sometimes a DBB backed Cell can land in there, and if we 
throw an Exception it will blast.
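The perf hit mentioned here comes from direct ByteBuffers having no backing array (hasArray() returns false), so an array-style getter must materialize a copy on every call. A minimal illustration under that assumption — copyToArray is a hypothetical helper, not HBase code:

```java
import java.nio.ByteBuffer;

final class DbbCopyDemo {
    // What a getXXXArray()-style accessor must do for an off-heap buffer:
    // hasArray() is false for direct buffers, so the bytes are copied out
    // into a fresh heap array on every call.
    static byte[] copyToArray(ByteBuffer buf, int pos, int len) {
        byte[] out = new byte[len];
        ByteBuffer dup = buf.duplicate(); // don't disturb the shared buffer's position
        dup.position(pos);
        dup.get(out);
        return out;
    }
}
```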

 Add ByteBufferedCell an extension to Cell
 -

 Key: HBASE-13387
 URL: https://issues.apache.org/jira/browse/HBASE-13387
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: ByteBufferedCell.docx, WIP_HBASE-13387_V2.patch, 
 WIP_ServerCell.patch


 This came up during the discussion about the parent Jira; recently Stack added 
 it as a comment on the E2E patch on the parent Jira.
 The idea is to add a new Interface 'ByteBufferedCell' in which we can add 
 new buffer based getter APIs and getters for the position of components in the BB.
 We will keep this interface @InterfaceAudience.Private.  When the Cell is 
 backed by a DBB, we can create an Object implementing this new interface.
 The Comparators have to be aware of this new Cell extension and have to use 
 the BB based APIs rather than getXXXArray().  Also provide util APIs in CellUtil 
 to abstract the checks for the new Cell type (like matchingXXX APIs, 
 getValueAstype APIs, etc.).





[jira] [Commented] (HBASE-13078) IntegrationTestSendTraceRequests is a noop

2015-04-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505520#comment-14505520
 ] 

Anoop Sam John commented on HBASE-13078:


Sorry [~stack], not following you. I have not committed this patch anywhere.

 IntegrationTestSendTraceRequests is a noop
 --

 Key: HBASE-13078
 URL: https://issues.apache.org/jira/browse/HBASE-13078
 Project: HBase
  Issue Type: Test
  Components: integration tests
Reporter: Nick Dimiduk
Assignee: Josh Elser
Priority: Critical
 Fix For: 2.0.0, 1.1.0, 0.98.13, 1.0.2

 Attachments: HBASE-13078-0.98-removal.patch, 
 HBASE-13078-0.98-v1.patch, HBASE-13078-v1.patch, HBASE-13078.patch


 While pair-debugging with [~jeffreyz] on HBASE-13077, we noticed that 
 IntegrationTestSendTraceRequests doesn't actually assert anything. This test 
 should be converted to use a mini cluster, set up a POJOSpanReceiver, and then 
 verify the spans collected.





[jira] [Commented] (HBASE-13078) IntegrationTestSendTraceRequests is a noop

2015-04-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505544#comment-14505544
 ] 

stack commented on HBASE-13078:
---

[~anoop.hbase] A misreading on my part. Sorry about that.

 IntegrationTestSendTraceRequests is a noop
 --

 Key: HBASE-13078
 URL: https://issues.apache.org/jira/browse/HBASE-13078
 Project: HBase
  Issue Type: Test
  Components: integration tests
Reporter: Nick Dimiduk
Assignee: Josh Elser
Priority: Critical
 Fix For: 2.0.0, 1.1.0, 0.98.13, 1.0.2

 Attachments: HBASE-13078-0.98-removal.patch, 
 HBASE-13078-0.98-v1.patch, HBASE-13078-v1.patch, HBASE-13078.patch


 While pair-debugging with [~jeffreyz] on HBASE-13077, we noticed that 
 IntegrationTestSendTraceRequests doesn't actually assert anything. This test 
 should be converted to use a mini cluster, set up a POJOSpanReceiver, and then 
 verify the spans collected.





[jira] [Updated] (HBASE-13471) Fix a possible infinite loop in doMiniBatchMutation

2015-04-21 Thread Rajesh Nishtala (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rajesh Nishtala updated HBASE-13471:

Summary: Fix a possible infinite loop in doMiniBatchMutation  (was: 
Deadlock closing a region)

 Fix a possible infinite loop in doMiniBatchMutation
 ---

 Key: HBASE-13471
 URL: https://issues.apache.org/jira/browse/HBASE-13471
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0, 0.98.13
Reporter: Elliott Clark
Assignee: Rajesh Nishtala
 Attachments: HBASE-13471-v1.patch, HBASE-13471.patch


 {code}
 Thread 4139 
 (regionserver/hbase412.example.com/10.158.6.53:60020-splits-1429003183537):
   State: WAITING
   Blocked count: 131
   Waited count: 228
   Waiting on 
 java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@50714dc3
   Stack:
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
 
 java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:943)
 org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1371)
 org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1325)
 
 org.apache.hadoop.hbase.regionserver.SplitTransactionImpl.stepsBeforePONR(SplitTransactionImpl.java:352)
 
 org.apache.hadoop.hbase.regionserver.SplitTransactionImpl.createDaughters(SplitTransactionImpl.java:252)
 
 org.apache.hadoop.hbase.regionserver.SplitTransactionImpl.execute(SplitTransactionImpl.java:509)
 
 org.apache.hadoop.hbase.regionserver.SplitRequest.run(SplitRequest.java:84)
 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
 {code}
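The dump above shows the split thread parked indefinitely in writeLock().lock() inside HRegion.doClose. One generic way to bound such waits — shown here as a sketch of the locking pattern, not the actual HBASE-13471 fix — is tryLock with a timeout, so the caller can back off or retry instead of parking forever:

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

final class BoundedCloseDemo {
    private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

    // Bounded variant of the blocking writeLock().lock() seen in the dump:
    // returns false on timeout so the caller can abort or retry instead of
    // waiting forever for readers to drain.
    boolean tryClose(long timeoutMillis) {
        boolean acquired;
        try {
            acquired = lock.writeLock().tryLock(timeoutMillis, TimeUnit.MILLISECONDS);
        } catch (InterruptedException ie) {
            Thread.currentThread().interrupt();
            return false;
        }
        if (!acquired) {
            return false; // timed out; region stays open, caller decides what next
        }
        try {
            // ... the close-region work would go here ...
            return true;
        } finally {
            lock.writeLock().unlock();
        }
    }

    void holdReadLock()    { lock.readLock().lock(); }
    void releaseReadLock() { lock.readLock().unlock(); }
}
```

Because ReentrantReadWriteLock never upgrades a held read lock to a write lock, a stuck reader makes the bounded attempt fail fast rather than deadlock.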





[jira] [Commented] (HBASE-13496) Make Bytes$LexicographicalComparerHolder$UnsafeComparer::compareTo inlineable

2015-04-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13496?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505398#comment-14505398
 ] 

stack commented on HBASE-13496:
---

I'd say post the test code, how to run it, and numbers, and then I am +1 on 
commit.

 Make Bytes$LexicographicalComparerHolder$UnsafeComparer::compareTo inlineable
 -

 Key: HBASE-13496
 URL: https://issues.apache.org/jira/browse/HBASE-13496
 Project: HBase
  Issue Type: Sub-task
  Components: Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Fix For: 2.0.0, 1.2.0

 Attachments: HBASE-13496.patch


 While testing some other perf comparisons, I have noticed that the above 
 method (which is very hot in the read path) is not getting inlined:
 bq.@ 16   
 org.apache.hadoop.hbase.util.Bytes$LexicographicalComparerHolder$UnsafeComparer::compareTo
  (364 bytes)   hot method too big
 We can do minor refactoring to make it inlineable.
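The "hot method too big" message comes from HotSpot's FreqInlineSize threshold (325 bytecode bytes by default on common builds), which the 364-byte compareTo exceeds. The standard diagnostic flags below surface the decision; the benchmark main class and classpath are placeholders:

```shell
# Print JIT inlining decisions; grep the output for "hot method too big"
# against UnsafeComparer::compareTo.
java -XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining \
     -cp hbase-server.jar YourBenchmarkMain

# Workaround without refactoring: raise the frequent-inline threshold above
# the 364-byte method size (the refactoring avoids having to ship this flag).
java -XX:FreqInlineSize=400 -cp hbase-server.jar YourBenchmarkMain
```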





[jira] [Commented] (HBASE-13470) High level Integration test for master DDL operations

2015-04-21 Thread Sophia Feng (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13470?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505418#comment-14505418
 ] 

Sophia Feng commented on HBASE-13470:
-

[~mbertozzi]
Thanks for pointing those out. Exceptions before the assertion definitely need to 
be addressed. I was fetching a new column descriptor, but it was still bound to 
the old table descriptor; a brand-new fetch is needed. A loop will do the actions 
too, and worker threads will add some concurrency to the DDLs, I suppose. 

I'm making the changes and will put a newer patch on the review board. Regards.

 High level Integration test for master DDL operations
 -

 Key: HBASE-13470
 URL: https://issues.apache.org/jira/browse/HBASE-13470
 Project: HBase
  Issue Type: Sub-task
  Components: master
Reporter: Enis Soztutar
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-13470-v0.patch


 Our [~fengs] has an integration test which executes DDL operations with a new 
 monkey to kill the active master as a high level test for the proc v2 
 changes. 
 The test does random DDL operations from 20 client threads. The DDL 
 statements are create / delete / modify / enable / disable table and CF 
 operations. It runs HBCK to verify the end state. 
 The test can be run on a single master, or multi master setup. 





[jira] [Updated] (HBASE-11339) HBase MOB

2015-04-21 Thread Jonathan Hsieh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-11339?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Hsieh updated HBASE-11339:
---
Attachment: hbase-11339.150417.patch

Attached hbase-11339.150417.patch.  I have been running it for a few days and, 
outside of likely unrelated flaky tests, I've been encountering an 
occasional failure of TestAcidGuarantees.testMobScanAtomicity about 1 out of 10 
times.  

I would like to merge master into hbase-11339 and hunt down the atomicity 
violation before calling for a merge to master.

For reviewing the merge, it will be easier to look at this merge into 
hbase-11339 -- the majority of changes are in the last set of patches found 
here: https://github.com/jmhsieh/hbase/commits/hbase-11339

 HBase MOB
 -

 Key: HBASE-11339
 URL: https://issues.apache.org/jira/browse/HBASE-11339
 Project: HBase
  Issue Type: Umbrella
  Components: regionserver, Scanners
Affects Versions: 2.0.0
Reporter: Jingcheng Du
Assignee: Jingcheng Du
 Fix For: hbase-11339

 Attachments: HBase MOB Design-v2.pdf, HBase MOB Design-v3.pdf, HBase 
 MOB Design-v4.pdf, HBase MOB Design-v5.pdf, HBase MOB Design.pdf, MOB user 
 guide.docx, MOB user guide_v2.docx, MOB user guide_v3.docx, MOB user 
 guide_v4.docx, MOB user guide_v5.docx, hbase-11339-in-dev.patch, 
 hbase-11339.150417.patch, merge-150212.patch, merge.150212b.patch, 
 merge.150212c.patch


   It's quite useful to save medium binary data like images and documents 
 into Apache HBase. Unfortunately, directly saving a binary MOB (medium 
 object) to HBase leads to worse performance because of the frequent splits 
 and compactions.
   In this design, the MOB data are stored in a more efficient way, which 
 keeps high write/read performance and guarantees data consistency in 
 Apache HBase.





[jira] [Updated] (HBASE-13471) Deadlock closing a region

2015-04-21 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13471?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-13471:
--
Status: Patch Available  (was: Open)

 Deadlock closing a region
 -

 Key: HBASE-13471
 URL: https://issues.apache.org/jira/browse/HBASE-13471
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0
Reporter: Elliott Clark
Assignee: Rajesh Nishtala
 Attachments: HBASE-13471.patch


 {code}
 Thread 4139 
 (regionserver/hbase412.example.com/10.158.6.53:60020-splits-1429003183537):
   State: WAITING
   Blocked count: 131
   Waited count: 228
   Waiting on 
 java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@50714dc3
   Stack:
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
 
 java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:943)
 org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1371)
 org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1325)
 
 org.apache.hadoop.hbase.regionserver.SplitTransactionImpl.stepsBeforePONR(SplitTransactionImpl.java:352)
 
 org.apache.hadoop.hbase.regionserver.SplitTransactionImpl.createDaughters(SplitTransactionImpl.java:252)
 
 org.apache.hadoop.hbase.regionserver.SplitTransactionImpl.execute(SplitTransactionImpl.java:509)
 
 org.apache.hadoop.hbase.regionserver.SplitRequest.run(SplitRequest.java:84)
 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
 {code}





[jira] [Commented] (HBASE-13471) Deadlock closing a region

2015-04-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13471?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14504768#comment-14504768
 ] 

Hadoop QA commented on HBASE-13471:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12726723/HBASE-13471.patch
  against master branch at commit eb82b8b3098d6a9ac62aa50189f9d4b289f38472.
  ATTACHMENT ID: 12726723

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13754//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13754//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13754//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13754//console

This message is automatically generated.

 Deadlock closing a region
 -

 Key: HBASE-13471
 URL: https://issues.apache.org/jira/browse/HBASE-13471
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0
Reporter: Elliott Clark
Assignee: Rajesh Nishtala
 Attachments: HBASE-13471.patch


 {code}
 Thread 4139 
 (regionserver/hbase412.example.com/10.158.6.53:60020-splits-1429003183537):
   State: WAITING
   Blocked count: 131
   Waited count: 228
   Waiting on 
 java.util.concurrent.locks.ReentrantReadWriteLock$NonfairSync@50714dc3
   Stack:
 sun.misc.Unsafe.park(Native Method)
 java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
 
 java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
 
 java.util.concurrent.locks.ReentrantReadWriteLock$WriteLock.lock(ReentrantReadWriteLock.java:943)
 org.apache.hadoop.hbase.regionserver.HRegion.doClose(HRegion.java:1371)
 org.apache.hadoop.hbase.regionserver.HRegion.close(HRegion.java:1325)
 
 org.apache.hadoop.hbase.regionserver.SplitTransactionImpl.stepsBeforePONR(SplitTransactionImpl.java:352)
 
 org.apache.hadoop.hbase.regionserver.SplitTransactionImpl.createDaughters(SplitTransactionImpl.java:252)
 
 org.apache.hadoop.hbase.regionserver.SplitTransactionImpl.execute(SplitTransactionImpl.java:509)
 
 org.apache.hadoop.hbase.regionserver.SplitRequest.run(SplitRequest.java:84)
 
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
 
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
 java.lang.Thread.run(Thread.java:745)
 {code}





[jira] [Commented] (HBASE-13517) Publish a client artifact with shaded dependencies

2015-04-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14504553#comment-14504553
 ] 

Hadoop QA commented on HBASE-13517:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12726792/HBASE-13517.patch
  against master branch at commit eb82b8b3098d6a9ac62aa50189f9d4b289f38472.
  ATTACHMENT ID: 12726792

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified tests.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:red}-1 javac{color}.  The applied patch generated 66 javac compiler 
warnings (more than the master's current 5 warnings).

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:red}-1 release audit{color}.  The applied patch generated 1 release 
audit warnings (more than the master's current 0 warnings).

{color:red}-1 lineLengths{color}.  The patch introduces the following lines 
longer than 100:
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd"
+ xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd"
+<project xmlns="http://maven.apache.org/POM/4.0.0" 
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" 
xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 
http://maven.apache.org/xsd/maven-4.0.0.xsd">
+<!-- Hadoop and HBase depend on an old 
Guava, don't expose it to dependents -->
+
<shadedPattern>org.apache.hadoop.hbase.io.netty</shadedPattern>
+
<shadedPattern>org.apache.hadoop.hbase.org.jboss.netty</shadedPattern>
+
<shadedPattern>org.apache.hadoop.hbase.org.mortbay</shadedPattern>
+
<shadedPattern>org.apache.hadoop.hbase.org.codehaus.jackson</shadedPattern>
+
<shadedPattern>org.apache.hadoop.hbase.org.apache.avro</shadedPattern>
+
<shadedPattern>org.apache.hadoop.hbase.com.sun.jersey</shadedPattern>

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

 {color:red}-1 core tests{color}.  The patch failed these unit tests:
   org.apache.hadoop.hbase.util.TestProcessBasedCluster
  org.apache.hadoop.hbase.mapreduce.TestImportExport

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13751//testReport/
Release audit warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13751//artifact/patchprocess/patchReleaseAuditWarnings.txt
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13751//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13751//artifact/patchprocess/checkstyle-aggregate.html

  Javadoc warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13751//artifact/patchprocess/patchJavadocWarnings.txt
Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13751//console

This message is automatically generated.

 Publish a client artifact with shaded dependencies
 --

 Key: HBASE-13517
 URL: https://issues.apache.org/jira/browse/HBASE-13517
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-13517.patch


 Guava's moved on. Hadoop has not.
 Jackson moves whenever it feels like it.
 Protobuf makes breaking changes in point releases.
 Shading all of the time would break people who require the transitive 
 dependencies for MR or other things, so let's provide an artifact with our 
 dependencies shaded. Then users can choose between the shaded and the 
 non-shaded version.
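
 The shading approach amounts to a maven-shade-plugin relocation. The snippet 
 below is a hedged sketch only — the plugin coordinates and the com.google.common 
 relocation are illustrative assumptions, not the actual pom from the patch 
 (though the shadedPattern values quoted in the QA output suggest netty, jackson, 
 avro and jersey are relocated the same way):

 {noformat}
 <plugin>
   <groupId>org.apache.maven.plugins</groupId>
   <artifactId>maven-shade-plugin</artifactId>
   <configuration>
     <relocations>
       <relocation>
         <!-- Rewrite Guava packages so they cannot clash with a user's Guava -->
         <pattern>com.google.common</pattern>
         <shadedPattern>org.apache.hadoop.hbase.com.google.common</shadedPattern>
       </relocation>
     </relocations>
   </configuration>
 </plugin>
 {noformat}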



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-13522) [Thrift] DemoClient.php is out of date for Thrift 0.9.x and current HBase

2015-04-21 Thread Lars George (JIRA)
Lars George created HBASE-13522:
---

 Summary: [Thrift] DemoClient.php is out of date for Thrift 0.9.x 
and current HBase
 Key: HBASE-13522
 URL: https://issues.apache.org/jira/browse/HBASE-13522
 Project: HBase
  Issue Type: Bug
  Components: Thrift
Affects Versions: 0.98.12, 1.0.0
Reporter: Lars George


Bucket list:

* Assumes all Thrift files are under $THRIFT_SRC_HOME/lib/php/src, but they are now 
split between “src” and “lib”.
* Apparently casing is no issue, as DemoClient.php refers to lowercase 
directories yet finds them in camel-cased directories.
* Tries a row with an empty row key without wrapping the call in try/catch.
* Assumes non-UTF-8 bytes are not allowed in row keys; since they are in fact 
valid, the page fails to load.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13387) Add ByteBufferedCell an extension to Cell

2015-04-21 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-13387:
---
Summary: Add ByteBufferedCell an extension to Cell  (was: Add ServerCell an 
extension to Cell)

 Add ByteBufferedCell an extension to Cell
 -

 Key: HBASE-13387
 URL: https://issues.apache.org/jira/browse/HBASE-13387
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: WIP_HBASE-13387_V2.patch, WIP_ServerCell.patch


 This came up during the discussion on the parent Jira, and Stack recently added 
 it as a comment on the E2E patch there.
 The idea is to add a new interface 'ServerCell' in which we can add new 
 buffer based getter APIs, a hasArray API, etc.  We will keep this interface 
 @InterfaceAudience.Private.
 We also have to change the timestamp and seqId on Cells on the server side. We 
 have added the new interfaces SettableSequenceId and SettableTimestamp for this. If 
 we add a ServerCell, we can add the setter APIs there.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13387) Add ByteBufferedCell an extension to Cell

2015-04-21 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-13387:
---
Description: 
This came up during the discussion on the parent Jira, and Stack recently added it as 
a comment on the E2E patch there.
The idea is to add a new interface 'ByteBufferedCell'  in which we can add new 
buffer based getter APIs and getters for the position of each component in the BB.  We 
will keep this interface @InterfaceAudience.Private.   When the Cell is backed by a 
DBB, we can create an object implementing this new interface.

The comparators have to be aware of this new Cell extension and have to use the 
BB based APIs rather than getXXXArray().  Also provide util APIs in CellUtil to 
abstract the checks for the new Cell type.  (Like matchingXXX APIs, 
getValueAstype APIs etc)

  was:
This came up during the discussion on the parent Jira, and Stack recently added it as 
a comment on the E2E patch there.
The idea is to add a new interface 'ServerCell'  in which we can add new buffer 
based getter APIs, a hasArray API, etc.  We will keep this interface 
@InterfaceAudience.Private.

We also have to change the timestamp and seqId on Cells on the server side. We have 
added the new interfaces SettableSequenceId and SettableTimestamp for this. If we 
add a ServerCell, we can add the setter APIs there.


 Add ByteBufferedCell an extension to Cell
 -

 Key: HBASE-13387
 URL: https://issues.apache.org/jira/browse/HBASE-13387
 Project: HBase
  Issue Type: Sub-task
  Components: regionserver, Scanners
Reporter: Anoop Sam John
Assignee: Anoop Sam John
 Attachments: WIP_HBASE-13387_V2.patch, WIP_ServerCell.patch


 This came up during the discussion on the parent Jira, and Stack recently added 
 it as a comment on the E2E patch there.
 The idea is to add a new interface 'ByteBufferedCell'  in which we can add 
 new buffer based getter APIs and getters for the position of each component in the BB.  
 We will keep this interface @InterfaceAudience.Private.   When the Cell is 
 backed by a DBB, we can create an object implementing this new interface.
 The comparators have to be aware of this new Cell extension and have to use 
 the BB based APIs rather than getXXXArray().  Also provide util APIs in CellUtil 
 to abstract the checks for the new Cell type.  (Like matchingXXX APIs, 
 getValueAstype APIs etc)
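
 A minimal sketch of what such a buffer-backed cell extension could look like. The 
 interface and method names below are assumptions for illustration only, not the 
 actual HBASE-13387 patch:

```java
import java.nio.ByteBuffer;

// Hypothetical buffer-based getters: expose the backing ByteBuffer plus the
// position/length of a component, instead of copying the component into a byte[].
interface RowBufferCell {
    ByteBuffer getRowByteBuffer();
    int getRowPosition();
    int getRowLength();
}

// A cell backed by a (possibly direct/off-heap) ByteBuffer.
class BufferBackedCell implements RowBufferCell {
    private final ByteBuffer buf;
    private final int rowPos;
    private final int rowLen;

    BufferBackedCell(ByteBuffer buf, int rowPos, int rowLen) {
        this.buf = buf;
        this.rowPos = rowPos;
        this.rowLen = rowLen;
    }

    public ByteBuffer getRowByteBuffer() { return buf; }
    public int getRowPosition() { return rowPos; }
    public int getRowLength() { return rowLen; }
}
```

 A comparator aware of this extension can then read bytes with absolute gets at 
 getRowPosition() + i, never materializing a byte[], which is the point of the BB 
 based APIs over getXXXArray().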



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13517) Publish a client artifact with shaded dependencies

2015-04-21 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-13517:
--
Attachment: HBASE-13517-v1.patch

Patch to make assembly:single happy.

 Publish a client artifact with shaded dependencies
 --

 Key: HBASE-13517
 URL: https://issues.apache.org/jira/browse/HBASE-13517
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-13517-v1.patch, HBASE-13517.patch


 Guava's moved on. Hadoop has not.
 Jackson moves whenever it feels like it.
 Protobuf makes breaking changes in point releases.
 Shading all of the time would break people who require the transitive 
 dependencies for MR or other things, so let's provide an artifact with our 
 dependencies shaded. Then users can choose between the shaded and the 
 non-shaded version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13517) Publish a client artifact with shaded dependencies

2015-04-21 Thread Elliott Clark (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Elliott Clark updated HBASE-13517:
--
Attachment: HBASE-13517-v2.patch

Patch without the extra unneeded file.

 Publish a client artifact with shaded dependencies
 --

 Key: HBASE-13517
 URL: https://issues.apache.org/jira/browse/HBASE-13517
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-13517-v1.patch, HBASE-13517-v2.patch, 
 HBASE-13517.patch


 Guava's moved on. Hadoop has not.
 Jackson moves whenever it feels like it.
 Protobuf makes breaking changes in point releases.
 Shading all of the time would break people who require the transitive 
 dependencies for MR or other things, so let's provide an artifact with our 
 dependencies shaded. Then users can choose between the shaded and the 
 non-shaded version.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13437) ThriftServer leaks ZooKeeper connections

2015-04-21 Thread Winger Pun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13437?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14504584#comment-14504584
 ] 

Winger Pun commented on HBASE-13437:


Seems like there is no related test; I will give it a try.

 ThriftServer leaks ZooKeeper connections
 

 Key: HBASE-13437
 URL: https://issues.apache.org/jira/browse/HBASE-13437
 Project: HBase
  Issue Type: Bug
  Components: Thrift
Affects Versions: 0.98.8
Reporter: Winger Pun
 Attachments: hbase-13437-fix.patch


 HBase ThriftServer caches ZooKeeper connections in memory using 
 org.apache.hadoop.hbase.util.ConnectionCache. This class has a chore 
 mechanism to clean up connections that have been idle for too long (default is 
 10 min). But the method timedOut, which tests whether a connection has been 
 idle longer than maxIdleTime, always returns false, so the ZooKeeper connection 
 is never released. If we send a request to the ThriftServer every maxIdleTime, 
 the ThriftServer will soon hold thousands of ZooKeeper connections.
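
 A hedged sketch of the idle check the report implies should exist — field and 
 method names loosely mirror ConnectionCache but are assumptions, not the actual 
 class:

```java
// Tracks the last access time of a cached connection and reports whether it
// has been idle longer than maxIdleTime. (The reported bug is an idle test
// that always returns false, so connections are never released.)
class ConnectionInfo {
    private final long maxIdleTimeMs;
    private volatile long lastAccessTimeMs;

    ConnectionInfo(long maxIdleTimeMs, long nowMs) {
        this.maxIdleTimeMs = maxIdleTimeMs;
        this.lastAccessTimeMs = nowMs;
    }

    // Called on every request that touches this connection.
    void markAccess(long nowMs) { lastAccessTimeMs = nowMs; }

    // Correct idle test: time elapsed since last access exceeds the limit.
    boolean timedOut(long nowMs) {
        return maxIdleTimeMs > 0 && (nowMs - lastAccessTimeMs) > maxIdleTimeMs;
    }
}
```

 With a check like this, the cleanup chore can close any connection whose 
 timedOut(now) is true instead of keeping every connection alive forever.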



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13502) Deprecate/remove getRowComparator() in TableName

2015-04-21 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-13502:
---
Attachment: HBASE-13502_2.patch

 Deprecate/remove getRowComparator() in TableName
 

 Key: HBASE-13502
 URL: https://issues.apache.org/jira/browse/HBASE-13502
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-13502.patch, HBASE-13502_1.patch, 
 HBASE-13502_2.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13502) Deprecate/remove getRowComparator() in TableName

2015-04-21 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-13502:
---
Status: Open  (was: Patch Available)

 Deprecate/remove getRowComparator() in TableName
 

 Key: HBASE-13502
 URL: https://issues.apache.org/jira/browse/HBASE-13502
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-13502.patch, HBASE-13502_1.patch, 
 HBASE-13502_2.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13502) Deprecate/remove getRowComparator() in TableName

2015-04-21 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505080#comment-14505080
 ] 

ramkrishna.s.vasudevan commented on HBASE-13502:


bq.Just say deprecated without any replacement
The checkstyle comment says to add a message to the @Deprecated tag as well.
bq.equals(this) - U compare against this? 
Silly mistake.
bq.In MetaCache still we use TableName#getRowComparator()?
Changed.  I think the patch that I uploaded was the wrong one.
bq.Unused?
We are using {{return KeyValue.META_COMPARATOR;}} and {{return KeyValue.COMPARATOR;}}, 
so they are not unused.
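
On the @Deprecated-message point, a minimal sketch of what checkstyle asks for — 
the class name, signature, and wording below are illustrative, not the actual 
TableName code:

```java
// Deprecation with an explanatory javadoc message, as checkstyle requires,
// even when there is no direct replacement to point at.
class TableNameSketch {
    /**
     * @deprecated No direct replacement; table-aware row comparison is being
     *             removed in favor of cell comparators.
     */
    @Deprecated
    Object getRowComparator() {
        throw new UnsupportedOperationException("deprecated, no replacement");
    }
}
```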

 Deprecate/remove getRowComparator() in TableName
 

 Key: HBASE-13502
 URL: https://issues.apache.org/jira/browse/HBASE-13502
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-13502.patch, HBASE-13502_1.patch, 
 HBASE-13502_2.patch, HBASE-13502_2.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13520) NullPointerException in TagRewriteCell

2015-04-21 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505087#comment-14505087
 ] 

Josh Elser commented on HBASE-13520:


Thanks for pushing this [~anoop.hbase]!

 NullPointerException in TagRewriteCell
 --

 Key: HBASE-13520
 URL: https://issues.apache.org/jira/browse/HBASE-13520
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: 2.0.0, 1.1.0, 1.0.2, 1.2.0

 Attachments: HBASE-13520-v1.patch, HBASE-13520.patch


 Found via running {{IntegrationTestIngestWithVisibilityLabels}} with Kerberos 
 enabled.
 {noformat}
 2015-04-20 18:54:36,712 ERROR 
 [B.defaultRpcServer.handler=17,queue=2,port=16020] ipc.RpcServer: Unexpected 
 throwable object
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.TagRewriteCell.getTagsLength(TagRewriteCell.java:157)
 at 
 org.apache.hadoop.hbase.TagRewriteCell.heapSize(TagRewriteCell.java:186)
 at 
 org.apache.hadoop.hbase.CellUtil.estimatedHeapSizeOf(CellUtil.java:568)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.heapSizeChange(DefaultMemStore.java:1024)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.internalAdd(DefaultMemStore.java:259)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.upsert(DefaultMemStore.java:567)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.upsert(DefaultMemStore.java:541)
 at 
 org.apache.hadoop.hbase.regionserver.HStore.upsert(HStore.java:2154)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:7127)
 at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:504)
 at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2020)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31967)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2106)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
 at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$2.run(RpcExecutor.java:107)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 HBASE-11870 tried to be tricky when only the tags of a {{Cell}} need to be 
 altered in the write-pipeline by creating a {{TagRewriteCell}} which avoided 
 copying all components of the original {{Cell}}. In an attempt to help free 
 the tags on the old cell that we wouldn't be referencing anymore, 
 {{TagRewriteCell}} nulls out the original {{byte[] tags}}.
 This causes a problem in the implementation of {{heapSize()}}, as it calls 
 {{getTagsLength()}} on the original {{Cell}} instead of on {{this}}. 
 Because the tags on the passed-in {{Cell}} (which was also a 
 {{TagRewriteCell}}) were null'ed out in the constructor, this results in an 
 NPE because the byte array is null.
 I believe this isn't observed in normal, insecure deployments because there 
 is only one RegionObserver/Coprocessor loaded that gets invoked via 
 {{postMutationBeforeWAL}}. When there is only one RegionObserver, the 
 TagRewriteCell isn't passed another TagRewriteCell, but instead a cell from 
 the wire/protobuf. This means that the optimization isn't performed. When we 
 have two (or more) observers that a TagRewriteCell passes through (and a new 
 TagRewriteCell is created and the old TagRewriteCell's tags array is nulled), 
 this enables the described-above NPE.
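
 The failure mode can be sketched with a toy wrapper — class and method names here 
 are illustrative, not the actual TagRewriteCell source. Once a wrapper nulls the 
 wrapped cell's tags, any accounting that delegates to the wrapped cell NPEs, 
 while using the wrapper's own tags is safe:

```java
// Toy model of the wrap-and-null pattern described above.
class BaseCell {
    byte[] tags = new byte[] {1, 2, 3};
    int getTagsLength() { return tags.length; } // NPEs once tags is nulled
}

class RewriteCell {
    private final BaseCell cell;  // the wrapped cell
    private final byte[] tags;    // the rewritten tags

    RewriteCell(BaseCell cell, byte[] newTags) {
        this.cell = cell;
        this.tags = newTags;
        cell.tags = null; // "free" the old tags -- the trap
    }

    // Buggy: asks the wrapped cell, whose tags were just nulled.
    int tagsLengthBuggy() { return cell.getTagsLength(); }

    // Fixed: use this cell's own tags.
    int tagsLengthFixed() { return tags == null ? 0 : tags.length; }
}
```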



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-13502) Deprecate/remove getRowComparator() in TableName

2015-04-21 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-13502:
---
Attachment: HBASE-13502_2.patch

 Deprecate/remove getRowComparator() in TableName
 

 Key: HBASE-13502
 URL: https://issues.apache.org/jira/browse/HBASE-13502
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-13502.patch, HBASE-13502_1.patch, 
 HBASE-13502_2.patch, HBASE-13502_2.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13501) Deprecate/Remove getComparator() in HRegionInfo.

2015-04-21 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505082#comment-14505082
 ] 

ramkrishna.s.vasudevan commented on HBASE-13501:


Maybe only after HBASE-10800?  I am fine with either.

 Deprecate/Remove getComparator() in HRegionInfo.
 

 Key: HBASE-13501
 URL: https://issues.apache.org/jira/browse/HBASE-13501
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-13501.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13520) NullPointerException in TagRewriteCell

2015-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14504979#comment-14504979
 ] 

Hudson commented on HBASE-13520:


FAILURE: Integrated in HBase-1.0 #869 (See 
[https://builds.apache.org/job/HBase-1.0/869/])
HBASE-13520 NullPointerException in TagRewriteCell.(Josh Elser) (anoopsamjohn: 
rev 856329a34e39ded03a55b3a41d27e62bdfe7162b)
* hbase-server/src/main/java/org/apache/hadoop/hbase/TagRewriteCell.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/TestTagRewriteCell.java


 NullPointerException in TagRewriteCell
 --

 Key: HBASE-13520
 URL: https://issues.apache.org/jira/browse/HBASE-13520
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: 2.0.0, 1.1.0, 1.0.2, 1.2.0

 Attachments: HBASE-13520-v1.patch, HBASE-13520.patch


 Found via running {{IntegrationTestIngestWithVisibilityLabels}} with Kerberos 
 enabled.
 {noformat}
 2015-04-20 18:54:36,712 ERROR 
 [B.defaultRpcServer.handler=17,queue=2,port=16020] ipc.RpcServer: Unexpected 
 throwable object
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.TagRewriteCell.getTagsLength(TagRewriteCell.java:157)
 at 
 org.apache.hadoop.hbase.TagRewriteCell.heapSize(TagRewriteCell.java:186)
 at 
 org.apache.hadoop.hbase.CellUtil.estimatedHeapSizeOf(CellUtil.java:568)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.heapSizeChange(DefaultMemStore.java:1024)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.internalAdd(DefaultMemStore.java:259)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.upsert(DefaultMemStore.java:567)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.upsert(DefaultMemStore.java:541)
 at 
 org.apache.hadoop.hbase.regionserver.HStore.upsert(HStore.java:2154)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:7127)
 at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:504)
 at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2020)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31967)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2106)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
 at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$2.run(RpcExecutor.java:107)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 HBASE-11870 tried to be tricky when only the tags of a {{Cell}} need to be 
 altered in the write-pipeline by creating a {{TagRewriteCell}} which avoided 
 copying all components of the original {{Cell}}. In an attempt to help free 
 the tags on the old cell that we wouldn't be referencing anymore, 
 {{TagRewriteCell}} nulls out the original {{byte[] tags}}.
 This causes a problem in the implementation of {{heapSize()}}, as it calls 
 {{getTagsLength()}} on the original {{Cell}} instead of on {{this}}. 
 Because the tags on the passed-in {{Cell}} (which was also a 
 {{TagRewriteCell}}) were null'ed out in the constructor, this results in an 
 NPE because the byte array is null.
 I believe this isn't observed in normal, insecure deployments because there 
 is only one RegionObserver/Coprocessor loaded that gets invoked via 
 {{postMutationBeforeWAL}}. When there is only one RegionObserver, the 
 TagRewriteCell isn't passed another TagRewriteCell, but instead a cell from 
 the wire/protobuf. This means that the optimization isn't performed. When we 
 have two (or more) observers that a TagRewriteCell passes through (and a new 
 TagRewriteCell is created and the old TagRewriteCell's tags array is nulled), 
 this enables the described-above NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13520) NullPointerException in TagRewriteCell

2015-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505022#comment-14505022
 ] 

Hudson commented on HBASE-13520:


FAILURE: Integrated in HBase-1.2 #10 (See 
[https://builds.apache.org/job/HBase-1.2/10/])
HBASE-13520 NullPointerException in TagRewriteCell.(Josh Elser) (anoopsamjohn: 
rev 8e6353ccd115d6378e4ebd5fae782cf776144546)
* hbase-server/src/test/java/org/apache/hadoop/hbase/TestTagRewriteCell.java
* hbase-server/src/main/java/org/apache/hadoop/hbase/TagRewriteCell.java


 NullPointerException in TagRewriteCell
 --

 Key: HBASE-13520
 URL: https://issues.apache.org/jira/browse/HBASE-13520
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: 2.0.0, 1.1.0, 1.0.2, 1.2.0

 Attachments: HBASE-13520-v1.patch, HBASE-13520.patch


 Found via running {{IntegrationTestIngestWithVisibilityLabels}} with Kerberos 
 enabled.
 {noformat}
 2015-04-20 18:54:36,712 ERROR 
 [B.defaultRpcServer.handler=17,queue=2,port=16020] ipc.RpcServer: Unexpected 
 throwable object
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.TagRewriteCell.getTagsLength(TagRewriteCell.java:157)
 at 
 org.apache.hadoop.hbase.TagRewriteCell.heapSize(TagRewriteCell.java:186)
 at 
 org.apache.hadoop.hbase.CellUtil.estimatedHeapSizeOf(CellUtil.java:568)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.heapSizeChange(DefaultMemStore.java:1024)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.internalAdd(DefaultMemStore.java:259)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.upsert(DefaultMemStore.java:567)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.upsert(DefaultMemStore.java:541)
 at 
 org.apache.hadoop.hbase.regionserver.HStore.upsert(HStore.java:2154)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:7127)
 at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:504)
 at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2020)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31967)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2106)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
 at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$2.run(RpcExecutor.java:107)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 HBASE-11870 tried to be tricky when only the tags of a {{Cell}} need to be 
 altered in the write-pipeline by creating a {{TagRewriteCell}} which avoided 
 copying all components of the original {{Cell}}. In an attempt to help free 
 the tags on the old cell that we wouldn't be referencing anymore, 
 {{TagRewriteCell}} nulls out the original {{byte[] tags}}.
 This causes a problem in the implementation of {{heapSize()}}, as it calls 
 {{getTagsLength()}} on the original {{Cell}} instead of on {{this}}. 
 Because the tags on the passed-in {{Cell}} (which was also a 
 {{TagRewriteCell}}) were null'ed out in the constructor, this results in an 
 NPE because the byte array is null.
 I believe this isn't observed in normal, insecure deployments because there 
 is only one RegionObserver/Coprocessor loaded that gets invoked via 
 {{postMutationBeforeWAL}}. When there is only one RegionObserver, the 
 TagRewriteCell isn't passed another TagRewriteCell, but instead a cell from 
 the wire/protobuf. This means that the optimization isn't performed. When we 
 have two (or more) observers that a TagRewriteCell passes through (and a new 
 TagRewriteCell is created and the old TagRewriteCell's tags array is nulled), 
 this enables the described-above NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13502) Deprecate/remove getRowComparator() in TableName

2015-04-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14504950#comment-14504950
 ] 

Hadoop QA commented on HBASE-13502:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12726846/HBASE-13502_1.patch
  against master branch at commit eb82b8b3098d6a9ac62aa50189f9d4b289f38472.
  ATTACHMENT ID: 12726846

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13755//testReport/
Release Findbugs (version 2.0.3) warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13755//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13755//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13755//console

This message is automatically generated.

 Deprecate/remove getRowComparator() in TableName
 

 Key: HBASE-13502
 URL: https://issues.apache.org/jira/browse/HBASE-13502
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-13502.patch, HBASE-13502_1.patch






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13520) NullPointerException in TagRewriteCell

2015-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505031#comment-14505031
 ] 

Hudson commented on HBASE-13520:


FAILURE: Integrated in HBase-TRUNK #6395 (See 
[https://builds.apache.org/job/HBase-TRUNK/6395/])
HBASE-13520 NullPointerException in TagRewriteCell.(Josh Elser) (anoopsamjohn: 
rev 2ba4c4eb9fe568b962f9d71de829531f51c5375b)
* hbase-server/src/main/java/org/apache/hadoop/hbase/TagRewriteCell.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/TestTagRewriteCell.java


 NullPointerException in TagRewriteCell
 --

 Key: HBASE-13520
 URL: https://issues.apache.org/jira/browse/HBASE-13520
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: 2.0.0, 1.1.0, 1.0.2, 1.2.0

 Attachments: HBASE-13520-v1.patch, HBASE-13520.patch


 Found via running {{IntegrationTestIngestWithVisibilityLabels}} with Kerberos 
 enabled.
 {noformat}
 2015-04-20 18:54:36,712 ERROR 
 [B.defaultRpcServer.handler=17,queue=2,port=16020] ipc.RpcServer: Unexpected 
 throwable object
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.TagRewriteCell.getTagsLength(TagRewriteCell.java:157)
 at 
 org.apache.hadoop.hbase.TagRewriteCell.heapSize(TagRewriteCell.java:186)
 at 
 org.apache.hadoop.hbase.CellUtil.estimatedHeapSizeOf(CellUtil.java:568)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.heapSizeChange(DefaultMemStore.java:1024)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.internalAdd(DefaultMemStore.java:259)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.upsert(DefaultMemStore.java:567)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.upsert(DefaultMemStore.java:541)
 at 
 org.apache.hadoop.hbase.regionserver.HStore.upsert(HStore.java:2154)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:7127)
 at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:504)
 at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2020)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31967)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2106)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
 at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$2.run(RpcExecutor.java:107)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 HBASE-11870 tried to be tricky when only the tags of a {{Cell}} need to be 
 altered in the write-pipeline by creating a {{TagRewriteCell}} which avoided 
 copying all components of the original {{Cell}}. In an attempt to help free 
 the tags on the old cell that we wouldn't be referencing anymore, 
 {{TagRewriteCell}} nulls out the original {{byte[] tags}}.
 This causes a problem in the implementation of {{heapSize()}}, as it calls 
 {{getTagsLength()}} on the original {{Cell}} instead of on {{this}}. 
 Because the tags on the passed-in {{Cell}} (which was also a 
 {{TagRewriteCell}}) were null'ed out in the constructor, this results in an 
 NPE because the byte array is null.
 I believe this isn't observed in normal, insecure deployments because there 
 is only one RegionObserver/Coprocessor loaded that gets invoked via 
 {{postMutationBeforeWAL}}. When there is only one RegionObserver, the 
 TagRewriteCell isn't passed another TagRewriteCell, but instead a cell from 
 the wire/protobuf. This means that the optimization isn't performed. When we 
 have two (or more) observers that a TagRewriteCell passes through (and a new 
 TagRewriteCell is created and the old TagRewriteCell's tags array is nulled), 
 this enables the described-above NPE.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-13501) Deprecate/Remove getComparator() in HRegionInfo.

2015-04-21 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14504961#comment-14504961
 ] 

Hadoop QA commented on HBASE-13501:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12726847/HBASE-13501.patch
  against master branch at commit eb82b8b3098d6a9ac62aa50189f9d4b289f38472.
  ATTACHMENT ID: 12726847

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 hadoop versions{color}. The patch compiles with all 
supported hadoop versions (2.4.1 2.5.2 2.6.0)

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 protoc{color}.  The applied patch does not increase the 
total number of protoc compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 checkstyle{color}.  The applied patch does not increase the 
total number of checkstyle errors

{color:green}+1 findbugs{color}.  The patch does not introduce any  new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 lineLengths{color}.  The patch does not introduce lines 
longer than 100

  {color:green}+1 site{color}.  The mvn site goal succeeds with this patch.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

Test results: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13756//testReport/
Release Findbugs (version 2.0.3)warnings: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13756//artifact/patchprocess/newFindbugsWarnings.html
Checkstyle Errors: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13756//artifact/patchprocess/checkstyle-aggregate.html

  Console output: 
https://builds.apache.org/job/PreCommit-HBASE-Build/13756//console

This message is automatically generated.

 Deprecate/Remove getComparator() in HRegionInfo.
 

 Key: HBASE-13501
 URL: https://issues.apache.org/jira/browse/HBASE-13501
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-13501.patch








[jira] [Created] (HBASE-13523) API Documentation formatting is broken

2015-04-21 Thread Dylan Jones (JIRA)
Dylan Jones created HBASE-13523:
---

 Summary: API Documentation formatting is broken
 Key: HBASE-13523
 URL: https://issues.apache.org/jira/browse/HBASE-13523
 Project: HBase
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.0.0
Reporter: Dylan Jones
Priority: Minor


On this page:
https://hbase.apache.org/apidocs/org/apache/hadoop/hbase/HColumnDescriptor.html#HColumnDescriptor(org.apache.hadoop.hbase.HColumnDescriptor)

Scroll down and you get a big surprise :)





[jira] [Commented] (HBASE-13520) NullPointerException in TagRewriteCell

2015-04-21 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13520?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505007#comment-14505007
 ] 

Hudson commented on HBASE-13520:


FAILURE: Integrated in HBase-1.1 #415 (See 
[https://builds.apache.org/job/HBase-1.1/415/])
HBASE-13520 NullPointerException in TagRewriteCell.(Josh Elser) (anoopsamjohn: 
rev 5d07390e9306574397943600f5ff9fec60f79d03)
* hbase-server/src/main/java/org/apache/hadoop/hbase/TagRewriteCell.java
* hbase-server/src/test/java/org/apache/hadoop/hbase/TestTagRewriteCell.java


 NullPointerException in TagRewriteCell
 --

 Key: HBASE-13520
 URL: https://issues.apache.org/jira/browse/HBASE-13520
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: 2.0.0, 1.1.0, 1.0.2, 1.2.0

 Attachments: HBASE-13520-v1.patch, HBASE-13520.patch


 Found via running {{IntegrationTestIngestWithVisibilityLabels}} with Kerberos 
 enabled.
 {noformat}
 2015-04-20 18:54:36,712 ERROR 
 [B.defaultRpcServer.handler=17,queue=2,port=16020] ipc.RpcServer: Unexpected 
 throwable object
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.TagRewriteCell.getTagsLength(TagRewriteCell.java:157)
 at 
 org.apache.hadoop.hbase.TagRewriteCell.heapSize(TagRewriteCell.java:186)
 at 
 org.apache.hadoop.hbase.CellUtil.estimatedHeapSizeOf(CellUtil.java:568)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.heapSizeChange(DefaultMemStore.java:1024)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.internalAdd(DefaultMemStore.java:259)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.upsert(DefaultMemStore.java:567)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.upsert(DefaultMemStore.java:541)
 at 
 org.apache.hadoop.hbase.regionserver.HStore.upsert(HStore.java:2154)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:7127)
 at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:504)
 at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2020)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31967)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2106)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
 at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$2.run(RpcExecutor.java:107)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 HBASE-11870 tried to be tricky when only the tags of a {{Cell}} need to be 
 altered in the write-pipeline by creating a {{TagRewriteCell}} which avoided 
 copying all components of the original {{Cell}}. In an attempt to help free 
 the tags on the old cell that we wouldn't be referencing anymore, 
 {{TagRewriteCell}} nulls out the original {{byte[] tags}}.
 This causes a problem in that the implementation of {{heapSize()}} calls 
 {{getTagsLength()}} on the original {{Cell}} instead of on {{this}}. 
 Because the tags on the passed-in {{Cell}} (which was also a 
 {{TagRewriteCell}}) were null'ed out in the constructor, this results in an 
 NPE because the byte array is null.
 I believe this isn't observed in normal, insecure deployments because there 
 is only one RegionObserver/Coprocessor loaded that gets invoked via 
 {{postMutationBeforeWAL}}. When there is only one RegionObserver, the 
 TagRewriteCell isn't passed another TagRewriteCell, but instead a cell from 
 the wire/protobuf. This means that the optimization isn't performed. When we 
 have two (or more) observers that a TagRewriteCell passes through (and a new 
 TagRewriteCell is created and the old TagRewriteCell's tags array is nulled), 
 this triggers the NPE described above.
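The failure mode can be sketched with a minimal, self-contained stand-in for the classes involved (the names, fields, and methods below are illustrative, not the real {{Cell}}/{{TagRewriteCell}} API): a wrapper nulls out the tags of a wrapped TagRewriteCell, but the buggy {{heapSize()}} still reads the tag length from the wrapped cell.

```java
// Hypothetical stand-ins for the HBase classes; not the real API.
class SketchCell {
    byte[] tags;

    SketchCell(byte[] tags) { this.tags = tags; }

    // Throws NPE if this cell's tags were nulled out by an outer wrapper.
    int getTagsLength() { return tags.length; }
}

class SketchTagRewriteCell extends SketchCell {
    private final SketchCell delegate;

    SketchTagRewriteCell(SketchCell cell, byte[] newTags) {
        super(newTags);
        this.delegate = cell;
        if (cell instanceof SketchTagRewriteCell) {
            cell.tags = null; // "help free" the old tags, as HBASE-11870 did
        }
    }

    // Buggy: reads the delegate (whose tags may now be null) instead of `this`.
    int heapSizeBuggy() { return delegate.getTagsLength(); }

    // Fixed: use this cell's own tags.
    int heapSizeFixed() { return getTagsLength(); }
}

public class TagRewriteNpeDemo {
    public static void main(String[] args) {
        SketchCell wire = new SketchCell(new byte[] {1, 2});
        // One observer: delegate is a plain cell, so even the buggy path works.
        SketchTagRewriteCell first = new SketchTagRewriteCell(wire, new byte[] {3});
        // Two observers: the inner TagRewriteCell's tags get nulled.
        SketchTagRewriteCell second = new SketchTagRewriteCell(first, new byte[] {4});
        System.out.println(second.heapSizeFixed()); // 1
        try {
            second.heapSizeBuggy();
        } catch (NullPointerException e) {
            System.out.println("NPE");
        }
    }
}
```

This mirrors why a single RegionObserver hides the bug: only a chain of two wrappers leaves a nulled tags array on the path the buggy method reads.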





[jira] [Updated] (HBASE-13520) NullPointerException in TagRewriteCell

2015-04-21 Thread Anoop Sam John (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13520?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Anoop Sam John updated HBASE-13520:
---
   Resolution: Fixed
Fix Version/s: 1.2.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Pushed to master, branch-1, branch-1.1 and branch 1.0.
Thanks [~elserj]

 NullPointerException in TagRewriteCell
 --

 Key: HBASE-13520
 URL: https://issues.apache.org/jira/browse/HBASE-13520
 Project: HBase
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Josh Elser
Assignee: Josh Elser
 Fix For: 2.0.0, 1.1.0, 1.0.2, 1.2.0

 Attachments: HBASE-13520-v1.patch, HBASE-13520.patch


 Found via running {{IntegrationTestIngestWithVisibilityLabels}} with Kerberos 
 enabled.
 {noformat}
 2015-04-20 18:54:36,712 ERROR 
 [B.defaultRpcServer.handler=17,queue=2,port=16020] ipc.RpcServer: Unexpected 
 throwable object
 java.lang.NullPointerException
 at 
 org.apache.hadoop.hbase.TagRewriteCell.getTagsLength(TagRewriteCell.java:157)
 at 
 org.apache.hadoop.hbase.TagRewriteCell.heapSize(TagRewriteCell.java:186)
 at 
 org.apache.hadoop.hbase.CellUtil.estimatedHeapSizeOf(CellUtil.java:568)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.heapSizeChange(DefaultMemStore.java:1024)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.internalAdd(DefaultMemStore.java:259)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.upsert(DefaultMemStore.java:567)
 at 
 org.apache.hadoop.hbase.regionserver.DefaultMemStore.upsert(DefaultMemStore.java:541)
 at 
 org.apache.hadoop.hbase.regionserver.HStore.upsert(HStore.java:2154)
 at 
 org.apache.hadoop.hbase.regionserver.HRegion.increment(HRegion.java:7127)
 at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.increment(RSRpcServices.java:504)
 at 
 org.apache.hadoop.hbase.regionserver.RSRpcServices.mutate(RSRpcServices.java:2020)
 at 
 org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:31967)
 at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2106)
 at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:101)
 at 
 org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:130)
 at org.apache.hadoop.hbase.ipc.RpcExecutor$2.run(RpcExecutor.java:107)
 at java.lang.Thread.run(Thread.java:745)
 {noformat}
 HBASE-11870 tried to be tricky when only the tags of a {{Cell}} need to be 
 altered in the write-pipeline by creating a {{TagRewriteCell}} which avoided 
 copying all components of the original {{Cell}}. In an attempt to help free 
 the tags on the old cell that we wouldn't be referencing anymore, 
 {{TagRewriteCell}} nulls out the original {{byte[] tags}}.
 This causes a problem in that the implementation of {{heapSize()}} calls 
 {{getTagsLength()}} on the original {{Cell}} instead of on {{this}}. 
 Because the tags on the passed-in {{Cell}} (which was also a 
 {{TagRewriteCell}}) were null'ed out in the constructor, this results in an 
 NPE because the byte array is null.
 I believe this isn't observed in normal, insecure deployments because there 
 is only one RegionObserver/Coprocessor loaded that gets invoked via 
 {{postMutationBeforeWAL}}. When there is only one RegionObserver, the 
 TagRewriteCell isn't passed another TagRewriteCell, but instead a cell from 
 the wire/protobuf. This means that the optimization isn't performed. When we 
 have two (or more) observers that a TagRewriteCell passes through (and a new 
 TagRewriteCell is created and the old TagRewriteCell's tags array is nulled), 
 this triggers the NPE described above.





[jira] [Updated] (HBASE-13501) Deprecate/Remove getComparator() in HRegionInfo.

2015-04-21 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13501?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-13501:
---
Status: Patch Available  (was: Open)

 Deprecate/Remove getComparator() in HRegionInfo.
 

 Key: HBASE-13501
 URL: https://issues.apache.org/jira/browse/HBASE-13501
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-13501.patch








[jira] [Commented] (HBASE-13502) Deprecate/remove getRowComparator() in TableName

2015-04-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14504766#comment-14504766
 ] 

Anoop Sam John commented on HBASE-13502:


bq.This API will go away so that TableName will not have any comparator
Just say deprecated without any replacement

{code}
private KVComparator getRowComparator(TableName tableName) {
  if (TableName.META_TABLE_NAME.equals(this)) {
return KeyValue.META_COMPARATOR;
  }
  return KeyValue.COMPARATOR;
}
{code}
equals(this) - you compare against this? I think it's a copy-paste issue. You have 
to compare against tableName.
In MetaCache we still use TableName#getRowComparator()? It can be changed to use 
the new private API.
import org.apache.hadoop.hbase.KeyValue;
Unused?
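To make the suggested fix concrete, here is a self-contained, hypothetical sketch (stand-in types for {{KeyValue}}'s comparators and {{TableName}}, which are not reproduced here) of the method with the comparison target corrected:

```java
import java.util.Objects;

// Hypothetical stand-ins; the real code returns KeyValue.META_COMPARATOR /
// KeyValue.COMPARATOR and takes a TableName.
public class RowComparatorSketch {
    enum RowComparator { META_COMPARATOR, COMPARATOR }

    static final String META_TABLE_NAME = "hbase:meta";

    // Fixed version: the meta check tests the tableName argument, not `this`.
    static RowComparator getRowComparator(String tableName) {
        if (Objects.equals(META_TABLE_NAME, tableName)) {
            return RowComparator.META_COMPARATOR;
        }
        return RowComparator.COMPARATOR;
    }

    public static void main(String[] args) {
        System.out.println(getRowComparator("hbase:meta")); // META_COMPARATOR
        System.out.println(getRowComparator("t1"));         // COMPARATOR
    }
}
```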

 Deprecate/remove getRowComparator() in TableName
 

 Key: HBASE-13502
 URL: https://issues.apache.org/jira/browse/HBASE-13502
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-13502.patch, HBASE-13502_1.patch








[jira] [Commented] (HBASE-13501) Deprecate/Remove getComparator() in HRegionInfo.

2015-04-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13501?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14504774#comment-14504774
 ] 

Anoop Sam John commented on HBASE-13501:


You have to add the getCellComparator as part of this Jira?

 Deprecate/Remove getComparator() in HRegionInfo.
 

 Key: HBASE-13501
 URL: https://issues.apache.org/jira/browse/HBASE-13501
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-13501.patch








[jira] [Commented] (HBASE-13517) Publish a client artifact with shaded dependencies

2015-04-21 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505143#comment-14505143
 ] 

Sean Busbey commented on HBASE-13517:
-

{quote}
Can we avoid including things in the shaded jar that aren't in 
org.apache.hadoop.hbase or subpackages? That's probably the #1 source of pain 
with shaded artifacts.
{quote}

this should have been "aren't relocated in". I don't want e.g. someone who 
needs to use HBase and HDFS to suddenly have another jar with the hadoop 
packages visible just because they're trying to use our shaded artifact.

 Publish a client artifact with shaded dependencies
 --

 Key: HBASE-13517
 URL: https://issues.apache.org/jira/browse/HBASE-13517
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-13517-v1.patch, HBASE-13517-v2.patch, 
 HBASE-13517.patch


 Guava's moved on. Hadoop has not.
 Jackson moves whenever it feels like it.
 Protobuf moves with breaking point changes.
 Shading all of the time would break people that require the transitive 
 dependencies for MR or other things, so let's provide an artifact with our 
 dependencies shaded. Then users can have the choice to use the shaded version 
 or the non-shaded version.





[jira] [Commented] (HBASE-13517) Publish a client artifact with shaded dependencies

2015-04-21 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505140#comment-14505140
 ] 

Sean Busbey commented on HBASE-13517:
-

just curious, how big are the shaded jars?

{code}
+<artifactSet>
+  <includes>
+    <include>org.apache.hbase:*</include>
+    <include>org.apache.hadoop:*</include>
+    <include>org.apache.zookeeper*</include>
+    <include>com.google.protobuf:*</include>
+    <include>io.netty:*</include>
+    <include>org.jboss.netty:*</include>
+    <include>com.google.guava:*</include>
+    <include>org.mortbay.jetty:*</include>
+    <include>org.codehaus.jackson:*</include>
+    <include>org.apache.avro:*</include>
+    <include>com.sun.jersey:*</include>
+    <include>com.sun.jersey.contribs:*</include>
+    <include>tomcat:*</include>
+  </includes>
+</artifactSet>
{code}

Can we avoid including things in the shaded jar that aren't in 
org.apache.hadoop.hbase or subpackages? That's probably the #1 source of pain 
with shaded artifacts.

{code}
+<shadedPattern>org.apache.hadoop.hbase.com.google.common</shadedPattern>
{code}

nit: could we use something like {{org.apache.hadoop.hbase.shaded.}} as the 
prefix for all of these relocations so that there's a common package / 
directory for all of them?
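The nit above would amount to a relocation stanza along these lines (an illustrative maven-shade-plugin fragment, not the committed patch):

```xml
<relocations>
  <relocation>
    <pattern>com.google.common</pattern>
    <!-- common "shaded" package prefix for all relocated dependencies -->
    <shadedPattern>org.apache.hadoop.hbase.shaded.com.google.common</shadedPattern>
  </relocation>
</relocations>
```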

 Publish a client artifact with shaded dependencies
 --

 Key: HBASE-13517
 URL: https://issues.apache.org/jira/browse/HBASE-13517
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-13517-v1.patch, HBASE-13517-v2.patch, 
 HBASE-13517.patch


 Guava's moved on. Hadoop has not.
 Jackson moves whenever it feels like it.
 Protobuf moves with breaking point changes.
 Shading all of the time would break people that require the transitive 
 dependencies for MR or other things, so let's provide an artifact with our 
 dependencies shaded. Then users can have the choice to use the shaded version 
 or the non-shaded version.





[jira] [Commented] (HBASE-13516) Increase PermSize to 128MB

2015-04-21 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505168#comment-14505168
 ] 

Anoop Sam John commented on HBASE-13516:


It ignores them with the below message
{quote}
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; 
support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; 
support was removed in 8.0
{quote}
But no errors

 Increase PermSize to 128MB
 --

 Key: HBASE-13516
 URL: https://issues.apache.org/jira/browse/HBASE-13516
 Project: HBase
  Issue Type: Improvement
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0, 1.1.0

 Attachments: hbase-13516_v1.patch


 HBase uses ~40MB, and with Phoenix we use ~56MB of Perm space out of 64MB by 
 default. Every Filter and Coprocessor increases that.
 Running out of perm space triggers a stop-the-world full GC of the entire 
 heap. We have seen this in misconfigured clusters. 
 Should we default to  {{-XX:PermSize=128m -XX:MaxPermSize=128m}} out of the 
 box as a convenience for users? 





[jira] [Commented] (HBASE-13516) Increase PermSize to 128MB

2015-04-21 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505146#comment-14505146
 ] 

Sean Busbey commented on HBASE-13516:
-

Can someone do a quick check that jdk8 ignores these silently?

 Increase PermSize to 128MB
 --

 Key: HBASE-13516
 URL: https://issues.apache.org/jira/browse/HBASE-13516
 Project: HBase
  Issue Type: Improvement
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0, 1.1.0

 Attachments: hbase-13516_v1.patch


 HBase uses ~40MB, and with Phoenix we use ~56MB of Perm space out of 64MB by 
 default. Every Filter and Coprocessor increases that.
 Running out of perm space triggers a stop-the-world full GC of the entire 
 heap. We have seen this in misconfigured clusters. 
 Should we default to  {{-XX:PermSize=128m -XX:MaxPermSize=128m}} out of the 
 box as a convenience for users? 





[jira] [Commented] (HBASE-13517) Publish a client artifact with shaded dependencies

2015-04-21 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505153#comment-14505153
 ] 

Elliott Clark commented on HBASE-13517:
---

bq.I don't want e.g. someone who needs to use HBase and HDFS to suddenly have 
another jar with the hadoop packages visible just because they're trying to use 
our shaded artifact.

We can't relocate Hadoop because that would break Configuration and several 
public apis. However we need to include the classes because they use guava and 
need to be re-written to use the new location.

The rest of the dependencies should be relocated.

bq.nit: could we use something like org.apache.hadoop.hbase.shaded. as the 
prefix for all of these relocations so that there's a common package / 
directory for all of them?

Sure let me get that and have apache-rat clean.

 Publish a client artifact with shaded dependencies
 --

 Key: HBASE-13517
 URL: https://issues.apache.org/jira/browse/HBASE-13517
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-13517-v1.patch, HBASE-13517-v2.patch, 
 HBASE-13517.patch


 Guava's moved on. Hadoop has not.
 Jackson moves whenever it feels like it.
 Protobuf moves with breaking point changes.
 Shading all of the time would break people that require the transitive 
 dependencies for MR or other things, so let's provide an artifact with our 
 dependencies shaded. Then users can have the choice to use the shaded version 
 or the non-shaded version.





[jira] [Commented] (HBASE-13517) Publish a client artifact with shaded dependencies

2015-04-21 Thread Elliott Clark (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13517?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505174#comment-14505174
 ] 

Elliott Clark commented on HBASE-13517:
---

Here's a dependency:tree for a downstream project using hbase-shaded-client 
(no guava, no protobuf, no jackson, no jetty, no netty). 

{code}
[INFO] --- maven-dependency-plugin:2.8:tree (default-cli) @ 
test-hbase-client-shade ---
[INFO] test-hbase-client-shade:test-hbase-client-shade:jar:1.0-SNAPSHOT
[INFO] +- org.apache.hbase:hbase-shaded-client:jar:2.0.0-SNAPSHOT:compile
[INFO] |  +- commons-collections:commons-collections:jar:3.2.1:compile
[INFO] |  +- commons-codec:commons-codec:jar:1.9:compile
[INFO] |  +- commons-io:commons-io:jar:2.4:compile
[INFO] |  +- commons-lang:commons-lang:jar:2.6:compile
[INFO] |  +- commons-logging:commons-logging:jar:1.2:compile
[INFO] |  +- com.google.code.findbugs:jsr305:jar:1.3.9:compile
[INFO] |  +- org.slf4j:slf4j-api:jar:1.7.7:compile
[INFO] |  +- org.slf4j:slf4j-log4j12:jar:1.6.1:compile
[INFO] |  +- org.apache.htrace:htrace-core:jar:3.1.0-incubating:compile
[INFO] |  +- org.jruby.jcodings:jcodings:jar:1.0.8:compile
[INFO] |  +- org.jruby.joni:joni:jar:2.1.2:compile
[INFO] |  +- org.apache.httpcomponents:httpclient:jar:4.2.5:compile
[INFO] |  +- org.apache.httpcomponents:httpcore:jar:4.2.4:compile
[INFO] |  +- 
org.apache.directory.server:apacheds-kerberos-codec:jar:2.0.0-M15:compile
[INFO] |  +- org.apache.directory.server:apacheds-i18n:jar:2.0.0-M15:compile
[INFO] |  +- org.apache.directory.api:api-asn1-api:jar:1.0.0-M20:compile
[INFO] |  +- org.apache.directory.api:api-util:jar:1.0.0-M20:compile
[INFO] |  +- commons-cli:commons-cli:jar:1.2:compile
[INFO] |  +- org.apache.commons:commons-math3:jar:3.1.1:compile
[INFO] |  +- xmlenc:xmlenc:jar:0.52:compile
[INFO] |  +- commons-httpclient:commons-httpclient:jar:3.1:compile
[INFO] |  +- commons-net:commons-net:jar:3.1:compile
[INFO] |  +- commons-el:commons-el:jar:1.0:runtime
[INFO] |  +- commons-configuration:commons-configuration:jar:1.6:compile
[INFO] |  +- commons-digester:commons-digester:jar:1.8:compile
[INFO] |  +- commons-beanutils:commons-beanutils:jar:1.7.0:compile
[INFO] |  +- commons-beanutils:commons-beanutils-core:jar:1.8.0:compile
[INFO] |  +- com.thoughtworks.paranamer:paranamer:jar:2.3:compile
[INFO] |  +- org.xerial.snappy:snappy-java:jar:1.0.4.1:compile
[INFO] |  +- com.jcraft:jsch:jar:0.1.42:compile
[INFO] |  +- org.apache.commons:commons-compress:jar:1.4.1:compile
[INFO] |  +- org.tukaani:xz:jar:1.0:compile
[INFO] |  +- javax.xml.bind:jaxb-api:jar:2.2.2:compile
[INFO] |  +- javax.activation:activation:jar:1.1:compile
[INFO] |  +- 
com.github.stephenc.findbugs:findbugs-annotations:jar:1.3.9-1:compile
[INFO] |  \- log4j:log4j:jar:1.2.17:compile
[INFO] \- junit:junit:jar:4.11:test
[INFO]\- org.hamcrest:hamcrest-core:jar:1.3:test
{code}

Should we relocate the commons-* stuff too? I think we should leave the 
logging stuff exposed, as that stuff relies on loading classes by name; however, 
the rest can move if it's wanted.


Sizes as things stand:
{code}
-rw-r--r--  1 elliott  THEFACEBOOK\Domain Users19M Apr 21 08:56 
hbase-shaded-client-2.0.0-SNAPSHOT.jar
-rw-r--r--  1 elliott  THEFACEBOOK\Domain Users36M Apr 21 08:57 
hbase-shaded-server-2.0.0-SNAPSHOT.jar
{code}


 Publish a client artifact with shaded dependencies
 --

 Key: HBASE-13517
 URL: https://issues.apache.org/jira/browse/HBASE-13517
 Project: HBase
  Issue Type: Bug
Affects Versions: 2.0.0, 1.1.0
Reporter: Elliott Clark
Assignee: Elliott Clark
 Fix For: 2.0.0, 1.1.0

 Attachments: HBASE-13517-v1.patch, HBASE-13517-v2.patch, 
 HBASE-13517.patch


 Guava's moved on. Hadoop has not.
 Jackson moves whenever it feels like it.
 Protobuf moves with breaking point changes.
 Shading all of the time would break people that require the transitive 
 dependencies for MR or other things, so let's provide an artifact with our 
 dependencies shaded. Then users can have the choice to use the shaded version 
 or the non-shaded version.





[jira] [Commented] (HBASE-13516) Increase PermSize to 128MB

2015-04-21 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13516?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505188#comment-14505188
 ] 

stack commented on HBASE-13516:
---

bq. Only needed in jdk8.

Yeah, sorry... should have had a 'not' in there.

 Increase PermSize to 128MB
 --

 Key: HBASE-13516
 URL: https://issues.apache.org/jira/browse/HBASE-13516
 Project: HBase
  Issue Type: Improvement
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0, 1.1.0

 Attachments: hbase-13516_v1.patch


 HBase uses ~40MB, and with Phoenix we use ~56MB of Perm space out of 64MB by 
 default. Every Filter and Coprocessor increases that.
 Running out of perm space triggers a stop-the-world full GC of the entire 
 heap. We have seen this in misconfigured clusters. 
 Should we default to  {{-XX:PermSize=128m -XX:MaxPermSize=128m}} out of the 
 box as a convenience for users? 





[jira] [Commented] (HBASE-13502) Deprecate/remove getRowComparator() in TableName

2015-04-21 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13502?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14505647#comment-14505647
 ] 

Enis Soztutar commented on HBASE-13502:
---

LGTM. Hardcoded getRowComparator() in TableName does not make sense. I'd say 
put it in 1.0+. 

 Deprecate/remove getRowComparator() in TableName
 

 Key: HBASE-13502
 URL: https://issues.apache.org/jira/browse/HBASE-13502
 Project: HBase
  Issue Type: Sub-task
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
 Fix For: 2.0.0

 Attachments: HBASE-13502.patch, HBASE-13502_1.patch, 
 HBASE-13502_2.patch, HBASE-13502_2.patch, HBASE-13502_3.patch







