[jira] [Updated] (HBASE-16169) Make RegionSizeCalculator scalable

2016-11-15 Thread Thiruvel Thirumoolan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiruvel Thirumoolan updated HBASE-16169:
-
Status: Patch Available  (was: Open)

> Make RegionSizeCalculator scalable
> --
>
> Key: HBASE-16169
> URL: https://issues.apache.org/jira/browse/HBASE-16169
> Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce, scaling
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16169.master.000.patch, 
> HBASE-16169.master.001.patch, HBASE-16169.master.002.patch, 
> HBASE-16169.master.003.patch, HBASE-16169.master.004.patch, 
> HBASE-16169.master.005.patch, HBASE-16169.master.006.patch, 
> HBASE-16169.master.007.patch, HBASE-16169.master.007.patch, 
> HBASE-16169.master.008.patch
>
>
> RegionSizeCalculator is needed for better split generation of MR jobs. This 
> requires RegionLoad which can be obtained via ClusterStatus, i.e. accessing 
> Master. We don't want master to be in this path.
> The proposal is to add an API to the RegionServer that gets RegionLoad of all 
> regions hosted on it or those of a table if specified. RegionSizeCalculator 
> can use the latter.
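
For a concrete sense of what the consumer side could look like, here is a minimal
sketch that sums a table's store file sizes per RegionServer using a per-RS
RegionLoad call of the kind proposed above (the method name and signature are
assumptions based on the proposal, not necessarily the committed patch):
{code}
import java.io.IOException;
import java.util.Map;

import org.apache.hadoop.hbase.RegionLoad;
import org.apache.hadoop.hbase.ServerName;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;

public class RegionSizeFromRS {
  /** Sum the store file sizes of a table's regions hosted on one RegionServer. */
  public static long tableSizeOnServer(Admin admin, ServerName sn, TableName tn)
      throws IOException {
    long totalBytes = 0;
    // A per-RS call such as getRegionLoad(ServerName, TableName) avoids asking the
    // Master for the whole ClusterStatus; the exact signature may differ in the patch.
    for (Map.Entry<byte[], RegionLoad> e : admin.getRegionLoad(sn, tn).entrySet()) {
      totalBytes += (long) e.getValue().getStorefileSizeMB() * 1024L * 1024L;
    }
    return totalBytes;
  }
}
{code}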



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16169) Make RegionSizeCalculator scalable

2016-11-15 Thread Thiruvel Thirumoolan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiruvel Thirumoolan updated HBASE-16169:
-
Attachment: HBASE-16169.master.008.patch

> Make RegionSizeCalculator scalable
> --
>
> Key: HBASE-16169
> URL: https://issues.apache.org/jira/browse/HBASE-16169
> Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce, scaling
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16169.master.000.patch, 
> HBASE-16169.master.001.patch, HBASE-16169.master.002.patch, 
> HBASE-16169.master.003.patch, HBASE-16169.master.004.patch, 
> HBASE-16169.master.005.patch, HBASE-16169.master.006.patch, 
> HBASE-16169.master.007.patch, HBASE-16169.master.007.patch, 
> HBASE-16169.master.008.patch
>
>
> RegionSizeCalculator is needed for better split generation of MR jobs. This 
> requires RegionLoad which can be obtained via ClusterStatus, i.e. accessing 
> Master. We don't want master to be in this path.
> The proposal is to add an API to the RegionServer that gets RegionLoad of all 
> regions hosted on it or those of a table if specified. RegionSizeCalculator 
> can use the latter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16169) Make RegionSizeCalculator scalable

2016-11-15 Thread Thiruvel Thirumoolan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiruvel Thirumoolan updated HBASE-16169:
-
Status: Open  (was: Patch Available)

> Make RegionSizeCalculator scalable
> --
>
> Key: HBASE-16169
> URL: https://issues.apache.org/jira/browse/HBASE-16169
> Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce, scaling
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16169.master.000.patch, 
> HBASE-16169.master.001.patch, HBASE-16169.master.002.patch, 
> HBASE-16169.master.003.patch, HBASE-16169.master.004.patch, 
> HBASE-16169.master.005.patch, HBASE-16169.master.006.patch, 
> HBASE-16169.master.007.patch, HBASE-16169.master.007.patch
>
>
> RegionSizeCalculator is needed for better split generation of MR jobs. This 
> requires RegionLoad which can be obtained via ClusterStatus, i.e. accessing 
> Master. We don't want master to be in this path.
> The proposal is to add an API to the RegionServer that gets RegionLoad of all 
> regions hosted on it or those of a table if specified. RegionSizeCalculator 
> can use the latter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17076) implement getAndPut() and getAndDelete()

2016-11-15 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17076:
--
Attachment: HBASE-17076-v2.patch

HBASE-17082 is resolved. Re-running the QA.

> implement getAndPut() and getAndDelete()
> 
>
> Key: HBASE-17076
> URL: https://issues.apache.org/jira/browse/HBASE-17076
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17076-v0.patch, HBASE-17076-v1.patch, 
> HBASE-17076-v2.patch
>
>
> We implement getAndPut() and getAndDelete() via a coprocessor, but there is
> a lot of duplicated effort (e.g., data checks, row locking, returned values, and
> WAL handling). It would be cool if we provided the compare-and-swap primitive.
> The draft patch is attached. Any advice and suggestions will be greatly 
> appreciated.
> Thanks.
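
For discussion, one possible shape of the client-facing primitives (purely
illustrative; the interface name and placement in the attached patch may differ):
{code}
import java.io.IOException;

import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;

public interface AtomicGetMutate {
  /** Atomically apply the Put and return the previous contents of the row. */
  Result getAndPut(Put put) throws IOException;

  /** Atomically apply the Delete and return the previous contents of the row. */
  Result getAndDelete(Delete delete) throws IOException;
}
{code}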



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17076) implement getAndPut() and getAndDelete()

2016-11-15 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17076:
--
Status: Patch Available  (was: Open)

> implement getAndPut() and getAndDelete()
> 
>
> Key: HBASE-17076
> URL: https://issues.apache.org/jira/browse/HBASE-17076
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17076-v0.patch, HBASE-17076-v1.patch, 
> HBASE-17076-v2.patch
>
>
> We implement getAndPut() and getAndDelete() via a coprocessor, but there is
> a lot of duplicated effort (e.g., data checks, row locking, returned values, and
> WAL handling). It would be cool if we provided the compare-and-swap primitive.
> The draft patch is attached. Any advice and suggestions will be greatly 
> appreciated.
> Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17076) implement getAndPut() and getAndDelete()

2016-11-15 Thread ChiaPing Tsai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ChiaPing Tsai updated HBASE-17076:
--
Assignee: ChiaPing Tsai
  Status: Open  (was: Patch Available)

> implement getAndPut() and getAndDelete()
> 
>
> Key: HBASE-17076
> URL: https://issues.apache.org/jira/browse/HBASE-17076
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-17076-v0.patch, HBASE-17076-v1.patch
>
>
> We implement getAndPut() and getAndDelete() via a coprocessor, but there is
> a lot of duplicated effort (e.g., data checks, row locking, returned values, and
> WAL handling). It would be cool if we provided the compare-and-swap primitive.
> The draft patch is attached. Any advice and suggestions will be greatly 
> appreciated.
> Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17082) ForeignExceptionUtil isn’t packaged when building shaded protocol with -Pcompile-protobuf

2016-11-15 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669692#comment-15669692
 ] 

ChiaPing Tsai commented on HBASE-17082:
---

Do we need to update the README.txt in hbase-protocol-shaded?
{noformat}
 $ mvn install -Dcompile-protobuf

or

 $ mvn install -Pcompile-protobuf

NOTE: 'install' above whereas other proto generation only needs 'compile'
{noformat}
After this issue is resolved, we always skip the 'install'. The 'install' ought
to be replaced by 'package' to avoid misunderstanding.
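
If so, the relevant lines could read something like this (suggested wording only):
{noformat}
 $ mvn package -Dcompile-protobuf

or

 $ mvn package -Pcompile-protobuf
{noformat}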

> ForeignExceptionUtil isn’t packaged when building shaded protocol with 
> -Pcompile-protobuf
> -
>
> Key: HBASE-17082
> URL: https://issues.apache.org/jira/browse/HBASE-17082
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: 17082_attempted_fix.txt, 17082_attempted_fix2.txt, 
> HBASE-17082.nothing.patch, HBASE-17082.nothing.patch, 
> HBASE-17082.nothing.patch, HBASE-17082.v0.patch, HBASE-17082.v1.patch, 
> patch-unit-hbase-client (after v1.patch).txt, patch-unit-hbase-server (after 
> v1.patch).txt
>
>
> The source folder will be replaced from src/main/java to 
> project.build.directory/protoc-generated-sources when building shaded 
> protocol with -Pcompile-protobuf, but we do not copy the 
> ForeignExceptionUtil. So the final jar lacks the ForeignExceptionUtil and it 
> causes the test error for hbase-client and hbase-server.
> {noformat}
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:[169,36]
>  cannot find symbol
>   symbol:   class ForeignExceptionUtil
>   location: package org.apache.hadoop.hbase.util
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:[100,36]
>  cannot find symbol
>   symbol:   class ForeignExceptionUtil
>   location: package org.apache.hadoop.hbase.util
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:[2144,17]
>  cannot find symbol
>   symbol:   variable ForeignExceptionUtil
>   location: class org.apache.hadoop.hbase.regionserver.HRegionServer
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:[938,32]
>  cannot find symbol
>   symbol:   variable ForeignExceptionUtil
>   location: class org.apache.hadoop.hbase.master.MasterRpcServices
> {noformat}
> This bug blocks the patches which are against the hbase-protocol-shaded 
> module. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16972) Log more details for Scan#next request when responseTooSlow

2016-11-15 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16972?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669687#comment-15669687
 ] 

Nick Dimiduk commented on HBASE-16972:
--

The change appears only to add new fields to a JSON-ish blob, so I'm (belatedly)
okay with this for release lines. It would break folks who are parsing
positionally, but it's structured, so they don't have much room to complain.

Strong objections [~busbey]? I guess it's already out in 1.2.4, so that's that.

> Log more details for Scan#next request when responseTooSlow
> ---
>
> Key: HBASE-16972
> URL: https://issues.apache.org/jira/browse/HBASE-16972
> Project: HBase
>  Issue Type: Improvement
>  Components: Operability
>Affects Versions: 1.2.3, 1.1.7
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.4, 1.1.8
>
> Attachments: HBASE-16972.patch, HBASE-16972.v2.patch, 
> HBASE-16972.v3.patch
>
>
> Currently, if responseTooSlow happens on the scan.next call, we will get a
> warn log like below:
> {noformat}
> 2016-10-31 11:43:23,430 WARN  
> [RpcServer.FifoWFPBQ.priority.handler=5,queue=1,port=60193] 
> ipc.RpcServer(2574):
> (responseTooSlow): 
> {"call":"Scan(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ScanRequest)",
> "starttimems":1477885403428,"responsesize":52,"method":"Scan","param":"scanner_id:
>  11 number_of_rows: 2147483647
> close_scanner: false next_call_seq: 0 client_handles_partials: true 
> client_handles_heartbeats: true
> track_scan_metrics: false renew: 
> false","processingtimems":2,"client":"127.0.0.1:60254","queuetimems":0,"class":"HMaster"}
> {noformat}
> From this we only have a {{scanner_id}}, and it is impossible to know what
> exactly this scan is about, e.g., against which region of which table.
> After this JIRA, we will improve the message to something like below (notice 
> the last line):
> {noformat}
> 2016-10-31 11:43:23,430 WARN  
> [RpcServer.FifoWFPBQ.priority.handler=5,queue=1,port=60193] 
> ipc.RpcServer(2574):
> (responseTooSlow): 
> {"call":"Scan(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ScanRequest)",
> "starttimems":1477885403428,"responsesize":52,"method":"Scan","param":"scanner_id:
>  11 number_of_rows: 2147483647
> close_scanner: false next_call_seq: 0 client_handles_partials: true 
> client_handles_heartbeats: true
> track_scan_metrics: false renew: 
> false","processingtimems":2,"client":"127.0.0.1:60254","queuetimems":0,"class":"HMaster",
> "scandetails":"table: hbase:meta region: hbase:meta,,1.1588230740"}
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16973) Revisiting default value for hbase.client.scanner.caching

2016-11-15 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16973?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669671#comment-15669671
 ] 

Nick Dimiduk commented on HBASE-16973:
--

Trying to understand the state of things here for 1.1. Looks like HBASE-11544
made it, meaning {{DEFAULT_HBASE_CLIENT_SCANNER_CACHING = Integer.MAX_VALUE}};
thus the default limit based on the total number of rows is effectively unbounded.
We also have HBASE-12976, so {{DEFAULT_HBASE_CLIENT_SCANNER_MAX_RESULT_SIZE = 2
* 1024 * 1024}}. {{hbase.client.scanner.timeout.period}} is 1m in
hbase-default.xml. Does this mean that for a highly selective filter, we'd end up
hitting a timeout and throwing away any partial results before the 2MB is
filled? Or does it mean we go back to the client after 1m with whatever we've
accumulated so far? The former is a pretty bad situation and warrants some
comment about the sharp edge. I'm against changing the default this late into
the maintenance cycle, but a table in the book that breaks things out by
release branch would help users stumbling through the murk.

> Revisiting default value for hbase.client.scanner.caching
> -
>
> Key: HBASE-16973
> URL: https://issues.apache.org/jira/browse/HBASE-16973
> Project: HBase
>  Issue Type: Task
>Reporter: Yu Li
>Assignee: Yu Li
> Attachments: Scan.next_p999.png
>
>
> We are observing below logs for a long-running scan:
> {noformat}
> 2016-10-30 08:51:41,692 WARN  
> [B.defaultRpcServer.handler=50,queue=12,port=16020] ipc.RpcServer:
> (responseTooSlow-LongProcessTime): {"processingtimems":24329,
> "call":"Scan(org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ScanRequest)",
> "client":"11.251.157.108:50415","scandetails":"table: ae_product_image 
> region: ae_product_image,494:
> ,1476872321454.33171a04a683c4404717c43ea4eb8978.","param":"scanner_id: 
> 5333521 number_of_rows: 2147483647
> close_scanner: false next_call_seq: 8 client_handles_partials: true 
> client_handles_heartbeats: true",
> "starttimems":1477788677363,"queuetimems":0,"class":"HRegionServer","responsesize":818,"method":"Scan"}
> {noformat}
> From this we found that "number_of_rows" is as big as {{Integer.MAX_VALUE}},
> and we also observed a long filter list on the customized scan. After
> checking the application code we confirmed that there's no {{Scan.setCaching}} or
> {{hbase.client.scanner.caching}} setting on the client side, so it turns out
> that with the default value the caching for the Scan will be Integer.MAX_VALUE,
> which is really a big surprise.
> After checking code and commit history, I found it's HBASE-11544 which 
> changes {{HConstants.DEFAULT_HBASE_CLIENT_SCANNER_CACHING}} from 100 to 
> Integer.MAX_VALUE, and from the release note there I could see below notation:
> {noformat}
> Scan caching default has been changed to Integer.Max_Value 
> This value works together with the new maxResultSize value from HBASE-12976 
> (defaults to 2MB) 
> Results returned from server on basis of size rather than number of rows 
> Provides better use of network since row size varies amongst tables
> {noformat}
> And I'm afraid this lacks consideration of the case of a scan with filters,
> which may involve many rows but return only a small result.
> What's more, we still have below comment/code in {{Scan.java}}
> {code}
>   /*
>* -1 means no caching
>*/
>   private int caching = -1;
> {code}
> But actually the implementation does not follow this (instead of no caching,
> we are caching {{Integer.MAX_VALUE}}...).
> So here I'd like to bring up two points:
> 1. Change back the default value of 
> HConstants.DEFAULT_HBASE_CLIENT_SCANNER_CACHING to some small value like 128
> 2. Re-enforce the semantics of "no caching"
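
For reference, until the default is revisited a client can already pin the caching
explicitly instead of inheriting Integer.MAX_VALUE; a minimal sketch (the value 128
is only illustrative):
{code}
import org.apache.hadoop.hbase.client.Scan;

public class ExplicitCaching {
  public static Scan boundedScan() {
    Scan scan = new Scan();
    scan.setCaching(128);                     // cap the number of rows fetched per RPC
    scan.setMaxResultSize(2L * 1024 * 1024);  // keep the existing 2MB size-based cap
    return scan;
  }
}
{code}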



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17082) ForeignExceptionUtil isn’t packaged when building shaded protocol with -Pcompile-protobuf

2016-11-15 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669666#comment-15669666
 ] 

ChiaPing Tsai commented on HBASE-17082:
---

Thank you for everything you’ve done.

> ForeignExceptionUtil isn’t packaged when building shaded protocol with 
> -Pcompile-protobuf
> -
>
> Key: HBASE-17082
> URL: https://issues.apache.org/jira/browse/HBASE-17082
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: 17082_attempted_fix.txt, 17082_attempted_fix2.txt, 
> HBASE-17082.nothing.patch, HBASE-17082.nothing.patch, 
> HBASE-17082.nothing.patch, HBASE-17082.v0.patch, HBASE-17082.v1.patch, 
> patch-unit-hbase-client (after v1.patch).txt, patch-unit-hbase-server (after 
> v1.patch).txt
>
>
> The source folder will be replaced from src/main/java to 
> project.build.directory/protoc-generated-sources when building shaded 
> protocol with -Pcompile-protobuf, but we do not copy the 
> ForeignExceptionUtil. So the final jar lacks the ForeignExceptionUtil and it 
> causes the test error for hbase-client and hbase-server.
> {noformat}
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:[169,36]
>  cannot find symbol
>   symbol:   class ForeignExceptionUtil
>   location: package org.apache.hadoop.hbase.util
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:[100,36]
>  cannot find symbol
>   symbol:   class ForeignExceptionUtil
>   location: package org.apache.hadoop.hbase.util
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:[2144,17]
>  cannot find symbol
>   symbol:   variable ForeignExceptionUtil
>   location: class org.apache.hadoop.hbase.regionserver.HRegionServer
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:[938,32]
>  cannot find symbol
>   symbol:   variable ForeignExceptionUtil
>   location: class org.apache.hadoop.hbase.master.MasterRpcServices
> {noformat}
> This bug blocks the patches which are against the hbase-protocol-shaded 
> module. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17082) ForeignExceptionUtil isn’t packaged when building shaded protocol with -Pcompile-protobuf

2016-11-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669665#comment-15669665
 ] 

stack commented on HBASE-17082:
---

(I forgot to push this comment. This is my first attempt at a fix. It was
subsequently reverted and replaced by a second attempt that ended up working.)

I pushed this attempt at a fix (it is hard to do the yetus env locally -- it
takes a long time). Trying the yetus env locally, I get a different compile
error out of hbase-client. Hopefully that is a good sign. Committed so you could
try your test patch [~chia7712]. Another issue we have to deal with is the
protoc check being run in modules that don't have protoc. Will need to mess with
the hbase-personality...

{code}
commit 8847a7090260038afd538de274378a691ca96c4f
Author: Michael Stack 
Date:   Tue Nov 15 12:22:51 2016 -0800

HBASE-17082 ForeignExceptionUtil isnt packaged when building shaded 
protocol with -Pcompile-protobuf; Attempted Fix

diff --git a/hbase-protocol-shaded/pom.xml b/hbase-protocol-shaded/pom.xml
index 2b221d5..aebef81 100644
--- a/hbase-protocol-shaded/pom.xml
+++ b/hbase-protocol-shaded/pom.xml
@@ -181,7 +181,8 @@
   
   
 compile-protobuf
-
${project.build.directory}/protoc-generated-sources
+
+${basedir}/src/main/java 
${project.build.directory}/protoc-generated-sources
 
${project.build.directory}/protoc-generated-classes
 
> Key: HBASE-17082
> URL: https://issues.apache.org/jira/browse/HBASE-17082
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: 17082_attempted_fix.txt, 17082_attempted_fix2.txt, 
> HBASE-17082.nothing.patch, HBASE-17082.nothing.patch, 
> HBASE-17082.nothing.patch, HBASE-17082.v0.patch, HBASE-17082.v1.patch, 
> patch-unit-hbase-client (after v1.patch).txt, patch-unit-hbase-server (after 
> v1.patch).txt
>
>
> The source folder will be replaced from src/main/java to 
> project.build.directory/protoc-generated-sources when building shaded 
> protocol with -Pcompile-protobuf, but we do not copy the 
> ForeignExceptionUtil. So the final jar lacks the ForeignExceptionUtil and it 
> causes the test error for hbase-client and hbase-server.
> {noformat}
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:[169,36]
>  cannot find symbol
>   symbol:   class ForeignExceptionUtil
>   location: package org.apache.hadoop.hbase.util
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:[100,36]
>  cannot find symbol
>   symbol:   class ForeignExceptionUtil
>   location: package org.apache.hadoop.hbase.util
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:[2144,17]
>  cannot find symbol
>   symbol:   variable ForeignExceptionUtil
>   location: class org.apache.hadoop.hbase.regionserver.HRegionServer
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:[938,32]
>  cannot find symbol
>   symbol:   variable ForeignExceptionUtil
>   location: class org.apache.hadoop.hbase.master.MasterRpcServices
> {noformat}
> This bug blocks the patches which are against the hbase-protocol-shaded 
> module. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16169) Make RegionSizeCalculator scalable

2016-11-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16169:
--
Attachment: HBASE-16169.master.007.patch

Retrying now that HBASE-17082 is resolved.

> Make RegionSizeCalculator scalable
> --
>
> Key: HBASE-16169
> URL: https://issues.apache.org/jira/browse/HBASE-16169
> Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce, scaling
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16169.master.000.patch, 
> HBASE-16169.master.001.patch, HBASE-16169.master.002.patch, 
> HBASE-16169.master.003.patch, HBASE-16169.master.004.patch, 
> HBASE-16169.master.005.patch, HBASE-16169.master.006.patch, 
> HBASE-16169.master.007.patch, HBASE-16169.master.007.patch
>
>
> RegionSizeCalculator is needed for better split generation of MR jobs. This 
> requires RegionLoad which can be obtained via ClusterStatus, i.e. accessing 
> Master. We don't want master to be in this path.
> The proposal is to add an API to the RegionServer that gets RegionLoad of all 
> regions hosted on it or those of a table if specified. RegionSizeCalculator 
> can use the latter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17082) ForeignExceptionUtil isn’t packaged when building shaded protocol with -Pcompile-protobuf

2016-11-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17082:
--
Resolution: Fixed
  Assignee: ChiaPing Tsai  (was: stack)
Status: Resolved  (was: Patch Available)

Ok. That seems to have fixed it. hbase-client passed. hbase-server failed a
unit test where before it was failing to compile.

Resolving. Reassigning to [~chia7712] since he did most of the work here.

> ForeignExceptionUtil isn’t packaged when building shaded protocol with 
> -Pcompile-protobuf
> -
>
> Key: HBASE-17082
> URL: https://issues.apache.org/jira/browse/HBASE-17082
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: 17082_attempted_fix.txt, 17082_attempted_fix2.txt, 
> HBASE-17082.nothing.patch, HBASE-17082.nothing.patch, 
> HBASE-17082.nothing.patch, HBASE-17082.v0.patch, HBASE-17082.v1.patch, 
> patch-unit-hbase-client (after v1.patch).txt, patch-unit-hbase-server (after 
> v1.patch).txt
>
>
> The source folder will be replaced from src/main/java to 
> project.build.directory/protoc-generated-sources when building shaded 
> protocol with -Pcompile-protobuf, but we do not copy the 
> ForeignExceptionUtil. So the final jar lacks the ForeignExceptionUtil and it 
> causes the test error for hbase-client and hbase-server.
> {noformat}
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:[169,36]
>  cannot find symbol
>   symbol:   class ForeignExceptionUtil
>   location: package org.apache.hadoop.hbase.util
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:[100,36]
>  cannot find symbol
>   symbol:   class ForeignExceptionUtil
>   location: package org.apache.hadoop.hbase.util
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:[2144,17]
>  cannot find symbol
>   symbol:   variable ForeignExceptionUtil
>   location: class org.apache.hadoop.hbase.regionserver.HRegionServer
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:[938,32]
>  cannot find symbol
>   symbol:   variable ForeignExceptionUtil
>   location: class org.apache.hadoop.hbase.master.MasterRpcServices
> {noformat}
> This bug blocks the patches which are against the hbase-protocol-shaded 
> module. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-14123) HBase Backup/Restore Phase 2

2016-11-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14123?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669647#comment-15669647
 ] 

stack commented on HBASE-14123:
---

I tried it. Here are some notes.

I applied patch and restarted. Nothing in logs about a backup table. Good.

I tried this: $ ./hbase/bin/hbase --config ~/conf_hbase/ backup

... and got this message:

Backup is not enabled. To enable backup, set 'hbase.backup.enabled'=true and 
restart the cluster

... which is reasonable (might want to fix the single quotes above to include
the =true).

I ran ./hbase/bin/hbase --config ~/conf_hbase/ backup and got a nice listing of
commands.

I notice that if I run the backup command without args, it does the right thing
and prints out the help.

  'backup_root Full path to store the backup image' needs an example... is
the spec for the path a full HDFS URL or a local reference?

I tried a local reference and got:

ERROR: invalid backup destination: /User/stack/Downloads/bkup

Providing a full HDFS URL seems to do the job.

stack@ve0524:~$ ./hbase/bin/hbase --config ~/conf_hbase/ backup create full 
hdfs://ve0524.halxg.cloudera.com:8020/user/stack/backup ycsb

Can we file an issue to do better output when doing history?

{code}
ID : backup_1479277134105
Type   : FULL
Tables : x_1
State  : COMPLETE
Start time : Tue Nov 15 22:18:54 PST 2016
End time   : Tue Nov 15 22:19:11 PST 2016
Progress   : 100

ID : backup_1479274617204
Type   : FULL
Tables : ycsb
State  : COMPLETE
Start time : Tue Nov 15 21:36:57 PST 2016
End time   : Tue Nov 15 21:52:48 PST 2016
Progress   : 100

ID : backup_1479273680731
Type   : FULL
Tables : ycsb
State  : RUNNING
Start time : Tue Nov 15 21:21:21 PST 2016
Phase  : null
Progress   : 0
{code}

A single line per backup would be good when there are lots of backups. The line
as JSON would be parseable/actionable by other tools. Can fix later.
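
For instance, the first history entry above could collapse to something like this
(illustrative only, reusing the values shown):
{noformat}
{"id":"backup_1479277134105","type":"FULL","tables":["x_1"],"state":"COMPLETE","starttime":"Tue Nov 15 22:18:54 PST 2016","endtime":"Tue Nov 15 22:19:11 PST 2016","progress":100}
{noformat}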

Should 'history' be 'list'? History is not 'complete' as I'd expect after doing 
a few deletes. Can fix later.

'history' shows all about a backup; 'describe' shows all about a single backup.
Maybe 'list' shows all, and 'list' with a backup id shows only the ids specified?
Can be done later. 'history' seems odd here.

'progress' help doesn't say it requires a backup id.

So, given the above note on how you format the backup emissions, it is
interesting that the set output has this form, a completely new format:

{code}
x={ycsb,x_1}
{code}

It would be good to unify the output formats here, or at least have them mildly
related.

Formatting is off here:

{code}
stack@ve0524:~$ ./hbase/bin/hbase --config ~/conf_hbase/ restore
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/home/stack/hbase-2.0.0-SNAPSHOT/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/home/stack/hadoop-2.7.3-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2016-11-15 22:40:09,081 DEBUG [main] backup.RestoreDriver: Will automatically 
restore all the dependencies
Usage: bin/hbase restore <backup_path> <backup_id> <table(s)> [options]
  backup_path Path to a backup destination root
  backup_id   Backup image ID to restore
  table(s)    Comma-separated list of tables to restore
{code}

Restore, like backup, needs defaults and example args.

Please file subissues to fix the above, but so far no blocker for merge. Let me
do a bit more in the morning. Running a restore now... If things basically
work and review checks out, we can merge and pick up the above small stuff
subsequently. Will be back.













> HBase Backup/Restore Phase 2
> 
>
> Key: HBASE-14123
> URL: https://issues.apache.org/jira/browse/HBASE-14123
> Project: HBase
>  Issue Type: Umbrella
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Blocker
> Fix For: 2.0.0
>
> Attachments: 14123-master.v14.txt, 14123-master.v15.txt, 
> 14123-master.v16.txt, 14123-master.v17.txt, 14123-master.v18.txt, 
> 14123-master.v19.txt, 14123-master.v2.txt, 14123-master.v20.txt, 
> 14123-master.v21.txt, 14123-master.v24.txt, 14123-master.v25.txt, 
> 14123-master.v27.txt, 14123-master.v28.txt, 14123-master.v29.full.txt, 
> 14123-master.v3.txt, 14123-master.v30.txt, 14123-master.v31.txt, 
> 14123-master.v32.txt, 14123-master.v33.txt, 14123-master.v34.txt, 
> 14123-master.v35.txt, 14123-master.v36.txt, 14123-master.v37.txt, 
> 14123-master.v5.txt, 14123-master.v6.txt, 14123-master.v7.txt, 
> 14123-master.v8.txt, 14123-master.v9.txt, 

[jira] [Commented] (HBASE-17082) ForeignExceptionUtil isn’t packaged when building shaded protocol with -Pcompile-protobuf

2016-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669645#comment-15669645
 ] 

Hadoop QA commented on HBASE-17082:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 15s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
10s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 32s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 10m 
29s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
30s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 56s 
{color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 51s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
38s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 22s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 10m 
25s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
30s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 33s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:red}-1{color} | {color:red} hbaseprotoc {color} | {color:red} 0m 24s 
{color} | {color:red} Patch generated 1 new protoc errors in hbase-server. 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 5m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 49s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 26s 
{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 57s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 92m 16s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
37s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 166m 25s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.master.TestAssignmentListener |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839115/HBASE-17082.nothing.patch
 |
| JIRA Issue | HBASE-17082 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  cc  hbaseprotoc  |
| uname | Linux 

[jira] [Commented] (HBASE-17109) LoadTestTool needs differentiation and help/usage/options/cleanup and examples; else lets drop it: we have enough loading options already

2016-11-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17109?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669634#comment-15669634
 ] 

stack commented on HBASE-17109:
---

Also, we need to fix the emission on completion: 'Failed to write keys: 0' ...
makes you think you messed up when in fact everything succeeded.

> LoadTestTool needs differentiation and help/usage/options/cleanup and 
> examples; else lets drop it: we have enough loading options already
> -
>
> Key: HBASE-17109
> URL: https://issues.apache.org/jira/browse/HBASE-17109
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.0
>
>
> LTT needs better differentiation from PE (and YCSB). When would I use it 
> instead?
> If we can't make a case for it, let's drop it. Having so many loading options
> is only confusing.
> It could be easier to use for getting going with some mildly interesting
> loadings, but the options presented confuse: they are not sorted and there are
> no examples. If you type nothing you get an exception instead of help. A little
> cleanup and LTT could be an easy-to-get-going loading tool. For heavier loading,
> use PE or YCSB... though again, the fewer options the better.
> One thing LTT is nice at is the way it can make many tables with many regions 
> easily. What else differentiates it?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16562) ITBLL should fail to start if misconfigured

2016-11-15 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669631#comment-15669631
 ] 

Nick Dimiduk commented on HBASE-16562:
--

I just found HBASE-16934. Sorry [~busbey], [~apurtell], please disregard.

> ITBLL should fail to start if misconfigured
> ---
>
> Key: HBASE-16562
> URL: https://issues.apache.org/jira/browse/HBASE-16562
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Andrew Purtell
>Assignee: Heng Chen
> Fix For: 1.0.4, 1.3.1, 1.1.7, 0.98.23
>
> Attachments: HBASE-16562-branch-1.2.patch, 
> HBASE-16562-branch-1.2.v1.patch, HBASE-16562.patch, HBASE-16562.v1.patch, 
> HBASE-16562.v1.patch-addendum
>
>
> The number of nodes in ITBLL must be a multiple of width*wrap (defaults to 25M,
> but can be configured by adding two more args to the test invocation) or else
> verification will fail. This can be very expensive in terms of time or hourly
> billing for on-demand test resources. Check the sanity of the test parameters
> before launching any MR jobs and fail fast if invariants aren't met, with an
> indication of what parameter(s) need fixing.
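
A rough sketch of the fail-fast check being asked for, assuming the relevant values
are already parsed from the arguments (illustrative, not the attached patch):
{code}
/** Fail fast if numNodes is not a positive multiple of width * wrap. */
static void checkNodeCount(long numNodes, long width, long wrap) {
  long segment = width * wrap;
  if (segment <= 0 || numNodes % segment != 0) {
    throw new IllegalArgumentException("numNodes (" + numNodes
        + ") must be a positive multiple of width*wrap (" + segment + ")");
  }
}
{code}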



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17081) Flush the entire CompactingMemStore content to disk

2016-11-15 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-17081:
---
Attachment: Pipelinememstore_fortrunk_3.patch

> Flush the entire CompactingMemStore content to disk
> ---
>
> Key: HBASE-17081
> URL: https://issues.apache.org/jira/browse/HBASE-17081
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-17081-V01.patch, Pipelinememstore_fortrunk_3.patch
>
>
> Part of CompactingMemStore's memory is held by an active segment, and another 
> part is divided between immutable segments in the compacting pipeline. Upon 
> flush-to-disk request we want to flush all of it to disk, in contrast to 
> flushing only tail of the compacting pipeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16359) NullPointerException in RSRpcServices.openRegion()

2016-11-15 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16359?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669615#comment-15669615
 ] 

Nick Dimiduk commented on HBASE-16359:
--

+1 for branch-1.1 if it's making its way down the release lines.

> NullPointerException in RSRpcServices.openRegion()
> --
>
> Key: HBASE-16359
> URL: https://issues.apache.org/jira/browse/HBASE-16359
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.2
>Reporter: Ted Yu
>Assignee: Ted Yu
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 16359.addendum, 16359.v2.txt
>
>
> I was investigating why some region failed to move out of transition within 
> timeout 12ms and found the following in region server log:
> {code}
> 2016-08-04 09:19:52,616 INFO  
> [B.priority.fifo.QRpcServer.handler=12,queue=0,port=16020] 
> regionserver.RSRpcServices: Open hbck_table_772674,,1470302211047.
> da859880bb51bc0fd25979798a96c444.
> 2016-08-04 09:19:52,620 ERROR 
> [B.priority.fifo.QRpcServer.handler=12,queue=0,port=16020] ipc.RpcServer: 
> Unexpected throwable object
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.hbase.regionserver.RSRpcServices.openRegion(RSRpcServices.java:1530)
>   at 
> org.apache.hadoop.hbase.protobuf.generated.AdminProtos$AdminService$2.callBlockingMethod(AdminProtos.java:22737)
>   at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
>   at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
> {code}
> Here is related code - NPE was thrown from the last line:
> {code}
> htd = htds.get(region.getTable());
> if (htd == null) {
>   htd = regionServer.tableDescriptors.get(region.getTable());
>   htds.put(region.getTable(), htd);
> }
> ...
>   if (region.isMetaRegion()) {
> regionServer.service.submit(new OpenMetaHandler(
>   regionServer, regionServer, region, htd, masterSystemTime, 
> coordination, ord));
>   } else {
> 
> regionServer.updateRegionFavoredNodesMapping(region.getEncodedName(),
>   regionOpenInfo.getFavoredNodesList());
> if (htd.getPriority() >= HConstants.ADMIN_QOS || 
> region.getTable().isSystemTable()) {
> {code}
> region.getTable() shouldn't be null since it is called via 
> htds.get(region.getTable()) unconditionally.
> It seems htd was null.
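
A minimal sketch of the kind of guard the last paragraph suggests, continuing the
fragment quoted above (assumed, not necessarily the committed fix):
{code}
htd = htds.get(region.getTable());
if (htd == null) {
  htd = regionServer.tableDescriptors.get(region.getTable());
  if (htd == null) {
    // Fail the open with a clear message instead of NPE-ing further down.
    throw new IOException("Missing table descriptor for " + region.getTable());
  }
  htds.put(region.getTable(), htd);
}
{code}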



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17081) Flush the entire CompactingMemStore content to disk

2016-11-15 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669613#comment-15669613
 ] 

ramkrishna.s.vasudevan commented on HBASE-17081:


Just a few comments in RB. Also attaching my version of the patch for your
reference, in case you find something useful there.

> Flush the entire CompactingMemStore content to disk
> ---
>
> Key: HBASE-17081
> URL: https://issues.apache.org/jira/browse/HBASE-17081
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-17081-V01.patch
>
>
> Part of CompactingMemStore's memory is held by an active segment, and another 
> part is divided between immutable segments in the compacting pipeline. Upon 
> flush-to-disk request we want to flush all of it to disk, in contrast to 
> flushing only tail of the compacting pipeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16562) ITBLL should fail to start if misconfigured

2016-11-15 Thread Nick Dimiduk (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669610#comment-15669610
 ] 

Nick Dimiduk commented on HBASE-16562:
--

I don't see a revert commit for this one on branch-1.1; it seems it went out in
1.1.7. [~busbey], [~apurtell]: should it be reverted for 1.1.8?

> ITBLL should fail to start if misconfigured
> ---
>
> Key: HBASE-16562
> URL: https://issues.apache.org/jira/browse/HBASE-16562
> Project: HBase
>  Issue Type: Improvement
>  Components: integration tests
>Reporter: Andrew Purtell
>Assignee: Heng Chen
> Fix For: 1.0.4, 1.3.1, 1.1.7, 0.98.23
>
> Attachments: HBASE-16562-branch-1.2.patch, 
> HBASE-16562-branch-1.2.v1.patch, HBASE-16562.patch, HBASE-16562.v1.patch, 
> HBASE-16562.v1.patch-addendum
>
>
> The number of nodes in ITBLL must a multiple of width*wrap (defaults to 25M, 
> but can be configured by adding two more args to the test invocation) or else 
> verification will fail. This can be very expensive in terms of time or hourly 
> billing for on demand test resources. Check the sanity of test parameters 
> before launching any MR jobs and fail fast if invariants aren't met with an 
> indication what parameter(s) need fixing. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17085) AsyncFSWAL may issue unnecessary AsyncDFSOutput.sync

2016-11-15 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669567#comment-15669567
 ] 

ramkrishna.s.vasudevan commented on HBASE-17085:


I thought it was only used in tests and not anywhere else; that is all I found
when I grepped. Let me check your patch now.

> AsyncFSWAL may issue unnecessary AsyncDFSOutput.sync
> 
>
> Key: HBASE-17085
> URL: https://issues.apache.org/jira/browse/HBASE-17085
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17085-v1.patch, HBASE-17085-v2.patch, 
> HBASE-17085-v2.patch, HBASE-17085.patch
>
>
> The problem is in the appendAndSync method: we will issue an AsyncDFSOutput.sync
> if syncFutures is not empty. The SyncFutures in syncFutures can only be
> removed after an AsyncDFSOutput.sync comes back, so before the
> AsyncDFSOutput.sync actually returns, we will always issue an
> AsyncDFSOutput.sync after an append, even if there is no new sync request.
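
A simplified sketch of the pattern described above (not the actual AsyncFSWAL code;
the field and method names only mirror the description):
{code}
private void appendAndSync() {
  for (FSWALEntry entry : toWriteAppends) {
    append(entry);                 // buffer the edit into the AsyncDFSOutput
  }
  // syncFutures only drains once an AsyncDFSOutput.sync completes, so until that
  // happens this condition stays true and we issue a sync after every append,
  // even when no new sync request has arrived.
  if (!syncFutures.isEmpty()) {
    output.sync();
  }
}
{code}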



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17049) Find out why AsyncFSWAL issues much more syncs than FSHLog

2016-11-15 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669565#comment-15669565
 ] 

ramkrishna.s.vasudevan commented on HBASE-17049:


This is valid, I think. Also, for almost every 3 or 4 appends we create one
packet, and that is what is being synced.

> Find out why AsyncFSWAL issues much more syncs than FSHLog
> --
>
> Key: HBASE-17049
> URL: https://issues.apache.org/jira/browse/HBASE-17049
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
> Fix For: 2.0.0
>
>
> https://issues.apache.org/jira/browse/HBASE-16890?focusedCommentId=15647590=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15647590



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16981) Expand Mob Compaction Partition policy from daily to weekly, monthly and beyond

2016-11-15 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669559#comment-15669559
 ] 

Jingcheng Du commented on HBASE-16981:
--

This can reduce the IO, but it cannot help reduce the number of files.
If we want to keep a small number of files, we have to set the merge threshold
to a large value, which might introduce IO amplification.
Maybe we can add a threshold for the number of files: files that are
larger than the merge threshold won't be touched until the number of files is
larger than the new threshold? In the compaction, the files that are smaller than
the merge threshold should be selected first.
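
A rough sketch of that selection idea (names such as maxFilesThreshold are
hypothetical, not the existing mob compactor API):
{code}
import java.util.ArrayList;
import java.util.List;

import org.apache.hadoop.fs.FileStatus;

public class MobFileSelection {
  /**
   * Select files below the merge threshold first; pull in the large ones only
   * when the partition still holds more files than maxFilesThreshold.
   */
  static List<FileStatus> selectFiles(List<FileStatus> files, long mergeThreshold,
      int maxFilesThreshold) {
    List<FileStatus> selected = new ArrayList<>();
    for (FileStatus f : files) {
      if (f.getLen() < mergeThreshold) {
        selected.add(f);
      }
    }
    if (files.size() > maxFilesThreshold) {
      // Too many files overall: accept the extra IO and compact everything.
      selected = new ArrayList<>(files);
    }
    return selected;
  }
}
{code}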

> Expand Mob Compaction Partition policy from daily to weekly, monthly and 
> beyond
> ---
>
> Key: HBASE-16981
> URL: https://issues.apache.org/jira/browse/HBASE-16981
> Project: HBase
>  Issue Type: New Feature
>  Components: mob
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-16981.master.001.patch, 
> HBASE-16981.master.002.patch, 
> Supportingweeklyandmonthlymobcompactionpartitionpolicyinhbase.pdf
>
>
> Today the mob region holds all mob files for all regions. With the daily
> partition mob compaction policy, after a major mob compaction there is still
> one file per region per day. Given there are 365 days in one year, that is at
> least 365 files per region. Since HDFS has a limitation on the number of files
> under one folder, this is not going to scale if there are lots of regions. To
> reduce the mob file number, we want to introduce other partition policies such
> as weekly and monthly to compact mob files within one week or month into one
> file. This jira is created to track this effort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17109) LoadTestTool needs differentiation and help/usage/options/cleanup and examples; else lets drop it: we have enough loading options already

2016-11-15 Thread stack (JIRA)
stack created HBASE-17109:
-

 Summary: LoadTestTool needs differentiation and 
help/usage/options/cleanup and examples; else lets drop it: we have enough 
loading options already
 Key: HBASE-17109
 URL: https://issues.apache.org/jira/browse/HBASE-17109
 Project: HBase
  Issue Type: Task
Reporter: stack
Priority: Critical
 Fix For: 2.0.0


LTT needs better differentiation from PE (and YCSB). When would I use it 
instead?

If we can't make a case for it, let's drop it. Having so many loading options is
only confusing.

It could be easier to use for getting going with some mildly interesting loadings,
but the options presented confuse: they are not sorted and there are no examples.
If you type nothing you get an exception instead of help. A little cleanup and
LTT could be an easy-to-get-going loading tool. For heavier loading, use PE or
YCSB... though again, the fewer options the better.

One thing LTT is nice at is the way it can make many tables with many regions 
easily. What else differentiates it?
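
For reference, an example invocation of the kind the description asks for could look
something like this (a sketch only; the flag syntax should be checked against the
tool's actual usage output, and the values are illustrative):
{noformat}
$ hbase org.apache.hadoop.hbase.util.LoadTestTool -tn loadtest -write 3:1024:10 -num_keys 1000000
{noformat}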



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17085) AsyncFSWAL may issue unnecessary AsyncDFSOutput.sync

2016-11-15 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669456#comment-15669456
 ] 

Duo Zhang commented on HBASE-17085:
---

Yeah, I mean the sync method without a txid. It is also called in many places, so
we need to consider it. Your patch may cause an infinite wait when sync() is
called.

> AsyncFSWAL may issue unnecessary AsyncDFSOutput.sync
> 
>
> Key: HBASE-17085
> URL: https://issues.apache.org/jira/browse/HBASE-17085
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17085-v1.patch, HBASE-17085-v2.patch, 
> HBASE-17085-v2.patch, HBASE-17085.patch
>
>
> The problem is in the appendAndSync method: we will issue an AsyncDFSOutput.sync
> if syncFutures is not empty. The SyncFutures in syncFutures can only be
> removed after an AsyncDFSOutput.sync comes back, so before the
> AsyncDFSOutput.sync actually returns, we will always issue an
> AsyncDFSOutput.sync after an append, even if there is no new sync request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17049) Find out why AsyncFSWAL issues much more syncs than FSHLog

2016-11-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17049?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669434#comment-15669434
 ] 

stack commented on HBASE-17049:
---

Agree. Comparing packet sizes and their rates is next, I think. Let me see if I
can help here.

> Find out why AsyncFSWAL issues much more syncs than FSHLog
> --
>
> Key: HBASE-17049
> URL: https://issues.apache.org/jira/browse/HBASE-17049
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
> Fix For: 2.0.0
>
>
> https://issues.apache.org/jira/browse/HBASE-16890?focusedCommentId=15647590=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15647590



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17082) ForeignExceptionUtil isn’t packaged when building shaded protocol with -Pcompile-protobuf

2016-11-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669370#comment-15669370
 ] 

stack commented on HBASE-17082:
---

I reverted the first attempt with this commit:

{code}
commit 0f7a7f475134095eaa348af8fb78047970060ca0
Author: Michael Stack 
Date:   Tue Nov 15 20:27:32 2016 -0800

Revert "HBASE-17082 ForeignExceptionUtil isnt packaged when building shaded 
protocol with -Pcompile-protobuf; Attempted Fix"

This reverts commit 8847a7090260038afd538de274378a691ca96c4f.

We committed two 'attempted fixes'. This is a revert of the first
attempt. It did not work. Sorry for confusion. I used the same
commit message so it could be awkward unraveling.
{code}

> ForeignExceptionUtil isn’t packaged when building shaded protocol with 
> -Pcompile-protobuf
> -
>
> Key: HBASE-17082
> URL: https://issues.apache.org/jira/browse/HBASE-17082
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: 17082_attempted_fix.txt, 17082_attempted_fix2.txt, 
> HBASE-17082.nothing.patch, HBASE-17082.nothing.patch, 
> HBASE-17082.nothing.patch, HBASE-17082.v0.patch, HBASE-17082.v1.patch, 
> patch-unit-hbase-client (after v1.patch).txt, patch-unit-hbase-server (after 
> v1.patch).txt
>
>
> The source folder will be replaced from src/main/java to 
> project.build.directory/protoc-generated-sources when building shaded 
> protocol with -Pcompile-protobuf, but we do not copy the 
> ForeignExceptionUtil. So the final jar lacks the ForeignExceptionUtil and it 
> causes the test error for hbase-client and hbase-server.
> {noformat}
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:[169,36]
>  cannot find symbol
>   symbol:   class ForeignExceptionUtil
>   location: package org.apache.hadoop.hbase.util
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:[100,36]
>  cannot find symbol
>   symbol:   class ForeignExceptionUtil
>   location: package org.apache.hadoop.hbase.util
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:[2144,17]
>  cannot find symbol
>   symbol:   variable ForeignExceptionUtil
>   location: class org.apache.hadoop.hbase.regionserver.HRegionServer
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:[938,32]
>  cannot find symbol
>   symbol:   variable ForeignExceptionUtil
>   location: class org.apache.hadoop.hbase.master.MasterRpcServices
> {noformat}
> This bug blocks the patches which are against the hbase-protocol-shaded 
> module. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17085) AsyncFSWAL may issue unnecessary AsyncDFSOutput.sync

2016-11-15 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669360#comment-15669360
 ] 

ramkrishna.s.vasudevan commented on HBASE-17085:


My point was based on the fact that since we call sync(txid), it is certain that 
when a sync call comes in with txid 2 there will be an append with txid 2, and 
that is when the sync will be issued. I tested this with the PE tool and things 
worked fine for me. So are you thinking of the case where sync() is called 
without a txid? Maybe I did not test that case, as the actual PE case uses 
sync(txid). It can at least reduce the number of sync calls.
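
Below is a minimal, compilable toy model of the decision being discussed. All 
names in it (WalSyncModel, needsUnderlyingSync, pendingSyncTxids) are 
hypothetical and it is not the AsyncFSWAL code; it only illustrates the 
observation that a sync(txid) request implies an append with that txid already 
exists, so an underlying AsyncDFSOutput.sync is only needed while such a 
request is outstanding.

{code}
import java.util.SortedSet;
import java.util.TreeSet;

/**
 * Toy model of the sync-aggregation question above. All names are hypothetical;
 * this is not the AsyncFSWAL implementation. It only shows when an underlying
 * AsyncDFSOutput.sync-style call is actually required.
 */
public class WalSyncModel {
  private long highestAppendedTxid = 0;  // last edit written to the output
  private long highestSyncedTxid = 0;    // last edit known to be durable
  private final SortedSet<Long> pendingSyncTxids = new TreeSet<>();

  void appendEdit(long txid) {
    highestAppendedTxid = Math.max(highestAppendedTxid, txid);
  }

  void requestSync(long txid) {
    // A caller asking to sync txid N implies an append with txid N exists.
    pendingSyncTxids.add(txid);
  }

  /** True only when some caller waits on an appended-but-unsynced txid. */
  boolean needsUnderlyingSync() {
    if (pendingSyncTxids.isEmpty()) {
      return false;  // nobody is waiting, no need to sync after this append
    }
    return pendingSyncTxids.first() <= highestAppendedTxid
        && highestAppendedTxid > highestSyncedTxid;
  }

  /** Called when the underlying sync completes; releases satisfied waiters. */
  void onSyncComplete() {
    highestSyncedTxid = highestAppendedTxid;
    pendingSyncTxids.headSet(highestSyncedTxid + 1).clear();
  }

  public static void main(String[] args) {
    WalSyncModel wal = new WalSyncModel();
    wal.appendEdit(1);
    wal.requestSync(1);
    System.out.println(wal.needsUnderlyingSync());  // true: txid 1 not yet durable
    wal.onSyncComplete();
    wal.appendEdit(2);  // an append with no new sync request
    System.out.println(wal.needsUnderlyingSync());  // false: no outstanding waiter
  }
}
{code}

In a model like this, the second append does not trigger an extra sync, which 
is the reduction in sync calls mentioned above.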

> AsyncFSWAL may issue unnecessary AsyncDFSOutput.sync
> 
>
> Key: HBASE-17085
> URL: https://issues.apache.org/jira/browse/HBASE-17085
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17085-v1.patch, HBASE-17085-v2.patch, 
> HBASE-17085-v2.patch, HBASE-17085.patch
>
>
> The problem is in the appendAndSync method: we will issue an AsyncDFSOutput.sync 
> if syncFutures is not empty. The SyncFutures in syncFutures can only be 
> removed after an AsyncDFSOutput.sync comes back, so before the 
> AsyncDFSOutput.sync actually returns, we will always issue an 
> AsyncDFSOutput.sync after an append even if there is no new sync request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17091) IntegrationTestZKAndFSPermissions failed with 'KeeperException$NoNodeException'

2016-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669338#comment-15669338
 ] 

Hudson commented on HBASE-17091:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK7 #68 (See 
[https://builds.apache.org/job/HBase-1.3-JDK7/68/])
HBASE-17091 IntegrationTestZKAndFSPermissions failed with (enis: rev 
c3cb4203983244981e2f49784cd69ec21cb6910f)
* (edit) 
hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestZKAndFSPermissions.java


> IntegrationTestZKAndFSPermissions failed with 
> 'KeeperException$NoNodeException' 
> 
>
> Key: HBASE-17091
> URL: https://issues.apache.org/jira/browse/HBASE-17091
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.8
>
> Attachments: hbase-17091_v1.patch, hbase-17091_v2.patch
>
>
> The test failed with: 
> {code}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,488 INFO  [main] test.IntegrationTestZKAndFSPermissions: Checking 
> ACLs for znode 
> znode:/hbase-secure/region-in-transition/9a1652e7d73eaa66c5fb45e3fa04ac1c 
> acls:[31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|, 
> 31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|]
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,490 INFO  [main] test.IntegrationTestZKAndFSPermissions: Checking 
> ACLs for znode 
> znode:/hbase-secure/region-in-transition/e4ef3a431bcad8036bf3abd6f2caf0e4 
> acls:[31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|, 
> 31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|]
> 2016-11-11 11:33:03,505|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,502 ERROR [main] util.AbstractHBaseTool: Error running command-line 
> tool
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - 
> run()|org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /hbase-secure/region-in-transition/7e352559c4072680e9c73bf892e81d14
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.ZooKeeper.getACL(ZooKeeper.java:1330)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.assertZnodePerms(IntegrationTestZKAndFSPermissions.java:180)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:161)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:167)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:167)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.testZNodeACLs(IntegrationTestZKAndFSPermissions.java:151)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.doWork(IntegrationTestZKAndFSPermissions.java:131)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
> 2016-11-11 11:33:03,508|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> {code}
> Seems like a race condition for ephemeral region-in-transition nodes. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17081) Flush the entire CompactingMemStore content to disk

2016-11-15 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669333#comment-15669333
 ] 

ramkrishna.s.vasudevan commented on HBASE-17081:


As I am out, I did not check the patch. Since you asked where we had talked 
about it, please see 
https://issues.apache.org/jira/browse/HBASE-16608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15585971#comment-15585971.

Please go ahead with the patch. 


> Flush the entire CompactingMemStore content to disk
> ---
>
> Key: HBASE-17081
> URL: https://issues.apache.org/jira/browse/HBASE-17081
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Anastasia Braginsky
>Assignee: Anastasia Braginsky
> Attachments: HBASE-17081-V01.patch
>
>
> Part of CompactingMemStore's memory is held by an active segment, and another 
> part is divided between immutable segments in the compacting pipeline. Upon 
> flush-to-disk request we want to flush all of it to disk, in contrast to 
> flushing only the tail of the compacting pipeline.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17082) ForeignExceptionUtil isn’t packaged when building shaded protocol with -Pcompile-protobuf

2016-11-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17082:
--
Attachment: 17082_attempted_fix2.txt

Here is attempt #2. I reverted #1; it did not fix the problem.

The problem turns out to be mvn's dodgy install of a jar at the end of the 
install step. It goes so far as to take a renamed jar and install it as the 
module's product.

The trick in hbase-protocol-shaded is to build protos into a scratch jar that 
then gets shaded, undone over src, patched and then committed. Our scratch jar 
-- missing some classes -- was getting installed into the repo. Usually this is 
not an issue, but it became one during the run that this 'nothing' patch 
provokes, where the client and server modules have their protos generated (they 
have none, but Yetus thinks it needs to run). Client and server need the repo 
to build successfully, but just before their proto check, the 
hbase-protocol-shaded build ran, polluting the repo with the scratch jar as 
though it were the legit output of the hbase-protocol-shaded build.

Let me try the nothing patch against this commit.


> ForeignExceptionUtil isn’t packaged when building shaded protocol with 
> -Pcompile-protobuf
> -
>
> Key: HBASE-17082
> URL: https://issues.apache.org/jira/browse/HBASE-17082
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: 17082_attempted_fix.txt, 17082_attempted_fix2.txt, 
> HBASE-17082.nothing.patch, HBASE-17082.nothing.patch, 
> HBASE-17082.nothing.patch, HBASE-17082.v0.patch, HBASE-17082.v1.patch, 
> patch-unit-hbase-client (after v1.patch).txt, patch-unit-hbase-server (after 
> v1.patch).txt
>
>
> The source folder is switched from src/main/java to 
> project.build.directory/protoc-generated-sources when building the shaded 
> protocol with -Pcompile-protobuf, but we do not copy ForeignExceptionUtil. So 
> the final jar lacks ForeignExceptionUtil, and that causes the test errors for 
> hbase-client and hbase-server.
> {noformat}
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:[169,36]
>  cannot find symbol
>   symbol:   class ForeignExceptionUtil
>   location: package org.apache.hadoop.hbase.util
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:[100,36]
>  cannot find symbol
>   symbol:   class ForeignExceptionUtil
>   location: package org.apache.hadoop.hbase.util
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:[2144,17]
>  cannot find symbol
>   symbol:   variable ForeignExceptionUtil
>   location: class org.apache.hadoop.hbase.regionserver.HRegionServer
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:[938,32]
>  cannot find symbol
>   symbol:   variable ForeignExceptionUtil
>   location: class org.apache.hadoop.hbase.master.MasterRpcServices
> {noformat}
> This bug blocks patches against the hbase-protocol-shaded module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17082) ForeignExceptionUtil isn’t packaged when building shaded protocol with -Pcompile-protobuf

2016-11-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17082:
--
Attachment: HBASE-17082.nothing.patch

Retry of 'nothing' patch.

> ForeignExceptionUtil isn’t packaged when building shaded protocol with 
> -Pcompile-protobuf
> -
>
> Key: HBASE-17082
> URL: https://issues.apache.org/jira/browse/HBASE-17082
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: 17082_attempted_fix.txt, 17082_attempted_fix2.txt, 
> HBASE-17082.nothing.patch, HBASE-17082.nothing.patch, 
> HBASE-17082.nothing.patch, HBASE-17082.v0.patch, HBASE-17082.v1.patch, 
> patch-unit-hbase-client (after v1.patch).txt, patch-unit-hbase-server (after 
> v1.patch).txt
>
>
> The source folder is switched from src/main/java to 
> project.build.directory/protoc-generated-sources when building the shaded 
> protocol with -Pcompile-protobuf, but we do not copy ForeignExceptionUtil. So 
> the final jar lacks ForeignExceptionUtil, and that causes the test errors for 
> hbase-client and hbase-server.
> {noformat}
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:[169,36]
>  cannot find symbol
>   symbol:   class ForeignExceptionUtil
>   location: package org.apache.hadoop.hbase.util
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:[100,36]
>  cannot find symbol
>   symbol:   class ForeignExceptionUtil
>   location: package org.apache.hadoop.hbase.util
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:[2144,17]
>  cannot find symbol
>   symbol:   variable ForeignExceptionUtil
>   location: class org.apache.hadoop.hbase.regionserver.HRegionServer
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:[938,32]
>  cannot find symbol
>   symbol:   variable ForeignExceptionUtil
>   location: class org.apache.hadoop.hbase.master.MasterRpcServices
> {noformat}
> This bug blocks patches against the hbase-protocol-shaded module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Issue Comment Deleted] (HBASE-17085) AsyncFSWAL may issue unnecessary AsyncDFSOutput.sync

2016-11-15 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17085:
--
Comment: was deleted

(was: In HBASE-16890 it is 463/272=1.70 and here it is 613/386=1.59. So I think 
it helps a little?

And see my latest comment in HBASE-17049, the sync count metrics of FSHLog and 
AsyncFSWAL can not be compared directly.

Anyway, I will keep trying other methods to aggregate more syncs.

Thanks [~stack].)

> AsyncFSWAL may issue unnecessary AsyncDFSOutput.sync
> 
>
> Key: HBASE-17085
> URL: https://issues.apache.org/jira/browse/HBASE-17085
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17085-v1.patch, HBASE-17085-v2.patch, 
> HBASE-17085-v2.patch, HBASE-17085.patch
>
>
> The problem is in the appendAndSync method: we will issue an AsyncDFSOutput.sync 
> if syncFutures is not empty. The SyncFutures in syncFutures can only be 
> removed after an AsyncDFSOutput.sync comes back, so before the 
> AsyncDFSOutput.sync actually returns, we will always issue an 
> AsyncDFSOutput.sync after an append even if there is no new sync request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17088) Refactor RWQueueRpcExecutor/BalancedQueueRpcExecutor/RpcExecutor

2016-11-15 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669284#comment-15669284
 ] 

Guanghao Zhang commented on HBASE-17088:


TestHRegionWithInMemoryFlush passed locally.

> Refactor RWQueueRpcExecutor/BalancedQueueRpcExecutor/RpcExecutor
> 
>
> Key: HBASE-17088
> URL: https://issues.apache.org/jira/browse/HBASE-17088
> Project: HBase
>  Issue Type: Improvement
>  Components: rpc
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-17088-v1.patch, HBASE-17088-v2.patch, 
> HBASE-17088-v3.patch
>
>
> 1. RWQueueRpcExecutor has eight constructors, and the longest one has ten 
> parameters, yet it is only used in SimpleRpcScheduler and is easy to get 
> confused by when reading the code.
> 2. There are duplicate method implementations in RWQueueRpcExecutor and 
> BalancedQueueRpcExecutor. They can be implemented in their parent class 
> RpcExecutor.
> 3. SimpleRpcScheduler reads many configs to construct the RpcExecutors, but 
> CALL_QUEUE_SCAN_SHARE_CONF_KEY is only needed by RWQueueRpcExecutor, and 
> CALL_QUEUE_CODEL_TARGET_DELAY, CALL_QUEUE_CODEL_INTERVAL and 
> CALL_QUEUE_CODEL_LIFO_THRESHOLD are only needed by AdaptiveLifoCoDelCallQueue.
> So I thought we could refactor it. Suggestions are welcome.
> Review board: https://reviews.apache.org/r/53726/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16956) Refactor FavoredNodePlan to use regionNames as keys

2016-11-15 Thread Devaraj Das (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669261#comment-15669261
 ] 

Devaraj Das commented on HBASE-16956:
-

[~thiruvel] the last patch seems to be unrelated to this jira, but I took a 
look at the one you uploaded before it. Unless you want to update that and 
resubmit, and if there are no objections, I'll commit it tomorrow.

> Refactor FavoredNodePlan to use regionNames as keys
> ---
>
> Key: HBASE-16956
> URL: https://issues.apache.org/jira/browse/HBASE-16956
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: HBASE-16956.branch-1.001.patch, 
> HBASE-16956.master.001.patch, HBASE-16956.master.002.patch, 
> HBASE-16956.master.003.patch, HBASE-16956.master.004.patch, 
> HBASE-16956.master.005.patch, HBASE-16956.master.006.patch
>
>
> We would like to rely on the FNPlan cache whether a region is offline or not. 
> Sticking to regionNames as keys makes that possible.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17105) Annotate RegionServerObserver

2016-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669238#comment-15669238
 ] 

Hadoop QA commented on HBASE-17105:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
4s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
45s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
50s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 27s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
47s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
43s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
14s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 2s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
49s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 95m 27s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
13s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 135m 42s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839086/hbase-17105_v1.patch |
| JIRA Issue | HBASE-17105 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 2296c77c6b38 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / d40a0c3 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4484/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4484/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Annotate RegionServerObserver
> -
>
> Key: HBASE-17105
> URL: https://issues.apache.org/jira/browse/HBASE-17105
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.4.0
>
> Attachments: 

[jira] [Commented] (HBASE-16169) Make RegionSizeCalculator scalable

2016-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669217#comment-15669217
 ] 

Hadoop QA commented on HBASE-16169:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
42s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 8m 
36s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
28s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 42s 
{color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 44s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 9s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 8m 
42s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
24m 19s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:red}-1{color} | {color:red} hbaseprotoc {color} | {color:red} 0m 21s 
{color} | {color:red} Patch generated 1 new protoc errors in hbase-server. 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
45s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 15s 
{color} | {color:red} hbase-client generated 1 new + 13 unchanged - 0 fixed = 
14 total (was 13) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 24s 
{color} | {color:red} hbase-server generated 4 new + 1 unchanged - 0 fixed = 5 
total (was 1) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 12s {color} 
| {color:red} hbase-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 19s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 32s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839096/HBASE-16169.master.007.patch
 |
| JIRA Issue | HBASE-16169 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  cc  hbaseprotoc  |
| uname | Linux 

[jira] [Updated] (HBASE-17100) Implement Chore to sync FN info from Master to RegionServers

2016-11-15 Thread Thiruvel Thirumoolan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17100?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiruvel Thirumoolan updated HBASE-17100:
-
Attachment: HBASE_17100_draft.patch

Including a draft patch. This can go in after HBASE-16941.

> Implement Chore to sync FN info from Master to RegionServers
> 
>
> Key: HBASE-17100
> URL: https://issues.apache.org/jira/browse/HBASE-17100
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
> Fix For: 2.0.0
>
> Attachments: HBASE_17100_draft.patch
>
>
> The master will have a repair chore which will periodically sync FN 
> information from the master to all the region servers. This will protect 
> against RPC failures.
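
As a rough illustration of the chore shape described above, here is a 
hypothetical sketch (it is not the attached draft patch); the ScheduledChore 
constructor used and the period units should be checked against the target 
branch.

{code}
import org.apache.hadoop.hbase.ScheduledChore;
import org.apache.hadoop.hbase.Stoppable;

// Hypothetical sketch only -- not the attached draft patch. It shows the shape of a
// master-side repair chore that periodically re-pushes favored-node (FN) assignments
// to the region servers, so that a missed RPC is repaired on the next run.
public class FavoredNodesSyncChoreSketch extends ScheduledChore {
  public FavoredNodesSyncChoreSketch(Stoppable stopper, int period) {
    super("FavoredNodesSyncChore", stopper, period);
  }

  @Override
  protected void chore() {
    // A real implementation would read the FN plan held by the master and send it
    // to each region server, tolerating failures of individual servers.
  }
}
{code}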



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17091) IntegrationTestZKAndFSPermissions failed with 'KeeperException$NoNodeException'

2016-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669152#comment-15669152
 ] 

Hudson commented on HBASE-17091:


SUCCESS: Integrated in Jenkins build HBase-1.2-JDK7 #71 (See 
[https://builds.apache.org/job/HBase-1.2-JDK7/71/])
HBASE-17091 IntegrationTestZKAndFSPermissions failed with (enis: rev 
e7b310e687fd65e1c7f79d02667379602a8895a9)
* (edit) 
hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestZKAndFSPermissions.java


> IntegrationTestZKAndFSPermissions failed with 
> 'KeeperException$NoNodeException' 
> 
>
> Key: HBASE-17091
> URL: https://issues.apache.org/jira/browse/HBASE-17091
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.8
>
> Attachments: hbase-17091_v1.patch, hbase-17091_v2.patch
>
>
> The test failed with: 
> {code}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,488 INFO  [main] test.IntegrationTestZKAndFSPermissions: Checking 
> ACLs for znode 
> znode:/hbase-secure/region-in-transition/9a1652e7d73eaa66c5fb45e3fa04ac1c 
> acls:[31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|, 
> 31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|]
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,490 INFO  [main] test.IntegrationTestZKAndFSPermissions: Checking 
> ACLs for znode 
> znode:/hbase-secure/region-in-transition/e4ef3a431bcad8036bf3abd6f2caf0e4 
> acls:[31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|, 
> 31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|]
> 2016-11-11 11:33:03,505|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,502 ERROR [main] util.AbstractHBaseTool: Error running command-line 
> tool
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - 
> run()|org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /hbase-secure/region-in-transition/7e352559c4072680e9c73bf892e81d14
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.ZooKeeper.getACL(ZooKeeper.java:1330)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.assertZnodePerms(IntegrationTestZKAndFSPermissions.java:180)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:161)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:167)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:167)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.testZNodeACLs(IntegrationTestZKAndFSPermissions.java:151)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.doWork(IntegrationTestZKAndFSPermissions.java:131)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
> 2016-11-11 11:33:03,508|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> {code}
> Seems like a race condition for ephemeral region-in-transition nodes. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17091) IntegrationTestZKAndFSPermissions failed with 'KeeperException$NoNodeException'

2016-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669148#comment-15669148
 ] 

Hudson commented on HBASE-17091:


SUCCESS: Integrated in Jenkins build HBase-1.4 #536 (See 
[https://builds.apache.org/job/HBase-1.4/536/])
HBASE-17091 IntegrationTestZKAndFSPermissions failed with (enis: rev 
bf0483c37c09842f72dcea08042f8dadc0f0b758)
* (edit) 
hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestZKAndFSPermissions.java


> IntegrationTestZKAndFSPermissions failed with 
> 'KeeperException$NoNodeException' 
> 
>
> Key: HBASE-17091
> URL: https://issues.apache.org/jira/browse/HBASE-17091
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.8
>
> Attachments: hbase-17091_v1.patch, hbase-17091_v2.patch
>
>
> The test failed with: 
> {code}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,488 INFO  [main] test.IntegrationTestZKAndFSPermissions: Checking 
> ACLs for znode 
> znode:/hbase-secure/region-in-transition/9a1652e7d73eaa66c5fb45e3fa04ac1c 
> acls:[31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|, 
> 31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|]
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,490 INFO  [main] test.IntegrationTestZKAndFSPermissions: Checking 
> ACLs for znode 
> znode:/hbase-secure/region-in-transition/e4ef3a431bcad8036bf3abd6f2caf0e4 
> acls:[31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|, 
> 31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|]
> 2016-11-11 11:33:03,505|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,502 ERROR [main] util.AbstractHBaseTool: Error running command-line 
> tool
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - 
> run()|org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /hbase-secure/region-in-transition/7e352559c4072680e9c73bf892e81d14
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.ZooKeeper.getACL(ZooKeeper.java:1330)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.assertZnodePerms(IntegrationTestZKAndFSPermissions.java:180)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:161)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:167)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:167)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.testZNodeACLs(IntegrationTestZKAndFSPermissions.java:151)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.doWork(IntegrationTestZKAndFSPermissions.java:131)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
> 2016-11-11 11:33:03,508|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> {code}
> Seems like a race condition for ephemeral region-in-transition nodes. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17091) IntegrationTestZKAndFSPermissions failed with 'KeeperException$NoNodeException'

2016-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669105#comment-15669105
 ] 

Hudson commented on HBASE-17091:


SUCCESS: Integrated in Jenkins build HBase-1.3-JDK8 #79 (See 
[https://builds.apache.org/job/HBase-1.3-JDK8/79/])
HBASE-17091 IntegrationTestZKAndFSPermissions failed with (enis: rev 
c3cb4203983244981e2f49784cd69ec21cb6910f)
* (edit) 
hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestZKAndFSPermissions.java


> IntegrationTestZKAndFSPermissions failed with 
> 'KeeperException$NoNodeException' 
> 
>
> Key: HBASE-17091
> URL: https://issues.apache.org/jira/browse/HBASE-17091
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.8
>
> Attachments: hbase-17091_v1.patch, hbase-17091_v2.patch
>
>
> The test failed with: 
> {code}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,488 INFO  [main] test.IntegrationTestZKAndFSPermissions: Checking 
> ACLs for znode 
> znode:/hbase-secure/region-in-transition/9a1652e7d73eaa66c5fb45e3fa04ac1c 
> acls:[31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|, 
> 31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|]
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,490 INFO  [main] test.IntegrationTestZKAndFSPermissions: Checking 
> ACLs for znode 
> znode:/hbase-secure/region-in-transition/e4ef3a431bcad8036bf3abd6f2caf0e4 
> acls:[31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|, 
> 31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|]
> 2016-11-11 11:33:03,505|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,502 ERROR [main] util.AbstractHBaseTool: Error running command-line 
> tool
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - 
> run()|org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /hbase-secure/region-in-transition/7e352559c4072680e9c73bf892e81d14
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.ZooKeeper.getACL(ZooKeeper.java:1330)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.assertZnodePerms(IntegrationTestZKAndFSPermissions.java:180)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:161)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:167)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:167)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.testZNodeACLs(IntegrationTestZKAndFSPermissions.java:151)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.doWork(IntegrationTestZKAndFSPermissions.java:131)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
> 2016-11-11 11:33:03,508|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> {code}
> Seems like a race condition for ephemeral region-in-transition nodes. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17108) ZKConfig.getZKQuorumServersString does not return the correct client port number

2016-11-15 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669087#comment-15669087
 ] 

Andrew Purtell commented on HBASE-17108:


Testing a simple pick back. If it looks good I will post the results.

> ZKConfig.getZKQuorumServersString does not return the correct client port 
> number
> 
>
> Key: HBASE-17108
> URL: https://issues.apache.org/jira/browse/HBASE-17108
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.17
>Reporter: Andrew Purtell
> Fix For: 0.98.24
>
>
> ZKConfig.getZKQuorumServersString may not return the correct client port 
> number, at least on 0.98 branch. See PHOENIX-3485. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17088) Refactor RWQueueRpcExecutor/BalancedQueueRpcExecutor/RpcExecutor

2016-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669085#comment-15669085
 ] 

Hadoop QA commented on HBASE-17088:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
38s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
37s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
33s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 31s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
24m 4s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 86m 52s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
12s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 121m 43s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestHRegionWithInMemoryFlush |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839084/HBASE-17088-v3.patch |
| JIRA Issue | HBASE-17088 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 2d93f9bdee66 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / d40a0c3 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4483/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  
https://builds.apache.org/job/PreCommit-HBASE-Build/4483/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4483/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4483/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Refactor RWQueueRpcExecutor/BalancedQueueRpcExecutor/RpcExecutor
> 
>
>

[jira] [Commented] (HBASE-17091) IntegrationTestZKAndFSPermissions failed with 'KeeperException$NoNodeException'

2016-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669075#comment-15669075
 ] 

Hudson commented on HBASE-17091:


SUCCESS: Integrated in Jenkins build HBase-1.2-JDK8 #65 (See 
[https://builds.apache.org/job/HBase-1.2-JDK8/65/])
HBASE-17091 IntegrationTestZKAndFSPermissions failed with (enis: rev 
e7b310e687fd65e1c7f79d02667379602a8895a9)
* (edit) 
hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestZKAndFSPermissions.java


> IntegrationTestZKAndFSPermissions failed with 
> 'KeeperException$NoNodeException' 
> 
>
> Key: HBASE-17091
> URL: https://issues.apache.org/jira/browse/HBASE-17091
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.8
>
> Attachments: hbase-17091_v1.patch, hbase-17091_v2.patch
>
>
> The test failed with: 
> {code}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,488 INFO  [main] test.IntegrationTestZKAndFSPermissions: Checking 
> ACLs for znode 
> znode:/hbase-secure/region-in-transition/9a1652e7d73eaa66c5fb45e3fa04ac1c 
> acls:[31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|, 
> 31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|]
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,490 INFO  [main] test.IntegrationTestZKAndFSPermissions: Checking 
> ACLs for znode 
> znode:/hbase-secure/region-in-transition/e4ef3a431bcad8036bf3abd6f2caf0e4 
> acls:[31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|, 
> 31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|]
> 2016-11-11 11:33:03,505|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,502 ERROR [main] util.AbstractHBaseTool: Error running command-line 
> tool
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - 
> run()|org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /hbase-secure/region-in-transition/7e352559c4072680e9c73bf892e81d14
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.ZooKeeper.getACL(ZooKeeper.java:1330)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.assertZnodePerms(IntegrationTestZKAndFSPermissions.java:180)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:161)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:167)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:167)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.testZNodeACLs(IntegrationTestZKAndFSPermissions.java:151)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.doWork(IntegrationTestZKAndFSPermissions.java:131)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
> 2016-11-11 11:33:03,508|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> {code}
> Seems like a race condition for ephemeral region-in-transition nodes. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17108) ZKConfig.getZKQuorumServersString does not return the correct client port number

2016-11-15 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17108?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669068#comment-15669068
 ] 

Andrew Purtell commented on HBASE-17108:


There has been some refactoring on branch-1 since HBASE-14866, and a change in 
string handling. The most straightforward approach is a port back of the 
branch-1 version of ZKConfig, incorporating HBASE-15769.

> ZKConfig.getZKQuorumServersString does not return the correct client port 
> number
> 
>
> Key: HBASE-17108
> URL: https://issues.apache.org/jira/browse/HBASE-17108
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.17
>Reporter: Andrew Purtell
> Fix For: 0.98.24
>
>
> ZKConfig.getZKQuorumServersString may not return the correct client port 
> number, at least on 0.98 branch. See PHOENIX-3485. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16169) Make RegionSizeCalculator scalable

2016-11-15 Thread Thiruvel Thirumoolan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiruvel Thirumoolan updated HBASE-16169:
-
Status: Patch Available  (was: Open)

> Make RegionSizeCalculator scalable
> --
>
> Key: HBASE-16169
> URL: https://issues.apache.org/jira/browse/HBASE-16169
> Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce, scaling
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16169.master.000.patch, 
> HBASE-16169.master.001.patch, HBASE-16169.master.002.patch, 
> HBASE-16169.master.003.patch, HBASE-16169.master.004.patch, 
> HBASE-16169.master.005.patch, HBASE-16169.master.006.patch, 
> HBASE-16169.master.007.patch
>
>
> RegionSizeCalculator is needed for better split generation of MR jobs. This 
> requires RegionLoad which can be obtained via ClusterStatus, i.e. accessing 
> Master. We don't want master to be in this path.
> The proposal is to add an API to the RegionServer that gets RegionLoad of all 
> regions hosted on it or those of a table if specified. RegionSizeCalculator 
> can use the latter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16169) Make RegionSizeCalculator scalable

2016-11-15 Thread Thiruvel Thirumoolan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiruvel Thirumoolan updated HBASE-16169:
-
Attachment: HBASE-16169.master.007.patch

> Make RegionSizeCalculator scalable
> --
>
> Key: HBASE-16169
> URL: https://issues.apache.org/jira/browse/HBASE-16169
> Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce, scaling
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16169.master.000.patch, 
> HBASE-16169.master.001.patch, HBASE-16169.master.002.patch, 
> HBASE-16169.master.003.patch, HBASE-16169.master.004.patch, 
> HBASE-16169.master.005.patch, HBASE-16169.master.006.patch, 
> HBASE-16169.master.007.patch
>
>
> RegionSizeCalculator is needed for better split generation of MR jobs. This 
> requires RegionLoad which can be obtained via ClusterStatus, i.e. accessing 
> Master. We don't want master to be in this path.
> The proposal is to add an API to the RegionServer that gets RegionLoad of all 
> regions hosted on it or those of a table if specified. RegionSizeCalculator 
> can use the latter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16169) Make RegionSizeCalculator scalable

2016-11-15 Thread Thiruvel Thirumoolan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thiruvel Thirumoolan updated HBASE-16169:
-
Status: Open  (was: Patch Available)

> Make RegionSizeCalculator scalable
> --
>
> Key: HBASE-16169
> URL: https://issues.apache.org/jira/browse/HBASE-16169
> Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce, scaling
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16169.master.000.patch, 
> HBASE-16169.master.001.patch, HBASE-16169.master.002.patch, 
> HBASE-16169.master.003.patch, HBASE-16169.master.004.patch, 
> HBASE-16169.master.005.patch, HBASE-16169.master.006.patch, 
> HBASE-16169.master.007.patch
>
>
> RegionSizeCalculator is needed for better split generation of MR jobs. This 
> requires RegionLoad which can be obtained via ClusterStatus, i.e. accessing 
> Master. We don't want master to be in this path.
> The proposal is to add an API to the RegionServer that gets RegionLoad of all 
> regions hosted on it or those of a table if specified. RegionSizeCalculator 
> can use the latter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17105) Annotate RegionServerObserver

2016-11-15 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17105?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669037#comment-15669037
 ] 

Enis Soztutar commented on HBASE-17105:
---

Seems this is a bit more involved than just RSO. 

{{CoprocessorEnvironment}} is Private, but it cannot be, because 
Coprocessor.start(CoprocessorEnvironment env) is LimitedPrivate. 

RegionCoprocessorEnvironment is already LimitedPrivate. It seems that we should 
make the base class LP as well. We should also add some javadoc for 
RegionCoprocessorEnvironment and the others, since without reading the code a 
coprocessor writer cannot know what env will be passed to start(). 
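
For readers unfamiliar with the annotations, a minimal sketch of what a 
LimitedPrivate(COPROC) declaration looks like follows; the interface name here 
is made up, and the annotation package names may differ by branch.

{code}
import org.apache.hadoop.hbase.HBaseInterfaceAudience;
import org.apache.hadoop.hbase.classification.InterfaceAudience;
import org.apache.hadoop.hbase.classification.InterfaceStability;

// Illustrative only: a coprocessor-facing interface marked LimitedPrivate(COPROC),
// carrying the kind of javadoc the comment above asks for.
@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC)
@InterfaceStability.Evolving
public interface ExampleCoprocessorFacingEnvironment {
  /** Version of HBase this environment is running against. */
  String getHBaseVersion();
}
{code}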



> Annotate RegionServerObserver
> -
>
> Key: HBASE-17105
> URL: https://issues.apache.org/jira/browse/HBASE-17105
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.4.0
>
> Attachments: hbase-17105_v1.patch
>
>
> Seems that we have forgotten to annotate RegionServerObserver with 
> InterfaceAudience. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17108) ZKConfig.getZKQuorumServersString does not return the correct client port number

2016-11-15 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-17108:
--

 Summary: ZKConfig.getZKQuorumServersString does not return the 
correct client port number
 Key: HBASE-17108
 URL: https://issues.apache.org/jira/browse/HBASE-17108
 Project: HBase
  Issue Type: Bug
Affects Versions: 0.98.17
Reporter: Andrew Purtell
 Fix For: 0.98.24


ZKConfig.getZKQuorumServersString may not return the correct client port 
number, at least on 0.98 branch. See PHOENIX-3485. 
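
For reference, a minimal reproduction of the kind one might write against the 0.98-era API (the expected output in the comment is only illustrative; the actual failure mode is described in PHOENIX-3485):
{code}
Configuration conf = HBaseConfiguration.create();
conf.set(HConstants.ZOOKEEPER_QUORUM, "zk1,zk2,zk3");
conf.set(HConstants.ZOOKEEPER_CLIENT_PORT, "2282"); // non-default client port

// Expected: the quorum string reflects the configured client port,
// e.g. "zk1:2282,zk2:2282,zk3:2282", rather than the default 2181.
String quorum = ZKConfig.getZKQuorumServersString(conf);
System.out.println(quorum);
{code}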



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17093) Enhance SecureBulkLoadClient#secureBulkLoadHFiles() to return the family paths of the final hfiles

2016-11-15 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15669033#comment-15669033
 ] 

Ted Yu commented on HBASE-17093:


See the outline in HBASE-14417 :

https://issues.apache.org/jira/browse/HBASE-14417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15638081#comment-15638081

The hbase:backup table would be updated at the end of the bulk load. Since the mega 
patch hasn't landed, that part of the code would come in another issue.

It is error-prone (as shown in HBASE-14417) to couple the update of 
hbase:backup into the bulk load code path on the server side, hence the design 
choice outlined above.
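
Roughly, the intended client-side flow would look like the following sketch (the method shape and the backup-table helper are illustrative assumptions, not the committed API):
{code}
// With the return type changed from boolean to the final hfile paths, the
// backup client can record the loaded files after the bulk load returns,
// instead of coupling the hbase:backup update into the server-side load path.
List<Pair<byte[], String>> loaded =
    secureBulkLoadClient.secureBulkLoadHFiles(familyPaths, regionName, assignSeqNum);
for (Pair<byte[], String> p : loaded) {
  // hypothetical helper on the backup system table
  backupSystemTable.recordBulkLoadedFile(tableName, p.getFirst(), p.getSecond());
}
{code}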

> Enhance SecureBulkLoadClient#secureBulkLoadHFiles() to return the family 
> paths of the final hfiles
> --
>
> Key: HBASE-17093
> URL: https://issues.apache.org/jira/browse/HBASE-17093
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
> Attachments: 17093.v3.txt, 17093.v4.txt
>
>
> Currently SecureBulkLoadClient#secureBulkLoadHFiles() returns boolean value 
> to indicate success / failure.
> Since SecureBulkLoadClient.java is new to master branch, we can change the 
> return type to be the family paths of the final hfiles.
> LoadQueueItem would be moved to hbase-client module.
> LoadQueueItem in hbase-server module would delegate to the class in 
> hbase-client module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17107) FN info should be cleaned up on region/table cleanup

2016-11-15 Thread Thiruvel Thirumoolan (JIRA)
Thiruvel Thirumoolan created HBASE-17107:


 Summary: FN info should be cleaned up on region/table cleanup
 Key: HBASE-17107
 URL: https://issues.apache.org/jira/browse/HBASE-17107
 Project: HBase
  Issue Type: Sub-task
Reporter: Thiruvel Thirumoolan
Assignee: Thiruvel Thirumoolan


FN info should be cleaned up when table is deleted and when regions are GCed 
(i.e. CatalogJanitor).



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17093) Enhance SecureBulkLoadClient#secureBulkLoadHFiles() to return the family paths of the final hfiles

2016-11-15 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17093?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668983#comment-15668983
 ] 

Enis Soztutar commented on HBASE-17093:
---

It is worth noting that this is for HBASE-14417. The idea seems to be that we 
do a 2-PC like operation for saving the bulk load entries to the backup table 
to make sure that next incremental backup will save those. 

For the patch, I thought the plan was to save the BL paths from the server side 
itself. Is there a reason why these are propagated all the way to the client? 
Is it because we do not want to block the BL event in case the backup table is 
not available? 

> Enhance SecureBulkLoadClient#secureBulkLoadHFiles() to return the family 
> paths of the final hfiles
> --
>
> Key: HBASE-17093
> URL: https://issues.apache.org/jira/browse/HBASE-17093
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
> Attachments: 17093.v3.txt, 17093.v4.txt
>
>
> Currently SecureBulkLoadClient#secureBulkLoadHFiles() returns boolean value 
> to indicate success / failure.
> Since SecureBulkLoadClient.java is new to master branch, we can change the 
> return type to be the family paths of the final hfiles.
> LoadQueueItem would be moved to hbase-client module.
> LoadQueueItem in hbase-server module would delegate to the class in 
> hbase-client module.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16962) Add readPoint to preCompactScannerOpen() and preFlushScannerOpen() API

2016-11-15 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16962?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668939#comment-15668939
 ] 

Enis Soztutar commented on HBASE-16962:
---

Just created HBASE-17106. 

> Add readPoint to preCompactScannerOpen() and preFlushScannerOpen() API
> --
>
> Key: HBASE-16962
> URL: https://issues.apache.org/jira/browse/HBASE-16962
> Project: HBase
>  Issue Type: Bug
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16956.branch-1.001.patch, 
> HBASE-16956.master.006.patch, HBASE-16962.master.001.patch, 
> HBASE-16962.master.002.patch, HBASE-16962.master.003.patch, 
> HBASE-16962.master.004.patch, HBASE-16962.rough.patch
>
>
> Similar to HBASE-15759, I would like to add readPoint to the 
> preCompactScannerOpen() API.
> I have a CP where I create a StoreScanner() as part of the 
> preCompactScannerOpen() API. I need the readpoint which was obtained in 
> Compactor.compact() method to be consistent.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17106) Wrap arguments to Coprocessor method invocations in Context objects

2016-11-15 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-17106:
-

 Summary: Wrap arguments to Coprocessor method invocations in 
Context objects
 Key: HBASE-17106
 URL: https://issues.apache.org/jira/browse/HBASE-17106
 Project: HBase
  Issue Type: New Feature
Reporter: Enis Soztutar
 Fix For: 2.0.0


As discussed in various contexts (and recently in 
https://issues.apache.org/jira/browse/HBASE-16962?focusedCommentId=15648512=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15648512)
 we have a very large number of deprecated methods in RegionObserver (and 
possibly others) which are due to the fact that the method signatures like: 
{code}
  @Deprecated
  InternalScanner preFlushScannerOpen(final ObserverContext<RegionCoprocessorEnvironment> c,
      final Store store, final KeyValueScanner memstoreScanner, final InternalScanner s)
      throws IOException;
{code}
depend inherently on internal method signatures, which change frequently. 

We should look into wrapping the method arguments for such declarations in the 
RegionObserver interface so that we can evolve and add new arguments without 
breaking existing coprocessors between minor versions. 
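
As a rough illustration of the proposal (all names below are hypothetical, not a committed API):
{code}
// Hook arguments move behind a context object, so new inputs (such as the
// readPoint discussed in HBASE-16962) become new getters instead of new
// parameters that break existing coprocessor implementations.
public interface FlushScannerOpenContext {
  Store getStore();
  KeyValueScanner getMemstoreScanner();
  InternalScanner getDefaultScanner();
  long getReadPoint();
}

InternalScanner preFlushScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
    FlushScannerOpenContext ctx) throws IOException;
{code}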



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17105) Annotate RegionServerObserver

2016-11-15 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-17105:
--
Status: Patch Available  (was: Open)

> Annotate RegionServerObserver
> -
>
> Key: HBASE-17105
> URL: https://issues.apache.org/jira/browse/HBASE-17105
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.4.0
>
> Attachments: hbase-17105_v1.patch
>
>
> Seems that we have forgotten to annotate RegionServerObserver with 
> InterfaceAudience. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17105) Annotate RegionServerObserver

2016-11-15 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17105?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-17105:
--
Attachment: hbase-17105_v1.patch

Simple patch. 

> Annotate RegionServerObserver
> -
>
> Key: HBASE-17105
> URL: https://issues.apache.org/jira/browse/HBASE-17105
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.4.0
>
> Attachments: hbase-17105_v1.patch
>
>
> Seems that we have forgotten to annotate RegionServerObserver with 
> InterfaceAudience. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17105) Annotate RegionServerObserver

2016-11-15 Thread Enis Soztutar (JIRA)
Enis Soztutar created HBASE-17105:
-

 Summary: Annotate RegionServerObserver
 Key: HBASE-17105
 URL: https://issues.apache.org/jira/browse/HBASE-17105
 Project: HBase
  Issue Type: Bug
Reporter: Enis Soztutar
Assignee: Enis Soztutar
 Fix For: 2.0.0, 1.4.0


Seems that we have forgotten to annotate RegionServerObserver with 
InterfaceAudience. 





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17091) IntegrationTestZKAndFSPermissions failed with 'KeeperException$NoNodeException'

2016-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668889#comment-15668889
 ] 

Hudson commented on HBASE-17091:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #1959 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1959/])
HBASE-17091 IntegrationTestZKAndFSPermissions failed with (enis: rev 
d40a0c3bd88b7098437050eeb3e3f5c6ef5f5502)
* (edit) 
hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestZKAndFSPermissions.java


> IntegrationTestZKAndFSPermissions failed with 
> 'KeeperException$NoNodeException' 
> 
>
> Key: HBASE-17091
> URL: https://issues.apache.org/jira/browse/HBASE-17091
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.8
>
> Attachments: hbase-17091_v1.patch, hbase-17091_v2.patch
>
>
> The test failed with: 
> {code}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,488 INFO  [main] test.IntegrationTestZKAndFSPermissions: Checking 
> ACLs for znode 
> znode:/hbase-secure/region-in-transition/9a1652e7d73eaa66c5fb45e3fa04ac1c 
> acls:[31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|, 
> 31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|]
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,490 INFO  [main] test.IntegrationTestZKAndFSPermissions: Checking 
> ACLs for znode 
> znode:/hbase-secure/region-in-transition/e4ef3a431bcad8036bf3abd6f2caf0e4 
> acls:[31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|, 
> 31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|]
> 2016-11-11 11:33:03,505|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,502 ERROR [main] util.AbstractHBaseTool: Error running command-line 
> tool
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - 
> run()|org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /hbase-secure/region-in-transition/7e352559c4072680e9c73bf892e81d14
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.ZooKeeper.getACL(ZooKeeper.java:1330)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.assertZnodePerms(IntegrationTestZKAndFSPermissions.java:180)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:161)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:167)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:167)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.testZNodeACLs(IntegrationTestZKAndFSPermissions.java:151)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.doWork(IntegrationTestZKAndFSPermissions.java:131)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
> 2016-11-11 11:33:03,508|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> {code}
> Seems like a race condition for ephemeral region-in-transition nodes. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17082) ForeignExceptionUtil isn’t packaged when building shaded protocol with -Pcompile-protobuf

2016-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668886#comment-15668886
 ] 

Hudson commented on HBASE-17082:


SUCCESS: Integrated in Jenkins build HBase-Trunk_matrix #1959 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1959/])
HBASE-17082 ForeignExceptionUtil isnt packaged when building shaded (stack: rev 
8847a7090260038afd538de274378a691ca96c4f)
* (edit) hbase-protocol-shaded/pom.xml


> ForeignExceptionUtil isn’t packaged when building shaded protocol with 
> -Pcompile-protobuf
> -
>
> Key: HBASE-17082
> URL: https://issues.apache.org/jira/browse/HBASE-17082
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: 17082_attempted_fix.txt, HBASE-17082.nothing.patch, 
> HBASE-17082.nothing.patch, HBASE-17082.v0.patch, HBASE-17082.v1.patch, 
> patch-unit-hbase-client (after v1.patch).txt, patch-unit-hbase-server (after 
> v1.patch).txt
>
>
> The source folder will be replaced from src/main/java to 
> project.build.directory/protoc-generated-sources when building shaded 
> protocol with -Pcompile-protobuf, but we do not copy the 
> ForeignExceptionUtil. So the final jar lacks the ForeignExceptionUtil and it 
> causes the test error for hbase-client and hbase-server.
> {noformat}
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:[169,36]
>  cannot find symbol
>   symbol:   class ForeignExceptionUtil
>   location: package org.apache.hadoop.hbase.util
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:[100,36]
>  cannot find symbol
>   symbol:   class ForeignExceptionUtil
>   location: package org.apache.hadoop.hbase.util
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:[2144,17]
>  cannot find symbol
>   symbol:   variable ForeignExceptionUtil
>   location: class org.apache.hadoop.hbase.regionserver.HRegionServer
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:[938,32]
>  cannot find symbol
>   symbol:   variable ForeignExceptionUtil
>   location: class org.apache.hadoop.hbase.master.MasterRpcServices
> {noformat}
> This bug blocks the patches which are against the hbase-protocol-shaded 
> module. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2016-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668870#comment-15668870
 ] 

Hadoop QA commented on HBASE-16179:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 18s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} patch {color} | {color:blue} 0m 2s 
{color} | {color:blue} The patch file was not named according to hbase's naming 
conventions. Please see 
https://yetus.apache.org/documentation/0.3.0/precommit-patchnames for 
instructions. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 9 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 22s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
7s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 22s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
24s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 2m 3s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} scaladoc {color} | {color:green} 1m 
15s {color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 4m 
19s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 3m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 3m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} scalac {color} | {color:green} 3m 51s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 1m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green} 0m 4s 
{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 42s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 1m 57s 
{color} | {color:red} root generated 1 new + 18 unchanged - 1 fixed = 19 total 
(was 19) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 10s 
{color} | {color:red} hbase-spark generated 1 new + 17 unchanged - 1 fixed = 18 
total (was 18) {color} |
| {color:red}-1{color} | {color:red} scaladoc {color} | {color:red} 0m 55s 
{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} scaladoc {color} | {color:red} 0m 23s 
{color} | {color:red} hbase-spark in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 10s 
{color} | {color:green} hbase-spark2.0-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 11s 
{color} | {color:green} hbase-spark1.6-compat in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 109m 49s 
{color} | {color:green} root in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 5s 
{color} | {color:green} hbase-spark in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
53s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 167m 4s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839052/16179.v13.txt |
| JIRA Issue | HBASE-16179 |
| Optional Tests |  asflicense  javac  javadoc  unit  xml  compile  

[jira] [Commented] (HBASE-17091) IntegrationTestZKAndFSPermissions failed with 'KeeperException$NoNodeException'

2016-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668854#comment-15668854
 ] 

Hudson commented on HBASE-17091:


SUCCESS: Integrated in Jenkins build HBase-1.1-JDK7 #1819 (See 
[https://builds.apache.org/job/HBase-1.1-JDK7/1819/])
HBASE-17091 IntegrationTestZKAndFSPermissions failed with (enis: rev 
a8628ee9a2c3e32347cf091db3bb3d67789fd29a)
* (edit) 
hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestZKAndFSPermissions.java


> IntegrationTestZKAndFSPermissions failed with 
> 'KeeperException$NoNodeException' 
> 
>
> Key: HBASE-17091
> URL: https://issues.apache.org/jira/browse/HBASE-17091
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.8
>
> Attachments: hbase-17091_v1.patch, hbase-17091_v2.patch
>
>
> The test failed with: 
> {code}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,488 INFO  [main] test.IntegrationTestZKAndFSPermissions: Checking 
> ACLs for znode 
> znode:/hbase-secure/region-in-transition/9a1652e7d73eaa66c5fb45e3fa04ac1c 
> acls:[31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|, 
> 31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|]
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,490 INFO  [main] test.IntegrationTestZKAndFSPermissions: Checking 
> ACLs for znode 
> znode:/hbase-secure/region-in-transition/e4ef3a431bcad8036bf3abd6f2caf0e4 
> acls:[31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|, 
> 31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|]
> 2016-11-11 11:33:03,505|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,502 ERROR [main] util.AbstractHBaseTool: Error running command-line 
> tool
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - 
> run()|org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /hbase-secure/region-in-transition/7e352559c4072680e9c73bf892e81d14
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.ZooKeeper.getACL(ZooKeeper.java:1330)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.assertZnodePerms(IntegrationTestZKAndFSPermissions.java:180)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:161)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:167)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:167)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.testZNodeACLs(IntegrationTestZKAndFSPermissions.java:151)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.doWork(IntegrationTestZKAndFSPermissions.java:131)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
> 2016-11-11 11:33:03,508|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> {code}
> Seems like a race condition for ephemeral region-in-transition nodes. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12341) Overhaul logging; log4j2, machine-readable, etc.

2016-11-15 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668829#comment-15668829
 ] 

Enis Soztutar commented on HBASE-12341:
---

maven enforcer can enforce converging dependencies: 
{code}
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <executions>
    <execution>
      <id>depcheck</id>
      <configuration>
        <rules>
          <dependencyConvergence/>
        </rules>
        <fail>true</fail>
      </configuration>
      <goals>
        <goal>enforce</goal>
      </goals>
      <phase>verify</phase>
    </execution>
  </executions>
</plugin>
{code}

> Overhaul logging; log4j2, machine-readable, etc.
> 
>
> Key: HBASE-12341
> URL: https://issues.apache.org/jira/browse/HBASE-12341
> Project: HBase
>  Issue Type: Umbrella
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.0
>
>
> This is a general umbrella issue for 2.x logging improvements. Hang related 
> work off this one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17088) Refactor RWQueueRpcExecutor/BalancedQueueRpcExecutor/RpcExecutor

2016-11-15 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-17088:
---
Attachment: HBASE-17088-v3.patch

Attached a new v3 patch. Waiting for the Hadoop QA result.

> Refactor RWQueueRpcExecutor/BalancedQueueRpcExecutor/RpcExecutor
> 
>
> Key: HBASE-17088
> URL: https://issues.apache.org/jira/browse/HBASE-17088
> Project: HBase
>  Issue Type: Improvement
>  Components: rpc
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-17088-v1.patch, HBASE-17088-v2.patch, 
> HBASE-17088-v3.patch
>
>
> 1. RWQueueRpcExecutor has eight constructors, and the longest one has ten 
> parameters. But it is only used in SimpleRpcScheduler, and it is easy to get 
> confused when reading the code.
> 2. There are duplicate method implementations in RWQueueRpcExecutor and 
> BalancedQueueRpcExecutor. They can be implemented in their parent class 
> RpcExecutor.
> 3. SimpleRpcScheduler reads many configs to construct the RpcExecutors. But 
> CALL_QUEUE_SCAN_SHARE_CONF_KEY is only needed by RWQueueRpcExecutor, and 
> CALL_QUEUE_CODEL_TARGET_DELAY, CALL_QUEUE_CODEL_INTERVAL and 
> CALL_QUEUE_CODEL_LIFO_THRESHOLD are only needed by AdaptiveLifoCoDelCallQueue.
> So I think we can refactor this. Suggestions are welcome.
> Review board: https://reviews.apache.org/r/53726/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17088) Refactor RWQueueRpcExecutor/BalancedQueueRpcExecutor/RpcExecutor

2016-11-15 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-17088:
---
Attachment: (was: HBASE-17088-v3.patch)

> Refactor RWQueueRpcExecutor/BalancedQueueRpcExecutor/RpcExecutor
> 
>
> Key: HBASE-17088
> URL: https://issues.apache.org/jira/browse/HBASE-17088
> Project: HBase
>  Issue Type: Improvement
>  Components: rpc
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-17088-v1.patch, HBASE-17088-v2.patch
>
>
> 1. RWQueueRpcExecutor has eight constructors, and the longest one has ten 
> parameters. But it is only used in SimpleRpcScheduler, and it is easy to get 
> confused when reading the code.
> 2. There are duplicate method implementations in RWQueueRpcExecutor and 
> BalancedQueueRpcExecutor. They can be implemented in their parent class 
> RpcExecutor.
> 3. SimpleRpcScheduler reads many configs to construct the RpcExecutors. But 
> CALL_QUEUE_SCAN_SHARE_CONF_KEY is only needed by RWQueueRpcExecutor, and 
> CALL_QUEUE_CODEL_TARGET_DELAY, CALL_QUEUE_CODEL_INTERVAL and 
> CALL_QUEUE_CODEL_LIFO_THRESHOLD are only needed by AdaptiveLifoCoDelCallQueue.
> So I think we can refactor this. Suggestions are welcome.
> Review board: https://reviews.apache.org/r/53726/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HBASE-16940) Address review of "Backup/Restore (HBASE-7912, HBASE-14030, HBASE-14123) mega patch" posted on RB

2016-11-15 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16940?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov reopened HBASE-16940:
---

Reopening the issue to address the most recent comments.

> Address review of "Backup/Restore (HBASE-7912, HBASE-14030, HBASE-14123) mega 
> patch" posted on RB 
> --
>
> Key: HBASE-16940
> URL: https://issues.apache.org/jira/browse/HBASE-16940
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: 2.0.0
>
> Attachments: HBASE-16940-v1.patch, HBASE-16940-v2.patch
>
>
> Review 52748 remaining issues.
> https://reviews.apache.org/r/52748



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Reopened] (HBASE-16825) Backup mega patch: Review 52748 work

2016-11-15 Thread Vladimir Rodionov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16825?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov reopened HBASE-16825:
---

> Backup mega patch: Review 52748 work
> 
>
> Key: HBASE-16825
> URL: https://issues.apache.org/jira/browse/HBASE-16825
> Project: HBase
>  Issue Type: Task
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
> Fix For: HBASE-7912
>
>
> This ticket to address comments/issues raised in RB:
> https://reviews.apache.org/r/52748



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12341) Overhaul logging; log4j2, machine-readable, etc.

2016-11-15 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668760#comment-15668760
 ] 

Andrew Purtell commented on HBASE-12341:


slf4j instead of commons-logging for logging facade wouldn't be the worst 
option. If both we and Hadoop specify differing slf4j versions I think Maven 
would do the right thing and only one slf4j implementation jar would be on the 
classpath of our binary convenience artifacts, avoiding the annoying message 
Stack mentions.

> Overhaul logging; log4j2, machine-readable, etc.
> 
>
> Key: HBASE-12341
> URL: https://issues.apache.org/jira/browse/HBASE-12341
> Project: HBase
>  Issue Type: Umbrella
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.0
>
>
> This is a general umbrella issue for 2.x logging improvements. Hang related 
> work off this one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17091) IntegrationTestZKAndFSPermissions failed with 'KeeperException$NoNodeException'

2016-11-15 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17091?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668759#comment-15668759
 ] 

Hudson commented on HBASE-17091:


SUCCESS: Integrated in Jenkins build HBase-1.1-JDK8 #1903 (See 
[https://builds.apache.org/job/HBase-1.1-JDK8/1903/])
HBASE-17091 IntegrationTestZKAndFSPermissions failed with (enis: rev 
a8628ee9a2c3e32347cf091db3bb3d67789fd29a)
* (edit) 
hbase-it/src/test/java/org/apache/hadoop/hbase/test/IntegrationTestZKAndFSPermissions.java


> IntegrationTestZKAndFSPermissions failed with 
> 'KeeperException$NoNodeException' 
> 
>
> Key: HBASE-17091
> URL: https://issues.apache.org/jira/browse/HBASE-17091
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.8
>
> Attachments: hbase-17091_v1.patch, hbase-17091_v2.patch
>
>
> The test failed with: 
> {code}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,488 INFO  [main] test.IntegrationTestZKAndFSPermissions: Checking 
> ACLs for znode 
> znode:/hbase-secure/region-in-transition/9a1652e7d73eaa66c5fb45e3fa04ac1c 
> acls:[31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|, 
> 31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|]
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,490 INFO  [main] test.IntegrationTestZKAndFSPermissions: Checking 
> ACLs for znode 
> znode:/hbase-secure/region-in-transition/e4ef3a431bcad8036bf3abd6f2caf0e4 
> acls:[31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|, 
> 31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|]
> 2016-11-11 11:33:03,505|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,502 ERROR [main] util.AbstractHBaseTool: Error running command-line 
> tool
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - 
> run()|org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /hbase-secure/region-in-transition/7e352559c4072680e9c73bf892e81d14
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.ZooKeeper.getACL(ZooKeeper.java:1330)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.assertZnodePerms(IntegrationTestZKAndFSPermissions.java:180)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:161)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:167)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:167)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.testZNodeACLs(IntegrationTestZKAndFSPermissions.java:151)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.doWork(IntegrationTestZKAndFSPermissions.java:131)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
> 2016-11-11 11:33:03,508|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> {code}
> Seems like a race condition for ephemeral region-in-transition nodes. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17085) AsyncFSWAL may issue unnecessary AsyncDFSOutput.sync

2016-11-15 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668746#comment-15668746
 ] 

Duo Zhang commented on HBASE-17085:
---

In HBASE-16890 it is 463/272=1.70 and here it is 613/386=1.59. So I think it 
helps a little?

And see my latest comment in HBASE-17049, the sync count metrics of FSHLog and 
AsyncFSWAL can not be compared directly.

Anyway, I will keep trying other methods to aggregate more syncs.

Thanks [~stack].

> AsyncFSWAL may issue unnecessary AsyncDFSOutput.sync
> 
>
> Key: HBASE-17085
> URL: https://issues.apache.org/jira/browse/HBASE-17085
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17085-v1.patch, HBASE-17085-v2.patch, 
> HBASE-17085-v2.patch, HBASE-17085.patch
>
>
> The problem is in the appendAndSync method: we will issue an AsyncDFSOutput.sync 
> if syncFutures is not empty. The SyncFutures in syncFutures can only be 
> removed after an AsyncDFSOutput.sync comes back, so before the 
> AsyncDFSOutput.sync actually returns, we will always issue another 
> AsyncDFSOutput.sync after an append, even if there is no new sync request.
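
Conceptually, the redundant sync can be avoided by remembering the highest txid already covered by an issued sync, along the lines of this sketch (the field and method names are assumptions, not the actual AsyncFSWAL code):
{code}
// Only issue a new sync on the output when an append has gone past what the
// in-flight sync already covers; otherwise the outstanding sync will satisfy
// the pending SyncFutures when it completes.
private long highestAppendedTxid;   // advanced on each append
private long highestSyncIssuedTxid; // advanced each time a sync is issued

private void maybeSync() {
  if (highestAppendedTxid > highestSyncIssuedTxid) {
    highestSyncIssuedTxid = highestAppendedTxid;
    issueSync(highestAppendedTxid); // placeholder for the AsyncDFSOutput sync call
  }
}
{code}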



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17085) AsyncFSWAL may issue unnecessary AsyncDFSOutput.sync

2016-11-15 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668744#comment-15668744
 ] 

Duo Zhang commented on HBASE-17085:
---

In HBASE-16890 it is 463/272=1.70 and here it is 613/386=1.59. So I think it 
helps a little?

And see my latest comment in HBASE-17049, the sync count metrics of FSHLog and 
AsyncFSWAL can not be compared directly.

Anyway, I will keep trying other methods to aggregate more syncs.

Thanks [~stack].

> AsyncFSWAL may issue unnecessary AsyncDFSOutput.sync
> 
>
> Key: HBASE-17085
> URL: https://issues.apache.org/jira/browse/HBASE-17085
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17085-v1.patch, HBASE-17085-v2.patch, 
> HBASE-17085-v2.patch, HBASE-17085.patch
>
>
> The problem is in the appendAndSync method: we will issue an AsyncDFSOutput.sync 
> if syncFutures is not empty. The SyncFutures in syncFutures can only be 
> removed after an AsyncDFSOutput.sync comes back, so before the 
> AsyncDFSOutput.sync actually returns, we will always issue another 
> AsyncDFSOutput.sync after an append, even if there is no new sync request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12341) Overhaul logging; log4j2, machine-readable, etc.

2016-11-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668709#comment-15668709
 ] 

stack commented on HBASE-12341:
---

Sample annoying message dumped on brand new build of master:

{code}
stack@ve0524:~$ ./hbase/bin/hbase --config ~/conf_hbase/ shell
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in 
[jar:file:/home/stack/hbase-2.0.0-SNAPSHOT/lib/slf4j-log4j12-1.6.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in 
[jar:file:/home/stack/hadoop-2.7.3-SNAPSHOT/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
{code}

slf4j is not what we use for logging; it's third-party usage. At least the 
complaint is clean. We need to clean this stuff up. It looks bad.

> Overhaul logging; log4j2, machine-readable, etc.
> 
>
> Key: HBASE-12341
> URL: https://issues.apache.org/jira/browse/HBASE-12341
> Project: HBase
>  Issue Type: Umbrella
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.0
>
>
> This is a general umbrella issue for 2.x logging improvements. Hang related 
> work off this one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-4040) Make HFilePrettyPrinter programmatically invocable and add JSON output

2016-11-15 Thread Mikhail Antonov (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-4040?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668711#comment-15668711
 ] 

Mikhail Antonov commented on HBASE-4040:


Marked as unassigned; the programmatic invocation for that was done elsewhere, 
but not the JSON formatting.

> Make HFilePrettyPrinter programmatically invocable and add JSON output
> --
>
> Key: HBASE-4040
> URL: https://issues.apache.org/jira/browse/HBASE-4040
> Project: HBase
>  Issue Type: New Feature
>  Components: tooling, UI
>Reporter: Riley Patterson
>
> Implement JSON output in HFilePrettyPrinter, similar to the work done for the 
> HLogPrettyPrinter, so that scripts can easily parse the information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-4040) Make HFilePrettyPrinter programmatically invocable and add JSON output

2016-11-15 Thread Mikhail Antonov (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-4040?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mikhail Antonov updated HBASE-4040:
---
Assignee: (was: Mikhail Antonov)

> Make HFilePrettyPrinter programmatically invocable and add JSON output
> --
>
> Key: HBASE-4040
> URL: https://issues.apache.org/jira/browse/HBASE-4040
> Project: HBase
>  Issue Type: New Feature
>  Components: tooling, UI
>Reporter: Riley Patterson
>
> Implement JSON output in HFilePrettyPrinter, similar to the work done for the 
> HLogPrettyPrinter, so that scripts can easily parse the information.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16506) Use subprocedure of Proc V2 for snapshot in BackupProcedure

2016-11-15 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16506?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668644#comment-15668644
 ] 

Ted Yu commented on HBASE-16506:


This no longer applies after the recent refactoring.

Planning to resolve.

> Use subprocedure of Proc V2 for snapshot in BackupProcedure
> ---
>
> Key: HBASE-16506
> URL: https://issues.apache.org/jira/browse/HBASE-16506
> Project: HBase
>  Issue Type: Improvement
>Reporter: Ted Yu
>Assignee: Matteo Bertozzi
>Priority: Blocker
>  Labels: backup, snapshot
> Fix For: 2.0.0
>
>
> Currently for SNAPSHOT_TABLES stage, we loop through the tables and take 
> snapshot for each table.
> If the master restarts in the middle of this stage, we would restart taking 
> snapshot from the first table.
> This issue would use subprocedure for each snapshot so that we don't need to 
> take snapshot for the table(s) whose snapshot is complete before the master 
> restart.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17104) Improve cryptic error message "Memstore size is" on region close

2016-11-15 Thread Matteo Bertozzi (JIRA)
Matteo Bertozzi created HBASE-17104:
---

 Summary: Improve cryptic error message "Memstore size is" on 
region close
 Key: HBASE-17104
 URL: https://issues.apache.org/jira/browse/HBASE-17104
 Project: HBase
  Issue Type: Bug
  Components: regionserver
Reporter: Matteo Bertozzi
Priority: Trivial
 Fix For: 2.0.0


while grepping my RS log for ERROR I found a cryptic
{noformat}
ERROR [RS_CLOSE_REGION-u1604vm:35021-1] regionserver.HRegion(1601): Memstore 
size is 33744
{noformat}

From the code it looks like we want to notify the user that on close the RS 
was not able to flush and there was still data in the memstore. 
https://github.com/apache/hbase/blob/c3685760f004450667920144f926383eb307de53/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java#L1601
{code}
if (!canFlush) {
  this.decrMemstoreSize(new MemstoreSize(memstoreDataSize.get(), 
getMemstoreHeapOverhead()));
} else if (memstoreDataSize.get() != 0) {
  LOG.error("Memstore size is " + memstoreDataSize.get());
}
{code}
This should probably not even be an error but a warn or even info: unless we 
have puts that specifically asked not to be written to the WAL, the data in the 
memstore should be safe in the WALs. 
In any case it would be nice to have a message describing what is going on and 
why we are notifying about the memstore size.
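
For example, a more descriptive message along these lines (wording and level are only suggestions) would make the situation clearer:
{code}
LOG.warn("Memstore data size of " + memstoreDataSize.get()
    + " bytes was not flushed while closing region "
    + getRegionInfo().getEncodedName()
    + "; unless writes used Durability.SKIP_WAL, the edits are still recoverable from the WAL");
{code}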



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-16981) Expand Mob Compaction Partition policy from daily to weekly, monthly and beyond

2016-11-15 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668595#comment-15668595
 ] 

huaxiang sun edited comment on HBASE-16981 at 11/15/16 10:43 PM:
-

One question, today the mob compact chore can be controlled by how often it 
needs to run with
MOB_COMPACTION_CHORE_PERIOD (the default is one week)

And with MOB_COMPACTION_MERGEABLE_THRESHOLD, it can be configured that files 
larger than the threshold will be skipped by the minor mob compact. 

Are these not enough to reduce IO? assuming that the mob compact chore causes 
the situation Anoop described, not the manual mob compaction.




was (Author: huaxiang):
One question, today the mob compact chore can be controlled by how often it 
needs to run with
MOB_COMPACTION_CHORE_PERIOD (the default is one week)

And with MOB_COMPACTION_MERGEABLE_THRESHOLD, it can be configured that files 
larger than the threshold will be compacted by the minor mob compact. 

Are these not enough to reduce IO? assuming that the mob compact chore causes 
the situation Anoop described, not the manual mob compaction.



> Expand Mob Compaction Partition policy from daily to weekly, monthly and 
> beyond
> ---
>
> Key: HBASE-16981
> URL: https://issues.apache.org/jira/browse/HBASE-16981
> Project: HBase
>  Issue Type: New Feature
>  Components: mob
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-16981.master.001.patch, 
> HBASE-16981.master.002.patch, 
> Supportingweeklyandmonthlymobcompactionpartitionpolicyinhbase.pdf
>
>
> Today the mob region holds all mob files for all regions. With the daily 
> partition mob compaction policy, after major mob compaction there is still 
> one file per region per day. Given there are 365 days in one year, that is at 
> least 365 files per region. Since HDFS has a limitation on the number of files 
> under one folder, this is not going to scale if there are lots of regions. To 
> reduce the mob file count, we want to introduce other partition policies, such 
> as weekly or monthly, to compact mob files within one week or month into one 
> file. This jira is created to track this effort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16708) Expose endpoint Coprocessor name in "responseTooSlow" log messages

2016-11-15 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-16708:
-
Affects Version/s: (was: 1.1.2)
   2.0.0
   Status: Patch Available  (was: Open)

> Expose endpoint Coprocessor name in "responseTooSlow" log messages
> --
>
> Key: HBASE-16708
> URL: https://issues.apache.org/jira/browse/HBASE-16708
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Nick Dimiduk
>Assignee: Yi Liang
> Attachments: HBASE-16708-v1.patch
>
>
> Operational diagnostics of a Phoenix install would be easier if we included 
> which endpoint coprocessor was being called in this responseTooSlow WARN 
> message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16708) Expose endpoint Coprocessor name in "responseTooSlow" log messages

2016-11-15 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-16708:
-
Status: Open  (was: Patch Available)

> Expose endpoint Coprocessor name in "responseTooSlow" log messages
> --
>
> Key: HBASE-16708
> URL: https://issues.apache.org/jira/browse/HBASE-16708
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.1.2
>Reporter: Nick Dimiduk
>Assignee: Yi Liang
> Attachments: HBASE-16708-v1.patch
>
>
> Operational diagnostics of a Phoenix install would be easier if we included 
> which endpoint coprocessor was being called in this responseTooSlow WARN 
> message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16708) Expose endpoint Coprocessor name in "responseTooSlow" log messages

2016-11-15 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-16708:
-
Status: Patch Available  (was: In Progress)

> Expose endpoint Coprocessor name in "responseTooSlow" log messages
> --
>
> Key: HBASE-16708
> URL: https://issues.apache.org/jira/browse/HBASE-16708
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.1.2
>Reporter: Nick Dimiduk
>Assignee: Yi Liang
> Attachments: HBASE-16708-v1.patch
>
>
> Operational diagnostics of a Phoenix install would be easier if we included 
> which endpoint coprocessor was being called in this responseTooSlow WARN 
> message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16981) Expand Mob Compaction Partition policy from daily to weekly, monthly and beyond

2016-11-15 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668595#comment-15668595
 ] 

huaxiang sun commented on HBASE-16981:
--

One question, today the mob compact chore can be controlled by how often it 
needs to run with
MOB_COMPACTION_CHORE_PERIOD (the default is one week)

And with MOB_COMPACTION_MERGEABLE_THRESHOLD, it can be configured that files 
larger than the threshold will be compacted by the minor mob compact. 

Are these not enough to reduce IO? assuming that the mob compact chore causes 
the situation Anoop described, not the manual mob compaction.
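
For context, both knobs are plain configuration settings; a sketch (the property names are assumptions based on the constants referenced above, and the values are only examples):
{code}
Configuration conf = HBaseConfiguration.create();
// Assumed key behind MOB_COMPACTION_CHORE_PERIOD; value in seconds (one week).
conf.setInt("hbase.mob.compaction.chore.period", 7 * 24 * 3600);
// Assumed key behind MOB_COMPACTION_MERGEABLE_THRESHOLD; files larger than this
// are skipped by the periodic (minor) mob compaction.
conf.setLong("hbase.mob.compaction.mergeable.threshold", 1280L * 1024 * 1024);
{code}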



> Expand Mob Compaction Partition policy from daily to weekly, monthly and 
> beyond
> ---
>
> Key: HBASE-16981
> URL: https://issues.apache.org/jira/browse/HBASE-16981
> Project: HBase
>  Issue Type: New Feature
>  Components: mob
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-16981.master.001.patch, 
> HBASE-16981.master.002.patch, 
> Supportingweeklyandmonthlymobcompactionpartitionpolicyinhbase.pdf
>
>
> Today the mob region holds all mob files for all regions. With the daily 
> partition mob compaction policy, after major mob compaction there is still 
> one file per region per day. Given there are 365 days in one year, that is at 
> least 365 files per region. Since HDFS has a limitation on the number of files 
> under one folder, this is not going to scale if there are lots of regions. To 
> reduce the mob file count, we want to introduce other partition policies, such 
> as weekly or monthly, to compact mob files within one week or month into one 
> file. This jira is created to track this effort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16981) Expand Mob Compaction Partition policy from daily to weekly, monthly and beyond

2016-11-15 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668559#comment-15668559
 ] 

huaxiang sun commented on HBASE-16981:
--

Hi [~anoop.hbase], this is a good point; I will spend some time thinking about 
it and get back to you, thanks.

> Expand Mob Compaction Partition policy from daily to weekly, monthly and 
> beyond
> ---
>
> Key: HBASE-16981
> URL: https://issues.apache.org/jira/browse/HBASE-16981
> Project: HBase
>  Issue Type: New Feature
>  Components: mob
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-16981.master.001.patch, 
> HBASE-16981.master.002.patch, 
> Supportingweeklyandmonthlymobcompactionpartitionpolicyinhbase.pdf
>
>
> Today the mob region holds all mob files for all regions. With the daily 
> partition mob compaction policy, after major mob compaction there is still 
> one file per region per day. Given there are 365 days in one year, that is at 
> least 365 files per region. Since HDFS has a limitation on the number of files 
> under one folder, this is not going to scale if there are lots of regions. To 
> reduce the mob file count, we want to introduce other partition policies, such 
> as weekly or monthly, to compact mob files within one week or month into one 
> file. This jira is created to track this effort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16708) Expose endpoint Coprocessor name in "responseTooSlow" log messages

2016-11-15 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-16708:
-
Attachment: HBASE-16708-v1.patch

> Expose endpoint Coprocessor name in "responseTooSlow" log messages
> --
>
> Key: HBASE-16708
> URL: https://issues.apache.org/jira/browse/HBASE-16708
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.1.2
>Reporter: Nick Dimiduk
>Assignee: Yi Liang
> Attachments: HBASE-16708-v1.patch
>
>
> Operational diagnostics of a Phoenix install would be easier if we included 
> which endpoint coprocessor was being called in this responseTooSlow WARN 
> message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16708) Expose endpoint Coprocessor name in "responseTooSlow" log messages

2016-11-15 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-16708:
-
Attachment: (was: HBASE-16708-v1.patch)

> Expose endpoint Coprocessor name in "responseTooSlow" log messages
> --
>
> Key: HBASE-16708
> URL: https://issues.apache.org/jira/browse/HBASE-16708
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.1.2
>Reporter: Nick Dimiduk
>Assignee: Yi Liang
>
> Operational diagnostics of a Phoenix install would be easier if we included 
> which endpoint coprocessor was being called in this responseTooSlow WARN 
> message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-16708) Expose endpoint Coprocessor name in "responseTooSlow" log messages

2016-11-15 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-16708:
-
Attachment: HBASE-16708-v1.patch

> Expose endpoint Coprocessor name in "responseTooSlow" log messages
> --
>
> Key: HBASE-16708
> URL: https://issues.apache.org/jira/browse/HBASE-16708
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.1.2
>Reporter: Nick Dimiduk
>Assignee: Yi Liang
>
> Operational diagnostics of a Phoenix install would be easier if we included 
> which endpoint coprocessor was being called in this responseTooSlow WARN 
> message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-16708) Expose endpoint Coprocessor name in "responseTooSlow" log messages

2016-11-15 Thread Yi Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668536#comment-15668536
 ] 

Yi Liang edited comment on HBASE-16708 at 11/15/16 10:07 PM:
-

Hi [~ndimiduk]
   Following your comment to show the full class name: the full class name is 
only stored in a map called coprocessorServiceHandlers in each region, which is 
used to register each coprocessor and maps each service name to its service 
instance. Using my case as an example, coprocessorServiceHandlers will store 
the SumService name mapped to the SumEndPoint instance; the SumService is 
generated from xx.proto, and SumEndPoint is the coprocessor class extending 
SumService. So the only way to get the full class name is through 
coprocessorServiceHandlers. The details of how to access 
coprocessorServiceHandlers are in the patch; could you help me review it? 
Thanks
The current output looks like: 
{code}
2016-11-15 13:34:22,191 WARN  
[RpcServer.FifoWFPBQ.default.handler=59,queue=5,port=16020] ipc.RpcServer: 
(responseTooLarge): 
{"call":"ExecService(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$CoprocessorServiceRequest)","starttimems":1479245662059,"responsesize":127,"method":"ExecService","param":"endpoint
 coprocessor= class 
org.myname.hbase.coprocessor.endpoint.SumEndPoint:getSum","processingtimems":131,"client":"172.16.156.175:44892","queuetimems":1,"class":"HRegionServer"}
{code}
 


was (Author: easyliangjob):
Hi [~ndimiduk]
   Following up on your comment about showing the full class name: the full 
class name is only available in a map called coprocessorServiceHandlers, which 
each region uses to register its coprocessors; it stores (service name, 
coprocessor instance) pairs. In my case, coprocessorServiceHandlers stores 
(SumService, SumEndPoint instance), where SumService is generated from xx.proto 
and SumEndPoint is the coprocessor class extending SumService. So the only way 
to get the full class name is through coprocessorServiceHandlers. The details of 
how the patch accesses coprocessorServiceHandlers are in the patch; could you 
help review it? Thanks.
The current output looks like: 
{code}
2016-11-15 13:34:22,191 WARN  
[RpcServer.FifoWFPBQ.default.handler=59,queue=5,port=16020] ipc.RpcServer: 
(responseTooLarge): 
{"call":"ExecService(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$CoprocessorServiceRequest)","starttimems":1479245662059,"responsesize":127,"method":"ExecService","param":"endpoint
 coprocessor= class 
org.myname.hbase.coprocessor.endpoint.SumEndPoint:getSum","processingtimems":131,"client":"172.16.156.175:44892","queuetimems":1,"class":"HRegionServer"}
{code}
 

> Expose endpoint Coprocessor name in "responseTooSlow" log messages
> --
>
> Key: HBASE-16708
> URL: https://issues.apache.org/jira/browse/HBASE-16708
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.1.2
>Reporter: Nick Dimiduk
>Assignee: Yi Liang
>
> Operational diagnostics of a Phoenix install would be easier if we included 
> which endpoint coprocessor was being called in this responseTooSlow WARN 
> message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HBASE-16708) Expose endpoint Coprocessor name in "responseTooSlow" log messages

2016-11-15 Thread Yi Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668536#comment-15668536
 ] 

Yi Liang edited comment on HBASE-16708 at 11/15/16 10:07 PM:
-

Hi [~ndimiduk]
   Following up on your comment about showing the full class name: the full 
class name is only available in a map called coprocessorServiceHandlers, which 
each region uses to register its coprocessors; it stores (service name, 
coprocessor instance) pairs. In my case, coprocessorServiceHandlers stores 
(SumService, SumEndPoint instance), where SumService is generated from xx.proto 
and SumEndPoint is the coprocessor class extending SumService. So the only way 
to get the full class name is through coprocessorServiceHandlers. The details of 
how the patch accesses coprocessorServiceHandlers are in the patch; could you 
help review it? Thanks.
The current output looks like: 
{code}
2016-11-15 13:34:22,191 WARN  
[RpcServer.FifoWFPBQ.default.handler=59,queue=5,port=16020] ipc.RpcServer: 
(responseTooLarge): 
{"call":"ExecService(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$CoprocessorServiceRequest)","starttimems":1479245662059,"responsesize":127,"method":"ExecService","param":"endpoint
 coprocessor= class 
org.myname.hbase.coprocessor.endpoint.SumEndPoint:getSum","processingtimems":131,"client":"172.16.156.175:44892","queuetimems":1,"class":"HRegionServer"}
{code}
 


was (Author: easyliangjob):
Hi [~ndimiduk]
   Following up on your comment about showing the full class name: the full 
class name is only available in a map called coprocessorServiceHandlers, which 
each region uses to register its coprocessors; it stores (service name, 
coprocessor instance) pairs. In my case, coprocessorServiceHandlers stores 
(SumService, SumEndPoint instance), where SumService is generated from xx.proto 
and SumEndPoint is the coprocessor class extending SumService. So the only way 
to get the full class name is through coprocessorServiceHandlers. The details of 
how the patch accesses coprocessorServiceHandlers are in the patch; could you 
help review it? Thanks.
The current output looks like: 
{code}
2016-11-15 13:34:22,191 WARN  
[RpcServer.FifoWFPBQ.default.handler=59,queue=5,port=16020] ipc.RpcServer: 
(responseTooLarge): 
{"call":"ExecService(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$CoprocessorServiceRequest)","starttimems":1479245662059,"responsesize":127,"method":"ExecService","param":"endpoint
 coprocessor= class 
org.myname.hbase.coprocessor.endpoint.SumEndPoint:getSum","processingtimems":131,"client":"172.16.156.175:44892","queuetimems":1,"class":"HRegionServer"}
{code}
 

> Expose endpoint Coprocessor name in "responseTooSlow" log messages
> --
>
> Key: HBASE-16708
> URL: https://issues.apache.org/jira/browse/HBASE-16708
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.1.2
>Reporter: Nick Dimiduk
>Assignee: Yi Liang
>
> Operational diagnostics of a Phoenix install would be easier if we included 
> which endpoint coprocessor was being called in this responseTooSlow WARN 
> message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16708) Expose endpoint Coprocessor name in "responseTooSlow" log messages

2016-11-15 Thread Yi Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16708?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668536#comment-15668536
 ] 

Yi Liang commented on HBASE-16708:
--

Hi [~ndimiduk]
   Following up on your comment about showing the full class name: the full 
class name is only available in a map called coprocessorServiceHandlers, which 
each region uses to register its coprocessors; it stores (service name, 
coprocessor instance) pairs. In my case, coprocessorServiceHandlers stores 
(SumService, SumEndPoint instance), where SumService is generated from xx.proto 
and SumEndPoint is the coprocessor class extending SumService. So the only way 
to get the full class name is through coprocessorServiceHandlers. The details of 
how the patch accesses coprocessorServiceHandlers are in the patch; could you 
help review it? Thanks.
The current output looks like: 
{code}
2016-11-15 13:34:22,191 WARN  
[RpcServer.FifoWFPBQ.default.handler=59,queue=5,port=16020] ipc.RpcServer: 
(responseTooLarge): 
{"call":"ExecService(org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$CoprocessorServiceRequest)","starttimems":1479245662059,"responsesize":127,"method":"ExecService","param":"endpoint
 coprocessor= class 
org.myname.hbase.coprocessor.endpoint.SumEndPoint:getSum","processingtimems":131,"client":"172.16.156.175:44892","queuetimems":1,"class":"HRegionServer"}
{code}
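
A minimal sketch of the lookup described above, assuming (as in the comment) a per-region map named coprocessorServiceHandlers keyed by service name; the class and method names are just the ones from the example log:
{code}
// Hedged sketch: resolve the implementing endpoint class from the registered
// services so it can be appended to the responseTooSlow/responseTooLarge line.
import java.util.HashMap;
import java.util.Map;
import com.google.protobuf.Service;

public class EndpointNameLookup {
  private final Map<String, Service> coprocessorServiceHandlers = new HashMap<>();

  public void register(Service service) {
    // keyed by the protobuf service name, e.g. "SumService"
    coprocessorServiceHandlers.put(service.getDescriptorForType().getName(), service);
  }

  /** Returns e.g. "org.myname.hbase.coprocessor.endpoint.SumEndPoint:getSum". */
  public String describeCall(String serviceName, String methodName) {
    Service handler = coprocessorServiceHandlers.get(serviceName);
    String className = handler == null ? serviceName : handler.getClass().getName();
    return className + ":" + methodName;
  }
}
{code}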
 

> Expose endpoint Coprocessor name in "responseTooSlow" log messages
> --
>
> Key: HBASE-16708
> URL: https://issues.apache.org/jira/browse/HBASE-16708
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 1.1.2
>Reporter: Nick Dimiduk
>Assignee: Yi Liang
>
> Operational diagnostics of a Phoenix install would be easier if we included 
> which endpoint coprocessor was being called in this responseTooSlow WARN 
> message.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17082) ForeignExceptionUtil isn’t packaged when building shaded protocol with -Pcompile-protobuf

2016-11-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668502#comment-15668502
 ] 

stack commented on HBASE-17082:
---

Or, hang on... poking around a bit more first.

> ForeignExceptionUtil isn’t packaged when building shaded protocol with 
> -Pcompile-protobuf
> -
>
> Key: HBASE-17082
> URL: https://issues.apache.org/jira/browse/HBASE-17082
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: 17082_attempted_fix.txt, HBASE-17082.nothing.patch, 
> HBASE-17082.nothing.patch, HBASE-17082.v0.patch, HBASE-17082.v1.patch, 
> patch-unit-hbase-client (after v1.patch).txt, patch-unit-hbase-server (after 
> v1.patch).txt
>
>
> The source folder is replaced, from src/main/java to 
> project.build.directory/protoc-generated-sources, when building the shaded 
> protocol with -Pcompile-protobuf, but ForeignExceptionUtil is not copied over. 
> The final jar therefore lacks ForeignExceptionUtil, which causes test errors 
> for hbase-client and hbase-server.
> {noformat}
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:[169,36]
>  cannot find symbol
>   symbol:   class ForeignExceptionUtil
>   location: package org.apache.hadoop.hbase.util
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:[100,36]
>  cannot find symbol
>   symbol:   class ForeignExceptionUtil
>   location: package org.apache.hadoop.hbase.util
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:[2144,17]
>  cannot find symbol
>   symbol:   variable ForeignExceptionUtil
>   location: class org.apache.hadoop.hbase.regionserver.HRegionServer
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:[938,32]
>  cannot find symbol
>   symbol:   variable ForeignExceptionUtil
>   location: class org.apache.hadoop.hbase.master.MasterRpcServices
> {noformat}
> This bug blocks patches against the hbase-protocol-shaded module. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17082) ForeignExceptionUtil isn’t packaged when building shaded protocol with -Pcompile-protobuf

2016-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668485#comment-15668485
 ] 

Hadoop QA commented on HBASE-17082:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 11s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 15s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
37s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 7s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 8m 
26s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 46s 
{color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 6s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 8m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
24m 17s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:red}-1{color} | {color:red} hbaseprotoc {color} | {color:red} 0m 19s 
{color} | {color:red} Patch generated 1 new protoc errors in hbase-server. 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
42s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 14s 
{color} | {color:red} hbase-client generated 1 new + 13 unchanged - 0 fixed = 
14 total (was 13) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 24s 
{color} | {color:red} hbase-server generated 4 new + 1 unchanged - 0 fixed = 5 
total (was 1) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 12s {color} 
| {color:red} hbase-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 19s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 62m 4s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839042/HBASE-17082.nothing.patch
 |
| JIRA Issue | HBASE-17082 |
| Optional Tests |  

[jira] [Commented] (HBASE-17082) ForeignExceptionUtil isn’t packaged when building shaded protocol with -Pcompile-protobuf

2016-11-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668481#comment-15668481
 ] 

stack commented on HBASE-17082:
---

Reverting my attempt since it did not work. Same issue still.

> ForeignExceptionUtil isn’t packaged when building shaded protocol with 
> -Pcompile-protobuf
> -
>
> Key: HBASE-17082
> URL: https://issues.apache.org/jira/browse/HBASE-17082
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: 17082_attempted_fix.txt, HBASE-17082.nothing.patch, 
> HBASE-17082.nothing.patch, HBASE-17082.v0.patch, HBASE-17082.v1.patch, 
> patch-unit-hbase-client (after v1.patch).txt, patch-unit-hbase-server (after 
> v1.patch).txt
>
>
> The source folder is replaced, from src/main/java to 
> project.build.directory/protoc-generated-sources, when building the shaded 
> protocol with -Pcompile-protobuf, but ForeignExceptionUtil is not copied over. 
> The final jar therefore lacks ForeignExceptionUtil, which causes test errors 
> for hbase-client and hbase-server.
> {noformat}
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:[169,36]
>  cannot find symbol
>   symbol:   class ForeignExceptionUtil
>   location: package org.apache.hadoop.hbase.util
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:[100,36]
>  cannot find symbol
>   symbol:   class ForeignExceptionUtil
>   location: package org.apache.hadoop.hbase.util
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:[2144,17]
>  cannot find symbol
>   symbol:   variable ForeignExceptionUtil
>   location: class org.apache.hadoop.hbase.regionserver.HRegionServer
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:[938,32]
>  cannot find symbol
>   symbol:   variable ForeignExceptionUtil
>   location: class org.apache.hadoop.hbase.master.MasterRpcServices
> {noformat}
> This bug blocks patches against the hbase-protocol-shaded module. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17082) ForeignExceptionUtil isn’t packaged when building shaded protocol with -Pcompile-protobuf

2016-11-15 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668468#comment-15668468
 ] 

Hadoop QA commented on HBASE-17082:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 16s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
41s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 9s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 8m 
33s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
24s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 42s 
{color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 7s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 1m 8s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 8m 
27s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
23s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
24m 7s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:red}-1{color} | {color:red} hbaseprotoc {color} | {color:red} 0m 19s 
{color} | {color:red} Patch generated 1 new protoc errors in hbase-server. 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 
40s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 14s 
{color} | {color:red} hbase-client generated 1 new + 13 unchanged - 0 fixed = 
14 total (was 13) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 22s 
{color} | {color:red} hbase-server generated 4 new + 1 unchanged - 0 fixed = 5 
total (was 1) {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 22s 
{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 11s {color} 
| {color:red} hbase-client in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 0m 19s {color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 61m 55s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839042/HBASE-17082.nothing.patch
 |
| JIRA Issue | HBASE-17082 |
| Optional Tests |  

[jira] [Updated] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2016-11-15 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16179:
---
Attachment: 16179.v13.txt

Patch v13 updates to Spark 2.0.2

> Fix compilation errors when building hbase-spark against Spark 2.0
> --
>
> Key: HBASE-16179
> URL: https://issues.apache.org/jira/browse/HBASE-16179
> Project: HBase
>  Issue Type: Bug
>  Components: spark
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 16179.v0.txt, 16179.v1.txt, 16179.v1.txt, 16179.v10.txt, 
> 16179.v11.txt, 16179.v12.txt, 16179.v12.txt, 16179.v12.txt, 16179.v13.txt, 
> 16179.v4.txt, 16179.v5.txt, 16179.v7.txt, 16179.v8.txt, 16179.v9.txt
>
>
> I tried building the hbase-spark module against a Spark 2.0 snapshot and got 
> the following compilation errors:
> http://pastebin.com/bg3w247a
> Some Spark classes, such as DataTypeParser and Logging, are no longer 
> accessible to downstream projects.
> The hbase-spark module should not depend on such classes.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17091) IntegrationTestZKAndFSPermissions failed with 'KeeperException$NoNodeException'

2016-11-15 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-17091:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 1.1.8
   1.2.5
   1.4.0
   1.3.0
   2.0.0
   Status: Resolved  (was: Patch Available)

Pushed to 1.1+. This is a test-only change, so I've also pushed it to branch-1.3 
since it won't destabilize the branch. 

> IntegrationTestZKAndFSPermissions failed with 
> 'KeeperException$NoNodeException' 
> 
>
> Key: HBASE-17091
> URL: https://issues.apache.org/jira/browse/HBASE-17091
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.3.0, 1.4.0, 1.2.5, 1.1.8
>
> Attachments: hbase-17091_v1.patch, hbase-17091_v2.patch
>
>
> The test failed with: 
> {code}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,488 INFO  [main] test.IntegrationTestZKAndFSPermissions: Checking 
> ACLs for znode 
> znode:/hbase-secure/region-in-transition/9a1652e7d73eaa66c5fb45e3fa04ac1c 
> acls:[31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|, 
> 31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|]
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,490 INFO  [main] test.IntegrationTestZKAndFSPermissions: Checking 
> ACLs for znode 
> znode:/hbase-secure/region-in-transition/e4ef3a431bcad8036bf3abd6f2caf0e4 
> acls:[31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|, 
> 31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|]
> 2016-11-11 11:33:03,505|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,502 ERROR [main] util.AbstractHBaseTool: Error running command-line 
> tool
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - 
> run()|org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /hbase-secure/region-in-transition/7e352559c4072680e9c73bf892e81d14
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.ZooKeeper.getACL(ZooKeeper.java:1330)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.assertZnodePerms(IntegrationTestZKAndFSPermissions.java:180)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:161)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:167)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:167)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.testZNodeACLs(IntegrationTestZKAndFSPermissions.java:151)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.doWork(IntegrationTestZKAndFSPermissions.java:131)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
> 2016-11-11 11:33:03,508|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> {code}
> Seems like a race condition for ephemeral region-in-transition nodes. 
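
The trace above comes from getACL() on a znode that vanished mid-walk. A minimal sketch (not the committed patch) of a recursive check that tolerates ephemeral znodes disappearing between listing and inspection:
{code}
// Hedged sketch: walk znodes recursively and skip any node that disappears
// between listing and inspection, as ephemeral region-in-transition znodes can.
import java.util.List;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.ACL;
import org.apache.zookeeper.data.Stat;

public class ZnodeAclWalker {
  private final ZooKeeper zk;

  public ZnodeAclWalker(ZooKeeper zk) {
    this.zk = zk;
  }

  public void checkRecursive(String znode) throws KeeperException, InterruptedException {
    List<ACL> acls;
    try {
      acls = zk.getACL(znode, new Stat());
    } catch (KeeperException.NoNodeException e) {
      return; // ephemeral node went away mid-scan; nothing to verify
    }
    // ... assert the expected (sasl, hbase) ACLs on 'acls' here ...
    List<String> children;
    try {
      children = zk.getChildren(znode, false);
    } catch (KeeperException.NoNodeException e) {
      return; // same race on the children listing
    }
    for (String child : children) {
      checkRecursive(znode.equals("/") ? "/" + child : znode + "/" + child);
    }
  }
}
{code}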



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17091) IntegrationTestZKAndFSPermissions failed with 'KeeperException$NoNodeException'

2016-11-15 Thread Enis Soztutar (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17091?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Enis Soztutar updated HBASE-17091:
--
Attachment: hbase-17091_v2.patch

Thanks for the reviews. This is what I have committed. 

> IntegrationTestZKAndFSPermissions failed with 
> 'KeeperException$NoNodeException' 
> 
>
> Key: HBASE-17091
> URL: https://issues.apache.org/jira/browse/HBASE-17091
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Attachments: hbase-17091_v1.patch, hbase-17091_v2.patch
>
>
> The test failed with: 
> {code}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,488 INFO  [main] test.IntegrationTestZKAndFSPermissions: Checking 
> ACLs for znode 
> znode:/hbase-secure/region-in-transition/9a1652e7d73eaa66c5fb45e3fa04ac1c 
> acls:[31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|, 
> 31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,488|INFO|MainThread|machine.py:142 - run()|]
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,490 INFO  [main] test.IntegrationTestZKAndFSPermissions: Checking 
> ACLs for znode 
> znode:/hbase-secure/region-in-transition/e4ef3a431bcad8036bf3abd6f2caf0e4 
> acls:[31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|, 
> 31,s{'sasl,'hbase}
> 2016-11-11 11:33:03,491|INFO|MainThread|machine.py:142 - run()|]
> 2016-11-11 11:33:03,505|INFO|MainThread|machine.py:142 - run()|2016-11-11 
> 11:33:03,502 ERROR [main] util.AbstractHBaseTool: Error running command-line 
> tool
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - 
> run()|org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode for /hbase-secure/region-in-transition/7e352559c4072680e9c73bf892e81d14
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.zookeeper.ZooKeeper.getACL(ZooKeeper.java:1330)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.assertZnodePerms(IntegrationTestZKAndFSPermissions.java:180)
> 2016-11-11 11:33:03,506|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:161)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:167)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.checkZnodePermsRecursive(IntegrationTestZKAndFSPermissions.java:167)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.testZNodeACLs(IntegrationTestZKAndFSPermissions.java:151)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.test.IntegrationTestZKAndFSPermissions.doWork(IntegrationTestZKAndFSPermissions.java:131)
> 2016-11-11 11:33:03,507|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.hbase.util.AbstractHBaseTool.run(AbstractHBaseTool.java:112)
> 2016-11-11 11:33:03,508|INFO|MainThread|machine.py:142 - run()|at 
> org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:76)
> {code}
> Seems like a race condition for ephemeral region-in-transition nodes. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-12341) Overhaul logging; log4j2, machine-readable, etc.

2016-11-15 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12341?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668415#comment-15668415
 ] 

Enis Soztutar commented on HBASE-12341:
---

I did a 5-minute Google search. 
It seems we are using commons-logging 1.2, which is the latest version, and it 
does not have lazy string interpolation or the Java 8 lambda style. 

Lazy arguments will give us usage like this: 
{code}
  LOG.debug("Found a problem: {} with this {}", arg1, arg2);
{code}

instead of 
{code}
  if (LOG.isDebugEnabled()) {
 LOG.debug("Found a problem:" + arg1 + "with this" + arg2);
  }
{code}

Lambda lazy evaluation would be something like this: 
{code}
  LOG.debug("Result of computing something costly: {}", () -> 
doSomethingCostly()); 
{code}

I don't care too much about lambdas in logging: although the body is evaluated 
lazily, there is still the overhead of capturing the lambda. 

Then the question is whether to move to the log4j2 APIs directly, or to move to 
slf4j first and then to log4j2 after. We can easily do the two steps 
independently. 
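
For reference, a minimal sketch of what the intermediate slf4j step could look like at a single call site (the class and arguments below are illustrative):
{code}
// Hedged sketch: the same message under a guarded commons-logging call and the
// parameterized slf4j form, where formatting is deferred until DEBUG is enabled.
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class LoggingMigrationExample {
  private static final Logger LOG = LoggerFactory.getLogger(LoggingMigrationExample.class);

  void report(Object arg1, Object arg2) {
    // Before (commons-logging style): string concatenation unless explicitly guarded.
    //   if (LOG.isDebugEnabled()) { LOG.debug("Found a problem: " + arg1 + " with this " + arg2); }
    // After (slf4j): no guard needed; the message is only formatted if DEBUG is on.
    LOG.debug("Found a problem: {} with this {}", arg1, arg2);
  }
}
{code}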

> Overhaul logging; log4j2, machine-readable, etc.
> 
>
> Key: HBASE-12341
> URL: https://issues.apache.org/jira/browse/HBASE-12341
> Project: HBase
>  Issue Type: Umbrella
>Reporter: stack
>Priority: Critical
> Fix For: 2.0.0
>
>
> This is a general umbrella issue for 2.x logging improvements. Hang related 
> work off this one.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17085) AsyncFSWAL may issue unnecessary AsyncDFSOutput.sync

2016-11-15 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15668360#comment-15668360
 ] 

stack commented on HBASE-17085:
---

Default:

{code}
...
(Hard to copy in histograms because they were logged over..)
...
7555417 2016-11-15 12:14:25,321 INFO  [main] wal.WALPerformanceEvaluation: 
Summary: threads=100, iterations=10, syncInterval=0 took 382.163s 
26166.846ops/s
7555418 2016-11-15 12:14:25,321 DEBUG [main] regionserver.HRegion: Closing 
WALPerformanceEvaluation:0,,1479240482214.23f202a47ef65215f3d85a5fe08ce8e3.: 
disabling compactions & flushes
7555419 2016-11-15 12:14:25,322 DEBUG [main] regionserver.HRegion: Updates 
disabled for region 
WALPerformanceEvaluation:0,,1479240482214.23f202a47ef65215f3d85a5fe08ce8e3.
7555420 2016-11-15 12:14:25,326 INFO  
[StoreCloserThread-WALPerformanceEvaluation:0,,1479240482214.23f202a47ef65215f3d85a5fe08ce8e3.-1]
 regionserver.HStore: Closed cf0
7555421 2016-11-15 12:14:25,326 INFO  [main] regionserver.HRegion: Closed 
WALPerformanceEvaluation:0,,1479240482214.23f202a47ef65215f3d85a5fe08ce8e3.
7555422 2016-11-15 12:14:25,328 DEBUG [main] wal.FSHLog: Closing WAL writer in 
/user/stack/test-data/300d7166-b729-47da-965d-936ec5f7bb43/WALPerformanceEvaluation/WALs/wals
7555423 2016-11-15 12:14:25,328 DEBUG [main] hdfs.DFSClient: DFSClient 
writeChunk allocating new packet seqno=1260314, 
src=/user/stack/test-data/300d7166-b729-47da-965d-936ec5f7bb43/WALPerformanceEvaluation/WALs/wals/wals.1479240483655,
 packetSize=65016, chunksPerPacket=126, bytesCurBlock=16408064
7555424 2016-11-15 12:14:25,328 DEBUG [main] hdfs.DFSClient: Queued packet 
1260314
7555425 2016-11-15 12:14:25,328 DEBUG [main] hdfs.DFSClient: Queued packet 
1260315
7555426 2016-11-15 12:14:25,328 DEBUG [main] hdfs.DFSClient: Waiting for ack 
for: 1260315
7555427 2016-11-15 12:14:25,328 DEBUG [DataStreamer for file 
/user/stack/test-data/300d7166-b729-47da-965d-936ec5f7bb43/WALPerformanceEvaluation/WALs/wals/wals.1479240483655
 block BP-1837290273-10.17.240.20-1458945429978:blk_1077044408_3304141] 
hdfs.DFSClient: DataStreamer block BP-18372902
73-10.17.240.20-1458945429978:blk_1077044408_3304141 sending packet packet 
seqno: 1260314 offsetInBlock: 16408064 lastPacketInBlock: false 
lastByteOffsetInBlock: 16408265
7555428 2016-11-15 12:14:25,329 DEBUG [ResponseProcessor for block 
BP-1837290273-10.17.240.20-1458945429978:blk_1077044408_3304141] 
hdfs.DFSClient: DFSClient seqno: 1260314 reply: SUCCESS downstreamAckTimeNanos: 
0 flag: 0
7555429 2016-11-15 12:14:25,329 DEBUG [DataStreamer for file 
/user/stack/test-data/300d7166-b729-47da-965d-936ec5f7bb43/WALPerformanceEvaluation/WALs/wals/wals.1479240483655
 block BP-1837290273-10.17.240.20-1458945429978:blk_1077044408_3304141] 
hdfs.DFSClient: DataStreamer block BP-18372902
73-10.17.240.20-1458945429978:blk_1077044408_3304141 sending packet packet 
seqno: 1260315 offsetInBlock: 16408265 lastPacketInBlock: true 
lastByteOffsetInBlock: 16408265
7555430 2016-11-15 12:14:25,329 DEBUG [ResponseProcessor for block 
BP-1837290273-10.17.240.20-1458945429978:blk_1077044408_3304141] 
hdfs.DFSClient: DFSClient seqno: 1260315 reply: SUCCESS downstreamAckTimeNanos: 
0 flag: 0
7555431 2016-11-15 12:14:25,329 DEBUG [DataStreamer for file 
/user/stack/test-data/300d7166-b729-47da-965d-936ec5f7bb43/WALPerformanceEvaluation/WALs/wals/wals.1479240483655
 block BP-1837290273-10.17.240.20-1458945429978:blk_1077044408_3304141] 
hdfs.DFSClient: Closing old block BP-183729027
3-10.17.240.20-1458945429978:blk_1077044408_3304141
7555432 2016-11-15 12:14:25,344 INFO  [main] hbase.MockRegionServerServices: 
Shutting down due to request 'test clean up.'
7555433 2016-11-15 12:14:25,344 INFO  [main] wal.WALPerformanceEvaluation: 
shutting down log roller.
7555434 2016-11-15 12:14:25,345 INFO  [WALPerfEval.logRoller] 
regionserver.LogRoller: LogRoller exiting.
7555435
7555436  Performance counter stats for './hbase/bin/hbase --config 
/home/stack/conf_hbase org.apache.hadoop.hbase.wal.WALPerformanceEvaluation 
-threads 100 -iterations 10 -qualifiers 25 -keySize 50 -valueSize 200':
7555437
7555438 4987370.861017 task-clock (msec) #   12.922 CPUs utilized
7555439 18,957,350 context-switches  #0.004 M/sec
7555440  2,815,124 cpu-migrations#0.564 K/sec
7555441 10,054,046 page-faults   #0.002 M/sec
7555442  9,934,495,287,070 cycles#1.992 GHz
7555443 stalled-cycles-frontend
7555444 stalled-cycles-backend
7555445  3,796,677,865,651 instructions  #0.38  insns per cycle
7555446608,865,750,776 branches  #  122.082 M/sec
7555447  7,478,401,535 branch-misses #1.23% of all branches
7555448
7555449  385.959774164 seconds time elapsed
{code}

With patch:
{code}

36055
36056 -- Histograms 

[jira] [Moved] (HBASE-17103) scannerGet does not throw correct exception

2016-11-15 Thread Jens Geyer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17103?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jens Geyer moved THRIFT-3976 to HBASE-17103:


Affects Version/s: (was: 0.9.3)
  Component/s: (was: PHP - Library)
 Workflow: no-reopen-closed, patch-avail  (was: classic default 
workflow)
  Key: HBASE-17103  (was: THRIFT-3976)
  Project: HBase  (was: Thrift)

> scannerGet does not throw correct exception
> ---
>
> Key: HBASE-17103
> URL: https://issues.apache.org/jira/browse/HBASE-17103
> Project: HBase
>  Issue Type: Bug
> Environment: software
>Reporter: le anh duc
>
> When we use getScanner to loop through the rows of a table, it does not throw 
> a not-found exception when we reach the end of the table, so a while loop like 
> the one in the example will loop forever.
> See this example to get more information:
> https://github.com/apache/hbase/blob/master/hbase-examples/src/main/php/DemoClient.php
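
As a rough illustration (not taken from this issue; it assumes the Thrift1-generated Hbase.Client API and that scannerGet returns an empty list at end-of-scan), a client loop that terminates on an empty batch instead of waiting for a not-found exception:
{code}
// Hedged sketch: stop scanning when scannerGet() returns no rows rather than
// relying on an exception at the end of the table.
import java.nio.ByteBuffer;
import java.util.List;
import java.util.Map;
import org.apache.hadoop.hbase.thrift.generated.Hbase;
import org.apache.hadoop.hbase.thrift.generated.TRowResult;

public class ScanToEnd {
  static void scanAll(Hbase.Client client, ByteBuffer table, ByteBuffer startRow,
      List<ByteBuffer> columns, Map<ByteBuffer, ByteBuffer> attributes) throws Exception {
    int scannerId = client.scannerOpen(table, startRow, columns, attributes);
    try {
      List<TRowResult> batch;
      while (!(batch = client.scannerGet(scannerId)).isEmpty()) {
        for (TRowResult row : batch) {
          System.out.println(row);   // process the row
        }
      }
    } finally {
      client.scannerClose(scannerId);
    }
  }
}
{code}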



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17082) ForeignExceptionUtil isn’t packaged when building shaded protocol with -Pcompile-protobuf

2016-11-15 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-17082:
--
Attachment: HBASE-17082.nothing.patch

Retry the nothing patch

> ForeignExceptionUtil isn’t packaged when building shaded protocol with 
> -Pcompile-protobuf
> -
>
> Key: HBASE-17082
> URL: https://issues.apache.org/jira/browse/HBASE-17082
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: stack
> Fix For: 2.0.0
>
> Attachments: 17082_attempted_fix.txt, HBASE-17082.nothing.patch, 
> HBASE-17082.nothing.patch, HBASE-17082.v0.patch, HBASE-17082.v1.patch, 
> patch-unit-hbase-client (after v1.patch).txt, patch-unit-hbase-server (after 
> v1.patch).txt
>
>
> The source folder is replaced, from src/main/java to 
> project.build.directory/protoc-generated-sources, when building the shaded 
> protocol with -Pcompile-protobuf, but ForeignExceptionUtil is not copied over. 
> The final jar therefore lacks ForeignExceptionUtil, which causes test errors 
> for hbase-client and hbase-server.
> {noformat}
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:[169,36]
>  cannot find symbol
>   symbol:   class ForeignExceptionUtil
>   location: package org.apache.hadoop.hbase.util
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:[100,36]
>  cannot find symbol
>   symbol:   class ForeignExceptionUtil
>   location: package org.apache.hadoop.hbase.util
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:[2144,17]
>  cannot find symbol
>   symbol:   variable ForeignExceptionUtil
>   location: class org.apache.hadoop.hbase.regionserver.HRegionServer
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:[938,32]
>  cannot find symbol
>   symbol:   variable ForeignExceptionUtil
>   location: class org.apache.hadoop.hbase.master.MasterRpcServices
> {noformat}
> This bug blocks patches against the hbase-protocol-shaded module. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

