[jira] [Comment Edited] (HBASE-17118) StoreScanner leaked in KeyValueHeap

2016-11-16 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672406#comment-15672406
 ] 

binlijin edited comment on HBASE-17118 at 11/17/16 7:27 AM:


We found the problem too late, could not find the HFile which caused the 
IllegalArgumentException, and have no clue about the IllegalArgumentException 
yet.


was (Author: aoxiang):
We find the problem so later, and could find the HFile which cause the 
IllegalArgumentException, and have no clue about the IllegalArgumentException 
yet.

> StoreScanner leaked in KeyValueHeap
> ---
>
> Key: HBASE-17118
> URL: https://issues.apache.org/jira/browse/HBASE-17118
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: binlijin
>Assignee: binlijin
> Attachments: HBASE-17118-master_v1.patch, 
> HBASE-17118-master_v2.patch, HBASE-17118-master_v3.patch, StoreScanner.png, 
> StoreScannerLeakHeap.png
>
>
> KeyValueHeap#generalizedSeek
>   KeyValueScanner scanner = current;
>   while (scanner != null) {
> Cell topKey = scanner.peek();
> ..
> boolean seekResult;
> if (isLazy && heap.size() > 0) {
>   // If there is only one scanner left, we don't do lazy seek.
>   seekResult = scanner.requestSeek(seekKey, forward, useBloom);
> } else {
>   seekResult = NonLazyKeyValueScanner.doRealSeek(scanner, seekKey,
>   forward);
> }
> ..
> scanner = heap.poll();
>   }
> (1) scanner = heap.poll() retrieves and removes the head of the queue.
> (2) If scanner.requestSeek(seekKey, forward, useBloom) or 
> NonLazyKeyValueScanner.doRealSeek(scanner, seekKey, forward)
> throws an exception, the scanner will have no chance to be closed, which 
> causes the scanner leak.
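The leak pattern above can be sketched in isolation. This is not HBase's actual code; `SeekableScanner` and `generalizedSeek` below are illustrative stand-ins showing the defensive close that the description says is missing: if seek throws after the scanner has been polled off the heap, nothing else will ever close it.

```java
import java.util.Deque;

// Minimal sketch of the leak described above (names are illustrative,
// not HBase's API): close the polled scanner if its seek throws,
// otherwise it is lost and leaks.
public class ScannerLeakSketch {
  public interface SeekableScanner {
    boolean seek(long key) throws Exception;
    void close();
  }

  /** Seeks every scanner on the heap; closes the current one if its seek throws. */
  public static void generalizedSeek(Deque<SeekableScanner> heap, long key) throws Exception {
    SeekableScanner scanner = heap.poll();
    while (scanner != null) {
      try {
        scanner.seek(key);
      } catch (Exception e) {
        scanner.close(); // without this, the polled scanner leaks
        throw e;
      }
      scanner = heap.poll();
    }
  }
}
```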



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17085) AsyncFSWAL may issue unnecessary AsyncDFSOutput.sync

2016-11-16 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17085?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-17085:
--
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master. Thanks all for reviewing.

> AsyncFSWAL may issue unnecessary AsyncDFSOutput.sync
> 
>
> Key: HBASE-17085
> URL: https://issues.apache.org/jira/browse/HBASE-17085
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17085-v1.patch, HBASE-17085-v2.patch, 
> HBASE-17085-v2.patch, HBASE-17085.patch
>
>
> The problem is in the appendAndSync method: we issue an AsyncDFSOutput.sync 
> if syncFutures is not empty. The SyncFutures in syncFutures can only be 
> removed after an AsyncDFSOutput.sync comes back, so before the 
> AsyncDFSOutput.sync actually returns, we will always issue another 
> AsyncDFSOutput.sync after an append, even if there is no new sync request.
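The redundant-sync problem can be sketched with a simple in-flight flag. This is not the actual AsyncFSWAL code; it is a minimal model of the idea: only issue a new sync when the previous one has completed, rather than once per append while futures are pending.

```java
// Sketch (not the actual AsyncFSWAL implementation): track whether a sync
// is already in flight, and only issue a new one after the previous
// completes, so appends no longer trigger redundant syncs.
public class SyncDedupSketch {
  private boolean syncInFlight = false;
  private int syncsIssued = 0;

  /** Called after each append when there may be pending sync requests. */
  public void maybeSync(boolean hasPendingSyncFutures) {
    if (hasPendingSyncFutures && !syncInFlight) {
      syncInFlight = true;
      syncsIssued++; // stands in for AsyncDFSOutput.sync()
    }
  }

  /** Called when the outstanding sync completes. */
  public void onSyncComplete() {
    syncInFlight = false;
  }

  public int syncsIssued() { return syncsIssued; }
}
```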





[jira] [Updated] (HBASE-16169) Make RegionSizeCalculator scalable

2016-11-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-16169:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: (was: 1.4.0)
   Status: Resolved  (was: Patch Available)

Pushed to master branch.

Why push to 1.4 [~thiruvel]? It adds new API in Admin. 2.0 will be out before 
1.4 is my guess? (You'll be able to rolling upgrade from branch-1 to 2.0 -- is 
the hope)

> Make RegionSizeCalculator scalable
> --
>
> Key: HBASE-16169
> URL: https://issues.apache.org/jira/browse/HBASE-16169
> Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce, scaling
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
> Fix For: 2.0.0
>
> Attachments: HBASE-16169.master.000.patch, 
> HBASE-16169.master.001.patch, HBASE-16169.master.002.patch, 
> HBASE-16169.master.003.patch, HBASE-16169.master.004.patch, 
> HBASE-16169.master.005.patch, HBASE-16169.master.006.patch, 
> HBASE-16169.master.007.patch, HBASE-16169.master.007.patch, 
> HBASE-16169.master.008.patch
>
>
> RegionSizeCalculator is needed for better split generation of MR jobs. This 
> requires RegionLoad which can be obtained via ClusterStatus, i.e. accessing 
> Master. We don't want master to be in this path.
> The proposal is to add an API to the RegionServer that gets RegionLoad of all 
> regions hosted on it or those of a table if specified. RegionSizeCalculator 
> can use the latter.
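The proposed flow can be sketched as follows. The `RegionLoadFetcher` interface below is hypothetical, standing in for the new RegionServer API: each server is asked directly for the load of the regions it hosts for a table, and the results are merged into the region-size map the MR split generator needs, without touching the Master.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the proposed flow: ask each region server for the load of the
// regions it hosts for one table, and merge into a region-size map.
// RegionLoadFetcher is a hypothetical stand-in for the new RS API.
public class RegionSizeSketch {
  public interface RegionLoadFetcher {
    /** Returns region name -> storefile size in MB for a table on one server. */
    Map<String, Long> regionLoadOfTable(String server, String table);
  }

  public static Map<String, Long> regionSizes(Iterable<String> servers, String table,
                                              RegionLoadFetcher fetcher) {
    Map<String, Long> sizes = new HashMap<>();
    for (String server : servers) {
      // Each call goes straight to a RegionServer; the Master is not in this path.
      sizes.putAll(fetcher.regionLoadOfTable(server, table));
    }
    return sizes;
  }
}
```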





[jira] [Updated] (HBASE-12894) Upgrade Jetty to 9.2.6

2016-11-16 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-12894:

Status: In Progress  (was: Patch Available)

{code}
+  <supplement>
+    <project>
+      <groupId>org.javassist</groupId>
+      <artifactId>javassist</artifactId>
+      <licenses>
+        <license>
+          <name>Mozilla Public License Version 1.1</name>
+          <url>https://www.mozilla.org/en-US/MPL/1.1/</url>
+          <distribution>repo</distribution>
+        </license>
+      </licenses>
+    </project>
+  </supplement>
{code}

javassist's website and pom both claim that it can be distributed under ALv2. 
Please switch to that license instead of MPL 1.1.
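A corrected entry might look like the following sketch, using the standard Maven `<license>` element names (the exact URL is an assumption, taken from the canonical ALv2 license text location):

```xml
<license>
  <name>Apache License, Version 2.0</name>
  <url>https://www.apache.org/licenses/LICENSE-2.0.txt</url>
  <distribution>repo</distribution>
</license>
```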

{code}
+  <supplement>
+    <project>
+      <groupId>javax.servlet.jsp</groupId>
+      <artifactId>javax.servlet.jsp-api</artifactId>
+      <licenses>
+        <license>
+          <name>Common Development and Distribution License (CDDL) v1.0</name>
+          <url>https://glassfish.dev.java.net/public/CDDLv1.0.html</url>
+          <distribution>repo</distribution>
+        </license>
+      </licenses>
+    </project>
+  </supplement>
{code}

According to the header on the pom, headers on the source, and [the project 
website|https://jsp.java.net/license.html], this should be CDDL 1.1.

{code}
+  <supplement>
+    <project>
+      <groupId>org.glassfish</groupId>
+      <artifactId>javax.el</artifactId>
+      <licenses>
+        <license>
+          <name>Common Development and Distribution License (CDDL) v1.0</name>
+          <url>https://glassfish.dev.java.net/public/CDDLv1.0.html</url>
+          <distribution>repo</distribution>
+        </license>
+      </licenses>
+    </project>
+  </supplement>
{code}

I think this one is fine, but it's worth including a comment that the project 
page and pom license section indicate CDDL v1.0, while the headers on both the 
pom and all the source files indicate CDDL v1.1.

{code}
+  <supplement>
+    <project>
+      <groupId>org.glassfish.hk2</groupId>
+      <artifactId>hk2-api</artifactId>
+      <licenses>
+        <license>
+          <name>Common Development and Distribution License (CDDL) v1.0</name>
+          <url>https://glassfish.dev.java.net/public/CDDLv1.0.html</url>
+          <distribution>repo</distribution>
+        </license>
+      </licenses>
+    </project>
+  </supplement>
+  <supplement>
+    <project>
+      <groupId>org.glassfish.hk2</groupId>
+      <artifactId>hk2-locator</artifactId>
+      <licenses>
+        <license>
+          <name>Common Development and Distribution License (CDDL) v1.0</name>
+          <url>https://glassfish.dev.java.net/public/CDDLv1.0.html</url>
+          <distribution>repo</distribution>
+        </license>
+      </licenses>
+    </project>
+  </supplement>
+  <supplement>
+    <project>
+      <groupId>org.glassfish.hk2</groupId>
+      <artifactId>hk2-utils</artifactId>
+      <licenses>
+        <license>
+          <name>Common Development and Distribution License (CDDL) v1.0</name>
+          <url>https://glassfish.dev.java.net/public/CDDLv1.0.html</url>
+          <distribution>repo</distribution>
+        </license>
+      </licenses>
+    </project>
+  </supplement>

... SNIP ...
+  <supplement>
+    <project>
+      <groupId>org.glassfish.hk2.external</groupId>
+      <artifactId>aopalliance-repackaged</artifactId>
+      <licenses>
+        <license>
+          <name>Common Development and Distribution License (CDDL) v1.0</name>
+          <url>https://glassfish.dev.java.net/public/CDDLv1.0.html</url>
+          <distribution>repo</distribution>
+        </license>
+      </licenses>
+    </project>
+  </supplement>
+  <supplement>
+    <project>
+      <groupId>org.glassfish.hk2.external</groupId>
+      <artifactId>javax.inject</artifactId>
+      <licenses>
+        <license>
+          <name>Common Development and Distribution License (CDDL) v1.0</name>
+          <url>https://glassfish.dev.java.net/public/CDDLv1.0.html</url>
+          <distribution>repo</distribution>
+        </license>
+      </licenses>
+    </project>
+  </supplement>
{code}

The headers, pom license section, and project page all say these five should be 
CDDL v1.1.

{code}
+  <supplement>
+    <project>
+      <groupId>org.glassfish.jersey.bundles.repackaged</groupId>
+      <artifactId>jersey-guava</artifactId>
+      <licenses>
+        <license>
+          <name>Common Development and Distribution License (CDDL) v1.0</name>
+          <url>https://glassfish.dev.java.net/public/CDDLv1.0.html</url>
+          <distribution>repo</distribution>
+        </license>
+      </licenses>
+    </project>
+  </supplement>
{code}

The headers, pom license section, and project file say this is CDDL v1.1.

I presume the rest of the jersey things listed as CDDLv1.0 probably should also 
be CDDLv1.1. I stopped reviewing at this jersey-guava entry.

Please make the above corrections and review the remaining dependencies to 
ensure they have the correct license information. 

Moving out of patch available pending corrections.

> Upgrade Jetty to 9.2.6
> --
>
> Key: HBASE-12894
> URL: https://issues.apache.org/jira/browse/HBASE-12894
> Project: HBase
>  Issue Type: Improvement
>  Components: REST, UI
>Affects Versions: 0.98.0
>Reporter: Rick Hallihan
>Assignee: Guang Yang
>Priority: Critical
>  Labels: MicrosoftSupport
> Fix For: 2.0.0
>
> Attachments: HBASE-12894_Jetty9_v0.patch, 
> HBASE-12894_Jetty9_v1.patch, HBASE-12894_Jetty9_v1.patch, 
> HBASE-12894_Jetty9_v2.patch, HBASE-12894_Jetty9_v3.patch, 
> HBASE-12894_Jetty9_v4.patch, HBASE-12894_Jetty9_v5.patch, 
> HBASE-12894_Jetty9_v6.patch, HBASE-12894_Jetty9_v7.patch, 
> HBASE-12894_Jetty9_v8.patch, dependency_list_after, dependency_list_before
>
>
> The Jetty component that is used for the HBase Stargate REST endpoint is 
> version 6.1.26 and is fairly outdated. We recently had a customer inquire 
> about enabling cross-origin resource sharing (CORS) for the REST endpoint and 
> found that this older version does not include the necessary filter or 
> configuration options, highlighted at: 
> http://wiki.eclipse.org/Jetty/Feature/Cross_Origin_Filter
> The Jetty project has had significant updates through versions 7, 8 and 9, 
> including a transition to be an Eclipse subproject, so updating to the latest 
> version may be non-trivial. The last update to the Jetty component in 
> 

[jira] [Commented] (HBASE-16169) Make RegionSizeCalculator scalable

2016-11-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672997#comment-15672997
 ] 

stack commented on HBASE-16169:
---

The unit tests pass this time. The hbaseprotoc complaint is because we are 
calling generate protos in hbase-server module but it has none... Need to fix 
the hbase-personality so it doesn't do this. I opened HBASE-17119. Let me 
commit this.

> Make RegionSizeCalculator scalable
> --
>
> Key: HBASE-16169
> URL: https://issues.apache.org/jira/browse/HBASE-16169
> Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce, scaling
>Reporter: Thiruvel Thirumoolan
>Assignee: Thiruvel Thirumoolan
> Fix For: 2.0.0, 1.4.0
>
> Attachments: HBASE-16169.master.000.patch, 
> HBASE-16169.master.001.patch, HBASE-16169.master.002.patch, 
> HBASE-16169.master.003.patch, HBASE-16169.master.004.patch, 
> HBASE-16169.master.005.patch, HBASE-16169.master.006.patch, 
> HBASE-16169.master.007.patch, HBASE-16169.master.007.patch, 
> HBASE-16169.master.008.patch
>
>
> RegionSizeCalculator is needed for better split generation of MR jobs. This 
> requires RegionLoad which can be obtained via ClusterStatus, i.e. accessing 
> Master. We don't want master to be in this path.
> The proposal is to add an API to the RegionServer that gets RegionLoad of all 
> regions hosted on it or those of a table if specified. RegionSizeCalculator 
> can use the latter.





[jira] [Commented] (HBASE-17115) HMaster/HRegion Info Server does not honour admin.acl

2016-11-16 Thread Arshad Mohammad (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672995#comment-15672995
 ] 

Arshad Mohammad commented on HBASE-17115:
-

Thanks [~apurtell] and [~jinghe] for the response.
# Currently the service-level authorization file is used only for RPC services, 
not for web services.
# yarn and the history server use their own admin.acl properties for 
authorizing the web URLs:
yarn.admin.acl
mapreduce.jobhistory.admin.acl
Reference:
{code}
/hadoop/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-common/src/main/java/org/apache/hadoop/mapreduce/v2/jobhistory/JHAdminConfig.java
 (1 hit)
Line 52:   public static final String JHS_ADMIN_ACL = MR_HISTORY_PREFIX + 
"admin.acl";
/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-api/src/main/java/org/apache/hadoop/yarn/conf/YarnConfiguration.java
 (1 hit)
Line 308: YARN_PREFIX + "admin.acl";
{code}
# This jira is only for handling authorization of the web URLs; authentication 
is already present.

I think the web URL authorization should be done the same way as it is done in 
yarn and the history server.
Any other thoughts?
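The kind of check being proposed could be sketched like this. Everything here is hypothetical (the class, the hardcoded protected paths, and the way the admin set is supplied); it only illustrates an admin.acl-style gate in front of protected endpoints like /jmx and /conf.

```java
import java.util.Set;

// Hypothetical sketch of an admin.acl-style check for the info server:
// only users in the configured admin set may reach protected endpoints.
// The protected-path list and constructor are illustrative, not HBase API.
public class AdminAclSketch {
  private final Set<String> adminUsers;

  public AdminAclSketch(Set<String> adminUsers) {
    this.adminUsers = adminUsers;
  }

  /** Returns true if remoteUser may access the given URL path. */
  public boolean isAllowed(String remoteUser, String path) {
    boolean isProtected = path.equals("/jmx") || path.equals("/conf");
    return !isProtected || adminUsers.contains(remoteUser);
  }
}
```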

> HMaster/HRegion Info Server does not honour admin.acl
> -
>
> Key: HBASE-17115
> URL: https://issues.apache.org/jira/browse/HBASE-17115
> Project: HBase
>  Issue Type: Bug
>Reporter: Arshad Mohammad
>
> Currently there is no way to enable protected URLs like /jmx,  /conf  only 
> for admins. This is applicable for both Master and RegionServer.





[jira] [Created] (HBASE-17119) Fix yetus hbase-personality so we don't try and generate protos in modules that have none

2016-11-16 Thread stack (JIRA)
stack created HBASE-17119:
-

 Summary: Fix yetus hbase-personality so we don't try and generate 
protos in modules that have none
 Key: HBASE-17119
 URL: https://issues.apache.org/jira/browse/HBASE-17119
 Project: HBase
  Issue Type: Bug
Reporter: stack
Priority: Minor


See the end of HBASE-11843. It says hbaseprotoc failed in the hbase-server 
module, but the hbase-server module has no protos. In the emissions from the 
protoc run, we end w/ this:

{code}

[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 20.661s
[INFO] Finished at: Wed Nov 16 08:50:27 UTC 2016
[INFO] Final Memory: 78M/1219M
[INFO] ------------------------------------------------------------------------
[WARNING] The requested profile "compile-protobuf" could not be activated 
because it does not exist.
{code}

Fix our protoc check so we don't run if no protos in module.
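The check could be as simple as looking for .proto files before invoking the protoc profile. This is a sketch only; the function names and the mvn invocation are illustrative, not the actual hbase-personality.sh code.

```shell
#!/usr/bin/env bash
# Sketch of the personality fix: skip the protoc step when a module
# contains no .proto files. Function names and the mvn command line
# are illustrative, not the real hbase-personality.sh.
module_has_protos() {
  local module_dir="$1"
  # -quit stops find at the first match; grep -q . tests for any output.
  find "$module_dir" -name '*.proto' -print -quit 2>/dev/null | grep -q .
}

maybe_run_protoc() {
  local module_dir="$1"
  if module_has_protos "$module_dir"; then
    echo "would run: mvn compile -Pcompile-protobuf -pl $module_dir"
  else
    echo "skipping protoc for $module_dir (no protos)"
  fi
}
```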





[jira] [Updated] (HBASE-17112) Prevent setting timestamp of delta operations being same as previous value's

2016-11-16 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-17112:
--
Attachment: HBASE-17112-v2.patch

All QA tests ran on branch-1.1... Retrying for master.

> Prevent setting timestamp of delta operations being same as previous value's
> 
>
> Key: HBASE-17112
> URL: https://issues.apache.org/jira/browse/HBASE-17112
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.7, 0.98.23, 1.2.4
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-17112-branch-1-v1.patch, 
> HBASE-17112-branch-1.1-v1.patch, HBASE-17112-v1.patch, HBASE-17112-v2.patch, 
> HBASE-17112-v2.patch
>
>
> In the delta operations Increment and Append, we read the current value first 
> and then write the whole new result into the WAL as a Put with the current 
> timestamp. If the previous ts is larger than the current ts, we use the 
> previous ts.
> If we have two Puts with the same TS, we ignore the Put with the lower 
> sequence id. That is not friendly to versioning. And for replication we drop 
> the sequence id while writing to the peer cluster, so in the slave we don't 
> know the order in which they were written. If the pushing is disordered, the 
> result will be wrong.
> We can set the new ts to previous+1 if the previous is not less than now.
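The rule in the last line is a one-liner. The sketch below restates it outside HBase (the class and method names are illustrative): the new timestamp is "now", unless the previous timestamp has caught up with or passed "now", in which case it is previous+1 so the new result always sorts after the old one.

```java
// Sketch of the proposed timestamp rule for Increment/Append results:
// use now(), unless the previous cell's ts is not less than now(),
// in which case use previousTs + 1 so timestamps are strictly increasing.
public class DeltaTsSketch {
  public static long nextTimestamp(long previousTs, long now) {
    return previousTs < now ? now : previousTs + 1;
  }
}
```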





[jira] [Updated] (HBASE-17112) Prevent setting timestamp of delta operations being same as previous value's

2016-11-16 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-17112:
--
Release Note: 
Before this issue, two concurrent Increments/Appends done in the same 
millisecond, or the RS's clock going back, would produce two results with the 
same TS, which is not friendly to versioning and can give a wrong result in 
the slave cluster if replication is disordered.
After this issue, the results of Increment/Append always have strictly 
increasing TSs, so there is no inconsistency in replication for these 
operations. But there is a rare case: if there is a Delete in the same 
millisecond, the later result cannot be masked by that Delete. This can be 
fixed after we have new semantics where a previous Delete never masks a later 
Put even if its timestamp is higher.

> Prevent setting timestamp of delta operations being same as previous value's
> 
>
> Key: HBASE-17112
> URL: https://issues.apache.org/jira/browse/HBASE-17112
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.7, 0.98.23, 1.2.4
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-17112-branch-1-v1.patch, 
> HBASE-17112-branch-1.1-v1.patch, HBASE-17112-v1.patch, HBASE-17112-v2.patch
>
>
> In the delta operations Increment and Append, we read the current value first 
> and then write the whole new result into the WAL as a Put with the current 
> timestamp. If the previous ts is larger than the current ts, we use the 
> previous ts.
> If we have two Puts with the same TS, we ignore the Put with the lower 
> sequence id. That is not friendly to versioning. And for replication we drop 
> the sequence id while writing to the peer cluster, so in the slave we don't 
> know the order in which they were written. If the pushing is disordered, the 
> result will be wrong.
> We can set the new ts to previous+1 if the previous is not less than now.





[jira] [Commented] (HBASE-17118) StoreScanner leaked in KeyValueHeap

2016-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672959#comment-15672959
 ] 

Hadoop QA commented on HBASE-17118:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 13s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 3m 
15s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
40s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 26s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
44s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
27m 10s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
48s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 91m 17s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 130m 34s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839310/HBASE-17118-master_v3.patch
 |
| JIRA Issue | HBASE-17118 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 5ef5ee3d5ebf 3.13.0-95-generic #142-Ubuntu SMP Fri Aug 12 
17:00:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 48439e5 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4509/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4509/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> StoreScanner leaked in KeyValueHeap
> ---
>
> Key: HBASE-17118
> URL: https://issues.apache.org/jira/browse/HBASE-17118
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: binlijin
>Assignee: binlijin
> 

[jira] [Commented] (HBASE-17112) Prevent setting timestamp of delta operations being same as previous value's

2016-11-16 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672929#comment-15672929
 ] 

Anoop Sam John commented on HBASE-17112:


Ya, makes sense.. Ya, I agree this solves a bigger problem than the new 
possible issue (which is rare). Pls add some comments abt the new possible 
issue and a ref to the new Jira for the MVCC-centric logic.
Am +1

> Prevent setting timestamp of delta operations being same as previous value's
> 
>
> Key: HBASE-17112
> URL: https://issues.apache.org/jira/browse/HBASE-17112
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.7, 0.98.23, 1.2.4
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-17112-branch-1-v1.patch, 
> HBASE-17112-branch-1.1-v1.patch, HBASE-17112-v1.patch, HBASE-17112-v2.patch
>
>
> In the delta operations Increment and Append, we read the current value first 
> and then write the whole new result into the WAL as a Put with the current 
> timestamp. If the previous ts is larger than the current ts, we use the 
> previous ts.
> If we have two Puts with the same TS, we ignore the Put with the lower 
> sequence id. That is not friendly to versioning. And for replication we drop 
> the sequence id while writing to the peer cluster, so in the slave we don't 
> know the order in which they were written. If the pushing is disordered, the 
> result will be wrong.
> We can set the new ts to previous+1 if the previous is not less than now.





[jira] [Comment Edited] (HBASE-17095) The ClientSimpleScanner keeps retrying if the hfile is corrupt or cannot found

2016-11-16 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672921#comment-15672921
 ] 

Anoop Sam John edited comment on HBASE-17095 at 11/17/16 6:30 AM:
--

So we throw CorruptedHFileException or FNFE back to the client now, so that we 
don't end up throwing ScannerResetException. It will be better if u can create 
a new HBase-specific exception, extending DNRIOE, for the HFile-not-found case.
Being more generic and checking all possible cases, I am ok we can do that in 
another issue.
Else +1


was (Author: anoop.hbase):
So we throw back CorruptedHFileException or FNFE back to client now so that we 
dont end up throwing ScannerResetException.  It will be better if u can create 
a new HBase specific exception, extending DNRIOE for the HFile not found case.
Else +1

> The ClientSimpleScanner keeps retrying if the hfile is corrupt or cannot found
> --
>
> Key: HBASE-17095
> URL: https://issues.apache.org/jira/browse/HBASE-17095
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, scan
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HBASE-17095.patch, TestScannerWithCorruptHFile.java
>
>
> In {{RsRPCServices.scan}}, most IOEs are thrown as a 
> {{ScannerResetException}}, even when the hfile is corrupt or cannot be 
> found. {{ClientScanner.loadCache}} will keep retrying when the exception 
> is {{ScannerResetException}}. We could throw CorruptHFileException and 
> FileNotFoundException directly from the server and not retry the scan in the 
> client.
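The client-side behaviour being proposed can be sketched as a retry loop that gives up immediately on permanent failures. This is not the real ClientScanner code; FileNotFoundException stands in here for the do-not-retry family (the description proposes CorruptHFileException and FileNotFoundException), while a plain IOException stands in for the retryable reset.

```java
import java.io.FileNotFoundException;
import java.io.IOException;

// Sketch of the proposed client behaviour: retry generic resets up to a
// limit, but rethrow permanent failures (here modelled with FNFE)
// immediately instead of retrying. Exception classes are stand-ins.
public class ScanRetrySketch {
  public interface ScanAttempt {
    void run() throws IOException;
  }

  /** Runs the attempt; returns the number of tries on success. */
  public static int runWithRetries(ScanAttempt attempt, int maxRetries) throws IOException {
    int tries = 0;
    while (true) {
      tries++;
      try {
        attempt.run();
        return tries;
      } catch (FileNotFoundException e) {
        throw e; // permanent: the hfile is gone, do not retry
      } catch (IOException e) {
        if (tries > maxRetries) {
          throw e; // retryable reset, but the retry budget is exhausted
        }
      }
    }
  }
}
```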





[jira] [Commented] (HBASE-17095) The ClientSimpleScanner keeps retrying if the hfile is corrupt or cannot found

2016-11-16 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17095?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672921#comment-15672921
 ] 

Anoop Sam John commented on HBASE-17095:


So we throw CorruptedHFileException or FNFE back to the client now, so that we 
don't end up throwing ScannerResetException. It will be better if u can create 
a new HBase-specific exception, extending DNRIOE, for the HFile-not-found case.
Else +1

> The ClientSimpleScanner keeps retrying if the hfile is corrupt or cannot found
> --
>
> Key: HBASE-17095
> URL: https://issues.apache.org/jira/browse/HBASE-17095
> Project: HBase
>  Issue Type: Bug
>  Components: regionserver, scan
>Reporter: Jingcheng Du
>Assignee: Jingcheng Du
> Attachments: HBASE-17095.patch, TestScannerWithCorruptHFile.java
>
>
> In {{RsRPCServices.scan}}, most IOEs are thrown as a 
> {{ScannerResetException}}, even when the hfile is corrupt or cannot be 
> found. {{ClientScanner.loadCache}} will keep retrying when the exception 
> is {{ScannerResetException}}. We could throw CorruptHFileException and 
> FileNotFoundException directly from the server and not retry the scan in the 
> client.





[jira] [Commented] (HBASE-17112) Prevent setting timestamp of delta operations being same as previous value's

2016-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672910#comment-15672910
 ] 

Hadoop QA commented on HBASE-17112:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 20m 31s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 7m 
31s {color} | {color:green} branch-1.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 31s 
{color} | {color:green} branch-1.1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
29s {color} | {color:green} branch-1.1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
17s {color} | {color:green} branch-1.1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 28s 
{color} | {color:red} hbase-server in branch-1.1 has 80 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 30s 
{color} | {color:red} hbase-server in branch-1.1 failed. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
33s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
16m 27s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3. 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
36s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 23s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 103m 50s 
{color} | {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
27s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 155m 57s {color} 
| {color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.regionserver.TestSplitWalDataLoss |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.2 Server=1.12.2 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839304/HBASE-17112-branch-1.1-v1.patch
 |
| JIRA Issue | HBASE-17112 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux fb959d7ab2bf 3.13.0-100-generic #147-Ubuntu SMP Tue Oct 18 
16:48:51 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | branch-1.1 / a8628ee |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4504/artifact/patchprocess/branch-findbugs-hbase-server-warnings.html
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4504/artifact/patchprocess/branch-javadoc-hbase-server.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4504/artifact/patchprocess/patch-javadoc-hbase-server.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4504/artifact/patchprocess/patch-unit-hbase-server.txt
 |
| unit test logs |  

[jira] [Commented] (HBASE-17112) Prevent setting timestamp of delta operations being same as previous value's

2016-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672906#comment-15672906
 ] 

Hadoop QA commented on HBASE-17112:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 11m 21s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
43s {color} | {color:green} branch-1.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} branch-1.1 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} branch-1.1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
25s {color} | {color:green} branch-1.1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} branch-1.1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 2m 5s 
{color} | {color:red} hbase-server in branch-1.1 has 80 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 27s 
{color} | {color:red} hbase-server in branch-1.1 failed with JDK v1.8.0_111. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} branch-1.1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
45s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 36s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 38s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
22s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
13m 23s {color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
34s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 35s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_111. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 40s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 93m 59s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
31s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 133m 21s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.11.2 Server=1.11.2 Image:yetus/hbase:8012383 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839304/HBASE-17112-branch-1.1-v1.patch
 |
| JIRA Issue | HBASE-17112 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 4bb7fdd07ab8 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/hbase.sh |
| git revision | branch-1.1 / a8628ee |
| Default Java | 1.7.0_80 |
| Multi-JDK 

[jira] [Commented] (HBASE-15806) An endpoint-based export tool

2016-11-16 Thread ChiaPing Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-15806?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672909#comment-15672909
 ] 

ChiaPing Tsai commented on HBASE-15806:
---

[~stack]

Please take a look at v6.patch if you have time. Thanks.

> An endpoint-based export tool
> -
>
> Key: HBASE-15806
> URL: https://issues.apache.org/jira/browse/HBASE-15806
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
>Priority: Minor
> Fix For: 2.0.0
>
> Attachments: Experiment.png, HBASE-15806-v1.patch, 
> HBASE-15806-v2.patch, HBASE-15806-v3.patch, HBASE-15806.patch, 
> HBASE-15806.v4.patch, HBASE-15806.v5.patch, HBASE-15806.v6.patch
>
>
> The time for exporting a table can be reduced if we use the endpoint technique 
> to export the hdfs files on the region server rather than through the hbase client.
> In my experiments, the elapsed time of the endpoint-based export can be less than 
> half that of the current export tool (with hdfs compression enabled).
> But the shortcoming is that we need to alter the table to deploy the endpoint.
> Any comments about this? Thanks.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17112) Prevent setting timestamp of delta operations being same as previous value's

2016-11-16 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672893#comment-15672893
 ] 

Phil Yang commented on HBASE-17112:
---

Yes, a delete in the same millisecond can not mask the new Put. Before this patch 
the Delete could mask two Puts. I think we can fix it after HBASE-15968: when a cf 
enables the new semantics, we can save a Delete with a large TS, or even MAX_LONG, 
to mask all Puts. At least after this patch we can prevent any inconsistency in 
replication. 

> Prevent setting timestamp of delta operations being same as previous value's
> 
>
> Key: HBASE-17112
> URL: https://issues.apache.org/jira/browse/HBASE-17112
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.7, 0.98.23, 1.2.4
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-17112-branch-1-v1.patch, 
> HBASE-17112-branch-1.1-v1.patch, HBASE-17112-v1.patch, HBASE-17112-v2.patch
>
>
> In the delta operations, Increment and Append, we read the current value first 
> and then write the whole new result into the WAL as a Put with the current 
> timestamp. If the previous ts is larger than the current ts, we use the 
> previous ts.
> If we have two Puts with the same TS, we ignore the Put with the lower sequence 
> id. This is not friendly to versioning. And for replication we drop the 
> sequence id while writing to the peer cluster, so on the slave we don't know 
> the order in which they were written. If the pushing is disordered, the result 
> will be wrong.
> We can set the new ts to previous+1 if the previous is not less than now.
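The rule in the last line above ("set the new ts to previous+1 if the previous is not less than now") can be sketched as a toy helper (hypothetical class and method names, not the actual HRegion code):

```java
public class DeltaTimestampSketch {
    // Pick the timestamp for the Put produced by an Increment/Append.
    // The result is always strictly greater than the previous cell's ts
    // when the clock has not advanced past it, so the new value cannot
    // end up sharing a timestamp with the value it replaces.
    public static long nextTimestamp(long previousTs, long now) {
        return previousTs < now ? now : previousTs + 1;
    }

    public static void main(String[] args) {
        // Clock moved forward: just use "now".
        assert nextTimestamp(100L, 200L) == 200L;
        // Same millisecond as the previous cell: bump past it.
        assert nextTimestamp(200L, 200L) == 201L;
        // Clock went backwards: still bump past the previous cell.
        assert nextTimestamp(300L, 200L) == 301L;
    }
}
```

Run with `java -ea` so the assertions are checked.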





[jira] [Commented] (HBASE-17088) Refactor RWQueueRpcExecutor/BalancedQueueRpcExecutor/RpcExecutor

2016-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672883#comment-15672883
 ] 

Hadoop QA commented on HBASE-17088:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 12s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
40s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
37s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
13s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
32s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 23s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
41s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
24m 4s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 1m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 84m 29s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
15s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 119m 22s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839306/HBASE-17088-v4.patch |
| JIRA Issue | HBASE-17088 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 5581ba2c9037 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 48439e5 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4508/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4508/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Refactor RWQueueRpcExecutor/BalancedQueueRpcExecutor/RpcExecutor
> 
>
> Key: HBASE-17088
> URL: https://issues.apache.org/jira/browse/HBASE-17088
> Project: HBase
>  Issue Type: Improvement
>  Components: rpc
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-17088-v1.patch, 

[jira] [Commented] (HBASE-17112) Prevent setting timestamp of delta operations being same as previous value's

2016-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672876#comment-15672876
 ] 

Hadoop QA commented on HBASE-17112:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 10m 41s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
34s {color} | {color:green} branch-1.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 29s 
{color} | {color:green} branch-1.1 passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 35s 
{color} | {color:green} branch-1.1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
21s {color} | {color:green} branch-1.1 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} branch-1.1 passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 47s 
{color} | {color:red} hbase-server in branch-1.1 has 80 extant Findbugs 
warnings. {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 27s 
{color} | {color:red} hbase-server in branch-1.1 failed with JDK v1.8.0_111. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 30s 
{color} | {color:green} branch-1.1 passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed with JDK v1.8.0_111 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 28s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 32s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
20s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m 53s {color} | {color:green} The patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.7.1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
16s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
11s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red} 0m 27s 
{color} | {color:red} hbase-server in the patch failed with JDK v1.8.0_111. 
{color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 34s 
{color} | {color:green} the patch passed with JDK v1.7.0_80 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 89m 8s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
20s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 124m 16s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:e01ee2f |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839304/HBASE-17112-branch-1.1-v1.patch
 |
| JIRA Issue | HBASE-17112 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux 3ea412bbafcf 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/patchprocess/precommit/personality/hbase.sh |
| git revision | branch-1.1 / a8628ee |
| Default Java | 1.7.0_80 |
| Multi-JDK 

[jira] [Commented] (HBASE-17112) Prevent setting timestamp of delta operations being same as previous value's

2016-11-16 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672846#comment-15672846
 ] 

Anoop Sam John commented on HBASE-17112:


Ya, that is what I also see. We don't consider the ts in Cells within 
Increment/Append, so this issue is possible only if the time at the RS goes back.
Another rare case I can think of is this: 2 increments happen with the same TS. 
Cur TS = old cell's, so as per the logic we will put oldTS + 1 (= now + 1). Now 
say a delete also happens for the row with the same TS. But this delete can not 
mask the increment's put, as its ts > the delete's ts (even if the mvcc of the 
delete op is greater). I know it is rare, just saying. Or am I going wrong 
somewhere?

> Prevent setting timestamp of delta operations being same as previous value's
> 
>
> Key: HBASE-17112
> URL: https://issues.apache.org/jira/browse/HBASE-17112
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.7, 0.98.23, 1.2.4
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-17112-branch-1-v1.patch, 
> HBASE-17112-branch-1.1-v1.patch, HBASE-17112-v1.patch, HBASE-17112-v2.patch
>
>
> In the delta operations, Increment and Append, we read the current value first 
> and then write the whole new result into the WAL as a Put with the current 
> timestamp. If the previous ts is larger than the current ts, we use the 
> previous ts.
> If we have two Puts with the same TS, we ignore the Put with the lower sequence 
> id. This is not friendly to versioning. And for replication we drop the 
> sequence id while writing to the peer cluster, so on the slave we don't know 
> the order in which they were written. If the pushing is disordered, the result 
> will be wrong.
> We can set the new ts to previous+1 if the previous is not less than now.





[jira] [Commented] (HBASE-17085) AsyncFSWAL may issue unnecessary AsyncDFSOutput.sync

2016-11-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672782#comment-15672782
 ] 

stack commented on HBASE-17085:
---

+1

Did a quick pass.

Don't forget to undo this:

defaultProvider(FSHLogProvider.class),
defaultProvider(AsyncFSWALProvider.class),



> AsyncFSWAL may issue unnecessary AsyncDFSOutput.sync
> 
>
> Key: HBASE-17085
> URL: https://issues.apache.org/jira/browse/HBASE-17085
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17085-v1.patch, HBASE-17085-v2.patch, 
> HBASE-17085-v2.patch, HBASE-17085.patch
>
>
> The problem is in the appendAndSync method: we issue an AsyncDFSOutput.sync 
> if syncFutures is not empty. The SyncFutures in syncFutures can only be 
> removed after an AsyncDFSOutput.sync comes back, so before the 
> AsyncDFSOutput.sync actually returns, we will always issue an 
> AsyncDFSOutput.sync after every append, even if there is no new sync request.
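The fix idea can be illustrated with a toy model (all names here are hypothetical and the real AsyncFSWAL bookkeeping is more involved): only issue a sync when there are waiters and no sync is already in flight, instead of on every append while syncFutures is non-empty:

```java
public class SyncGuardSketch {
    private int pendingSyncRequests = 0;     // stand-in for syncFutures not yet satisfied
    private boolean syncOutstanding = false; // an AsyncDFSOutput.sync is in flight
    public int syncsIssued = 0;

    public void requestSync() { pendingSyncRequests++; maybeSync(); }

    // Called after every append.
    public void append() { maybeSync(); }

    private void maybeSync() {
        // The problematic condition was just pendingSyncRequests > 0, which
        // issues another sync after each append while one is still in flight.
        if (pendingSyncRequests > 0 && !syncOutstanding) {
            syncOutstanding = true;
            syncsIssued++;
        }
    }

    public void syncCompleted() {
        syncOutstanding = false;
        pendingSyncRequests = 0; // the completed sync covers the current waiters
    }

    public static void main(String[] args) {
        SyncGuardSketch wal = new SyncGuardSketch();
        wal.requestSync(); // issues one sync
        wal.append();      // sync already in flight, no new request: no extra sync
        wal.append();
        assert wal.syncsIssued == 1;
        wal.syncCompleted();
        wal.append();      // no pending requests: no sync
        assert wal.syncsIssued == 1;
    }
}
```

Run with `java -ea`; the point is that appends between a sync being issued and it completing do not trigger further syncs.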





[jira] [Updated] (HBASE-17110) Add an "Overall Strategy" option(balanced both on table level and server level) to SimpleLoadBalancer

2016-11-16 Thread Charlie Qiangeng Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charlie Qiangeng Xu updated HBASE-17110:

Attachment: (was: SimpleBalancerBytableOverall.V1)

> Add an "Overall Strategy" option(balanced both on table level and server 
> level) to SimpleLoadBalancer
> -
>
> Key: HBASE-17110
> URL: https://issues.apache.org/jira/browse/HBASE-17110
> Project: HBase
>  Issue Type: New Feature
>  Components: Balancer
>Affects Versions: 2.0.0, 1.2.4
>Reporter: Charlie Qiangeng Xu
>Assignee: Charlie Qiangeng Xu
> Attachments: HBASE-17110-V2.patch, HBASE-17110.patch
>
>
> This jira is about an enhancement of simpleLoadBalancer. Here we introduce a 
> new strategy: "bytableOverall" which could be controlled by adding:
> {noformat}
> <property>
>   <name>hbase.master.loadbalance.bytableOverall</name>
>   <value>true</value>
> </property>
> {noformat}
> We have been using the strategy on our largest cluster for several months. 
> It has proven to be very helpful and stable; in particular, the result is 
> quite visible to the users.
> Here is the reason why it's helpful:
> When operating large-scale clusters (our case), some companies still prefer 
> to use {{SimpleLoadBalancer}} due to its simplicity, quick balance plan 
> generation, etc. The current SimpleLoadBalancer has two modes: 
> 1. byTable, which only guarantees that the regions of one table are 
> uniformly distributed. 
> 2. byCluster, which ignores the distribution within tables and balances the 
> regions all together.
> If the pressures on different tables differ, the byTable option is the 
> preferable one in most cases. Yet this choice sacrifices cluster-level 
> balance and can cause some servers to have significantly higher load, 
> e.g. 242 regions on server A but 417 regions on server B (real-world stats).
> Consider this case,  a cluster has 3 tables and 4 servers:
> {noformat}
>   server A has 3 regions: table1:1, table2:1, table3:1
>   server B has 3 regions: table1:2, table2:2, table3:2
>   server C has 3 regions: table1:3, table2:3, table3:3
>   server D has 0 regions.
> {noformat}
> From the byTable strategy's perspective, the cluster has already been 
> perfectly balanced on table level. But a perfect status should be like:
> {noformat}
>   server A has 2 regions: table2:1, table3:1
>   server B has 2 regions: table1:2, table3:2
>   server C has 3 regions: table1:3, table2:3, table3:3
>   server D has 2 regions: table1:1, table2:2
> {noformat}
> We can see the server loads change from 3,3,3,0 to 2,2,3,2, while table1, 
> table2 and table3 still stay balanced.
> This is what the new mode "byTableOverall" can achieve.
> Two UTs have been added as well, and the last one demonstrates the advantage 
> of the new strategy.
> Also, an onConfigurationChange method has been implemented to allow hot 
> updates of the "slop" variable.
>  
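The 3,3,3,0 to 2,2,3,2 example can be checked with a small sketch (a hypothetical helper, not the SimpleLoadBalancer code), treating a server-level distribution as balanced overall when every server holds either floor(avg) or ceil(avg) regions (ignoring slop):

```java
import java.util.Arrays;

public class OverallBalanceSketch {
    // True when every server's region count is within [floor(avg), ceil(avg)].
    public static boolean balancedOverall(int[] loads) {
        int total = Arrays.stream(loads).sum();
        int floor = total / loads.length;
        int ceil = (total + loads.length - 1) / loads.length;
        return Arrays.stream(loads).allMatch(l -> l >= floor && l <= ceil);
    }

    public static void main(String[] args) {
        // byTable-balanced but imbalanced at the cluster level (the case above):
        assert !balancedOverall(new int[] {3, 3, 3, 0});
        // the byTableOverall target distribution:
        assert balancedOverall(new int[] {2, 2, 3, 2});
    }
}
```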





[jira] [Commented] (HBASE-17115) HMaster/HRegion Info Server does not honour admin.acl

2016-11-16 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672754#comment-15672754
 ] 

Jerry He commented on HBASE-17115:
--

bq. we'd still be missing secure authentication
Does HBASE-5291 (which was done recently) cover this? 
Yeah, we still lack the authorization part.

> HMaster/HRegion Info Server does not honour admin.acl
> -
>
> Key: HBASE-17115
> URL: https://issues.apache.org/jira/browse/HBASE-17115
> Project: HBase
>  Issue Type: Bug
>Reporter: Arshad Mohammad
>
> Currently there is no way to restrict protected URLs like /jmx and /conf to 
> admins only. This is applicable to both Master and RegionServer.





[jira] [Commented] (HBASE-17110) Add an "Overall Strategy" option(balanced both on table level and server level) to SimpleLoadBalancer

2016-11-16 Thread Charlie Qiangeng Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672733#comment-15672733
 ] 

Charlie Qiangeng Xu commented on HBASE-17110:
-

Just uploaded to review board [~tedyu]  and [~zghaobac]


> Add an "Overall Strategy" option(balanced both on table level and server 
> level) to SimpleLoadBalancer
> -
>
> Key: HBASE-17110
> URL: https://issues.apache.org/jira/browse/HBASE-17110
> Project: HBase
>  Issue Type: New Feature
>  Components: Balancer
>Affects Versions: 2.0.0, 1.2.4
>Reporter: Charlie Qiangeng Xu
>Assignee: Charlie Qiangeng Xu
> Attachments: HBASE-17110-V2.patch, HBASE-17110.patch, 
> SimpleBalancerBytableOverall.V1
>
>
> This jira is about an enhancement of simpleLoadBalancer. Here we introduce a 
> new strategy: "bytableOverall" which could be controlled by adding:
> {noformat}
> <property>
>   <name>hbase.master.loadbalance.bytableOverall</name>
>   <value>true</value>
> </property>
> {noformat}
> We have been using the strategy on our largest cluster for several months. 
> It has proven to be very helpful and stable; in particular, the result is 
> quite visible to the users.
> Here is the reason why it's helpful:
> When operating large-scale clusters (our case), some companies still prefer 
> to use {{SimpleLoadBalancer}} due to its simplicity, quick balance plan 
> generation, etc. The current SimpleLoadBalancer has two modes: 
> 1. byTable, which only guarantees that the regions of one table are 
> uniformly distributed. 
> 2. byCluster, which ignores the distribution within tables and balances the 
> regions all together.
> If the pressures on different tables differ, the byTable option is the 
> preferable one in most cases. Yet this choice sacrifices cluster-level 
> balance and can cause some servers to have significantly higher load, 
> e.g. 242 regions on server A but 417 regions on server B (real-world stats).
> Consider this case,  a cluster has 3 tables and 4 servers:
> {noformat}
>   server A has 3 regions: table1:1, table2:1, table3:1
>   server B has 3 regions: table1:2, table2:2, table3:2
>   server C has 3 regions: table1:3, table2:3, table3:3
>   server D has 0 regions.
> {noformat}
> From the byTable strategy's perspective, the cluster has already been 
> perfectly balanced on table level. But a perfect status should be like:
> {noformat}
>   server A has 2 regions: table2:1, table3:1
>   server B has 2 regions: table1:2, table3:2
>   server C has 3 regions: table1:3, table2:3, table3:3
>   server D has 2 regions: table1:1, table2:2
> {noformat}
> We can see the server loads change from 3,3,3,0 to 2,2,3,2, while table1, 
> table2 and table3 still stay balanced.
> This is what the new mode "byTableOverall" can achieve.
> Two UTs have been added as well, and the last one demonstrates the advantage 
> of the new strategy.
> Also, an onConfigurationChange method has been implemented to allow hot 
> updates of the "slop" variable.
>  





[jira] [Commented] (HBASE-17118) StoreScanner leaked in KeyValueHeap

2016-11-16 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672690#comment-15672690
 ] 

binlijin commented on HBASE-17118:
--

OK 

> StoreScanner leaked in KeyValueHeap
> ---
>
> Key: HBASE-17118
> URL: https://issues.apache.org/jira/browse/HBASE-17118
> Project: HBase
>  Issue Type: Bug
>Reporter: binlijin
>Assignee: binlijin
> Attachments: HBASE-17118-master_v1.patch, 
> HBASE-17118-master_v2.patch, HBASE-17118-master_v3.patch, StoreScanner.png, 
> StoreScannerLeakHeap.png
>
>
> KeyValueHeap#generalizedSeek
>   KeyValueScanner scanner = current;
>   while (scanner != null) {
> Cell topKey = scanner.peek();
> ..
> boolean seekResult;
> if (isLazy && heap.size() > 0) {
>   // If there is only one scanner left, we don't do lazy seek.
>   seekResult = scanner.requestSeek(seekKey, forward, useBloom);
> } else {
>   seekResult = NonLazyKeyValueScanner.doRealSeek(scanner, seekKey,
>   forward);
> }
> ..
> scanner = heap.poll();
>   }
> (1) scanner = heap.poll(); retrieves and removes the head of the queue.
> (2) scanner.requestSeek(seekKey, forward, useBloom); or 
> NonLazyKeyValueScanner.doRealSeek(scanner, seekKey, forward); may throw an 
> exception, and the scanner will then have no chance to be closed, which 
> causes the scanner leak.
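The fix pattern can be sketched with a toy scanner type (not HBase's KeyValueScanner API): when a seek throws, close the current scanner and any scanners still queued before propagating the exception, so none are leaked:

```java
import java.util.ArrayDeque;
import java.util.Deque;

public class ScannerLeakSketch {
    // Minimal stand-in for a scanner that may fail while seeking.
    static class Scanner {
        final boolean failOnSeek;
        boolean closed = false;
        Scanner(boolean failOnSeek) { this.failOnSeek = failOnSeek; }
        void seek() { if (failOnSeek) throw new RuntimeException("seek failed"); }
        void close() { closed = true; }
    }

    // Seek each polled scanner; on failure, close the failing scanner and
    // everything still in the heap so nothing is leaked, then rethrow.
    public static void seekAll(Deque<Scanner> heap) {
        Scanner scanner = heap.poll();
        while (scanner != null) {
            try {
                scanner.seek();
            } catch (RuntimeException e) {
                scanner.close();                  // the scanner that failed
                for (Scanner s : heap) s.close(); // scanners still queued
                throw e;
            }
            scanner = heap.poll();
        }
    }

    public static void main(String[] args) {
        Deque<Scanner> heap = new ArrayDeque<>();
        Scanner bad = new Scanner(true);
        Scanner queued = new Scanner(false);
        heap.add(new Scanner(false));
        heap.add(bad);
        heap.add(queued);
        try {
            seekAll(heap);
        } catch (RuntimeException expected) {
            // the failure is propagated, but nothing was leaked
        }
        assert bad.closed && queued.closed;
    }
}
```

Run with `java -ea`; the assertion shows both the failing scanner and the still-queued one are closed.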





[jira] [Commented] (HBASE-17110) Add an "Overall Strategy" option(balanced both on table level and server level) to SimpleLoadBalancer

2016-11-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672688#comment-15672688
 ] 

Ted Yu commented on HBASE-17110:


Please upload to reviewboard.

Thanks

> Add an "Overall Strategy" option(balanced both on table level and server 
> level) to SimpleLoadBalancer
> -
>
> Key: HBASE-17110
> URL: https://issues.apache.org/jira/browse/HBASE-17110
> Project: HBase
>  Issue Type: New Feature
>  Components: Balancer
>Affects Versions: 2.0.0, 1.2.4
>Reporter: Charlie Qiangeng Xu
>Assignee: Charlie Qiangeng Xu
> Attachments: HBASE-17110-V2.patch, HBASE-17110.patch, 
> SimpleBalancerBytableOverall.V1
>
>
> This jira is about an enhancement of simpleLoadBalancer. Here we introduce a 
> new strategy: "bytableOverall" which could be controlled by adding:
> {noformat}
> <property>
>   <name>hbase.master.loadbalance.bytableOverall</name>
>   <value>true</value>
> </property>
> {noformat}
> We have been using the strategy on our largest cluster for several months. 
> It has proven to be very helpful and stable; in particular, the result is 
> quite visible to the users.
> Here is the reason why it's helpful:
> When operating large-scale clusters (our case), some companies still prefer 
> to use {{SimpleLoadBalancer}} due to its simplicity, quick balance plan 
> generation, etc. The current SimpleLoadBalancer has two modes: 
> 1. byTable, which only guarantees that the regions of one table are 
> uniformly distributed. 
> 2. byCluster, which ignores the distribution within tables and balances the 
> regions all together.
> If the pressures on different tables differ, the byTable option is the 
> preferable one in most cases. Yet this choice sacrifices cluster-level 
> balance and can cause some servers to have significantly higher load, 
> e.g. 242 regions on server A but 417 regions on server B (real-world stats).
> Consider this case,  a cluster has 3 tables and 4 servers:
> {noformat}
>   server A has 3 regions: table1:1, table2:1, table3:1
>   server B has 3 regions: table1:2, table2:2, table3:2
>   server C has 3 regions: table1:3, table2:3, table3:3
>   server D has 0 regions.
> {noformat}
> From the byTable strategy's perspective, the cluster has already been 
> perfectly balanced on table level. But a perfect status should be like:
> {noformat}
>   server A has 2 regions: table2:1, table3:1
>   server B has 2 regions: table1:2, table3:2
>   server C has 3 regions: table1:3, table2:3, table3:3
>   server D has 2 regions: table1:1, table2:2
> {noformat}
> We can see the server loads change from 3,3,3,0 to 2,2,3,2, while table1, 
> table2 and table3 all stay balanced.
> This is what the new "byTableOverall" mode achieves.
> Two UTs have been added as well; the last one demonstrates the advantage of 
> the new strategy.
> Also, an onConfigurationChange method has been implemented so the "slop" 
> variable can be tuned at runtime.
>  
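As an aside for readers, the floor intuition behind such an overall balance pass can be sketched in plain Java. This is a hypothetical illustration of the idea only: the class and method names are invented, and a real balancer must also preserve per-table balance and pick concrete regions to move, which this sketch ignores.

```java
import java.util.ArrayDeque;
import java.util.Collections;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the overall-balance idea: every server should hold at
// least floor(totalRegions / servers) regions, so underloaded servers pull one
// region at a time from whichever server is currently the most loaded.
public class OverallBalanceSketch {

    static Map<String, Integer> rebalance(Map<String, Integer> serverLoads) {
        int total = serverLoads.values().stream().mapToInt(Integer::intValue).sum();
        int floor = total / serverLoads.size(); // minimum regions per server
        Map<String, Integer> out = new HashMap<>(serverLoads);
        Deque<String> under = new ArrayDeque<>();
        for (Map.Entry<String, Integer> e : out.entrySet()) {
            if (e.getValue() < floor) under.add(e.getKey());
        }
        while (!under.isEmpty()) {
            String dst = under.peek();
            // donate from the currently most-loaded server, if it can spare one
            String src = Collections.max(out.entrySet(), Map.Entry.comparingByValue()).getKey();
            if (out.get(src) <= floor) break; // nothing left to spare
            out.put(src, out.get(src) - 1);
            out.put(dst, out.get(dst) + 1);
            if (out.get(dst) >= floor) under.poll();
        }
        return out;
    }

    public static void main(String[] args) {
        Map<String, Integer> loads = new HashMap<>();
        loads.put("A", 3); loads.put("B", 3); loads.put("C", 3); loads.put("D", 0);
        // reproduces the example above: 3,3,3,0 becomes 2,2,3,2 (in some order)
        System.out.println(rebalance(loads));
    }
}
```

Running the sketch on the 3,3,3,0 example from the description yields the 2,2,3,2 distribution described there.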



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17118) StoreScanner leaked in KeyValueHeap

2016-11-16 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-17118:
-
Attachment: HBASE-17118-master_v3.patch

> StoreScanner leaked in KeyValueHeap
> ---
>
> Key: HBASE-17118
> URL: https://issues.apache.org/jira/browse/HBASE-17118
> Project: HBase
>  Issue Type: Bug
>Reporter: binlijin
>Assignee: binlijin
> Attachments: HBASE-17118-master_v1.patch, 
> HBASE-17118-master_v2.patch, HBASE-17118-master_v3.patch, StoreScanner.png, 
> StoreScannerLeakHeap.png
>
>
> KeyValueHeap#generalizedSeek
>   KeyValueScanner scanner = current;
>   while (scanner != null) {
> Cell topKey = scanner.peek();
> ..
> boolean seekResult;
> if (isLazy && heap.size() > 0) {
>   // If there is only one scanner left, we don't do lazy seek.
>   seekResult = scanner.requestSeek(seekKey, forward, useBloom);
> } else {
>   seekResult = NonLazyKeyValueScanner.doRealSeek(scanner, seekKey,
>   forward);
> }
> ..
> scanner = heap.poll();
>   }
> (1) scanner = heap.poll() retrieves and removes the head of the queue.
> (2) If scanner.requestSeek(seekKey, forward, useBloom) or 
> NonLazyKeyValueScanner.doRealSeek(scanner, seekKey, forward) throws an 
> exception, the scanner never gets a chance to be closed, causing a scanner 
> leak.
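For illustration, the leak pattern and the fix direction can be sketched with a stand-in interface. This is not the actual KeyValueHeap/KeyValueScanner code: the interface, method names and the simplified loop are invented for the example.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Illustration of the leak: if seek() throws mid-loop, the failing scanner and
// every scanner still queued must be closed before the exception propagates.
// Scanner is a stand-in for KeyValueScanner, not the real HBase interface.
public class SeekLoopSketch {

    interface Scanner {
        void seek(byte[] key) throws Exception;
        void close();
    }

    static List<Scanner> seekAll(Deque<Scanner> heap, Scanner current, byte[] key)
            throws Exception {
        List<Scanner> seeked = new ArrayList<>();
        Scanner scanner = current;
        while (scanner != null) {
            try {
                scanner.seek(key);
            } catch (Exception e) {
                // Without this block the failing scanner and every scanner
                // still in the heap would leak when the exception propagates.
                scanner.close();
                for (Scanner s : heap) {
                    s.close();
                }
                throw e;
            }
            seeked.add(scanner);
            scanner = heap.poll(); // retrieves and removes the head of the queue
        }
        return seeked;
    }
}
```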



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17118) StoreScanner leaked in KeyValueHeap

2016-11-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672684#comment-15672684
 ] 

Ted Yu commented on HBASE-17118:


lgtm
{code}
338   LOG.error("close KeyValueScanner error", ce);
{code}
The above can be a warning.

> StoreScanner leaked in KeyValueHeap
> ---
>
> Key: HBASE-17118
> URL: https://issues.apache.org/jira/browse/HBASE-17118
> Project: HBase
>  Issue Type: Bug
>Reporter: binlijin
>Assignee: binlijin
> Attachments: HBASE-17118-master_v1.patch, 
> HBASE-17118-master_v2.patch, StoreScanner.png, StoreScannerLeakHeap.png
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-14375) define public API for spark integration module

2016-11-16 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-14375?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-14375:

Priority: Blocker  (was: Critical)

> define public API for spark integration module
> --
>
> Key: HBASE-14375
> URL: https://issues.apache.org/jira/browse/HBASE-14375
> Project: HBase
>  Issue Type: Task
>  Components: spark
>Reporter: Sean Busbey
>Priority: Blocker
> Fix For: 2.0.0
>
>
> before we can put the spark integration module into a release, we need to 
> annotate its public api surface.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17114) Add an option to set special retry pause when encountering CallQueueTooBigException

2016-11-16 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672662#comment-15672662
 ] 

Yu Li commented on HBASE-17114:
---

Thanks [~ghelmling] for the feedback and [~tedyu]/[~zghaobac] for chiming in.

bq. But, in general, I'm not sure we should handle CQTBE differently from any 
other retry-triggering exception (other than RetryImmediatelyException), and 
giving another knob to configure seems like it would just further complicate 
HBase tuning.
AFAICS we're already doing this in 
{{ClientExceptionsUtil#isMetaClearingException}}, treating 
CQTBE/RegionTooBusyException etc. as special exceptions:
{code}
  public static boolean isSpecialException(Throwable cur) {
return (cur instanceof RegionMovedException || cur instanceof 
RegionOpeningException
|| cur instanceof RegionTooBusyException || cur instanceof 
ThrottlingException
|| cur instanceof CallQueueTooBigException);
  }
{code}
So handling CQTBE specially may not seem so special?

bq. Another approach to this would be to allow the server to hint back to the 
client how long it should back off
Agree this is another good way to handle it, but by default we are still using 
{{NoBackoffPolicy}}, right? So whatever new mechanism we add to the backoff 
policy won't take effect by default. In our case we are not turning on backoff, 
so that solution would not work for us out of the box.

IMHO we could open another JIRA to introduce the fancier backoff solution, and 
since XiaoMi already has a patch running online I guess [~zghaobac] may like to 
take that JIRA (and frankly, this kind of patch is very welcome upstream rather 
than being kept private :-)). Meanwhile, we should still solve the problem for 
users not using backoff; since the problem does exist and we already have 
special exception-handling logic on the client side, the method I proposed 
should still be valid.

I'm uploading the patch; it will show the scope of the changes so we can better 
judge whether it hurts the code's structure or readability. Let me know your 
thoughts.

> Add an option to set special retry pause when encountering 
> CallQueueTooBigException
> ---
>
> Key: HBASE-17114
> URL: https://issues.apache.org/jira/browse/HBASE-17114
> Project: HBase
>  Issue Type: Bug
>Reporter: Yu Li
>Assignee: Yu Li
>
> As titled, after HBASE-15146 we throw {{CallQueueTooBigException}} instead of 
> dead-waiting. This is good for performance in most cases, but it has a side 
> effect: if too many clients connect to a busy RS, the retry requests may come 
> over and over again and the RS never gets a chance to recover. The issue 
> becomes especially critical when the target region is META.
> So in this JIRA we propose a special retry pause for CQTBE, named 
> {{hbase.client.pause.special}}, defaulting to 500ms (5 times the default 
> value of {{hbase.client.pause}}).
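The proposed behaviour can be illustrated with a minimal sketch. The class, method and constant names here are invented for the example; only the config names and the 100ms/500ms values come from the proposal above.

```java
// Sketch of the proposal: pick a longer retry pause when the server rejected
// the call with CallQueueTooBigException, so a busy RS gets room to recover.
// This is not the committed client API; names are illustrative.
public class RetryPauseSketch {

    static class CallQueueTooBigException extends Exception {}

    static final long PAUSE_MS = 100;          // default hbase.client.pause
    static final long SPECIAL_PAUSE_MS = 500;  // proposed hbase.client.pause.special

    // Choose the pause for the next retry based on what the server threw.
    static long pauseFor(Throwable cause) {
        return (cause instanceof CallQueueTooBigException) ? SPECIAL_PAUSE_MS : PAUSE_MS;
    }
}
```

The point of the design is that only this one decision changes: every other retry-triggering exception keeps the normal pause.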



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17118) StoreScanner leaked in KeyValueHeap

2016-11-16 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672658#comment-15672658
 ] 

binlijin commented on HBASE-17118:
--

Ok, done.

> StoreScanner leaked in KeyValueHeap
> ---
>
> Key: HBASE-17118
> URL: https://issues.apache.org/jira/browse/HBASE-17118
> Project: HBase
>  Issue Type: Bug
>Reporter: binlijin
>Assignee: binlijin
> Attachments: HBASE-17118-master_v1.patch, 
> HBASE-17118-master_v2.patch, StoreScanner.png, StoreScannerLeakHeap.png
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17118) StoreScanner leaked in KeyValueHeap

2016-11-16 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672657#comment-15672657
 ] 

binlijin commented on HBASE-17118:
--

Ok, done

> StoreScanner leaked in KeyValueHeap
> ---
>
> Key: HBASE-17118
> URL: https://issues.apache.org/jira/browse/HBASE-17118
> Project: HBase
>  Issue Type: Bug
>Reporter: binlijin
>Assignee: binlijin
> Attachments: HBASE-17118-master_v1.patch, 
> HBASE-17118-master_v2.patch, StoreScanner.png, StoreScannerLeakHeap.png
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17118) StoreScanner leaked in KeyValueHeap

2016-11-16 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-17118:
-
Attachment: HBASE-17118-master_v2.patch

> StoreScanner leaked in KeyValueHeap
> ---
>
> Key: HBASE-17118
> URL: https://issues.apache.org/jira/browse/HBASE-17118
> Project: HBase
>  Issue Type: Bug
>Reporter: binlijin
>Assignee: binlijin
> Attachments: HBASE-17118-master_v1.patch, 
> HBASE-17118-master_v2.patch, StoreScanner.png, StoreScannerLeakHeap.png
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17088) Refactor RWQueueRpcExecutor/BalancedQueueRpcExecutor/RpcExecutor

2016-11-16 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-17088:
---
Attachment: HBASE-17088-v4.patch

Reattach v4 for Hadoop QA.

> Refactor RWQueueRpcExecutor/BalancedQueueRpcExecutor/RpcExecutor
> 
>
> Key: HBASE-17088
> URL: https://issues.apache.org/jira/browse/HBASE-17088
> Project: HBase
>  Issue Type: Improvement
>  Components: rpc
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-17088-v1.patch, HBASE-17088-v2.patch, 
> HBASE-17088-v3.patch, HBASE-17088-v3.patch, HBASE-17088-v4.patch, 
> HBASE-17088-v4.patch
>
>
> 1. RWQueueRpcExecutor has eight constructors, and the longest one has ten 
> parameters. It is only used in SimpleRpcScheduler and is easy to get confused 
> by when reading the code.
> 2. There are duplicate method implementations in RWQueueRpcExecutor and 
> BalancedQueueRpcExecutor. They can be implemented in their parent class, 
> RpcExecutor.
> 3. SimpleRpcScheduler reads many configs to create RpcExecutors, but 
> CALL_QUEUE_SCAN_SHARE_CONF_KEY is only needed by RWQueueRpcExecutor, and 
> CALL_QUEUE_CODEL_TARGET_DELAY, CALL_QUEUE_CODEL_INTERVAL and 
> CALL_QUEUE_CODEL_LIFO_THRESHOLD are only needed by AdaptiveLifoCoDelCallQueue.
> So I think we can refactor it. Suggestions are welcome.
> Review board: https://reviews.apache.org/r/53726/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17110) Add an "Overall Strategy" option(balanced both on table level and server level) to SimpleLoadBalancer

2016-11-16 Thread Charlie Qiangeng Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charlie Qiangeng Xu updated HBASE-17110:

Attachment: HBASE-17110-V2.patch

> Add an "Overall Strategy" option(balanced both on table level and server 
> level) to SimpleLoadBalancer
> -
>
> Key: HBASE-17110
> URL: https://issues.apache.org/jira/browse/HBASE-17110
> Project: HBase
>  Issue Type: New Feature
>  Components: Balancer
>Affects Versions: 2.0.0, 1.2.4
>Reporter: Charlie Qiangeng Xu
>Assignee: Charlie Qiangeng Xu
> Attachments: HBASE-17110-V2.patch, HBASE-17110.patch, 
> SimpleBalancerBytableOverall.V1
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17110) Add an "Overall Strategy" option(balanced both on table level and server level) to SimpleLoadBalancer

2016-11-16 Thread Charlie Qiangeng Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17110?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672653#comment-15672653
 ] 

Charlie Qiangeng Xu commented on HBASE-17110:
-

Thank you for looking into the code [~tedyu] and [~anoop.hbase]. I've changed 
the strategy to be the default.
Following [~tedyu]'s suggestion, I also changed the format and replaced the 
variable initialnumRegions wherever possible.
On second thought, I think using the standard deviation might be somewhat 
redundant: I already use "slop" to set the threshold, so another criterion 
would over-complicate the control (hard for users, and it would need another 
conf setting unless hardcoded to 10). So I removed it as well.
A new patch HBASE-17110-V2.patch has been uploaded :)


> Add an "Overall Strategy" option(balanced both on table level and server 
> level) to SimpleLoadBalancer
> -
>
> Key: HBASE-17110
> URL: https://issues.apache.org/jira/browse/HBASE-17110
> Project: HBase
>  Issue Type: New Feature
>  Components: Balancer
>Affects Versions: 2.0.0, 1.2.4
>Reporter: Charlie Qiangeng Xu
>Assignee: Charlie Qiangeng Xu
> Attachments: HBASE-17110.patch, SimpleBalancerBytableOverall.V1
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17112) Prevent setting timestamp of delta operations being same as previous value's

2016-11-16 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-17112:
--
Attachment: HBASE-17112-branch-1.1-v1.patch

Patch for branch-1.1

> Prevent setting timestamp of delta operations being same as previous value's
> 
>
> Key: HBASE-17112
> URL: https://issues.apache.org/jira/browse/HBASE-17112
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.7, 0.98.23, 1.2.4
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-17112-branch-1-v1.patch, 
> HBASE-17112-branch-1.1-v1.patch, HBASE-17112-v1.patch, HBASE-17112-v2.patch
>
>
> In delta operations (Increment and Append), we read the current value first 
> and then write the whole new result into the WAL as a Put with the current 
> timestamp. If the previous ts is larger than the current ts, we use the 
> previous ts.
> If we have two Puts with the same TS, we ignore the Put with the lower 
> sequence id. That is not friendly to versioning. And for replication we drop 
> the sequence id while writing to the peer cluster, so on the slave we don't 
> know the order in which they were written. If the pushing is out of order, 
> the result will be wrong.
> We can set the new ts to previous+1 if the previous is not less than now.
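The last sentence is the core of the fix. As a sketch (a hypothetical helper for illustration, not the actual HRegion code):

```java
// Sketch of the timestamp rule for delta operations: the new Put must never
// share a timestamp with the previous cell, so bump to previous + 1 whenever
// the clock has not moved past the previous timestamp.
public class DeltaTsSketch {

    static long nextTimestamp(long previousTs, long now) {
        return previousTs >= now ? previousTs + 1 : now;
    }
}
```

This guarantees the new value always gets a strictly larger timestamp than the value it replaces, so versioning and replication ordering no longer depend on sequence ids.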



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17096) checkAndMutateApi doesn't work correctly on 0.98.19+

2016-11-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17096?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672636#comment-15672636
 ] 

Hudson commented on HBASE-17096:


FAILURE: Integrated in Jenkins build Phoenix-master #1493 (See 
[https://builds.apache.org/job/Phoenix-master/1493/])
PHOENIX-3482 Provide a work around for HBASE-17096 (Samarth: rev 
7cdd7ca00574a6d815d367be883952bc30d006fc)
* (edit) 
phoenix-core/src/main/java/org/apache/phoenix/query/ConnectionQueryServicesImpl.java


> checkAndMutateApi doesn't work correctly on 0.98.19+
> 
>
> Key: HBASE-17096
> URL: https://issues.apache.org/jira/browse/HBASE-17096
> Project: HBase
>  Issue Type: Bug
>Reporter: Samarth Jain
>Assignee: Heng Chen
> Fix For: 0.98.24
>
> Attachments: HBASE-17096-0.98.patch, HBASE-17096-0.98.v2.patch
>
>
> Below is the test case. It uses some Phoenix APIs for getting hold of admin 
> and HConnection but should be easily adopted for an HBase IT test. The second 
> checkAndMutate should return false but it is returning true. This test fails 
> with HBase-0.98.23 and works fine with HBase-0.98.17
> {code}
> @Test
> public void testCheckAndMutateApi() throws Exception {
> byte[] row = Bytes.toBytes("ROW");
> byte[] tableNameBytes = Bytes.toBytes(generateUniqueName());
> byte[] family = Bytes.toBytes(generateUniqueName());
> byte[] qualifier = Bytes.toBytes("QUALIFIER");
> byte[] oldValue = null;
> byte[] newValue = Bytes.toBytes("VALUE");
> Put put = new Put(row);
> put.add(family, qualifier, newValue);
> try (Connection conn = DriverManager.getConnection(getUrl())) {
> PhoenixConnection phxConn = conn.unwrap(PhoenixConnection.class);
> try (HBaseAdmin admin = phxConn.getQueryServices().getAdmin()) {
> HTableDescriptor tableDesc = new HTableDescriptor(
> TableName.valueOf(tableNameBytes));
> HColumnDescriptor columnDesc = new HColumnDescriptor(family);
> columnDesc.setTimeToLive(120);
> tableDesc.addFamily(columnDesc);
> admin.createTable(tableDesc);
> HTableInterface tableDescriptor = 
> admin.getConnection().getTable(tableNameBytes);
> assertTrue(tableDescriptor.checkAndPut(row, family, 
> qualifier, oldValue, put));
> Delete delete = new Delete(row);
> RowMutations mutations = new RowMutations(row);
> mutations.add(delete);
> assertTrue(tableDescriptor.checkAndMutate(row, family, 
> qualifier, CompareOp.EQUAL, newValue, mutations));
> assertFalse(tableDescriptor.checkAndMutate(row, family, 
> qualifier, CompareOp.EQUAL, newValue, mutations));
> }
> }
> }
> {code}
> FYI, [~apurtell], [~jamestaylor], [~lhofhansl]. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17088) Refactor RWQueueRpcExecutor/BalancedQueueRpcExecutor/RpcExecutor

2016-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672631#comment-15672631
 ] 

Hadoop QA commented on HBASE-17088:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} docker {color} | {color:red} 6m 42s 
{color} | {color:red} Docker failed to build yetus/hbase:8d52d23. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839302/HBASE-17088-v4.patch |
| JIRA Issue | HBASE-17088 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4505/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> Refactor RWQueueRpcExecutor/BalancedQueueRpcExecutor/RpcExecutor
> 
>
> Key: HBASE-17088
> URL: https://issues.apache.org/jira/browse/HBASE-17088
> Project: HBase
>  Issue Type: Improvement
>  Components: rpc
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-17088-v1.patch, HBASE-17088-v2.patch, 
> HBASE-17088-v3.patch, HBASE-17088-v3.patch, HBASE-17088-v4.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17112) Prevent setting timestamp of delta operations being same as previous value's

2016-11-16 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-17112:
--
Attachment: HBASE-17112-branch-1-v1.patch

Patch for branch-1/1.3/1.2

> Prevent setting timestamp of delta operations being same as previous value's
> 
>
> Key: HBASE-17112
> URL: https://issues.apache.org/jira/browse/HBASE-17112
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.7, 0.98.23, 1.2.4
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-17112-branch-1-v1.patch, HBASE-17112-v1.patch, 
> HBASE-17112-v2.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17088) Refactor RWQueueRpcExecutor/BalancedQueueRpcExecutor/RpcExecutor

2016-11-16 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672613#comment-15672613
 ] 

Duo Zhang commented on HBASE-17088:
---

+1. Any other concerns? [~mbertozzi]. Thanks.

> Refactor RWQueueRpcExecutor/BalancedQueueRpcExecutor/RpcExecutor
> 
>
> Key: HBASE-17088
> URL: https://issues.apache.org/jira/browse/HBASE-17088
> Project: HBase
>  Issue Type: Improvement
>  Components: rpc
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-17088-v1.patch, HBASE-17088-v2.patch, 
> HBASE-17088-v3.patch, HBASE-17088-v3.patch, HBASE-17088-v4.patch
>
>



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16981) Expand Mob Compaction Partition policy from daily to weekly, monthly and beyond

2016-11-16 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672585#comment-15672585
 ] 

huaxiang sun commented on HBASE-16981:
--

Thanks Jingcheng. I am thinking of dropping the quarterly/yearly policies from 
the current proposal to reduce the case described by Anoop.
The threshold proposal may not work: as time passes there will be more files, 
and the threshold will be reached too easily.

> Expand Mob Compaction Partition policy from daily to weekly, monthly and 
> beyond
> ---
>
> Key: HBASE-16981
> URL: https://issues.apache.org/jira/browse/HBASE-16981
> Project: HBase
>  Issue Type: New Feature
>  Components: mob
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-16981.master.001.patch, 
> HBASE-16981.master.002.patch, 
> Supportingweeklyandmonthlymobcompactionpartitionpolicyinhbase.pdf
>
>
> Today the mob region holds all mob files for all regions. With the daily 
> partition mob compaction policy, after a major mob compaction there is still 
> one file per region per day. Given there are 365 days in a year, that is at 
> least 365 files per region. Since HDFS limits the number of files under one 
> folder, this will not scale if there are many regions. To reduce the mob 
> file count, we want to introduce other partition policies, such as weekly 
> and monthly, to compact the mob files within one week or month into one 
> file. This jira is created to track that effort.
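The weekly/monthly idea amounts to mapping each mob cell's date to a coarser partition key, so all files in the same partition compact into one. A hypothetical sketch of that mapping (the key formats and names are invented for illustration, not taken from the design doc):

```java
import java.time.LocalDate;
import java.time.temporal.IsoFields;

// Illustration of the partition-policy idea: derive a compaction partition key
// from a mob cell's date. A finer policy produces more partitions and thus
// more files; WEEKLY cuts the daily file count by ~7x, MONTHLY by ~30x.
public class MobPartitionSketch {

    enum Policy { DAILY, WEEKLY, MONTHLY }

    static String partitionKey(LocalDate date, Policy policy) {
        switch (policy) {
            case DAILY:
                return date.toString();                                // 2016-11-16
            case WEEKLY:
                return date.get(IsoFields.WEEK_BASED_YEAR) + "-W"
                        + date.get(IsoFields.WEEK_OF_WEEK_BASED_YEAR); // 2016-W46
            case MONTHLY:
                return date.getYear() + "-" + date.getMonthValue();    // 2016-11
            default:
                throw new IllegalArgumentException("unknown policy " + policy);
        }
    }
}
```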



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17112) Prevent setting timestamp of delta operations being same as previous value's

2016-11-16 Thread Phil Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672584#comment-15672584
 ] 

Phil Yang commented on HBASE-17112:
---

I think so. In Append#add and Increment#addColumn we can pass cf/cq/delta but 
cannot pass a ts; alternatively we can pass a Cell, but the server will ignore its ts.
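The timestamp rule proposed in the description ("set the new ts to previous+1 if the previous is not less than now") can be sketched in plain Java. The class and method names below are hypothetical illustrations, not the actual patch:

```java
// Sketch of the proposed timestamp rule for delta operations (Increment/Append):
// the rewritten Put's timestamp must always be strictly greater than the
// previous cell's, even when the stored timestamp is ahead of the wall clock.
public class DeltaTimestamp {
    // Returns the timestamp to use for the rewritten Put.
    static long nextTimestamp(long previousTs, long now) {
        // Normal case: the wall clock has moved past the stored cell.
        if (now > previousTs) {
            return now;
        }
        // Clock skew or same-millisecond update: bump past the old value so the
        // new version always wins, independent of sequence id ordering.
        return previousTs + 1;
    }

    public static void main(String[] args) {
        System.out.println(nextTimestamp(100L, 200L)); // 200
        System.out.println(nextTimestamp(200L, 200L)); // 201
        System.out.println(nextTimestamp(300L, 200L)); // 301
    }
}
```

This avoids two Puts sharing a timestamp, so replication peers (which drop sequence ids) still order the versions correctly.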

> Prevent setting timestamp of delta operations being same as previous value's
> 
>
> Key: HBASE-17112
> URL: https://issues.apache.org/jira/browse/HBASE-17112
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.7, 0.98.23, 1.2.4
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-17112-v1.patch, HBASE-17112-v2.patch
>
>
> In delta operations, Increment and Append. We will read current value first 
> and then write the new whole result into WAL as the type of Put with current 
> timestamp. If the previous ts is larger than current ts, we will use the 
> previous ts.
> If we have two Puts with same TS, we will ignore the Put with lower sequence 
> id. It is not friendly with versioning. And for replication we will drop 
> sequence id  while writing to peer cluster so in the slave we don't know what 
> the order they are being written. If the pushing is disordered, the result 
> will be wrong.
> We can set the new ts to previous+1 if the previous is not less than now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17088) Refactor RWQueueRpcExecutor/BalancedQueueRpcExecutor/RpcExecutor

2016-11-16 Thread Guanghao Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17088?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-17088:
---
Attachment: HBASE-17088-v4.patch

Update by review comments.

> Refactor RWQueueRpcExecutor/BalancedQueueRpcExecutor/RpcExecutor
> 
>
> Key: HBASE-17088
> URL: https://issues.apache.org/jira/browse/HBASE-17088
> Project: HBase
>  Issue Type: Improvement
>  Components: rpc
>Affects Versions: 2.0.0
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Attachments: HBASE-17088-v1.patch, HBASE-17088-v2.patch, 
> HBASE-17088-v3.patch, HBASE-17088-v3.patch, HBASE-17088-v4.patch
>
>
> 1. The RWQueueRpcExecutor has eight constructor method and the longest one 
> has ten parameters. But It is only used in SimpleRpcScheduler and easy to 
> confused when read the code.
> 2. There are duplicate method implement in RWQueueRpcExecutor and 
> BalancedQueueRpcExecutor. They can be implemented in their parent class 
> RpcExecutor.
> 3. SimpleRpcScheduler read many configs to new RpcExecutor. But the 
> CALL_QUEUE_SCAN_SHARE_CONF_KEY is only needed by RWQueueRpcExecutor. And 
> CALL_QUEUE_CODEL_TARGET_DELAY, CALL_QUEUE_CODEL_INTERVAL and 
> CALL_QUEUE_CODEL_LIFO_THRESHOLD are only needed by AdaptiveLifoCoDelCallQueue.
> So I thought we can refactor it. Suggestions are welcome.
> Review board: https://reviews.apache.org/r/53726/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17112) Prevent setting timestamp of delta operations being same as previous value's

2016-11-16 Thread Phil Yang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17112?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Phil Yang updated HBASE-17112:
--
Attachment: HBASE-17112-v2.patch

Fix TestHRegion

> Prevent setting timestamp of delta operations being same as previous value's
> 
>
> Key: HBASE-17112
> URL: https://issues.apache.org/jira/browse/HBASE-17112
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.1.7, 0.98.23, 1.2.4
>Reporter: Phil Yang
>Assignee: Phil Yang
> Attachments: HBASE-17112-v1.patch, HBASE-17112-v2.patch
>
>
> In delta operations, Increment and Append. We will read current value first 
> and then write the new whole result into WAL as the type of Put with current 
> timestamp. If the previous ts is larger than current ts, we will use the 
> previous ts.
> If we have two Puts with same TS, we will ignore the Put with lower sequence 
> id. It is not friendly with versioning. And for replication we will drop 
> sequence id  while writing to peer cluster so in the slave we don't know what 
> the order they are being written. If the pushing is disordered, the result 
> will be wrong.
> We can set the new ts to previous+1 if the previous is not less than now.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17118) StoreScanner leaked in KeyValueHeap

2016-11-16 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-17118:
--
Assignee: binlijin

> StoreScanner leaked in KeyValueHeap
> ---
>
> Key: HBASE-17118
> URL: https://issues.apache.org/jira/browse/HBASE-17118
> Project: HBase
>  Issue Type: Bug
>Reporter: binlijin
>Assignee: binlijin
> Attachments: HBASE-17118-master_v1.patch, StoreScanner.png, 
> StoreScannerLeakHeap.png
>
>
> KeyValueHeap#generalizedSeek
>   KeyValueScanner scanner = current;
>   while (scanner != null) {
> Cell topKey = scanner.peek();
> ..
> boolean seekResult;
> if (isLazy && heap.size() > 0) {
>   // If there is only one scanner left, we don't do lazy seek.
>   seekResult = scanner.requestSeek(seekKey, forward, useBloom);
> } else {
>   seekResult = NonLazyKeyValueScanner.doRealSeek(scanner, seekKey,
>   forward);
> }
> ..
> scanner = heap.poll();
>   }
> (1) scanner = heap.poll();  Retrieves and removes the head of this queue
> (2) scanner.requestSeek(seekKey, forward, useBloom); or 
> NonLazyKeyValueScanner.doRealSeek(scanner, seekKey, forward);
> throw exception, and scanner will have no chance to close, so will cause the 
> scanner leak.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17118) StoreScanner leaked in KeyValueHeap

2016-11-16 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672525#comment-15672525
 ] 

Duo Zhang commented on HBASE-17118:
---

Usually we should avoid catching Throwable; catching Exception is enough.

[~tedyu] On Java 7 or newer you can still get the original Throwable, and you can 
retrieve the exception from scanner.close by calling the Throwable's getSuppressed 
method; the suppressed exceptions will also be printed when calling 
printStackTrace. But I agree that we should catch the exception from 
scanner.close, since this is a read operation.

Thanks.
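The getSuppressed mechanism described above can be demonstrated with a small, self-contained sketch (generic Java, not HBase code; all names are made up):

```java
// Demonstrates the Java 7+ suppressed-exception mechanism: if close() throws
// while we are handling an earlier failure, attach it with addSuppressed() so
// the original exception is not eclipsed, and both appear in printStackTrace.
public class SuppressedDemo {
    static void seek()  { throw new IllegalArgumentException("seek failed"); }
    static void close() { throw new RuntimeException("close failed"); }

    // Runs the seek, and on failure performs best-effort cleanup while
    // preserving the primary exception. Returns it for inspection.
    static RuntimeException seekThenClose() {
        try {
            seek();
        } catch (RuntimeException primary) {
            try {
                close();                        // best-effort cleanup
            } catch (RuntimeException fromClose) {
                primary.addSuppressed(fromClose); // keep both exceptions
            }
            return primary;
        }
        return null;
    }

    public static void main(String[] args) {
        RuntimeException e = seekThenClose();
        System.out.println(e.getMessage());                    // seek failed
        System.out.println(e.getSuppressed()[0].getMessage()); // close failed
    }
}
```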

> StoreScanner leaked in KeyValueHeap
> ---
>
> Key: HBASE-17118
> URL: https://issues.apache.org/jira/browse/HBASE-17118
> Project: HBase
>  Issue Type: Bug
>Reporter: binlijin
> Attachments: HBASE-17118-master_v1.patch, StoreScanner.png, 
> StoreScannerLeakHeap.png
>
>
> KeyValueHeap#generalizedSeek
>   KeyValueScanner scanner = current;
>   while (scanner != null) {
> Cell topKey = scanner.peek();
> ..
> boolean seekResult;
> if (isLazy && heap.size() > 0) {
>   // If there is only one scanner left, we don't do lazy seek.
>   seekResult = scanner.requestSeek(seekKey, forward, useBloom);
> } else {
>   seekResult = NonLazyKeyValueScanner.doRealSeek(scanner, seekKey,
>   forward);
> }
> ..
> scanner = heap.poll();
>   }
> (1) scanner = heap.poll();  Retrieves and removes the head of this queue
> (2) scanner.requestSeek(seekKey, forward, useBloom); or 
> NonLazyKeyValueScanner.doRealSeek(scanner, seekKey, forward);
> throw exception, and scanner will have no chance to close, so will cause the 
> scanner leak.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17115) HMaster/HRegion Info Server does not honour admin.acl

2016-11-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17115?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672516#comment-15672516
 ] 

Andrew Purtell commented on HBASE-17115:


It would be weird to have the service authorization policy in the policy files and 
this other setting in hbase-site. Consider hooking this into Hadoop service 
auth, like RPC. Also, we'd still be missing secure authentication, and that 
concern seems similar to HADOOP-13415.

> HMaster/HRegion Info Server does not honour admin.acl
> -
>
> Key: HBASE-17115
> URL: https://issues.apache.org/jira/browse/HBASE-17115
> Project: HBase
>  Issue Type: Bug
>Reporter: Arshad Mohammad
>
> Currently there is no way to enable protected URLs like /jmx,  /conf  only 
> for admins. This is applicable for both Master and RegionServer.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17118) StoreScanner leaked in KeyValueHeap

2016-11-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672497#comment-15672497
 ] 

Ted Yu commented on HBASE-17118:


{code}
332 scanner.close();
{code}
Exception from close() call should be caught (and logged).
Otherwise this potential exception would eclipse the Throwable.

> StoreScanner leaked in KeyValueHeap
> ---
>
> Key: HBASE-17118
> URL: https://issues.apache.org/jira/browse/HBASE-17118
> Project: HBase
>  Issue Type: Bug
>Reporter: binlijin
> Attachments: HBASE-17118-master_v1.patch, StoreScanner.png, 
> StoreScannerLeakHeap.png
>
>
> KeyValueHeap#generalizedSeek
>   KeyValueScanner scanner = current;
>   while (scanner != null) {
> Cell topKey = scanner.peek();
> ..
> boolean seekResult;
> if (isLazy && heap.size() > 0) {
>   // If there is only one scanner left, we don't do lazy seek.
>   seekResult = scanner.requestSeek(seekKey, forward, useBloom);
> } else {
>   seekResult = NonLazyKeyValueScanner.doRealSeek(scanner, seekKey,
>   forward);
> }
> ..
> scanner = heap.poll();
>   }
> (1) scanner = heap.poll();  Retrieves and removes the head of this queue
> (2) scanner.requestSeek(seekKey, forward, useBloom); or 
> NonLazyKeyValueScanner.doRealSeek(scanner, seekKey, forward);
> throw exception, and scanner will have no chance to close, so will cause the 
> scanner leak.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17085) AsyncFSWAL may issue unnecessary AsyncDFSOutput.sync

2016-11-16 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17085?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672449#comment-15672449
 ] 

Duo Zhang commented on HBASE-17085:
---

Let's fix this issue first? [~stack] [~ram_krish].
I will try other methods to aggregate more syncs after this (limit concurrent 
syncs, add a delay before issuing sync, etc.). Thanks.

> AsyncFSWAL may issue unnecessary AsyncDFSOutput.sync
> 
>
> Key: HBASE-17085
> URL: https://issues.apache.org/jira/browse/HBASE-17085
> Project: HBase
>  Issue Type: Sub-task
>  Components: wal
>Affects Versions: 2.0.0
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0
>
> Attachments: HBASE-17085-v1.patch, HBASE-17085-v2.patch, 
> HBASE-17085-v2.patch, HBASE-17085.patch
>
>
> The problem is in appendAndSync method, we will issue an  AsyncDFSOutput.sync 
> if syncFutures is not empty. The SyncFutures in syncFutures can only be 
> removed after an AsyncDFSOutput.sync comes back, so before the 
> AsyncDFSOutput.sync actually returns, we will always issue an  
> AsyncDFSOutput.sync after an append even if there is no new sync request.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-16981) Expand Mob Compaction Partition policy from daily to weekly, monthly and beyond

2016-11-16 Thread Jingcheng Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16981?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672441#comment-15672441
 ] 

Jingcheng Du commented on HBASE-16981:
--

Thanks huaxiang.
I think it is okay to use your existing implementation in this JIRA. I'll 
review it soon. If there are any other improvements, let's file another JIRA for them.
What's your idea [~anoopsamjohn]?


> Expand Mob Compaction Partition policy from daily to weekly, monthly and 
> beyond
> ---
>
> Key: HBASE-16981
> URL: https://issues.apache.org/jira/browse/HBASE-16981
> Project: HBase
>  Issue Type: New Feature
>  Components: mob
>Affects Versions: 2.0.0
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-16981.master.001.patch, 
> HBASE-16981.master.002.patch, 
> Supportingweeklyandmonthlymobcompactionpartitionpolicyinhbase.pdf
>
>
> Today the mob region holds all mob files for all regions. With daily 
> partition mob compaction policy, after major mob compaction, there is still 
> one file per region daily. Given there is 365 days in one year, at least 365 
> files per region. Since HDFS has limitation for number of files under one 
> folder, this is not going to scale if there are lots of regions. To reduce 
> mob file number,  we want to introduce other partition policies such as 
> weekly, monthly to compact mob files within one week or month into one file. 
> This jira is create to track this effort.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-3562) ValueFilter is being evaluated before performing the column match

2016-11-16 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-3562?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672429#comment-15672429
 ] 

Duo Zhang commented on HBASE-3562:
--

Do you mean we should commit the UTs in this patch?

Now in master we call columns.checkColumn before evaluating the filter, so I 
think the problem described here is gone. But in general, I think we should 
also count versions before evaluating filters. The current 
implementation (filter, then count versions) may return different results on the 
same data set after a major compaction.

Think of this: you set maxVersions to 3, and there are 4 versions. Your filter 
filters out the 3 newer versions, so a get or scan returns the oldest version. 
Then a major compaction runs and the oldest version is reclaimed; now the same 
get or scan returns nothing.

I think we need to fix this, although it is an 'incompatible change'.

Thanks.
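The maxVersions-vs-filter ordering described above can be illustrated with a small sketch over plain integer timestamps (hypothetical helper names, not HBase code):

```java
import java.util.List;
import java.util.function.Predicate;
import java.util.stream.Collectors;

// Illustrates why "filter then count versions" and "count versions then filter"
// can disagree. Versions are modeled as timestamps, newest first.
public class VersionFilterOrder {
    // Current-behavior sketch: the filter sees every stored version, then the
    // first maxVersions survivors are returned.
    static List<Integer> filterThenCount(List<Integer> newestFirst, int maxVersions,
                                         Predicate<Integer> keep) {
        return newestFirst.stream().filter(keep).limit(maxVersions)
                          .collect(Collectors.toList());
    }

    // Alternative sketch: only the newest maxVersions are even visible to the filter,
    // matching what survives a major compaction.
    static List<Integer> countThenFilter(List<Integer> newestFirst, int maxVersions,
                                         Predicate<Integer> keep) {
        return newestFirst.stream().limit(maxVersions).filter(keep)
                          .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Integer> versions = List.of(4, 3, 2, 1);  // 4 versions, newest first
        Predicate<Integer> keep = ts -> ts == 1;        // filter rejects the 3 newer ones
        System.out.println(filterThenCount(versions, 3, keep)); // [1]  (before compaction)
        System.out.println(countThenFilter(versions, 3, keep)); // []   (as after compaction)
    }
}
```

With maxVersions=3, a major compaction reclaims version 1, so the first ordering flips from returning one row to returning nothing on the same logical data.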

> ValueFilter is being evaluated before performing the column match
> -
>
> Key: HBASE-3562
> URL: https://issues.apache.org/jira/browse/HBASE-3562
> Project: HBase
>  Issue Type: Bug
>  Components: Filters
>Affects Versions: 0.90.0, 0.94.7
>Reporter: Evert Arckens
> Attachments: HBASE-3562.patch
>
>
> When performing a Get operation where a both a column is specified and a 
> ValueFilter, the ValueFilter is evaluated before making the column match as 
> is indicated in the javadoc of Get.setFilter()  : " {@link 
> Filter#filterKeyValue(KeyValue)} is called AFTER all tests for ttl, column 
> match, deletes and max versions have been run. "
> The is shown in the little test below, which uses a TestComparator extending 
> a WritableByteArrayComparable.
> public void testFilter() throws Exception {
>   byte[] cf = Bytes.toBytes("cf");
>   byte[] row = Bytes.toBytes("row");
>   byte[] col1 = Bytes.toBytes("col1");
>   byte[] col2 = Bytes.toBytes("col2");
>   Put put = new Put(row);
>   put.add(cf, col1, new byte[]{(byte)1});
>   put.add(cf, col2, new byte[]{(byte)2});
>   table.put(put);
>   Get get = new Get(row);
>   get.addColumn(cf, col2); // We only want to retrieve col2
>   TestComparator testComparator = new TestComparator();
>   Filter filter = new ValueFilter(CompareOp.EQUAL, testComparator);
>   get.setFilter(filter);
>   Result result = table.get(get);
> }
> public class TestComparator extends WritableByteArrayComparable {
> /**
>  * Nullary constructor, for Writable
>  */
> public TestComparator() {
> super();
> }
> 
> @Override
> public int compareTo(byte[] theirValue) {
> if (theirValue[0] == (byte)1) {
> // If the column match was done before evaluating the filter, we 
> should never get here.
> throw new RuntimeException("I only expect (byte)2 in col2, not 
> (byte)1 from col1");
> }
> if (theirValue[0] == (byte)2) {
> return 0;
> }
> else return 1;
> }
> }
> When only one column should be retrieved, this can be worked around by using 
> a SingleColumnValueFilter instead of the ValueFilter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17114) Add an option to set special retry pause when encountering CallQueueTooBigException

2016-11-16 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672425#comment-15672425
 ] 

Guanghao Zhang commented on HBASE-17114:


bq. Another approach to this would be to allow the server to hint back to the 
client how long it should back off.
Currently, ThrottlingException carries a wait interval to the client. In our use case, 
we introduced a new parent exception, DoNotRetryNowIOException 
(ThrottlingException extends it), which means the client should not retry now; 
it should sleep for the wait interval carried back by DoNotRetryNowIOException and then 
retry. Can the RS calculate a wait interval (e.g. half of the queue time) and use 
CallQueueTooBigException to carry it back to the client?

> Add an option to set special retry pause when encountering 
> CallQueueTooBigException
> ---
>
> Key: HBASE-17114
> URL: https://issues.apache.org/jira/browse/HBASE-17114
> Project: HBase
>  Issue Type: Bug
>Reporter: Yu Li
>Assignee: Yu Li
>
> As titled, after HBASE-15146 we will throw {{CallQueueTooBigException}} 
> instead of dead-wait. This is good for performance for most cases but might 
> cause a side-effect that if too many clients connect to the busy RS, that the 
> retry requests may come over and over again and RS never got the chance for 
> recovering, and the issue will become especially critical when the target 
> region is META.
> So here in this JIRA we propose to supply some special retry pause for CQTBE 
> in name of {{hbase.client.pause.special}}, and by default it will be 500ms (5 
> times of {{hbase.client.pause}} default value)



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17118) StoreScanner leaked in KeyValueHeap

2016-11-16 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-17118:
-
Description: 
KeyValueHeap#generalizedSeek
  KeyValueScanner scanner = current;
  while (scanner != null) {
Cell topKey = scanner.peek();
..
boolean seekResult;
if (isLazy && heap.size() > 0) {
  // If there is only one scanner left, we don't do lazy seek.
  seekResult = scanner.requestSeek(seekKey, forward, useBloom);
} else {
  seekResult = NonLazyKeyValueScanner.doRealSeek(scanner, seekKey,
  forward);
}
..
scanner = heap.poll();
  }
(1) scanner = heap.poll();  Retrieves and removes the head of this queue
(2) scanner.requestSeek(seekKey, forward, useBloom); or 
NonLazyKeyValueScanner.doRealSeek(scanner, seekKey, forward);
throw exception, and scanner will have no chance to close, so will cause the 
scanner leak.

  was:
KeyValueHeap#generalizedSeek
  KeyValueScanner scanner = current;
  while (scanner != null) {
Cell topKey = scanner.peek();
..
boolean seekResult;
if (isLazy && heap.size() > 0) {
  // If there is only one scanner left, we don't do lazy seek.
  seekResult = scanner.requestSeek(seekKey, forward, useBloom);
} else {
  seekResult = NonLazyKeyValueScanner.doRealSeek(scanner, seekKey,
  forward);
}
..
scanner = heap.poll();
  }
(1) scanner = heap.poll();  Retrieves and removes the head of this queue
(2) scanner.requestSeek(seekKey, forward, useBloom); or 
NonLazyKeyValueScanner.doRealSeek(scanner, seekKey, forward);
throw exception, and scanner will have no change to close, so will cause the 
scanner leak.


> StoreScanner leaked in KeyValueHeap
> ---
>
> Key: HBASE-17118
> URL: https://issues.apache.org/jira/browse/HBASE-17118
> Project: HBase
>  Issue Type: Bug
>Reporter: binlijin
> Attachments: HBASE-17118-master_v1.patch, StoreScanner.png, 
> StoreScannerLeakHeap.png
>
>
> KeyValueHeap#generalizedSeek
>   KeyValueScanner scanner = current;
>   while (scanner != null) {
> Cell topKey = scanner.peek();
> ..
> boolean seekResult;
> if (isLazy && heap.size() > 0) {
>   // If there is only one scanner left, we don't do lazy seek.
>   seekResult = scanner.requestSeek(seekKey, forward, useBloom);
> } else {
>   seekResult = NonLazyKeyValueScanner.doRealSeek(scanner, seekKey,
>   forward);
> }
> ..
> scanner = heap.poll();
>   }
> (1) scanner = heap.poll();  Retrieves and removes the head of this queue
> (2) scanner.requestSeek(seekKey, forward, useBloom); or 
> NonLazyKeyValueScanner.doRealSeek(scanner, seekKey, forward);
> throw exception, and scanner will have no chance to close, so will cause the 
> scanner leak.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HBASE-17118) StoreScanner leaked in KeyValueHeap

2016-11-16 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-17118:
-
Attachment: HBASE-17118-master_v1.patch

> StoreScanner leaked in KeyValueHeap
> ---
>
> Key: HBASE-17118
> URL: https://issues.apache.org/jira/browse/HBASE-17118
> Project: HBase
>  Issue Type: Bug
>Reporter: binlijin
> Attachments: HBASE-17118-master_v1.patch, StoreScanner.png, 
> StoreScannerLeakHeap.png
>
>
> KeyValueHeap#generalizedSeek
>   KeyValueScanner scanner = current;
>   while (scanner != null) {
> Cell topKey = scanner.peek();
> ..
> boolean seekResult;
> if (isLazy && heap.size() > 0) {
>   // If there is only one scanner left, we don't do lazy seek.
>   seekResult = scanner.requestSeek(seekKey, forward, useBloom);
> } else {
>   seekResult = NonLazyKeyValueScanner.doRealSeek(scanner, seekKey,
>   forward);
> }
> ..
> scanner = heap.poll();
>   }
> (1) scanner = heap.poll();  Retrieves and removes the head of this queue
> (2) scanner.requestSeek(seekKey, forward, useBloom); or 
> NonLazyKeyValueScanner.doRealSeek(scanner, seekKey, forward);
> throw exception, and scanner will have no change to close, so will cause the 
> scanner leak.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17118) StoreScanner leaked in KeyValueHeap

2016-11-16 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672406#comment-15672406
 ] 

binlijin commented on HBASE-17118:
--

We found the problem very late, could not find the HFile which caused the 
IllegalArgumentException, and have no clue about the IllegalArgumentException 
yet.

> StoreScanner leaked in KeyValueHeap
> ---
>
> Key: HBASE-17118
> URL: https://issues.apache.org/jira/browse/HBASE-17118
> Project: HBase
>  Issue Type: Bug
>Reporter: binlijin
> Attachments: StoreScanner.png, StoreScannerLeakHeap.png
>
>
> KeyValueHeap#generalizedSeek
>   KeyValueScanner scanner = current;
>   while (scanner != null) {
> Cell topKey = scanner.peek();
> ..
> boolean seekResult;
> if (isLazy && heap.size() > 0) {
>   // If there is only one scanner left, we don't do lazy seek.
>   seekResult = scanner.requestSeek(seekKey, forward, useBloom);
> } else {
>   seekResult = NonLazyKeyValueScanner.doRealSeek(scanner, seekKey,
>   forward);
> }
> ..
> scanner = heap.poll();
>   }
> (1) scanner = heap.poll();  Retrieves and removes the head of this queue
> (2) scanner.requestSeek(seekKey, forward, useBloom); or 
> NonLazyKeyValueScanner.doRealSeek(scanner, seekKey, forward);
> throw exception, and scanner will have no change to close, so will cause the 
> scanner leak.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17118) StoreScanner leaked in KeyValueHeap

2016-11-16 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672402#comment-15672402
 ] 

binlijin commented on HBASE-17118:
--

We encountered the problem on 2016.11.10 through abnormal GC; we dumped the heap 
and found the StoreScanner leak. Through the heap dump we found that the 
StoreScanner was created at 2016-11-02.
{code}
2016-11-02 07:36:15,056 ERROR [B.defaultRpcServer.handler=5,queue=5,port=16020] 
ipc.RpcServer: Unexpected throwable object
java.lang.IllegalArgumentException
at java.nio.Buffer.limit(Buffer.java:267)
at 
org.apache.hadoop.hbase.nio.SingleByteBuff.limit(SingleByteBuff.java:91)
at 
org.apache.hadoop.hbase.nio.SingleByteBuff.limit(SingleByteBuff.java:33)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock.getBufferReadOnly(HFileBlock.java:393)
at 
org.apache.hadoop.hbase.io.hfile.ChecksumUtil.validateBlockChecksum(ChecksumUtil.java:158)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.validateBlockChecksum(HFileBlock.java:1737)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockDataInternal(HFileBlock.java:1686)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.readBlockData(HFileBlock.java:1495)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl.readBlock(HFileReaderImpl.java:1440)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlockIndex$CellBasedKeyBlockIndexReader.loadDataBlockWithScanInfo(HFileBlockIndex.java:322)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.seekTo(HFileReaderImpl.java:817)
at 
org.apache.hadoop.hbase.io.hfile.HFileReaderImpl$HFileScannerImpl.reseekTo(HFileReaderImpl.java:798)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseekAtOrAfter(StoreFileScanner.java:263)
at 
org.apache.hadoop.hbase.regionserver.StoreFileScanner.reseek(StoreFileScanner.java:180)
at 
org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:323)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.reseek(KeyValueHeap.java:267)
at 
org.apache.hadoop.hbase.regionserver.StoreScanner.reseek(StoreScanner.java:824)
at 
org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.doRealSeek(NonLazyKeyValueScanner.java:55)
at 
org.apache.hadoop.hbase.regionserver.NonLazyKeyValueScanner.requestSeek(NonLazyKeyValueScanner.java:39)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.generalizedSeek(KeyValueHeap.java:321)
at 
org.apache.hadoop.hbase.regionserver.KeyValueHeap.requestSeek(KeyValueHeap.java:279)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.joinedHeapMayHaveData(HRegion.java:5904)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextInternal(HRegion.java:5845)
at 
org.apache.hadoop.hbase.regionserver.HRegion$RegionScannerImpl.nextRaw(HRegion.java:5589)
at 
org.apache.hadoop.hbase.regionserver.RSRpcServices.scan(RSRpcServices.java:2644)
at 
org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32205)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:839)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:102)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:756)
2016-11-02 07:36:15,056 DEBUG [B.defaultRpcServer.handler=5,queue=5,port=16020] 
ipc.RpcServer: B.defaultRpcServer.handler=5,queue=5,port=16020: callId: 115 
service: ClientService methodName: Scan size: 29 connection: 11.180.36.86:54872
java.io.IOException
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:894)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:102)
at 
org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
at java.lang.Thread.run(Thread.java:756)
Caused by: java.lang.IllegalArgumentException
at java.nio.Buffer.limit(Buffer.java:267)
at 
org.apache.hadoop.hbase.nio.SingleByteBuff.limit(SingleByteBuff.java:91)
at 
org.apache.hadoop.hbase.nio.SingleByteBuff.limit(SingleByteBuff.java:33)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock.getBufferReadOnly(HFileBlock.java:393)
at 
org.apache.hadoop.hbase.io.hfile.ChecksumUtil.validateBlockChecksum(ChecksumUtil.java:158)
at 
org.apache.hadoop.hbase.io.hfile.HFileBlock$FSReaderImpl.validateBlockChecksum(HFileBlock.java:1737)
at 

[jira] [Updated] (HBASE-17118) StoreScanner leaked in KeyValueHeap

2016-11-16 Thread binlijin (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17118?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

binlijin updated HBASE-17118:
-
Attachment: StoreScannerLeakHeap.png
StoreScanner.png

> StoreScanner leaked in KeyValueHeap
> ---
>
> Key: HBASE-17118
> URL: https://issues.apache.org/jira/browse/HBASE-17118
> Project: HBase
>  Issue Type: Bug
>Reporter: binlijin
> Attachments: StoreScanner.png, StoreScannerLeakHeap.png
>
>
> KeyValueHeap#generalizedSeek
>   KeyValueScanner scanner = current;
>   while (scanner != null) {
> Cell topKey = scanner.peek();
> ..
> boolean seekResult;
> if (isLazy && heap.size() > 0) {
>   // If there is only one scanner left, we don't do lazy seek.
>   seekResult = scanner.requestSeek(seekKey, forward, useBloom);
> } else {
>   seekResult = NonLazyKeyValueScanner.doRealSeek(scanner, seekKey,
>   forward);
> }
> ..
> scanner = heap.poll();
>   }
> (1) scanner = heap.poll();  Retrieves and removes the head of this queue
> (2) scanner.requestSeek(seekKey, forward, useBloom); or 
> NonLazyKeyValueScanner.doRealSeek(scanner, seekKey, forward);
> throw exception, and scanner will have no change to close, so will cause the 
> scanner leak.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HBASE-17118) StoreScanner leaked in KeyValueHeap

2016-11-16 Thread binlijin (JIRA)
binlijin created HBASE-17118:


 Summary: StoreScanner leaked in KeyValueHeap
 Key: HBASE-17118
 URL: https://issues.apache.org/jira/browse/HBASE-17118
 Project: HBase
  Issue Type: Bug
Reporter: binlijin


KeyValueHeap#generalizedSeek
  KeyValueScanner scanner = current;
  while (scanner != null) {
Cell topKey = scanner.peek();
..
boolean seekResult;
if (isLazy && heap.size() > 0) {
  // If there is only one scanner left, we don't do lazy seek.
  seekResult = scanner.requestSeek(seekKey, forward, useBloom);
} else {
  seekResult = NonLazyKeyValueScanner.doRealSeek(scanner, seekKey,
  forward);
}
..
scanner = heap.poll();
  }
(1) scanner = heap.poll();  Retrieves and removes the head of this queue
(2) scanner.requestSeek(seekKey, forward, useBloom); or 
NonLazyKeyValueScanner.doRealSeek(scanner, seekKey, forward);
throw exception, and scanner will have no change to close, so will cause the 
scanner leak.
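A minimal sketch of the kind of fix this description implies: close the scanner whose seek threw, without eclipsing the original exception. The Scanner interface and all names below are stand-ins for illustration, not the actual patch attached to this issue (which must also handle the scanners still in the heap):

```java
import java.util.Queue;

// Sketch: close a scanner whose seek failed so it is not leaked when the
// exception propagates out of the seek loop. Scanner is a stand-in for
// KeyValueScanner; only the single failing scanner is handled here.
public class SeekLoop {
    interface Scanner {
        boolean seek(String key);   // may throw, e.g. on a corrupt HFile block
        void close();
    }

    static void generalizedSeek(Scanner first, Queue<Scanner> heap, String key) {
        Scanner scanner = first;
        while (scanner != null) {
            try {
                scanner.seek(key);
            } catch (RuntimeException e) {
                try {
                    scanner.close();            // close the failed scanner: no leak
                } catch (RuntimeException fromClose) {
                    e.addSuppressed(fromClose); // keep close() failure without eclipsing the cause
                }
                throw e;                        // propagate the original exception
            }
            scanner = heap.poll();              // removes the head; we now own it
        }
    }

    public static void main(String[] args) {
        // Happy path: a scanner that seeks fine plus an empty heap ends the loop.
        Scanner ok = new Scanner() {
            public boolean seek(String key) { return true; }
            public void close() { }
        };
        generalizedSeek(ok, new java.util.ArrayDeque<>(), "row1");
        System.out.println("done");
    }
}
```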



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HBASE-17114) Add an option to set special retry pause when encountering CallQueueTooBigException

2016-11-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672372#comment-15672372
 ] 

Ted Yu commented on HBASE-17114:


bq. the multiplier received by the client represents the server state at a 
previous point in time

How about passing the timestamp of when the server raises the exception along 
with the multiplier ?
Client would be able to adjust the waiting period based on these two pieces of 
information.
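The client-side adjustment suggested above can be sketched as follows (hypothetical method and parameter names; no such API exists in HBase today):

```java
// Sketch: the server sends back a hinted pause plus the timestamp at which it
// raised CallQueueTooBigException; the client subtracts the time already
// elapsed in flight so stale hints do not cause over-long sleeps.
public class BackoffHint {
    static long remainingPauseMs(long hintedPauseMs, long serverRaisedAtMs, long clientNowMs) {
        long elapsed = clientNowMs - serverRaisedAtMs; // time spent since the server hinted
        long remaining = hintedPauseMs - elapsed;
        return Math.max(0L, remaining);                // never sleep a negative amount
    }

    public static void main(String[] args) {
        System.out.println(remainingPauseMs(500L, 1000L, 1200L)); // 300
        System.out.println(remainingPauseMs(500L, 1000L, 1800L)); // 0
    }
}
```

This assumes roughly synchronized clocks between client and server; with significant skew the elapsed time would need to be derived differently.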

> Add an option to set special retry pause when encountering 
> CallQueueTooBigException
> ---
>
> Key: HBASE-17114
> URL: https://issues.apache.org/jira/browse/HBASE-17114
> Project: HBase
>  Issue Type: Bug
>Reporter: Yu Li
>Assignee: Yu Li
>
> As titled, after HBASE-15146 we will throw {{CallQueueTooBigException}} 
> instead of dead-wait. This is good for performance in most cases, but it might 
> cause a side-effect: if too many clients connect to the busy RS, the 
> retry requests may come over and over again and the RS never gets the chance 
> to recover, and the issue becomes especially critical when the target 
> region is META.
> So here in this JIRA we propose to supply a special retry pause for CQTBE 
> under the name {{hbase.client.pause.special}}; by default it will be 500ms (5 
> times the {{hbase.client.pause}} default value).
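A hedged sketch of the configuration lookup the proposal describes: the two key names come from the issue itself, while the helper class, its defaults, and the plain string map standing in for HBase's Configuration are hypothetical illustrations, not the actual patch.

```java
import java.util.Map;

// Hypothetical helper: pick a longer, separately configured pause when the
// retry was triggered by CallQueueTooBigException (CQTBE).
class RetryPausePolicy {
    static final String PAUSE_KEY = "hbase.client.pause";                  // default 100 ms
    static final String SPECIAL_PAUSE_KEY = "hbase.client.pause.special";  // default 500 ms

    // conf is a plain map standing in for the HBase Configuration object.
    static long pauseMillis(Map<String, String> conf, boolean callQueueTooBig) {
        String key = callQueueTooBig ? SPECIAL_PAUSE_KEY : PAUSE_KEY;
        String fallback = callQueueTooBig ? "500" : "100";
        return Long.parseLong(conf.getOrDefault(key, fallback));
    }
}
```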





[jira] [Commented] (HBASE-6338) Cache Method in RPC handler

2016-11-16 Thread binlijin (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-6338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672316#comment-15672316
 ] 

binlijin commented on HBASE-6338:
-

OH yeh, this is not needed any more.

> Cache Method in RPC handler
> ---
>
> Key: HBASE-6338
> URL: https://issues.apache.org/jira/browse/HBASE-6338
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.94.3
>Reporter: binlijin
> Attachments: HBASE-6338-90-2.patch, HBASE-6338-90.patch, 
> HBASE-6338-92-2.patch, HBASE-6338-92.patch, HBASE-6338-94-2.patch, 
> HBASE-6338-94.patch, HBASE-6338-trunk-2.patch, HBASE-6338-trunk.patch
>
>
> Every call in the RPC handler creates a Method object; if we cache the Method, 
> things will improve a little.
> I tested with 0.90: on average, Class.getMethod(String name, Class... 
> parameterTypes) costs 4780 ns; if we cache it, the lookup costs 2620 ns.
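A minimal sketch of the caching idea behind those numbers: look the Method up once and reuse it instead of calling Class.getMethod on every RPC. The cache class and its name-only key scheme are illustrative assumptions, not the attached patches.

```java
import java.lang.reflect.Method;
import java.util.concurrent.ConcurrentHashMap;

// Sketch: cache reflective Method lookups so repeated RPC dispatch avoids
// the cost of Class.getMethod. Keyed by class and method name only, so it
// assumes no overloads; the real patches may key differently.
class MethodCache {
    private final ConcurrentHashMap<String, Method> cache = new ConcurrentHashMap<>();

    Method get(Class<?> clazz, String name, Class<?>... paramTypes) {
        return cache.computeIfAbsent(clazz.getName() + "#" + name, k -> {
            try {
                return clazz.getMethod(name, paramTypes);
            } catch (NoSuchMethodException e) {
                throw new RuntimeException(e);
            }
        });
    }
}
```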





[jira] [Commented] (HBASE-17114) Add an option to set special retry pause when encountering CallQueueTooBigException

2016-11-16 Thread Gary Helmling (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17114?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672280#comment-15672280
 ] 

Gary Helmling commented on HBASE-17114:
---

The new CoDel may help in successfully processing more requests in these 
overloaded situations.

But, in general, I'm not sure we should handle CQTBE differently from any other 
retry-triggering exception (other than RetryImmediatelyException), and giving 
another knob to configure seems like it would just further complicate HBase 
tuning.

Another approach to this would be to allow the server to hint back to the 
client how long it should back off.  In this case, the exception itself could 
carry a multiplier as part of the payload.  As the server remains overloaded 
for a longer and longer period of time, it could increase the multiplier 
returned in the exception, which would allow it to hint to clients that they 
should back off for longer.  The heuristics for doing this may be 
tricky to get right, but I think this could be more generally applicable.  We 
could introduce a new parent exception (RetryIOException) to contain the 
multiplier and apply this in all situations that make sense.  However, this 
would also require a change to RPC to carry through the multiplier value.  This 
isn't perfect either -- the multiplier received by the client represents the 
server state at a previous point in time, which may already have changed.  But 
I think this is better than just statically configuring different pauses for 
different exceptions.
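A hedged sketch of the server-hinted backoff idea: the retry exception would carry a multiplier (and possibly the timestamp at which the server raised it), and the client scales its base pause by the hint, discounting hints that are too old to trust. The class, constants, and staleness cutoff are hypothetical, not an actual HBase API.

```java
// Hypothetical client-side pause computation for a server-supplied backoff hint.
class HintedBackoff {
    static final long BASE_PAUSE_MS = 100;      // stand-in for hbase.client.pause
    static final long HINT_TTL_MS = 10_000;     // hints older than this are considered stale

    // serverMultiplier and hintTimestampMs would come from the exception payload.
    static long pauseMillis(int serverMultiplier, long hintTimestampMs, long nowMs) {
        long age = nowMs - hintTimestampMs;
        if (age > HINT_TTL_MS) {
            // The hint reflects server state at a past point in time; if it is
            // too old, fall back to the default pause rather than over-waiting.
            return BASE_PAUSE_MS;
        }
        return BASE_PAUSE_MS * Math.max(1, serverMultiplier);
    }
}
```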

> Add an option to set special retry pause when encountering 
> CallQueueTooBigException
> ---
>
> Key: HBASE-17114
> URL: https://issues.apache.org/jira/browse/HBASE-17114
> Project: HBase
>  Issue Type: Bug
>Reporter: Yu Li
>Assignee: Yu Li
>
> As titled, after HBASE-15146 we will throw {{CallQueueTooBigException}} 
> instead of dead-wait. This is good for performance in most cases, but it might 
> cause a side-effect: if too many clients connect to the busy RS, the 
> retry requests may come over and over again and the RS never gets the chance 
> to recover, and the issue becomes especially critical when the target 
> region is META.
> So here in this JIRA we propose to supply a special retry pause for CQTBE 
> under the name {{hbase.client.pause.special}}; by default it will be 500ms (5 
> times the {{hbase.client.pause}} default value).





[jira] [Resolved] (HBASE-17108) ZKConfig.getZKQuorumServersString does not return the correct client port number

2016-11-16 Thread Andrew Purtell (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17108?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Purtell resolved HBASE-17108.

Resolution: Fixed
  Assignee: Andrew Purtell

Pushed to 0.98

> ZKConfig.getZKQuorumServersString does not return the correct client port 
> number
> 
>
> Key: HBASE-17108
> URL: https://issues.apache.org/jira/browse/HBASE-17108
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.98.17
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
> Fix For: 0.98.24
>
> Attachments: HBASE-17108-0.98.patch
>
>
> ZKConfig.getZKQuorumServersString may not return the correct client port 
> number, at least on 0.98 branch. See PHOENIX-3485. 





[jira] [Commented] (HBASE-16995) Build client Java API and client protobuf messages

2016-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16995?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672219#comment-15672219
 ] 

Hadoop QA commented on HBASE-16995:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 10s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 6s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
43s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 50s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 
41s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
19s {color} | {color:green} master passed {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red} 1m 43s 
{color} | {color:red} hbase-protocol-shaded in master has 24 extant Findbugs 
warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 13s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 1m 
0s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} cc {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 52s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 3m 
40s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
18s {color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red} 0m 0s 
{color} | {color:red} The patch has 13 line(s) that end in whitespace. Use git 
apply --whitespace=fix. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
24m 36s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} hbaseprotoc {color} | {color:green} 0m 
56s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 4m 3s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 24s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 15s 
{color} | {color:green} hbase-protocol in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 23s 
{color} | {color:green} hbase-protocol-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 0m 56s 
{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
16s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 50m 7s {color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839265/HBASE-16995.003.patch 
|
| JIRA Issue | HBASE-16995 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  cc  hbaseprotoc  |
| uname | Linux baa30e433519 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 

[jira] [Updated] (HBASE-16708) Expose endpoint Coprocessor name in "responseTooSlow" log messages

2016-11-16 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-16708:
-
Attachment: (was: HBASE-16708-v1.patch)

> Expose endpoint Coprocessor name in "responseTooSlow" log messages
> --
>
> Key: HBASE-16708
> URL: https://issues.apache.org/jira/browse/HBASE-16708
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Nick Dimiduk
>Assignee: Yi Liang
> Attachments: HBASE-16708-v1.patch
>
>
> Operational diagnostics of a Phoenix install would be easier if we included 
> which endpoint coprocessor was being called in this responseTooSlow WARN 
> message.





[jira] [Updated] (HBASE-16708) Expose endpoint Coprocessor name in "responseTooSlow" log messages

2016-11-16 Thread Yi Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16708?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yi Liang updated HBASE-16708:
-
Attachment: HBASE-16708-v1.patch

> Expose endpoint Coprocessor name in "responseTooSlow" log messages
> --
>
> Key: HBASE-16708
> URL: https://issues.apache.org/jira/browse/HBASE-16708
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.0.0
>Reporter: Nick Dimiduk
>Assignee: Yi Liang
> Attachments: HBASE-16708-v1.patch
>
>
> Operational diagnostics of a Phoenix install would be easier if we included 
> which endpoint coprocessor was being called in this responseTooSlow WARN 
> message.





[jira] [Commented] (HBASE-14960) Fallback to using default RPCControllerFactory if class cannot be loaded

2016-11-16 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-14960?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672170#comment-15672170
 ] 

Enis Soztutar commented on HBASE-14960:
---

Thanks [~seva_ostapenko]. The best medium to ask this question would be on the 
vendor-specific communication channels, since this concerns HDP, not Apache. 

However, let me reply here for your convenience. 
1.1.2 is an Apache release that has already been shipped. By definition, you 
cannot change the bits that are already released. All the releases in the 
fixVersion of this jira already contain the patch. 
HDP-2.5 is "based" on 1.1.2, but contains other patches on top of the base 
version including this patch. Even 2.5.0 contains this patch. If you have 
further questions, please ask on the vendor forums / mailing lists. 

> Fallback to using default RPCControllerFactory if class cannot be loaded
> 
>
> Key: HBASE-14960
> URL: https://issues.apache.org/jira/browse/HBASE-14960
> Project: HBase
>  Issue Type: Bug
>Reporter: Enis Soztutar
>Assignee: Enis Soztutar
> Fix For: 2.0.0, 1.2.0, 1.3.0, 0.98.17
>
> Attachments: HBASE-14960-0.98.patch, hbase-14960_v1.patch, 
> hbase-14960_v2.patch, hbase-14960_v3.patch, hbase-14960_v4.patch
>
>
> In Phoenix + HBase clusters, the hbase-site.xml configuration will point to a 
> custom rpc controller factory which is a Phoenix-specific one to configure 
> the priorities for index and system catalog table. 
> However, sometimes these Phoenix-enabled clusters are used from pure-HBase 
> client applications resulting in ClassNotFoundExceptions in application code 
> or MapReduce jobs. Since hbase configuration is shared between 
> Phoenix-clients and HBase clients, having different configurations at the 
> client side is hard. 
> We can instead try to load up the RPCControllerFactory from conf, and if not 
> found, fallback to the default one (in case this is a pure HBase client). In 
> case Phoenix is already in the classpath, it will work as usual. 
> This does not affect the rpc scheduler factory since it is only used at the 
> server side. 





[jira] [Commented] (HBASE-12894) Upgrade Jetty to 9.2.6

2016-11-16 Thread Guang Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672161#comment-15672161
 ] 

Guang Yang commented on HBASE-12894:


Thanks [~busbey], yeah I did an audit of the new dependencies and it should be 
good now.

> Upgrade Jetty to 9.2.6
> --
>
> Key: HBASE-12894
> URL: https://issues.apache.org/jira/browse/HBASE-12894
> Project: HBase
>  Issue Type: Improvement
>  Components: REST, UI
>Affects Versions: 0.98.0
>Reporter: Rick Hallihan
>Assignee: Guang Yang
>Priority: Critical
>  Labels: MicrosoftSupport
> Fix For: 2.0.0
>
> Attachments: HBASE-12894_Jetty9_v0.patch, 
> HBASE-12894_Jetty9_v1.patch, HBASE-12894_Jetty9_v1.patch, 
> HBASE-12894_Jetty9_v2.patch, HBASE-12894_Jetty9_v3.patch, 
> HBASE-12894_Jetty9_v4.patch, HBASE-12894_Jetty9_v5.patch, 
> HBASE-12894_Jetty9_v6.patch, HBASE-12894_Jetty9_v7.patch, 
> HBASE-12894_Jetty9_v8.patch, dependency_list_after, dependency_list_before
>
>
> The Jetty component that is used for the HBase Stargate REST endpoint is 
> version 6.1.26 and is fairly outdated. We recently had a customer inquire 
> about enabling cross-origin resource sharing (CORS) for the REST endpoint and 
> found that this older version does not include the necessary filter or 
> configuration options, highlighted at: 
> http://wiki.eclipse.org/Jetty/Feature/Cross_Origin_Filter
> The Jetty project has had significant updates through versions 7, 8 and 9, 
> including a transition to be an Eclipse subproject, so updating to the latest 
> version may be non-trivial. The last update to the Jetty component in 
> https://issues.apache.org/jira/browse/HBASE-3377 was a minor version update 
> and did not require significant work. This update will include a package 
> namespace update so there will likely be a larger number of required changes. 





[jira] [Commented] (HBASE-16941) FavoredNodes - Split/Merge code paths

2016-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672149#comment-15672149
 ] 

Hadoop QA commented on HBASE-16941:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m 57s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green} 0m 
0s {color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green} 0m 
0s {color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 2m 
59s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
39s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
22s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 2m 
11s {color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 41s 
{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue} 0m 8s 
{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
59s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 0m 48s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
39s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
21s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
28m 39s {color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha1. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 3m 0s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 50s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 1m 48s 
{color} | {color:green} hbase-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 95m 35s 
{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
26s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 156m 31s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=1.12.3 Server=1.12.3 Image:yetus/hbase:8d52d23 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839238/HBASE-16941.master.006.patch
 |
| JIRA Issue | HBASE-16941 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  hadoopcheck  
hbaseanti  checkstyle  compile  |
| uname | Linux ba6056838982 3.13.0-93-generic #140-Ubuntu SMP Mon Jul 18 
21:21:05 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 48439e5 |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4499/testReport/ |
| modules | C: hbase-common hbase-server U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4499/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> FavoredNodes - Split/Merge code paths
> 

[jira] [Commented] (HBASE-16489) Configuration parsing

2016-11-16 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672136#comment-15672136
 ] 

Enis Soztutar commented on HBASE-16489:
---

Also for constant names, in Java we sometimes use all caps, sometimes not. I 
personally do not like all caps at all since I think it reduces readability. 
Google's style guide recommends naming all constants with a prefix of {{k}} and 
camel-case: 
https://google.github.io/styleguide/cppguide.html#Constant_Names. HDFS also 
uses this convention it seems. Let's use that going forward. We can fix the 
existing code retroactively in another issue.  

> Configuration parsing
> -
>
> Key: HBASE-16489
> URL: https://issues.apache.org/jira/browse/HBASE-16489
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
>Assignee: Sudeep Sunthankar
> Attachments: HBASE-16489.HBASE-14850.v1.patch, 
> HBASE-16489.HBASE-14850.v2.patch, HBASE-16489.HBASE-14850.v3.patch, 
> HBASE-16489.HBASE-14850.v4.patch
>
>
> Reading hbase-site.xml is required to read various properties viz. 
> zookeeper-quorum, client retries, etc.  We can either use Apache Xerces or 
> Boost libraries.





[jira] [Commented] (HBASE-16489) Configuration parsing

2016-11-16 Thread Enis Soztutar (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16489?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672127#comment-15672127
 ] 

Enis Soztutar commented on HBASE-16489:
---


 - Why do you wrap everything in  {{ASSERT_NO_THROW()}} statements? Usage of 
ASSERT_THROW() is for the case where the wrapped code will throw expected 
exceptions. However, in the case that the expectation is that the code should 
not throw exceptions, there is no need to use ASSERT_NO_THROW. If the code 
indeed throws an exception, the test method will fail anyway since it will not 
be caught and the test suite execution will fail. Please remove all such 
statements. 
 - Why are we aliasing two times below? Just pick one. Let's use the camel case 
(like {{ConfigMap}}) for similar stuff in the future as well. All caps is not 
readable. 
{code}
+  typedef std::map<std::string, std::string> ConfigMap;
+  using CONFIG_MAP = Configuration::ConfigMap;
{code}

I think we have previously talked about using Google's C++ conventions. 
Let's use those recommendations as a guide from now on. For example 
https://google.github.io/styleguide/cppguide.html#Aliases talks about:
{code}
Don't put an alias in your public API just to save typing in the 
implementation; do so only if you intend it to be used by your clients.
{code}
Configuration is public API, but I think we are not exposing the typedefs to 
the client, so we are good there. 
Also for naming, check: 
https://google.github.io/styleguide/cppguide.html#Type_Names

 - Unit tests should not write to or delete anything in directories outside 
of the project directory. Normally, all Java unit tests write under 
{{target/}}. We can write to temporary directories under {{build/test-data/}} 
for this module, but must never delete / access files outside of it 
(especially not under /etc/hbase/conf). For unit testing the default search path, 
you can create a tmp directory, move/write the files there, and set the search 
path to it. 
Also, you can look into moving the XMLs for the test code to be distributed / 
kept outside of the code. In Maven / Java land, these kinds of test resources 
live under src/test/resources for each module. We can have a test-resources 
directory or something similar and keep the files there. It is not a big deal if 
we cannot do this, though. 
 - Let's rename {{ConfigurationLoader}} to {{HBaseConfigurationLoader}}. 
 - Can you please follow the API that I was referring above, and also similar 
to HDFS-8707. The API that you have is: 
{code}
Configuration conf;
ConfigurationLoader loader;
loader.SetDefaultSearchPath();
loader.AddDefaultResources();
loader.Load(conf);
{code}
 
HDFS usage is something like this 
(https://github.com/apache/hadoop/blob/HDFS-8707/hadoop-hdfs-project/hadoop-hdfs-native-client/src/main/native/libhdfspp/examples/cpp/cat/cat.cpp):
 
{code}
  hdfs::ConfigurationLoader loader;
  //Loading default config files core-site.xml and hdfs-site.xml from the 
config path
  hdfs::optional<hdfs::Configuration> config = 
loader.LoadDefaultResources<hdfs::Configuration>();
{code}

In case of HDFS, the actual configuration object knows about default and site 
files and adds those to the file names. I think it is fine for us to hard code 
hbase-default and hbase-site for now in the HBaseConfigurationLoader. The only 
thing is that, from a user API point of view, the usage should be like: 
{code}
  hbase::HBaseConfigurationLoader loader;
  //Loading default config files hbase-default.xml and hbase-site.xml from the 
config path
  hbase::optional<Configuration> config = 
loader.LoadDefaultResources();
{code}
So, please change the Load signature to return a newly constructed 
Configuration, and also add LoadDefaultResources method. 

> Configuration parsing
> -
>
> Key: HBASE-16489
> URL: https://issues.apache.org/jira/browse/HBASE-16489
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Sudeep Sunthankar
>Assignee: Sudeep Sunthankar
> Attachments: HBASE-16489.HBASE-14850.v1.patch, 
> HBASE-16489.HBASE-14850.v2.patch, HBASE-16489.HBASE-14850.v3.patch, 
> HBASE-16489.HBASE-14850.v4.patch
>
>
> Reading hbase-site.xml is required to read various properties viz. 
> zookeeper-quorum, client retries, etc.  We can either use Apache Xerces or 
> Boost libraries.





[jira] [Commented] (HBASE-17082) ForeignExceptionUtil isn’t packaged when building shaded protocol with -Pcompile-protobuf

2016-11-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-17082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672099#comment-15672099
 ] 

Hudson commented on HBASE-17082:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #1965 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/1965/])
HBASE-17082 ForeignExceptionUtil isn't packaged when building shaded (stack: rev 
48439e57201ee3be5eb12e6187002501af305a35)
* (edit) hbase-protocol-shaded/README.txt


> ForeignExceptionUtil isn’t packaged when building shaded protocol with 
> -Pcompile-protobuf
> -
>
> Key: HBASE-17082
> URL: https://issues.apache.org/jira/browse/HBASE-17082
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: ChiaPing Tsai
>Assignee: ChiaPing Tsai
> Fix For: 2.0.0
>
> Attachments: 17082_attempted_fix.txt, 17082_attempted_fix2.txt, 
> HBASE-17082.nothing.patch, HBASE-17082.nothing.patch, 
> HBASE-17082.nothing.patch, HBASE-17082.v0.patch, HBASE-17082.v1.patch, 
> patch-unit-hbase-client (after v1.patch).txt, patch-unit-hbase-server (after 
> v1.patch).txt
>
>
> The source folder will be replaced from src/main/java to 
> project.build.directory/protoc-generated-sources when building shaded 
> protocol with -Pcompile-protobuf, but we do not copy the 
> ForeignExceptionUtil. So the final jar lacks the ForeignExceptionUtil and it 
> causes the test error for hbase-client and hbase-server.
> {noformat}
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:[169,36]
>  cannot find symbol
>   symbol:   class ForeignExceptionUtil
>   location: package org.apache.hadoop.hbase.util
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:[100,36]
>  cannot find symbol
>   symbol:   class ForeignExceptionUtil
>   location: package org.apache.hadoop.hbase.util
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegionServer.java:[2144,17]
>  cannot find symbol
>   symbol:   variable ForeignExceptionUtil
>   location: class org.apache.hadoop.hbase.regionserver.HRegionServer
> [ERROR] 
> /testptch/hbase/hbase-server/src/main/java/org/apache/hadoop/hbase/master/MasterRpcServices.java:[938,32]
>  cannot find symbol
>   symbol:   variable ForeignExceptionUtil
>   location: class org.apache.hadoop.hbase.master.MasterRpcServices
> {noformat}
> This bug blocks the patches which are against the hbase-protocol-shaded 
> module. 





[jira] [Commented] (HBASE-16998) [Master] Analyze table use reports and update quota violations

2016-11-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672096#comment-15672096
 ] 

Hadoop QA commented on HBASE-16998:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 0s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red} 0m 8s {color} 
| {color:red} HBASE-16998 does not apply to master. Rebase required? Wrong 
Branch? See https://yetus.apache.org/documentation/0.3.0/precommit-patchnames 
for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12839263/HBASE-16998.001.patch 
|
| JIRA Issue | HBASE-16998 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/4502/console |
| Powered by | Apache Yetus 0.3.0   http://yetus.apache.org |


This message was automatically generated.



> [Master] Analyze table use reports and update quota violations
> --
>
> Key: HBASE-16998
> URL: https://issues.apache.org/jira/browse/HBASE-16998
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.0.0
>
> Attachments: HBASE-16998.001.patch
>
>
> Given the collected table usage reports from RegionServers, the Master needs 
> to inspect all filesystem-use quotas and determine which need to move into 
> violation and which need to move out of violation.





[jira] [Updated] (HBASE-17000) [RegionServer] Compute region filesystem space use and report to Master

2016-11-16 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-17000?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-17000:
---
Summary: [RegionServer] Compute region filesystem space use and report to 
Master  (was: [RegionServer] Compute region filesystem space use)

> [RegionServer] Compute region filesystem space use and report to Master
> ---
>
> Key: HBASE-17000
> URL: https://issues.apache.org/jira/browse/HBASE-17000
> Project: HBase
>  Issue Type: Sub-task
>  Components: regionserver
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.0.0
>
> Attachments: HBASE-17000.001.patch
>
>
> Each RegionServer needs to track how much space a Region takes up and roll 
> this up to the table level.
> Aggregation of this information in the Master will be covered by HBASE-16997.





[jira] [Resolved] (HBASE-16997) [Master] Collect table use information from RegionServers

2016-11-16 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser resolved HBASE-16997.

Resolution: Invalid

This is encapsulated in HBASE-17000

> [Master] Collect table use information from RegionServers
> -
>
> Key: HBASE-16997
> URL: https://issues.apache.org/jira/browse/HBASE-16997
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>
> The Master will need to get reports of table usage in the cluster from 
> RegionServers.
> RegionServers could report this to the Master or the Master could poll this 
> information from the RegionServers. Need to determine which model is more 
> appropriate given the importance of applying quotas (e.g. we do not want 
> quota-use calculation to impact region assignment).





[jira] [Commented] (HBASE-16961) FileSystem Quotas

2016-11-16 Thread Josh Elser (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672086#comment-15672086
 ] 

Josh Elser commented on HBASE-16961:


For those following along at the top-level only: we're getting close to a 
"feature".

With the patch on HBASE-16998, clients can define space quotas, the 
RegionServers report region space use to the Master, and the Master can parse 
these reports to make decisions about violation of space quotas. Going to start 
moving the patches through a more formal review process to reduce the burden of 
review at the end.

> FileSystem Quotas
> -
>
> Key: HBASE-16961
> URL: https://issues.apache.org/jira/browse/HBASE-16961
> Project: HBase
>  Issue Type: New Feature
>Reporter: Josh Elser
>Assignee: Josh Elser
>
> Umbrella issue for tracking the filesystem utilization of HBase data, 
> defining quotas on that utilization, and enforcement when utilization exceeds 
> the limits of the quota.
> At a high level: we can define quotas on tables and namespaces. Region size 
> is computed by RegionServers and sent to the Master. The Master inspects the 
> sizes of Regions, rolling up to table and namespace sizes. Defined quotas in 
> the quota table are evaluated given the computed sizes, and, for those 
> tables/namespaces violating the quota, RegionServers are informed to take 
> some action to limit any further filesystem growth by that table/namespace.
> Discuss: 
> https://lists.apache.org/thread.html/66a4b0c3725b5cbdd61dd6111c43847adaeef7b7da5f4cd045df30ef@%3Cdev.hbase.apache.org%3E
> Design Doc: 
> http://home.apache.org/~elserj/hbase/FileSystemQuotasforApacheHBase.pdf or 
> https://docs.google.com/document/d/1VtLWDkB2tpwc_zgCNPE1ulZOeecF-YA2FYSK3TSs_bw/edit?usp=sharing





[jira] [Updated] (HBASE-16995) Build client Java API and client protobuf messages

2016-11-16 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-16995:
---
Attachment: HBASE-16995.003.patch

.003 is a rebase and adds a missed enum value for violation policies, as described 
in the design doc.

After the patch I put up on HBASE-16998, we're getting close to the semblance 
of a real "feature". As such, I'm going to start pushing these through code 
review.

> Build client Java API and client protobuf messages
> --
>
> Key: HBASE-16995
> URL: https://issues.apache.org/jira/browse/HBASE-16995
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.0.0
>
> Attachments: HBASE-16995.001.patch, HBASE-16995.002.patch, 
> HBASE-16995.003.patch
>
>
> Extend the existing Java API and protobuf messages to allow the client to set 
> filesystem-use quotas via the Master.





[jira] [Updated] (HBASE-16998) [Master] Analyze table use reports and update quota violations

2016-11-16 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-16998:
---
Status: Patch Available  (was: Open)

> [Master] Analyze table use reports and update quota violations
> --
>
> Key: HBASE-16998
> URL: https://issues.apache.org/jira/browse/HBASE-16998
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.0.0
>
> Attachments: HBASE-16998.001.patch
>
>
> Given the collected table usage reports from RegionServers, the Master needs 
> to inspect all filesystem-use quotas and determine which need to move into 
> violation and which need to move out of violation.





[jira] [Updated] (HBASE-16998) [Master] Analyze table use reports and update quota violations

2016-11-16 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-16998:
---
Attachment: HBASE-16998.001.patch

.001 The "hard" stuff. This patch adds another Chore to the master which 
enumerates the region space reports the Master received from RegionServers and 
decides whether a table needs to have some violation policy enacted or disabled. 
The actual enacting/disabling of that policy is not included.

> [Master] Analyze table use reports and update quota violations
> --
>
> Key: HBASE-16998
> URL: https://issues.apache.org/jira/browse/HBASE-16998
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.0.0
>
> Attachments: HBASE-16998.001.patch
>
>
> Given the collected table usage reports from RegionServers, the Master needs 
> to inspect all filesystem-use quotas and determine which need to move into 
> violation and which need to move out of violation.





[jira] [Updated] (HBASE-16998) [Master] Analyze table use reports and update quota violations

2016-11-16 Thread Josh Elser (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16998?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-16998:
---
Fix Version/s: 2.0.0

> [Master] Analyze table use reports and update quota violations
> --
>
> Key: HBASE-16998
> URL: https://issues.apache.org/jira/browse/HBASE-16998
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Josh Elser
>Assignee: Josh Elser
> Fix For: 2.0.0
>
> Attachments: HBASE-16998.001.patch
>
>
> Given the collected table usage reports from RegionServers, the Master needs 
> to inspect all filesystem-use quotas and determine which need to move into 
> violation and which need to move out of violation.





[jira] [Commented] (HBASE-12894) Upgrade Jetty to 9.2.6

2016-11-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15672035#comment-15672035
 ] 

Sean Busbey commented on HBASE-12894:
-

If the licensing changes have all been double checked, I can do a review 
tonight.

> Upgrade Jetty to 9.2.6
> --
>
> Key: HBASE-12894
> URL: https://issues.apache.org/jira/browse/HBASE-12894
> Project: HBase
>  Issue Type: Improvement
>  Components: REST, UI
>Affects Versions: 0.98.0
>Reporter: Rick Hallihan
>Assignee: Guang Yang
>Priority: Critical
>  Labels: MicrosoftSupport
> Fix For: 2.0.0
>
> Attachments: HBASE-12894_Jetty9_v0.patch, 
> HBASE-12894_Jetty9_v1.patch, HBASE-12894_Jetty9_v1.patch, 
> HBASE-12894_Jetty9_v2.patch, HBASE-12894_Jetty9_v3.patch, 
> HBASE-12894_Jetty9_v4.patch, HBASE-12894_Jetty9_v5.patch, 
> HBASE-12894_Jetty9_v6.patch, HBASE-12894_Jetty9_v7.patch, 
> HBASE-12894_Jetty9_v8.patch, dependency_list_after, dependency_list_before
>
>
> The Jetty component that is used for the HBase Stargate REST endpoint is 
> version 6.1.26 and is fairly outdated. We recently had a customer inquire 
> about enabling cross-origin resource sharing (CORS) for the REST endpoint and 
> found that this older version does not include the necessary filter or 
> configuration options, highlighted at: 
> http://wiki.eclipse.org/Jetty/Feature/Cross_Origin_Filter
> The Jetty project has had significant updates through versions 7, 8 and 9, 
> including a transition to be an Eclipse subproject, so updating to the latest 
> version may be non-trivial. The last update to the Jetty component in 
> https://issues.apache.org/jira/browse/HBASE-3377 was a minor version update 
> and did not require significant work. This update will include a package 
> namespace update so there will likely be a larger number of required changes. 





[jira] [Created] (HBASE-17117) Reversed scan returns deleted versions and breaks RegionLocator

2016-11-16 Thread Ashu Pachauri (JIRA)
Ashu Pachauri created HBASE-17117:
-

 Summary: Reversed scan returns deleted versions and breaks 
RegionLocator
 Key: HBASE-17117
 URL: https://issues.apache.org/jira/browse/HBASE-17117
 Project: HBase
  Issue Type: Bug
  Components: Scanners
Affects Versions: 1.3.0
Reporter: Ashu Pachauri
Priority: Blocker
 Fix For: 1.3.0


We started seeing clients persistently throwing errors as they were trying to 
talk to a region that was non-existent (it had been split a few days earlier). We 
verified that the region was deleted from meta when the split happened.

On performing a raw scan on meta, the deleted version for the split region 
appears, and it also appears on a normal reversed scan. Since MetaScanner uses a 
reversed scan, this explains why clients see non-existent regions.

We also verified that there was no in-memory corrupt state by failing over the 
master. When we trigger major compaction on meta, the problem goes away, further 
confirming that we were seeing deleted versions.





[jira] [Commented] (HBASE-12894) Upgrade Jetty to 9.2.6

2016-11-16 Thread Guang Yang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-12894?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671965#comment-15671965
 ] 

Guang Yang commented on HBASE-12894:


Hi [~stack], [~busbey],
Is there anything I should look at for this PR? Any chance we can merge this as 
part of 2.0 release? Thanks.

> Upgrade Jetty to 9.2.6
> --
>
> Key: HBASE-12894
> URL: https://issues.apache.org/jira/browse/HBASE-12894
> Project: HBase
>  Issue Type: Improvement
>  Components: REST, UI
>Affects Versions: 0.98.0
>Reporter: Rick Hallihan
>Assignee: Guang Yang
>Priority: Critical
>  Labels: MicrosoftSupport
> Fix For: 2.0.0
>
> Attachments: HBASE-12894_Jetty9_v0.patch, 
> HBASE-12894_Jetty9_v1.patch, HBASE-12894_Jetty9_v1.patch, 
> HBASE-12894_Jetty9_v2.patch, HBASE-12894_Jetty9_v3.patch, 
> HBASE-12894_Jetty9_v4.patch, HBASE-12894_Jetty9_v5.patch, 
> HBASE-12894_Jetty9_v6.patch, HBASE-12894_Jetty9_v7.patch, 
> HBASE-12894_Jetty9_v8.patch, dependency_list_after, dependency_list_before
>
>
> The Jetty component that is used for the HBase Stargate REST endpoint is 
> version 6.1.26 and is fairly outdated. We recently had a customer inquire 
> about enabling cross-origin resource sharing (CORS) for the REST endpoint and 
> found that this older version does not include the necessary filter or 
> configuration options, highlighted at: 
> http://wiki.eclipse.org/Jetty/Feature/Cross_Origin_Filter
> The Jetty project has had significant updates through versions 7, 8 and 9, 
> including a transition to be an Eclipse subproject, so updating to the latest 
> version may be non-trivial. The last update to the Jetty component in 
> https://issues.apache.org/jira/browse/HBASE-3377 was a minor version update 
> and did not require significant work. This update will include a package 
> namespace update so there will likely be a larger number of required changes. 





[jira] [Commented] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2016-11-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16179?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671907#comment-15671907
 ] 

Ted Yu commented on HBASE-16179:


With patch v15, there would be two jars generated:
{code}
-rw-r--r--  1 tyu  staff  775422 Nov 16 14:39 
./hbase-spark/target/hbase-spark-2.0.2_2.11-2.0.0-SNAPSHOT.jar
-rw-r--r--  1 tyu  staff  769903 Nov 16 14:38 
./hbase-spark-scala-2.10/target/hbase-spark-2.0.2_2.10-2.0.0-SNAPSHOT.jar
{code}
hbase-spark-2.0.2_2.11-2.0.0-SNAPSHOT.jar is compiled against Spark 2.0.2 with 
Scala 2.11
hbase-spark-2.0.2_2.10-2.0.0-SNAPSHOT.jar is compiled against Spark 2.0.2 with 
Scala 2.10

> Fix compilation errors when building hbase-spark against Spark 2.0
> --
>
> Key: HBASE-16179
> URL: https://issues.apache.org/jira/browse/HBASE-16179
> Project: HBase
>  Issue Type: Bug
>  Components: spark
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 16179.v0.txt, 16179.v1.txt, 16179.v1.txt, 16179.v10.txt, 
> 16179.v11.txt, 16179.v12.txt, 16179.v12.txt, 16179.v12.txt, 16179.v13.txt, 
> 16179.v15.txt, 16179.v4.txt, 16179.v5.txt, 16179.v7.txt, 16179.v8.txt, 
> 16179.v9.txt
>
>
> I tried building hbase-spark module against Spark-2.0 snapshot and got the 
> following compilation errors:
> http://pastebin.com/bg3w247a
> Some Spark classes such as DataTypeParser and Logging are no longer 
> accessible to downstream projects.
> hbase-spark module should not depend on such classes.





[jira] [Updated] (HBASE-16179) Fix compilation errors when building hbase-spark against Spark 2.0

2016-11-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-16179?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-16179:
---
Attachment: 16179.v15.txt

> Fix compilation errors when building hbase-spark against Spark 2.0
> --
>
> Key: HBASE-16179
> URL: https://issues.apache.org/jira/browse/HBASE-16179
> Project: HBase
>  Issue Type: Bug
>  Components: spark
>Reporter: Ted Yu
>Assignee: Ted Yu
>Priority: Critical
> Fix For: 2.0.0
>
> Attachments: 16179.v0.txt, 16179.v1.txt, 16179.v1.txt, 16179.v10.txt, 
> 16179.v11.txt, 16179.v12.txt, 16179.v12.txt, 16179.v12.txt, 16179.v13.txt, 
> 16179.v15.txt, 16179.v4.txt, 16179.v5.txt, 16179.v7.txt, 16179.v8.txt, 
> 16179.v9.txt
>
>
> I tried building hbase-spark module against Spark-2.0 snapshot and got the 
> following compilation errors:
> http://pastebin.com/bg3w247a
> Some Spark classes such as DataTypeParser and Logging are no longer 
> accessible to downstream projects.
> hbase-spark module should not depend on such classes.





[jira] [Resolved] (HBASE-6982) Tool or equivalent functionality to perform online region merges.

2016-11-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6982?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-6982.
--
Resolution: Duplicate

Resolving as a dupe of HBASE-1621.

> Tool or equivalent functionality to perform online region merges.
> -
>
> Key: HBASE-6982
> URL: https://issues.apache.org/jira/browse/HBASE-6982
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 0.90.6
>Reporter: Jeff Lord
>
> Request the ability to be able to merge regions for a table while the cluster 
> remains online and the table is enabled. See for similar idea 
> https://issues.apache.org/jira/browse/HBASE-1621





[jira] [Resolved] (HBASE-6968) Several HBase write perf improvement

2016-11-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6968?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-6968.
--
Resolution: Not A Problem

Resolving as not a problem anymore. "I look through trunk code, there's no 
change needed, so let's set this affects issue version on 0.90/0.92/0.94 only, 
right ?"

> Several HBase write perf improvement
> 
>
> Key: HBASE-6968
> URL: https://issues.apache.org/jira/browse/HBASE-6968
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 0.90.6, 0.92.2, 0.94.2
>Reporter: Liyin Tang
>
> Here are 2 HBase write performance improvements:
> 1) Avoid creating an HBaseConfiguration object for each HLog. Every time an 
> HBaseConfiguration object is created, it parses the XML configuration files 
> from disk, which is not a cheap operation.
> In HLog.java:
> orig:
> {code:title=HLog.java}
>   newWriter = createWriter(fs, newPath, HBaseConfiguration.create(conf));
> {code}
> new:
> {code}
>   newWriter = createWriter(fs, newPath, conf);
> {code}
> 2) Change 2 hotspot synchronized functions into the double-checked locking 
> pattern. This removes the synchronization overhead in the common case.
> orig:
> {code:title=HBaseRpcMetrics.java}
>   public synchronized void inc(String name, int amt) {
> MetricsTimeVaryingRate m = get(name); 
> if (m == null) {  
>   m = create(name);   
> } 
> m.inc(amt);   
>   }
> {code}
> new:
> {code}
>   public void inc(String name, int amt) { 
> MetricsTimeVaryingRate m = get(name); 
> if (m == null) {  
>   synchronized (this) {   
> if ((m = get(name)) == null) {
>   m = create(name);   
> } 
>   }   
> } 
> m.inc(amt);   
>   }
> {code}
> =
> orig:
> {code:title=MemStoreFlusher.java}
>   public synchronized void reclaimMemStoreMemory() {  
> if (this.server.getGlobalMemstoreSize().get() >= globalMemStoreLimit) {   
>   flushSomeRegions(); 
> }
>   }   
> {code}
> new:
> {code}
>   public void reclaimMemStoreMemory() {   
> if (this.server.getGlobalMemstoreSize().get() >= globalMemStoreLimit) {   
>   flushSomeRegions(); 
> }
>   }   
>   private synchronized void flushSomeRegions() {  
> if (this.server.getGlobalMemstoreSize().get() < globalMemStoreLimit) {
>   return; // double check the global memstore size inside of the 
> synchronized block.  
> } 
>  ...   
>  }
> {code}
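The double-checked locking idea above can be sketched in a minimal, self-contained way. This uses a ConcurrentHashMap and AtomicLong in place of HBase's internal metrics types; the class and method names here are illustrative, not the actual HBase code:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.atomic.AtomicLong;

// Hypothetical stand-in for HBaseRpcMetrics: entry creation is rare,
// increments are hot, so the lock is only taken on the slow path.
class Metrics {
    private final ConcurrentMap<String, AtomicLong> metrics = new ConcurrentHashMap<>();

    public void inc(String name, int amt) {
        AtomicLong m = metrics.get(name);   // fast path: lock-free lookup
        if (m == null) {
            synchronized (this) {           // slow path: re-check under the lock
                m = metrics.get(name);
                if (m == null) {
                    m = new AtomicLong();
                    metrics.put(name, m);
                }
            }
        }
        m.addAndGet(amt);
    }

    public long value(String name) {
        AtomicLong m = metrics.get(name);
        return m == null ? 0L : m.get();
    }
}
```

With a ConcurrentMap the same effect can also be had via computeIfAbsent; the explicit double check is kept here only to mirror the pattern proposed in the description.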





[jira] [Resolved] (HBASE-6956) Do not return back to HTablePool closed connections

2016-11-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-6956.
--
Resolution: Not A Problem

No longer a problem. We don't do connections in the 0.90-way.

> Do not return back to HTablePool closed connections
> ---
>
> Key: HBASE-6956
> URL: https://issues.apache.org/jira/browse/HBASE-6956
> Project: HBase
>  Issue Type: Bug
>  Components: Client
>Affects Versions: 0.90.6
>Reporter: Igor Yurinok
>
> Sometimes we see a lot of Exception about closed connections:
> {code}
>  
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@553fd068
>  closed
> org.apache.hadoop.hbase.client.ClosedConnectionException: 
> org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@553fd068
>  closed
> {code}
> After investigation we assumed that it occurs because closed connections are 
> returned to the HTablePool. 
> In our opinion the best solution is to check whether the table is closed in 
> HTablePool.putTable and, if it is, not add it to the queue and instead release 
> the HTableInterface.
> Unfortunately, right now there is no access to the HTable#closed field through 
> HTableInterface.





[jira] [Commented] (HBASE-9802) A new failover test framework for HBase

2016-11-16 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671879#comment-15671879
 ] 

Esteban Gutierrez commented on HBASE-9802:
--

[~tobe] do you still have plans to contribute this back?

> A new failover test framework for HBase
> ---
>
> Key: HBASE-9802
> URL: https://issues.apache.org/jira/browse/HBASE-9802
> Project: HBase
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 0.94.3
>Reporter: chendihao
>Assignee: chendihao
>Priority: Minor
>
> Currently HBase uses ChaosMonkey for IT tests and fault injection. It restarts 
> regionservers, forces the balancer, and performs other actions randomly and 
> periodically. However, we need a more extensible and full-featured framework 
> for our failover testing, and we find ChaosMonkey can't suit our needs since it 
> has the following drawbacks.
> 1) Only process-level actions can be simulated; there is no support for 
> machine-level/hardware-level/network-level actions.
> 2) No data validation before and after the test, so fatal bugs, such as those 
> that can cause data inconsistency, may be overlooked.
> 3) When a failure occurs, we can't reproduce the problem and it is hard to 
> figure out the reason.
> Therefore, we have developed a new framework to satisfy the needs of failover 
> testing. We extended ChaosMonkey and implemented functions to validate data and 
> to replay failed actions. Here are the features we added.
> 1) Policy/Task/Action abstraction; separating Task from Policy and Action 
> makes it easier to manage and replay a set of actions.
> 2) Actions are configurable. We have implemented some actions to cause 
> machine failure and defined the same interface as the original actions.
> 3) We validate data consistency before and after the failover test to 
> ensure availability and data correctness.
> 4) After performing a set of actions, we also check the consistency of the 
> table.
> 5) The set of actions that caused a test failure can be replayed, and the 
> reproducibility of actions helps fix the exposed bugs.
> Our team has developed this framework and run it for a while. Some bugs were 
> exposed and fixed by running this test framework. Moreover, we have a monitor 
> program which shows the progress of the failover test and makes sure our 
> cluster is as stable as we want. Now we are trying to make it more general and 
> will open-source it later.





[jira] [Resolved] (HBASE-6929) Publish Hbase 0.94 artifacts build against hadoop-2.0

2016-11-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-6929.
--
Resolution: Not A Problem

Resolving as not a problem any more.

> Publish Hbase 0.94 artifacts build against hadoop-2.0
> -
>
> Key: HBASE-6929
> URL: https://issues.apache.org/jira/browse/HBASE-6929
> Project: HBase
>  Issue Type: Task
>  Components: build
>Affects Versions: 0.94.2
>Reporter: Enis Soztutar
> Attachments: 6929.txt, hbase-6929_v2.patch
>
>
> Downstream projects (flume, hive, pig, etc.) depend on hbase, but since the 
> hbase binaries built with hadoop-2.0 are not pushed to maven, they cannot 
> depend on them. AFAIK, hadoop 1 and 2 are not binary compatible, so we should 
> also push hbase jars built with the hadoop2.0 profile into maven, possibly with 
> a version string like 0.94.2-hadoop2.0. 





[jira] [Updated] (HBASE-6902) Add doc and unit test of the various checksum settings

2016-11-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6902:
-
Component/s: documentation

> Add doc and unit test of the various checksum settings
> --
>
> Key: HBASE-6902
> URL: https://issues.apache.org/jira/browse/HBASE-6902
> Project: HBase
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 0.95.2
>Reporter: stack
>Priority: Critical
>
> See HBASE-6868.  Doc the options, their pluses and negatives as well as the 
> bugs.





[jira] [Resolved] (HBASE-9826) KeyValue should guard itself against corruptions

2016-11-16 Thread Esteban Gutierrez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9826?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esteban Gutierrez resolved HBASE-9826.
--
Resolution: Fixed

In KeyValue.createByteArray() we check for the valid parameters used in the 
constructor. Resolving since we haven't seen these corrupted KVs in a very long 
time.

> KeyValue should guard itself against corruptions
> 
>
> Key: HBASE-9826
> URL: https://issues.apache.org/jira/browse/HBASE-9826
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 0.89-fb
>Reporter: Amitanand Aiyer
>Priority: Minor
>
> We have seen a case where a corrupted KV was causing a flush to fail 
> repeatedly.
> KV seems to have some sanity checks when it is created, but we are not sure 
> how the corrupted KV got in.
> We could add some sanity checks before/after serialization to make sure KVs 
> are not corrupted.
> I've seen this issue on 0.89, but I am not sure about the other versions. 
> Since trunk has moved to pb, this may not apply.





[jira] [Commented] (HBASE-9844) zookeepers.sh - ZKServerTool log permission issue

2016-11-16 Thread Esteban Gutierrez (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-9844?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15671860#comment-15671860
 ] 

Esteban Gutierrez commented on HBASE-9844:
--

We shouldn't be logging to a file. We need a flag to log to stderr instead 
since the ZKServerTool only prints the list of the ZK quorum.

> zookeepers.sh - ZKServerTool log permission issue
> -
>
> Key: HBASE-9844
> URL: https://issues.apache.org/jira/browse/HBASE-9844
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 0.94.12, 2.0.0
> Environment: Linux
>Reporter: Sebastien Barrier
>Priority: Minor
>  Labels: beginner
>
> The zookeepers.sh script executes the following command during its processing:
> /usr/local/hbase/bin/hbase org.apache.hadoop.hbase.zookeeper.ZKServerTool
> Before doing this it also changes directory to the hbase bin directory, for 
> example 'cd /usr/local/hbase/bin'. If the permissions of that directory differ 
> from the user running the ZKServerTool (for example, the hadoop user running 
> the tool and root owning the directory), the following error occurs because it 
> tries to create a log file (hadoop.log) in the current directory:
> log4j:ERROR setFile(null,true) call failed.
> java.io.FileNotFoundException: ./hadoop.log (Permission denied)
> at java.io.FileOutputStream.open(Native Method)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:212)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:136)
> The log should be written to HBASE_LOG_DIR and not the current directory.





[jira] [Resolved] (HBASE-6773) Make the dfs replication factor configurable per table

2016-11-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6773?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack resolved HBASE-6773.
--
Resolution: Invalid

No longer valid.

It can be set per CF using the key HColumnDescriptor.DEFAULT_DFS_REPLICATION (it 
could be done in a nicer way, but it's doable).

> Make the dfs replication factor configurable per table
> --
>
> Key: HBASE-6773
> URL: https://issues.apache.org/jira/browse/HBASE-6773
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 0.95.2
>Reporter: Nicolas Liochon
>Assignee: Devaraj Das
>
> Today, it's an application-level configuration, so all the HFiles are 
> replicated 3 times by default.
> There are some reasons to make it per table:
> - some tables are critical while some others are not. For example, meta would 
> benefit from a higher level of replication, to ensure we continue working 
> even when we lose 20% of the cluster.
> - some tables are backed up somewhere else or used by non-essential processes, 
> so the user may accept a lower level of replication for these ones.
> - it should be a dynamic parameter. For example, during a bulk load we set a 
> replication of 1 or 2, then we increase it. It's in the same space as 
> disabling the WAL for some writes.
> The case that seems important to me is meta. We can also handle that one with 
> a specific parameter in the usual hbase-site.xml if we don't want a generic 
> solution.
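The per-table fallback semantics being asked for can be modeled in a few lines. This is a sketch only; the class below is hypothetical and not part of the HBase API:

```java
import java.util.HashMap;
import java.util.Map;

// Resolve a replication factor per table, falling back to the cluster-wide
// default when no table-specific value has been set.
class ReplicationConfig {
    private final short clusterDefault;
    private final Map<String, Short> perTable = new HashMap<>();

    ReplicationConfig(short clusterDefault) {
        this.clusterDefault = clusterDefault;
    }

    void setTableReplication(String table, short factor) {
        perTable.put(table, factor);
    }

    short replicationFor(String table) {
        Short factor = perTable.get(table);
        return factor != null ? factor : clusterDefault;
    }
}
```

A dynamic parameter, as the description suggests, would just call setTableReplication again (e.g. 1 or 2 during a bulk load, then back up afterwards).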





[jira] [Updated] (HBASE-9844) zookeepers.sh - ZKServerTool log permission issue

2016-11-16 Thread Esteban Gutierrez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esteban Gutierrez updated HBASE-9844:
-
Affects Version/s: 2.0.0

> zookeepers.sh - ZKServerTool log permission issue
> -
>
> Key: HBASE-9844
> URL: https://issues.apache.org/jira/browse/HBASE-9844
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 0.94.12, 2.0.0
> Environment: Linux
>Reporter: Sebastien Barrier
>Priority: Minor
>  Labels: beginner
>
> The zookeepers.sh script executes the following command during its processing:
> /usr/local/hbase/bin/hbase org.apache.hadoop.hbase.zookeeper.ZKServerTool
> Before doing this it also changes directory to the hbase bin directory, for 
> example 'cd /usr/local/hbase/bin'. If the permissions of that directory differ 
> from the user running the ZKServerTool (for example, the hadoop user running 
> the tool and root owning the directory), the following error occurs because it 
> tries to create a log file (hadoop.log) in the current directory:
> log4j:ERROR setFile(null,true) call failed.
> java.io.FileNotFoundException: ./hadoop.log (Permission denied)
> at java.io.FileOutputStream.open(Native Method)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:212)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:136)
> The log should be written to HBASE_LOG_DIR and not the current directory.





[jira] [Updated] (HBASE-9844) zookeepers.sh - ZKServerTool log permission issue

2016-11-16 Thread Esteban Gutierrez (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-9844?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Esteban Gutierrez updated HBASE-9844:
-
Labels: beginner  (was: )

> zookeepers.sh - ZKServerTool log permission issue
> -
>
> Key: HBASE-9844
> URL: https://issues.apache.org/jira/browse/HBASE-9844
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Affects Versions: 0.94.12, 2.0.0
> Environment: Linux
>Reporter: Sebastien Barrier
>Priority: Minor
>  Labels: beginner
>
> The zookeepers.sh script executes the following command during its processing:
> /usr/local/hbase/bin/hbase org.apache.hadoop.hbase.zookeeper.ZKServerTool
> Before doing this it also changes directory to the hbase bin directory, for 
> example 'cd /usr/local/hbase/bin'. If the permissions of that directory differ 
> from the user running the ZKServerTool (for example, the hadoop user running 
> the tool and root owning the directory), the following error occurs because it 
> tries to create a log file (hadoop.log) in the current directory:
> log4j:ERROR setFile(null,true) call failed.
> java.io.FileNotFoundException: ./hadoop.log (Permission denied)
> at java.io.FileOutputStream.open(Native Method)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:212)
> at java.io.FileOutputStream.<init>(FileOutputStream.java:136)
> The log should be written to HBASE_LOG_DIR and not the current directory.





[jira] [Updated] (HBASE-6772) Make the Distributed Split HDFS Location aware

2016-11-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6772:
-
Component/s: MTTR

> Make the Distributed Split HDFS Location aware
> --
>
> Key: HBASE-6772
> URL: https://issues.apache.org/jira/browse/HBASE-6772
> Project: HBase
>  Issue Type: Improvement
>  Components: master, MTTR, regionserver
>Affects Versions: 2.0.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>
> During an hlog split, each log file (a single hdfs block) is allocated to a 
> different region server. That region server reads the file and creates the 
> recovery edit files.
> The allocation to the region server is random. We could take into account the 
> locations of the log file to split:
> - the reads would be local, hence faster. This allows short-circuit reads as well.
> - less network i/o is used during a failure (and this is important)
> - we would be sure to read from a working datanode, so we're sure we won't 
> have read errors. Read errors slow the split process a lot, as we often enter 
> the "timeout world". 
> We need to limit the calls to the namenode, however.
> A typical algorithm could be:
> - the master gets the locations of the hlog files
> - it writes them into ZK, if possible in one transaction (this way all the 
> tasks are visible altogether, allowing some arbitrage by the region servers).
> - when a regionserver receives the event, it checks all logs and all 
> locations.
> - if there is a match, it takes that task
> - if not, it waits something like 0.2s (to give other regionservers time 
> to take it if their location matches), and then takes any remaining task.
> Drawbacks are:
> - a 0.2s delay is added if there is no regionserver available at any of the 
> locations. It's likely possible to remove it with some extra synchronization.
> - a small increase in complexity and a dependency on HDFS
> Considering the advantages, it's worth it imho.
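The proposed pickup rule can be sketched in a few lines. This is an illustrative sketch under the assumptions of the description, not actual HBase code; the class, the task/location representation, and the 0.2s constant are placeholders:

```java
import java.util.*;

public class SplitTaskPicker {
    // Delay before a non-local server takes a task, giving local servers
    // a chance to claim it first (the "0.2s" from the proposal).
    static final long NON_LOCAL_DELAY_MS = 200;

    /** Return a task whose HDFS block locations include this server, or null. */
    static String pickLocal(Map<String, List<String>> taskLocations, String serverName) {
        for (Map.Entry<String, List<String>> e : taskLocations.entrySet()) {
            if (e.getValue().contains(serverName)) {
                return e.getKey();
            }
        }
        return null;
    }

    /** Prefer a local task; otherwise wait briefly, then take any remaining one. */
    static String pick(Map<String, List<String>> taskLocations, String serverName)
            throws InterruptedException {
        String local = pickLocal(taskLocations, serverName);
        if (local != null) {
            return local;                 // local read: fast path
        }
        Thread.sleep(NON_LOCAL_DELAY_MS); // arbitrage window for local servers
        Iterator<String> it = taskLocations.keySet().iterator();
        return it.hasNext() ? it.next() : null;
    }
}
```

The drawback the description mentions is visible here: a server with no local match always pays the fixed delay before taking work.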





[jira] [Updated] (HBASE-6772) Make the Distributed Split HDFS Location aware

2016-11-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6772:
-
Affects Version/s: (was: 0.95.2)
   2.0.0

> Make the Distributed Split HDFS Location aware
> --
>
> Key: HBASE-6772
> URL: https://issues.apache.org/jira/browse/HBASE-6772
> Project: HBase
>  Issue Type: Improvement
>  Components: master, MTTR, regionserver
>Affects Versions: 2.0.0
>Reporter: Nicolas Liochon
>Assignee: Nicolas Liochon
>
> During an hlog split, each log file (a single hdfs block) is allocated to a 
> different region server. That region server reads the file and creates the 
> recovery edit files.
> The allocation to the region server is random. We could take into account the 
> locations of the log file to split:
> - the reads would be local, hence faster. This allows short-circuit reads as well.
> - less network i/o is used during a failure (and this is important)
> - we would be sure to read from a working datanode, so we're sure we won't 
> have read errors. Read errors slow the split process a lot, as we often enter 
> the "timeout world". 
> We need to limit the calls to the namenode, however.
> A typical algorithm could be:
> - the master gets the locations of the hlog files
> - it writes them into ZK, if possible in one transaction (this way all the 
> tasks are visible altogether, allowing some arbitrage by the region servers).
> - when a regionserver receives the event, it checks all logs and all 
> locations.
> - if there is a match, it takes that task
> - if not, it waits something like 0.2s (to give other regionservers time 
> to take it if their location matches), and then takes any remaining task.
> Drawbacks are:
> - a 0.2s delay is added if there is no regionserver available at any of the 
> locations. It's likely possible to remove it with some extra synchronization.
> - a small increase in complexity and a dependency on HDFS
> Considering the advantages, it's worth it imho.





[jira] [Updated] (HBASE-6771) To recover parts of the properties of .tableinfo file by reading HFile

2016-11-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6771?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6771:
-
Priority: Minor  (was: Major)

> To recover parts of the properties of .tableinfo file by reading HFile 
> ---
>
> Key: HBASE-6771
> URL: https://issues.apache.org/jira/browse/HBASE-6771
> Project: HBase
>  Issue Type: Improvement
>  Components: hbck
>Affects Versions: 0.95.2
>Reporter: Jie Huang
>Assignee: Jie Huang
>Priority: Minor
>
> Currently, hbck only fabricates a bare-minimum .tableinfo when it is 
> missing. The end-user still needs to correct it later. According to 
> [~jmhsieh]'s proposal in HBASE-5631, we'd better recover it by reading 
> some properties (e.g., compression settings, encodings, etc.) from the 
> newest existing HFile under each region folder.
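The "newest HFile wins" selection can be sketched as below. This models the idea only: the map stands in for each HFile's modification time and the properties read from its file info block; it is not the real hbck or HFile API:

```java
import java.util.*;

public class TableInfoRecovery {
    // Given each candidate HFile's modification time mapped to the
    // properties read from it (compression, encoding, ...), recover the
    // table's properties from the newest file in the region folder.
    static Map<String, String> recoverFromNewest(
            SortedMap<Long, Map<String, String>> propsByMtime) {
        if (propsByMtime.isEmpty()) {
            return Collections.emptyMap(); // nothing to recover from
        }
        return propsByMtime.get(propsByMtime.lastKey()); // newest mtime wins
    }
}
```

Picking the newest file assumes the most recently flushed or compacted HFile reflects the table's current settings, which is the premise of the proposal above.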





[jira] [Updated] (HBASE-6752) On region server failure, serve writes and timeranged reads during the log split

2016-11-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-6752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-6752:
-
Affects Version/s: (was: 0.95.2)
   2.0.0

> On region server failure, serve writes and timeranged reads during the log 
> split
> 
>
> Key: HBASE-6752
> URL: https://issues.apache.org/jira/browse/HBASE-6752
> Project: HBase
>  Issue Type: Improvement
>  Components: regionserver
>Affects Versions: 2.0.0
>Reporter: Nicolas Liochon
>
> Opening for write on failure would mean:
> - Assign the region to a new regionserver. It marks the region as recovering
>   -- a specific exception is returned to the client when we cannot serve.
>   -- allows clients to know where they stand. The exception can include some time 
> information (failure started on: ...)
>   -- allows them to go immediately to the right regionserver, instead of 
> retrying or calling the region holding meta to get the new address
>  => saves network calls, lowers the load on meta.
> - Do the split as today. Priority is given to the region servers holding the new 
> regions
>   -- helps to share the load balancing code: the split is done by region 
> servers considered available for new regions
>   -- helps locality (the recovered edits are available on the region server) 
> => lowers the network usage
> - When the split is finished, we're done as of today
> - While the split is progressing, the region server can
>  -- serve writes
>    --- that's useful for all applications that need to write but not read 
> immediately:
>    --- anything that logs events to analyze them later
>    --- opentsdb is a perfect example.
>  -- serve reads if they have a compatible time range. For heavily used 
> tables, it could help, because:
>    --- we can expect to have only a few minutes of data (as it's loaded)
>    --- the heaviest queries often accept a few -or more- minutes of delay. 
> Some "What ifs":
> 1) the split fails
> => Retry until it works, as today. Just that we serve writes. We need to 
> know (as today) that the region has not recovered if we fail again.
> 2) the regionserver fails during the split
> => As (1), and as of today.
> 3) the regionserver fails after the split but before the state changes to 
> fully available.
> => New assign. More logs to split (the ones already done and the new ones).
> 4) the assignment fails
> => Retry until it works, as today.
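The read/write gate at the heart of the proposal can be sketched as below. This is a hedged illustration, not HBase internals: the class name, the recovering flag, and the failure timestamp are all placeholders for whatever state the recovering region would actually track:

```java
public class RecoveringRegionGate {
    final boolean recovering;     // region assigned but split still in progress
    final long failureStartTs;    // when the old regionserver failed

    RecoveringRegionGate(boolean recovering, long failureStartTs) {
        this.recovering = recovering;
        this.failureStartTs = failureStartTs;
    }

    boolean allowWrite() {
        // Writes go to the new WAL, independent of the ongoing split.
        return true;
    }

    boolean allowRead(long rangeStartTs, long rangeEndTs) {
        // While recovering, only time ranges that end before the failure
        // are guaranteed complete: no recovered edit can fall inside them.
        return !recovering || rangeEndTs < failureStartTs;
    }
}
```

This matches the description: an opentsdb-style writer is never blocked, while a timeranged read is served only when the split cannot affect its result.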




