[jira] [Commented] (HBASE-20679) Add the ability to compile JSP dynamically in Jetty

2018-06-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502826#comment-16502826
 ] 

Hadoop QA commented on HBASE-20679:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
38s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
4s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
49s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
24s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
42s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
11s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  4m 
39s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  6m 
50s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  6m 50s{color} 
| {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  2m 
14s{color} | {color:red} root: The patch generated 3 new + 41 unchanged - 0 
fixed = 44 total (was 41) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} shadedjars {color} | {color:red}  3m 
59s{color} | {color:red} patch has 10 errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  4m 
25s{color} | {color:red} The patch causes 10 errors with Hadoop v2.7.4. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  9m 
14s{color} | {color:red} The patch causes 10 errors with Hadoop v3.0.0. {color} 
|
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
43s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 10m 11s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
19s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 63m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.http.log.TestLogLevel |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20679 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926678/HBASE-20679.003.patch 
|

[jira] [Commented] (HBASE-20682) MoveProcedure can be subtask of ModifyTableProcedure/ReopenTableRegionsProcedure; ensure all kosher.

2018-06-05 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502823#comment-16502823
 ] 

stack commented on HBASE-20682:
---

Look into adding a {{reopen}} API to the RS Admin Interface. If there is a crash 
after dispatch, we can safely go to the end because the region will have been 
'reopened' by SCP recovery.

> MoveProcedure can be subtask of 
> ModifyTableProcedure/ReopenTableRegionsProcedure; ensure all kosher.
> 
>
> Key: HBASE-20682
> URL: https://issues.apache.org/jira/browse/HBASE-20682
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.1
>
>
> MoveProcedure is used in at least two places as a means of 'reopening' a 
> region after a config has changed. This issue is about reviewing MP so that this 
> usage is first-class (and so that MP is a good procedure citizen when running as a 
> subprocedure). In question is what should happen if the source or target 
> server is offline when we go to run. As of the parent issue, we'll skip to the 
> end. Should we instead go ahead with the move? (Internally, if we are asked to 
> assign to an offline server, we'll pick another -- should we do the same for 
> unassign? That would help in this reopen use case.)





[jira] [Commented] (HBASE-20679) Add the ability to compile JSP dynamically in Jetty

2018-06-05 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502819#comment-16502819
 ] 

Sean Busbey commented on HBASE-20679:
-

{quote}

Sounds great (goals/non-goals; explore various mechanisms... didn't know about 
JPDA... – What about CPs?). Another issue though Sean?

Meantime, I'd suggest we get this nice one in. We can disable it once an 
alternate hotfixing channel exists?
{quote}

I'd rather the instructions on use live in a troubleshooting entry of the ref 
guide with a warning, but I don't feel strongly about it.


Could we more firmly convey the experimental nature by including it in the 
config name? Something like 
"hbase.experimental.http.compile.jsp.dynamically.enabled"?

> Add the ability to compile JSP dynamically in Jetty
> ---
>
> Key: HBASE-20679
> URL: https://issues.apache.org/jira/browse/HBASE-20679
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20679.002.patch, HBASE-20679.003.patch, 
> HBASE-20679.patch
>
>
> As discussed in HBASE-20617, adding the ability to dynamically compile jsp 
> enables us to do hot fixes. 
>  For example, several days ago in our testing HBase-2.0 cluster, 
> procedureWALs were corrupted for unknown reasons. After restarting 
> the cluster, some procedures (AssignProcedure, for example) were corrupted and 
> couldn't be replayed, so some regions were stuck in RIT forever. 
> We couldn't use HBCK since it doesn't support AssignmentV2 yet. As a matter 
> of fact, the namespace region was not online, so the master was not initialized 
> and we couldn't even use shell commands like assign/move. But we wrote a jsp 
> and fixed this issue easily. The jsp file is like this:
> {code:java}
> <%
>   String action = request.getParameter("action");
>   HMaster master = (HMaster) getServletContext().getAttribute(HMaster.MASTER);
>   List<RegionInfo> offlineRegionsToAssign = new ArrayList<>();
>   List<RegionStates.RegionStateNode> regionRITs =
>       master.getAssignmentManager().getRegionStates().getRegionsInTransition();
>   for (RegionStates.RegionStateNode regionStateNode : regionRITs) {
>     // If a regionStateNode does not have a procedure attached but meta state
>     // shows the region is in RIT, the previous procedure may be corrupted;
>     // we need to create a new AssignProcedure to assign it.
>     if (!regionStateNode.isInTransition()) {
>       offlineRegionsToAssign.add(regionStateNode.getRegionInfo());
>       out.println("RIT region:" + regionStateNode);
>     }
>   }
>   // Assign offline regions. Uses round-robin.
>   if ("fix".equals(action) && offlineRegionsToAssign.size() > 0) {
>     master.getMasterProcedureExecutor().submitProcedures(
>         master.getAssignmentManager().createRoundRobinAssignProcedures(offlineRegionsToAssign));
>   } else {
>     out.println("use ?action=fix to fix RIT regions");
>   }
> %>
> {code}
> The above is only one example of what we can do if we have the ability to 
> compile jsp dynamically. We think it is very useful.





[jira] [Commented] (HBASE-20679) Add the ability to compile JSP dynamically in Jetty

2018-06-05 Thread Allan Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502818#comment-16502818
 ] 

Allan Yang commented on HBASE-20679:


{quote}
Is that a bug fix in TestLogLevel.java?
{quote}
Yes, in the LogLevel servlet, requests are made for 'head.jsp' and 'foot.jsp', 
which do not exist in the webcontext. Adding those jsps to the webcontext fixes it.



[jira] [Created] (HBASE-20688) Refguide has "HBase Backup" section and a chapter named "Backup and Restore"; neither refers to the other

2018-06-05 Thread stack (JIRA)
stack created HBASE-20688:
-

 Summary: Refguide has "HBase Backup" section and a chapter named 
"Backup and Restore"; neither refers to the other
 Key: HBASE-20688
 URL: https://issues.apache.org/jira/browse/HBASE-20688
 Project: HBase
  Issue Type: Bug
Reporter: stack


The two backup sections are not connected or related. It'd be confusing to the 
user. Needs addressing.





[jira] [Commented] (HBASE-20679) Add the ability to compile JSP dynamically in Jetty

2018-06-05 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502804#comment-16502804
 ] 

stack commented on HBASE-20679:
---

Sorry [~allan163], I overwrote your RN. Restored it. Added warning that feature 
is dangerous.



[jira] [Updated] (HBASE-20679) Add the ability to compile JSP dynamically in Jetty

2018-06-05 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20679:
--
Release Note: 
Adds the ability to dynamically compile jsp. Enable it if you need to do a 'hot 
fix'. As with the example given in HBASE-20679, we can use a jsp to fix a stuck-RIT 
situation. Set 'hbase.http.compile.jsp.dynamically.enabled' to true if 
you want to use this feature (the default is false). After toggling on this flag, 
any jsp put in the ${HBASE_HOME}/hbase-webapps/master (or regionserver) directory 
can be dynamically compiled and executed by visiting the jsp from a web browser 
or by loading the page with the 'curl' command. For example, if you put a jsp 
named fix.jsp into ${HBASE_HOME}/hbase-webapps/master, you can execute it by 
visiting this URL: http://masterServerName:infoport/fix.jsp

For experts only. Loading and running arbitrary code can damage/destroy your 
deploy.

  was:
Adding the ability to dynamically compile jsp enable us to do some hot fix. As 
the example given in HBASE-20679, we can use a jsp to fix a RIT stuck 
situation. For experts only. Loading and running arbitrary code can 
damage/destroy your deploy.

The ability to dynamically load jsps is off by default. To enable set 
hbase.http.compile.jsp.dynamically.enable to true.
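
A minimal sketch of the configuration the release note above describes, assuming 
the property is set in a standard hbase-site.xml (the property name and its 
false default come from the release note; everything else is illustrative):

{code:xml}
<!-- hbase-site.xml (sketch only): turn on dynamic jsp compilation; default is false -->
<property>
  <name>hbase.http.compile.jsp.dynamically.enabled</name>
  <value>true</value>
</property>
{code}

With the flag on and a fix.jsp dropped into ${HBASE_HOME}/hbase-webapps/master, 
the page should then be reachable at http://masterServerName:infoport/fix.jsp 
from a browser or with curl, as the note says.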




[jira] [Updated] (HBASE-20679) Add the ability to compile JSP dynamically in Jetty

2018-06-05 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20679:
--
Release Note: 
Adding the ability to dynamically compile jsp enable us to do some hot fix. As 
the example given in HBASE-20679, we can use a jsp to fix a RIT stuck 
situation. For experts only. Loading and running arbitrary code can 
damage/destroy your deploy.

The ability to dynamically load jsps is off by default. To enable set 
hbase.http.compile.jsp.dynamically.enable to true.

  was:Adding the ability to dynamically compile jsp enable us to do some hot 
fix. As the example given in HBASE-20679, we can use a jsp to fix a RIT stuck 
situation. Set 'hbase.http.compile.jsp.dynamically.enabled' to true if you want 
to use this feature(default is false). After toggling on this flag, any jsp put 
in the  ${HBASE_HOME}/hbase-webapps/master(or regionserver) can be dynamically 
compiled and executed via visiting the jsp from web browser or by 'curl' 
command. For example, if you put a jsp named fix.jsp into 
${HBASE_HOME}/hbase-webapps/master. You can execute this jsp by this url: 
http://masterServerName:infoport/fix.jsp




[jira] [Commented] (HBASE-20679) Add the ability to compile JSP dynamically in Jetty

2018-06-05 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502803#comment-16502803
 ] 

stack commented on HBASE-20679:
---

[~allan163] Is that a bug fix in TestLogLevel.java?

Patch looks great. Added the config to the RN.



[jira] [Updated] (HBASE-20679) Add the ability to compile JSP dynamically in Jetty

2018-06-05 Thread Allan Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-20679:
---
Release Note: Adding the ability to dynamically compile jsp enable us to do 
some hot fix. As the example given in HBASE-20679, we can use a jsp to fix a 
RIT stuck situation. Set 'hbase.http.compile.jsp.dynamically.enabled' to true 
if you want to use this feature(default is false). After toggling on this flag, 
any jsp put in the  ${HBASE_HOME}/hbase-webapps/master(or regionserver) can be 
dynamically compiled and executed via visiting the jsp from web browser or by 
'curl' command. For example, if you put a jsp named fix.jsp into 
${HBASE_HOME}/hbase-webapps/master. You can execute this jsp by this url: 
http://masterServerName:infoport/fix.jsp  (was: Adding the ability to 
dynamically compile jsp enable us to do some hot fix. As the example given in 
HBASE-20679, we can use a jsp to fix a RIT stuck situation

The ability to dynamically load jsps is off by default. To enable set 
hbase.http.compile.jsp.dynamically.enable to true.)



[jira] [Updated] (HBASE-20679) Add the ability to compile JSP dynamically in Jetty

2018-06-05 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20679:
--
Release Note: 
Adding the ability to dynamically compile jsp enable us to do some hot fix. As 
the example given in HBASE-20679, we can use a jsp to fix a RIT stuck situation

The ability to dynamically load jsps is off by default. To enable set 
hbase.http.compile.jsp.dynamically.enable to true.

  was:Adding the ability to dynamically compile jsp enable us to do some hot 
fix. As the example given in HBASE-20679, we can use a jsp to fix a RIT stuck 
situation




[jira] [Commented] (HBASE-20679) Add the ability to compile JSP dynamically in Jetty

2018-06-05 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502802#comment-16502802
 ] 

stack commented on HBASE-20679:
---

bq. Personally I think operational tooling around hot fixing would be valuable 
as a first class feature, but we'd need to start by laying out goals and 
non-goals. for example, why wouldn't we base it on JPDA's built in hot swap 
capability or the HotSwapAgent project. Or a dynamic classloader that accepts 
things to run from something like hbck2?

Sounds great (goals/non-goals; explore various mechanisms... didn't know about 
JPDA...  -- What about CPs?). Another issue though Sean?

Meantime, I'd suggest we get this nice one in. We can disable it once an 
alternate hotfixing channel exists?



[jira] [Commented] (HBASE-20682) MoveProcedure can be subtask of ModifyTableProcedure/ReopenTableRegionsProcedure; ensure all kosher.

2018-06-05 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502799#comment-16502799
 ] 

stack commented on HBASE-20682:
---

bq. So we need to do something in SCP to tell the pending region related 
procedures I will take charge of this region and please give up assign/unassign 
the region. 

That's not how it works now. We make it so the AP and/or UP that is running can 
go on to complete in spite of a server crash. The SCP recovers regions that were 
on the crashed server and then puts up assigns so they can find new homes. When 
it is safe to do so -- after the WALs have been split -- the SCP will then break 
any stuck RPCs that were trying to go against the downed server (there is no 
timeout in the AMv2 dispatch currently).

The SCP does no locking. Its assigns go into the general queue and may happen 
before or after any that were ongoing (an AssignProcedure has logic that stops 
the assign if it notices one has happened concurrently; this could be 
problematic, but at least only one assign runs at a time because of locking).

MRP is basic. It just spawns a UP and then an AP in series. It does not 
take a lock. There are some preflight checks. Up to now the MRP will just 
'complete' quietly if its preconditions are not met -- e.g. a region is not 
online -- but that causes issues when MRP is a subprocedure of another procedure 
that depends on MRP executing.

MRP is used by the balancer mostly. It's not the end of the world if it doesn't 
complete. That's how it has been treated up to now.

I forgot about its use as a region-reopen. I'm thinking that we make a 
dedicated reopen-region procedure. It would not use UP and AP. It would not be 
like MRP in that it will persist until it succeeds in closing and opening the 
region. It will take a lock across the close/open (unlike MRP). It would not 
take source and target servers, just find the region wherever it is. If there is 
a crash while it is running, it'll block on the crashed server until SCP says it 
can continue. It won't spawn subprocedures as MRP does.



[jira] [Updated] (HBASE-20679) Add the ability to compile JSP dynamically in Jetty

2018-06-05 Thread Allan Yang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20679?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allan Yang updated HBASE-20679:
---
Attachment: HBASE-20679.003.patch



[jira] [Updated] (HBASE-20569) NPE in RecoverStandbyProcedure.execute

2018-06-05 Thread Guanghao Zhang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20569?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Guanghao Zhang updated HBASE-20569:
---
Attachment: HBASE-20569.HBASE-19064.011.patch

> NPE in RecoverStandbyProcedure.execute
> --
>
> Key: HBASE-20569
> URL: https://issues.apache.org/jira/browse/HBASE-20569
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Duo Zhang
>Assignee: Guanghao Zhang
>Priority: Major
> Attachments: HBASE-20569.HBASE-19064.001.patch, 
> HBASE-20569.HBASE-19064.002.patch, HBASE-20569.HBASE-19064.003.patch, 
> HBASE-20569.HBASE-19064.004.patch, HBASE-20569.HBASE-19064.005.patch, 
> HBASE-20569.HBASE-19064.006.patch, HBASE-20569.HBASE-19064.007.patch, 
> HBASE-20569.HBASE-19064.008.patch, HBASE-20569.HBASE-19064.009.patch, 
> HBASE-20569.HBASE-19064.010.patch, HBASE-20569.HBASE-19064.011.patch
>
>
> We call ReplaySyncReplicationWALManager.initPeerWorkers in INIT_WORKERS state 
> and then use it in DISPATCH_TASKS. But if we restart the master and the 
> procedure is restarted from state DISPATCH_TASKS, no one will call the 
> initPeerWorkers method and we will get an NPE.
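
As an illustrative sketch only (the class and field names below are invented, 
not the actual RecoverStandbyProcedure code), the restart pitfall described 
above is that state-specific initialization gets skipped when the procedure 
resumes in a later state:

{code:java}
// Hypothetical sketch: workers are set up only in INIT_WORKERS, so a procedure
// resumed directly in DISPATCH_TASKS after a master restart hits a null field.
import java.util.Arrays;
import java.util.List;

class RecoverStandbySketch {
  enum State { INIT_WORKERS, DISPATCH_TASKS }

  private List<String> workers; // assigned only in INIT_WORKERS

  void executeFrom(State state) {
    switch (state) {
      case INIT_WORKERS:
        workers = Arrays.asList("worker-1", "worker-2"); // initPeerWorkers analogue
        // fall through to dispatching
      case DISPATCH_TASKS:
        for (String w : workers) {          // NPE if we resumed here directly
          System.out.println("dispatch task to " + w);
        }
    }
  }
}
{code}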





[jira] [Commented] (HBASE-20679) Add the ability to compile JSP dynamically in Jetty

2018-06-05 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502798#comment-16502798
 ] 

Sean Busbey commented on HBASE-20679:
-

We need a more accurate subject for this jira. It's not just dynamic JSP 
compiling if the goal is expressly adding a hot fix to running services.

If we're including this as a stop-gap I'd like to make sure it's called out as 
experimental / dangerous / could change without warning in future releases.

Personally I think operational tooling around hot fixing would be valuable as a 
first class feature, but we'd need to start by laying out goals and non-goals. 
For example, why wouldn't we base it on [JPDA's built-in hot swap 
capability|https://docs.oracle.com/javase/8/docs/technotes/guides/jpda/enhancements1.4.html#hotswap]
 or the [HotSwapAgent project|http://hotswapagent.org/]? Or on a dynamic 
classloader that accepts things to run from something like hbck2?



[jira] [Commented] (HBASE-20679) Add the ability to compile JSP dynamically in Jetty

2018-06-05 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502773#comment-16502773
 ] 

stack commented on HBASE-20679:
---

No harm in a flag I'd say [~allan163] (Do we have one for CPs?). Can ship w/ it 
off. I can help w/ doing the doc if you put up a decent RN so operators in a 
bind can find it.

Mighty [~esteban], I wish (smile). I think there'll always be edge cases and 
holes -- always. The example [~allan163] cites is a particularly good one. It's 
a problem in the new assignment/pv2 store.  We don't have tooling coverage here 
yet. Meantime the man would be stuck w/o his hot fix mechanism. Thanks.

On hbck2 making use of this mechanism, I'd hope we'd have something more 
formal. I was just roping in [~uagashe] so he could see the interesting 
approach taken here to a problem that needs the Master to drive the fix.

[~uagashe] talk anytime. Interested in how we'd do a fix w/o the Master (w/o 
duplicating functionality, or worse, having two actors making changes where 
we've worked hard to ensure there is only ever a single writer).



[jira] [Commented] (HBASE-20679) Add the ability to compile JSP dynamically in Jetty

2018-06-05 Thread Allan Yang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502763#comment-16502763
 ] 

Allan Yang commented on HBASE-20679:


I'm here just to offer a useful tool for real production environments. Yes, 
injecting JSPs may cause some safety issues, but it is indeed a good way to do 
hot fixes. The "Assignment" fix is only one small example. When there is a bug 
in the Master or RegionServer and all other tools are useless, we think it is 
better to inject a jsp than to hack the code and restart the server.
If everyone agrees, I can modify the patch to add a flag controlling whether 
dynamic jsp compilation is enabled.



[jira] [Commented] (HBASE-20649) Validate HFiles do not have PREFIX_TREE DataBlockEncoding

2018-06-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502716#comment-16502716
 ] 

Hadoop QA commented on HBASE-20649:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
7s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
17s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue}  4m 
50s{color} | {color:blue} branch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. {color} 
|
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
54s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m  
4s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue}  4m 
55s{color} | {color:blue} patch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. {color} 
|
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
57s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m  6s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}182m 
34s{color} | {color:green} root in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
52s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}253m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | 

[jira] [Commented] (HBASE-20682) MoveProcedure can be subtask of ModifyTableProcedure/ReopenTableRegionsProcedure; ensure all kosher.

2018-06-05 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502711#comment-16502711
 ] 

Duo Zhang commented on HBASE-20682:
---

Let's draw a high level picture first.

In general, we have table locks and region locks to make sure that related 
procedures are executed serially. A table-related procedure holds the exclusive 
lock for the table, and a region-related procedure holds the shared lock for the 
table plus the exclusive lock for the region. The rule is clean and very easy to 
understand.

But in the real world, an AssignProcedure/UnassignProcedure could fail due to an 
RS crash. So we need to do something in SCP to tell the pending region-related 
procedures that it will take charge of the region and that they should give up 
assigning/unassigning it. And there is also another region-related procedure, 
MoveRegionProcedure, which schedules an UnassignProcedure and then an 
AssignProcedure. The picture here is not very clear; we need to think more.
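
For reference, the rule described above can be modeled with a small, self-contained sketch (illustrative only, not the actual MasterProcedureScheduler code; all names here are made up): a table-level read/write lock plus per-region exclusive locks.

{code}
// Illustrative model only -- not HBase code. Table procedure => exclusive
// table lock; region procedure => shared table lock + exclusive region lock.
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class TableRegionLockModel {
  private final ReentrantReadWriteLock tableLock = new ReentrantReadWriteLock();
  private final Map<String, ReentrantLock> regionLocks = new ConcurrentHashMap<>();

  void runTableProcedure(Runnable proc) {
    tableLock.writeLock().lock();              // exclusive lock for the table
    try {
      proc.run();
    } finally {
      tableLock.writeLock().unlock();
    }
  }

  void runRegionProcedure(String encodedRegionName, Runnable proc) {
    tableLock.readLock().lock();               // shared lock for the table
    ReentrantLock regionLock =
        regionLocks.computeIfAbsent(encodedRegionName, k -> new ReentrantLock());
    regionLock.lock();                         // exclusive lock for the region
    try {
      proc.run();
    } finally {
      regionLock.unlock();
      tableLock.readLock().unlock();
    }
  }
}
{code}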



> MoveProcedure can be subtask of 
> ModifyTableProcedure/ReopenTableRegionsProcedure; ensure all kosher.
> 
>
> Key: HBASE-20682
> URL: https://issues.apache.org/jira/browse/HBASE-20682
> Project: HBase
>  Issue Type: Sub-task
>  Components: amv2
>Reporter: stack
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.1
>
>
> MoveProcedure is used in at least two places as a means of 'reopening' a 
> region after a config has changed. This issue is about review of MP so this 
> usage is first-class (and that MP is a good procedure citizen running as a 
> subprocedure). In question is what should happen if the source server or the 
> target servers are offline when we go to run. As of the parent issue, we'll 
> skip to the end. Should we instead go ahead with the move (internally, if we 
> are asked to assign to an offline server, we'll pick another -- do same for 
> unassign? Would help in this reopen use case).



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20657) Retrying RPC call for ModifyTableProcedure may get stuck

2018-06-05 Thread Sergey Soldatov (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502708#comment-16502708
 ] 

Sergey Soldatov commented on HBASE-20657:
-

[~stack] Actually, the patch I provided makes MTP hold the lock, so while 
it's not finished, it will keep the exclusive lock (exactly what [~Apache9] 
mentioned in the last comment). But that would lead to the concurrent execution 
problem that I've addressed in the MasterProcedureScheduler changes. So the 
patch fixes the case where the table is healthy, all regions are online, and 2 
concurrent MTPs are executed. 

> Retrying RPC call for ModifyTableProcedure may get stuck
> 
>
> Key: HBASE-20657
> URL: https://issues.apache.org/jira/browse/HBASE-20657
> Project: HBase
>  Issue Type: Bug
>  Components: Client, proc-v2
>Affects Versions: 2.0.0
>Reporter: Sergey Soldatov
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.1
>
> Attachments: HBASE-20657-1-branch-2.patch, 
> HBASE-20657-2-branch-2.patch, HBASE-20657-3-branch-2.patch, 
> HBASE-20657-testcase-branch2.patch
>
>
> Env: 2 masters, 1 RS. 
> Steps to reproduce: Active master is killed while ModifyTableProcedure is 
> executed. 
> If the table has enough regions, it may happen that when the secondary master 
> becomes active some of the regions may be closed, so once the client retries the call 
> to the new active master, a new ModifyTableProcedure is created and gets stuck 
> during MODIFY_TABLE_REOPEN_ALL_REGIONS state handling. That happens because:
> 1. When we are retrying from the client side, we call modifyTableAsync, which 
> creates a procedure with a new nonce key:
> {noformat}
>  ModifyTableRequest request = 
> RequestConverter.buildModifyTableRequest(
> td.getTableName(), td, ng.getNonceGroup(), ng.newNonce());
> {noformat}
>  So on the server side, it's considered as a new procedure and starts 
> executing immediately.
> 2. When we are processing  MODIFY_TABLE_REOPEN_ALL_REGIONS we create 
> MoveRegionProcedure for each region, but it checks whether the region is 
> online (and it's not), so it fails immediately, forcing the procedure to 
> restart.
> [~an...@apache.org] saw a similar case when two concurrent ModifyTable 
> procedures were running and got stuck in a similar way. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20657) Retrying RPC call for ModifyTableProcedure may get stuck

2018-06-05 Thread Duo Zhang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502705#comment-16502705
 ] 

Duo Zhang commented on HBASE-20657:
---

{quote}
Also, the table lock is not exclusive while the procedure is running; it is 
only exclusive around each of the procedure steps. 
{quote}

Yes, this is a bit strange to me. The base class of all procedures has a 
holdLock method which returns false, but so far, for all the procedures I've 
implemented, I have overridden the holdLock method to return true... I wonder 
why we make it return false in the base class since most procedures return 
true...
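
For illustration, the override pattern being described looks roughly like this (a sketch only; the other required methods of the procedure are omitted, "MyState" is a placeholder, and signatures may differ between branches):

{code}
// Sketch of the holdLock override pattern; not a complete procedure.
public class MyTableProcedure extends StateMachineProcedure<MasterProcedureEnv, MyState> {

  @Override
  protected boolean holdLock(MasterProcedureEnv env) {
    // The base Procedure returns false here, i.e. the lock is released and
    // re-acquired around every step; returning true keeps the exclusive lock
    // until the procedure finishes.
    return true;
  }

  // executeFromState, rollbackState, getState, etc. omitted for brevity.
}
{code}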

> Retrying RPC call for ModifyTableProcedure may get stuck
> 
>
> Key: HBASE-20657
> URL: https://issues.apache.org/jira/browse/HBASE-20657
> Project: HBase
>  Issue Type: Bug
>  Components: Client, proc-v2
>Affects Versions: 2.0.0
>Reporter: Sergey Soldatov
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.1
>
> Attachments: HBASE-20657-1-branch-2.patch, 
> HBASE-20657-2-branch-2.patch, HBASE-20657-3-branch-2.patch, 
> HBASE-20657-testcase-branch2.patch
>
>
> Env: 2 masters, 1 RS. 
> Steps to reproduce: Active master is killed while ModifyTableProcedure is 
> executed. 
> If the table has enough regions, it may happen that when the secondary master 
> becomes active some of the regions may be closed, so once the client retries the call 
> to the new active master, a new ModifyTableProcedure is created and gets stuck 
> during MODIFY_TABLE_REOPEN_ALL_REGIONS state handling. That happens because:
> 1. When we are retrying from the client side, we call modifyTableAsync, which 
> creates a procedure with a new nonce key:
> {noformat}
>  ModifyTableRequest request = 
> RequestConverter.buildModifyTableRequest(
> td.getTableName(), td, ng.getNonceGroup(), ng.newNonce());
> {noformat}
>  So on the server side, it's considered as a new procedure and starts 
> executing immediately.
> 2. When we are processing  MODIFY_TABLE_REOPEN_ALL_REGIONS we create 
> MoveRegionProcedure for each region, but it checks whether the region is 
> online (and it's not), so it fails immediately, forcing the procedure to 
> restart.
> [~an...@apache.org] saw a similar case when two concurrent ModifyTable 
> procedures were running and got stuck in a similar way. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20657) Retrying RPC call for ModifyTableProcedure may get stuck

2018-06-05 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502695#comment-16502695
 ] 

stack commented on HBASE-20657:
---

Looking at ModifyTableProcedure, it looks like it needs a bit of love. For 
instance, in the last step, it spawns subprocedures to reopen regions but 
returns done without waiting on their conclusion. Let me play with it. Also, 
the table lock is not exclusive while the procedure is running; it is only 
exclusive around each of the procedure steps. Let me try having the table lock 
span the important steps.

This procedure does a load of stuff. It's a bit like SCP. It's only now getting a 
bit of exercise, it seems. Thanks for turning up the issues.

> Retrying RPC call for ModifyTableProcedure may get stuck
> 
>
> Key: HBASE-20657
> URL: https://issues.apache.org/jira/browse/HBASE-20657
> Project: HBase
>  Issue Type: Bug
>  Components: Client, proc-v2
>Affects Versions: 2.0.0
>Reporter: Sergey Soldatov
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.1
>
> Attachments: HBASE-20657-1-branch-2.patch, 
> HBASE-20657-2-branch-2.patch, HBASE-20657-3-branch-2.patch, 
> HBASE-20657-testcase-branch2.patch
>
>
> Env: 2 masters, 1 RS. 
> Steps to reproduce: Active master is killed while ModifyTableProcedure is 
> executed. 
> If the table has enough regions, it may happen that when the secondary master 
> becomes active some of the regions may be closed, so once the client retries the call 
> to the new active master, a new ModifyTableProcedure is created and gets stuck 
> during MODIFY_TABLE_REOPEN_ALL_REGIONS state handling. That happens because:
> 1. When we are retrying from the client side, we call modifyTableAsync, which 
> creates a procedure with a new nonce key:
> {noformat}
>  ModifyTableRequest request = 
> RequestConverter.buildModifyTableRequest(
> td.getTableName(), td, ng.getNonceGroup(), ng.newNonce());
> {noformat}
>  So on the server side, it's considered as a new procedure and starts 
> executing immediately.
> 2. When we are processing  MODIFY_TABLE_REOPEN_ALL_REGIONS we create 
> MoveRegionProcedure for each region, but it checks whether the region is 
> online (and it's not), so it fails immediately, forcing the procedure to 
> restart.
> [~an...@apache.org] saw a similar case when two concurrent ModifyTable 
> procedures were running and got stuck in a similar way. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20684) org.apache.hadoop.hbase.client.Scan#setStopRow javadoc uses incorrect method

2018-06-05 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502685#comment-16502685
 ] 

Hudson commented on HBASE-20684:


Results for branch branch-2
[build #827 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/827/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/827//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/827//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/827//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> org.apache.hadoop.hbase.client.Scan#setStopRow javadoc uses incorrect method
> 
>
> Key: HBASE-20684
> URL: https://issues.apache.org/jira/browse/HBASE-20684
> Project: HBase
>  Issue Type: Bug
>  Components: Client, documentation
>Affects Versions: 2.0.0
>Reporter: Evgenii
>Assignee: Evgenii
>Priority: Trivial
> Fix For: 3.0.0, 2.1.0, 2.0.1
>
> Attachments: HBASE-20684.master.001.patch, 
> HBASE-20684.master.002.patch, HBASE-20684.master.003.patch
>
>
> org.apache.hadoop.hbase.client.Scan#setStopRow javadoc, in the "deprecated" 
> paragraph, points to an incorrect method. It uses a pointer to withStartRow, 
> but should use withStopRow.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20657) Retrying RPC call for ModifyTableProcedure may get stuck

2018-06-05 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20657:
--
Priority: Critical  (was: Major)

> Retrying RPC call for ModifyTableProcedure may get stuck
> 
>
> Key: HBASE-20657
> URL: https://issues.apache.org/jira/browse/HBASE-20657
> Project: HBase
>  Issue Type: Bug
>  Components: Client, proc-v2
>Affects Versions: 2.0.0
>Reporter: Sergey Soldatov
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.1
>
> Attachments: HBASE-20657-1-branch-2.patch, 
> HBASE-20657-2-branch-2.patch, HBASE-20657-3-branch-2.patch, 
> HBASE-20657-testcase-branch2.patch
>
>
> Env: 2 masters, 1 RS. 
> Steps to reproduce: Active master is killed while ModifyTableProcedure is 
> executed. 
> If the table has enough regions, it may happen that when the secondary master 
> becomes active some of the regions may be closed, so once the client retries the call 
> to the new active master, a new ModifyTableProcedure is created and gets stuck 
> during MODIFY_TABLE_REOPEN_ALL_REGIONS state handling. That happens because:
> 1. When we are retrying from the client side, we call modifyTableAsync, which 
> creates a procedure with a new nonce key:
> {noformat}
>  ModifyTableRequest request = 
> RequestConverter.buildModifyTableRequest(
> td.getTableName(), td, ng.getNonceGroup(), ng.newNonce());
> {noformat}
>  So on the server side, it's considered as a new procedure and starts 
> executing immediately.
> 2. When we are processing  MODIFY_TABLE_REOPEN_ALL_REGIONS we create 
> MoveRegionProcedure for each region, but it checks whether the region is 
> online (and it's not), so it fails immediately, forcing the procedure to 
> restart.
> [~an...@apache.org] saw a similar case when two concurrent ModifyTable 
> procedures were running and got stuck in a similar way. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20657) Retrying RPC call for ModifyTableProcedure may get stuck

2018-06-05 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20657:
--
Fix Version/s: 2.0.1

> Retrying RPC call for ModifyTableProcedure may get stuck
> 
>
> Key: HBASE-20657
> URL: https://issues.apache.org/jira/browse/HBASE-20657
> Project: HBase
>  Issue Type: Bug
>  Components: Client, proc-v2
>Affects Versions: 2.0.0
>Reporter: Sergey Soldatov
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.1
>
> Attachments: HBASE-20657-1-branch-2.patch, 
> HBASE-20657-2-branch-2.patch, HBASE-20657-3-branch-2.patch, 
> HBASE-20657-testcase-branch2.patch
>
>
> Env: 2 masters, 1 RS. 
> Steps to reproduce: Active master is killed while ModifyTableProcedure is 
> executed. 
> If the table has enough regions, it may happen that when the secondary master 
> becomes active some of the regions may be closed, so once the client retries the call 
> to the new active master, a new ModifyTableProcedure is created and gets stuck 
> during MODIFY_TABLE_REOPEN_ALL_REGIONS state handling. That happens because:
> 1. When we are retrying from the client side, we call modifyTableAsync, which 
> creates a procedure with a new nonce key:
> {noformat}
>  ModifyTableRequest request = 
> RequestConverter.buildModifyTableRequest(
> td.getTableName(), td, ng.getNonceGroup(), ng.newNonce());
> {noformat}
>  So on the server side, it's considered as a new procedure and starts 
> executing immediately.
> 2. When we are processing  MODIFY_TABLE_REOPEN_ALL_REGIONS we create 
> MoveRegionProcedure for each region, but it checks whether the region is 
> online (and it's not), so it fails immediately, forcing the procedure to 
> restart.
> [~an...@apache.org] saw a similar case when two concurrent ModifyTable 
> procedures were running and got stuck in a similar way. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20657) Retrying RPC call for ModifyTableProcedure may get stuck

2018-06-05 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502662#comment-16502662
 ] 

stack commented on HBASE-20657:
---

Thanks [~sergey.soldatov] (and @ankit singhai) for the digging.

From the description {{When we are processing MODIFY_TABLE_REOPEN_ALL_REGIONS we 
create MoveRegionProcedure for each region, but it checks whether the region is 
online (and it's not).}} ...

So, we can't "move" a region that is offline. That seems reasonable.  And if 
you didn't close a region, you shouldn't be doing its reopen So, MP failing 
seems right? Need to have MODIFY_TABLE_REOPEN_ALL_REGIONS deal with this? Or I 
should make a reopen procedure... one that gets lock on region and does 
reopen... would remove a layer of procedures too... not making use of MTP.

bq. Another question is related to (1) from the description. Is it expected, 
that during the retry from client we generate a new nonce key for the same 
procedure?

That's broken. I believe [~an...@apache.org] found this... Let's fix it.

Let me assign myself for now... if you don't mind, [~sergey.soldatov].
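
For what it's worth, the client-side shape of the nonce fix being discussed would be roughly the following (a sketch only, not the actual patch; it reuses the identifiers from the description above): generate the nonce pair once and reuse it on every retry so the master can recognize the retried call as the same procedure.

{code}
// Sketch only -- not the committed fix. Reuse one nonce pair across retries
// instead of calling ng.newNonce() on every attempt.
long nonceGroup = ng.getNonceGroup();
long nonce = ng.newNonce();
while (true) {
  try {
    ModifyTableRequest request = RequestConverter.buildModifyTableRequest(
        td.getTableName(), td, nonceGroup, nonce);   // same nonce on every retry
    // ... submit the request; the master can now dedupe on (nonceGroup, nonce) ...
    break;
  } catch (IOException e) {
    // retry with the SAME nonce so a second ModifyTableProcedure is not created
  }
}
{code}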

> Retrying RPC call for ModifyTableProcedure may get stuck
> 
>
> Key: HBASE-20657
> URL: https://issues.apache.org/jira/browse/HBASE-20657
> Project: HBase
>  Issue Type: Bug
>  Components: Client, proc-v2
>Affects Versions: 2.0.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Fix For: 2.0.1
>
> Attachments: HBASE-20657-1-branch-2.patch, 
> HBASE-20657-2-branch-2.patch, HBASE-20657-3-branch-2.patch, 
> HBASE-20657-testcase-branch2.patch
>
>
> Env: 2 masters, 1 RS. 
> Steps to reproduce: Active master is killed while ModifyTableProcedure is 
> executed. 
> If the table has enough regions, it may happen that when the secondary master 
> becomes active some of the regions may be closed, so once the client retries the call 
> to the new active master, a new ModifyTableProcedure is created and gets stuck 
> during MODIFY_TABLE_REOPEN_ALL_REGIONS state handling. That happens because:
> 1. When we are retrying from the client side, we call modifyTableAsync, which 
> creates a procedure with a new nonce key:
> {noformat}
>  ModifyTableRequest request = 
> RequestConverter.buildModifyTableRequest(
> td.getTableName(), td, ng.getNonceGroup(), ng.newNonce());
> {noformat}
>  So on the server side, it's considered as a new procedure and starts 
> executing immediately.
> 2. When we are processing  MODIFY_TABLE_REOPEN_ALL_REGIONS we create 
> MoveRegionProcedure for each region, but it checks whether the region is 
> online (and it's not), so it fails immediately, forcing the procedure to 
> restart.
> [~an...@apache.org] saw a similar case when two concurrent ModifyTable 
> procedures were running and got stuck in a similar way. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-20657) Retrying RPC call for ModifyTableProcedure may get stuck

2018-06-05 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack reassigned HBASE-20657:
-

Assignee: stack  (was: Sergey Soldatov)

> Retrying RPC call for ModifyTableProcedure may get stuck
> 
>
> Key: HBASE-20657
> URL: https://issues.apache.org/jira/browse/HBASE-20657
> Project: HBase
>  Issue Type: Bug
>  Components: Client, proc-v2
>Affects Versions: 2.0.0
>Reporter: Sergey Soldatov
>Assignee: stack
>Priority: Major
> Fix For: 2.0.1
>
> Attachments: HBASE-20657-1-branch-2.patch, 
> HBASE-20657-2-branch-2.patch, HBASE-20657-3-branch-2.patch, 
> HBASE-20657-testcase-branch2.patch
>
>
> Env: 2 masters, 1 RS. 
> Steps to reproduce: Active master is killed while ModifyTableProcedure is 
> executed. 
> If the table has enough regions, it may happen that when the secondary master 
> becomes active some of the regions may be closed, so once the client retries the call 
> to the new active master, a new ModifyTableProcedure is created and gets stuck 
> during MODIFY_TABLE_REOPEN_ALL_REGIONS state handling. That happens because:
> 1. When we are retrying from the client side, we call modifyTableAsync, which 
> creates a procedure with a new nonce key:
> {noformat}
>  ModifyTableRequest request = 
> RequestConverter.buildModifyTableRequest(
> td.getTableName(), td, ng.getNonceGroup(), ng.newNonce());
> {noformat}
>  So on the server side, it's considered as a new procedure and starts 
> executing immediately.
> 2. When we are processing  MODIFY_TABLE_REOPEN_ALL_REGIONS we create 
> MoveRegionProcedure for each region, but it checks whether the region is 
> online (and it's not), so it fails immediately, forcing the procedure to 
> restart.
> [~an...@apache.org] saw a similar case when two concurrent ModifyTable 
> procedures were running and got stuck in a similar way. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20684) org.apache.hadoop.hbase.client.Scan#setStopRow javadoc uses incorrect method

2018-06-05 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20684?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502656#comment-16502656
 ] 

Hudson commented on HBASE-20684:


Results for branch branch-2.0
[build #392 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/392/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/392//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/392//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/392//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> org.apache.hadoop.hbase.client.Scan#setStopRow javadoc uses incorrect method
> 
>
> Key: HBASE-20684
> URL: https://issues.apache.org/jira/browse/HBASE-20684
> Project: HBase
>  Issue Type: Bug
>  Components: Client, documentation
>Affects Versions: 2.0.0
>Reporter: Evgenii
>Assignee: Evgenii
>Priority: Trivial
> Fix For: 3.0.0, 2.1.0, 2.0.1
>
> Attachments: HBASE-20684.master.001.patch, 
> HBASE-20684.master.002.patch, HBASE-20684.master.003.patch
>
>
> org.apache.hadoop.hbase.client.Scan#setStopRow javadoc, in the "deprecated" 
> paragraph, points to an incorrect method. It uses a pointer to withStartRow, 
> but should use withStopRow.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20671) Merged region brought back to life causing RS to be killed by Master

2018-06-05 Thread stack (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20671?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502635#comment-16502635
 ] 

stack commented on HBASE-20671:
---

Yeah. Good one. The problematic code is in the load of meta after a master crash:

{code}
State localState = state;
if (localState == null) {
  // No region state column data in hbase:meta table! Are I doing a rolling upgrade from
  // hbase1 to hbase2? Am I restoring a SNAPSHOT or otherwise adding a region to hbase:meta?
  // In any of these cases, state is empty. For now, presume OFFLINE but there are probably
  // cases where we need to probe more to be sure this correct; TODO informed by experience.
  LOG.info(regionInfo.getEncodedName() + " regionState=null; presuming " + State.OFFLINE);
  localState = State.OFFLINE;
}
{code}

The above note allows that there are cases where the presumption that the region is 
OFFLINE is wrong (per HBASE-19529).

Let me write a test for this one to repro. We need a FOR_GC state to which we 
set regions that are up for deletion.
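
To make the intent concrete, the kind of guard being discussed might look like this (purely illustrative; isParentPendingGc() is a hypothetical helper standing in for whatever marker, e.g. a FOR_GC state, ends up identifying parents that are only kept around for GC):

{code}
// Illustrative only -- not the committed fix. Before presuming OFFLINE for a
// region with no state column, skip merged/split parents awaiting GC so they
// are not handed back out to a RegionServer.
if (localState == null) {
  if (isParentPendingGc(regionInfo)) {          // hypothetical check
    LOG.info(regionInfo.getEncodedName() + " has no state but is a parent pending GC; skipping");
    return;   // or 'continue', depending on the surrounding visitor/loop
  }
  LOG.info(regionInfo.getEncodedName() + " regionState=null; presuming " + State.OFFLINE);
  localState = State.OFFLINE;
}
{code}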

> Merged region brought back to life causing RS to be killed by Master
> 
>
> Key: HBASE-20671
> URL: https://issues.apache.org/jira/browse/HBASE-20671
> Project: HBase
>  Issue Type: Bug
>  Components: amv2
>Affects Versions: 2.0.0
>Reporter: Josh Elser
>Assignee: Josh Elser
>Priority: Critical
> Fix For: 2.0.1
>
> Attachments: 
> hbase-hbase-master-ctr-e138-1518143905142-336066-01-03.hwx.site.log.zip, 
> hbase-hbase-regionserver-ctr-e138-1518143905142-336066-01-02.hwx.site.log.zip
>
>
> Another bug coming out of a master restart and replay of the pv2 logs.
> The master merged two regions into one successfully, was restarted, but then 
> ended up assigning the child regions back out to the cluster. There is a 
> log message which appears to indicate that RegionStates acknowledges that it 
> doesn't know what this region is as it's replaying the pv2 WAL; however, it 
> incorrectly assumes that the region is just OFFLINE and needs to be assigned.
> {noformat}
> 2018-05-30 04:26:00,055 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=2] master.HMaster: 
> Client=hrt_qa//172.27.85.11 Merge regions a7dd6606dcacc9daf085fc9fa2aecc0c 
> and 4017a3c778551d4d258c785d455f9c0b
> 2018-05-30 04:28:27,525 DEBUG 
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> procedure2.ProcedureExecutor: Completed pid=4368, state=SUCCESS; 
> MergeTableRegionsProcedure table=tabletwo_merge, 
> regions=[a7dd6606dcacc9daf085fc9fa2aecc0c, 4017a3c778551d4d258c785d455f9c0b], 
> forcibly=false
> {noformat}
> {noformat}
> 2018-05-30 04:29:20,263 INFO  
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> assignment.AssignmentManager: a7dd6606dcacc9daf085fc9fa2aecc0c 
> regionState=null; presuming OFFLINE
> 2018-05-30 04:29:20,263 INFO  
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> assignment.RegionStates: Added to offline, CURRENTLY NEVER CLEARED!!! 
> rit=OFFLINE, location=null, table=tabletwo_merge, 
> region=a7dd6606dcacc9daf085fc9fa2aecc0c
> 2018-05-30 04:29:20,266 INFO  
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> assignment.AssignmentManager: 4017a3c778551d4d258c785d455f9c0b 
> regionState=null; presuming OFFLINE
> 2018-05-30 04:29:20,266 INFO  
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> assignment.RegionStates: Added to offline, CURRENTLY NEVER CLEARED!!! 
> rit=OFFLINE, location=null, table=tabletwo_merge, 
> region=4017a3c778551d4d258c785d455f9c0b
> {noformat}
> Eventually, the RS reports in its online regions, and the master tells it to 
> kill itself:
> {noformat}
> 2018-05-30 04:29:24,272 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=26,queue=2,port=2] 
> assignment.AssignmentManager: Killing 
> ctr-e138-1518143905142-336066-01-02.hwx.site,16020,1527654546619: Not 
> online: tabletwo_merge,,1527652130538.a7dd6606dcacc9daf085fc9fa2aecc0c.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-20671) Merged region brought back to life causing RS to be killed by Master

2018-06-05 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack reassigned HBASE-20671:
-

Assignee: stack  (was: Josh Elser)

> Merged region brought back to life causing RS to be killed by Master
> 
>
> Key: HBASE-20671
> URL: https://issues.apache.org/jira/browse/HBASE-20671
> Project: HBase
>  Issue Type: Bug
>  Components: amv2
>Affects Versions: 2.0.0
>Reporter: Josh Elser
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.1
>
> Attachments: 
> hbase-hbase-master-ctr-e138-1518143905142-336066-01-03.hwx.site.log.zip, 
> hbase-hbase-regionserver-ctr-e138-1518143905142-336066-01-02.hwx.site.log.zip
>
>
> Another bug coming out of a master restart and replay of the pv2 logs.
> The master merged two regions into one successfully, was restarted, but then 
> ended up assigning the child regions back out to the cluster. There is a 
> log message which appears to indicate that RegionStates acknowledges that it 
> doesn't know what this region is as it's replaying the pv2 WAL; however, it 
> incorrectly assumes that the region is just OFFLINE and needs to be assigned.
> {noformat}
> 2018-05-30 04:26:00,055 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=2] master.HMaster: 
> Client=hrt_qa//172.27.85.11 Merge regions a7dd6606dcacc9daf085fc9fa2aecc0c 
> and 4017a3c778551d4d258c785d455f9c0b
> 2018-05-30 04:28:27,525 DEBUG 
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> procedure2.ProcedureExecutor: Completed pid=4368, state=SUCCESS; 
> MergeTableRegionsProcedure table=tabletwo_merge, 
> regions=[a7dd6606dcacc9daf085fc9fa2aecc0c, 4017a3c778551d4d258c785d455f9c0b], 
> forcibly=false
> {noformat}
> {noformat}
> 2018-05-30 04:29:20,263 INFO  
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> assignment.AssignmentManager: a7dd6606dcacc9daf085fc9fa2aecc0c 
> regionState=null; presuming OFFLINE
> 2018-05-30 04:29:20,263 INFO  
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> assignment.RegionStates: Added to offline, CURRENTLY NEVER CLEARED!!! 
> rit=OFFLINE, location=null, table=tabletwo_merge, 
> region=a7dd6606dcacc9daf085fc9fa2aecc0c
> 2018-05-30 04:29:20,266 INFO  
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> assignment.AssignmentManager: 4017a3c778551d4d258c785d455f9c0b 
> regionState=null; presuming OFFLINE
> 2018-05-30 04:29:20,266 INFO  
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> assignment.RegionStates: Added to offline, CURRENTLY NEVER CLEARED!!! 
> rit=OFFLINE, location=null, table=tabletwo_merge, 
> region=4017a3c778551d4d258c785d455f9c0b
> {noformat}
> Eventually, the RS reports in its online regions, and the master tells it to 
> kill itself:
> {noformat}
> 2018-05-30 04:29:24,272 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=26,queue=2,port=2] 
> assignment.AssignmentManager: Killing 
> ctr-e138-1518143905142-336066-01-02.hwx.site,16020,1527654546619: Not 
> online: tabletwo_merge,,1527652130538.a7dd6606dcacc9daf085fc9fa2aecc0c.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20671) Merged region brought back to life causing RS to be killed by Master

2018-06-05 Thread stack (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20671?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-20671:
--
Fix Version/s: 2.0.1

> Merged region brought back to life causing RS to be killed by Master
> 
>
> Key: HBASE-20671
> URL: https://issues.apache.org/jira/browse/HBASE-20671
> Project: HBase
>  Issue Type: Bug
>  Components: amv2
>Affects Versions: 2.0.0
>Reporter: Josh Elser
>Assignee: stack
>Priority: Critical
> Fix For: 2.0.1
>
> Attachments: 
> hbase-hbase-master-ctr-e138-1518143905142-336066-01-03.hwx.site.log.zip, 
> hbase-hbase-regionserver-ctr-e138-1518143905142-336066-01-02.hwx.site.log.zip
>
>
> Another bug coming out of a master restart and replay of the pv2 logs.
> The master merged two regions into one successfully, was restarted, but then 
> ended up assigning the child regions back out to the cluster. There is a 
> log message which appears to indicate that RegionStates acknowledges that it 
> doesn't know what this region is as it's replaying the pv2 WAL; however, it 
> incorrectly assumes that the region is just OFFLINE and needs to be assigned.
> {noformat}
> 2018-05-30 04:26:00,055 INFO  
> [RpcServer.default.FPBQ.Fifo.handler=29,queue=2,port=2] master.HMaster: 
> Client=hrt_qa//172.27.85.11 Merge regions a7dd6606dcacc9daf085fc9fa2aecc0c 
> and 4017a3c778551d4d258c785d455f9c0b
> 2018-05-30 04:28:27,525 DEBUG 
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> procedure2.ProcedureExecutor: Completed pid=4368, state=SUCCESS; 
> MergeTableRegionsProcedure table=tabletwo_merge, 
> regions=[a7dd6606dcacc9daf085fc9fa2aecc0c, 4017a3c778551d4d258c785d455f9c0b], 
> forcibly=false
> {noformat}
> {noformat}
> 2018-05-30 04:29:20,263 INFO  
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> assignment.AssignmentManager: a7dd6606dcacc9daf085fc9fa2aecc0c 
> regionState=null; presuming OFFLINE
> 2018-05-30 04:29:20,263 INFO  
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> assignment.RegionStates: Added to offline, CURRENTLY NEVER CLEARED!!! 
> rit=OFFLINE, location=null, table=tabletwo_merge, 
> region=a7dd6606dcacc9daf085fc9fa2aecc0c
> 2018-05-30 04:29:20,266 INFO  
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> assignment.AssignmentManager: 4017a3c778551d4d258c785d455f9c0b 
> regionState=null; presuming OFFLINE
> 2018-05-30 04:29:20,266 INFO  
> [master/ctr-e138-1518143905142-336066-01-03:2] 
> assignment.RegionStates: Added to offline, CURRENTLY NEVER CLEARED!!! 
> rit=OFFLINE, location=null, table=tabletwo_merge, 
> region=4017a3c778551d4d258c785d455f9c0b
> {noformat}
> Eventually, the RS reports in its online regions, and the master tells it to 
> kill itself:
> {noformat}
> 2018-05-30 04:29:24,272 WARN  
> [RpcServer.default.FPBQ.Fifo.handler=26,queue=2,port=2] 
> assignment.AssignmentManager: Killing 
> ctr-e138-1518143905142-336066-01-02.hwx.site,16020,1527654546619: Not 
> online: tabletwo_merge,,1527652130538.a7dd6606dcacc9daf085fc9fa2aecc0c.
> {noformat}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20630) B: Delete command enhancements

2018-06-05 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502607#comment-16502607
 ] 

Ted Yu commented on HBASE-20630:


Consider using EnvironmentEdgeManager in the deletion test so that the 
parameters are closer to reality.
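
Something along these lines, assuming the test can use ManualEnvironmentEdge from hbase-common (a sketch, not part of the attached patch):

{code}
// Sketch only: pin "now" so an age-based delete cutoff is deterministic.
ManualEnvironmentEdge edge = new ManualEnvironmentEdge();
edge.setValue(System.currentTimeMillis());
EnvironmentEdgeManager.injectEdge(edge);
try {
  // ... create a backup here, then move the clock forward so the backup is
  // genuinely older than the requested number of days ...
  edge.incrValue(3 * 24 * 60 * 60 * 1000L);   // +3 days
  // ... now run "delete -p <days>" and assert on the number of deleted backups ...
} finally {
  EnvironmentEdgeManager.reset();
}
{code}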

> B: Delete command enhancements
> 
>
> Key: HBASE-20630
> URL: https://issues.apache.org/jira/browse/HBASE-20630
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
> Attachments: HBASE-20630-v1.patch
>
>
> Make the command more useable. Currently, user needs to provide list of 
> backup ids to delete. It would be nice to have more convenient options, such 
> as: deleting all backups which are older than XXX days, etc 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20630) B: Delete command enhancements

2018-06-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502600#comment-16502600
 ] 

Hadoop QA commented on HBASE-20630:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
15s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
30s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 
11s{color} | {color:red} hbase-backup: The patch generated 2 new + 0 unchanged 
- 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
11s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 56s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m 
42s{color} | {color:green} hbase-backup in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 8s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 40m 50s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20630 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926644/HBASE-20630-v1.patch |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 27c97230a122 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 7d3750bd9f |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13102/artifact/patchprocess/diff-checkstyle-hbase-backup.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13102/testReport/ |
| Max. process+thread count | 4309 (vs. ulimit of 1) |
| modules | C: hbase-backup U: hbase-backup |
| Console output | 

[jira] [Commented] (HBASE-20331) clean up shaded packaging for 2.1

2018-06-05 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20331?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502569#comment-16502569
 ] 

Hudson commented on HBASE-20331:


Results for branch HBASE-20331
[build #31 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20331/31/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20331/31//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20331/31//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20331/31//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(x) {color:red}-1 client integration test{color}
--Failed when running client tests on top of Hadoop 2. [see log for 
details|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-20331/31//artifacts/output-integration/hadoop-2.log].
 (note that this means we didn't run on Hadoop 3)


> clean up shaded packaging for 2.1
> -
>
> Key: HBASE-20331
> URL: https://issues.apache.org/jira/browse/HBASE-20331
> Project: HBase
>  Issue Type: Umbrella
>  Components: Client, mapreduce, shading
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0, 2.1.0
>
>
> polishing pass on shaded modules for 2.0 based on trying to use them in more 
> contexts.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20683) Incorrect return value for PreUpgradeValidator

2018-06-05 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20683?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502562#comment-16502562
 ] 

Ted Yu commented on HBASE-20683:


lgtm
{code}
   * @return DataBlockEncoding compatible with HBase 2
   * @throws IOException if a remote or network exception occurs
   */
  private boolean validateDBE() throws IOException {
{code}
I wonder if the return value should be int: the number of incompatibilities (0 
means pass).
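
For illustration, the int-returning variant would look something like this (a hypothetical sketch, not the attached patch):

{code}
/**
 * Hypothetical sketch of the suggestion above; the surrounding
 * PreUpgradeValidator plumbing is omitted.
 *
 * @return the number of incompatibilities found; 0 means the check passed
 * @throws IOException if a remote or network exception occurs
 */
private int validateDBE() throws IOException {
  int incompatibilities = 0;
  // ... for each table / column family, count DataBlockEncodings that are not
  // supported by HBase 2 (e.g. PREFIX_TREE) instead of collapsing to a boolean ...
  return incompatibilities;
}
{code}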

> Incorrect return value for PreUpgradeValidator
> --
>
> Key: HBASE-20683
> URL: https://issues.apache.org/jira/browse/HBASE-20683
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 3.0.0, 2.1.0, 2.0.1
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Critical
> Attachments: HBASE-20683.master.001.patch
>
>
> PreUpgradeValidator currently returns 1 when there are no incompatibilities.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20630) B: Delete command enhancements

2018-06-05 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502545#comment-16502545
 ] 

Ted Yu commented on HBASE-20630:


{code}
+// Purge all backups which are older than 0 days
+// Must return 1 deleted backup
+args = new String[] { "delete", "-p", "0" };
{code}
This is a somewhat strange setting - 0 days.

If the deletion is for freeing up space, I doubt anybody would specify such a 
number to this command.

> B: Delete command enhancements
> 
>
> Key: HBASE-20630
> URL: https://issues.apache.org/jira/browse/HBASE-20630
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
> Attachments: HBASE-20630-v1.patch
>
>
> Make the command more useable. Currently, user needs to provide list of 
> backup ids to delete. It would be nice to have more convenient options, such 
> as: deleting all backups which are older than XXX days, etc 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20630) B: Delete command enhancements

2018-06-05 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20630?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502543#comment-16502543
 ] 

Ted Yu commented on HBASE-20630:


{code}
-import static 
org.apache.hadoop.hbase.backup.BackupRestoreConstants.OPTION_PATH;
{code}
Is this option no longer supported?
{code}
+  /*DEBUG*/LOG.error("DDD "+e.getMessage());
{code}
Is it debug or error ?
{code}
+addOptWithArg(OPTION_PATH_PURGE, OPTION_PATH_DESC);
{code}
The constant name, OPTION_PATH_DESC, doesn't reflect purging.
{code}
+  public void testBackupDeleteDateCommand() throws Exception {
{code}
Date usually means a specific point in time. The test specifies a number of days, 
which is a duration.


> B: Delete command enhancements
> 
>
> Key: HBASE-20630
> URL: https://issues.apache.org/jira/browse/HBASE-20630
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
> Attachments: HBASE-20630-v1.patch
>
>
> Make the command more useable. Currently, user needs to provide list of 
> backup ids to delete. It would be nice to have more convenient options, such 
> as: deleting all backups which are older than XXX days, etc 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20631) B: Merge command enhancements

2018-06-05 Thread Vladimir Rodionov (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502538#comment-16502538
 ] 

Vladimir Rodionov commented on HBASE-20631:
---

{quote}
OPTION_PATH_DESC -> OPTION_PATH_PURGE_DESC
{quote}

We have separate descriptions for PATH and PURGED

> B: Merge command enhancements 
> 
>
> Key: HBASE-20631
> URL: https://issues.apache.org/jira/browse/HBASE-20631
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
>
> Currently, merge supports only list of backup ids, which users must provide. 
> Date range merges seem more convenient for users. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20631) B: Merge command enhancements

2018-06-05 Thread Vladimir Rodionov (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502536#comment-16502536
 ] 

Vladimir Rodionov commented on HBASE-20631:
---

{quote}
If the number of deleted backups is lower than backupIds.length, what may be 
the cause ?
{quote}

Some backups were not found (maybe they have been deleted earlier).

> B: Merge command enhancements 
> 
>
> Key: HBASE-20631
> URL: https://issues.apache.org/jira/browse/HBASE-20631
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
>
> Currently, merge supports only list of backup ids, which users must provide. 
> Date range merges seem more convenient for users. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20631) B: Merge command enhancements

2018-06-05 Thread Vladimir Rodionov (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502535#comment-16502535
 ] 

Vladimir Rodionov commented on HBASE-20631:
---

That is the wrong ticket :), that is why I removed the patch. I attached 
updated patch to HBASE-20630.

> B: Merge command enhancements 
> 
>
> Key: HBASE-20631
> URL: https://issues.apache.org/jira/browse/HBASE-20631
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
>
> Currently, merge supports only list of backup ids, which users must provide. 
> Date range merges seem more convenient for users. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20630) B: Delete command enhancements

2018-06-05 Thread Vladimir Rodionov (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-20630:
--
Status: Patch Available  (was: Open)

cc: [~te...@apache.org]

> B: Delete command enhancements
> 
>
> Key: HBASE-20630
> URL: https://issues.apache.org/jira/browse/HBASE-20630
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
> Attachments: HBASE-20630-v1.patch
>
>
> Make the command more useable. Currently, user needs to provide list of 
> backup ids to delete. It would be nice to have more convenient options, such 
> as: deleting all backups which are older than XXX days, etc 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20630) B: Delete command enhancements

2018-06-05 Thread Vladimir Rodionov (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20630?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-20630:
--
Attachment: HBASE-20630-v1.patch

> B: Delete command enhancements
> 
>
> Key: HBASE-20630
> URL: https://issues.apache.org/jira/browse/HBASE-20630
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
> Attachments: HBASE-20630-v1.patch
>
>
> Make the command more useable. Currently, user needs to provide list of 
> backup ids to delete. It would be nice to have more convenient options, such 
> as: deleting all backups which are older than XXX days, etc 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20670) NPE in HMaster#isInMaintenanceMode

2018-06-05 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502468#comment-16502468
 ] 

Hudson commented on HBASE-20670:


Results for branch branch-1.4
[build #345 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/345/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/345//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/345//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/345//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> NPE in HMaster#isInMaintenanceMode
> --
>
> Key: HBASE-20670
> URL: https://issues.apache.org/jira/browse/HBASE-20670
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.3.2
>Reporter: Andrew Purtell
>Assignee: Andrew Purtell
>Priority: Minor
> Fix For: 3.0.0, 2.1.0, 1.5.0, 2.0.1, 1.4.5
>
> Attachments: HBASE-20670-branch-1.patch, HBASE-20670.patch
>
>
> {noformat}
> Problem accessing /master-status. Reason: INTERNAL_SERVER_ERROR
> Caused by: java.lang.NullPointerException
> at 
> org.apache.hadoop.hbase.master.HMaster.isInMaintenanceMode(HMaster.java:2559)
> {noformat}
> The ZK trackers, including the maintenance mode tracker, are initialized only 
> after we try to bring up the filesystem. If HDFS is in safe mode and the 
> master is waiting on that, when an access to the master status page comes in 
> we trip over this problem. There might be other issues after we fix this, but 
> NPE is always a bug, so let's address it. One option is to connect the ZK 
> based components with ZK before attempting to bring up the filesystem. Let me 
> try that first. If that doesn't work we could at least throw an IOE.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20611) UnsupportedOperationException may thrown when calling getCallQueueInfo()

2018-06-05 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502454#comment-16502454
 ] 

Ted Yu commented on HBASE-20611:


{code}
+   * Use poll() whenever possible.
{code}
Can you clarify the above - poll is not part of Iterator ?
{code}
+  if (index < size())
+return (E) BoundedPriorityBlockingQueue.this.queue.objects[index++];
{code}
Enclose return in pair of curlies.
{code}
+  private final class Itr implements Iterator {
+private int index = 0;
+private Object[] crArray = AdaptiveLifoCoDelCallQueue.this.queue.toArray();
{code}
Why is the modification count not involved in the above Iterator?
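
For reference, the fail-fast pattern being asked about is roughly the following (a sketch; it assumes the queue maintains a modCount field that writers bump, which the current patch does not have):

{code}
// Sketch only: capture the modification count when the iterator is created
// and check it on every next(), throwing ConcurrentModificationException if
// a writer changed the queue in the meantime.
private final class Itr implements Iterator<E> {
  private int index = 0;
  private final int expectedModCount = modCount;   // assumed field

  @Override
  public boolean hasNext() {
    return index < size();
  }

  @Override
  @SuppressWarnings("unchecked")
  public E next() {
    if (modCount != expectedModCount) {
      throw new ConcurrentModificationException();
    }
    if (index >= size()) {
      throw new NoSuchElementException();
    }
    return (E) BoundedPriorityBlockingQueue.this.queue.objects[index++];
  }
}
{code}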

> UnsupportedOperationException may thrown when calling getCallQueueInfo()
> 
>
> Key: HBASE-20611
> URL: https://issues.apache.org/jira/browse/HBASE-20611
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Allan Yang
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-20611.master.002.patch, 
> HBASE-20611.master.003.patch, HBASE-20611.master.004.patch, 
> HBASE-20611.master.005.patch
>
>
> HBASE-16290 added a new feature to dump queue info; the method 
> getCallQueueInfo() needs to iterate the queue to get the elements in the 
> queue. But, except for Java's LinkedBlockingQueue, the other queue 
> implementations like BoundedPriorityBlockingQueue and 
> AdaptiveLifoCoDelCallQueue don't implement the method iterator(). If those 
> queues are used, an UnsupportedOperationException will be thrown.
> This can easily be reproduced by the UT testCallQueueInfo while adding a 
> conf: conf.set("hbase.ipc.server.callqueue.type", "deadline")
> {code}
> java.lang.UnsupportedOperationException
>   at 
> org.apache.hadoop.hbase.util.BoundedPriorityBlockingQueue.iterator(BoundedPriorityBlockingQueue.java:285)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.getCallQueueCountsSummary(RpcExecutor.java:166)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.getCallQueueInfo(SimpleRpcScheduler.java:241)
>   at 
> org.apache.hadoop.hbase.ipc.TestSimpleRpcScheduler.testCallQueueInfo(TestSimpleRpcScheduler.java:164)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20611) UnsupportedOperationException may thrown when calling getCallQueueInfo()

2018-06-05 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502445#comment-16502445
 ] 

Ted Yu commented on HBASE-20611:


Can you give more meaningful names to the new tests so that we know what they 
test ?
{code}
+  public void testIteratorFailWithConcurrentModificationException2() {
{code}
Can you add a comment in the test for where the ConcurrentModificationException 
is expected to be thrown?
{code}
+assertEquals(false, match);
+assertNotEquals(testList.size(), testIteratorList.size());
{code}
It doesn't seem the above assertions would trigger 
ConcurrentModificationException. If ConcurrentModificationException is thrown 
earlier in the test, what's the meaning of adding the assertions ?

> UnsupportedOperationException may thrown when calling getCallQueueInfo()
> 
>
> Key: HBASE-20611
> URL: https://issues.apache.org/jira/browse/HBASE-20611
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Allan Yang
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-20611.master.002.patch, 
> HBASE-20611.master.003.patch, HBASE-20611.master.004.patch, 
> HBASE-20611.master.005.patch
>
>
> HBASE-16290 added a new feature to dump queue info; the method 
> getCallQueueInfo() needs to iterate the queue to get the elements in the 
> queue. But apart from Java's LinkedBlockingQueue, the other queue 
> implementations like BoundedPriorityBlockingQueue and 
> AdaptiveLifoCoDelCallQueue don't implement the method iterator(). If those 
> queues are used, an UnsupportedOperationException will be thrown.
> This can easily be reproduced by the UT testCallQueueInfo after adding the 
> conf: conf.set("hbase.ipc.server.callqueue.type", "deadline")
> {code}
> java.lang.UnsupportedOperationException
>   at 
> org.apache.hadoop.hbase.util.BoundedPriorityBlockingQueue.iterator(BoundedPriorityBlockingQueue.java:285)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.getCallQueueCountsSummary(RpcExecutor.java:166)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.getCallQueueInfo(SimpleRpcScheduler.java:241)
>   at 
> org.apache.hadoop.hbase.ipc.TestSimpleRpcScheduler.testCallQueueInfo(TestSimpleRpcScheduler.java:164)
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Work started] (HBASE-20553) Add dependency CVE checking to nightly tests

2018-06-05 Thread Sakthi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-20553 started by Sakthi.
--
> Add dependency CVE checking to nightly tests
> 
>
> Key: HBASE-20553
> URL: https://issues.apache.org/jira/browse/HBASE-20553
> Project: HBase
>  Issue Type: Umbrella
>  Components: dependencies
>Affects Versions: 3.0.0
>Reporter: Sean Busbey
>Assignee: Sakthi
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
>
> We should proactively work to flag dependencies with known CVEs so that we 
> can then update them early in our development instead of near a release.
> YETUS-441 is working to add a plugin for this; we should grab a copy early to 
> make sure it works for us.
> Rough outline:
> 1. [install yetus locally|http://yetus.apache.org/downloads/]
> 2. [install the dependency-check 
> cli|https://www.owasp.org/index.php/OWASP_Dependency_Check] (homebrew 
> instructions on right hand margin)
> 3. Get a local copy of the OWASP datafile ({{dependency-check --updateonly 
> --data /some/local/path/to/dir}})
> 4. Run {{hbase_nightly_yetus.sh}} using matching environment variables from 
> the “yetus general check” (currently [line #126 in our nightly 
> Jenkinsfile|https://github.com/apache/hbase/blob/master/dev-support/Jenkinsfile#L126])
> 5. Grab the plugin definition and suppression file from YETUS-441
> 6. Put the plugin definition either in a directory under dev-support or into 
> hbase-personality.sh directly
> 7. Re-run {{hbase_nightly_yetus.sh}} to verify that the plugin results show 
> up. (Probably this will involve adding new pointers for “where is the 
> suppression file”, “where is the OWASP datafile” and pointing them somewhere 
> locally.)
> Once all of that is in place we’ll get the changes needed into a branch that 
> we can test out. Over in YETUS-441 I’ll need to add a jenkins job that’ll 
> handle periodically updating a copy of the datafile for the OWASP dependency 
> checker. Presuming I have that in place by the time we have a nightly branch 
> to check this out, then we’ll also need to update our nightly Jenkinsfile to 
> fetch the data file from that job.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20649) Validate HFiles do not have PREFIX_TREE DataBlockEncoding

2018-06-05 Thread Peter Somogyi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502435#comment-16502435
 ] 

Peter Somogyi commented on HBASE-20649:
---

v2: change tabs to spaces in the stacktrace example

> Validate HFiles do not have PREFIX_TREE DataBlockEncoding
> -
>
> Key: HBASE-20649
> URL: https://issues.apache.org/jira/browse/HBASE-20649
> Project: HBase
>  Issue Type: New Feature
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Minor
> Attachments: HBASE-20649.master.001.patch, 
> HBASE-20649.master.002.patch
>
>
> HBASE-20592 adds a tool to check that column families on the cluster do not 
> have PREFIX_TREE encoding.
> Since it is possible that the DataBlockEncoding was already changed but the 
> HFiles are not rewritten yet, we would need a tool that can verify the 
> content of HFiles in the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20649) Validate HFiles do not have PREFIX_TREE DataBlockEncoding

2018-06-05 Thread Peter Somogyi (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20649?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Peter Somogyi updated HBASE-20649:
--
Attachment: HBASE-20649.master.002.patch

> Validate HFiles do not have PREFIX_TREE DataBlockEncoding
> -
>
> Key: HBASE-20649
> URL: https://issues.apache.org/jira/browse/HBASE-20649
> Project: HBase
>  Issue Type: New Feature
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Minor
> Attachments: HBASE-20649.master.001.patch, 
> HBASE-20649.master.002.patch
>
>
> HBASE-20592 adds a tool to check that column families on the cluster do not 
> have PREFIX_TREE encoding.
> Since it is possible that the DataBlockEncoding was already changed but the 
> HFiles are not rewritten yet, we would need a tool that can verify the 
> content of HFiles in the cluster.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20631) B: Merge command enhancements

2018-06-05 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20631?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502432#comment-16502432
 ] 

Ted Yu commented on HBASE-20631:


You don't need to delete patch v1.
It serves as the basis for future patches.

> B: Merge command enhancements 
> 
>
> Key: HBASE-20631
> URL: https://issues.apache.org/jira/browse/HBASE-20631
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
>
> Currently, merge supports only a list of backup ids, which users must provide. 
> Date-range merges seem more convenient for users. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20631) B: Merge command enhancements

2018-06-05 Thread Vladimir Rodionov (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-20631:
--
Attachment: (was: HBASE-20630-v1.patch)

> B: Merge command enhancements 
> 
>
> Key: HBASE-20631
> URL: https://issues.apache.org/jira/browse/HBASE-20631
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
>
> Currently, merge supports only a list of backup ids, which users must provide. 
> Date-range merges seem more convenient for users. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20631) B: Merge command enhancements

2018-06-05 Thread Vladimir Rodionov (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vladimir Rodionov updated HBASE-20631:
--
Status: Open  (was: Patch Available)

> B: Merge command enhancements 
> 
>
> Key: HBASE-20631
> URL: https://issues.apache.org/jira/browse/HBASE-20631
> Project: HBase
>  Issue Type: New Feature
>Reporter: Vladimir Rodionov
>Assignee: Vladimir Rodionov
>Priority: Major
>
> Currently, merge supports only a list of backup ids, which users must provide. 
> Date-range merges seem more convenient for users. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20657) Retrying RPC call for ModifyTableProcedure may get stuck

2018-06-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20657?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502414#comment-16502414
 ] 

Hadoop QA commented on HBASE-20657:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
39s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
33s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
32s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
16s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
25s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
14s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 48s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}181m 
37s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}218m 25s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20657 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926591/HBASE-20657-3-branch-2.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 2cf73bc82e48 4.4.0-89-generic #112-Ubuntu SMP Mon Jul 31 
19:38:41 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 7d3750bd9f |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13095/testReport/ |
| Max. process+thread count | 4547 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13095/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> Retrying RPC call for 

[jira] [Commented] (HBASE-20679) Add the ability to compile JSP dynamically in Jetty

2018-06-05 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502406#comment-16502406
 ] 

Sean Busbey commented on HBASE-20679:
-

I mean in a world where we continue to not ship a hot-fix ability.

> Add the ability to compile JSP dynamically in Jetty
> ---
>
> Key: HBASE-20679
> URL: https://issues.apache.org/jira/browse/HBASE-20679
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20679.002.patch, HBASE-20679.patch
>
>
> As discussed in HBASE-20617, adding the ability to dynamically compile JSP 
> enables us to do some hot fixes. 
>  For example, several days ago, in our testing HBase-2.0 cluster, 
> procedure WALs were corrupted due to some unknown reason. After restarting 
> the cluster, some procedures (AssignProcedure for example) were corrupted 
> and couldn't be replayed, so some regions were stuck in RIT forever. 
> We couldn't use HBCK since it doesn't support AssignmentV2 yet. As a matter 
> of fact, the namespace region was not online, so the master was not inited; 
> we couldn't even use shell commands like assign/move. But we wrote a JSP and 
> fixed this issue easily. The JSP file is like this:
> {code:java}
> <%
>   String action = request.getParameter("action");
>   HMaster master = (HMaster) getServletContext().getAttribute(HMaster.MASTER);
>   List<RegionInfo> offlineRegionsToAssign = new ArrayList<>();
>   List<RegionStates.RegionStateNode> regionRITs = master.getAssignmentManager()
>       .getRegionStates().getRegionsInTransition();
>   for (RegionStates.RegionStateNode regionStateNode : regionRITs) {
>     // if regionStateNode doesn't have a procedure attached, but meta state shows
>     // this region is in RIT, that means the previous procedure may be corrupted;
>     // we need to create a new AssignProcedure to assign them
>     if (!regionStateNode.isInTransition()) {
>       offlineRegionsToAssign.add(regionStateNode.getRegionInfo());
>       out.println("RIT region:" + regionStateNode);
>     }
>   }
>   // Assign offline regions. Uses round-robin.
>   if ("fix".equals(action) && offlineRegionsToAssign.size() > 0) {
>     master.getMasterProcedureExecutor().submitProcedures(master.getAssignmentManager()
>         .createRoundRobinAssignProcedures(offlineRegionsToAssign));
>   } else {
>     out.println("use ?action=fix to fix RIT regions");
>   }
> %>
> {code}
> Above it is only one example we can do if we have the ability to compile jsp 
> dynamically. We think it is very useful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20679) Add the ability to compile JSP dynamically in Jetty

2018-06-05 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502403#comment-16502403
 ] 

Sean Busbey commented on HBASE-20679:
-

[~esteban] could you describe the workflow for handling the example situation 
[~allan163]'s team ran into?

> Add the ability to compile JSP dynamically in Jetty
> ---
>
> Key: HBASE-20679
> URL: https://issues.apache.org/jira/browse/HBASE-20679
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20679.002.patch, HBASE-20679.patch
>
>
> As discussed in HBASE-20617, adding the ability to dynamically compile JSP 
> enables us to do some hot fixes. 
>  For example, several days ago, in our testing HBase-2.0 cluster, 
> procedure WALs were corrupted due to some unknown reason. After restarting 
> the cluster, some procedures (AssignProcedure for example) were corrupted 
> and couldn't be replayed, so some regions were stuck in RIT forever. 
> We couldn't use HBCK since it doesn't support AssignmentV2 yet. As a matter 
> of fact, the namespace region was not online, so the master was not inited; 
> we couldn't even use shell commands like assign/move. But we wrote a JSP and 
> fixed this issue easily. The JSP file is like this:
> {code:java}
> <%
>   String action = request.getParameter("action");
>   HMaster master = (HMaster) getServletContext().getAttribute(HMaster.MASTER);
>   List<RegionInfo> offlineRegionsToAssign = new ArrayList<>();
>   List<RegionStates.RegionStateNode> regionRITs = master.getAssignmentManager()
>       .getRegionStates().getRegionsInTransition();
>   for (RegionStates.RegionStateNode regionStateNode : regionRITs) {
>     // if regionStateNode doesn't have a procedure attached, but meta state shows
>     // this region is in RIT, that means the previous procedure may be corrupted;
>     // we need to create a new AssignProcedure to assign them
>     if (!regionStateNode.isInTransition()) {
>       offlineRegionsToAssign.add(regionStateNode.getRegionInfo());
>       out.println("RIT region:" + regionStateNode);
>     }
>   }
>   // Assign offline regions. Uses round-robin.
>   if ("fix".equals(action) && offlineRegionsToAssign.size() > 0) {
>     master.getMasterProcedureExecutor().submitProcedures(master.getAssignmentManager()
>         .createRoundRobinAssignProcedures(offlineRegionsToAssign));
>   } else {
>     out.println("use ?action=fix to fix RIT regions");
>   }
> %>
> {code}
> Above it is only one example we can do if we have the ability to compile jsp 
> dynamically. We think it is very useful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20649) Validate HFiles do not have PREFIX_TREE DataBlockEncoding

2018-06-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20649?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502393#comment-16502393
 ] 

Hadoop QA commented on HBASE-20649:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
21s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
12s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
15s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
50s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue}  4m  
8s{color} | {color:blue} branch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. {color} 
|
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 7s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
42s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
10s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
 9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 23 line(s) with tabs. {color} |
| {color:blue}0{color} | {color:blue} refguide {color} | {color:blue}  3m 
59s{color} | {color:blue} patch has no errors when building the reference 
guide. See footer for rendered docs, which you should manually inspect. {color} 
|
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 8s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
8m 52s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: . {color} 
|
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}170m 
26s{color} | {color:green} root in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}231m  9s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce 

[jira] [Commented] (HBASE-20611) UnsupportedOperationException may thrown when calling getCallQueueInfo()

2018-06-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20611?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502386#comment-16502386
 ] 

Hadoop QA commented on HBASE-20611:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
18s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
49s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
16s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
56s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
42s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
10s{color} | {color:red} hbase-server: The patch generated 1 new + 24 unchanged 
- 0 fixed = 25 total (was 24) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
44s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
9m 49s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}136m  
9s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}177m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20611 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926598/HBASE-20611.master.005.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 78f3d4984cc3 3.13.0-137-generic #186-Ubuntu SMP Mon Dec 4 
19:09:19 UTC 2017 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 7d3750bd9f |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13097/artifact/patchprocess/diff-checkstyle-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13097/testReport/ |
| Max. process+thread count | 4368 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 

[jira] [Commented] (HBASE-20681) IntegrationTestDriver fails after HADOOP-15406 due to missing hamcrest-core

2018-06-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502375#comment-16502375
 ] 

Hadoop QA commented on HBASE-20681:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
27s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 9s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 3s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
14s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
53s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
59s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
10m 36s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m  
8s{color} | {color:green} hbase-resource-bundle in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hbase-assembly in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hbase-shaded in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 36m 29s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20681 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926617/HBASE-20681.003.patch 
|
| Optional Tests |  asflicense  javac  javadoc  unit  shadedjars  hadoopcheck  
xml  compile  |
| uname | Linux bf4328e5eac0 3.13.0-143-generic #192-Ubuntu SMP Tue Feb 27 
10:45:36 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 7d3750bd9f |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_171 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13100/testReport/ |
| Max. process+thread count | 87 (vs. ulimit of 1) |
| modules | C: hbase-resource-bundle hbase-assembly hbase-shaded U: . |
| Console output | 

[jira] [Commented] (HBASE-20679) Add the ability to compile JSP dynamically in Jetty

2018-06-05 Thread Esteban Gutierrez (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502360#comment-16502360
 ] 

Esteban Gutierrez commented on HBASE-20679:
---

I'm -1 on having a way to inject JSPs on the server side, not only because it 
opens the possibility of security issues, but also because, given the maturity 
of HBase, it gives the impression that we will always have a class of issues 
that cannot be fixed by our regular means: e.g. hbck, reassigning a region 
manually, or deploying a fix and performing a rolling restart.

> Add the ability to compile JSP dynamically in Jetty
> ---
>
> Key: HBASE-20679
> URL: https://issues.apache.org/jira/browse/HBASE-20679
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20679.002.patch, HBASE-20679.patch
>
>
> As discussed in HBASE-20617, adding the ability to dynamically compile JSP 
> enables us to do some hot fixes. 
>  For example, several days ago, in our testing HBase-2.0 cluster, 
> procedure WALs were corrupted due to some unknown reason. After restarting 
> the cluster, some procedures (AssignProcedure for example) were corrupted 
> and couldn't be replayed, so some regions were stuck in RIT forever. 
> We couldn't use HBCK since it doesn't support AssignmentV2 yet. As a matter 
> of fact, the namespace region was not online, so the master was not inited; 
> we couldn't even use shell commands like assign/move. But we wrote a JSP and 
> fixed this issue easily. The JSP file is like this:
> {code:java}
> <%
>   String action = request.getParameter("action");
>   HMaster master = (HMaster) getServletContext().getAttribute(HMaster.MASTER);
>   List<RegionInfo> offlineRegionsToAssign = new ArrayList<>();
>   List<RegionStates.RegionStateNode> regionRITs = master.getAssignmentManager()
>       .getRegionStates().getRegionsInTransition();
>   for (RegionStates.RegionStateNode regionStateNode : regionRITs) {
>     // if regionStateNode doesn't have a procedure attached, but meta state shows
>     // this region is in RIT, that means the previous procedure may be corrupted;
>     // we need to create a new AssignProcedure to assign them
>     if (!regionStateNode.isInTransition()) {
>       offlineRegionsToAssign.add(regionStateNode.getRegionInfo());
>       out.println("RIT region:" + regionStateNode);
>     }
>   }
>   // Assign offline regions. Uses round-robin.
>   if ("fix".equals(action) && offlineRegionsToAssign.size() > 0) {
>     master.getMasterProcedureExecutor().submitProcedures(master.getAssignmentManager()
>         .createRoundRobinAssignProcedures(offlineRegionsToAssign));
>   } else {
>     out.println("use ?action=fix to fix RIT regions");
>   }
> %>
> {code}
> Above it is only one example we can do if we have the ability to compile jsp 
> dynamically. We think it is very useful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20681) IntegrationTestDriver fails after HADOOP-15406 due to missing hamcrest-core

2018-06-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502326#comment-16502326
 ] 

Hadoop QA commented on HBASE-20681:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
3s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
55s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
14s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
4s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 4s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
9m 47s{color} | {color:green} Patch does not cause any errors with Hadoop 2.7.4 
or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  3m 
34s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}179m 
11s{color} | {color:green} root in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
32s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}232m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20681 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926578/HBASE-20681.002.patch 
|
| Optional Tests |  asflicense  javac  javadoc  unit  shadedjars  hadoopcheck  
xml  compile  |
| uname | Linux d8ba59c17ad6 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 7d3750bd9f |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_171 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13093/testReport/ |
| Max. process+thread count | 4308 (vs. ulimit of 1) |
| modules | C: hbase-resource-bundle hbase-assembly hbase-shaded . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13093/console |
| Powered by | Apache Yetus 0.7.0   http://yetus.apache.org |


This message was automatically generated.



> IntegrationTestDriver fails after HADOOP-15406 due to missing hamcrest-core
> ---
>
> Key: HBASE-20681

[jira] [Commented] (HBASE-20577) Make Log Level page design consistent with the design of other pages in UI

2018-06-05 Thread Nihal Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20577?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502306#comment-16502306
 ] 

Nihal Jain commented on HBASE-20577:


Hi [~yuzhih...@gmail.com], while trying out the REST server, I discovered that 
REST does not have 'header.jsp', so this throws a FileNotFoundException in the 
loglevel page. My bad, I was not aware of the other projects having a web UI 
when I wrote this fix. Sorry for the trouble.
 Anyway, I think it would be wrong to assume that all projects' web UIs will 
have 'header.jsp'. To mitigate, we can do the following (see the sketch below):
 * 1) Fall back to the original log level design if 'header.jsp' is not present
 * 2) Next, refactor the jsp files of the other projects (like REST) to have 
'header.jsp' and 'footer.jsp'

How should we handle it? Should I create a new JIRA for points 1 and 2 together? 
Or can we just re-open this issue and add the point 1 fix as an addendum, and 
refactor the related code in another JIRA?
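
For point 1, a rough sketch of the fallback I have in mind (illustrative only; the method name, resource path and markup below are assumptions, not from the patch): check whether the web app actually ships 'header.jsp' before including it, and otherwise render the old standalone Log Level layout.
{code:java}
// Hypothetical sketch: only delegate to header.jsp when the current web app provides it.
// (javax.servlet.* and java.io.PrintWriter imports assumed)
private void renderHeader(ServletContext ctx, HttpServletRequest req,
    HttpServletResponse resp, PrintWriter out) throws ServletException, IOException {
  if (ctx.getResource("/header.jsp") != null) {
    // the other HBase web UIs ship header.jsp, so reuse the shared look and feel
    ctx.getRequestDispatcher("/header.jsp").include(req, resp);
  } else {
    // REST (and possibly other UIs) have no header.jsp: fall back to the old markup
    out.println("<html><head><title>Log Level</title></head><body>");
  }
}
{code}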

> Make Log Level page design consistent with the design of other pages in UI
> --
>
> Key: HBASE-20577
> URL: https://issues.apache.org/jira/browse/HBASE-20577
> Project: HBase
>  Issue Type: Improvement
>  Components: UI, Usability
>Reporter: Nihal Jain
>Assignee: Nihal Jain
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20577.master.001.patch, 
> HBASE-20577.master.002.patch, after_patch_LogLevel_CLI.png, 
> after_patch_get_log_level.png, after_patch_require_field_validation.png, 
> after_patch_set_log_level_bad.png, after_patch_set_log_level_success.png, 
> before_patch_no_validation_required_field.png
>
>
> The Log Level page in web UI seems out of the place. I think we should make 
> it look consistent with design of other pages in HBase web UI.
> Also, validation of required fields should be done, otherwise user should not 
> be allowed to click submit button.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20687) Implement security policy save and restore tool

2018-06-05 Thread Alex Araujo (JIRA)
Alex Araujo created HBASE-20687:
---

 Summary: Implement security policy save and restore tool
 Key: HBASE-20687
 URL: https://issues.apache.org/jira/browse/HBASE-20687
 Project: HBase
  Issue Type: Improvement
Reporter: Alex Araujo
Assignee: Alex Araujo


Extract ACL (accesscontroller) and label (visibilitycontroller) metadata, 
persist it, allow for restore by namespace or by table (or globally, of 
course). That would be a big level up for operability of the security features.

Kudos to [~apurtell] for this idea.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20681) IntegrationTestDriver fails after HADOOP-15406 due to missing hamcrest-core

2018-06-05 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502282#comment-16502282
 ] 

Josh Elser commented on HBASE-20681:


.003
 * Restores default scope of {{test}} and overrides to {{compile}} in 
hbase-assembly *only*
 * Removes the addition of the case-differing new-bsd license, and makes all 
comparisons case-insensitive

As a result of the above, hamcrest-core does not appear in the shaded jars now. 
I don't know if that's "correct" or not (given the previous comments about our 
integration tests and our expectations).

> IntegrationTestDriver fails after HADOOP-15406 due to missing hamcrest-core
> ---
>
> Key: HBASE-20681
> URL: https://issues.apache.org/jira/browse/HBASE-20681
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Reporter: Romil Choksi
>Assignee: Josh Elser
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.1
>
> Attachments: HBASE-20681.001.patch, HBASE-20681.002.patch, 
> HBASE-20681.003.patch
>
>
> HADOOP-15406 marked mockito and junit as test-only dependencies which, I 
> believe, has stopped them from being included in a stock Hadoop classpath. 
> Prior, you'd get hamcrest at {{share/hadoop/common/lib/hamcrest-core-1.3.jar}}
> However, we depend on it being there for our junit in hbase-it:
> {noformat}
> [INFO] --- maven-dependency-plugin:3.0.1:tree (default-cli) @ hbase-it ---
> [INFO] org.apache.hbase:hbase-it:jar:2.0.1-SNAPSHOT
> [INFO] +- junit:junit:jar:4.12:test
> [INFO] |  \- org.hamcrest:hamcrest-core:jar:1.3:test
> {noformat}
> We need to make sure we include it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20340) potential null pointer exception in org.apache.hadoop.hbase.rest.TableScanResource given empty resource

2018-06-05 Thread Nihal Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502283#comment-16502283
 ] 

Nihal Jain commented on HBASE-20340:


In  [^HBASE-20340-master.001.patch] we're doing {{rs.listCells()}} twice. Can 
we return null after performing {{rs.listCells()}} in the existing code?
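
In other words, something roughly like the following (illustrative only, not the actual patch) would call {{listCells()}} once and handle the null case:
{code:java}
// Hypothetical sketch: call listCells() once, reuse the result, and bail out on null.
List<Cell> kvs = rs.listCells();
if (kvs == null) {   // Result.listCells() returns null when the Result is empty
  return null;       // or however the surrounding code signals "no cells"
}
for (Cell kv : kvs) {
  // ... existing per-cell handling ...
}
{code}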


> potential null pointer exception in 
> org.apache.hadoop.hbase.rest.TableScanResource given empty resource
> ---
>
> Key: HBASE-20340
> URL: https://issues.apache.org/jira/browse/HBASE-20340
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 2.0.0-beta-2
>Reporter: andy zhou
>Assignee: Xu Cang
>Priority: Minor
> Attachments: HBASE-20340-master.001.patch
>
>
> Our code analyzer has detected a potential null pointer issue in 
> org.apache.hadoop.hbase.rest.TableScanResource as follows:
> {code:java}
> // org.apache.hadoop.hbase.client.Result.java Line #244 
> public List<Cell> listCells() {
>   return isEmpty() ? null : Arrays.asList(rawCells());
> }
> {code}
> {code:java}
> // org.apache.hadoop.hbase.rest.TableScanResource Line #95
> List<Cell> kvs = rs.listCells();
> for (Cell kv : kvs) { ... }
> {code}
>  
> Given an empty rs, kvs will be null instead of an empty list, leading to a 
> potential null pointer exception.
>  
> (org.apache.hadoop.hbase.client.Result.java Line #244 and 
> org.apache.hadoop.hbase.rest.TableScanResource Line #95
> Linkage to the code is here:
> https://github.com/apache/hbase/blob/9e9b347d667e1fc6165c9f8ae5ae7052147e8895/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Result.java#L244
> SourceBrella Inc.
> https://github.com/apache/hbase/blob/9e9b347d667e1fc6165c9f8ae5ae7052147e8895/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/TableScanResource.java#L95



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20681) IntegrationTestDriver fails after HADOOP-15406 due to missing hamcrest-core

2018-06-05 Thread Josh Elser (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20681?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Josh Elser updated HBASE-20681:
---
Attachment: HBASE-20681.003.patch

> IntegrationTestDriver fails after HADOOP-15406 due to missing hamcrest-core
> ---
>
> Key: HBASE-20681
> URL: https://issues.apache.org/jira/browse/HBASE-20681
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Reporter: Romil Choksi
>Assignee: Josh Elser
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.1
>
> Attachments: HBASE-20681.001.patch, HBASE-20681.002.patch, 
> HBASE-20681.003.patch
>
>
> HADOOP-15406 marked mockito and junit as test-only dependencies which, I 
> believe, has stopped them from being included in a stock Hadoop classpath. 
> Prior, you'd get hamcrest at {{share/hadoop/common/lib/hamcrest-core-1.3.jar}}
> However, we depend on it being there for our junit in hbase-it:
> {noformat}
> [INFO] --- maven-dependency-plugin:3.0.1:tree (default-cli) @ hbase-it ---
> [INFO] org.apache.hbase:hbase-it:jar:2.0.1-SNAPSHOT
> [INFO] +- junit:junit:jar:4.12:test
> [INFO] |  \- org.hamcrest:hamcrest-core:jar:1.3:test
> {noformat}
> We need to make sure we include it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20357) AccessControlClient API Enhancement

2018-06-05 Thread Nihal Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502258#comment-16502258
 ] 

Nihal Jain edited comment on HBASE-20357 at 6/5/18 6:34 PM:


The docstring of AccessChecker.requirePermission(User, String, String, Action) 
can be updated. We can add params user, request. Currently I see following 
method description in my IDE. Since these two params are missing, these are 
displayed in an unordered fashion.
{noformat}
Authorizes that the current user has global privileges for the given action.

Parameters:
perm The action being requested
filterUser User name to be filtered as requested
user
request{noformat}

Just noticed: it is the case for each method in the aforementioned class. Maybe 
we can fix it with this patch.

Also, proper doc can be added to differentiate user and filterUser.


was (Author: nihaljain.cs):
The docstring of AccessChecker.requirePermission(User, String, String, Action) 
can be updated. We can add params user, request. Currently I see following 
method description in my IDE. Since these two params are missing, these are 
displayed in an unordered fashion.
{noformat}
Authorizes that the current user has global privileges for the given action.

Parameters:
perm The action being requested
filterUser User name to be filtered as requested
user
request{noformat}

> AccessControlClient API Enhancement
> ---
>
> Key: HBASE-20357
> URL: https://issues.apache.org/jira/browse/HBASE-20357
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Major
> Attachments: HBASE-20357.master.001.patch
>
>
> *Background:*
>  Currently HBase ACLs can be retrieved based on the namespace or table name 
> only. There is no direct API available to retrieve the permissions based on 
> the namespace, table name, column family and column qualifier for specific 
> user.
> Client has to write application logic in multiple steps to retrieve ACLs 
> based on table name, column name and column qualifier for specific user.
>  HBase should enhance the AccessControlClient APIs to simplify this.
> *AccessControlClient API should be extended with following APIs,*    
>  # To retrieve permissions based on the namespace, table name, column family 
> and column qualifier for specific user.
>   Permissions can be retrieved based on the following inputs,
>        - Namespace/Table (already available)
>        - Namespace/Table + UserName
>        - Table + CF
>        - Table + CF + UserName
>        - Table + CF + CQ
>        - Table + CF + CQ + UserName
>           Scope of retrieving permission will be as follows,
>                  - Same as existing
>        2. To validate whether a user is allowed to perform specified 
> operations on a particular table, will be useful to check user privilege 
> instead of getting ACD during client                                    
> operation.
>              User validation can be performed based on following inputs, 
>                   - Table + CF + CQ + UserName + Actions
>             Scope of validating user privilege,
>                     User can perform self check without any special privilege 
> but ADMIN privilege will be required to perform check for other users.
>                     For example, suppose there are two users "userA" & 
> "userB" then there can be below scenarios,
>                         - when userA want to check whether userA have 
> privilege to perform mentioned actions
>                                 > userA don't need ADMIN privilege, as it's a 
> self query.
>                        - when userA want to check whether userB have 
> privilege to perform mentioned actions,
>                                 > userA must have ADMIN or superuser 
> privilege, as it's trying to query for other user.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20679) Add the ability to compile JSP dynamically in Jetty

2018-06-05 Thread Umesh Agashe (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502259#comment-16502259
 ] 

Umesh Agashe commented on HBASE-20679:
--

Thanks for the patch [~allan163]! @stack, we need to discuss moving away from 
the running-master requirement for hbck2.

> Add the ability to compile JSP dynamically in Jetty
> ---
>
> Key: HBASE-20679
> URL: https://issues.apache.org/jira/browse/HBASE-20679
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20679.002.patch, HBASE-20679.patch
>
>
> As discussed in HBASE-20617, adding the ability to dynamically compile JSP 
> enables us to do some hot fixes. 
>  For example, several days ago, in our testing HBase-2.0 cluster, 
> procedure WALs were corrupted due to some unknown reason. After restarting 
> the cluster, some procedures (AssignProcedure for example) were corrupted 
> and couldn't be replayed, so some regions were stuck in RIT forever. 
> We couldn't use HBCK since it doesn't support AssignmentV2 yet. As a matter 
> of fact, the namespace region was not online, so the master was not inited; 
> we couldn't even use shell commands like assign/move. But we wrote a JSP and 
> fixed this issue easily. The JSP file is like this:
> {code:java}
> <%
>   String action = request.getParameter("action");
>   HMaster master = (HMaster) getServletContext().getAttribute(HMaster.MASTER);
>   List<RegionInfo> offlineRegionsToAssign = new ArrayList<>();
>   List<RegionStates.RegionStateNode> regionRITs = master.getAssignmentManager()
>       .getRegionStates().getRegionsInTransition();
>   for (RegionStates.RegionStateNode regionStateNode : regionRITs) {
>     // if regionStateNode doesn't have a procedure attached, but meta state shows
>     // this region is in RIT, that means the previous procedure may be corrupted;
>     // we need to create a new AssignProcedure to assign them
>     if (!regionStateNode.isInTransition()) {
>       offlineRegionsToAssign.add(regionStateNode.getRegionInfo());
>       out.println("RIT region:" + regionStateNode);
>     }
>   }
>   // Assign offline regions. Uses round-robin.
>   if ("fix".equals(action) && offlineRegionsToAssign.size() > 0) {
>     master.getMasterProcedureExecutor().submitProcedures(master.getAssignmentManager()
>         .createRoundRobinAssignProcedures(offlineRegionsToAssign));
>   } else {
>     out.println("use ?action=fix to fix RIT regions");
>   }
> %>
> {code}
> Above it is only one example we can do if we have the ability to compile jsp 
> dynamically. We think it is very useful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Comment Edited] (HBASE-20357) AccessControlClient API Enhancement

2018-06-05 Thread Nihal Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502258#comment-16502258
 ] 

Nihal Jain edited comment on HBASE-20357 at 6/5/18 6:30 PM:


The docstring of AccessChecker.requirePermission(User, String, String, Action) 
can be updated. We can add the params user and request. Currently I see the 
following method description in my IDE. Since these two params are missing, 
they are displayed in an unordered fashion.
{noformat}
Authorizes that the current user has global privileges for the given action.

Parameters:
perm The action being requested
filterUser User name to be filtered as requested
user
request{noformat}
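
A rough sketch of what the completed Javadoc could look like (the parameter 
order and the wording of the new @param descriptions are assumptions; only the 
missing tags are the point):
{code:java}
import org.apache.hadoop.hbase.security.User;
import org.apache.hadoop.hbase.security.access.Permission.Action;

// Illustrative only; not the committed change.
interface AccessCheckerJavadocSketch {
  /**
   * Authorizes that the current user has global privileges for the given action.
   *
   * @param user Active user whose privileges are being checked
   * @param request Request type being authorized
   * @param filterUser User name to be filtered as requested
   * @param perm The action being requested
   */
  void requirePermission(User user, String request, String filterUser, Action perm);
}
{code}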


was (Author: nihaljain.cs):
The docstring of AccessChecker.requirePermission(User, String, String, Action) 
can be updated. We can add the params user and request. Currently I see the 
following method description in my IDE. Since these two params are missing, 
they are displayed in an ordered fashion.
{noformat}
Authorizes that the current user has global privileges for the given action.

Parameters:
perm The action being requested
filterUser User name to be filtered as requested
user
request{noformat}

> AccessControlClient API Enhancement
> ---
>
> Key: HBASE-20357
> URL: https://issues.apache.org/jira/browse/HBASE-20357
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Major
> Attachments: HBASE-20357.master.001.patch
>
>
> *Background:*
>  Currently HBase ACLs can be retrieved based on the namespace or table name 
> only. There is no direct API available to retrieve the permissions based on 
> the namespace, table name, column family and column qualifier for a specific 
> user.
> A client has to write application logic in multiple steps to retrieve ACLs 
> based on table name, column name and column qualifier for a specific user.
>  HBase should enhance the AccessControlClient APIs to simplify this.
> *AccessControlClient API should be extended with the following APIs:*
>  # To retrieve permissions based on the namespace, table name, column family 
> and column qualifier for a specific user.
>   Permissions can be retrieved based on the following inputs:
>        - Namespace/Table (already available)
>        - Namespace/Table + UserName
>        - Table + CF
>        - Table + CF + UserName
>        - Table + CF + CQ
>        - Table + CF + CQ + UserName
>           Scope of retrieving permission will be as follows:
>                  - Same as existing
>        2. To validate whether a user is allowed to perform the specified 
> operations on a particular table; this will be useful to check user privilege 
> instead of getting ACD during client operation.
>              User validation can be performed based on the following inputs:
>                   - Table + CF + CQ + UserName + Actions
>             Scope of validating user privilege:
>                     A user can perform a self check without any special 
> privilege, but ADMIN privilege will be required to perform the check for 
> other users.
>                     For example, suppose there are two users, "userA" and 
> "userB"; then the following scenarios are possible:
>                         - when userA wants to check whether userA has 
> privilege to perform the mentioned actions
>                                 > userA does not need ADMIN privilege, as it 
> is a self query.
>                        - when userA wants to check whether userB has 
> privilege to perform the mentioned actions,
>                                 > userA must have ADMIN or superuser 
> privilege, as it is querying for another user.
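
An illustrative sketch of what the extended AccessControlClient calls could 
look like (method names, parameter lists and the use of {{Connection}} are 
assumptions here, not the final API):
{code:java}
import java.util.List;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.security.access.Permission;
import org.apache.hadoop.hbase.security.access.UserPermission;

// Hypothetical shape of the two proposed additions; bodies omitted.
final class AccessControlClientSketch {

  // 1. Retrieve permissions narrowed down to table, column family, column
  //    qualifier and user name.
  static List<UserPermission> getUserPermissions(Connection conn, String tableRegex,
      byte[] columnFamily, byte[] columnQualifier, String userName) throws Throwable {
    throw new UnsupportedOperationException("sketch only");
  }

  // 2. Check whether the named user may perform the given actions on the cell
  //    scope. A self check needs no extra privilege; checking another user
  //    requires ADMIN or superuser privilege.
  static boolean hasPermission(Connection conn, String tableName, byte[] columnFamily,
      byte[] columnQualifier, String userName, Permission.Action... actions) throws Throwable {
    throw new UnsupportedOperationException("sketch only");
  }
}
{code}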



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20357) AccessControlClient API Enhancement

2018-06-05 Thread Nihal Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502258#comment-16502258
 ] 

Nihal Jain commented on HBASE-20357:


The docstring of AccessChecker.requirePermission(User, String, String, Action) 
can be updated. We can add the params user and request. Currently I see the 
following method description in my IDE. Since these two params are missing, 
they are displayed in an ordered fashion.
{noformat}
Authorizes that the current user has global privileges for the given action.

Parameters:
perm The action being requested
filterUser User name to be filtered as requested
user
request{noformat}

> AccessControlClient API Enhancement
> ---
>
> Key: HBASE-20357
> URL: https://issues.apache.org/jira/browse/HBASE-20357
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Major
> Attachments: HBASE-20357.master.001.patch
>
>
> *Background:*
>  Currently HBase ACLs can be retrieved based on the namespace or table name 
> only. There is no direct API available to retrieve the permissions based on 
> the namespace, table name, column family and column qualifier for a specific 
> user.
> A client has to write application logic in multiple steps to retrieve ACLs 
> based on table name, column name and column qualifier for a specific user.
>  HBase should enhance the AccessControlClient APIs to simplify this.
> *AccessControlClient API should be extended with the following APIs:*
>  # To retrieve permissions based on the namespace, table name, column family 
> and column qualifier for a specific user.
>   Permissions can be retrieved based on the following inputs:
>        - Namespace/Table (already available)
>        - Namespace/Table + UserName
>        - Table + CF
>        - Table + CF + UserName
>        - Table + CF + CQ
>        - Table + CF + CQ + UserName
>           Scope of retrieving permission will be as follows:
>                  - Same as existing
>        2. To validate whether a user is allowed to perform the specified 
> operations on a particular table; this will be useful to check user privilege 
> instead of getting ACD during client operation.
>              User validation can be performed based on the following inputs:
>                   - Table + CF + CQ + UserName + Actions
>             Scope of validating user privilege:
>                     A user can perform a self check without any special 
> privilege, but ADMIN privilege will be required to perform the check for 
> other users.
>                     For example, suppose there are two users, "userA" and 
> "userB"; then the following scenarios are possible:
>                         - when userA wants to check whether userA has 
> privilege to perform the mentioned actions
>                                 > userA does not need ADMIN privilege, as it 
> is a self query.
>                        - when userA wants to check whether userB has 
> privilege to perform the mentioned actions,
>                                 > userA must have ADMIN or superuser 
> privilege, as it is querying for another user.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20357) AccessControlClient API Enhancement

2018-06-05 Thread Nihal Jain (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502243#comment-16502243
 ] 

Nihal Jain commented on HBASE-20357:


Changes in {{RSGroupAdminEndpoint}} are fine, as we need not filter requests 
based on users. Also, {{TestRSGroupsWithACL}} would not be affected, as we call 
{{rsGroupAdminEndpoint.checkPermission()}} from the tests.

> AccessControlClient API Enhancement
> ---
>
> Key: HBASE-20357
> URL: https://issues.apache.org/jira/browse/HBASE-20357
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Major
> Attachments: HBASE-20357.master.001.patch
>
>
> *Background:*
>  Currently HBase ACLs can be retrieved based on the namespace or table name 
> only. There is no direct API available to retrieve the permissions based on 
> the namespace, table name, column family and column qualifier for a specific 
> user.
> A client has to write application logic in multiple steps to retrieve ACLs 
> based on table name, column name and column qualifier for a specific user.
>  HBase should enhance the AccessControlClient APIs to simplify this.
> *AccessControlClient API should be extended with the following APIs:*
>  # To retrieve permissions based on the namespace, table name, column family 
> and column qualifier for a specific user.
>   Permissions can be retrieved based on the following inputs:
>        - Namespace/Table (already available)
>        - Namespace/Table + UserName
>        - Table + CF
>        - Table + CF + UserName
>        - Table + CF + CQ
>        - Table + CF + CQ + UserName
>           Scope of retrieving permission will be as follows:
>                  - Same as existing
>        2. To validate whether a user is allowed to perform the specified 
> operations on a particular table; this will be useful to check user privilege 
> instead of getting ACD during client operation.
>              User validation can be performed based on the following inputs:
>                   - Table + CF + CQ + UserName + Actions
>             Scope of validating user privilege:
>                     A user can perform a self check without any special 
> privilege, but ADMIN privilege will be required to perform the check for 
> other users.
>                     For example, suppose there are two users, "userA" and 
> "userB"; then the following scenarios are possible:
>                         - when userA wants to check whether userA has 
> privilege to perform the mentioned actions
>                                 > userA does not need ADMIN privilege, as it 
> is a self query.
>                        - when userA wants to check whether userB has 
> privilege to perform the mentioned actions,
>                                 > userA must have ADMIN or superuser 
> privilege, as it is querying for another user.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20340) potential null pointer exception in org.apache.hadoop.hbase.rest.TableScanResource given empty resource

2018-06-05 Thread Ted Yu (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-20340:
---
Priority: Minor  (was: Major)

> potential null pointer exception in 
> org.apache.hadoop.hbase.rest.TableScanResource given empty resource
> ---
>
> Key: HBASE-20340
> URL: https://issues.apache.org/jira/browse/HBASE-20340
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 2.0.0-beta-2
>Reporter: andy zhou
>Assignee: Xu Cang
>Priority: Minor
> Attachments: HBASE-20340-master.001.patch
>
>
> Our code analyzer has detected a potential null pointer issue in 
> org.apache.hadoop.hbase.rest.TableScanResource as follows:
> {code:java}
> // org.apache.hadoop.hbase.client.Result.java Line #244
> public List<Cell> listCells() {
>   return isEmpty() ? null : Arrays.asList(rawCells());
> }
> {code}
> {code:java}
> // org.apache.hadoop.hbase.rest.TableScanResource Line #95
> List<Cell> kvs = rs.listCells();
> for (Cell kv : kvs) { ... }
> {code}
>  
> Given an empty rs, kvs will be null instead of an empty list, leading to a 
> potential null pointer exception
> (org.apache.hadoop.hbase.client.Result.java Line #244 and 
> org.apache.hadoop.hbase.rest.TableScanResource Line #95).
> Links to the code are here:
> https://github.com/apache/hbase/blob/9e9b347d667e1fc6165c9f8ae5ae7052147e8895/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Result.java#L244
> SourceBrella Inc.
> https://github.com/apache/hbase/blob/9e9b347d667e1fc6165c9f8ae5ae7052147e8895/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/TableScanResource.java#L95



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20340) potential null pointer exception in org.apache.hadoop.hbase.rest.TableScanResource given empty resource

2018-06-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20340?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502209#comment-16502209
 ] 

Hadoop QA commented on HBASE-20340:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
39s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
17s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
28s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
44s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
20s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
53s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
11m 55s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4 or 3.0.0. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
32s{color} | {color:green} hbase-rest in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 9s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 43m 44s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20340 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926602/HBASE-20340-master.001.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 02c98efc469b 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 7d3750bd9f |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_171 |
| findbugs | v3.1.0-RC3 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13099/testReport/ |
| Max. process+thread count | 2450 (vs. ulimit of 1) |
| modules | C: hbase-rest U: hbase-rest |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13099/console |
| Powered by | 

[jira] [Commented] (HBASE-20681) IntegrationTestDriver fails after HADOOP-15406 due to missing hamcrest-core

2018-06-05 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502195#comment-16502195
 ] 

Josh Elser commented on HBASE-20681:


{quote}Ugh. this scope change smells bad. Can we find a different way to get 
things included in the specific context where we need it?
{quote}
Setting the scope in dependencyManagement drives me bonkers (spent a good 
thirty minutes trying to get the stupid jar included in the tarball because I 
assumed the default scope was still {{compile}}). As long as it works, I can 
set the scope to be {{compile}} in hbase-assembly (and elsewhere if needed?)
{quote}Do we actually need hamcrest in the shaded artifacts, or is this just a 
side effect of the scope change above? Could we filter it out instead of 
relocating it?
{quote}
If we expect people to be able to run our IntegrationTest code using the shaded 
jars, then yes. I was going under the assumption that the answer was "yes". I think we 
could be doing a better job here, but that's a long pole to separate 
testing-utility from public API, and then separate dependencies accordingly.
{quote}please either make the name match case insensitive or add a supplemental 
info entry that corrects the name for hamcrest. Let's try to keep a single 
instance of names here.
{quote}
You got it, boss.

> IntegrationTestDriver fails after HADOOP-15406 due to missing hamcrest-core
> ---
>
> Key: HBASE-20681
> URL: https://issues.apache.org/jira/browse/HBASE-20681
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Reporter: Romil Choksi
>Assignee: Josh Elser
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.1
>
> Attachments: HBASE-20681.001.patch, HBASE-20681.002.patch
>
>
> HADOOP-15406 marked mockito and junit as test-only dependencies which, I 
> believe, has stopped them from being included in a stock Hadoop classpath. 
> Prior, you'd get hamcrest at {{share/hadoop/common/lib/hamcrest-core-1.3.jar}}
> However, we depend on it being there for our junit in hbase-it:
> {noformat}
> [INFO] --- maven-dependency-plugin:3.0.1:tree (default-cli) @ hbase-it ---
> [INFO] org.apache.hbase:hbase-it:jar:2.0.1-SNAPSHOT
> [INFO] +- junit:junit:jar:4.12:test
> [INFO] |  \- org.hamcrest:hamcrest-core:jar:1.3:test
> {noformat}
> We need to make sure we include it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20357) AccessControlClient API Enhancement

2018-06-05 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20357?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502189#comment-16502189
 ] 

Ted Yu commented on HBASE-20357:


Pankaj:
Since the patch is relatively large, please spin up a secure cluster and verify 
that the existing permission requirements are met.

Thanks

> AccessControlClient API Enhancement
> ---
>
> Key: HBASE-20357
> URL: https://issues.apache.org/jira/browse/HBASE-20357
> Project: HBase
>  Issue Type: Improvement
>  Components: security
>Reporter: Pankaj Kumar
>Assignee: Pankaj Kumar
>Priority: Major
> Attachments: HBASE-20357.master.001.patch
>
>
> *Background:*
>  Currently HBase ACLs can be retrieved based on the namespace or table name 
> only. There is no direct API available to retrieve the permissions based on 
> the namespace, table name, column family and column qualifier for a specific 
> user.
> A client has to write application logic in multiple steps to retrieve ACLs 
> based on table name, column name and column qualifier for a specific user.
>  HBase should enhance the AccessControlClient APIs to simplify this.
> *AccessControlClient API should be extended with the following APIs:*
>  # To retrieve permissions based on the namespace, table name, column family 
> and column qualifier for a specific user.
>   Permissions can be retrieved based on the following inputs:
>        - Namespace/Table (already available)
>        - Namespace/Table + UserName
>        - Table + CF
>        - Table + CF + UserName
>        - Table + CF + CQ
>        - Table + CF + CQ + UserName
>           Scope of retrieving permission will be as follows:
>                  - Same as existing
>        2. To validate whether a user is allowed to perform the specified 
> operations on a particular table; this will be useful to check user privilege 
> instead of getting ACD during client operation.
>              User validation can be performed based on the following inputs:
>                   - Table + CF + CQ + UserName + Actions
>             Scope of validating user privilege:
>                     A user can perform a self check without any special 
> privilege, but ADMIN privilege will be required to perform the check for 
> other users.
>                     For example, suppose there are two users, "userA" and 
> "userB"; then the following scenarios are possible:
>                         - when userA wants to check whether userA has 
> privilege to perform the mentioned actions
>                                 > userA does not need ADMIN privilege, as it 
> is a self query.
>                        - when userA wants to check whether userB has 
> privilege to perform the mentioned actions,
>                                 > userA must have ADMIN or superuser 
> privilege, as it is querying for another user.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20334) add a test that expressly uses both our shaded client and the one from hadoop 3

2018-06-05 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502155#comment-16502155
 ] 

Josh Elser commented on HBASE-20334:


{quote}Yes, this is currently an end-to-end test. I could adapt it to 
optionally use an existing cluster.
{quote}
My only concern was using a top-level directory in HDFS if it was a "secure" 
instance. That would very likely require HDFS superuser access to create. If 
we're using a minicluster only, that's perfectly fine (I think supporting a 
real hdfs complicates this unnecessarily).
{quote}I could structure things differently by putting this test into its own 
folder and including them as resources there. The down side is that the script 
will no longer be self contained. At this point I don't have a strong feeling 
either way.
{quote}
Yeah, I feel you. I didn't know of any bash "magic" that we could do which 
would still give us a self-contained "unit" as you've created now. No strong 
feelings from me either. I think what you have now is great.

> add a test that expressly uses both our shaded client and the one from hadoop 
> 3
> ---
>
> Key: HBASE-20334
> URL: https://issues.apache.org/jira/browse/HBASE-20334
> Project: HBase
>  Issue Type: Sub-task
>  Components: hadoop3, shading
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Attachments: HBASE-20334.0.patch
>
>
> Since we're making a shaded client that bleeds out of our namespace and into 
> Hadoop's, we should ensure that we can show our clients coexisting. Even if 
> it's just an IT that successfully talks to both us and HDFS via our 
> respective shaded clients, that'd be a big help in keeping us proactive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20333) break up shaded client into one with no Hadoop and one that's standalone

2018-06-05 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502152#comment-16502152
 ] 

Josh Elser commented on HBASE-20333:


{quote}By saying "just give me hadoop however you can do that" we have options: 
shaded hadoop jars on hadoop 3 and regular hadoop 2 jars on hadoop 2.
{quote}
Got it, wasn't considering the case of folks actually using our new shaded jars 
with Hadoop2 at all (pessimistically assuming that folks on h2 will just keep 
doing what they've always been doing – but that's a bad reason).
{quote}The provided-but-not-relocated hadoop is what we have now; what we've had 
since we first added a shaded jar
{quote}
Ok, if that's what we have been doing, I think it's fine to leave that one 
as-is. I was mostly curious as I've noticed fun issues where Hadoop 2.7.4 
client jars can't talk to a Hadoop 2.8.3 installation. I'm just worried about 
the long-term stability of trying to provide Hadoop client libraries (or tell 
people "rebuild hbase against the hadoop you use"). I think this is related to 
your changes here, but not something we need to hash out right now.
{quote}It's not relocated because parts of Hadoop are in our API. I can go back 
and find the jira with me and Elliott going around about it if you'd like 
background.
{quote}
No, don't put yourself out :)

> break up shaded client into one with no Hadoop and one that's standalone
> 
>
> Key: HBASE-20333
> URL: https://issues.apache.org/jira/browse/HBASE-20333
> Project: HBase
>  Issue Type: Sub-task
>  Components: shading
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20333.WIP.0.patch
>
>
> There are contexts where we want to stay out of our downstream users' way wrt 
> dependencies, but they need more Hadoop classes than we provide, i.e. any 
> downstream client that wants to use both HBase and HDFS in their application, 
> or any non-MR YARN application.
> Now that Hadoop also has shaded client artifacts for Hadoop 3, we're also 
> providing less incremental benefit by including our own rewritten Hadoop 
> classes to avoid downstream needing to pull in all of Hadoop's transitive 
> dependencies.
> Right now those users need to ensure that any jars from the Hadoop project 
> are loaded in the classpath prior to our shaded client jar. This is brittle 
> and prone to weird debugging trouble.
> Instead, we should have two artifacts: one that just lists Hadoop as a 
> prerequisite and one that still includes the rewritten-but-not-relocated 
> Hadoop classes.
> We can then use docs to emphasize when each of these is appropriate to use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20615) include shaded artifacts in binary install

2018-06-05 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502147#comment-16502147
 ] 

Sean Busbey commented on HBASE-20615:
-

Yes. My current reworking is moving the WIP client-tarball to use the shaded 
jars exclusively.

> include shaded artifacts in binary install
> --
>
> Key: HBASE-20615
> URL: https://issues.apache.org/jira/browse/HBASE-20615
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, Client, Usability
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20615.0.patch
>
>
> Working through setting up an IT for our shaded artifacts in HBASE-20334 
> makes our lack of packaging seem like an oversight. While I could work around 
> by pulling the shaded clients out of whatever build process built the 
> convenience binary that we're trying to test, it seems very awkward.
> After reflecting on it more, it makes more sense to me for there to be a 
> common place in the install that folks running jobs against the cluster can 
> rely on. If they need to run without a full hbase install, that should still 
> work fine via e.g. grabbing from the maven repo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20334) add a test that expressly uses both our shaded client and the one from hadoop 3

2018-06-05 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502146#comment-16502146
 ] 

Sean Busbey commented on HBASE-20334:
-

{quote}
bq. /path/to/component/bin-install
You mean the HBase installation by this? (the unpacked HBase binary tarball)
{quote}

yes.

{quote}
{code}
+  <property>
+    <name>hbase.rootdir</name>
+    <value>/hbase</value>
+  </property>
{code}
Double-checking, the script is always starting up a minicluster – never trying 
to use an existing HDFS installation?
{quote}

Yes, this is currently an end-to-end test. I could adapt it to optionally use 
an existing cluster.

{quote}
{code}
+yarn_server_tests_test_jar="$3"
+mapred_jobclient_test_jar="$4"
{code}
Could add some bash "is file" conditional checks on these two.
{quote}

will do.

{quote}
I'm sure you would've done this on your own already if there was an easy way, 
but is there a way we could avoid generating the TSV data and the Java file at 
the end of the script?
{quote}

I could structure things differently by putting this test into its own folder 
and including them as resources there. The down side is that the script will no 
longer be self contained. At this point I don't have a strong feeling either 
way.

> add a test that expressly uses both our shaded client and the one from hadoop 
> 3
> ---
>
> Key: HBASE-20334
> URL: https://issues.apache.org/jira/browse/HBASE-20334
> Project: HBase
>  Issue Type: Sub-task
>  Components: hadoop3, shading
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Attachments: HBASE-20334.0.patch
>
>
> Since we're making a shaded client that bleeds out of our namespace and into 
> Hadoop's, we should ensure that we can show our clients coexisting. Even if 
> it's just an IT that successfully talks to both us and HDFS via our 
> respective shaded clients, that'd be a big help in keeping us proactive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20615) include shaded artifacts in binary install

2018-06-05 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20615?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502143#comment-16502143
 ] 

Josh Elser commented on HBASE-20615:


Trying to run through this in my head now:
 # Patch here puts HBase shaded jars (including new byo-hadoop variant) in our 
bin-tarball
 # client-tarball (the WIP one) could eventually also (instead of?) pick up 
these HBase shaded jars
 # HBASE-20334 gets us 95% of the way to validating that the client-tarball 
actually works (don't think we need to complicate things here further though)

Is that what you're thinking as well, Sean?

> include shaded artifacts in binary install
> --
>
> Key: HBASE-20615
> URL: https://issues.apache.org/jira/browse/HBASE-20615
> Project: HBase
>  Issue Type: Sub-task
>  Components: build, Client, Usability
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20615.0.patch
>
>
> Working through setting up an IT for our shaded artifacts in HBASE-20334 
> makes our lack of packaging seem like an oversight. While I could work around 
> by pulling the shaded clients out of whatever build process built the 
> convenience binary that we're trying to test, it seems very awkward.
> After reflecting on it more, it makes more sense to me for there to be a 
> common place in the install that folks running jobs against the cluster can 
> rely on. If they need to run without a full hbase install, that should still 
> work fine via e.g. grabbing from the maven repo.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19798) Retry time should not be 0

2018-06-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19798?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502142#comment-16502142
 ] 

Hadoop QA commented on HBASE-19798:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Findbugs executables are not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} branch-1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
34s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
12s{color} | {color:green} branch-1 passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} branch-1 passed with JDK v1.7.0_181 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} branch-1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
13s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} branch-1 passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} branch-1 passed with JDK v1.7.0_181 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
12s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green}  
1m 21s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.4. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed with JDK v1.8.0_172 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed with JDK v1.7.0_181 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
25s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 8s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 14m 59s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:36a7029 |
| JIRA Issue | HBASE-19798 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12926599/HBASE-19798.branch-1.002.patch
 |
| Optional Tests |  asflicense  javac  javadoc  unit  findbugs  shadedjars  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 7898f40366d8 4.4.0-43-generic 

[jira] [Commented] (HBASE-20257) hbase-spark should not depend on com.google.code.findbugs.jsr305

2018-06-05 Thread Hadoop QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502138#comment-16502138
 ] 

Hadoop QA commented on HBASE-20257:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
27s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
15s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  4m  
4s{color} | {color:red} root in the patch failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  0m 
11s{color} | {color:red} hbase-spark in the patch failed. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  0m 11s{color} 
| {color:red} hbase-spark in the patch failed. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
37s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  3m 
45s{color} | {color:red} The patch causes 10 errors with Hadoop v2.7.4. {color} 
|
| {color:red}-1{color} | {color:red} hadoopcheck {color} | {color:red}  8m 
11s{color} | {color:red} The patch causes 10 errors with Hadoop v3.0.0. {color} 
|
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
12s{color} | {color:red} hbase-spark in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  0m 14s{color} 
| {color:red} hbase-spark in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 32m 17s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:d8b550f |
| JIRA Issue | HBASE-20257 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12916314/HBASE-20257.v04.patch 
|
| Optional Tests |  asflicense  javac  javadoc  unit  shadedjars  hadoopcheck  
xml  compile  |
| uname | Linux ed1e77ff1534 3.13.0-139-generic #188-Ubuntu SMP Tue Jan 9 
14:43:09 UTC 2018 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 7d3750bd9f |
| maven | version: Apache Maven 3.5.3 
(3383c37e1f9e9b3bc3df5050c29c8aff9f295297; 2018-02-24T19:49:05Z) |
| Default Java | 1.8.0_171 |
| mvninstall | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13096/artifact/patchprocess/patch-mvninstall-root.txt
 |
| compile | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13096/artifact/patchprocess/patch-compile-hbase-spark.txt
 |
| javac | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13096/artifact/patchprocess/patch-compile-hbase-spark.txt
 |
| hadoopcheck | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13096/artifact/patchprocess/patch-javac-2.7.4.txt
 |
| hadoopcheck | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13096/artifact/patchprocess/patch-javac-3.0.0.txt
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HBASE-Build/13096/artifact/patchprocess/patch-javadoc-hbase-spark.txt

[jira] [Commented] (HBASE-20681) IntegrationTestDriver fails after HADOOP-15406 due to missing hamcrest-core

2018-06-05 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20681?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502137#comment-16502137
 ] 

Sean Busbey commented on HBASE-20681:
-

{code}
   
> Key: HBASE-20681
> URL: https://issues.apache.org/jira/browse/HBASE-20681
> Project: HBase
>  Issue Type: Bug
>  Components: integration tests
>Reporter: Romil Choksi
>Assignee: Josh Elser
>Priority: Major
> Fix For: 3.0.0, 2.1.0, 2.0.1
>
> Attachments: HBASE-20681.001.patch, HBASE-20681.002.patch
>
>
> HADOOP-15406 marked mockito and junit as test-only dependencies which, I 
> believe, has stopped them from being included in a stock Hadoop classpath. 
> Prior, you'd get hamcrest at {{share/hadoop/common/lib/hamcrest-core-1.3.jar}}
> However, we depend on it being there for our junit in hbase-it:
> {noformat}
> [INFO] --- maven-dependency-plugin:3.0.1:tree (default-cli) @ hbase-it ---
> [INFO] org.apache.hbase:hbase-it:jar:2.0.1-SNAPSHOT
> [INFO] +- junit:junit:jar:4.12:test
> [INFO] |  \- org.hamcrest:hamcrest-core:jar:1.3:test
> {noformat}
> We need to make sure we include it.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20334) add a test that expressly uses both our shaded client and the one from hadoop 3

2018-06-05 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20334?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502135#comment-16502135
 ] 

Josh Elser commented on HBASE-20334:


{code:java}
/path/to/component/bin-install{code}
You mean the HBase installation by this? (the unpacked HBase binary tarball)
{code:java}
+  <property>
+    <name>hbase.rootdir</name>
+    <value>/hbase</value>
+  </property>
{code}
Double-checking, the script is always starting up a minicluster – never trying 
to use an existing HDFS installation?
{code:java}
+yarn_server_tests_test_jar="$3"
+mapred_jobclient_test_jar="$4"{code}
Could add some bash "is file" conditional checks on these two.

I'm sure you would've done this on your own already if there was an easy way, 
but is there a way we could avoid generating the TSV data and the Java file at 
the end of the script?

> add a test that expressly uses both our shaded client and the one from hadoop 
> 3
> ---
>
> Key: HBASE-20334
> URL: https://issues.apache.org/jira/browse/HBASE-20334
> Project: HBase
>  Issue Type: Sub-task
>  Components: hadoop3, shading
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Major
> Attachments: HBASE-20334.0.patch
>
>
> Since we're making a shaded client that bleeds out of our namespace and into 
> Hadoop's, we should ensure that we can show our clients coexisting. Even if 
> it's just an IT that successfully talks to both us and HDFS via our 
> respective shaded clients, that'd be a big help in keeping us proactive.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20678) NPE in ReplicationSourceManager#NodeFailoverWorker

2018-06-05 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20678?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502134#comment-16502134
 ] 

Hudson commented on HBASE-20678:


Results for branch branch-2
[build #826 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/826/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/826//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/826//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/826//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> NPE in ReplicationSourceManager#NodeFailoverWorker
> --
>
> Key: HBASE-20678
> URL: https://issues.apache.org/jira/browse/HBASE-20678
> Project: HBase
>  Issue Type: Bug
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
>Priority: Minor
> Fix For: 2.1.0
>
> Attachments: HBASE-20678.master.001.patch, 
> HBASE-20678.master.002.patch, HBASE-20678.master.003.patch
>
>
> 2018-06-04 10:28:43,362 INFO  [ReplicationExecutor-0] 
> replication.ZKReplicationQueueStorage(432): Claim queue queueId=1 from 
> hao-optiplex-7050,38491,1528079278158 to 
> hao-optiplex-7050,39931,1528079278272 failed with 
> org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = 
> NoNode, someone else took the log?
> Exception in thread "ReplicationExecutor-0" java.lang.NullPointerException
>   
> 
>   at 
> org.apache.hadoop.hbase.replication.regionserver.ReplicationSourceManager$NodeFailoverWorker.run(ReplicationSourceManager.java:858)
>   at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
> ZKReplicationQueueStorage's claimQueue method may return null when it gets a 
> NoNodeException.
> {code:java}
>   Pair<String, SortedSet<String>> peer = queueStorage.claimQueue(deadRS,
>     queues.get(ThreadLocalRandom.current().nextInt(queues.size())),
>     server.getServerName());
>   long sleep = sleepBeforeFailover / 2;
>   if (!peer.getSecond().isEmpty()) {
>     newQueues.put(peer.getFirst(), peer.getSecond());
>     sleep = sleepBeforeFailover;
>   }
> {code}
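
A minimal sketch of the null guard this calls for, assuming the claim result is 
the Pair shown above (the helper name and generic types here are illustrative, 
not the committed patch):
{code:java}
import java.util.Map;
import java.util.SortedSet;
import org.apache.hadoop.hbase.util.Pair;

final class ClaimQueueGuardSketch {
  // Record a claimed queue, tolerating a null result from a failed claim.
  static void recordClaim(Pair<String, SortedSet<String>> peer,
      Map<String, SortedSet<String>> newQueues) {
    // claimQueue may come back null when someone else already took the log;
    // bail out instead of dereferencing it.
    if (peer == null || peer.getSecond().isEmpty()) {
      return;
    }
    newQueues.put(peer.getFirst(), peer.getSecond());
  }
}
{code}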



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20340) potential null pointer exception in org.apache.hadoop.hbase.rest.TableScanResource given empty resource

2018-06-05 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated HBASE-20340:

Attachment: HBASE-20340-master.001.patch
Status: Patch Available  (was: Open)

Fix the potential NPE by null-checking the object.
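
A small sketch of that kind of check, assuming the caller in TableScanResource 
just needs an empty list when the Result has no cells (the helper name is 
illustrative, not the actual patch):
{code:java}
import java.util.Collections;
import java.util.List;
import org.apache.hadoop.hbase.Cell;
import org.apache.hadoop.hbase.client.Result;

final class ListCellsGuardSketch {
  // Result.listCells() returns null for an empty Result; normalize to an
  // empty list so callers can iterate safely.
  static List<Cell> safeCells(Result rs) {
    List<Cell> kvs = rs.listCells();
    return kvs == null ? Collections.emptyList() : kvs;
  }
}
{code}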

> potential null pointer exception in 
> org.apache.hadoop.hbase.rest.TableScanResource given empty resource
> ---
>
> Key: HBASE-20340
> URL: https://issues.apache.org/jira/browse/HBASE-20340
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 2.0.0-beta-2
>Reporter: andy zhou
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-20340-master.001.patch
>
>
> Our code analyzer has detected a potential null pointer issue in 
> org.apache.hadoop.hbase.rest.TableScanResource as follows:
> {code:java}
> // org.apache.hadoop.hbase.client.Result.java Line #244
> public List<Cell> listCells() {
>   return isEmpty() ? null : Arrays.asList(rawCells());
> }
> {code}
> {code:java}
> // org.apache.hadoop.hbase.rest.TableScanResource Line #95
> List<Cell> kvs = rs.listCells();
> for (Cell kv : kvs) { ... }
> {code}
>  
> Given an empty rs, kvs will be null instead of an empty list, leading to a 
> potential null pointer exception
> (org.apache.hadoop.hbase.client.Result.java Line #244 and 
> org.apache.hadoop.hbase.rest.TableScanResource Line #95).
> Links to the code are here:
> https://github.com/apache/hbase/blob/9e9b347d667e1fc6165c9f8ae5ae7052147e8895/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Result.java#L244
> SourceBrella Inc.
> https://github.com/apache/hbase/blob/9e9b347d667e1fc6165c9f8ae5ae7052147e8895/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/TableScanResource.java#L95



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Assigned] (HBASE-20340) potential null pointer exception in org.apache.hadoop.hbase.rest.TableScanResource given empty resource

2018-06-05 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang reassigned HBASE-20340:
---

Assignee: Xu Cang

> potential null pointer exception in 
> org.apache.hadoop.hbase.rest.TableScanResource given empty resource
> ---
>
> Key: HBASE-20340
> URL: https://issues.apache.org/jira/browse/HBASE-20340
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 2.0.0-beta-2
>Reporter: andy zhou
>Assignee: Xu Cang
>Priority: Major
>
> Our code analyzer has detected a potential null pointer issue in 
> org.apache.hadoop.hbase.rest.TableScanResource as follows:
> {code:java}
> // org.apache.hadoop.hbase.client.Result.java Line #244
> public List<Cell> listCells() {
>   return isEmpty() ? null : Arrays.asList(rawCells());
> }
> {code}
> {code:java}
> // org.apache.hadoop.hbase.rest.TableScanResource Line #95
> List<Cell> kvs = rs.listCells();
> for (Cell kv : kvs) { ... }
> {code}
>  
> Given an empty rs, kvs will be null instead of an empty list, leading to a 
> potential null pointer exception
> (org.apache.hadoop.hbase.client.Result.java Line #244 and 
> org.apache.hadoop.hbase.rest.TableScanResource Line #95).
> Links to the code are here:
> https://github.com/apache/hbase/blob/9e9b347d667e1fc6165c9f8ae5ae7052147e8895/hbase-client/src/main/java/org/apache/hadoop/hbase/client/Result.java#L244
> SourceBrella Inc.
> https://github.com/apache/hbase/blob/9e9b347d667e1fc6165c9f8ae5ae7052147e8895/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/TableScanResource.java#L95



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20333) break up shaded client into one with no Hadoop and one that's standalone

2018-06-05 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502120#comment-16502120
 ] 

Sean Busbey commented on HBASE-20333:
-

{quote}

What's the use-case you have in mind for a jar with Hadoop provided but not 
relocated? Custom mapreduce jobs that use both Hadoop and HBase APIs?
{quote}

No, those should use our hbase-shaded-mapreduce artifact that never includes 
hadoop (once HBASE-20332 lands). Hadoop jars in that case need to come from MR.

The provided-but-not-relocated hadoop is what we have now; what we've had since 
we first added a shaded jar in ... 1.1? It's not relocated because parts of 
Hadoop are in our API. I can go back and find the jira with me and Elliott 
going around about it if you'd like background.

> break up shaded client into one with no Hadoop and one that's standalone
> 
>
> Key: HBASE-20333
> URL: https://issues.apache.org/jira/browse/HBASE-20333
> Project: HBase
>  Issue Type: Sub-task
>  Components: shading
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20333.WIP.0.patch
>
>
> There are contexts where we want to stay out of our downstream users' way wrt 
> dependencies, but they need more Hadoop classes than we provide, i.e. any 
> downstream client that wants to use both HBase and HDFS in their application, 
> or any non-MR YARN application.
> Now that Hadoop also has shaded client artifacts for Hadoop 3, we're also 
> providing less incremental benefit by including our own rewritten Hadoop 
> classes to avoid downstream needing to pull in all of Hadoop's transitive 
> dependencies.
> Right now those users need to ensure that any jars from the Hadoop project 
> are loaded in the classpath prior to our shaded client jar. This is brittle 
> and prone to weird debugging trouble.
> Instead, we should have two artifacts: one that just lists Hadoop as a 
> prerequisite and one that still includes the rewritten-but-not-relocated 
> Hadoop classes.
> We can then use docs to emphasize when each of these is appropriate to use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20333) break up shaded client into one with no Hadoop and one that's standalone

2018-06-05 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502114#comment-16502114
 ] 

Sean Busbey commented on HBASE-20333:
-

{quote}
If you had asked me on the spot, I think I'd have said that an HBase 
shaded jar just requires the Hadoop shaded jar and we move on. Maybe the 
problem is that we depend on things from Hadoop that aren't included in that 
shaded jar?
{quote}

branch-2 hadoop is not dead, and it has no shaded jar. By saying "just give me 
hadoop however you can do that" we have options: shaded hadoop jars on hadoop 3 
and regular hadoop 2 jars on hadoop 2. (The test on HBASE-20334 shows an 
example of determining this at operations time.)

> break up shaded client into one with no Hadoop and one that's standalone
> 
>
> Key: HBASE-20333
> URL: https://issues.apache.org/jira/browse/HBASE-20333
> Project: HBase
>  Issue Type: Sub-task
>  Components: shading
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20333.WIP.0.patch
>
>
> There are contexts where we want to stay out of our downstream users' way wrt 
> dependencies, but they need more Hadoop classes than we provide, i.e. any 
> downstream client that wants to use both HBase and HDFS in their application, 
> or any non-MR YARN application.
> Now that Hadoop also has shaded client artifacts for Hadoop 3, we're also 
> providing less incremental benefit by including our own rewritten Hadoop 
> classes to avoid downstream needing to pull in all of Hadoop's transitive 
> dependencies.
> Right now those users need to ensure that any jars from the Hadoop project 
> are loaded in the classpath prior to our shaded client jar. This is brittle 
> and prone to weird debugging trouble.
> Instead, we should have two artifacts: one that just lists Hadoop as a 
> prerequisite and one that still includes the rewritten-but-not-relocated 
> Hadoop classes.
> We can then use docs to emphasize when each of these is appropriate to use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20333) break up shaded client into one with no Hadoop and one that's standalone

2018-06-05 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20333?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502106#comment-16502106
 ] 

Josh Elser commented on HBASE-20333:


{quote}instead, we should have two artifacts: one that just lists Hadoop as a 
prerequisite and one that still includes the rewritten-but-not-relocated Hadoop 
classes.
{quote}
What's the use-case you have in mind for a jar with Hadoop provided but not 
relocated? Custom mapreduce jobs that use both Hadoop and HBase APIs?

If you had asked me on the spot, I think I'd have said that an HBase 
shaded jar just requires the Hadoop shaded jar and we move on. Maybe the 
problem is that we depend on things from Hadoop that aren't included in that 
shaded jar?

Thanks for your efforts here!

> break up shaded client into one with no Hadoop and one that's standalone
> 
>
> Key: HBASE-20333
> URL: https://issues.apache.org/jira/browse/HBASE-20333
> Project: HBase
>  Issue Type: Sub-task
>  Components: shading
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20333.WIP.0.patch
>
>
> There are contexts where we want to stay out of our downstream users' way with 
> respect to dependencies, but they need more Hadoop classes than we provide, 
> e.g. any downstream client that wants to use both HBase and HDFS in its 
> application, or any non-MR YARN application.
> Now that Hadoop also has shaded client artifacts for Hadoop 3, we get less 
> incremental benefit from including our own rewritten Hadoop classes to spare 
> downstream users from pulling in all of Hadoop's transitive dependencies.
> Right now those users need to ensure that any jars from the Hadoop project 
> are loaded on the classpath before our shaded client jar. This is brittle 
> and prone to weird debugging trouble.
> Instead, we should have two artifacts: one that just lists Hadoop as a 
> prerequisite and one that still includes the rewritten-but-not-relocated 
> Hadoop classes.
> We can then use docs to emphasize when each of these is appropriate to use.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-19798) Retry time should not be 0

2018-06-05 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-19798?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated HBASE-19798:

Attachment: HBASE-19798.branch-1.002.patch

> Retry time should not be 0
> --
>
> Key: HBASE-19798
> URL: https://issues.apache.org/jira/browse/HBASE-19798
> Project: HBase
>  Issue Type: Bug
>  Components: hbase
>Affects Versions: 1.2.6
> Environment: linux centos 6.5 
> jdk 1.8
>  
>  
>Reporter: chenrongwei
>Assignee: Xu Cang
>Priority: Minor
> Attachments: HBASE-19798.branch-1.001.patch, 
> HBASE-19798.branch-1.002.patch
>
>
> hbase.client.retries.number should not be set to zero.
> If it is set to zero, ConnectionManager will report NoServerForRegionException.
> The relevant source in 1.2.6's ConnectionManager looks like this:
> int localNumRetries = (retry ? numTries : 1);
> for (int tries = 0; true; tries++) {
>   if (tries >= localNumRetries) {
>     throw new NoServerForRegionException("Unable to find region for "
>         + Bytes.toStringBinary(row) + " in " + tableName
>         + " after " + localNumRetries + " tries.");
>   }
> Actual exception info:
> org.apache.hadoop.hbase.client.NoServerForRegionException: Unable to find 
> region for 196053079 in fin_tech:crawler_event after 0 tries.
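A minimal, self-contained sketch of the defensive idea described above, not the attached patch: never let the effective retry count drop below one attempt. The helper name is hypothetical.

{code:java}
public class RetryCountSketch {
  /** Hypothetical helper: clamp a configured retry count so at least one attempt is made. */
  static int effectiveRetries(int configuredRetries) {
    return Math.max(1, configuredRetries);
  }

  public static void main(String[] args) {
    // With hbase.client.retries.number=0 the raw loop bound would be 0 and the
    // region lookup would fail immediately "after 0 tries"; clamping avoids that.
    System.out.println(effectiveRetries(0)); // prints 1
  }
}
{code}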



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20611) UnsupportedOperationException may thrown when calling getCallQueueInfo()

2018-06-05 Thread Xu Cang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xu Cang updated HBASE-20611:

Attachment: HBASE-20611.master.005.patch

> UnsupportedOperationException may thrown when calling getCallQueueInfo()
> 
>
> Key: HBASE-20611
> URL: https://issues.apache.org/jira/browse/HBASE-20611
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0
>Reporter: Allan Yang
>Assignee: Xu Cang
>Priority: Major
> Attachments: HBASE-20611.master.002.patch, 
> HBASE-20611.master.003.patch, HBASE-20611.master.004.patch, 
> HBASE-20611.master.005.patch
>
>
> HBASE-16290 added a new feature to dump queue info; the method 
> getCallQueueInfo() needs to iterate the queue to get the elements in it. 
> But, except for Java's LinkedBlockingQueue, the other queue 
> implementations like BoundedPriorityBlockingQueue and 
> AdaptiveLifoCoDelCallQueue don't implement the method iterator(). If those 
> queues are used, an UnsupportedOperationException will be thrown.
> This can easily be reproduced by the UT testCallQueueInfo while adding a 
> conf: conf.set("hbase.ipc.server.callqueue.type", "deadline")
> {code}
> java.lang.UnsupportedOperationException
>   at 
> org.apache.hadoop.hbase.util.BoundedPriorityBlockingQueue.iterator(BoundedPriorityBlockingQueue.java:285)
>   at 
> org.apache.hadoop.hbase.ipc.RpcExecutor.getCallQueueCountsSummary(RpcExecutor.java:166)
>   at 
> org.apache.hadoop.hbase.ipc.SimpleRpcScheduler.getCallQueueInfo(SimpleRpcScheduler.java:241)
>   at 
> org.apache.hadoop.hbase.ipc.TestSimpleRpcScheduler.testCallQueueInfo(TestSimpleRpcScheduler.java:164)
> {code}
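A minimal, self-contained sketch of the guarding idea, not the attached patch: tolerate queue implementations whose iterator() is unsupported instead of letting the exception escape. The class and method names below are hypothetical and use plain JDK types rather than HBase's CallRunner queues.

{code:java}
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.BlockingQueue;

public class CallQueueSummary {
  /**
   * Summarize element counts per queue, tolerating queue implementations
   * (e.g. priority or CoDel-style queues) whose iterator() throws
   * UnsupportedOperationException.
   */
  public static Map<Integer, Integer> summarize(List<BlockingQueue<Runnable>> queues) {
    Map<Integer, Integer> countsPerQueue = new HashMap<>();
    for (int i = 0; i < queues.size(); i++) {
      BlockingQueue<Runnable> queue = queues.get(i);
      int count = 0;
      try {
        for (Runnable ignored : queue) { // requires iterator() support
          count++;
        }
      } catch (UnsupportedOperationException e) {
        count = queue.size(); // fall back to size() when iteration is unsupported
      }
      countsPerQueue.put(i, count);
    }
    return countsPerQueue;
  }
}
{code}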



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20332) shaded mapreduce module shouldn't include hadoop

2018-06-05 Thread Josh Elser (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502096#comment-16502096
 ] 

Josh Elser commented on HBASE-20332:


{quote}everyone fine with me punting the check for old hbase apis to a 
follow-on that's blocked on the checkstyle update needed for the "illegal 
classes" rule?
{quote}
+1

> shaded mapreduce module shouldn't include hadoop
> 
>
> Key: HBASE-20332
> URL: https://issues.apache.org/jira/browse/HBASE-20332
> Project: HBase
>  Issue Type: Sub-task
>  Components: mapreduce, shading
>Affects Versions: 2.0.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 3.0.0, 2.1.0
>
> Attachments: HBASE-20332.0.patch, HBASE-20332.1.WIP.patch, 
> HBASE-20332.2.WIP.patch, HBASE-20332.3.patch
>
>
> AFAICT, we should just entirely skip including hadoop in our shaded mapreduce 
> module:
> 1) Folks expect to run yarn / mr apps via {{hadoop jar}} / {{yarn jar}}
> 2) Those commands include all the needed Hadoop jars in your classpath by 
> default (both client side and in the containers)
> 3) If you try to use "user classpath first" for your job as a workaround 
> (e.g. for some library your application needs that hadoop provides), our 
> inclusion of *some but not all* hadoop classes causes everything to fall 
> over because of mixing rewritten and non-rewritten hadoop classes
> 4) If you don't use "user classpath first", all of our 
> non-relocated-but-still-shaded hadoop classes are ignored anyway, so we're 
> just wasting space



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20686) Asyncfs should retry upon RetryStartFileException

2018-06-05 Thread Wei-Chiu Chuang (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20686?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502079#comment-16502079
 ] 

Wei-Chiu Chuang commented on HBASE-20686:
-

Attaching a simple fix to make the behavior consistent with DFSOutputStream: 
[^HBASE-20686.master.001.patch]. I will attach a test later as well.

> Asyncfs should retry upon RetryStartFileException
> -
>
> Key: HBASE-20686
> URL: https://issues.apache.org/jira/browse/HBASE-20686
> Project: HBase
>  Issue Type: Bug
>  Components: asyncclient
>Affects Versions: 2.0.0-beta-1
> Environment: HBase 2.0, Hadoop 3 with at-rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HBASE-20686.master.001.patch
>
>
> In Hadoop 2.6 and above, the HDFS client retries on RetryStartFileException when 
> the NameNode hits an encryption-zone-related issue. The code lives in 
> DFSOutputStream#newStreamForCreate(). (HDFS-6970)
> In HBase 2's asyncfs implementation, 
> FanOutOneBlockAsyncDFSOutputHelper#createOutput() is roughly an imitation of 
> HDFS's DFSOutputStream#newStreamForCreate(). However, it does not retry upon 
> RetryStartFileException, so it is less resilient to such issues.
> Also, DFSOutputStream#newStreamForCreate() unwraps RemoteExceptions, but 
> asyncfs does not. Therefore, HBase gets different exceptions than before.
> Filing this jira to get this corrected.
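For illustration, a minimal, self-contained sketch of the retry shape described above; the RetryStartFileException stand-in and the createWithRetry() helper below are hypothetical, not asyncfs or DFSOutputStream code.

{code:java}
import java.io.IOException;

public class CreateRetrySketch {
  /** Stand-in for the HDFS exception the NameNode throws for transient encryption-zone races. */
  static class RetryStartFileException extends IOException {}

  interface NamenodeCall<T> { T run() throws IOException; }

  /** Retry the create call a bounded number of times when the NameNode asks us to retry. */
  static <T> T createWithRetry(NamenodeCall<T> call, int maxRetries) throws IOException {
    for (int attempt = 0; ; attempt++) {
      try {
        return call.run();
      } catch (RetryStartFileException e) {
        if (attempt >= maxRetries) {
          throw new IOException("Too many retries because of encryption zone operations", e);
        }
        // otherwise loop and retry, mirroring the behavior described for newStreamForCreate()
      }
    }
  }
}
{code}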



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20686) Asyncfs should retry upon RetryStartFileException

2018-06-05 Thread Wei-Chiu Chuang (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20686?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HBASE-20686:

Attachment: HBASE-20686.master.001.patch

> Asyncfs should retry upon RetryStartFileException
> -
>
> Key: HBASE-20686
> URL: https://issues.apache.org/jira/browse/HBASE-20686
> Project: HBase
>  Issue Type: Bug
>  Components: asyncclient
>Affects Versions: 2.0.0-beta-1
> Environment: HBase 2.0, Hadoop 3 with at-rest encryption
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>Priority: Major
> Attachments: HBASE-20686.master.001.patch
>
>
> In Hadoop 2.6 and above, the HDFS client retries on RetryStartFileException when 
> the NameNode hits an encryption-zone-related issue. The code lives in 
> DFSOutputStream#newStreamForCreate(). (HDFS-6970)
> In HBase 2's asyncfs implementation, 
> FanOutOneBlockAsyncDFSOutputHelper#createOutput() is roughly an imitation of 
> HDFS's DFSOutputStream#newStreamForCreate(). However, it does not retry upon 
> RetryStartFileException, so it is less resilient to such issues.
> Also, DFSOutputStream#newStreamForCreate() unwraps RemoteExceptions, but 
> asyncfs does not. Therefore, HBase gets different exceptions than before.
> Filing this jira to get this corrected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Created] (HBASE-20686) Asyncfs should retry upon RetryStartFileException

2018-06-05 Thread Wei-Chiu Chuang (JIRA)
Wei-Chiu Chuang created HBASE-20686:
---

 Summary: Asyncfs should retry upon RetryStartFileException
 Key: HBASE-20686
 URL: https://issues.apache.org/jira/browse/HBASE-20686
 Project: HBase
  Issue Type: Bug
  Components: asyncclient
Affects Versions: 2.0.0-beta-1
 Environment: HBase 2.0, Hadoop 3 with at-rest encryption

Reporter: Wei-Chiu Chuang
Assignee: Wei-Chiu Chuang


In Hadoop 2.6 and above, the HDFS client retries on RetryStartFileException when 
the NameNode hits an encryption-zone-related issue. The code lives in 
DFSOutputStream#newStreamForCreate(). (HDFS-6970)

In HBase 2's asyncfs implementation, 
FanOutOneBlockAsyncDFSOutputHelper#createOutput() is roughly an imitation of 
HDFS's DFSOutputStream#newStreamForCreate(). However, it does not retry upon 
RetryStartFileException, so it is less resilient to such issues.

Also, DFSOutputStream#newStreamForCreate() unwraps RemoteExceptions, but 
asyncfs does not. Therefore, HBase gets different exceptions than before.

Filing this jira to get this corrected.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20257) hbase-spark should not depend on com.google.code.findbugs.jsr305

2018-06-05 Thread Ted Yu (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502075#comment-16502075
 ] 

Ted Yu commented on HBASE-20257:


Ping [~busbey] for review

> hbase-spark should not depend on com.google.code.findbugs.jsr305
> 
>
> Key: HBASE-20257
> URL: https://issues.apache.org/jira/browse/HBASE-20257
> Project: HBase
>  Issue Type: Task
>  Components: build, spark
>Affects Versions: 3.0.0
>Reporter: Ted Yu
>Assignee: Artem Ervits
>Priority: Minor
>  Labels: beginner
> Attachments: HBASE-20257.v01.patch, HBASE-20257.v02.patch, 
> HBASE-20257.v03.patch, HBASE-20257.v04.patch
>
>
> The following can be observed in the build output of master branch:
> {code}
> [WARNING] Rule 0: org.apache.maven.plugins.enforcer.BannedDependencies failed 
> with message:
> We don't allow the JSR305 jar from the Findbugs project, see HBASE-16321.
> Found Banned Dependency: com.google.code.findbugs:jsr305:jar:1.3.9
> Use 'mvn dependency:tree' to locate the source of the banned dependencies.
> {code}
> Here is the related snippet from hbase-spark/pom.xml:
> {code}
> <dependency>
>   <groupId>com.google.code.findbugs</groupId>
>   <artifactId>jsr305</artifactId>
> {code}
> Dependency on jsr305 should be dropped.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20679) Add the ability to compile JSP dynamically in Jetty

2018-06-05 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502047#comment-16502047
 ] 

Sean Busbey commented on HBASE-20679:
-

please add docs.

> Add the ability to compile JSP dynamically in Jetty
> ---
>
> Key: HBASE-20679
> URL: https://issues.apache.org/jira/browse/HBASE-20679
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20679.002.patch, HBASE-20679.patch
>
>
> As discussed in HBASE-20617, adding the ability to dynamically compile jsp 
> enables us to do some hot fixes. 
>  For example, several days ago, in our testing HBase-2.0 cluster, 
> procedure WALs were corrupted for some unknown reason. After restarting 
> the cluster, since some procedures (AssignProcedure, for example) were 
> corrupted and couldn't be replayed, some regions were stuck in RIT forever. 
> We couldn't use HBCK since it hasn't supported AssignmentV2 yet. In fact, 
> the namespace region was not online, so the master was not initialized and 
> we couldn't even use shell commands like assign/move. But we wrote a jsp and 
> fixed this issue easily. The jsp file looks like this:
> {code:java}
> <%
>   String action = request.getParameter("action");
>   HMaster master = (HMaster) getServletContext().getAttribute(HMaster.MASTER);
>   List<RegionInfo> offlineRegionsToAssign = new ArrayList<>();
>   List<RegionStates.RegionStateNode> regionRITs = master.getAssignmentManager()
>       .getRegionStates().getRegionsInTransition();
>   for (RegionStates.RegionStateNode regionStateNode : regionRITs) {
>     // If regionStateNode doesn't have a procedure attached but meta state shows
>     // the region is in RIT, the previous procedure may be corrupted;
>     // we need to create a new assign procedure to assign it.
>     if (!regionStateNode.isInTransition()) {
>       offlineRegionsToAssign.add(regionStateNode.getRegionInfo());
>       out.println("RIT region:" + regionStateNode);
>     }
>   }
>   // Assign offline regions. Uses round-robin.
>   if ("fix".equals(action) && offlineRegionsToAssign.size() > 0) {
>     master.getMasterProcedureExecutor().submitProcedures(master.getAssignmentManager()
>         .createRoundRobinAssignProcedures(offlineRegionsToAssign));
>   } else {
>     out.println("use ?action=fix to fix RIT regions");
>   }
> %>
> {code}
> The above is only one example of what we can do if we have the ability to 
> compile jsp dynamically. We think it is very useful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-19852) HBase Thrift 1 server SPNEGO Improvements

2018-06-05 Thread Kevin Risden (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-19852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502051#comment-16502051
 ] 

Kevin Risden commented on HBASE-19852:
--

Yes latest uploaded patch is latest on reviewboard as well.

> HBase Thrift 1 server SPNEGO Improvements
> -
>
> Key: HBASE-19852
> URL: https://issues.apache.org/jira/browse/HBASE-19852
> Project: HBase
>  Issue Type: Improvement
>  Components: Thrift
>Reporter: Kevin Risden
>Assignee: Kevin Risden
>Priority: Major
> Attachments: HBASE-19852.master.001.patch, 
> HBASE-19852.master.002.patch, HBASE-19852.master.003.patch, 
> HBASE-19852.master.004.patch, HBASE-19852.master.006.patch, 
> HBASE-19852.master.007.patch.txt, HBASE-19852.master.008.patch
>
>
> HBase Thrift1 server has some issues when trying to use SPNEGO.
> From the mailing list:
> http://mail-archives.apache.org/mod_mbox/hbase-user/201801.mbox/%3CCAJU9nmh5YtZ%2BmAQSLo91yKm8pRVzAPNLBU9vdVMCcxHRtRqgoA%40mail.gmail.com%3E
> {quote}While setting up the HBase Thrift server with HTTP, there were a
> significant number of 401 errors where the HBase Thrift server wasn't able to
> handle the incoming Kerberos request. Documentation online is sparse when
> it comes to setting up the principal/keytab for HTTP Kerberos.
> I noticed that the HBase Thrift HTTP implementation was missing the SPNEGO
> principal/keytab like other Thrift based servers (HiveServer2). It looks
> like the HiveServer2 Thrift implementation and the HBase Thrift v1 implementation
> were very close to the same at one point. I made the following changes to
> the HBase Thrift v1 server implementation to make it work:
> * add SPNEGO principal/keytab if in HTTP mode
> * return 401 immediately if there is no authorization header instead of waiting for
> try/catch down in program flow{quote}
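A minimal sketch of the second bullet ("return 401 immediately if there is no authorization header"), assuming a plain servlet-API pre-check; the class and method names are illustrative, not the Thrift server's actual code.

{code:java}
import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SpnegoPreCheck {
  /**
   * Reject requests without an Authorization header up front with 401 and a
   * WWW-Authenticate: Negotiate challenge, rather than failing later in a try/catch.
   */
  static boolean challengeIfMissingAuth(HttpServletRequest request, HttpServletResponse response)
      throws IOException {
    String authHeader = request.getHeader("Authorization");
    if (authHeader == null || authHeader.isEmpty()) {
      response.setHeader("WWW-Authenticate", "Negotiate");
      response.sendError(HttpServletResponse.SC_UNAUTHORIZED, "Authentication required");
      return true; // request handled; caller should stop processing
    }
    return false;
  }
}
{code}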



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-20657) Retrying RPC call for ModifyTableProcedure may get stuck

2018-06-05 Thread Sergey Soldatov (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20657?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sergey Soldatov updated HBASE-20657:

Attachment: HBASE-20657-3-branch-2.patch

> Retrying RPC call for ModifyTableProcedure may get stuck
> 
>
> Key: HBASE-20657
> URL: https://issues.apache.org/jira/browse/HBASE-20657
> Project: HBase
>  Issue Type: Bug
>  Components: Client, proc-v2
>Affects Versions: 2.0.0
>Reporter: Sergey Soldatov
>Assignee: Sergey Soldatov
>Priority: Major
> Attachments: HBASE-20657-1-branch-2.patch, 
> HBASE-20657-2-branch-2.patch, HBASE-20657-3-branch-2.patch, 
> HBASE-20657-testcase-branch2.patch
>
>
> Env: 2 masters, 1 RS. 
> Steps to reproduce: the active master is killed while a ModifyTableProcedure is 
> executing. 
> If the table has enough regions, it may happen that when the secondary master 
> becomes active some of the regions are still closed, so once the client retries 
> the call against the new active master, a new ModifyTableProcedure is created 
> and gets stuck during MODIFY_TABLE_REOPEN_ALL_REGIONS state handling. That 
> happens because:
> 1. When we retry from the client side, we call modifyTableAsync, which creates 
> a procedure with a new nonce key:
> {noformat}
> ModifyTableRequest request = RequestConverter.buildModifyTableRequest(
>     td.getTableName(), td, ng.getNonceGroup(), ng.newNonce());
> {noformat}
> So on the server side it's considered a new procedure and starts executing 
> immediately.
> 2. When we process MODIFY_TABLE_REOPEN_ALL_REGIONS we create a 
> MoveRegionProcedure for each region, but it checks whether the region is 
> online (and it's not), so it fails immediately, forcing the procedure to 
> restart.
> [~an...@apache.org] saw a similar case when two concurrent ModifyTable 
> procedures were running and got stuck in a similar way.
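A minimal, self-contained sketch of the nonce idea implied by point 1 above: generate the nonce once and reuse it on every retry so the server can recognize the retried request as the same operation rather than starting a second ModifyTableProcedure. The NonceGenerator and RpcCall types below are stand-ins, not HBase classes, and this is not the attached patch.

{code:java}
import java.io.IOException;

public class RetryWithStableNonce {
  interface NonceGenerator { long getNonceGroup(); long newNonce(); }
  interface RpcCall<T> { T send(long nonceGroup, long nonce) throws IOException; }

  /** Retry an RPC with the same (nonceGroup, nonce) pair; expects maxAttempts >= 1. */
  static <T> T callWithRetries(NonceGenerator ng, RpcCall<T> call, int maxAttempts)
      throws IOException {
    final long nonceGroup = ng.getNonceGroup();
    final long nonce = ng.newNonce(); // generated once, before the first attempt
    IOException last = null;
    for (int attempt = 0; attempt < maxAttempts; attempt++) {
      try {
        return call.send(nonceGroup, nonce); // same nonce on every retry
      } catch (IOException e) {
        last = e; // retry with the *same* nonce pair so the server can deduplicate
      }
    }
    throw last != null ? last : new IOException("maxAttempts must be at least 1");
  }
}
{code}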



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20679) Add the ability to compile JSP dynamically in Jetty

2018-06-05 Thread Sean Busbey (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20679?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16502040#comment-16502040
 ] 

Sean Busbey commented on HBASE-20679:
-

Please include a flag that allows disabling this feature, since it 
substantially complicates security analysis of our runtime out of the box. (I 
am -1 without such a flag.)

If hbck2 supported AssignmentV2, would we still want to do this?

Is JSP compilation the right tool for what sounds like "have a way to inject 
something into the master doing XYZ"? Could this be better done as a plugin somewhere other 
than the InfoServer?
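A minimal sketch of what such an opt-in flag could look like, assuming Hadoop's Configuration is used as elsewhere in HBase; the property name and class below are hypothetical, not part of the patch under review.

{code:java}
import org.apache.hadoop.conf.Configuration;

public class DynamicJspToggle {
  /** Hypothetical property name; the real flag (if any) would be defined by the patch. */
  static final String DYNAMIC_JSP_KEY = "hbase.http.dynamic.jsp.enabled";

  /** Returns true only when the operator has explicitly opted in to dynamic JSP compilation. */
  static boolean dynamicJspEnabled(Configuration conf) {
    return conf.getBoolean(DYNAMIC_JSP_KEY, false); // default off to keep the security surface unchanged
  }
}
{code}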

> Add the ability to compile JSP dynamically in Jetty
> ---
>
> Key: HBASE-20679
> URL: https://issues.apache.org/jira/browse/HBASE-20679
> Project: HBase
>  Issue Type: New Feature
>Affects Versions: 2.0.0
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Major
> Fix For: 3.0.0
>
> Attachments: HBASE-20679.002.patch, HBASE-20679.patch
>
>
> As discussed in HBASE-20617, adding the ability to dynamically compile jsp 
> enables us to do some hot fixes. 
>  For example, several days ago, in our testing HBase-2.0 cluster, 
> procedure WALs were corrupted for some unknown reason. After restarting 
> the cluster, since some procedures (AssignProcedure, for example) were 
> corrupted and couldn't be replayed, some regions were stuck in RIT forever. 
> We couldn't use HBCK since it hasn't supported AssignmentV2 yet. In fact, 
> the namespace region was not online, so the master was not initialized and 
> we couldn't even use shell commands like assign/move. But we wrote a jsp and 
> fixed this issue easily. The jsp file looks like this:
> {code:java}
> <%
>   String action = request.getParameter("action");
>   HMaster master = (HMaster) getServletContext().getAttribute(HMaster.MASTER);
>   List<RegionInfo> offlineRegionsToAssign = new ArrayList<>();
>   List<RegionStates.RegionStateNode> regionRITs = master.getAssignmentManager()
>       .getRegionStates().getRegionsInTransition();
>   for (RegionStates.RegionStateNode regionStateNode : regionRITs) {
>     // If regionStateNode doesn't have a procedure attached but meta state shows
>     // the region is in RIT, the previous procedure may be corrupted;
>     // we need to create a new assign procedure to assign it.
>     if (!regionStateNode.isInTransition()) {
>       offlineRegionsToAssign.add(regionStateNode.getRegionInfo());
>       out.println("RIT region:" + regionStateNode);
>     }
>   }
>   // Assign offline regions. Uses round-robin.
>   if ("fix".equals(action) && offlineRegionsToAssign.size() > 0) {
>     master.getMasterProcedureExecutor().submitProcedures(master.getAssignmentManager()
>         .createRoundRobinAssignProcedures(offlineRegionsToAssign));
>   } else {
>     out.println("use ?action=fix to fix RIT regions");
>   }
> %>
> {code}
> The above is only one example of what we can do if we have the ability to 
> compile jsp dynamically. We think it is very useful.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

