[jira] [Updated] (HBASE-19020) TestXmlParsing exception checking relies on a particular xml implementation without declaring it.

2017-10-16 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-19020:

Status: Patch Available  (was: In Progress)

> TestXmlParsing exception checking relies on a particular xml implementation 
> without declaring it.
> -
>
> Key: HBASE-19020
> URL: https://issues.apache.org/jira/browse/HBASE-19020
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies, REST
>Affects Versions: 2.0.0-alpha-1, 1.1.9, 1.2.5, 1.3.0, 1.4.0
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-beta-1, 1.1.13
>
> Attachments: HBASE-19020.0.patch
>
>
> The test added in HBASE-17424 is overly specific:
> {code}
>   @Test
>   public void testFailOnExternalEntities() throws Exception {
> final String externalEntitiesXml =
> ""
> + "  ] >"
> + " ";
> Client client = mock(Client.class);
> RemoteAdmin admin = new RemoteAdmin(client, HBaseConfiguration.create(), 
> null);
> Response resp = new Response(200, null, externalEntitiesXml.getBytes());
> when(client.get("/version/cluster", 
> Constants.MIMETYPE_XML)).thenReturn(resp);
> try {
>   admin.getClusterVersion();
>   fail("Expected getClusterVersion() to throw an exception");
> } catch (IOException e) {
>   final String exceptionText = StringUtils.stringifyException(e);
>   final String expectedText = "The entity \"xee\" was referenced, but not 
> declared.";
>   LOG.error("exception text: " + exceptionText, e);
>   assertTrue("Exception does not contain expected text", 
> exceptionText.contains(expectedText));
> }
>   }
> {code}
> Specifically, when running against Hadoop 3.0.0-beta1 this test fails because 
> the exception text is different, though I'm still figuring out why.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19020) TestXmlParsing exception checking relies on a particular xml implementation without declaring it.

2017-10-16 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey updated HBASE-19020:

Attachment: HBASE-19020.0.patch

-0
  - check for a cause that's part of the Java XML API (should be consistent 
across implementations)
  - check for just the external entity name in the error message, rather than 
the particular phrasing about how it failed (a spotty assumption, but true so 
far); see the sketch below
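
A hedged sketch of what those two checks could look like in the test's catch block 
(the JAXB UnmarshalException cause and the bare quoted entity name are assumptions 
drawn from the bullets above, not a claim about what the attached patch does):

{code}
    } catch (IOException e) {
      // 1) Assert on a cause type from the standard Java XML API (javax.xml.bind),
      //    which should hold regardless of which parser implementation is on the classpath.
      assertEquals("Cause should come from the standard XML API",
          javax.xml.bind.UnmarshalException.class, e.getCause().getClass());
      // 2) Assert only on the entity name, not on any parser-specific phrasing.
      final String exceptionText = StringUtils.stringifyException(e);
      assertTrue("Exception does not mention the external entity",
          exceptionText.contains("\"xee\""));
    }
{code}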

> TestXmlParsing exception checking relies on a particular xml implementation 
> without declaring it.
> -
>
> Key: HBASE-19020
> URL: https://issues.apache.org/jira/browse/HBASE-19020
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies, REST
>Affects Versions: 1.3.0, 1.4.0, 1.2.5, 1.1.9, 2.0.0-alpha-1
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-beta-1, 1.1.13
>
> Attachments: HBASE-19020.0.patch
>
>
> The test added in HBASE-17424 is overly specific:
> {code}
>   @Test
>   public void testFailOnExternalEntities() throws Exception {
> final String externalEntitiesXml =
> ""
> + "  ] >"
> + " ";
> Client client = mock(Client.class);
> RemoteAdmin admin = new RemoteAdmin(client, HBaseConfiguration.create(), 
> null);
> Response resp = new Response(200, null, externalEntitiesXml.getBytes());
> when(client.get("/version/cluster", 
> Constants.MIMETYPE_XML)).thenReturn(resp);
> try {
>   admin.getClusterVersion();
>   fail("Expected getClusterVersion() to throw an exception");
> } catch (IOException e) {
>   final String exceptionText = StringUtils.stringifyException(e);
>   final String expectedText = "The entity \"xee\" was referenced, but not 
> declared.";
>   LOG.error("exception text: " + exceptionText, e);
>   assertTrue("Exception does not contain expected text", 
> exceptionText.contains(expectedText));
> }
>   }
> {code}
> Specifically, when running against Hadoop 3.0.0-beta1 this test fails because 
> the exception text is different, though I'm still figuring out why.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18946) Stochastic load balancer assigns replica regions to the same RS

2017-10-16 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16207050#comment-16207050
 ] 

ramkrishna.s.vasudevan commented on HBASE-18946:


Since HBASE-19017 is resolved I think this issue is better now. Will wait for 
reviews and meanwhile will check if the patch can cause other issues. 
[~huaxiang] - Thanks for your time.

> Stochastic load balancer assigns replica regions to the same RS
> ---
>
> Key: HBASE-18946
> URL: https://issues.apache.org/jira/browse/HBASE-18946
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18946.patch, HBASE-18946.patch, 
> TestRegionReplicasWithRestartScenarios.java
>
>
> Trying out region replicas and their assignment, I can see that sometimes the 
> default LB, the Stochastic load balancer, assigns replica regions to the same RS. 
> This happens when we have 3 RS checked in and we have a table with 3 
> replicas. When an RS goes down, replicas being assigned to the same RS is 
> acceptable, but when we have enough RSes to assign to, this behaviour is 
> undesirable and defeats the purpose of replicas. 
> [~huaxiang] and [~enis]. 
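
For reference, a table matching the scenario above can be created with three replicas 
roughly like this (a hedged sketch; the table and family names are made up, and the 
older descriptor API is used only for brevity):

{code}
// Hypothetical names; three replicas on a three-RS cluster as described above.
HTableDescriptor htd = new HTableDescriptor(TableName.valueOf("t_replica"));
htd.addFamily(new HColumnDescriptor("f"));
htd.setRegionReplication(3);
admin.createTable(htd);
{code}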



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18945) Make a IA.LimitedPrivate interface for CellComparator

2017-10-16 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18945?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16207047#comment-16207047
 ] 

ramkrishna.s.vasudevan commented on HBASE-18945:


QA is green. Ping !!!

> Make a IA.LimitedPrivate interface for CellComparator
> -
>
> Key: HBASE-18945
> URL: https://issues.apache.org/jira/browse/HBASE-18945
> Project: HBase
>  Issue Type: Sub-task
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-18495.patch, HBASE-18945_2.patch, 
> HBASE-18945_3.patch, HBASE-18945_4.patch, HBASE-18945_5.patch, 
> HBASE-18945_6.patch, HBASE-18945_6.patch
>
>
> Based on discussions over in HBASE-18826 and HBASE-18183 it is better to 
> expose CellComparator as a public interface so that it could be used in 
> Region/Store interfaces to be exposed to CPs.
> Currently the Comparator is exposed in Region, Store and StoreFile. There is 
> another discussion whether to expose it at all layers or only at Region. 
> However, since we are exposing this to CPs, CellComparator being @Private is 
> not the ideal way to do it. We have to change it to LimitedPrivate. But 
> CellComparator has a lot of additional methods which are internal (like where a 
> Cell is compared with an incoming byte[], used in index comparisons etc).
> One way to expose it is as being done now in HBASE-18826 - by exposing the 
> return type as Comparator<Cell>. But this is not powerful enough: it only allows 
> comparing whole cells. So we try to expose an IA.LimitedPrivate interface that is 
> more powerful and allows comparing individual cell components also. 
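
A rough sketch of the shape such an IA.LimitedPrivate interface could take (the method 
set below is illustrative, derived from the description above; it is not the committed 
interface):

{code}
@InterfaceAudience.LimitedPrivate(HBaseInterfaceAudience.COPROC)
public interface CellComparator extends Comparator<Cell> {
  // whole-cell comparison, i.e. what a plain Comparator<Cell> return type already gives
  int compare(Cell left, Cell right);
  // the extra power: comparing individual cell components
  int compareRows(Cell left, Cell right);
  int compareFamilies(Cell left, Cell right);
  int compareQualifiers(Cell left, Cell right);
  int compareTimestamps(Cell left, Cell right);
}
{code}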



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18233) We shouldn't wait for readlock in doMiniBatchMutation in case of deadlock

2017-10-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16207045#comment-16207045
 ] 

Sean Busbey commented on HBASE-18233:
-

{quote}
bq. Seems to show up on all recent branches... (I think). On at least master 
and branch-2 too.
oops... seems quite frequent for branch-1.2 (in this single JIRA we've already 
retried 5 times)... any JIRA tracking this timeout problem?

{quote}

There's a dev@hbase thread on it. I don't believe that's turned into a jira yet.

> We shouldn't wait for readlock in doMiniBatchMutation in case of deadlock
> -
>
> Key: HBASE-18233
> URL: https://issues.apache.org/jira/browse/HBASE-18233
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.7
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Blocker
> Fix For: 2.0.0, 1.4.0, 1.3.2, 1.2.7
>
> Attachments: HBASE-18233-branch-1.2.patch, 
> HBASE-18233-branch-1.2.v2.patch, HBASE-18233-branch-1.2.v3.patch, 
> HBASE-18233-branch-1.2.v4 (1).patch, HBASE-18233-branch-1.2.v4 (1).patch, 
> HBASE-18233-branch-1.2.v4.patch, HBASE-18233-branch-1.2.v4.patch, 
> HBASE-18233-branch-1.2.v4.patch, HBASE-18233-branch-1.2.v4.patch, 
> HBASE-18233-branch-1.2.v4.patch
>
>
> Please refer to the discussion in HBASE-18144:
> https://issues.apache.org/jira/browse/HBASE-18144?focusedCommentId=16051701&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16051701



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18233) We shouldn't wait for readlock in doMiniBatchMutation in case of deadlock

2017-10-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16207044#comment-16207044
 ] 

Sean Busbey commented on HBASE-18233:
-

you can just look at the nightly job rather than running a no-op patch.

> We shouldn't wait for readlock in doMiniBatchMutation in case of deadlock
> -
>
> Key: HBASE-18233
> URL: https://issues.apache.org/jira/browse/HBASE-18233
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.7
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Blocker
> Fix For: 2.0.0, 1.4.0, 1.3.2, 1.2.7
>
> Attachments: HBASE-18233-branch-1.2.patch, 
> HBASE-18233-branch-1.2.v2.patch, HBASE-18233-branch-1.2.v3.patch, 
> HBASE-18233-branch-1.2.v4 (1).patch, HBASE-18233-branch-1.2.v4 (1).patch, 
> HBASE-18233-branch-1.2.v4.patch, HBASE-18233-branch-1.2.v4.patch, 
> HBASE-18233-branch-1.2.v4.patch, HBASE-18233-branch-1.2.v4.patch, 
> HBASE-18233-branch-1.2.v4.patch
>
>
> Please refer to the discussion in HBASE-18144:
> https://issues.apache.org/jira/browse/HBASE-18144?focusedCommentId=16051701&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16051701



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19017) [AMv2] EnableTableProcedure is not retaining the assignments

2017-10-16 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-19017:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Pushed to master and branch-2.
Thanks to all for the reviews. 

> [AMv2] EnableTableProcedure is not retaining the assignments
> 
>
> Key: HBASE-19017
> URL: https://issues.apache.org/jira/browse/HBASE-19017
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 2.0.0-alpha-3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19017.patch
>
>
> Found this while working on HBASE-18946. In branch-1.4, whenever we enable a 
> table we try to retain the assignments. 
> But in branch-2 and trunk the EnableTableProcedure tries to get the location 
> from the existing regionNode. It always returns null because during the 
> region CLOSE while disabling a table, the regionNode's 'regionLocation' is 
> set to NULL, while the 'lastHost' actually holds the server name where the 
> region was hosted. But when assigning again we look at the last 
> RegionLocation and not the 'lastHost', and so go ahead with a new 
> assignment.
> On region CLOSE while disabling a table:
> {code}
> public void markRegionAsClosed(final RegionStateNode regionNode) throws 
> IOException {
> final RegionInfo hri = regionNode.getRegionInfo();
> synchronized (regionNode) {
>   State state = regionNode.transitionState(State.CLOSED, 
> RegionStates.STATES_EXPECTED_ON_CLOSE);
>   regionStates.removeRegionFromServer(regionNode.getRegionLocation(), 
> regionNode);
>   regionNode.setLastHost(regionNode.getRegionLocation());
>   regionNode.setRegionLocation(null);
>   regionStateStore.updateRegionLocation(regionNode.getRegionInfo(), state,
> regionNode.getRegionLocation()/*null*/, regionNode.getLastHost(),
> HConstants.NO_SEQNUM, regionNode.getProcedure().getProcId());
>   sendRegionClosedNotification(hri);
> }
> {code}
> In AssignProcedure
> {code}
> ServerName lastRegionLocation = regionNode.offline();
> {code}
> {code}
> public ServerName setRegionLocation(final ServerName serverName) {
>   ServerName lastRegionLocation = this.regionLocation;
>   if (LOG.isTraceEnabled() && serverName == null) {
> LOG.trace("Tracking when we are set to null " + this, new 
> Throwable("TRACE"));
>   }
>   this.regionLocation = serverName;
>   this.lastUpdate = EnvironmentEdgeManager.currentTime();
>   return lastRegionLocation;
> }
> {code}
> So further code in AssignProcedure
> {code}
>  boolean retain = false;
> if (!forceNewPlan) {
>   if (this.targetServer != null) {
> retain = targetServer.equals(lastRegionLocation);
> regionNode.setRegionLocation(targetServer);
>   } else {
> if (lastRegionLocation != null) {
>   // Try and keep the location we had before we offlined.
>   retain = true;
>   regionNode.setRegionLocation(lastRegionLocation);
> }
>   }
> }
> {code}
> Tries to do retainAssignment but fails because lastRegionLocation is always 
> null.
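
One hedged reading of a possible fix (an illustration only, not necessarily what 
HBASE-19017.patch does): fall back to 'lastHost' when the offlined location is null, so 
retention can still kick in:

{code}
// Sketch: prefer the offlined location, else the last host the region was on.
ServerName lastRegionLocation = regionNode.offline();
ServerName retainCandidate =
    lastRegionLocation != null ? lastRegionLocation : regionNode.getLastHost();
if (!forceNewPlan && this.targetServer == null && retainCandidate != null) {
  // Try and keep the location we had before we offlined/closed.
  retain = true;
  regionNode.setRegionLocation(retainCandidate);
}
{code}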



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19017) EnableTableProcedure is not retaining the assignments

2017-10-16 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16207035#comment-16207035
 ] 

ramkrishna.s.vasudevan commented on HBASE-19017:


[~easyliangjob]
Thanks for the comment and review. So will commit this now. The failed test 
case is unrelated.

> EnableTableProcedure is not retaining the assignments
> -
>
> Key: HBASE-19017
> URL: https://issues.apache.org/jira/browse/HBASE-19017
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 2.0.0-alpha-3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19017.patch
>
>
> Found this while working on HBASE-18946. In branch-1.4, whenever we enable a 
> table we try to retain the assignments. 
> But in branch-2 and trunk the EnableTableProcedure tries to get the location 
> from the existing regionNode. It always returns null because during the 
> region CLOSE while disabling a table, the regionNode's 'regionLocation' is 
> set to NULL, while the 'lastHost' actually holds the server name where the 
> region was hosted. But when assigning again we look at the last 
> RegionLocation and not the 'lastHost', and so go ahead with a new 
> assignment.
> On region CLOSE while disabling a table:
> {code}
> public void markRegionAsClosed(final RegionStateNode regionNode) throws 
> IOException {
> final RegionInfo hri = regionNode.getRegionInfo();
> synchronized (regionNode) {
>   State state = regionNode.transitionState(State.CLOSED, 
> RegionStates.STATES_EXPECTED_ON_CLOSE);
>   regionStates.removeRegionFromServer(regionNode.getRegionLocation(), 
> regionNode);
>   regionNode.setLastHost(regionNode.getRegionLocation());
>   regionNode.setRegionLocation(null);
>   regionStateStore.updateRegionLocation(regionNode.getRegionInfo(), state,
> regionNode.getRegionLocation()/*null*/, regionNode.getLastHost(),
> HConstants.NO_SEQNUM, regionNode.getProcedure().getProcId());
>   sendRegionClosedNotification(hri);
> }
> {code}
> In AssignProcedure
> {code}
> ServerName lastRegionLocation = regionNode.offline();
> {code}
> {code}
> public ServerName setRegionLocation(final ServerName serverName) {
>   ServerName lastRegionLocation = this.regionLocation;
>   if (LOG.isTraceEnabled() && serverName == null) {
> LOG.trace("Tracking when we are set to null " + this, new 
> Throwable("TRACE"));
>   }
>   this.regionLocation = serverName;
>   this.lastUpdate = EnvironmentEdgeManager.currentTime();
>   return lastRegionLocation;
> }
> {code}
> So further code in AssignProcedure
> {code}
>  boolean retain = false;
> if (!forceNewPlan) {
>   if (this.targetServer != null) {
> retain = targetServer.equals(lastRegionLocation);
> regionNode.setRegionLocation(targetServer);
>   } else {
> if (lastRegionLocation != null) {
>   // Try and keep the location we had before we offlined.
>   retain = true;
>   regionNode.setRegionLocation(lastRegionLocation);
> }
>   }
> }
> {code}
> Tries to do retainAssignment but fails because lastRegionLocation is always 
> null.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19007) Align Services Interfaces in Master and RegionServer

2017-10-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16207036#comment-16207036
 ] 

stack commented on HBASE-19007:
---

[~appy] [~ram_krish] or [~anoop.hbase] Perhaps you have ideas.

> Align Services Interfaces in Master and RegionServer
> 
>
> Key: HBASE-19007
> URL: https://issues.apache.org/jira/browse/HBASE-19007
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Priority: Blocker
> Attachments: HBASE-19007.master.001.patch
>
>
> HBASE-18183 adds a CoprocessorRegionServerService to give a view on 
> RegionServiceServices that is safe to expose to Coprocessors.
> On the Master-side, MasterServices becomes an Interface for exposing to 
> Coprocessors.
> We need to align the two.
> For background, see 
> https://issues.apache.org/jira/browse/HBASE-12260?focusedCommentId=16203820&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16203820
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19017) [AMv2] EnableTableProcedure is not retaining the assignments

2017-10-16 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-19017:
---
Summary: [AMv2] EnableTableProcedure is not retaining the assignments  
(was: EnableTableProcedure is not retaining the assignments)

> [AMv2] EnableTableProcedure is not retaining the assignments
> 
>
> Key: HBASE-19017
> URL: https://issues.apache.org/jira/browse/HBASE-19017
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 2.0.0-alpha-3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19017.patch
>
>
> Found this while working on HBASE-18946. In branch-1.4, whenever we enable a 
> table we try to retain the assignments. 
> But in branch-2 and trunk the EnableTableProcedure tries to get the location 
> from the existing regionNode. It always returns null because during the 
> region CLOSE while disabling a table, the regionNode's 'regionLocation' is 
> set to NULL, while the 'lastHost' actually holds the server name where the 
> region was hosted. But when assigning again we look at the last 
> RegionLocation and not the 'lastHost', and so go ahead with a new 
> assignment.
> On region CLOSE while disabling a table:
> {code}
> public void markRegionAsClosed(final RegionStateNode regionNode) throws 
> IOException {
> final RegionInfo hri = regionNode.getRegionInfo();
> synchronized (regionNode) {
>   State state = regionNode.transitionState(State.CLOSED, 
> RegionStates.STATES_EXPECTED_ON_CLOSE);
>   regionStates.removeRegionFromServer(regionNode.getRegionLocation(), 
> regionNode);
>   regionNode.setLastHost(regionNode.getRegionLocation());
>   regionNode.setRegionLocation(null);
>   regionStateStore.updateRegionLocation(regionNode.getRegionInfo(), state,
> regionNode.getRegionLocation()/*null*/, regionNode.getLastHost(),
> HConstants.NO_SEQNUM, regionNode.getProcedure().getProcId());
>   sendRegionClosedNotification(hri);
> }
> {code}
> In AssignProcedure
> {code}
> ServerName lastRegionLocation = regionNode.offline();
> {code}
> {code}
> public ServerName setRegionLocation(final ServerName serverName) {
>   ServerName lastRegionLocation = this.regionLocation;
>   if (LOG.isTraceEnabled() && serverName == null) {
> LOG.trace("Tracking when we are set to null " + this, new 
> Throwable("TRACE"));
>   }
>   this.regionLocation = serverName;
>   this.lastUpdate = EnvironmentEdgeManager.currentTime();
>   return lastRegionLocation;
> }
> {code}
> So further code in AssignProcedure
> {code}
>  boolean retain = false;
> if (!forceNewPlan) {
>   if (this.targetServer != null) {
> retain = targetServer.equals(lastRegionLocation);
> regionNode.setRegionLocation(targetServer);
>   } else {
> if (lastRegionLocation != null) {
>   // Try and keep the location we had before we offlined.
>   retain = true;
>   regionNode.setRegionLocation(lastRegionLocation);
> }
>   }
> }
> {code}
> Tries to do retainAssignment but fails because lastRegionLocation is always 
> null.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-10367) RegionServer graceful stop / decommissioning

2017-10-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16207030#comment-16207030
 ] 

stack commented on HBASE-10367:
---

+1 on patch. Nice. Make a nice release note [~jerryhe] for this nice addition so 
others find it.

> RegionServer graceful stop / decommissioning
> 
>
> Key: HBASE-10367
> URL: https://issues.apache.org/jira/browse/HBASE-10367
> Project: HBase
>  Issue Type: Improvement
>Reporter: Enis Soztutar
>Assignee: Jerry He
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-10367-master-2.patch, HBASE-10367-master.patch, 
> HBASE-10367-master.patch
>
>
> Right now, we have a weird way of node decommissioning / graceful stop, which 
> is a graceful_stop.sh bash script, and a region_mover ruby script, and some 
> draining server support which you have to manually write to a znode 
> (really!). Also draining servers are only partially supported in LB operations 
> (LB does take that into account for roundRobin assignment, but not for normal 
> balance) 
> See 
> http://hbase.apache.org/book/node.management.html and HBASE-3071
> I think we should support graceful stop as a first class citizen. Thinking 
> about it, it seems that the difference between regionserver stop and graceful 
> stop is that regionserver stop will close the regions, but the master will 
> only assign them after the znode is deleted. 
> In the new master design (or even before), if we allow RS to be able to close 
> regions on its own (without master initiating it), then graceful stop becomes 
> regular stop. The RS already closes the regions cleanly, and will reject new 
> region assignments, so that we don't need much of the balancer or draining 
> server trickery. 
> This ties into the new master/AM redesign (HBASE-5487), but still deserves 
> its own jira. Let's use this to brainstorm on the design. 
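
For contrast, the manual flow referenced above looks roughly like this today (hedged; 
the exact script options and the draining znode layout depend on the deployment and on 
the zookeeper.znode.parent setting):

{code}
# graceful_stop.sh drains regions off the server (via the region_mover ruby script), then stops it
$ ./bin/graceful_stop.sh REGIONSERVER_HOSTNAME

# "draining" support today means manually creating a znode named after the server, e.g.
$ hbase zkcli create /hbase/draining/REGIONSERVER_HOSTNAME,PORT,STARTCODE ""
{code}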



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19021) Restore a few important missing logics for balancer in 2.0

2017-10-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16207029#comment-16207029
 ] 

stack commented on HBASE-19021:
---

Thanks for taking a look [~jerryhe]

bq. hbase.master.loadbalance.bytable is not respected. It is always 'bytable'. 
Previous default is cluster wide, not by table.

Thanks. bytable is not implemented in AMv2 IIRC.

bq.  Servers with no assignments are not added for balance consideration.

You saying when you add a Server, it doesn't get Regions? (I don't recall this 
in testing but perhaps so).

bq. Crashed server is not removed from the in-memory server map in 
RegionStates, which affects balance.

Ok. Good.

bq. Draining marker is not respected when balancing.

You are fixing this over in another issue?

Thanks.

I skimmed the patch. It looks great. Thanks.



> Restore a few important missing logics for balancer in 2.0
> --
>
> Key: HBASE-19021
> URL: https://issues.apache.org/jira/browse/HBASE-19021
> Project: HBase
>  Issue Type: Bug
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Critical
> Attachments: HBASE-19021-master.patch
>
>
> After looking at the code, and some testing, I see the following things are 
> missing for balancer to work properly after AMv2.
> # hbase.master.loadbalance.bytable is not respected. It is always 'bytable'. 
> Previous default is cluster wide, not by table.
> # Servers with no assignments are not added for balance consideration.
> # Crashed server is not removed from the in-memory server map in 
> RegionStates, which affects balance.
> # Draining marker is not respected when balancing.
> Also try to re-enable {{TestRegionRebalancing}}, which has a 
> {{testRebalanceOnRegionServerNumberChange}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18233) We shouldn't wait for readlock in doMiniBatchMutation in case of deadlock

2017-10-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-18233:
--
Attachment: HBASE-18233-branch-1.2.v4 (1).patch

Retry, though I should try adding a no-op patch to see if it still times out.

> We shouldn't wait for readlock in doMiniBatchMutation in case of deadlock
> -
>
> Key: HBASE-18233
> URL: https://issues.apache.org/jira/browse/HBASE-18233
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.7
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Blocker
> Fix For: 2.0.0, 1.4.0, 1.3.2, 1.2.7
>
> Attachments: HBASE-18233-branch-1.2.patch, 
> HBASE-18233-branch-1.2.v2.patch, HBASE-18233-branch-1.2.v3.patch, 
> HBASE-18233-branch-1.2.v4 (1).patch, HBASE-18233-branch-1.2.v4 (1).patch, 
> HBASE-18233-branch-1.2.v4.patch, HBASE-18233-branch-1.2.v4.patch, 
> HBASE-18233-branch-1.2.v4.patch, HBASE-18233-branch-1.2.v4.patch, 
> HBASE-18233-branch-1.2.v4.patch
>
>
> Please refer to the discussion in HBASE-18144:
> https://issues.apache.org/jira/browse/HBASE-18144?focusedCommentId=16051701&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16051701



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18233) We shouldn't wait for readlock in doMiniBatchMutation in case of deadlock

2017-10-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16207019#comment-16207019
 ] 

stack commented on HBASE-18233:
---

That's pretty good. Retry.

> We shouldn't wait for readlock in doMiniBatchMutation in case of deadlock
> -
>
> Key: HBASE-18233
> URL: https://issues.apache.org/jira/browse/HBASE-18233
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.7
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Blocker
> Fix For: 2.0.0, 1.4.0, 1.3.2, 1.2.7
>
> Attachments: HBASE-18233-branch-1.2.patch, 
> HBASE-18233-branch-1.2.v2.patch, HBASE-18233-branch-1.2.v3.patch, 
> HBASE-18233-branch-1.2.v4 (1).patch, HBASE-18233-branch-1.2.v4.patch, 
> HBASE-18233-branch-1.2.v4.patch, HBASE-18233-branch-1.2.v4.patch, 
> HBASE-18233-branch-1.2.v4.patch, HBASE-18233-branch-1.2.v4.patch
>
>
> Please refer to the discussion in HBASE-18144:
> https://issues.apache.org/jira/browse/HBASE-18144?focusedCommentId=16051701&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16051701



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19007) Align Services Interfaces in Master and RegionServer

2017-10-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16207018#comment-16207018
 ] 

stack commented on HBASE-19007:
---

On @appy comments...

bq.  Even if the original author designed these for CP only... 

No. MS and RSS were for internal, mock/test use originally. Only Region 
Interface to my knowledge was for CP only.

bq. What do you say...

Yeah. Before you commented, I was trying to work it so CPs got everything from 
CpEnv (no Server exposed). Code actually looks better. Expectations are better 
managed too; i.e. if you need anything, get it from the environment. But see my 
problem above. Raw CpEnv base Interface somehow has to produce access to the 
'servers' ZKW. A few CPs depend on this working. Others want access to 
HRegionServer.

Thanks [~appy]

> Align Services Interfaces in Master and RegionServer
> 
>
> Key: HBASE-19007
> URL: https://issues.apache.org/jira/browse/HBASE-19007
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Priority: Blocker
> Attachments: HBASE-19007.master.001.patch
>
>
> HBASE-18183 adds a CoprocessorRegionServerService to give a view on 
> RegionServiceServices that is safe to expose to Coprocessors.
> On the Master-side, MasterServices becomes an Interface for exposing to 
> Coprocessors.
> We need to align the two.
> For background, see 
> https://issues.apache.org/jira/browse/HBASE-12260?focusedCommentId=16203820&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16203820
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19007) Align Services Interfaces in Master and RegionServer

2017-10-16 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16207012#comment-16207012
 ] 

stack commented on HBASE-19007:
---

.001 WIP. Bit stuck because we've given access that should not be allowed up to 
this; i.e. a generic CoprocessorEnvironment must somehow produce a 
ZooKeeperWatcher, or a RegionCoprocessorEnvironment is supposed to give access 
to an HRegionServer instance (I'd think that a RCE should not require a hosting 
HRS). Ain't sure how to progress; i.e. make a backdoor not available generally 
to all CPs.

> Align Services Interfaces in Master and RegionServer
> 
>
> Key: HBASE-19007
> URL: https://issues.apache.org/jira/browse/HBASE-19007
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Priority: Blocker
> Attachments: HBASE-19007.master.001.patch
>
>
> HBASE-18183 adds a CoprocessorRegionServerService to give a view on 
> RegionServiceServices that is safe to expose to Coprocessors.
> On the Master-side, MasterServices becomes an Interface for exposing to 
> Coprocessors.
> We need to align the two.
> For background, see 
> https://issues.apache.org/jira/browse/HBASE-12260?focusedCommentId=16203820&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16203820
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19007) Align Services Interfaces in Master and RegionServer

2017-10-16 Thread stack (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19007?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

stack updated HBASE-19007:
--
Attachment: HBASE-19007.master.001.patch

> Align Services Interfaces in Master and RegionServer
> 
>
> Key: HBASE-19007
> URL: https://issues.apache.org/jira/browse/HBASE-19007
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Priority: Blocker
> Attachments: HBASE-19007.master.001.patch
>
>
> HBASE-18183 adds a CoprocessorRegionServerService to give a view on 
> RegionServiceServices that is safe to expose to Coprocessors.
> On the Master-side, MasterServices becomes an Interface for exposing to 
> Coprocessors.
> We need to align the two.
> For background, see 
> https://issues.apache.org/jira/browse/HBASE-12260?focusedCommentId=16203820&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16203820
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19020) TestXmlParsing exception checking relies on a particular xml implementation without declaring it.

2017-10-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16207002#comment-16207002
 ] 

Sean Busbey commented on HBASE-19020:
-

The first parsing exception is from the internal version of Xerces that ships 
with the sun JDK.

The second parsing exception is from the dependency 
{{com.fasterxml.woodstox:woodstox-core:jar:5.0.3:compile}}, which is brought in 
by hadoop-common 3.0.0-beta1. Examining that jar shows it's definitely 
implementing the requested java API (via 
{{META-INF/services/javax.xml.stream.XMLInputFactory}}).
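
A small hedged probe (not part of any patch here) that shows which StAX implementation 
the JVM resolves through that services file:

{code}
import javax.xml.stream.XMLInputFactory;

public class WhichStax {
  public static void main(String[] args) {
    // newInstance() consults META-INF/services/javax.xml.stream.XMLInputFactory on the
    // classpath; with woodstox-core present it returns a Woodstox factory, otherwise
    // the JDK-internal implementation.
    XMLInputFactory factory = XMLInputFactory.newInstance();
    System.out.println(factory.getClass().getName());
  }
}
{code}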

> TestXmlParsing exception checking relies on a particular xml implementation 
> without declaring it.
> -
>
> Key: HBASE-19020
> URL: https://issues.apache.org/jira/browse/HBASE-19020
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies, REST
>Affects Versions: 1.3.0, 1.4.0, 1.2.5, 1.1.9, 2.0.0-alpha-1
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-beta-1, 1.1.13
>
>
> The test added in HBASE-17424 is overly specific:
> {code}
>   @Test
>   public void testFailOnExternalEntities() throws Exception {
> final String externalEntitiesXml =
> ""
> + "  ] >"
> + " ";
> Client client = mock(Client.class);
> RemoteAdmin admin = new RemoteAdmin(client, HBaseConfiguration.create(), 
> null);
> Response resp = new Response(200, null, externalEntitiesXml.getBytes());
> when(client.get("/version/cluster", 
> Constants.MIMETYPE_XML)).thenReturn(resp);
> try {
>   admin.getClusterVersion();
>   fail("Expected getClusterVersion() to throw an exception");
> } catch (IOException e) {
>   final String exceptionText = StringUtils.stringifyException(e);
>   final String expectedText = "The entity \"xee\" was referenced, but not 
> declared.";
>   LOG.error("exception text: " + exceptionText, e);
>   assertTrue("Exception does not contain expected text", 
> exceptionText.contains(expectedText));
> }
>   }
> {code}
> Specifically, when running against Hadoop 3.0.0-beta1 this test fails because 
> the exception text is different, though I'm still figuring out why.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18233) We shouldn't wait for readlock in doMiniBatchMutation in case of deadlock

2017-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206999#comment-16206999
 ] 

Hadoop QA commented on HBASE-18233:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m  
1s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
23s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} branch-1.2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} branch-1.2 passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
21s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
18s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
59s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
0s{color} | {color:green} branch-1.2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} branch-1.2 passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
22s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
30m  6s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3. 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 33m  4s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
10s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 89m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hbase.regionserver.wal.TestLogRollingNoCluster |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:933f4b3 |
| JIRA Issue | HBASE-18233 |
| JIRA Patch URL | 

[jira] [Commented] (HBASE-19021) Restore a few important missing logics for balancer in 2.0

2017-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206983#comment-16206983
 ] 

Hadoop QA commented on HBASE-19021:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  6m 
39s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
2s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
31s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 2 line(s) with tabs. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
37s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
52m 20s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}137m 
55s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}217m  3s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:5d60123 |
| JIRA Issue | HBASE-19021 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12892498/HBASE-19021-master.patch
 |
| Optional Tests |  asflicense  shadedjars  javac  javadoc  unit  findbugs  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 6b7873fd0c55 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 51489b20 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC3 |
| whitespace | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9146/artifact/patchprocess/whitespace-tabs.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9146/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9146/console |
| Powered by | Apache Yetus 0.4.0   http://yetus.apache.org |


This message was 

[jira] [Commented] (HBASE-10367) RegionServer graceful stop / decommissioning

2017-10-16 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-10367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206970#comment-16206970
 ] 

Jerry He commented on HBASE-10367:
--

[~stack]  are you good with the patch?  Any more comments?
Comments from others?

> RegionServer graceful stop / decommissioning
> 
>
> Key: HBASE-10367
> URL: https://issues.apache.org/jira/browse/HBASE-10367
> Project: HBase
>  Issue Type: Improvement
>Reporter: Enis Soztutar
>Assignee: Jerry He
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-10367-master-2.patch, HBASE-10367-master.patch, 
> HBASE-10367-master.patch
>
>
> Right now, we have a weird way of node decommissioning / graceful stop, which 
> is a graceful_stop.sh bash script, and a region_mover ruby script, and some 
> draining server support which you have to manually write to a znode 
> (really!). Also draining servers are only partially supported in LB operations 
> (LB does take that into account for roundRobin assignment, but not for normal 
> balance) 
> See 
> http://hbase.apache.org/book/node.management.html and HBASE-3071
> I think we should support graceful stop as a first class citizen. Thinking 
> about it, it seems that the difference between regionserver stop and graceful 
> stop is that regionserver stop will close the regions, but the master will 
> only assign them after the znode is deleted. 
> In the new master design (or even before), if we allow RS to be able to close 
> regions on its own (without master initiating it), then graceful stop becomes 
> regular stop. The RS already closes the regions cleanly, and will reject new 
> region assignments, so that we don't need much of the balancer or draining 
> server trickery. 
> This ties into the new master/AM redesign (HBASE-5487), but still deserves 
> its own jira. Let's use this to brainstorm on the design. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19021) Restore a few important missing logics for balancer in 2.0

2017-10-16 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-19021:
-
Attachment: HBASE-19021-master.patch

> Restore a few important missing logics for balancer in 2.0
> --
>
> Key: HBASE-19021
> URL: https://issues.apache.org/jira/browse/HBASE-19021
> Project: HBase
>  Issue Type: Bug
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Critical
> Attachments: HBASE-19021-master.patch
>
>
> After looking at the code, and some testing, I see the following things are 
> missing for balancer to work properly after AMv2.
> # hbase.master.loadbalance.bytable is not respected. It is always 'bytable'. 
> Previous default is cluster wide, not by table.
> # Servers with no assignments are not added for balance consideration.
> # Crashed server is not removed from the in-memory server map in 
> RegionStates, which affects balance.
> # Draining marker is not respected when balancing.
> Also try to re-enable {{TestRegionRebalancing}}, which has a 
> {{testRebalanceOnRegionServerNumberChange}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19021) Restore a few important missing logics for balancer in 2.0

2017-10-16 Thread Jerry He (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206963#comment-16206963
 ] 

Jerry He commented on HBASE-19021:
--

bq. Why the negation ?
{code}
   * @param forceByCluster a flag to force to aggregate the server-load to the cluster level
   * @return A clone of current assignments by table.
   */
  public Map<TableName, Map<ServerName, List<RegionInfo>>> getAssignmentsByTable(
      final boolean forceByCluster) {
    if (!forceByCluster) return getAssignmentsByTable();
{code}
!isByTable will be cluster level.
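
A hedged sketch of that call pattern from the balancer side (names and generics are 
approximations; 'regionStates' and the exact owning class are assumptions, not a quote 
from the patch):

{code}
// Cluster-wide balancing corresponds to forceByCluster == true, i.e. !isByTable.
boolean isByTable = conf.getBoolean("hbase.master.loadbalance.bytable", false);
Map<TableName, Map<ServerName, List<RegionInfo>>> assignments =
    regionStates.getAssignmentsByTable(!isByTable);
{code}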

> Restore a few important missing logics for balancer in 2.0
> --
>
> Key: HBASE-19021
> URL: https://issues.apache.org/jira/browse/HBASE-19021
> Project: HBase
>  Issue Type: Bug
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Critical
>
> After looking at the code, and some testing, I see the following things are 
> missing for balancer to work properly after AMv2.
> # hbase.master.loadbalance.bytable is not respected. It is always 'bytable'. 
> Previous default is cluster wide, not by table.
> # Servers with no assignments are not added for balance consideration.
> # Crashed server is not removed from the in-memory server map in 
> RegionStates, which affects balance.
> # Draining marker is not respected when balancing.
> Also try to re-enable {{TestRegionRebalancing}}, which has a 
> {{testRebalanceOnRegionServerNumberChange}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19021) Restore a few important missing logics for balancer in 2.0

2017-10-16 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-19021:
-
Attachment: (was: HBASE-19021-master.patch)

> Restore a few important missing logics for balancer in 2.0
> --
>
> Key: HBASE-19021
> URL: https://issues.apache.org/jira/browse/HBASE-19021
> Project: HBase
>  Issue Type: Bug
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Critical
>
> After looking at the code, and some testing, I see the following things are 
> missing for balancer to work properly after AMv2.
> # hbase.master.loadbalance.bytable is not respected. It is always 'bytable'. 
> Previous default is cluster wide, not by table.
> # Servers with no assignments are not added for balance consideration.
> # Crashed server is not removed from the in-memory server map in 
> RegionStates, which affects balance.
> # Draining marker is not respected when balancing.
> Also try to re-enable {{TestRegionRebalancing}}, which has a 
> {{testRebalanceOnRegionServerNumberChange}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18233) We shouldn't wait for readlock in doMiniBatchMutation in case of deadlock

2017-10-16 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18233?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-18233:
--
Attachment: HBASE-18233-branch-1.2.v4.patch

bq. Seems to show up on all recent branches... (I think). On at least master 
and branch-2 too.
oops... seems quite frequent for branch-1.2 (in this single JIRA we've already 
retried 5 times)... any JIRA tracking this timeout problem?

Retry HadoopQA...

> We shouldn't wait for readlock in doMiniBatchMutation in case of deadlock
> -
>
> Key: HBASE-18233
> URL: https://issues.apache.org/jira/browse/HBASE-18233
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.7
>Reporter: Allan Yang
>Assignee: Allan Yang
>Priority: Blocker
> Fix For: 2.0.0, 1.4.0, 1.3.2, 1.2.7
>
> Attachments: HBASE-18233-branch-1.2.patch, 
> HBASE-18233-branch-1.2.v2.patch, HBASE-18233-branch-1.2.v3.patch, 
> HBASE-18233-branch-1.2.v4 (1).patch, HBASE-18233-branch-1.2.v4.patch, 
> HBASE-18233-branch-1.2.v4.patch, HBASE-18233-branch-1.2.v4.patch, 
> HBASE-18233-branch-1.2.v4.patch, HBASE-18233-branch-1.2.v4.patch
>
>
> Please refer to the discussion in HBASE-18144:
> https://issues.apache.org/jira/browse/HBASE-18144?focusedCommentId=16051701&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16051701



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19022) Untangle and split hbase-server module

2017-10-16 Thread Yu Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206950#comment-16206950
 ] 

Yu Li commented on HBASE-19022:
---

Checked the doc and it's a really nice analysis through structure101. I think 
this is good to include in our 3.0 plan.

> Untangle and split hbase-server module
> --
>
> Key: HBASE-19022
> URL: https://issues.apache.org/jira/browse/HBASE-19022
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>
> https://docs.google.com/document/d/1wZAimGcJzc0jys0-EATRi0CyGMVlcIxTvZtegPF4mfw/edit?usp=sharing



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16290) Dump summary of callQueue content; can help debugging

2017-10-16 Thread Sreeram Venkatasubramanian (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206949#comment-16206949
 ] 

Sreeram Venkatasubramanian commented on HBASE-16290:


Hi [~chia7712], I have implemented the suggested change for FifoRpcScheduler. 
FifoRpcScheduler uses a ThreadPoolExecutor. Unlike SimpleRpcScheduler, there 
seems to be no easy way to unit test the number of tasks inside the 
ThreadPoolExecutor. A minimum of one worker thread always runs in the 
ThreadPoolExecutor and it consumes some tasks from the work queue. So the 
number of tasks submitted does not end up matching the number held by the 
ThreadPoolExecutor (which is smaller). Any suggestions on this?
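
One hedged option (an illustration of the counting problem above, not a claim about what 
the patch should do): park the lone worker thread behind a latch so that every call 
dispatched afterwards stays in the work queue and can be counted deterministically:

{code}
// Needs java.util.concurrent.* and org.junit.Assert.assertEquals.
ThreadPoolExecutor executor = new ThreadPoolExecutor(1, 1, 60, TimeUnit.SECONDS,
    new LinkedBlockingQueue<Runnable>());
CountDownLatch release = new CountDownLatch(1);
// Occupy the single worker so nothing is drained from the queue while we count.
executor.execute(() -> {
  try {
    release.await();
  } catch (InterruptedException e) {
    Thread.currentThread().interrupt();
  }
});
for (int i = 0; i < 5; i++) {
  executor.execute(() -> { });   // these five can only wait in the work queue
}
assertEquals(5, executor.getQueue().size());
release.countDown();
executor.shutdown();
{code}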

> Dump summary of callQueue content; can help debugging
> -
>
> Key: HBASE-16290
> URL: https://issues.apache.org/jira/browse/HBASE-16290
> Project: HBase
>  Issue Type: Bug
>  Components: Operability
>Affects Versions: 2.0.0
>Reporter: stack
>Assignee: Sreeram Venkatasubramanian
>  Labels: beginner
> Fix For: 2.0.0
>
> Attachments: DebugDump_screenshot.png, HBASE-16290.master.001.patch, 
> HBASE-16290.master.002.patch, HBASE-16290.master.003.patch, 
> HBASE-16290.master.004.patch, HBASE-16290.master.005.patch, Sample Summary.txt
>
>
> Being able to get a clue about what is in a backed-up callQueue could give insight 
> on what is going on on a jacked server. Just needs to summarize count, sizes, 
> call types. Useful debugging. In a servlet?



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-15172) Support setting storage policy in bulkload

2017-10-16 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-15172?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-15172:
--
Release Note: After HBASE-15172/HBASE-19016 we could set the storage policy 
through the "hbase.hstore.block.storage.policy" property for bulkload, or 
"hbase.hstore.block.storage.policy.<family>" for a specified family. 
Supported storage policies include: ALL_SSD, ONE_SSD, HOT, WARM, COLD, etc.  
(was: After HBASE-15172 we could set the storage policy through the 
"hbase.hstore.storagepolicy" property for bulkload, or 
"hbase.hstore.storagepolicy.<family>" for a specified family. Supported 
storage policies include: ALL_SSD, ONE_SSD, HOT, WARM, COLD, etc.)
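
A hedged example of the bulkload-side settings described in the note (property names as 
in the note; the family name "cf" is made up):

{code}
Configuration conf = HBaseConfiguration.create();
// default policy for every family written by the bulkload job
conf.set("hbase.hstore.block.storage.policy", "ONE_SSD");
// override for one (hypothetical) family named "cf"
conf.set("hbase.hstore.block.storage.policy.cf", "ALL_SSD");
{code}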

> Support setting storage policy in bulkload
> --
>
> Key: HBASE-15172
> URL: https://issues.apache.org/jira/browse/HBASE-15172
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Yu Li
>Assignee: Yu Li
> Fix For: 2.0.0
>
> Attachments: HBASE-15172.patch, HBASE-15172.v2.patch
>
>
> When using tiered HFile storage, we should be able to generate hfiles with the 
> correct storage type during bulkload. This JIRA is targeting making that 
> possible.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19016) Coordinate storage policy property name for table schema and bulkload

2017-10-16 Thread Yu Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19016?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yu Li updated HBASE-19016:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Pushed into master and branch-2. Thanks [~tedyu] for review.

> Coordinate storage policy property name for table schema and bulkload
> -
>
> Key: HBASE-19016
> URL: https://issues.apache.org/jira/browse/HBASE-19016
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-3
>Reporter: Yu Li
>Assignee: Yu Li
>Priority: Minor
> Fix For: 3.0.0, 2.0.0-alpha-4
>
> Attachments: HBASE-19016.patch
>
>
> As pointed out in this [email|https://s.apache.org/Rp2J] in our user mailing 
> list, the property name for specifying storage policy in table schema 
> (HBASE-14061) and bulkload (HBASE-15172) are different. Since these two 
> features are all for 2.0 only and not yet released, it would be better to 
> align the name.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18950) Remove Optional parameters in AsyncAdmin interface

2017-10-16 Thread Guanghao Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206915#comment-16206915
 ] 

Guanghao Zhang commented on HBASE-18950:


Ping [~Apache9] for reviewing.

> Remove Optional parameters in AsyncAdmin interface
> --
>
> Key: HBASE-18950
> URL: https://issues.apache.org/jira/browse/HBASE-18950
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Duo Zhang
>Assignee: Guanghao Zhang
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18950.master.001.patch, 
> HBASE-18950.master.002.patch, HBASE-18950.master.003.patch, 
> HBASE-18950.master.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19023) Usage for rowcounter in refguide is out of sync with code

2017-10-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19023?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-19023:
---
Affects Version/s: 2.0.0-alpha-3
   Labels: document  (was: )

> Usage for rowcounter in refguide is out of sync with code
> -
>
> Key: HBASE-19023
> URL: https://issues.apache.org/jira/browse/HBASE-19023
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-3
>Reporter: Ted Yu
>  Labels: document
>
> src/main/asciidoc/_chapters/troubleshooting.adoc:
> {code}
> HADOOP_CLASSPATH=`hbase classpath` hadoop jar 
> $HBASE_HOME/hbase-server-VERSION.jar rowcounter usertable
> {code}
> The class is no longer in hbase-server jar. It is in hbase-mapreduce jar.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19023) Usage for rowcounter in refguide is out of sync with code

2017-10-16 Thread Ted Yu (JIRA)
Ted Yu created HBASE-19023:
--

 Summary: Usage for rowcounter in refguide is out of sync with code
 Key: HBASE-19023
 URL: https://issues.apache.org/jira/browse/HBASE-19023
 Project: HBase
  Issue Type: Bug
Reporter: Ted Yu


src/main/asciidoc/_chapters/troubleshooting.adoc:
{code}
HADOOP_CLASSPATH=`hbase classpath` hadoop jar 
$HBASE_HOME/hbase-server-VERSION.jar rowcounter usertable
{code}
The class is no longer in hbase-server jar. It is in hbase-mapreduce jar.
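A likely corrected invocation, assuming the jar keeps the same naming pattern with 
the class now shipped in hbase-mapreduce (the exact jar path is an assumption here, 
not the committed doc change):
{code}
HADOOP_CLASSPATH=`hbase classpath` hadoop jar $HBASE_HOME/hbase-mapreduce-VERSION.jar rowcounter usertable
{code}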



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19022) Untangle and split hbase-server module

2017-10-16 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206870#comment-16206870
 ] 

Appy commented on HBASE-19022:
--

Ah, that's because it was shared from a Cloudera account. Reshared it from my 
personal account and updated the link in the description.
Sorry for the trouble [~zyork]. :)

> Untangle and split hbase-server module
> --
>
> Key: HBASE-19022
> URL: https://issues.apache.org/jira/browse/HBASE-19022
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>
> https://docs.google.com/document/d/1wZAimGcJzc0jys0-EATRi0CyGMVlcIxTvZtegPF4mfw/edit?usp=sharing



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19001) Remove the hooks in RegionObserver which are designed to construct a StoreScanner which is marked as IA.Private

2017-10-16 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206869#comment-16206869
 ] 

Duo Zhang commented on HBASE-19001:
---

OK, the problem with Tephra is in flush and compaction. It does two things: 
first it sets the scan to read all versions, and second it adds a Filter.

I think the first one is not a problem for flush/compaction, since we always read 
all versions during flush/compaction anyway. Flush/compaction for MOB may be 
different, but I think that is OK? The MOB file works like external storage.

For the filter, the code is
{code}
  static class IncludeInProgressFilter extends FilterBase {
    private final long visibilityUpperBound;
    private final Set<Long> invalidIds;
    private final Filter txFilter;

    public IncludeInProgressFilter(long upperBound, Collection<Long> invalids,
        Filter transactionFilter) {
      this.visibilityUpperBound = upperBound;
      this.invalidIds = Sets.newHashSet(invalids);
      this.txFilter = transactionFilter;
    }

    @Override
    public ReturnCode filterKeyValue(Cell cell) throws IOException {
      // include all cells visible to in-progress transactions, except for those already marked as invalid
      long ts = cell.getTimestamp();
      if (ts > visibilityUpperBound) {
        // include everything that could still be in-progress except invalids
        if (invalidIds.contains(ts)) {
          return ReturnCode.SKIP;
        }
        return ReturnCode.INCLUDE;
      }
      return txFilter.filterKeyValue(cell);
    }
  }
{code}

It just implements filterKeyValue, so I think it is easy to change it to wrap the 
InternalScanner and do the filtering on the Cell list returned by 
InternalScanner.next. There is an example:

https://github.com/apache/hbase/blob/master/hbase-examples/src/main/java/org/apache/hadoop/hbase/coprocessor/example/ZooKeeperScanPolicyObserver.java

{code}
  private InternalScanner wrap(InternalScanner scanner) {
    OptionalLong optExpireBefore = getExpireBefore();
    if (!optExpireBefore.isPresent()) {
      return scanner;
    }
    long expireBefore = optExpireBefore.getAsLong();
    return new DelegatingInternalScanner(scanner) {

      @Override
      public boolean next(List<Cell> result, ScannerContext scannerContext) throws IOException {
        boolean moreRows = scanner.next(result, scannerContext);
        result.removeIf(c -> c.getTimestamp() < expireBefore);
        return moreRows;
      }
    };
  }
{code}

Thanks.
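For illustration, the IncludeInProgressFilter logic above could be expressed as 
post-filtering in such a wrapper. This is a hypothetical, simplified sketch in the 
same fragment style as the example above, reusing DelegatingInternalScanner; it 
keeps cells at or below visibilityUpperBound as-is instead of delegating them to 
the wrapped txFilter:
{code}
// Hypothetical sketch: drop cells from invalid transactions on the result list
// returned by InternalScanner.next, instead of injecting a Filter into StoreScanner.
private InternalScanner wrapWithInProgressFiltering(InternalScanner scanner,
    long visibilityUpperBound, Set<Long> invalidIds) {
  return new DelegatingInternalScanner(scanner) {

    @Override
    public boolean next(List<Cell> result, ScannerContext scannerContext) throws IOException {
      boolean moreRows = scanner.next(result, scannerContext);
      // Remove cells above the visibility upper bound that belong to invalid transactions.
      result.removeIf(c -> c.getTimestamp() > visibilityUpperBound
          && invalidIds.contains(c.getTimestamp()));
      return moreRows;
    }
  };
}
{code}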

> Remove the hooks in RegionObserver which are designed to construct a 
> StoreScanner which is marked as IA.Private
> ---
>
> Key: HBASE-19001
> URL: https://issues.apache.org/jira/browse/HBASE-19001
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-19001.patch
>
>
> There are three methods here
> {code}
> KeyValueScanner preStoreScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
>   Store store, Scan scan, NavigableSet<byte[]> targetCols, KeyValueScanner s, long readPt)
>   throws IOException;
> InternalScanner preFlushScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
>   Store store, List<KeyValueScanner> scanners, InternalScanner s, long readPoint)
>   throws IOException;
> InternalScanner preCompactScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
>   Store store, List<? extends KeyValueScanner> scanners, ScanType scanType, long earliestPutTs,
>   InternalScanner s, CompactionLifeCycleTracker tracker, CompactionRequest request,
>   long readPoint) throws IOException;
> {code}
> For the flush and compact ones, we've discussed many times, it is not safe to 
> let user inject a Filter or even implement their own InternalScanner using 
> the store file scanners, as our correctness highly depends on the complicated 
> logic in SQM and StoreScanner. CP users are expected to wrap the original 
> InternalScanner(it is a StoreScanner anyway) in preFlush/preCompact methods 
> to do filtering or something else.
> For preStoreScannerOpen it even returns a KeyValueScanner which is marked as 
> IA.Private... This is less hurt but still, we've decided to not expose 
> StoreScanner to CP users so here this method is useless. CP users can use 
> preGetOp and preScannerOpen method to modify the Get/Scan object passed in to 
> inject into the scan operation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19022) Untangle and split hbase-server module

2017-10-16 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19022:
-
Description: 
https://docs.google.com/document/d/1wZAimGcJzc0jys0-EATRi0CyGMVlcIxTvZtegPF4mfw/edit?usp=sharing
  (was: 
https://docs.google.com/document/d/1n6v0gUMfpGV4rjQvmYQzVIKpTAExg_lMN2ng-f_OotE/edit#)

> Untangle and split hbase-server module
> --
>
> Key: HBASE-19022
> URL: https://issues.apache.org/jira/browse/HBASE-19022
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>
> https://docs.google.com/document/d/1wZAimGcJzc0jys0-EATRi0CyGMVlcIxTvZtegPF4mfw/edit?usp=sharing



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19022) Untangle and split hbase-server module

2017-10-16 Thread Zach York (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19022?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206867#comment-16206867
 ] 

Zach York commented on HBASE-19022:
---

The Google doc appears locked down; can you make it public?

> Untangle and split hbase-server module
> --
>
> Key: HBASE-19022
> URL: https://issues.apache.org/jira/browse/HBASE-19022
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>
> https://docs.google.com/document/d/1n6v0gUMfpGV4rjQvmYQzVIKpTAExg_lMN2ng-f_OotE/edit#



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-11743) Add unit test for the fix that sorts custom value of BUCKET_CACHE_BUCKETS_KEY

2017-10-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-11743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206865#comment-16206865
 ] 

Ted Yu commented on HBASE-11743:


[~gustavoanatoly]:
Are you working on this?

> Add unit test for the fix that sorts custom value of BUCKET_CACHE_BUCKETS_KEY
> -
>
> Key: HBASE-11743
> URL: https://issues.apache.org/jira/browse/HBASE-11743
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Assignee: Gustavo Anatoly
>Priority: Minor
>
> HBASE-11550 sorts the custom value of BUCKET_CACHE_BUCKETS_KEY such that 
> there is no wastage in bucket allocation.
> This JIRA is to add unit test for the fix so that there is no regression in 
> the future.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-18907) Methods missing rpc timeout parameter in HTable

2017-10-16 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18907?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HBASE-18907:
---
Affects Version/s: 1.2.6

> Methods missing rpc timeout parameter in HTable
> ---
>
> Key: HBASE-18907
> URL: https://issues.apache.org/jira/browse/HBASE-18907
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 1.2.6
>Reporter: Ted Yu
>  Labels: client
>
> When revisiting HBASE-15645, I found that two methods (mutateRow and 
> checkAndMutate) miss the rpcTimeout parameter to newCaller() in HTable:
> {code}
> return rpcCallerFactory. newCaller().callWithRetries(callable, 
> this.operationTimeout);
> {code}
> I checked branch-1.2
> Other branch(es) may have the same problem



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18898) Provide way for the core flow to know whether CP implemented each of the hooks

2017-10-16 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206860#comment-16206860
 ] 

Appy commented on HBASE-18898:
--

bq. ...don't have to enforce particular method signatures. The consumers can 
opt in to what they want.
bq. I also agree it's nice not to need required method signatures.
bq. Most of our CP compat breaks have been addition of methods to interfaces or 
modification of method signatures. Both types of incompatibility would be much 
less likely to occur. We would implement the wiring up of code to framework 
guided by annotations and variations in the shape of interface/classes and 
method signatures would not be a concern any longer. 

Sounds amazing.
But I am unaware of the framework you guys are talking about. Let me learn about 
it first. Yay, new stuff. Looks like this will be fun.



> Provide way for the core flow to know whether CP implemented each of the hooks
> --
>
> Key: HBASE-18898
> URL: https://issues.apache.org/jira/browse/HBASE-18898
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors, Performance
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
>
> This came as a discussion topic at the tail of HBASE-17732
> Can we have a way in the code (before trying to call the hook) to know 
> whether the user has implemented one particular hook or not? eg: On write 
> related hooks only prePut() might be what the user CP implemented. All others 
> are just dummy impl from the interface. Can we have a way for the core code 
> to know this and avoid the call to other dummy hooks fully? Some times we do 
> some processing for just calling CP hooks (Say we have to make a POJO out of 
> PB object for calling) and if the user CP not impl this hook, we can avoid 
> this extra work fully. The pain of this will be more when we have to later 
> deprecate one hook and add new. So the dummy impl in new hook has to call the 
> old one and that might be doing some extra work normally.
> If the CP f/w itself is having a way to tell this, the core code can make 
> use. What am expecting is some thing like in PB way where we can call 
> CPObject.hasPre(), then CPObject. pre ().. Should not like asking 
> users to impl this extra ugly thing. When the CP instance is loaded in the 
> RS/HM, that object will be having this info also. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18943) Cannot start mini dfs cluster using hadoop-3 in test due to NoSuchMethodError in jetty

2017-10-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18943?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206858#comment-16206858
 ] 

Ted Yu commented on HBASE-18943:


HADOOP-14930 was closed as Won't Fix.

> Cannot start mini dfs cluster using hadoop-3 in test due to NoSuchMethodError 
> in jetty 
> ---
>
> Key: HBASE-18943
> URL: https://issues.apache.org/jira/browse/HBASE-18943
> Project: HBase
>  Issue Type: Test
>Reporter: Ted Yu
>Priority: Critical
>
> When starting mini dfs cluster against hadoop-3:
> {code}
>   dfsCluster = startMiniDFSCluster(numDataNodes, dataNodeHosts);
> {code}
> The above call would end up with:
> {code}
> java.lang.NoSuchMethodError: 
> org.eclipse.jetty.server.session.SessionHandler.getSessionManager()Lorg/eclipse/jetty/server/SessionManager;
>   at org.apache.hadoop.hbase.client.TestHCM.setUpBeforeClass(TestHCM.java:251)
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19021) Restore a few important missing logics for balancer in 2.0

2017-10-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19021?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206851#comment-16206851
 ] 

Ted Yu commented on HBASE-19021:


{code}
1421
this.assignmentManager.getRegionStates().getAssignmentsByTable(!isByTable);
{code}
Why the negation ?
{code}
+for (Map table: result.values()) {
{code}
table is a Map. Consider renaming the variable for better readability.
{code}
+  if (this.balancerName.contains("StochasticLoadBalancer")) {
+ avgLoadPlusSlop++;
+ avgLoadMinusSlop--;
{code}
Indentation is off.
{code}
+try {
+Thread.sleep(200);
+  } catch (InterruptedException e) {}
{code}
Please handle InterruptedException properly.
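For reference, the usual pattern is to restore the interrupt status instead of 
swallowing it; a sketch, not the patch itself:
{code}
// Sketch of the common pattern: re-set the thread's interrupt flag rather than ignoring it.
try {
  Thread.sleep(200);
} catch (InterruptedException e) {
  Thread.currentThread().interrupt();
  return; // or break out of the wait loop, depending on the surrounding code
}
{code}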



> Restore a few important missing logics for balancer in 2.0
> --
>
> Key: HBASE-19021
> URL: https://issues.apache.org/jira/browse/HBASE-19021
> Project: HBase
>  Issue Type: Bug
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Critical
> Attachments: HBASE-19021-master.patch
>
>
> After looking at the code, and some testing, I see the following things are 
> missing for balancer to work properly after AMv2.
> # hbase.master.loadbalance.bytable is not respected. It is always 'bytable'. 
> Previous default is cluster wide, not by table.
> # Servers with no assignments is not added for balance consideration.
> # Crashed server is not removed from the in-memory server map in 
> RegionStates, which affects balance.
> # Draining marker is not respected when balance.
> Also try to re-enable {{TestRegionRebalancing}}, which has a 
> {{testRebalanceOnRegionServerNumberChange}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19001) Remove the hooks in RegionObserver which are designed to construct a StoreScanner which is marked as IA.Private

2017-10-16 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-19001:
--
Description: 
There are three methods here
{code}
KeyValueScanner preStoreScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
  Store store, Scan scan, NavigableSet<byte[]> targetCols, KeyValueScanner s, long readPt)
  throws IOException;

InternalScanner preFlushScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
  Store store, List<KeyValueScanner> scanners, InternalScanner s, long readPoint)
  throws IOException;

InternalScanner preCompactScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
  Store store, List<? extends KeyValueScanner> scanners, ScanType scanType, long earliestPutTs,
  InternalScanner s, CompactionLifeCycleTracker tracker, CompactionRequest request,
  long readPoint) throws IOException;
{code}

For the flush and compact ones, we've discussed many times, it is not safe to 
let user inject a Filter or even implement their own InternalScanner using the 
store file scanners, as our correctness highly depends on the complicated logic 
in SQM and StoreScanner. CP users are expected to wrap the original 
InternalScanner(it is a StoreScanner anyway) in preFlush/preCompact methods to 
do filtering or something else.

For preStoreScannerOpen it even returns a KeyValueScanner which is marked as 
IA.Private... This is less hurt but still, we've decided to not expose 
StoreScanner to CP users so here this method is useless. CP users can use 
preGetOp and preScannerOpen method to modify the Get/Scan object passed in to 
inject into the scan operation.

> Remove the hooks in RegionObserver which are designed to construct a 
> StoreScanner which is marked as IA.Private
> ---
>
> Key: HBASE-19001
> URL: https://issues.apache.org/jira/browse/HBASE-19001
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-19001.patch
>
>
> There are three methods here
> {code}
> KeyValueScanner preStoreScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
>   Store store, Scan scan, NavigableSet<byte[]> targetCols, KeyValueScanner s, long readPt)
>   throws IOException;
> InternalScanner preFlushScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
>   Store store, List<KeyValueScanner> scanners, InternalScanner s, long readPoint)
>   throws IOException;
> InternalScanner preCompactScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
>   Store store, List<? extends KeyValueScanner> scanners, ScanType scanType, long earliestPutTs,
>   InternalScanner s, CompactionLifeCycleTracker tracker, CompactionRequest request,
>   long readPoint) throws IOException;
> {code}
> For the flush and compact ones, we've discussed many times, it is not safe to 
> let user inject a Filter or even implement their own InternalScanner using 
> the store file scanners, as our correctness highly depends on the complicated 
> logic in SQM and StoreScanner. CP users are expected to wrap the original 
> InternalScanner(it is a StoreScanner anyway) in preFlush/preCompact methods 
> to do filtering or something else.
> For preStoreScannerOpen it even returns a KeyValueScanner which is marked as 
> IA.Private... This is less hurt but still, we've decided to not expose 
> StoreScanner to CP users so here this method is useless. CP users can use 
> preGetOp and preScannerOpen method to modify the Get/Scan object passed in to 
> inject into the scan operation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19001) Remove the hooks in RegionObserver which are designed to construct a StoreScanner which is marked as IA.Private

2017-10-16 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206847#comment-16206847
 ] 

Duo Zhang commented on HBASE-19001:
---

Title and description changed. Let me check the usage of Tephra. Thanks 
[~ghelmling] for the pointer.

> Remove the hooks in RegionObserver which are designed to construct a 
> StoreScanner which is marked as IA.Private
> ---
>
> Key: HBASE-19001
> URL: https://issues.apache.org/jira/browse/HBASE-19001
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-19001.patch
>
>
> There are three methods here
> {code}
> KeyValueScanner preStoreScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
>   Store store, Scan scan, NavigableSet<byte[]> targetCols, KeyValueScanner s, long readPt)
>   throws IOException;
> InternalScanner preFlushScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
>   Store store, List<KeyValueScanner> scanners, InternalScanner s, long readPoint)
>   throws IOException;
> InternalScanner preCompactScannerOpen(ObserverContext<RegionCoprocessorEnvironment> c,
>   Store store, List<? extends KeyValueScanner> scanners, ScanType scanType, long earliestPutTs,
>   InternalScanner s, CompactionLifeCycleTracker tracker, CompactionRequest request,
>   long readPoint) throws IOException;
> {code}
> For the flush and compact ones, we've discussed many times, it is not safe to 
> let user inject a Filter or even implement their own InternalScanner using 
> the store file scanners, as our correctness highly depends on the complicated 
> logic in SQM and StoreScanner. CP users are expected to wrap the original 
> InternalScanner(it is a StoreScanner anyway) in preFlush/preCompact methods 
> to do filtering or something else.
> For preStoreScannerOpen it even returns a KeyValueScanner which is marked as 
> IA.Private... This is less hurt but still, we've decided to not expose 
> StoreScanner to CP users so here this method is useless. CP users can use 
> preGetOp and preScannerOpen method to modify the Get/Scan object passed in to 
> inject into the scan operation.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19021) Restore a few important missing logics for balancer in 2.0

2017-10-16 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-19021:
-
Attachment: HBASE-19021-master.patch

> Restore a few important missing logics for balancer in 2.0
> --
>
> Key: HBASE-19021
> URL: https://issues.apache.org/jira/browse/HBASE-19021
> Project: HBase
>  Issue Type: Bug
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Critical
> Attachments: HBASE-19021-master.patch
>
>
> After looking at the code, and some testing, I see the following things are 
> missing for balancer to work properly after AMv2.
> # hbase.master.loadbalance.bytable is not respected. It is always 'bytable'. 
> Previous default is cluster wide, not by table.
> # Servers with no assignments is not added for balance consideration.
> # Crashed server is not removed from the in-memory server map in 
> RegionStates, which affects balance.
> # Draining marker is not respected when balance.
> Also try to re-enable {{TestRegionRebalancing}}, which has a 
> {{testRebalanceOnRegionServerNumberChange}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19021) Restore a few important missing logics for balancer in 2.0

2017-10-16 Thread Jerry He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19021?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jerry He updated HBASE-19021:
-
Status: Patch Available  (was: Open)

> Restore a few important missing logics for balancer in 2.0
> --
>
> Key: HBASE-19021
> URL: https://issues.apache.org/jira/browse/HBASE-19021
> Project: HBase
>  Issue Type: Bug
>Reporter: Jerry He
>Assignee: Jerry He
>Priority: Critical
> Attachments: HBASE-19021-master.patch
>
>
> After looking at the code, and some testing, I see the following things are 
> missing for balancer to work properly after AMv2.
> # hbase.master.loadbalance.bytable is not respected. It is always 'bytable'. 
> Previous default is cluster wide, not by table.
> # Servers with no assignments is not added for balance consideration.
> # Crashed server is not removed from the in-memory server map in 
> RegionStates, which affects balance.
> # Draining marker is not respected when balance.
> Also try to re-enable {{TestRegionRebalancing}}, which has a 
> {{testRebalanceOnRegionServerNumberChange}}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19001) Remove the hooks in RegionObserver which are designed to construct a StoreScanner which is marked as IA.Private

2017-10-16 Thread Duo Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Duo Zhang updated HBASE-19001:
--
Summary: Remove the hooks in RegionObserver which are designed to construct 
a StoreScanner which is marked as IA.Private  (was: Remove StoreScanner 
dependency in our own CP related tests)

> Remove the hooks in RegionObserver which are designed to construct a 
> StoreScanner which is marked as IA.Private
> ---
>
> Key: HBASE-19001
> URL: https://issues.apache.org/jira/browse/HBASE-19001
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-19001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19022) Untangle and split hbase-server module

2017-10-16 Thread Appy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19022?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Appy updated HBASE-19022:
-
Description: 
https://docs.google.com/document/d/1n6v0gUMfpGV4rjQvmYQzVIKpTAExg_lMN2ng-f_OotE/edit#

> Untangle and split hbase-server module
> --
>
> Key: HBASE-19022
> URL: https://issues.apache.org/jira/browse/HBASE-19022
> Project: HBase
>  Issue Type: Improvement
>Reporter: Appy
>
> https://docs.google.com/document/d/1n6v0gUMfpGV4rjQvmYQzVIKpTAExg_lMN2ng-f_OotE/edit#



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19022) Untangle and split hbase-server module

2017-10-16 Thread Appy (JIRA)
Appy created HBASE-19022:


 Summary: Untangle and split hbase-server module
 Key: HBASE-19022
 URL: https://issues.apache.org/jira/browse/HBASE-19022
 Project: HBase
  Issue Type: Improvement
Reporter: Appy






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19021) Restore a few important missing logics for balancer in 2.0

2017-10-16 Thread Jerry He (JIRA)
Jerry He created HBASE-19021:


 Summary: Restore a few important missing logics for balancer in 2.0
 Key: HBASE-19021
 URL: https://issues.apache.org/jira/browse/HBASE-19021
 Project: HBase
  Issue Type: Bug
Reporter: Jerry He
Assignee: Jerry He
Priority: Critical


After looking at the code, and some testing, I see the following things are 
missing for the balancer to work properly after AMv2.

# hbase.master.loadbalance.bytable is not respected. It is always 'bytable'. 
The previous default is cluster wide, not by table (see the configuration sketch below).
# Servers with no assignments are not added for balance consideration.
# A crashed server is not removed from the in-memory server map in RegionStates, 
which affects balance.
# The draining marker is not respected when balancing.

Also try to re-enable {{TestRegionRebalancing}}, which has a 
{{testRebalanceOnRegionServerNumberChange}}.
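A minimal sketch of the switch from item 1 set programmatically; whether false is 
the effective default after the fix is exactly what this issue is about, so treat 
the value as illustrative:
{code}
// Illustrative only: the by-table balancing switch named in item 1 above.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;

public class ByTableBalanceConfig {
  public static void main(String[] args) {
    Configuration conf = HBaseConfiguration.create();
    conf.setBoolean("hbase.master.loadbalance.bytable", false); // cluster-wide balancing
    System.out.println(conf.getBoolean("hbase.master.loadbalance.bytable", false));
  }
}
{code}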



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-13346) Clean up Filter package for post 1.0 s/KeyValue/Cell/g

2017-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-13346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206763#comment-16206763
 ] 

Hadoop QA commented on HBASE-13346:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 14m  
7s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 17 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
42s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
41s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
37s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
31s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green} 10m 
 2s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
13s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
53s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
31s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  3m 
45s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  3m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
27s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
57m 11s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  7m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
15s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}152m  
8s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 20m 
10s{color} | {color:green} hbase-mapreduce in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  6m  
0s{color} | {color:green} hbase-spark in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  1m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}307m 53s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:5d60123 |
| JIRA Issue | HBASE-13346 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12892432/HBASE-13346.master.008.patch
 |
| Optional Tests |  asflicense  shadedjars  javac  javadoc  unit  findbugs  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux b1a08dd2084f 3.13.0-119-generic 

[jira] [Commented] (HBASE-19017) EnableTableProcedure is not retaining the assignments

2017-10-16 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206753#comment-16206753
 ] 

huaxiang sun commented on HBASE-19017:
--

+1

> EnableTableProcedure is not retaining the assignments
> -
>
> Key: HBASE-19017
> URL: https://issues.apache.org/jira/browse/HBASE-19017
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 2.0.0-alpha-3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19017.patch
>
>
> Found this while working on HBASE-18946. In branch-1.4 when ever we do enable 
> table we try retain assignment. 
> But in branch-2 and trunk the EnableTableProcedure tries to get the location 
> from the existing regionNode. It always returns null because while doing 
> region CLOSE while disabling a table, the regionNode's 'regionLocation' is 
> made NULL but the 'lastHost' is actually having the servername where the 
> region was hosted. But on trying assignment again we try to see what was the 
> last RegionLocation and not the 'lastHost' and we go ahead with new 
> assignment.
> On region CLOSE while disable table
> {code}
> public void markRegionAsClosed(final RegionStateNode regionNode) throws 
> IOException {
> final RegionInfo hri = regionNode.getRegionInfo();
> synchronized (regionNode) {
>   State state = regionNode.transitionState(State.CLOSED, 
> RegionStates.STATES_EXPECTED_ON_CLOSE);
>   regionStates.removeRegionFromServer(regionNode.getRegionLocation(), 
> regionNode);
>   regionNode.setLastHost(regionNode.getRegionLocation());
>   regionNode.setRegionLocation(null);
>   regionStateStore.updateRegionLocation(regionNode.getRegionInfo(), state,
> regionNode.getRegionLocation()/*null*/, regionNode.getLastHost(),
> HConstants.NO_SEQNUM, regionNode.getProcedure().getProcId());
>   sendRegionClosedNotification(hri);
> }
> {code}
> In AssignProcedure
> {code}
> ServerName lastRegionLocation = regionNode.offline();
> {code}
> {code}
> public ServerName setRegionLocation(final ServerName serverName) {
>   ServerName lastRegionLocation = this.regionLocation;
>   if (LOG.isTraceEnabled() && serverName == null) {
> LOG.trace("Tracking when we are set to null " + this, new 
> Throwable("TRACE"));
>   }
>   this.regionLocation = serverName;
>   this.lastUpdate = EnvironmentEdgeManager.currentTime();
>   return lastRegionLocation;
> }
> {code}
> So further code in AssignProcedure
> {code}
>  boolean retain = false;
> if (!forceNewPlan) {
>   if (this.targetServer != null) {
> retain = targetServer.equals(lastRegionLocation);
> regionNode.setRegionLocation(targetServer);
>   } else {
> if (lastRegionLocation != null) {
>   // Try and keep the location we had before we offlined.
>   retain = true;
>   regionNode.setRegionLocation(lastRegionLocation);
> }
>   }
> }
> {code}
> Tries to do retainAssignment but fails because lastRegionLocation is always 
> null.
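A minimal sketch of the fallback the description suggests (not the attached patch); 
the names follow the snippets quoted above:
{code}
// Hypothetical sketch: when the region was offlined its regionLocation is null,
// so fall back to lastHost in order to retain the previous assignment.
ServerName lastRegionLocation = regionNode.offline();
ServerName retainCandidate =
    lastRegionLocation != null ? lastRegionLocation : regionNode.getLastHost();
boolean retain = false;
if (!forceNewPlan) {
  if (this.targetServer != null) {
    retain = targetServer.equals(retainCandidate);
    regionNode.setRegionLocation(targetServer);
  } else if (retainCandidate != null) {
    // Try and keep the location we had before we offlined.
    retain = true;
    regionNode.setRegionLocation(retainCandidate);
  }
}
{code}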



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18233) We shouldn't wait for readlock in doMiniBatchMutation in case of deadlock

2017-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18233?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206733#comment-16206733
 ] 

Hadoop QA commented on HBASE-18233:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
48s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
44s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
32s{color} | {color:green} branch-1.2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
36s{color} | {color:green} branch-1.2 passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 3s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
23s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 9s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
56s{color} | {color:green} branch-1.2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} branch-1.2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} branch-1.2 passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
34s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
29s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
22m 15s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3. 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}294m 42s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  3m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}361m 18s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hbase.replication.TestReplicationKillSlaveRS |
|   | org.apache.hadoop.hbase.replication.TestReplicationDisableInactivePeer |
|   | org.apache.hadoop.hbase.mapreduce.TestHFileOutputFormat |
|   | 

[jira] [Commented] (HBASE-19017) EnableTableProcedure is not retaining the assignments

2017-10-16 Thread Yi Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206711#comment-16206711
 ] 

Yi Liang commented on HBASE-19017:
--

Reviewed your patch; the fix is correct. 
For HBASE-18984, I also added some cleanup in the AssignProcedure. You can 
commit this one first, and I will rebase the patch there. 

The problem I found seems not related to retain assignment; I will try to 
reproduce it and maybe open a new JIRA for it. 

> EnableTableProcedure is not retaining the assignments
> -
>
> Key: HBASE-19017
> URL: https://issues.apache.org/jira/browse/HBASE-19017
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 2.0.0-alpha-3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19017.patch
>
>
> Found this while working on HBASE-18946. In branch-1.4 when ever we do enable 
> table we try retain assignment. 
> But in branch-2 and trunk the EnableTableProcedure tries to get the location 
> from the existing regionNode. It always returns null because while doing 
> region CLOSE while disabling a table, the regionNode's 'regionLocation' is 
> made NULL but the 'lastHost' is actually having the servername where the 
> region was hosted. But on trying assignment again we try to see what was the 
> last RegionLocation and not the 'lastHost' and we go ahead with new 
> assignment.
> On region CLOSE while disable table
> {code}
> public void markRegionAsClosed(final RegionStateNode regionNode) throws 
> IOException {
> final RegionInfo hri = regionNode.getRegionInfo();
> synchronized (regionNode) {
>   State state = regionNode.transitionState(State.CLOSED, 
> RegionStates.STATES_EXPECTED_ON_CLOSE);
>   regionStates.removeRegionFromServer(regionNode.getRegionLocation(), 
> regionNode);
>   regionNode.setLastHost(regionNode.getRegionLocation());
>   regionNode.setRegionLocation(null);
>   regionStateStore.updateRegionLocation(regionNode.getRegionInfo(), state,
> regionNode.getRegionLocation()/*null*/, regionNode.getLastHost(),
> HConstants.NO_SEQNUM, regionNode.getProcedure().getProcId());
>   sendRegionClosedNotification(hri);
> }
> {code}
> In AssignProcedure
> {code}
> ServerName lastRegionLocation = regionNode.offline();
> {code}
> {code}
> public ServerName setRegionLocation(final ServerName serverName) {
>   ServerName lastRegionLocation = this.regionLocation;
>   if (LOG.isTraceEnabled() && serverName == null) {
> LOG.trace("Tracking when we are set to null " + this, new 
> Throwable("TRACE"));
>   }
>   this.regionLocation = serverName;
>   this.lastUpdate = EnvironmentEdgeManager.currentTime();
>   return lastRegionLocation;
> }
> {code}
> So further code in AssignProcedure
> {code}
>  boolean retain = false;
> if (!forceNewPlan) {
>   if (this.targetServer != null) {
> retain = targetServer.equals(lastRegionLocation);
> regionNode.setRegionLocation(targetServer);
>   } else {
> if (lastRegionLocation != null) {
>   // Try and keep the location we had before we offlined.
>   retain = true;
>   regionNode.setRegionLocation(lastRegionLocation);
> }
>   }
> }
> {code}
> Tries to do retainAssignment but fails because lastRegionLocation is always 
> null.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18946) Stochastic load balancer assigns replica regions to the same RS

2017-10-16 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18946?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206684#comment-16206684
 ] 

huaxiang sun commented on HBASE-18946:
--

Sorry, [~ramkrishna], I was busy with something else. Going through the changes now 
and will post an update, thanks.

> Stochastic load balancer assigns replica regions to the same RS
> ---
>
> Key: HBASE-18946
> URL: https://issues.apache.org/jira/browse/HBASE-18946
> Project: HBase
>  Issue Type: Bug
>Affects Versions: 2.0.0-alpha-3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18946.patch, HBASE-18946.patch, 
> TestRegionReplicasWithRestartScenarios.java
>
>
> Trying out region replica and its assignment I can see that sometimes the 
> default LB Stochastic load balancer assigns replica regions to the same RS. 
> This happens when we have 3 RS checked in and we have a table with 3 
> replicas. When a RS goes down then the replicas being assigned to same RS is 
> acceptable but the case when we have enough RS to assign this behaviour is 
> undesirable and does not solve the purpose of replicas. 
> [~huaxiang] and [~enis]. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19017) EnableTableProcedure is not retaining the assignments

2017-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206682#comment-16206682
 ] 

Hadoop QA commented on HBASE-19017:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  8m 
51s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 6s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
29s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  7m 
 0s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m  
7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
28s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
41s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
58m  9s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}149m 39s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
38s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}245m 14s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.security.token.TestZKSecretWatcher |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:5d60123 |
| JIRA Issue | HBASE-19017 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12892430/HBASE-19017.patch |
| Optional Tests |  asflicense  shadedjars  javac  javadoc  unit  findbugs  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 16c9c55bd1a0 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build@2/component/dev-support/hbase-personality.sh
 |
| git revision | master / 51489b20 |
| Default Java | 1.8.0_144 |
| findbugs | v3.1.0-RC3 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9145/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/9145/testReport/ |
| modules | C: hbase-server U: hbase-server |
| Console output | 

[jira] [Comment Edited] (HBASE-18898) Provide way for the core flow to know whether CP implemented each of the hooks

2017-10-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1620#comment-1620
 ] 

Andrew Purtell edited comment on HBASE-18898 at 10/16/17 10:10 PM:
---

I agree with [~mdrob] comments above. Often in JAX-RS programming you can use 
one method to implement for multiple annotations. The flexibility is 
attractive. I also agree it's nice not to need required method signatures. Most 
of our CP compat breaks have been addition of methods to interfaces or 
modification of method signatures. Both types of incompatibility would be much 
less likely to occur. We would implement the wiring up of code to framework 
guided by annotations and variations in the shape of interface/classes and 
method signatures would not be a concern any longer. The long term maintenance 
burdens for both us and implementers would be lessened. I also agree that 
annotating method parameters and wiring them up may be a step too far without a 
real DI framework, but we should still look into it. 


was (Author: apurtell):
I agree with [~mdrob] comments above. Often in JAX-RS programming you can use 
one method to implement for multiple annotations. The flexibility is 
attractive. I also agree it's nice not to need required method signatures. Most 
of our CP compat breaks have been addition of methods or modification of method 
signatures. Both types of incompatibility would be much less likely to occur. 
The long term maintenance burdens for both us and implementers would be 
lessened. I also agree that annotating method parameters and wiring them up may 
be a step too far without a real DI framework, but we should still look into 
it. 

> Provide way for the core flow to know whether CP implemented each of the hooks
> --
>
> Key: HBASE-18898
> URL: https://issues.apache.org/jira/browse/HBASE-18898
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors, Performance
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
>
> This came as a discussion topic at the tail of HBASE-17732
> Can we have a way in the code (before trying to call the hook) to know 
> whether the user has implemented one particular hook or not? eg: On write 
> related hooks only prePut() might be what the user CP implemented. All others 
> are just dummy impl from the interface. Can we have a way for the core code 
> to know this and avoid the call to other dummy hooks fully? Some times we do 
> some processing for just calling CP hooks (Say we have to make a POJO out of 
> PB object for calling) and if the user CP not impl this hook, we can avoid 
> this extra work fully. The pain of this will be more when we have to later 
> deprecate one hook and add new. So the dummy impl in new hook has to call the 
> old one and that might be doing some extra work normally.
> If the CP f/w itself is having a way to tell this, the core code can make 
> use. What am expecting is some thing like in PB way where we can call 
> CPObject.hasPre(), then CPObject. pre ().. Should not like asking 
> users to impl this extra ugly thing. When the CP instance is loaded in the 
> RS/HM, that object will be having this info also. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18898) Provide way for the core flow to know whether CP implemented each of the hooks

2017-10-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1620#comment-1620
 ] 

Andrew Purtell commented on HBASE-18898:


I agree with [~mdrob] comments above. Often in JAX-RS programming you can use 
one method to implement for multiple annotations. The flexibility is 
attractive. I also agree it's nice not to need required method signatures. Most 
of our CP compat breaks have been addition of methods or modification of method 
signatures. Both types of incompatibility would be much less likely to occur. 
The long term maintenance burdens for both us and implementers would be 
lessened. I also agree that annotating method parameters and wiring them up may 
be a step too far without a real DI framework, but we should still look into 
it. 
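Purely as an illustration of the annotation-driven idea being discussed: @PrePut, 
MyObserver, and implementsHook below are hypothetical and not part of any HBase API.
{code}
// Hypothetical sketch: a coprocessor declares only the hooks it cares about via a
// runtime annotation, and the framework discovers implemented hooks reflectively so
// it can skip the rest entirely.
import java.lang.annotation.*;
import java.lang.reflect.Method;
import java.util.Arrays;

public class AnnotationHookSketch {
  @Retention(RetentionPolicy.RUNTIME)
  @Target(ElementType.METHOD)
  @interface PrePut {}

  static class MyObserver {
    // No required signature; only the hooks this coprocessor implements are declared.
    @PrePut
    void onPrePut(Object put) { /* ... */ }
  }

  // The core flow could check this once at load time and avoid calling dummy hooks.
  static boolean implementsHook(Object cp, Class<? extends Annotation> hook) {
    return Arrays.stream(cp.getClass().getDeclaredMethods())
        .anyMatch((Method m) -> m.isAnnotationPresent(hook));
  }

  public static void main(String[] args) {
    System.out.println(implementsHook(new MyObserver(), PrePut.class)); // true
  }
}
{code}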

> Provide way for the core flow to know whether CP implemented each of the hooks
> --
>
> Key: HBASE-18898
> URL: https://issues.apache.org/jira/browse/HBASE-18898
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors, Performance
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
>
> This came as a discussion topic at the tail of HBASE-17732
> Can we have a way in the code (before trying to call the hook) to know 
> whether the user has implemented one particular hook or not? eg: On write 
> related hooks only prePut() might be what the user CP implemented. All others 
> are just dummy impl from the interface. Can we have a way for the core code 
> to know this and avoid the call to other dummy hooks fully? Some times we do 
> some processing for just calling CP hooks (Say we have to make a POJO out of 
> PB object for calling) and if the user CP not impl this hook, we can avoid 
> this extra work fully. The pain of this will be more when we have to later 
> deprecate one hook and add new. So the dummy impl in new hook has to call the 
> old one and that might be doing some extra work normally.
> If the CP f/w itself is having a way to tell this, the core code can make 
> use. What am expecting is some thing like in PB way where we can call 
> CPObject.hasPre(), then CPObject. pre ().. Should not like asking 
> users to impl this extra ugly thing. When the CP instance is loaded in the 
> RS/HM, that object will be having this info also. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Work started] (HBASE-19020) TestXmlParsing exception checking relies on a particular xml implementation without declaring it.

2017-10-16 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HBASE-19020 started by Sean Busbey.
---
> TestXmlParsing exception checking relies on a particular xml implementation 
> without declaring it.
> -
>
> Key: HBASE-19020
> URL: https://issues.apache.org/jira/browse/HBASE-19020
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies, REST
>Affects Versions: 1.3.0, 1.4.0, 1.2.5, 1.1.9, 2.0.0-alpha-1
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-beta-1, 1.1.13
>
>
> The test added in HBASE-17424 is overly specific:
> {code}
>   @Test
>   public void testFailOnExternalEntities() throws Exception {
> final String externalEntitiesXml =
> ""
> + "  ] >"
> + " ";
> Client client = mock(Client.class);
> RemoteAdmin admin = new RemoteAdmin(client, HBaseConfiguration.create(), 
> null);
> Response resp = new Response(200, null, externalEntitiesXml.getBytes());
> when(client.get("/version/cluster", 
> Constants.MIMETYPE_XML)).thenReturn(resp);
> try {
>   admin.getClusterVersion();
>   fail("Expected getClusterVersion() to throw an exception");
> } catch (IOException e) {
>   final String exceptionText = StringUtils.stringifyException(e);
>   final String expectedText = "The entity \"xee\" was referenced, but not 
> declared.";
>   LOG.error("exception text: " + exceptionText, e);
>   assertTrue("Exception does not contain expected text", 
> exceptionText.contains(expectedText));
> }
>   }
> {code}
> Specifically, when running against Hadoop 3.0.0-beta1 this test fails because 
> the exception text is different, though I'm still figuring out why.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19020) TestXmlParsing exception checking relies on a particular xml implementation without declaring it.

2017-10-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206647#comment-16206647
 ] 

Sean Busbey commented on HBASE-19020:
-

exception on master, default profile (Hadoop 2.7.1):
{code}
2017-10-16 16:27:15,987 ERROR [main] client.TestXmlParsing(76): exception text: 
'java.io.IOException: Issue parsing StorageClusterVersionModel object in XML 
form: null
at 
org.apache.hadoop.hbase.rest.client.RemoteAdmin.getClusterVersion(RemoteAdmin.java:218)
at 
org.apache.hadoop.hbase.rest.client.TestXmlParsing.testFailOnExternalEntities(TestXmlParsing.java:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:78)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:57)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:290)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:71)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:288)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:58)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:268)
at org.junit.runners.ParentRunner.run(ParentRunner.java:363)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.execute(JUnit4Provider.java:367)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeWithRerun(JUnit4Provider.java:274)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.executeTestSet(JUnit4Provider.java:238)
at 
org.apache.maven.surefire.junit4.JUnit4Provider.invoke(JUnit4Provider.java:161)
at 
org.apache.maven.surefire.booter.ForkedBooter.invokeProviderInSameClassLoader(ForkedBooter.java:290)
at 
org.apache.maven.surefire.booter.ForkedBooter.runSuitesInProcess(ForkedBooter.java:242)
at 
org.apache.maven.surefire.booter.ForkedBooter.main(ForkedBooter.java:121)
Caused by: javax.xml.bind.UnmarshalException
 - with linked exception:
[javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,114]
Message: The entity "xee" was referenced, but not declared.]
at 
com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallerImpl.handleStreamException(UnmarshallerImpl.java:432)
at 
com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal0(UnmarshallerImpl.java:368)
at 
com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal(UnmarshallerImpl.java:338)
at 
org.apache.hadoop.hbase.rest.client.RemoteAdmin.getClusterVersion(RemoteAdmin.java:212)
... 25 more
Caused by: javax.xml.stream.XMLStreamException: ParseError at [row,col]:[1,114]
Message: The entity "xee" was referenced, but not declared.
at 
com.sun.org.apache.xerces.internal.impl.XMLStreamReaderImpl.next(XMLStreamReaderImpl.java:596)
at 
com.sun.xml.bind.v2.runtime.unmarshaller.StAXStreamConnector.bridge(StAXStreamConnector.java:197)
at 
com.sun.xml.bind.v2.runtime.unmarshaller.UnmarshallerImpl.unmarshal0(UnmarshallerImpl.java:366)
... 27 more
'
java.io.IOException: Issue parsing StorageClusterVersionModel object in XML 
form: null
at 
org.apache.hadoop.hbase.rest.client.RemoteAdmin.getClusterVersion(RemoteAdmin.java:218)
at 
org.apache.hadoop.hbase.rest.client.TestXmlParsing.testFailOnExternalEntities(TestXmlParsing.java:71)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:50)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:47)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:325)

[jira] [Commented] (HBASE-19020) TestXmlParsing exception checking relies on a particular xml implementation without declaring it.

2017-10-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19020?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206643#comment-16206643
 ] 

Sean Busbey commented on HBASE-19020:
-

This might be the same problem reported in HBASE-17987, but I don't have an IBM 
JDK handy to verify.

> TestXmlParsing exception checking relies on a particular xml implementation 
> without declaring it.
> -
>
> Key: HBASE-19020
> URL: https://issues.apache.org/jira/browse/HBASE-19020
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies, REST
>Affects Versions: 1.3.0, 1.4.0, 1.2.5, 1.1.9, 2.0.0-alpha-1
>Reporter: Sean Busbey
>Assignee: Sean Busbey
> Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-beta-1, 1.1.13
>
>
> The test added in HBASE-17424 is overly specific:
> {code}
>   @Test
>   public void testFailOnExternalEntities() throws Exception {
> final String externalEntitiesXml =
> ""
> + "  ] >"
> + " ";
> Client client = mock(Client.class);
> RemoteAdmin admin = new RemoteAdmin(client, HBaseConfiguration.create(), 
> null);
> Response resp = new Response(200, null, externalEntitiesXml.getBytes());
> when(client.get("/version/cluster", 
> Constants.MIMETYPE_XML)).thenReturn(resp);
> try {
>   admin.getClusterVersion();
>   fail("Expected getClusterVersion() to throw an exception");
> } catch (IOException e) {
>   final String exceptionText = StringUtils.stringifyException(e);
>   final String expectedText = "The entity \"xee\" was referenced, but not 
> declared.";
>   LOG.error("exception text: " + exceptionText, e);
>   assertTrue("Exception does not contain expected text", 
> exceptionText.contains(expectedText));
> }
>   }
> {code}
> Specifically, when running against Hadoop 3.0.0-beta1 this test fails because 
> the exception text is different, though I'm still figuring out why.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19020) TestXmlParsing exception checking relies on a particular xml implementation without declaring it.

2017-10-16 Thread Sean Busbey (JIRA)
Sean Busbey created HBASE-19020:
---

 Summary: TestXmlParsing exception checking relies on a particular 
xml implementation without declaring it.
 Key: HBASE-19020
 URL: https://issues.apache.org/jira/browse/HBASE-19020
 Project: HBase
  Issue Type: Bug
  Components: dependencies, REST
Affects Versions: 2.0.0-alpha-1, 1.1.9, 1.2.5, 1.3.0, 1.4.0
Reporter: Sean Busbey
Assignee: Sean Busbey
 Fix For: 1.4.0, 1.3.2, 1.5.0, 1.2.7, 2.0.0-beta-1, 1.1.13


The test added in HBASE-17424 is overly specific:

{code}
  @Test
  public void testFailOnExternalEntities() throws Exception {
final String externalEntitiesXml =
""
+ "  ] >"
+ " ";
Client client = mock(Client.class);
RemoteAdmin admin = new RemoteAdmin(client, HBaseConfiguration.create(), 
null);
Response resp = new Response(200, null, externalEntitiesXml.getBytes());

when(client.get("/version/cluster", 
Constants.MIMETYPE_XML)).thenReturn(resp);

try {
  admin.getClusterVersion();
  fail("Expected getClusterVersion() to throw an exception");
} catch (IOException e) {
  final String exceptionText = StringUtils.stringifyException(e);
  final String expectedText = "The entity \"xee\" was referenced, but not 
declared.";
  LOG.error("exception text: " + exceptionText, e);
  assertTrue("Exception does not contain expected text", 
exceptionText.contains(expectedText));
}
  }
{code}

Specifically, when running against Hadoop 3.0.0-beta1 this test fails because 
the exception text is different, though I'm still figuring out why.
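
For context, the stack trace in the comment above shows the failure surfacing as 
a javax.xml.bind.UnmarshalException whose linked javax.xml.stream.XMLStreamException 
carries the implementation-specific wording. A minimal sketch of a less brittle 
check (not the attached patch; the assertion details are illustrative) would walk 
the cause chain instead of matching the exact Xerces phrasing:

{code}
// Sketch only, assuming the cause chain seen in the stack trace above
// (IOException -> UnmarshalException -> XMLStreamException). Replaces the
// catch block of the test shown above; names and messages are illustrative.
try {
  admin.getClusterVersion();
  fail("Expected getClusterVersion() to throw an exception");
} catch (IOException e) {
  boolean entityMentioned = false;
  for (Throwable t = e; t != null; t = t.getCause()) {
    String msg = t.getMessage();
    if (msg != null && msg.contains("xee")) { // the external entity name from the payload
      entityMentioned = true;
      break;
    }
  }
  assertTrue("No cause in the chain mentioned the undeclared entity", entityMentioned);
}
{code}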



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18127) Enable state to be passed between the region observer coprocessor hook calls

2017-10-16 Thread Andrew Purtell (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206631#comment-16206631
 ] 

Andrew Purtell commented on HBASE-18127:


Also may need this cross hooks outside of RPC, so perhaps attached to the 
environment not the RPC call context.

> Enable state to be passed between the region observer coprocessor hook calls
> 
>
> Key: HBASE-18127
> URL: https://issues.apache.org/jira/browse/HBASE-18127
> Project: HBase
>  Issue Type: New Feature
>Reporter: Lars Hofhansl
>Assignee: Abhishek Singh Chouhan
> Attachments: HBASE-18127.master.001.patch, 
> HBASE-18127.master.002.patch, HBASE-18127.master.002.patch, 
> HBASE-18127.master.003.patch, HBASE-18127.master.004.patch, 
> HBASE-18127.master.005.patch, HBASE-18127.master.005.patch, 
> HBASE-18127.master.006.patch
>
>
> Allow regionobserver to optionally skip postPut/postDelete when 
> postBatchMutate was called.
> Right now a RegionObserver can only statically implement one or the other. In 
> scenarios where we need to work sometimes on the single postPut and 
> postDelete hooks and sometimes on the batchMutate hooks, there is currently 
> no place to convey this information to the single hooks. I.e. the work has 
> been done in the batch, skip the single hooks.
> There are various solutions:
> 1. Allow some state to be passed _per operation_.
> 2. Remove the single hooks and always only call batch hooks (with a default 
> wrapper for the single hooks).
> 3. more?
> [~apurtell], what we had discussed a few days back.
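
For solution 1 above (per-operation state), a minimal sketch of what is already 
possible today is to piggyback on Mutation attributes: postBatchMutate marks each 
mutation it fully handled and postPut skips marked mutations. The class and 
attribute names are illustrative, the imports assume branch-1 style packages, and 
this assumes postBatchMutate runs before the per-mutation postPut for the same 
batch.

{code}
import java.io.IOException;
import org.apache.hadoop.hbase.client.Durability;
import org.apache.hadoop.hbase.client.Mutation;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.MiniBatchOperationInProgress;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;

public class BatchAwareObserver extends BaseRegionObserver {
  // illustrative attribute key used as a "handled already" marker
  private static final String HANDLED = "x.handled.in.batch";

  @Override
  public void postBatchMutate(ObserverContext<RegionCoprocessorEnvironment> c,
      MiniBatchOperationInProgress<Mutation> miniBatchOp) throws IOException {
    for (int i = 0; i < miniBatchOp.size(); i++) {
      Mutation m = miniBatchOp.getOperation(i);
      // ... do the real work once for the whole batch ...
      m.setAttribute(HANDLED, new byte[] { 1 }); // remember it was handled here
    }
  }

  @Override
  public void postPut(ObserverContext<RegionCoprocessorEnvironment> c, Put put,
      WALEdit edit, Durability durability) throws IOException {
    if (put.getAttribute(HANDLED) != null) {
      return; // already processed in postBatchMutate, skip the single hook
    }
    // ... work for puts that did not come through the batch path ...
  }
}
{code}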



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16338) update jackson to 2.y

2017-10-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206623#comment-16206623
 ] 

Sean Busbey commented on HBASE-16338:
-

Yes please, or maybe more generally we need a way to pass more flags to Maven.

> update jackson to 2.y
> -
>
> Key: HBASE-16338
> URL: https://issues.apache.org/jira/browse/HBASE-16338
> Project: HBase
>  Issue Type: Task
>  Components: dependencies
>Reporter: Sean Busbey
>Assignee: Mike Drob
> Fix For: 2.0.0-beta-2
>
> Attachments: 16338.txt, HBASE-16338.v10.patch, HBASE-16338.v2.patch, 
> HBASE-16338.v3.patch, HBASE-16338.v5.patch, HBASE-16338.v6.patch, 
> HBASE-16338.v7.patch, HBASE-16338.v8.patch, HBASE-16338.v9.patch
>
>
> Our jackson dependency is from ~3 years ago. Update to the jackson 2.y line, 
> using 2.7.0+.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18914) Remove AsyncAdmin's methods which were already deprecated in Admin interface

2017-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206594#comment-16206594
 ] 

Hudson commented on HBASE-18914:


FAILURE: Integrated in Jenkins build HBase-Trunk_matrix #3899 (See 
[https://builds.apache.org/job/HBase-Trunk_matrix/3899/])
HBASE-18914 Remove AsyncAdmin's methods which were already deprecated in 
(zghao: rev 51489b2081102b02785c89b6a03d36f54e29657b)
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncTableAdminApi.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncSnapshotAdminApi.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncRegionAdminApi.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java


> Remove AsyncAdmin's methods which were already deprecated in Admin interface
> 
>
> Key: HBASE-18914
> URL: https://issues.apache.org/jira/browse/HBASE-18914
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18914.master.001.patch, 
> HBASE-18914.master.002.patch, HBASE-18914.master.002.patch, 
> HBASE-18914.master.003.patch, HBASE-18914.master.003.patch
>
>
> Since we have not released hbase 2.0 yet, I thought it is ok to remove the 
> methods which were already deprecated in the Admin interface.
> These methods were marked as deprecated in HBASE-18241:
> HTableDescriptor[] deleteTables(Pattern)
> HTableDescriptor[] enableTables(Pattern)
> HTableDescriptor[] disableTables(Pattern)
> getAlterStatus()
> closeRegion()



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19019) QA fails on hbase-thrift module with timeout

2017-10-16 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206585#comment-16206585
 ] 

Mike Drob commented on HBASE-19019:
---

It looks like it's failing to set up the test at all...

> QA fails on hbase-thrift module with timeout
> 
>
> Key: HBASE-19019
> URL: https://issues.apache.org/jira/browse/HBASE-19019
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Reporter: Peter Somogyi
>Priority: Critical
>
> For any modification in hbase-thrift module the precommit build fails with 
> timeout for {{TestThriftServerCmdLine}}. I noticed this failure on multiple 
> patches: HBASE-18967 and HBASE-18996 even when the patch did not contain any 
> modification 
> (https://issues.apache.org/jira/secure/attachment/12892414/HBASE-18967.branch-1.3.002.patch)
> The {{TestThriftServerCmdLine}} test passes locally on both mentioned patches.
> One failure: https://builds.apache.org/job/PreCommit-HBASE-Build/9127/
> {code}
> [INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ hbase-thrift ---
> [INFO] Surefire report directory: 
> /testptch/hbase/hbase-thrift/target/surefire-reports
> [INFO] Using configured provider 
> org.apache.maven.surefire.junitcore.JUnitCoreProvider
> [INFO] parallel='none', perCoreThreadCount=true, threadCount=0, 
> useUnlimitedThreads=false, threadCountSuites=0, threadCountClasses=0, 
> threadCountMethods=0, parallelOptimized=true
> ---
>  T E S T S
> ---
> Running org.apache.hadoop.hbase.thrift2.TestThriftHBaseServiceHandler
> Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.201 sec - 
> in org.apache.hadoop.hbase.thrift2.TestThriftHBaseServiceHandler
> Running 
> org.apache.hadoop.hbase.thrift2.TestThriftHBaseServiceHandlerWithReadOnly
> Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.257 sec - 
> in org.apache.hadoop.hbase.thrift2.TestThriftHBaseServiceHandlerWithReadOnly
> Running 
> org.apache.hadoop.hbase.thrift2.TestThriftHBaseServiceHandlerWithLabels
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.621 sec - 
> in org.apache.hadoop.hbase.thrift2.TestThriftHBaseServiceHandlerWithLabels
> Running org.apache.hadoop.hbase.thrift.TestCallQueue
> Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.473 sec - 
> in org.apache.hadoop.hbase.thrift.TestCallQueue
> Running org.apache.hadoop.hbase.thrift.TestThriftHttpServer
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.239 sec - 
> in org.apache.hadoop.hbase.thrift.TestThriftHttpServer
> Running org.apache.hadoop.hbase.thrift.TestThriftServer
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 75.55 sec - 
> in org.apache.hadoop.hbase.thrift.TestThriftServer
> Running org.apache.hadoop.hbase.thrift.TestThriftServerCmdLine
> Results :
> Tests run: 72, Failures: 0, Errors: 0, Skipped: 0
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 18:33.104s
> [INFO] Finished at: Mon Oct 16 03:42:59 UTC 2017
> [INFO] Final Memory: 61M/1299M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test (default-test) on 
> project hbase-thrift: There was a timeout or other error in the fork -> [Help 
> 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18967) Backport HBASE-17181 to branch-1.3

2017-10-16 Thread Peter Somogyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206577#comment-16206577
 ] 

Peter Somogyi commented on HBASE-18967:
---

I created HBASE-19019 for the QA failure.

> Backport HBASE-17181 to branch-1.3
> --
>
> Key: HBASE-18967
> URL: https://issues.apache.org/jira/browse/HBASE-18967
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Chia-Ping Tsai
>Assignee: Peter Somogyi
> Fix For: 1.3.2
>
> Attachments: HBASE-18967.branch-1.3.001.patch, 
> HBASE-18967.branch-1.3.001.patch, HBASE-18967.branch-1.3.001.patch, 
> HBASE-18967.branch-1.3.001.patch, HBASE-18967.branch-1.3.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19019) QA fails on hbase-thrift module with timeout

2017-10-16 Thread Peter Somogyi (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19019?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206576#comment-16206576
 ] 

Peter Somogyi commented on HBASE-19019:
---

I don't have access to the build machines so I can't really debug this issue. 
Maybe the host is overloaded, which causes the timeout, but I was not able to 
reproduce it locally by putting my machine under higher load.
Last night I also tried to run a precommit job when there were no other 
hbase-precommit jobs running, but that check failed as well.

> QA fails on hbase-thrift module with timeout
> 
>
> Key: HBASE-19019
> URL: https://issues.apache.org/jira/browse/HBASE-19019
> Project: HBase
>  Issue Type: Bug
>  Components: Thrift
>Reporter: Peter Somogyi
>Priority: Critical
>
> For any modification in hbase-thrift module the precommit build fails with 
> timeout for {{TestThriftServerCmdLine}}. I noticed this failure on multiple 
> patches: HBASE-18967 and HBASE-18996 even when the patch did not contain any 
> modification 
> (https://issues.apache.org/jira/secure/attachment/12892414/HBASE-18967.branch-1.3.002.patch)
> The {{TestThriftServerCmdLine}} test passes locally on both mentioned patches.
> One failure: https://builds.apache.org/job/PreCommit-HBASE-Build/9127/
> {code}
> [INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ hbase-thrift ---
> [INFO] Surefire report directory: 
> /testptch/hbase/hbase-thrift/target/surefire-reports
> [INFO] Using configured provider 
> org.apache.maven.surefire.junitcore.JUnitCoreProvider
> [INFO] parallel='none', perCoreThreadCount=true, threadCount=0, 
> useUnlimitedThreads=false, threadCountSuites=0, threadCountClasses=0, 
> threadCountMethods=0, parallelOptimized=true
> ---
>  T E S T S
> ---
> Running org.apache.hadoop.hbase.thrift2.TestThriftHBaseServiceHandler
> Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.201 sec - 
> in org.apache.hadoop.hbase.thrift2.TestThriftHBaseServiceHandler
> Running 
> org.apache.hadoop.hbase.thrift2.TestThriftHBaseServiceHandlerWithReadOnly
> Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.257 sec - 
> in org.apache.hadoop.hbase.thrift2.TestThriftHBaseServiceHandlerWithReadOnly
> Running 
> org.apache.hadoop.hbase.thrift2.TestThriftHBaseServiceHandlerWithLabels
> Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.621 sec - 
> in org.apache.hadoop.hbase.thrift2.TestThriftHBaseServiceHandlerWithLabels
> Running org.apache.hadoop.hbase.thrift.TestCallQueue
> Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.473 sec - 
> in org.apache.hadoop.hbase.thrift.TestCallQueue
> Running org.apache.hadoop.hbase.thrift.TestThriftHttpServer
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.239 sec - 
> in org.apache.hadoop.hbase.thrift.TestThriftHttpServer
> Running org.apache.hadoop.hbase.thrift.TestThriftServer
> Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 75.55 sec - 
> in org.apache.hadoop.hbase.thrift.TestThriftServer
> Running org.apache.hadoop.hbase.thrift.TestThriftServerCmdLine
> Results :
> Tests run: 72, Failures: 0, Errors: 0, Skipped: 0
> [INFO] 
> 
> [INFO] BUILD FAILURE
> [INFO] 
> 
> [INFO] Total time: 18:33.104s
> [INFO] Finished at: Mon Oct 16 03:42:59 UTC 2017
> [INFO] Final Memory: 61M/1299M
> [INFO] 
> 
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test (default-test) on 
> project hbase-thrift: There was a timeout or other error in the fork -> [Help 
> 1]
> [ERROR] 
> [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
> switch.
> [ERROR] Re-run Maven using the -X switch to enable full debug logging.
> [ERROR] 
> [ERROR] For more information about the errors and possible solutions, please 
> read the following articles:
> [ERROR] [Help 1] 
> http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
> {code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (HBASE-19019) QA fails on hbase-thrift module with timeout

2017-10-16 Thread Peter Somogyi (JIRA)
Peter Somogyi created HBASE-19019:
-

 Summary: QA fails on hbase-thrift module with timeout
 Key: HBASE-19019
 URL: https://issues.apache.org/jira/browse/HBASE-19019
 Project: HBase
  Issue Type: Bug
  Components: Thrift
Reporter: Peter Somogyi
Priority: Critical


For any modification in hbase-thrift module the precommit build fails with 
timeout for {{TestThriftServerCmdLine}}. I noticed this failure on multiple 
patches: HBASE-18967 and HBASE-18996 even when the patch did not contain any 
modification 
(https://issues.apache.org/jira/secure/attachment/12892414/HBASE-18967.branch-1.3.002.patch)

The {{TestThriftServerCmdLine}} test passes locally on both mentioned patches.

One failure: https://builds.apache.org/job/PreCommit-HBASE-Build/9127/
{code}
[INFO] --- maven-surefire-plugin:2.18.1:test (default-test) @ hbase-thrift ---
[INFO] Surefire report directory: 
/testptch/hbase/hbase-thrift/target/surefire-reports
[INFO] Using configured provider 
org.apache.maven.surefire.junitcore.JUnitCoreProvider
[INFO] parallel='none', perCoreThreadCount=true, threadCount=0, 
useUnlimitedThreads=false, threadCountSuites=0, threadCountClasses=0, 
threadCountMethods=0, parallelOptimized=true

---
 T E S T S
---
Running org.apache.hadoop.hbase.thrift2.TestThriftHBaseServiceHandler
Tests run: 29, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 48.201 sec - 
in org.apache.hadoop.hbase.thrift2.TestThriftHBaseServiceHandler
Running 
org.apache.hadoop.hbase.thrift2.TestThriftHBaseServiceHandlerWithReadOnly
Tests run: 15, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.257 sec - in 
org.apache.hadoop.hbase.thrift2.TestThriftHBaseServiceHandlerWithReadOnly
Running org.apache.hadoop.hbase.thrift2.TestThriftHBaseServiceHandlerWithLabels
Tests run: 6, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 1.621 sec - in 
org.apache.hadoop.hbase.thrift2.TestThriftHBaseServiceHandlerWithLabels
Running org.apache.hadoop.hbase.thrift.TestCallQueue
Tests run: 18, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.473 sec - in 
org.apache.hadoop.hbase.thrift.TestCallQueue
Running org.apache.hadoop.hbase.thrift.TestThriftHttpServer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 5.239 sec - in 
org.apache.hadoop.hbase.thrift.TestThriftHttpServer
Running org.apache.hadoop.hbase.thrift.TestThriftServer
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 75.55 sec - in 
org.apache.hadoop.hbase.thrift.TestThriftServer
Running org.apache.hadoop.hbase.thrift.TestThriftServerCmdLine

Results :

Tests run: 72, Failures: 0, Errors: 0, Skipped: 0

[INFO] 
[INFO] BUILD FAILURE
[INFO] 
[INFO] Total time: 18:33.104s
[INFO] Finished at: Mon Oct 16 03:42:59 UTC 2017
[INFO] Final Memory: 61M/1299M
[INFO] 
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test (default-test) on 
project hbase-thrift: There was a timeout or other error in the fork -> [Help 1]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please 
read the following articles:
[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoFailureException
{code}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16338) update jackson to 2.y

2017-10-16 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206562#comment-16206562
 ] 

Mike Drob commented on HBASE-16338:
---

Looks like adding the debug flag to Yetus does not pass it along to Maven. Should 
I file a YETUS jira for that?

> update jackson to 2.y
> -
>
> Key: HBASE-16338
> URL: https://issues.apache.org/jira/browse/HBASE-16338
> Project: HBase
>  Issue Type: Task
>  Components: dependencies
>Reporter: Sean Busbey
>Assignee: Mike Drob
> Fix For: 2.0.0-beta-2
>
> Attachments: 16338.txt, HBASE-16338.v10.patch, HBASE-16338.v2.patch, 
> HBASE-16338.v3.patch, HBASE-16338.v5.patch, HBASE-16338.v6.patch, 
> HBASE-16338.v7.patch, HBASE-16338.v8.patch, HBASE-16338.v9.patch
>
>
> Our jackson dependency is from ~3 years ago. Update to the jackson 2.y line, 
> using 2.7.0+.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18893) shell 'alter' command no longer distinguishes column add/modify/delete

2017-10-16 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18893?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206557#comment-16206557
 ] 

Mike Drob commented on HBASE-18893:
---

Discussed this with [~appy]; he suggested that we can manually examine the new 
table descriptor and then call preAddColumn/preDeleteColumn/etc. as appropriate 
and still keep a single call to modifyTable in the submit procedure. An 
interesting problem comes up with the return values of the preXXX methods: 
preModifyTable is void, so there is no chance to return false and bypass here, 
but the other preXXX methods are boolean typed. Do we need to respect their 
ability to bypass or not?

[~Apache9], [~apurtell], [~stack] - this is related to the discussion you were 
already having, tagging y'all for visibility.
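
A rough sketch of the descriptor-diff idea discussed above (the class and method 
layout here are illustrative, not the actual patch; the real pre/post hook calls 
would slot into the commented branches):

{code}
import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;

public final class DescriptorDiff {
  private DescriptorDiff() {}

  public static void diff(HTableDescriptor oldHtd, HTableDescriptor newHtd) {
    Map<String, HColumnDescriptor> oldFamilies = new HashMap<>();
    for (HColumnDescriptor hcd : oldHtd.getFamilies()) {
      oldFamilies.put(hcd.getNameAsString(), hcd);
    }
    for (HColumnDescriptor hcd : newHtd.getFamilies()) {
      HColumnDescriptor before = oldFamilies.remove(hcd.getNameAsString());
      if (before == null) {
        // family only in the new descriptor -> would drive preAddColumn/postAddColumn
      } else if (!before.equals(hcd)) {
        // family changed -> would drive preModifyColumn/postModifyColumn
      }
    }
    for (String deleted : oldFamilies.keySet()) {
      // family only in the old descriptor -> would drive preDeleteColumn/postDeleteColumn
    }
  }
}
{code}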

> shell 'alter' command no longer distinguishes column add/modify/delete
> --
>
> Key: HBASE-18893
> URL: https://issues.apache.org/jira/browse/HBASE-18893
> Project: HBase
>  Issue Type: Bug
>  Components: shell
>Reporter: Mike Drob
>
> After HBASE-15641 all 'alter' commands go through a single modifyTable call 
> at the end, so we no longer can easily distinguish add, modify, and delete 
> column events. This potentially affects coprocessors that needed the update 
> notifications for new or removed columns.
> Let's let the shell still make separate behaviour calls like it did before 
> without undoing the batching that seems pretty useful.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19007) Align Services Interfaces in Master and RegionServer

2017-10-16 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19007?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206520#comment-16206520
 ] 

Appy commented on HBASE-19007:
--

Seeing Server exposed, something irked me; these are my lines of thought.
We had MasterServices and RSS. Even if the original author designed these for 
CP use only, here we are years down the line cleaning up these confused 
interfaces. I think the reasons were:
- Location of these classes: o.a.h.h.master / o.a.h.h.regionserver - hmm... looks 
like something for internal use.
- Naming. MasterServices - an interface for the master?... hmm... let's use it 
for testing and not expose HMaster.

We have learnt that marking classes with IA.LP doesn't help if they are deep 
inside our code. Let's not do the same for Server. It's already being used in 
over 100 places internally.
I'd suggest that *anything* and *everything* that needs to be exposed to CPs 
should be a method in some env. Even if we want to expose a full set of 
functions which are already in an internal interface, let's not expose the 
interface. Instead, let's make wrapper functions in the CpEnv. That way:
- We clearly isolate internal and external with this well defined boundary - 
*CoprocessorEnvironment*.
- Although wrapping fns in Envs is a few extra lines of code, this extra step 
ensures that we'll never expose anything to CPs by mistake.
- I like that this boundary is in an appropriate location - o.a.h.h.coprocessors.

What do you say [~anoop.hbase], [~stack] ?
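
To make the "wrap, don't expose" point concrete, here is a purely illustrative 
sketch; the interface and method names are hypothetical and not the real CP API:

{code}
// Hypothetical example only: the env exposes narrow wrapper methods instead of
// handing a coprocessor the whole internal Server interface.
interface ExampleCoprocessorEnvironment {
  // Instead of: Server getServer();  // leaks a large internal surface
  String getServerName();   // the narrow fact a CP actually needs
  boolean isStopping();     // another narrow, stable fact
}
{code}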

> Align Services Interfaces in Master and RegionServer
> 
>
> Key: HBASE-19007
> URL: https://issues.apache.org/jira/browse/HBASE-19007
> Project: HBase
>  Issue Type: Task
>Reporter: stack
>Priority: Blocker
>
> HBASE-18183 adds a CoprocessorRegionServerService to give a view on 
> RegionServiceServices that is safe to expose to Coprocessors.
> On the Master-side, MasterServices becomes an Interface for exposing to 
> Coprocessors.
> We need to align the two.
> For background, see 
> https://issues.apache.org/jira/browse/HBASE-12260?focusedCommentId=16203820=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16203820
>  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19001) Remove StoreScanner dependency in our own CP related tests

2017-10-16 Thread Gary Helmling (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206506#comment-16206506
 ] 

Gary Helmling commented on HBASE-19001:
---

Agree with Anoop. This needs a full description with context, an explanation of 
what the replacement for this functionality is, and some plan for how we 
communicate this to downstream users.

I assume this was discussed in a thread on the dev list first. Can we also 
point to that discussion?

This will break the current implementation of the Apache Tephra 
TransactionProcessor:
https://github.com/apache/incubator-tephra/blob/master/tephra-hbase-compat-1.3/src/main/java/org/apache/tephra/hbase/coprocessor/TransactionProcessor.java

so pointing to some context on why this change was made when downstream users 
come looking for it would be very helpful.



> Remove StoreScanner dependency in our own CP related tests
> --
>
> Key: HBASE-19001
> URL: https://issues.apache.org/jira/browse/HBASE-19001
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-19001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18898) Provide way for the core flow to know whether CP implemented each of the hooks

2017-10-16 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206490#comment-16206490
 ] 

Mike Drob commented on HBASE-18898:
---

bq. That'll at least give power of compile time checks. Checking correctness 
for arguments, return types, exceptions, etc of an impl should not require 
running a cluster and loading the coprocessor.
We probably need a very robust test harness for users to check their CP 
implementations, but that's a separate issue.

> Provide way for the core flow to know whether CP implemented each of the hooks
> --
>
> Key: HBASE-18898
> URL: https://issues.apache.org/jira/browse/HBASE-18898
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors, Performance
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
>
> This came up as a discussion topic at the tail of HBASE-17732.
> Can we have a way in the code (before trying to call the hook) to know 
> whether the user has implemented a particular hook or not? e.g. of the write 
> related hooks, prePut() might be the only one the user CP implemented; all the 
> others are just the dummy impls from the interface. Can the core code know 
> this and skip the calls to the dummy hooks entirely? Sometimes we do extra 
> processing just to call a CP hook (say we have to make a POJO out of a PB 
> object for the call), and if the user CP did not implement that hook, we could 
> avoid this work entirely. The pain grows when we later have to deprecate one 
> hook and add a new one: the dummy impl of the new hook has to call the old 
> one, and that may do extra work in the normal case.
> If the CP framework itself had a way to tell this, the core code could make 
> use of it. What I am expecting is something like the PB way, where we can call 
> CPObject.hasPre(), then CPObject. pre ().. We should not be asking 
> users to implement this extra ugly thing. When the CP instance is loaded in the 
> RS/HM, that object will already have this info. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18898) Provide way for the core flow to know whether CP implemented each of the hooks

2017-10-16 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206487#comment-16206487
 ] 

Mike Drob commented on HBASE-18898:
---

Another benefit is that a single function can be annotated as PRE_PUT, 
PRE_DELETE, etc., if there is duplicative functionality.

I agree with Andrew that we can follow the JAX-RS model here and don't have to 
enforce particular method signatures. The consumers can opt in to what they 
want; see something similar at 
https://github.com/apache/hbase/blob/master/hbase-rest/src/main/java/org/apache/hadoop/hbase/rest/ExistsResource.java#L59
where the consumer opts in to get method parameters. This is probably much 
easier to do using a real DI framework, though, and I don't want to see us going 
down that path ourselves...
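
For reference, the opt-in style being pointed at looks roughly like this in plain 
JAX-RS (generic JAX-RS usage, not HBase code; the resource path is made up):

{code}
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Response;
import javax.ws.rs.core.UriInfo;

@Path("/exists")
public class ExistsExampleResource {
  @GET
  public Response exists(@Context UriInfo uriInfo) {
    // the method declares only the parameters it cares about (here UriInfo);
    // the framework injects them and other injectable context is simply ignored
    return Response.ok().build();
  }
}
{code}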

> Provide way for the core flow to know whether CP implemented each of the hooks
> --
>
> Key: HBASE-18898
> URL: https://issues.apache.org/jira/browse/HBASE-18898
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors, Performance
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
>
> This came up as a discussion topic at the tail of HBASE-17732.
> Can we have a way in the code (before trying to call the hook) to know 
> whether the user has implemented a particular hook or not? e.g. of the write 
> related hooks, prePut() might be the only one the user CP implemented; all the 
> others are just the dummy impls from the interface. Can the core code know 
> this and skip the calls to the dummy hooks entirely? Sometimes we do extra 
> processing just to call a CP hook (say we have to make a POJO out of a PB 
> object for the call), and if the user CP did not implement that hook, we could 
> avoid this work entirely. The pain grows when we later have to deprecate one 
> hook and add a new one: the dummy impl of the new hook has to call the old 
> one, and that may do extra work in the normal case.
> If the CP framework itself had a way to tell this, the core code could make 
> use of it. What I am expecting is something like the PB way, where we can call 
> CPObject.hasPre(), then CPObject. pre ().. We should not be asking 
> users to implement this extra ugly thing. When the CP instance is loaded in the 
> RS/HM, that object will already have this info. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-16338) update jackson to 2.y

2017-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-16338?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206485#comment-16206485
 ] 

Hadoop QA commented on HBASE-16338:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:blue}0{color} | {color:blue} shelldocs {color} | {color:blue}  0m  
9s{color} | {color:blue} Shelldocs was not available. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 17 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
29s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
35s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
40s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  4m 
28s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
 2s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hbase-resource-bundle hbase-shaded hbase-shaded/hbase-shaded-mapreduce . 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
47s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
56s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
14s{color} | {color:red} hbase-rest in the patch failed. {color} |
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  0m 
36s{color} | {color:red} hbase-spark in the patch failed. {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  4m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  4m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} rubocop {color} | {color:green}  0m  
3s{color} | {color:green} There were no new rubocop issues. {color} |
| {color:green}+1{color} | {color:green} ruby-lint {color} | {color:green}  0m  
2s{color} | {color:green} There were no new ruby-lint issues. {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  0m 
 4s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
12s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
15s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
35m 17s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hbase-resource-bundle hbase-shaded hbase-shaded/hbase-shaded-mapreduce . 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | 

[jira] [Commented] (HBASE-18898) Provide way for the core flow to know whether CP implemented each of the hooks

2017-10-16 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206459#comment-16206459
 ] 

Appy commented on HBASE-18898:
--

Thinking about it more:

By annotated methods, I assume you mean something like this:
{noformat}
@ObserverHook(Master.PRE_DELETE_TABLE)
boolean myPreDeleteHook(...) {
...
}
{noformat}
where we'll expose a set of enums which can be passed as a parameter to the 
annotation to denote which type of observer hook that function is.
Then, using reflection, we can get all methods marked @ObserverHook.

But we'll still have to 1) expose reference method signatures somewhere and 2) 
verify that the annotated methods have the correct signature, so that if someone 
annotates {{void foo()}} by mistake (note that all observer hooks take at least 
one param - the environment), we can report a failure.
Maybe our current *Observer interfaces will become the reference for the method 
signatures?

But then, why not let implementations use method override?
That'll at least give power of compile time checks. Checking  correctness for 
arguments, return types, exceptions, etc of an impl should not require running 
a cluster and loading the coprocessor.

And if we have method overrides, having an extra annotation in implementations 
is completely redundant (is it not?).

The one +ve I see with annotated methods is that we can support multiple hooks 
of one type in a single implementation, i.e. multiple fns can be annotated 
PRE_DELETE. But is it worth that effort?
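
For the method-override route, the matching logic on the core side could be as 
simple as a reflection check on the declaring class; a minimal sketch with 
illustrative names (not the real Observer interfaces):

{code}
import java.lang.reflect.Method;

final class HookPresence {
  // stand-in for an observer interface with no-op default hooks
  interface ExampleObserver {
    default void prePut(String op) { /* no-op default */ }
  }

  static boolean implementsHook(Class<? extends ExampleObserver> impl,
      String name, Class<?>... paramTypes) throws NoSuchMethodException {
    Method m = impl.getMethod(name, paramTypes);
    // if the implementation never overrides the default method, getMethod()
    // resolves to the interface's default and the declaring class is the
    // interface itself, so the core could skip calling this hook entirely
    return m.getDeclaringClass() != ExampleObserver.class;
  }
}
{code}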

> Provide way for the core flow to know whether CP implemented each of the hooks
> --
>
> Key: HBASE-18898
> URL: https://issues.apache.org/jira/browse/HBASE-18898
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors, Performance
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
>
> This came up as a discussion topic at the tail of HBASE-17732.
> Can we have a way in the code (before trying to call the hook) to know 
> whether the user has implemented a particular hook or not? e.g. of the write 
> related hooks, prePut() might be the only one the user CP implemented; all the 
> others are just the dummy impls from the interface. Can the core code know 
> this and skip the calls to the dummy hooks entirely? Sometimes we do extra 
> processing just to call a CP hook (say we have to make a POJO out of a PB 
> object for the call), and if the user CP did not implement that hook, we could 
> avoid this work entirely. The pain grows when we later have to deprecate one 
> hook and add a new one: the dummy impl of the new hook has to call the old 
> one, and that may do extra work in the normal case.
> If the CP framework itself had a way to tell this, the core code could make 
> use of it. What I am expecting is something like the PB way, where we can call 
> CPObject.hasPre(), then CPObject. pre ().. We should not be asking 
> users to implement this extra ugly thing. When the CP instance is loaded in the 
> RS/HM, that object will already have this info. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HBASE-18950) Remove Optional parameters in AsyncAdmin interface

2017-10-16 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-18950:
---

Assignee: Sean Busbey  (was: Guanghao Zhang)

> Remove Optional parameters in AsyncAdmin interface
> --
>
> Key: HBASE-18950
> URL: https://issues.apache.org/jira/browse/HBASE-18950
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Duo Zhang
>Assignee: Sean Busbey
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18950.master.001.patch, 
> HBASE-18950.master.002.patch, HBASE-18950.master.003.patch, 
> HBASE-18950.master.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (HBASE-18950) Remove Optional parameters in AsyncAdmin interface

2017-10-16 Thread Sean Busbey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-18950?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Busbey reassigned HBASE-18950:
---

Assignee: Guanghao Zhang  (was: Sean Busbey)

> Remove Optional parameters in AsyncAdmin interface
> --
>
> Key: HBASE-18950
> URL: https://issues.apache.org/jira/browse/HBASE-18950
> Project: HBase
>  Issue Type: Sub-task
>  Components: Client
>Reporter: Duo Zhang
>Assignee: Guanghao Zhang
>Priority: Blocker
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18950.master.001.patch, 
> HBASE-18950.master.002.patch, HBASE-18950.master.003.patch, 
> HBASE-18950.master.004.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19018) Use of hadoop internals that require bouncycastle should declare bouncycastle dependency

2017-10-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206451#comment-16206451
 ] 

Sean Busbey commented on HBASE-19018:
-

Some additional context for bystanders: test dependencies aren't included 
transitively. We rely on the hadoop-minicluster dependency for the mini 
dfs/yarn/etc. clusters; that dependency is just a pom that pulls in various bits 
from across the Hadoop project so that it will have those mini cluster 
implementations. This involves pulling in a fair number of test-jars. Without a 
bunch of archeological work on Hadoop's repo and jira it's hard to say if the 
lack of bouncycastle in the set of dependencies is intentional or not. But with 
only a failed use of an internal class to go on, we'll be hard pressed to 
change it.

> Use of hadoop internals that require bouncycastle should declare bouncycastle 
> dependency
> 
>
> Key: HBASE-19018
> URL: https://issues.apache.org/jira/browse/HBASE-19018
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies, test
>Affects Versions: 2.0.0-alpha-3
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
>
> The tests for HBASE-15806 rely on a Hadoop internal class, 
> {{KeyStoreTestUtil}}, which in turn relies on the Bouncycastle library for 
> certificate generation.
> when building / running with Hadoop 2.7.1, we accidentally get a bouncycastle 
> implementation via a transitive dependency of {{hadoop-minikdc}}. When 
> attempting to run against Hadoop 3.0.0-alpha4 and 3.0.0-beta1 (and presumably 
> future Hadoop 3.y releases), this bouncycastle jar is no longer pulled in and 
> we fail with a CNFE.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19018) Use of hadoop internals that require bouncycastle should declare bouncycastle dependency

2017-10-16 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206448#comment-16206448
 ] 

Mike Drob commented on HBASE-19018:
---

Gotcha. Yea, let's add the dep locally then.

> Use of hadoop internals that require bouncycastle should declare bouncycastle 
> dependency
> 
>
> Key: HBASE-19018
> URL: https://issues.apache.org/jira/browse/HBASE-19018
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies, test
>Affects Versions: 2.0.0-alpha-3
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
>
> The tests for HBASE-15806 rely on a Hadoop internal class, 
> {{KeyStoreTestUtil}}, which in turn relies on the Bouncycastle library for 
> certificate generation.
> when building / running with Hadoop 2.7.1, we accidentally get a bouncycastle 
> implementation via a transitive dependency of {{hadoop-minikdc}}. When 
> attempting to run against Hadoop 3.0.0-alpha4 and 3.0.0-beta1 (and presumably 
> future Hadoop 3.y releases), this bouncycastle jar is no longer pulled in and 
> we fail with a CNFE.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18898) Provide way for the core flow to know whether CP implemented each of the hooks

2017-10-16 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18898?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206443#comment-16206443
 ] 

Appy commented on HBASE-18898:
--

I considered the annotation way earlier, but didn't suggest it because of the 
following pros and cons (++ and -- bullets):

w/ annotated methods
++ makes the matching logic easier on our side
-- More IA.Public stuff. We'll need a set of enums for each observer, so 6 
total. More compat work. :-)
-- Has to be done before beta1, else postponed till 3.0

w/ method override (i.e. current way)
++ No code changes needed in existing CP implementations
++ Compile time checks instead of runtime failures
++ Internal change only, so no restrictions. Can be done anytime.
-- More matching logic on our side.

What do you say [~apurtell]?


> Provide way for the core flow to know whether CP implemented each of the hooks
> --
>
> Key: HBASE-18898
> URL: https://issues.apache.org/jira/browse/HBASE-18898
> Project: HBase
>  Issue Type: Improvement
>  Components: Coprocessors, Performance
>Reporter: Anoop Sam John
>Assignee: Anoop Sam John
>Priority: Critical
>
> This came up as a discussion topic at the tail of HBASE-17732.
> Can we have a way in the code (before trying to call the hook) to know 
> whether the user has implemented a particular hook or not? e.g. of the write 
> related hooks, prePut() might be the only one the user CP implemented; all the 
> others are just the dummy impls from the interface. Can the core code know 
> this and skip the calls to the dummy hooks entirely? Sometimes we do extra 
> processing just to call a CP hook (say we have to make a POJO out of a PB 
> object for the call), and if the user CP did not implement that hook, we could 
> avoid this work entirely. The pain grows when we later have to deprecate one 
> hook and add a new one: the dummy impl of the new hook has to call the old 
> one, and that may do extra work in the normal case.
> If the CP framework itself had a way to tell this, the core code could make 
> use of it. What I am expecting is something like the PB way, where we can call 
> CPObject.hasPre(), then CPObject. pre ().. We should not be asking 
> users to implement this extra ugly thing. When the CP instance is loaded in the 
> RS/HM, that object will already have this info. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (HBASE-19018) Use of hadoop internals that require bouncycastle should declare bouncycastle dependency

2017-10-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206441#comment-16206441
 ] 

Sean Busbey edited comment on HBASE-19018 at 10/16/17 7:28 PM:
---

it does but the declared dependency is a test dependency, because 
KeyStoreTestUtil is in a Hadoop module's test jar.


was (Author: busbey):
it doea but the declared dependency is a test dependency, because 
KeyStoreTestUtil is in a Hadoop module's test jar.

> Use of hadoop internals that require bouncycastle should declare bouncycastle 
> dependency
> 
>
> Key: HBASE-19018
> URL: https://issues.apache.org/jira/browse/HBASE-19018
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies, test
>Affects Versions: 2.0.0-alpha-3
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
>
> The tests for HBASE-15806 rely on a Hadoop internal class, 
> {{KeyStoreTestUtil}}, which in turn relies on the Bouncycastle library for 
> certificate generation.
> when building / running with Hadoop 2.7.1, we accidentally get a bouncycastle 
> implementation via a transitive dependency of {{hadoop-minikdc}}. When 
> attempting to run against Hadoop 3.0.0-alpha4 and 3.0.0-beta1 (and presumably 
> future Hadoop 3.y releases), this bouncycastle jar is no longer pulled in and 
> we fail with a CNFE.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19018) Use of hadoop internals that require bouncycastle should declare bouncycastle dependency

2017-10-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206441#comment-16206441
 ] 

Sean Busbey commented on HBASE-19018:
-

it does but the declared dependency is a test dependency, because 
KeyStoreTestUtil is in a Hadoop module's test jar.

> Use of hadoop internals that require bouncycastle should declare bouncycastle 
> dependency
> 
>
> Key: HBASE-19018
> URL: https://issues.apache.org/jira/browse/HBASE-19018
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies, test
>Affects Versions: 2.0.0-alpha-3
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
>
> The tests for HBASE-15806 rely on a Hadoop internal class, 
> {{KeyStoreTestUtil}}, which in turn relies on the Bouncycastle library for 
> certificate generation.
> When building / running with Hadoop 2.7.1, we accidentally get a bouncycastle 
> implementation via a transitive dependency of {{hadoop-minikdc}}. When 
> attempting to run against Hadoop 3.0.0-alpha4 and 3.0.0-beta1 (and presumably 
> future Hadoop 3.y releases), this bouncycastle jar is no longer pulled in and 
> we fail with a CNFE.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18967) Backport HBASE-17181 to branch-1.3

2017-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18967?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206440#comment-16206440
 ] 

Hadoop QA commented on HBASE-18967:
---

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
46s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
 4s{color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} branch-1.3 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} branch-1.3 passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
16s{color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
 0s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
11s{color} | {color:green} branch-1.3 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} branch-1.3 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} branch-1.3 passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  2m 
20s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
26m 47s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.4.0 2.4.1 2.5.0 2.5.1 2.5.2 2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3. 
{color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed with JDK v1.7.0_151 {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 18m 42s{color} 
| {color:red} hbase-thrift in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
 7s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 75m 51s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Timed out junit tests | 
org.apache.hadoop.hbase.thrift.TestThriftServerCmdLine |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce 

[jira] [Commented] (HBASE-19014) surefire fails; When writing xml report stdout/stderr ... No such file or directory

2017-10-16 Thread Chia-Ping Tsai (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19014?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206437#comment-16206437
 ] 

Chia-Ping Tsai commented on HBASE-19014:


QA is still 
running...https://builds.apache.org/view/H-L/view/HBase/job/PreCommit-HBASE-Build/9135/console

> surefire fails; When writing xml report stdout/stderr ... No such file or 
> directory
> ---
>
> Key: HBASE-19014
> URL: https://issues.apache.org/jira/browse/HBASE-19014
> Project: HBase
>  Issue Type: Bug
>Reporter: Chia-Ping Tsai
>Assignee: Chia-Ping Tsai
> Fix For: 2.0.0, 1.4.0, 1.3.2, 1.5.0, 1.2.7
>
> Attachments: HBASE-19014.branch-1.v0.patch
>
>
> {code}
> 17:22:33 [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-surefire-plugin:2.18.1:test 
> (secondPartTestsExecution) on project hbase-server: ExecutionException: 
> java.lang.RuntimeException: java.lang.RuntimeException: 
> org.apache.maven.surefire.report.ReporterException: When writing xml report 
> stdout/stderr: /tmp/stderr1114622923250399196deferred (No such file or 
> directory) -> [Help 1]
> {code}
> It happens frequently on my jenkins...I update the surefire to 2.20.1, and 
> then the failure doesn't happen again. see SUREFIRE-1239.
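
For anyone else hitting this, the workaround described above is just moving to 
a newer surefire. Roughly the following, with the caveat that where the version 
is actually set depends on the pom layout (this is a sketch, not the attached 
patch):

{code}
<!-- Sketch only: pin surefire to 2.20.1, where the failure reportedly no
     longer reproduces (see SUREFIRE-1239). -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <version>2.20.1</version>
</plugin>
{code}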



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19018) Use of hadoop internals that require bouncycastle should declare bouncycastle dependency

2017-10-16 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206429#comment-16206429
 ] 

Mike Drob commented on HBASE-19018:
---

Shouldn't whatever module contains KeyStoreTestUtil already have a declared 
dependency on BouncyCastle?

> Use of hadoop internals that require bouncycastle should declare bouncycastle 
> dependency
> 
>
> Key: HBASE-19018
> URL: https://issues.apache.org/jira/browse/HBASE-19018
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies, test
>Affects Versions: 2.0.0-alpha-3
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
>
> The tests for HBASE-15806 rely on a Hadoop internal class, 
> {{KeyStoreTestUtil}}, which in turn relies on the Bouncycastle library for 
> certificate generation.
> When building / running with Hadoop 2.7.1, we accidentally get a bouncycastle 
> implementation via a transitive dependency of {{hadoop-minikdc}}. When 
> attempting to run against Hadoop 3.0.0-alpha4 and 3.0.0-beta1 (and presumably 
> future Hadoop 3.y releases), this bouncycastle jar is no longer pulled in and 
> we fail with a CNFE.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19018) Use of hadoop internals that require bouncycastle should declare bouncycastle dependency

2017-10-16 Thread Sean Busbey (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206416#comment-16206416
 ] 

Sean Busbey commented on HBASE-19018:
-

Why would our use of their internal class be a Hadoop problem?

> Use of hadoop internals that require bouncycastle should declare bouncycastle 
> dependency
> 
>
> Key: HBASE-19018
> URL: https://issues.apache.org/jira/browse/HBASE-19018
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies, test
>Affects Versions: 2.0.0-alpha-3
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
>
> The tests for HBASE-15806 rely on a Hadoop internal class, 
> {{KeyStoreTestUtil}}, which in turn relies on the Bouncycastle library for 
> certificate generation.
> When building / running with Hadoop 2.7.1, we accidentally get a bouncycastle 
> implementation via a transitive dependency of {{hadoop-minikdc}}. When 
> attempting to run against Hadoop 3.0.0-alpha4 and 3.0.0-beta1 (and presumably 
> future Hadoop 3.y releases), this bouncycastle jar is no longer pulled in and 
> we fail with a CNFE.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18912) Update Admin methods to return Lists instead of arrays

2017-10-16 Thread Appy (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18912?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206414#comment-16206414
 ] 

Appy commented on HBASE-18912:
--

Another reason to have a reflection-based test to enforce the invariant that 
the two admins are in sync. :)
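
Roughly the shape such a check could take, purely as a sketch (not an existing 
test; it assumes hbase-client is on the classpath, and a real parity test would 
also need to account for AsyncAdmin returning futures and for intentional 
naming differences):

{code}
import java.lang.reflect.Method;
import java.util.Set;
import java.util.TreeSet;

public class AdminParitySketch {
  // Collect the public method names exposed by an interface.
  static Set<String> methodNames(Class<?> clazz) {
    Set<String> names = new TreeSet<>();
    for (Method m : clazz.getMethods()) {
      names.add(m.getName());
    }
    return names;
  }

  public static void main(String[] args) throws ClassNotFoundException {
    Set<String> admin =
        methodNames(Class.forName("org.apache.hadoop.hbase.client.Admin"));
    Set<String> asyncAdmin =
        methodNames(Class.forName("org.apache.hadoop.hbase.client.AsyncAdmin"));

    Set<String> onlyInAdmin = new TreeSet<>(admin);
    onlyInAdmin.removeAll(asyncAdmin);
    Set<String> onlyInAsync = new TreeSet<>(asyncAdmin);
    onlyInAsync.removeAll(admin);

    // A real test would assert these are empty, modulo a whitelist.
    System.out.println("Only in Admin: " + onlyInAdmin);
    System.out.println("Only in AsyncAdmin: " + onlyInAsync);
  }
}
{code}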

> Update Admin methods to return Lists instead of arrays
> --
>
> Key: HBASE-18912
> URL: https://issues.apache.org/jira/browse/HBASE-18912
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0-beta-1
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19018) Use of hadoop internals that require bouncycastle should declare bouncycastle dependency

2017-10-16 Thread Mike Drob (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19018?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206396#comment-16206396
 ] 

Mike Drob commented on HBASE-19018:
---

why is this an hbase jira and not a hadoop one?

> Use of hadoop internals that require bouncycastle should declare bouncycastle 
> dependency
> 
>
> Key: HBASE-19018
> URL: https://issues.apache.org/jira/browse/HBASE-19018
> Project: HBase
>  Issue Type: Bug
>  Components: dependencies, test
>Affects Versions: 2.0.0-alpha-3
>Reporter: Sean Busbey
>Assignee: Sean Busbey
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
>
> The tests for HBASE-15806 rely on a Hadoop internal class, 
> {{KeyStoreTestUtil}}, which in turn relies on the Bouncycastle library for 
> certificate generation.
> When building / running with Hadoop 2.7.1, we accidentally get a bouncycastle 
> implementation via a transitive dependency of {{hadoop-minikdc}}. When 
> attempting to run against Hadoop 3.0.0-alpha4 and 3.0.0-beta1 (and presumably 
> future Hadoop 3.y releases), this bouncycastle jar is no longer pulled in and 
> we fail with a CNFE.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19001) Remove StoreScanner dependency in our own CP related tests

2017-10-16 Thread Anoop Sam John (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19001?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206321#comment-16206321
 ] 

Anoop Sam John commented on HBASE-19001:


Pls change the jira title and desc. This is not a test change; we are removing 
3 CP pre hooks, and we should indicate that in the title.

> Remove StoreScanner dependency in our own CP related tests
> --
>
> Key: HBASE-19001
> URL: https://issues.apache.org/jira/browse/HBASE-19001
> Project: HBase
>  Issue Type: Sub-task
>  Components: Coprocessors
>Reporter: Duo Zhang
>Assignee: Duo Zhang
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-19001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18693) adding an option to restore_snapshot to move mob files from archive dir to working dir

2017-10-16 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206319#comment-16206319
 ] 

huaxiang sun commented on HBASE-18693:
--

I am checking these failed unittests locally and will do another QA run after 
local verification, thanks.

> adding an option to restore_snapshot to move mob files from archive dir to 
> working dir
> --
>
> Key: HBASE-18693
> URL: https://issues.apache.org/jira/browse/HBASE-18693
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Affects Versions: 2.0.0-alpha-2
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-18693.master.001.patch, 
> HBASE-18693.master.002.patch
>
>
> Today, there is a single mob region where mob files for all user regions are 
> saved. There can be many files (on the order of one million) in a single mob 
> directory. When a mob table is restored or cloned from a snapshot, links are 
> created for these mob files. This creates a scaling issue for mob compaction: 
> in mob compaction's select() logic, for each hFileLink it needs to call the 
> NN's getFileStatus() to get the size of the linked hfile. Assuming one such 
> call takes 20ms, 20ms * 1 million links is roughly 6 hours. 
> To avoid this overhead, we want to add an option so that restore_snapshot can 
> move mob files from the archive dir to the working dir. clone_snapshot is more 
> complicated, since it can clone a snapshot to a different table, so moving the 
> files would destroy the snapshot. No option will be added for clone_snapshot.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18950) Remove Optional parameters in AsyncAdmin interface

2017-10-16 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18950?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206320#comment-16206320
 ] 

Hadoop QA commented on HBASE-18950:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 12m 
41s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 5 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
33s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  3m 
55s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
52s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
18s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
59s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
 2s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  3m 
57s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
37m 42s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.6.1 2.6.2 2.6.3 2.6.4 2.6.5 2.7.1 2.7.2 2.7.3 or 3.0.0-alpha4. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
40s{color} | {color:green} hbase-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}103m 
34s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
31s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}180m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce Image:yetus/hbase:5d60123 |
| JIRA Issue | HBASE-18950 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12892392/HBASE-18950.master.004.patch
 |
| Optional Tests |  asflicense  shadedjars  javac  javadoc  unit  findbugs  
hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 19d66ef35174 3.13.0-129-generic #178-Ubuntu SMP Fri Aug 11 
12:48:20 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | 
/home/jenkins/jenkins-slave/workspace/PreCommit-HBASE-Build/component/dev-support/hbase-personality.sh
 |
| git revision | master / 51489b20 |
| Default Java | 1.8.0_144 |
| 

[jira] [Commented] (HBASE-18914) Remove AsyncAdmin's methods which were already deprecated in Admin interface

2017-10-16 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18914?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206317#comment-16206317
 ] 

Hudson commented on HBASE-18914:


FAILURE: Integrated in Jenkins build HBase-2.0 #698 (See 
[https://builds.apache.org/job/HBase-2.0/698/])
HBASE-18914 Remove AsyncAdmin's methods which were already deprecated in 
(zghao: rev 58b0585d66b90d3cdf47da84c64f2912e1773934)
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncHBaseAdmin.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncTableAdminApi.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncRegionAdminApi.java
* (edit) 
hbase-client/src/main/java/org/apache/hadoop/hbase/client/AsyncAdmin.java
* (edit) 
hbase-server/src/test/java/org/apache/hadoop/hbase/client/TestAsyncSnapshotAdminApi.java


> Remove AsyncAdmin's methods which were already deprecated in Admin interface
> 
>
> Key: HBASE-18914
> URL: https://issues.apache.org/jira/browse/HBASE-18914
> Project: HBase
>  Issue Type: Sub-task
>Reporter: Guanghao Zhang
>Assignee: Guanghao Zhang
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-18914.master.001.patch, 
> HBASE-18914.master.002.patch, HBASE-18914.master.002.patch, 
> HBASE-18914.master.003.patch, HBASE-18914.master.003.patch
>
>
> Since we have not released hbase 2.0 yet, I thought it is ok to remove the 
> methods which were already deprecated in the Admin interface.
> The methods which were marked as deprecated in HBASE-18241.
> HTableDescriptor[] deleteTables(Pattern)
> HTableDescriptor[] enableTables(Pattern)
> HTableDescriptor[] disableTables(Pattern)
> getAlterStatus()
> closeRegion()



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18693) adding an option to restore_snapshot to move mob files from archive dir to working dir

2017-10-16 Thread Ted Yu (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206314#comment-16206314
 ] 

Ted Yu commented on HBASE-18693:


Can you get a clean QA run?
See if the 3 failed tests can be reproduced locally.

> adding an option to restore_snapshot to move mob files from archive dir to 
> working dir
> --
>
> Key: HBASE-18693
> URL: https://issues.apache.org/jira/browse/HBASE-18693
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Affects Versions: 2.0.0-alpha-2
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-18693.master.001.patch, 
> HBASE-18693.master.002.patch
>
>
> Today, there is a single mob region where mob files for all user regions are 
> saved. There can be many files (on the order of one million) in a single mob 
> directory. When a mob table is restored or cloned from a snapshot, links are 
> created for these mob files. This creates a scaling issue for mob compaction: 
> in mob compaction's select() logic, for each hFileLink it needs to call the 
> NN's getFileStatus() to get the size of the linked hfile. Assuming one such 
> call takes 20ms, 20ms * 1 million links is roughly 6 hours. 
> To avoid this overhead, we want to add an option so that restore_snapshot can 
> move mob files from the archive dir to the working dir. clone_snapshot is more 
> complicated, since it can clone a snapshot to a different table, so moving the 
> files would destroy the snapshot. No option will be added for clone_snapshot.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-18693) adding an option to restore_snapshot to move mob files from archive dir to working dir

2017-10-16 Thread huaxiang sun (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-18693?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206304#comment-16206304
 ] 

huaxiang sun commented on HBASE-18693:
--

@tedyu and [~jingcheng.du], I posted v2 on the review board; any comments on 
v2? Thanks.

> adding an option to restore_snapshot to move mob files from archive dir to 
> working dir
> --
>
> Key: HBASE-18693
> URL: https://issues.apache.org/jira/browse/HBASE-18693
> Project: HBase
>  Issue Type: Improvement
>  Components: mob
>Affects Versions: 2.0.0-alpha-2
>Reporter: huaxiang sun
>Assignee: huaxiang sun
> Attachments: HBASE-18693.master.001.patch, 
> HBASE-18693.master.002.patch
>
>
> Today, there is a single mob region where mob files for all user regions are 
> saved. There can be many files (on the order of one million) in a single mob 
> directory. When a mob table is restored or cloned from a snapshot, links are 
> created for these mob files. This creates a scaling issue for mob compaction: 
> in mob compaction's select() logic, for each hFileLink it needs to call the 
> NN's getFileStatus() to get the size of the linked hfile. Assuming one such 
> call takes 20ms, 20ms * 1 million links is roughly 6 hours. 
> To avoid this overhead, we want to add an option so that restore_snapshot can 
> move mob files from the archive dir to the working dir. clone_snapshot is more 
> complicated, since it can clone a snapshot to a different table, so moving the 
> files would destroy the snapshot. No option will be added for clone_snapshot.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-13346) Clean up Filter package for post 1.0 s/KeyValue/Cell/g

2017-10-16 Thread Tamas Penzes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Penzes updated HBASE-13346:
-
Status: Open  (was: Patch Available)

> Clean up Filter package for post 1.0 s/KeyValue/Cell/g
> --
>
> Key: HBASE-13346
> URL: https://issues.apache.org/jira/browse/HBASE-13346
> Project: HBase
>  Issue Type: Bug
>  Components: API, Filters
>Affects Versions: 2.0.0
>Reporter: Lars George
>Assignee: Tamas Penzes
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-13346.master.001.patch, 
> HBASE-13346.master.002.patch, HBASE-13346.master.003.patch, 
> HBASE-13346.master.003.patch, HBASE-13346.master.004.patch, 
> HBASE-13346.master.005.patch, HBASE-13346.master.006.patch, 
> HBASE-13346.master.007.patch, HBASE-13346.master.008.patch
>
>
> Since we have a bit of a messy Filter API with KeyValue vs Cell reference 
> mixed up all over the place, I recommend cleaning this up once and for all. 
> There should be no {{KeyValue}} (or {{kv}}, {{kvs}} etc.) in any method or 
> parameter name.
> This includes deprecating and renaming filters too, for example 
> {{FirstKeyOnlyFilter}}, which really should be named {{FirstKeyValueFilter}} 
> as it does _not_ just return the key, but the entire cell. It should be 
> deprecated and renamed to {{FirstCellFilter}} (or {{FirstColumnFilter}} if 
> you prefer).
> In general we should clarify and settle on {{KeyValue}} vs {{Cell}} vs 
> {{Column}} in our naming. The latter two are the only ones going forward in 
> the public API, and are used synonymously. We should carefully check which is 
> better suited (is it really a specific cell, or the newest cell, aka the 
> newest column value) and settle on a naming scheme.
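
For what it's worth, the usual deprecate-and-rename shape this implies, purely 
as a sketch (FirstCellFilter is the name proposed above, not an existing class; 
imports are omitted and each class would live in its own file):

{code}
// FirstCellFilter.java -- the new Cell-based name carries the implementation.
public class FirstCellFilter extends FilterBase {
  // ... existing FirstKeyOnlyFilter logic, expressed in terms of Cell ...
}

// FirstKeyOnlyFilter.java -- the old name stays as a deprecated alias for now.
/** @deprecated use {@link FirstCellFilter} instead. */
@Deprecated
public class FirstKeyOnlyFilter extends FirstCellFilter {
}
{code}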



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-13346) Clean up Filter package for post 1.0 s/KeyValue/Cell/g

2017-10-16 Thread Tamas Penzes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Penzes updated HBASE-13346:
-
Status: Patch Available  (was: Open)

> Clean up Filter package for post 1.0 s/KeyValue/Cell/g
> --
>
> Key: HBASE-13346
> URL: https://issues.apache.org/jira/browse/HBASE-13346
> Project: HBase
>  Issue Type: Bug
>  Components: API, Filters
>Affects Versions: 2.0.0
>Reporter: Lars George
>Assignee: Tamas Penzes
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-13346.master.001.patch, 
> HBASE-13346.master.002.patch, HBASE-13346.master.003.patch, 
> HBASE-13346.master.003.patch, HBASE-13346.master.004.patch, 
> HBASE-13346.master.005.patch, HBASE-13346.master.006.patch, 
> HBASE-13346.master.007.patch, HBASE-13346.master.008.patch
>
>
> Since we have a bit of a messy Filter API with KeyValue vs Cell reference 
> mixed up all over the place, I recommend cleaning this up once and for all. 
> There should be no {{KeyValue}} (or {{kv}}, {{kvs}} etc.) in any method or 
> parameter name.
> This includes deprecating and renaming filters too, for example 
> {{FirstKeyOnlyFilter}}, which really should be named {{FirstKeyValueFilter}} 
> as it does _not_ just return the key, but the entire cell. It should be 
> deprecated and renamed to {{FirstCellFilter}} (or {{FirstColumnFilter}} if 
> you prefer).
> In general we should clarify and settle on {{KeyValue}} vs {{Cell}} vs 
> {{Column}} in our naming. The latter two are the only ones going forward in 
> the public API, and are used synonymously. We should carefully check which is 
> better suited (is it really a specific cell, or the newest cell, aka the 
> newest column value) and settle on a naming scheme.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-13346) Clean up Filter package for post 1.0 s/KeyValue/Cell/g

2017-10-16 Thread Tamas Penzes (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-13346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tamas Penzes updated HBASE-13346:
-
Attachment: HBASE-13346.master.008.patch

> Clean up Filter package for post 1.0 s/KeyValue/Cell/g
> --
>
> Key: HBASE-13346
> URL: https://issues.apache.org/jira/browse/HBASE-13346
> Project: HBase
>  Issue Type: Bug
>  Components: API, Filters
>Affects Versions: 2.0.0
>Reporter: Lars George
>Assignee: Tamas Penzes
>Priority: Critical
> Fix For: 2.0.0-alpha-4
>
> Attachments: HBASE-13346.master.001.patch, 
> HBASE-13346.master.002.patch, HBASE-13346.master.003.patch, 
> HBASE-13346.master.003.patch, HBASE-13346.master.004.patch, 
> HBASE-13346.master.005.patch, HBASE-13346.master.006.patch, 
> HBASE-13346.master.007.patch, HBASE-13346.master.008.patch
>
>
> Since we have a bit of a messy Filter API with KeyValue vs Cell reference 
> mixed up all over the place, I recommend cleaning this up once and for all. 
> There should be no {{KeyValue}} (or {{kv}}, {{kvs}} etc.) in any method or 
> parameter name.
> This includes deprecating and renaming filters too, for example 
> {{FirstKeyOnlyFilter}}, which really should be named {{FirstKeyValueFilter}} 
> as it does _not_ just return the key, but the entire cell. It should be 
> deprecated and renamed to {{FirstCellFilter}} (or {{FirstColumnFilter}} if 
> you prefer).
> In general we should clarify and settle on {{KeyValue}} vs {{Cell}} vs 
> {{Column}} in our naming. The latter two are the only ones going forward in 
> the public API, and are used synonymously. We should carefully check which is 
> better suited (is it really a specific cell, or the newest cell, aka the 
> newest column value) and settle on a naming scheme.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (HBASE-19017) EnableTableProcedure is not retaining the assignments

2017-10-16 Thread ramkrishna.s.vasudevan (JIRA)

[ 
https://issues.apache.org/jira/browse/HBASE-19017?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16206292#comment-16206292
 ] 

ramkrishna.s.vasudevan commented on HBASE-19017:


[~tedyu]
Yes, I have already added that null check. Anyway, let me see what other issues 
there are as per [~easyliangjob].
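
For reference, a minimal sketch of the kind of fallback being discussed, in 
AssignProcedure terms (not the actual patch; method names follow the snippets 
quoted below):

{code}
// Sketch only: when the region was offlined on CLOSE, regionLocation was
// cleared, so fall back to lastHost to keep retain-assignment working.
ServerName lastRegionLocation = regionNode.offline();
if (lastRegionLocation == null) {
  lastRegionLocation = regionNode.getLastHost();
}
{code}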

> EnableTableProcedure is not retaining the assignments
> -
>
> Key: HBASE-19017
> URL: https://issues.apache.org/jira/browse/HBASE-19017
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 2.0.0-alpha-3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19017.patch
>
>
> Found this while working on HBASE-18946. In branch-1.4 when ever we do enable 
> table we try retain assignment. 
> But in branch-2 and trunk the EnableTableProcedure tries to get the location 
> from the existing regionNode. It always returns null because while doing 
> region CLOSE while disabling a table, the regionNode's 'regionLocation' is 
> made NULL but the 'lastHost' is actually having the servername where the 
> region was hosted. But on trying assignment again we try to see what was the 
> last RegionLocation and not the 'lastHost' and we go ahead with new 
> assignment.
> On region CLOSE while disable table
> {code}
> public void markRegionAsClosed(final RegionStateNode regionNode) throws 
> IOException {
> final RegionInfo hri = regionNode.getRegionInfo();
> synchronized (regionNode) {
>   State state = regionNode.transitionState(State.CLOSED, 
> RegionStates.STATES_EXPECTED_ON_CLOSE);
>   regionStates.removeRegionFromServer(regionNode.getRegionLocation(), 
> regionNode);
>   regionNode.setLastHost(regionNode.getRegionLocation());
>   regionNode.setRegionLocation(null);
>   regionStateStore.updateRegionLocation(regionNode.getRegionInfo(), state,
> regionNode.getRegionLocation()/*null*/, regionNode.getLastHost(),
> HConstants.NO_SEQNUM, regionNode.getProcedure().getProcId());
>   sendRegionClosedNotification(hri);
> }
> {code}
> In AssignProcedure
> {code}
> ServerName lastRegionLocation = regionNode.offline();
> {code}
> {code}
> public ServerName setRegionLocation(final ServerName serverName) {
>   ServerName lastRegionLocation = this.regionLocation;
>   if (LOG.isTraceEnabled() && serverName == null) {
> LOG.trace("Tracking when we are set to null " + this, new 
> Throwable("TRACE"));
>   }
>   this.regionLocation = serverName;
>   this.lastUpdate = EnvironmentEdgeManager.currentTime();
>   return lastRegionLocation;
> }
> {code}
> So further code in AssignProcedure
> {code}
>  boolean retain = false;
> if (!forceNewPlan) {
>   if (this.targetServer != null) {
> retain = targetServer.equals(lastRegionLocation);
> regionNode.setRegionLocation(targetServer);
>   } else {
> if (lastRegionLocation != null) {
>   // Try and keep the location we had before we offlined.
>   retain = true;
>   regionNode.setRegionLocation(lastRegionLocation);
> }
>   }
> }
> {code}
> Tries to do retainAssignment but fails because lastRegionLocation is always 
> null.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Updated] (HBASE-19017) EnableTableProcedure is not retaining the assignments

2017-10-16 Thread ramkrishna.s.vasudevan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HBASE-19017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

ramkrishna.s.vasudevan updated HBASE-19017:
---
Status: Patch Available  (was: Open)

> EnableTableProcedure is not retaining the assignments
> -
>
> Key: HBASE-19017
> URL: https://issues.apache.org/jira/browse/HBASE-19017
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 2.0.0-alpha-3
>Reporter: ramkrishna.s.vasudevan
>Assignee: ramkrishna.s.vasudevan
> Fix For: 2.0.0-beta-1
>
> Attachments: HBASE-19017.patch
>
>
> Found this while working on HBASE-18946. In branch-1.4 when ever we do enable 
> table we try retain assignment. 
> But in branch-2 and trunk the EnableTableProcedure tries to get the location 
> from the existing regionNode. It always returns null because while doing 
> region CLOSE while disabling a table, the regionNode's 'regionLocation' is 
> made NULL but the 'lastHost' is actually having the servername where the 
> region was hosted. But on trying assignment again we try to see what was the 
> last RegionLocation and not the 'lastHost' and we go ahead with new 
> assignment.
> On region CLOSE while disable table
> {code}
> public void markRegionAsClosed(final RegionStateNode regionNode) throws 
> IOException {
> final RegionInfo hri = regionNode.getRegionInfo();
> synchronized (regionNode) {
>   State state = regionNode.transitionState(State.CLOSED, 
> RegionStates.STATES_EXPECTED_ON_CLOSE);
>   regionStates.removeRegionFromServer(regionNode.getRegionLocation(), 
> regionNode);
>   regionNode.setLastHost(regionNode.getRegionLocation());
>   regionNode.setRegionLocation(null);
>   regionStateStore.updateRegionLocation(regionNode.getRegionInfo(), state,
> regionNode.getRegionLocation()/*null*/, regionNode.getLastHost(),
> HConstants.NO_SEQNUM, regionNode.getProcedure().getProcId());
>   sendRegionClosedNotification(hri);
> }
> {code}
> In AssignProcedure
> {code}
> ServerName lastRegionLocation = regionNode.offline();
> {code}
> {code}
> public ServerName setRegionLocation(final ServerName serverName) {
>   ServerName lastRegionLocation = this.regionLocation;
>   if (LOG.isTraceEnabled() && serverName == null) {
> LOG.trace("Tracking when we are set to null " + this, new 
> Throwable("TRACE"));
>   }
>   this.regionLocation = serverName;
>   this.lastUpdate = EnvironmentEdgeManager.currentTime();
>   return lastRegionLocation;
> }
> {code}
> So further code in AssignProcedure
> {code}
>  boolean retain = false;
> if (!forceNewPlan) {
>   if (this.targetServer != null) {
> retain = targetServer.equals(lastRegionLocation);
> regionNode.setRegionLocation(targetServer);
>   } else {
> if (lastRegionLocation != null) {
>   // Try and keep the location we had before we offlined.
>   retain = true;
>   regionNode.setRegionLocation(lastRegionLocation);
> }
>   }
> }
> {code}
> Tries to do retainAssignment but fails because lastRegionLocation is always 
> null.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

