[GitHub] [hbase] sunhelly commented on issue #354: HBASE-20368 Fix RIT stuck when a rsgroup has no online servers but AM…

2019-07-04 Thread GitBox
sunhelly commented on issue #354: HBASE-20368 Fix RIT stuck when a rsgroup has 
no online servers but AM…
URL: https://github.com/apache/hbase/pull/354#issuecomment-508652413
 
 
   @jatsakthi I have fixed the NPE produced by the UT I uploaded yesterday. When 
only the UT is run, without the fixed server code, it passes, because when 
'lastHost' is not in the rsgroup, regions are processed as misplaced and the 
TRSP will retry to confirm open and reassign them. But if we make 'lastHost' a 
member of our group, as in the UT I just updated, the RIT gets stuck. I have 
replied in HBASE-20728 as you mentioned before. If you have any concerns or 
questions, we can discuss further.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Commented] (HBASE-20368) Fix RIT stuck when a rsgroup has no online servers but AM's pendingAssginQueue is cleared

2019-07-04 Thread Xiaolin Ha (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878999#comment-16878999
 ] 

Xiaolin Ha commented on HBASE-20368:


See this region stuck log: WARN [ProcExecTimeout] 
assignment.AssignmentManager(1328): STUCK Region-In-Transition rit=OPEN, 
location=localhost,32843,1562307050191, 
table=Group_testKillAllRSInGroupAndThenAddNew, 
region=a763499801435d2f78ab42876c6cb3ec

I think the region state 'OPEN' here may be erroneous and confusing. When SCP 
starts and creates TRSPs, should these new TRSPs also call serverCrashed() to 
set the region state to 'ABNORMALLY_CLOSED'? Any concerns if assigning a region 
begins from state 'ABNORMALLY_CLOSED'? [~zghaobac], [~Apache9]

Relevant code in SCP:
{quote}private void assignRegions(MasterProcedureEnv env, List<RegionInfo> regions) throws IOException {
  AssignmentManager am = env.getMasterServices().getAssignmentManager();
  for (RegionInfo region : regions) {
    RegionStateNode regionNode = am.getRegionStates().getOrCreateRegionStateNode(region);
    regionNode.lock();
    try {
      if (regionNode.getProcedure() != null) {
        LOG.info("{} found RIT {}; {}", this, regionNode.getProcedure(), regionNode);
        regionNode.getProcedure().serverCrashed(env, regionNode, getServerName());
      } else {
        if (env.getMasterServices().getTableStateManager().isTableState(regionNode.getTable(),
            TableState.State.DISABLING, TableState.State.DISABLED)) {
          continue;
        }
        TransitRegionStateProcedure proc = TransitRegionStateProcedure.assign(env, region, null);
        regionNode.setProcedure(proc);
        addChildProcedure(proc);
      }
    } finally {
      regionNode.unlock();
    }
  }
}{quote}
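As a thought experiment on the question above, here is a small self-contained Java sketch (not HBase code; `RegionStateNode` and the state enum are simplified stand-ins) of the proposed flow, where a fresh TRSP created by SCP first resets the region to ABNORMALLY_CLOSED, as serverCrashed() does for an existing RIT, instead of starting the assign from a stale OPEN state:

```java
import java.util.ArrayList;
import java.util.List;

public class ScpSketch {
    // Simplified stand-ins; real HBase region states and transitions are richer.
    enum RegionState { OPEN, ABNORMALLY_CLOSED, OPENING }

    static class RegionStateNode {
        RegionState state = RegionState.OPEN;  // stale state left by the crashed RS
        final List<RegionState> history = new ArrayList<>(List.of(RegionState.OPEN));
        void setState(RegionState s) { state = s; history.add(s); }
    }

    // Proposed behavior: before the fresh assign procedure starts, reset the
    // region to ABNORMALLY_CLOSED (mirroring what serverCrashed() does for an
    // existing RIT), rather than letting the assign observe a stale OPEN.
    static void assignAfterCrash(RegionStateNode node) {
        node.setState(RegionState.ABNORMALLY_CLOSED);
        node.setState(RegionState.OPENING);
    }

    public static void main(String[] args) {
        RegionStateNode n = new RegionStateNode();
        assignAfterCrash(n);
        System.out.println(n.history);  // [OPEN, ABNORMALLY_CLOSED, OPENING]
    }
}
```

The point of the sketch is only the ordering: an explicit abnormal-close transition between the stale OPEN and the new assign.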

> Fix RIT stuck when a rsgroup has no online servers but AM's 
> pendingAssginQueue is cleared
> -
>
> Key: HBASE-20368
> URL: https://issues.apache.org/jira/browse/HBASE-20368
> Project: HBase
>  Issue Type: Bug
>  Components: rsgroup
>Affects Versions: 2.0.0
>Reporter: Xiaolin Ha
>Assignee: Xiaolin Ha
>Priority: Major
> Fix For: 2.0.6, 2.1.6
>
> Attachments: HBASE-20368.branch-2.001.patch, 
> HBASE-20368.branch-2.002.patch, HBASE-20368.branch-2.003.patch, 
> HBASE-20368.branch-2.003.patch, HBASE-20368.branch-2.003.patch, 
> HBASE-20368.branch-2.1.001.patch
>
>
> This error can be reproduced by shutting down all servers in a rsgroup and 
> starting them soon afterwards. 
> The regions on this rsgroup will be reassigned, but there are no available 
> servers in this rsgroup.
> They will be added to AM's pendingAssginQueue, which AM will clear regardless 
> of the result of assigning in this case.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-20728) Failure and recovery of all RSes in a RSgroup requires master restart for region assignments

2019-07-04 Thread Xiaolin Ha (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878993#comment-16878993
 ] 

Xiaolin Ha commented on HBASE-20728:


I can reproduce this error with these steps:
 # add more than one server to a rsgroup,
 # move a table to this rsgroup,
 # move all the table's regions to one server of this rsgroup (this is 
important, to make sure the region's 'lastHost' is definitely in the rsgroup; 
otherwise it may be in another group),
 # stop all the region servers in this rsgroup (better to wait a while),
 # restart the servers in this rsgroup,
 # the stuck RIT appears, and the rs name in the {{RIT}} message has the old 
timestamp, with logs like: WARN [ProcExecTimeout] 
assignment.AssignmentManager(1328): STUCK Region-In-Transition rit=OPEN, 
location=localhost,32843,1562307050191, 
table=Group_testKillAllRSInGroupAndThenAddNew, 
region=a763499801435d2f78ab42876c6cb3ec
 # if step 5 is changed to adding a new server to this rsgroup instead, the RIT 
message in step 6 still has the old rs info.

The root cause of this problem is the same as HBASE-20368. We discussed it at: 
https://github.com/apache/hbase/pull/354

 

 

 

> Failure and recovery of all RSes in a RSgroup requires master restart for 
> region assignments
> 
>
> Key: HBASE-20728
> URL: https://issues.apache.org/jira/browse/HBASE-20728
> Project: HBase
>  Issue Type: Bug
>  Components: master, rsgroup
>Reporter: Biju Nair
>Assignee: Sakthi
>Priority: Minor
>
> If all the RSes in a RSgroup hosting user tables fail and recover, master 
> still looks for old RSes (with old timestamp in the RS identifier) to assign 
> regions. i.e. Regions are left in transition making the tables in the RSGroup 
> unavailable. Users need to restart {{master}} or manually assign the regions 
> to make the tables available. Steps to recreate the scenario in a local 
> cluster
>  - Add required properties to {{site.xml}} to enable {{rsgroup}} and start 
> hbase
>  - Bring up multiple region servers using {{local-regionservers.sh start}}
>  - Create a {{rsgroup}} and move a subset of  {{regionservers}} to the group
>  - Create a table, move it to the group and put some data
>  - Stop the {{regionservers}} in the group and restart them
>  - From the {{master UI}}, we can see that the region for the table in 
> transition and the RS name in the {{RIT}} message has the old timestamp.





[jira] [Updated] (HBASE-22656) [Metrics] Tabe metrics 'BatchPut' and 'BatchDelete' are never updated

2019-07-04 Thread Reid Chan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-22656:
--
Summary: [Metrics]  Tabe metrics 'BatchPut' and 'BatchDelete' are never 
updated  (was: [Metrics]  Tabe metrics `BatchPut` and `BatchDelete` are never 
updated)

> [Metrics]  Tabe metrics 'BatchPut' and 'BatchDelete' are never updated
> --
>
> Key: HBASE-22656
> URL: https://issues.apache.org/jira/browse/HBASE-22656
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Minor
> Attachments: HBASE-22656.master.001.patch
>
>
> {code}
>   public void updatePutBatch(TableName tn, long t) {
> if (tableMetrics != null && tn != null) {
>   tableMetrics.updatePut(tn, t); // Here should use updatePutBatch
> }
> ...
>   }
>   public void updateDeleteBatch(TableName tn, long t) {
> if (tableMetrics != null && tn != null) {
>   tableMetrics.updateDelete(tn, t); // Here should use updateDeleteBatch
> }
> ...
>   }
> {code}
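Why this matters can be shown with a tiny self-contained sketch (simplified counters, not the real HBase metrics classes): because the batch entry point delegates to the single-operation method, batch operations inflate the wrong counter and the batch metric never moves.

```java
// Simplified stand-in for table metrics; real HBase classes differ.
// Demonstrates the mis-delegation: the "batch" entry point bumps the
// single-put counter, so the batch counter stays at zero.
class TableMetricsSketch {
    long putCount;
    long putBatchCount;

    void updatePut(long t) { putCount++; }
    void updatePutBatch(long t) { putBatchCount++; }

    // Buggy wrapper, mirroring the snippet above.
    void buggyUpdatePutBatch(long t) { updatePut(t); }
}

public class MetricsBugDemo {
    public static void main(String[] args) {
        TableMetricsSketch m = new TableMetricsSketch();
        m.buggyUpdatePutBatch(5);
        // The batch metric never updates:
        System.out.println("put=" + m.putCount + " putBatch=" + m.putBatchCount);
        // -> put=1 putBatch=0
    }
}
```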





[jira] [Updated] (HBASE-22656) [Metrics] Tabe metrics `BatchPut` and `BatchDelete` are never updated

2019-07-04 Thread Reid Chan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-22656:
--
Attachment: HBASE-22656.master.001.patch

> [Metrics]  Tabe metrics `BatchPut` and `BatchDelete` are never updated
> --
>
> Key: HBASE-22656
> URL: https://issues.apache.org/jira/browse/HBASE-22656
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Minor
> Attachments: HBASE-22656.master.001.patch
>
>
> {code}
>   public void updatePutBatch(TableName tn, long t) {
> if (tableMetrics != null && tn != null) {
>   tableMetrics.updatePut(tn, t); // Here should use updatePutBatch
> }
> ...
>   }
>   public void updateDeleteBatch(TableName tn, long t) {
> if (tableMetrics != null && tn != null) {
>   tableMetrics.updateDelete(tn, t); // Here should use updateDeleteBatch
> }
> ...
>   }
> {code}





[jira] [Updated] (HBASE-22656) [Metrics] Tabe metrics `BatchPut` and `BatchDelete` are never updated

2019-07-04 Thread Reid Chan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22656?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-22656:
--
Status: Patch Available  (was: Open)

> [Metrics]  Tabe metrics `BatchPut` and `BatchDelete` are never updated
> --
>
> Key: HBASE-22656
> URL: https://issues.apache.org/jira/browse/HBASE-22656
> Project: HBase
>  Issue Type: Bug
>  Components: metrics
>Reporter: Reid Chan
>Assignee: Reid Chan
>Priority: Minor
> Attachments: HBASE-22656.master.001.patch
>
>
> {code}
>   public void updatePutBatch(TableName tn, long t) {
> if (tableMetrics != null && tn != null) {
>   tableMetrics.updatePut(tn, t); // Here should use updatePutBatch
> }
> ...
>   }
>   public void updateDeleteBatch(TableName tn, long t) {
> if (tableMetrics != null && tn != null) {
>   tableMetrics.updateDelete(tn, t); // Here should use updateDeleteBatch
> }
> ...
>   }
> {code}





[jira] [Created] (HBASE-22659) Resilient block caching for cache sensitive data serving

2019-07-04 Thread Andrew Purtell (JIRA)
Andrew Purtell created HBASE-22659:
--

 Summary: Resilient block caching for cache sensitive data serving
 Key: HBASE-22659
 URL: https://issues.apache.org/jira/browse/HBASE-22659
 Project: HBase
  Issue Type: Brainstorming
  Components: BlockCache
Reporter: Andrew Purtell


Caching in data serving remains crucial for performance. Networks are fast but 
not yet fast enough; RDMA may change this once it becomes more popular and 
available. Caching layers should be resilient to crashes to avoid the cost of 
rewarming. In the context of HBase with its root filesystem placed on S3, the 
object store is quite slow relative to other options like HDFS, so caching is 
particularly essential: the rewarming cost will be high, either as client 
visible performance degradation (due to cache misses and reloads) or as 
elevated IO due to prefetching.

For cloud serving backed by S3, we expect the HBase blockcache to be configured 
to host the entirety of the warm set, which may be very large, so we also 
expect selection of the file backed option and placement of the filesystem for 
cache file storage on local fast solid state devices. These devices offer data 
persistence beyond the lifetime of an individual process. We can take advantage 
of this to make block caching partially resilient to short duration process 
failures and restarts. 

When the blockcache is backed by a file system, it can reinitialize and prewarm 
at startup using a scan over preexisting disk contents. These will be cache 
files left behind by another process that executed earlier on the same 
instance. This strategy is applicable to process restart and rolling upgrade 
scenarios specifically. (The local storage may not survive an instance 
reboot.) 

Once the server has reloaded the blockcache metadata from local storage it can 
advertise to the HMaster the list of HFiles for which it has some precached 
blocks resident. This implies the blockcache's file backed option should 
maintain a mapping of source HFile paths for the blocks in cache. We don't need 
to provide more granular information on which blocks (or not) of the HFile are 
in cache. It is unlikely entries for the HFile will be cached elsewhere. We can 
assume placement of a region containing the HFile on a server with any block 
cached there will be better than alternatives. 

The HMaster already waits for regionserver registration activity to stabilize 
before assigning regions, and we can contemplate adding a configurable delay in 
region reassignment during server crash handling, in the hope that a restarted 
or recovered instance will come online and report its reloaded in-cache 
contents in time for an assignment decision to consider this new factor in data 
locality. 
When finally processing (re)assignment the HMaster can consider this additional 
factor when building the assignment plan. We already calculate an HDFS-level 
locality metric. We can also calculate a new cache-level locality metric 
aggregated from regionserver reports of re-warmed cache contents. For a given 
region we can build a candidate assignment set of servers reporting cached 
blocks for its associated HFiles, and the master can assign the region to the 
server with the highest weight. Otherwise we (re)assign using the HDFS locality 
metric as before.

In this way during rolling restart or quick process restart via supervisory 
process scenarios we are very likely to assign a region back to the server that 
was most recently hosting it, and we can pick up for immediate reuse any file 
backed blockcache data accumulated for the region by the previous process. 
These are going to be the most common scenarios encountered during normal 
cluster operation. This will allow HBase's internal data caching to be 
resilient to short duration crashes and administrative process restarts.
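The proposed assignment heuristic can be sketched as follows (a toy model; `chooseServer`, both weight maps, and the sample values are made up for illustration, not the actual balancer API): prefer the server reporting the most re-warmed cached blocks for the region's HFiles, and fall back to the HDFS locality metric when no server reports any.

```java
import java.util.Comparator;
import java.util.Map;
import java.util.Optional;

public class CacheAwareAssignSketch {
    // Pick the server with the highest cached-block weight for a region;
    // if no server reports cached blocks, fall back to HDFS locality.
    static String chooseServer(Map<String, Integer> cacheWeights,
                               Map<String, Double> hdfsLocality) {
        Optional<String> byCache = cacheWeights.entrySet().stream()
            .filter(e -> e.getValue() > 0)
            .max(Comparator.comparingInt(Map.Entry<String, Integer>::getValue))
            .map(Map.Entry::getKey);
        return byCache.orElseGet(() -> hdfsLocality.entrySet().stream()
            .max(Comparator.comparingDouble(Map.Entry<String, Double>::getValue))
            .map(Map.Entry::getKey)
            .orElseThrow());
    }

    public static void main(String[] args) {
        // rs1 was restarted and re-warmed 120 blocks for this region's HFiles,
        // so it wins despite rs2's better HDFS locality.
        System.out.println(chooseServer(
            Map.of("rs1", 120, "rs2", 0),
            Map.of("rs1", 0.2, "rs2", 0.9)));  // -> rs1
    }
}
```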





[jira] [Commented] (HBASE-22417) DeleteTableProcedure.deleteFromMeta method should remove table from Master's table descriptors cache

2019-07-04 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878860#comment-16878860
 ] 

HBase QA commented on HBASE-22417:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  1m 
48s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
45s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
54s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  4m 
50s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
42s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
33s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
56s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  1m 
22s{color} | {color:red} hbase-server: The patch generated 2 new + 2 unchanged 
- 0 fixed = 4 total (was 2) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 4s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
14m 22s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.8.5 2.9.2 or 3.1.2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}238m 12s{color} 
| {color:red} hbase-server in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}293m 15s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.hbase.client.TestTableFavoredNodes |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=18.09.5 Server=18.09.5 base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/605/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22417 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973696/HBASE-22417.master.003.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux cd4e81c2d07b 4.15.0-52-generic #56-Ubuntu SMP Tue Jun 4 
22:49:08 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | master / 9116534f5d |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.11 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HBASE-Build/605/artifact/patchprocess/diff-checkstyle-hbase-server.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HBASE-Build/605/artifact/patchprocess/patch-unit-hbase-server.txt
 |
|  Test Results 

[jira] [Commented] (HBASE-22567) HBCK2 addMissingRegionsToMeta

2019-07-04 Thread Daisuke Kobayashi (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878827#comment-16878827
 ] 

Daisuke Kobayashi commented on HBASE-22567:
---

Well, even if there's no entry in ZK, it looks like master can correct 
hbase:meta regarding tables upon starting. Here's my observation (the last 
commit of my build was 302a9ce0564a796ef59c74fe475984cb3693801a). Suppose 
hbase:meta is missing a given table's row ('dice' in this example) for some 
reason, whereas HDFS has the data.

If the table's znode is not present in ZK, master actually does put the entry 
based on the HDFS info.
{code:java}
2019-07-05 02:13:14,815 WARN  [master/192.168.3.5:16000:becomeActiveMaster] 
master.TableStateManager: dice has no table state in hbase:meta, assuming 
ENABLED
2019-07-05 02:13:15,004 INFO  [master/192.168.3.5:16000:becomeActiveMaster] 
hbase.MetaTableAccessor: Updated tableName=dice, state=ENABLED in hbase:meta
{code}
On the other hand, if the table's znode exists in ZK, master puts the entry 
based on the ZK info.
{code:java}
2019-07-05 02:19:02,182 INFO  [master/192.168.3.5:16000:becomeActiveMaster] 
master.TableStateManager: Migrating table state from zookeeper to hbase:meta; 
tableName=dice, state=ENABLED
{code}
Correct me if I'm wrong.
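The behavior observed above can be condensed into a tiny sketch (hypothetical names; TableStateManager's real logic handles more cases): if ZK still holds a state znode for the table, migrate that state to hbase:meta; otherwise assume ENABLED based on the HDFS data.

```java
import java.util.Optional;

public class TableStateRecoverySketch {
    enum State { ENABLED, DISABLED }

    // If a state znode exists in ZK, migrate that state to hbase:meta;
    // otherwise fall back to assuming ENABLED (the table data is on HDFS).
    static State recoverState(Optional<State> zkState) {
        return zkState.orElse(State.ENABLED);
    }

    public static void main(String[] args) {
        System.out.println(recoverState(Optional.empty()));             // ENABLED (assumed)
        System.out.println(recoverState(Optional.of(State.DISABLED)));  // DISABLED (migrated)
    }
}
```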

> HBCK2 addMissingRegionsToMeta
> -
>
> Key: HBASE-22567
> URL: https://issues.apache.org/jira/browse/HBASE-22567
> Project: HBase
>  Issue Type: New Feature
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Major
>
> Following latest discussion on HBASE-21745, this proposes an hbck2 command 
> that allows for inserting back regions missing in META that still have 
> *regioninfo* available in HDFS. Although this is still an interactive and 
> simpler version than the old _OfflineMetaRepair_, it still relies on hdfs 
> state as the source of truth, and performs META updates mostly independently 
> from Master (apart from requiring the Meta table to be online).
> For a more detailed explanation on this command behaviour, pasting _command 
> usage_ text:
> {noformat}
> To be used for scenarios where some regions may be missing in META,
> but there's still a valid 'regioninfo' metadata file on HDFS.
> This is a lighter version of 'OfflineMetaRepair' tool commonly used for
> similar issues on 1.x release line.
> This command needs META to be online. For each table name passed as
> parameter, it performs a diff between regions available in META,
> against existing regions dirs on HDFS. Then, for region dirs with
> no matches in META, it reads regioninfo metadata file and
> re-creates given region in META. Regions are re-created in 'CLOSED'
> state at META table only, but not in Masters' cache, and are not
> assigned either. A rolling Masters restart, followed by a
> hbck2 'assigns' command with all re-inserted regions is required.
> This hbck2 'assigns' command is printed for user convenience.
> WARNING: To avoid potential region overlapping problems due to ongoing
> splits, this command disables given tables while re-inserting regions.
> An example adding missing regions for tables 'table_1' and 'table_2':
> $ HBCK2 addMissingRegionsInMeta table_1 table_2
> Returns hbck2 'assigns' command with all re-inserted regions.{noformat}
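The diff the usage text describes, region dirs on HDFS versus regions known to META, is essentially a set difference. Here is a self-contained sketch with toy region names (the real tool reads regioninfo files from HDFS and queries META rather than taking plain string sets):

```java
import java.util.Set;
import java.util.TreeSet;

public class MissingRegionsSketch {
    // Regions with a dir on HDFS but no row in META are the candidates to
    // be re-created in META in CLOSED state.
    static Set<String> missingInMeta(Set<String> hdfsRegionDirs,
                                     Set<String> metaRegions) {
        Set<String> missing = new TreeSet<>(hdfsRegionDirs);
        missing.removeAll(metaRegions);
        return missing;
    }

    public static void main(String[] args) {
        System.out.println(missingInMeta(
            Set.of("region_a", "region_b", "region_c"),
            Set.of("region_a", "region_c")));  // -> [region_b]
    }
}
```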





[GitHub] [hbase-operator-tools] asfgit commented on issue #3: Hbase 22567

2019-07-04 Thread GitBox
asfgit commented on issue #3: Hbase 22567
URL: 
https://github.com/apache/hbase-operator-tools/pull/3#issuecomment-508543972
 
 
   
   Refer to this link for build results (access rights to CI server needed): 
   https://builds.apache.org/job/PreCommit-HBASE-OPERATOR-TOOLS-Build/6/
   




[GitHub] [hbase-operator-tools] wchevreuil commented on a change in pull request #3: Hbase 22567

2019-07-04 Thread GitBox
wchevreuil commented on a change in pull request #3: Hbase 22567
URL: https://github.com/apache/hbase-operator-tools/pull/3#discussion_r300469487
 
 

 ##
 File path: hbase-hbck2/src/main/java/org/apache/hbase/hbck2/meta/MetaFixer.java
 ##
 @@ -0,0 +1,128 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hbase.hbck2.meta;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.MetaTableAccessor;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.regionserver.HRegionFileSystem;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.stream.Collectors;
+
+public class MetaFixer implements Closeable {
+  private static final String HBASE_DATA_DIR = "/data/";
+  private static final String HBASE_DEFAULT_NAMESPACE = "default/";
+  private FileSystem fs;
+  private Connection conn;
+  private Configuration config;
 
 Review comment:
   Done.




[GitHub] [hbase-operator-tools] wchevreuil commented on a change in pull request #3: Hbase 22567

2019-07-04 Thread GitBox
wchevreuil commented on a change in pull request #3: Hbase 22567
URL: https://github.com/apache/hbase-operator-tools/pull/3#discussion_r300469459
 
 

 ##
 File path: hbase-hbck2/src/main/java/org/apache/hbase/hbck2/meta/MetaFixer.java
 ##
 @@ -0,0 +1,128 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hbase.hbck2.meta;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.MetaTableAccessor;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.regionserver.HRegionFileSystem;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.stream.Collectors;
+
+public class MetaFixer implements Closeable {
+  private static final String HBASE_DATA_DIR = "/data/";
+  private static final String HBASE_DEFAULT_NAMESPACE = "default/";
+  private FileSystem fs;
+  private Connection conn;
 
 Review comment:
   Done.




[GitHub] [hbase-operator-tools] wchevreuil commented on a change in pull request #3: Hbase 22567

2019-07-04 Thread GitBox
wchevreuil commented on a change in pull request #3: Hbase 22567
URL: https://github.com/apache/hbase-operator-tools/pull/3#discussion_r300469435
 
 

 ##
 File path: hbase-hbck2/src/main/java/org/apache/hbase/hbck2/meta/MetaFixer.java
 ##
 @@ -0,0 +1,128 @@
+/**
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ * http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing, software
+ * distributed under the License is distributed on an "AS IS" BASIS,
+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ * See the License for the specific language governing permissions and
+ * limitations under the License.
+ */
+package org.apache.hbase.hbck2.meta;
+
+import org.apache.hadoop.conf.Configuration;
+import org.apache.hadoop.fs.FileStatus;
+import org.apache.hadoop.fs.FileSystem;
+import org.apache.hadoop.fs.Path;
+import org.apache.hadoop.hbase.MetaTableAccessor;
+import org.apache.hadoop.hbase.TableName;
+import org.apache.hadoop.hbase.client.Connection;
+import org.apache.hadoop.hbase.client.ConnectionFactory;
+import org.apache.hadoop.hbase.client.RegionInfo;
+import org.apache.hadoop.hbase.regionserver.HRegionFileSystem;
+
+import java.io.Closeable;
+import java.io.IOException;
+import java.util.ArrayList;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.Optional;
+import java.util.stream.Collectors;
+
+public class MetaFixer implements Closeable {
+  private static final String HBASE_DATA_DIR = "/data/";
+  private static final String HBASE_DEFAULT_NAMESPACE = "default/";
+  private FileSystem fs;
 
 Review comment:
   Done.




[GitHub] [hbase-operator-tools] wchevreuil commented on a change in pull request #3: Hbase 22567

2019-07-04 Thread GitBox
wchevreuil commented on a change in pull request #3: Hbase 22567
URL: https://github.com/apache/hbase-operator-tools/pull/3#discussion_r300469233
 
 

 ##
 File path: hbase-hbck2/src/main/java/org/apache/hbase/HBCK2.java
 ##
 @@ -440,51 +620,64 @@ public int run(String[] args) throws IOException {
     String[] commands = commandLine.getArgs();
     String command = commands[0];
     switch (command) {
-      case SET_TABLE_STATE:
-        if (commands.length < 3) {
-          usage(options, command + " takes tablename and state arguments: e.g. user ENABLED");
-          return EXIT_FAILURE;
-        }
-        System.out.println(setTableState(TableName.valueOf(commands[1]),
-            TableState.State.valueOf(commands[2])));
-        break;
-
-      case ASSIGNS:
-        if (commands.length < 2) {
-          usage(options, command + " takes one or more encoded region names");
-          return EXIT_FAILURE;
-        }
-        System.out.println(assigns(purgeFirst(commands)));
-        break;
+    case SET_TABLE_STATE:
+      if (commands.length < 3) {
+        usage(options, command + " takes tablename and state arguments: e.g. user ENABLED");
+        return EXIT_FAILURE;
+      }
+      System.out.println(setTableState(TableName.valueOf(commands[1]), TableState.State.valueOf(commands[2])));
+      break;
 
-      case BYPASS:
-        if (commands.length < 2) {
-          usage(options, command + " takes one or more pids");
-          return EXIT_FAILURE;
-        }
-        List<Boolean> bs = bypass(purgeFirst(commands));
-        if (bs == null) {
-          // Something went wrong w/ the parse and command didn't run.
-          return EXIT_FAILURE;
-        }
-        System.out.println(toString(bs));
-        break;
+    case ASSIGNS:
+      if (commands.length < 2) {
+        usage(options, command + " takes one or more encoded region names");
+        return EXIT_FAILURE;
+      }
+      System.out.println(assigns(purgeFirst(commands)));
+      break;
 
-      case UNASSIGNS:
-        if (commands.length < 2) {
-          usage(options, command + " takes one or more encoded region names");
-          return EXIT_FAILURE;
-        }
-        System.out.println(toString(unassigns(purgeFirst(commands))));
-        break;
-
-      case SET_REGION_STATE:
-        if (commands.length < 3) {
-          usage(options, command + " takes region encoded name and state arguments: e.g. "
-              + "35f30b0ce922c34bf5c284eff33ba8b3 CLOSING");
-          return EXIT_FAILURE;
-        }
-        return setRegionState(commands[1], RegionState.State.valueOf(commands[2]));
+    case BYPASS:
+      if (commands.length < 2) {
+        usage(options, command + " takes one or more pids");
+        return EXIT_FAILURE;
+      }
+      List<Boolean> bs = bypass(purgeFirst(commands));
+      if (bs == null) {
+        // Something went wrong w/ the parse and command didn't run.
+        return EXIT_FAILURE;
+      }
+      System.out.println(toString(bs));
+      break;
+
+    case UNASSIGNS:
+      if (commands.length < 2) {
+        usage(options, command + " takes one or more encoded region names");
+        return EXIT_FAILURE;
+      }
+      System.out.println(toString(unassigns(purgeFirst(commands))));
+      break;
+
+    case SET_REGION_STATE:
+      if (commands.length < 3) {
+        usage(options, command + " takes region encoded name and state arguments: e.g. "
+          + "35f30b0ce922c34bf5c284eff33ba8b3 CLOSING");
+        return EXIT_FAILURE;
+      }
+      return setRegionState(commands[1], RegionState.State.valueOf(commands[2]));
 
 Review comment:
   Yeah, because we are already returning the output of the _setRegionState_ method, a _break_ would be unreachable. Maybe we should change the _setRegionState_ case handling to be consistent with the other methods? Probably in a separate jira.
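   For readers following along, the compile-time rule behind this can be shown with a hypothetical miniature of such a command-dispatch switch (the class and command names below are illustrative, not HBCK2's actual API):

```java
// Hypothetical miniature of a command-dispatch switch: a case that ends in
// "return" cannot also have a "break" -- javac rejects unreachable statements.
public class CommandDispatch {
  static final int EXIT_SUCCESS = 0;
  static final int EXIT_FAILURE = 1;

  static int run(String command) {
    switch (command) {
      case "assigns":
        System.out.println("assigns ran");
        return EXIT_SUCCESS; // adding "break;" after this line would not compile
      case "setRegionState":
        return EXIT_SUCCESS; // consistent style: every case returns an exit code
      default:
        return EXIT_FAILURE; // unknown command
    }
  }

  public static void main(String[] args) {
    System.out.println(run("assigns"));
  }
}
```

   Making every case return an exit code (instead of mixing `break` and `return`) is one way to get the consistency discussed above.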




[GitHub] [hbase-operator-tools] wchevreuil commented on a change in pull request #3: Hbase 22567

2019-07-04 Thread GitBox
wchevreuil commented on a change in pull request #3: Hbase 22567
URL: https://github.com/apache/hbase-operator-tools/pull/3#discussion_r300467711
 
 

 ##
 File path: hbase-hbck2/src/main/java/org/apache/hbase/HBCK2.java
 ##
 @@ -334,6 +469,51 @@ private static final String getCommandUsage() {
     writer.println("   Returns \"0\" SUCCESS code if it informed region state is changed, "
       + "\"1\" FAIL code otherwise.");
     writer.println();
+    writer.println(" " + ADD_MISSING_REGIONS_IN_META_FOR_TABLES + " ...");
+    writer.println("   To be used in scenarios where some regions may be missing in META,");
+    writer.println("   but there's still a valid 'regioninfo metadata file on HDFS. ");
+    writer.println("   This is a lighter version of 'OfflineMetaRepair tool commonly used for ");
+    writer.println("   similar issues on 1.x release line. ");
+    writer.println("   This command needs META to be online. For each table name passed as");
+    writer.println("   parameter, it performs a diff between regions available in META, ");
+    writer.println("   against existing regions dirs on HDFS. Then, for region dirs with ");
+    writer.println("   no matches in META, it reads regioninfo metadata file and ");
+    writer.println("   re-creates given region in META. Regions are re-created in 'CLOSED' ");
+    writer.println("   state at META table only, but not in Masters' cache, and are not ");
+    writer.println("   assigned either. A rolling Masters restart, followed by a ");
+    writer.println("   hbck2 'assigns' command with all re-inserted regions is required. ");
+    writer.println("   This hbck2 'assigns' command is printed for user convenience.");
+    writer.println("   WARNING: To avoid potential region overlapping problems due to ongoing ");
+    writer.println("   splits, this command disables given tables while re-inserting regions. ");
+    writer.println("   An example adding missing regions for tables 'table_1' and 'table_2':");
+    writer.println(" $ HBCK2 addMissingRegionsInMeta table_1 table_2");
+    writer.println("   Returns hbck2 'assigns' command with all re-inserted regions.");
+    writer.println();
+    writer.println(" " + REPORT_MISSING_REGIONS_IN_META + " ...");
+    writer.println("   To be used in scenarios where some regions may be missing in META,");
+    writer.println("   but there's still a valid 'regioninfo metadata file on HDFS. ");
+    writer.println("   This is a checking only method, designed for reporting purposes and");
+    writer.println("   doesn't perform any fixes. Operators should react upon the reported.");
+    writer.println("   This command needs META to be online. For each namespace/table passed");
+    writer.println("   as parameter, it performs a diff between regions available in META, ");
+    writer.println("   against existing regions dirs on HDFS. Region dirs with no matches");
+    writer.println("   are printed grouped under its related table name. Tables with no");
+    writer.println("   missing regions will show a 'no missing regions' message. If no");
+    writer.println("   namespace or table is specified, it will verify all existing regions.");
+    writer.println("   It accepts a combination of multiple namespace and tables. Table names");
+    writer.println("   should include the namespace portion, even for tables in the default");
+    writer.println("   namespace, otherwise it will assume as a namespace value.");
+    writer.println("   An example triggering missing regions report for tables 'table_1'");
+    writer.println("   and 'table_2', under default namespace:");
+    writer.println(" $ HBCK2 reportMissingRegionsInMeta default:table_1 default:table_2");
+    writer.println("   An example triggering missing regions report for table 'table_1'");
+    writer.println("   under default namespace, and for all tables from namespace 'ns1':");
+    writer.println(" $ HBCK2 reportMissingRegionsInMeta default:table_1 ns1");
+    writer.println("   Returns list of missing regions for each table passed as parameter, or ");
+    writer.println("   for each table on namespaces specified as parameter.");
 
 Review comment:
   This came out of previous suggestions to this PR [here](https://github.com/apache/hbase-operator-tools/pull/3#discussion_r292981293). It's basically a type of dry-run for operators to double-check what is going to be re-added to meta. Let me emphasise that in this command's description.




[GitHub] [hbase-operator-tools] wchevreuil commented on a change in pull request #3: Hbase 22567

2019-07-04 Thread GitBox
wchevreuil commented on a change in pull request #3: Hbase 22567
URL: https://github.com/apache/hbase-operator-tools/pull/3#discussion_r300466931
 
 

 ##
 File path: hbase-hbck2/src/main/java/org/apache/hbase/HBCK2.java
 ##
 @@ -334,6 +469,51 @@ private static final String getCommandUsage() {
     writer.println("   Returns \"0\" SUCCESS code if it informed region state is changed, "
       + "\"1\" FAIL code otherwise.");
     writer.println();
+    writer.println(" " + ADD_MISSING_REGIONS_IN_META_FOR_TABLES + " ...");
+    writer.println("   To be used in scenarios where some regions may be missing in META,");
+    writer.println("   but there's still a valid 'regioninfo metadata file on HDFS. ");
+    writer.println("   This is a lighter version of 'OfflineMetaRepair tool commonly used for ");
+    writer.println("   similar issues on 1.x release line. ");
+    writer.println("   This command needs META to be online. For each table name passed as");
+    writer.println("   parameter, it performs a diff between regions available in META, ");
+    writer.println("   against existing regions dirs on HDFS. Then, for region dirs with ");
+    writer.println("   no matches in META, it reads regioninfo metadata file and ");
+    writer.println("   re-creates given region in META. Regions are re-created in 'CLOSED' ");
+    writer.println("   state at META table only, but not in Masters' cache, and are not ");
+    writer.println("   assigned either. A rolling Masters restart, followed by a ");
+    writer.println("   hbck2 'assigns' command with all re-inserted regions is required. ");
+    writer.println("   This hbck2 'assigns' command is printed for user convenience.");
+    writer.println("   WARNING: To avoid potential region overlapping problems due to ongoing ");
+    writer.println("   splits, this command disables given tables while re-inserting regions. ");
+    writer.println("   An example adding missing regions for tables 'table_1' and 'table_2':");
+    writer.println(" $ HBCK2 addMissingRegionsInMeta table_1 table_2");
+    writer.println("   Returns hbck2 'assigns' command with all re-inserted regions.");
 
 Review comment:
   Thanks!




[GitHub] [hbase-operator-tools] wchevreuil commented on a change in pull request #3: Hbase 22567

2019-07-04 Thread GitBox
wchevreuil commented on a change in pull request #3: Hbase 22567
URL: https://github.com/apache/hbase-operator-tools/pull/3#discussion_r300466903
 
 

 ##
 File path: hbase-hbck2/src/main/java/org/apache/hbase/HBCK2.java
 ##
 @@ -334,6 +469,51 @@ private static final String getCommandUsage() {
     writer.println("   Returns \"0\" SUCCESS code if it informed region state is changed, "
       + "\"1\" FAIL code otherwise.");
     writer.println();
+    writer.println(" " + ADD_MISSING_REGIONS_IN_META_FOR_TABLES + " ...");
+    writer.println("   To be used in scenarios where some regions may be missing in META,");
+    writer.println("   but there's still a valid 'regioninfo metadata file on HDFS. ");
+    writer.println("   This is a lighter version of 'OfflineMetaRepair tool commonly used for ");
+    writer.println("   similar issues on 1.x release line. ");
+    writer.println("   This command needs META to be online. For each table name passed as");
+    writer.println("   parameter, it performs a diff between regions available in META, ");
+    writer.println("   against existing regions dirs on HDFS. Then, for region dirs with ");
+    writer.println("   no matches in META, it reads regioninfo metadata file and ");
+    writer.println("   re-creates given region in META. Regions are re-created in 'CLOSED' ");
+    writer.println("   state at META table only, but not in Masters' cache, and are not ");
+    writer.println("   assigned either. A rolling Masters restart, followed by a ");
+    writer.println("   hbck2 'assigns' command with all re-inserted regions is required. ");
+    writer.println("   This hbck2 'assigns' command is printed for user convenience.");
+    writer.println("   WARNING: To avoid potential region overlapping problems due to ongoing ");
+    writer.println("   splits, this command disables given tables while re-inserting regions. ");
+    writer.println("   An example adding missing regions for tables 'table_1' and 'table_2':");
+    writer.println(" $ HBCK2 addMissingRegionsInMeta table_1 table_2");
 
 Review comment:
   Yep, addressing it in the next commit.




[GitHub] [hbase-operator-tools] wchevreuil commented on a change in pull request #3: Hbase 22567

2019-07-04 Thread GitBox
wchevreuil commented on a change in pull request #3: Hbase 22567
URL: https://github.com/apache/hbase-operator-tools/pull/3#discussion_r300465599
 
 

 ##
 File path: hbase-hbck2/src/main/java/org/apache/hbase/HBCK2.java
 ##
 @@ -334,6 +469,51 @@ private static final String getCommandUsage() {
     writer.println("   Returns \"0\" SUCCESS code if it informed region state is changed, "
       + "\"1\" FAIL code otherwise.");
     writer.println();
+    writer.println(" " + ADD_MISSING_REGIONS_IN_META_FOR_TABLES + " ...");
+    writer.println("   To be used in scenarios where some regions may be missing in META,");
+    writer.println("   but there's still a valid 'regioninfo metadata file on HDFS. ");
+    writer.println("   This is a lighter version of 'OfflineMetaRepair tool commonly used for ");
+    writer.println("   similar issues on 1.x release line. ");
+    writer.println("   This command needs META to be online. For each table name passed as");
+    writer.println("   parameter, it performs a diff between regions available in META, ");
+    writer.println("   against existing regions dirs on HDFS. Then, for region dirs with ");
+    writer.println("   no matches in META, it reads regioninfo metadata file and ");
+    writer.println("   re-creates given region in META. Regions are re-created in 'CLOSED' ");
+    writer.println("   state at META table only, but not in Masters' cache, and are not ");
+    writer.println("   assigned either. A rolling Masters restart, followed by a ");
+    writer.println("   hbck2 'assigns' command with all re-inserted regions is required. ");
+    writer.println("   This hbck2 'assigns' command is printed for user convenience.");
+    writer.println("   WARNING: To avoid potential region overlapping problems due to ongoing ");
+    writer.println("   splits, this command disables given tables while re-inserting regions. ");
 
 Review comment:
   Done, will be in next commit.




[GitHub] [hbase-operator-tools] wchevreuil commented on a change in pull request #3: Hbase 22567

2019-07-04 Thread GitBox
wchevreuil commented on a change in pull request #3: Hbase 22567
URL: https://github.com/apache/hbase-operator-tools/pull/3#discussion_r300465153
 
 

 ##
 File path: hbase-hbck2/src/main/java/org/apache/hbase/HBCK2.java
 ##
 @@ -334,6 +469,51 @@ private static final String getCommandUsage() {
     writer.println("   Returns \"0\" SUCCESS code if it informed region state is changed, "
       + "\"1\" FAIL code otherwise.");
     writer.println();
+    writer.println(" " + ADD_MISSING_REGIONS_IN_META_FOR_TABLES + " ...");
+    writer.println("   To be used in scenarios where some regions may be missing in META,");
+    writer.println("   but there's still a valid 'regioninfo metadata file on HDFS. ");
+    writer.println("   This is a lighter version of 'OfflineMetaRepair tool commonly used for ");
+    writer.println("   similar issues on 1.x release line. ");
+    writer.println("   This command needs META to be online. For each table name passed as");
+    writer.println("   parameter, it performs a diff between regions available in META, ");
+    writer.println("   against existing regions dirs on HDFS. Then, for region dirs with ");
+    writer.println("   no matches in META, it reads regioninfo metadata file and ");
+    writer.println("   re-creates given region in META. Regions are re-created in 'CLOSED' ");
+    writer.println("   state at META table only, but not in Masters' cache, and are not ");
+    writer.println("   assigned either. A rolling Masters restart, followed by a ");
+    writer.println("   hbck2 'assigns' command with all re-inserted regions is required. ");
+    writer.println("   This hbck2 'assigns' command is printed for user convenience.");
 
 Review comment:
   Thanks! Will be addressed in next commit.




[GitHub] [hbase-operator-tools] wchevreuil commented on a change in pull request #3: Hbase 22567

2019-07-04 Thread GitBox
wchevreuil commented on a change in pull request #3: Hbase 22567
URL: https://github.com/apache/hbase-operator-tools/pull/3#discussion_r300464335
 
 

 ##
 File path: hbase-hbck2/src/main/java/org/apache/hbase/HBCK2.java
 ##
 @@ -334,6 +469,51 @@ private static final String getCommandUsage() {
     writer.println("   Returns \"0\" SUCCESS code if it informed region state is changed, "
       + "\"1\" FAIL code otherwise.");
     writer.println();
+    writer.println(" " + ADD_MISSING_REGIONS_IN_META_FOR_TABLES + " ...");
+    writer.println("   To be used in scenarios where some regions may be missing in META,");
+    writer.println("   but there's still a valid 'regioninfo metadata file on HDFS. ");
+    writer.println("   This is a lighter version of 'OfflineMetaRepair tool commonly used for ");
+    writer.println("   similar issues on 1.x release line. ");
+    writer.println("   This command needs META to be online. For each table name passed as");
+    writer.println("   parameter, it performs a diff between regions available in META, ");
+    writer.println("   against existing regions dirs on HDFS. Then, for region dirs with ");
+    writer.println("   no matches in META, it reads regioninfo metadata file and ");
+    writer.println("   re-creates given region in META. Regions are re-created in 'CLOSED' ");
+    writer.println("   state at META table only, but not in Masters' cache, and are not ");
+    writer.println("   assigned either. A rolling Masters restart, followed by a ");
 
 Review comment:
   It's the penalty for doing things without consent from the Master :)...

   We are performing client puts of region infos into meta. While that does insert the region back into meta, the active _Master.AssignmentManager.RegionStateStore_ never gets updated about the new region, so any attempt to assign these re-added regions will fail, because the AM doesn't know about them. I couldn't currently find any master rpc method that could trigger a meta _reload_.
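   The divergence can be sketched abstractly (the maps and method names below are illustrative stand-ins, not HBase APIs): the client write lands in the persisted meta table but not in the Master's in-memory region states, so assigns miss until a Master restart reloads the cache from meta.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative model only: "meta" stands for the persisted hbase:meta table,
// "amCache" for the Master's in-memory AssignmentManager region-state cache.
public class MetaVsAmCache {
  static final Map<String, String> meta = new HashMap<>();
  static final Map<String, String> amCache = new HashMap<>();

  // hbck2-style client put: writes meta directly, bypassing the Master
  static void clientAddRegion(String encodedName) {
    meta.put(encodedName, "CLOSED");
  }

  // Assign can only act on regions the AM cache knows about
  static boolean canAssign(String encodedName) {
    return amCache.containsKey(encodedName);
  }

  // A rolling Master restart reloads region states from meta
  static void restartMaster() {
    amCache.clear();
    amCache.putAll(meta);
  }

  public static void main(String[] args) {
    clientAddRegion("a763499801435d2f78ab42876c6cb3ec");
    System.out.println(canAssign("a763499801435d2f78ab42876c6cb3ec")); // false before restart
    restartMaster();
    System.out.println(canAssign("a763499801435d2f78ab42876c6cb3ec")); // true afterwards
  }
}
```

   This is why the command's help text prescribes a rolling Masters restart before running 'assigns' on the re-inserted regions.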




[GitHub] [hbase-operator-tools] wchevreuil commented on a change in pull request #3: Hbase 22567

2019-07-04 Thread GitBox
wchevreuil commented on a change in pull request #3: Hbase 22567
URL: https://github.com/apache/hbase-operator-tools/pull/3#discussion_r300456824
 
 

 ##
 File path: hbase-hbck2/src/main/java/org/apache/hbase/HBCK2.java
 ##
 @@ -334,6 +469,51 @@ private static final String getCommandUsage() {
     writer.println("   Returns \"0\" SUCCESS code if it informed region state is changed, "
       + "\"1\" FAIL code otherwise.");
     writer.println();
+    writer.println(" " + ADD_MISSING_REGIONS_IN_META_FOR_TABLES + " ...");
+    writer.println("   To be used in scenarios where some regions may be missing in META,");
+    writer.println("   but there's still a valid 'regioninfo metadata file on HDFS. ");
+    writer.println("   This is a lighter version of 'OfflineMetaRepair tool commonly used for ");
+    writer.println("   similar issues on 1.x release line. ");
+    writer.println("   This command needs META to be online. For each table name passed as");
+    writer.println("   parameter, it performs a diff between regions available in META, ");
+    writer.println("   against existing regions dirs on HDFS. Then, for region dirs with ");
 
 Review comment:
   The most critical features I can think of here are Splits and Merges, but I believe CatalogJanitor would move retired region dirs to archive when it deletes the given region from meta, right? Retired regions still in meta with region dirs wouldn't be a problem, because as long as these still have entries in meta, they wouldn't be marked as missing. Can you think of any valid scenario where meta doesn't have a given region, but that region dir is still on hdfs (non-archived)? If so, then I think it could be a problem, because this command would recreate that region info in meta.




[GitHub] [hbase-operator-tools] wchevreuil commented on a change in pull request #3: Hbase 22567

2019-07-04 Thread GitBox
wchevreuil commented on a change in pull request #3: Hbase 22567
URL: https://github.com/apache/hbase-operator-tools/pull/3#discussion_r300454504
 
 

 ##
 File path: hbase-hbck2/src/main/java/org/apache/hbase/HBCK2.java
 ##
 @@ -334,6 +469,51 @@ private static final String getCommandUsage() {
     writer.println("   Returns \"0\" SUCCESS code if it informed region state is changed, "
       + "\"1\" FAIL code otherwise.");
     writer.println();
+    writer.println(" " + ADD_MISSING_REGIONS_IN_META_FOR_TABLES + " ...");
+    writer.println("   To be used in scenarios where some regions may be missing in META,");
+    writer.println("   but there's still a valid 'regioninfo metadata file on HDFS. ");
+    writer.println("   This is a lighter version of 'OfflineMetaRepair tool commonly used for ");
+    writer.println("   similar issues on 1.x release line. ");
 
 Review comment:
   It does more complex stuff, like validating region ranges among existing hdfs region dirs. It also sidelines and recreates meta from scratch, then inserts table and region info (read from hdfs) into the newly created meta table.




[GitHub] [hbase-operator-tools] wchevreuil commented on a change in pull request #3: Hbase 22567

2019-07-04 Thread GitBox
wchevreuil commented on a change in pull request #3: Hbase 22567
URL: https://github.com/apache/hbase-operator-tools/pull/3#discussion_r300448817
 
 

 ##
 File path: hbase-hbck2/src/main/java/org/apache/hbase/HBCK2.java
 ##
 @@ -334,6 +469,51 @@ private static final String getCommandUsage() {
     writer.println("   Returns \"0\" SUCCESS code if it informed region state is changed, "
       + "\"1\" FAIL code otherwise.");
     writer.println();
+    writer.println(" " + ADD_MISSING_REGIONS_IN_META_FOR_TABLES + " ...");
+    writer.println("   To be used in scenarios where some regions may be missing in META,");
+    writer.println("   but there's still a valid 'regioninfo metadata file on HDFS. ");
 
 Review comment:
   Good point. Maybe another section in the _readme_?




[GitHub] [hbase] Apache-HBase commented on issue #315: HBASE-22594 Clean up for backup examples

2019-07-04 Thread GitBox
Apache-HBase commented on issue #315: HBASE-22594 Clean up for backup examples
URL: https://github.com/apache/hbase/pull/315#issuecomment-508520379
 
 
   :confetti_ball: **+1 overall**
   
   
   | Vote | Subsystem | Runtime | Comment |
   |:----:|----------:|:--------|:--------|
   | 0 | reexec | 28 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | hbaseanti | 0 | Patch does not have any anti-patterns. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test files. |
   ||| _ master Compile Tests _ |
   | 0 | mvndep | 28 | Maven dependency ordering for branch |
   | +1 | mvninstall | 253 | master passed |
   | +1 | compile | 65 | master passed |
   | +1 | checkstyle | 130 | master passed |
   | +1 | shadedjars | 269 | branch has no errors when building our shaded downstream artifacts. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: hbase-checkstyle |
   | +1 | findbugs | 208 | master passed |
   | +1 | javadoc | 44 | master passed |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 14 | Maven dependency ordering for patch |
   | +1 | mvninstall | 242 | the patch passed |
   | +1 | compile | 62 | the patch passed |
   | +1 | javac | 62 | the patch passed |
   | +1 | checkstyle | 124 | root: The patch generated 0 new + 0 unchanged - 26 fixed = 0 total (was 26) |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   | +1 | shadedjars | 268 | patch has no errors when building our shaded downstream artifacts. |
   | +1 | hadoopcheck | 731 | Patch does not cause any errors with Hadoop 2.8.5 2.9.2 or 3.1.2. |
   | 0 | findbugs | 0 | Skipped patched modules with no Java source: hbase-checkstyle |
   | +1 | findbugs | 212 | the patch passed |
   | +1 | javadoc | 42 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 10 | hbase-checkstyle in the patch passed. |
   | +1 | unit | 8280 | hbase-server in the patch passed. |
   | +1 | asflicense | 47 | The patch does not generate ASF License warnings. |
   | | | 11386 | |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-315/8/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/315 |
   | Optional Tests | dupname asflicense checkstyle javac javadoc unit xml findbugs shadedjars hadoopcheck hbaseanti compile |
   | uname | Linux 3339e427d492 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | master / 9116534f5d |
   | maven | version: Apache Maven 3.5.4 (1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | Default Java | 1.8.0_181 |
   | findbugs | v3.1.11 |
   | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-315/8/testReport/ |
   | Max. process+thread count | 4916 (vs. ulimit of 1) |
   | modules | C: hbase-checkstyle hbase-server U: . |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-315/8/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hbase] wchevreuil commented on issue #345: HBASE-22638 : Zookeeper Utility enhancements

2019-07-04 Thread GitBox
wchevreuil commented on issue #345: HBASE-22638 : Zookeeper Utility enhancements
URL: https://github.com/apache/hbase/pull/345#issuecomment-508516455
 
 
   +1 for the latest commit.




[GitHub] [hbase] wchevreuil commented on a change in pull request #345: HBASE-22638 : Zookeeper Utility enhancements

2019-07-04 Thread GitBox
wchevreuil commented on a change in pull request #345: HBASE-22638 : Zookeeper Utility enhancements
URL: https://github.com/apache/hbase/pull/345#discussion_r300441002
 
 

 ##
 File path: 
hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java
 ##
 @@ -1859,31 +1873,33 @@ private static void getReplicationZnodesDump(ZKWatcher zkw, StringBuilder sb)
     // do a ls -r on this znode
     sb.append("\n").append(replicationZnode).append(": ");
     List<String> children = ZKUtil.listChildrenNoWatch(zkw, replicationZnode);
-    Collections.sort(children);
-    for (String child : children) {
-      String znode = ZNodePaths.joinZNode(replicationZnode, child);
-      if (znode.equals(zkw.getZNodePaths().peersZNode)) {
-        appendPeersZnodes(zkw, znode, sb);
-      } else if (znode.equals(zkw.getZNodePaths().queuesZNode)) {
-        appendRSZnodes(zkw, znode, sb);
-      } else if (znode.equals(zkw.getZNodePaths().hfileRefsZNode)) {
-        appendHFileRefsZnodes(zkw, znode, sb);
+    if (children != null) {
+      Collections.sort(children);
+      for (String child : children) {
+        String zNode = ZNodePaths.joinZNode(replicationZnode, child);
+        if (zNode.equals(zkw.getZNodePaths().peersZNode)) {
+          appendPeersZnodes(zkw, zNode, sb);
+        } else if (zNode.equals(zkw.getZNodePaths().queuesZNode)) {
+          appendRSZnodes(zkw, zNode, sb);
+        } else if (zNode.equals(zkw.getZNodePaths().hfileRefsZNode)) {
+          appendHFileRefsZnodes(zkw, zNode, sb);
+        }
       }
     }
   }
 
   private static void appendHFileRefsZnodes(ZKWatcher zkw, String hfileRefsZnode,
       StringBuilder sb) throws KeeperException {
     sb.append("\n").append(hfileRefsZnode).append(": ");
-    for (String peerIdZnode : ZKUtil.listChildrenNoWatch(zkw, hfileRefsZnode)) {
-      String znodeToProcess = ZNodePaths.joinZNode(hfileRefsZnode, peerIdZnode);
-      sb.append("\n").append(znodeToProcess).append(": ");
-      List<String> peerHFileRefsZnodes = ZKUtil.listChildrenNoWatch(zkw, znodeToProcess);
-      int size = peerHFileRefsZnodes.size();
-      for (int i = 0; i < size; i++) {
-        sb.append(peerHFileRefsZnodes.get(i));
-        if (i != size - 1) {
-          sb.append(", ");
+    final List<String> hFileRefChildrenNoWatchList = ZKUtil.listChildrenNoWatch(zkw, hfileRefsZnode);
+    if (hFileRefChildrenNoWatchList != null) {
+      for (String peerIdZNode : hFileRefChildrenNoWatchList) {
 
 Review comment:
   lgtm. While @maoling's comments make sense in general, here the given if structure only accounts for 3 levels (2 nested inside), so I guess it's fine to leave it as it is, for consistency with the rest of the class, as pointed out by @HorizonNet.
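   The null-guard pattern being reviewed can be reduced to a small sketch (`listChildren` below is a hypothetical stand-in for `ZKUtil.listChildrenNoWatch`, which may return null when the znode is absent):

```java
import java.util.List;

public class ZnodeDump {
  // Stand-in for ZKUtil.listChildrenNoWatch: returns null for a missing znode
  static List<String> listChildren(boolean znodeExists) {
    return znodeExists ? List.of("peers", "rs") : null;
  }

  // Mirrors the patched pattern: guard the iteration instead of risking an NPE
  static String dump(boolean znodeExists) {
    StringBuilder sb = new StringBuilder();
    List<String> children = listChildren(znodeExists);
    if (children != null) {
      for (String child : children) {
        sb.append('/').append(child);
      }
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    System.out.println(dump(true));  // prints "/peers/rs"
    System.out.println(dump(false)); // prints an empty line: no NPE
  }
}
```

   Wrapping the whole loop in one `if (children != null)` keeps the nesting shallow, which is the trade-off discussed above versus an early-return guard clause.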




[GitHub] [hbase] busbey commented on issue #356: HBASE-15666 shaded dependencies for hbase-testing-util

2019-07-04 Thread GitBox
busbey commented on issue #356: HBASE-15666 shaded dependencies for hbase-testing-util
URL: https://github.com/apache/hbase/pull/356#issuecomment-508507023
 
 
   > @busbey Does this mean that we would be able to resume testing in ycsb as we were blocked on HBASE-15666 as per https://github.com/brianfrankcooper/YCSB/blob/master/hbase12/pom.xml#L36 ?
   
   Yep exactly! That's what I'd like to test here.




[jira] [Commented] (HBASE-20405) Update website to meet foundation recommendations

2019-07-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20405?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878708#comment-16878708
 ] 

Hudson commented on HBASE-20405:


Results for branch master
[build #1199 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1199/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1199//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1199//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1199//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Update website to meet foundation recommendations
> -
>
> Key: HBASE-20405
> URL: https://issues.apache.org/jira/browse/HBASE-20405
> Project: HBase
>  Issue Type: Task
>  Components: website
>Reporter: Sean Busbey
>Assignee: Szalay-Beko Mate
>Priority: Minor
>  Labels: beginner
> Fix For: 3.0.0
>
> Attachments: HBASE-20405-v2.png, HBASE-20405.png
>
>
> The Apache Whimsy tool includes an automated checker on if projects are 
> following foundation guidance for web sites:
> https://whimsy.apache.org/site/project/hbase
> out of 10 checks, we currently have 5 green, 4 red, and 1 orange.
> The whimsy listing gives links to relevant policy and explains what it's 
> looking for.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22537) Split happened Replica region can not be deleted after deleting table successfully and restarting RegionServer

2019-07-04 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22537?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878705#comment-16878705
 ] 

HBase QA commented on HBASE-22537:
--

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
57s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} hbaseanti {color} | {color:green}  0m  
0s{color} | {color:green} Patch does not have any anti-patterns. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2.1 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  5m 
25s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m  
7s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
25s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
 4s{color} | {color:green} branch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
43s{color} | {color:green} branch-2.1 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
40s{color} | {color:green} branch-2.1 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  4m 
57s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  1m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} shadedjars {color} | {color:green}  5m 
57s{color} | {color:green} patch has no errors when building our shaded 
downstream artifacts. {color} |
| {color:green}+1{color} | {color:green} hadoopcheck {color} | {color:green} 
20m 44s{color} | {color:green} Patch does not cause any errors with Hadoop 
2.7.7 2.8.5 or 3.0.3 3.1.2. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}223m 
50s{color} | {color:green} hbase-server in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
34s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}285m 14s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/PreCommit-HBASE-Build/599/artifact/patchprocess/Dockerfile
 |
| JIRA Issue | HBASE-22537 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973648/HBASE-22537.branch-2.1.003.patch
 |
| Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
| uname | Linux 20e340f7b613 4.4.0-143-generic #169-Ubuntu SMP Thu Feb 7 
07:56:38 UTC 2019 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | dev-support/hbase-personality.sh |
| git revision | branch-2.1 / 6d5e120596 |
| maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
| Default Java | 1.8.0_181 |
| findbugs | v3.1.11 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HBASE-Build/599/testReport/ |
| Max. process+thread count | 5209 (vs. ulimit of 1) |
| modules | C: hbase-server U: hbase-server |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/599/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |



[jira] [Commented] (HBASE-22417) DeleteTableProcedure.deleteFromMeta method should remove table from Master's table descriptors cache

2019-07-04 Thread Wellington Chevreuil (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22417?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878703#comment-16878703
 ] 

Wellington Chevreuil commented on HBASE-22417:
--

Thanks for reviewing it and pointing out the "case" formatting issue, [~stack]! I 
reformatted DeleteTableProcedure properly and attached a new patch with the 
changes.

> DeleteTableProcedure.deleteFromMeta method should remove table from Master's 
> table descriptors cache
> 
>
> Key: HBASE-22417
> URL: https://issues.apache.org/jira/browse/HBASE-22417
> Project: HBase
>  Issue Type: Bug
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Major
> Attachments: HBASE-22417.master.001.patch, 
> HBASE-22417.master.002.patch, HBASE-22417.master.003.patch
>
>
> DeleteTableProcedure defines a static deleteFromMeta method that's currently 
> used both by DeleteTableProcedure itself and TruncateTableProcedure. 
> Sometimes, depending on the table size (and on slower, underperforming 
> FileSystems), truncation can take longer to complete the 
> *TRUNCATE_TABLE_CLEAR_FS_LAYOUT* stage, but the given table has already been 
> deleted from meta in the previous *TRUNCATE_TABLE_REMOVE_FROM_META* stage. In 
> this case, features relying on the Master's table descriptor cache might 
> wrongly try to reference this truncating table. The Master Web UI, for example, 
> would try to check this table's state and end up showing a 500 error. 
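
The stale-cache hazard described above can be illustrated with a minimal sketch (plain Java; the map-based cache and method names here are hypothetical stand-ins, not HBase's actual TableDescriptors API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DescriptorCacheSketch {
  // Stand-ins for the Master's table-descriptor cache and the META table.
  static final Map<String, String> DESCRIPTOR_CACHE = new ConcurrentHashMap<>();
  static final Map<String, String> META = new ConcurrentHashMap<>();

  // Buggy variant: removes the table from META but leaves the cache stale,
  // so features reading the cache still "see" the deleted table.
  static void deleteFromMetaOnly(String table) {
    META.remove(table);
  }

  // Fixed variant: also evicts the Master's cached descriptor.
  static void deleteFromMeta(String table) {
    META.remove(table);
    DESCRIPTOR_CACHE.remove(table);
  }

  public static void main(String[] args) {
    META.put("t1", "row"); DESCRIPTOR_CACHE.put("t1", "descriptor");
    deleteFromMetaOnly("t1");
    System.out.println(DESCRIPTOR_CACHE.containsKey("t1")); // true: stale entry

    META.put("t2", "row"); DESCRIPTOR_CACHE.put("t2", "descriptor");
    deleteFromMeta("t2");
    System.out.println(DESCRIPTOR_CACHE.containsKey("t2")); // false
  }
}
```

The point is simply that the deletion and the cache eviction must happen in the same step, so a slow later stage (like clearing the FS layout) cannot observe a half-deleted table through the cache.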





[jira] [Updated] (HBASE-22417) DeleteTableProcedure.deleteFromMeta method should remove table from Master's table descriptors cache

2019-07-04 Thread Wellington Chevreuil (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22417?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HBASE-22417:
-
Attachment: HBASE-22417.master.003.patch

> DeleteTableProcedure.deleteFromMeta method should remove table from Master's 
> table descriptors cache
> 
>
> Key: HBASE-22417
> URL: https://issues.apache.org/jira/browse/HBASE-22417
> Project: HBase
>  Issue Type: Bug
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Major
> Attachments: HBASE-22417.master.001.patch, 
> HBASE-22417.master.002.patch, HBASE-22417.master.003.patch
>
>
> DeleteTableProcedure defines a static deleteFromMeta method that's currently 
> used both by DeleteTableProcedure itself and TruncateTableProcedure. 
> Sometimes, depending on the table size (and on slower, underperforming 
> FileSystems), truncation can take longer to complete the 
> *TRUNCATE_TABLE_CLEAR_FS_LAYOUT* stage, but the given table has already been 
> deleted from meta in the previous *TRUNCATE_TABLE_REMOVE_FROM_META* stage. In 
> this case, features relying on the Master's table descriptor cache might 
> wrongly try to reference this truncating table. The Master Web UI, for example, 
> would try to check this table's state and end up showing a 500 error. 





[jira] [Commented] (HBASE-21879) Read HFile's block to ByteBuffer directly instead of to byte for reducing young gc purpose

2019-07-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-21879?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878701#comment-16878701
 ] 

Hudson commented on HBASE-21879:


Results for branch HBASE-21879
[build #167 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/167/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/167//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/167//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/HBASE-21879/167//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> Read HFile's block to ByteBuffer directly instead of to byte for reducing 
> young gc purpose
> --
>
> Key: HBASE-21879
> URL: https://issues.apache.org/jira/browse/HBASE-21879
> Project: HBase
>  Issue Type: Improvement
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.3.0
>
> Attachments: HBASE-21879.v1.patch, HBASE-21879.v1.patch, 
> QPS-latencies-before-HBASE-21879.png, gc-data-before-HBASE-21879.png
>
>
> In HFileBlock#readBlockDataInternal,  we have the following: 
> {code}
> @VisibleForTesting
> protected HFileBlock readBlockDataInternal(FSDataInputStream is, long offset,
> long onDiskSizeWithHeaderL, boolean pread, boolean verifyChecksum, 
> boolean updateMetrics)
>  throws IOException {
>  // .
>   // TODO: Make this ByteBuffer-based. Will make it easier to go to HDFS with 
> BBPool (offheap).
>   byte [] onDiskBlock = new byte[onDiskSizeWithHeader + hdrSize];
>   int nextBlockOnDiskSize = readAtOffset(is, onDiskBlock, preReadHeaderSize,
>   onDiskSizeWithHeader - preReadHeaderSize, true, offset + 
> preReadHeaderSize, pread);
>   if (headerBuf != null) {
> // ...
>   }
>   // ...
>  }
> {code}
> In the read path, we still read the block from the hfile into an on-heap byte[], 
> then copy the on-heap byte[] to the off-heap bucket cache asynchronously. In my 
> 100% get performance test, I also observed some frequent young GCs; the 
> largest memory footprint in the young gen should be the on-heap block byte[].
> In fact, we can read an HFile's block into a ByteBuffer directly instead of 
> into a byte[], to reduce young GC. We did not implement this before because 
> the older HDFS client had no ByteBuffer reading interface, but 2.7+ 
> supports it, so we can fix this now, I think. 
> Will provide a patch and some perf-comparison for this. 
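
The direct-to-ByteBuffer idea can be sketched in plain Java NIO (an illustrative model under assumptions: it uses a local FileChannel positional read rather than HBase's actual HFileBlock/HDFS ByteBufferReadable path, and all class and method names are hypothetical):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class DirectBlockRead {

  // Read 'len' bytes at 'offset' straight into a direct (off-heap) ByteBuffer,
  // so no intermediate on-heap byte[] is allocated per block read.
  static ByteBuffer readBlock(FileChannel ch, long offset, int len) {
    ByteBuffer buf = ByteBuffer.allocateDirect(len); // off-heap: no young-gen garbage
    try {
      while (buf.hasRemaining()) {
        int n = ch.read(buf, offset + buf.position()); // positional read, no seek
        if (n < 0) {
          throw new IOException("EOF before block was fully read");
        }
      }
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
    buf.flip();
    return buf;
  }

  // Demonstration helper: writes 'data' to a temp file and reads a slice of it
  // back through readBlock.
  static ByteBuffer readFromTempFile(byte[] data, long offset, int len) {
    try {
      Path p = Files.createTempFile("block", ".bin");
      Files.write(p, data);
      try (FileChannel ch = FileChannel.open(p, StandardOpenOption.READ)) {
        return readBlock(ch, offset, len);
      } finally {
        Files.delete(p);
      }
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }

  public static void main(String[] args) {
    ByteBuffer b = readFromTempFile(new byte[] {1, 2, 3, 4, 5, 6, 7, 8}, 2, 4);
    System.out.println(b.remaining() + " " + b.get(0) + " " + b.get(3)); // 4 3 6
  }
}
```

In the real fix, the destination buffer would come from the off-heap pool feeding the bucket cache, and the read would go through the HDFS client's ByteBuffer-reading interface (available from 2.7+ as the description notes).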





[jira] [Commented] (HBASE-22567) HBCK2 addMissingRegionsToMeta

2019-07-04 Thread Wellington Chevreuil (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22567?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878644#comment-16878644
 ] 

Wellington Chevreuil commented on HBASE-22567:
--

{quote}HMaster adds a given missing table's row, whereas presenting in HDFS, to 
META upon restart.
{quote}
Thanks for pointing it out, [~daisuke.kobayashi]! While this would indeed help, I 
think it's still worth re-implementing the table state recovery logic in 
HBCK2. My concern here is that this depends on ZK state 
([_queryForTableStates_|https://github.com/apache/hbase/blob/ac4e52880b2da6d0a2c2a9e949aa55d1dd4e7371/hbase-server/src/main/java/org/apache/hadoop/hbase/master/TableStateManager.java#L329]
 iterates through table znodes in ZK), but we may face scenarios where the ZK 
state is invalid (especially these days, when a widespread workaround among 
operators trying to recover hbase involves wiping out the "/hbase" znode in 
ZK).

> HBCK2 addMissingRegionsToMeta
> -
>
> Key: HBASE-22567
> URL: https://issues.apache.org/jira/browse/HBASE-22567
> Project: HBase
>  Issue Type: New Feature
>Reporter: Wellington Chevreuil
>Assignee: Wellington Chevreuil
>Priority: Major
>
> Following the latest discussion on HBASE-21745, this proposes an hbck2 command 
> that allows inserting back regions missing in META that still have a 
> *regioninfo* available in HDFS. Although this is an interactive and 
> simpler version of the old _OfflineMetaRepair_, it still relies on HDFS 
> state as the source of truth, and performs META updates mostly independently 
> from the Master (apart from requiring the Meta table to be online).
> For a more detailed explanation on this command behaviour, pasting _command 
> usage_ text:
> {noformat}
> To be used for scenarios where some regions may be missing in META,
> but there's still a valid 'regioninfo' metadata file on HDFS.
> This is a lighter version of 'OfflineMetaRepair' tool commonly used for
> similar issues on 1.x release line.
> This command needs META to be online. For each table name passed as
> parameter, it performs a diff between regions available in META,
> against existing regions dirs on HDFS. Then, for region dirs with
> no matches in META, it reads regioninfo metadata file and
> re-creates given region in META. Regions are re-created in 'CLOSED'
> state at META table only, but not in Masters' cache, and are not
> assigned either. A rolling Masters restart, followed by a
> hbck2 'assigns' command with all re-inserted regions is required.
> This hbck2 'assigns' command is printed for user convenience.
> WARNING: To avoid potential region overlapping problems due to ongoing
> splits, this command disables given tables while re-inserting regions.
> An example adding missing regions for tables 'table_1' and 'table_2':
> $ HBCK2 addMissingRegionsInMeta table_1 table_2
> Returns hbck2 'assigns' command with all re-inserted regions.{noformat}
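
The META-vs-HDFS diff at the heart of the command can be sketched as a simple set difference (illustrative only; region names are plain strings here rather than actual RegionInfo objects, and the method name is hypothetical):

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class MissingRegionsDiff {
  // Regions that exist as dirs on HDFS but are absent from META are the
  // candidates for re-insertion: the command would read each dir's
  // .regioninfo file and re-create the region in META in CLOSED state.
  static List<String> missingInMeta(Set<String> metaRegions, Set<String> hdfsRegionDirs) {
    List<String> missing = new ArrayList<>(hdfsRegionDirs);
    missing.removeAll(metaRegions);
    Collections.sort(missing); // deterministic output for the printed command
    return missing;
  }

  public static void main(String[] args) {
    Set<String> meta = new HashSet<>(Arrays.asList("r1", "r2"));
    Set<String> hdfs = new HashSet<>(Arrays.asList("r1", "r2", "r3", "r4"));
    List<String> missing = missingInMeta(meta, hdfs);
    System.out.println(missing); // [r3, r4]
    // The usage text says the matching 'assigns' command is printed for the user:
    System.out.println("assigns " + String.join(" ", missing)); // assigns r3 r4
  }
}
```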





[jira] [Commented] (HBASE-22582) The Compaction writer may access the lastCell whose memory has been released when appending fileInfo in the final

2019-07-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878625#comment-16878625
 ] 

Hudson commented on HBASE-22582:


Results for branch branch-2.0
[build #1726 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1726/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1726//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1726//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.0/1726//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> The Compaction writer may access the lastCell whose memory has been released 
> when appending fileInfo in the final
> -
>
> Key: HBASE-22582
> URL: https://issues.apache.org/jira/browse/HBASE-22582
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.0.6, 2.2.1, 2.1.7
>
>
> Copy the comment from [~javaman_chen] under HBASE-21879: 
> https://issues.apache.org/jira/browse/HBASE-21879?focusedCommentId=16862244&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16862244
> In Compactor#compact, we have the following:
> {code}
> protected List compact(final CompactionRequest request...
>   ...
>   try {
> ...
>   } finally {
> Closeables.close(scanner, true);
> if (!finished && writer != null) {
>   abortWriter(writer);
> }
>   }
>   assert finished : "We should have exited the method on all error paths";
>   assert writer != null : "Writer should be non-null if no error";
>   return commitWriter(writer, fd, request);
> }
> {code}
> Should we call writer#beforeShipped() before Closeables.close(scanner, true), 
> in order to copy some cells' data out of the ByteBuff before it is released? 
> Otherwise commitWriter may go wrong in the following call stack:
> {code}
> Compactor#commitWriter
> -> HFileWriterImpl#close
>  -> HFileWriterImpl#writeFileInfo
>-> HFileWriterImpl#finishFileInfo
> {code}
> {code}
> protected void finishFileInfo() throws IOException {
>   if (lastCell != null) {
> // Make a copy. The copy is stuffed into our fileinfo map. Needs a clean
> // byte buffer. Won't take a tuple.
> byte [] lastKey = 
> PrivateCellUtil.getCellKeySerializedAsKeyValueKey(this.lastCell);
> fileInfo.append(FileInfo.LASTKEY, lastKey, false);
>   }
>   ...
> }
> {code}
> This is because the lastCell may refer to a reused ByteBuff. 
> I checked the code: it's a bug and will need to be fixed in all 2.x branches & master. 
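
The copy-before-release hazard and the proposed beforeShipped() fix can be modeled with a small sketch (plain Java; Cell and Writer are simplified stand-ins for HBase's Cell and HFileWriterImpl, and the byte[] pool stands in for a reusable ByteBuff):

```java
import java.util.Arrays;

public class BeforeShippedSketch {
  // Model of a cell backed by a shared, reusable buffer (like a pooled ByteBuff).
  static class Cell {
    final byte[] buf; final int off; final int len;
    Cell(byte[] buf, int off, int len) { this.buf = buf; this.off = off; this.len = len; }
    byte[] key() { return Arrays.copyOfRange(buf, off, off + len); }
  }

  static class Writer {
    Cell lastCell;
    void append(Cell c) { lastCell = c; }
    // Deep-copy any state that still points into pooled buffers, so it
    // survives after the scanner releases (recycles) them.
    void beforeShipped() {
      if (lastCell != null) {
        lastCell = new Cell(lastCell.key(), 0, lastCell.len); // private copy
      }
    }
    // Stands in for finishFileInfo(): serializes the last key into file info.
    byte[] finishFileInfo() { return lastCell.key(); }
  }

  public static void main(String[] args) {
    byte[] pool = {'r', 'o', 'w', '9'};
    Writer w = new Writer();
    w.append(new Cell(pool, 0, 4));
    w.beforeShipped();           // copy out BEFORE the buffer is recycled
    Arrays.fill(pool, (byte) 0); // scanner close recycles/overwrites the buffer
    System.out.println(new String(w.finishFileInfo())); // still "row9"
  }
}
```

Without the beforeShipped() call, finishFileInfo() would read the zeroed pool and record a corrupted last key, which is exactly the failure mode described in the call stack above.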





[jira] [Commented] (HBASE-20817) Infinite loop when executing ReopenTableRegionsProcedure

2019-07-04 Thread Jacobo Coll (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-20817?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878622#comment-16878622
 ] 

Jacobo Coll commented on HBASE-20817:
-

Hi team,

 

I'm not sure whether I should open a new ticket for this.

I am hitting this same issue in Hortonworks HBase 
[2.0.2.3.1.2.1-1|https://repo.hortonworks.com/content/repositories/releases/org/apache/hbase/hbase-server/2.0.2.3.1.2.1-1/]
 , where it should be fixed (I've checked that this patch was applied to that 
build).

Just after creating a "view" in Phoenix over an existing table, the 
"ModifyTableProcedure" triggers a "ReopenTableRegionsProcedure" that enters 
this infinite loop of "MoveRegionProcedure". The loop repeats about every 5s, 
fills up the list of procedures, and the procedure WAL is never cleaned up, 
since the running procedure never finishes.

Please find below a selected portion of the hbase-master log. The affected 
table has a pre-split of 100, so the log is quite large. I've shortened some 
lines with dots.

 
{noformat}
2019-07-03 16:12:27,924 INFO [PEWorker-8] procedure2.ProcedureExecutor: 
Initialized subprocedures=[{pid=267, ppid=266, 
state=RUNNABLE:REOPEN_TABLE_REGIONS_GET_REGIONS; ReopenTableRegionsProcedure 
table=opencga_jcoll_grch38_variants}]
2019-07-03 16:12:28,059 INFO [PEWorker-2] procedure2.ProcedureExecutor: 
Initialized subprocedures=[{pid=268, ppid=267, 
state=RUNNABLE:MOVE_REGION_UNASSIGN; MoveRegionProcedure 
hri=fdad9893526ef840d117e6bea7c04bc5, 
source=wn1-opencg.5w3ff4rocu0e1dpkokmkmgo5ib.zx.internal.cloudapp.net,16020,1562169131960,
 
destination=wn1-opencg.5w3ff4rocu0e1dpkokmkmgo5ib.zx.internal.cloudapp.net,16020,1562169131960},
 {pid=269, ppid=267, state=RUNNABLE:MOVE_REGION_UNASSIGN; MoveRegionProcedure 
hri=7b8b7dc99aee4f524af41a86e10ac945, 
source=wn0-opencg.5w3ff4rocu0e1dpkokmkmgo5ib.zx.internal.cloudapp.net,16020,1562169131867,
 
destination=wn0-opencg.5w3ff4rocu0e1dpkokmkmgo5ib.zx.internal.cloudapp.net,16020,1562169131867},
 
,
 {pid=368, ppid=267, state=RUNNABLE:MOVE_REGION_UNASSIGN; MoveRegionProcedure 
hri=60ccd4513bc298b83d062cb0172ccba9, 
source=wn1-opencg.5w3ff4rocu0e1dpkokmkmgo5ib.zx.internal.cloudapp.net,16020,1562169131960,
 
destination=wn1-opencg.5w3ff4rocu0e1dpkokmkmgo5ib.zx.internal.cloudapp.net,16020,1562169131960}]
2019-07-03 16:12:28,096 INFO  [PEWorker-5] procedure.MasterProcedureScheduler: 
Took xlock for pid=268, ppid=267, state=RUNNABLE:MOVE_REGION_UNASSIGN; 
MoveRegionProcedure hri=fdad9893526ef840d117e6bea7c04bc5, 
source=wn1-opencg.5w3ff4rocu0e1dpkokmkmgo5ib.zx.internal.cloudapp.net,16020,1562169131960,
 
destination=wn1-opencg.5w3ff4rocu0e1dpkokmkmgo5ib.zx.internal.cloudapp.net,16020,1562169131960
2019-07-03 16:12:28,116 INFO [PEWorker-5] procedure2.ProcedureExecutor: 
Initialized subprocedures=[{pid=370, ppid=268, 
state=RUNNABLE:REGION_TRANSITION_DISPATCH; UnassignProcedure 
table=opencga_jcoll_grch38_variants, region=fdad9893526ef840d117e6bea7c04bc5, 
override=true, 
server=wn1-opencg.5w3ff4rocu0e1dpkokmkmgo5ib.zx.internal.cloudapp.net,16020,1562169131960}]
2019-07-03 16:12:28,247 INFO [PEWorker-4] procedure.MasterProcedureScheduler: 
Took xlock for pid=370, ppid=268, state=RUNNABLE:REGION_TRANSITION_DISPATCH; 
UnassignProcedure table=opencga_jcoll_grch38_variants, 
region=fdad9893526ef840d117e6bea7c04bc5, override=true, 
server=wn1-opencg.5w3ff4rocu0e1dpkokmkmgo5ib.zx.internal.cloudapp.net,16020,1562169131960
2019-07-03 16:12:28,280 INFO [PEWorker-4] assignment.RegionTransitionProcedure: 
Dispatch pid=370, ppid=268, state=RUNNABLE:REGION_TRANSITION_DISPATCH, 
locked=true; UnassignProcedure table=opencga_jcoll_grch38_variants, 
region=fdad9893526ef840d117e6bea7c04bc5, override=true, 
server=wn1-opencg.5w3ff4rocu0e1dpkokmkmgo5ib.zx.internal.cloudapp.net,16020,1562169131960
2019-07-03 16:12:28,659 INFO [PEWorker-13] procedure2.ProcedureExecutor: 
Finished subprocedure pid=370, resume processing parent pid=268, ppid=267, 
state=RUNNABLE:MOVE_REGION_ASSIGN, locked=true; MoveRegionProcedure 
hri=fdad9893526ef840d117e6bea7c04bc5, 
source=wn1-opencg.5w3ff4rocu0e1dpkokmkmgo5ib.zx.internal.cloudapp.net,16020,1562169131960,
 
destination=wn1-opencg.5w3ff4rocu0e1dpkokmkmgo5ib.zx.internal.cloudapp.net,16020,1562169131960
2019-07-03 16:12:28,659 INFO [PEWorker-13] procedure2.ProcedureExecutor: 
Finished pid=370, ppid=268, state=SUCCESS; UnassignProcedure 
table=opencga_jcoll_grch38_variants, region=fdad9893526ef840d117e6bea7c04bc5, 
override=true, 
server=wn1-opencg.5w3ff4rocu0e1dpkokmkmgo5ib.zx.internal.cloudapp.net,16020,1562169131960
 in 458msec, unfinishedSiblingCount=0
2019-07-03 16:12:28,662 INFO [PEWorker-8] procedure2.ProcedureExecutor: 
Initialized subprocedures=[{pid=408, ppid=268, 
state=RUNNABLE:REGION_TRANSITION_QUEUE; AssignProcedure 
table=opencga_jcoll_grch38_variants,

[GitHub] [hbase] HorizonNet commented on issue #315: HBASE-22594 Clean up for backup examples

2019-07-04 Thread GitBox
HorizonNet commented on issue #315: HBASE-22594 Clean up for backup examples
URL: https://github.com/apache/hbase/pull/315#issuecomment-508463120
 
 
   Rebased it to the latest master. The failed tests from the latest build run 
fine locally.




[GitHub] [hbase] Apache-HBase commented on issue #359: HBASE-20405 fix License and Thanks links on website

2019-07-04 Thread GitBox
Apache-HBase commented on issue #359: HBASE-20405 fix License and Thanks links 
on website
URL: https://github.com/apache/hbase/pull/359#issuecomment-508461243
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 32 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   ||| _ master Compile Tests _ |
   | +1 | mvninstall | 261 | master passed |
   | +1 | mvnsite | 1040 | master passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 233 | the patch passed |
   | +1 | mvnsite | 1017 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | xml | 2 | The patch has no ill-formed XML file. |
   ||| _ Other Tests _ |
   | +1 | asflicense | 19 | The patch does not generate ASF License warnings. |
   | | | 2678 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-359/1/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/359 |
   | Optional Tests |  dupname  asflicense  mvnsite  xml  |
   | uname | Linux a9d5258a9800 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | master / 9116534f5d |
   | maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | Max. process+thread count | 96 (vs. ulimit of 1) |
   | modules | C: . U: . |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-359/1/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[jira] [Commented] (HBASE-22618) Provide a way to have Heterogeneous deployment

2019-07-04 Thread Andrew Purtell (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22618?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878594#comment-16878594
 ] 

Andrew Purtell commented on HBASE-22618:


Pluggable cost functions for the load balancer would be high leverage work 
useful for a variety of things. If that can serve to satisfy the requirements 
here too it’s a good approach and a patch would be welcome. 

> Provide a way to have Heterogeneous deployment
> --
>
> Key: HBASE-22618
> URL: https://issues.apache.org/jira/browse/HBASE-22618
> Project: HBase
>  Issue Type: Improvement
>Affects Versions: 2.1.6, 1.4.11
>Reporter: Pierre Zemb
>Priority: Major
>
> Hi,
> We would like to open a discussion about supporting heterogeneous 
> deployments, i.e. an HBase cluster running on different kinds of hardware.
> h2. Why?
>  * Cloud deployments mean that we may not be able to keep the same hardware 
> throughout the years
>  * Some tables may have special requirements such as SSDs, whereas others 
> should be using hard drives
>  * In our use case (single table, dedicated HBase and Hadoop tuned for our 
> use case, good key distribution), the number of regions per RS was the real 
> limit for us
> h2. Our usecase
> We found out that, *in our use case* (single table, dedicated HBase and Hadoop 
> tuned for our use case, good key distribution), *the number of regions per RS 
> was the real limit for us*.
> Over the years, due to historical reasons and also the need to benchmark new 
> machines, we ended up with different groups of hardware: some servers can 
> handle only 180 regions, whereas the biggest can handle more than 900. 
> Because of such differences, we had to disable load balancing to avoid the 
> {{roundRobinAssignment}}. We developed some internal tooling which is 
> responsible for load balancing regions across RegionServers. That was 1.5 
> years ago.
> h2. Our Proof-of-concept
> We did work on a Proof-of-concept 
> [here|https://github.com/PierreZ/hbase/blob/dev/hbase14/balancer/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/HeterogeneousBalancer.java],
>  and some early tests 
> [here|https://github.com/PierreZ/hbase/blob/dev/hbase14/balancer/hbase-server/src/main/java/org/apache/hadoop/hbase/master/balancer/HeterogeneousBalancer.java],
>  
> [here|https://github.com/PierreZ/hbase/blob/dev/hbase14/balancer/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestHeterogeneousBalancerBalance.java],
>  and 
> [here|https://github.com/PierreZ/hbase/blob/dev/hbase14/balancer/hbase-server/src/test/java/org/apache/hadoop/hbase/master/balancer/TestHeterogeneousBalancerRules.java].
>  We wrote the balancer for our use-case, which means that:
>  * there is one table
>  * there is no region-replica
>  * good key dispersion
>  * there are no regions on the master
> A rule file is loaded before balancing. It contains lines of rules. A rule is 
> composed of a regexp for hostname, and a limit. For example, we could have:
>  
> {quote}rs[0-9] 200
> rs1[0-9] 50
> {quote}
>  
> RegionServers with hostname matching the first rules will have a limit of 
> 200, and the others 50. If there's no match, a default is set.
> Thanks to the rules, we have two pieces of information: the maximum number of 
> regions for this cluster, and the limit for each server. 
> {{HeterogeneousBalancer}} will try to balance regions according to their 
> capacity.
> Let's take an example. Let's say that we have 20 RS:
>  * 10 RS, named {{rs0}} through {{rs9}}, loaded with 60 regions each, where 
> each can handle 200 regions.
>  * 10 RS, named {{rs10}} through {{rs19}}, loaded with 60 regions each, where 
> each can support 50 regions.
> Based on the following rules:
>  
> {quote}rs[0-9] 200
> rs1[0-9] 50
> {quote}
>  
> The second group is overloaded, whereas the first group has plenty of space.
> We know that we can handle at maximum *2500 regions* (200*10 + 50*10) and we 
> have currently *1200 regions* (60*20). {{HeterogeneousBalancer}} will 
> understand that the cluster is *full at 48.0%* (1200/2500). Based on this 
> information, we will then *try to put all the RegionServers to ~48% of load 
> according to the rules.* In this case, it will move regions from the second 
> group to the first.
> The balancer will:
>  * compute how many regions need to be moved. In our example, by moving 36 
> regions off {{rs10}}, its load would drop from 120.0% (60/50) to 48.0% (24/50)
>  * select the regions with the lowest data locality
>  * try to find an appropriate RS for each region. We will pick the least 
> loaded available RS.
> h2. Other implementations and ideas
> Clay Baenziger proposed this idea on the dev ML:
> {quote}{color:#22}Could it work to 

[jira] [Commented] (HBASE-22582) The Compaction writer may access the lastCell whose memory has been released when appending fileInfo in the final

2019-07-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878592#comment-16878592
 ] 

Hudson commented on HBASE-22582:


Results for branch master
[build #1198 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/master/1198/]: (x) 
*{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1198//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1198//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/master/1198//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> The Compaction writer may access the lastCell whose memory has been released 
> when appending fileInfo in the final
> -
>
> Key: HBASE-22582
> URL: https://issues.apache.org/jira/browse/HBASE-22582
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.0.6, 2.2.1, 2.1.7
>
>
> Copy the comment from [~javaman_chen] under HBASE-21879: 
> https://issues.apache.org/jira/browse/HBASE-21879?focusedCommentId=16862244&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16862244
> In Compactor#compact, we have the following:
> {code}
> protected List<Path> compact(final CompactionRequest request...
>   ...
>   try {
> ...
>   } finally {
> Closeables.close(scanner, true);
> if (!finished && writer != null) {
>   abortWriter(writer);
> }
>   }
>   assert finished : "We should have exited the method on all error paths";
>   assert writer != null : "Writer should be non-null if no error";
>   return commitWriter(writer, fd, request);
> }
> {code}
> Should we call writer#beforeShipped() before Closeables.close(scanner, true), 
> in order to copy the cell's data out of the ByteBuff before it is released? 
> Otherwise commitWriter may go wrong in the following call stack:
> {code}
> Compactor#commitWriter
> -> HFileWriterImpl#close
>  -> HFileWriterImpl#writeFileInfo
>-> HFileWriterImpl#finishFileInfo
> {code}
> {code}
> protected void finishFileInfo() throws IOException {
>   if (lastCell != null) {
> // Make a copy. The copy is stuffed into our fileinfo map. Needs a clean
> // byte buffer. Won't take a tuple.
> byte [] lastKey = 
> PrivateCellUtil.getCellKeySerializedAsKeyValueKey(this.lastCell);
> fileInfo.append(FileInfo.LASTKEY, lastKey, false);
>   }
>   ...
> }
> {code}
> This is because the lastCell may refer to a reused ByteBuff. 
> Checked the code: it's a bug and will need to be fixed in all 2.x branches and master. 
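The hazard described here is easy to reproduce outside HBase. Below is a minimal, self-contained illustration (not HBase code; all names are invented) of why the last key must be copied out of a pooled buffer before the buffer is released and reused — the same reason beforeShipped() must run before the scanner is closed:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Illustration of the reused-buffer hazard: a "cell" that only holds a view
// into a pooled buffer becomes garbage once the pool hands the buffer to the
// next user, so the bytes must be copied out *before* release.
public class ReusedBufferDemo {
  // Stand-in for a pooled, reusable ByteBuff.
  public static final ByteBuffer POOL = ByteBuffer.allocate(16);

  /** Simulate a new block being decoded into the shared buffer. */
  public static void fill(String value) {
    POOL.clear();
    POOL.put(value.getBytes(StandardCharsets.UTF_8));
  }

  /** Read the first n bytes currently in the shared buffer (no copy kept). */
  public static String readCell(int n) {
    byte[] out = new byte[n];
    for (int i = 0; i < n; i++) {
      out[i] = POOL.get(i);
    }
    return new String(out, StandardCharsets.UTF_8);
  }
}
```

Reading the "cell" again after the pool reuses the buffer yields different bytes, while a copy taken before the reuse stays valid — which is exactly what finishFileInfo needs for LASTKEY.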



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (HBASE-22654) apache-rat complains on branch-1

2019-07-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878589#comment-16878589
 ] 

Hudson commented on HBASE-22654:


Results for branch branch-1.4
[build #886 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/886/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/886//General_Nightly_Build_Report/]


(/) {color:green}+1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/886//JDK7_Nightly_Build_Report/]


(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1.4/886//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> apache-rat complains on branch-1
> 
>
> Key: HBASE-22654
> URL: https://issues.apache.org/jira/browse/HBASE-22654
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.5.0, 1.4.11
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Minor
> Fix For: 1.5.0, 1.4.11
>
>
> License check fails after a build with the errorProne profile activated. The 
> hbase-error-prone module is added to the root pom only when the profile is 
> active, so the RAT check does not consider hbase-error-prone/target as a 
> build directory.





[GitHub] [hbase] Apache-HBase commented on issue #345: HBASE-22638 : Zookeeper Utility enhancements

2019-07-04 Thread GitBox
Apache-HBase commented on issue #345: HBASE-22638 : Zookeeper Utility 
enhancements
URL: https://github.com/apache/hbase/pull/345#issuecomment-508452197
 
 
   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 65 | Docker mode activated. |
   ||| _ Prechecks _ |
   | +1 | hbaseanti | 0 |  Patch does not have any anti-patterns. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | -0 | test4tests | 0 | The patch doesn't appear to include any new or 
modified tests.  Please justify why no new tests are needed for this patch. 
Also please list what manual steps were performed to verify this patch. |
   ||| _ master Compile Tests _ |
   | +1 | mvninstall | 350 | master passed |
   | +1 | compile | 23 | master passed |
   | +1 | checkstyle | 15 | master passed |
   | +1 | shadedjars | 361 | branch has no errors when building our shaded 
downstream artifacts. |
   | +1 | findbugs | 45 | master passed |
   | +1 | javadoc | 17 | master passed |
   ||| _ Patch Compile Tests _ |
   | +1 | mvninstall | 328 | the patch passed |
   | +1 | compile | 23 | the patch passed |
   | +1 | javac | 23 | the patch passed |
   | +1 | checkstyle | 15 | the patch passed |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | +1 | shadedjars | 360 | patch has no errors when building our shaded 
downstream artifacts. |
   | +1 | hadoopcheck | 1015 | Patch does not cause any errors with Hadoop 
2.8.5 2.9.2 or 3.1.2. |
   | +1 | findbugs | 48 | the patch passed |
   | +1 | javadoc | 16 | the patch passed |
   ||| _ Other Tests _ |
   | +1 | unit | 56 | hbase-zookeeper in the patch passed. |
   | +1 | asflicense | 13 | The patch does not generate ASF License warnings. |
   | | | 3165 | |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-345/7/artifact/out/Dockerfile
 |
   | GITHUB PR | https://github.com/apache/hbase/pull/345 |
   | Optional Tests |  dupname  asflicense  javac  javadoc  unit  findbugs  
shadedjars  hadoopcheck  hbaseanti  checkstyle  compile  |
   | uname | Linux 34bbdeddb0e9 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 
17:16:02 UTC 2018 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | master / 9116534f5d |
   | maven | version: Apache Maven 3.5.4 
(1edded0938998edf8bf061f1ceb3cfdeccf443fe; 2018-06-17T18:33:14Z) |
   | Default Java | 1.8.0_181 |
   | findbugs | v3.1.11 |
   |  Test Results | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-345/7/testReport/
 |
   | Max. process+thread count | 322 (vs. ulimit of 1) |
   | modules | C: hbase-zookeeper U: hbase-zookeeper |
   | Console output | 
https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-345/7/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] symat opened a new pull request #359: HBASE-20405 fix License and Thanks links on website

2019-07-04 Thread GitBox
symat opened a new pull request #359: HBASE-20405 fix License and Thanks links 
on website
URL: https://github.com/apache/hbase/pull/359
 
 
   after the previous MR, it looks like the [whimsy 
tool](https://whimsy.apache.org/site/project/hbase) still failed in two cases. 
In this commit we try to fix these issues, based on the 
[zookeeper](zookeeper.apach.org) website and the [apache policy 
description](https://www.apache.org/foundation/marks/pmcs#websites)
   
   It looks like the [whimsy tool](https://whimsy.apache.org/site/project/hbase) is 
checking the first occurrence of the links on the website, so we also have to 
change the links in the menu bars. 




[GitHub] [hbase] virajjasani commented on a change in pull request #345: HBASE-22638 : Zookeeper Utility enhancements

2019-07-04 Thread GitBox
virajjasani commented on a change in pull request #345: HBASE-22638 : Zookeeper 
Utility enhancements
URL: https://github.com/apache/hbase/pull/345#discussion_r300356330
 
 

 ##
 File path: 
hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java
 ##
 @@ -1860,32 +1873,35 @@ private static void getReplicationZnodesDump(ZKWatcher 
zkw, StringBuilder sb)
 // do a ls -r on this znode
 sb.append("\n").append(replicationZnode).append(": ");
List<String> children = ZKUtil.listChildrenNoWatch(zkw, replicationZnode);
-Collections.sort(children);
-for (String child : children) {
-  String znode = ZNodePaths.joinZNode(replicationZnode, child);
-  if (znode.equals(zkw.getZNodePaths().peersZNode)) {
-appendPeersZnodes(zkw, znode, sb);
-  } else if (znode.equals(zkw.getZNodePaths().queuesZNode)) {
-appendRSZnodes(zkw, znode, sb);
-  } else if (znode.equals(zkw.getZNodePaths().hfileRefsZNode)) {
-appendHFileRefsZnodes(zkw, znode, sb);
+if (children != null) {
+  Collections.sort(children);
+  for (String child : children) {
+String zNode = ZNodePaths.joinZNode(replicationZnode, child);
+if (zNode.equals(zkw.getZNodePaths().peersZNode)) {
+  appendPeersZnodes(zkw, zNode, sb);
+} else if (zNode.equals(zkw.getZNodePaths().queuesZNode)) {
+  appendRSZnodes(zkw, zNode, sb);
+} else if (zNode.equals(zkw.getZNodePaths().hfileRefsZNode)) {
+  appendHFileRefsZnodes(zkw, zNode, sb);
+}
   }
 }
   }
 
   private static void appendHFileRefsZnodes(ZKWatcher zkw, String 
hfileRefsZnode,
 StringBuilder sb) throws 
KeeperException {
 sb.append("\n").append(hfileRefsZnode).append(": ");
-for (String peerIdZnode : ZKUtil.listChildrenNoWatch(zkw, hfileRefsZnode)) 
{
-  String znodeToProcess = ZNodePaths.joinZNode(hfileRefsZnode, 
peerIdZnode);
-  sb.append("\n").append(znodeToProcess).append(": ");
-  List<String> peerHFileRefsZnodes = ZKUtil.listChildrenNoWatch(zkw, 
znodeToProcess);
-  int size = peerHFileRefsZnodes.size();
-  for (int i = 0; i < size; i++) {
-sb.append(peerHFileRefsZnodes.get(i));
-if (i != size - 1) {
-  sb.append(", ");
-}
+final List<String> hFileRefChildrenNoWatchList =
+ZKUtil.listChildrenNoWatch(zkw, hfileRefsZnode);
+if (hFileRefChildrenNoWatchList == null) {
+  return;
+}
+for (String peerIdZNode : hFileRefChildrenNoWatchList) {
+  String zNodeToProcess = ZNodePaths.joinZNode(hfileRefsZnode, 
peerIdZNode);
+  sb.append("\n").append(zNodeToProcess).append(": ");
+  List<String> peerHFileRefsZNodes = ZKUtil.listChildrenNoWatch(zkw, 
zNodeToProcess);
+  if (peerHFileRefsZNodes != null) {
 
 Review comment:
   e.g. if peerHFileRefsZNodes = { "a", "b", "c" },
   the previous for-loop and String.join both return the same thing: "a, b, c"
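A quick self-contained check of that equivalence claim; the manualJoin helper below is an invented stand-in for the original index-based loop:

```java
import java.util.List;

// Verify that the index loop with a trailing-separator guard produces the
// same output as String.join for the ", " delimiter.
public class JoinCheck {
  public static String manualJoin(List<String> items) {
    StringBuilder sb = new StringBuilder();
    for (int i = 0; i < items.size(); i++) {
      sb.append(items.get(i));
      if (i != items.size() - 1) {
        sb.append(", "); // same guard as the original loop
      }
    }
    return sb.toString();
  }
}
```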




[jira] [Commented] (HBASE-22582) The Compaction writer may access the lastCell whose memory has been released when appending fileInfo in the final

2019-07-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878568#comment-16878568
 ] 

Hudson commented on HBASE-22582:


Results for branch branch-2
[build #2053 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2053/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2053//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2053//JDK8_Nightly_Build_Report_(Hadoop2)/]


(x) {color:red}-1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2/2053//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> The Compaction writer may access the lastCell whose memory has been released 
> when appending fileInfo in the final
> -
>
> Key: HBASE-22582
> URL: https://issues.apache.org/jira/browse/HBASE-22582
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.0.6, 2.2.1, 2.1.7
>
>
> Copy the comment from [~javaman_chen] under HBASE-21879: 
> https://issues.apache.org/jira/browse/HBASE-21879?focusedCommentId=16862244&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16862244
> In Compactor#compact, we have the following:
> {code}
> protected List<Path> compact(final CompactionRequest request...
>   ...
>   try {
> ...
>   } finally {
> Closeables.close(scanner, true);
> if (!finished && writer != null) {
>   abortWriter(writer);
> }
>   }
>   assert finished : "We should have exited the method on all error paths";
>   assert writer != null : "Writer should be non-null if no error";
>   return commitWriter(writer, fd, request);
> }
> {code}
> Should we call writer#beforeShipped() before Closeables.close(scanner, true), 
> in order to copy the cell's data out of the ByteBuff before it is released? 
> Otherwise commitWriter may go wrong in the following call stack:
> {code}
> Compactor#commitWriter
> -> HFileWriterImpl#close
>  -> HFileWriterImpl#writeFileInfo
>-> HFileWriterImpl#finishFileInfo
> {code}
> {code}
> protected void finishFileInfo() throws IOException {
>   if (lastCell != null) {
> // Make a copy. The copy is stuffed into our fileinfo map. Needs a clean
> // byte buffer. Won't take a tuple.
> byte [] lastKey = 
> PrivateCellUtil.getCellKeySerializedAsKeyValueKey(this.lastCell);
> fileInfo.append(FileInfo.LASTKEY, lastKey, false);
>   }
>   ...
> }
> {code}
> This is because the lastCell may refer to a reused ByteBuff. 
> Checked the code: it's a bug and will need to be fixed in all 2.x branches and master. 





[jira] [Updated] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread Reid Chan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-22658:
--
Flags:   (was: Patch)

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Affects Versions: 1.4.10
>Reporter: liang.feng
>Priority: Major
>  Labels: gracefulshutdown, region_mover, rsgroup
> Attachments: HBASE-22658.branch-1.0.001.patch
>
>
> There are many retries when I use graceful_stop.sh to shut down a region 
> server after enabling rsgroup, because the target server may be in a 
> different rsgroup. This makes it slow to gracefully shut down a RegionServer, 
> so I think region_mover.rb should only choose servers in the same rsgroup as 
> target servers.
> The region mover is implemented in JRuby in HBase 1.x and in Java in HBase 
> 2.x. I tried to modify the RegionMover.java class to use the same logic in 
> HBase 2.x, but mvn package failed because the hbase-server and hbase-rsgroup 
> modules would need to depend on each other, and Maven throws "The projects in 
> the reactor contain a cyclic reference". I couldn't solve it, so I just 
> uploaded a patch for HBase 1.x.
>  





[jira] [Updated] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread Reid Chan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-22658:
--
Priority: Major  (was: Critical)

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Affects Versions: 1.4.10
>Reporter: liang.feng
>Priority: Major
>  Labels: gracefulshutdown, region_mover, rsgroup
> Attachments: HBASE-22658.branch-1.0.001.patch
>
>
> There are many retries when I use graceful_stop.sh to shut down a region 
> server after enabling rsgroup, because the target server may be in a 
> different rsgroup. This makes it slow to gracefully shut down a RegionServer, 
> so I think region_mover.rb should only choose servers in the same rsgroup as 
> target servers.
> The region mover is implemented in JRuby in HBase 1.x and in Java in HBase 
> 2.x. I tried to modify the RegionMover.java class to use the same logic in 
> HBase 2.x, but mvn package failed because the hbase-server and hbase-rsgroup 
> modules would need to depend on each other, and Maven throws "The projects in 
> the reactor contain a cyclic reference". I couldn't solve it, so I just 
> uploaded a patch for HBase 1.x.
>  





[jira] [Updated] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread Reid Chan (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Reid Chan updated HBASE-22658:
--
Affects Version/s: 1.4.10

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Affects Versions: 1.4.10
>Reporter: liang.feng
>Priority: Critical
>  Labels: gracefulshutdown, region_mover, rsgroup
> Attachments: HBASE-22658.branch-1.0.001.patch
>
>
> There are many retries when I use graceful_stop.sh to shut down a region 
> server after enabling rsgroup, because the target server may be in a 
> different rsgroup. This makes it slow to gracefully shut down a RegionServer, 
> so I think region_mover.rb should only choose servers in the same rsgroup as 
> target servers.
> The region mover is implemented in JRuby in HBase 1.x and in Java in HBase 
> 2.x. I tried to modify the RegionMover.java class to use the same logic in 
> HBase 2.x, but mvn package failed because the hbase-server and hbase-rsgroup 
> modules would need to depend on each other, and Maven throws "The projects in 
> the reactor contain a cyclic reference". I couldn't solve it, so I just 
> uploaded a patch for HBase 1.x.
>  





[jira] [Commented] (HBASE-22582) The Compaction writer may access the lastCell whose memory has been released when appending fileInfo in the final

2019-07-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878557#comment-16878557
 ] 

Hudson commented on HBASE-22582:


Results for branch branch-2.1
[build #1332 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1332/]: 
(/) *{color:green}+1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1332//General_Nightly_Build_Report/]




(/) {color:green}+1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1332//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.1/1332//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> The Compaction writer may access the lastCell whose memory has been released 
> when appending fileInfo in the final
> -
>
> Key: HBASE-22582
> URL: https://issues.apache.org/jira/browse/HBASE-22582
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.0.6, 2.2.1, 2.1.7
>
>
> Copy the comment from [~javaman_chen] under HBASE-21879: 
> https://issues.apache.org/jira/browse/HBASE-21879?focusedCommentId=16862244&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16862244
> In Compactor#compact, we have the following:
> {code}
> protected List<Path> compact(final CompactionRequest request...
>   ...
>   try {
> ...
>   } finally {
> Closeables.close(scanner, true);
> if (!finished && writer != null) {
>   abortWriter(writer);
> }
>   }
>   assert finished : "We should have exited the method on all error paths";
>   assert writer != null : "Writer should be non-null if no error";
>   return commitWriter(writer, fd, request);
> }
> {code}
> Should we call writer#beforeShipped() before Closeables.close(scanner, true), 
> in order to copy the cell's data out of the ByteBuff before it is released? 
> Otherwise commitWriter may go wrong in the following call stack:
> {code}
> Compactor#commitWriter
> -> HFileWriterImpl#close
>  -> HFileWriterImpl#writeFileInfo
>-> HFileWriterImpl#finishFileInfo
> {code}
> {code}
> protected void finishFileInfo() throws IOException {
>   if (lastCell != null) {
> // Make a copy. The copy is stuffed into our fileinfo map. Needs a clean
> // byte buffer. Won't take a tuple.
> byte [] lastKey = 
> PrivateCellUtil.getCellKeySerializedAsKeyValueKey(this.lastCell);
> fileInfo.append(FileInfo.LASTKEY, lastKey, false);
>   }
>   ...
> }
> {code}
> This is because the lastCell may refer to a reused ByteBuff. 
> Checked the code: it's a bug and will need to be fixed in all 2.x branches and master. 





[jira] [Commented] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878550#comment-16878550
 ] 

Reid Chan commented on HBASE-22658:
---

Please try cloning hbase from GitHub and checking out branch-1; then you can work 
on it. [~a516072575]

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: gracefulshutdown, region_mover, rsgroup
> Attachments: HBASE-22658.branch-1.0.001.patch
>
>
> There are many retries when I use graceful_stop.sh to shut down a region 
> server after enabling rsgroup, because the target server may be in a 
> different rsgroup. This makes it slow to gracefully shut down a RegionServer, 
> so I think region_mover.rb should only choose servers in the same rsgroup as 
> target servers.
> The region mover is implemented in JRuby in HBase 1.x and in Java in HBase 
> 2.x. I tried to modify the RegionMover.java class to use the same logic in 
> HBase 2.x, but mvn package failed because the hbase-server and hbase-rsgroup 
> modules would need to depend on each other, and Maven throws "The projects in 
> the reactor contain a cyclic reference". I couldn't solve it, so I just 
> uploaded a patch for HBase 1.x.
>  





[GitHub] [hbase] virajjasani commented on a change in pull request #345: HBASE-22638 : Zookeeper Utility enhancements

2019-07-04 Thread GitBox
virajjasani commented on a change in pull request #345: HBASE-22638 : Zookeeper 
Utility enhancements
URL: https://github.com/apache/hbase/pull/345#discussion_r300344249
 
 

 ##
 File path: 
hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java
 ##
 @@ -1860,32 +1873,35 @@ private static void getReplicationZnodesDump(ZKWatcher 
zkw, StringBuilder sb)
 // do a ls -r on this znode
 sb.append("\n").append(replicationZnode).append(": ");
List<String> children = ZKUtil.listChildrenNoWatch(zkw, replicationZnode);
-Collections.sort(children);
-for (String child : children) {
-  String znode = ZNodePaths.joinZNode(replicationZnode, child);
-  if (znode.equals(zkw.getZNodePaths().peersZNode)) {
-appendPeersZnodes(zkw, znode, sb);
-  } else if (znode.equals(zkw.getZNodePaths().queuesZNode)) {
-appendRSZnodes(zkw, znode, sb);
-  } else if (znode.equals(zkw.getZNodePaths().hfileRefsZNode)) {
-appendHFileRefsZnodes(zkw, znode, sb);
+if (children != null) {
+  Collections.sort(children);
+  for (String child : children) {
+String zNode = ZNodePaths.joinZNode(replicationZnode, child);
+if (zNode.equals(zkw.getZNodePaths().peersZNode)) {
+  appendPeersZnodes(zkw, zNode, sb);
+} else if (zNode.equals(zkw.getZNodePaths().queuesZNode)) {
+  appendRSZnodes(zkw, zNode, sb);
+} else if (zNode.equals(zkw.getZNodePaths().hfileRefsZNode)) {
+  appendHFileRefsZnodes(zkw, zNode, sb);
+}
   }
 }
   }
 
   private static void appendHFileRefsZnodes(ZKWatcher zkw, String 
hfileRefsZnode,
 StringBuilder sb) throws 
KeeperException {
 sb.append("\n").append(hfileRefsZnode).append(": ");
-for (String peerIdZnode : ZKUtil.listChildrenNoWatch(zkw, hfileRefsZnode)) 
{
-  String znodeToProcess = ZNodePaths.joinZNode(hfileRefsZnode, 
peerIdZnode);
-  sb.append("\n").append(znodeToProcess).append(": ");
-  List<String> peerHFileRefsZnodes = ZKUtil.listChildrenNoWatch(zkw, 
znodeToProcess);
-  int size = peerHFileRefsZnodes.size();
-  for (int i = 0; i < size; i++) {
-sb.append(peerHFileRefsZnodes.get(i));
-if (i != size - 1) {
-  sb.append(", ");
-}
+final List<String> hFileRefChildrenNoWatchList =
+ZKUtil.listChildrenNoWatch(zkw, hfileRefsZnode);
+if (hFileRefChildrenNoWatchList == null) {
 
 Review comment:
   Thanks and totally agree @HorizonNet . Updated the PR. Please continue.




[jira] [Commented] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878543#comment-16878543
 ] 

HBase QA commented on HBASE-22658:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  8s{color} 
| {color:red} HBASE-22658 does not apply to branch-1.0. Rebase required? Wrong 
Branch? See 
https://yetus.apache.org/documentation/in-progress/precommit-patchnames for 
help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-22658 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973661/HBASE-22658.branch-1.0.001.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/604/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.



> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: gracefulshutdown, region_mover, rsgroup
> Attachments: HBASE-22658.branch-1.0.001.patch
>
>
> There are many retries when I use graceful_stop.sh to shut down a region 
> server after enabling rsgroup, because the target server may be in a 
> different rsgroup. This makes it slow to gracefully shut down a RegionServer, 
> so I think region_mover.rb should only choose servers in the same rsgroup as 
> target servers.
> The region mover is implemented in JRuby in HBase 1.x and in Java in HBase 
> 2.x. I tried to modify the RegionMover.java class to use the same logic in 
> HBase 2.x, but mvn package failed because the hbase-server and hbase-rsgroup 
> modules would need to depend on each other, and Maven throws "The projects in 
> the reactor contain a cyclic reference". I couldn't solve it, so I just 
> uploaded a patch for HBase 1.x.
>  





[GitHub] [hbase] HorizonNet commented on a change in pull request #345: HBASE-22638 : Zookeeper Utility enhancements

2019-07-04 Thread GitBox
HorizonNet commented on a change in pull request #345: HBASE-22638 : Zookeeper 
Utility enhancements
URL: https://github.com/apache/hbase/pull/345#discussion_r300343654
 
 

 ##
 File path: 
hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java
 ##
 @@ -1860,32 +1873,35 @@ private static void getReplicationZnodesDump(ZKWatcher zkw, StringBuilder sb)
     // do a ls -r on this znode
     sb.append("\n").append(replicationZnode).append(": ");
     List<String> children = ZKUtil.listChildrenNoWatch(zkw, replicationZnode);
-    Collections.sort(children);
-    for (String child : children) {
-      String znode = ZNodePaths.joinZNode(replicationZnode, child);
-      if (znode.equals(zkw.getZNodePaths().peersZNode)) {
-        appendPeersZnodes(zkw, znode, sb);
-      } else if (znode.equals(zkw.getZNodePaths().queuesZNode)) {
-        appendRSZnodes(zkw, znode, sb);
-      } else if (znode.equals(zkw.getZNodePaths().hfileRefsZNode)) {
-        appendHFileRefsZnodes(zkw, znode, sb);
+    if (children != null) {
+      Collections.sort(children);
+      for (String child : children) {
+        String zNode = ZNodePaths.joinZNode(replicationZnode, child);
+        if (zNode.equals(zkw.getZNodePaths().peersZNode)) {
+          appendPeersZnodes(zkw, zNode, sb);
+        } else if (zNode.equals(zkw.getZNodePaths().queuesZNode)) {
+          appendRSZnodes(zkw, zNode, sb);
+        } else if (zNode.equals(zkw.getZNodePaths().hfileRefsZNode)) {
+          appendHFileRefsZnodes(zkw, zNode, sb);
+        }
       }
     }
   }
 
   private static void appendHFileRefsZnodes(ZKWatcher zkw, String hfileRefsZnode,
                                             StringBuilder sb) throws KeeperException {
     sb.append("\n").append(hfileRefsZnode).append(": ");
-    for (String peerIdZnode : ZKUtil.listChildrenNoWatch(zkw, hfileRefsZnode)) {
-      String znodeToProcess = ZNodePaths.joinZNode(hfileRefsZnode, peerIdZnode);
-      sb.append("\n").append(znodeToProcess).append(": ");
-      List<String> peerHFileRefsZnodes = ZKUtil.listChildrenNoWatch(zkw, znodeToProcess);
-      int size = peerHFileRefsZnodes.size();
-      for (int i = 0; i < size; i++) {
-        sb.append(peerHFileRefsZnodes.get(i));
-        if (i != size - 1) {
-          sb.append(", ");
-        }
+    final List<String> hFileRefChildrenNoWatchList =
+        ZKUtil.listChildrenNoWatch(zkw, hfileRefsZnode);
+    if (hFileRefChildrenNoWatchList == null) {
+      return;
+    }
+    for (String peerIdZNode : hFileRefChildrenNoWatchList) {
+      String zNodeToProcess = ZNodePaths.joinZNode(hfileRefsZnode, peerIdZNode);
+      sb.append("\n").append(zNodeToProcess).append(": ");
+      List<String> peerHFileRefsZNodes = ZKUtil.listChildrenNoWatch(zkw, zNodeToProcess);
+      if (peerHFileRefsZNodes != null) {
 
 Review comment:
   Yes. Does it still loop over the complete size if you do the append with the 
join?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [hbase] virajjasani commented on a change in pull request #345: HBASE-22638 : Zookeeper Utility enhancements

2019-07-04 Thread GitBox
virajjasani commented on a change in pull request #345: HBASE-22638 : Zookeeper 
Utility enhancements
URL: https://github.com/apache/hbase/pull/345#discussion_r300343825
 
 

 ##
 File path: 
hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java
 ##
 
 Review comment:
   Yes, internally it does loop over the complete size




[GitHub] [hbase] HorizonNet commented on a change in pull request #345: HBASE-22638 : Zookeeper Utility enhancements

2019-07-04 Thread GitBox
HorizonNet commented on a change in pull request #345: HBASE-22638 : Zookeeper 
Utility enhancements
URL: https://github.com/apache/hbase/pull/345#discussion_r300343274
 
 

 ##
 File path: 
hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java
 ##
 
 Review comment:
   Saw the comment and agree, but we should try to be consistent. If we do it 
with the return statement we should do it in all other places. Because the PR 
uses `xxx != null` in all other places it should be the same here.
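
The two null-handling styles under discussion are behaviorally equivalent; a minimal standalone sketch (class and method names here are illustrative, not from ZKUtil):

```java
import java.util.Arrays;
import java.util.List;

public class NullGuardStyles {
    // Guard-clause style: return early when the ZK listing came back null.
    static String dumpWithGuard(List<String> children) {
        StringBuilder sb = new StringBuilder();
        if (children == null) {
            return sb.toString();
        }
        sb.append(String.join(", ", children));
        return sb.toString();
    }

    // Nested-check style: wrap the work in an `xxx != null` block instead.
    static String dumpWithNestedCheck(List<String> children) {
        StringBuilder sb = new StringBuilder();
        if (children != null) {
            sb.append(String.join(", ", children));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<String> sample = Arrays.asList("peer1", "peer2");
        // Both styles produce the same dump for null and non-null input.
        System.out.println(dumpWithGuard(sample));
        System.out.println(dumpWithNestedCheck(sample));
    }
}
```

Either way the output is identical; the review point is only that one style should be used consistently throughout the patch.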




[jira] [Updated] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang.feng updated HBASE-22658:
---
Attachment: HBASE-22658.branch-1.0.001.patch

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: gracefulshutdown, region_mover, rsgroup
> Attachments: HBASE-22658.branch-1.0.001.patch
>
>





[jira] [Updated] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang.feng updated HBASE-22658:
---
Attachment: (was: HBASE-22658-for-hbase1.x-branch-1.4.patch)

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: gracefulshutdown, region_mover, rsgroup
>





[GitHub] [hbase] virajjasani commented on a change in pull request #345: HBASE-22638 : Zookeeper Utility enhancements

2019-07-04 Thread GitBox
virajjasani commented on a change in pull request #345: HBASE-22638 : Zookeeper 
Utility enhancements
URL: https://github.com/apache/hbase/pull/345#discussion_r300341749
 
 

 ##
 File path: 
hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java
 ##
 
 Review comment:
   I believe you are referring to this loop:
   ```
 int size = peerHFileRefsZnodes.size();
 for (int i = 0; i < size; i++) {
   sb.append(peerHFileRefsZnodes.get(i));
   if (i != size - 1) {
 sb.append(", ");
   }
 }
   ```
   
   This is replaced in below line: 
   ```
   sb.append(String.join(", ", peerHFileRefsZNodes));
   ```
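
For reference, the manual index loop being removed and the `String.join` call that replaces it produce identical output; a quick standalone sketch (the `JoinDemo` class name and sample element names are illustrative, not from the HBase code):

```java
import java.util.Arrays;
import java.util.List;

public class JoinDemo {
    // The original manual loop pattern: append each element, separating
    // all but the last with ", ".
    static String joinManually(List<String> items) {
        StringBuilder sb = new StringBuilder();
        int size = items.size();
        for (int i = 0; i < size; i++) {
            sb.append(items.get(i));
            if (i != size - 1) {
                sb.append(", ");
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        List<String> peers = Arrays.asList("hfile-1", "hfile-2", "hfile-3");
        // Both forms yield "hfile-1, hfile-2, hfile-3".
        System.out.println(joinManually(peers));
        System.out.println(String.join(", ", peers));
    }
}
```

`String.join` still iterates the whole list internally, so the change is a readability cleanup, not a behavioral one.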




[GitHub] [hbase] virajjasani commented on a change in pull request #345: HBASE-22638 : Zookeeper Utility enhancements

2019-07-04 Thread GitBox
virajjasani commented on a change in pull request #345: HBASE-22638 : Zookeeper 
Utility enhancements
URL: https://github.com/apache/hbase/pull/345#discussion_r300340537
 
 

 ##
 File path: 
hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java
 ##
 
 Review comment:
   Sure @HorizonNet, will do. In fact, I originally did it that way, but 
updated it based on a comment from @maoling.




[jira] [Commented] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878534#comment-16878534
 ] 

Reid Chan commented on HBASE-22658:
---

bq. Should I checkout source code to branch-1 then change the code as my 
implement
Yes.

You can either generate a patch and attach it here or create a pull request on 
Github. 
bq. Or Should I rename patch file
Yes, if you choose the former. Please refer to [Submit 
patch|http://hbase.apache.org/book.html#submitting.patches] for more details.

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: gracefulshutdown, region_mover, rsgroup
> Attachments: HBASE-22658-for-hbase1.x-branch-1.4.patch
>
>





[jira] [Commented] (HBASE-22582) The Compaction writer may access the lastCell whose memory has been released when appending fileInfo in the final

2019-07-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878535#comment-16878535
 ] 

Hudson commented on HBASE-22582:


Results for branch branch-2.2
[build #412 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/412/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(/) {color:green}+1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/412//General_Nightly_Build_Report/]




(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/412//JDK8_Nightly_Build_Report_(Hadoop2)/]


(/) {color:green}+1 jdk8 hadoop3 checks{color}
-- For more information [see jdk8 (hadoop3) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-2.2/412//JDK8_Nightly_Build_Report_(Hadoop3)/]


(/) {color:green}+1 source release artifact{color}
-- See build output for details.


(/) {color:green}+1 client integration test{color}


> The Compaction writer may access the lastCell whose memory has been released 
> when appending fileInfo in the final
> -
>
> Key: HBASE-22582
> URL: https://issues.apache.org/jira/browse/HBASE-22582
> Project: HBase
>  Issue Type: Bug
>  Components: Compaction
>Reporter: Zheng Hu
>Assignee: Zheng Hu
>Priority: Major
> Fix For: 3.0.0, 2.3.0, 2.0.6, 2.2.1, 2.1.7
>
>
> Copy the comment from [~javaman_chen] under HBASE-21879: 
> https://issues.apache.org/jira/browse/HBASE-21879?focusedCommentId=16862244&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-16862244
> In Compactor#compact, we have the following:
> {code}
> protected List compact(final CompactionRequest request...
>   ...
>   try {
> ...
>   } finally {
> Closeables.close(scanner, true);
> if (!finished && writer != null) {
>   abortWriter(writer);
> }
>   }
>   assert finished : "We should have exited the method on all error paths";
>   assert writer != null : "Writer should be non-null if no error";
>   return commitWriter(writer, fd, request);
> }
> {code}
> Should we call writer#beforeShipped() before Closeables.close(scanner, true),
> in order to copy some cells' data out of the ByteBuff before it is released?
> Otherwise commitWriter may go wrong in the following call stack:
> {code}
> Compactor#commitWriter
> -> HFileWriterImpl#close
>  -> HFileWriterImpl#writeFileInfo
>-> HFileWriterImpl#finishFileInfo
> {code}
> {code}
> protected void finishFileInfo() throws IOException {
>   if (lastCell != null) {
> // Make a copy. The copy is stuffed into our fileinfo map. Needs a clean
> // byte buffer. Won't take a tuple.
> byte [] lastKey = 
> PrivateCellUtil.getCellKeySerializedAsKeyValueKey(this.lastCell);
> fileInfo.append(FileInfo.LASTKEY, lastKey, false);
>   }
>   ...
> }
> {code}
> Because the lastCell may refer to a reused ByteBuff. I checked the code: it's
> a bug and will need to be fixed on all 2.x branches and master.





[GitHub] [hbase] syedmurtazahassan commented on issue #322: HBASE-22586 Javadoc Warnings related to @param tag

2019-07-04 Thread GitBox
syedmurtazahassan commented on issue #322: HBASE-22586 Javadoc Warnings related 
to @param tag
URL: https://github.com/apache/hbase/pull/322#issuecomment-508433099
 
 
   @HorizonNet 
   Thanks for the valuable comments. I will address them and make the fixes 
accordingly.




[jira] [Commented] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878527#comment-16878527
 ] 

HBase QA commented on HBASE-22658:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HBASE-22658 does not apply to master. Rebase required? Wrong 
Branch? See 
https://yetus.apache.org/documentation/in-progress/precommit-patchnames for 
help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-22658 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973660/HBASE-22658-for-hbase1.x-branch-1.4.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/603/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.



> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: gracefulshutdown, region_mover, rsgroup
> Attachments: HBASE-22658-for-hbase1.x-branch-1.4.patch
>
>





[jira] [Commented] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878526#comment-16878526
 ] 

liang.feng commented on HBASE-22658:


@[~reidchan]  This is my first patch submission. Should I check out the source 
code on branch-1, make my changes there, and generate a patch? Or should I 
rename the patch file?

 

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: gracefulshutdown, region_mover, rsgroup
> Attachments: HBASE-22658-for-hbase1.x-branch-1.4.patch
>
>





[jira] [Issue Comment Deleted] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang.feng updated HBASE-22658:
---
Comment: was deleted

(was: This is my first submit patch. Should I checkout source code to branch-1 
then change the code as my implement, and generate patch? Or Should I rename 
patch file?)

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: gracefulshutdown, region_mover, rsgroup
> Attachments: HBASE-22658-for-hbase1.x-branch-1.4.patch
>
>





[jira] [Commented] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878525#comment-16878525
 ] 

liang.feng commented on HBASE-22658:


This is my first patch submission. Should I check out the source code on 
branch-1, make my changes there, and generate a patch? Or should I rename the 
patch file?

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: gracefulshutdown, region_mover, rsgroup
> Attachments: HBASE-22658-for-hbase1.x-branch-1.4.patch
>
>





[jira] [Commented] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread Reid Chan (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878521#comment-16878521
 ] 

Reid Chan commented on HBASE-22658:
---

Please try implementing it based on branch-1.

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: gracefulshutdown, region_mover, rsgroup
> Attachments: HBASE-22658-for-hbase1.x-branch-1.4.patch
>
>





[jira] [Updated] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang.feng updated HBASE-22658:
---
Attachment: (was: HBASE-22658-for-hbase1.x.patch)

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Affects Versions: 1.4.9
>Reporter: liang.feng
>Priority: Critical
>  Labels: gracefulshutdown, region_mover, rsgroup
> Attachments: HBASE-22658-for-hbase1.x-branch-1.4.patch
>
>
> There are many retries when I am using graceful_stop.sh to shut down a region 
> server after using rsgroup, because the target server is in a different rsgroup. 
> This makes it slow to gracefully shut down a regionserver. So I think that 
> region_mover.rb should only choose servers in the same rsgroup as target servers.
> The region mover is implemented in JRuby in hbase1.x and in Java in hbase2.x. 
> I tried to modify the RegionMover.java class to use the same logic in hbase2.x, 
> but mvn package failed because the hbase-server module and the hbase-rsgroup 
> module needed to depend on each other, and Maven threw a "The projects in the 
> reactor contain a cyclic reference" error. I couldn't solve it, so I just 
> uploaded a patch for hbase1.x.
>  





[jira] [Commented] (HBASE-22654) apache-rat complains on branch-1

2019-07-04 Thread Hudson (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22654?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878520#comment-16878520
 ] 

Hudson commented on HBASE-22654:


Results for branch branch-1
[build #933 on 
builds.a.o|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/933/]: 
(x) *{color:red}-1 overall{color}*

details (if available):

(x) {color:red}-1 general checks{color}
-- For more information [see general 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/933//General_Nightly_Build_Report/]


(x) {color:red}-1 jdk7 checks{color}
-- For more information [see jdk7 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/933//JDK7_Nightly_Build_Report/]


(x) {color:red}-1 jdk8 hadoop2 checks{color}
-- For more information [see jdk8 (hadoop2) 
report|https://builds.apache.org/job/HBase%20Nightly/job/branch-1/933//JDK8_Nightly_Build_Report_(Hadoop2)/]




(/) {color:green}+1 source release artifact{color}
-- See build output for details.


> apache-rat complains on branch-1
> 
>
> Key: HBASE-22654
> URL: https://issues.apache.org/jira/browse/HBASE-22654
> Project: HBase
>  Issue Type: Bug
>  Components: build
>Affects Versions: 1.5.0, 1.4.11
>Reporter: Peter Somogyi
>Assignee: Peter Somogyi
>Priority: Minor
> Fix For: 1.5.0, 1.4.11
>
>
> License check fails after a build with the errorProne profile activated. The 
> hbase-error-prone module is added to the root pom only when the profile is 
> active, so the RAT check does not consider hbase-error-prone/target as a 
> build directory.





[jira] [Updated] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang.feng updated HBASE-22658:
---
Attachment: HBASE-22658-for-hbase1.x-branch-1.4.patch

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: gracefulshutdown, region_mover, rsgroup
> Attachments: HBASE-22658-for-hbase1.x-branch-1.4.patch
>
>
> There are many retries when I am using graceful_stop.sh to shut down a region 
> server after using rsgroup, because the target server is in a different rsgroup. 
> This makes it slow to gracefully shut down a regionserver. So I think that 
> region_mover.rb should only choose servers in the same rsgroup as target servers.
> The region mover is implemented in JRuby in hbase1.x and in Java in hbase2.x. 
> I tried to modify the RegionMover.java class to use the same logic in hbase2.x, 
> but mvn package failed because the hbase-server module and the hbase-rsgroup 
> module needed to depend on each other, and Maven threw a "The projects in the 
> reactor contain a cyclic reference" error. I couldn't solve it, so I just 
> uploaded a patch for hbase1.x.
>  





[jira] [Updated] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang.feng updated HBASE-22658:
---
Affects Version/s: (was: 1.4.9)

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: gracefulshutdown, region_mover, rsgroup
> Attachments: HBASE-22658-for-hbase1.x-branch-1.4.patch
>
>
> There are many retries when I am using graceful_stop.sh to shut down a region 
> server after using rsgroup, because the target server is in a different rsgroup. 
> This makes it slow to gracefully shut down a regionserver. So I think that 
> region_mover.rb should only choose servers in the same rsgroup as target servers.
> The region mover is implemented in JRuby in hbase1.x and in Java in hbase2.x. 
> I tried to modify the RegionMover.java class to use the same logic in hbase2.x, 
> but mvn package failed because the hbase-server module and the hbase-rsgroup 
> module needed to depend on each other, and Maven threw a "The projects in the 
> reactor contain a cyclic reference" error. I couldn't solve it, so I just 
> uploaded a patch for hbase1.x.
>  





[jira] [Updated] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang.feng updated HBASE-22658:
---
Attachment: (was: HBASE-22658-for-hbase1.x.patch)

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Affects Versions: 1.4.9
>Reporter: liang.feng
>Priority: Critical
>  Labels: gracefulshutdown, region_mover, rsgroup
> Attachments: HBASE-22658-for-hbase1.x.patch
>
>
> There are many retries when I am using graceful_stop.sh to shut down a region 
> server after using rsgroup, because the target server is in a different rsgroup. 
> This makes it slow to gracefully shut down a regionserver. So I think that 
> region_mover.rb should only choose servers in the same rsgroup as target servers.
> The region mover is implemented in JRuby in hbase1.x and in Java in hbase2.x. 
> I tried to modify the RegionMover.java class to use the same logic in hbase2.x, 
> but mvn package failed because the hbase-server module and the hbase-rsgroup 
> module needed to depend on each other, and Maven threw a "The projects in the 
> reactor contain a cyclic reference" error. I couldn't solve it, so I just 
> uploaded a patch for hbase1.x.
>  





[jira] [Updated] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang.feng updated HBASE-22658:
---
Attachment: HBASE-22658-for-hbase1.x.patch

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Affects Versions: 1.4.9
>Reporter: liang.feng
>Priority: Critical
>  Labels: gracefulshutdown, region_mover, rsgroup
> Attachments: HBASE-22658-for-hbase1.x.patch
>
>
> There are many retries when I am using graceful_stop.sh to shut down a region 
> server after using rsgroup, because the target server is in a different rsgroup. 
> This makes it slow to gracefully shut down a regionserver. So I think that 
> region_mover.rb should only choose servers in the same rsgroup as target servers.
> The region mover is implemented in JRuby in hbase1.x and in Java in hbase2.x. 
> I tried to modify the RegionMover.java class to use the same logic in hbase2.x, 
> but mvn package failed because the hbase-server module and the hbase-rsgroup 
> module needed to depend on each other, and Maven threw a "The projects in the 
> reactor contain a cyclic reference" error. I couldn't solve it, so I just 
> uploaded a patch for hbase1.x.
>  





[jira] [Updated] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang.feng updated HBASE-22658:
---
Affects Version/s: 1.4.9

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Affects Versions: 1.4.9
>Reporter: liang.feng
>Priority: Critical
>  Labels: gracefulshutdown, region_mover, rsgroup
> Attachments: HBASE-22658-for-hbase1.x.patch
>
>
> There are many retries when I am using graceful_stop.sh to shut down a region 
> server after using rsgroup, because the target server is in a different rsgroup. 
> This makes it slow to gracefully shut down a regionserver. So I think that 
> region_mover.rb should only choose servers in the same rsgroup as target servers.
> The region mover is implemented in JRuby in hbase1.x and in Java in hbase2.x. 
> I tried to modify the RegionMover.java class to use the same logic in hbase2.x, 
> but mvn package failed because the hbase-server module and the hbase-rsgroup 
> module needed to depend on each other, and Maven threw a "The projects in the 
> reactor contain a cyclic reference" error. I couldn't solve it, so I just 
> uploaded a patch for hbase1.x.
>  





[GitHub] [hbase] HorizonNet commented on a change in pull request #345: HBASE-22638 : Zookeeper Utility enhancements

2019-07-04 Thread GitBox
HorizonNet commented on a change in pull request #345: HBASE-22638 : Zookeeper 
Utility enhancements
URL: https://github.com/apache/hbase/pull/345#discussion_r300329742
 
 

 ##
 File path: hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java
 ##
 @@ -1860,32 +1873,35 @@ private static void getReplicationZnodesDump(ZKWatcher zkw, StringBuilder sb)
     // do a ls -r on this znode
     sb.append("\n").append(replicationZnode).append(": ");
     List<String> children = ZKUtil.listChildrenNoWatch(zkw, replicationZnode);
-    Collections.sort(children);
-    for (String child : children) {
-      String znode = ZNodePaths.joinZNode(replicationZnode, child);
-      if (znode.equals(zkw.getZNodePaths().peersZNode)) {
-        appendPeersZnodes(zkw, znode, sb);
-      } else if (znode.equals(zkw.getZNodePaths().queuesZNode)) {
-        appendRSZnodes(zkw, znode, sb);
-      } else if (znode.equals(zkw.getZNodePaths().hfileRefsZNode)) {
-        appendHFileRefsZnodes(zkw, znode, sb);
+    if (children != null) {
+      Collections.sort(children);
+      for (String child : children) {
+        String zNode = ZNodePaths.joinZNode(replicationZnode, child);
+        if (zNode.equals(zkw.getZNodePaths().peersZNode)) {
+          appendPeersZnodes(zkw, zNode, sb);
+        } else if (zNode.equals(zkw.getZNodePaths().queuesZNode)) {
+          appendRSZnodes(zkw, zNode, sb);
+        } else if (zNode.equals(zkw.getZNodePaths().hfileRefsZNode)) {
+          appendHFileRefsZnodes(zkw, zNode, sb);
+        }
       }
     }
   }

   private static void appendHFileRefsZnodes(ZKWatcher zkw, String hfileRefsZnode,
                                             StringBuilder sb) throws KeeperException {
     sb.append("\n").append(hfileRefsZnode).append(": ");
-    for (String peerIdZnode : ZKUtil.listChildrenNoWatch(zkw, hfileRefsZnode)) {
-      String znodeToProcess = ZNodePaths.joinZNode(hfileRefsZnode, peerIdZnode);
-      sb.append("\n").append(znodeToProcess).append(": ");
-      List<String> peerHFileRefsZnodes = ZKUtil.listChildrenNoWatch(zkw, znodeToProcess);
-      int size = peerHFileRefsZnodes.size();
-      for (int i = 0; i < size; i++) {
-        sb.append(peerHFileRefsZnodes.get(i));
-        if (i != size - 1) {
-          sb.append(", ");
-        }
+    final List<String> hFileRefChildrenNoWatchList =
+        ZKUtil.listChildrenNoWatch(zkw, hfileRefsZnode);
+    if (hFileRefChildrenNoWatchList == null) {
+      return;
+    }
+    for (String peerIdZNode : hFileRefChildrenNoWatchList) {
+      String zNodeToProcess = ZNodePaths.joinZNode(hfileRefsZnode, peerIdZNode);
+      sb.append("\n").append(zNodeToProcess).append(": ");
+      List<String> peerHFileRefsZNodes = ZKUtil.listChildrenNoWatch(zkw, zNodeToProcess);
+      if (peerHFileRefsZNodes != null) {
 
 Review comment:
   Is the for-loop (the one with the size) from above missing?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
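The review question above touches two things: guarding the possibly-null return of `ZKUtil.listChildrenNoWatch`, and the index loop that joined children with ", ", which the rewritten branch appears to drop. A minimal Ruby sketch of the intended dump behavior (illustrative names only, not HBase code):

```ruby
# Nil-safe version of the "append znode children, comma-separated" pattern.
# 'children' stands in for the result of ZKUtil.listChildrenNoWatch, which
# may return nil when the znode does not exist.
def append_znode_children(sb, znode, children)
  sb << "\n" << znode << ": "
  return sb if children.nil?   # guard the nil return instead of raising
  sb << children.join(', ')    # one join replaces the manual index/size loop
  sb
end

out = append_znode_children(String.new, '/hbase/replication/peers', %w[1 2 3])
puts out.inspect   # "\n/hbase/replication/peers: 1, 2, 3"
```

The nil guard addresses the NPE risk, and the single `join` preserves the comma-separated output the dropped for-loop used to produce.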


[GitHub] [hbase] HorizonNet commented on a change in pull request #345: HBASE-22638 : Zookeeper Utility enhancements

2019-07-04 Thread GitBox
HorizonNet commented on a change in pull request #345: HBASE-22638 : Zookeeper 
Utility enhancements
URL: https://github.com/apache/hbase/pull/345#discussion_r300328799
 
 

 ##
 File path: hbase-zookeeper/src/main/java/org/apache/hadoop/hbase/zookeeper/ZKUtil.java
 ##
 @@ -1860,32 +1873,35 @@ private static void getReplicationZnodesDump(ZKWatcher zkw, StringBuilder sb)
     // do a ls -r on this znode
     sb.append("\n").append(replicationZnode).append(": ");
     List<String> children = ZKUtil.listChildrenNoWatch(zkw, replicationZnode);
-    Collections.sort(children);
-    for (String child : children) {
-      String znode = ZNodePaths.joinZNode(replicationZnode, child);
-      if (znode.equals(zkw.getZNodePaths().peersZNode)) {
-        appendPeersZnodes(zkw, znode, sb);
-      } else if (znode.equals(zkw.getZNodePaths().queuesZNode)) {
-        appendRSZnodes(zkw, znode, sb);
-      } else if (znode.equals(zkw.getZNodePaths().hfileRefsZNode)) {
-        appendHFileRefsZnodes(zkw, znode, sb);
+    if (children != null) {
+      Collections.sort(children);
+      for (String child : children) {
+        String zNode = ZNodePaths.joinZNode(replicationZnode, child);
+        if (zNode.equals(zkw.getZNodePaths().peersZNode)) {
+          appendPeersZnodes(zkw, zNode, sb);
+        } else if (zNode.equals(zkw.getZNodePaths().queuesZNode)) {
+          appendRSZnodes(zkw, zNode, sb);
+        } else if (zNode.equals(zkw.getZNodePaths().hfileRefsZNode)) {
+          appendHFileRefsZnodes(zkw, zNode, sb);
+        }
       }
     }
   }

   private static void appendHFileRefsZnodes(ZKWatcher zkw, String hfileRefsZnode,
                                             StringBuilder sb) throws KeeperException {
     sb.append("\n").append(hfileRefsZnode).append(": ");
-    for (String peerIdZnode : ZKUtil.listChildrenNoWatch(zkw, hfileRefsZnode)) {
-      String znodeToProcess = ZNodePaths.joinZNode(hfileRefsZnode, peerIdZnode);
-      sb.append("\n").append(znodeToProcess).append(": ");
-      List<String> peerHFileRefsZnodes = ZKUtil.listChildrenNoWatch(zkw, znodeToProcess);
-      int size = peerHFileRefsZnodes.size();
-      for (int i = 0; i < size; i++) {
-        sb.append(peerHFileRefsZnodes.get(i));
-        if (i != size - 1) {
-          sb.append(", ");
-        }
+    final List<String> hFileRefChildrenNoWatchList =
+        ZKUtil.listChildrenNoWatch(zkw, hfileRefsZnode);
+    if (hFileRefChildrenNoWatchList == null) {
 
 Review comment:
   Could you please do in the same pattern as before ("xxx != null")? Would be 
more consistent.




[jira] [Updated] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang.feng updated HBASE-22658:
---
Affects Version/s: 1.4.9

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Affects Versions: 1.4.9
>Reporter: liang.feng
>Priority: Critical
>  Labels: gracefulshutdown, region_mover, rsgroup
> Attachments: HBASE-22658-for-hbase1.x.patch
>
>
> There are many retries when I am using graceful_stop.sh to shut down a region 
> server after using rsgroup, because the target server is in a different rsgroup. 
> This makes it slow to gracefully shut down a regionserver. So I think that 
> region_mover.rb should only choose servers in the same rsgroup as target servers.
> The region mover is implemented in JRuby in hbase1.x and in Java in hbase2.x. 
> I tried to modify the RegionMover.java class to use the same logic in hbase2.x, 
> but mvn package failed because the hbase-server module and the hbase-rsgroup 
> module needed to depend on each other, and Maven threw a "The projects in the 
> reactor contain a cyclic reference" error. I couldn't solve it, so I just 
> uploaded a patch for hbase1.x.
>  





[jira] [Updated] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang.feng updated HBASE-22658:
---
Affects Version/s: (was: 1.4.9)

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: gracefulshutdown, region_mover, rsgroup
> Attachments: HBASE-22658-for-hbase1.x.patch
>
>
> There are many retries when I am using graceful_stop.sh to shut down a region 
> server after using rsgroup, because the target server is in a different rsgroup. 
> This makes it slow to gracefully shut down a regionserver. So I think that 
> region_mover.rb should only choose servers in the same rsgroup as target servers.
> The region mover is implemented in JRuby in hbase1.x and in Java in hbase2.x. 
> I tried to modify the RegionMover.java class to use the same logic in hbase2.x, 
> but mvn package failed because the hbase-server module and the hbase-rsgroup 
> module needed to depend on each other, and Maven threw a "The projects in the 
> reactor contain a cyclic reference" error. I couldn't solve it, so I just 
> uploaded a patch for hbase1.x.
>  





[jira] [Commented] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878512#comment-16878512
 ] 

HBase QA commented on HBASE-22658:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  7s{color} 
| {color:red} HBASE-22658 does not apply to master. Rebase required? Wrong 
Branch? See 
https://yetus.apache.org/documentation/in-progress/precommit-patchnames for 
help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-22658 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973657/HBASE-22658-for-hbase1.x.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/601/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.



> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: gracefulshutdown, region_mover, rsgroup
> Attachments: HBASE-22658-for-hbase1.x.patch
>
>
> There are many retries when I am using graceful_stop.sh to shut down a region 
> server after using rsgroup, because the target server is in a different rsgroup. 
> This makes it slow to gracefully shut down a regionserver. So I think that 
> region_mover.rb should only choose servers in the same rsgroup as target servers.
> The region mover is implemented in JRuby in hbase1.x and in Java in hbase2.x. 
> I tried to modify the RegionMover.java class to use the same logic in hbase2.x, 
> but mvn package failed because the hbase-server module and the hbase-rsgroup 
> module needed to depend on each other, and Maven threw a "The projects in the 
> reactor contain a cyclic reference" error. I couldn't solve it, so I just 
> uploaded a patch for hbase1.x.
>  





[jira] [Updated] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang.feng updated HBASE-22658:
---
Description: 
There are many retries when I am using graceful_stop.sh to shut down a region 
server after using rsgroup, because the target server is in a different rsgroup. 
This makes it slow to gracefully shut down a regionserver. So I think that 
region_mover.rb should only choose servers in the same rsgroup as target servers.

The region mover is implemented in JRuby in hbase1.x and in Java in hbase2.x. 
I tried to modify the RegionMover.java class to use the same logic in hbase2.x, 
but mvn package failed because the hbase-server module and the hbase-rsgroup 
module needed to depend on each other, and Maven threw a "The projects in the 
reactor contain a cyclic reference" error. I couldn't solve it, so I just 
uploaded a patch for hbase1.x.

 

  was:
There are many retries when I am using graceful_stop.sh to shut down a region 
server after using rsgroup, because the target server is in a different rsgroup. 
So I think that region_mover.rb should choose servers in the same rsgroup as 
target servers.

The region mover is implemented in JRuby in hbase1.x and in Java in hbase2.x. 
I tried to modify the RegionMover.java class to use the same logic in hbase2.x, 
but mvn package failed because the hbase-server module and the hbase-rsgroup 
module needed to depend on each other, and Maven threw a "The projects in the 
reactor contain a cyclic reference" error. I couldn't solve it, so I just 
uploaded a patch for hbase1.x.

 


> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: gracefulshutdown, region_mover, rsgroup
> Attachments: HBASE-22658-for-hbase1.x.patch
>
>
> There are many retries when I am using graceful_stop.sh to shut down a region 
> server after using rsgroup, because the target server is in a different rsgroup. 
> This makes it slow to gracefully shut down a regionserver. So I think that 
> region_mover.rb should only choose servers in the same rsgroup as target servers.
> The region mover is implemented in JRuby in hbase1.x and in Java in hbase2.x. 
> I tried to modify the RegionMover.java class to use the same logic in hbase2.x, 
> but mvn package failed because the hbase-server module and the hbase-rsgroup 
> module needed to depend on each other, and Maven threw a "The projects in the 
> reactor contain a cyclic reference" error. I couldn't solve it, so I just 
> uploaded a patch for hbase1.x.
>  





[jira] [Commented] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread HBase QA (JIRA)


[ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16878510#comment-16878510
 ] 

HBase QA commented on HBASE-22658:
--

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  6s{color} 
| {color:red} HBASE-22658 does not apply to master. Rebase required? Wrong 
Branch? See 
https://yetus.apache.org/documentation/in-progress/precommit-patchnames for 
help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Issue | HBASE-22658 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12973657/HBASE-22658-for-hbase1.x.patch
 |
| Console output | 
https://builds.apache.org/job/PreCommit-HBASE-Build/600/console |
| Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |


This message was automatically generated.



> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: gracefulshutdown, region_mover, rsgroup
> Attachments: HBASE-22658-for-hbase1.x.patch
>
>
> There are many retries when I am using graceful_stop.sh to shut down a region 
> server after using rsgroup, because the target server is in a different rsgroup. 
> So I think that region_mover.rb should choose servers in the same rsgroup as 
> target servers.
> The region mover is implemented in JRuby in hbase1.x and in Java in hbase2.x. 
> I tried to modify the RegionMover.java class to use the same logic in hbase2.x, 
> but mvn package failed because the hbase-server module and the hbase-rsgroup 
> module needed to depend on each other, and Maven threw a "The projects in the 
> reactor contain a cyclic reference" error. I couldn't solve it, so I just 
> uploaded a patch for hbase1.x.
>  





[GitHub] [hbase] Apache-HBase commented on issue #356: HBASE-15666 shaded dependencies for hbase-testing-util

2019-07-04 Thread GitBox
Apache-HBase commented on issue #356: HBASE-15666 shaded dependencies for 
hbase-testing-util
URL: https://github.com/apache/hbase/pull/356#issuecomment-508421923
 
 
   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime | Comment |
   |::|--:|:|:|
   | 0 | reexec | 1547 | Docker mode activated. |
   ||| _ Prechecks _ |
   | 0 | shelldocs | 0 | Shelldocs was not available. |
   | +1 | @author | 0 | The patch does not contain any @author tags. |
   | +1 | test4tests | 0 | The patch appears to include 1 new or modified test 
files. |
   ||| _ branch-1.4 Compile Tests _ |
   | 0 | mvndep | 16 | Maven dependency ordering for branch |
   | +1 | mvninstall | 168 | branch-1.4 passed |
   | +1 | compile | 29 | branch-1.4 passed with JDK v1.8.0_212 |
   | +1 | compile | 35 | branch-1.4 passed with JDK v1.7.0_222 |
   | +1 | shadedjars | 167 | branch has no errors when building our shaded 
downstream artifacts. |
   | +1 | javadoc | 25 | branch-1.4 passed with JDK v1.8.0_212 |
   | +1 | javadoc | 28 | branch-1.4 passed with JDK v1.7.0_222 |
   ||| _ Patch Compile Tests _ |
   | 0 | mvndep | 12 | Maven dependency ordering for patch |
   | +1 | mvninstall | 108 | the patch passed |
   | +1 | compile | 46 | the patch passed with JDK v1.8.0_212 |
   | +1 | javac | 46 | the patch passed |
   | +1 | compile | 58 | the patch passed with JDK v1.7.0_222 |
   | +1 | javac | 58 | the patch passed |
   | +1 | shellcheck | 0 | There were no new shellcheck issues. |
   | +1 | whitespace | 0 | The patch has no whitespace issues. |
   | -1 | xml | 1 | The patch has 3 ill-formed XML file(s). |
   | +1 | shadedjars | 164 | patch has no errors when building our shaded 
downstream artifacts. |
   | +1 | hadoopcheck | 114 | Patch does not cause any errors with Hadoop 
2.7.7. |
   | +1 | javadoc | 36 | the patch passed with JDK v1.8.0_212 |
   | +1 | javadoc | 44 | the patch passed with JDK v1.7.0_222 |
   ||| _ Other Tests _ |
   | +1 | unit | 22 | hbase-shaded in the patch passed. |
   | +1 | unit | 19 | hbase-shaded-testing-util in the patch passed. |
   | +1 | unit | 15 | hbase-shaded-check-invariants in the patch passed. |
   | +1 | asflicense | 32 | The patch does not generate ASF License warnings. |
   | | | 2896 | |
   
   
   | Reason | Tests |
   |---:|:--|
   | XML | Parsing Error(s): |
   |   | hbase-shaded/hbase-shaded-testing-util/pom.xml |
   |   | 
hbase-shaded/hbase-shaded-testing-util/src/main/resources/org/apache/hadoop/hbase/shaded/org/mortbay/jetty/webapp/webdefault.xml
 |
   |   | hbase-shaded/pom.xml |
   
   
   | Subsystem | Report/Notes |
   |--:|:-|
   | Docker | Client=17.05.0-ce Server=17.05.0-ce base: https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-356/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hbase/pull/356 |
   | Optional Tests | dupname asflicense unit shellcheck shelldocs javac javadoc shadedjars hadoopcheck xml compile |
   | uname | Linux d33c877c5bc0 4.4.0-138-generic #164-Ubuntu SMP Tue Oct 2 17:16:02 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | /testptch/patchprocess/precommit/personality/provided.sh |
   | git revision | branch-1.4 / bd92612 |
   | maven | version: Apache Maven 3.0.5 |
   | Default Java | 1.7.0_222 |
   | Multi-JDK versions | /usr/lib/jvm/java-8-openjdk-amd64:1.8.0_212 /usr/lib/jvm/java-7-openjdk-amd64:1.7.0_222 |
   | shellcheck | v0.4.7 |
   | xml | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-356/2/artifact/out/xml.txt |
   | Test Results | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-356/2/testReport/ |
   | Max. process+thread count | 66 (vs. ulimit of 1) |
   | modules | C: hbase-shaded hbase-shaded/hbase-shaded-testing-util hbase-shaded/hbase-shaded-check-invariants U: hbase-shaded |
   | Console output | https://builds.apache.org/job/HBase-PreCommit-GitHub-PR/job/PR-356/2/console |
   | Powered by | Apache Yetus 0.9.0 http://yetus.apache.org |
   
   
   This message was automatically generated.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[jira] [Updated] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang.feng updated HBASE-22658:
---
Labels: gracefulshutdown region_mover rsgroup  (was: region_mover rsgroup)

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: gracefulshutdown, region_mover, rsgroup
> Attachments: HBASE-22658-for-hbase1.x.patch
>
>
> There are many retries when I am using graceful_stop.sh to shut down a region
> server after enabling rsgroups, because the target server may be in a different
> rsgroup. So I think region_mover.rb should choose servers from the same rsgroup
> as target servers.
> The region mover is implemented in JRuby in HBase 1.x and in Java in HBase 2.x.
> I tried to modify the RegionMover.java class to use the same logic in HBase 2.x,
> but mvn package failed because the hbase-server and hbase-rsgroup modules needed
> to depend on each other, and Maven threw "The projects in the reactor contain a
> cyclic reference". I couldn't solve it, so I just uploaded a patch for HBase 1.x.
>  
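The selection logic the issue asks for can be sketched roughly as follows. This is a minimal illustrative sketch, not the actual region_mover.rb or RegionMover.java code: the class, method, and server names are all hypothetical, and the real implementation would query the rsgroup admin API instead of a plain map.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

/** Illustrative sketch only; not the actual HBase region mover API. */
public class SameRsGroupTargets {

  /**
   * Keeps only candidate target servers that belong to the same rsgroup as the
   * server being drained, so moved regions are not later treated as misplaced
   * and bounced back by the rsgroup balancer.
   */
  static List<String> sameRsGroupTargets(String drainingServer,
      List<String> candidates, Map<String, String> serverToGroup) {
    String group = serverToGroup.get(drainingServer);
    return candidates.stream()
        .filter(s -> !s.equals(drainingServer))
        .filter(s -> group != null && group.equals(serverToGroup.get(s)))
        .collect(Collectors.toList());
  }

  public static void main(String[] args) {
    // Hypothetical cluster layout: two servers in "group_a", one in "default".
    Map<String, String> groups = Map.of(
        "rs1.example.com:16020", "group_a",
        "rs2.example.com:16020", "group_a",
        "rs3.example.com:16020", "default");
    List<String> targets = sameRsGroupTargets("rs1.example.com:16020",
        List.of("rs1.example.com:16020", "rs2.example.com:16020",
            "rs3.example.com:16020"),
        groups);
    // Only the other server of "group_a" remains a valid target.
    if (!targets.equals(List.of("rs2.example.com:16020"))) {
      throw new AssertionError(targets);
    }
    System.out.println("targets=" + targets);
  }
}
```

With such a filter, graceful_stop.sh would stop retrying moves that the rsgroup balancer immediately undoes.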



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Updated] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang.feng updated HBASE-22658:
---
Attachment: (was: HBASE-22658-for-hbase1.x.patch)

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: region_mover, rsgroup
> Attachments: HBASE-22658-for-hbase1.x.patch
>
>
> There are many retries when I am using graceful_stop.sh to shut down a region
> server after enabling rsgroups, because the target server may be in a different
> rsgroup. So I think region_mover.rb should choose servers from the same rsgroup
> as target servers.
> The region mover is implemented in JRuby in HBase 1.x and in Java in HBase 2.x.
> I tried to modify the RegionMover.java class to use the same logic in HBase 2.x,
> but mvn package failed because the hbase-server and hbase-rsgroup modules needed
> to depend on each other, and Maven threw "The projects in the reactor contain a
> cyclic reference". I couldn't solve it, so I just uploaded a patch for HBase 1.x.
>  





[jira] [Updated] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang.feng updated HBASE-22658:
---
Description: 
There are many retries when I am using graceful_stop.sh to shut down a region
server after enabling rsgroups, because the target server may be in a different
rsgroup. So I think region_mover.rb should choose servers from the same rsgroup
as target servers.

The region mover is implemented in JRuby in HBase 1.x and in Java in HBase 2.x.
I tried to modify the RegionMover.java class to use the same logic in HBase 2.x,
but mvn package failed because the hbase-server and hbase-rsgroup modules needed
to depend on each other, and Maven threw "The projects in the reactor contain a
cyclic reference". I couldn't solve it, so I just uploaded a patch for HBase 1.x.

 

  was:
There are many retries when I am using graceful_stop.sh to shut down a region
server after enabling rsgroups, because the target server may be in a different
rsgroup. So I think region_mover.rb should choose servers from the same rsgroup
as target servers.

The region mover is implemented in JRuby in HBase 1.x and in Java in HBase 2.x.
I tried to modify the RegionMover.java class to use the same logic in HBase 2.x,
but mvn package failed because the hbase-server and hbase-rsgroup modules needed
to depend on each other, and Maven threw "The projects in the reactor contain a
cyclic reference". So I just uploaded a patch for HBase 1.x.

 


> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: region_mover, rsgroup
> Attachments: HBASE-22658-for-hbase1.x.patch
>
>
> There are many retries when I am using graceful_stop.sh to shut down a region
> server after enabling rsgroups, because the target server may be in a different
> rsgroup. So I think region_mover.rb should choose servers from the same rsgroup
> as target servers.
> The region mover is implemented in JRuby in HBase 1.x and in Java in HBase 2.x.
> I tried to modify the RegionMover.java class to use the same logic in HBase 2.x,
> but mvn package failed because the hbase-server and hbase-rsgroup modules needed
> to depend on each other, and Maven threw "The projects in the reactor contain a
> cyclic reference". I couldn't solve it, so I just uploaded a patch for HBase 1.x.
>  





[jira] [Updated] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang.feng updated HBASE-22658:
---
Attachment: HBASE-22658-for-hbase1.x.patch

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: region_mover, rsgroup
> Attachments: HBASE-22658-for-hbase1.x.patch
>
>
> There are many retries when I am using graceful_stop.sh to shut down a region
> server after enabling rsgroups, because the target server may be in a different
> rsgroup. So I think region_mover.rb should choose servers from the same rsgroup
> as target servers.
> The region mover is implemented in JRuby in HBase 1.x and in Java in HBase 2.x.
> I tried to modify the RegionMover.java class to use the same logic in HBase 2.x,
> but mvn package failed because the hbase-server and hbase-rsgroup modules needed
> to depend on each other, and Maven threw "The projects in the reactor contain a
> cyclic reference". I couldn't solve it, so I just uploaded a patch for HBase 1.x.
>  





[jira] [Updated] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang.feng updated HBASE-22658:
---
Description: 
There are many retries when I am using graceful_stop.sh to shut down a region
server after enabling rsgroups, because the target server may be in a different
rsgroup. So I think region_mover.rb should choose servers from the same rsgroup
as target servers.

The region mover is implemented in JRuby in HBase 1.x and in Java in HBase 2.x.
I tried to modify the RegionMover.java class to use the same logic in HBase 2.x,
but mvn package failed because the hbase-server and hbase-rsgroup modules needed
to depend on each other, and Maven threw "The projects in the reactor contain a
cyclic reference". So I just uploaded a patch for HBase 1.x.

 

  was:There are many retries when I am using graceful_stop.sh to shut down a
region server after enabling rsgroups. Sometimes some regions can't be visited,
so I think region_mover.rb should choose servers from the same rsgroup as
target servers.


> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: region_mover, rsgroup
> Attachments: HBASE-22658-for-hbase1.x.patch
>
>
> There are many retries when I am using graceful_stop.sh to shut down a region
> server after enabling rsgroups, because the target server may be in a different
> rsgroup. So I think region_mover.rb should choose servers from the same rsgroup
> as target servers.
> The region mover is implemented in JRuby in HBase 1.x and in Java in HBase 2.x.
> I tried to modify the RegionMover.java class to use the same logic in HBase 2.x,
> but mvn package failed because the hbase-server and hbase-rsgroup modules needed
> to depend on each other, and Maven threw "The projects in the reactor contain a
> cyclic reference". So I just uploaded a patch for HBase 1.x.
>  





[GitHub] [hbase] HorizonNet commented on issue #322: HBASE-22586 Javadoc Warnings related to @param tag

2019-07-04 Thread GitBox
HorizonNet commented on issue #322: HBASE-22586 Javadoc Warnings related to 
@param tag
URL: https://github.com/apache/hbase/pull/322#issuecomment-508420465
 
 
   @syedmurtazahassan My suggestion is, if you need to touch the class due to 
param changes, you can do it in this PR.




[GitHub] [hbase] HorizonNet commented on a change in pull request #322: HBASE-22586 Javadoc Warnings related to @param tag

2019-07-04 Thread GitBox
HorizonNet commented on a change in pull request #322: HBASE-22586 Javadoc 
Warnings related to @param tag
URL: https://github.com/apache/hbase/pull/322#discussion_r300321468
 
 

 ##
 File path: 
hbase-common/src/main/java/org/apache/hadoop/hbase/PrivateCellUtil.java
 ##
 @@ -2605,8 +2605,12 @@ public static final int compare(CellComparator 
comparator, Cell left, byte[] key
* method is used both in the normal comparator and the "same-prefix" 
comparator. Note that we are
* assuming that row portions of both KVs have already been parsed and found 
identical, and we
* don't validate that assumption here.
-   * @param commonPrefix the length of the common prefix of the two key-values 
being compared,
-   *  including row length and row
+   * @param comparator the {@link CellComparator}} to use for comparison
+   * @param left the cell to be compared
+   * @param right the serialized key part of a key-value
+   * @param roffset the offset in the key byte[]
+   * @param rlength the length of the key byte[]
+   * @param rowlength the row length
*/
   static final int compareWithoutRow(CellComparator comparator, Cell left, 
byte[] right,
 
 Review comment:
   Because you're already on it, could you please also document the return 
value?




[GitHub] [hbase] HorizonNet commented on a change in pull request #322: HBASE-22586 Javadoc Warnings related to @param tag

2019-07-04 Thread GitBox
HorizonNet commented on a change in pull request #322: HBASE-22586 Javadoc 
Warnings related to @param tag
URL: https://github.com/apache/hbase/pull/322#discussion_r300322193
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/master/TestActiveMasterManager.java
 ##
 @@ -196,10 +196,10 @@ public void testActiveMasterManagerFromZK() throws 
Exception {
 
   /**
* Assert there is an active master and that it has the specified address.
-   * @param zk
-   * @param thisMasterAddress
-   * @throws KeeperException
-   * @throws IOException
+   * @param zk single zookeeper watcher
+   * @param expectedAddress the expected address of the master
+   * @throws KeeperException unexpected zookeeper exception
 
 Review comment:
   "ZooKeeper"




[GitHub] [hbase] HorizonNet commented on a change in pull request #322: HBASE-22586 Javadoc Warnings related to @param tag

2019-07-04 Thread GitBox
HorizonNet commented on a change in pull request #322: HBASE-22586 Javadoc 
Warnings related to @param tag
URL: https://github.com/apache/hbase/pull/322#discussion_r300321649
 
 

 ##
 File path: 
hbase-common/src/test/java/org/apache/hadoop/hbase/util/ClassLoaderTestHelper.java
 ##
 @@ -52,7 +52,7 @@
* Jar a list of files into a jar archive.
*
* @param archiveFile the target jar archive
-   * @param tobejared a list of files to be jared
+   * @param tobeJared a list of files to be jared
*/
   private static boolean createJarArchive(File archiveFile, File[] tobeJared) {
 
 Review comment:
   Ditto. For the format see my comment on your other PR.




[GitHub] [hbase] HorizonNet commented on a change in pull request #302: HBASE-22571 Javadoc Warnings related to @return tag

2019-07-04 Thread GitBox
HorizonNet commented on a change in pull request #302: HBASE-22571 Javadoc 
Warnings related to @return tag
URL: https://github.com/apache/hbase/pull/302#discussion_r300319812
 
 

 ##
 File path: 
hbase-server/src/test/java/org/apache/hadoop/hbase/util/MultiThreadedAction.java
 ##
 @@ -320,7 +320,7 @@ public boolean verifyResultAgainstDataGenerator(Result 
result, boolean verifyVal
* @param verifyCfAndColumnIntegrity verify that cf/column set in the result 
is complete. Note
*   that to use this multiPut should be 
used, or verification
*   has to happen after writes, otherwise 
there can be races.
-   * @return
+   * @return the verified result from get or scan
 
 Review comment:
   NIT: because it is a boolean, it would be better to have it in the format
"true if , false otherwise".
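The format the reviewer suggests could look like this on a boolean method. The class and method below are purely illustrative and are not taken from the HBase patch:

```java
/** Illustrative only: shows the "@return true if ..., false otherwise" format. */
public class JavadocReturnExample {

  /**
   * Checks whether a result carries at least the expected number of columns.
   *
   * @param expected the number of columns the verifier expects
   * @param actual the number of columns actually present in the result
   * @return true if the result contains all expected columns, false otherwise
   */
  static boolean isResultComplete(int expected, int actual) {
    return actual >= expected;
  }

  public static void main(String[] args) {
    // Sanity checks for the illustrative method.
    if (!isResultComplete(3, 3)) throw new AssertionError("complete case");
    if (isResultComplete(3, 2)) throw new AssertionError("incomplete case");
    System.out.println("ok");
  }
}
```

Spelling out both branches of the boolean in the @return tag tells callers exactly what each value means without reading the method body.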




[jira] [Reopened] (HBASE-20405) Update website to meet foundation recommendations

2019-07-04 Thread Szalay-Beko Mate (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-20405?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Szalay-Beko Mate reopened HBASE-20405:
--

We checked the Whimsy report again. It looks better now, but the checks for 
"thanks" and "license" still do not pass. We need a second try.

> Update website to meet foundation recommendations
> -
>
> Key: HBASE-20405
> URL: https://issues.apache.org/jira/browse/HBASE-20405
> Project: HBase
>  Issue Type: Task
>  Components: website
>Reporter: Sean Busbey
>Assignee: Szalay-Beko Mate
>Priority: Minor
>  Labels: beginner
> Fix For: 3.0.0
>
> Attachments: HBASE-20405-v2.png, HBASE-20405.png
>
>
> The Apache Whimsy tool includes an automated checker on if projects are 
> following foundation guidance for web sites:
> https://whimsy.apache.org/site/project/hbase
> out of 10 checks, we currently have 5 green, 4 red, and 1 orange.
> The whimsy listing gives links to relevant policy and explains what it's 
> looking for.





[jira] [Updated] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang.feng updated HBASE-22658:
---
Attachment: HBASE-22658-for-hbase1.x.patch

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: region_mover, rsgroup
> Attachments: HBASE-22658-for-hbase1.x.patch
>
>
> There are many retries when I am using graceful_stop.sh to shut down a region
> server after enabling rsgroups. Sometimes some regions can't be visited, so I
> think region_mover.rb should choose servers from the same rsgroup as target
> servers.





[jira] [Updated] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang.feng updated HBASE-22658:
---
Attachment: HBASE-22658-1.x.patch

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: region_mover, rsgroup
>
> There are many retries when I am using graceful_stop.sh to shut down a region
> server after enabling rsgroups. Sometimes some regions can't be visited, so I
> think region_mover.rb should choose servers from the same rsgroup as target
> servers.





[jira] [Updated] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang.feng updated HBASE-22658:
---
Attachment: (was: HBASE-22658-1.x.patch)

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: region_mover, rsgroup
>
> There are many retries when I am using graceful_stop.sh to shut down a region
> server after enabling rsgroups. Sometimes some regions can't be visited, so I
> think region_mover.rb should choose servers from the same rsgroup as target
> servers.





[jira] [Updated] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang.feng updated HBASE-22658:
---
Attachment: (was: 
0001-remove_region-should-think-about-choose-same-rsgroup.patch)

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: region_mover, rsgroup
>
> There are many retries when I am using graceful_stop.sh to shut down a region
> server after enabling rsgroups. Sometimes some regions can't be visited, so I
> think region_mover.rb should choose servers from the same rsgroup as target
> servers.





[jira] [Updated] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang.feng updated HBASE-22658:
---
Labels: region_mover rsgroup  (was: )
Attachment: 0001-remove_region-should-think-about-choose-same-rsgroup.patch
Status: Patch Available  (was: Open)

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>  Labels: rsgroup, region_mover
> Attachments: 
> 0001-remove_region-should-think-about-choose-same-rsgroup.patch
>
>
> There are many retries when I am using graceful_stop.sh to shut down a region
> server after enabling rsgroups. Sometimes some regions can't be visited, so I
> think region_mover.rb should choose servers from the same rsgroup as target
> servers.





[jira] [Updated] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22658?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

liang.feng updated HBASE-22658:
---
Description: There are many retries when I am using graceful_stop.sh to shut
down a region server after enabling rsgroups. Sometimes some regions can't be
visited, so I think region_mover.rb should choose servers from the same rsgroup
as target servers.  (was: region_mover.rb should choose same rsgroup servers as
target servers when rsgroup is enabled)

> region_mover.rb  should choose same rsgroup servers as target servers 
> --
>
> Key: HBASE-22658
> URL: https://issues.apache.org/jira/browse/HBASE-22658
> Project: HBase
>  Issue Type: Improvement
>  Components: rsgroup, shell
>Reporter: liang.feng
>Priority: Critical
>
> There are many retries when I am using graceful_stop.sh to shut down a region
> server after enabling rsgroups. Sometimes some regions can't be visited, so I
> think region_mover.rb should choose servers from the same rsgroup as target
> servers.





[jira] [Updated] (HBASE-22537) Split happened Replica region can not be deleted after deleting table successfully and restarting RegionServer

2019-07-04 Thread Wellington Chevreuil (JIRA)


 [ 
https://issues.apache.org/jira/browse/HBASE-22537?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wellington Chevreuil updated HBASE-22537:
-
Attachment: HBASE-22537.branch-2.1.003.patch

> Split happened Replica region can not be deleted after deleting table 
> successfully and restarting RegionServer
> --
>
> Key: HBASE-22537
> URL: https://issues.apache.org/jira/browse/HBASE-22537
> Project: HBase
>  Issue Type: Bug
>  Components: Region Assignment
>Affects Versions: 2.1.1
>Reporter: Y. SREENIVASULU REDDY
>Assignee: Y. SREENIVASULU REDDY
>Priority: Minor
> Fix For: 2.1.6
>
> Attachments: HBASE-22537.branch-2.1.002.patch, 
> HBASE-22537.branch-2.1.003.patch, HBASE-22537.branch-2.1.patch
>
>
> [Test steps]
> 1. Create a table (set RegionReplication=2).
> 2. Insert data into the table until the region is split.
> 3. Disable and drop the table.
> 4. Forcefully kill the RegionServer holding the parent replica region.
> 5. The HBase web UI will show the replica regions stuck in RIT.
> [Expected output]
> The parent replica region should be deleted.
> [Actual output]
> The parent replica region still exists.





[jira] [Created] (HBASE-22658) region_mover.rb should choose same rsgroup servers as target servers

2019-07-04 Thread liang.feng (JIRA)
liang.feng created HBASE-22658:
--

 Summary: region_mover.rb  should choose same rsgroup servers as 
target servers 
 Key: HBASE-22658
 URL: https://issues.apache.org/jira/browse/HBASE-22658
 Project: HBase
  Issue Type: Improvement
  Components: rsgroup, shell
Reporter: liang.feng


region_mover.rb should choose servers from the same rsgroup as target servers
when rsgroup is enabled




