ndimiduk commented on a change in pull request #1933:
URL: https://github.com/apache/hbase/pull/1933#discussion_r443704467
##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
##########
@@ -1957,17 +1959,27 @@ public boolean normalizeRegions() throws IOException {
continue;
}
-      // as of this writing, `plan.execute()` is non-blocking, so there's no artificial rate-
-      // limiting of merge requests due to this serial loop.
+      // as of this writing, `plan.submit()` is non-blocking and uses Async Admin APIs to
+      // submit tasks, so there's no artificial rate-
+      // limiting of merge/split requests due to this serial loop.
for (NormalizationPlan plan : plans) {
- plan.execute(admin);
+ Future<Void> future = plan.submit(admin);
+ submittedPlanList.add(future);
if (plan.getType() == PlanType.SPLIT) {
splitPlanCount++;
} else if (plan.getType() == PlanType.MERGE) {
mergePlanCount++;
}
}
}
+ for (Future<?> submittedPlan : submittedPlanList) {
+ try {
+ submittedPlan.get();
+ } catch (Exception e) {
+ normalizationPlanFailureCount++;
Review comment:
Going through the client's admin interface seems unnecessary for this as
well. There's no way, for example, to log the PID of the action procedure that
failed (unless it happens to be included in an exception message).
   * Can the action be taken via the `MasterServices` interface? (See the sketch
below.)
   * Maybe it's fine to not wait on these actions, so long as the PIDs are
successfully submitted. I wonder what kind of back-pressure the system has,
though, if multiple normalizer runs result in thousands of split/merge actions
that can't complete and back up over time. Right now, our protection against
this is the lock against multiple concurrent normalizer runs.
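   Something like this is what I have in mind (a rough sketch only; I'm
assuming `MasterServices#splitRegion`/`#mergeRegions` keep their current
signatures, and I'm guessing at the plan getters, so treat the names as
illustrative):

```java
// Rough idea: skip the Admin indirection. HMaster implements MasterServices,
// so (assuming these signatures) we can submit directly and keep the pid
// around for logging.
long pid;
if (plan.getType() == PlanType.SPLIT) {
  SplitNormalizationPlan splitPlan = (SplitNormalizationPlan) plan;
  pid = splitRegion(splitPlan.getRegionInfo(), splitPlan.getSplitPoint(),
    HConstants.NO_NONCE, HConstants.NO_NONCE);
} else {
  MergeNormalizationPlan mergePlan = (MergeNormalizationPlan) plan;
  pid = mergeRegions(
    new RegionInfo[] { mergePlan.getFirstRegion(), mergePlan.getSecondRegion() },
    false, HConstants.NO_NONCE, HConstants.NO_NONCE);
}
LOG.info("Submitted {} plan as pid={}", plan.getType(), pid);
```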
##########
File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/RawAsyncHBaseAdmin.java
##########
@@ -1294,7 +1294,7 @@ private void checkAndGetTableName(byte[] encodeRegionName, AtomicReference<Table
return;
}
- MergeTableRegionsRequest request = null;
+ MergeTableRegionsRequest request;
Review comment:
Can this be made `final` as well?
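   I think definite assignment works out, since the `catch` arm returns.
Roughly (from memory, not the exact surrounding code, so the helper and
variable names here may be off):

```java
// A final local may be assigned once inside a try block, as long as every
// path that falls through assigns it; the catch arm below leaves the method.
final MergeTableRegionsRequest request;
try {
  request = RequestConverter.buildMergeTableRegionsRequest(
    encodedNameOfRegionsToMerge, forcible, ng.getNonceGroup(), ng.newNonce());
} catch (DeserializationException e) {
  future.completeExceptionally(e);
  return;
}
```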
##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/HMaster.java
##########
@@ -1957,17 +1959,27 @@ public boolean normalizeRegions() throws IOException {
continue;
}
-      // as of this writing, `plan.execute()` is non-blocking, so there's no artificial rate-
-      // limiting of merge requests due to this serial loop.
+      // as of this writing, `plan.submit()` is non-blocking and uses Async Admin APIs to
+      // submit tasks, so there's no artificial rate-
+      // limiting of merge/split requests due to this serial loop.
for (NormalizationPlan plan : plans) {
- plan.execute(admin);
+ Future<Void> future = plan.submit(admin);
+ submittedPlanList.add(future);
if (plan.getType() == PlanType.SPLIT) {
splitPlanCount++;
} else if (plan.getType() == PlanType.MERGE) {
mergePlanCount++;
}
}
}
+ for (Future<?> submittedPlan : submittedPlanList) {
+ try {
+ submittedPlan.get();
Review comment:
I don't think the counter as a metric will help an operator very much.
Regions can split or merge in the background, causing a planned action to fail.
The failure should be logged, I think (maybe the normal split/merge pathways do
this already? I haven't checked).
It would be nice to have a final log message for the normalizer run that
says something like "successfully executed x of y normalization actions."
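   Rough sketch of the summary I mean, reusing the loop from this patch
(names are illustrative):

```java
// Count how many submitted plans completed, then emit a single summary line.
int succeeded = 0;
for (Future<?> submittedPlan : submittedPlanList) {
  try {
    submittedPlan.get();
    succeeded++;
  } catch (Exception e) {
    // Failures are expected when regions split/merge underneath us; log them
    // rather than only bumping a counter.
    LOG.warn("Normalization action failed", e);
  }
}
LOG.info("Successfully executed {} of {} normalization actions.",
  succeeded, submittedPlanList.size());
```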
##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/master/normalizer/EmptyNormalizationPlan.java
##########
@@ -44,7 +45,34 @@ public static EmptyNormalizationPlan getInstance(){
* No-op for empty plan.
*/
@Override
- public void execute(Admin admin) {
+ public Future<Void> submit(Admin admin) {
+ return new Future<Void>() {
Review comment:
   No need for an anonymous class; just use
`CompletableFuture.completedFuture`.
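   i.e., something like this (with an import of
`java.util.concurrent.CompletableFuture`):

```java
@Override
public Future<Void> submit(Admin admin) {
  // An already-completed future: the empty plan has nothing to wait on.
  return CompletableFuture.completedFuture(null);
}
```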
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]