Apache9 commented on a change in pull request #2630:
URL: https://github.com/apache/hbase/pull/2630#discussion_r528207034
##########
File path: hbase-client/src/main/java/org/apache/hadoop/hbase/shaded/protobuf/ResponseConverter.java
##########
@@ -158,52 +156,30 @@ public static SingleResponse getResult(final ClientProtos.MutateRequest request,
// If there is an exception from the server, the exception is set at
// the RegionActionResult level, which has been handled above.
if (actions.hasCondition()) {
- Result result = null;
- if (actionResult.getResultOrExceptionCount() > 0) {
- ResultOrException roe = actionResult.getResultOrException(0);
- if (roe.hasResult()) {
- Result r = ProtobufUtil.toResult(roe.getResult(), cells);
- if (!r.isEmpty()) {
- result = r;
- }
- }
- }
- responseValue = new CheckAndMutateResult(actionResult.getProcessed(), result);
+ results.add(regionName, index, getCheckAndMutateResult(actionResult, cells));
} else {
- responseValue = actionResult.getProcessed() ?
- ProtobufUtil.EMPTY_RESULT_EXISTS_TRUE :
- ProtobufUtil.EMPTY_RESULT_EXISTS_FALSE;
+ results.add(regionName, index, getMutateRowResult(actionResult, cells));
}
- results.add(regionName, index, responseValue);
continue;
}
if (actions.hasCondition()) {
- Result result = null;
- if (actionResult.getResultOrExceptionCount() > 0) {
- ResultOrException roe = actionResult.getResultOrException(0);
- Result r = ProtobufUtil.toResult(roe.getResult(), cells);
- if (!r.isEmpty()) {
- result = r;
- }
- }
- responseValue = new CheckAndMutateResult(actionResult.getProcessed(), result);
- results.add(regionName, 0, responseValue);
+ results.add(regionName, 0, getCheckAndMutateResult(actionResult, cells));
} else {
+ if (actionResult.hasProcessed()) {
Review comment:
Mind explaining a bit about this change here? It seems the logic has changed?
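For context, here is a reconstruction of the deleted inline logic as a standalone helper, to make it easier to compare with the new getCheckAndMutateResult call above. This is only a sketch based on the removed lines, not the actual helper added by the PR:

```java
import java.io.IOException;
import org.apache.hadoop.hbase.CellScanner;
import org.apache.hadoop.hbase.client.CheckAndMutateResult;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.shaded.protobuf.ProtobufUtil;
import org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.RegionActionResult;
import org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos.ResultOrException;

// Method-level sketch reconstructed from the removed inline code; the real
// helper in the PR may differ in name, signature, or details.
private static CheckAndMutateResult getCheckAndMutateResult(RegionActionResult actionResult,
    CellScanner cells) throws IOException {
  Result result = null;
  if (actionResult.getResultOrExceptionCount() > 0) {
    ResultOrException roe = actionResult.getResultOrException(0);
    if (roe.hasResult()) {
      // Convert the protobuf result (plus any cells carried in the scanner)
      // and keep it only if it is non-empty, as the old inline code did.
      Result r = ProtobufUtil.toResult(roe.getResult(), cells);
      if (!r.isEmpty()) {
        result = r;
      }
    }
  }
  return new CheckAndMutateResult(actionResult.getProcessed(), result);
}
```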
##########
File path: hbase-client/src/main/java/org/apache/hadoop/hbase/client/Table.java
##########
@@ -459,9 +459,10 @@ default CheckAndMutateResult checkAndMutate(CheckAndMutate checkAndMutate) throw
* {@link Put} and {@link Delete} are supported.
*
* @param rm object that specifies the set of mutations to perform atomically
+ * @return results of Increment/Append operations
* @throws IOException
Review comment:
nit: we could remove the empty @throws so we can fix the checkstyle warning.
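For illustration, a hypothetical usage sketch of the behaviour the new @return describes, assuming mutateRow now returns a Result carrying the cells produced by Increment/Append mutations in the batch. The table name, column names, and the exact return type are assumptions, not taken from the PR:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Increment;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.RowMutations;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class MutateRowExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = HBaseConfiguration.create();
    byte[] row = Bytes.toBytes("row1");
    byte[] cf = Bytes.toBytes("cf");
    try (Connection conn = ConnectionFactory.createConnection(conf);
         Table table = conn.getTable(TableName.valueOf("test_table"))) {
      RowMutations rm = new RowMutations(row);
      rm.add(new Put(row).addColumn(cf, Bytes.toBytes("q1"), Bytes.toBytes("v1")));
      rm.add(new Increment(row).addColumn(cf, Bytes.toBytes("counter"), 1L));
      // Assumed new behaviour: the returned Result contains the post-increment
      // value written by the Increment mutation.
      Result result = table.mutateRow(rm);
      long counter = Bytes.toLong(result.getValue(cf, Bytes.toBytes("counter")));
      System.out.println("counter = " + counter);
    }
  }
}
```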
##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
##########
@@ -972,11 +1003,41 @@ private void doBatchOp(final RegionActionResult.Builder builder, final HRegion r
OperationStatus[] codes = region.batchMutate(mArray, atomic, HConstants.NO_NONCE,
  HConstants.NO_NONCE);
+ if (atomic) {
+ LinkedList<ResultOrException> resultOrExceptions = new LinkedList<>();
+ List<Result> results = new ArrayList<>();
+ for (i = 0; i < codes.length; i++) {
+ if (codes[i].getResult() != null) {
+ results.add(codes[i].getResult());
+ }
+ if (i != 0) {
+ resultOrExceptions.add(getResultOrException(
+ ClientProtos.Result.getDefaultInstance(), i));
+ }
+ }
+
+ if (results.isEmpty()) {
+ resultOrExceptions.addFirst(getResultOrException(
+ ClientProtos.Result.getDefaultInstance(), 0));
+ } else {
+ // Set the result of the Increment/Append operations to the first element of the
+ // ResultOrException list
+ Result result = Result.create(results.stream()
+ .filter(r -> r.rawCells() != null)
+ .flatMap(r -> Arrays.stream(r.rawCells()))
+ .collect(Collectors.toList()));
+ resultOrExceptions.addFirst(getResultOrException(ProtobufUtil.toResult(result), 0));
+ }
+
+ builder.addAllResultOrException(resultOrExceptions);
+ return;
+ }
+
for (i = 0; i < codes.length; i++) {
Mutation currentMutation = mArray[i];
ClientProtos.Action currentAction = mutationActionMap.get(currentMutation);
- int index = currentAction.hasIndex() || !atomic ? currentAction.getIndex() : i;
- Exception e = null;
+ int index = currentAction.hasIndex() ? currentAction.getIndex() : i;
Review comment:
OK, just saw that there is a return above.
##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/RSRpcServices.java
##########
@@ -972,11 +1002,44 @@ private void doBatchOp(final RegionActionResult.Builder builder, final HRegion r
OperationStatus[] codes = region.batchMutate(mArray, atomic, HConstants.NO_NONCE,
  HConstants.NO_NONCE);
+ if (atomic) {
+ LinkedList<ResultOrException> resultOrExceptions = new LinkedList<>();
Review comment:
Better to use ArrayList instead of LinkedList. ArrayList is faster than LinkedList in
most cases; I can't recall the exact numbers, but at least up to thousands of elements
ArrayList is still faster.
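As a plain-Java illustration (not the HBase code above), here is the same head-insertion pattern with ArrayList, where a single add(0, ...) stands in for addFirst(); class and variable names are made up:

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class HeadInsertComparison {
  public static void main(String[] args) {
    int n = 1000;

    // ArrayList: tail appends are amortized O(1); the single head insertion
    // below costs one O(n) shift, which is cheap compared to LinkedList's
    // per-element node allocation and pointer chasing.
    List<Integer> arrayBacked = new ArrayList<>(n);
    for (int i = 1; i < n; i++) {
      arrayBacked.add(i);
    }
    arrayBacked.add(0, 0); // equivalent of addFirst()

    // LinkedList: the pattern used in the diff above.
    LinkedList<Integer> linked = new LinkedList<>();
    for (int i = 1; i < n; i++) {
      linked.add(i);
    }
    linked.addFirst(0);

    // Both lists end up with the same contents.
    System.out.println(arrayBacked.equals(linked)); // true
  }
}
```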
##########
File path: hbase-server/src/main/java/org/apache/hadoop/hbase/regionserver/HRegion.java
##########
@@ -4008,28 +4029,10 @@ private static Get toGet(final Mutation mutation) throws IOException {
return get;
}
- /**
- * Do coprocessor pre-increment or pre-append after row lock call.
- * @return Result returned out of the coprocessor, which means bypass all further processing
- * and return the preferred Result instead, or null which means proceed.
- */
- private Result doCoprocessorPreCallAfterRowLock(Mutation mutation) throws IOException {
- assert mutation instanceof Increment || mutation instanceof Append;
- Result result = null;
- if (region.coprocessorHost != null) {
- if (mutation instanceof Increment) {
- result = region.coprocessorHost.preIncrementAfterRowLock((Increment) mutation);
- } else {
- result = region.coprocessorHost.preAppendAfterRowLock((Append) mutation);
- }
- }
- return result;
- }
-
private Map<byte[], List<Cell>> reckonDeltas(Mutation mutation, List<Cell> results,
  long now) throws IOException {
assert mutation instanceof Increment || mutation instanceof Append;
- Map<byte[], List<Cell>> ret = new HashMap<>();
+ Map<byte[], List<Cell>> ret = new TreeMap<>(Bytes.BYTES_COMPARATOR);
Review comment:
Good. It is usually dangerous to use byte[] as the key of a HashMap...
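A small self-contained sketch of why, using hbase-common's Bytes (assuming it is on the classpath); names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.TreeMap;
import org.apache.hadoop.hbase.util.Bytes;

public class ByteArrayKeyDemo {
  public static void main(String[] args) {
    byte[] k1 = Bytes.toBytes("cf");
    byte[] k2 = Bytes.toBytes("cf"); // equal contents, different array instance

    // byte[] inherits identity-based equals()/hashCode() from Object, so a
    // HashMap lookup with an equal-but-distinct array misses.
    Map<byte[], String> hashMap = new HashMap<>();
    hashMap.put(k1, "value");
    System.out.println(hashMap.get(k2)); // null

    // A TreeMap ordered by Bytes.BYTES_COMPARATOR compares array contents,
    // so the lookup succeeds regardless of instance identity.
    Map<byte[], String> treeMap = new TreeMap<>(Bytes.BYTES_COMPARATOR);
    treeMap.put(k1, "value");
    System.out.println(treeMap.get(k2)); // value
  }
}
```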