rmdmattingly commented on code in PR #5713:
URL: https://github.com/apache/hbase/pull/5713#discussion_r1518985912


##########
hbase-server/src/main/java/org/apache/hadoop/hbase/quotas/DefaultOperationQuota.java:
##########
@@ -158,24 +193,60 @@ public void addMutation(final Mutation mutation) {
    * Update estimate quota(read/write size/capacityUnits) which will be consumed
    * @param numWrites the number of write requests
    * @param numReads  the number of read requests
-   * @param numScans  the number of scan requests
    */
-  protected void updateEstimateConsumeQuota(int numWrites, int numReads, int numScans) {
+  protected void updateEstimateConsumeBatchQuota(int numWrites, int numReads) {
     writeConsumed = estimateConsume(OperationType.MUTATE, numWrites, 100);
 
     if (useResultSizeBytes) {
       readConsumed = estimateConsume(OperationType.GET, numReads, 100);
-      readConsumed += estimateConsume(OperationType.SCAN, numScans, 1000);
     } else {
       // assume 1 block required for reads. this is probably a low estimate, which is okay
       readConsumed = numReads > 0 ? blockSizeBytes : 0;
-      readConsumed += numScans > 0 ? blockSizeBytes : 0;
     }
 
     writeCapacityUnitConsumed = calculateWriteCapacityUnit(writeConsumed);
     readCapacityUnitConsumed = calculateReadCapacityUnit(readConsumed);
   }
 
+  /**
+   * Update estimate quota(read/write size/capacityUnits) which will be consumed
+   * @param scanRequest          the scan to be executed
+   * @param maxScannerResultSize the maximum bytes to be returned by the scanner
+   * @param maxBlockBytesScanned the maximum bytes scanned in a single RPC call by the scanner
+   */
+  protected void updateEstimateConsumeScanQuota(ClientProtos.ScanRequest scanRequest,
+    long maxScannerResultSize, long maxBlockBytesScanned) {
+    if (useResultSizeBytes) {
+      readConsumed = estimateConsume(OperationType.GET, 1, 1000);
+    } else {
+      /*
+       * Estimating scan workload is more complicated, and if we severely underestimate workloads
+       * then throttled clients will exhaust retries too quickly, and could saturate the RPC layer.
+       * We have access to the ScanRequest's nextCallSeq number, the maxScannerResultSize, and the
+       * maxBlockBytesScanned by every relevant Scanner#next call. With these inputs we can make a
+       * more informed estimate about the scan's workload.
+       */
+      long estimate;
+      if (scanRequest.getNextCallSeq() == 0) {
+        // start scanners with an optimistic 1 block IO estimate
+        // it is better to underestimate a large scan in the beginning
+        // than to overestimate, and block, a small scan
+        estimate = blockSizeBytes;
+      } else {
+        // scanner result sizes will be limited by quota availability, regardless of
+        // maxScannerResultSize. This means that we cannot safely assume that a long-running
+        // scan with a small maxBlockBytesScanned would not prefer to pull down
+        // a larger payload. So we should estimate with the assumption that long-running scans
+        // are appropriately configured to approach their maxScannerResultSize per RPC call
+        estimate =

Review Comment:
   > So every time a next() comes in, we double the previous BBS as the new 
limit
   
   Is this your proposal for what we should do, or your read on what the 
current estimation logic will do? If we're worried about overestimation, then I 
think this logic is actually worse than what you've described, because we don't 
double the previous BBS; rather, we multiply the max BBS by the next call 
sequence. So for the 1 block/1 caching case, we'd quickly arrive at the likely 
much higher estimate of `Math.min(maxScannerResultSize, maxScanEstimate)`. This 
is more of a feature than a bug, though, because I think we'd prefer to 
overestimate against a saturated quota. For example, if we only manage to scan 
a few KB per next call because we're riding against the quota limit, but the 
scan would really prefer to pull down 100MB at a time, then without the 
aforementioned estimate progression we'd continue to use tiny estimates.
   
   So, rather than risk falling into a cycle of `quota saturation -> long 
running scans with tiny estimates -> tons of RPC calls -> quota saturation 
...`, I think we'd rather make the assumption described in the comment here:
   ```
   So we should estimate with the assumption that long-running scans
   are appropriately configured to approach their maxScannerResultSize per RPC call
   ```
   at the expense of potentially overestimating some scans. We also mitigate 
the risk of overestimation with the introduction of `maxScanEstimate`.
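   To make that progression concrete, here is a rough sketch of the estimation logic as described above. The method and class names are illustrative, and `maxScanEstimate` is used here as the reviewer's term for the cap on the estimate; this is a sketch of the idea, not the actual `DefaultOperationQuota` code.

```java
// Illustrative sketch of the scan quota estimate progression described in the
// review discussion. Names (ScanEstimateSketch, estimate, maxScanEstimate) are
// hypothetical; they are not the actual HBase implementation.
public class ScanEstimateSketch {
  static long estimate(long nextCallSeq, long blockSizeBytes, long maxBlockBytesScanned,
      long maxScannerResultSize, long maxScanEstimate) {
    if (nextCallSeq == 0) {
      // first next() call: optimistic single-block IO estimate, so small scans
      // are not blocked by an overestimate up front
      return blockSizeBytes;
    }
    // later calls: scale the max block-bytes-scanned by the call sequence,
    // capped by both maxScannerResultSize and maxScanEstimate, rather than
    // trusting a possibly quota-suppressed per-call workload
    long progressive = nextCallSeq * maxBlockBytesScanned;
    return Math.min(maxScannerResultSize, Math.min(maxScanEstimate, progressive));
  }

  public static void main(String[] args) {
    long blockSize = 64 * 1024;
    // call 0 estimates a single block
    System.out.println(estimate(0, blockSize, blockSize, 100_000_000L, 16_000_000L));
    // by call 1000 the progressive estimate has been capped at maxScanEstimate
    System.out.println(estimate(1000, blockSize, blockSize, 100_000_000L, 16_000_000L));
  }
}
```

   The point of the cap is that a long-running scan riding the quota limit quickly estimates at `Math.min(maxScannerResultSize, maxScanEstimate)` instead of staying pinned at tiny per-call observations.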


