krconv opened a new pull request, #7131:
URL: https://github.com/apache/hbase/pull/7131

   This updates how the client reacts to region merges/splits while processing 
`coprocessorService()`. Instead of failing the request for that region, it 
resubmits the affected range of row keys so that the overall call does not need 
to be aborted.
   
   Added unit tests covering this scenario. I also compared this behavior 
against `HTable` from previous versions; `HTable` doesn't appear to have this 
assertion, nor proper handling for split regions, so it can potentially return 
incorrect results.
   
   The client's coprocessor service logic works as follows when we need to 
submit RPCs to multiple regions:
    1. Start a chain of `AsyncTableRegionLocator` requests, beginning with the 
start key and locating each subsequent region until we've located the region 
containing the end key
    2. Via a listener on the locator request for each region, we send the 
coprocessor RPC to the region that was located as soon as the location is 
resolved (likely from the meta cache)
    3. The RPC connection is started to the region, at which point we may find 
out that the region name has changed (and abort the request for that region)
    4. For each region, we continue to send coprocessor RPCs until 
`PartialResultCoprocessorCallback::getNextCallable` returns `null`
    5. Once the last RPC has returned from the last region that the call had 
identified, we call `CoprocessorCallback::onComplete` to mark the call as 
done.
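
    The chain in steps 1–2 can be sketched roughly as below. This is an 
illustrative model only, not the actual HBase client classes: region 
boundaries are simplified to a sorted map of start key → end key (empty end 
key meaning "last region"), and the method name `locateRange` is made up for 
this sketch.

    ```java
    import java.util.ArrayList;
    import java.util.List;
    import java.util.TreeMap;

    public class CoprocessorRangeChain {

        // Walk from startKey, locating each region in turn (step 1) and
        // recording where the coprocessor RPC would be sent (step 2), until
        // the region containing endKey has been located.
        static List<String> locateRange(TreeMap<String, String> regions,
                                        String startKey, String endKey) {
            List<String> targets = new ArrayList<>();
            String key = startKey;
            while (true) {
                String regionStart = regions.floorKey(key); // locate region
                targets.add(regionStart);                   // send RPC here
                String regionEnd = regions.get(regionStart);
                if (regionEnd.isEmpty() || regionEnd.compareTo(endKey) >= 0) {
                    break;                                  // endKey covered
                }
                key = regionEnd;                            // next region
            }
            return targets;
        }

        public static void main(String[] args) {
            TreeMap<String, String> regions = new TreeMap<>();
            regions.put("", "d");
            regions.put("d", "m");
            regions.put("m", "");
            // Range [b, q] spans all three regions.
            System.out.println(locateRange(regions, "b", "q"));
        }
    }
    ```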
    
    This PR inserts logic at step 3, starting a new call for the affected 
range covered by the (outdated) region returned by step 2.
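
    A minimal sketch of that resubmission, under the same simplified model as 
above (these names are hypothetical, not the real client API): when the 
connection to a located region reveals the region changed, the sub-range that 
stale region was covering is resubmitted as a fresh locate-and-send chain 
against the updated boundaries, rather than aborting the whole call.

    ```java
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Set;
    import java.util.TreeMap;

    public class ResubmitOnStaleRegion {

        // cached: region boundaries as the client first located them
        // fresh:  boundaries after the merge/split
        // stale:  cached region start keys that turn out to be outdated
        static List<String> callRange(TreeMap<String, String> cached,
                                      TreeMap<String, String> fresh,
                                      Set<String> stale,
                                      String startKey, String endKey) {
            List<String> sentTo = new ArrayList<>();
            String key = startKey;
            while (true) {
                String regionStart = cached.floorKey(key);
                String regionEnd = cached.get(regionStart);
                boolean last = regionEnd.isEmpty()
                            || regionEnd.compareTo(endKey) >= 0;
                String subEnd = last ? endKey : regionEnd;
                if (stale.contains(regionStart)) {
                    // Step 3 found a changed region: resubmit only the
                    // sub-range it was covering, against fresh boundaries.
                    sentTo.addAll(
                        callRange(fresh, fresh, Set.of(), key, subEnd));
                } else {
                    sentTo.add(regionStart);
                }
                if (last) break;
                key = regionEnd;
            }
            return sentTo;
        }

        public static void main(String[] args) {
            TreeMap<String, String> cached = new TreeMap<>();
            cached.put("", "d");
            cached.put("d", "m");  // has since been split
            cached.put("m", "");
            TreeMap<String, String> fresh = new TreeMap<>();
            fresh.put("", "d");
            fresh.put("d", "h");   // "d".."m" split into two regions
            fresh.put("h", "m");
            fresh.put("m", "");
            System.out.println(
                callRange(cached, fresh, Set.of("d"), "b", "q"));
        }
    }
    ```

    The key design point is that only the affected sub-range is replayed; RPCs 
already sent to unaffected regions proceed unchanged.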
    
    Marking this as a draft because I'll need some time to fix 
formatting/linting issues, but I'd like to get tests running.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]
