sashapolo commented on code in PR #4543:
URL: https://github.com/apache/ignite-3/pull/4543#discussion_r1797216914
##########
modules/raft/src/main/java/org/apache/ignite/internal/raft/RaftGroupServiceImpl.java:
##########
@@ -687,36 +690,50 @@ private void handleErrorResponse(
             case EBUSY:
             case EAGAIN:
-                scheduleRetry(() -> sendWithRetry(peer, requestFactory, stopTime, fut, retryCount + 1));
+                scheduleRetry(peer, requestFactory, stopTime, fut, retryCount);
                 break;
-            case ENOENT:
-                scheduleRetry(() -> {
-                    // If changing peers or requesting a leader and something is not found
-                    // probably target peer is doing rebalancing, try another peer.
-                    if (sentRequest instanceof GetLeaderRequest || sentRequest instanceof ChangePeersAndLearnersAsyncRequest) {
-                        sendWithRetry(randomNode(peer), requestFactory, stopTime, fut, retryCount + 1);
-                    } else {
-                        sendWithRetry(peer, requestFactory, stopTime, fut, retryCount + 1);
-                    }
-                });
+            case ENOENT: {
+                Peer newTargetPeer;
+
+                // If changing peers or requesting a leader and something is not found
+                // probably target peer is doing rebalancing, try another peer.
+                if (sentRequest instanceof GetLeaderRequest || sentRequest instanceof ChangePeersAndLearnersAsyncRequest) {
+                    newTargetPeer = randomNode(peer);
+                } else {
+                    newTargetPeer = peer;
+                }
+
+                scheduleRetry(newTargetPeer, requestFactory, stopTime, fut, retryCount);
                 break;
+            }
+            case EHOSTDOWN:
+            case ESHUTDOWN:
+            case ENODESHUTDOWN:
             case EPERM:
                 // TODO: IGNITE-15706
             case UNKNOWN:
-            case EINTERNAL:
+            case EINTERNAL: {
+                Peer newTargetPeer;
+
                 if (resp.leaderId() == null) {
-                    scheduleRetry(() -> sendWithRetry(randomNode(peer), requestFactory, stopTime, fut, retryCount + 1));
+                    newTargetPeer = randomNode(peer);
Review Comment:
> Basically we already have a retryContext: stopTime, fut, retryCount. I'd literally encapsulate that into a new RetryContext class along with adding a stoppedPeers set (or excludeFromRetriesPeers, naming might be different).

That's exactly what I was trying to do, and it results in quite a lot of changes and more complicated code. If we are ok with that trade-off, I'll do it. But how useful is this change? Shut-down nodes will eventually be removed from the topology, and therefore from the `randomNode` results, because `randomNode` applies a topology filter internally.
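
To make the trade-off concrete, here is a rough sketch of what that encapsulation could look like (the `RetryContext` name comes from the suggestion above; `excludedPeers`, `nextAttempt` and the rest are made up for illustration and are not code from this PR):

```java
// Rough sketch only, not part of this PR. Peer is assumed to be the existing
// org.apache.ignite.internal.raft.Peer used in the diff above; the response
// future is kept as CompletableFuture<?> to avoid pinning a concrete type.
import java.util.Set;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.ignite.internal.raft.Peer;

class RetryContext {
    /** Deadline after which retries stop and the future is completed exceptionally. */
    final long stopTime;

    /** Future completed with the final response or failure. */
    final CompletableFuture<?> fut;

    /** Peers that reported EHOSTDOWN/ESHUTDOWN/ENODESHUTDOWN and should be skipped on retry. */
    final Set<Peer> excludedPeers = ConcurrentHashMap.newKeySet();

    /** Number of attempts made so far. */
    int retryCount;

    RetryContext(long stopTime, CompletableFuture<?> fut) {
        this.stopTime = stopTime;
        this.fut = fut;
    }

    /** Records a failed peer and bumps the attempt counter before the next retry. */
    RetryContext nextAttempt(Peer failedPeer) {
        excludedPeers.add(failedPeer);
        retryCount++;

        return this;
    }
}
```

Even this minimal version means threading one more object through `sendWithRetry`/`scheduleRetry` and every call site that currently passes `stopTime`, `fut` and `retryCount` separately.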

The other thing is that `peers` is a `volatile` field and it may get updated externally, which actually happens during MS or CMG repair. So we would also need a way to update all ongoing `RetryContext`s. Again, doable, but that requires adding tracking code, which complicates the solution even more.
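
And a rough sketch of the extra bookkeeping that the volatile `peers` field would force on the service (again, `activeRetries` and `updatePeers` are made-up names, shown only to illustrate the kind of tracking code I mean):

```java
// Rough sketch only: a fragment of the service class, not part of this PR.
// In-flight contexts have to be registered somewhere so that their excluded
// peers can be reconciled when `peers` is replaced, e.g. during MS or CMG repair.
private volatile List<Peer> peers;

private final Set<RetryContext> activeRetries = ConcurrentHashMap.newKeySet();

void updatePeers(List<Peer> newPeers) {
    peers = List.copyOf(newPeers);

    // Drop exclusions that no longer refer to members of the new configuration;
    // otherwise a stale excluded set could end up filtering out every remaining peer.
    for (RetryContext ctx : activeRetries) {
        ctx.excludedPeers.retainAll(newPeers);
    }
}
```

Every retry path would then have to register and deregister its context here, which is exactly the tracking code I'd rather avoid.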