I have a question that first started as an issue reported to the grpc-go 
client <https://github.com/grpc/grpc-go/issues/7514>, then a question on 
Gitter 
<https://matrix.to/#/!LWDFtYAcIVyULYUnXs:gitter.im/$NTr_B4whf71VHcXiBnNB78hzqF46RrYSTTFIGXnaIXU?via=gitter.im&via=matrix.org>, 
and then a question on StackOverflow 
<https://stackoverflow.com/questions/78877444/grpc-client-core-c-retrypolicy-logic-vs-go-client>. 
But because the SO post was likely too opinion-based, it makes the most 
sense to just ask for clarification here on this list. The StackOverflow 
question is closed but is still a good, detailed reference for my question.

The retryPolicy, as it was proposed 
<https://github.com/grpc/proposal/blob/master/A6-client-retries.md#integration-with-service-config>, 
says that implementations should use a backoff that is fully randomized 
from 0 up to the current calculated backoff cap. This is how the grpc-go 
client is implemented. However, the core implementation, which also drives 
the Python/C++ clients, uses a growing exponential backoff with randomized 
jitter. I had a lot of trouble with the grpc-go implementation when trying 
to get it to actually retry for long enough within an expected window; 
fiddling with the settings seemed to have very little influence on the 
minimum retry time. I find the core implementation more correct and 
reasonable.

Which one is correct? The grpc-go client seems to do what the retryPolicy 
proto doc defines, but that behavior feels wrong. The core implementation 
does not follow the spec, yet feels like a proper implementation of 
exponential backoff.

Justin


-- 
You received this message because you are subscribed to the Google Groups 
"grpc.io" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/grpc-io/69a85400-4dfe-46af-b169-7969a536b5cbn%40googlegroups.com.