angelfall-yy edited a comment on issue #2176:
URL:
https://github.com/apache/servicecomb-java-chassis/issues/2176#issuecomment-752813602
I read the code, and the main point I found is that WeightedResponse uses the mean latency from Ribbon's LoadBalancerStats.
```java
private List<Double> doCalculateTotalWeights(List<ServiceCombServer> servers) {
  List<Double> stats = new ArrayList<>(servers.size() + 1);
  double totalWeights = 0;
  boolean needRandom = false;
  for (ServiceCombServer server : servers) {
    ServerStats serverStats = loadBalancer.getLoadBalancerStats().getSingleServerStat(server);
    // key point: the *mean* response time is used as the weight
    double avgTime = serverStats.getResponseTimeAvg();
    if (!needRandom && avgTime > MIN_GAP) {
      needRandom = true;
    }
    totalWeights += avgTime;
    stats.add(avgTime);
  }
  stats.add(totalWeights);
  totalWeightsCache = totalWeights;
  if (needRandom) {
    return stats;
  } else {
    return new ArrayList<>();
  }
}
```
Suppose we have one microservice m1 using the WeightedResponse load-balancing strategy, with two endpoints p1 and p2.
At first, p1's latency is 100ms and p2's latency is 100ms, and they run like that for one year. The mean latency of p1 is 100ms, so p1 handles 50% of the requests.
One day p1's latency jumps to 1000ms while p2's latency stays at 100ms. The mean latency of p1 becomes (100*365 + 1000) / 366 ≈ 102ms.
WeightedResponse uses p1's mean latency to decide whether p1 should be selected. But 100ms (running well for one year) and 102ms (one bad day) are almost the same, so WeightedResponse's calculation barely changes. p1 will still handle about 50% of the requests, even though its latency is now 10 times higher than before.
Is that right?
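To make the effect concrete, here is a minimal, self-contained sketch (not ServiceComb or Ribbon code; the class name, method, and the EWMA smoothing factor are my own assumptions). It contrasts the cumulative mean described above with an exponentially weighted moving average, which reacts to a recent latency spike much faster:

```java
public class LatencyStatsDemo {
    // Hypothetical simulation: `steady` samples at steadyMs, then one spike at spikeMs.
    // Returns {cumulativeMean, ewma} after the spike.
    static double[] simulate(int steady, double steadyMs, double spikeMs, double alpha) {
        double mean = 0;
        int n = 0;
        for (int i = 0; i < steady; i++) {
            n++;
            mean += (steadyMs - mean) / n; // incremental cumulative mean
        }
        double ewma = steadyMs; // EWMA of a constant series converges to the constant
        n++;
        mean += (spikeMs - mean) / n;          // one bad sample barely moves the mean
        ewma = alpha * spikeMs + (1 - alpha) * ewma; // but it moves the EWMA a lot
        return new double[] { mean, ewma };
    }

    public static void main(String[] args) {
        // 365 "days" at 100 ms, then one "day" at 1000 ms, alpha = 0.2 (assumed)
        double[] r = simulate(365, 100, 1000, 0.2);
        System.out.printf("cumulative mean after spike: %.1f ms%n", r[0]); // ≈102.5 ms
        System.out.printf("EWMA after spike:            %.1f ms%n", r[1]); // 280.0 ms
    }
}
```

With the numbers from the scenario, the cumulative mean moves only from 100ms to about 102.5ms, while the EWMA jumps to 280ms after a single spike, so a decaying average would redistribute traffic away from p1 far sooner.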
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]