One thing that always bothered me about NGRouting is that we only anticipate having to retry once. Originally I wanted to use a Ramanujan sum to determine the overall time, but Ian pointed out that we don't want to retry infinitely. NGRouting estimates the time for success and the time to retry if we fail, and multiplies each by its respective probability (roughly the one-retry sketch below). It occurs to me that what we are really trying to maximize is the probability that we will have the data within some given time T. And we already know the time T we are optimizing for: it is the timeout.
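For reference, here is roughly what that single-retry estimate amounts to, as a Java sketch (Java because that is what the node is written in; every identifier below is illustrative, not an actual NGRouting name):

    // Sketch of the current NGRouting-style estimate, which only
    // anticipates a single retry. Illustrative names throughout.
    double estimateWithOneRetry(double pSuccess, double tSuccess,
                                double tFail, double tRetrySuccess) {
        double pFail = 1.0 - pSuccess;
        // expected time = P(success) * time to succeed
        //               + P(failure) * (time to fail + time for the one retry)
        return pSuccess * tSuccess + pFail * (tFail + tRetrySuccess);
    }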
So for one node:

    ProbRetrieveTimeIsUnderTimeout = 1 - pFail^NumFail

where NumFail is the number of failed attempts we can absorb and still finish in time:

    timeout >= tSuccess + NumFail * tFail
    timeout - tSuccess >= NumFail * tFail
    (timeout - tSuccess) / tFail >= NumFail

so:

    ProbRetrieveTimeIsUnderTimeout = 1 - pFail^((int)((timeout - tSuccess) / tFail))

Or, for a given node using the global estimators:

    estimatedTime = pSuccess * tSuccess + pFail * (tFail + tGlobalSuccess)

where tGlobalSuccess is:

    tSuccess + sum(tFail * pFail^numFails,
                   numFails, 0, (int)((timeout - tFail - tSuccess) / tFail))

Note that we need to add the tFail in because by this point we have already failed once. Also note that all of these numbers are the global estimates. The sum is a finite geometric series: summed from 0 to N it carries the factor (1 - pFail^(N+1)), and here N + 1 = (int)((timeout - tSuccess) / tFail), which is why that exponent appears when the whole thing reduces to:

    (pFail^((int)((timeout - tSuccess) / tFail)) * tFail) / (pFail - 1)
        - tFail / (pFail - 1) + tSuccess

We may want to add a built-in fudge factor to ensure that we get the data in before the timeout. One possible value would be the worst-case value of tFail; that way, if one failure is particularly slow, we should still get the data on time:

    (pFail^((int)((timeout - tSuccess - fudgeFactor) / tFail)) * tFail) / (pFail - 1)
        - tFail / (pFail - 1) + tSuccess

Also, we never retry more than a set number of times, so always test to make sure that:

    (timeout - tSuccess - fudgeFactor) / tFail < MaxRetries
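To make this concrete, here is what the probability bound looks like in code (a sketch only; pFail, tSuccess, tFail and timeout stand for the global estimates, and none of the names below come from the actual codebase):

    // Probability that we get the data before the timeout.
    // NumFail = (int)((timeout - tSuccess) / tFail) is the number of
    // failed attempts we can absorb and still finish in time.
    double probRetrieveTimeIsUnderTimeout(double pFail, double tSuccess,
                                          double tFail, double timeout) {
        int numFail = (int) ((timeout - tSuccess) / tFail);
        if (numFail < 0)
            return 0.0; // even an immediate success would miss the timeout
        return 1.0 - Math.pow(pFail, numFail);
    }

For example, with pFail = 0.5, tSuccess = 2s, tFail = 1s and timeout = 10s, numFail is 8 and the probability is 1 - 0.5^8, about 99.6%.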
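And the closed form, with the fudge factor and the retry cap folded in (same caveats: illustrative names only, and clamping the exponent at MaxRetries is just one way to apply the test above):

    // Closed form of tGlobalSuccess. The geometric series
    //   sum(tFail * pFail^k, k = 0 .. n)
    // collapses to tFail * (1 - pFail^(n+1)) / (1 - pFail), with
    // n + 1 = (int)((timeout - tSuccess - fudgeFactor) / tFail).
    // Assumes 0 <= pFail < 1.
    double tGlobalSuccess(double pFail, double tSuccess, double tFail,
                          double timeout, double fudgeFactor,
                          int maxRetries) {
        int exponent = (int) ((timeout - tSuccess - fudgeFactor) / tFail);
        // We never retry more than a set number of times, so never
        // credit the series with more retries than we would make.
        if (exponent > maxRetries)
            exponent = maxRetries;
        return (Math.pow(pFail, exponent) * tFail) / (pFail - 1.0)
             - tFail / (pFail - 1.0)
             + tSuccess;
    }

Plugging in the same numbers with fudgeFactor = 1s gives an exponent of 7 and a tGlobalSuccess just under 4 seconds; the per-node estimate is then pSuccess * tSuccess + pFail * (tFail + tGlobalSuccess) as above.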
