On 01/30/2014 12:14 PM, Kinkie wrote:
> Ok, here's some numbers (same testing methodology as before)
>
> Trunk: mean RPS (CPU time)
> 10029.11 (996.661872)
> 9786.60 (1021.695007)
> 10116.93 (988.395665)
> 9958.71 (1004.039956)
>
> stdvector: mean RPS (CPU time)
> 9732.57 (1027.426563)
> 10388.38 (962.418333)
> 10332.17 (967.824790)
OK, so we are probably just looking at noise: the variation within each data set (1.4% or 3.6% relative standard deviation) is about the same as, or larger than, the difference between the set averages (1.8%). We will test this in a Polygraph lab, but I would not be surprised to see no significant difference there either.

> Some other insights I got by varying parameters:
> By raw RPS, it seems that performance varies with the number of
> parallel clients in this way (best to worst)
> 100 > 10 > 500 > 1

Sure, any best-effort test will have such a dependency: too few best-effort clients do not create enough concurrent requests to keep the proxy busy (it has time to sleep), while too many best-effort robots overwhelm the proxy with too many concurrent requests (the per-request overheads grow, decreasing the total throughput).

Cheers,

Alex.
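[Editor's note: the noise-vs-signal figures quoted above (1.4%, 3.6%, 1.8%) can be reproduced with a short script. This is a sketch using Python's standard `statistics` module, with the RPS samples copied from the quoted message:]

```python
from statistics import mean, stdev

# RPS samples quoted in the message above
trunk = [10029.11, 9786.60, 10116.93, 9958.71]
stdvector = [9732.57, 10388.38, 10332.17]

# Relative sample standard deviation within each set: the "noise".
trunk_cv = stdev(trunk) / mean(trunk)
vector_cv = stdev(stdvector) / mean(stdvector)

# Relative difference between the set averages: the "signal".
diff = abs(mean(stdvector) - mean(trunk)) / mean(trunk)

print(f"trunk noise:     {trunk_cv:.1%}")   # ~1.4%
print(f"stdvector noise: {vector_cv:.1%}")  # ~3.6%
print(f"mean difference: {diff:.1%}")       # ~1.8%
```

Since the signal (1.8%) sits between the two noise levels, the benchmark cannot distinguish the branches, which is the conclusion drawn above.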