On 03/13/2014 12:06 PM, Ben Laurie wrote:
> So, total average load is 3 * b * w / l ~ 20,000 web fetches per
> second.
This part i follow (you're switching temporal units between months,
years, and seconds, but i get roughly the same final figures).
> If we optimise the API we can get that down to 7,000 qps. Each
> query (in the optimised case) would be around 3 kB,
And i agree this seems like a win. Why was the API broken into three
parts originally, instead of returning the complete proof? What (other
than conceptual cleanliness) might we lose by creating the optimized API?
> which gives a bandwidth of around 150 kb/s.
This looks off by a few orders of magnitude to me: 7 kqps at 3 kB per
query gives me 7000 * 3000 * 8 bits per second, which is 168 Mbps. Am i
missing something?
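For concreteness, here is the arithmetic spelled out (assuming decimal
kilobytes, i.e. 3 kB = 3000 bytes; with 1024-byte kilobytes the figure
comes out around 172 Mbps instead):

```python
# Recompute the bandwidth from the figures in the thread.
qps = 7000                 # optimised API: queries per second
bytes_per_query = 3 * 1000 # 3 kB per query (decimal kB assumed)

bits_per_second = qps * bytes_per_query * 8
print(bits_per_second / 1e6, "Mbps")  # 168.0 Mbps
```

Either way the result is in the hundreds of megabits per second, not
150 kb/s.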
Should we be considering swarm-based distribution of this kind of data,
or hierarchical proxying for load distribution?
--dkg
_______________________________________________
Trans mailing list
[email protected]
https://www.ietf.org/mailman/listinfo/trans
