On Fri, 13 Feb 2009 09:11:48 +0100 Francois Deppierraz <[email protected]> wrote:
> Brian Warner wrote:
>
> > Yeah, this is disappointing. Our automated speed tests [1] show an
> > upload speed between 0.8MBps and 1.4MBps, using 100MBps local
> > bandwidth, which is much slower than we'd like.
>
> It means that my upload speed (0.125 MBps) 180ms away from
> allmydata.com servers is still 8 times slower than your automated
> speed tests.
>
> Does it mean that latency hurts throughput that much?

Yeah, I think so: windowing protocols are a good thing. The speed is
certainly affected by encoding/encryption/serialization overhead too, but
I don't think we'll even be able to measure those until we remove the
slack from the protocol by opening up the window.

> For the record, uploading plaintext directly on the webapi of a
> production allmydata.com node allows me to saturate my upstream
> bandwidth. Thanks to TCP's efficient flow control.

Yeah, what's happening there is that you're getting good utilization of a
relatively narrow pipe to get your data onto the webapi node, then you sit
around waiting while the webapi node gets poor utilization of a really fat
pipe to the storage servers. Our hope is that the combination looks close
enough to your local upload speed that we can hide our slowness behind
your DSL line :-).

cheers,
 -Brian
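
As a rough illustration of the latency effect described above, here is a
back-of-the-envelope sketch in Python. The 128 KiB "in flight" figure is a
hypothetical stand-in, not Tahoe's actual protocol window; the point is only
that a fixed amount of data outstanding per round trip, divided by the
round-trip time, caps throughput no matter how fat the link is.

# Illustrative only: estimates the throughput ceiling imposed by a
# stop-and-wait style protocol that keeps a fixed amount of data in
# flight per round trip.  The 128 KiB figure is hypothetical, not a
# real Tahoe parameter.

def window_limited_throughput(bytes_in_flight, rtt_seconds):
    """Best-case bytes/second when only bytes_in_flight can be
    outstanding before waiting one full round trip."""
    return bytes_in_flight / rtt_seconds

block = 128 * 1024              # hypothetical bytes in flight
for rtt in (0.001, 0.180):      # ~1 ms LAN vs. 180 ms WAN round trip
    mbps = window_limited_throughput(block, rtt) / 1e6
    print("RTT %5.0f ms -> at most %6.2f MBps" % (rtt * 1000, mbps))

# On a ~1 ms LAN the ceiling is ~131 MBps, so the local link (not the
# window) is the bottleneck; at 180 ms it drops to ~0.73 MBps.  Keeping
# more data in flight ("opening up the window") is what raises it.

The exact numbers depend on how much the real protocol actually keeps in
flight, which this sketch does not attempt to model.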
