Dear Kyle (et alia):
I've thought some more about this and talked about it a bit with my
wife, Amber, and I have a few more comments.
* I realized that since your small files were themselves 64 KiB each,
then any segment size ≥ 64 KiB would have the same effect as any
other segment size ≥ 64 KiB.
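To see why, here is a minimal sketch of the segment-count arithmetic; `num_segments` is a hypothetical helper for illustration, not Tahoe-LAFS's actual API:

```python
import math

def num_segments(file_size, segment_size):
    """Hypothetical: how many segments a file splits into for upload."""
    return max(1, math.ceil(file_size / segment_size))

KiB = 1024

# A 64 KiB file fits in a single segment for any segment size >= 64 KiB,
# so all such settings behave identically for these small files.
for seg in (64 * KiB, 128 * KiB, 1024 * KiB):
    print(seg // KiB, "KiB segments ->", num_segments(64 * KiB, seg), "segment(s)")
```

Only a segment size smaller than 64 KiB would change how these small files are chunked.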
Dear Kyle:
I'm grateful to you for taking the time to run these benchmarks and
report the results to us.
I'm wondering what we can do to make sure that these benchmarks serve
their purpose and don't just get wasted.
The best thing, of course, would be if we could make them automated so
that
Brian Warner writes:
Yup. I suspect that your large files are running into Python's performance
limits: the best way to speed those up will be to move our transport to
something with less overhead (signed HTTP is our current idea, ticket
#510), then to start looking at what pieces can be
Brian Warner writes:
The fastest data rate you're seeing here is 64 MiB/14.80 s, so about
4.53 MB/s or roughly 36 Mbps, which is probably about the middle of what
you'd expect out of a 100 Mbps ethernet (maybe a bit on the low side, but
not by much). Was the client CPU pegged during the upload?
On 7/25/10 8:10 PM, Kyle Markley wrote:
Brian,
Yeah, I think we're approximately saturating the network during the large
file transfers. But for the small files, both network and CPU load are
very low (under 10%).
Brian Warner wrote:
On 7/25/10 8:10 PM, Kyle Markley wrote:
I wonder whether temporary file creation for the small file transfers
might be part of the problem. I know that temporary files are created on
occasion; could someone explain precisely when?
There aren't very many. The most