On 12/9/11 3:35 AM, Pierre Ossman wrote:
>> If I instead run with the regular 100 Mbps transfer rate but still
>> artificially add latency via qdisc, then I can see a clear benefit, both
>> qualitatively and quantitatively-- as long as I jack up the TCP max
>> buffer sizes to 8 MB in /etc/sysctl.conf:
>>
> 
> The default of 4 MB was insufficient for you? I never hit that size
> during my tests.

The default wmem_max on Red Hat Enterprise Linux 5 is much lower than 4 MB--
more like 256 KB, I think.  TigerVNC is using something like 2 MB whenever
it's delivering its best performance in the 200 ms configuration.
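For reference, raising the ceilings to 8 MB would look something like the
fragment below.  This is a sketch, not the exact settings used in the tests
above; the min/default fields are typical distro values, and the keys are the
standard Linux TCP buffer sysctls:

```shell
# Illustrative /etc/sysctl.conf fragment raising the TCP buffer
# ceilings to 8 MB (8388608 bytes).  The first two set the hard caps;
# tcp_rmem/tcp_wmem are "min default max" triples, where only the max
# field is raised here.
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
net.ipv4.tcp_rmem = 4096 87380 8388608
net.ipv4.tcp_wmem = 4096 65536 8388608
```

Apply without rebooting via `sysctl -p` (as root).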


>> I'm not sure how realistic that configuration is, but it's not the first
>> time I've used it for benchmarking.  I have yet to find a reliable way
>> to limit the bandwidth.  At least on my system, trying to use qdisc for
>> that as well produces very unpredictable results.
> 
> It has worked fine here for us. I can dig up the script we use if you'd
> like.

Yes, I'd like to see that.
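In the meantime, a minimal netem setup for the latency side looks roughly like
this.  The interface name and numbers are assumptions, and this is not the
script Pierre refers to; the second pair of commands shows the usual way to
chain a token-bucket filter under netem for rate limiting (the combination
reported above as unpredictable):

```shell
# Run as root; eth0 and all values are illustrative.
# Add 200 ms of delay to outgoing packets:
tc qdisc add dev eth0 root netem delay 200ms

# Or, to combine delay with a bandwidth cap, make netem the root
# qdisc and attach a tbf (token bucket filter) beneath it:
tc qdisc add dev eth0 root handle 1:0 netem delay 200ms
tc qdisc add dev eth0 parent 1:1 handle 10: tbf rate 100mbit buffer 32000 limit 100000

# Remove everything when done:
tc qdisc del dev eth0 root
```

Note that this shapes only egress on eth0, so round-trip latency depends on
whether the return path is shaped as well.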

_______________________________________________
Tigervnc-devel mailing list
Tigervnc-devel@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/tigervnc-devel