I've missed part of this conversation, but here are my two cents on this specific question: keep increasing the size of the bursts you send, and the rate at which you send them, until you hit a target error rate (e.g. 2%, or whatever suits your application). Once you're bumping up against failures, you should have a good sense of the optimal rate. Stay sensitive to TCP congestion while you probe; I back off if the round-trip time starts spiking.
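To make that concrete, here's a rough sketch of the kind of controller I mean, in the spirit of AIMD. All the names and thresholds here are my own illustrations, not from any real stack; tune them to your link.

```python
TARGET_LOSS = 0.02   # e.g. 2% target error rate
RTT_SPIKE = 1.5      # back off if RTT exceeds 1.5x its smoothed baseline
MIN_RATE = 1_000     # floor the rate at 1 KB/s so probing never stalls

def next_rate(rate, loss_rate, rtt, smoothed_rtt,
              increase_step=10_000, backoff=0.5):
    """Return the burst rate (bytes/s) to use for the next probe round.

    Additively increase while loss and RTT look healthy; cut the rate
    multiplicatively when either crosses its threshold.
    """
    if loss_rate > TARGET_LOSS or rtt > RTT_SPIKE * smoothed_rtt:
        return max(rate * backoff, MIN_RATE)  # multiplicative decrease
    return rate + increase_step               # additive increase
```

You'd call this once per burst with the loss rate and RTT measured from the previous burst, keeping a smoothed RTT (e.g. an EWMA) as the baseline for spike detection.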
Thanks,
-greg

Quoting David Barrett <[EMAIL PROTECTED]>:

> > -----Original Message-----
> > From: coderman
> > Sent: Saturday, April 01, 2006 5:20 PM
> > To: Peer-to-peer development.
> > Subject: Re: [p2p-hackers] Hard question....
> >
> > On 4/1/06, David Barrett <[EMAIL PROTECTED]> wrote:
> > > ...
> > > Incidentally, how are you measuring "available bandwidth"?
> >
> > right now i pass the buck and let the user pick a suitable limit. if
> > excessive loss is detected continuously the stack can cut by half or
> > exit with error.
> >
> > i'm still looking for better ways to do this; ideally it would be tied
> > to kernel level shaping and based on a historical view of channel
> > capacity.
>
> Got it. Has anyone else had good experience trying to measure this
> automatically in the real world?
>
> -david
>
> _______________________________________________
> p2p-hackers mailing list
> [email protected]
> http://zgp.org/mailman/listinfo/p2p-hackers
> _______________________________________________
> Here is a web page listing P2P Conferences:
> http://www.neurogrid.net/twiki/bin/view/Main/PeerToPeerConferences
