* Matthew Toseland <toad at amphibian.dyndns.org> [2008-07-11 15:03:59]:

> On Friday 11 July 2008 04:23, Florent Daignière wrote:
> > * Matthew Toseland <toad at amphibian.dyndns.org> [2008-07-10 16:55:23]:
> > 
> > > On Thursday 10 July 2008 10:44, Florent Daignière wrote:
> > > > * Matthew Toseland <toad at amphibian.dyndns.org> [2008-07-07 12:17:33]:
> > > > > 7. Automatic bandwidth limit calibration. (Toad)
> > > > > 
> > > > > Several other p2p apps implement this; we should too. Bandwidth is
> > > > > *the* scarce resource most of the time: we want to use as much of it
> > > > > as we can without significantly slowing down the user's internet
> > > > > connection (see above).
> > > > 
> > > > I don't think that such a thing can reliably work. It might work in 80%
> > > > of the cases but will badly screw up in others.
> > > 
> > > It works for other p2ps. What specifically is the problem for Freenet?
> > > A small number of connections?
> > 
> > Small number of connections *and* usage of UDP! Do you know any p2p
> > protocol which uses mainly UDP and does what you call "automatic
> > bandwidth limit calibration"?
> > 
> > E2k uses TCP, BitTorrent uses TCP... As far as I know, only their links
> > to DHTs use UDP (Kademlia); they don't use it for data transfer.
> 
> Then how do they get through NATs? Are you sure your information is up to 
> date?

They don't, or they fall back to a UDP-based scheme to establish the
connection; in any case they don't do the actual data transfer over UDP,
I am sure of that.

> > > > > TO IMPLEMENT IN 0.9:
> > > > > 
> > > > > 1. Streams (low level optimisation)
> > > > > 
> > > > > At the moment, Freenet is quite inefficient in bandwidth usage: the
> > > > > payload percentage, especially on nodes with low bandwidth limits but
> > > > > also on nodes with high bandwidth limits, can be quite low, 70%, maybe
> > > > > 50%... this is unacceptable. A lot of this is due to message padding.
> > > > > If we split block transfers (and large messages) into streams which
> > > > > can be sliced up byte-wise however big we want them, rather than into
> > > > > 1K blocks, then we will need much less padding and so achieve a
> > > > > higher payload percentage. This also has significant side benefits
> > > > > for steganography and for connections with low MTU, since we can (on
> > > > > a link with live streams) construct useful packets of almost any
> > > > > size.
> > > > 
> > > > I am not convinced that I understand the logic here... I can't believe
> > > > you guys plan on implementing (7) without that.
> > > 
> > > How is it related?
> > > > 
> > > > In case you didn't suspect it, here is some big news:
> > > > "available and effective throughput depend on packet sizes, amongst
> > > > other things". At the moment we have oversized packets with a
> > > > per-link MTU... which is poorly guessed and never adapted.
> > > 
> > > And how exactly do you propose we figure out the MTU from Java? More
> > > JNI? :(

That's one solution, yes.

> > We can do statistics on each link and detect for which MTU value the
> > efficiency is maximum... but that requires being able to send
> > arbitrarily long data packets ;)
> 
> That doesn't tell us anything about available bandwidth. Agreed, we need
> to figure out the MTU, but IMHO it will be rather difficult to do
> reliably.
> > 
> > Doing it the other way around (doing statistics to determine the
> > send-rate with a fixed, arbitrary MTU) is just stupid.
> > 
> > Most of the QoS schemes on UDP use the following criteria:
> >     - size of packets (we can't act on that until we have streams)
> >     - send rate, i.e. frequency (if we don't also act on packet size,
> >       changing it affects the throughput)
> >     - port number used (we can't act on that until we have
> >       "transport plugins")
> 
> We can use any port number, we just don't want to be fingerprinted/blocked by 
> it.
> > 
> > Keep in mind that UDP is stateless; most of the time QoS is used to
> > prevent bandwidth from being monopolized by one low-priority
> > application... QoS doesn't work well on UDP flows because they are
> > stateless! Hence they are either under-prioritized or over-prioritized
> > (people don't want their DNS requests to take ages). Most of the time
> > there is QoS happening on the ISP's side as well (they tend to
> > prioritize ICMP because gamers are interested in round-trip time, and
> > most of them don't suspect that there are different policies between
> > ICMP and UDP; most games use UDP).
> > 
> > My point is: if you want auto-bandwidth calibration to work and to be
> > effective, you *will* have to adapt the per-link MTU and as far as I
> > know that can be done only dynamically, using a stream based transport.
> 
> Dealing properly with MTU will be a lot of work. Is it worth it?
> 

I think so.

> And then we'd have to figure out how much bandwidth we can use as well.
> 
> And the two mechanisms will interfere with each other's statistics: the
> only reliable way to discover the MTU is to set the don't-fragment (DF)
> flag.

Of course they will... and we will have to use both in order to maximize
performance.

As far as I know, there is no way to set the DF flag from Java short of
using JNI.

NextGen$