On May 14, 2009, at 5:06 PM, Matthew Toseland wrote:

On Thursday 14 May 2009 20:36:31 Robert Hailey wrote:

On May 14, 2009, at 12:17 PM, Matthew Toseland wrote:

Because we were both on the same LAN, it did not connect, until I told him to set it to allow local addresses on that peer. There should be a checkbox when adding a noderef, defaulting to on, "Friend may be on the same local network as me" or something. (#3098)

I think that it should be possible to automatically detect this.
Specifically, if we detect that our peer has the same "external
address" as us, try and connect locally. Is that a reliable indicator?

Not very (what if it changes?)... We don't want darknet peers to cause us to connect to addresses on our LAN... otherwise the solution is simply to try the local addresses included...

I agree that we don't want to try the local addresses of ~all~ our peers; local connections are in principle an exception, and not cause enough to be sending so much local garbage handshaking traffic (which could give away a freenet node).

But I think this is the right way to go... If allow/disallow local connections is not defined for a given peer, then we could just calculate the default boolean as his.extAddresses.containsAny(my.extAddresses)... If it matches, then there is an excellent chance (99%) of it being a local connection. If it changes, that's great: the decision to try local connections would change too, as it's probably a laptop that has floated out of the LAN (and we would want to stop sending handshakes to the local address anyway). At worst we might send a few garbage handshakes to a local non-freenet machine until we connect to the node and find its new external address.
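A minimal sketch of that default, assuming illustrative names (shouldTryLocalAddresses and the address lists are hypothetical, not the actual Freenet API): if any of the peer's advertised external addresses matches one of ours, default to trying its LAN addresses.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

// Hypothetical sketch: decide whether to try a peer's LAN addresses by
// checking for any shared external address (his.extAddresses vs. ours).
public class LocalPeerHeuristic {
    static boolean shouldTryLocalAddresses(List<String> myExtAddresses,
                                           List<String> peerExtAddresses) {
        Set<String> mine = new HashSet<>(myExtAddresses);
        for (String addr : peerExtAddresses) {
            // Shared external address => almost certainly behind the same NAT gateway.
            if (mine.contains(addr)) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(shouldTryLocalAddresses(
                List.of("203.0.113.7"), List.of("203.0.113.7"))); // true
        System.out.println(shouldTryLocalAddresses(
                List.of("203.0.113.7"), List.of("198.51.100.2"))); // false
    }
}
```

Because the check is recomputed from the peer's current external addresses, the "laptop floats out of the LAN" case corrects itself the next time the addresses update.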

The reason I think this would solve the problem is that (I think) the principal reason the nodes could not communicate is that the gateway would have to send a packet to itself (many inexpensive or firewalled ones will not).

e.g.
internal -> firewall -> external (generally works w/ current countermeasures)

internal -->
                firewall
internal <--

Does not work (at least for me). So what I propose is really an extension of "detecting the problem" (the gateway which has the same external IP). I don't know about multihoming, but I presume this works in the common case.


Once connected to my node, it repeatedly RNFed on the top block of TUFI. Performance with one peer is expected to be poor, but it is worse than IMHO it could be. Some sort of fair sharing scheme ought to allow a darknet peer that isn't doing much to have a few requests accepted while we are rejecting tons of requests from other darknet peers and opennet peers. (#3101)

I second that, but I'm not sure as to the best implementation.

On the surface this appears to be the same issue as balancing local/remote requests. i.e. if your node is busy doing everyone else's work, your requests should take clear advantage when you finally get around to clicking a link.

Possibly. It is indeed a load balancing problem. Queueing will help, or maybe simulated queueing.

I think this conflicts with the current throttling mechanism; piling on requests till one or both nodes say 'enough',

Is this how it works now?

Yep. To the best of my understanding...

Q: How do we determine if we are going to take a remote request?
A: If there is "room" to incur its expected bandwidth.

Premise-1: all chks transfer at an equal rate

Result-1: new transfers squeeze bandwidth from other transfers.
Result-2: a node will accept any number of transfers until such a point that all of them would go over the maximum allowed transfer time.

Effectively, every transfer is slowed toward the slowest allowed transfer (not counting local traffic or non-busy nodes). This is why I advocated lowering the maximum allowed transfer time as a general speedup.
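The acceptance rule described above can be sketched as follows (a simplification under Premise-1; the method and parameter names are illustrative, not Freenet's actual code). With N transfers sharing the output bandwidth equally, a new transfer is accepted only if every transfer would still finish within the maximum allowed transfer time:

```java
// Hypothetical sketch of the accept-if-there-is-room rule: with
// current+1 equal-rate transfers sharing the bandwidth, each one gets
// bandwidth/(current+1) bytes/s; accept only if a block still finishes
// within maxTransferSeconds at that rate.
public class TransferAcceptance {
    static boolean accept(int currentTransfers, double bandwidthBytesPerSec,
                          double blockBytes, double maxTransferSeconds) {
        double perTransferRate = bandwidthBytesPerSec / (currentTransfers + 1);
        return blockBytes / perTransferRate <= maxTransferSeconds;
    }

    public static void main(String[] args) {
        // 32 KiB CHK blocks, 16 KiB/s output, 120 s limit => room for 60 transfers.
        System.out.println(accept(59, 16384, 32768, 120)); // true
        System.out.println(accept(60, 16384, 32768, 120)); // false
    }
}
```

This makes Result-2 visible: the node keeps accepting until every transfer is pinned at the maximum transfer time, which is exactly why lowering that limit speeds up each individual transfer.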

and if we reserve some space we will not hit our bandwidth goal. Or that requests are actually "competing" like collisions on a busy ethernet channel rather than having an order.

Yes, it is very much like that.

Does that mean that the logical alternative is token passing?

In fact... guaranteeable request acceptance (token passing) might actually be a logical prerequisite for a fair-queueing system.

Interestingly, any node can measure how many requests it can accept right now (for bandwidth), but can only guess at its peers' capacity (by backoff); so we may well accept requests which we cannot make good on, because our peers cannot accept them (ethernet-collision logic at every hop).
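A toy version of the token-passing idea, under loud assumptions (TokenGrant, grant, and trySend are invented names, and real tokens would travel over the wire): a node divides the requests it can currently accept among its peers as tokens, and a peer sends a request only when it holds one, so acceptance is guaranteed rather than guessed at from backoff.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical token-passing sketch: the accepting node hands out tokens
// up to its measured capacity; holding a token guarantees the request
// will be accepted, replacing the collide-and-back-off guesswork.
public class TokenGrant {
    private final Map<String, Integer> tokens = new HashMap<>();

    // Split the currently measured capacity evenly among peers.
    void grant(String[] peers, int capacity) {
        int share = capacity / peers.length;
        for (String p : peers) tokens.merge(p, share, Integer::sum);
    }

    // A peer may send only while it holds a token.
    boolean trySend(String peer) {
        Integer t = tokens.get(peer);
        if (t == null || t == 0) return false;
        tokens.put(peer, t - 1);
        return true;
    }

    public static void main(String[] args) {
        TokenGrant g = new TokenGrant();
        g.grant(new String[]{"A", "B"}, 4); // 2 tokens each
        System.out.println(g.trySend("A")); // true
        System.out.println(g.trySend("A")); // true
        System.out.println(g.trySend("A")); // false: A is out of tokens
    }
}
```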

One thing that I was playing around with earlier was refactoring PacketThrottle to be viewed from the queue side. Rather than "sendThrottledPacket" blocking till "a good time" to enqueue a message based on the throttle, all the packets would be serially available, interleaved (e.g. PacketThrottle.getBulkPackets(n) returns the next 'n' packets).

Good idea... I thought it was somewhat like that already? It is important in some cases for it to block...

Only in that they feed an outbound message queue rather than actually sending a packet. But it looks to me like a bit of cruft from a previous design.

In any case, you have a design note in the comments of PacketThrottle that it would be better to have a sorted list or red/black tree rather than a ticket system (where all threads wake up); maybe a new class needs to be written (BulkQueue) that *only* interleaves waiters (round robin?), with the packet throttle then used only for actually sending packets.
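A sketch of what such a BulkQueue could look like (the class and its methods are hypothetical, not existing Freenet code): waiters are interleaved round-robin, and getBulkPackets(n) drains the next n packets fairly across senders, leaving the throttle to do nothing but pace the actual sends.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

// Hypothetical BulkQueue: each waiter contributes a queue of packets,
// and getBulkPackets(n) takes one packet per waiter per round (round
// robin), so no single sender can starve the others.
public class BulkQueue<T> {
    private final Deque<Deque<T>> senders = new ArrayDeque<>();

    public synchronized void enqueue(Deque<T> senderPackets) {
        if (!senderPackets.isEmpty()) senders.addLast(senderPackets);
    }

    // Return up to n packets, interleaved fairly across senders.
    public synchronized List<T> getBulkPackets(int n) {
        List<T> out = new ArrayList<>();
        while (out.size() < n && !senders.isEmpty()) {
            Deque<T> s = senders.pollFirst();
            out.add(s.pollFirst());
            if (!s.isEmpty()) senders.addLast(s); // back of the line for next round
        }
        return out;
    }

    public static void main(String[] args) {
        BulkQueue<String> q = new BulkQueue<>();
        q.enqueue(new ArrayDeque<>(List.of("a1", "a2", "a3")));
        q.enqueue(new ArrayDeque<>(List.of("b1")));
        System.out.println(q.getBulkPackets(4)); // [a1, b1, a2, a3]
    }
}
```

Note this only solves the interleaving half; per the design note, a sorted list or red/black tree keyed on next-send time would replace the ticket system so only the next waiter wakes up.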

--
Robert Hailey

_______________________________________________
Devl mailing list
[email protected]
http://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl