> > The rationale was to treat the network collectively as a single 
> > receiver.  Do you see reasons why that approach won't work?

> I'm not saying it definitely won't work, but it's so far outside of
> what TCP was designed for that I don't think it can really be considered
> a well-tested system.

I personally think it won't work. The slow nodes will slow down the whole
network, because the "please stop sending" messages of the same few nodes are
always the ones backpropagated. As I said in the other thread, I don't care
what happens 20 nodes away; Matthew asked if we should ignore them completely.

Well: a single node contributes only a tiny amount to the overall reject
probability. Maybe those reject messages should be backpropagated as a
probability: the likelihood that a message can't find its target.

So if a server says "sorry, I am overloaded" (100% reject probability at my
node), and the next node in the chain has maybe 5 links, the only reject
probability available to it is the one from the rejecting link, plus its own
status. Let's say it is not very busy either. It will then backpropagate
max(#rejected / #links, myRejectProbability), in this case 20%. Let's say I am
the originator and get back 20%.

(Does this sound familiar?)

What does this mean to me? It means there is load on this link, but it also
tells me there is at least 80% hope for the next insert.
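The combination rule above can be sketched in a few lines of Python. This is
purely an illustration of the idea, not anything from the actual Freenet code;
the function name, and the convention that a link counts as "rejecting" when
its backpropagated probability is 1.0, are my own assumptions.

```python
def backpropagated_reject_probability(link_reject_probs, own_reject_prob):
    """Combine the reject probabilities reported by downstream links with
    this node's own status: max(#rejected / #links, own reject probability).
    A link counts as rejecting when its reported probability is 1.0
    (an assumption made for this sketch)."""
    rejected = sum(1 for p in link_reject_probs if p >= 1.0)
    return max(rejected / len(link_reject_probs), own_reject_prob)

# The example from the text: 5 links, one fully overloaded server
# (100% reject), and the relaying node itself not busy (0%).
p = backpropagated_reject_probability([1.0, 0.0, 0.0, 0.0, 0.0], 0.0)
print(p)  # 0.2, i.e. the 20% that gets backpropagated
```

So the originator sees 20%: real load on the path, but still 80% hope.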

Well, this sounds nice, but it's late and I haven't thought through the
consequences... However, a node 20 hops away still has an influence on my
sending rate, but the further away it is, the smaller the influence.

But a tested algorithm should definitely be preferred. I still believe there is
no way around good caching at the nodes.

> >>> * TCP's congestion control also assumes the sender is well behaved - a
> >>> badly behaved sender can cause all other flows to back off, for  selfish
> >>> or malicious reasons

I think Freenet nodes have to be a bit more egoistic. Well-behaved senders are
good in theory for doing the maths, but in practice everyone wants to get the
most out of the network. Think of Fuqid, which just floods the network with
hundreds of thousands of requests. You have to be a bit aggressive in your
tactics, because if you aren't, someone else will try it with a modified node.
(The client level no longer has the influence it once had.)


> I think Matthew's right about pushing load back to the sender - the
> question is how to do this over multiple hops in a way that doesn't
> reveal the identity of the sender and gives the sender an incentive to
> slow down (rather than a polite request to do so).

I agree completely. Most users won't be malicious, but most want to maximize
their profit.

> >>> * "Route as greedily as possible, given the available capacity"
> > The problem here, and it is one we have faced before, is that this 
> > degrades routing

I believe in queues: if a node is not permanently overloaded, you will find a
window where you can send the data. Wait at those links, but don't slow down
the packets behind them, because it's likely that not all nodes are overloaded
at the same time. If the queue gets too long, OK, then we need some slowdown...
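A minimal sketch of that idea, assuming one queue per outgoing link so a busy
link delays only its own packets (no head-of-line blocking across links). The
class, the `MAX_QUEUE` threshold, and the method names are all hypothetical,
just to make the behaviour concrete:

```python
from collections import deque

MAX_QUEUE = 8  # hypothetical threshold: beyond this, ask the sender to slow down

class LinkQueues:
    """One queue per outgoing link. Packets for a busy link wait in that
    link's queue; packets for other links are unaffected. Illustrative
    only, not the real node code."""

    def __init__(self, links):
        self.queues = {link: deque() for link in links}

    def enqueue(self, link, packet):
        q = self.queues[link]
        if len(q) >= MAX_QUEUE:
            return False  # queue too long: this is when we need some slowdown
        q.append(packet)
        return True

    def drain(self, link, window):
        """Send up to `window` queued packets once the link has capacity."""
        q = self.queues[link]
        return [q.popleft() for _ in range(min(window, len(q)))]
```

Only when a single link's queue overflows does backpressure kick in for that
link; the rest of the node keeps sending.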

We have to use the resources well, and right now they are not used well; most
connections are idle. I heard about the bug. I'll see if that helps, but I
don't believe it will help much, because the node still listens mostly to the
slow nodes. Maybe we should backpropagate idle messages as well... "hey, my
node is idle, it's boring, send me as much as you can" :)
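One way to picture the idle-message idea: let the backpropagated value range
over [-1, 1] instead of [0, 1], where positive means "back off" and negative
means "I have spare capacity, send more". This encoding is entirely my own
illustration of the joke above, not a proposal from the thread:

```python
def load_signal(reject_prob, idle_fraction):
    """reject_prob in [0, 1]; idle_fraction in [0, 1] (share of spare capacity).
    A loaded node reports its reject probability; an otherwise idle node
    reports a negative value, so upstream nodes could raise their rate.
    Hypothetical encoding, for illustration only."""
    if reject_prob > 0.0:
        return reject_prob
    return -idle_fraction

print(load_signal(0.2, 0.0))  # 0.2  -> some load, slow down a bit
print(load_signal(0.0, 0.9))  # -0.9 -> mostly idle, send more
```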

