On Wed, Nov 26, 2003 at 04:50:11AM +0000, [EMAIL PROTECTED] wrote:
> Hypothetical:
> Routing works, so we have a 20% success ratio.
> The average filesize is 200kB (this is about right on the current
> network, check your datastore - but maybe we need to gather more
> accurate stats on it).
> We have a 256kbps uplink, i.e. 32kB/sec, all of which we can use (this is
> optimistic).
> We get a mere 10kqph (10,000 queries per hour) incoming, and accept all of it.
> 
> I will now demonstrate that this is impossible:
> 10kqph * 0.2 = 2kqph.
> 2000 * 200kB = 409,600,000 bytes
> 409,600,000 bytes / 3600 seconds = 113,777 bytes per second, for
> trailers alone, assuming no connection and search overhead.
> 
> So bandwidth is indeed the limiting factor, and we need to reject
> queries based on bandwidth usage. But I fear that routing may not work
> at all in this case.
> 
> Ideas?
> -----------------------
> Yeah, I told you so. "We are not suffering from load balancing problems so much as
> load problems."

Not necessarily. Look at the searchFailedCount.
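
For what it's worth, the arithmetic in the quoted hypothetical does check out; here
it is as a standalone sketch (plain Java, using only the quoted figures, nothing
measured):

    // Back-of-envelope check of the figures quoted above: can a fully usable
    // 256kbps (32kB/s) uplink carry the trailers for 10kqph at a 20% success
    // ratio and 200kB average file size?
    public class BandwidthCheck {
        public static void main(String[] args) {
            double queriesPerHour = 10000;        // 10kqph incoming, all accepted
            double successRatio = 0.2;            // "routing works"
            double avgFileBytes = 200 * 1024;     // 200kB average file
            double uplinkBytesPerSec = 32 * 1024; // 256kbps uplink, fully usable

            double successesPerHour = queriesPerHour * successRatio;      // 2000
            double trailerBytesPerHour = successesPerHour * avgFileBytes; // 409,600,000
            double requiredBytesPerSec = trailerBytesPerHour / 3600;      // ~113,778

            System.out.printf("need %.0f B/s, have %.0f B/s (%.1fx short)%n",
                    requiredBytesPerSec, uplinkBytesPerSec,
                    requiredBytesPerSec / uplinkBytesPerSec);
            // Prints roughly 113778 vs 32768, i.e. trailers alone need about 3.5
            // times the uplink, before any connection or search overhead.
        }
    }
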
> 
> Here's the problem: this means the average node owner is asking for 10000 * .2 pieces
> of data per hour, averaging 200kB each. 10000 * .2 * 200 / 1024 ~= 391MB/hr. Now,
> given that people are asleep 1/3rd of the time, at least half of the rest of the
> time not using Freenet, and at least 1/4th of all nodes don't make local requests,
> that means that when someone is using Freenet they actually generate 4 times that
> amount, or 1562.5 MB/h, in UNIQUE requests. That doesn't count the fact that they
> have to retry an average of 5 times.

Most of those queries are in fact RETRIES at the client level. Since we
use the HTL from QRs now, it may or may not be a load balancing problem
- certainly we could improve routing by reducing the searchFailedCount,
but perhaps it isn't actually generating any new load in itself.
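
His 4x figure works out like this (a minimal sketch in Java; the fractions are the
assumptions stated in the quoted paragraph, not measurements):

    // Replays the quoted per-user estimate: concentrate the average per-node
    // request volume onto the node-hours that are actually active.
    public class PerUserDemand {
        public static void main(String[] args) {
            double avgMBPerHour = 10000 * 0.2 * 200 / 1024;       // ~390.6 MB/h per node
            double awakeFraction = 1 - 1.0 / 3;                   // awake 2/3 of the time
            double usingFraction = awakeFraction * 0.5;           // using Freenet 1/3 of the time
            double requestingNodes = 1 - 0.25;                    // 3/4 of nodes make local requests
            double activeShare = usingFraction * requestingNodes; // 1/4 of all node-hours

            // Concentrating the average volume onto the active node-hours gives
            // the 4x factor and the 1562.5 MB/h of UNIQUE requests quoted above.
            System.out.printf("average: %.1f MB/h, active user: %.1f MB/h%n",
                    avgMBPerHour, avgMBPerHour / activeShare);
        }
    }
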
> 
> So the question is: why on earth is Freenet letting them initiate requests for
> 7812.5 MB of data when, even with a saturated connection that is downloading only
> their stuff, they could only get 32 * 3600 / 1024 = 112.5 MB/h?

Because the requests are all retries. We have a shitload of load because
routing fails to find the documents. We fail to find the documents
because we have a shitload of load!
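
To spell out where the 7812.5 MB figure comes from (the 1562.5 MB/h above times the
average of 5 retries) and how far it is from what the link can carry, a quick sketch
using only the quoted estimates:

    // Compares the quoted demand against what a saturated 32kB/s link could
    // actually deliver in an hour. All inputs are the quoted estimates.
    public class RetryDemand {
        public static void main(String[] args) {
            double uniqueMBPerHour = 1562.5;   // active user, unique requests
            double avgRetries = 5;             // quoted client-level retry average
            double requestedMBPerHour = uniqueMBPerHour * avgRetries; // 7812.5
            double deliverableMBPerHour = 32.0 * 3600 / 1024;         // 112.5

            System.out.printf("requested: %.1f MB/h, deliverable: %.1f MB/h (~%.0fx over)%n",
                    requestedMBPerHour, deliverableMBPerHour,
                    requestedMBPerHour / deliverableMBPerHour);
            // Roughly a 69x gap - which is the point: almost all of that demand
            // is retries of requests that already failed.
        }
    }
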
> 
> Don't let them do it. If that means we throw out Frost, fine. It would be best if
> we ultimately implemented some sort of trust-biased routing. I am working on writing
> some pseudocode for how it could be done. I'll post it by the end of the week.

It is impossible to control client-level usage of Freenet.
> 
> PS: Toad, I know you don't think TUKs will solve the Frost problem, but I think they
> can - probably because I think TUKs should be done differently than it says in CVS.

Please enlighten me. However, the main reason I don't think TUKs will
solve the Frost problem is a trust issue - on a public or semi-public
board, anyone can just insert a TUK with a really high index number.
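
To make the trust issue concrete: if readers resolve a TUK edition by simply taking
the highest index they have seen, a single hostile post wins forever. A minimal
sketch of that failure mode - the class and field names here are invented for
illustration, not the TUK design in CVS:

    // Hypothetical illustration of naive highest-index-wins resolution.
    import java.util.Comparator;
    import java.util.List;

    public class TukHighIndexAttack {

        static class Announcement {
            final long index;
            final String author;
            Announcement(long index, String author) {
                this.index = index;
                this.author = author;
            }
        }

        // Naive resolution: whoever announced the highest index "owns" the key.
        static Announcement resolve(List<Announcement> seen) {
            return seen.stream()
                    .max(Comparator.comparingLong(a -> a.index))
                    .orElseThrow();
        }

        public static void main(String[] args) {
            List<Announcement> seen = List.of(
                    new Announcement(41, "honest poster"),
                    new Announcement(42, "honest poster"),
                    new Announcement(Long.MAX_VALUE, "attacker"));
            Announcement winner = resolve(seen);
            // The attacker's absurdly high index shadows every future honest
            // update, so trust in the poster is what actually matters.
            System.out.println("winner: " + winner.author + " at index " + winner.index);
        }
    }
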
-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.
