Toad <[EMAIL PROTECTED]> writes:

> On Mon, Jul 12, 2004 at 11:10:01AM +0000, Wayne McDougall wrote:
> > There's lots of cool stuff with averaging limits, and immediate limits,
> > and gradual adjustment. Together with incoming being not directly under
> > control. It works very well for those of us with monthly bandwidth caps.
> 
> It does?! I thought the average limiter didn't work...

Ah yes, well that's probably why we get questions like the one that started
this thread. I read voraciously, try to make sense of it all, and couple it
with my own observations. But it's hard to get a definitive answer, or to know
whether I'm just observing noise...

If you'd like me to run some comprehensive *tests*, please feel free to ask.
But I gather there are other priorities.

> > >  > My personal experience (counts for very little) is that it took 9 days to
> > >  > become better connected - then suddenly everything started working
> > >  > beautifully.

> Nine days is ridiculous. We must do something about it. :(

It may well be better now, especially with these latest stable releases, which
seem much improved - thank you, Toad. Again, just holler if you ever want some
testing done.

> Okay, what's the main advantage? Maybe we can improve the fproxy
> interface?

Since you ask:

fproxy will time out and then I have to start again. And then it won't even
grab the parts it previously downloaded successfully :-( So over a period of
weeks my perception is that I eventually move all the requisite parts into
my local store, and then fproxy will download the whole thing instantly :-)

I certainly don't expect fproxy to be modified, but perhaps one easy change
would be an outer loop so it just circles back and tries again (there's a
rough sketch of what I mean below). It's probably just my low bandwidth, but I
find that I'll request something (web pages included) and it's not there, and
then 5 minutes, 10 minutes, 20 minutes, 1 hour, or 8 hours later it is there.
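
Just to be concrete, something along these lines is all I mean by an "outer
loop" - a wrapper that keeps re-asking a local fproxy for the same key rather
than giving up at the first timeout. This is only a sketch: the port (8888),
the key argument, and all the timeout and back-off numbers are placeholders
I've made up, not anything fproxy actually uses.

  // Keep re-requesting a key from a local fproxy until it arrives.
  // Assumes (for this sketch only) that fproxy answers plain HTTP on
  // 127.0.0.1:8888; all timeouts and limits below are invented.
  import java.io.IOException;
  import java.io.InputStream;
  import java.net.HttpURLConnection;
  import java.net.URL;

  public class OuterLoopFetch {
      public static void main(String[] args) throws Exception {
          URL url = new URL("http://127.0.0.1:8888/" + args[0]); // URL-encoded key
          long wait = 60 * 1000L;                                // a minute between tries
          for (int attempt = 1; attempt <= 20; attempt++) {
              try {
                  HttpURLConnection conn = (HttpURLConnection) url.openConnection();
                  conn.setConnectTimeout(30 * 1000);
                  conn.setReadTimeout(120 * 1000);
                  if (conn.getResponseCode() == 200) {
                      InputStream in = conn.getInputStream();
                      long bytes = 0;
                      byte[] buf = new byte[8192];
                      for (int n; (n = in.read(buf)) != -1; ) bytes += n;
                      in.close();
                      System.out.println("Fetched " + bytes + " bytes on attempt " + attempt);
                      return;
                  }
                  System.out.println("Attempt " + attempt + ": HTTP " + conn.getResponseCode());
              } catch (IOException e) {
                  System.out.println("Attempt " + attempt + " failed: " + e.getMessage());
              }
              Thread.sleep(wait);
              wait = Math.min(wait * 2, 60 * 60 * 1000L);        // back off, cap at an hour
          }
          System.out.println("Still not there; giving up.");
      }
  }

Doubling the wait each time is just one way to approximate the 5 minutes /
10 minutes / 20 minutes / hours pattern I described above.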

My assumption has always been that my requests go out in an ever-widening
circle to wherever the data I want may be found, but the request times out
before the data gets back to me. Eventually (by dint of persistent requests)
it is lodged in local stores that I can reach before timing out.
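
Purely to illustrate that assumption (and it is only my assumption - not a
claim about how Freenet really routes), here's a toy loop where each failed
request leaves a copy of the data a few hops closer, until it finally comes
back within the timeout. The distances are invented numbers.

  // Toy model of "persistent requests drag the data closer" - not Freenet.
  public class PullCloserToy {
      public static void main(String[] args) {
          int hopsAway = 12;  // made-up starting distance to the nearest copy
          int reach = 5;      // made-up hops a request covers before timing out
          for (int request = 1; hopsAway > 0; request++) {
              if (hopsAway <= reach) {
                  System.out.println("Request " + request + ": got it, "
                          + hopsAway + " hops away");
                  hopsAway = 0;
              } else {
                  hopsAway -= reach;
                  System.out.println("Request " + request + ": timed out, copy now "
                          + hopsAway + " hops away");
              }
          }
      }
  }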

> > My interest is websites that can never get slashdotted and can host large
> > files while sharing the load, rather than file-sharing...
> 
> Yeah, that would be cool, if it really worked, and if we had enough
> hosts to be able to worry about such things!

Ah well, I'm here for the long haul... not that I'm any use. :-(
I am a big fan of the privacy elements also.

So be encouraged. You're not just creating an anonymous slow file-sharer.
You know and I know that Freenet is being used for good purposes now
and I can see lots of potential for the future. 




