Tracy R Reed wrote:
On Sun, Nov 02, 2003 at 11:12:33PM +0000, Ian Clarke spake thusly:

>> If you expect only an 8% psuccess

>> How do you know that 92% of requests aren't for data that isn't in the network?


> You are implying that Frost is the cause of this?
Possibly, I would say. For the rest of this message, let's assume so.

> If that's the case I think the Frost project has to die because it is
> killing the rest of the network.
Why would that kill the rest of the network?

> But I'm not sure that it is. It would be nice if there were a
> psuccess measurement which did not include KSKs.
Yes.
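Splitting the stat out wouldn't even be much code. A minimal sketch, assuming (hypothetically) that the node can hand us each request as a (key type, succeeded) pair — Fred's actual diagnostics are organized differently:

```python
from collections import defaultdict

def psuccess_by_keytype(events):
    """Tally a separate psuccess per key type from (keytype, succeeded)
    pairs. 'events' is a hypothetical request log, not a real node API."""
    tally = defaultdict(lambda: [0, 0])   # keytype -> [successes, total]
    for keytype, ok in events:
        tally[keytype][1] += 1
        if ok:
            tally[keytype][0] += 1
    return {kt: s / t for kt, (s, t) in tally.items()}

# Toy log: Frost-style KSK polling failing, CHK/SSK doing better.
log = [("KSK", False), ("KSK", False), ("CHK", True),
       ("CHK", True), ("SSK", True), ("CHK", False)]
print(psuccess_by_keytype(log))
```

With a tally like this you could report CHK, SSK, and KSK psuccess side by side and see at a glance whether Frost's KSK polling is dragging the aggregate number down.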

> With an 8% psuccess rate I would have to insert a splitfile with around
> 1000% redundancy to be able to get it on the first try. Is that what you
> propose that we do?
No. If KSKs are responsible for the low psuccess rate, then the CHK/SSK psuccess rate would necessarily be higher. Ian isn't suggesting that you insert all your files 10 times over.
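For what it's worth, the redundancy arithmetic roughly backs Tracy's figure. A minimal sketch, assuming independent per-block fetches and an idealized k-of-n FEC splitfile (both simplifications of how splitfiles actually behave):

```python
from math import comb

def p_retrieve(n, k, p):
    """Probability that at least k of n blocks fetch, with independent
    per-block success probability p (binomial tail)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def blocks_needed(k, p, target=0.9):
    """Smallest n such that a k-of-n splitfile comes back whole
    with probability >= target on the first try."""
    n = k
    while p_retrieve(n, k, p) < target:
        n += 1
    return n

k = 32          # data blocks in the splitfile (assumed)
p = 0.08        # assumed per-block psuccess
n = blocks_needed(k, p)
print(n, f"{100 * (n - k) / k:.0f}% redundancy")
```

At p = 0.08 the required redundancy comes out well above 1000%, so the complaint is arithmetically fair — if 8% really were the CHK psuccess, which is exactly the point in dispute.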

> Or do we just click retry a whole lot of times and cross our fingers? I
> haven't been able to receive TFE for a couple days, and my node has 20G
> of data in the store and a DS-3 and has been up for months. Others can
> retrieve it, so it was definitely inserted today. I recall someone on the
> channel once reported a 25% psuccess and you were impressed. Doesn't it
> seem odd to you that you would buy that as a possible realistic number
> and now you ask us to consider that perhaps 92% of requests are for data
> that isn't in the network?
Nothing wrong with being impressed with 25%. Jeez, does the guy have to scream bloody murder just because it happens to be lower now? It would make sense to take a calmer look at the situation, realize that psuccess is not the holy grail of network health, and try to come up with better measures. See the "Measuring node/network health" thread.


> Freenet still has a long way to go to get away from alchemy. It's as if
> there is no scientific process.
So, spend more time contributing to the discussion about coming up with better measures, etc., and less time accusing the devels of ... well, whatever it is you are accusing them of ... you are being pretty harsh. It isn't helpful (to borrow an Ianism). I know you do try to contribute, but you spend an awful lot of time whining about stuff. 'Course, I'm spending a lot of time responding to your complaints. It's fun! ...and I have no life! =D


> The above is a good example. Regarding the major routing bugs that were
> found: Wouldn't some basic sanity checking have caught those?
Maybe. Maybe not. Sometimes it's pretty hard to figure out which sanity check you needed until after you already found the problem. However, I agree that we need more/better diagnostics to tell how bad the remaining problems are. The aim is not to tell exactly *what* the problem is, but instead give us a fair measure of how much better we could be doing.

> I suggested breaking out some of the methods in the routing table and
> providing some inputs and checking that sane outputs were produced
See, that's contributing!
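To make the suggestion concrete, here is the kind of test I read that as proposing. The RoutingTable below is a toy stand-in — plain integer keys and a made-up interface, not Fred's actual routing classes:

```python
# A sketch of the suggested sanity check: feed a routing table known
# inputs and assert that sane outputs come back.
class RoutingTable:
    def __init__(self):
        self.refs = {}                # node key -> success estimate

    def add(self, key, estimate):
        # 'estimate' is a placeholder for whatever NGR would track.
        self.refs[key] = estimate

    def route(self, target):
        """Return node keys ordered by closeness to the target key."""
        return sorted(self.refs, key=lambda k: abs(k - target))

def sanity_check():
    rt = RoutingTable()
    for key in (0x10, 0x80, 0xF0):
        rt.add(key, 0.5)
    order = rt.route(0x82)
    # The closest key must come first, and no key may be lost or invented.
    assert order[0] == 0x80
    assert sorted(order) == [0x10, 0x80, 0xF0]

sanity_check()
```

Checks this shallow would not have pinpointed the NGR bugs by themselves, but they draw a line that obviously-wrong routing decisions can't cross silently.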


> and he seemed to scoff.
And that's whining.


> Reskill made a good point also: The 692 network DID perform better than
> the unstable network when the unstable network had just as few nodes. We
> found this out when unstable forked into its own network. Looks like the
> theory that any build would work well with so few nodes was incorrect too.
And this is just like saying "I told you so. Nyaaaaaa!".



>> For the millionth time, just because there is no obvious specialization
>> doesn't mean that something is wrong! The specialization might not be
>> obvious for any number of reasons other than that Freenet's routing
>> isn't doing what it should.


> You have been saying this since before the two major bugs (and probably a
> number of smaller ones) were found that ensured that routing would never
> work. Had everyone believed it then, perhaps the bugs would not have been
> found. Why should we believe it now?
Dude, come on. That is completely not logical. The bugs were not found because Toad and Iakin believed specialization should have been manifest by now. Ask them how they found them, but I'm sure it wasn't because they were wondering why nodes weren't apparently specialized. Ian's point is that it is possible to have NGR working perfectly and yet not have specialization. Do you have a proof that this is not true?

> If the routing is that "nonobvious" then it is too complicated to
> actually work.
That doesn't even make sense. Quantum mechanics is "nonobvious" too, and it works.

> I still don't understand how Freenet will ever scale without
> specialization. The need for specialization was made pretty clear in your
> original papers and the simulations.

I think you are right about this. But Ian is just saying that we don't know (yet) at what point the network will *need* specialization. Again, I would suggest that my eHealth metric would be a good measure of how well the network is specializing relative to its need to specialize.



> I have a challenge for you: Name one other computer program that works
> yet nobody can explain how.

make a joke or ignore? ... make a joke or ignore? ... make a joke or ignore?


I can't think of a good joke here. Thoughts?

> And you intend to put Freenet on this list? I just don't buy the
> "nonobvious" routing theory.

-Martin



_______________________________________________ Devl mailing list [EMAIL PROTECTED] http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
