Tracy R Reed wrote:
> It would be nice if there were a psuccess measurement which did not
> include KSKs.

IIRC Frost also inserts under SSKs.


> With an 8% psuccess rate I would have to insert a splitfile with around
> 1000% redundancy to be able to get it on the first try.

Erm, no: you are assuming that the psuccess rate is evenly distributed across all data. The psuccess rate for a recently inserted splitfile would be much higher than the global average psuccess, which includes lots of legitimate DNFs from Frost.
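
As a rough illustration of the arithmetic involved (the block counts and redundancy figures below are made up for the example, they are not Freenet's actual splitfile/FEC parameters, and block fetches are assumed to be independent):

from math import comb

def p_first_try(p_block, data_blocks, total_blocks):
    # Probability of recovering a splitfile on the first pass, assuming any
    # data_blocks out of total_blocks FEC blocks are enough and each block
    # fetch succeeds independently with probability p_block.
    p_too_few = sum(
        comb(total_blocks, k) * p_block**k * (1 - p_block)**(total_blocks - k)
        for k in range(data_blocks)
    )
    return max(0.0, 1.0 - p_too_few)

# Hypothetical 100-data-block file at various per-block psuccess rates.
for p in (0.08, 0.25, 0.80):
    for total in (128, 200, 1100):   # roughly 28%, 100%, 1000% redundancy
        print(f"p_block={p:.2f} total={total:4d}  p(first try)={p_first_try(p, 100, total):.4f}")

The point is just that the per-block psuccess that matters is the one for that file's own blocks, not the global average.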


> I recall someone on the channel once reported a 25% psuccess and you were
> impressed. Doesn't it seem odd to you that you would buy that as a possible
> realistic number and now you ask us to consider that perhaps 92% of
> requests are for data that isn't in the network?

Things change; more people use Frost, etc. I am NOT saying that 92% of requests are definitely legitimate; I am just asking you to remember that it's a possibility. There is a difference.


> Freenet still has a long way to go to get away from alchemy. It's as if
> there is no scientific process. The above is a good example. Regarding the
> major routing bugs that were found: wouldn't some basic sanity checking
> have caught those? I suggested breaking out some of the methods in the
> routing table and providing some inputs and checking that sane outputs
> were produced, and he seemed to scoff.

When, and who, "scoffed"? Quotes, please. I was carefully picking through the NGR code and encouraging others to do so. The scientific method is to conduct an experiment and see whether things improve. To the extent that a scientific method can reasonably be followed with Freenet, it is.
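
For what it's worth, "basic sanity checking" of that sort is perfectly reasonable, and could look something like the sketch below. The ToyRouteEstimator here is a made-up stand-in (a plain running average of response times), not the actual NGR estimator code; the point is only the shape of the tests: feed in known inputs, assert that the outputs are sane.

import unittest

class ToyRouteEstimator:
    # Hypothetical stand-in for an NGR-style per-node estimator: it keeps a
    # running average of observed response times and reports that average as
    # the estimated cost of routing to the node.
    def __init__(self):
        self.total = 0.0
        self.count = 0

    def report(self, response_time):
        if response_time < 0:
            raise ValueError("response time cannot be negative")
        self.total += response_time
        self.count += 1

    def estimate(self):
        # With no observations, assume a pessimistic default cost.
        return self.total / self.count if self.count else float("inf")

class SanityChecks(unittest.TestCase):
    def test_estimate_tracks_observations(self):
        e = ToyRouteEstimator()
        for t in (100.0, 200.0, 300.0):
            e.report(t)
        self.assertAlmostEqual(e.estimate(), 200.0)

    def test_estimate_is_never_negative(self):
        e = ToyRouteEstimator()
        e.report(0.0)
        self.assertGreaterEqual(e.estimate(), 0.0)

    def test_no_data_is_pessimistic(self):
        self.assertEqual(ToyRouteEstimator().estimate(), float("inf"))

if __name__ == "__main__":
    unittest.main()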


> Reskill made a good point also: the 692 network DID perform better than
> the unstable network when the unstable network had just as few nodes. We
> found this out when unstable forked into its own network. Looks like the
> theory that any build would work well with so few nodes was incorrect too.

The unstable network probably outgrew the 692 network within hours of being created. Also, since you like the scientific method so much, what scientific comparison are you applying when you say that 692 performed better?


> You have been saying this since before the two major bugs (and probably a
> number of smaller ones) were found that ensured that routing would never
> work.

It was true then and it is true now. Saying that a lack of specialization might not indicate a routing problem is not the same as saying that it definitely doesn't. You seem to have trouble with this simple logic.


> Had everyone believed it then perhaps the bugs would not have been
> found.

Believed what exactly? I never said that the lack of specialization was definitely not caused by a routing problem, just that it was possible it wasn't.

> Why should we believe it now? If the routing is that "non-obvious" then it
> is too complicated to actually work.

By that argument almost none of non-symbolic AI would work, since it frequently finds solutions to problems that are extremely difficult for people to decipher.


> I still don't understand how Freenet will ever scale without
> specialization. The need for specialization was made pretty clear in your
> original papers and the simulations.

Can you find information in a scalable manner? Let's, for the sake of argument, assume you can. What is the CHK of the information at the center of your specialization?


> I have a challenge for you: name one other computer program that works yet
> nobody can explain how. And you intend to put Freenet on this list? I just
> don't buy the "non-obvious" routing theory.

You obviously aren't very familiar with non-symbolic AI. Examples would include anything that relies on a neural network (such as those that analyze your credit card transactions to spot fraud), or a genetic algorithm.
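
If you want a concrete (and deliberately tiny) illustration of the general point, here is a toy example, nothing to do with Freenet's routing: evolving the weights of a 2-2-1 neural network until it computes XOR. It reliably produces a working weight vector, but the individual weight values don't come with any human-readable explanation of how the solution works.

import math, random

CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def sigmoid(x):
    x = max(-60.0, min(60.0, x))   # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

def forward(w, x1, x2):
    # w holds 9 numbers: hidden weights/biases plus output weights/bias.
    h1 = sigmoid(w[0] * x1 + w[1] * x2 + w[2])
    h2 = sigmoid(w[3] * x1 + w[4] * x2 + w[5])
    return sigmoid(w[6] * h1 + w[7] * h2 + w[8])

def error(w):
    return sum((forward(w, x1, x2) - y) ** 2 for (x1, x2), y in CASES)

random.seed(0)
population = [[random.uniform(-5, 5) for _ in range(9)] for _ in range(50)]
for generation in range(2000):
    population.sort(key=error)
    if error(population[0]) < 0.05:
        break
    # Keep the 10 best, refill the rest with mutated copies of survivors.
    survivors = population[:10]
    population = survivors + [
        [w + random.gauss(0, 0.5) for w in random.choice(survivors)]
        for _ in range(40)
    ]

best = min(population, key=error)
print("generation", generation, "error", round(error(best), 4))
for (x1, x2), y in CASES:
    print(x1, x2, "->", round(forward(best, x1, x2), 2), "target", y)

Nobody sat down and worked out those nine weights; the program found them, and explaining why that particular set works is much harder than verifying that it does.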


Ian.
