Re: [freenet-support] maxHopsToLive in Freenet configuration file

2002-12-10 Thread Tld
[EMAIL PROTECTED] wrote:

maybe the default values for
 rtMaxNodes=50
 rtMaxRefs=50
are too low, so your node's connectivity is bad? I increased them both to
hold up to 500 entries and within an hour I got 174 noderefs! (okay, I did
not have a permanent node, so they had just 1-25 keys assigned, but what
the heck ;)


Oh my gosh, you are trying to hold 500x500=250,000 keys!
I am quite sure this would end up using a lot of store space, and unless 
you really have a big one I believe it would eat up space from actual 
content :|

I wonder if there is any good reason why these values are limited to 50
entries and not to +INF?

I am quite sure about the MaxNodes: if you have a perfect routing table 
(the network is fully connected: everyone knows everyone) then you will 
probably get things with HTL=1 or something very close. When the answer 
comes back, you and you alone (maybe one or two more) will have cached the 
content. Hence, things will get much worse (if a couple of nodes go down 
you lose the files). Also, bigger tables mean more CPU consumption.
Also, results get cached along paths. If you get lots of alien keys (those 
that are not near your node's assigned key) you will lose important data 
(the data you should be holding), causing great havoc. Data key localisation 
is one of the things that makes the Freenet routing algorithm work.
Maybe there are better reasons, or mine has a flaw in it. Anyway, that's 
what I think :)
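The key-localisation argument above can be sketched as a toy greedy router. This is a simplified illustration, not Freenet's actual code: keys are modelled as plain integers and the table layout is made up for the example.

```python
# Toy sketch of key-closeness routing: each entry in the routing
# table maps a key to the node that previously served data near it.
# A request is forwarded to the node whose table key is closest to
# the requested key.

def closest_node(routing_table, requested_key):
    """Pick the node whose table key is nearest the requested key."""
    best_key = min(routing_table, key=lambda k: abs(k - requested_key))
    return routing_table[best_key]

# A small table specialised around keys near 100.
table = {90: "nodeA", 105: "nodeB", 110: "nodeC"}
print(closest_node(table, 104))  # nodeB: key 105 is nearest

# If "alien" keys (far from the node's specialisation) crowd the
# table, local entries get evicted and routing quality degrades.
```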

--
--- TLD
Oh, how uncomfortable that word must feel on your lips: evil.
 Good...there is no good, there is no evil. There is only flesh,
 and the patterns to which we submit it. [Pinhead]



___
support mailing list
[EMAIL PROTECTED]
http://hawk.freenetproject.org/cgi-bin/mailman/listinfo/support


[freenet-support] maxHopsToLive in Freenet configuration file

2002-12-09 Thread Thomas Wuensche
Hi Free Netters,

I just joined this list, so if the issue has been discussed
before please point me to the right location. Search of the
archive did not give me the required results.

I tend to believe that information in freenet is available
which I can not find reliably. While I can not prove that,
the behaviour in extracting information seems to indicate
it. Sometimes information drops in after repeating requests
for days. I am running a permanent node with a large
datastore, so I should have good prerequisites regarding
routing.

What I found in the freenet configuration file is the
entry maxHopsToLive, which seems to have a default value
of 25 and is described to limit the HTL value of requests
passing a node. If this limit is active it would mean that
using a HTL value beyond that limit will be of no help,
since most likely the majority of users do not change that
default and the HTL will be limited at the first node the
request hits. Is that assumption correct?

I do not really understand why this value is introduced
and used. There is the risk that information available
can not be found, right? And I understand that circular
requests are terminated anyhow, right?

Please enlighten me, whether my assumptions are completely
wrong, whether they are right but the default is enough
to search a whole universe full of freenet nodes, or whether
the default should be changed. Are there other means
to increase the probability of finding information in freenet?

Best regards,

Thomas





Re: [freenet-support] maxHopsToLive in Freenet configuration file

2002-12-09 Thread Matthew Toseland
On Mon, Dec 09, 2002 at 11:56:30PM +0100, Thomas Wuensche wrote:
 Hi Free Netters,
 
 I just joined this list, so if the issue has been discussed
 before please point me to the right location. Search of the
 archive did not give me the required results.
 
 I tend to believe that information in freenet is available
 which I can not find reliably. While I can not prove that,
 the behaviour in extracting information seems to indicate
 it. Sometimes information drops in after repeating requests
 for days. I am running a permanent node with a large
 datastore, so I should have good prerequisites regarding
 routing.
 
 What I found in the freenet configuration file is the
 entry maxHopsToLive, which seems to have a default value
 of 25 and is described to limit the HTL value of requests
 passing a node. If this limit is active it would mean that
 using a HTL value beyond that limit will be of no help,
 since most likely the majority of users do not change that
 default and the HTL will be limited at the first node the
 request hits. Is that assumption correct?
Yes.
 
 I do not really understand why this value is introduced
 and used. There is the risk that information available
 can not be found, right? And I understand that circular
 requests are terminated anyhow, right?
Because we want to be able to limit the length of time and the amount of
network resources that a request takes. A request only goes to a certain
number of nodes, the hops-to-live. This means that unlike gnutella,
freenet should be able to scale as a single contiguous network, and an
attacker who just sends zillions of requests only gets (max HTL) *
(number of requests) load/bandwidth for his money, rather than (max HTL)
* (number of nodes).
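The load bound described here is simple arithmetic; a quick sketch makes the difference concrete. The numbers below are purely illustrative, not measurements of any real network.

```python
# With hops-to-live, each request touches at most max_htl nodes, so
# an attacker's total load is bounded by max_htl * requests.
# Without it (gnutella-style flooding), every request can in
# principle reach every node.

max_htl = 25
num_requests = 1_000_000
num_nodes = 100_000

htl_bounded_load = max_htl * num_requests   # 25,000,000 node-visits
flood_load = num_nodes * num_requests       # 100,000,000,000 node-visits

print(htl_bounded_load, flood_load)
```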
 
 Please enlighten me, whether my assumptions are completely
 wrong, whether they are right but the default is enough
 to search a whole universe full of freenet nodes, or whether
 the default should be changed. Are there other means
 to increase the probability of finding information in freenet?
We may want to increase the default maximum HTL in 0.5.1, however there 
are lots of nodes left over from 0.5.0 and its immediate successors,
which would tend to prevent this, short of forking the network...
 
 Best regards,
 
 Thomas
 

-- 
Matthew Toseland
[EMAIL PROTECTED]
[EMAIL PROTECTED]
Freenet/Coldstore open source hacker.
Employed full time by Freenet Project Inc. from 11/9/02 to 11/1/03
http://freenetproject.org/





Re: [freenet-support] maxHopsToLive in Freenet configuration file

2002-12-09 Thread Thomas Wuensche
Matthew,

Thanks for your fast and precise answer. I have removed all text
where I think it does not need more discussion. The following
points remain:

Snip


I do not really understand why this value is introduced
and used. There is the risk that information available
can not be found, right? And I understand that circular
requests are terminated anyhow, right?


Because we want to be able to limit the length of time and the amount of
network resources that a request takes. A request only goes to a certain
number of nodes, the hops-to-live. This means that unlike gnutella,
freenet should be able to scale as a single contiguous network, and an
attacker who just sends zillions of requests only gets (max HTL) *
(number of requests) load/bandwidth for his money, rather than (max HTL)
* (number of nodes).


Oh, agreed, protecting freenet against attackers is definitely
worth those considerations. However, with a growing net the
fraction of the network a search can reach is getting smaller.
A net which is willing to give only a fraction of the information
it holds to somebody searching for it is of course also of
limited value. The problem especially is that information that
is new or rarely requested propagates badly. From the view
of a node searching for the information, it may become available
only after somebody in between has also requested it. The only
chance besides that might be a shifted search horizon for the
requester. Is there a chance that my node gets to know more
nodes out there in the freenet universe and asks them in an
arbitrary sequence? Or is there another way to find information
which is beyond my search range (as seen at a single point in
time)? This would at least help me to find the information,
even if it costs me more bandwidth. Repeatedly asking the nodes
that did not know it before, in the hope they might know it in
the future, will also significantly increase network load, but
with less probability of success for the regular user.


Please enlighten me, whether my assumptions are completely
wrong, whether they are right but the default is enough
to search a whole universe full of freenet nodes, or whether
the default should be changed. Are there other means
to increase the probability of finding information in freenet?


We may want to increase the default maximum HTL in 0.5.1, however there 
are lots of nodes left over from 0.5.0 and its immediate successors,
which would tend to prevent this, short of forking the network...

This would be welcome, please increase it to a value that
holds some margin for future growth. The impact of information
that is not delivered may be bigger than the impact of an
overload attack. The impact of an overload attack might also
be limited by limiting the bandwidth a node is willing to
provide to a single requester; would that be reasonable?

With regard to new versions, wouldn't it help if a certain
percentage of the nodes had higher limits? These would provide
paths of deeper information flow into the network, even if
other paths stay shallow.

And of course it could easily be improved without a version
change, if people would change it manually. Is there a prominent
place where that could be published?

Best regards,


Thomas





Re: [freenet-support] maxHopsToLive in Freenet configuration file

2002-12-09 Thread Matthew Toseland
On Tue, Dec 10, 2002 at 03:06:39AM +0100, Thomas Wuensche wrote:
 Matthew,
 
 Thanks for your fast and precise answer. I have removed all text
 where I think it does not need more discussion. The following
 points remain:
 
 Snip
 
 I do not really understand why this value is introduced
 and used. There is the risk that information available
 can not be found, right? And I understand that circular
 requests are terminated anyhow, right?
 
 Because we want to be able to limit the length of time and the amount of
 network resources that a request takes. A request only goes to a certain
 number of nodes, the hops-to-live. This means that unlike gnutella,
 freenet should be able to scale as a single contiguous network, and an
 attacker who just sends zillions of requests only gets (max HTL) *
 (number of requests) load/bandwidth for his money, rather than (max HTL)
 * (number of nodes).
 
 Oh, agreed, protecting freenet against attackers is definitely
 worth those considerations. However, with a growing net the
 fraction of the network a search can reach is getting smaller. A net which
Yes, it sounds bad if you don't understand how it works. Read some of
the papers on freenetproject.org. Basically, the idea is that with each
hop the network routes the request closer to the location of the data.
This is accomplished by freenet routing, and is explained in depth on
the web site, but the gist of it is that different nodes specialize in
different sorts of keys, and nodes keep routing tables, which tell
them which nodes have successfully sent back data for which kinds of
keys.
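The learning side of this (nodes recording which neighbour successfully returned data for which kinds of keys) can be sketched as well. The data structures and names here are illustrative assumptions, not Freenet's implementation.

```python
# Sketch of routing-table learning: on each successful reply,
# record which neighbour served which key, then route future
# requests toward the neighbour that served the closest known key.

routing_table = {}  # key -> neighbour that returned data for it

def record_success(key, neighbour):
    """Remember that this neighbour served data for this key."""
    routing_table[key] = neighbour

def route(requested_key):
    """Forward toward the neighbour that served the closest key."""
    nearest = min(routing_table, key=lambda k: abs(k - requested_key))
    return routing_table[nearest]

record_success(40, "peer1")
record_success(200, "peer2")
print(route(190))  # peer2: it served the closest known key (200)
```

Over many requests this is what makes nodes "specialize": the table fills with keys near the ones the node is asked for, and routing sharpens with each hop.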
 is willing to give only a fraction of the information it holds
 to somebody searching for the information of course is also of
 limited value. The problem especially is that information that
 is new or rarely requested propagates badly. From the view
Yeah. The network has some problems at the moment; I do suspect that we
will need to increase the maximum HTL, but that's probably not the main
issue causing slowness and unreliability at the moment on freenet. And
it's better than it has been for a long time.
 of a node searching for the information, it may become available
 only after somebody in between has also requested it. The only
 chance besides that might be a shifted search horizon for the
 requester. Is there a chance that my node gets to know more
 nodes out there in the freenet universe and asks them in an
 arbitrary sequence? Or is there another way to find information
 which is beyond my search range (as seen at a single point in
 time)? This would at least help me to find the information,
 even if it costs me more bandwidth. Repeatedly asking the nodes
 that did not know it before, in the hope they might know it in
 the future, will also significantly increase network load, but
 with less probability of success for the regular user.
 
 Please enlighten me, whether my assumptions are completely
 wrong, whether they are right but the default is enough
 to search a whole universe full of freenet nodes, or whether
 the default should be changed. Are there other means
 to increase the probability of finding information in freenet?
 
 We may want to increase the default maximum HTL in 0.5.1, however there 
 are lots of nodes left over from 0.5.0 and its immediate successors,
 which would tend to prevent this, short of forking the network...
 
 This would be welcome, please increase it to a value that
 holds some margin for future growth. The impact of information
 that is not delivered may be bigger than the impact of an
 overload attack. The impact of an overload attack might
If freenet routing works, then a fairly low HTL should be sufficient for
a very large network - again, there is more detail on the website.
 also be limited by limiting the bandwidth a node is willing to
 provide to a single requester; would that be reasonable?
Freenet without routing is nothing. We must get freenet routing to work
better. We are working on this. We will probably increase the default
max HTL at some point, but there is a lot we can do other than that to
improve performance.
 
 With regard to new versions, wouldn't it help if a certain
 percentage of the nodes had higher limits? These would provide
 paths of deeper information flow into the network, even if
 other paths stay shallow.
No. A node receives a request at hops-to-live 24; it finds a node that
can accept the request and forwards it to that node at hops-to-live 23;
that node then finds another node and forwards it at 22, and so on. The
HTL only ever decreases along the path, so a single node with a higher
limit cannot extend a request's reach.
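The decrement chain can be written out directly; the starting HTL of 25 here just mirrors the default discussed above.

```python
# Each hop forwards the request with hops-to-live reduced by one;
# the request dies when HTL reaches zero. So at most the initial
# HTL nodes ever see it, regardless of any one node's own limit.

def forward(htl, nodes_visited=0):
    """Count how many nodes a request visits before its HTL expires."""
    if htl == 0:
        return nodes_visited
    return forward(htl - 1, nodes_visited + 1)

print(forward(25))  # a request started at HTL 25 visits 25 nodes
```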
 
 And of course it could easily be improved without a version
 change, if people would change it manually. Is there a prominent
 place where that could be published?
Yuck. We would never get all freenet users to update it, and besides, we
want to make freenet as low maintenance as reasonably possible.
 
 Best regards,
 
 
 Thomas
 

-- 
Matthew Toseland
[EMAIL PROTECTED]
[EMAIL PROTECTED]
Freenet/Coldstore open source hacker.
Employed full time by Freenet Project