[freenet-support] Is my Node used or not?

2002-12-09 Thread Loup1234
First I want to apologise for my badly written English; English is not my
main language. I will try anyway:
I have a permanent node and there are also inbound connections, which are
all successful, so I think other nodes can contact my node. But I wonder why
there is never a request! I mean, even if I do not have the data that another
node wants, there should at least be a request. So I want to ask whether I
perhaps have to change any configuration to allow requests to my node.

P.S.: I have the newest build ;)

My configuration:

[Freenet node]
# Freenet configuration file
# This file was automatically generated by WinConfig on 12/08/02

# Note that all properties may be overridden from the command line,
# so for example, java Freenet.Node --listenPort 1 will cause
# the setting in this file to be ignored



# Normal entries


# The byte size of the datastore cache file.  Note that it will maintain
# a fixed size. If you change this or the storePath field following,
# your entire datastore will be wiped and replaced with a blank one
storeSize=2097152000

# The path to a single file (including file name), or a comma-separated list
# of files, containing the data store.  The size of each file is given by
# storeSize.
# Defaults to cache_port in the main freenet directory.
#storeFile=

# Transient nodes do not give out references to themselves, and should
# therefore not receive any requests.  Set this to yes only if you are
# on a slow, non-permanent connection.
transient=false

# The port to listen for incoming FNP (Freenet Node Protocol) connections on.
listenPort=23105

# The I.P. address of this node as seen by the public internet.
# This is needed in order for the node to determine its own
# NodeReference.
ipAddress=freenodehost.dyndns.org

# The directory to store any temporary files created by the node. It gets
# deleted automatically on node start and stop.
tempDir=D:\freenettemp\

# This is used only by the Windows configurator, not by the node
warnPerm=true



# Advanced Entries


# set to yes if you want your node to announce itself to other nodes
doAnnounce=yes

# file containing initial node references
seedFile=seednodes.ref

# The port to listen for local FCP (Freenet Client Protocol) connections on.
clientPort=8481

# The maximum number of bytes per second to transmit, totaled between
# incoming and outgoing connections.  Ignored if either inputBandwidthLimit
# or outputBandwidthLimit is nonzero.
#bandwidthLimit=0

# If nonzero, specifies an independent limit for outgoing data only.
# (overrides bandwidthLimit if nonzero)
outputBandwidthLimit=12288
inputBandwidthLimit=0

# A comma-separated list of hosts which are allowed to talk to the node via FCP
fcpHosts=127.0.0.1,localhost

# The hops that initial requests should make.
initialRequestHTL=17

# If this is set, then users that can provide the password
# can have administrative access. It is recommended that
# you do not use this without also using adminPeer below,
# in which case both are required.
#adminPassword=

# If this is set, then users that are authenticated owners
# of the given PK identity can have administrative access.
# If adminPassword is also set both are required.
#adminPeer=

# When forwarding a request, the node will reduce the HTL to this value
# if it is found to be in excess.
maxHopsToLive=25

# Should we use thread-management?  If this number is defined and non-zero,
# this specifies how many inbound connections can be active at once.
maximumThreads=120

# The number of connections that a node can keep open at the same time
maxNodeConnections=60



# Geek Settings


# The number of attempts to make at announcing this node per
# initial peer. Zero means the node will not announce itself
announcementAttempts=100

# The amount of time to wait before initially announcing the node,
# and to base the time between retries on. In milliseconds.
announcementDelay=180

# The value to multiply the last delay time by for each retry.
# That is, for try N, we wait announcementDelay*announcementDelayBase^N
# before starting.
announcementDelayBase=2

# announcementPeers: undocumented.
announcementPeers=3

# How long to wait for authentication before giving up (in milliseconds)
authTimeout=3

# The interval at which to write out the node's data file
# (the store_port file, *not* the cache_port file).
checkPointInterval=1200

# How long to listen on an inactive connection before closing
# (if reply address is known)
connectionTimeout=60

# The expected standard deviation in hopTimeExpected.
hopTimeDeviation=4000

# The expected time it takes a Freenet node to pass a message.
# Used to calculate timeout values for requests.
hopTimeExpected=7000

# The number of keys to request from the returned close values
# after an Announcement (this is per announcement made).
initialRequests=10

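The maxHopsToLive behaviour described in the configuration above — cutting an incoming request's HTL down to the node's cap before forwarding — can be sketched roughly as follows. This is a hypothetical simplification in Python (the real node is written in Java), and the function and constant names are invented for illustration:

```python
# Hypothetical sketch of how a node might clamp and decrement
# hops-to-live (HTL) when forwarding a request. Names are invented;
# the actual Freenet node is Java and structured differently.

MAX_HOPS_TO_LIVE = 25  # corresponds to the maxHopsToLive setting above

def htl_on_forward(incoming_htl: int) -> int:
    """Clamp an excessive HTL to the node's cap, then decrement it by
    one for the next hop. Returns 0 when the request should stop."""
    clamped = min(incoming_htl, MAX_HOPS_TO_LIVE)
    return max(clamped - 1, 0)

# A request arriving with HTL 40 is cut to 25 and forwarded at 24;
# a request arriving with HTL 1 is not forwarded any further.
print(htl_on_forward(40))  # 24
print(htl_on_forward(1))   # 0
```

This is why setting a high HTL on your own requests has no effect once the request reaches a node with the default cap, as discussed in the thread below.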

[freenet-support] maxHopsToLive in Freenet configuration file

2002-12-09 Thread Thomas Wuensche
Hi Free Netters,

I just joined this list, so if the issue has been discussed
before please point me to the right location. Search of the
archive did not give me the required results.

I tend to believe that information is available in freenet
which I cannot find reliably. While I cannot prove that,
the behaviour when extracting information seems to indicate
it. Sometimes information drops in after repeating requests
for days. I am running a permanent node with a large
datastore, so I should have good prerequisites regarding
routing.

What I found in the freenet configuration file is the
entry maxHopsToLive, which seems to have a default value
of 25 and is described as limiting the HTL value of requests
passing through a node. If this limit is active, it would mean
that using an HTL value beyond that limit is of no help,
since most likely the majority of users do not change that
default, and the HTL will be limited at the first node the
request hits. Is that assumption correct?

I do not really understand why this value was introduced
and is used. There is the risk that available information
cannot be found, right? And I understand that circular
requests are terminated anyhow, right?

Please enlighten me whether my assumptions are completely
wrong, whether they are right but the default is enough
to search a whole universe full of freenet nodes, or
whether the default should be changed. Are there other means
to increase the probability of finding information in freenet?

Best regards,

Thomas


___
support mailing list
[EMAIL PROTECTED]
http://hawk.freenetproject.org/cgi-bin/mailman/listinfo/support



Re: [freenet-support] maxHopsToLive in Freenet configuration file

2002-12-09 Thread Matthew Toseland
On Mon, Dec 09, 2002 at 11:56:30PM +0100, Thomas Wuensche wrote:
 Hi Free Netters,
 
 I just joined this list, so if the issue has been discussed
 before please point me to the right location. Search of the
 archive did not give me the required results.
 
 I tend to believe that information in freenet is available
 which I can not find reliably. While I can not prove that,
 the behaviour in extracting information seems to indicate
 it. Sometimes information drops in after repeating requests
 for days. I am running a permanent node with a large
 datastore, so I should have good prerequisites regarding
 routing.
 
 What I found in the freenet configuration file is the
 entry maxHopsToLive, which seems to have a default value
 of 25 and is described to limit the HTL value of requests
 passing a node. If this limit is active it would mean that
 using a HTL value beyond that limit will be of no help,
 since most likely the majority of users do not change that
 default and the HTL will be limited at the first node the
 request hits. Is that assumption correct?
Yes.
 
 I do not really understand why this value is introduced
 and used. There is the risk that information available
 can not be found, right? And I understand that circular
 requests are terminated anyhow, right?
Because we want to be able to limit the length of time and the amount of
network resources that a request takes. A request only goes to a certain
number of nodes, the hops-to-live. This means that unlike gnutella,
freenet should be able to scale as a single contiguous network, and an
attacker who just sends zillions of requests only gets (max HTL) *
(number of requests) load/bandwidth for his money, rather than (max HTL)
* (number of nodes).
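The cost argument above can be made concrete with a back-of-the-envelope sketch: with an HTL cap, each request touches at most a fixed number of nodes, so an attacker's total load scales with the number of requests rather than with the size of the network. All numbers here are illustrative, not measurements:

```python
# Illustrative comparison of attacker load, following the argument in
# the reply above. The numbers are made up for the example.

max_htl = 25              # default maximum hops-to-live
num_requests = 1_000_000  # requests the attacker sends
num_nodes = 100_000       # hypothetical network size

load_htl_bounded = max_htl * num_requests   # HTL-bounded routing
load_broadcast = num_nodes * num_requests   # broadcast-style flooding

print(load_htl_bounded)  # 25000000
print(load_broadcast)    # 100000000000
```

The bounded figure stays constant per request as the network grows, which is the scaling property the reply contrasts with gnutella.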
 
 Please enlighten me whether my assumptions are completely
 wrong, whether they are right but the default is enough
 to search a whole universe full of freenet nodes, or
 whether the default should be changed. Are there other means
 to increase the probability of finding information in freenet?
We may want to increase the default maximum HTL in 0.5.1, however there 
are lots of nodes left over from 0.5.0 and its immediate successors,
which would tend to prevent this, short of forking the network...
 
 Best regards,
 
 Thomas
 

-- 
Matthew Toseland
[EMAIL PROTECTED]
[EMAIL PROTECTED]
Freenet/Coldstore open source hacker.
Employed full time by Freenet Project Inc. from 11/9/02 to 11/1/03
http://freenetproject.org/





Re: [freenet-support] maxHopsToLive in Freenet configuration file

2002-12-09 Thread Thomas Wuensche
Matthew,

Thanks for your fast and precise answer. I have removed all text
where I think it does not need more discussion. The following
points remain:

Snip


I do not really understand why this value is introduced
and used. There is the risk that information available
can not be found, right? And I understand that circular
requests are terminated anyhow, right?


Because we want to be able to limit the length of time and the amount of
network resources that a request takes. A request only goes to a certain
number of nodes, the hops-to-live. This means that unlike gnutella,
freenet should be able to scale as a single contiguous network, and an
attacker who just sends zillions of requests only gets (max HTL) *
(number of requests) load/bandwidth for his money, rather than (max HTL)
* (number of nodes).


Oh, agreed, protecting freenet against attackers is definitely
worth those considerations. However, with a growing net the
proportional reach of a search gets smaller. A net which
is willing to give out only a fraction of the information it
holds to somebody searching for it is of course also of
limited value. The problem especially is that information which
is new or rarely requested propagates badly. From the view
of a node searching for the information, it may become available
only after somebody in between has also requested it. The only
chance besides that might be a shifted search horizon of the
requester. Is there a chance that my node gets to know more
nodes out there in the freenet universe and can ask them in an
arbitrary sequence? Or is there another chance to find
information which is beyond my search range (as seen at a single
point in time)? This would at least help me to find the
information, even if it costs me more bandwidth. Repeatedly asking
the nodes that did not know it before, in the hope they might
know it in the future, will also significantly increase network
load, but with less probability of success for the regular
user.


Please enlighten me whether my assumptions are completely
wrong, whether they are right but the default is enough
to search a whole universe full of freenet nodes, or whether
the default should be changed. Are there other means
to increase the probability of finding information in freenet?


We may want to increase the default maximum HTL in 0.5.1, however there 
are lots of nodes left over from 0.5.0 and its immediate successors,
which would tend to prevent this, short of forking the network...

This would be welcome; please increase it to a value that
holds some margin for future growth. The impact of
information that is not delivered may be bigger than the impact
of an overload attack. The impact of an overload attack might
also be limited by limiting the bandwidth a node is willing to
provide to a single requester; would that be reasonable?

With regard to new versions, wouldn't it help if a certain
percentage of the nodes had higher limits? These would
provide paths of deeper information flow into the network, even
if other paths stay shallow.

And of course it could easily be improved without a version
change, if people would change it manually. Is there a
prominent place where that could be published?

Best regards,


Thomas





Re: [freenet-support] maxHopsToLive in Freenet configuration file

2002-12-09 Thread Matthew Toseland
On Tue, Dec 10, 2002 at 03:06:39AM +0100, Thomas Wuensche wrote:
 Matthew,
 
 Thanks for your fast and precise answer. I have removed all text
 where I think it does not need more discussion. The following
 points remain:
 
 Snip
 
 I do not really understand why this value is introduced
 and used. There is the risk that information available
 can not be found, right? And I understand that circular
 requests are terminated anyhow, right?
 
 Because we want to be able to limit the length of time and the amount of
 network resources that a request takes. A request only goes to a certain
 number of nodes, the hops-to-live. This means that unlike gnutella,
 freenet should be able to scale as a single contiguous network, and an
 attacker who just sends zillions of requests only gets (max HTL) *
 (number of requests) load/bandwidth for his money, rather than (max HTL)
 * (number of nodes).
 
 Oh, agreed, protecting freenet against attackers definitely
 is worth that considerations. However with a growing net the
 percentual reach of a search is getting smaller. A net which
Yes, it sounds bad if you don't understand how it works. Read some of
the papers on freenetproject.org. Basically, the idea is that with each
hop the network routes the request closer to the location of the data.
This is accomplished by freenet routing, and is explained in depth on
the web site, but the gist of it is that different nodes specialize in
different sorts of keys, and nodes keep routing tables, which tell
them which nodes have successfully sent back data for which kinds of
keys.
 is willing to give only a fraction of the information it holds
 to somebody searching for the information of course is also of
 limited value. The problem specially is that information that
 is new or rarely requested does propagate badly. From the view
Yeah. The network has some problems at the moment; I do suspect that we
will need to increase the maximum HTL, but that's probably not the main
issue causing slowness and unreliability at the moment on freenet. And
it's better than it has been for a long time.
 of a node searching for the information it may become available
 only after somebody in between has also requested it. The only
 chance besides that might be a shifted search horizon of the
 requester. Is there a chance that my node gets to know more
 nodes out there in the freenet universe and to ask them in an
 arbitrary sequence? Or is there another chance to find
 information which is beyond my search range (as seen at a single
 point in time)? This would at least help me to find the
 information, even if it costs me more bandwidth. Repeatedly asking
 the nodes that did not know it before in the hope they might
 know it in the future will also significantly increase network
 load, but with less probability of success for the regular
 user.
 
 Please enlighten me whether my assumptions are completely
 wrong, whether they are right but the default is enough
 to search a whole universe full of freenet nodes, or whether
 the default should be changed. Are there other means
 to increase the probability of finding information in freenet?
 
 We may want to increase the default maximum HTL in 0.5.1, however there 
 are lots of nodes left over from 0.5.0 and its immediate successors,
 which would tend to prevent this, short of forking the network...
 
 This would be welcome, please increase it to a value that
 holds some margin for future growth. The impact of
 information that is not delivered may be bigger than the impact
 of an overload attack. The impact of an overload attack might
If freenet routing works, then a fairly low HTL should be sufficient for
a very large network - again, there is more detail on the website.
 also be limited by limiting the bandwidth a node is willing to
 provide to a single requester, would that be reasonable?
Freenet without routing is nothing. We must get freenet routing to work
better. We are working on this. We will probably increase the default
max HTL at some point, but there is a lot we can do other than that to
improve performance.
 
 With regard to new versions, wouldn't it help if a certain
 percentage of the nodes had higher limits? These would
 provide paths of deeper information flow into the network, even
 if other paths stay shallow.
No. A node receives a request at hops-to-live 24, it finds a node that
can accept the request and forwards it to that node at hops-to-live 23,
that node then finds another node and forwards it at 22 etc etc.
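The forwarding chain described here can be sketched in a few lines: since HTL is decremented at every hop, the number of nodes a request visits is fixed by the HTL it arrives with, and a single node with a higher local limit cannot deepen a request that arrives already decremented. This is a hypothetical simplification (the real node is Java, and the names are invented):

```python
# Sketch of the hop-by-hop HTL decrement described above: a request
# forwarded at HTL 24 visits 24 nodes (24 -> 23 -> ... -> 1), no matter
# what maxHopsToLive any individual node along the path is configured
# with. Invented names; illustrative only.

def hops_visited(initial_htl: int) -> int:
    """Count the nodes a request visits when each hop decrements HTL."""
    htl, hops = initial_htl, 0
    while htl > 0:
        hops += 1  # this node handles the request...
        htl -= 1   # ...and forwards it with one fewer hop to live
    return hops

print(hops_visited(24))  # 24
```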
 
 And of course it could easily be improved without a version
 change, if people would change it manually. Is there a
 prominent place where that could be published?
Yuck. We would never get all freenet users to update it, and besides, we
want to make freenet as low maintenance as reasonably possible.
 
 Best regards,
 
 
 Thomas
 

-- 
Matthew Toseland
[EMAIL PROTECTED]
[EMAIL PROTECTED]
Freenet/Coldstore open source hacker.
Employed full time by Freenet Project 

[freenet-support] enrollment

2002-12-09 Thread Ernest True



Cancel my enrollment to MP3 Download Center. 

I spent 2 hours trying to get the program to
complete the deal. I never was able to get it.
I will notify my Bank NOT to honor your $24.95
charge.