On Sun, Oct 26, 2003 at 11:25:08AM -0800, Mike Stump wrote:
> This isn't working:
> 
> Oct 26, 2003 5:53:46 PM (freenet.PeerPacketMessage,  write interface thread, NORMAL): Took 1143 seconds to send [EMAIL PROTECTED]:freenet.Message: QueryRejected @null @ 29166ecc7cf191: htl=1, reason=Node overloaded:QueryRejected{Close=false,Sustain=false,DataLength=0,{HopsToLive=1,Reason=Node overloaded,Attenuation=1,UniqueID=29166ecc7cf191,}}:null:true, prio=1(notifySuccess(null)! (last connection registered 7778 seconds ago on [EMAIL PROTECTED] (DSA(64a4 40e5 401b b3af 9d19  47be 0436 189c 3407 5eab),tcp/12.247.157.104:26694, sessions=1, presentations=1, ID=DSA(64a4 40e5 401b b3af 9d19  47be 0436 189c 3407 5eab)): outbound attempts=2:13/15
> Oct 26, 2003 5:53:46 PM (freenet.PeerPacketMessage,  write interface thread, NORMAL): Took 2210 seconds to send [EMAIL PROTECTED]:freenet.Message: QueryRejected @null @ cbb4370c3201e6f4: htl=5, reason=Required protocol version is 1.47:QueryRejected{Close=false,Sustain=false,DataLength=0,{HopsToLive=5,Reason=Required protocol version is 1.47,Attenuation=0,UniqueID=cbb4370c3201e6f4,}}:null:true, prio=1(notifySuccess(null)! (last connection registered 81 seconds ago on [EMAIL PROTECTED] (DSA(d723 922d 6e7a 7226 6203  498d d358 e88c 03dc b14d),tcp/24.208.128.31:30721, sessions=1, presentations=1, ID=DSA(d723 922d 6e7a 7226 6203  498d d358 e88c 03dc b14d)): outbound attempts=0:0/0
> Oct 26, 2003 5:53:49 PM (freenet.PeerPacketMessage,  write interface thread, NORMAL): Took 317 seconds to send [EMAIL PROTECTED]:freenet.Message: QueryRejected @null @ 8c6d4c8c4f4cbf86: htl=18, reason=Node overloaded:QueryRejected{Close=false,Sustain=false,DataLength=0,{HopsToLive=12,Reason=Node overloaded,Attenuation=1,UniqueID=8c6d4c8c4f4cbf86,}}:null:true, prio=1(notifySuccess(null)! (last connection registered 553 seconds ago on [EMAIL PROTECTED] (DSA(66d7 6419 5569 c1b1 469d  0492 702a 00a8 303e 5950),tcp/12.235.108.80:17704, sessions=1, presentations=1, ID=DSA(66d7 6419 5569 c1b1 469d  0492 702a 00a8 303e 5950)): outbound attempts=1:0/1
> Oct 26, 2003 5:53:51 PM (freenet.PeerPacketMessage,  write interface thread, NORMAL): Took 2045 seconds to send [EMAIL PROTECTED]:freenet.Message: QueryRejected @null @ 8498fa9f302fb500: htl=7, reason=Required protocol version is 1.47:QueryRejected{Close=false,Sustain=false,DataLength=0,{HopsToLive=7,Reason=Required protocol version is 1.47,Attenuation=0,UniqueID=8498fa9f302fb500,}}:null:true, prio=1(notifySuccess(null)! (last connection registered 86 seconds ago on [EMAIL PROTECTED] (DSA(d723 922d 6e7a 7226 6203  498d d358 e88c 03dc b14d),tcp/24.208.128.31:30721, sessions=1, presentations=1, ID=DSA(d723 922d 6e7a 7226 6203  498d d358 e88c 03dc b14d)): outbound attempts=0:0/0

One possibility for the above is that your actual internet connection is
saturated even though the node doesn't think you are using much
bandwidth. Please check this.

> 
> 
> I have 304 transmitting (on an 8 KB/s upstream), and if my CPU were
> faster it feels like it would devolve into 1500 transmitting to reach
> the 5 bytes/sec transmit rate it is aiming for.  The problem is that
> general says I am using 30% of my upstream.  304 transmitting using
> 30%?  Does that mean 70% of my upstream is free, or that 70% is used
> for other things?  If for other things, what things?

Please check that this is sustained long-term, using outputBytes. Please
also check how much of your output bandwidth is *actually* used, with
some bandwidth-monitoring software. It could well be that the nodes
supplying the data are simply sending it slowly, and your node is
performing reasonably.
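
For illustration, a rough sketch of the sort of sustained-rate check I
mean (the class and method names here are made up, not Fred's actual
API; feed it whatever cumulative output-byte counter your node exposes):

    // Rough sketch: compute a sustained output rate from a cumulative
    // byte counter.  Call sample() periodically (say once a minute) with
    // the current counter value and watch whether the rate stays
    // anywhere near the configured bandwidth limit.
    public class OutputRateSampler {
        private long lastBytes;
        private long lastTime;

        public OutputRateSampler(long initialBytes) {
            lastBytes = initialBytes;
            lastTime = System.currentTimeMillis();
        }

        /** Average bytes/second since the previous sample. */
        public double sample(long currentBytes) {
            long now = System.currentTimeMillis();
            double seconds = (now - lastTime) / 1000.0;
            double rate = (seconds > 0) ? (currentBytes - lastBytes) / seconds : 0.0;
            lastBytes = currentBytes;
            lastTime = now;
            return rate;
        }
    }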
> 
> Instead, let's queue up the transmits and do the very last one queued
> first, before all others.  If the upstream isn't 100% pegged, open up
> another, and keep doing that until we peg the upstream.  Once it is
> pegged, we don't transmit any others until it unpegs.

The next person who suggests this... never mind. Seriously, we can't
just queue sends BECAUSE WE DON'T USUALLY HAVE THE DATA TO QUEUE. And if
the upstream is so much as 80% pegged, WE ALREADY REJECT ALL QUERIES!
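
To be concrete, the scheme being proposed amounts to roughly the sketch
below (the class names and isUpstreamPegged() are invented for
illustration, and it presumes the payload is already sitting in memory,
which is exactly the assumption that does not hold):

    // Sketch of the proposed scheme only: last-queued-first dispatch,
    // opening more concurrent sends until the upstream is saturated.
    import java.util.LinkedList;

    public class LifoSendQueue {
        private final LinkedList pending = new LinkedList(); // of Runnable
        private int activeSenders = 0;
        private static final int MAX_SENDERS = 1500;

        public synchronized void enqueue(Runnable transmit) {
            pending.addFirst(transmit);   // newest first
            maybeStartSenders();
        }

        private synchronized void maybeStartSenders() {
            while (!pending.isEmpty()
                    && activeSenders < MAX_SENDERS
                    && !isUpstreamPegged()) {  // stop opening sends once pegged
                final Runnable next = (Runnable) pending.removeFirst();
                activeSenders++;
                new Thread(new Runnable() {
                    public void run() {
                        try {
                            next.run();
                        } finally {
                            senderFinished();
                        }
                    }
                }).start();
            }
        }

        synchronized void senderFinished() {
            activeSenders--;
            maybeStartSenders();
        }

        private boolean isUpstreamPegged() {
            return false; // hypothetical: would consult a bandwidth limiter
        }
    }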
> 
> What we have now feels like a collapsed network that is grinding under
> its own request load.  Queuing up a QR for 2000 seconds just isn't
> right.  Is this the design that someone wanted?

That's possible. I don't know why the QRs are queued for that long; they
should have timed out before that. It could be a bug; I will debug it
if/when I reproduce it locally.
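
If it does turn out to be a missing timeout, the fix is conceptually
just a staleness check before the send; a minimal illustration
(QueuedMessage and the surrounding names are hypothetical, not the
actual PeerPacketMessage internals):

    // Drop a queued message instead of sending it once it has sat in
    // the queue longer than its timeout.
    class QueuedMessage {
        private final long queuedAt = System.currentTimeMillis();
        private final long timeoutMillis;

        QueuedMessage(long timeoutMillis) {
            this.timeoutMillis = timeoutMillis;
        }

        boolean isStale() {
            return System.currentTimeMillis() - queuedAt > timeoutMillis;
        }
    }

    // At send time, something like:
    //   if (msg.isStale()) dropWithFailureNotification(msg); else send(msg);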
> 
> And an indication that something is wrong:
> 
> Lowest global time estimate   233249ms
> Highest global time estimate  1106205ms
> 
> That's over 18 minutes.  My browser times things out after 60 seconds.
> Reading web pages that take 18 minutes per link isn't useful.  Start
> with the idea that if they don't read it in the next 10 seconds, they
> never want to read it.  Gear the network up to make it happen that
> way.

The average file size is over 100kB.
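To put a number on that: delivering a 100 kB file in 10 seconds needs
10 kB/s of throughput for that single request, which is already more
than the entire 8 KB/s upstream you mention, before counting routing
overhead or any other concurrent transfers.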
> 
> As a separate architecture, we need to formalize large content
> transfer.  Here the queuing should be equitable and FIFO, and it can
> devolve to needing a day to move a file.  Actually, if we register
> postal addresses per DSA, then when things queue up past 5 days we can
> burn a disc and mail it via land mail, getting an upper limit of a
> week's latency and 462 KB/second of throughput, assuming 10 DVD-RW
> disks a day.  :-)

How do you make this anonymous? :)
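(The throughput figure does check out, for what it's worth: 10 discs a
day at roughly 4 GB each is 40 GB / 86,400 s, or about 462 KB/second.)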
> 
> Hmm, maybe a usenet-style architecture: we just pass the most popular
> content through to fill the pipe, and after that we just trim.  Usenet
> offers near-zero latency, in some respects great anonymity and
> privacy, and low lawsuit potential.  The idea would be that as you
> request content, you connect in at the edges; as your upstream edge
> goes away, you connect to its upstream, and when it comes back, it
> connects lower in the chain.  If you can handle a higher upstream, you
> migrate closer to the center.  If you are unreliable, have a small
> upstream or high latency, you migrate to the edges.  Most content
> would come from one of two connections: one up (nearer the network
> center) and one down, away from the network center.
> 
> Old content could be fetched from the edges of the network, or from a
> FIFO queue fed by a small fixed % of each node's upstream.
> 
> Since not everyone wants the same content from the network, we would
> need a dynamic ability to run many such networks in parallel.  The
> trick is then seeing whether we can get anonymity out of such a
> design.
> 
> Usenet is an interesting comparison.  Usenet goes for around $1/gig,
> with an 18- to 180-day retention, depending upon how large the average
> post size is.  If some large operator offered freenet access at
> $1/gig, would we have a design that was less secure?  Fred would
> migrate connections to them by itself, and if they were required by
> law to log access, would freenet be any different from usenet?

Whatever. I'm not going to bother even arguing with you today.
-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.
