Jano wrote:
> My Sim.NODES is 100?

Mine too, but each Node has an average of five Peers.
> In any case the queues getting larger are the transfers, and with your
> below 17KB mean reply size that's clearly a lot more than 2G. By large.

17 KB is the average size of a reply inside the simulated network. In the simulator itself the messages are much smaller, but even if they're only 40 bytes that would add up to 2 GB.

> I'm lost here, I don't see where the 8000 comes from

Sorry, that was a typo; it should have been 80000. It comes from the graphs I posted: 80000 successful requests in two hours seems to be the limit with throttling. Without throttling we can go above 80000, but it's not sustainable.

So we get 80000 replies in two hours - roughly 11 per second, with an average size of 17 KB. The capacity of the network (ignoring slow nodes) is 15 KB/s * 100 nodes = 1500 KB/s, so the replies can't be travelling much more than 8 hops on average.

> But, assuming the 8hop average, we have only ~79k maximum throughput, and
> then my last simulation is clearly overboard.

Right - I think what we're seeing is that when the network is overloaded, short-range requests are succeeding but long-range requests are failing, so the average route length is no longer 8 hops. Which means route length isn't necessarily a good metric either. :-/

>> Any suggestions for a better metric?
>
> At this preliminary point, I'd say that remote successes are a good start.
> We could later study the hop count of successes in a non-saturated network
> and compare to the saturated cases.

Good plan - I've added separate counters for local and remote successes.

> I'll take advantage of this and redo my lifo changes more carefully. If I
> could get svn write access I could put them in a new "phase" in the sim
> repository...

Sounds good - or even better, add a lifo switch to the command line that sets a static flag in Peer, then check the flag when adding messages to the queue. That way, if we make any other changes, we don't have to maintain two parallel branches.
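A minimal sketch of what I have in mind - the class and field names here (Peer, enqueue, the "--lifo" flag) are placeholders, not the sim's actual API:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: one queue class, one static flag, no parallel branches.
class Peer {
    static boolean lifo = false;  // set once from the command line

    private final Deque<String> queue = new ArrayDeque<>();

    void enqueue(String message) {
        if (lifo) queue.addFirst(message);  // LIFO: newest message served first
        else queue.addLast(message);        // default FIFO behaviour
    }

    String next() { return queue.pollFirst(); }

    public static void main(String[] args) {
        for (String a : args)
            if (a.equals("--lifo")) Peer.lifo = true;
        Peer p = new Peer();
        p.enqueue("m1");
        p.enqueue("m2");
        System.out.println(p.next());  // FIFO: m1; with --lifo: m2
    }
}
```

Since the flag is only consulted at enqueue time, the rest of the code never needs to know which discipline is in effect.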
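For reference, the hop-count arithmetic earlier in this message can be checked numerically; the numbers are the ones quoted in this thread, not anything measured by the sim itself:

```java
// Back-of-envelope check of the "about 8 hops" estimate above.
public class HopEstimate {
    public static void main(String[] args) {
        double successes = 80000;       // successful requests in two hours
        double seconds = 2 * 60 * 60;   // the two-hour window
        double replyKB = 17;            // mean reply size, KB
        double nodes = 100;             // Sim.NODES
        double perNodeKBps = 15;        // per-node capacity, KB/s

        double repliesPerSec = successes / seconds;            // ~11.1
        double networkKBps = nodes * perNodeKBps;              // 1500 KB/s
        // Each hop a reply travels costs replyKB of network capacity.
        double maxMeanHops = networkKBps / (repliesPerSec * replyKB);
        System.out.printf("replies/s = %.1f, max mean hops = %.1f%n",
                          repliesPerSec, maxMeanHops);
    }
}
```

This comes out just under 8, which is why the replies can't be travelling much more than 8 hops on average.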
> This was an oversight on my part, right. Does this mean that the failure
> will always be reported by the requesting/inserting node (when timing
> out), so the drops shouldn't be counted in any case?

Right, failure or success should only be reported by the node that started the search - that way we record exactly one result per search.

Cheers,
Michael