On Sun, Sep 08, 2013 at 11:26:17PM +0400, Vladislav Sterzhanov wrote:
> To summarize this post: "Hooray! The bug that was crippling network
> transfer speed has been found and fixed, and fixing it helped reveal some
> less significant ones."
>
> So, it turned out that the bug lived in PacketSender, in an incorrect
> nextActionTime calculation. This value determines how long PacketSender
> idles after sending a packet before sending the next one. After tracing
> PeerNode.getNextUrgentTime() and walking through all the logic involved in
> estimating nextActionTime, it became clear that it did not correctly
> account for the possibility of sending a full-sized packet. In the
> if-branch handling the case where the messageQueue held at least one full
> packet's worth of data, the nextActionTime adjustment relied solely on the
> urgent time of the next message in the queue ("wait() until the deadline
> of the highest-priority message in the queue"), whereas in that branch it
> should be "if there is enough bandwidth to send a fully-assembled packet,
> don't wait(), go directly to the next iteration".
>
> Once the cause was found, the fix was quite straightforward. While
> rewriting this part I found some other minor faults, fixed them, and added
> some explanatory comments.
>
> All bulk file transfers sped up by a factor of 10-15, depending on the
> quality of the connection.
>
> Now I am at last able to move onward.
>
> https://github.com/Quadrocube/fred-staging/compare/transport-layer-modifications-
> - the actual transport-layer changes
> https://github.com/Quadrocube/fred-staging/compare/freenet:next...link-stats-tracking-
> - testing tools; merge them together if you want to have a look at some
> samples
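As I read it, the corrected branch boils down to something like the sketch
below; the names (computeNextActionTime, FULL_PACKET_PAYLOAD and friends)
are illustrative stand-ins, not the actual PacketSender/PeerNode code.

    final class NextActionTimeSketch {
        static final int FULL_PACKET_PAYLOAD = 1400; // illustrative size, not the real fred constant

        long computeNextActionTime(long now, long queuedBytes,
                                   boolean canSendFullPacketNow,
                                   long nextUrgentTime,
                                   long timeUntilFullPacketAllowed) {
            if (queuedBytes >= FULL_PACKET_PAYLOAD) {
                if (canSendFullPacketNow) {
                    // Enough data for a full packet and enough bandwidth:
                    // do not wait for any message deadline, act on this pass.
                    return now;
                }
                // Otherwise wake up as soon as the throttle would allow a full
                // packet through, rather than at the next urgent deadline.
                return Math.min(nextUrgentTime, now + timeUntilFullPacketAllowed);
            }
            // Less than a full packet queued: waiting for the deadline of the
            // highest-priority queued message is still the right behaviour.
            return nextUrgentTime;
        }
    }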
Nice catch! Please don't be too discouraged by the below, but there is more
to do, and you'll need to redo your patches...

Why did you remove the limit on the number of packets queued in
BulkTransmitter? Without it we will run out of memory: these packets are
queued on the packet queue, fully in RAM, and they are not affected by the
congestion window limits.

You should do different things in different commits.

IMHO what we need to do here is always go back around the loop after sending
a packet. Your code eventually had a separate loop which re-queries each
node; it is easier to use the real logic, IMHO. We can't just check the peer
we sent on, because other peers might also have packets ready, as you
discovered; and going back over all the peers is ugly, IMHO. It is easier to
use the same code we use for selecting a peer to send to in the first place:
fewer special cases (roughly as in the sketch at the end of this mail).

"// FIXME: BUG! canSendThrottled represents only the ability to send FULL
packet, urgents can be of less size than full"

This is an intentional design decision. We do not send anything until we can
send a full packet, because small packets are hideously inefficient:
degenerating to a 1-byte payload behind a 60+ byte header (including the
packet headers) is not acceptable. And sending a packet just because we have
space for it, thus preventing us from sending to any of our other peers
which have more data queued, is likely to lead to nasty
starvation/unfairness problems, IIRC.

Your comment explaining when we can send stuff is good, though.
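To make the "go back around the loop" point concrete, here is a rough sketch
of what I have in mind; selectPeerToSendTo(), canSendFullPacket() and
hasFullPacketQueued() are illustrative stand-ins, not the real fred API.

    import java.util.List;

    final class SendLoopSketch {
        interface Peer {
            boolean canSendFullPacket();   // throttle currently allows a full-size packet
            boolean hasFullPacketQueued(); // at least a full packet's worth of data queued
            void sendOnePacket();
        }

        // After every send, go back around the loop and re-run the same
        // selection logic, instead of re-checking only the peer we just sent
        // on or iterating all peers in a separate ad-hoc loop.
        void sendPass(List<Peer> peers) {
            while (true) {
                Peer next = selectPeerToSendTo(peers);
                if (next == null)
                    break; // no peer can send a full packet right now
                next.sendOnePacket();
            }
        }

        private Peer selectPeerToSendTo(List<Peer> peers) {
            for (Peer p : peers) {
                // Only full packets are worth sending: a tiny payload behind a
                // 60+ byte header wastes bandwidth and can starve peers with
                // more data queued.
                if (p.canSendFullPacket() && p.hasFullPacketQueued())
                    return p;
            }
            return null;
        }
    }

The point is that the per-send decision and the initial peer selection share
one code path, so the full-packet and fairness rules live in exactly one
place.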