On Saturday 24 Aug 2013 20:38:04 Vladislav Sterzhanov wrote:
> - Fixed image links from the previous report
> - Adjustments to plot the situations adequately
> - Some sampling results and the issues that arose
> _______________________________________
> 
> | So, here are the working links to the previous images:
>    10MiB File transfer: https://www.dropbox.com/s/85n0sk11eqk7p1t/10Mib.png
>    100KiB transfer: https://www.dropbox.com/s/vnlf0gwz5kkqsk3/100Kib.png
> 
> | The problem was indeed in the incorrect reporting of the sent packet
> sizes, but a really opaque one - for some reason still unknown to me, the
> SentPacket's size is established twice: first upon creating the
> corresponding NPFPacket, by means of keyContext.sent(sentPacket, seqNum,
> packet.getLength()), and again in PacketFormat.maybeSendPacket by
> keyContext.sent(packet.getSequenceNumber(), packet.getLength()). The
> problem was that even though maybeSendPacket added random pre-padding to
> the packet data, it still reported the old, unpadded packet size to the
> keyContext. As a consequence, that error accumulated with each new packet
> and grew into a huge inaccuracy. After that was fixed I was at last able to
> observe the current packet format's actual behavior.
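> Roughly, the fix is just to report the length that actually goes out on the
> wire. A minimal sketch of the idea (only keyContext.sent() and
> packet.getSequenceNumber() are names from the real code, the helpers here
> are made up):
>
>     // In maybeSendPacket, after the NPFPacket has been serialised:
>     byte[] data = serialise(packet);              // hypothetical helper
>     byte[] padded = addRandomPrePadding(data);    // hypothetical helper
>     // Report the padded size, not data.length, so the in-flight
>     // accounting no longer drifts with every packet sent.
>     keyContext.sent(packet.getSequenceNumber(), padded.length);
>     sendOverUDP(padded);                          // hypothetical helper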

Reporting to where?
> 
> | Some plots that illustrate the current situation:
> 10MiB file; the blue line is calculated as dataInFlight / Node.PACKET_SIZE,
> the green one is a counter that is naturally incremented and decremented on
> each packet sent/acked/lost (see the short sketch after the links below).
> As further tests showed, they are almost identical throughout the whole
> transfer.
> https://www.dropbox.com/s/374rgdbju4x1myt/triple-track-10Mib.png
> 10MiB file, temporary packet loss at the beginning, two separate graphs for
> the packet/data estimations:
> https://www.dropbox.com/s/8se7j9u1wd6sdp1/temporary_packet_loss_data.png
> https://www.dropbox.com/s/tgvoo2velz5wf2r/temporary_packet_loss_packets.png
> 1MiB file, heavy loss during the whole transfer, separate graphs:
> https://www.dropbox.com/s/ryoepg6654mjag1/decreasing_cwnd_packets.png
> https://www.dropbox.com/s/b4x4jd1sopmfgxw/decreasing_cwnd_data.png
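> For reference, the two estimators plotted above are computed roughly like
> this (a sketch; only dataInFlight and Node.PACKET_SIZE are names from the
> real code, the explicit counter is illustrative):
>
>     // Blue line: derive the packet count from the bytes in flight.
>     int estimatedPackets = (int) (dataInFlight / Node.PACKET_SIZE);
>
>     // Green line: keep an explicit counter driven by the events.
>     packetsInFlight++;   // on every packet sent
>     packetsInFlight--;   // on every packet acked or reported lost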
> 
> As you can see, in situations where the link is more or less stable, the
> packet format fails to fill the allowed congestion window effectively. This
> needs to be solved first of all, since all the other modifications assume
> that the cwnd size is almost always equal to the actual amount of data in
> flight; otherwise they will have almost no impact.
> So, the suspects for limiting the bandwidth were:
> Congestion control - obviously not, the cwnd is held HUGE compared to the
> actual data in flight.
> Throttling of LAN connections - no, it is turned off.
> User-specified bandwidth limiting - no, I set it to 1.5mbps and the upload
> speed never goes above 50Kbps.

1.5M byte/sec and 50kbyte/sec I assume?

> Bulk transfer hardcoded upper limit (max = Math.min(max, 100); roughly
> sketched below) - no, increasing it doesn't seem to change the situation at
> all. And again, there are never more than approximately 50 packets in
> flight at any time.
> PacketFormat's MAX_RECEIVE_BUFFER_SIZE - same here.
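> Just to make the cap concrete, it sits in code shaped roughly like this
> (only the Math.min(max, 100) line is from the actual source, the rest is a
> made-up illustration of its role):
>
>     int max = windowSuggestedByPeer();   // hypothetical helper
>     max = Math.min(max, 100);            // hardcoded cap on packets in flight
>     while (packetsInFlight < max && haveMoreBlocksToSend()) {
>         sendNextBlock();                 // hypothetical helper
>         packetsInFlight++;
>     }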

This is rather odd ...
> 
> Any thoughts about what else might have an impact? The only idea left is
> that the PacketSender simply does not keep pace with the incoming acks
> (which could be possible, since the test laptop is quite old), but I did
> monitor the CPU usage and it is far from critical.

Looks like a bug in PacketSender? In a single call to realRun() it will only
send to one node, but if the limit is high it should wake up and send another
packet?
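
Something along these lines, perhaps (a rough sketch of the behaviour I would
expect, not the actual PacketSender code):

    // After sending to one peer, check whether the window still has room
    // and, if so, schedule another pass immediately instead of waiting for
    // the next timer period.
    boolean sent = sendPacketToOnePeer();        // hypothetical helper
    if (sent && windowHasRoomForMore()) {        // hypothetical helper
        wakeUpImmediately();                     // re-run realRun() soon
    } else {
        sleepUntilNextPeriod();
    }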

