https://bz.apache.org/bugzilla/show_bug.cgi?id=58565

--- Comment #9 from Konstantin Kolinko <knst.koli...@gmail.com> ---
(In reply to comment #8)
> It would seem with the default buffer on Linux + localhost ...

1. If I understand correctly, real networks have some quality-of-service
mechanisms. A well-known factor is the MTU size. For a localhost connection the
MTU is enormous (65536 bytes on Linux loopback, versus 1500 for typical
Ethernet).

2. I do not know how wget implements its --limit-rate feature. Maybe all those
MBs of data have already arrived and are sitting in its own (client-side)
buffer.

3. Maybe in some configurations it makes sense to specify timeouts in terms of
throughput rather than a time unit.

(E.g. it may be easier to configure a minimum-throughput requirement of X KB/s
than a read timeout of Y sec.) This means that the actual timeout (the value
passed as an argument to the APIs) needs to be calculated dynamically, based on
the amount of data that has been transferred earlier over this connection.
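The dynamic calculation described above could be sketched roughly as follows.
This is a hypothetical illustration, not Tomcat code; the class and parameter
names (ThroughputTimeout, minBytesPerSec, floorMillis) are my own invention.
The idea: a connection "earns" a time budget proportional to the bytes it has
already transferred at the required rate, and the next read timeout is whatever
remains of that budget, never below a configured floor.

```java
/**
 * Hypothetical sketch (not an actual Tomcat API): derive the timeout for
 * the next socket read from a minimum-throughput requirement, so that a
 * slow-but-steady client is not killed by a fixed per-read timeout.
 */
public class ThroughputTimeout {

    private final long minBytesPerSec; // required minimum throughput
    private final long floorMillis;    // lower bound for any single read

    public ThroughputTimeout(long minBytesPerSec, long floorMillis) {
        this.minBytesPerSec = minBytesPerSec;
        this.floorMillis = floorMillis;
    }

    /**
     * Timeout (in ms) to pass to the socket API for the next read, given
     * the bytes already transferred and the elapsed time on this
     * connection.
     */
    public long nextTimeoutMillis(long bytesTransferred, long elapsedMillis) {
        // Time budget the connection has earned at the required rate:
        long budgetMillis = (bytesTransferred * 1000) / minBytesPerSec;
        // Whatever is left of the budget, but never below the floor.
        return Math.max(budgetMillis - elapsedMillis, floorMillis);
    }
}
```

For example, at a required 1 KB/s, a connection that has moved 10240 bytes in
5 seconds has earned a 10-second budget and would be allowed another 5 seconds
for its next read.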

-- 
You are receiving this mail because:
You are the assignee for the bug.

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@tomcat.apache.org
For additional commands, e-mail: dev-h...@tomcat.apache.org
