Hi.

On 24.08.2018 at 11:04, Daniel Schneller wrote:
> Hi!
> 
> Thanks for that input. I would like to understand what's going on before making
> changes. :)

There is a how-it-works.txt document:
http://git.haproxy.org/?p=haproxy-1.8.git;a=blob;f=doc/design-thoughts/how-it-works.txt;h=2d1cb89a059e477469b2f980e970c22f4af6da66;hb=d804e5e6b76bfd34576305ff33fe32aacb1fa5b7

Well, that's tricky, because timeouts in haproxy are implemented as ticks.

For example:

http://git.haproxy.org/?p=haproxy-1.8.git;a=blob;f=src/backend.c;hb=d804e5e6b76bfd34576305ff33fe32aacb1fa5b7#l1224

###
1224         /* set connect timeout */
1225         s->si[1].exp = tick_add_ifset(now_ms, s->be->timeout.connect);
###

The ticks are explained here
http://git.haproxy.org/?p=haproxy-1.8.git;a=blob;f=include/common/ticks.h;hb=d804e5e6b76bfd34576305ff33fe32aacb1fa5b7
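
In short (my own simplified sketch of the helpers in ticks.h, not the exact
source): a tick is an expiration date in milliseconds, the value 0 is reserved
as TICK_ETERNITY ("no timeout"), and tick_add_ifset() only arms an expiration
date when the corresponding timeout is actually configured.

###
/* Simplified sketch of the tick helpers from ticks.h (see the link above
 * for the real code). A tick is an expiration date in milliseconds; the
 * value 0 is reserved to mean "no timeout" (TICK_ETERNITY). */
#define TICK_ETERNITY 0

/* returns non-zero if the tick carries a real expiration date */
static inline int tick_isset(int expire)
{
    return expire != 0;
}

/* adds <timeout> ms to <now>, skipping the reserved value 0 */
static inline int tick_add(int now, int timeout)
{
    now += timeout;
    if (!now)
        now++;
    return now;
}

/* arms an expiration date only if <timeout> is configured; this is what
 * the backend.c line above does with s->be->timeout.connect */
static inline int tick_add_ifset(int now, int timeout)
{
    if (!timeout)
        return TICK_ETERNITY;
    return tick_add(now, timeout);
}
###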

There is a scheduler in haproxy which handles the run queue, all these
timeouts, and other tasks.
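
To illustrate the idea only (a toy sketch, not the haproxy source): each task
carries an expiration tick, and the scheduler wakes up at the nearest one,
treating TICK_ETERNITY as "never".

###
/* Toy sketch only, NOT haproxy source: the scheduler wakes up at the
 * nearest expiration tick among all tasks; 0 (TICK_ETERNITY) means
 * "no timeout at all". */
#include <stdio.h>

#define TICK_ETERNITY 0

static int tick_isset(int t) { return t != 0; }

/* earlier of two expiration ticks, either of which may be unset */
static int tick_first(int t1, int t2)
{
    if (!tick_isset(t1)) return t2;
    if (!tick_isset(t2)) return t1;
    return (t1 - t2) <= 0 ? t1 : t2;
}

int main(void)
{
    /* expiration ticks (in ms) of three tasks; the second has no timeout */
    int task_exp[] = { 30040, TICK_ETERNITY, 30000 };
    int next = TICK_ETERNITY;

    for (int i = 0; i < 3; i++)
        next = tick_first(next, task_exp[i]);

    printf("next wakeup at %d ms\n", next);  /* prints 30000 */
    return 0;
}
###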

More below.

> Cheers,
> Daniel
> 
> 
>> On 24. Aug 2018, at 00:56, Igor Cicimov <ig...@encompasscorporation.com
>> <mailto:ig...@encompasscorporation.com>> wrote:
>>
>> Hi Daniel,
>>
>> We had similar issue in 2015, and the answer was: server timeout was too
>> short. Simple.
>>
>> On Thu, 23 Aug 2018 9:56 pm Daniel Schneller
>> <daniel.schnel...@centerdevice.com 
>> <mailto:daniel.schnel...@centerdevice.com>>
>> wrote:
>>
>>     Friendly bump. 
>>     I'd volunteer to do some documentation amendments once I understand the
>>     issue better :D
>>
>>>     On 21. Aug 2018, at 16:17, Daniel Schneller
>>>     <daniel.schnel...@centerdevice.com
>>>     <mailto:daniel.schnel...@centerdevice.com>> wrote:
>>>
>>>     Hi!
>>>
>>>     I am trying to wrap my head around an issue we are seeing where there are
>>>     many HTTP 504 responses sent out to clients.
>>>
>>>     I suspect that due to a client bug they stop sending data midway during
>>>     the data phase of the request, but they keep the connection open.
>>>
>>>     What I see in the haproxy logs is a 504 response with termination flags
>>>     "sHNN". 
>>>     That I read as haproxy getting impatient (timeout server) waiting for
>>>     response headers from the backend. The backend, not having seen the
>>>     complete request yet, can't really answer at this point, of course.
>>>     I am wondering, though, why it is that I don't see a termination
>>>     state indicating a client problem.
>>>
>>>     So my question (for now ;-)) boils down to these points:
>>>      
>>>     1) When does the server timeout actually start counting? Am I right to
>>>     assume it is from the last moment the server sent or (in this case)
>>>     received some data?

Let's put it like this: once the connection is established, the timeout fires
if no event has been triggered on that side before the expiration tick is
reached.
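
As a rough sketch of that idea (not actual haproxy code): every event on a
side re-arms that side's expiration tick, so the timeout effectively measures
inactivity since the last event seen on that side.

###
/* Rough sketch of the idea only, not actual haproxy code: each event on a
 * side re-arms that side's expiration tick, so "timeout server" measures
 * inactivity since the last event seen on the server side. */
#define TICK_ETERNITY 0

struct side_sketch {
    int exp;         /* expiration tick, TICK_ETERNITY while unarmed */
    int timeout_ms;  /* configured "timeout server" (or "timeout client") */
};

/* arm an expiration date only if a timeout is configured */
static int tick_add_ifset(int now, int timeout)
{
    return timeout ? now + timeout : TICK_ETERNITY;
}

/* called whenever data is sent to or received on this side */
static void rearm_on_event(struct side_sketch *side, int now_ms)
{
    side->exp = tick_add_ifset(now_ms, side->timeout_ms);
}
###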

>>>     2) If both "timeout server" and "timeout client" are set to the same
>>>     value, and the input stalls (after the headers) longer than that, is it
>>>     just that the implementation is such that the server side timeout "wins"
>>>     when it comes to setting the termination flags? 

As far as I understand haproxy, the client side will win, because its tick
starts earlier.

client -> timeout client -> server -> timeout server
The client stalls, and it is the first element in the chain.
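
A made-up timeline with both timeouts at 30s, just to illustrate the
reasoning:

###
t =     0 ms   client sends the last bytes it will send, then stalls
               -> client tick armed:  exp = 0 + 30000 = 30000
t =    40 ms   haproxy has connected and forwarded the data to the server
               -> server tick armed:  exp = 40 + 30000 = 30040
t = 30000 ms   the client tick expires first, because it was armed earlier
###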

>>>     3) If I set the client timeout shorter than the server timeout and
>>>     produced this situation, should I then see a cD state? If so, would I be
>>>     right to assume that if the server were now to stall, the log could again
>>>     be misleading in telling me that the client timeout expired first?

cD, yes.
You should also see sD entries alongside the cD ones, plus some unusual timing
values.

https://cbonte.github.io/haproxy-dconv/1.8/configuration.html#8.4

>>>     I understand it is difficult to tell "who's to blame" for an inactivity
>>>     timeout without knowledge about the content or final size of the request
>>>     -- I just need some clarity on how to read the logs :)
>>>
>>>
>>>     Thanks!
>>>     Daniel

Best regards
Aleks

>>>     -- 
>>>     Daniel Schneller
>>>     Principal Cloud Engineer
>>>
>>>     CenterDevice GmbH
>>>     Rheinwerkallee 3
>>>     53227 Bonn
>>>     www.centerdevice.com <http://www.centerdevice.com/>
>>>
>>>
>>>
>>
> 

