On 24.08.2017 18:24, Giulio Giovannini wrote:
Hello Stipe.
Thanks for the interest.
I cannot confirm that 100%, but I can see the smsbox.log flowing without
interruption. So I assume that the lines below are written as a DLR is
picked up by the smsbox, and that once a new line is printed, the
previous DLR has been processed.
2017-08-24 18:18:27 [27072] [5] INFO: Starting delivery report
<C22834_001> from <Superga>
2017-08-24 18:18:27 [27072] [5] INFO: Starting delivery report
<C11874_013> from <WWM>
2017-08-24 18:18:27 [27072] [5] INFO: Starting delivery report
<C22834_001> from <Superga>
Nonetheless, while that log flows fluidly, a tcpdump on the machine shows
that the HTTP GETs are delayed, as if accumulated in batches. That is
confirmed on the other end, where the HTTP GETs are received in batches.
When traffic is low there are no losses; when traffic gets intense I
notice losses in the DLRs on the HTTP receiving end.
Hi Giulio,
the only reason I can see at the moment for something that "looks" like
batched HTTP call processing is the limit that Kannel's HTTP client
sets on how many concurrent calls are allowed to be in an "open" state.
This is controlled via:
group = smsbox
...
max-pending-requests = x
BTW, the default is 512 if this is not configured.
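For reference, a minimal smsbox group with this limit set explicitly could look like the following sketch (the hostname and the raised value of 1024 are hypothetical, adjust to your setup):

```
group = smsbox
bearerbox-host = 127.0.0.1
max-pending-requests = 1024
```

Raising the value may help if your HTTP endpoint can handle more concurrency; lowering it protects a slow endpoint at the cost of more client-side blocking.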
So what COULD happen is this: in a high-load situation the HTTP server
may take a bit longer to respond per request, driving the "open
response" count up to 512. While that is the case, the HTTP client
layer blocks any further calls towards the HTTP server. Then the HTTP
server flushes out a bunch of responses, the counter drops by some
amount (e.g. 50), Kannel's HTTP client writes 50 more, and blocks again
once 512 are "open" again. As long as the input flow is high, this
could produce exactly the effect you see.
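To make the assumed mechanism concrete, here is a toy simulation -- this is NOT Kannel code, just an illustration: a client that may keep at most `max_pending` requests open, against a server that flushes responses in chunks, naturally produces bursty ("batched") request arrivals even though the client never pauses voluntarily.

```python
# Toy model of the scenario above. `max_pending` plays the role of
# max-pending-requests (scaled down from 512 to 8 for readability),
# and `flush_size` models the server answering a chunk of requests
# at once under load.

def simulate(total_requests, max_pending, flush_size):
    pending = 0   # currently "open" (unanswered) requests
    burst = 0     # size of the current uninterrupted send burst
    bursts = []   # burst sizes as observed by the HTTP server
    sent = 0
    while sent < total_requests:
        if pending < max_pending:
            # client is free to fire another request immediately
            pending += 1
            sent += 1
            burst += 1
        else:
            # limit reached: client blocks until the server flushes
            # a chunk of responses, then resumes sending
            bursts.append(burst)
            burst = 0
            pending -= min(flush_size, pending)
    if burst:
        bursts.append(burst)
    return bursts

# With a limit of 8 and flushes of 4, 100 requests go out as one burst
# of 8 followed by bursts of 4 each.
print(simulate(100, 8, 4))  # [8, 4, 4, ..., 4]
```

The receiver sees batches whose sizes track the server's flush pattern, which matches the batched HTTP GETs observed in the tcpdump.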
But it doesn't explain the differences in the revisions you have been using.
Any chance you can create a test scenario that can be reproduced
deterministically on our side, so we can see the differences for both revisions?
Reviewing the changesets from r5180 to r5186, I see only ONE thing
touching gwlib/http.c: the HTTP/1.1 keep-alive correction I added. But
that doesn't touch any of the HTTP client-side code, since it applies
to what we do as an HTTP server.
So we simply can't attribute the cause to any particular change, and
should rather assume my scenario above is what is happening.
--
Best Regards,
Stipe Tolj
-------------------------------------------------------------------
Düsseldorf, NRW, Germany
Kannel Foundation tolj.org system architecture
http://www.kannel.org/ http://www.tolj.org/
stolj at kannel.org st at tolj.org
-------------------------------------------------------------------