Hi spameden,

I just wanted to follow up on this issue. It turned out that the sudden
excessive queuing was caused by another VM on the same VM server doing
excessive disk I/O. We moved that VM off, so now only the kannel server is
running on the host, and things seem much better.
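
For anyone hitting something similar: disk contention from a neighboring
guest usually shows up inside the VM as high I/O wait, which the standard
sysstat tools make easy to spot, e.g.:

  # extended per-device stats every 5s; sustained high %util and await
  # point to a saturated disk (iostat ships with the sysstat package)
  iostat -x 5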

FYI....we are using store-type = file, and kannel would definitely start
swapping once the queues got too full.
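
In case it's useful to anyone else: with store-type = file the whole queue
lives in one file on disk, so you can watch it grow as messages back up.
The path below is just an example; use whatever your store-location setting
points at:

  # watch the on-disk message store grow while the queue backs up
  ls -lh /var/spool/kannel/kannel.store
  watch -n 5 'ls -lh /var/spool/kannel/kannel.store'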

I do have one final question based on your previous reply. You said "Yes,
it's by design, newly submitted messages go out right after they are
submitted". If newly submitted messages go out right away even when there
are older messages queued, can you briefly explain the logic kannel uses to
decide whether a message gets queued or sent right away?

Thanks,
Jeff



On Wed, Mar 12, 2014 at 2:02 AM, spameden <spame...@gmail.com> wrote:

>
>
>
> 2014-03-11 23:48 GMT+04:00 Jeff Thorn <j...@thorntechnologies.com>:
>
>> Thanks spameden. 1 and 2 seem fine. We are working on 3 and 4. I don't know
>> what would have caused the sudden change in throughput when everything was
>> working fine, but I will send an update to the group if we figure it out.
>>
>> I am noticing something strange though....our MT queues are starting to
>> deplete now. However, if I look at the kannel "store-status" page, I still
>> see messages queued from 4 hours ago. All new MT requests, however, are now
>> going out immediately. Is this by design? Why doesn't kannel send queued
>> messages before newer ones?
>>
>
> Yes, it's by design, newly submitted messages go out right after they are
> submitted, unless you use sqlbox with a custom SQL query to control which
> MT messages get sent first.
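>
> If you do go the sqlbox route, you can at least inspect the MT backlog in
> its insert table (send_sms by default). A rough illustration, assuming a
> MySQL backend and default names; adjust table, user and database names for
> your setup:
>
>   # count MT messages still waiting in sqlbox's insert table
>   mysql -u kannel -p kannel -e "SELECT COUNT(*) AS queued FROM send_sms;"
>
>   # peek at the oldest queued entries (sql_id is the auto-increment key)
>   mysql -u kannel -p kannel -e \
>     "SELECT sql_id, receiver, time FROM send_sms ORDER BY sql_id LIMIT 10;"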
>
> What do you use for store? Spool or just a single file?
>
> Also check for any errors in dmesg or syslog, and check whether kannel is
> swapping or otherwise short on memory (free -m).
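>
> For example (log paths vary by distro; these are the usual suspects):
>
>   # recent kernel messages (I/O errors, OOM killer, etc.)
>   dmesg | tail -n 50
>
>   # errors in syslog (on RHEL-type systems check /var/log/messages instead)
>   grep -iE 'error|oom' /var/log/syslog | tail -n 50
>
>   # memory and swap usage in MB
>   free -m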
>
>>
>> Thanks,
>> Jeff
>>
>>
>>
>> On Tue, Mar 11, 2014 at 3:05 PM, spameden <spame...@gmail.com> wrote:
>>
>>>
>>>
>>>
>>> 2014-03-11 22:46 GMT+04:00 Jeff Thorn <j...@thorntechnologies.com>:
>>>
>>> Hi group,
>>>>
>>>> We currently have 7 different Tx binds set up to our SMSC. We have been
>>>> sending MT volumes of up to 3,000,000 messages per day at a rate of almost
>>>> 200 / second. We've been doing this for more than a year now with no
>>>> problems.
>>>>
>>>> Then, about a week and a half ago, we noticed our throughput drop
>>>> suddenly. The kannel status page shows we are only sending about 150 msgs /
>>>> sec, but each of the 7 binds has over 50,000 MT messages queued. These
>>>> queues eventually empty out, but some messages get delivered hours after
>>>> they were sent by our platform.
>>>>
>>>> I am trying to troubleshoot this sudden drop in throughput and
>>>> excessive queuing. The only thing I can think of is network latency between
>>>> our network and the SMSC network. I don't know how to go about
>>>> troubleshooting this though. Is there any other reason that might explain
>>>> the sudden need for excessive queuing?
>>>>
>>>
>>> There are many potential factors that could affect the actual sending
>>> speed.
>>>
>>> Also, I don't find the speed shown in kannel's web interface very accurate.
>>>
>>> 1) set log-level = 0 (debug) for your SMSC logs and check if there are
>>> any throttling errors
>>> 2) check the maximum number of open files for the kannel user (ulimit -n)
>>> and adjust if needed (this matters if you use store-type = spool)
>>> 3) check your backend speed (e.g. how many queries your dlr-url script or
>>> MySQL db can handle simultaneously)
>>> 4) contact your SMSC operator for troubleshooting (e.g. tcpdump the
>>> traffic and check for network problems, massive tcp retransmits or
>>> anything else); example commands for 2) and 4) below
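>>>
>>> Something like this for 2) and 4) (interface, port and user names are
>>> examples; adjust for your setup):
>>>
>>>   # current open-file limit for the kannel user; raise nofile in
>>>   # /etc/security/limits.conf if it's too low for a spool store
>>>   su -s /bin/sh kannel -c 'ulimit -n'
>>>
>>>   # capture the SMPP traffic (standard SMPP port 2775 assumed)
>>>   tcpdump -i eth0 -w smpp.pcap 'tcp port 2775'
>>>
>>>   # count TCP retransmissions in the capture (needs tshark)
>>>   tshark -r smpp.pcap -Y tcp.analysis.retransmission | wc -l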
>>>
>>>
>>>>
>>>> Side question.....if kannel was restarted with so many MT messages
>>>> queued for delivery, would they all be lost?
>>>>
>>>
>>> No, not if you're using a message store (store-type = spool or file).
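>>>
>>> You can see what survives in the store via bearerbox's admin interface,
>>> e.g. (default admin port 13000 and an example password):
>>>
>>>   # summary of stored (queued) messages (the store-status page)
>>>   curl "http://localhost:13000/store-status?password=yourpass"
>>>
>>>   # overall gateway status, including queued counts per smsc
>>>   curl "http://localhost:13000/status?password=yourpass"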
>>>
>>>>
>>>> Any assistance or tips for troubleshooting why so many messages are
>>>> suddenly queuing would be extremely appreciated.
>>>>
>>>> Thanks,
>>>> Jeff
>>>>
>>>
>>>
>>
>
