Hello list,

I have some updates that should better explain my problem.

Situation:
CentOS 6
Kannel 1.4.3, used as an SMS gateway via HTTP calls originating from a custom 
application my company developed
kannel.conf: http://pastebin.com/CCX9PGMN

Problem:
When a multithreaded application sends HTTP requests to Kannel, some of the 
requests get a response only after a long delay (sometimes more than a 
minute). This happens only when a relatively high number of threads is in use 
(more than 5).
I see this behavior in both virtual and physical environments.
The log files (both bearerbox and smsbox) show no errors at all, and the SMS 
messages are ultimately delivered correctly to the SMSC.
The strange thing: the pause always happens after roughly 16,300 requests have 
been queued.
The problem seems to lie in Kannel's submit interface; once the messages are 
queued, everything runs smoothly (SMS delivered to the SMSC at the rate 
imposed by the SMSC itself).

To reproduce the issue, I've written a small Ruby script that is a simplified 
version of the final (more complex) application.
You can find the script here: http://pastebin.com/Pe1QULGS
Feel free to edit the GET parameters, the number of threads, and the number of 
SMS per thread.
Here is an extract from the results of a 25,000-message run (25 threads × 
1,000 messages each): http://pastebin.com/RSi42SVE
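For the archives, in case the pastebin link goes away: the script boils down to something like the sketch below. The host, port, credentials, and destination number are placeholders, not the real values from my setup.

```ruby
require 'net/http'
require 'uri'

# Placeholder values -- swap in your own gateway host/port and sendsms
# credentials before running. The destination number is fake.
HOST       = 'localhost'
PORT       = 13013
THREADS    = 25
PER_THREAD = 1000

# Build the sendsms GET URI for one message.
def sendsms_uri(host, port, user, pass, to, text)
  query = URI.encode_www_form(username: user, password: pass, to: to, text: text)
  URI("http://#{host}:#{port}/cgi-bin/sendsms?#{query}")
end

def run_load_test
  (1..THREADS).map do |t|
    Thread.new do
      PER_THREAD.times do |i|
        uri = sendsms_uri(HOST, PORT, 'USERNAME1', 'SECRET1',
                          '+391234567890', "test #{t}-#{i}")
        started  = Time.now
        response = Net::HTTP.get_response(uri)
        elapsed  = Time.now - started
        # Report only the slow answers, so the stall stands out in the output.
        puts "thread #{t} msg #{i}: HTTP #{response.code} in #{'%.2f' % elapsed}s" if elapsed > 5
      end
    end
  end.each(&:join)
end

# run_load_test   # uncomment to fire the test against your gateway
```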

As far as open files are concerned, "lsof -u kannel" never showed more than 
about 170 entries during the tests, so the open-files limit shouldn't be the 
cause of the problem.
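For completeness, the descriptor check can be scripted; the snippet below is just a sketch wrapping lsof and the process RLIMIT_NOFILE (the "kannel" user name is of course specific to my box).

```ruby
# Sketch of the open-files check: count the kannel user's descriptors via
# lsof and show this process's RLIMIT_NOFILE soft/hard limits alongside.
def open_fd_count(user)
  # lsof prints one header line plus one line per open file; an unknown
  # user (or a missing lsof binary) yields no output, hence the clamp to 0.
  [`lsof -u #{user} 2>/dev/null`.lines.count - 1, 0].max
end

soft, hard = Process.getrlimit(:NOFILE)
puts "RLIMIT_NOFILE: soft=#{soft} hard=#{hard}"
puts "kannel open files: #{open_fd_count('kannel')}"
```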

I can provide further information, so feel free to ask.

Thanks a lot for your help!

Cheers,
Fabio

On 9 Dec 2011, at 16:29, [email protected] wrote:

> Why don't you use SQLBOX?
> 
> On Fri, Dec 9, 2011 at 7:44 PM, Fabio Sangiovanni <[email protected]> wrote:
> Thanks again Mohammed, I'll change those settings as soon as I get back to 
> work.
> I'll keep you posted! :)
> 
> Cheers,
> Fabio
> 
> 
> On 9 Dec 2011, at 15:21, Mohammed Saleem wrote:
> 
>> Hey Fabio
>> 
>> I can't provide a 100% solution for this issue before seeing the logs. Check 
>> the logs for any errors. Most probably, though, it is an open-files 
>> limitation problem, because you haven't set a limit on 
>> "sms-outgoing-queue-limit". Actually, this depends on your throughput and 
>> how many messages you put in the queue of each connection. Try the 
>> following:
>> 
>> 
>> (1) Increase the open-files limit for the user running Kannel.
>> (2) Set a limit on sms-outgoing-queue-limit; for example, no more than 100 
>> per 40 SMS/s of throughput. This way Kannel will reject any message when the 
>> queue is full (check the HTTP response returned to the client application 
>> that sends messages; you need to handle this kind of response and retry the 
>> messages rejected because the queue was full).
>> 
>> 
>> 
>> 
>> Best Regards,
>> Mohammed M I Sleem
>> 
>> http://www.abusleem.net  - Personal blog
>> 
>> 
>> 
>> On Fri, Dec 9, 2011 at 4:01 PM, Fabio Sangiovanni <[email protected]> 
>> wrote:
>> Hi Mohammed,
>> 
>> thanks for your reply.
>> Unfortunately I won't be at my workplace again until next Monday, so I won't 
>> be able to provide detailed logs before then.
>> On the other hand, I can post the kannel.conf file currently in use on the 
>> test machine.
>> Thanks again for your help. I'll post relevant log entries ASAP.
>> 
>> Cheers,
>> Fabio
>> 
>> 
>> ################################
>> #
>> # BEARERBOX
>> #
>> ################################
>> 
>> group = core
>> admin-port = 13000
>> admin-password = SECRET
>> #status-password = SECRET
>> admin-deny-ip = "*.*.*.*"
>> admin-allow-ip = "127.0.0.*;192.168.1.*"
>> smsbox-port = 13003
>> #wapbox-port = 13002
>> #wdp-interface-name = "*"
>> box-deny-ip = "*.*.*.*"
>> box-allow-ip = "127.0.0.1"
>> log-file = "/var/log/kannel/bearerbox.log"
>> access-log = "/var/log/kannel/access.log"
>> store-type = spool
>> store-location = "/var/spool/kannel"
>> log-level = 1
>> #log-level = 0
>> sms-incoming-queue-limit = -1
>> sms-outgoing-queue-limit = -1
>> sms-resend-retry = 0
>> dlr-storage = internal
>> 
>> 
>> ################################
>> #
>> # SMSBOX
>> #
>> ################################
>> 
>> group = smsbox
>> bearerbox-host = "localhost"
>> sendsms-port = 13013
>> sendsms-chars = "0123456789 "
>> log-file = "/var/log/kannel/smsbox.log"
>> log-level = 1
>> #log-level = 0
>> access-log = "/var/log/kannel/access.log"
>> 
>> 
>> ################################
>> #
>> # ACCOUNT/SMSC MAPPINGS
>> #
>> ################################
>> 
>> 
>> # SMSC FAKEPLEX
>> group = sendsms-user
>> username = USERNAME1
>> password = SECRET1
>> name = user_smsc_fake
>> user-deny-ip = "*.*.*.*"
>> user-allow-ip = "127.0.0.*;192.168.1.*"
>> forced-smsc = smscfake
>> default-sender = mycompany
>> max-messages = 3
>> concatenation = 1
>> omit-empty = 1
>> #
>> 
>> # SMSC PRODUCTION
>> group = sendsms-user
>> username = USERNAME2
>> password = SECRET2
>> name = user_smsc_prod
>> user-deny-ip = "*.*.*.*" 
>> user-allow-ip = "127.0.0.*;192.168.1.*"
>> forced-smsc = smscprod
>> default-sender = mycompany
>> max-messages = 3
>> concatenation = 1
>> omit-empty = 1
>> #
>> 
>> 
>> ################################
>> #
>> # SMSC CONFIG
>> #
>> ################################
>> 
>> 
>> # SMSC FAKEPLEX
>> group = smsc
>> smsc = smpp
>> smsc-id = smscfake
>> allowed-smsc-id = smscfake
>> host = <SMSC HOSTNAME (GATEWAY A)>
>> port = 3205
>> receive-port = 3205
>> transceiver-mode = true
>> smsc-username = USERNAME_SMSC
>> smsc-password = SECRET_SMSC
>> system-type = "mycompany3"
>> service-type = 18170
>> max-pending-submits = 10
>> msg-id-type = 0x01
>> log-file = "/var/log/kannel/smsc.gatewayA.log"
>> log-level = 1
>> 
>> group = smsc
>> smsc = smpp
>> smsc-id = smscfake
>> allowed-smsc-id = smscfake
>> host = <SMSC HOSTNAME (GATEWAY B)>
>> port = 3205
>> receive-port = 3205
>> transceiver-mode = true
>> smsc-username = USERNAME_SMSC
>> smsc-password = SECRET_SMSC
>> system-type = "mycompany3"
>> service-type = 18170
>> max-pending-submits = 10
>> msg-id-type = 0x01
>> log-file = "/var/log/kannel/smsc.gatewayB.log"
>> log-level = 1
>> #
>> 
>> # SMSC PRODUCTION
>> group = smsc
>> smsc = smpp
>> smsc-id = smscprod
>> allowed-smsc-id = smscprod
>> host = <SMSC HOSTNAME (GATEWAY A)>
>> port = 3205
>> receive-port = 3205
>> transceiver-mode = true
>> smsc-username = USERNAME_SMSC
>> smsc-password = SECRET_SMSC
>> system-type = "mycompany4"
>> service-type = 16570
>> max-pending-submits = 10
>> msg-id-type = 0x01
>> log-file = "/var/log/kannel/smsc.gatewayA.log"
>> log-level = 1
>> 
>> group = smsc
>> smsc = smpp
>> smsc-id = smscprod
>> allowed-smsc-id = smscprod
>> host = <SMSC HOSTNAME (GATEWAY B)>
>> port = 3205
>> receive-port = 3205
>> transceiver-mode = true
>> smsc-username = USERNAME_SMSC
>> smsc-password = SECRET_SMSC
>> system-type = "mycompany4"
>> service-type = 16570
>> max-pending-submits = 10
>> msg-id-type = 0x01
>> log-file = "/var/log/kannel/smsc.gatewayB.log"
>> log-level = 1
>> #
>> 
>> 
>> On 7 Dec 2011, at 17:08, Mohammed Saleem wrote:
>> 
>>> Hi Fabio
>>> 
>>> Did you check bearerbox and smsbox for errors?
>>> 
>>> Also share your full configurations.
>>> 
>>> 
>>> 
>>> On Wed, Dec 7, 2011 at 4:50 PM, Fabio Sangiovanni <[email protected]> 
>>> wrote:
>>> Hello list,
>>> 
>>> the company I work for is using Kannel 1.4.3 on CentOS 6 as an SMS gateway 
>>> between our application servers and an SMSC. We have developed a 
>>> multithreaded application that sends HTTP GET requests to our internal 
>>> Kannel proxy. Everything goes smoothly with up to 5 application threads. 
>>> The problem is that when the number of threads goes beyond 5, Kannel's 
>>> responsiveness seems to drop to the point that it doesn't even accept more 
>>> requests. I couldn't figure out where the problem is, nor which parameters 
>>> to adjust to raise the number of possible concurrent requests.
>>> 
>>> Are there any limitations on the HTTP side of the gateway? Shouldn't Kannel 
>>> accept requests at full speed, queue them, and then deliver them to the 
>>> SMSC at whatever rate the SMSC itself accepts?
>>> 
>>> I've read through the documentation multiple times but haven't found a 
>>> solution yet.
>>> 
>>> Thanks a lot for your support!
>>> 
>>> Cheers,
>>> Fabio
>>> 
>> 
>> 
> 
> 

