Yes, it seems to be a bug in the current version.

-----Original Message-----
From: Nikos Balkanas [mailto:[email protected]] 
Sent: Tuesday, 15 June 2010 15:35
To: Tomasz; [email protected]
Cc: [email protected]
Subject: Re: Issue with concatenated messages and load balancing over SMPP

Yes. That is pretty much what I said (concerning concatenation). SMPPbox 
seems to need a patch for that.

@Rene: Have you come across this issue in production?

BR,
Nikos
----- Original Message ----- 
From: "Tomasz" <[email protected]>
To: <[email protected]>
Sent: Tuesday, June 15, 2010 3:31 PM
Subject: Re: Issue with concatenated messages and load balancing over SMPP


Hi Nikos,

Sorry, I didn't make that clear enough in my first post.

I use a different smsc-id for each SMSC, but I also set a "virtual"
smsc-id in allowed-smsc-id that is identical for several SMSCs. This
lets me build several SMSC groups with load balancing - each group
has several modems.

Then I specify this "virtual" smsc-id in the SMPPbox config
(route-to-smsc) and Kannel does the load balancing :) Everything works
great with non-concatenated messages.
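
To make this concrete, the bearerbox side looks roughly like this (the
smsc-ids, hosts and credentials below are just placeholders):

# first connection of the "group-a" pool
group = smsc
smsc = smpp
smsc-id = smsc-a1
allowed-smsc-id = group-a
host = 10.0.0.1
port = 2775
smsc-username = user1
smsc-password = pass1

# second connection of the same pool
group = smsc
smsc = smpp
smsc-id = smsc-a2
allowed-smsc-id = group-a
host = 10.0.0.2
port = 2775
smsc-username = user2
smsc-password = pass2

Both connections accept messages carrying the virtual smsc-id "group-a",
so with route-to-smsc = group-a in the SMPPbox config bearerbox is free
to hand each message to either of them.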

I've also found that the issue is caused by SMPPbox, as it passes two
messages (as received from bearerbox 1) on to bearerbox 2 instead of
one. Because of the load balancing, each part of a concatenated message
then goes out via a different SMSC.
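
For anyone looking at a patch: the parts of a concatenated message can be
recognized from the concatenation element in the UDH, so SMPPbox could
collect all parts that share the same source address and reference number
and forward them to bearerbox 2 as a single message. Below is a rough,
standalone C sketch (not actual SMPPbox/Kannel code) showing only how the
reference/total/sequence fields would be read out of the UDH:

/* Sketch only: extract the concatenation info from a UDH so that parts
 * sharing (source address, ref) can be buffered until seq 1..total have
 * all arrived. Assumes the UDHI flag is set and the user data starts
 * with the UDH. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

struct concat_info {
    uint16_t ref;    /* reference number shared by all parts */
    uint8_t  total;  /* total number of parts */
    uint8_t  seq;    /* sequence number of this part, 1-based */
};

/* Returns 1 and fills *ci if the UDH contains a concatenation IE
 * (IEI 0x00 = 8-bit reference, IEI 0x08 = 16-bit reference). */
static int parse_concat_udh(const uint8_t *ud, size_t len, struct concat_info *ci)
{
    if (len < 1)
        return 0;
    size_t udh_end = (size_t) ud[0] + 1;  /* UDHL excludes the length octet */
    if (udh_end > len)
        return 0;
    size_t pos = 1;
    while (pos + 1 < udh_end) {
        uint8_t iei   = ud[pos];
        uint8_t ielen = ud[pos + 1];
        if (pos + 2 + ielen > udh_end)
            return 0;                     /* malformed IE */
        if (iei == 0x00 && ielen == 3) {  /* 8-bit reference number */
            ci->ref   = ud[pos + 2];
            ci->total = ud[pos + 3];
            ci->seq   = ud[pos + 4];
            return 1;
        }
        if (iei == 0x08 && ielen == 4) {  /* 16-bit reference number */
            ci->ref   = (uint16_t) ((ud[pos + 2] << 8) | ud[pos + 3]);
            ci->total = ud[pos + 4];
            ci->seq   = ud[pos + 5];
            return 1;
        }
        pos += 2 + (size_t) ielen;
    }
    return 0;
}

int main(void)
{
    /* Example UDH 05 00 03 2A 02 01: part 1 of 2, reference 0x2A (42) */
    const uint8_t msg[] = { 0x05, 0x00, 0x03, 0x2A, 0x02, 0x01, 'H', 'i' };
    struct concat_info ci;
    if (parse_concat_udh(msg, sizeof msg, &ci))
        printf("part %d of %d, ref %d\n", ci.seq, ci.total, ci.ref);
    return 0;
}

If the parts arrive with the SMPP sar_* TLVs instead of a UDH, the same
three values are in sar_msg_ref_num, sar_total_segments and
sar_segment_seqnum.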

Regards,
Tomasz



> Hi,

> You are not using the correct configuration for load balancing. Use a
> different smsc-id for each smsc and simply do not specify an smsc in your
> sendsms URL. Kannel will load-balance SMS using rand() and the load on each
> smsc. This also allows finer control over your smscs, since when you want
> to use a specific SMSC (termination issues?) you can still direct traffic
> to it from your sendsms URL. For that you will want to set
> preferred-smsc-id to the connection's own smsc-id in each smsc definition.

> However, I suspect that this will not fix your problem. For a real fix,
> SMPPbox needs to reassemble the parts into one SMS before sending it to
> bearerbox 2. Any takers on that?

> BR,
> Nikos
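
For reference, the setup Nikos describes above would look roughly like the
following (again, all ids, hosts and credentials are placeholders):

group = smsc
smsc = smpp
smsc-id = smsc1
preferred-smsc-id = smsc1
host = 10.0.0.1
port = 2775
smsc-username = user1
smsc-password = pass1

group = smsc
smsc = smpp
smsc-id = smsc2
preferred-smsc-id = smsc2
host = 10.0.0.2
port = 2775
smsc-username = user2
smsc-password = pass2

A sendsms request without an smsc parameter, e.g.

http://localhost:13013/cgi-bin/sendsms?username=foo&password=bar&to=%2B4512345678&text=test

is then balanced across the connections, while appending &smsc=smsc1 pins
the message to that specific connection.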




