Hi Kalle, I'm hoping I've understood your post. Apologies if I've lost the plot - I've put some questions/comments in-line below.
On Thu, 2004-10-07 at 01:52, Kalle Marjola wrote:
> So, what does this first version do:
> Instead of immediately saying '202 Send' for a valid sendsms request, this
> patch waits for the bearerbox ACK before replying to the HTTP client.
> Valid replies are based on msg.h and are now:
> 202 0: Accepted for delivery (routed to SMSC driver)
> 202 3: Queued for later delivery (all SMSCes currently down)
> 403 Forbidden (no SMSC configured to accept this, fix something)
> 503 Service unavailable (queue full or other problem. Try again later)

Knowing up front that your message has been routed to the SMSC (when not queued) would IMHO be a great improvement.

> This is still far from ideal (to get the SMSC reply), but better
> than earlier. And this is how it should have been, always,
> as why else are those 'ack' type messages flying around...
> (currently they are just discarded)
>
> Performance notes:
> * yes, it takes a microsecond longer for Kannel to reply to
> sendsms. Moreover, if the system is completely bogged down, HTTP
> queries start to build up, too.
> * if an SMS is multisent (several receivers), the status and reply
> are based on the first ACK from bearerbox; the rest are discarded

It would be good to understand more about the impact of doing this (as you note). Please consider the following scenario.

Connections to the SMSC are often bandwidth limited (we have some SMPP links over limited frame relay, for example). Under load, this can mean there is not as much traffic inbound as outbound. With Kannel as it currently operates, we send and get a 202, and the message is queued because the SMSC's links are all busy. As a result, internal queues can build up faster than messages get popped and sent. Success and failure are therefore measured using DLRs (which of course don't help the throughput rates ;). Simply put, Kannel can queue at a rate easily an order of magnitude greater than it can send, let alone have messages acknowledged. What happens if actual delivery of a message takes longer than the HTTP timeout?
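To make the discussion concrete, here is a rough sketch (not part of the patch, and not Kannel code) of how an HTTP sendsms client might interpret the new replies Kalle lists above. The helper name and category strings are my own invention; only the (HTTP status, leading body digit) pairs come from the quoted list, and I am assuming the reply body starts with the numeric code followed by a colon, as in "0: Accepted for delivery".

```python
def classify_sendsms_reply(http_status, body):
    """Map a sendsms HTTP reply to a coarse client-side action.

    Assumes the 202 reply body begins with the msg.h status digit,
    e.g. "0: Accepted for delivery" or "3: Queued for later delivery".
    """
    if http_status == 202:
        code = body.strip().split(":", 1)[0].strip()
        if code == "0":
            return "sent"        # routed to an SMSC driver
        if code == "3":
            return "queued"      # all SMSCes currently down
        return "unknown"
    if http_status == 403:
        return "misrouted"       # no SMSC configured to accept this; fix config
    if http_status == 503:
        return "retry-later"     # queue full or other transient problem
    return "unknown"
```

Note that this only covers the replies that arrive in time; a client would still need a policy for the timeout case raised below, where no reply is seen at all and the message may or may not have been routed.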
Would we actually be making things more complicated? Also, if load balancing with internal queues: in the event one SMSC link is dropped, the message queues are effectively shuffled. What would happen if we're waiting to hear back from the SMSC about a message and the link is dropped, etc.? We would need to cope with these scenarios.

Cheers,
Alan
