I have managed to get 100 SMS/sec with 2 Mbit/s and a two-server Kannel configuration (one bearerbox, one smsbox). The servers had 1 GB RAM each (cloud servers from GoGrid).
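For reference, the bearerbox/smsbox split mentioned above looks roughly like this in Kannel's configuration. This is a minimal sketch: the hostnames, ports, passwords and SMPP parameters below are placeholders, and the real values depend entirely on your carrier account.

```
# bearerbox server (kannel.conf)
group = core
admin-port = 13000
admin-password = changeme
smsbox-port = 13001
log-file = "/var/log/kannel/bearerbox.log"

# SMPP link to the carrier (placeholder credentials)
group = smsc
smsc = smpp
smsc-id = carrier1
host = smpp.example.net
port = 2775
smsc-username = user
smsc-password = pass
transceiver-mode = true

# smsbox server (smsbox.conf) -- points back at the bearerbox host
group = smsbox
bearerbox-host = 10.0.0.1
sendsms-port = 13013
log-file = "/var/log/kannel/smsbox.log"

group = sendsms-user
username = app
password = secret
```

With this split you can run several smsbox instances against one bearerbox if the HTTP side becomes the bottleneck before the SMPP side does.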
Great statement about the SMS Murphy's law btw Falko. So true.

Eduardo
Sent from my BlackBerry® wireless device

-----Original Message-----
From: seikath <[email protected]>
Date: Tue, 21 Apr 2009 09:14:41
Cc: <[email protected]>
Subject: Re: High volume network connectivity

so true about Murphy's SMS law ...

Falko Ziemann wrote:
> Hello,
>
> well, the most OPTIMAL hardware connection is surely SS7. But then
> Kannel is out of the game and you need a real SMSC. High-volume
> traffic with constantly high throughput should be routed directly to the
> carriers/MNOs, with no aggregators in between. So your best bet is to pick
> up a phone, get hold of some key account managers at your local carriers
> and ask them for rates.
>
> With a direct connection to the provider and an average commercial
> broadband connection to the Internet (SDSL 20 Mbit/s or something,
> depending on your country and the connection type to the carriers),
> 100 sm/sec is no problem.
>
> And, by the way, Murphy's SMS law: real traffic is always 20% of what the
> customer tells you and 10% of what the marketing calculates...
>
> Regards
> Falko
>
> On 21.04.2009 at 03:35, Wade Hought wrote:
>
>> Hello all,
>>
>> I'm designing a high-volume SMS application (MT & MO, plus reverse
>> billing) and wondered if anyone had any recommendations about the most
>> optimal hardware connection type you've used to attach to the telecom
>> network(s). I'm not in a position to describe the nature of the
>> traffic as yet, but it is not spam and the system will not be acting
>> as a gateway for anyone else. The traffic could easily reach the
>> 100+ SMS/sec range for 5-8 hour periods in the evening.
>>
>> I understand this is a broad question to start with, but it seemed
>> like a good place to start after seeing how much it would cost to push
>> 100k SMS per day through a commercial gateway such as Clickatell
>> (which I've used on prior projects).
>>
>> Thanks,
>> Wade Hought
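As a sanity check on the bandwidth figures in the thread, here is a rough back-of-envelope calculation. The PDU sizes are assumptions for illustration (a short submit_sm plus TCP/IP overhead), not measured values, but they show why 100 SMS/sec fits comfortably into even a modest link:

```python
# Back-of-envelope: bandwidth needed for 100 SMS/sec over SMPP.
# PDU sizes below are assumed averages, not measurements.
PDU_BYTES = 200          # assumed submit_sm incl. TCP/IP overhead
RESP_BYTES = 60          # assumed submit_sm_resp
RATE = 100               # messages per second

bits_per_sec = (PDU_BYTES + RESP_BYTES) * 8 * RATE
print(f"{bits_per_sec / 1e6:.2f} Mbit/s")  # → 0.21 Mbit/s
```

So even with generous overhead assumptions, 100 SMS/sec needs well under 1 Mbit/s; the link is rarely the bottleneck, the carrier-side throttling and the gateway's own throughput are.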
