Your performance tests, while worth looking at as a general reference, are
probably close to useless as benchmarks for other people's platforms.

1. Variations in hardware can alter your figures greatly. Differences in
CPU, memory and/or HDD speeds, as well as the use of RAID and of
SATA/SCSI/Fibre interfaces (among many other factors), will obviously alter
the results, and it's _very_ difficult to predict the amount of variation,
since it depends greatly on how you actually use Kannel.

2. Variations at the OS level also play a role: different Unix flavors,
different disk partitioning schemes, ramdisks and a whole lot of other
settings that can be configured in a million different ways will also alter
the results, in ways that are very hard to predict.

3. Variations in Kannel configuration: the type of store, whether or not a
DB is used for DLRs (and which DB engine), whether the DB engine and the
application layer are local or remote, and a thousand other possible
factors. Also whether you compiled Kannel as 32- or 64-bit and which
compiler optimizations you used.

4. Last but not least, the application layer: you don't specify what you're
doing with your messages once they're received, and this is where your
benchmarks become unrealistic. The response time of the application is KEY
and can pose a severe bottleneck for Kannel, affecting the "raw
performance" in very unpredictable ways: an application that is slow to
respond will hold threads open, increasing the "iowait" and "load" of the
Kannel box accordingly. That degrades performance a great deal.
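To make point 4 concrete, here's a back-of-the-envelope sketch (my own illustration, not figures measured on any Kannel install): if each message holds a delivery thread for the application's full response time, Little's law bounds the sustained throughput at threads / response time, so a slower application lowers the ceiling no matter how fast the Kannel box is.

```python
# Illustration only (assumed numbers, not from any real benchmark):
# upper bound on messages/s when each message occupies one thread
# for the application's whole response time (Little's law).

def max_throughput(concurrent_threads: int, app_response_seconds: float) -> float:
    """Ceiling on msg/s = concurrency / per-message response time."""
    return concurrent_threads / app_response_seconds

# 100 threads, application answers in 50 ms -> 2000 msg/s ceiling
print(max_throughput(100, 0.05))
# Same box, application slows to 500 ms -> ceiling drops to 200 msg/s
print(max_throughput(100, 0.5))
```

The point of the sketch: a tenfold change in application latency alone moves the ceiling by a factor of ten, dwarfing most hardware differences.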

So, since it's near impossible to predict all those factors, and the
application layer can play a key role in determining the final performance,
I wouldn't take your (or anyone else's) benchmarks as a strict reference
for what Kannel is capable of, performance-wise, on my particular platform.

I guess a proper statement would be "my Kannel setup running on OS A is
capable of handling B MO/s and C MT/s using this configuration {...}", but
again, external factors like the application's speed, SMSC throttling and a
thousand other things can make very similar setups perform in very
dissimilar ways.

A possible approach to providing a more relevant benchmark would be to
standardize a series of tests with some fixed variables (especially at the
application layer). That would at least give a measure to compare between
installs, but it still wouldn't predict anyone's real-life scenario, just
as regular computer benchmarks are unable to predict how fast your computer
will be for the work you'll actually use it for.
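As a sketch of what "fixing the application layer" could mean in practice (this is my own hypothetical stub, not any Kannel standard; the port is arbitrary), a constant-behaviour HTTP responder could stand in for the real application during such standardized runs, so that only the Kannel/OS/hardware variables differ between installs:

```python
# Hypothetical fixed "application layer" for standardized benchmarks:
# always answers 200 OK immediately, removing application response
# time as a variable between test runs.
from http.server import BaseHTTPRequestHandler, HTTPServer

class FixedReplyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"OK"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Suppress per-request logging so it doesn't skew the benchmark.
        pass

if __name__ == "__main__":
    # Port 8080 is an arbitrary choice for the example.
    HTTPServer(("127.0.0.1", 8080), FixedReplyHandler).serve_forever()
```

Pointing every tested install's sms-service at a stub like this would at least make the numbers comparable with each other, even if still not predictive of production.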

In other words, don't take other people's benchmarks too seriously as a
predictor of what your own system will be capable of.

Regards,

Alex

2010/8/14 Nikos Balkanas <[email protected]>

> Nope. The averages between the various servers that have been tested are
> approximately the ones cited. The standard deviation between optimized
> servers/OSes is not that large (+/-50). This includes various
> conditions/architectures, each one with its own results. I just posted the
> most common ones. No optimized filesystem was used anywhere (spool, ext3 on
> Linux, UFS on Solaris). The tests are very reproducible and discriminating
> with respect to the DB, and possibly the filesystem used for storage (not
> tested yet).
>
> If in doubt run your own tests.
>
>
> Nikos
> ----- Original Message ----- From: "Juan Nin" <[email protected]>
> To: <[email protected]>
> Sent: Saturday, August 14, 2010 8:23 AM
>
> Subject: Re: Kannel profermance
>
>
>> So you're saying that on any server and/or architecture the results
>> will be the same?
>> Doesn't seem very reasonable...
>>
>>
>> 2010/8/14 Nikos Balkanas <[email protected]>:
>>
>>> Actually, you have missed a couple more emails. On the fakesmpp
>>> submission I also posted results from a low-end Solaris 10 64-bit box,
>>> very similar to the results posted from the Linux server. The averages
>>> seem pretty solid. So, contrary to your beliefs, it is not giving out
>>> the wrong impression.
>>>
>>> BR,
>>> Nikos
>>> ----- Original Message ----- From: "Juan Nin" <[email protected]>
>>> To: <[email protected]>
>>> Sent: Saturday, August 14, 2010 4:33 AM
>>> Subject: Re: Kannel profermance
>>>
>>>
>>>> Ok, just saw on another thread where you got those values from, but
>>>> again, that's very system specific.
>>>>
>>>> I guess your point was just to say that Kannel was not the issue for
>>>> his bottleneck, but saying "It can handle ~1000 MO/s, 750 MT/s
>>>> (internal DLRs) or 450 MT/s (DB DLRs)" may give the wrong impression
>>>> that that's the maximum it can handle, or that it can always handle
>>>> that load, while on low end servers it may not.
>>>>
>>>> So saying something like that in that way can confuse new users, IMHO.
>>>>
>>>>
>>>> On Fri, Aug 13, 2010 at 10:26 PM, Juan Nin <[email protected]> wrote:
>>>>
>>>>>
>>>>> 2010/8/13 Nikos Balkanas <[email protected]>:
>>>>>
>>>>>>
>>>>>> It is unlikely that kannel is your bottleneck. It can handle
>>>>>> ~1000 MO/s, 750 MT/s (internal DLRs) or 450 MT/s (DB DLRs).
>>>>>>
>>>>>
>>>>>
>>>>> Just curious to know where you get those values from...
>>>>> What Kannel supports depends on your hardware and architecture
>>>>>
>>>>>
>>>>
>>>>
>>>> --
>>>> Juan Nin
>>>> 3Cinteractive / Mobilizing Great Brands
>>>> http://www.3cinteractive.com
>>>>
>>>>
>>>
>>>
>>
>>
>> --
>> Juan Nin
>> 3Cinteractive / Mobilizing Great Brands
>> http://www.3cinteractive.com
>>
>>
>
>
