Di-Shi,
First of all, thank you for such a thorough performance test and
especially for the detailed documentation. As Jiri points out, we
encourage performance tests, of SER standalone or in comparison with
alternatives, as this both allows developers to get more feedback on
how their code performs, and gives users of SER the performance tests
they really need in order to assess their own setups, make design
decisions, and identify where bottlenecks may turn up.
We have an iptel.org page for performance:
http://www.iptel.org/ser/doc/performance
If you allow, I would like to add a link to the test (or you can do it
yourself; the page can be edited).
Some comments on the test:
- I'm still not sure that I understand the Post-Dial Delay. You state
that 20% of the calls will complete on the fourth attempt. So, this
means that three INVITEs will time out after 2,000 ms each (as SER 2.0
now has a higher timer resolution, see
http://www.iptel.org/how_the_new_timer_framework_works), which is 6,000
ms or 6 s in total. Wouldn't you then expect 20% of the calls to
complete in more than 6 s? I may have completely misunderstood this,
but would calls completing in less than 6 s do so due to the
imprecision of the old 0.9 timers? So, to understand what actually
happens, wouldn't a scatter diagram (instead of the groupings) be
better? The arithmetic I have in mind is sketched below.
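  A quick back-of-the-envelope, assuming the three dead destinations
  are tried serially and each eats a full fr_timer before failover:

      3 timed-out branches x fr_timer (2,000 ms) = 6,000 ms
      + the 404 branch (fails over almost immediately)
      + setup time on the good device (small)
      => worst-case PDD just above 6 s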
- Do you have any idea what happened when CPU > 90% and SER's call
completion dropped? It looks strange to me. I also noticed that
debugging was turned on (which, btw, we have discussed turning off when
we release; a sketch of the change is below).
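  For anyone re-running the SER numbers without debugging: from memory,
  it is a Makefile.defs edit along these lines (a sketch only; the exact
  lines differ between versions, so check your own tree):

      # Makefile.defs (sketch): drop the debugging allocator define...
      #    -DDBG_QM_MALLOC
      # ...and define the fast allocator instead:
      DEFS+= -DF_MALLOC

  followed by a full rebuild.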
- Kudos to all developers for the near-linear scaling of calls per
second per CPU! (which we knew of course, but I still like to point it
out :-)
- Allow me to quote Jim Dalton's (TransNexus) blog post about the test
(http://transnexus.blogspot.com/2007/04/openser-performance-benchmark.html):
"If we had used all four CPU cores we expect the results would have been
800 calls per second. To be conservative, we would recommend service
providers to plan on maximum CPU utilization of about 60%. This would
establish the OpenSER planning guideline of 500 calls per second on a
server with two, dual core Xeon CPUs.
If you assume 15% of a service provider's traffic occurs during the busy
hour, a 50% Answer Seizure Ratio (ASR) and a 3 minute average call
duration, then 500 calls per second equates to 540 million minutes of
VoIP traffic per month! We think this is impressive for an open source
SIP proxy running on a server with a retail price of $2,967."
(of course, when he refers to openser here, I assume he really means *SER)
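(And the arithmetic in the quote does check out, assuming a 30-day
month:
    500 calls/s x 3,600 s            = 1,800,000 attempts in the busy hour
    x 50% ASR                        =   900,000 answered calls/hour
    / 15% (busy-hour share of a day) = 6,000,000 answered calls/day
    x 3 min average duration         = 18,000,000 minutes/day
    x 30 days                        = 540,000,000 minutes/month)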
g-)
[EMAIL PROTECTED] wrote:
Hi Olaf and Jiri,
Thank you for your comments about the test results.
As we mentioned, this test was designed to understand the performance
of OpenSER/SER in production environments. Some of the factors were
randomized. The PDD is an indirect measure and not very precise, but
some details may be useful for understanding the PDD graph.
For every call,
1. The source is a SIPp client.
2. OpenSER/SER receives the INVITE message from the SIPp client and
requests
routing info from a set of OSP servers.
3. There are 5 destinations configured on the OSP servers.
a. 3 unavailable devices, each unavailable for a different reason. All
of them cause an OpenSER/SER fr_timer timeout, which we set to 2 sec.
b. 1 device that rejects the call and replies with a 404.
c. 1 good device, a SIPp server.
OpenSER/SER gets these 5 destinations in random order. In the worst
case, OpenSER/SER tries the SIPp server as the last destination, so the
PDD should be 6 sec. For the OpenSER 1.1/1.2 tests, it is clear that
the PDD depends on the load, which is reasonable. For SER, the
explanation is that the PDD is just a little longer than 6 sec in the
worst case. The 6 sec threshold is not a good value; it should have
been set to 6.1 sec. Unfortunately, we did not realize this until we
had finished the SER test.
You can find the OpenSER/SER configurations we used in the test under
module/osp/etc/sample-osp-openser.cfg and
module/osp/etc/sample-osp-ser.cfg. We only changed fr_timer to 2 sec
and set the OSP server IPs and the local device IP (see the sketch
below).
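For reference, the changed lines look roughly like this (parameter
names as in the sample configs; the addresses are placeholders):

    # fail over to the next destination after 2 seconds
    modparam("tm", "fr_timer", 2)

    # OSP server to query for routes (placeholder URL)
    modparam("osp", "sp1_uri", "http://osp1.example.com:1080/osp")

    # IP of this proxy, as reported to the OSP servers (placeholder)
    modparam("osp", "device_ip", "192.0.2.10")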
Thanks
Di-Shi Sun.
----- Original Message -----
From: "Jiri Kuthan" <[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>>
To: "Olaf Bergmann" <[EMAIL PROTECTED]
<mailto:[EMAIL PROTECTED]>>; "Di-Shi Sun"
<[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>>
Cc: <devel@openser.org <mailto:devel@openser.org>>;
<[EMAIL PROTECTED] <mailto:[EMAIL PROTECTED]>>
Sent: Friday, May 04, 2007 6:29 PM
Subject: Re: [Serdev] OpenSER/SER with OSP performance test results
> At 09:16 04/05/2007, Olaf Bergmann wrote:
> >Di-Shi Sun wrote:
> >> Hi All,
> >>
> >> We have performed a benchmark test on OpenSER V1.1, V1.2 and SER 2.0
> >> to understand and compare the performance of the three releases in a
> >> simulated production environment.
> >
> >Nice, thanks for this interesting piece of work.
> >
> >> Summary of the test results:
> >> ============================
> >> * The performance of OpenSER V1.2 and SER 2.0 is not materially
> >> different; however, there are two minor differences.
> >> - SER V2.0 requires less memory.
> >> - OpenSER V1.2 has less post dial delay.
> >
> >Could you please comment on the PDD graph? As I understand it, the
> >6+ seconds are caused by your failure scenarios? I wonder why the
> >SER graph seems to be constant while the OpenSER one looks
> >exponential?
>
> I have been struggling with the measurement too (actually I'm even
> missing a PDD definition in the document). In a private conversation
> with the authors I learned that the test scenario is actually about
> randomized-order forking, with some of the destinations being
> unavailable. That explains why SER has a constant failure rate, but it
> does not explain why openser does better initially (perhaps
> blacklisting is turned on by default in openser?) and then goes
> exponential later.
>
> A few more results would be good in this context too (a graph showing
> the actual delay as opposed to the percentage exceeding a threshold --
> which is fine for the 'big picture' but hard to disaggregate for
> tracing what's actually going on).
>
> Another thing that came out of a private chat with the authors is that
> the ser measurements were taken in SER's debugging mode (which is the
> default on CVS): SER is compiled with PKG_MALLOC and DBG_QM_MALLOC,
> while openser is compiled without them (with F_MALLOC).
>
> Otherwise, for the sake of completeness, I would enjoy seeing the
> (open)ser config files attached (among other things, they should
> reveal whether blacklisting is turned on, as that should have a
> dramatic impact on the results), the call flows described, and the
> scenario described (I mean details about this randomized-order
> forking).
>
> Apart from this piece of critique, I think it is a great contribution,
> and with some work along the lines I have suggested it will be an
> excellent document.
>
> -jiri
>
>
>
> --
> Jiri Kuthan                            http://iptel.org/~jiri/
>
>
>