Hi all,

Please find a couple of comments inline.

Jiri Kuthan wrote:
At 09:16 04/05/2007, Olaf Bergmann wrote:
Di-Shi Sun wrote:
Hi All,
We have performed a benchmark test on OpenSER V1.1, V1.2 and SER 2.0 to
understand and compare the performance of the three releases in a
simulated production environment.
Nice, thanks for this interesting piece of work.

Summary of the test results:
============================
* The performance of OpenSER V1.2 and SER 2.0 is not materially
different; however, there are two minor differences.
  - SER V2.0 requires less memory.
  - OpenSER V1.2 has less post dial delay.
Could you please comment on the PDD graph? For my understanding, are the
6+ seconds caused by your failure scenarios? I wonder why the
SER graph seems to be constant while the OpenSER one looks exponential?

I have been struggling with the measurement too (actually, I'm even missing a PDD definition in the document). In a private conversation with
the authors I learned that the test scenario actually uses randomized-
order forking, with some of the destinations being unavailable. That explains why SER has a constant failure rate, but it does not explain why
OpenSER does better initially (perhaps blacklisting is turned on
by default in OpenSER?) and then grows exponentially later.
Yes, blacklists are turned on by default in OpenSER, and they are responsible for the improvement in the PDD values - my understanding (based on Di-Shi's latest email) is that PDD is actually the delay between the INVITE and the final response (200 OK), including all the sequential tries.

I'm quite happy that the blacklist feature proved to be very useful in real cases :).

Anyhow, what the PDD graph does not show is the proxy reaction time (the delay between the INVITE and the first 100). This is important as it is not affected by the blacklist feature - the blacklists are only checked when the request(s) is/are about to be sent out, but the first 100 Trying is automatically sent by TM before that point (just after the transaction is created).
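To illustrate the distinction, here is a minimal sketch of sequential forking with hypothetical timer values and function names (not actual SER/OpenSER internals): each unavailable destination adds roughly one branch timeout to the PDD, while the reaction time stays at the small overhead of creating the transaction and emitting the 100 Trying. Blacklisting skips destinations already known to be dead, so it shrinks the PDD but leaves the reaction time untouched:

```python
# Illustrative model of PDD vs. proxy reaction time under sequential
# forking. Timer values and function names are hypothetical, not taken
# from the SER/OpenSER sources.

BRANCH_TIMEOUT = 2.0   # seconds waited on a dead branch before re-routing
PROXY_OVERHEAD = 0.02  # transaction setup + sending the 100 Trying
ANSWER_DELAY = 0.1     # time for the live destination to send the 200 OK

def reaction_time():
    """INVITE -> first 100: sent by TM right after the transaction
    is created, before any destination (or blacklist) is consulted."""
    return PROXY_OVERHEAD

def pdd(destinations, blacklist=frozenset()):
    """INVITE -> final 200 OK, trying destinations in order.
    Blacklisted destinations are skipped without waiting."""
    delay = PROXY_OVERHEAD
    for dst, alive in destinations:
        if dst in blacklist:
            continue             # skip without spending the branch timeout
        if alive:
            return delay + ANSWER_DELAY
        delay += BRANCH_TIMEOUT  # dead branch: wait, then try the next one
    return delay                 # no destination answered

dsts = [("a", False), ("b", False), ("c", True)]
print(pdd(dsts))                        # two dead branches are waited on
print(pdd(dsts, blacklist={"a", "b"}))  # blacklist skips the dead branches
print(reaction_time())                  # unaffected either way
```

The point of the sketch is that the blacklist only changes the routing loop, never the point where the 100 Trying is emitted, which is why the PDD graphs and the reaction-time buckets below tell different stories.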

Here are some interesting data extracted from the pdf:
INVITE-to-first-100 delays at the 220 cps call rate:

OpenSER 1.2
                < 50 ms     94507
                < 100 ms    16731
                < 200 ms    14336
                < 500 ms     3410
                < 1000 ms    2979
                < 2000 ms      30
                < 5000 ms       7
                > 5000 ms       0

SER 2.0
                < 50 ms     26708
                < 100 ms    22578
                < 200 ms    26292
                < 500 ms    15328
                < 1000 ms   17064
                < 2000 ms   10088
                < 5000 ms    5725
                > 5000 ms    5625
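The bucketed counts above are easier to compare as cumulative percentages; a quick sketch (the counts are copied verbatim from the tables above):

```python
# Cumulative distribution of INVITE -> first 100 delays, using the
# bucket counts from the tables above (220 cps run).
buckets = ["<50ms", "<100ms", "<200ms", "<500ms",
           "<1000ms", "<2000ms", "<5000ms", ">5000ms"]
openser = [94507, 16731, 14336, 3410, 2979, 30, 7, 0]
ser = [26708, 22578, 26292, 15328, 17064, 10088, 5725, 5625]

def cumulative_pct(counts):
    """Percentage of calls at or below each bucket's upper bound."""
    total = sum(counts)
    running, out = 0, []
    for c in counts:
        running += c
        out.append(100.0 * running / total)
    return out

for name, counts in (("OpenSER 1.2", openser), ("SER 2.0", ser)):
    pcts = cumulative_pct(counts)
    print(name, " ".join(f"{b}:{p:.1f}%" for b, p in zip(buckets, pcts)))
```

On these numbers, roughly 84% of OpenSER's first 100s arrive within 100 ms versus roughly 38% for SER, which makes the degradation at the load limit easy to see at a glance.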

I chose this example because the most important result of the tests (generally speaking) is to see the service degradation when the upper load limit is reached, and it is good that the measurements include this case as well. See also the call completion rate for the 220 cps measurement.

Anyhow, my opinion is that we should look at the overall performance and not only at the performance of a specific component. In real life no setup uses only one particular component; real deployments are a mixture of all the proxy features, so the interaction and the overall result are more important (like society versus the individual :D ).
A few more results would be good in this context too (a graph showing
the actual delay, as opposed to the percentage exceeding a threshold -- which
is fine for the 'big picture' but hard to disaggregate when tracing
what's actually going on).
Yes - I think some of the graphs are quite cryptic, as there is not much information/detail about what is in them. They can be read by OpenSER & SER gurus, but ordinary people have no chance.

Di-Shi Sun, thanks for the work and your time!

Regards,
Bogdan


_______________________________________________
Devel mailing list
Devel@openser.org
http://openser.org/cgi-bin/mailman/listinfo/devel
