Hi,

We've been doing some research internally at Intel, aimed at using SIPp
as the basis for an implementation of the ETSI IMS Performance Benchmark
specification (TS 186 008 -
http://www.etsi.org/pressroom/Previous/2006/2006_10_tispan.htm).

We have therefore been looking at a significant set of changes to SIPp,
which I will summarize below. We have a working prototype with most of
the features implemented, but we haven't yet decided whether to go on
with this project and eventually contribute the code changes, or to
stop after this round of prototyping. As one input to that decision, I
would be very interested in hearing what the people using and
developing SIPp think about these changes and extra features. Would
they be interesting for the wider user community? Would anyone
volunteer to test the changes if we contribute them, mainly to check
that things that used to work still work? Given the extent of the
changes, if we decide to publish them, should we try to merge them back
into the main tree, or start on a parallel development branch first?
And so on.

The changes and additions were driven by specific requirements from the
ETSI IMS Benchmark spec which I'll try to quickly summarize here (this
is my own interpretation of the spec):
- A mix of session setup, registration, deregistration and instant
messaging scenarios must be executed concurrently
- Scenarios must be selected at random with defined probability of
occurrence for each type of scenario (for example 30% messaging, 50%
calling, 10% re-registrations, 5% new registrations, 5%
de-registrations)
- The number of scenario attempts per time unit must follow a
statistical Poisson distribution
- Users involved in the scenario must be selected at random, from the
set of users that are suitable for the scenario (e.g. a fresh
registration scenario must pick a user who's not registered yet)
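To make the first two requirements concrete, here is a minimal Python sketch of weighted random scenario selection using the example mix above. The names and the `pick_scenario` helper are purely illustrative, not part of SIPp:

```python
import random

# Example scenario mix from the spec interpretation above
# (scenario name -> probability of occurrence, summing to 1.0).
SCENARIO_MIX = {
    "messaging": 0.30,
    "calling": 0.50,
    "re-registration": 0.10,
    "registration": 0.05,
    "de-registration": 0.05,
}

def pick_scenario(rng=random):
    """Pick the next scenario type at random according to the mix."""
    names = list(SCENARIO_MIX)
    weights = list(SCENARIO_MIX.values())
    return rng.choices(names, weights=weights, k=1)[0]
```

Over a long run, each scenario type then occurs with approximately its configured probability.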

As a result of these requirements, the test system must know precisely
beforehand how each scenario will execute. For example, when it places
a call from one user to another, the test system must be sure that,
assuming the system under test is operating correctly, the call will
succeed and follow exactly the expected scenario. This requires the
test system to pick a user that is registered, which is not trivial:
because of the mix of scenarios running concurrently, users are
deregistering and registering again all the time. There are also
variations on the calling scenario where the called party rejects the
call; in this case, the UAC and UAS sides must have agreed on the
expected behavior beforehand.

In addition, we were also looking at making the SIPp-based test system
scalable and high-performing, so that it can be used to test very
powerful systems, and systems with large numbers of users, without
requiring dozens of test systems attacking one single System Under Test
(SUT).

Here is a list of features that we have already implemented or are
considering implementing. Many of them are probably only really useful
in combination with others but I tried to list them separately for the
sake of clarity.

1. Multiple scenarios support per SIPp instance
   A "benchmark" XML file can list multiple scenarios - client and/or
server side - to be loaded by a single SIPp instance. Each scenario has
its own statistics, etc. Keyboard commands can be used to switch between
scenarios (to see the corresponding data on screen) and potentially to
change the scenario being executed. But this is really only useful in
combination with the following features.

2. Multiple SIPp instances remotely controlled by a central 'manager'
   A typical setup includes one manager (a new piece of code that does
no SIP processing itself) coordinating multiple SIPp "agent" instances.
The SIPp instances can run on the same physical system or on different
ones, which should allow the load to scale nicely.
The manager feeds the same set of scenarios to each SIPp instance before
starting a run and then instructs the SIPp instances to change the
scenario attempt rate according to a configuration file. Each run
specifies the occurrence rate of each scenario, as well as the rate of
scenario attempts, as constant or as increasing steps.
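The "constant or increasing steps" part of a run configuration could be expanded by the manager into a per-step attempt rate roughly as in this sketch (function and parameter names are invented for illustration):

```python
def rate_steps(initial_rate, increment, num_steps):
    """Expand a stepped-load configuration into the scenario attempt
    rate (attempts per second) to use for each successive step.
    An increment of 0 gives a constant-rate run."""
    return [initial_rate + i * increment for i in range(num_steps)]
```

For example, `rate_steps(10, 5, 4)` describes a run that steps through 10, 15, 20 and 25 attempts per second.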

3. Users and user pools
   Each SIPp instance has a set of users that it represents and whose data it
loads from a data file. Users are placed into pools. New scenario
commands allow picking a user at random from a specified pool. User
pools are used to represent user state. For example a calling scenario
picks a user from the pool of already registered users. The registration
scenario picks users from the "not registered" pool.
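As a rough illustration of the pool mechanics (a Python sketch with invented names, not SIPp's actual data structures), a user is reserved out of its pool while a scenario runs, so two concurrent scenarios can never pick the same user:

```python
import random

class UserPools:
    """Track users by state so a scenario can pick a suitable user
    at random (e.g. a call needs an already-registered user)."""

    def __init__(self, users):
        self.pools = {"not_registered": set(users), "registered": set()}

    def pick(self, pool, rng=random):
        """Reserve a random user from the given pool; the user is
        removed so concurrent scenarios cannot pick it twice."""
        user = rng.choice(sorted(self.pools[pool]))
        self.pools[pool].remove(user)
        return user

    def release(self, user, pool):
        """Return a user to a pool once its scenario completes;
        the target pool reflects the user's new state."""
        self.pools[pool].add(user)
```

A registration scenario would `pick` from "not_registered" and, on success, `release` the user into "registered", where a later calling scenario can find it.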

4. Inter-SIPp instances control messaging (a sort of much extended 3PCC)
   If the scenario requires interaction with a server-side counterpart
(e.g. UAC->SUT->UAS), a new command in the client-side scenario allows
the SIPp playing the client role to select a partner SIPp instance at
random (from those that registered with the manager). It then sends a
custom message to the partner - over a separate TCP transport, which
would normally run on a LAN separate from the network under test -
telling it which server scenario is required, along with some extra
data so the server side can later identify the first SIP message of the
scenario when the client side sends it. The server scenario then
typically starts - even before the first SIP message is received - by
selecting a user from an appropriate local pool and sending a response
to the client-side SIPp with the user URI to be used for the scenario
(i.e. the To URI). This allows the client side to actually start the
SIP part of the scenario.

   This scenario user reservation procedure makes it possible to
guarantee the execution of the scenario - assuming the SUT operates
correctly - and also allows the individual scenarios to remain very
simple as they don't have to accommodate multiple possible paths and
outcomes.

5. Poisson distribution for scenario attempts
   New scenario attempts can be initiated following a statistical
Poisson distribution; the user reservation procedure is scheduled so
that the actual SIP scenario start follows the Poisson distribution.
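A Poisson process can be generated by drawing exponentially distributed inter-arrival times with mean 1/rate; a minimal sketch (the function name is illustrative):

```python
import random

def poisson_arrival_times(rate, duration, rng=random):
    """Generate scenario start times over [0, duration) seconds so
    that attempts form a Poisson process with the given mean rate
    (attempts/second): inter-arrival times are exponentially
    distributed with mean 1/rate."""
    times, t = [], 0.0
    while True:
        t += rng.expovariate(rate)
        if t >= duration:
            return times
        times.append(t)
```

Over a long enough run, the number of attempts per interval then follows the Poisson distribution required by the spec.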

6. Timing measurements between different SIPp instances
   New scenario commands were also added to allow the agent and server
sides to exchange timing information, and to allow computations on
timing measurements. This makes it possible, for example, to compute
the time it took for the INVITE to travel from the UAC, through the
SUT, to the UAS.

   Scenario XML files can also specify maximum values for timing
measurements (direct or computed from other measurements or timestamps,
local or remote). These maximum values are checked at the end of the
call, and the call is marked as failed if a maximum is exceeded (even
though the scenario might have reached its end without an actual
timeout or other error). The manager collects the counters of attempted
and failed scenarios (due to scenario error or exceeded maximum time),
determines when a run or step has failed (i.e. the percentage of failed
scenario attempts went above a threshold - the Design Objective in the
IMS Benchmark spec), and stops the run.
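The two checks described above reduce to very little logic, sketched here in Python (function names, the measurement key and the 1% threshold are illustrative, not taken from the spec):

```python
def scenario_failed(measurements, maxima):
    """Return True if any timing measurement (ms) exceeded its
    configured maximum, even if the scenario itself completed."""
    return any(measurements[name] > limit for name, limit in maxima.items())

def run_failed(attempted, failed, design_objective=0.01):
    """A run (or step) fails once the fraction of failed scenario
    attempts exceeds the Design Objective threshold."""
    return attempted > 0 and failed / attempted > design_objective

# e.g. INVITE traversal time = UAS receive timestamp - UAC send timestamp
measurements = {"invite_uac_to_uas": 45.0}  # milliseconds
```

So a call with an INVITE traversal time of 45 ms passes a 50 ms maximum but fails a 40 ms one, and a step with 25 failures out of 1000 attempts exceeds a 1% Design Objective.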

7. User variables
   These are similar to call variables, but are attached to a user and
can therefore be carried over from one scenario attempt to the next.
Example: store the Service-Route received during the registration
scenario and use it later when placing a call or sending an IM.

8. Performance improvements
   Under Linux, epoll() allowed us to reach much higher timing
precision when scheduling new calls, and also significantly lowered
SIPp's CPU utilization. The keyboard-handling thread was removed and
replaced by polling on stdin, and the remote-control thread was also
integrated into the main polling loop, so that each SIPp instance runs
as a single-threaded process. One can run multiple instances to take
advantage of multi-core systems.
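For readers unfamiliar with the pattern: the single-threaded design means one readiness loop watches every descriptor (sockets, stdin, the control link). A cross-platform Python sketch of the idea, using a socketpair to stand in for SIP traffic (on Linux, `selectors.DefaultSelector` is backed by epoll, which is what gives the precision and CPU gains mentioned above):

```python
import selectors
import socket

# Single-threaded polling loop: one selector watches every socket.
sel = selectors.DefaultSelector()
a, b = socket.socketpair()
a.setblocking(False)
b.setblocking(False)
sel.register(b, selectors.EVENT_READ)

a.sendall(b"INVITE")  # stands in for incoming SIP traffic
for key, _events in sel.select(timeout=1.0):
    data = key.fileobj.recv(4096)
    # ...dispatch the data to the right scenario/call here...
sel.unregister(b)
```

Because nothing blocks except the `select()` call itself, call scheduling timers can be honored with much finer granularity than with per-thread blocking I/O.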

9. Scenario statistics
   A new statistics file is created with a line for each scenario
attempted, indicating the scenario executed, the start time, the result
(success or, in case of failure, which kind of failure) and the timing
measurements listed in the scenario XML file.

10. Redesign
   Several of the above features required, for ease of implementation,
significant redesign of the existing SIPp structure. For example, socket
handling was moved to a specific set of classes.

Thanks in advance for any feedback on this.

-David

_______________________________________________
Sipp-users mailing list
Sipp-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/sipp-users
