David,

I for one am certainly interested in these changes.  Many of them are 
generally useful (e.g., your performance improvements), and also 
applicable to anyone interested in user-model-based benchmarking.  I 
have some more comments below.

[EMAIL PROTECTED] wrote on 04/19/2007 07:37:27 AM:
> Hi,
> 
> We've been doing some research internally at Intel, aiming at using SIPp
> to design an implementation of the ETSI IMS Performance Benchmark
> specification (TS 186 008 -
> http://www.etsi.org/pressroom/Previous/2006/2006_10_tispan.htm).
> 
> We have therefore been looking at a significant set of changes to SIPp
> which I will summarize below. We have a working prototype with most of
> the features implemented but we haven't decided yet whether we want to
> go on with this project and eventually contribute the code changes or if
> we stop after this round of prototyping. As one of the elements on which
> we'll base our decision, I would be very much interested in hearing from
> the people using and developing SIPp what they think about these changes
> and extra features. Would these be interesting for the wider user
> community? Would there be folks volunteering for testing if we
> contribute the changes, mainly to check that things that used to work
> are still working? Given the extent of the changes, if we decide to
> publish them, should we try to merge them back into the main tree or on
> a parallel development branch first? etc.
Of course, I would like Olivier to weigh in on this, but once an unstable 
branch is cut, I think that it makes sense to try merging as many of your 
changes as possible back into that branch.  Some of them may be more 
difficult than others, so a parallel branch may be inevitable; but I think 
we should seek to avoid large parallel branches as much as possible.

The biggest logistical problem I see with this patchset is that there has 
probably been quite a bit of divergence of your tree and the SIPp tree in 
the meantime.  For example, your network changes are going to conflict 
with the network changes that I posted on the list a month ago.  If we had 
more visibility into each other's efforts, hopefully there would have been 
peer review and less duplicated effort.

> The changes and additions were driven by specific requirements from the
> ETSI IMS Benchmark spec which I'll try to quickly summarize here (this
> is my own interpretation of the spec):
> - A mix of session setup, registration, deregistration and instant
> messaging scenarios must be executed concurrently
> - Scenarios must be selected at random with defined probability of
> occurrence for each type of scenario (for example 30% messaging, 50%
> calling, 10% re-registrations, 5% new registrations, 5%
> de-registrations)
> - The number of scenario attempts per time unit must follow a
> statistical Poisson distribution
> - Users involved in the scenario must be selected at random, from the
> set of users that are suitable for the scenario (e.g. a fresh
> registration scenario must pick a user who's not registered yet)
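For what it's worth, the weighted scenario selection described here can be sketched in a few lines. This is only an illustration: the scenario names and the mix are taken from the example percentages above, not from any actual implementation.

```python
import random

# Illustrative scenario mix (from the example percentages in the mail).
SCENARIO_MIX = [
    ("messaging", 0.30),
    ("calling", 0.50),
    ("re_registration", 0.10),
    ("new_registration", 0.05),
    ("de_registration", 0.05),
]

def pick_scenario(rng=random):
    """Select a scenario name according to its configured probability."""
    r = rng.random()
    cumulative = 0.0
    for name, weight in SCENARIO_MIX:
        cumulative += weight
        if r < cumulative:
            return name
    return SCENARIO_MIX[-1][0]  # guard against floating-point round-off
```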
> 
> As a result of these requirements, the test system must ensure that the
> scenario execution is precisely known beforehand. This means for example
> that the test system must be sure that, assuming the system under test
> is operating correctly, when it places a call from a user to another
> one, the call will succeed and follow exactly the expected scenario.
> This requires that the test system pick a user that is registered. This
> is not trivial because, with the mix of scenarios running concurrently,
> users are constantly deregistering and later registering again.
> Also there are variations on the calling scenario where the called party
> rejects the call. In this case, the UAC and UAS sides must have agreed
> on the expected behavior.
> 
> In addition, we were also looking at making the SIPp-based test system
> scalable and highly performing in order to be able to use it for testing
> very powerful systems and systems with large numbers of users without
> requiring dozens of test systems attacking one single System Under Test
> (SUT).
> 
> Here is a list of features that we have already implemented or are
> considering implementing. Many of them are probably only really useful
> in combination with others but I tried to list them separately for the
> sake of clarity.
> 
> 1. Multiple scenarios support per SIPp instance
>    A "benchmark" XML file can list multiple scenarios - client and/or
> server side - to be loaded by a single SIPp instance. Each scenario has
> its own statistics, etc. Keyboard commands can be used to switch between
> scenarios (to see the corresponding data on screen) and potentially to change
> the scenario being executed. But this is really only useful in
> combination with the following features.
> 
> 2. Multiple SIPp instances remotely controlled by a central 'manager'
>    Typical setup includes 1 manager (new piece of code not doing any SIP
> processing) that coordinates multiple SIPp "agent" instances.
> The SIPp instances can be running on the same physical system or on
> different ones. This should allow nice scaling.
> The manager feeds the same set of scenarios to each SIPp instance before
> starting a run and then instructs the SIPp instances to change the
> scenario attempt rate according to a configuration file. Each run
> specifies the occurrence rate of each scenario, as well as the rate of
> scenario attempts, as constant or as increasing steps.
This seems very useful.  Does the manager also handle statistics 
reporting?  Right now, we run about half a dozen instances of SIPp, 
collect all of the results, and have to combine them after the fact. 
Having statistics reporting in the manager would be a big win.
 
> 3. Users and user pools
>    Each SIPp has a set of users that it represents and whose data it
> loads from a data file. Users are placed into pools. New scenario
> commands allow picking a user at random from a specified pool. User
> pools are used to represent user state. For example a calling scenario
> picks a user from the pool of already registered users. The registration
> scenario picks users from the "not registered" pool.
This feature also seems like something we would be interested in.  I have 
contributed a simpler model, in which call objects are essentially 
overloaded as user objects; and a fixed number of users are instantiated 
(the user can then wait for a statistically distributed amount of time to 
take actions like calling).  In the most recent patch set, there is a new 
CSV injection mode, "USER", in which the user can pull a specific line 
from the injection file.
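To make the comparison with my call-object model concrete, here is a minimal sketch of the pool mechanism as I understand it from the description above. The pool names and the API are my own invention, not anything from the actual patch set.

```python
import random

class UserPools:
    """Minimal sketch of per-state user pools; pool names are illustrative."""

    def __init__(self, users):
        # All users start out unregistered, as in a fresh benchmark run.
        self.pools = {"registered": set(), "not_registered": set(users)}

    def pick(self, pool, rng=random):
        """Pick a random user from the given pool, or None if it is empty."""
        candidates = sorted(self.pools[pool])
        return rng.choice(candidates) if candidates else None

    def move(self, user, src, dst):
        """Move a user between pools, e.g. after a successful REGISTER."""
        self.pools[src].discard(user)
        self.pools[dst].add(user)
```

A calling scenario would then `pick("registered")` while a fresh registration scenario would `pick("not_registered")` and `move()` the user on success.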

> 4. Inter-SIPp instances control messaging (a sort of much extended 3PCC)
>    If the scenario requires interactions with a server side counterpart
> (e.g. UAC->SUT->UAS), the client side scenario has a new command to
> allow the SIPp playing the client role to select a partner SIPp instance
> at random (from those that registered with the manager). It then sends a
> custom message (over a separate TCP transport which would normally run
> on a separate LAN from the network under test) to the partner telling it
> the server scenario that it requires as well as some extra data so the
> server side can later identify the first SIP message of the scenario
> when the client side sends it. The server scenario then typically starts
> - even before a first SIP message is received - by selecting a user from
> an appropriate local pool and sending a response to the client side SIPp
> telling it the user URI that can be used for the scenario (i.e. the To
> URI). This allows the client side to really start the SIP part of the
> scenario.
> 
>    This scenario user reservation procedure makes it possible to
> guarantee the execution of the scenario - assuming the SUT operates
> correctly - and also allows the individual scenarios to remain very
> simple as they don't have to accommodate multiple possible paths and
> outcomes.
> 
> 5. Poisson distribution for scenario attempts
>    New scenario attempts can be initiated following a statistical
> Poisson distribution; the user reservation procedure is scheduled so
> that the actual SIP scenario start follows the Poisson distribution.
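For readers unfamiliar with the technique: a Poisson arrival process can be generated by drawing exponentially distributed inter-arrival gaps, roughly like the sketch below (parameter names are mine, not from the patch set).

```python
import random

def poisson_arrival_times(rate_per_sec, duration_sec, rng=random):
    """Generate attempt start times forming a Poisson process with the
    given mean rate: inter-arrival gaps are exponential with mean 1/rate."""
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate_per_sec)
        if t >= duration_sec:
            return times
        times.append(t)
```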
> 
> 6. Timing measurements between different SIPp instances
>    New scenario commands were also added to allow the agent and
> server sides to exchange timing information, and to allow computations
> on timing measurements. This makes it possible, for example, to
> compute the time it took for the INVITE to get from the UAC, through
> the SUT, to the UAS.
> 
>    Scenario XML files can also specify maximum values for timing
> measurements (direct or computed from other measurements or timestamps,
> local or remote). These maximum values are checked at the end of the
> call and the call is marked as failed in case a maximum is exceeded
> (even though the scenario might have reached its end without actual
> timeout or other error). The manager collects the counters of attempted
> and failed scenarios (due to scenario error or exceeded max time) and
> determines when a run or step has failed (i.e. percentage of failed
> scenario attempts went above a threshold - Design Objective in IMS
> Benchmark spec) and stops the run.
The maximum value parameters are also quite interesting and useful from a 
benchmarking perspective.
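If I understand correctly, the check boils down to comparing each measurement (direct or computed) against its configured maximum at the end of the call. A trivial sketch of that, assuming the cross-instance timestamps come from synchronized clocks (a real deployment would need clock synchronization between the SIPp hosts):

```python
def call_passed(measurements, maxima):
    """Return True unless any measured duration exceeds its configured
    maximum; both arguments map measurement name -> milliseconds.
    Measurements without a configured maximum are not checked."""
    return all(value <= maxima.get(name, float("inf"))
               for name, value in measurements.items())
```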
 
> 7. User variables
>    Similar to call variables but are attached to a user and can
> therefore be carried over from one scenario attempt to the next.
> Example: store the Service-Route received during the registration
> scenario and use it when later placing a call or sending an IM.
> 
> 8. Performance improvements
>    Under Linux, epoll() allowed us to achieve much higher timing precision
> in scheduling new calls and also to significantly lower the CPU
> utilization of SIPp. The keyboard handling thread was removed and
> replaced by polling on stdin, and the remote control thread was also
> integrated into the main polling loop so that each SIPp instance runs as
> a single threaded process. One can run multiple instances to take
> advantage of multi core systems.
Excellent.
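Having everything in one epoll-driven loop is a nice simplification.  For the archives, the general shape of such a loop looks something like this sketch (using Python's select.epoll wrapper, so Linux only; the handler registry is my own naming, not SIPp's):

```python
import select

def dispatch_once(epoll, handlers, timeout_s=0.1):
    """One round of a single-threaded event loop: wait for readable
    descriptors (SIP sockets, stdin, a control channel, ...) and
    dispatch each ready one to its registered handler."""
    for fd, events in epoll.poll(timeout_s):
        if events & select.EPOLLIN:
            handlers[fd](fd)
```

The same loop can absorb keyboard input by registering stdin's descriptor, which is presumably how the dedicated keyboard thread was eliminated.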
 
> 9. Scenario statistics
>    A new statistics file is created with a line for each scenario
> attempted, indicating the scenario executed, the start time, the result
> (success or in case of failure, which case of failure) and the timing
> measurements as listed in the scenario XML file.
>
> 10. Redesign
>    Several of the above features required, for ease of implementation,
> significant redesign of the existing SIPp structure. For example, socket
> handling was moved to a specific set of classes.
The network code was certainly in need of a cleanup, as it seems to have 
been designed with UDP in mind, with TCP added later.  I have also 
posted a newer version that encapsulates the socket into a structure 
(sipp_socket) along with all of the other socket-related information.  In 
addition to general cleanup, this fixed some issues with truncated messages 
appearing on the network, dropped default messages, and two SIPp instances 
deadlocking when they both use TCP.

I look forward to seeing your patch set,
Charles

--
Dr. Charles P. Wright
Research Staff Member
Network Server Systems Software
IBM T.J. Watson Research Center
_______________________________________________
Sipp-users mailing list
Sipp-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/sipp-users