Hi, I used two subscribers, but I got the same behavior with 10000.
Thanks

> Eleanor Merry <[email protected]>
>
> Hi,
>
> I think this is a SIPp quirk. The one ‘call’ is the run of SIPp – the
> scenario of REGISTERs and INVITEs in the call_load2.xml script. As the test
> is terminated, this one call never finishes – so both successful and failed
> calls are 0.
>
> How many subscribers were you using in your stress test?
>
> Ellie
>
> From: Doctor Mescaline [mailto:[email protected]]
> Sent: 13 January 2015 08:53
> To: Eleanor Merry
> Cc: [email protected]
> Subject: Re: [Clearwater] Problem with Stress testing
>
> Hi,
>
> If you look at the statistics under the log folder, you can see that there
> are no successful calls and no failed calls, while the total number of calls
> is greater than zero. Do you think it’s just a problem in the computation of
> the statistics?
>
> Thanks
>
> On 12 Jan 2015, at 18:48, Eleanor Merry <[email protected]>
> wrote:
>
> Hi,
>
> Thanks for sending the logs over. They look OK – I’m not seeing any
> timeouts/unexpected errors (in the sipp.out file, you can see the table of
> SIP messages sent during an instance of the stress test, how many messages
> ended up being retransmitted/timed out, and whether any of the messages were
> unexpected).
>
> I would guess therefore that your system is running correctly, and the
> timeout errors you were seeing before were due to running a greater load –
> it’s expected that some messages will time out when the system is heavily
> loaded/overloaded.
>
> You’ll also see timeouts when you start running at a high load, as our load
> monitor has a slow start.
>
> As background, the load monitor will admit a request if there are available
> tokens in its token bucket (at the cost of one token). It also replenishes
> the tokens based on the current token rate and how long it has been since
> the bucket was last replenished. Every 20 requests, it takes the latency of
> the requests (as a smoothed mean) and compares this to the target latency
> (100ms). If it is less, the token rate is increased; how much it is
> increased by depends on how far the average latency is below the target
> latency.
>
> This means that if you’re going from a cold start (as you do with the stress
> tests), there is a slow-start period while the load monitor ramps up the
> token replenishment rate (there is a limit on how fast the token rate can
> rise), and so a higher rate of timeouts to start with.
>
> Ellie
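(As an aside, here is a minimal Python sketch of the kind of adaptive token bucket Ellie describes above. This is not the actual Sprout load monitor; the class name, starting rate, bucket depth and the cap on how quickly the rate may rise are all illustrative assumptions.)

import time

class AdaptiveTokenBucket:
    """Illustrative load monitor: a token bucket whose refill rate adapts
    to measured request latency. Names and numbers are assumptions, not
    the real Sprout implementation."""

    def __init__(self, initial_rate=10.0, bucket_depth=20.0,
                 target_latency_ms=100.0, adjust_every=20):
        self.rate = initial_rate              # tokens replenished per second
        self.bucket_depth = bucket_depth
        self.tokens = bucket_depth
        self.target_latency_ms = target_latency_ms
        self.adjust_every = adjust_every
        self.smoothed_latency_ms = None
        self.requests_seen = 0
        self.last_refill = time.monotonic()

    def _refill(self):
        # Top the bucket up based on the current rate and the time elapsed
        # since the bucket was last replenished.
        now = time.monotonic()
        self.tokens = min(self.bucket_depth,
                          self.tokens + self.rate * (now - self.last_refill))
        self.last_refill = now

    def admit(self):
        # Admit the request if a token is available, at the cost of one token.
        self._refill()
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False                          # overloaded: reject the request

    def record_latency(self, latency_ms, alpha=0.2):
        # Keep a smoothed (exponentially weighted) mean of request latency.
        if self.smoothed_latency_ms is None:
            self.smoothed_latency_ms = latency_ms
        else:
            self.smoothed_latency_ms = (alpha * latency_ms
                                        + (1 - alpha) * self.smoothed_latency_ms)
        self.requests_seen += 1
        # Every N requests, compare the smoothed latency with the target; if
        # we are under target, raise the token rate in proportion to the
        # headroom, capped so it can only rise so fast (which is what gives
        # the slow start from cold).
        if self.requests_seen % self.adjust_every == 0:
            if self.smoothed_latency_ms < self.target_latency_ms:
                headroom = 1.0 - self.smoothed_latency_ms / self.target_latency_ms
                self.rate *= min(1.0 + headroom, 1.2)

(Because the rate starts low and can only grow by a bounded factor every batch of requests, the earliest requests in a cold-start stress run are the ones most likely to be rejected and show up as timeouts in SIPp.)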
> From: Doctor Mescaline [mailto:[email protected]]
> Sent: 12 January 2015 14:32
> To: Eleanor Merry
> Cc: [email protected]
> Subject: Re: [Clearwater] Problem with Stress testing
>
> Hi,
>
> I have attached the logs from the stress run.
>
> Thanks for your help
>
> On 8 Jan 2015, at 20:16, Eleanor Merry <[email protected]>
> wrote:
>
> Hi,
>
> The REGISTERs are reaching Bono/Sprout – this rules out connectivity
> problems.
>
> I don’t see any errors in the logs you’ve shown, though – they all look like
> part of a registration attempt. In a successful registration, we expect the
> client to send a REGISTER with no authentication credentials. This gets
> rejected with a 401 (which contains a WWW-Authenticate header). The client
> then resends the REGISTER with the appropriate authorization, this is
> accepted, and a 200 OK is returned.
>
> Can you send me more details about what errors you’re seeing when you run
> the stress tests? I’d like the logs from the stress run (in
> /var/log/clearwater-sip-stress/*), as well as the full logs from Sprout (in
> /var/log/sprout/sprout_current).
>
> Thanks,
>
> Ellie
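(For reference, the 401-challenge registration flow Ellie describes above, REGISTER, 401 with WWW-Authenticate, REGISTER with Authorization, 200 OK, boils down to the digest calculation sketched in Python below. This is plain RFC 2617 digest with no qop for simplicity; real deployments usually negotiate qop="auth", which also mixes in a client nonce and nonce count, and every value shown here is made up.)

import hashlib

def md5_hex(value):
    return hashlib.md5(value.encode()).hexdigest()

def digest_response(username, realm, password, method, uri, nonce):
    # HA1 covers the credentials, HA2 covers the request; the result is what
    # the client places in the Authorization header of the second REGISTER.
    ha1 = md5_hex(f"{username}:{realm}:{password}")
    ha2 = md5_hex(f"{method}:{uri}")
    return md5_hex(f"{ha1}:{nonce}:{ha2}")

# 1. REGISTER with no credentials  ->  401 with WWW-Authenticate: Digest
#    realm="example.com", nonce="abc123"
# 2. REGISTER resent with Authorization carrying the response below  ->  200 OK
print(digest_response("[email protected]", "example.com",
                      "secret", "REGISTER", "sip:example.com", "abc123"))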
