Thank you for your answer.
Then I’ll wait for the new version to run the new test.
However, running the old stress test I saw something strange: there are 2830
Unexpected-Msg entries, as you can see in the following log.
Do you know what can cause this problem? (I didn't change sip-stress.xml; it
is the original one.)
Thank you again
Fabrizio
------------------------------ Scenario Screen -------- [1-9]: Change Screen --
Users (length) Port Total-time Total-calls Remote-host
15000 (0 ms) 5060 7057.86 s 15000 192.168.3.61:5060(TCP)
Call limit reached (-m 15000), 1.001 s period 1 ms scheduler resolution
12170 calls (limit 15000) Peak was 15000 calls, after 0 s
0 Running, 12172 Paused, 55 Woken up
0 dead call msg (discarded) 0 out-of-call msg (discarded)
12172 open sockets
Messages Retrans Timeout Unexpected-Msg
Pause [0ms/10:00] 15000 0
REGISTER ----------> 15000 0
401 <---------- 15000 0 0 0
REGISTER ----------> 15000 0
200 <---------- 15000 0 0 0
REGISTER ----------> 15000 0
401 <---------- 15000 0 0 0
REGISTER ----------> 15000 0
200 <---------- 15000 0 0 0
Pause [ 10.0s] 15000 0
REGISTER ----------> B-RTD1 318238 0
200 <---------- E-RTD1 318238 0 0 0
REGISTER ----------> B-RTD1 318238 0
200 <---------- E-RTD1 318238 0 0 0
Pause [$reg_pause] 265542 0
Pause [$pre_call_delay] 52696 0
INVITE ----------> B-RTD2 51768 0
100 <---------- 51755 0 0 0
INVITE <---------- 13 0 0 0
100 <---------- 13 0 0 0
INVITE <---------- 48925 0 0 2830
100 ----------> 48938 0
180 ----------> 48938 0
180 <---------- 48938 0 0 0
Pause [$call_answer] 48938 0
200 ----------> 48909 0
200 <---------- 48909 0 0 0
ACK ----------> 48909 0
ACK <---------- 48909 0 0 0
UPDATE ----------> 48909 0
UPDATE <---------- 48909 0 0 0
200 ----------> 48909 0
200 <---------- E-RTD2 48909 0 0 0
Pause [$call_length] 48909 0
BYE ----------> B-RTD3 48751 0
BYE <---------- 48751 0 0 0
200 ----------> 48751 0
200 <---------- E-RTD3 48751 0 0 0
Pause [$post_call_delay] 48751 0
------- Waiting for active calls to end. Press [q] again to force exit. -------
Last Error: Aborting call on unexpected message for Call-Id '7810-9023@1...
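Not an answer to the question above, but a quick way to see which calls were aborted: the Call-IDs can be pulled out of SIPp's error output. A minimal sketch, assuming the "Aborting call on unexpected message for Call-Id '…'" wording shown above (the helper name and the example log path are my own):

```shell
# Sketch: extract the Call-IDs of calls that SIPp aborted on an
# unexpected message. Reads SIPp error output on stdin.
aborted_call_ids() {
  grep -o "unexpected message for Call-Id '[^']*'" \
    | sed "s/.*Call-Id '//; s/'\$//"
}
# Example:
#   aborted_call_ids < /var/log/clearwater-sipp/sip-stress.1.out | sort | uniq -c
```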
Fabrizio
On Jan 27, 2017, at 1:58 PM, Sebastian Rex <[email protected]> wrote:
Hi,
The new-style clearwater-sip-stress-coreonly scripts are broken in the most
recent stable release of Project Clearwater. They will be fixed in the next
release.
To work around this, you could either use the clearwater-sip-stress-coreonly
package from the “latest” repository, rather than “stable” (i.e.
http://repo.cw-ngv.com/latest), or you could continue to use the old-style
clearwater-sip-stress scripts until the next release.
I hope that helps,
Seb.
From: Clearwater [mailto:[email protected]] On Behalf Of Faustinoni Fabrizio
Sent: 25 January 2017 19:03
To: [email protected]
Subject: Re: [Project Clearwater] Clearwater sip-stress fail due to Unexpected-Msg 183
Now I’ve enough time to explain my whole scenario.
I’ve deployed a Clearater cluster on Openstack:
* 2 ralf nodes
* 5 sprout nodes
* 5 bono nodes
* 4 homestead nodes
* 4 homer nodes
* 1 ellis node
* 1 stress test node
* 1 DNS node (running BIND)
I’ve followed the Manual installation:
http://clearwater.readthedocs.io/en/stable/Installation_Instructions.html
From each node I can ping:
* The home_domain: demo.clearwater (every time a different bono node answers)
* sprout.demo.clearwater (every time a different sprout node answers)
* homer.demo.clearwater (every time a different homer node answers)
* hs.demo.clearwater (every time a different homestead node answers)
* ralf.demo.clearwater (every time a different ralf node answers)
* All bono-N.demo.clearwater nodes
* All sprout-N.demo.clearwater nodes
* All homer-N.demo.clearwater nodes
* All homestead-N.demo.clearwater nodes
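A quick way to confirm that these names really rotate between nodes is to resolve one of them repeatedly and tally the distinct answers. A sketch (the helper is mine; the example assumes dig from dnsutils is installed):

```shell
# Sketch: read one resolved address per line on stdin and tally the
# distinct answers; more than one distinct address means the name is
# rotating between nodes.
round_robin_tally() {
  sort | uniq -c | sort -rn
}
# Example (hostname from this deployment):
#   for i in 1 2 3 4 5 6 7 8 9 10; do
#     dig +short sprout.demo.clearwater | head -1
#   done | round_robin_tally
```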
If I log in to any homestead node and execute "nodetool status", the output is:
Datacenter: site1
=================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load      Tokens  Owns (effective)  Host ID                               Rack
UN  192.168.3.50  12.61 MB  256     48.2%             67a708b3-8b18-4bba-804c-228c0a40f519  RAC1
UN  192.168.3.51  12.67 MB  256     48.9%             aff78ecb-b7e0-4099-803a-dc7c1df8545f  RAC1
UN  192.168.3.52  15.76 MB  256     50.2%             b78c5a38-e136-4735-a41f-93f2476a012a  RAC1
UN  192.168.3.53  16.41 MB  256     52.7%             ddee088a-bff5-4c80-80e0-fcfa37a73077  RAC1
If I execute "clearwater-etcdctl cluster-health", the output is:
member 176dfbe09295f8fb is healthy: got healthy result from http://192.168.3.50:4000
member 3099bb17539a7c44 is healthy: got healthy result from http://192.168.3.51:4000
member 327f1928783cf78e is healthy: got healthy result from http://192.168.3.60:4000
member 32e70f18c5b85cbf is healthy: got healthy result from http://192.168.3.43:4000
member 439bb97c4682c8c9 is healthy: got healthy result from http://192.168.3.46:4000
member 44cd47f6fd725bb2 is healthy: got healthy result from http://192.168.3.44:4000
member 4fd1f4e06276b755 is healthy: got healthy result from http://192.168.3.56:4000
member 72859264f699bc49 is healthy: got healthy result from http://192.168.3.61:4000
member 732ad30023c78349 is healthy: got healthy result from http://192.168.3.53:4000
member 8e4684e027bcf7e1 is healthy: got healthy result from http://192.168.3.55:4000
member 92687a19b70d0308 is healthy: got healthy result from http://192.168.3.58:4000
member ac8738522a031a34 is healthy: got healthy result from http://192.168.3.48:4000
member b124cc2916d37e7f is healthy: got healthy result from http://192.168.3.54:4000
member be8408958431cf89 is healthy: got healthy result from http://192.168.3.59:4000
member c0d2f53681553fb6 is healthy: got healthy result from http://192.168.3.52:4000
member c5e64aae2d1fe7b1 is healthy: got healthy result from http://192.168.3.42:4000
member c99d1fc5050334f7 is healthy: got healthy result from http://192.168.3.57:4000
member d5b010e57f2d2145 is healthy: got healthy result from http://192.168.3.47:4000
member e510c4aa36d98ad8 is healthy: got healthy result from http://192.168.3.45:4000
member f3f724c6247f3c10 is healthy: got healthy result from http://192.168.3.49:4000
member f6fb305b3f402aff is healthy: got healthy result from http://192.168.3.41:4000
cluster is healthy
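For what it's worth, this check can be scripted. A minimal sketch, assuming the cluster-health output format shown above (the helper is my own, not a Clearwater tool):

```shell
# Sketch: summarise clearwater-etcdctl cluster-health output read on
# stdin; exits non-zero unless the final "cluster is healthy" verdict
# appears and no member is reported unhealthy.
check_etcd_health() {
  awk '
    /^member .* is healthy/   { healthy++ }
    /^member .* is unhealthy/ { bad++ }
    /^cluster is healthy/     { ok = 1 }
    END {
      printf "healthy=%d unhealthy=%d\n", healthy, bad
      exit (ok && bad == 0) ? 0 : 1
    }'
}
# Example: clearwater-etcdctl cluster-health | check_etcd_health
```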
SIP stress node:
I tried the 2 sip-stress tests (the old one and the new one).
The old one:
I've executed the provisioning command on homestead:
/usr/share/clearwater/crest/src/metaswitch/crest/tools/stress_provision.sh
I've added the setting reg_max_expires=1800 to the shared config and executed
/usr/share/clearwater/clearwater-config-manager/scripts/upload_shared_config
I've created the local_config on the sip-stress node and added the local_ip setting.
I've executed: sudo apt-get install clearwater-sip-stress
I didn't change the original sip-stress.xml.
I've executed /usr/share/clearwater/infrastructure/scripts/sip-stress (which
generates /usr/share/clearwater/sip-stress/users.csv.1).
I've started the service: sudo service clearwater-sip-stress restart && tail -f
/var/log/clearwater-sipp/sip-stress.1.out
The output is in the attached file sip-stress.1.out.zip.
Question: there are some Timeout messages after REGISTER; is this normal? Does
it depend on a wrong configuration?
Could this be the cause of the Timeout messages?
However, the old stress test seems to be working.
The new stress method:
I've created a new virtual machine.
I've created the local_config on the sip-stress node and added the local_ip setting.
I've executed: sudo apt-get install clearwater-sip-stress-coreonly
I've executed:
/usr/share/clearwater/bin/run_stress demo.clearwater 20 30
And surprisingly, now the test (after I re-installed sip-stress) ended
successfully:
Starting initial registration, will take 0 seconds
Initial registration succeeded
Starting test
Test complete
Elapsed time: 00:27:41
Start: 2017-01-25 19:16:23.260399
End: 2017-01-25 19:46:23.300702
Total calls: 6
Successful calls: 0 (0.0%)
Failed calls: 6 (100.0%)
Retransmissions: 0
Average time from INVITE to 180 Ringing: 0.0 ms
# of calls with 0-2ms from INVITE to 180 Ringing: 0 (0.0%)
# of calls with 2-20ms from INVITE to 180 Ringing: 0 (0.0%)
# of calls with 20-200ms from INVITE to 180 Ringing: 0 (0.0%)
# of calls with 200-2000ms from INVITE to 180 Ringing: 0 (0.0%)
# of calls with 2000+ms from INVITE to 180 Ringing: 0 (0.0%)
Total re-REGISTERs: 20
Successful re-REGISTERs: 20 (100.0%)
Failed re-REGISTERS: 0 (0.0%)
REGISTER retransmissions: 0
Average time from REGISTER to 200 OK: 31.0 ms
Log files at /var/log/clearwater-sip-stress/2687_*
But if I check the log /var/log/clearwater-sip-stress/2687_caller_errors.log,
there are some errors (see the attached file 2687_caller_errors.log).
Question: in some logs I see "sprout issued 1005.1 alarm". What does it mean?
If you need any other information, please let me know.
Thanks
On Jan 25, 2017, at 4:43 PM, Faustinoni Fabrizio <[email protected]> wrote:
Hi Sebastian,
I've subscribed to the mailing list.
How can I find out the Clearwater version? I installed it just a couple of days
ago.
On Jan 25, 2017, at 10:38 AM, Sebastian Rex <[email protected]> wrote:
Hi,
It’s just been brought to my attention that you’re not signed up to the mailing
list, so I suspect that you haven’t seen my response, below.
If you want to see all responses, I suggest you sign up to the mailing list
using:http://lists.projectclearwater.org/mailman/listinfo/clearwater_lists.projectclearwater.org
Regards,
Seb.
From: Clearwater [mailto:[email protected]] On Behalf Of Sebastian Rex
Sent: 24 January 2017 15:22
To: [email protected]
Subject: Re: [Project Clearwater] Clearwater sip-stress fail due to Unexpected-Msg 183
Hi,
Would you mind also telling us what release of Project Clearwater you’re
running on?
Thanks,
Seb.
From: Clearwater [mailto:[email protected]] On Behalf Of Faustinoni Fabrizio
Sent: 24 January 2017 14:49
To: [email protected]
Subject: [Project Clearwater] Clearwater sip-stress fail due to Unexpected-Msg 183
Hi,
I deployed a Clearwater cluster; everything looks fine.
If I start the stress test /usr/share/clearwater/bin/run_stress --sipp-output
demo.clearwater 400 10
the test fails:
Last Error: Aborting call on unexpected message for Call-Id '10-4484@127...
------------------------------ Scenario Screen -------- [1-9]: Change Screen --
Call-rate(length) Port Total-time Total-calls Remote-host
0.1(5000 ms)/1.000s 5061 141.75 s 10 192.168.3.46:5054(TCP)
0 new calls during 1.004 s period 1 ms scheduler resolution
0 calls (limit 1) Peak was 1 calls, after 13 s
1 Running, 2 Paused, 3 Woken up
0 dead call msg (discarded) 0 out-of-call msg (discarded)
3 open sockets
Messages Retrans Timeout Unexpected-Msg
INVITE ----------> 10 0 0
100 <---------- 10 0 0 0
183 <---------- 0 0 0 10
There aren’t error messages in the logs file.
In sprout nodes I can see this log but I don’t understand if this is an error
or just a info:
24-01-2017 14:47:20.227 UTC Status alarm.cpp:62: sprout issued 1004.1 alarm
24-01-2017 14:47:20.227 UTC Status alarm.cpp:62: sprout issued 1001.1 alarm
24-01-2017 14:47:20.227 UTC Status alarm.cpp:62: sprout issued 1002.1 alarm
24-01-2017 14:47:50.227 UTC Status alarm.cpp:62: sprout issued 1004.1 alarm
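To get a quick overview of which alarms sprout has raised, the IDs can be tallied from log lines of the form above. A sketch (the helper name is mine; feed it whichever sprout log file contains these lines):

```shell
# Sketch: tally "sprout issued N.M alarm" lines read on stdin,
# grouped by alarm ID, most frequent first.
alarm_tally() {
  grep -o 'issued [0-9][0-9]*\.[0-9][0-9]* alarm' \
    | awk '{ print $2 }' | sort | uniq -c | sort -rn
}
```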
Thank you
_______________________________________________
Clearwater mailing list
[email protected]<mailto:[email protected]>
http://lists.projectclearwater.org/mailman/listinfo/clearwater_lists.projectclearwater.org