Hello Bogdan,
I was monitoring, and I transferred the config to a dev machine with zero load and no connections, and it looks like some sort of mini crash, but I can't tell 100%. Here are the logs: the first line shows the 200 OK reply to an INVITE, then all those messages start showing up and the 200 OK never reaches the endpoint.


Aug 15 20:12:14 aitossbc03 /usr/sbin/opensips[29772]: OnReply_Route3: [INVITE] Direction: [FS ~> Client] and source IP pbx ip
Aug 15 20:12:14 aitossbc03 /usr/sbin/opensips[29765]: WARNING:core:utimer_ticker: utimer task <tm-utimer> already scheduled 100 ms ago (now 571950 ms), delaying execution
Aug 15 20:12:14 aitossbc03 /usr/sbin/opensips[29765]: WARNING:core:utimer_ticker: utimer task <tm-utimer> already scheduled 200 ms ago (now 572050 ms), delaying execution
Aug 15 20:12:14 aitossbc03 /usr/sbin/opensips[29765]: WARNING:core:utimer_ticker: utimer task <tm-utimer> already scheduled 300 ms ago (now 572150 ms), delaying execution
Aug 15 20:12:15 aitossbc03 /usr/sbin/opensips[29765]: WARNING:core:utimer_ticker: utimer task <tm-utimer> already scheduled 400 ms ago (now 572250 ms), delaying execution
Aug 15 20:12:15 aitossbc03 /usr/sbin/opensips[29765]: WARNING:core:utimer_ticker: utimer task <tm-utimer> already scheduled 500 ms ago (now 572350 ms), delaying execution
Aug 15 20:12:15 aitossbc03 /usr/sbin/opensips[29765]: WARNING:core:utimer_ticker: utimer task <tm-utimer> already scheduled 600 ms ago (now 572450 ms), delaying execution
Aug 15 20:12:15 aitossbc03 /usr/sbin/opensips[29765]: WARNING:core:utimer_ticker: utimer task <tm-utimer> already scheduled 700 ms ago (now 572550 ms), delaying execution
Aug 15 20:12:15 aitossbc03 /usr/sbin/opensips[29765]: WARNING:core:utimer_ticker: utimer task <tm-utimer> already scheduled 800 ms ago (now 572650 ms), delaying execution
Aug 15 20:12:15 aitossbc03 /usr/sbin/opensips[29765]: WARNING:core:utimer_ticker: utimer task <tm-utimer> already scheduled 900 ms ago (now 572750 ms), delaying execution




On Thu, Aug 9, 2018 at 6:54 AM, Bogdan-Andrei Iancu <bog...@opensips.org> wrote:
Hi Volga,

The logs report seriously heavy execution of certain timer routines in OpenSIPS - for example, the presence cleanup takes more than 191 secs. This is most probably due to long-lasting DB queries, or to heavy load in OpenSIPS that leads to starvation in handling the timer jobs. What is the internal load of OpenSIPS ? (use the 'load:' class of statistics to check it)
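As a sketch of that check, assuming the stock opensipsctl tool is on the PATH and the mi_fifo module is enabled (both are assumptions about your setup, this is not from the original mail):

```shell
# Dump the "load:" statistics class over the MI FIFO (OpenSIPS 2.4 syntax).
# Assumes opensipsctl is installed and OpenSIPS is running with mi_fifo;
# prints a short note instead of failing when the tool is not available.
out=$(opensipsctl fifo get_statistics load: 2>/dev/null) \
  || out="opensipsctl not available on this host"
echo "$out"
```

Values near 100% for a process mean that process has no idle time left for timer jobs.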

Regards,

Bogdan-Andrei Iancu

OpenSIPS Founder and Developer
  http://www.opensips-solutions.com
OpenSIPS Bootcamp 2018
  http://opensips.org/training/OpenSIPS_Bootcamp_2018/

On 07/01/2018 07:28 AM, volga...@networklab.ca wrote:
Hello Bogdan,
I checked the database connection and it looks normal; ping to the database is less than a second. I monitored the load on the database nodes and didn't notice any extra load on them.
Right now I see 2 types of messages on all cluster nodes.


Jun 30 22:56:45 aitossbc01 /usr/sbin/opensips[20245]: WARNING:core:timer_ticker: timer task <presence-dbupdate> already scheduled 100410 ms ago (now 700510 ms), skipping execution
Jun 30 22:56:45 aitossbc01 /usr/sbin/opensips[20245]: WARNING:core:timer_ticker: timer task <presence-pclean> already scheduled 191700 ms ago (now 700510 ms), delaying execution

and

Jun 30 23:14:36 aitossbc02 /usr/sbin/opensips[9376]: WARNING:core:utimer_ticker: utimer task <tm-utimer> already scheduled 3880 ms ago (now 113380 ms), delaying execution
Jun 30 23:14:36 aitossbc02 /usr/sbin/opensips[9376]: WARNING:core:utimer_ticker: utimer task <tm-utimer> already scheduled 5880 ms ago (now 113380 ms), delaying execution

Also, this is not affecting calls or call quality; it is only filling up the logs.

volga629

On Thu, Jun 14, 2018 at 6:48 AM, Bogdan-Andrei Iancu <bog...@opensips.org> wrote:
Hi Volga,

How large is the presentity data set in your system ? I'm asking because the routine that seems to be slow is the one querying the presentity table to get the expired presentities - this is done with or without clustering. Still, with clustering, all the nodes are doing this query, putting extra stress on the DB.

Now, for each expired presentity, OpenSIPS has to send a NOTIFY to its subscribers - and here, having clustering enabled does make a difference. If you have 3 nodes, and so 3 sharing tags, OpenSIPS will do 3 queries (one per tag) to fetch from active_watchers the subscribers with that tag for the presentity. So, how large is the subscribers' data set ?
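A rough way to size both data sets, sketched here assuming the stock OpenSIPS Postgres schema (tables presentity and active_watchers) and the opensips_prod01 database name quoted later in this thread; host and credentials are hypothetical, adjust to your BDR setup:

```shell
# Count rows in the two presence tables (standard OpenSIPS schema names).
# Assumes psql can reach the opensips_prod01 database with default auth;
# prints a note instead of failing when the DB is not reachable.
out=$(psql opensips_prod01 -t -c \
  "SELECT (SELECT COUNT(*) FROM presentity)      AS presentities,
          (SELECT COUNT(*) FROM active_watchers) AS subscribers;" \
  2>/dev/null) \
  || out="database not reachable from this host"
echo "$out"
```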

Do you notice any extra DB load when the cleanup timer kicks in ?

Regards,

Bogdan-Andrei Iancu

OpenSIPS Founder and Developer
  http://www.opensips-solutions.com
OpenSIPS Summit 2018
  http://www.opensips.org/events/Summit-2018Amsterdam

On 06/07/2018 03:43 PM, volga...@networklab.ca wrote:
Hello Bogdan-Andrei,
Yes, those messages started showing up when clustering was enabled; in standalone mode everything works with no issues. Right now the OpenSIPS cluster has 2 active nodes and one backup. The PgSQL cluster has 5 nodes: 3 active, 2 backup. The MongoDB cluster has 2 mongos, 3 config servers and 2 shards.
We use a Postgres BDR cluster and a MongoDB cluster.

Here is the configuration:

#### Presence
loadmodule "presence.so"
loadmodule "presence_mwi.so"
loadmodule "presence_xml.so"
loadmodule "presence_dialoginfo.so"
loadmodule "presence_callinfo.so"
loadmodule "pua.so"
loadmodule "pua_dialoginfo.so"
loadmodule "xcap.so"
modparam("presence|xcap|pua","db_url","postgres://URI/opensips_prod01")
modparam("presence","server_address","sip:proxy@PUBLIC IP:5082")
modparam("presence", "notify_offline_body", 1)
modparam("presence", "fallback2db", 1)
modparam("presence", "clean_period",  30)
modparam("presence", "mix_dialog_presence", 1)
modparam("presence", "cluster_id", 1)
modparam("presence", "cluster_sharing_tags", "A=active")
modparam("presence", "cluster_federation_mode", 1)
modparam("presence", "cluster_pres_events" ,"presence , dialog;sla")
modparam("presence_xml", "force_active", 1)
modparam("presence_xml", "pidf_manipulation", 1)
modparam("pua_dialoginfo", "presence_server", "sip:proxy@PUBLIC IP:5082")
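One knob worth noting in the config above, given the pclean warnings: with clean_period at 30, every cluster node runs the cleanup query twice a minute. A less aggressive setting (the value below is only an illustration, not from the original mail, and needs tuning against your expiry intervals) would look like:

```
# Hypothetical tuning: space out the expensive presence cleanup query;
# 300 is an illustrative value, not a recommendation from this thread.
modparam("presence", "clean_period", 300)
```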


volga629


On Thu, Jun 7, 2018 at 7:32 AM, Bogdan-Andrei Iancu <bog...@opensips.org> wrote:
Hi Slava,

What is the presence clustering configuration you have here ? Also, what is the DB setup with regard to the cluster ?

Also, did you start getting those errors only after enabling the clustering support ? Have you run an individual OpenSIPS presence node to see if you still get them ?

Regards,

Bogdan-Andrei Iancu

OpenSIPS Founder and Developer
  http://www.opensips-solutions.com
OpenSIPS Summit 2018
  http://www.opensips.org/events/Summit-2018Amsterdam

On 06/06/2018 05:20 PM, volga...@networklab.ca wrote:
Hello Everyone,
I am trying to put together a 3-node (2 active, 1 backup) presence cluster, and the log is filled with messages regarding the cleanup timer.
Any help is appreciated, thank you.

opensips-2.4.1.b044f11ee-16.fc27.x86_64


Jun 6 09:15:12 sbc01 /usr/sbin/opensips[4584]: WARNING:core:timer_ticker: timer task <presence-pclean> already scheduled for 5201330 ms (now 5653910 ms), it may overlap..
Jun 6 09:15:13 sbc01 /usr/sbin/opensips[4584]: WARNING:core:timer_ticker: timer task <presence-pclean> already scheduled for 5201330 ms (now 5654900 ms), it may overlap..


Slava.


_______________________________________________
Users mailing list
Users@lists.opensips.org
http://lists.opensips.org/cgi-bin/mailman/listinfo/users