Hi Schneur,
It is strongly recommended that all OpenSIPS nodes in a cluster have
the same version.
Best regards,
Bogdan-Andrei Iancu
OpenSIPS Founder and Developer
https://www.opensips-solutions.com
OpenSIPS eBootcamp 2021
https://opensips.org/training/OpenSIPS_eBootcamp_2021/
On 1/18/22 6:08 PM, Schneur Rosenberg wrote:
Hi, it seems like it was fixed in 3.2, so I will have to migrate all my
servers. I use binary replication; will it break if one server is
running 2.4 and the other 3.2? It's an active/passive setup, so I will
take down one node at a time and upgrade it. I'm just worried about what
will happen while one is on 3.2 and the other is still on 2.4. In the
past I disabled the replication while doing the updates, and I'm
wondering if that is still necessary.
thanks
Scott (Schneur)
On Fri, Dec 17, 2021 at 6:21 PM Bogdan-Andrei Iancu <[email protected]> wrote:
While trying (and failing) to reproduce this, I noticed you mentioned it
is on version 2.4.11, right? I was testing on 3.2 without getting the leak.
Could you try on 3.2/3.1? Keep in mind that 2.4 is not maintained anymore :(
Regards,
Bogdan-Andrei Iancu
OpenSIPS Founder and Developer
https://www.opensips-solutions.com
OpenSIPS eBootcamp 2021
https://opensips.org/training/OpenSIPS_eBootcamp_2021/
On 12/17/21 1:40 PM, Schneur Rosenberg wrote:
Thanks, Bogdan! This is my entire local_route, and all my dst_uri's are IP only.
On Fri, Dec 17, 2021, 12:36 Bogdan-Andrei Iancu <[email protected]> wrote:
Hi Schneur,
I suspect that the leaking mk_proxy is related to changing the
RURI in the local route. Let me test your snippet. BTW, is that the whole
processing you do in local_route? And is the $rd (coming from the LB) an FQDN
or a straight IP?
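If the $ru rewrite does turn out to be what triggers the leaking mk_proxy, one experiment (just a sketch, and only viable if your Asterisk side can match the probe without the "s@" user) would be to keep the Contact trick but leave the R-URI untouched:

local_route {
    # sketch: same probe handling, minus the R-URI rewrite
    if (is_method("INVITE") && $fU == "pingTest") {
        append_hf("Contact: <sip:pingTest@$fd:5060>\r\n");
        exit;
    }
}

Treat it as a way to narrow down the leak rather than as a fix.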
Regards,
Bogdan-Andrei Iancu
OpenSIPS Founder and Developer
https://www.opensips-solutions.com
OpenSIPS eBootcamp 2021
https://opensips.org/training/OpenSIPS_eBootcamp_2021/
On 12/16/21 9:43 AM, Schneur Rosenberg wrote:
Hi Bogdan
I think I found the issue. I recently added these lines of code because
of a probing problem I was having; I just searched through my previous
tickets and see that you had warned me about the implications, but for
some reason I never read the message.
Here is the old ticket:
https://www.mail-archive.com/[email protected]/msg43301.html
The reason I'm probing with INVITE is that I want the probed servers not
only to respond, but also to check that the database is working. I did it
this way because I had cases where MySQL crashed while my Asterisk servers
were still answering the probe, yet all of the calls just hung. So I send
an INVITE, the far end does a DB lookup, and it only returns a positive
reply if it was able to query the DB. Do you have a better solution? At
the time I set this up I couldn't run a query on receipt of an OPTIONS,
but perhaps I didn't look hard enough :-). Either way, is there anything
I can do to make sure this code doesn't leak memory? This probing has
worked for years, until I needed the Contact header.
local_route {
    # locally generated probe INVITEs (From user "pingTest"):
    # point the R-URI at user "s" on the probed destination ($rd)
    # and append a Contact header built from the From domain ($fd)
    if (is_method("INVITE") && $fU == "pingTest") {
        $ru = "sip:s@" + $rd;
        append_hf("Contact: <sip:pingTest@$fd:5060>\r\n");
        exit;
    }
}
On Fri, Dec 10, 2021 at 2:16 PM Schneur Rosenberg
<[email protected]> wrote:
Hi Bogdan,
I did it on a backup server; it's also leaking memory, but at a slower
pace. I'm attaching the logs from running kill -SIGUSR1 on the pid
that is growing in size. It still has available memory; I hope this
gives you a clue.
Here is a pastebin of the logs: https://pastebin.com/KJVb9Y75
On Fri, Dec 10, 2021 at 11:00 AM Schneur Rosenberg
<[email protected]> wrote:
Thank you. Does this reduce performance? Can I leave it enabled on a
production machine? I will wait for the memory leak to become apparent
and then post the result.
On Thu, Dec 9, 2021 at 12:31 PM Bogdan-Andrei Iancu <[email protected]> wrote:
Hi Schneur,
Just follow
https://www.opensips.org/Documentation/TroubleShooting-OutOfMem and
provide the dump. This is the only way to investigate this.
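For reference, the core globals involved look roughly like this (a sketch; double-check the exact parameter names against the 2.4 core docs, and note the dump is most useful if the binary was built with memory debugging enabled, as the wiki page explains):

# opensips.cfg, global section (sketch)
log_level=3   # normal runtime log level
memlog=2      # log level used for memory-operation messages
memdump=2     # log level for the allocator status dump;
              # keep it no higher than log_level so the dump reaches the log

A SIGUSR1 sent to the process that keeps growing should then write its pkg memory status to the log. Also keep in mind that raising -M only postpones the errors if there is a real leak, so the dump is what we actually need.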
Regards,
Bogdan-Andrei Iancu
OpenSIPS Founder and Developer
https://www.opensips-solutions.com
OpenSIPS eBootcamp 2021
https://opensips.org/training/OpenSIPS_eBootcamp_2021/
On 12/8/21 12:14 PM, Schneur Rosenberg wrote:
I just noticed that process 88 runs the timer handler; perhaps this
sheds some light on what's going on.
opensipsctl fifo ps
Process:: ID=88 PID=5327 Type=Timer handler
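(For completeness: the per-process pkmem figures in my earlier mail come from the statistics interface, with something like

opensipsctl fifo get_statistics pkmem:

in case you want to pull the same view on your side.)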
On Wed, Dec 8, 2021 at 10:55 AM Schneur Rosenberg
<[email protected]> wrote:
Now, a few hours later, this is what I'm getting:
Dec 8 09:50:13 /sbin/opensips[21699]: ERROR:nathelper:nh_timer: out
of pkg memory
Dec 8 09:50:16 /sbin/opensips[21699]: WARNING:core:fm_malloc: not
enough continuous free pkg memory (3024 bytes left, need 5128),
attempting defragmentation... please increase the "-M" command line
parameter!
Dec 8 09:50:16 /sbin/opensips[21699]: ERROR:core:fm_malloc: not
enough free pkg memory (3024 bytes left, need 5128), please increase
the "-M" command line parameter!
Here are the last 20 processes' package memory max_used_size values:
pkmem:70-max_used_size:: 1009584
pkmem:71-max_used_size:: 1009584
pkmem:72-max_used_size:: 1009584
pkmem:73-max_used_size:: 1009584
pkmem:74-max_used_size:: 1009584
pkmem:75-max_used_size:: 1009584
pkmem:76-max_used_size:: 1009584
pkmem:77-max_used_size:: 1009584
pkmem:78-max_used_size:: 1009584
pkmem:79-max_used_size:: 1009584
pkmem:80-max_used_size:: 1044752
pkmem:81-max_used_size:: 1075552
pkmem:82-max_used_size:: 1116848
pkmem:83-max_used_size:: 1117456
pkmem:84-max_used_size:: 1102640
pkmem:85-max_used_size:: 1306992
pkmem:86-max_used_size:: 1706304
pkmem:87-max_used_size:: 2507000
pkmem:88-max_used_size:: 4194264
pkmem:89-max_used_size:: 1009584
And here is the real_used_size, where you can see that process 88 maxed out:
pkmem:69-real_used_size:: 975528
pkmem:70-real_used_size:: 978016
pkmem:71-real_used_size:: 989592
pkmem:72-real_used_size:: 951416
pkmem:73-real_used_size:: 982496
pkmem:74-real_used_size:: 965744
pkmem:75-real_used_size:: 959424
pkmem:76-real_used_size:: 949472
pkmem:77-real_used_size:: 983080
pkmem:78-real_used_size:: 961400
pkmem:79-real_used_size:: 977808
pkmem:80-real_used_size:: 978928
pkmem:81-real_used_size:: 1009936
pkmem:82-real_used_size:: 1110760
pkmem:83-real_used_size:: 1116720
pkmem:84-real_used_size:: 1096568
pkmem:85-real_used_size:: 1300592
pkmem:86-real_used_size:: 1699648
pkmem:87-real_used_size:: 2501096
pkmem:88-real_used_size:: 4191280
pkmem:89-real_used_size:: 882528
On Tue, Dec 7, 2021 at 7:53 PM Schneur Rosenberg
<[email protected]> wrote:
Hi, lately I'm getting these errors in my logs.
ERROR:core:fm_malloc: not enough free pkg memory (1792 bytes left,
need 2184), please increase the "-M" command line parameter!
CRITICAL:core:hostent_cpy: pkg memory allocation failure
ERROR:nathelper:nh_timer: out of pkg memory
ERROR:core:fm_malloc: not enough free pkg memory (5952 bytes left,
need 5408), please increase the "-M" command line parameter!
I was on version 2.4.8 and upgraded to 2.4.11, and I'm monitoring the
max_used_size of the package memory. A few hours later I see that two
processes keep getting bigger, while the rest are so far pretty stable;
I have 90 processes, and numbers 87 and 88 are the ones growing.
Here you can see the last few processes; OpenSIPS sets aside 4 MB per process.
pkmem:80-max_used_size:: 1009584
pkmem:81-max_used_size:: 1009584
pkmem:82-max_used_size:: 1009584
pkmem:83-max_used_size:: 1009584
pkmem:84-max_used_size:: 1009584
pkmem:85-max_used_size:: 1009584
pkmem:86-max_used_size:: 1143608
pkmem:87-max_used_size:: 1323256
pkmem:88-max_used_size:: 1831928
pkmem:89-max_used_size:: 1009584
Any hints on where to start looking, besides the solutions found here?
https://www.opensips.org/Documentation/TroubleShooting-OutOfMem
thank you
Scott
_______________________________________________
Users mailing list
[email protected]
http://lists.opensips.org/cgi-bin/mailman/listinfo/users