Hello,

The log_next_state_dlg warning means that a 1xx provisional reply was received after the 200 OK reply had already come in - nothing critical if the communication was over UDP, since the ordering of UDP datagrams is not guaranteed.
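For illustration, this is the kind of race that can happen over UDP (hypothetical trace, message names made up for the example):

  UAS sends:                 Proxy receives:
  1. 180 Ringing (delayed)   1. 200 OK       -> dialog moves to confirmed state
  2. 200 OK                  2. 180 Ringing  -> late 1xx, the "bogus event" warning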

About the memory issues, it seems you have simply run out of shared memory - the first build_req_buf_from_sip_req error message is misleading: at that point it is actually trying to allocate shared memory, not pkg memory. Your statistics point the same way: shmem:max_used_size (1060997488) is within about 1% of shmem:total_size (1073741824), so the shared memory pool was almost completely exhausted at its peak, and the very high shmem:fragments count suggests the pool is also heavily fragmented, which can make larger allocations fail even while free_size looks comfortable.

In order to debug this further, can you please follow the steps at http://www.opensips.org/Documentation/TroubleShooting-OutOfMem and send us the output of the memory dump?
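While the test is running, it can also help to sample the memory statistics periodically via the MI fifo - a minimal sketch, assuming the standard opensipsctl tool and the default fifo setup:

  # sample shared and per-process memory statistics every 10 seconds
  while true; do
      date
      opensipsctl fifo get_statistics shmem: pkmem:
      sleep 10
  done

Comparing consecutive samples should show whether real_used_size grows steadily (a leak) or only spikes under load.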

Best Regards,

Vlad Paiu
OpenSIPS Developer
http://www.opensips-solutions.com

On 19.09.2013 23:58, Bobby Smith wrote:
Greetings list,

We're trying to track down some memory issues that we originally thought were related to rabbitmq, but after updating to the latest 1.9 I'm seeing a lot of these errors in the log file:

2013-09-19T20:54:40.115582+00:00 registrar2 /usr/local/opensips/sbin/opensips[3916]: CRITICAL:dialog:log_next_state_dlg: bogus event 2 in state 3 for dlg 0x2acc8b7780b0 [3248:527118168] with clid '[email protected]' and tags '3388204' '13665SIPpTag01563795'

I think I understand what this means (something to do with the order in which a 200 OK and ACK are processed), but repeating the same test on a previous revision doesn't produce these messages.

Also, after the test has been running for a short while:

2013-09-19T18:02:23.809205+00:00 registrar2 /usr/local/opensips/sbin/opensips[3918]: ERROR:core:build_req_buf_from_sip_req: out of pkg memory
2013-09-19T18:02:23.809231+00:00 registrar2 /usr/local/opensips/sbin/opensips[3918]: ERROR:tm:print_uac_request: no more shm_mem
2013-09-19T18:02:23.809242+00:00 registrar2 /usr/local/opensips/sbin/opensips[3917]: ERROR:core:build_req_buf_from_sip_req: out of pkg memory
2013-09-19T18:02:23.809252+00:00 registrar2 /usr/local/opensips/sbin/opensips[3918]: ERROR:tm:t_forward_nonack: failure to add branches
2013-09-19T18:02:23.809261+00:00 registrar2 /usr/local/opensips/sbin/opensips[3917]: ERROR:tm:print_uac_request: no more shm_mem
2013-09-19T18:02:23.809271+00:00 registrar2 /usr/local/opensips/sbin/opensips[3917]: ERROR:tm:t_forward_nonack: failure to add branches
2013-09-19T18:02:23.809279+00:00 registrar2 /usr/local/opensips/sbin/opensips[3918]: ERROR:tm:_reply_light: failed to allocate shmem buffer
2013-09-19T18:02:23.809288+00:00 registrar2 /usr/local/opensips/sbin/opensips[3917]: ERROR:tm:_reply_light: failed to allocate shmem buffer
2013-09-19T18:02:23.809297+00:00 registrar2 /usr/local/opensips/sbin/opensips[3916]: ERROR:tm:new_t: out of mem
2013-09-19T18:02:23.809306+00:00 registrar2 /usr/local/opensips/sbin/opensips[3916]: ERROR:tm:t_newtran: new_t failed
2013-09-19T18:02:23.809911+00:00 registrar2 /usr/local/opensips/sbin/opensips[3921]: ERROR:tm:new_t: out of mem
2013-09-19T18:02:23.809942+00:00 registrar2 /usr/local/opensips/sbin/opensips[3917]: ERROR:tm:new_t: out of mem
2013-09-19T18:02:23.809970+00:00 registrar2 /usr/local/opensips/sbin/opensips[3917]: ERROR:tm:t_newtran: new_t failed
2013-09-19T18:02:23.809999+00:00 registrar2 /usr/local/opensips/sbin/opensips[3916]: ERROR:tm:new_t: out of mem
2013-09-19T18:02:23.810037+00:00 registrar2 /usr/local/opensips/sbin/opensips[3916]: ERROR:tm:t_newtran: new_t failed
2013-09-19T18:02:23.810068+00:00 registrar2 /usr/local/opensips/sbin/opensips[3921]: ERROR:tm:t_newtran: new_t failed
2013-09-19T18:02:23.810880+00:00 registrar2 /usr/local/opensips/sbin/opensips[3919]: ERROR:core:build_req_buf_from_sip_req: out of pkg memory
2013-09-19T18:02:23.810921+00:00 registrar2 /usr/local/opensips/sbin/opensips[3921]: ERROR:dialog:dlg_add_leg_info: Failed to resize legs array

It seems very strange that we would run out of both pkg and shared memory at the same time. I dumped the statistics while these messages were flooding the log.

According to statistics:

shmem:total_size = 1073741824
shmem:used_size = 168525088
shmem:real_used_size = 390522728
shmem:max_used_size = 1060997488
shmem:free_size = 683219096
shmem:fragments = 1106426

pkmem:0-real_used_size = 601136
pkmem:1-real_used_size = 610592
pkmem:2-real_used_size = 50858056
pkmem:3-real_used_size = 610416
pkmem:4-real_used_size = 610416
pkmem:5-real_used_size = 610416
pkmem:6-real_used_size = 610416
pkmem:7-real_used_size = 610416
pkmem:8-real_used_size = 610416
pkmem:9-real_used_size = 610416
pkmem:10-real_used_size = 610416
pkmem:11-real_used_size = 650864
pkmem:12-real_used_size = 654800
pkmem:13-real_used_size = 650944
pkmem:14-real_used_size = 651136
pkmem:15-real_used_size = 650704
pkmem:16-real_used_size = 650888
pkmem:17-real_used_size = 651712
pkmem:18-real_used_size = 651040
pkmem:19-real_used_size = 601136
pkmem:20-real_used_size = 618512
pkmem:21-real_used_size = 669680
pkmem:22-real_used_size = 669680
pkmem:23-real_used_size = 669680
pkmem:24-real_used_size = 669680
pkmem:25-real_used_size = 669680
pkmem:26-real_used_size = 669680
pkmem:27-real_used_size = 669680
pkmem:28-real_used_size = 669680
pkmem:29-real_used_size = 660464

And pkmem is configured for 64MB per process.
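For reference, these sizes are set on the opensips command line, roughly like this (a sketch, assuming the stock -m/-M options, both in MB; paths simplified):

  # 1 GB shm (matches shmem:total_size above), 64 MB pkg per process
  /usr/local/opensips/sbin/opensips -m 1024 -M 64 -f /usr/local/opensips/etc/opensips/opensips.cfg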

Any thoughts? It doesn't seem like transactions are actually being dropped or anything; we just see these strange errors in the logs.

Thanks,





_______________________________________________
Users mailing list
[email protected]
http://lists.opensips.org/cgi-bin/mailman/listinfo/users
