Yes, sure.
Process:: ID=42 PID=30896 Type=TCP receiver
# opensips -V
version: opensips 2.4.3 (x86_64/linux)
flags: STATS: On, DISABLE_NAGLE, USE_MCAST, SHM_MMAP, PKG_MALLOC, F_MALLOC,
FAST_LOCK-ADAPTIVE_WAIT
ADAPTIVE_WAIT_LOOPS=1024, MAX_RECV_BUFFER_SIZE 262144, MAX_LISTEN 16,
MAX_URI_SIZE
Actually this looks like a memory leak.
Can you tell us what kind of process 30896 is? You can find out by running
`opensipsctl fifo ps | grep 30896`
Also, what version of OpenSIPS are you using (output of `opensips -V`)?
Best regards,
Răzvan
On 12/7/18 10:52 AM, vasilevalex wrote:
Hi all,
Yes, maybe my assumption was wrong. @Răzvan, please look at the logs and the routing
script parts:
Here is the OpenSIPS startup log (I skip repeated messages and add some
comments):
Dec 1 20:13:01 test02 /usr/sbin/opensips[30853]: NOTICE:core:main: version:
opensips 2.4.3 (x86_64/linux)
Dec 1
On 12/6/18 1:16 PM, Vitalii Aleksandrov wrote:
This seems to be more clean, efficient, and if you don't need it, the
OS will not even allocate it (due to the demand-paging mechanism). So
I don't see where you reservations for setting a higher value of the
-M parameter come from.
Best regards,
Răzvan
Just my 2 cents about PKG mem.
Thanks @Răzvan. As I said, I don't know OpenSIPS memory management in such
depth. And I usually try to use only as many resources as the task actually
requires.
I just didn't want "overselling", like having 80 OpenSIPS processes with 32
MB each. Is that OK if they all decide to use all the memory?
I don't agree with this :). SHM has a completely different purpose (that of
sharing data between processes), not just the virtue of being large. And
I'm not arguing about performance here (SHM locks, write/read
operations), but rather about things that this change would influence:
1. fragmentation
Hi all,
Thank you @Liviu, this is exactly what I mean. Either use SHM, or maybe
split all the data into smaller parts to fit in PKG memory. These are just
thoughts, because if people start using real data in a cluster they will hit
the PKG limit very fast.
Hi guys,
I think Alexey's point is different: the idea is that PKG memory is
used to store data which is proportional to SHM memory. The same type
of problem exists with the dialog/usrloc MI "dump" commands: "since
dialogs and contacts are stored in SHM, can we really expect to be able
to
@Alexei: unfortunately there is no way in OpenSIPS to auto-scale the
private or public memory - you'll have to decide from the beginning how
much traffic you are going to support and scale the memory usage
accordingly. Syncing cannot be done using shared memory, so the only
solution I can see
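(For reference, both memory pools are fixed at startup via command-line flags, so this is where the up-front sizing decision lands. The values below are purely illustrative, not recommendations:)

```
# Start OpenSIPS with 1024 MB of shared (SHM) memory and 16 MB of
# private (PKG) memory per process -- both numbers are examples only.
/usr/sbin/opensips -m 1024 -M 16 -f /etc/opensips/opensips.cfg
```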
Please tell me how to configure OpenSIPS for cluster replication (usrloc or
dialog).
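(As a rough sketch of what a full-sharing usrloc setup looks like on 2.4, based on the clusterer and usrloc module docs -- node IDs, IPs and ports are placeholders, and you should double-check the parameter names against the docs for your exact version:)

```
listen = bin:10.0.0.1:5555        # placeholder address for BIN replication

loadmodule "proto_bin.so"
loadmodule "clusterer.so"
loadmodule "usrloc.so"

# DB-less clusterer provisioning; node and neighbor URLs are placeholders
modparam("clusterer", "db_mode", 0)
modparam("clusterer", "my_node_id", 1)
modparam("clusterer", "my_node_info", "cluster_id=1,url=bin:10.0.0.1:5555")
modparam("clusterer", "neighbor_node_info",
         "cluster_id=1,node_id=2,url=bin:10.0.0.2:5555")

# Replicate the in-memory location table across cluster 1
modparam("usrloc", "working_mode_preset", "full-sharing-cluster")
modparam("usrloc", "location_cluster", 1)
```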
On Mon, Dec 3, 2018 at 1:00 PM vasilevalex
wrote:
> Hi all,
>
> I have simple cluster with full-sharing usrloc. Everything is in memory, no
> DB for usrloc.
> When starting backup server it syncing usrloc data. So
Hi all,
I have a simple cluster with full-sharing usrloc. Everything is in memory, no
DB for usrloc.
When the backup server starts, it syncs the usrloc data, and I get these errors:
Dec 1 20:15:12 test02 /usr/sbin/opensips[30896]: ERROR:core:fm_malloc: not
enough free pkg memory (30400 bytes left, need
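(If it helps with diagnosing this: per-process private-memory counters can be dumped at runtime through the statistics interface, assuming the default fifo setup is loaded:)

```
# Dump the per-process PKG memory counters (pkmem statistics group)
opensipsctl fifo get_statistics pkmem:
```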