Re: [zeromq-dev] Memory pool for zmq_msg_t

2019-08-17 Thread Francesco
rocess is running in the background for a long time and finally I found it occupies more and more memory, until it exhausts the system memory. It seems there's a memory leak with this approach. 2) I provided *ffn for deallocation but the memory is freed back much more slowly than it is consumed. So

Re: [zeromq-dev] Memory pool for zmq_msg_t

2019-08-16 Thread Luca Boccassi
mq.so.5.2.3 [.] zmq::msg_t::init
1,34% libzmq.so.5.2.3 [.] zmq::pipe_t::check_write
1,24% libzmq.so.5.2.3 [.] zmq::stream_engine_base_t::in_event_internal

Re: [zeromq-dev] Memory pool for zmq_msg_t

2019-08-16 Thread Francesco
Thanks,
Francesco

On Fri, 19 Jul 2019 at 18:15, Francesco wrote:

Hi Yan,
Unfortunately I have interrupted my att

Re: [zeromq-dev] Memory pool for zmq_msg_t

2019-08-15 Thread Doron Somech
nstead!). I'm also a bit surprised that nobody has tried and posted online a way to achieve something similar (memory pool zmq send)... But anyway it remains in my plans to try that out when I have a bit more spare time... If you m

Re: [zeromq-dev] Memory pool for zmq_msg_t

2019-08-15 Thread Francesco
Yan, Liming (NSB - CN/Hangzhou) <liming@nokia-sbell.com> wrote:

Hi, Francesco
Could you please share the final solution and benchmark result for plan 2? Big thanks. I'm concerned about this because I had tried something similar before with

Re: [zeromq-dev] Memory pool for zmq_msg_t

2019-08-14 Thread Doron Somech
for deallocation but the memory is freed back much more slowly than it is consumed. So finally my own customized pool could also be exhausted. How do you solve this? I had to turn back to using zmq_send(). I know it has a memory-copy penalty but it's the easiest and most stable way to send me
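A common mitigation for the exhaustion problem described above (a hypothetical sketch, not from this thread: the pool layout and names like pool_acquire are invented) is to let the pool fall back to plain malloc() when it runs empty, so the hot path stays allocation-free while correctness never depends on the pool size:

```c
#include <stdlib.h>

#define POOL_SLOTS 4
#define SLOT_SIZE  1024

/* One preallocated slab plus a LIFO free-list of slot indices. */
typedef struct {
    unsigned char slab[POOL_SLOTS][SLOT_SIZE];
    int free_idx[POOL_SLOTS];
    int free_top;               /* number of currently free slots */
} pool_t;

static void pool_init(pool_t *p) {
    for (int i = 0; i < POOL_SLOTS; i++)
        p->free_idx[i] = i;
    p->free_top = POOL_SLOTS;
}

/* Returns a pool slot if one is free, otherwise falls back to malloc. */
static void *pool_acquire(pool_t *p) {
    if (p->free_top > 0)
        return p->slab[p->free_idx[--p->free_top]];
    return malloc(SLOT_SIZE);   /* pool exhausted: degrade gracefully */
}

static void pool_release(pool_t *p, void *buf) {
    unsigned char *b  = buf;
    unsigned char *lo = &p->slab[0][0];
    unsigned char *hi = &p->slab[POOL_SLOTS - 1][SLOT_SIZE - 1];
    /* Note: comparing an unrelated malloc'd pointer against the slab
       range is technically unspecified in ISO C, but works on common
       platforms; a production pool would tag buffers instead. */
    if (b >= lo && b <= hi)
        p->free_idx[p->free_top++] = (int)((b - lo) / SLOT_SIZE);
    else
        free(buf);              /* came from the malloc fallback */
}
```

With this shape the pool size only tunes performance: a slow consumer makes the sender pay malloc() again instead of failing outright.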

Re: [zeromq-dev] Memory pool for zmq_msg_t

2019-08-14 Thread Jens Auer
e memory freed back is much slower than consumer. So finally my own customized pool could also be exhausted. How do you solve this? I had to turn back to use zmq_send(). I know it has memory copy penalty but it's the easiest and most stable w

Re: [zeromq-dev] Memory pool for zmq_msg_t

2019-08-13 Thread Francesco
this? I had to turn back to use zmq_send(). I know it has memory copy penalty but it's the easiest and most stable way to send message. I'm still using 0MQ 4.1.x. Thanks.

BR
Yan Limin

Re: [zeromq-dev] Memory pool for zmq_msg_t

2019-07-19 Thread Francesco
Thanks.

BR
Yan Limin

-----Original Message-----
From: zeromq-dev [mailto:zeromq-dev-boun...@lists.zeromq.org] On Behalf Of Luca Boccassi
Sent: Friday, July 05, 2019 4:58 PM
To: ZeroMQ development list
Subject: Re: [zeromq-dev] Memory pool for zmq_m

Re: [zeromq-dev] Memory pool for zmq_msg_t

2019-07-18 Thread Yan, Liming (NSB - CN/Hangzhou)
On Behalf Of Luca Boccassi
Sent: Friday, July 05, 2019 4:58 PM
To: ZeroMQ development list
Subject: Re: [zeromq-dev] Memory pool for zmq_msg_t

There's no need to change the source for experimenting, you can just use _init_data without a callback and with a callback (yes, the first case will leak

Re: [zeromq-dev] Memory pool for zmq_msg_t

2019-07-05 Thread Luca Boccassi
There's no need to change the source for experimenting, you can just use _init_data without a callback and with a callback (yes the first case will leak memory but it's just a test), and measure the difference between the two cases. You can then immediately see if it's worth pursuing further

Re: [zeromq-dev] Memory pool for zmq_msg_t

2019-07-04 Thread Francesco
Hi Luca, thanks for the details. Indeed I understand why the "content_t" needs to be allocated dynamically: it's just like the control block used by STL's std::shared_ptr<>. And you're right: I'm not sure how much gain there is in removing 100% of malloc operations from my TX path... still I

Re: [zeromq-dev] Memory pool for zmq_msg_t

2019-07-04 Thread Luca Boccassi
Another reason for that small struct to be on the heap is so that it can be shared among all the copies of the message (eg: a pub socket has N copies of the message on the stack, one for each subscriber). The struct has an atomic counter in it, so that when all the copies of the message on the
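The shared control block Luca describes could be sketched in plain C11 as follows (illustrative only: the real zmq::msg_t internals differ in layout and naming; these msg_init/msg_copy/msg_close helpers are invented):

```c
#include <stdatomic.h>
#include <stdlib.h>

/* Heap-allocated because every stack copy of the message points at the
   same block; the atomic refcount decides which close frees the data. */
typedef struct {
    void *data;
    size_t size;
    void (*ffn)(void *data);    /* user-supplied deallocator, may be NULL */
    atomic_int refs;
} content_t;

typedef struct { content_t *content; } msg_t;   /* cheap stack copy */

static void msg_init(msg_t *m, void *data, size_t size, void (*ffn)(void *)) {
    m->content = malloc(sizeof(content_t));     /* the "second malloc" */
    m->content->data = data;
    m->content->size = size;
    m->content->ffn  = ffn;
    atomic_init(&m->content->refs, 1);
}

static void msg_copy(msg_t *dst, const msg_t *src) {
    dst->content = src->content;                /* share, don't duplicate */
    atomic_fetch_add(&dst->content->refs, 1);
}

static void msg_close(msg_t *m) {
    /* fetch_sub returns the previous value: 1 means we were the last copy. */
    if (atomic_fetch_sub(&m->content->refs, 1) == 1) {
        if (m->content->ffn)
            m->content->ffn(m->content->data);
        free(m->content);
    }
    m->content = NULL;
}
```

This is the same shape as the control block behind std::shared_ptr, which is why a pub socket can hand N subscribers N stack copies of one payload.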

Re: [zeromq-dev] Memory pool for zmq_msg_t

2019-07-04 Thread Luca Boccassi
The second malloc cannot be avoided, but it's tiny and fixed in size at compile time, so the compiler and glibc will be able to optimize it to death. The reason for that is that there's not enough room in the 64 bytes to store that structure, and increasing the message allocation on the stack

Re: [zeromq-dev] Memory pool for zmq_msg_t

2019-07-04 Thread Francesco
Hi Stephan, hi Luca, thanks for your hints. However, I inspected https://github.com/dasys-lab/capnzero/blob/master/capnzero/src/Publisher.cpp and I don't think it avoids malloc()... see my point 2) below. Indeed I realized that probably the current ZMQ API does not allow me to achieve the

Re: [zeromq-dev] Memory pool for zmq_msg_t

2019-07-04 Thread Stephan Opfer
On 04.07.19 14:29, Luca Boccassi wrote: How users make use of these primitives is up to them though; I don't think anything special was shared before, as far as I remember. Some examples can be found here: https://github.com/dasys-lab/capnzero/tree/master/capnzero/src The classes Publisher

Re: [zeromq-dev] Memory pool for zmq_msg_t

2019-07-04 Thread Luca Boccassi
On Thu, 2019-07-04 at 14:21 +0200, Francesco wrote:

Hi all,

I'm doing some benchmarking of a library I wrote based on ZMQ. In most of my use cases, if I do a "perf top" on my application thread I see something like this:

12,09% [kernel] [k] sysret_check
7,48%