I was working for Coroware on a similar project about 2.5 years ago. Even on 
lousy hardware, I was hitting link limits long before memory and allocations 
became an issue. 
While your mileage may vary, I suspect that ZeroMQ will do well by you.
Jacques Richer
jric...@jricher.com
(602) 350-2463

-------- Original message --------
From: Stephan Opfer <op...@vs.uni-kassel.de>
Date: 9/1/17 12:22 AM (GMT-07:00)
To: zeromq-dev@lists.zeromq.org
Subject: Re: [zeromq-dev] Is shared ownership possible while sending with zero-copy?
Hi Patrik,

It is very likely a premature optimization, but on the other hand we would
like to replace the ROS middleware with a combination of Cap'n Proto and
ZeroMQ, so I actually don't know which kinds of messages the following
generations will try to send. We are the Carpe Noctem Cassel RoboCup team
(www.das-lab.net).

We usually play with 5 robots connected over a local access point; therefore,
we use UDP multicast. An "extreme" example would be the transfer of 2D laser
scan data: 1080 * 8 byte = 8640 byte, 30 times per second = 253.125 KByte /
sec. Another, for debugging purposes, is the transfer of a live camera stream
with roughly 900x900 pixels. The amount of data per transferred image depends
on the compression; for raw images the stream comes to 46.35 MBytes / sec.
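
For reference, the raw-image figure is consistent with 30 frames per second
at 2 byte per pixel; neither value is stated above, so treat this derivation
as an assumption that merely reproduces the quoted number:

    900 * 900 pixel * 2 byte = 1,620,000 byte per image, 30 times per
    second = 48,600,000 byte / sec = 46.35 MByte / sec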

Greetings,
   Stephan

> I'm just curious, how large are those sensor values, how many do you keep 
> around, and to how many other robots do you intend to send them?

> Could it be premature optimization? Just asking because maybe it's not worth
> the extra effort to make it zero-copy. Just copy and pass ownership to ZMQ
> (see the sketch after this message).

> Regards, Patrik
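
That copy-and-pass-ownership route maps onto zmq_msg_init_size: libzmq
allocates the message buffer and frees it after the send, so the application
copies once and is done. A minimal sketch from a C++ caller (send_copied is
an invented helper, not part of libzmq):

    #include <cstddef>
    #include <cstring>
    #include <zmq.h>

    // Copy `len` bytes into a message owned by libzmq and send it; libzmq
    // frees the internal buffer once the message has been transmitted.
    int send_copied(void* socket, const void* src, size_t len) {
        zmq_msg_t msg;
        if (zmq_msg_init_size(&msg, len) != 0)
            return -1;
        std::memcpy(zmq_msg_data(&msg), src, len);
        int rc = zmq_msg_send(&msg, socket, 0);
        if (rc < 0)
            zmq_msg_close(&msg);  // only needed when the send fails
        return rc;
    }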

> On 31 Aug 2017, at 20:06, Thomas Rodgers <rodgert at twrodgers.com> wrote:
> 
> Unfortunately that's not possible: libzmq exposes only a C API, and even
> though it is implemented in C++, it deliberately targets pre-C++11 compilers.
> 
> Further to the 'mark and sweep' idea, or more generally deferred
> reclamation: you could have the callback place the message buffer to be
> freed on a queue (possibly lock-free; Boost has a handy one) and signal a
> 'reaper' thread waiting on a condition_variable. The reaper thread wakes
> up, reclaims all queued message buffers, then returns to waiting.
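
A minimal sketch of that reaper approach, assuming C++11: the callback handed
to zmq_msg_init_data only enqueues the buffer (the "mark" step from the quoted
proposal below), and a dedicated thread does the freeing. The Reaper class and
reap_cb are invented names, a Boost lock-free queue could stand in for the
mutexed vector, and the buffers are assumed to come from malloc:

    #include <condition_variable>
    #include <cstdlib>
    #include <mutex>
    #include <thread>
    #include <vector>

    class Reaper {
    public:
        Reaper() : worker_([this] { run(); }) {}
        ~Reaper() {
            { std::lock_guard<std::mutex> lock(m_); done_ = true; }
            cv_.notify_one();
            worker_.join();
        }
        // Called from libzmq's I/O thread via the zmq_msg_init_data
        // callback: marks the buffer as reclaimable and wakes the reaper.
        void enqueue(void* buf) {
            { std::lock_guard<std::mutex> lock(m_); pending_.push_back(buf); }
            cv_.notify_one();
        }
    private:
        void run() {
            std::unique_lock<std::mutex> lock(m_);
            while (!done_ || !pending_.empty()) {
                cv_.wait(lock, [this] { return done_ || !pending_.empty(); });
                std::vector<void*> batch;
                batch.swap(pending_);
                lock.unlock();
                for (void* buf : batch)
                    std::free(buf);      // the actual "sweep"
                lock.lock();
            }
        }
        std::mutex m_;
        std::condition_variable cv_;
        std::vector<void*> pending_;
        bool done_ = false;
        std::thread worker_;             // declared last: started after the rest
    };

    // The free function handed to zmq_msg_init_data; 'hint' carries the Reaper*.
    extern "C" void reap_cb(void* data, void* hint) {
        static_cast<Reaper*>(hint)->enqueue(data);
    }

Sending then looks like zmq_msg_init_data(&msg, buf, len, reap_cb, &reaper);
the reaper has to outlive every in-flight message.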
> 
>> On Thu, Aug 31, 2017 at 10:55 AM Stephan Opfer <opfer at vs.uni-kassel.de> 
>> wrote:
>> > Another, more complicated way, would be to implement a mark&sweep
>> > garbage collector of sorts: instead of freeing the buffer, the callback
>> > you register with zmq_msg_init_data would mark the buffer as done (in a
>> > thread safe way!). Then your application's garbage collector can sweep
>> > it.
>> 
>> It would be nice if I could pass over a copy of (not a reference or
>> pointer to) a shared_ptr that owns the buffer, but with the callback and
>> the "void * hint" this wasn't possible for me.
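
One sketch of a possible workaround: heap-allocate a copy of the shared_ptr
and pass that copy as the hint, so the callback can delete it and thereby
drop one reference. Buffer and send_shared are invented names, and error
handling is kept minimal:

    #include <cstdint>
    #include <memory>
    #include <vector>
    #include <zmq.h>

    using Buffer = std::vector<std::uint8_t>;

    // Free callback: deleting the heap-allocated shared_ptr copy releases
    // libzmq's share of ownership; the data dies with the last owner.
    extern "C" void drop_owner(void* /*data*/, void* hint) {
        delete static_cast<std::shared_ptr<Buffer>*>(hint);
    }

    // Zero-copy send while the caller (and anyone else) keeps its shared_ptr.
    int send_shared(void* socket, std::shared_ptr<Buffer> buf) {
        auto* owner = new std::shared_ptr<Buffer>(std::move(buf));  // +1 owner
        zmq_msg_t msg;
        if (zmq_msg_init_data(&msg, (*owner)->data(), (*owner)->size(),
                              drop_owner, owner) != 0) {
            delete owner;
            return -1;
        }
        int rc = zmq_msg_send(&msg, socket, 0);
        if (rc < 0)
            zmq_msg_close(&msg);  // invokes drop_owner on failure
        return rc;
    }

The buffer is then freed by whichever owner releases the last reference,
which may be libzmq's I/O thread; combining this with the reaper sketch above
keeps that work off the I/O thread.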

-- 
Distributed Systems Research Group
Stephan Opfer  T. +49 561 804-6280  F. +49 561 804-6277
Univ. Kassel,  FB 16,  Wilhelmshöher Allee 73,  D-34121 Kassel
WWW: http://www.uni-kassel.de/go/vs_stephan-opfer/
_______________________________________________
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev