I would like to put forth the motion that sockets should not be thread 
safe. Multipart messages do not lend themselves to thread safety without 
major complexity: either we change the API in a way that breaks backwards 
compatibility, or we add locks that slow the API down for everyone.

My personal experience is that when people tried to use a socket across 
multiple threads, the design was wrong anyway. The asserts in libzmq catch 
the problem very quickly during development, so we never see it in production, 
which I think is better than having "demons" in the system and not 
understanding why.

There are two approaches I have used that seem to work well.

The first is to use Java synchronized blocks where I knew exactly what was 
being done: I wrapped the frames so that the send and recv of a whole 
multipart message occurred inside a single block. I wouldn't recommend this 
as a common approach because it is very error prone.
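To make the idea concrete, here is a minimal stand-in sketch (plain Java, no jzmq dependency; the `wire` list and the `sendMultipart` helper are hypothetical and only play the role of the socket). The point is that every frame of one multipart message goes out inside one synchronized block, so frames from two threads cannot interleave:

```java
import java.util.ArrayList;
import java.util.List;

public class AtomicMultipartSend {
    // Stand-in for the socket's outgoing wire; in real code these
    // appends would be socket.send(frame, more ? SNDMORE : 0).
    static final List<String> wire = new ArrayList<>();

    // All frames of one multipart message are written while holding
    // the per-socket lock, so two threads' frames cannot interleave.
    static void sendMultipart(Object socketLock, String... frames) {
        synchronized (socketLock) {
            for (String frame : frames) {
                wire.add(frame);
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();
        Thread a = new Thread(() -> sendMultipart(lock, "A1", "A2", "A3"));
        Thread b = new Thread(() -> sendMultipart(lock, "B1", "B2", "B3"));
        a.start(); b.start();
        a.join(); b.join();
        // Message order may vary, but each message's frames stay contiguous.
        System.out.println(wire);
    }
}
```

The fragility is exactly what you'd expect: every caller must remember to take the same lock around the whole multipart exchange, and one forgotten block reintroduces the interleaving.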

The second is to use one inproc socket per thread and a device that 
transfers the data to the external socket. This has the fewest concurrency 
problems. If the community really wanted to create a "thread-safe socket", 
I would recommend this approach: create the inproc sockets and store them 
as ThreadLocals, which properly gives each thread its own socket.
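A rough sketch of that shape, again with stand-ins rather than real 0MQ objects (the per-thread queue plays the role of an inproc socket, and drain() plays the device that multiplexes onto the one external socket; all names here are hypothetical):

```java
import java.util.List;
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.CopyOnWriteArrayList;

public class InprocDevice {
    // Every per-thread "inproc socket" registers itself here so the
    // device can find it.
    static final List<Queue<String>> inprocSockets = new CopyOnWriteArrayList<>();
    // Stand-in for the single external socket.
    static final List<String> externalSocket = new CopyOnWriteArrayList<>();

    // Each thread lazily gets its own socket; no two threads ever
    // share one, so no locking is needed on the send path.
    static final ThreadLocal<Queue<String>> socket =
        ThreadLocal.withInitial(() -> {
            Queue<String> q = new ConcurrentLinkedQueue<>();
            inprocSockets.add(q);
            return q;
        });

    static void send(String frame) {
        socket.get().add(frame); // each thread touches only its own socket
    }

    // The "device": a single consumer that forwards everything from the
    // per-thread sockets to the external socket.
    static void drain() {
        for (Queue<String> q : inprocSockets) {
            for (String f; (f = q.poll()) != null; ) {
                externalSocket.add(f);
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable worker = () -> { send("a"); send("b"); };
        Thread t1 = new Thread(worker);
        Thread t2 = new Thread(worker);
        t1.start(); t2.start();
        t1.join(); t2.join();
        drain();
        System.out.println(externalSocket.size()); // prints 4
    }
}
```

In a real binding, the ThreadLocal initializer would open an inproc PAIR or PUSH socket connected to the device, and only the device thread would ever touch the external socket.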

Joshua


On Feb 6, 2012, at 3:37 PM, Nadav Samet wrote:

> I like the concept of having thread-safe sockets, but it seems that this 
> implementation does not really deliver what a user would expect from a 
> thread-safe socket. For instance, if two threads try to receive 
> simultaneously from the same socket, each of them might obtain different 
> frames that belong to one multipart message. Similarly, sending messages 
> simultaneously from different threads over the same socket will get the 
> individual frames intertwined.
> 
> There are use cases for locking each individual operation (if you know that 
> there is at most one thread using the socket at a time), but I think that 
> ultimately the binding/end-user application knows better than libzmq what 
> locking semantics it needs for the socket.
> 
> -Nadav
> 
> On Sat, Feb 4, 2012 at 11:45 AM, john skaller <[email protected]> 
> wrote:
> 
> On 04/02/2012, at 9:53 PM, Pieter Hintjens wrote:
> 
> > On Sat, Feb 4, 2012 at 1:26 AM, john skaller
> > <[email protected]> wrote:
> >
> >> Someone else in the community concerned with performance no doubt has
> >> some test code which performs measurements. Such code would be more
> >> reliable than anything I could write.
> >
> > What I would do, would be to take the current performance test
> > programs (latency and throughput tests) and create versions that use
> > locked sockets. These are pretty effective tools for measuring raw 0MQ
> > performance, if you run local and remote on the same multicore box.
> 
> 
> I can have a look at that, however the issue here isn't the performance
> of locked sockets, but performance of the old code vs. the new code
> *not using* the new feature: the real issue is how much non-users
> of the locking feature will pay. This is a bit trickier to check (I need two
> versions of 0MQ to do it)
> 
> The actual cost of using locked sockets has to be compared with the
> cost of "doing it the old 0MQ way" using multi-plexing intermediaries
> as outlined by Chuck previously. The comparison is not just performance
> but also LOC and how easy each is to maintain.
> 
> --
> john skaller
> [email protected]
> 
> 
> 
> 
> _______________________________________________
> zeromq-dev mailing list
> [email protected]
> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
> 
> 
> 
> -- 
> -Nadav
> 

