[zeromq-dev] Question on ZeroMQ patterns

2012-06-29 Thread Felix De Vliegher
Hi list

I'm trying to set up a system where certain jobs can be executed through 
ZeroMQ, but there are still a few unknowns in how to tackle certain issues. 
Basically, I have a Redis queue of jobs. I pick one job from the queue and 
push it to a broker, which distributes it to workers that handle the job.

So far so good, but there are a few extra requirements:
- one job can have multiple sub-jobs, which might or might not need to be 
executed in a specific order (e.g. "item_update 5" could have "cache_update 5" 
and "clear_proxies 5" as sub-jobs). I'm currently thinking of using the routing 
slip pattern (http://www.eaipatterns.com/RoutingTable.html) to do this.
- some sub-jobs need to wait for other sub-jobs to finish first.
- some jobs need to be published across multiple subscribers, while other jobs 
only need to be handled by one worker.
- workers should be divided into groups that will only handle specific tasks 
(majordomo pattern?)
- some workers could forward-publish something themselves to a set of 
subscribers

Right now, I have the following setup:
(Redis queue) -> (one or more routers | push) -> (pull | one or more 
brokers | push) -> (pull | multiple workers | push) -> (pull | sink)


The brokers and the sink are the stable part of the architecture. The routers 
are responsible for getting a job from the queue, deciding on the sub-jobs for 
each job, and attaching the routing slip. What I haven't done yet is 
implement majordomo to selectively assign workers to a certain service, so 
right now every worker can handle every task. The requirement that some jobs 
are pub/sub and others are push/pull also isn't fulfilled yet.

I was wondering if this is the right approach, and whether there are better ways 
of setting up the messaging, taking these requirements into account?


Kind regards,

Felix De Vliegher
Egeniq.com


___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
http://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] Question on ZeroMQ patterns

2012-06-29 Thread Andrew Hume
before i answer, how are you going to implement patterns such as aggregator 
from the EIP book?
i think that means knowing how you identify tasks/jobs, and whether the tracking 
and organising of all that is going to be centralised or distributed.

andrew

On Jun 29, 2012, at 3:08 AM, Felix De Vliegher wrote:



--
Andrew Hume (best - Telework) +1 623-551-2845
and...@research.att.com (Work) +1 973-236-2014
AT&T Labs - Research; member of USENIX and LOPSA






Re: [zeromq-dev] Question on ZeroMQ patterns

2012-06-29 Thread Felix De Vliegher
Hi Andrew

The router (or splitter, from the EIP book) would attach a unique identifier to 
each job and store that id and its sub-jobs in Redis. All workers would then 
ultimately report back to the sink, which aggregates the results of the tasks 
that belong together. There might be a better approach, but this is the 
idea for now :)

Cheers,
Felix

On Friday 29 June 2012 at 12:57, Andrew Hume wrote:






Re: [zeromq-dev] Question on ZeroMQ patterns

2012-06-29 Thread Andrew Hume
is the marshalling/scheduling of stuff being done (essentially) single-threaded?
that is, even if the work is being done in parallel and distributed,
is the organising being done in one place?
(somewhat equivalent to running 'make foo', where the make can fire off jobs 
elsewhere.)

and how do you feel about network and worker failures? do you need to be 
resilient against them? (and if so, do your jobs have side effects, or are they 
somehow functional?)

andrew

On Jun 29, 2012, at 4:08 AM, Felix De Vliegher wrote:





Re: [zeromq-dev] Question on ZeroMQ patterns

2012-06-29 Thread Andrew Hume
a good answer would be fairly long as there are many design choices yet to be 
made.
but let me describe how i would deal with this (given there is still much i 
don't know).

Data things:
tasks are actual computation steps.
we have jobs made up of jobs or tasks (or both).
we store these job descriptions in Redis.
we have queues for tasks waiting to be executed.

Scheduling:
we assume some code which, given a job instance,
can compute whether any new task or job can be scheduled.
tasks are scheduled by being put on a task-specific queue.
tasks are executed by a worker process asking (REQ/REP)
for the next task (or several tasks) from a specific queue.
workers are responsible for sending status back to the task queue.
task queues are straightforward (waiting, running+timeout, done).

Control:
pick a way to select the master; this will run the scheduler and be the writer
for the redis store. (lots of algorithms here, like leader election.)
deal with configuration somehow (this means setting up the various
workers and messaging helpers (like brokers etc)). this can be done statically
or dynamically. try to go simple here; just say how many workers
and of what type.
the workers should not be micro-managed. let them ask for work when they are free.
resilience/shutdown is always delicate. i prefer heartbeat messages from the workers
and shutting them down by special 'task's on the task queue.

Messaging:
once you know the overall structure and roughly how big it is, then you can pick
the messaging topology (the guide helps a lot here).


obviously, this needs to be fleshed out, but that seems straightforward,
with maybe one exception: the jobs that need multiple subscribers.
i have assumed (maybe incorrectly) that these jobs can be decomposed
or handled by a single worker.

hope this helps
andrew

On Jun 29, 2012, at 4:41 AM, Felix De Vliegher wrote:

 Yes, scheduling is done in a single thread.
 Preferably, we should be resilient to network and worker failures. There's 
 one use case for a job to re-publish something to other subscribers, so you 
 might say that one has side effects. But most jobs are functional, 
 self-contained units of work.
 
 Regards,
 Felix
 
Re: [zeromq-dev] HWM behaviour blocking

2012-06-29 Thread Paul Colomiets
Hi Justin,

On Thu, Jun 28, 2012 at 9:06 PM, Justin Karneges jus...@affinix.com wrote:
 It's really just for functional completeness of my event-driven wrapper. The
 only time I can see this coming up in practice is an application that pushes a
 message just before exiting.

 For now, I set ZMQ_LINGER to 0 when a socket object is destroyed, making the
 above application impossible to create. What I'm thinking of doing now is
 offering an alternate, blocking-based shutdown method. This would violate the
 spirit of my wrapper, but may work well enough for apps that finish with a
 single socket doing a write-and-exit.


I think you should just set linger and use it. zmq_close() doesn't
block; zmq_term() does. And usually starting an application has
much bigger overhead than sending a message, so in the case of
starting the application, doing a request (send), and shutting down, this
delay is probably negligible (unless your data is very big and/or the
network is overloaded).

-- 
Paul


Re: [zeromq-dev] Fixing Socket Problem on ZMQ Version 2.2

2012-06-29 Thread Wen Qi Chin
Hello,

I had a question relating to this issue:
https://zeromq.jira.com/browse/LIBZMQ-325. How do we get into the situation
where two sockets connect with the same identity? Why does this happen?
Thank you in advance for the help.

Wen Qi

On Fri, Jun 8, 2012 at 3:14 PM, Wen Qi Chin wchi...@gmail.com wrote:

 Hello Pieter,

 Thank you for your reply. I found your commit changes related to this bug
 fix (


 https://github.com/nzmsv/libzmq/commit/82c06e4417795ebc3e7760af6b02a3d9fd895da6#diff-0),
 and I can try to pull this change into the 2.2 branch.

 Looking at 2.2 vs. 3.2, I believe your change would have been
 done in the xrep_t::xrecv function in xrep.cpp in 2.2. There are some major
 differences that would make backporting the change a little tricky. For
 example, in 2.2, zmq_msg_t looks like it's just a plain C struct, but
 in 3.2 it has some methods defined on it (e.g. the msg_->size()
 call). Do you have any tips/advice on how to properly backport the
 change?



 Thank you,

 Wen Qi

 Date: Thu, 7 Jun 2012 18:00:56 -0700
 From: Pieter Hintjens p...@imatix.com
 Subject: Re: [zeromq-dev] Fixing Socket Problem on ZMQ Version 2.2
 To: ZeroMQ development list zeromq-dev@lists.zeromq.org

 Wen,
 Do you want to try backporting the fix to 2.2? It was not very
 complex, as far as I remember. I fixed the code to simply reject the
 duplicate identity and generate a random one for the second socket.
 If you feel like making this change to 2.2, we need a test case and a
 pull request.
 -Pieter
  On Thu, Jun 7, 2012 at 2:22 PM, Wen Qi Chin wchi...@gmail.com wrote:
  Hello,
 
  I am currently using zmq version 2.2, and would like to fix a problem
 which
  causes crashes when two sockets connect with the same identity (see
  https://zeromq.jira.com/browse/LIBZMQ-325 for details). This issue seems to

  have been resolved in version 3.2, but because there are many significant
  API changes between 2.2 and 3.2, I will not be able to easily upgrade my
 zmq
  version to pick up the fix. Does anyone know of an easy way to fix this
  problem on 2.2 instead?
 
  Thank you
 



[zeromq-dev] one issue with ZeroMQ 2.2.0 (PUSH/PULL mode cannot work well in a multiprocess application (one parent with a number of children))

2012-06-29 Thread cleanfield
Dear,


As the guide says, the workers should really run in parallel if there is 
more than one of them. In my example, the parent process acts as the ventilator 
and sink, and a number of its child processes act as workers. The fact is that 
only one child ever does work, serially.


I look forward to hearing more from you.


best regards
Yanming Wu






#include "zhelpers.h"
#include <pthread.h>

///  libev leftovers (unused below)
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include <errno.h>
#include <sys/types.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <arpa/inet.h>
#include <unistd.h>

#define MAXLEN 1023
#define PORT 
#define ADDR_IP "127.0.0.1"


void worker (void)
{
    printf ("[%d] start\n", getpid ());
    void *context = zmq_init (1);

    //  Socket to receive messages on
    void *receiver = zmq_socket (context, ZMQ_PULL);
    zmq_connect (receiver, "tcp://localhost:5557");

    //  Socket to send messages to
    void *sender = zmq_socket (context, ZMQ_PUSH);
    zmq_connect (sender, "tcp://localhost:5558");

    //sleep(20);

    //  Process tasks forever
    while (1) {
        char *string = s_recv (receiver);
        //  Simple progress indicator for the viewer
        printf ("[%d] %s --\n", getpid (), string);
        fflush (stdout);

        //  Do the work
        s_sleep (atoi (string));

        char bout [100] = { 0 };
        sprintf (bout, "[%d] %s", getpid (), string);
        free (string);      //  free only after the last use of string

        //  Send results to sink
        s_send (sender, bout);
    }
    zmq_close (receiver);
    zmq_close (sender);
    zmq_term (context);

    exit (0);
}

static void *
send_routine (void *context)
{
    printf ("start send...\n");

    //  Socket to send messages on
    void *sender = zmq_socket (context, ZMQ_PUSH);
    zmq_bind (sender, "tcp://*:5557");
    //  Initialize random number generator
    srandom ((unsigned) time (NULL));

    //  Send 100 tasks
    int task_nbr_total = 100;
    int task_nbr;
    int total_msec = 0;     //  Total expected cost in msecs

    for (task_nbr = 0; task_nbr < task_nbr_total; task_nbr++) {
        //  Random workload from 1 to 100 msecs
        int workload = randof (100) + 1;
        total_msec += workload;
        char string [10];
        sprintf (string, "%d", workload);
        s_send (sender, string);
    }
    printf ("Total expected cost: %d msec\n", total_msec);

    zmq_close (sender);
    return NULL;
}

static void *
recv_routine (void *context)
{
    printf ("start recv...\n");

    //  Socket to receive results on
    void *sink = zmq_socket (context, ZMQ_PULL);
    zmq_bind (sink, "tcp://*:5558");

    int64_t start_time = s_clock ();

    //  Process 100 confirmations
    for (int task_nbr = 0; task_nbr < 100; task_nbr++) {
        char *string = s_recv (sink);
        printf ("%s +\n", string);
        free (string);
    }
    //  Calculate and report duration of batch
    printf ("Total elapsed time: %d msec\n", (int) (s_clock () - start_time));

    zmq_close (sink);
    return NULL;
}


int main (int argc, char **argv)
{
    //  Spawn child worker processes
    pid_t pid;
    for (int i = 0; i < 5; i++) {
        if ((pid = fork ()) < 0) {
            printf ("error child\n");
            return -1;
        }
        else
        if (pid == 0)
            worker ();      //  never returns
    }

    void *context = zmq_init (1);

    pthread_t send_t;
    pthread_create (&send_t, NULL, send_routine, context);

    pthread_t recv_t;
    pthread_create (&recv_t, NULL, recv_routine, context);

    pthread_join (recv_t, NULL);
    pthread_join (send_t, NULL);

    zmq_term (context);
    return 0;
}


Re: [zeromq-dev] one issue with ZeroMQ 2.2.0 (PUSH/PULL mode cannot work well in a multiprocess application (one parent with a number of children))

2012-06-29 Thread Michel Pelletier
On Fri, Jun 29, 2012 at 6:46 AM, cleanfield cleanfi...@126.com wrote:

This is probably because your workers are processing messages faster
than you can PUSH to them, and it appears serial to you.  PUSH is
asynchronous and messages get queued and dealt out round robin to the
workers, who are working in parallel.  There is no synchronization
mechanism that forces them to run serially.  If you push enough
messages that the queue starts to load up, you will see your workers
running in parallel.

Same with the sink, the workers can all PUSH asynchronously in
parallel to the sink, which will queue up the results fairly for the
sink to collect.  The only serial parts are the ventilator producing
work and the sink collecting it, the workers all run in parallel.

-Michel


Re: [zeromq-dev] Fixing Socket Problem on ZMQ Version 2.2

2012-06-29 Thread Pieter Hintjens
On Sat, Jun 30, 2012 at 4:15 AM, Wen Qi Chin wchi...@gmail.com wrote:

It's an application error; it's the connecting (client) socket that
decides on an identity (or not, leaving the choice up to the binding
socket). If two clients choose the same identity, that's a fault.

-Pieter