The simplest way to make a lookup service is to use the service semantics; the lookup service can talk to the broker over inproc or tcp as it wants (it could be a thread in the same process, or a separate process).
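A minimal sketch of that approach, assuming a Malamute broker is already running; the endpoint, the service name "lookup", the client addresses, and the message contents are all illustrative assumptions. A worker offers the service with mlm_client_set_worker, and a client addresses it with mlm_client_sendfor:

```cpp
//  Sketch: a "lookup" service worker plus a client request, in one
//  process for brevity (each mlm_client runs its own background thread).
//  Assumes a Malamute broker at tcp://127.0.0.1:9999.
#include <malamute.h>
#include <cassert>

int main () {
    //  Worker side: offer the "lookup" service, matching all subjects
    mlm_client_t *worker = mlm_client_new ();
    assert (worker);
    int rc = mlm_client_connect (worker, "tcp://127.0.0.1:9999", 1000, "lookup-worker");
    assert (rc == 0);
    mlm_client_set_worker (worker, "lookup", ".*");

    //  Client side: send a service request
    mlm_client_t *client = mlm_client_new ();
    assert (client);
    rc = mlm_client_connect (client, "tcp://127.0.0.1:9999", 1000, "lookup-client");
    assert (rc == 0);
    zmsg_t *request = zmsg_new ();
    zmsg_addstr (request, "where-is-backend");
    mlm_client_sendfor (client, "lookup", "query", NULL, 0, &request);

    //  Worker receives the request and replies to the sender's mailbox
    zmsg_t *msg = mlm_client_recv (worker);
    zmsg_destroy (&msg);
    zmsg_t *reply = zmsg_new ();
    zmsg_addstr (reply, "tcp://127.0.0.1:5555");
    mlm_client_sendto (worker, mlm_client_sender (worker),
                       mlm_client_subject (worker), NULL, 0, &reply);

    //  Client receives the answer
    zmsg_t *answer = mlm_client_recv (client);
    zmsg_destroy (&answer);

    mlm_client_destroy (&worker);
    mlm_client_destroy (&client);
    return 0;
}
```

Because service requests are queued by the broker until a worker offers a matching service, the worker can just as well live in a separate process connected over tcp, or in a thread of the broker process connected over inproc.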
On Sun, Mar 1, 2015 at 9:00 PM, Kenneth Adam Miller <kennethadammil...@gmail.com> wrote:
> So, in order to manage a mutual exchange of addresses between two concurrent
> parties, I thought that on each side I would have a producer produce to a
> topic that the opposite side was subscribed to. That means that each side is
> both a producer and a consumer.
>
> I have the two entities running in parallel. The front end client connects
> to the Malamute broker, subscribes to the backendEndpoints topic, and then
> produces its endpoint to the frontendEndpoints topic.
>
> The opposite side does the same thing, with the back end subscribing to
> frontendEndpoints and producing to backendEndpoints.
>
> The problem is that if the front end and back end are in their own threads,
> then only the thread that completes the mlm_client_set_producer and
> mlm_client_set_consumer calls proceeds. The one that didn't make it that far
> will hang at that mlm_client_set_x pair...
>
> Code:
>
>     std::cout << "connectToFrontEnd" << std::endl;
>     mlm_client_t *frontend_reader = mlm_client_new ();
>     assert (frontend_reader);
>     mlm_client_t *frontend_writer = mlm_client_new ();
>     assert (frontend_writer);
>     int rc = mlm_client_connect (frontend_reader, "tcp://127.0.0.1:9999", 1000, "reader/secret");
>     assert (rc == 0);
>     rc = mlm_client_connect (frontend_writer, "tcp://127.0.0.1:9999", 1000, "writer/secret");
>     assert (rc == 0);
>     std::cout << "frontend mlm clients connected" << std::endl;
>
>     mlm_client_set_consumer (frontend_reader, "backendEndpoints", "*");
>     mlm_client_set_producer (frontend_writer, "frontendEndpoints");
>     std::cout << "frontend client producers and consumers set" << std::endl;
>
> The code looks exactly* the same for the backend, but with some variable and
> other changes.
>
>     std::cout << "connectToBackEnd" << std::endl;
>     mlm_client_t *backend_reader = mlm_client_new ();
>     assert (backend_reader);
>     mlm_client_t *backend_writer = mlm_client_new ();
>     assert (backend_writer);
>     int rc = mlm_client_connect (backend_reader, "tcp://127.0.0.1:9999", 1000, "reader/secret");
>     assert (rc == 0);
>     rc = mlm_client_connect (backend_writer, "tcp://127.0.0.1:9999", 1000, "writer/secret");
>     assert (rc == 0);
>     std::cout << "backend mlm clients connected" << std::endl;
>
>     mlm_client_set_consumer (backend_reader, "frontendEndpoints", "*");
>     mlm_client_set_producer (backend_writer, "backendEndpoints");
>     std::cout << "backend client producers and consumers set" << std::endl;
>
> I only ever see either "frontend client producers and consumers set" or
> "backend client producers and consumers set", never both.
>
> On Sun, Mar 1, 2015 at 2:00 PM, Pieter Hintjens <p...@imatix.com> wrote:
>>
>> My assumption is that a broker that's doing a lot of service requests
>> won't be showing the cost of regular-expression matching, compared to the
>> workload.
>>
>> On Sun, Mar 1, 2015 at 7:49 PM, Doron Somech <somdo...@gmail.com> wrote:
>> >> I did it in actors and then moved it back into the main server as it
>> >> was complexity for nothing (at that stage). I'd rather design against
>> >> real use than against theory.
>> >
>> > Don't you worry about the matching performance, which will happen on the
>> > main thread? Also, a usage I can see is exact matching (string comparison)
>> > over regular expressions (I usually use exact matching); this is why I
>> > think the plugin model fits the services as well.
>> >
>> > On Sun, Mar 1, 2015 at 8:09 PM, Pieter Hintjens <p...@imatix.com> wrote:
>> >>
>> >> On Sun, Mar 1, 2015 at 5:52 PM, Doron Somech <somdo...@gmail.com> wrote:
>> >>
>> >> > So I went over the code, really liked it. Very simple.
>> >>
>> >> Thanks. I like the plugin model, especially neat using CZMQ actors.
>> >>
>> >> > I have a question regarding services: for each stream you are using a
>> >> > dedicated thread (actor) and one thread for managing mailboxes. However
>> >> > (if I understood correctly), for services you are doing the processing
>> >> > inside the server thread. Why didn't you use an actor for each service,
>> >> > or an actor to manage all services? I think the matching of services
>> >> > can be expensive and block the main thread.
>> >>
>> >> I did it in actors and then moved it back into the main server as it
>> >> was complexity for nothing (at that stage). I'd rather design against
>> >> real use than against theory.
>> >>
>> >> -Pieter
>> >>
>> >> _______________________________________________
>> >> zeromq-dev mailing list
>> >> zeromq-dev@lists.zeromq.org
>> >> http://lists.zeromq.org/mailman/listinfo/zeromq-dev
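For reference, the endpoint-exchange pattern from Kenneth's post can be sketched as below (front end side only). This is a minimal, untested sketch assuming a Malamute broker is already running at tcp://127.0.0.1:9999; the client addresses and the published endpoint value are illustrative assumptions. Note that it gives every client its own distinct address, whereas the quoted code connects both the frontend and backend pairs under the same "reader/secret" and "writer/secret" addresses.

```cpp
//  Sketch of one side (the front end) of the endpoint exchange.
//  Assumes a Malamute broker at tcp://127.0.0.1:9999; the back end
//  would mirror this with the stream names swapped.
#include <malamute.h>
#include <cassert>
#include <iostream>

int main () {
    mlm_client_t *reader = mlm_client_new ();
    assert (reader);
    mlm_client_t *writer = mlm_client_new ();
    assert (writer);

    //  Each client connects under its own distinct address
    int rc = mlm_client_connect (reader, "tcp://127.0.0.1:9999", 1000, "frontend-reader");
    assert (rc == 0);
    rc = mlm_client_connect (writer, "tcp://127.0.0.1:9999", 1000, "frontend-writer");
    assert (rc == 0);

    //  Subscribe before publishing: stream messages are not stored,
    //  so a message published before anyone subscribes is dropped
    rc = mlm_client_set_consumer (reader, "backendEndpoints", ".*");
    assert (rc == 0);
    rc = mlm_client_set_producer (writer, "frontendEndpoints");
    assert (rc == 0);

    //  Publish our endpoint, then block until the peer publishes its own
    mlm_client_sendx (writer, "endpoint", "tcp://127.0.0.1:5555", NULL);
    char *peer = NULL;
    mlm_client_recvx (reader, &peer, NULL);
    std::cout << "peer endpoint: " << peer << std::endl;
    zstr_free (&peer);

    mlm_client_destroy (&reader);
    mlm_client_destroy (&writer);
    return 0;
}
```

Because streams have pub/sub (fire-and-forget) semantics, each side should either be known to have subscribed before the other publishes, or re-publish its endpoint periodically until the peer's announcement arrives.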