Re: [zeromq-dev] ZeroMQ docs

2023-12-01 Thread Francesco
Hi Luca,
thanks! Glad to help and give something back to the community!


On Tue 28 Nov 2023 at 14:00, Luca Boccassi <
luca.bocca...@gmail.com> wrote:

> On Mon, 27 Nov 2023 at 22:56, Francesco 
> wrote:
> >
> > Hi all,
> >
> > A final update on the ZeroMQ API documentation migration:
> >
> > > I think on documentation side what's really left is just to update
> links still indexed by Google and other search engines like:
> > >  http://api.zeromq.org/3-2:zmq-connect   -->
> https://libzmq.readthedocs.io/en/zeromq3-x/zmq_connect.html
> > > With the help of Kevin Sapper I'm trying to understand how to
> automatically create such redirection from the api.zeromq.org site
> >
> > Also this step is now complete.
> > Now that HTTP redirections are in place I think it will take only a few
> days for search engines like Google to catch up and show the
> readthedocs.io website in their results
> >
> > Francesco
>
> Excellent stuff, thank you
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] ZeroMQ docs

2023-11-27 Thread Francesco
Hi all,

A final update on the ZeroMQ API documentation migration:

> I think on documentation side what's really left is just to update links
still indexed by Google and other search engines like:
>  http://api.zeromq.org/3-2:zmq-connect   -->
https://libzmq.readthedocs.io/en/zeromq3-x/zmq_connect.html
> With the help of Kevin Sapper I'm trying to understand how to
automatically create such redirection from the api.zeromq.org site

Also this step is now complete.
Now that HTTP redirections are in place I think it will take only a few
days for search engines like Google to catch up and show the readthedocs.io
website in their results

Francesco


On Fri 24 Nov 2023 at 22:25, Francesco <
francesco.monto...@gmail.com> wrote:

> Hi all,
> Another small update on the documentation side, in case you are interested:
>
> >* In zeromq.org website: Update the link "Low-level API" to point to
> https://zeromq.readthedocs.io/en/latest/
> Done
>
> >* In zeromq.org website: Create a page to host the contents of
> http://wiki.zeromq.org/docs:contributing
> Done, this is the new page: https://zeromq.org/how-to-contribute/
>
> >* In master branch docs: Update the link in the "Authors" sections to
> point to the corresponding new page of the "new" zeromq.org website
> Done
>
> I think on documentation side what's really left is just to update links
> still indexed by Google and other search engines like:
>   http://api.zeromq.org/3-2:zmq-connect   -->
> https://libzmq.readthedocs.io/en/zeromq3-x/zmq_connect.html
>
> With the help of Kevin Sapper I'm trying to understand how to
> automatically create such redirection from the api.zeromq.org site
>
> Francesco
>
>
> On Sat 4 Nov 2023 at 23:30, Francesco <
> francesco.monto...@gmail.com> wrote:
>
>> hi Brett, hi Arnaud,
>>
>> >  Just to say that this is really great work!  Kudos to you and Luca.
>> ...
>> > I would just like to add that this is really much appreciated work!
>>
>> Thanks !
>> I hope that the renewed look (together with the "always up to date with
>> zero maintainer work") will help show the wider community that the
>> zeromq project is still alive: there are users, and it's a valid alternative
>> to other (perhaps more popular) messaging frameworks like Kafka, Pulsar or
>> NATS.
>> In this regard, in the future I would like to write a blog post on
>> the topic "ZeroMQ inside Kubernetes".  I have accumulated quite a lot of
>> experience (>3 yrs) running ZeroMQ services in scalable workloads inside
>> Kubernetes and developed a number of techniques to address problems
>> inherent to peer-to-peer (brokerless) communications.
>>
>> Anyway, for now I think we should "finish" the documentation effort.
>> About that:
>>
>> > * In master branch docs: Fix the "See Also" sections in the doc pages,
>> by using unordered lists, instead of space-separated single-line list
>>
>> this has been done & merged
>>
>> > * In api.zeromq.org wikidot: Setup the redirection to the new website;
>> according to https://www.wikidot.com/doc-modules:redirect-module, it's
>> enough to put
>> >[[module Redirect destination="
>> https://zeromq.readthedocs.io/en/latest/"]]
>> >   in the wiki page... Luca / Kevin, my understanding is that you have
>> administrative access to the wikidot for api.zeromq.org... can you try
>> setting up this redirect?
>>
>> this has been done by Luca
>>
>> >* In zeromq.org website: Update the link "Low-level API" to point to
>> https://zeromq.readthedocs.io/en/latest/
>> >* In zeromq.org website: Create a page to host the contents of
>> http://wiki.zeromq.org/docs:contributing
>> >* In master branch docs: Update the link in the "Authors" sections to
>> point to the corresponding new page of the "new" zeromq.org website
>>
>> For these steps I opened a first MR on zeromq.org repo:
>> https://github.com/zeromq/zeromq.org/pull/141
>> and I'm waiting for some feedback from Kevin Sapper, who looks to be the
>> de-facto maintainer of the zeromq.org website (judging from the commit
>> history) :)
>>
>>
>> > I'm also curious how we can make this work from zproject. But that
>> might be a later step.
>>
>> I have saved the (very basic / raw) scripts I used to convert the
>> Asciidoc-py format to the "modern" Asciidoc format, so I can share them or
>> run them on other docs if useful.
>> To be hon

Re: [zeromq-dev] ZeroMQ docs

2023-11-24 Thread Francesco
Hi all,
Another small update on the documentation side, in case you are interested:

>* In zeromq.org website: Update the link "Low-level API" to point to
https://zeromq.readthedocs.io/en/latest/
Done

>* In zeromq.org website: Create a page to host the contents of
http://wiki.zeromq.org/docs:contributing
Done, this is the new page: https://zeromq.org/how-to-contribute/

>* In master branch docs: Update the link in the "Authors" sections to
point to the corresponding new page of the "new" zeromq.org website
Done

I think on documentation side what's really left is just to update links
still indexed by Google and other search engines like:
  http://api.zeromq.org/3-2:zmq-connect   -->
https://libzmq.readthedocs.io/en/zeromq3-x/zmq_connect.html

With the help of Kevin Sapper I'm trying to understand how to automatically
create such redirection from the api.zeromq.org site

Francesco


On Sat 4 Nov 2023 at 23:30, Francesco <
francesco.monto...@gmail.com> wrote:

> hi Brett, hi Arnaud,
>
> >  Just to say that this is really great work!  Kudos to you and Luca.
> ...
> > I would just like to add that this is really much appreciated work!
>
> Thanks !
> I hope that the renewed look (together with the "always up to date with
> zero maintainer work") will help show the wider community that the
> zeromq project is still alive: there are users, and it's a valid alternative
> to other (perhaps more popular) messaging frameworks like Kafka, Pulsar or
> NATS.
> In this regard, in the future I would like to write a blog post on
> the topic "ZeroMQ inside Kubernetes".  I have accumulated quite a lot of
> experience (>3 yrs) running ZeroMQ services in scalable workloads inside
> Kubernetes and developed a number of techniques to address problems
> inherent to peer-to-peer (brokerless) communications.
>
> Anyway, for now I think we should "finish" the documentation effort.
> About that:
>
> > * In master branch docs: Fix the "See Also" sections in the doc pages,
> by using unordered lists, instead of space-separated single-line list
>
> this has been done & merged
>
> > * In api.zeromq.org wikidot: Setup the redirection to the new website;
> according to https://www.wikidot.com/doc-modules:redirect-module, it's
> enough to put
> >[[module Redirect destination="
> https://zeromq.readthedocs.io/en/latest/"]]
> >   in the wiki page... Luca / Kevin, my understanding is that you have
> administrative access to the wikidot for api.zeromq.org... can you try
> setting up this redirect?
>
> this has been done by Luca
>
> >* In zeromq.org website: Update the link "Low-level API" to point to
> https://zeromq.readthedocs.io/en/latest/
> >* In zeromq.org website: Create a page to host the contents of
> http://wiki.zeromq.org/docs:contributing
> >* In master branch docs: Update the link in the "Authors" sections to
> point to the corresponding new page of the "new" zeromq.org website
>
> For these steps I opened a first MR on zeromq.org repo:
> https://github.com/zeromq/zeromq.org/pull/141
> and I'm waiting for some feedback from Kevin Sapper, who looks to be the
> de-facto maintainer of the zeromq.org website (judging from the commit
> history) :)
>
>
> > I'm also curious how we can make this work from zproject. But that might
> be a later step.
>
> I have saved the (very basic / raw) scripts I used to convert the
> Asciidoc-py format to the "modern" Asciidoc format, so I can share them or
> run them on other docs if useful.
> To be honest, I have never used any project other than libzmq, so I'm quite
> new to e.g. czmq or zproject.
>
> Thanks,
> Francesco
>
>
> On Sat 4 Nov 2023 at 22:50, Arnaud Loonstra <
> arn...@sphaero.org> wrote:
>
>> I would just like to add that this is really much appreciated work!
>>
>> I'm also curious how we can make this work from zproject. But that might
>> be a later step.
>>
>> Rg,
>>
>> Arnaud
>> On 03/11/2023 10:29, Francesco wrote:
>>
>> Hi all,
>>
>> As an update on this topic: with help from Luca the *conversion of
>> documentation from the old Asciidoc-py has been completed*.
>> As a bonus: GitHub renders Asciidoc natively, so you can see the
>> documentation rendered on the fly just by browsing the GitHub repo, e.g. see
>> https://github.com/zeromq/libzmq/blob/master/doc/zmq_connect.adoc.
>> Additionally the docs get published to Github Pages:
>>  https://zeromq.github.io/libzmq/
>>
>>

Re: [zeromq-dev] PUB/SUB not sending data

2023-11-23 Thread Francesco
Hi Raul,
I think the example at
https://learning-0mq-with-pyzmq.readthedocs.io/en/latest/pyzmq/patterns/pubsub.html
is using a "while True" just for the sake of the example.
The main point is that the PUB socket is global: it is initialized once
and then has a long lifetime (basically it's deleted only when the program
exits).
I think that if you stick to this design criterion (instead of creating the
PUB socket, initializing it, and pushing data on it every time you have
something to send), your issue will be solved :)

HTH,
Francesco

On Thu 23 Nov 2023 at 11:29, Raúl Parada Medina <
raul.parada.med...@gmail.com> wrote:

> Hi,
>
> I've followed this tutorial; however, it includes the while True statement,
> which I would like to avoid, and it's more complex than required.
> Best,
> Raúl
>
> Message from Brett Viren on Wed, 22 Nov 2023 at 15:00:
>
>> Hi again,
>>
>> Perhaps copy-paste the example:
>>
>>
>> https://learning-0mq-with-pyzmq.readthedocs.io/en/latest/pyzmq/patterns/pubsub.html
>>
>> and play with it to get some understanding of the socket lifetimes and
>> timing of the subscription phase.
>>
>> You might start with that code and modify it to get closer to what you
>> actually want, and along the way you will get past the blockers.
>>
>> -Brett.
>>
>


Re: [zeromq-dev] PUB/SUB not sending data

2023-11-22 Thread Francesco
Hi Brett, Hi Raul,

My feeling is that there is a misconception as well, but that's related to
the socket lifetime.
A PUB socket is not an object meant to be created, have 1 msg pushed out,
and then be immediately destroyed, as happens inside the senderzmq() Python
function of the original email.
The point is that the PUB socket creates a zmq "server" endpoint and
* the SUB needs to be able to connect to it (this requires a TCP handshake
and more stuff happening transparently to the user in the ZMQ background
threads)
* the SUB needs to be able to subscribe to it
Only after these 2 things happen, the subscriber can truly receive data.
Creating a PUB socket and immediately destroying it means all these
operations need to be repeated. And if you remove the "while Running" from
senderzmq(), what happens is that the PUB socket is created, you ask ZMQ to
accept ("bind") connections coming to it, then (almost immediately) you
send a message, but no subscriber can realistically be connected yet, so
the message gets dropped.

Long story short:
* try putting a sleep of about 500 msec between the bind() and the
send_string() -- then your application should work even if you remove the
"while Running"
* a better design for your "sender" application is to create the PUB socket
once (as a global variable or somewhere else) and then _reuse_ it
every time you need to send out a latitude/longitude sample

HTH,
Francesco
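To make the second suggestion concrete, here is a minimal sketch of a long-lived PUB socket (using pyzmq; the port, the 500 ms sleep, and the send_position() name are illustrative assumptions, not taken from the original code):

```python
import time
import zmq

# Create the context and the PUB socket once; both live for the whole
# program instead of being recreated for every message.
ctx = zmq.Context.instance()
pub = ctx.socket(zmq.PUB)
pub.bind("tcp://*:5556")  # illustrative port

# Give subscribers time to connect and subscribe before the first send;
# a PUB socket silently drops messages when no subscriber is connected yet.
time.sleep(0.5)

def send_position(lat, lon):
    # Reuse the long-lived socket every time there is a sample to publish.
    pub.send_string(f"Lat {lat}, Lon={lon}")

send_position(41.9, 12.5)
```

Note that the SUB side must also call subscribe() (e.g. `socket.subscribe("")`) before it can receive anything, as pointed out later in the thread.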




On Wed 22 Nov 2023 at 14:14, Brett Viren via zeromq-dev <
zeromq-dev@lists.zeromq.org> wrote:

> Hi Raúl,
>
> I feel that your questions may imply a deeper misunderstanding than what
> they ask directly so let me offer a few comments that hopefully gets to
> the real problems.  Feel free to ask more.
>
> Of course the "while True" loop in the sender will send only identical
> copies of the same message (and will do so at a very high rate).
>
> There is no "listen new data" feature that I can think of.
>
> Any code that calls the senderzmq() will find that this function never
> returns.
>
> It seems senderzmq() should be rewritten to simply omit the "while True"
> loop so that the caller may call it many times and each time with some
> new values of lat and lon.
>
> One concrete thing I notice in your code: the SUB socket in the
> receiver does not register any topic.  See:
>
>   https://pyzmq.readthedocs.io/en/latest/api/zmq.html#zmq.Socket.subscribe
>
> Because of that, I expect the call to recv_string() never returns.
>
>
> -Brett.
>
> Raúl Parada Medina  writes:
>
> > Hi,
> >
> > I'm sending data in python between two connect machines within the same
> network (successfully
> > connected).
> >
> > The publisher code is defined as:
> >
> > def senderzmq(self, lat, lon):
> >     sock = zmq.Context().socket(zmq.PUB)
> >     sock.bind("tcp://*:1")
> >     running = True
> >     while running:
> >         sock.send_string(f"Lat {lat}, Lon={lon}")
> >
> > The receiver has the following format:
> >
> > import zmq
> > context = zmq.Context()
> > socket = context.socket(zmq.SUB)
> > socket.connect("tcp://192.168.1.35:1")
> > while True:
> >     message = socket.recv_string()
> >     print(message)
> >
> > The above code works; however, the sender always sends the same data:
> > it looks like the sender remains inside the while True loop and never
> > picks up new data. If I remove the while True, the above doesn't work.
> > Any idea how to exit the while True but still transmit new data on each
> > loop?
> >
> > Thanks.
> > Raúl


Re: [zeromq-dev] Radix tree perf?

2023-11-14 Thread Francesco
Hi Axel,
I'm interested in this topic as well.
I think the radix tree has been proposed as an alternative to mtrie to
allow better memory usage.
See
https://github.com/zeromq/libzmq/issues/1400
for the details.
In particular in comment
https://github.com/zeromq/libzmq/issues/1400#issuecomment-432774639, there
are benchmarks reporting, for the 1M-key case, that the radix tree is twice
as _FAST_ as the generic trie. In your test, you seem to find quite the
opposite:

>keys = 100, queries = 100, key size = 20
>[trie]
>Average lookup time = 17.0 ns
>[radix_tree]
>Average lookup time = 462.3 ns

Can you check how to reproduce the same type of table reported in
https://github.com/zeromq/libzmq/issues/1400#issuecomment-432774639 ?
Maybe that will shed some light on this discrepancy.

Maybe the discrepancy comes from the large refactor of the zeromq generic
mtrie done in this commit

https://github.com/zeromq/libzmq/commit/ab301ebf799b4dbddb1351d77da49b2e6e1cf8ec

which came AFTER the integration of the radix tree and thus has
probably obsoleted the results reported in
https://github.com/zeromq/libzmq/issues/1400#issuecomment-432774639.

Thanks,
Francesco

PS: my interest in this topic comes from the fact that integration tests
can take a lot of time when the zmq code is instrumented with
AddressSanitizer (ASAN), apparently due to the large number of
deallocations happening inside the generic mtrie after we switched to the
implementation of commit ab301ebf799b4dbddb1351d77da49b2e6e1cf8ec





On Sun 12 Nov 2023 at 05:05, Axel R.  wrote:

> I'm running the *benchmark_radix_tree* test app and what I see is the
> radix tree is considerably slower than the trie from 1 to at least 10
> million keys.
>
> Is the test representative of real-world use in ZeroMQ? If yes, in which
> circumstance(s) would one want to enable that option?
>
> keys = 1, queries = 100, key size = 20
> [trie]
> Average lookup time = 17.0 ns
> [radix_tree]
> Average lookup time = 31.6 ns
>
> keys = 10, queries = 100, key size = 20
> [trie]
> Average lookup time = 17.2 ns
> [radix_tree]
> Average lookup time = 41.1 ns
>
> keys = 100, queries = 100, key size = 20
> [trie]
> Average lookup time = 17.2 ns
> [radix_tree]
> Average lookup time = 58.0 ns
>
> keys = 1000, queries = 100, key size = 20
> [trie]
> Average lookup time = 17.2 ns
> [radix_tree]
> Average lookup time = 74.3 ns
>
> keys = 1, queries = 100, key size = 20
> [trie]
> Average lookup time = 17.3 ns
> [radix_tree]
> Average lookup time = 117.1 ns
>
> keys = 10, queries = 100, key size = 20
> [trie]
> Average lookup time = 17.0 ns
> [radix_tree]
> Average lookup time = 217.4 ns
>
> keys = 100, queries = 100, key size = 20
> [trie]
> Average lookup time = 17.0 ns
> [radix_tree]
> Average lookup time = 462.3 ns
>
> keys = 1000, queries = 100, key size = 20
> [trie]
> Average lookup time = 17.3 ns
> [radix_tree]
> Average lookup time = 722.6 ns
>
>


Re: [zeromq-dev] content based routing for a thread pool

2023-11-13 Thread Francesco
Hi Lei,
I didn't really follow this email thread closely but I happened to read
this:

I think it suggests that as long as the "active" message has not finished,
> meaning it's still sending the "MORE" flag, a ROUTER socket will not
> receive() a frame from a different client. Given that the proxy loop is
> single threaded, that means it won't tend to other clients until the
> current one is finished. If some of the message frames have delays, they
> will surely cause delays in the processing of frames from other clients as
> well.
>
> I think this means the multipart is not meant to be used to carry really
> long repeating parts but rather just one part of the message with different
> "fields".
>

Definitely true. As far as zmq_msg_recv() and zmq_msg_send() are
concerned, a block of message parts is always considered "all together":
for example, you cannot receive only 1 part out of 2 of a ZMQ message, and
a message cannot be delivered "partially". That's the nice thing about zmq
multipart: all the parts are delivered together, as a strong guarantee.
This helps reduce complexity (you don't need to think about how to deal
with a half-received ZMQ message).



> If I want to improve my threadpool to have better handling of queuing of
> the workers, also improved reliability by bringing in heartbeats, I will
> probably need to bring in the PPP. But the example in the guide seems to be
> having the proxy, ppqueue, pushing tasks to the workers. If I want to have
> the queue in the proxy, then I should have the workers "pulling" tasks. In
> the earlier part of the guide, it says this is achieved by having worker
> send a "READY" message to proxy and then proxy should reply with the task.
> So the core of my question here is, the worker can not know how long the
> task queue is in the proxy so it's possible queue is empty. In this case
> what should the proxy do? My guess is not replying to worker but do all the
> other things so that the worker should keep polling and that will be most
> efficient, right?
>

Just my 2 cents here: if the proxy has an empty queue and a worker is
asking for some work to do, then the proxy should just answer "there's
nothing for you" and keep moving. The worker, on the other hand, can
implement a simple backoff logic (one I used in production that turned out
to work well enough: after N _consecutive_ loops in which you get "there's
nothing for you", sleep for X msecs, then restart).
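A sketch of that worker-side backoff logic (pyzmq; the READY token, the empty reply standing in for "there's nothing for you", and the N/X thresholds are illustrative assumptions, not part of the original mail):

```python
import time
import zmq

NOTHING = b""                 # illustrative: empty reply = "nothing for you"
N_EMPTY_BEFORE_SLEEP = 5      # illustrative N
SLEEP_SEC = 0.1               # illustrative X

def worker_loop(endpoint, handle, max_requests):
    """Ask the proxy for tasks; back off after N consecutive empty replies."""
    ctx = zmq.Context.instance()
    sock = ctx.socket(zmq.REQ)
    sock.connect(endpoint)
    empty_streak = 0
    for _ in range(max_requests):
        sock.send(b"READY")   # signal availability to the proxy
        task = sock.recv()
        if task == NOTHING:
            empty_streak += 1
            if empty_streak >= N_EMPTY_BEFORE_SLEEP:
                time.sleep(SLEEP_SEC)  # back off, then restart counting
                empty_streak = 0
            continue
        empty_streak = 0
        handle(task)          # process the real task
    sock.close()
```

The max_requests bound exists only to keep the sketch finite; a real worker would loop forever.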

HTH,
Francesco


Re: [zeromq-dev] About libzmq "domain" inside readthedocs.io

2023-11-09 Thread Francesco
Hi all,
Short update on this.

>I think technically speaking if we want to register libzmq as RTD "libzmq"
project, we need to point the "zeromq" project to another git repo
(something like "zeromq-readthedocs").
>From such a new git repo then we should setup a (very basic) RTD pipeline
to deploy the "landing" page.
>Then after we have de-coupled the libzmq from the "zeromq" RTD project I
think we will be able to bind it to the "libzmq" RTD project.

This has been done: the docs for libzmq now are in the "right place":
https://libzmq.readthedocs.io/en/latest/
The https://zeromq.readthedocs.io/ domain has been re-created and bound to
a new empty github repo (https://github.com/zeromq/zeromq-readthedocs).
This could be expanded in the future to produce a landing page (I think it
makes sense only if other projects of the zeromq community move to RTD).
For now it's set up as a straight redirect:

   https://zeromq.readthedocs.io/ --> https://libzmq.readthedocs.io/

Francesco


On Fri 3 Nov 2023 at 16:34, Francesco <
francesco.monto...@gmail.com> wrote:

> Hi Brett,
>
> I mostly agree with you.
>
> > What might go in the newly available /zeromq/?  Some thoughts:
> > Landing :: Develop some minimal content consisting of a link to
> >  zeromq.org and links to all the other RTD projects (I know of only 3:
> >  the new /libzmq/, /pyzmq/ and /learning-0mq-with-pyzmq/).
>
> My preference would go to the "Landing" option. I think a very simple
> page linking to the documentation of the other projects can help.
> Moreover using the "subprojects" feature we would be able to serve the
> documentation from the zeromq.readthedocs.io website for all zeromq
> projects (libzmq, others in future).
>
> I think technically speaking if we want to register libzmq as RTD "libzmq"
> project, we need to point the "zeromq" project to another git repo
> (something like "zeromq-readthedocs").
> From such a new git repo then we should setup a (very basic) RTD pipeline
> to deploy the "landing" page.
> Then after we have de-coupled the libzmq from the "zeromq" RTD project I
> think we will be able to bind it to the "libzmq" RTD project.
>
> Luca, what do you think of the above plan?
>
> Thanks,
> Francesco
>
>
> On Fri 3 Nov 2023 at 14:47, Brett Viren  wrote:
>
>> Hi Francesco,
>>
>> I think it is better to move your current /zeromq/ content to be under
>> /libzmq/ and then use /zeromq/ in some other way.
>>
>> The reason is that the term "zeromq" implies a larger scope than does
>> "libzmq".  In informal discussions, context will sometimes make the
>> meaning clear.  Here, I think it is best to be precise.
>>
>>
>> What might go in the newly available /zeromq/?  Some thoughts:
>>
>> - Nothing :: Hold on to /zeromq/ in the RTD namespace but otherwise
>>   leave it empty.
>>
>> - Landing :: Develop some minimal content consisting of a link to
>>   zeromq.org and links to all the other RTD projects (I know of only 3:
>>   the new /libzmq/, /pyzmq/ and /learning-0mq-with-pyzmq/).
>>
>> - Redirect :: If technically possible, make /zeromq/ so that
>>   zeromq.readthedocs.io gives an HTTP redirect to zeromq.org.
>>
>> - Mirror :: Serve a copy of zeromq.org content, maybe even versioned.
>>
>>
>> -Brett.
>>
>>
>> Francesco  writes:
>>
>> > Hi all,
>> >
>> > As an update on this topic: RTD support contacted me yesterday and they
>> promptly renamed the old "libzmq"
>> > abandoned project inside RTD, so that the name "libzmq" is now
>> available.
>> > On the other hand, right now we have already imported the libzmq
>> project inside RTD with the name
>> > "zeromq".
>> > I asked for directions to the RTD support.
>> >
>> > I feel that both
>> >https://readthedocs.org/projects/zeromq/   [currently up and
>> running!]
>> >https://readthedocs.org/projects/libzmq [now available]
>> > are good. Maybe "libzmq" is more specific (let's say in the future
>> > czmq also wants docs on RTD: it could register a "czmq" subproject,
>> > and that would be more coherently matched by having "libzmq" instead
>> > of "zeromq")
>> >
>> > What do you think?
>> >
>> > Thanks,
>> > Francesco
>> >
>> > On Tue 31 Oct 2023 at 15:45, Francesco <
>> fra

Re: [zeromq-dev] ZeroMQ docs

2023-11-04 Thread Francesco
hi Brett, hi Arnaud,

>  Just to say that this is really great work!  Kudos to you and Luca.
...
> I would just like to add that this is really much appreciated work!

Thanks !
I hope that the renewed look (together with the "always up to date with
zero maintainer work") will help show the wider community that the
zeromq project is still alive: there are users, and it's a valid alternative
to other (perhaps more popular) messaging frameworks like Kafka, Pulsar or
NATS.
In this regard, in the future I would like to write a blog post on
the topic "ZeroMQ inside Kubernetes".  I have accumulated quite a lot of
experience (>3 yrs) running ZeroMQ services in scalable workloads inside
Kubernetes and developed a number of techniques to address problems
inherent to peer-to-peer (brokerless) communications.

Anyway, for now I think we should "finish" the documentation effort.
About that:

> * In master branch docs: Fix the "See Also" sections in the doc pages, by
using unordered lists, instead of space-separated single-line list

this has been done & merged

> * In api.zeromq.org wikidot: Setup the redirection to the new website;
according to https://www.wikidot.com/doc-modules:redirect-module, it's
enough to put
>[[module Redirect destination="https://zeromq.readthedocs.io/en/latest/
"]]
>   in the wiki page... Luca / Kevin, my understanding is that you have
administrative access to the wikidot for api.zeromq.org... can you try
setting up this redirect?

this has been done by Luca

>* In zeromq.org website: Update the link "Low-level API" to point to
https://zeromq.readthedocs.io/en/latest/
>* In zeromq.org website: Create a page to host the contents of
http://wiki.zeromq.org/docs:contributing
>* In master branch docs: Update the link in the "Authors" sections to
point to the corresponding new page of the "new" zeromq.org website

For these steps I opened a first MR on zeromq.org repo:
https://github.com/zeromq/zeromq.org/pull/141
and I'm waiting for some feedback from Kevin Sapper, who looks to be the
de-facto maintainer of the zeromq.org website (judging from the commit history) :)


> I'm also curious how we can make this work from zproject. But that might
be a later step.

I have saved the (very basic / raw) scripts I used to convert the
Asciidoc-py format to the "modern" Asciidoc format, so I can share them or
run them on other docs if useful.
To be honest, I have never used any project other than libzmq, so I'm quite
new to e.g. czmq or zproject.

Thanks,
Francesco


On Sat 4 Nov 2023 at 22:50, Arnaud Loonstra wrote:

> I would just like to add that this is really much appreciated work!
>
> I'm also curious how we can make this work from zproject. But that might
> be a later step.
>
> Rg,
>
> Arnaud
> On 03/11/2023 10:29, Francesco wrote:
>
> Hi all,
>
> As an update on this topic: with help from Luca the *conversion of
> documentation from the old Asciidoc-py has been completed*.
> As a bonus: GitHub renders Asciidoc natively, so you can see the
> documentation rendered on the fly just by browsing the GitHub repo, e.g.
> see https://github.com/zeromq/libzmq/blob/master/doc/zmq_connect.adoc.
> Additionally the docs get published to Github Pages:
>  https://zeromq.github.io/libzmq/
>
> *The integration with ReadTheDocs is also complete*, you can check out
> the result at: https://zeromq.readthedocs.io/en/latest/
> Please note that from the flyout menu it's possible to browse docs for:
> libzmq 3.2.6, libzmq 4.0.10, libzmq 4.1.8, latest libzmq (master). Just
> like what we have in http://api.zeromq.org/.
> Unlike http://api.zeromq.org/ however, the ReadTheDocs website is
> automatically updated anytime there is a git checkin, so it will always
> show up-to-date documentation.
> Also the rendering of the page is using a bigger font and is somewhat less
> compact compared to http://api.zeromq.org/.
> Any comment is welcome.
>
> *Next steps* I think are:
> * In master branch docs: Fix the "See Also" sections in the doc pages, by
> using unordered lists, instead of space-separated single-line list
> * In zeromq.org website: Update the link "Low-level API" to point to
> https://zeromq.readthedocs.io/en/latest/
> * In api.zeromq.org wikidot: Setup the redirection to the new website;
> according to https://www.wikidot.com/doc-modules:redirect-module, it's
> enough to put
> [[module Redirect destination="
> https://zeromq.readthedocs.io/en/latest/"]]
>in the wiki page... Luca / Kevin, my understanding is that you have
> administrative access to the wikidot for api.zeromq.org... can you try
> setting up this redirect?
>
> Bonus:
> *

Re: [zeromq-dev] About libzmq "domain" inside readthedocs.io

2023-11-03 Thread Francesco
Hi Brett,

I mostly agree with you.

> What might go in the newly available /zeromq/?  Some thoughts:
> Landing :: Develop some minimal content consisting of a link to
>  zeromq.org and links to all the other RTD projects (I know of only 3:
>  the new /libzmq/, /pyzmq/ and /learning-0mq-with-pyzmq/).

My preference would go to the "Landing" option. I think a very simple page
linking to the documentation of the other projects can help.
Moreover using the "subprojects" feature we would be able to serve the
documentation from the zeromq.readthedocs.io website for all zeromq
projects (libzmq, others in future).

I think technically speaking if we want to register libzmq as RTD "libzmq"
project, we need to point the "zeromq" project to another git repo
(something like "zeromq-readthedocs").
From such a new git repo we should then set up a (very basic) RTD pipeline
to deploy the "landing" page.
Then after we have de-coupled the libzmq from the "zeromq" RTD project I
think we will be able to bind it to the "libzmq" RTD project.

Luca, what do you think of the above plan?

Thanks,
Francesco


On Fri 3 Nov 2023 at 14:47, Brett Viren  wrote:

> Hi Francesco,
>
> I think it is better to move your current /zeromq/ content to be under
> /libzmq/ and then use /zeromq/ in some other way.
>
> The reason is that the term "zeromq" implies a larger scope than does
> "libzmq".  In informal discussions, context will sometimes make the
> meaning clear.  Here, I think it is best to be precise.
>
>
> What might go in the newly available /zeromq/?  Some thoughts:
>
> - Nothing :: Hold on to /zeromq/ in the RTD namespace but otherwise
>   leave it empty.
>
> - Landing :: Develop some minimal content consisting of a link to
>   zeromq.org and links to all the other RTD projects (I know of only 3:
>   the new /libzmq/, /pyzmq/ and /learning-0mq-with-pyzmq/).
>
> - Redirect :: If technically possible, make /zeromq/ so that
>   zeromq.readthedocs.io gives an HTTP redirect to zeromq.org.
>
> - Mirror :: Serve a copy of zeromq.org content, maybe even versioned.
>
>
> -Brett.
>
>
> Francesco  writes:
>
> > Hi all,
> >
> > As an update on this topic: RTD support contacted me yesterday and they
> promptly renamed the old "libzmq"
> > abandoned project inside RTD, so that the name "libzmq" is now available.
> > On the other hand, right now we have already imported the libzmq project
> inside RTD with the name
> > "zeromq".
> > I asked for directions to the RTD support.
> >
> > I feel that both
> >https://readthedocs.org/projects/zeromq/   [currently up and
> running!]
> >https://readthedocs.org/projects/libzmq [now available]
> > are both good. Maybe "libzmq" is more specific (say that in the future
> > czmq also wants its docs on RTD: it could register a "czmq" subproject,
> > and that would pair more coherently with "libzmq" than with "zeromq").
> >
> > What do you think?
> >
> > Thanks,
> > Francesco
> >
> > On Tue, 31 Oct 2023 at 15:45, Francesco <
> francesco.monto...@gmail.com> wrote:
> >
> > Hi Brett,
> >
> > > RTD provides a "custom domain" aka a "subdomain" namespace.   I
> believe this would allow
> > >  zeromq.readthedocs.io/libzmq
> >
> > yes, I agree it should be feasible.
> >
> > > I do not know how to best map this to libzmq's development model.
> >
> > I think Luca has just created the zeromq
> https://readthedocs.org/projects/zeromq/  project.
> > RTD allows adding as many maintainers as needed to a single
> > "project" (like zeromq)... AFAICT all maintainers have the same
> > rights/permissions.
> > I think it would be best to have all libzmq maintainers added there to
> > ensure there will always be someone with the rights to update/tweak
> > config settings in the coming years.
> > And of course I can be there to help however I can.
> >
> > Thanks,
> > Francesco
> >
> > On Tue, 31 Oct 2023 at 15:37, Brett Viren 
> wrote:
> >
> > Francesco  writes:
> >
> > > In meanwhile perhaps some libzmq maintainer can simply
> register a new
> > > project named "zeromq" and then later on we can setup some
> kind of
> > > redirection rule
> >
> RTD provides a "custom domain" aka a "subdomain" namespace.

Re: [zeromq-dev] ZeroMQ docs

2023-11-03 Thread Francesco
Hi all,

As an update on this topic: with help from Luca the *conversion of
documentation from the old Asciidoc-py has been completed*.
As a bonus: GitHub renders Asciidoc natively, so you can see the
documentation rendered on the fly just by browsing the GitHub repo, e.g.
see https://github.com/zeromq/libzmq/blob/master/doc/zmq_connect.adoc.
Additionally the docs get published to Github Pages:
 https://zeromq.github.io/libzmq/

*The integration with ReadTheDocs is also complete*, you can check out the
result at: https://zeromq.readthedocs.io/en/latest/
Please note that from the flyout menu it's possible to browse docs for:
libzmq 3.2.6, libzmq 4.0.10, libzmq 4.1.8, latest libzmq (master). Just
like what we have in http://api.zeromq.org/.
Unlike http://api.zeromq.org/ however, the ReadTheDocs website is
automatically updated anytime there is a git checkin, so it will always
show up-to-date documentation.
The page rendering also uses a bigger font and is somewhat less compact
than http://api.zeromq.org/.
Any comment is welcome.
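For reference, a ReadTheDocs setup that builds Asciidoc with Asciidoctor can be sketched as a `.readthedocs.yaml` along these lines. This is a hypothetical sketch, not the actual file merged into libzmq; in particular, whether `apt_packages` combines with custom `commands` should be verified against the RTD configuration docs:

```yaml
# Hypothetical sketch only -- adapt and verify against the RTD docs.
version: 2
build:
  os: ubuntu-22.04
  tools:
    python: "3.11"
  apt_packages:
    - asciidoctor
  commands:
    # Render every Asciidoc page into the directory RTD serves.
    - mkdir -p $READTHEDOCS_OUTPUT/html
    - asciidoctor -D $READTHEDOCS_OUTPUT/html doc/*.adoc
```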

*Next steps* I think are:
* In master branch docs: Fix the "See Also" sections in the doc pages by
using unordered lists instead of a space-separated single-line list
* In zeromq.org website: Update the link "Low-level API" to point to
https://zeromq.readthedocs.io/en/latest/
* In api.zeromq.org wikidot: Setup the redirection to the new website;
according to https://www.wikidot.com/doc-modules:redirect-module, it's
enough to put
[[module Redirect destination="https://zeromq.readthedocs.io/en/latest/"]]
   in the wiki page... Luca / Kevin, my understanding is that you have
administrative access to the wikidot for api.zeromq.org... can you try
setting up this redirect?

Bonus:
* In zeromq.org website: Create a page to host the contents of
http://wiki.zeromq.org/docs:contributing
* In master branch docs: Update the link in the "Authors" sections to point
to the corresponding new page of the "new" zeromq.org website

I will try to contribute some PRs to the zeromq.org website repo to address
the points above.

Thanks,

Francesco



On Tue, 24 Oct 2023 at 15:35, Francesco <
francesco.monto...@gmail.com> wrote:

> Hi Brett,
>
>
> FWIW, I think it is very reasonable to accept some syntax change in
>> order to migrate to a better supported document compiler and to gain the
>> new functionality of readthedocs.  So, for whatever it may be worth, I
>> take back my initial opinion to leave the API .txt files as-is.
>>
>
> Actually in my PR I'm proposing to rename all .txt files to .adoc to
> clarify they use Asciidoc syntax. It's clearly the correct extension that
> should be used according to Asciidoc online resources.
> As per the content: I migrated them to use the more modern asciidoc syntax.
>
>
>> Francesco, I know you have already invested effort down the asciidoctor
>> path but maybe it is worth considering to jump fully to a flavor of
>> markdown (eg github's)?
>>
>
> I'm not sure. Of course they're biased but Asciidoc community claims to be
> better and a "more sound alternative" to Markdown, see:
>https://docs.asciidoctor.org/asciidoc/latest/asciidoc-vs-markdown/
>
> A less-biased comparison (much longer to read -- I didn't read it all) is
> at:
>   https://www.dewanahmed.com/markdown-asciidoc-restructuredtext/
>
>
> Actually, when I
>> started writing I half assumed they were in markdown and half assumed
>> they were in some special markdown'ish GSL syntax and only later figured
>> out they were asciidoc.  But then I got confused about how to compile
>> them (ie,
>> asciidoc-py vs asciidoctor that you described).  Certainly, my stumbles
>> were due to my ignorance/assumptions but had the docs been in markdown,
>> all these little frictions would not have shown up.  Again, opinion fwiw.
>>
> I totally agree.
> I also contributed some fixes/new features in the past and documenting
> them was tricky. The .txt extension does not help at all.
> The use of archaic tools makes it very hard for the "casual" contributor to
> see the actual documentation rendered.
> Even more tricky: I was expecting api.zeromq.org to automatically get
> updated some time after the PR was merged... I discovered it's not the case
> :)
>
> Last but not least: my use case for spending a few hours on this PR /
> documentation improvement was very simple: I noticed a very nice new option
> (ZMQ_BUSY_POLL) in the release notes. Then I started to search for docs
> to share with the rest of the teams. I found only the .txt version, nothing
> I could easily link in an email or share with simplicity... that was the
> trigger point... :)
>
>
> Thanks,
> Francesco
>
>
>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] About libzmq "domain" inside readthedocs.io

2023-11-03 Thread Francesco
Hi all,

As an update on this topic: RTD support contacted me yesterday and they
promptly renamed the old "libzmq" abandoned project inside RTD, so that the
name "libzmq" is now available.
On the other hand, right now we have already imported the libzmq project
inside RTD with the name "zeromq".
I asked for directions to the RTD support.

I feel that both
   https://readthedocs.org/projects/zeromq/   [currently up and running!]
   https://readthedocs.org/projects/libzmq [now available]
are both good. Maybe "libzmq" is more specific (say that in the future czmq
also wants its docs on RTD: it could register a "czmq" subproject, and that
would pair more coherently with "libzmq" than with "zeromq").

What do you think?

Thanks,
Francesco



On Tue, 31 Oct 2023 at 15:45, Francesco <
francesco.monto...@gmail.com> wrote:

> Hi Brett,
>
> > RTD provides a "custom domain" aka a "subdomain" namespace.   I believe
> this would allow
> >  zeromq.readthedocs.io/libzmq
>
> yes, I agree it should be feasible.
>
> > I do not know how to best map this to libzmq's development model.
>
> I think Luca has just created the zeromq
> https://readthedocs.org/projects/zeromq/  project.
> RTD allows adding as many maintainers as needed to a single "project"
> (like zeromq)... AFAICT all maintainers have the same rights/permissions.
> I think it would be best to have all libzmq maintainers added there to
> ensure there will always be someone with the rights to update/tweak config
> settings in the coming years.
> And of course I can be there to help however I can.
>
> Thanks,
> Francesco
>
>
> On Tue, 31 Oct 2023 at 15:37, Brett Viren  wrote:
>
>> Francesco  writes:
>>
>> > In meanwhile perhaps some libzmq maintainer can simply register a new
>> > project named "zeromq" and then later on we can setup some kind of
>> > redirection rule
>>
>> RTD provides a "custom domain" aka a "subdomain" namespace.   I
>> believe this would allow
>>
>>   zeromq.readthedocs.io/libzmq
>>
>> This nicely mirrors GitHub's <user>.github.io/<repo> namespace for its
>> "pages", which naturally gives a spot for other zeromq repos to have
>> their docs on RTD.  Though some, at least PyZMQ, already have their own
>> subdomain on RTD.
>>
>>
>> The best I can tell from RTD's documentation is that their auth model
>> assumes a single individual "owns" the project or subdomain names in the
>> namespace.  I do not know how to best map this to libzmq's development
>> model.
>>
>> But, (purely IMO) I think it is reasonable for you, Francesco, to "own"
>> the "zeromq" subdomain and "libzmq" project name on RTD.  After all, you
>> are the one actively doing the work.  If at some future time you wish to
>> transfer ownership you could of course seek someone to take it.  Even
>> letting things languish in the future seems okay to me as some future
>> interested person can follow the RTD procedure and take over the name
>> and the responsibility.
>>
>>
>> -Brett.
>>
>>
>>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] About libzmq "domain" inside readthedocs.io

2023-10-31 Thread Francesco
Hi Brett,

> RTD provides a "custom domain" aka a "subdomain" namespace.   I believe
this would allow
>  zeromq.readthedocs.io/libzmq

yes, I agree it should be feasible.

> I do not know how to best map this to libzmq's development model.

I think Luca has just created the zeromq
https://readthedocs.org/projects/zeromq/  project.
RTD allows adding as many maintainers as needed to a single "project" (like
zeromq)... AFAICT all maintainers have the same rights/permissions.
I think it would be best to have all libzmq maintainers added there to
ensure there will always be someone with the rights to update/tweak config
settings in the coming years.
And of course I can be there to help however I can.

Thanks,
Francesco


On Tue, 31 Oct 2023 at 15:37, Brett Viren  wrote:

> Francesco  writes:
>
> > In meanwhile perhaps some libzmq maintainer can simply register a new
> > project named "zeromq" and then later on we can setup some kind of
> > redirection rule
>
> RTD provides a "custom domain" aka a "subdomain" namespace.   I
> believe this would allow
>
>   zeromq.readthedocs.io/libzmq
>
> This nicely mirrors GitHub's <user>.github.io/<repo> namespace for its
> "pages", which naturally gives a spot for other zeromq repos to have
> their docs on RTD.  Though some, at least PyZMQ, already have their own
> subdomain on RTD.
>
>
> The best I can tell from RTD's documentation is that their auth model
> assumes a single individual "owns" the project or subdomain names in the
> namespace.  I do not know how to best map this to libzmq's development
> model.
>
> But, (purely IMO) I think it is reasonable for you, Francesco, to "own"
> the "zeromq" subdomain and "libzmq" project name on RTD.  After all, you
> are the one actively doing the work.  If at some future time you wish to
> transfer ownership you could of course seek someone to take it.  Even
> letting things languish in the future seems okay to me as some future
> interested person can follow the RTD procedure and take over the name
> and the responsibility.
>
>
> -Brett.
>
>
>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] About libzmq "domain" inside readthedocs.io

2023-10-31 Thread Francesco
hi Brett, Arnaud,
Thanks for the fast answers.
I was not aware of the specific RTD procedure to claim an abandoned project.
I have submitted a ticket on their website to "reclaim" the libzmq project
and bind it to https://github.com/zeromq/libzmq. Unfortunately, the form
does not provide any sort of ticket number after submission. I will post
updates I receive from RTD on this mailing list.

They mention a 6-week period before a user can be declared
"unreachable"... let's see...
In meanwhile perhaps some libzmq maintainer can simply register a new
project named "zeromq" and then later on we can setup some kind of
redirection rule

Thanks,
Francesco


On Tue, 31 Oct 2023 at 13:41, Brett Viren  wrote:

> Hi Francesco,
>
> I think you should have no problem reclaiming the name "libzmq" on RTD.
>
> I find this policy page:
>
>   https://docs.readthedocs.io/en/stable/abandoned-projects.html
>
> Scroll down for the info to provide and the link for how to request a
> change.
>
> Cheers,
> -Brett.
>
> Francesco  writes:
>
> > Hi all,
> >
> > I noticed that there is a user "shanesquarestream" that has registered
> the project "libzmq" inside readthedocs.io:
> >https://readthedocs.org/projects/libzmq/
> >
> > The readthedocs.io website points to a fork of libzmq which has been
> deleted since then:
> > https://github.com/squarestreams/libzmq
> >
> > Searching for that user "squarestreams" or  "shanesquarestream" inside
> github provides no results.
> >
> > Does anybody know this user/fork of libzmq?
> >
> > I think it would be nice to get https://readthedocs.org/projects/libzmq/
> registered against the official libzmq
> > project, not to a dead fork...
> >
> > Thanks!
> >
> > Francesco
> >
> > PS: we can still register "https://readthedocs.org/projects/zeromq" but
> I think it might be confusing to have a dead
> > page for libzmq...
> > ___
> > zeromq-dev mailing list
> > zeromq-dev@lists.zeromq.org
> >
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] Cross compiling for STM32MPU - ARM

2023-10-31 Thread Francesco
Hi Venkat,

I suggest you read the page:
https://www.gnu.org/software/automake/manual/html_node/Cross_002dCompilation.html
which describes how to do a cross-compiler build of a project using
Automake in general (e.g. you're missing the --build option).
It seems that the compiler errors you're getting are not related to libzmq
at all.

My suggestion is, as an exercise/test, to first cross-compile a simple
hello-world application that uses Automake (you can follow Automake
tutorials on the web to create such a simple app).
Once you get that cross-compiled, you can try cross-compiling libzmq.
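For completeness, a cross-compile invocation following that Automake page would look roughly like this. It is a sketch only: the toolchain triplet, the `--build` triplet, and the sysroot path are placeholders that must be adapted to the STM32MP SDK, and it has not been tested against it:

```shell
# Hedged sketch of a cross build; paths and triplets are placeholders.
SYSROOT=/path/to/arm/sysroot   # e.g. the sysroot shipped with the SDK
./configure \
  --build=x86_64-pc-linux-gnu \
  --host=arm-linux-gnueabihf \
  CC=arm-linux-gnueabihf-gcc \
  CXX=arm-linux-gnueabihf-g++ \
  CFLAGS="--sysroot=$SYSROOT" \
  CXXFLAGS="--sysroot=$SYSROOT" \
  LDFLAGS="--sysroot=$SYSROOT"
make
```

Mixing headers from the host (`/usr/include/x86_64-linux-gnu/...` in your log) with the ARM compiler is exactly the symptom the `--sysroot` flags are meant to avoid.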

HTH,
Francesco



On Tue, 31 Oct 2023 at 08:37, Edwin van den Oetelaar <
ed...@oetelaar.com> wrote:

> ChatGPT4 suggest using --with-sysroot and using the correct -I flags for
> pointing to the correct include directories eg:
>
> ./configure --host=arm-linux-gnueabihf --with-sysroot=/path/to/arm/sysroot
>
> This is not a specific issue with zeromq but skills about using the tools
> of the trade.
>
> Good luck,
> Edwin
>
>
>
> On Tue, 31 Oct 2023 at 04:32, Venkat Krishna via zeromq-dev <
> zeromq-dev@lists.zeromq.org> wrote:
>
>> Hi,
>>
>> I'm trying to cross compile the libzmq for my arm stm32 unit and I'm
>> facing some issues. I'm a noob so I don't know if I'm missing something.
>> Any help is greatly appreciated!
>>
>> I'm using Ubuntu 22.04 as my host, and I installed the
>> `gcc-arm-linux-gnueabi, gcc-arm-linux-gnueabihf, g++-arm-linux-gnueabi,
>> g++-arm-linux-gnueabihf packages and also the STM's sdk that comes with a
>> set of cross compilers.
>>
>> Here's what I've tried:
>>
>>1. » ./configure --host=arm-none-linux-gnueabi
>>CC=arm-linux-gnueabi-gcc CXX=arm-linux-gnueabi-g++​
>>   1. When I run `make check after this, I get the following errors.
>>   Making check in doc
>>   make[1]: Entering directory
>>   '/home/venkatkrishna/Documents/libzmq/doc'
>>   make[1]: Nothing to be done for 'check'.
>>   make[1]: Leaving directory
>>   '/home/venkatkrishna/Documents/libzmq/doc'
>>   make[1]: Entering directory '/home/venkatkrishna/Documents/libzmq'
>> CXX  src/libzmq_la-address.lo
>>   In file included from /usr/arm-linux-gnueabihf/include/stdio.h:430,
>>from src/../include/zmq.h:32,
>>from src/precompiled.hpp:30,
>>from src/address.cpp:3:
>>   /usr/include/x86_64-linux-gnu/bits/floatn.h:74:70: error: unknown
>>   machine mode ‘__TC__’
>>  74 | typedef _Complex float __cfloat128 __attribute__
>>   ((__mode__ (__TC__)));
>> |
>>  ^
>>   /usr/include/x86_64-linux-gnu/bits/floatn.h:86:9: error:
>>   ‘__float128’ does not name a type; did you mean ‘__cfloat128’?
>>  86 | typedef __float128 _Float128;
>> | ^~
>> | __cfloat128
>>   In file included from
>>   /usr/arm-linux-gnueabihf/include/c++/11/cwchar:44,
>>from
>>   /usr/arm-linux-gnueabihf/include/c++/11/bits/postypes.h:40,
>>from
>>   /usr/arm-linux-gnueabihf/include/c++/11/bits/char_traits.h:40,
>>from
>>   /usr/arm-linux-gnueabihf/include/c++/11/string:40,
>>from src/address.hpp:8,
>>from src/address.cpp:5:
>>   /usr/arm-linux-gnueabihf/include/wchar.h:407:8: error: ‘_Float128’
>>   does not name a type; did you mean ‘_Float32x’?
>> 407 | extern _Float128 wcstof128 (const wchar_t *__restrict
>>   __nptr,
>> |^
>> |_Float32x
>>   /usr/arm-linux-gnueabihf/include/wchar.h:524:8: error: ‘_Float128’
>>   does not name a type; did you mean ‘_Float32x’?
>> 524 | extern _Float128 wcstof128_l (const wchar_t *__restrict
>>   __nptr,
>> |^
>> |_Float32x
>>   In file included from
>>   /usr/arm-linux-gnueabihf/include/c++/11/cstdlib:75,
>>from
>>   /usr/arm-linux-gnueabihf/include/c++/11/ext/string_conversions.h:41,
>>from
>>   /usr/arm-linux-gnueabihf/include/c++/11/bits/basic_string.h:6608,
>>from
>>   /usr/arm-linux-gnueabihf/include/c++/11/string:55,
>>from src/address.hpp:8,
>>from src/address.cpp:5:
>>   /usr/arm-linux-g

[zeromq-dev] About libzmq "domain" inside readthedocs.io

2023-10-31 Thread Francesco
Hi all,

I noticed that there is a user "shanesquarestream" that has registered the
project "libzmq" inside readthedocs.io:
   https://readthedocs.org/projects/libzmq/

The readthedocs.io website points to a fork of libzmq which has been
deleted since then: https://github.com/squarestreams/libzmq

Searching for that user "squarestreams" or "shanesquarestream" on GitHub
yields no results.

Does anybody know this user/fork of libzmq?

I think it would be nice to get https://readthedocs.org/projects/libzmq/
registered against the official libzmq project, not to a dead fork...

Thanks!

Francesco


PS: we can still register "https://readthedocs.org/projects/zeromq" but I
think it might be confusing to have a dead page for libzmq...
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] ZeroMQ docs

2023-10-24 Thread Francesco
Hi Brett,


FWIW, I think it is very reasonable to accept some syntax change in
> order to migrate to a better supported document compiler and to gain the
> new functionality of readthedocs.  So, for whatever it may be worth, I
> take back my initial opinion to leave the API .txt files as-is.
>

Actually in my PR I'm proposing to rename all .txt files to .adoc to
clarify they use Asciidoc syntax. It's clearly the correct extension that
should be used according to Asciidoc online resources.
As per the content: I migrated them to use the more modern asciidoc syntax.


> Francesco, I know you have already invested effort down the asciidoctor
> path but maybe it is worth considering to jump fully to a flavor of
> markdown (eg github's)?
>

I'm not sure. Of course they're biased but Asciidoc community claims to be
better and a "more sound alternative" to Markdown, see:
   https://docs.asciidoctor.org/asciidoc/latest/asciidoc-vs-markdown/

A less-biased comparison (much longer to read -- I didn't read it all) is
at:
  https://www.dewanahmed.com/markdown-asciidoc-restructuredtext/


Actually, when I
> started writing I half assumed they were in markdown and half assumed
> they were in some special markdown'ish GSL syntax and only later figured
> out they were asciidoc.  But then I got confused about how to compile them
> (ie,
> asciidoc-py vs asciidoctor that you described).  Certainly, my stumbles
> were due to my ignorance/assumptions but had the docs been in markdown,
> all these little frictions would not have shown up.  Again, opinion fwiw.
>
I totally agree.
I also contributed some fixes/new features in the past and documenting them
was tricky. The .txt extension does not help at all.
The use of archaic tools makes it very hard for the "casual" contributor to
see the actual documentation rendered.
Even more tricky: I was expecting api.zeromq.org to automatically get
updated some time after the PR was merged... I discovered it's not the case
:)

Last but not least: my use case for spending a few hours on this PR /
documentation improvement was very simple: I noticed a very nice new option
(ZMQ_BUSY_POLL) in the release notes. Then I started to search for docs to
share with the rest of the teams. I found only the .txt version, nothing I
could easily link in an email or share with simplicity... that was the
trigger point... :)


Thanks,
Francesco
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] *** SPAM *** Re: ZeroMQ docs

2023-10-24 Thread Francesco
Hi Fabrice,
Thanks for the pointer.
Pandoc however seems to be able to convert only _to_ Asciidoc, not _from_
Asciidoc (by looking at https://pandoc.org/).
Anyway, thanks to some regexes I managed to upgrade the Asciidoc
documentation from the legacy syntax to the "modern/current" Asciidoc
syntax.
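To give an idea of what those regex passes look like, here is a sketch of one such rewrite. The `linkzmq:` macro name and the exact pattern are assumptions based on the libzmq doc sources, not the literal commands that were run:

```shell
# Hedged sketch: rewrite legacy linkzmq:page[section] macros into the
# modern Asciidoctor xref form, e.g.
#   linkzmq:zmq_connect[3]  ->  xref:zmq_connect.adoc[zmq_connect]
convert_xrefs() {
  sed -E 's/linkzmq:([A-Za-z0-9_]+)\[[0-9]+\]/xref:\1.adoc[\1]/g'
}

echo 'See linkzmq:zmq_connect[3] and linkzmq:zmq_bind[3].' | convert_xrefs
# prints: See xref:zmq_connect.adoc[zmq_connect] and xref:zmq_bind.adoc[zmq_bind].
```

In practice the function would be applied in bulk, e.g. over `doc/*.adoc` with `sed -i`.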

I have updated the PR at https://github.com/zeromq/libzmq/pull/4607
Documentation rendered as Github Pages in my own fork:
https://f18m.github.io/libzmq/

Any comment is welcome

Thanks,
Francesco


On Tue, 24 Oct 2023 at 09:20, Fabrice Bacchella <
fabrice.bacche...@orange.fr> wrote:

> Did you try pandoc?
>
> On 23 Oct 2023, at 23:16, Francesco  wrote:
>
> Update: apparently the point a) is blocked by point b).
>
> In more detail: the modern Asciidoc syntax to get a cross-reference
> correctly rendered in both manpages and HTML is:
>
>  xref:name_of_doc.adoc[name_of_doc]
>
> This will produce a valid link to "name_of_doc.html" for HTML output and a
> simple "name_of_doc" span of text for manpage output. This is the fix
> mentioned in step a).
> However this syntax is accepted only when Asciidoctor is NOT running in
> legacy/deprecated mode.
> To avoid that, I first need step b).
>
> Shall I put steps a) and b) together in my same WIP PR ?
> It will be harder to review it...
>
> thanks
>
>
> On Mon, 23 Oct 2023 at 22:43, Francesco <
> francesco.monto...@gmail.com> wrote:
>
>> Hi all,
>>
>> Here's an update on my attempt to refresh the doc system for libzmq API.
>>
>> *Current status:*
>>   libzmq is built around the "ancient" python Asciidoc tool. That tool is
>> unmaintained for several years and has been replaced by the Asciidoctor
>> tool (see
>> https://docs.asciidoctor.org/asciidoctor/latest/migrate/asciidoc-py/).
>>   Note that the original tool used for interpreting the .txt files was
>> named "asciidoc" just like the language markup contained in the .txt files
>> itself. To avoid confusion, that tool is now referred to as "asciidoc-py".
>>   The tool asciidoc-py is the one unmaintained. The language Asciidoc
>> itself instead is still maintained and developed, but Asciidoctor is the
>> only updated tool to process Asciidoc documents.
>>   The manpages are built today in libzmq through this chain:
>>   .txt --[asciidoc]--> .xml (DocBook) --[xmlto]--> .3/.7 manpages
>>   where the [] indicate the tool used for the conversion. Also the
>> utility "xmlto" seems quite unmaintained.
>>   Finally the wikidot website http://api.zeromq.org/ is built from some
>> scripts located in the "ztools" repo that basically leverages the ability
>> of that wiki to produce a listing of all wiki pages uploaded by group; the
>> group used is the ZMQ API version. This makes it possible to document
>> multiple versions of the libzmq API in the same website/wiki. However
>> wikidot itself seems unmaintained as well.
>>
>> *Where I got so far:*
>>   I managed to obtain usable and nicely-formatted HTML docs by
>> running Asciidoctor on libzmq docs, after some mass-replacement passes to
>> fix some syntax issues.
>>   Asciidoctor is still processing all libzmq docs using the so-called
>> "compatibility mode".
>>  In my libzmq fork I enabled Github pages and I got them deployed on
>> every checkin of my branch.
>>  Documentation rendered as Github Pages in my own fork:
>> https://f18m.github.io/libzmq/
>>  PR: https://github.com/zeromq/libzmq/pull/4607
>>
>> *Next steps:*
>>   a) I'm fighting a little bit with Asciidoctor to get the right
>> rendering also for manpages.
>>   b) Some smart mass-replace is still needed to convert from the
>> deprecated Asciidoc format to the "modern" Asciidoc (see
>> https://docs.asciidoctor.org/asciidoctor/latest/migrate/asciidoc-py/#updated-and-deprecated-asciidoc-syntax
>> )
>>   c) The Github pages approach is able to deploy only the documentation
>> for the latest "master" branch. Maintaining documentation for the multiple
>> API versions is probably best achieved using the more popular
>> readthedocs.io. As pointed out already in this email thread,
>> readthedocs.io is mostly designed around Sphinx and MkDocs but most
>> recent versions are flexible enough to also accommodate Asciidoc
>> documentation. I think readthedocs.io is the best solution to store
>> different versions of libzmq API.
>>
>> Please let me know if you have any comments.
>> In my opinion to simplify 

Re: [zeromq-dev] ZeroMQ docs

2023-10-23 Thread Francesco
Update: apparently the point a) is blocked by point b).

In more detail: the modern Asciidoc syntax to get a cross-reference
correctly rendered in both manpages and HTML is:

 xref:name_of_doc.adoc[name_of_doc]

This will produce a valid link to "name_of_doc.html" for HTML output and a
simple "name_of_doc" span of text for manpage output. This is the fix
mentioned in step a).
However this syntax is accepted only when Asciidoctor is NOT running in
legacy/deprecated mode.
To avoid that, I first need step b).
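During such a migration, a quick way to spot files still on the legacy form might be something like this (again assuming the legacy macro is spelled `linkzmq:`):

```shell
# Hedged helper: list doc sources still using the legacy macro instead
# of the modern xref: form; prints a message when none remain.
grep -rl 'linkzmq:' doc/ 2>/dev/null || echo 'no legacy cross-references left'
```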

Shall I put steps a) and b) together in the same WIP PR?
It will be harder to review that way...

thanks


On Mon, 23 Oct 2023 at 22:43, Francesco <
francesco.monto...@gmail.com> wrote:

> Hi all,
>
> Here's an update on my attempt to refresh the doc system for libzmq API.
>
> *Current status:*
>   libzmq is built around the "ancient" python Asciidoc tool. That tool is
> unmaintained for several years and has been replaced by the Asciidoctor
> tool (see
> https://docs.asciidoctor.org/asciidoctor/latest/migrate/asciidoc-py/).
>   Note that the original tool used for interpreting the .txt files was
> named "asciidoc" just like the language markup contained in the .txt files
> itself. To avoid confusion, that tool is now referred to as "asciidoc-py".
>   The tool asciidoc-py is the one unmaintained. The language Asciidoc
> itself instead is still maintained and developed, but Asciidoctor is the
> only updated tool to process Asciidoc documents.
>   The manpages are built today in libzmq through this chain:
>   .txt --[asciidoc]--> .xml (DocBook) --[xmlto]--> .3/.7 manpages
>   where the [] indicate the tool used for the conversion. Also the utility
> "xmlto" seems quite unmaintained.
>   Finally the wikidot website http://api.zeromq.org/ is built from some
> scripts located in the "ztools" repo that basically leverages the ability
> of that wiki to produce a listing of all wiki pages uploaded by group; the
> group used is the ZMQ API version. This makes it possible to document
> multiple versions of the libzmq API in the same website/wiki. However
> wikidot itself seems unmaintained as well.
>
> *Where I got so far:*
>   I managed to obtain usable and nicely-formatted HTML docs by
> running Asciidoctor on libzmq docs, after some mass-replacement passes to
> fix some syntax issues.
>   Asciidoctor is still processing all libzmq docs using the so-called
> "compatibility mode".
>  In my libzmq fork I enabled Github pages and I got them deployed on every
> checkin of my branch.
>  Documentation rendered as Github Pages in my own fork:
> https://f18m.github.io/libzmq/
>  PR: https://github.com/zeromq/libzmq/pull/4607
>
> *Next steps:*
>   a) I'm fighting a little bit with Asciidoctor to get the right rendering
> also for manpages.
>   b) Some smart mass-replace is still needed to convert from the
> deprecated Asciidoc format to the "modern" Asciidoc (see
> https://docs.asciidoctor.org/asciidoctor/latest/migrate/asciidoc-py/#updated-and-deprecated-asciidoc-syntax
> )
>   c) The Github pages approach is able to deploy only the documentation
> for the latest "master" branch. Maintaining documentation for the multiple
> API versions is probably best achieved using the more popular
> readthedocs.io. As pointed out already in this email thread,
> readthedocs.io is mostly designed around Sphinx and MkDocs but most
> recent versions are flexible enough to also accommodate Asciidoc
> documentation. I think readthedocs.io is the best solution to store
> different versions of libzmq API.
>
> Please let me know if you have any comments.
> In my opinion to simplify the PR review, after step a) it's best to do a
> first merge, and then carry out points b) and c) in 2 more separate PRs.
>
> What do you think?
>
> Thanks,
>
>
>
> On Fri, 20 Oct 2023 at 18:19, Brett Viren <
> brett.vi...@gmail.com> wrote:
>
>> On Fri, Oct 20, 2023 at 12:03 PM Francesco 
>> wrote:
>> >
>> > Maybe an even simpler solution is to activate the Github "Pages"
>> support in libzmq.org and link it with a github action that just uses
>> the Asciidoctor generator to convert all of doc/*.txt into static HTML.
>> >
>> > What do you think about this?
>>
>> This sounds like a very good idea to me.  And, it's even easier as the
>> existing libzmq build already produces the HTML.
>>
>> One could prototype some additional build action that populates the
>> special gh-pages branch by committing these generated HTML files.  This
>> can be tested using a personal fork of libzmq to make your own
&

Re: [zeromq-dev] ZeroMQ docs

2023-10-23 Thread Francesco
Hi all,

Here's an update on my attempt to refresh the doc system for libzmq API.

*Current status:*
  libzmq is built around the "ancient" Python asciidoc tool. That tool has
been unmaintained for several years and has been replaced by the Asciidoctor
tool (see
https://docs.asciidoctor.org/asciidoctor/latest/migrate/asciidoc-py/).
  Note that the original tool used for interpreting the .txt files was
named "asciidoc" just like the language markup contained in the .txt files
itself. To avoid confusion, that tool is now referred to as "asciidoc-py".
  It is the asciidoc-py tool that is unmaintained. The AsciiDoc language
itself is still maintained and developed, but Asciidoctor is the only
actively maintained tool for processing AsciiDoc documents.
  The manpages are built today in libzmq through this chain:
  .txt --[asciidoc]--> .xml (DocBook) --[xmlto]--> .3 or .7 manpages
  where the [] indicate the tool used for the conversion. Also the utility
"xmlto" seems quite unmaintained.
  Finally the wikidot website http://api.zeromq.org/ is built from some
scripts located in the "ztools" repo that basically leverages the ability
of that wiki to produce a listing of all wiki pages uploaded by group; the
group used is the ZMQ API version. This makes it possible to document
multiple versions of the libzmq API in the same website/wiki. However
wikidot itself seems unmaintained as well.

*Where I got so far:*
  I managed to obtain usable and nicely formatted HTML docs by
running Asciidoctor on libzmq docs, after some mass-replacement passes to
fix some syntax issues.
  Asciidoctor is still processing all libzmq docs using the so-called
"compatibility mode".
 In my libzmq fork I enabled Github pages and I got them deployed on every
checkin of my branch.
 Documentation rendered as Github Pages in my own fork:
https://f18m.github.io/libzmq/
 PR: https://github.com/zeromq/libzmq/pull/4607

*Next steps:*
  a) I'm fighting a little bit with Asciidoctor to get the right rendering
also for manpages.
  b) Some smart mass-replace is still needed to convert from the deprecated
Asciidoc format to the "modern" Asciidoc (see
https://docs.asciidoctor.org/asciidoctor/latest/migrate/asciidoc-py/#updated-and-deprecated-asciidoc-syntax
)
  c) The Github pages approach is able to deploy only the documentation for
the latest "master" branch. Maintaining documentation for the multiple API
versions is probably best achieved using the more popular readthedocs.io.
As pointed out already in this email thread, readthedocs.io is mostly
designed around Sphinx and MkDocs but most recent versions are flexible
enough to accommodate Asciidoc documentation. I think readthedocs.io is
the best solution to store different versions of libzmq API.
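
For reference, the Read the Docs build-customization page linked above
suggests driving non-Sphinx builds through "build.commands" in a
.readthedocs.yaml. A hedged sketch of what that could look like for libzmq
(tool versions, the doc/*.txt glob, and the output layout are assumptions,
not a tested configuration):

```yaml
# Hypothetical .readthedocs.yaml sketch, following the AsciiDoc example in
# the Read the Docs build-customization documentation. Versions and paths
# are illustrative assumptions.
version: 2
build:
  os: ubuntu-22.04
  tools:
    ruby: "3.3"
  commands:
    - gem install asciidoctor
    - mkdir -p $READTHEDOCS_OUTPUT/html
    - asciidoctor -D $READTHEDOCS_OUTPUT/html doc/*.txt
```

Read the Docs would then handle hosting one such build per version/branch,
which is exactly the multi-version requirement described in point c).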

Please let me know if you have any comments.
In my opinion to simplify the PR review, after step a) it's best to do a
first merge, and then carry out points b) and c) in 2 more separate PRs.
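
The mass replace of step b) can be largely scripted. As a minimal sketch,
one of the migrations documented on the Asciidoctor page linked above is
replacing the deprecated +monospace+ inline markup with backticks; the
doc/*.txt glob matches libzmq's layout, but the regex and workflow here are
illustrative only and any real conversion would need careful review:

```shell
# Demonstrate the deprecated-to-modern monospace conversion on a sample line.
printf 'Use +zmq_connect+ to connect.\n' |
  sed -E 's/\+([A-Za-z0-9_]+)\+/`\1`/g'
# prints: Use `zmq_connect` to connect.

# A (carefully reviewed!) in-place mass replace over the docs might then be:
#   sed -E -i 's/\+([A-Za-z0-9_]+)\+/`\1`/g' doc/*.txt
```

Each deprecated construct from the migration page would need its own rule,
so in practice this becomes a small script of such passes.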

What do you think?

Thanks,



Il giorno ven 20 ott 2023 alle ore 18:19 Brett Viren 
ha scritto:

> On Fri, Oct 20, 2023 at 12:03 PM Francesco 
> wrote:
> >
> > Maybe an even simpler solution is to activate the Github "Pages" support
> in libzmq.org and link it with a github action that just uses the
> Asciidoctor generator to convert all of doc/*.txt into static HTML.
> >
> > What do you think about this?
>
> This sounds like a very good idea to me.  And, it's even easier as the
> existing libzmq build already produces the HTML.
>
> One could prototype some additional build action that populates the
> special gh-pages by committing these generated HTML files.  This can
> be tested using a personal fork of libzmq to make your own
> https://.github.io/libzmq/.  When that works, a PR to libzmq
> would be needed.  Bonus if some .github/ CI bits could automate this.
> And, someone with GitHub permissions would need to go into libzmq's
> repo settings to turn on the publish setting.
>
> -Brett.
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] ZeroMQ docs

2023-10-20 Thread Francesco
Maybe an even simpler solution is to activate the Github "Pages" support in
libzmq.org and link it with a github action that just uses the Asciidoctor
generator to convert all of doc/*.txt into static HTML.

What do you think about this?
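
A hedged sketch of such a workflow (the third-party action names and
versions, ruby/setup-ruby and peaceiris/actions-gh-pages, are assumptions
for illustration, not a tested setup):

```yaml
# Hypothetical .github/workflows/docs.yml: render doc/*.txt with Asciidoctor
# and publish the resulting HTML to the gh-pages branch.
name: docs
on:
  push:
    branches: [master]
jobs:
  pages:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: ruby/setup-ruby@v1
        with:
          ruby-version: "3.3"
      - run: gem install asciidoctor
      - run: |
          mkdir -p public
          asciidoctor -D public doc/*.txt
      - uses: peaceiris/actions-gh-pages@v3
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./public
```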




Il giorno ven 20 ott 2023 alle ore 17:03 Francesco <
francesco.monto...@gmail.com> ha scritto:

> hi Brett,
> thanks for your answer.
> I checked zeromq.org (I had some trouble using Docker to get the website
> up: https://github.com/zeromq/zeromq.org/issues/125 and then I installed
> hugo locally, but I discovered it needs a quite old version (0.57.2) built in
> "extended" mode). I'm not really a web developer so I'm not sure how
> difficult it is to upgrade to latest "hugo" (
> https://github.com/gohugoio/hugo/).
> Anyway.
> The thing is that api.zeromq.org is probably served by some other source.
> I guess somebody has credentials to log on http://www.wikidot.com/ and
> update that page, but I don't think there is much to do in the "zeromq.org"
> repo. Of course I may be missing something.
>
> Personally, I think the look of api.zeromq.org is not the best one.
> readthedocs.io looks more like a de-facto standard for documentation in
> open source world (in my view)...
>
> thanks,
> Francesco
>
>
> Il giorno ven 20 ott 2023 alle ore 15:00 Brett Viren <
> brett.vi...@gmail.com> ha scritto:
>
>> Hi Francesco,
>>
>> I agree a refresh of the online API docs would be good.  I think the
>> zeromq.org website takes its content from:
>>
>>   https://github.com/zeromq/zeromq.org
>>
>> A PR to that repo is likely the first step to get zeromq.org updated.
>>
>> It would be extra good if the API docs for development and releases
>> could be refreshed in a more automated way.
>>
>>
>> I personally like having all the documentation under *.zeromq.org but I
>> see benefit and no downside to also having a copy of the API docs served
>> from readthedocs.
>>
>> The current API documentation source files are in AsciiDoc format under
>> libzmq/doc/*.txt and there are HTML and Unix man page build targets.
>> These should of course be retained.
>>
>> Readthedocs suggests a procedure to build from AsciiDoc sources.
>>
>>   https://docs.readthedocs.io/en/stable/build-customization.html#asciidoc
>>
>> Perhaps a PR to libzmq that adds something under libzmq/.github/ is the
>> path to get this new API doc target working?
>>
>>
>> -Brett.
>>
>> On Fri, Oct 20, 2023 at 6:09 AM Francesco 
>> wrote:
>> >
>> > Another point I forgot: I think it would be nice to switch to
>> https://about.readthedocs.com/ as a way to publish the libzmq API...
>> >
>> >
>> > Il giorno ven 20 ott 2023 alle ore 12:00 Francesco <
>> francesco.monto...@gmail.com> ha scritto:
>> >>
>> >> Hi all,
>> >> I'm happy to see that version 4.3.5 has been published, thanks Luca
>> and all other contributors for making that happen!
>> >>
>> >> However I noticed that http://api.zeromq.org/master:_start is still
>> mentioning version 4.3.2 of the API.
>> >>
>> >> Do you think it's possible to get there updated docs?
>> >> If there is any work to be done, I can try to help the best I can...
>> >>
>> >> Thanks,
>> >> Francesco
>> >>
>> >>
>> > ___
>> > zeromq-dev mailing list
>> > zeromq-dev@lists.zeromq.org
>> > https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>> ___
>> zeromq-dev mailing list
>> zeromq-dev@lists.zeromq.org
>> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>>
>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] ZeroMQ docs

2023-10-20 Thread Francesco
hi Brett,
thanks for your answer.
I checked zeromq.org (I had some trouble using Docker to get the website
up: https://github.com/zeromq/zeromq.org/issues/125 and then I installed
hugo locally, but I discovered it needs a quite old version (0.57.2) built in
"extended" mode). I'm not really a web developer so I'm not sure how
difficult it is to upgrade to latest "hugo" (
https://github.com/gohugoio/hugo/).
Anyway.
The thing is that api.zeromq.org is probably served by some other source. I
guess somebody has credentials to log on http://www.wikidot.com/ and update
that page, but I don't think there is much to do in the "zeromq.org" repo.
Of course I may be missing something.

Personally, I think the look of api.zeromq.org is not the best one.
readthedocs.io looks more like a de-facto standard for documentation in
open source world (in my view)...

thanks,
Francesco


Il giorno ven 20 ott 2023 alle ore 15:00 Brett Viren 
ha scritto:

> Hi Francesco,
>
> I agree a refresh of the online API docs would be good.  I think the
> zeromq.org website takes its content from:
>
>   https://github.com/zeromq/zeromq.org
>
> A PR to that repo is likely the first step to get zeromq.org updated.
>
> It would be extra good if the API docs for development and releases
> could be refreshed in a more automated way.
>
>
> I personally like having all the documentation under *.zeromq.org but I
> see benefit and no downside to also having a copy of the API docs served
> from readthedocs.
>
> The current API documentation source files are in AsciiDoc format under
> libzmq/doc/*.txt and there are HTML and Unix man page build targets.
> These should of course be retained.
>
> Readthedocs suggests a procedure to build from AsciiDoc sources.
>
>   https://docs.readthedocs.io/en/stable/build-customization.html#asciidoc
>
> Perhaps a PR to libzmq that adds something under libzmq/.github/ is the
> path to get this new API doc target working?
>
>
> -Brett.
>
> On Fri, Oct 20, 2023 at 6:09 AM Francesco 
> wrote:
> >
> > Another point I forgot: I think it would be nice to switch to
> https://about.readthedocs.com/ as a way to publish the libzmq API...
> >
> >
> > Il giorno ven 20 ott 2023 alle ore 12:00 Francesco <
> francesco.monto...@gmail.com> ha scritto:
> >>
> >> Hi all,
> >> I'm happy to see that version 4.3.5 has been published, thanks Luca and
> all other contributors for making that happen!
> >>
> >> However I noticed that http://api.zeromq.org/master:_start is still
> mentioning version 4.3.2 of the API.
> >>
> >> Do you think it's possible to get there updated docs?
> >> If there is any work to be done, I can try to help the best I can...
> >>
> >> Thanks,
> >> Francesco
> >>
> >>
> > ___
> > zeromq-dev mailing list
> > zeromq-dev@lists.zeromq.org
> > https://lists.zeromq.org/mailman/listinfo/zeromq-dev
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] ZeroMQ docs

2023-10-20 Thread Francesco
Another point I forgot: I think it would be nice to switch to
https://about.readthedocs.com/ as a way to publish the libzmq API...


Il giorno ven 20 ott 2023 alle ore 12:00 Francesco <
francesco.monto...@gmail.com> ha scritto:

> Hi all,
> I'm happy to see that version 4.3.5 has been published, thanks Luca and
> all other contributors for making that happen!
>
> However I noticed that http://api.zeromq.org/master:_start is still
> mentioning version 4.3.2 of the API.
>
> Do you think it's possible to get there updated docs?
> If there is any work to be done, I can try to help the best I can...
>
> Thanks,
> Francesco
>
>
>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


[zeromq-dev] ZeroMQ docs

2023-10-20 Thread Francesco
Hi all,
I'm happy to see that version 4.3.5 has been published, thanks Luca and all
other contributors for making that happen!

However I noticed that http://api.zeromq.org/master:_start is still
mentioning version 4.3.2 of the API.

Do you think it's possible to get there updated docs?
If there is any work to be done, I can try to help the best I can...

Thanks,
Francesco
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] Relicensing completion and feature removal, need clean-room reimplementations of zmq_proxy_steerable() and ZMQ_RECONNECT_IVL_MAX

2023-06-05 Thread Francesco
Ok never mind, I cannot reach out to him, sorry.


Il giorno lun 5 giu 2023 alle ore 15:54 Luca Boccassi <
luca.bocca...@gmail.com> ha scritto:

> Hi,
>
> It's mentioned in the issue:
>
> Laurent Alebarde 
> https://github.com/lalebarde
>
> On Mon, 5 Jun 2023 at 14:49, Francesco 
> wrote:
> >
> > Hi Luca,
> > sorry but can you list the references to the 3 authors that did not
> re-license these 3 features?
> > In particular the one about zmq_proxy_steerable()... I remember I did
> provide some patches and also one other colleague did... if he was the
> author of that feature I can surely reach out to him...
> >
> > Thanks,
> > Francesco
> >
> >
> > Il giorno lun 5 giu 2023 alle ore 13:49 Luca Boccassi 
> ha scritto:
> >>
> >> Hi,
> >>
> >> As you might or might not be aware, we have been working hard for years
> >> to complete the libzmq relicensing effort that Pieter started long ago,
> >> from LGPL3+exceptions to standard MPL2:
> >>
> >> https://github.com/zeromq/libzmq/issues/2376
> >>
> >> After a lot of work, we are down to only 3 grants missing, covering the
> >> following:
> >>
> >> - tweetnacl integration as alternative to libsodium (relicensing
> >> NACKed)
> >> - zmq_proxy_steerable() (no answer)
> >> - ZMQ_RECONNECT_IVL_MAX (no answer)
> >>
> >> We have been waiting for years, with many requests without responses
> >> for the latter two, and it doesn't make sense to wait anymore.
> >>
> >> So with this PR the above functionality will simply be removed:
> >>
> >> https://github.com/zeromq/libzmq/pull/4554
> >>
> >> I will merge it later today.
> >>
> >> Tweetnacl is really not a problem, it was always intended as a local-
> >> only thing to facilitate zmq_curve development, which is long done. The
> >> only supported production encryption implementation is libsodium, which
> >> is available everywhere, so I do not intend to put tweetnacl
> >> integration back. If anybody was using curve+tweetnacl in a production
> >> setting, they were doing something _very_ wrong and need to stop asap
> >> anyway. It's a footgun and it's best to be rid of it.
> >>
> >> The other two are more problematic, as they are public APIs. The PR
> >> changes them to be empty stubs that return EOPNOTSUPP (so that ABI
> >> doesn't change). But I will make at least an attempt to get a
> >> reimplementation so:
> >>
> >> If you are able to, and you have NEVER LOOKED AT THE PREVIOUS
> >> IMPLEMENTATIONS (I will require you to state this explicitly in the
> >> commit messages), please consider helping out and reimplementing these
> >> two APIs, based solely on the public documentation:
> >>
> >> http://api.zeromq.org/4-3:zmq-proxy-steerable
> >>
> http://api.zeromq.org/4-3:zmq-setsockopt#:~:text=connection%2Doriented%20transports-,ZMQ_RECONNECT_IVL_MAX,-%3A%20Set%20maximum%20reconnection
> >>
> >> I cannot stress this enough, it must be a clean-room reimplementation
> >> so no prior knowledge of the previous implementation details nor
> >> looking at the previos implementation is allowed (hence why I cannot do
> >> it myself).
> >>
> >> If you are able to help, please speak up. These are not difficult to
> >> add, especially the socket option should be very straightforward. I
> >> will of course be able to review and provide guidance.
> >>
> >> After merging the above PR I will complete the relicensing shortly
> >> after, taking advantage of the switch to also use the standard SPDX
> >> format in source files. The relicense grants will be moved to:
> >> https://github.com/rlenferink/libzmq-relicense
> >> for archival.
> >>
> >> --
> >> Kind regards,
> >> Luca Boccassi
> >> ___
> >> zeromq-dev mailing list
> >> zeromq-dev@lists.zeromq.org
> >> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
> >
> > ___
> > zeromq-dev mailing list
> > zeromq-dev@lists.zeromq.org
> > https://lists.zeromq.org/mailman/listinfo/zeromq-dev
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] Relicensing completion and feature removal, need clean-room reimplementations of zmq_proxy_steerable() and ZMQ_RECONNECT_IVL_MAX

2023-06-05 Thread Francesco
Hi Luca,
sorry but can you list the references to the 3 authors that did not
re-license these 3 features?
In particular the one about zmq_proxy_steerable()... I remember I did
provide some patches and also one other colleague did... if he was the
author of that feature I can surely reach out to him...

Thanks,
Francesco


Il giorno lun 5 giu 2023 alle ore 13:49 Luca Boccassi  ha
scritto:

> Hi,
>
> As you might or might not be aware, we have been working hard for years
> to complete the libzmq relicensing effort that Pieter started long ago,
> from LGPL3+exceptions to standard MPL2:
>
> https://github.com/zeromq/libzmq/issues/2376
>
> After a lot of work, we are down to only 3 grants missing, covering the
> following:
>
> - tweetnacl integration as alternative to libsodium (relicensing
> NACKed)
> - zmq_proxy_steerable() (no answer)
> - ZMQ_RECONNECT_IVL_MAX (no answer)
>
> We have been waiting for years, with many requests without responses
> for the latter two, and it doesn't make sense to wait anymore.
>
> So with this PR the above functionality will simply be removed:
>
> https://github.com/zeromq/libzmq/pull/4554
>
> I will merge it later today.
>
> Tweetnacl is really not a problem, it was always intended as a local-
> only thing to facilitate zmq_curve development, which is long done. The
> only supported production encryption implementation is libsodium, which
> is available everywhere, so I do not intend to put tweetnacl
> integration back. If anybody was using curve+tweetnacl in a production
> setting, they were doing something _very_ wrong and need to stop asap
> anyway. It's a footgun and it's best to be rid of it.
>
> The other two are more problematic, as they are public APIs. The PR
> changes them to be empty stubs that return EOPNOTSUPP (so that ABI
> doesn't change). But I will make at least an attempt to get a
> reimplementation so:
>
> If you are able to, and you have NEVER LOOKED AT THE PREVIOUS
> IMPLEMENTATIONS (I will require you to state this explicitly in the
> commit messages), please consider helping out and reimplementing these
> two APIs, based solely on the public documentation:
>
> http://api.zeromq.org/4-3:zmq-proxy-steerable
>
> http://api.zeromq.org/4-3:zmq-setsockopt#:~:text=connection%2Doriented%20transports-,ZMQ_RECONNECT_IVL_MAX,-%3A%20Set%20maximum%20reconnection
>
> I cannot stress this enough, it must be a clean-room reimplementation
> so no prior knowledge of the previous implementation details nor
> looking at the previous implementation is allowed (hence why I cannot do
> it myself).
>
> If you are able to help, please speak up. These are not difficult to
> add, especially the socket option should be very straightforward. I
> will of course be able to review and provide guidance.
>
> After merging the above PR I will complete the relicensing shortly
> after, taking advantage of the switch to also use the standard SPDX
> format in source files. The relicense grants will be moved to:
> https://github.com/rlenferink/libzmq-relicense
> for archival.
>
> --
> Kind regards,
> Luca Boccassi
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] [zeromq-announce] When is new version of libzmq getting released?

2023-05-16 Thread Francesco
Hi all,
Let me add myself (and actually the company I work for) as a +1 voters for
a new release.
We've been using libzmq in production for several years and we're rebuilding
it in our CI/CD from a specific master version of ~1yr ago. Still, having a
version 4.3.5 would be really good to clearly mark the point in time and
communicate to everyone that... well... the project is not dead! :)

If some help is needed to get the release done I think I can volunteer to
help...

Thanks,

Francesco



Il lun 15 mag 2023, 17:09 Bill Torpey  ha scritto:

> Hi All:
>
> FWIW, in my shop procedures to release code into prod are very strict, and
> versioning is a key part of that.  A single release consists of a dozen or
> so component packages — some of these are open-source project hosted by
> others  (e.g., https://github.com/zeromq/libzmq
> <https://github.com/nyfix/libzmq>), some are open-source projects that we
> host ourselves (e.g., https://github.com/nyfix/OZ), and some are internal
> closed-source projects.
>
> In order to build the open-source components, both our own and others’, we
> need to create a “parent” project that provides the required tooling,
> boilerplate, etc.  for our internal build process, and then pull in the
> open-source “core” (e.g., using git submodules).  For open-source projects
> that we don’t host ourselves, the submodule points to a fork that can
> contain commits that are essential to us, but for one reason or another
> have not (yet) been accepted upstream.
>
> As you can imagine, this is all a major PITA. Anything that makes this
> process easier to track and audit is helpful.
>
> I’ll also add that not having defined releases is a major impediment to
> incorporating ZeroMQ (or any other project) in a typical corporate
> environment.
>
> Regards,
>
> Bill
>
>
> On May 15, 2023, at 10:34 AM, Gaurav Gupta  wrote:
>
> Thanks to all for sharing their inputs.
>
> I would agree that it's time to create a new version. And 320 commits is
> not a small number, even if there is no significant feature in those 320
> commits.
>
> Would request the team to please release a new version
>
> Regards,
> Gaurav
>
> On Mon, May 15, 2023 at 8:03 PM Matthias Gabriel <
> matthias.gabr...@etit.tu-chemnitz.de> wrote:
>
>> Sorry, there was a typo:
>>
>> Maybe it helps turning the question around: what keeps us from releasing
>> the next version (point release). If nobody has a good argument then it's
>> time, I'd say :)
>> ___
>> zeromq-dev mailing list
>> zeromq-dev@lists.zeromq.org
>> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>>
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
>
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] Adding new zmq_getsockopt() to retrieve number of subscriptions from XPUB socket

2022-11-29 Thread Francesco
Hi everybody,

just for the sake of mailing list history: the feature has been merged
in master (thanks Luca!) and the new socket option is named
ZMQ_TOPICS_COUNT. It can be used against both PUB and SUB sockets and
I think it is very useful for debugging purposes.

Thanks,
Francesco


Il giorno ven 18 nov 2022 alle ore 16:14 Francesco
 ha scritto:
>
> Hi everybody,
>
> A PR implementing this idea (ZMQ_SUBSCRIPTION socket option) is now
> available here:
>   https://github.com/zeromq/libzmq/pull/4459
>
> Any comment is appreciated,
>
> thanks,
> Francesco
>
>
>
> Il giorno mer 16 nov 2022 alle ore 16:10 Bill Torpey
>  ha scritto:
> >
> > Hi Francesco:
> >
> > Just to be clear, I’m not a maintainer, just an interested party.  (At my 
> > day job I created https://github.com/nyfix/OZ which powers 
> > https://www.broadridge.com/financial-services/capital-markets/trading-and-connectivity/order-routing-network).
> >   I believe that Luca is currently the main person responsible for the repo.
> >
> > As for your proposed PR, anything that provides more visibility to what is 
> > going on “under the hood” with ZeroMQ is A Good Thing, I think.
> >
> > Regards,
> >
> > Bill
> >
> > On Nov 16, 2022, at 6:54 AM, Francesco  wrote:
> >
> > Hi Bill,
> > ok thanks, sure. I can prepare such PR... I just wanted to get a
> > feedback from other maintainers... I think PRs are mostly reviewed and
> > merged by Luca at this point right?
> >
> > Luca,
> > what do you think about my proposal of new getsockopt to get number of
> > actual subscriptions?
> > Example usage:
> >
> > /* Retrieve number of subscriptions */
> > int subscriptions;
> > size_t subscriptions_size = sizeof (subscriptions);
> > rc = zmq_getsockopt (socket, ZMQ_SUBSCRIPTION_COUNT, &subscriptions,
> > &subscriptions_size);
> >
> > // NOTE: ZMQ_SUBSCRIPTION_COUNT would be applicable only to XPUB, PUB,
> > XSUB, SUB socket types
> >
> >
> > Thanks,
> > Francesco
> >
> >
> > PS: I think it would be nice to have visibility about subscriptions
> > added/removed also on the socket monitor... but that's a lot more
> > detailed information... I think the basic use case is just to get the
> > whole number of subscriptions (for debugging you often know how many
> > subscriptions were sent and it's useful to check if any subscription
> > has been dropped for some reason)
> >
> >
> >
> > Il giorno mer 16 nov 2022 alle ore 01:32 Bill Torpey
> >  ha scritto:
> >
> >
> > Sorry Francesco — I meant your PR, I just mixed up the names.
> >
> > B.
> >
> > On Nov 15, 2022, at 5:01 PM, Francesco  wrote:
> >
> > Hi Bill,
> >
> > Arnaud’s PR sounds useful — more visibility can only be a good thing.
> >
> >
> > sorry I'm missing which PR you are talking about... is there an
> > existing PR to add more visibility (I'd love that)? Or you're
> > referring to the proposal I did in my first mail?
> >
> > thanks,
> > Francesco
> >
> > Il giorno mar 15 nov 2022 alle ore 22:58 Bill Torpey
> >  ha scritto:
> >
> >
> > The problem with all the socket monitor stuff is that it’s async — that 
> > makes it dangerous to act on.  It’s great for monitoring/debugging -- for 
> > real-time control not so much.
> >
> > Arnaud’s PR sounds useful — more visibility can only be a good thing.
> >
> > Bill
> >
> > On Nov 15, 2022, at 10:43 AM, Arnaud Loonstra  wrote:
> >
> > On 15-11-2022 15:57, Francesco wrote:
> >
> > Hi zeromq team,
> > For "observability" / debugging I think it would be really really
> > useful to be able to retrieve the number of subscriptions recorded by
> > the 'mtrie_t' object inside a (X)PUB socket.
> > Would you accept a PR adding such option?
> > Thanks,
> > Francesco
> >
> >
> > Isn't that possible through the socket monitor?
> >
> > http://api.zeromq.org/4-1:zmq-socket-monitor
> >
> > Rg,
> >
> > Arnaud
> > ___
> > zeromq-dev mailing list
> > zeromq-dev@lists.zeromq.org
> > https://lists.zeromq.org/mailman/listinfo/zeromq-dev
> >
> >
> > ___
> > zeromq-dev mailing list
> > zeromq-dev@lists.zeromq.org
> > https://lists.zeromq.org/mailman/listinfo/zeromq-dev
> >
> > ___
> > zeromq-dev mailing list
> > zeromq-dev@lists.zeromq.org
> > https://lists.zeromq.org/mailman/listinfo/zeromq-dev
> >
> >
> > ___
> > zeromq-dev mailing list
> > zeromq-dev@lists.zeromq.org
> > https://lists.zeromq.org/mailman/listinfo/zeromq-dev
> >
> > ___
> > zeromq-dev mailing list
> > zeromq-dev@lists.zeromq.org
> > https://lists.zeromq.org/mailman/listinfo/zeromq-dev
> >
> >
> > ___
> > zeromq-dev mailing list
> > zeromq-dev@lists.zeromq.org
> > https://lists.zeromq.org/mailman/listinfo/zeromq-dev
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] Adding new zmq_getsockopt() to retrieve number of subscriptions from XPUB socket

2022-11-18 Thread Francesco
Hi everybody,

A PR implementing this idea (ZMQ_SUBSCRIPTION socket option) is now
available here:
  https://github.com/zeromq/libzmq/pull/4459

Any comment is appreciated,

thanks,
Francesco



Il giorno mer 16 nov 2022 alle ore 16:10 Bill Torpey
 ha scritto:
>
> Hi Francesco:
>
> Just to be clear, I’m not a maintainer, just an interested party.  (At my day 
> job I created https://github.com/nyfix/OZ which powers 
> https://www.broadridge.com/financial-services/capital-markets/trading-and-connectivity/order-routing-network).
>   I believe that Luca is currently the main person responsible for the repo.
>
> As for your proposed PR, anything that provides more visibility to what is 
> going on “under the hood” with ZeroMQ is A Good Thing, I think.
>
> Regards,
>
> Bill
>
> On Nov 16, 2022, at 6:54 AM, Francesco  wrote:
>
> Hi Bill,
> ok thanks, sure. I can prepare such PR... I just wanted to get a
> feedback from other maintainers... I think PRs are mostly reviewed and
> merged by Luca at this point right?
>
> Luca,
> what do you think about my proposal of new getsockopt to get number of
> actual subscriptions?
> Example usage:
>
> /* Retrieve number of subscriptions */
> int subscriptions;
> size_t subscriptions_size = sizeof (subscriptions);
> rc = zmq_getsockopt (socket, ZMQ_SUBSCRIPTION_COUNT, &subscriptions,
> &subscriptions_size);
>
> // NOTE: ZMQ_SUBSCRIPTION_COUNT would be applicable only to XPUB, PUB,
> XSUB, SUB socket types
>
>
> Thanks,
> Francesco
>
>
> PS: I think it would be nice to have visibility about subscriptions
> added/removed also on the socket monitor... but that's a lot more
> detailed information... I think the basic use case is just to get the
> whole number of subscriptions (for debugging you often know how many
> subscriptions were sent and it's useful to check if any subscription
> has been dropped for some reason)
>
>
>
> Il giorno mer 16 nov 2022 alle ore 01:32 Bill Torpey
>  ha scritto:
>
>
> Sorry Francesco — I meant your PR, I just mixed up the names.
>
> B.
>
> On Nov 15, 2022, at 5:01 PM, Francesco  wrote:
>
> Hi Bill,
>
> Arnaud’s PR sounds useful — more visibility can only be a good thing.
>
>
> sorry I'm missing which PR you are talking about... is there an
> existing PR to add more visibility (I'd love that)? Or you're
> referring to the proposal I did in my first mail?
>
> thanks,
> Francesco
>
> Il giorno mar 15 nov 2022 alle ore 22:58 Bill Torpey
>  ha scritto:
>
>
> The problem with all the socket monitor stuff is that it’s async — that makes 
> it dangerous to act on.  It’s great for monitoring/debugging -- for real-time 
> control not so much.
>
> Arnaud’s PR sounds useful — more visibility can only be a good thing.
>
> Bill
>
> On Nov 15, 2022, at 10:43 AM, Arnaud Loonstra  wrote:
>
> On 15-11-2022 15:57, Francesco wrote:
>
> Hi zeromq team,
> For "observability" / debugging I think it would be really really
> useful to be able to retrieve the number of subscriptions recorded by
> the 'mtrie_t' object inside a (X)PUB socket.
> Would you accept a PR adding such option?
> Thanks,
> Francesco
>
>
> Isn't that possible through the socket monitor?
>
> http://api.zeromq.org/4-1:zmq-socket-monitor
>
> Rg,
>
> Arnaud
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
>
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
>
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
>
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] Adding new zmq_getsockopt() to retrieve number of subscriptions from XPUB socket

2022-11-16 Thread Francesco
Hi Bill,
ok thanks, sure. I can prepare such a PR... I just wanted to get feedback
from other maintainers... I think PRs are mostly reviewed and merged by
Luca at this point, right?

Luca,
what do you think about my proposal of new getsockopt to get number of
actual subscriptions?
Example usage:

/* Retrieve number of subscriptions */
int subscriptions;
size_t subscriptions_size = sizeof (subscriptions);
rc = zmq_getsockopt (socket, ZMQ_SUBSCRIPTION_COUNT, &subscriptions,
&subscriptions_size);

// NOTE: ZMQ_SUBSCRIPTION_COUNT would be applicable only to XPUB, PUB,
XSUB, SUB socket types


Thanks,
Francesco


PS: I think it would be nice to also have visibility of subscriptions
being added/removed on the socket monitor... but that's much more detailed
information... I think the basic use case is just to get the total number
of subscriptions (for debugging you often know how many subscriptions were
sent, and it's useful to check whether any subscription has been dropped
for some reason)



Il giorno mer 16 nov 2022 alle ore 01:32 Bill Torpey
 ha scritto:
>
> Sorry Francesco — I meant your PR, I just mixed up the names.
>
> B.
>
> > On Nov 15, 2022, at 5:01 PM, Francesco  wrote:
> >
> > Hi Bill,
> >
> >> Arnaud’s PR sounds useful — more visibility can only be a good thing.
> >
> > sorry I'm missing which PR you are talking about... is there an
> > existing PR to add more visibility (I'd love that)? Or you're
> > referring to the proposal I did in my first mail?
> >
> > thanks,
> > Francesco
> >
> > Il giorno mar 15 nov 2022 alle ore 22:58 Bill Torpey
> >  ha scritto:
> >>
> >> The problem with all the socket monitor stuff is that it’s async — that 
> >> makes it dangerous to act on.  It’s great for monitoring/debugging -- for 
> >> real-time control not so much.
> >>
> >> Arnaud’s PR sounds useful — more visibility can only be a good thing.
> >>
> >> Bill
> >>
> >>> On Nov 15, 2022, at 10:43 AM, Arnaud Loonstra  wrote:
> >>>
> >>> On 15-11-2022 15:57, Francesco wrote:
> >>>> Hi zeromq team,
> >>>> For "observability" / debugging I think it would be really really
> >>>> useful to be able to retrieve the number of subscriptions recorded by
> >>>> the 'mtrie_t' object inside a (X)PUB socket.
> >>>> Would you accept a PR adding such option?
> >>>> Thanks,
> >>>> Francesco
> >>>
> >>> Isn't that possible through the socket monitor?
> >>>
> >>> http://api.zeromq.org/4-1:zmq-socket-monitor
> >>>
> >>> Rg,
> >>>
> >>> Arnaud
> >>> ___
> >>> zeromq-dev mailing list
> >>> zeromq-dev@lists.zeromq.org
> >>> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
> >>
> >> ___
> >> zeromq-dev mailing list
> >> zeromq-dev@lists.zeromq.org
> >> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
> > ___
> > zeromq-dev mailing list
> > zeromq-dev@lists.zeromq.org
> > https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] Adding new zmq_getsockopt() to retrieve number of subscriptions from XPUB socket

2022-11-15 Thread Francesco
Hi Bill,

> Arnaud’s PR sounds useful — more visibility can only be a good thing.

sorry, I'm missing which PR you are talking about... is there an
existing PR to add more visibility (I'd love that)? Or are you
referring to the proposal I made in my first mail?

thanks,
Francesco

Il giorno mar 15 nov 2022 alle ore 22:58 Bill Torpey
 ha scritto:
>
> The problem with all the socket monitor stuff is that it’s async — that makes 
> it dangerous to act on.  It’s great for monitoring/debugging -- for real-time 
> control not so much.
>
> Arnaud’s PR sounds useful — more visibility can only be a good thing.
>
> Bill
>
> > On Nov 15, 2022, at 10:43 AM, Arnaud Loonstra  wrote:
> >
> > On 15-11-2022 15:57, Francesco wrote:
> >> Hi zeromq team,
> >> For "observability" / debugging I think it would be really really
> >> useful to be able to retrieve the number of subscriptions recorded by
> >> the 'mtrie_t' object inside a (X)PUB socket.
> >> Would you accept a PR adding such option?
> >> Thanks,
> >> Francesco
> >
> > Isn't that possible through the socket monitor?
> >
> > http://api.zeromq.org/4-1:zmq-socket-monitor
> >
> > Rg,
> >
> > Arnaud
> > ___
> > zeromq-dev mailing list
> > zeromq-dev@lists.zeromq.org
> > https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] Adding new zmq_getsockopt() to retrieve number of subscriptions from XPUB socket

2022-11-15 Thread Francesco
Hi Arnaud,
As far as I know (from reading the docs), zmq_socket_monitor() and also the
newer zmq_socket_monitor_versioned() do not allow monitoring
subscriptions, just endpoint-level events... what am I missing?

Thanks,
Francesco

Il giorno mar 15 nov 2022 alle ore 16:43 Arnaud Loonstra
 ha scritto:
>
> On 15-11-2022 15:57, Francesco wrote:
> > Hi zeromq team,
> > For "observability" / debugging I think it would be really really
> > useful to be able to retrieve the number of subscriptions recorded by
> > the 'mtrie_t' object inside a (X)PUB socket.
> >
> > Would you accept a PR adding such option?
> >
> > Thanks,
> > Francesco
>
> Isn't that possible through the socket monitor?
>
> http://api.zeromq.org/4-1:zmq-socket-monitor
>
> Rg,
>
> Arnaud
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


[zeromq-dev] Adding new zmq_getsockopt() to retrieve number of subscriptions from XPUB socket

2022-11-15 Thread Francesco
Hi zeromq team,
For "observability" / debugging I think it would be really really
useful to be able to retrieve the number of subscriptions recorded by
the 'mtrie_t' object inside a (X)PUB socket.

Would you accept a PR adding such option?

Thanks,
Francesco
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] Time-reordering queue

2022-11-10 Thread Francesco
Hi Bill,

> At the risk of telling you something you already know, have you thought
about how these timestamps are being generated?  I assume that the multiple
PUBs are, or at least can be, located on different machines.  In that
scenario the problem of keeping clocks in sync between the machines becomes
non-trivial.

Yes, you're right: the producers will (not always) be on different servers.
These microservices are actually running in a Kubernetes cluster and we
ensure that all the worker nodes have a valid NTP configuration (typically
referring to stratum 1 clock sources)... I know that the time sync across
the servers will most likely be in the 500 usec-5 msec range, and honestly,
once we are in the msec range the accuracy is very bad considering that the
microservices exchange up to 300-500 kmsg/sec, taking about 1Gbps of
bandwidth for each link...
However, that's the best we have currently.

>  The best explanation I’ve seen is this:
https://queue.acm.org/detail.cfm?id=2878574, which in turn references
Lamport’s seminal work:
http://lamport.azurewebsites.net/pubs/time-clocks.pdf.

Thanks - I read the first one about NTP and PTP and it's a good basic
introduction to these 2 protocols.
On the same page I suggest another not-so-technical reading:
https://www.newyorker.com/tech/annals-of-technology/the-thorny-problem-of-keeping-the-internets-time
That article goes over the history of NTP and its creator and long-time
maintainer...

Thanks!

Francesco



Il giorno gio 10 nov 2022 alle ore 15:04 Bill Torpey 
ha scritto:

> Hi Francesco:
>
> At the risk of telling you something you already know, have you thought
> about how these timestamps are being generated?  I assume that the multiple
> PUBs are, or at least can be, located on different machines.  In that
> scenario the problem of keeping clocks in sync between the machines becomes
> non-trivial.
>
> The best explanation I’ve seen is this:
> https://queue.acm.org/detail.cfm?id=2878574, which in turn references
> Lamport’s seminal work:
> http://lamport.azurewebsites.net/pubs/time-clocks.pdf.
>
> Hope this helps.
>
> Bill Torpey
>
>
> On Nov 9, 2022, at 5:19 PM, Francesco 
> wrote:
>
> Hi all,
>
> I have written two applications using ZMQ PUB-SUB pattern (over TCP
> transport).
> The subscriber application has its SUB socket connected to multiple PUBs
> (multiple tcp endpoints). Each message sent by the PUB encodes the
> timestamp (as obtained from clock_gettime() syscall at TX side using
> monotonically increasing clock) of the event described by the ZMQ message.
>
> The subscriber needs to process the data stream _strictly_ in order.
> However the multiple publishers have no coordination and they will emit
> messages at different rates, each with its own timestamp. The only
> guarantee that I have, according to ZMQ docs, is that the SUB socket will
> perform "fair dequeueing", but that's not enough to guarantee that every
> zmq_msg_t received from the SUB socket will have a monotonically increasing
> timestamp: it depends on the filling level of the TCP rx/tx kernel buffers,
> the zmq HWMs, etc.
>
> For this reason I'm looking for some algorithm that
> * allows me to push zmq_msg_t pulled out of the SUB socket (without strict
> time ordering)
> * allows me to pull out zmq_msg_t that have a timestamp monotonically
> increasing
> * introduces a fixed max latency of N msecs (configurable)
>
> Do you have any pointer for such kind of problem?
> Anybody already hit a similar issue?
>
> Thanks for any help,
>
> Francesco Montorsi
>
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
>
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] Time-reordering queue

2022-11-10 Thread Francesco
Hi Brett,
thanks, this is really very helpful!
I went through the README and the code... just one question:

As the very first paragraph indicates, "The streams need not be
synchronized in (real) time but must be strictly ordered in each
stream"... I guess that, translated into a ZMQ context, it means the data
structure is designed to sort (by time) packets received over a set of k
different SUB (or similar) sockets, each having a single endpoint; in that
context each ZMQ SUB socket would be one stream for the zipper. Is this the
only way to satisfy the ordering criterion inside each stream?
I'm asking because in my case the application uses just one single SUB
socket that is connected with zmq_connect() to a number N of TCP endpoints.
That makes the order of packets obtained from zmq_msg_recv() not
time-ordered...

Thanks!!

Francesco
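
A stdlib-only illustration of the zipper idea discussed in this thread: a
min-heap priority queue keyed on timestamp, popped only once a configurable
maximum latency has elapsed. This is my own sketch, not Brett's C++ API;
all names here are hypothetical:

```python
import heapq

class Zipper:
    """Merge timestamped messages from several streams into one
    monotonically time-ordered stream, with a bounded added latency."""

    def __init__(self, max_latency):
        self.max_latency = max_latency   # seconds to wait for stragglers
        self._heap = []                  # (timestamp, arrival_time, payload)
        self._last_popped = float("-inf")

    def push(self, timestamp, payload, now):
        # A message older than the last one already released would break
        # the ordering guarantee, so it is dropped (the lossy policy).
        if timestamp < self._last_popped:
            return False
        heapq.heappush(self._heap, (timestamp, now, payload))
        return True

    def pop(self, now):
        # Release the oldest message only after it has waited max_latency,
        # giving slower streams a chance to deliver earlier timestamps.
        if self._heap and now - self._heap[0][1] >= self.max_latency:
            timestamp, _, payload = heapq.heappop(self._heap)
            self._last_popped = timestamp
            return timestamp, payload
        return None
```

In a real application `now` would come from the system clock and the
payloads would be zmq_msg_t contents pulled from the SUB socket.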


Il giorno gio 10 nov 2022 alle ore 11:12 Brett Viren 
ha scritto:

> Hi Francesco,
>
> I implemented such an algorithm in C++ which I call "zipper".
>
> The idea is simply to maintain a min-heap priority queue keyed on the
> timestamp and surround that with policy logic to decide when to push
> and pop based on examining the system clock.  I've implemented two
> policies.  Either a maximum latency bound is asserted at the cost of
> possible message loss or the merge is lossless at the risk of unbound
> latency.
>
> It is a rather simple pattern and this description alone may be enough
> to implement it yourself but you may also take a look at this repo
> with code, performance results and other docs.
>
> https://github.com/brettviren/zipper
>
> Though I failed to make it explicit, this code may be considered
> licensed under the LGPL.  Let me know if you wish to use the code and
> I'll add proper license info.
>
> The zipper.hpp implementation is in terms of C++ data objects and
> independent from zeromq per se (only needs C++ standard library).
> But, it was written with the assumption that it would be sandwiched
> between ZeroMQ input and output sockets.  Providing a layer to marshal
> data in to / out from the zipper is then the duty of the application.
>
> Note, my repo was for development purposes.  The zipper.hpp file was
> then copied into a production repository and that copy may have some
> bug fixes which I have not ported back to the stand-alone development
> version.  The production version is here:
>
> https://github.com/DUNE-DAQ/trigger/blob/develop/plugins/zipper.hpp
>
> -Brett.
>
> On Wed, Nov 9, 2022 at 5:20 PM Francesco 
> wrote:
> >
> > Hi all,
> >
> > I have written two applications using ZMQ PUB-SUB pattern (over TCP
> transport).
> > The subscriber application has its SUB socket connected to multiple PUBs
> (multiple tcp endpoints). Each message sent by the PUB encodes the
> timestamp (as obtained from clock_gettime() syscall at TX side using
> monotonically increasing clock) of the event described by the ZMQ message.
> >
> > The subscriber needs to process the data stream _strictly_ in order.
> However the multiple publishers have no coordination and they will emit
> messages at different rates, each with its own timestamp. The only
> guarantee that I have, according to ZMQ docs, is that the SUB socket will
> perform "fair dequeueing", but that's not enough to guarantee that every
> zmq_msg_t received from the SUB socket will have a monotonically increasing
> timestamp: it depends on the filling level of the TCP rx/tx kernel buffers,
> the zmq HWMs, etc.
> >
> > For this reason I'm looking for some algorithm that
> > * allows me to push zmq_msg_t pulled out of the SUB socket (without
> strict time ordering)
> > * allows me to pull out zmq_msg_t that have a timestamp monotonically
> increasing
> > * introduces a fixed max latency of N msecs (configurable)
> >
> > Do you have any pointer for such kind of problem?
> > Anybody already hit a similar issue?
> >
> > Thanks for any help,
> >
> > Francesco Montorsi
> >
> > ___
> > zeromq-dev mailing list
> > zeromq-dev@lists.zeromq.org
> > https://lists.zeromq.org/mailman/listinfo/zeromq-dev
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


[zeromq-dev] Time-reordering queue

2022-11-09 Thread Francesco
Hi all,

I have written two applications using ZMQ PUB-SUB pattern (over TCP
transport).
The subscriber application has its SUB socket connected to multiple PUBs
(multiple tcp endpoints). Each message sent by the PUB encodes the
timestamp (as obtained from clock_gettime() syscall at TX side using
monotonically increasing clock) of the event described by the ZMQ message.

The subscriber needs to process the data stream _strictly_ in order.
However the multiple publishers have no coordination and they will emit
messages at different rates, each with its own timestamp. The only
guarantee that I have, according to ZMQ docs, is that the SUB socket will
perform "fair dequeueing", but that's not enough to guarantee that every
zmq_msg_t received from the SUB socket will have a monotonically increasing
timestamp: it depends on the filling level of the TCP rx/tx kernel buffers,
the zmq HWMs, etc.

For this reason I'm looking for some algorithm that
* allows me to push zmq_msg_t pulled out of the SUB socket (without strict
time ordering)
* allows me to pull out zmq_msg_t that have a timestamp monotonically
increasing
* introduces a fixed max latency of N msecs (configurable)

Do you have any pointer for such kind of problem?
Anybody already hit a similar issue?

Thanks for any help,

Francesco Montorsi
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] How to tell from an exception thrown by recv_string() if this exception is for the listening socket or for the client socket?

2022-10-09 Thread Francesco
Hi Torsten, Yuri,
I'm not a core developer of ZMQ, but I've been a ZMQ user for many years...
here's my take on this:

Il giorno dom 9 ott 2022 alle ore 09:49 Torsten Wierschin <
torsten.wiersc...@gmail.com> ha scritto:

> Yuri  schrieb am Sa., 8. Okt. 2022, 20:46:
>
>> On 10/8/22 07:09, orzodk wrote:
>> > My understanding of ZMQ is that the implementation details of the "under
>> > the hood" socket are hidden from you intentionally. I'm not sure how one
>> > would catch that. Hopefully someone else can answer.
>>
>>
>> But the abstraction level is too deep and it prevents access to
>> important and relevant information.
>>
> I agree.
>
> The scenerio is:
> - connection established and working
> - server now vanishes unintentionally
> - client side is not able to reestablish connection iff server reappears
>
> The abstraction level at first seems unable to handle such a simple
> scenario perhaps.
But if you read the ZMQ guide, one of the concepts it conveys is that you
need to build some protocol on top of the ZMQ transport that fulfills all
your application needs.
In other words: if your need is to ensure that 100% of the time there is a
point-to-point connection (server-client) "working" (usable to move
bytes/information between the 2 points), then you should e.g. design "keep
alive" frames (or "ping/pong") in your protocol so that both sides have the
ability to detect an unhealthy connection and react.
Concretely, in your scenario above: if you have ping/pong frames and logic
to check how much time has elapsed since the last "ping", you will be able
to detect on both application sides that the TCP server has vanished.
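
As an illustration only (a hypothetical helper of mine, not part of the ZMQ
API), the receive-side bookkeeping of such a keep-alive protocol can be
sketched as:

```python
class PeerLiveness:
    """Track when a "ping" was last received from each peer and report
    the peers whose pings have stopped arriving."""

    def __init__(self, timeout):
        self.timeout = timeout  # seconds of silence before a peer is stale
        self._last_seen = {}

    def on_ping(self, peer, now):
        # Call this whenever a ping/pong frame arrives from the peer.
        self._last_seen[peer] = now

    def stale_peers(self, now):
        # Peers silent for longer than the timeout: the application
        # should treat these connections as unhealthy and react.
        return [peer for peer, t in self._last_seen.items()
                if now - t > self.timeout]
```

The application would feed `now` from a monotonic clock and periodically
act on `stale_peers()` (e.g. reconnect or raise an alarm).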

You might argue that just handling the "listening TCP socket error" would
be easier than building ping/pong frames, timeout logic, etc.
However, consider that handling such TCP-server-level errors would not be
enough to detect "stale connections" or dysfunctional networking; let me
give a very practical example from my own experience: I've written
applications that are deployed inside a Kubernetes cluster using the Istio
service mesh (https://istio.io/latest/docs/ops/deployment/architecture/):
[image: image.png]
In such context, all TCP connections between servers/clients are
transparently redirected to the Envoy sidecar. Real data flow happens only
between 2 Envoy sidecars.
Sometimes it happens (for a number of reasons) that the TCP connection
between 2 Envoys breaks. My app would never realize that by just handling
the "listening TCP socket error": the TCP listen server on "Service A" in
that Istio architecture picture is running just fine; if the problem
happens on the green "Mesh traffic" line, the only way you can detect it is
to have ping/pong frames (or some other protocol-level indication).

This is just an example of a problem that does not directly impact the TCP
sockets of the 2 servers where your applications are running, but that
results in their inability to communicate.

So in some sense I can say that the ZMQ abstraction forces you to write
reliable protocols/applications that take a "holistic" approach to
networking, without restricting your focus to just the most obvious
networking issues (like e.g. a server that cannot start, with errno "port
already in use").

HTH,
Francesco
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] ZMQ I/O threads CPU usage

2021-04-11 Thread Francesco
Hi all,

An update on this topic: I didn't give up yet :)
I'm trying to rewrite the ZMQ proxy code so that it includes a "batching
queue" in the frontend->backend direction (that's the direction where, in
my use case, most of the data moves).
The intent is to have such a "batching queue" let me better tune the
"throughput vs latency" tradeoff. I hope I can share positive results soon.

I have a question on the current code in proxy.cpp (the version using ZMQ
poller mechanism). I see there's a lot of logic that will substitute the
in-use poller depending on whether the frontend socket or the backend
socket gets blocked or not.
My question is: is this logic really necessary? Is there some side effect
in using zmq_poller_wait_all() on a socket in mute state having reached its
HWM?

Thanks,
Francesco





Il giorno ven 2 apr 2021 alle ore 16:06 Brett Viren 
ha scritto:

> Francesco  writes:
>
> > Here's what I get varying the spin_loop duration:
>
> Thanks for sharing it.  It's cool to see the effect in action!
>
> > Do you think the cpu load of zmq background thread would be caused by
> > the much more frequent TCP ACKs coming from the SUB when the
> > "batching" suddenly stops happening ?
>
> Well, I don't know enough about all the mechanisms to personally say it
> is the TCP ACKs which are the driver of the effect.  Though, that
> certainly sounds reasonable to me.
>
> > Your suggestion is that if the application thread is fast enough (spin
> loop is "short enough")
> > then the while() loop body is actually executed 2-3-4 times and we
> send() a large TCP packet,
> > thereby reducing both syscall overhead and number of TCP acks from the
> SUB (and thus kernel
> > overhead).
> > If instead the  application thread is not fast enough (spin loop is "too
> long") then the while
> > () loop body executes only once and we send my 300B frames one by one to
> the zmq::tcp_write()
> > and send() syscall. That would kill performances of zmq background
> thread.
> > Is that correct?
>
> Yep, that's the basic premise I had.
>
> Though, I don't know the exact mechanisms beyond "more stuff happens
> when many, tiny packets are sent". :)
>
> > Now the other 1M$ question: if that's the case, is there any tuning I
> > can do to force the zmq background thread to wait for some time before
> > invoking send() ?
>
> > I'm thinking that I could try to change the option TCP_NODELAY that is
> set on the tcp socket
> > with the option TCP_CORK instead and see what happens. In this way I
> basically go to the
> > opposite direction in the throughput-vs-latency tradeoff ...
> > Or maybe I could change libzmq source code to invoke tcp_write() only
> e.g. every N times
> > out_event() is invoked? I think I risk getting some bytes stuck into the
> stream engine if at
> > some point I stop sending out messages though
> >
> > Any other suggestion?
>
> Nothing specific.
>
> As you say, it's a throughput-vs-latency problem.  And in this case it
> is a bit more complicated because the particular size/time parameters
> bring the problem to a place where the Nagle "step function" matters.
>
> Two approaches to try, with maybe not much hope of huge improvements, is
> to push Nagle's algorithm out of libzmq and either back down to the TCP
> stack or up into the application layer.
>
> I don't know how to tell libzmq to give this optimization back to the
> TCP stack.  I recall reading (maybe on this list) about someone doing
> work in this direction.  I also don't remember the outcome of that work
> but I'd guess there was not much benefit.  The libzmq developers took
> the effort to bring Nagle up into libzmq (presumably) because libzmq has
> more knowledge than exists down in the TCP stack and so can perform the
> optimization more... er, optimally.
>
> Likewise, doing message batching in the application may or may not help.
> But, in this case it would be rather easy to try.  And there are two
> approaches to try.  Either send N 300B parts in an N-part multipart
> message or enact join/split operations in the application layer.
>
> In particular, if the application can directly deal with concatenated
> parts so no explicit join/split is required, then you may solve this
> problem.  At least, reading N 300B blocks "in place" on the recv() side
> should be easy enough.  As an example, zproto-generated code uses this
> trick to "unpack-in-place" highly structured data.
>
>
> My other general suggestion is to step back and see what the application
> actually requires w.r.t. throughput-vs-latency.  Testing the

Re: [zeromq-dev] ZMQ Proxy crash in every 3 days

2021-04-05 Thread Francesco
Hi Ashok,

Not sure what you mean by "crash" in a Python context; however, if this
can help somehow, here's my experience with the ZMQ proxy: in my company
we've been running it 24/7 for months without issues. There are some
differences though:
a) we use libzmq C API from a C/C++ software
b) the proxy is of type XSUB/XPUB like yours but the transport for frontend
socket is "inproc"
c) we use zmq_proxy_steerable()

As I said, this is probably not helping much, but the message is: I have
experience with the proxy running in all sorts of corner conditions (mute
state on some socket, all queues full, etc.) and have never found any issue
so far.

Francesco

Il giorno lun 5 apr 2021 alle ore 07:08 Ashok Kumar Karasala <
ashokrj...@gmail.com> ha scritto:

> Hi Team,
>
> We are facing a ZMQ proxy crash every 3 days and when we look at the
> system graphs we don't see any CPU or memory spikes. From the code, we have
> gracefully handled the socket's initialization and termination.
>
>
> We couldn't trace back to any system resource being the issue to crash. Is
> there any way to debug this?
>
> At the time of crash :
>  CPU, Memory and open file's count are under the allocated limits.
>
>
> Proxy code :
>
> def main(lang):
>     global context
>     context = zmq.Context()
>     frontend = context.socket(zmq.XPUB)
>     frontend.bind("tcp://*:%s" % (,))
>
>     backend = context.socket(zmq.XSUB)
>     backend.bind("tcp://*:%s" % (,))
>
>     try:
>         zmq.proxy(frontend, backend)
>     except Exception as e:
>         print(e)
>
>     frontend.close()
>     backend.close()
>     context.term()
>
> Thanks & Regards,
>
> Ashok K.
>
>
>
>
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] ZMQ I/O threads CPU usage

2021-04-01 Thread Francesco
Hi Brett,
thanks for your email, it helped a lot!
I think you're right. Here's what I get varying the spin_loop duration:

* spin_loop_time = 2.5usec (my original test) -> ZMQ background thread 100% cpu usage
* spin_loop_time = 2.0usec -> ZMQ background thread 100% cpu usage
* spin_loop_time = 1.6usec -> ZMQ background thread 100% cpu usage
* spin_loop_time = 1.590usec -> ZMQ background thread 100% cpu usage
* spin_loop_time = 1.585usec -> ZMQ background thread 10% cpu usage (a bit weird)
* spin_loop_time = 1.580usec -> ZMQ background thread 17% cpu usage
* spin_loop_time = 1.570usec -> ZMQ background thread 15% cpu usage
* spin_loop_time = 1.560usec -> ZMQ background thread 15% cpu usage
* spin_loop_time = 1.550usec -> ZMQ background thread 10% cpu usage
* spin_loop_time = 1.40usec -> ZMQ background thread 10% cpu usage
* spin_loop_time = 0.01usec -> ZMQ background thread 10% cpu usage

All of these use the same pkt size of 300B.
In practice this is a step function, I would say: a difference of just
10-20 nsec is enough to jump from 10-15% to 100% cpu usage. It's
incredible.
My spin loop routine probably has a fixed offset of about 150 nsec due to
the clock_gettime(CLOCK_REALTIME) routine, but I think what matters here is
the delta, not the absolute numbers. Any small variation in the pkt size,
the HW of the pub, or the NIC would probably slightly change the slope of
the "step function".

Many thanks for suggesting this test. TCP "inefficiency" was my first
suspect (see my other email thread "Inefficient TCP connection for my
PUB-SUB zmq communication") but I was expecting a much much smoother
transition.
Do you think the cpu load of zmq background thread would be caused by the
much more frequent TCP ACKs coming from the SUB when the "batching"
suddenly stops happening ?

I guess the relevant point of zmq code is this one:

void zmq::stream_engine_base_t::out_event ()
{
    [...]
    _outpos = NULL;
    _outsize = _encoder->encode (&_outpos, 0);

    while (_outsize < static_cast<size_t> (_options.out_batch_size)) {
        if ((this->*_next_msg) (&_tx_msg) == -1)
            break;
        _encoder->load_msg (&_tx_msg);
        unsigned char *bufptr = _outpos + _outsize;
        const size_t n =
          _encoder->encode (&bufptr, _options.out_batch_size - _outsize);
        zmq_assert (n > 0);
        if (_outpos == NULL)
            _outpos = bufptr;
        _outsize += n;
    }
    [...]
    //  If there are any data to write in write buffer, write as much as
    //  possible to the socket. Note that amount of data to write can be
    //  arbitrarily large. However, we assume that underlying TCP layer has
    //  limited transmission buffer and thus the actual number of bytes
    //  written should be reasonably modest.
    const int nbytes = write (_outpos, _outsize); // this calls zmq::tcp_write(), which calls the send() syscall
    [...]
}

Your suggestion is that if the application thread is fast enough (spin loop
is "short enough") then the while() loop body is actually executed 2-3-4
times and we send() a large TCP packet, thereby reducing both syscall
overhead and number of TCP acks from the SUB (and thus kernel overhead).
If instead the application thread is not fast enough (spin loop is "too
long"), then the while() loop body executes only once and we send my 300B
frames one by one to zmq::tcp_write() and the send() syscall. That would
kill the performance of the zmq background thread.
Is that correct?

Now the other 1M$ question: if that's the case, is there any tuning I can
do to force the zmq background thread to wait for some time before invoking
send() ?
I'm thinking that I could try replacing the TCP_NODELAY option that is set
on the tcp socket with TCP_CORK instead and see what happens. In this way I
basically go in the opposite direction of the throughput-vs-latency
tradeoff...
Or maybe I could change the libzmq source code to invoke tcp_write() only
e.g. every N times out_event() is invoked? Though I think I'd risk getting
some bytes stuck inside the stream engine if at some point I stop sending
out messages.

Any other suggestion?

Thanks again a lot!

Francesco
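
As an alternative to patching libzmq, batching can also be done at the
application layer: packing several small messages into one ZMQ frame on the
PUB side and splitting them in place on the SUB side. A stdlib-only sketch
of mine (the 4-byte length-prefix framing is my own assumption, not a ZMQ
convention):

```python
import struct

def pack_batch(messages):
    """Concatenate small messages into one frame, each prefixed with a
    4-byte big-endian length, so the receiver can split them in place."""
    return b"".join(struct.pack(">I", len(m)) + m for m in messages)

def unpack_batch(frame):
    """Split a packed frame back into the original list of messages."""
    messages, offset = [], 0
    while offset < len(frame):
        (n,) = struct.unpack_from(">I", frame, offset)
        offset += 4
        messages.append(frame[offset:offset + n])
        offset += n
    return messages
```

Sending one packed frame per N small messages amortizes the per-send()
overhead much like out_batch_size does inside libzmq, at the cost of some
added latency while the batch fills.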


Il giorno gio 1 apr 2021 alle ore 14:51 Brett Viren 
ha scritto:

> Francesco  writes:
>
> > So in first scenario the zmq background thread used only 12% of cpu to
> fill 914Mbps ; in second
> > scenario it uses 97% to fill 700Mbps...
> >
> > how's that possible?
>
> This is a pure guess: are you experiencing ZeroMQ's internal Nagle's
> algorithm?
>
> The guess applies if a) your messages are "sma

[zeromq-dev] ZMQ I/O threads CPU usage

2021-03-31 Thread Francesco
Hi all,

I found a somewhat weird effect greatly impacting the CPU usage of ZMQ
background threads. I found this behaviour with a very small benchmarking
utility I wrote using the libzmq API directly; it is small and
self-contained (I can share it on GitHub if anyone's willing to take a
look!)
This is 100% reproducible, at least on my CentOS 7 machine.

Here's the thing: the app with --pub CLI option starts a PUB server and
--sub starts a SUB server. I run the pub on a server connected by a 1Gbps
link to the sub server.
If in the pub server the logic is:

set ZMQ_XPUB_NODROP on the pub_skt
while (true)
   zmq_msg_send(dummy_msg, pub_skt, 0);
   // no wait of any kind

I measure around 940Mbps of throughput and the cpu usage of my application
thread is just 8% and zmq background thread is just 12% of cpu usage.
Wonderful.

Now if I change the logic to be:

set ZMQ_XPUB_NODROP on the pub_skt
while (true) {
   zmq_msg_send(dummy_msg, pub_skt, 0);
   spin_loop(2.5usec)
}

I measure a throughput of 700Mbps (expected, due to the spin_loop that
simulates the time it takes a real application to produce a msg) and my
application thread goes to 100%... that's fine, given the spin_loop.
The strange thing, however, is that the ZMQ background thread also tops out
at 97% CPU usage !!!

So in the first scenario the zmq background thread used only 12% of the CPU
to fill 914Mbps; in the second scenario it uses 97% to fill 700Mbps...

how is that possible?

Any help GREATLY appreciated

Thanks!!

Francesco
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] Inefficient TCP connection for my PUB-SUB zmq communication

2021-03-31 Thread Francesco
Hi all,

Another update on this topic:
- I managed to take a much better capture of the TCP traffic between the PUB
->SUB using a switch Port Mirror feature. Indeed, the larger-than-MTU
packets have now disappeared.
- I measured the message generation interval for my use case with more
accuracy, and it turns out to be 2.5usec per message.
- I measured the average msg size more precisely; in my use case it's 296B.
- Some trivial computations on the PHY link I'm using (with a raw speed of
20Gbps) show that sending ~300B on a 20Gbps link takes only about 120nsec.
- Some trivial computations also give, for a frame generation interval of
2.5usec, a TCP throughput upper bound of roughly 900Mbps, which is exactly
what I'm measuring as outgoing throughput from the PUB socket.

Based on the considerations above, I now believe that my problem is no
longer the "quality" of the TCP connection produced by my PUB socket:
my software is bound by the frame generation rate, not by the speed of the
link.

However, I'm still far from solving my original problem. My software is
receiving roughly 900Mbps (from a SUB socket) and generating 900Mbps (out
of a PUB socket). To do that it is scaled out to 16 ZMQ background threads
(!!)... which really sounds like too much.

I'll start a different email thread (just for the mailing list history)
with another "strange" effect I found and that's impacting on the CPU usage
of ZMQ background threads...

Francesco


Il giorno dom 28 mar 2021 alle ore 17:43 Francesco <
francesco.monto...@gmail.com> ha scritto:

> Hi all,
>
> A few more questions after inspecting ZMQ source code:
> - I see that in June 2019 the following PR was merged:
> https://github.com/zeromq/libzmq/pull/3555   This one exposes 
> ZMQ_OUT_BATCH_SIZE.
> At first look it may seem exactly what I was looking for, but the thing is
> that the default value is already quite high (8192)... in my use case
> probably it would be enough to coalesce together a max of 5 or 6 messages
> to reach the MTU size.
> - The thread that is publishing on my PUB zmq socket probably takes
> between 100-500usec to generate a new message. That means that to generate
> 5 messages in worst case it might take 2.5msec. I would be OK to pay this
> latency in order to improve throughput... .is there any way to achieve
> that? What happens if I disable the code in ZMQ that sets TCP_NODELAY and
> replace it with TCP_CORK ? Do you think I could get some kind of breakage
> of my PUB/SUB connections?
>
> and one consideration:
>  - I discovered why my tcpdump capture contains larger-than-MTU packets
> (even though they are <1%): the reason is that capturing traffic on the
> same server sending/receiving the traffic is not a  good idea:
>
> https://blog.packet-foo.com/2014/05/the-drawbacks-of-local-packet-captures/
> https://packetbomb.com/how-can-the-packet-size-be-greater-than-the-mtu/
> I will try to acquire tcpdumps from the SPAN port of a managed switch. I
> don't think the results will change much though
>
> Thanks for any hint,
> Francesco
>
>
> Il giorno sab 27 mar 2021 alle ore 10:22 Francesco <
> francesco.monto...@gmail.com> ha scritto:
>
>> Hi Jim,
>> You're right and I have in plan to change the MTU to 9000 for sure.
>> However even now, with the MTU being 1500, I see most packets are very far
>> from the limit.
>> Attached is a screenshot of the capture:
>>
>> [image: tcp_capture.png]
>>
>> By looking at the timestamps I see that the packets of size 583B and 376B
>> are spaced just 100us roughly and between the packet of 376B and 366B are
>> spaced 400us.
>> In this case I'd be more than welcome to pay some extra latency and merge
>> all these 3 packets together.
>>
>> After some more digging I found this code in ZMQ:
>>
>> //  Disable Nagle's algorithm. We are doing data batching on 0MQ
>> level,
>> //  so using Nagle wouldn't improve throughput in anyway, but it would
>> //  hurt latency.
>> int nodelay = 1;
>> const int rc =
>>   setsockopt (s_, IPPROTO_TCP, TCP_NODELAY,
>>   reinterpret_cast<char *> (&nodelay), sizeof (int));
>> assert_success_or_recoverable (s_, rc);
>> if (rc != 0)
>> return rc;
>>
>> Now my next question is: where is this " data batching on 0MQ level"
>> happening? Can I tune it somehow? Can I restore Nagle algorithm ?
>> I saw also from here
>>   https://man7.org/linux/man-pages/man7/tcp.7.html
>> that there's the possibility to set TCP_CORK as option on the socket to
>> try to optimize throughput ... any way to do that through ZMQ?
>>
>> Thanks!!
>>
>> Francesco
>>
>>
>>
>>
>> Il giorno sab 27 mar 2021 alle

Re: [zeromq-dev] Inefficient TCP connection for my PUB-SUB zmq communication

2021-03-28 Thread Francesco
Hi all,

A few more questions after inspecting ZMQ source code:
- I see that in June 2019 the following PR was merged:
https://github.com/zeromq/libzmq/pull/3555   This one exposes
ZMQ_OUT_BATCH_SIZE.
At first glance it seems to be exactly what I was looking for, but the
default value is already quite high (8192)... in my use case it would
probably be enough to coalesce a maximum of 5 or 6 messages to reach the
MTU size.
- The thread that is publishing on my PUB zmq socket probably takes between
100-500usec to generate a new message. That means that generating 5
messages might in the worst case take 2.5msec. I would be OK with paying
this latency to improve throughput... is there any way to achieve that?
What happens if I disable the code in ZMQ that sets TCP_NODELAY and
replace it with TCP_CORK? Do you think I could get some kind of breakage
of my PUB/SUB connections?

and one consideration:
 - I discovered why my tcpdump capture contains larger-than-MTU packets
(even though they are <1% of the total): the reason is that capturing
traffic on the same server that sends/receives it is not a good idea:
 https://blog.packet-foo.com/2014/05/the-drawbacks-of-local-packet-captures/
https://packetbomb.com/how-can-the-packet-size-be-greater-than-the-mtu/
I will try to acquire tcpdumps from the SPAN port of a managed switch,
though I don't think the results will change much.

Thanks for any hint,
Francesco


Il giorno sab 27 mar 2021 alle ore 10:22 Francesco <
francesco.monto...@gmail.com> ha scritto:

> Hi Jim,
> You're right and I have in plan to change the MTU to 9000 for sure.
> However even now, with the MTU being 1500, I see most packets are very far
> from the limit.
> Attached is a screenshot of the capture:
>
> [image: tcp_capture.png]
>
> By looking at the timestamps I see that the packets of size 583B and 376B
> are spaced just 100us roughly and between the packet of 376B and 366B are
> spaced 400us.
> In this case I'd be more than welcome to pay some extra latency and merge
> all these 3 packets together.
>
> After some more digging I found this code in ZMQ:
>
> //  Disable Nagle's algorithm. We are doing data batching on 0MQ level,
> //  so using Nagle wouldn't improve throughput in anyway, but it would
> //  hurt latency.
> int nodelay = 1;
> const int rc =
>   setsockopt (s_, IPPROTO_TCP, TCP_NODELAY,
>   reinterpret_cast<char *> (&nodelay), sizeof (int));
> assert_success_or_recoverable (s_, rc);
> if (rc != 0)
> return rc;
>
> Now my next question is: where is this " data batching on 0MQ level"
> happening? Can I tune it somehow? Can I restore Nagle algorithm ?
> I saw also from here
>   https://man7.org/linux/man-pages/man7/tcp.7.html
> that there's the possibility to set TCP_CORK as option on the socket to
> try to optimize throughput ... any way to do that through ZMQ?
>
> Thanks!!
>
> Francesco
>
>
>
>
> Il giorno sab 27 mar 2021 alle ore 05:01 Jim Melton  ha
> scritto:
>
>> Small TCP packets will never achieve maximum throughput. This is
>> independent of ZMQ. Each TCP packet requires a synchronous round-trip.
>>
>> For a 20 Gbps network, you need a larger MTU to achieve close to
>> theoretical bandwidth, and each packet needs to be close to MTU. Jumbo MTU
>> is typically 9000 bytes. The TCP ACK packets will kill your throughput,
>> though.
>> --
>> Jim Melton
>> (303) 829-0447
>> http://blogs.melton.space/pharisee/
>> jim@melton.space
>>
>>
>>
>>
>> On Mar 26, 2021, at 4:17 PM, Francesco 
>> wrote:
>>
>> Hi all,
>>
>> I'm using ZMQ in a product that moves a lot of data using TCP as
>> transport and PUB-SUB as communication pattern. "A lot" here means around
>> 1Gbps. The software is actually a mono-directional chain of small
>> components each linked to the previous with a SUB socket (to receive data)
>> and a PUB socket (to send data to next stage).
>> I'm debugging an issue with one of these components receiving 1.1Gbps
>> from its SUB socket and sending out 1.1Gbps on its PUB socket (no wonder
>> the two numbers match since the component does not aggregation whatsoever).
>>
>> The "problem" is that we are currently using 16 ZMQ background threads to
>> move a total of 2.2Gbps for that software component (note the physical
>> links can carry up to 20Gbps so we're far from saturation of the link).
>> IIRC the "golden rule" for sizing number of ZMQ background threads is 1Gbps
>> = 1 thread.
>> As you can see we're very far from this golden rule, and that's what I'm
>> trying to debug.
>>
>> The ZMQ background threads have

Re: [zeromq-dev] Inefficient TCP connection for my PUB-SUB zmq communication

2021-03-28 Thread Francesco
Hi Kent,

Good post.
>
> I don't have any answers for you, but I look forward to seeing a reply
> from someone who does.
>
> Oh, I do have questions: What are the sizes of the data you are handing to
> zeromq?
>
My typical ZMQ messages are only around 150-300 bytes.


> Is it chopping your data up into smaller pieces, or is it failing to
> coalesce small pieces into bigger packets?
>
The latter: I see that the ZMQ / TCP stack is failing to coalesce small
pieces into bigger packets.


> Maybe zeromq has tension between latency and throughput, from when I last
> programmed zeromq I don't remember any tuning parameters for that, are
> there any? Does your kernel have a coalescing feature you could tweak into
> service?
>
The only one I could find is TCP_CORK... my problem is how to set it on the
ZMQ socket, since zmq_setsockopt() does not expose it...

Thanks,
Francesco

>


Re: [zeromq-dev] Inefficient TCP connection for my PUB-SUB zmq communication

2021-03-27 Thread Francesco
Hi Jim,
You're right, and I plan to change the MTU to 9000 for sure. However, even
now with the MTU at 1500, I see that most packets are very far from that
limit.
Attached is a screenshot of the capture:

[image: tcp_capture.png]

By looking at the timestamps, I see that the packets of size 583B and 376B
are spaced only about 100us apart, and the packets of 376B and 366B are
spaced 400us apart.
In this case I'd be more than happy to pay some extra latency and merge
all three packets together.

After some more digging I found this code in ZMQ:

//  Disable Nagle's algorithm. We are doing data batching on 0MQ level,
//  so using Nagle wouldn't improve throughput in anyway, but it would
//  hurt latency.
int nodelay = 1;
const int rc =
  setsockopt (s_, IPPROTO_TCP, TCP_NODELAY,
  reinterpret_cast<char *> (&nodelay), sizeof (int));
assert_success_or_recoverable (s_, rc);
if (rc != 0)
return rc;

Now my next question is: where does this "data batching on 0MQ level"
happen? Can I tune it somehow? Can I restore Nagle's algorithm?
I also saw here
  https://man7.org/linux/man-pages/man7/tcp.7.html
that there's the possibility of setting TCP_CORK on the socket to try
to optimize throughput... any way to do that through ZMQ?

Thanks!!

Francesco




Il giorno sab 27 mar 2021 alle ore 05:01 Jim Melton  ha
scritto:

> Small TCP packets will never achieve maximum throughput. This is
> independent of ZMQ. Each TCP packet requires a synchronous round-trip.
>
> For a 20 Gbps network, you need a larger MTU to achieve close to
> theoretical bandwidth, and each packet needs to be close to MTU. Jumbo MTU
> is typically 9000 bytes. The TCP ACK packets will kill your throughput,
> though.
> --
> Jim Melton
> (303) 829-0447
> http://blogs.melton.space/pharisee/
> jim@melton.space
>
>
>
>
> On Mar 26, 2021, at 4:17 PM, Francesco 
> wrote:
>
> Hi all,
>
> I'm using ZMQ in a product that moves a lot of data using TCP as transport
> and PUB-SUB as communication pattern. "A lot" here means around 1Gbps. The
> software is actually a mono-directional chain of small components each
> linked to the previous with a SUB socket (to receive data) and a PUB socket
> (to send data to next stage).
> I'm debugging an issue with one of these components receiving 1.1Gbps from
> its SUB socket and sending out 1.1Gbps on its PUB socket (no wonder the two
> numbers match since the component does not aggregation whatsoever).
>
> The "problem" is that we are currently using 16 ZMQ background threads to
> move a total of 2.2Gbps for that software component (note the physical
> links can carry up to 20Gbps so we're far from saturation of the link).
> IIRC the "golden rule" for sizing number of ZMQ background threads is 1Gbps
> = 1 thread.
> As you can see we're very far from this golden rule, and that's what I'm
> trying to debug.
>
> The ZMQ background threads have a CPU usage ranging from 98% to 80%.
> Using "strace" I see that most of the time for these threads is spent in
> the "sendto" syscall.
> So I started digging on the quality of the TX side of the TCP connection,
> recording a short trace of the traffic outgoing from the software component.
>
> Analyzing the traffic with wireshark it turns out that the TCP packets for
> the PUB connection are pretty small:
> * 50% of them are 66B long; these are the TCP ACK packets (incoming)
> * 21% of them are in the range 160B-320B
> * 18% in the range 320B-640B
> * 5% in range 640B-1280B
> * just 3% reach the MTU equal to 1500B
> * [there are a <1% fraction that also exceed the MTU=1500B of the link,
> which I'm not sure how is possible]
>
> My belief is that having a fewer number of packets, all close to the MTU
> of the link should greatly improve the performances. Would you agree with
> that?
> Is there any configuration I can apply on the PUB socket to force the
> Linux TCP stack to generate fewer but larger TCP segments on the wire?
>
> Thanks for any hint,
>
> Francesco
>
>
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
>
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>


[zeromq-dev] Inefficient TCP connection for my PUB-SUB zmq communication

2021-03-26 Thread Francesco
Hi all,

I'm using ZMQ in a product that moves a lot of data using TCP as transport
and PUB-SUB as the communication pattern. "A lot" here means around 1Gbps.
The software is actually a mono-directional chain of small components, each
linked to the previous one with a SUB socket (to receive data) and a PUB
socket (to send data to the next stage).
I'm debugging an issue with one of these components receiving 1.1Gbps from
its SUB socket and sending out 1.1Gbps on its PUB socket (no wonder the two
numbers match, since the component does no aggregation whatsoever).

The "problem" is that we are currently using 16 ZMQ background threads to
move a total of 2.2Gbps for that software component (note the physical
links can carry up to 20Gbps so we're far from saturation of the link).
IIRC the "golden rule" for sizing number of ZMQ background threads is 1Gbps
= 1 thread.
As you can see we're very far from this golden rule, and that's what I'm
trying to debug.

The ZMQ background threads have a CPU usage ranging from 98% to 80%.
Using "strace" I see that most of the time for these threads is spent in
the "sendto" syscall.
So I started digging on the quality of the TX side of the TCP connection,
recording a short trace of the traffic outgoing from the software component.

Analyzing the traffic with wireshark it turns out that the TCP packets for
the PUB connection are pretty small:
* 50% of them are 66B long; these are the TCP ACK packets (incoming)
* 21% of them are in the range 160B-320B
* 18% in the range 320B-640B
* 5% in range 640B-1280B
* just 3% reach the MTU equal to 1500B
* [there is also a <1% fraction that exceeds the MTU=1500B of the link,
which I'm not sure how is possible]

My belief is that having fewer packets, all close to the MTU of the link,
should greatly improve performance. Would you agree with that?
Is there any configuration I can apply on the PUB socket to force the Linux
TCP stack to generate fewer but larger TCP segments on the wire?

Thanks for any hint,

Francesco


Re: [zeromq-dev] detecting messages being dropped on a PUB socket?

2020-12-09 Thread Francesco
Hi Luca,
sorry to jump into this thread, but:

> We have added queue stats capability to socket monitors last year,

Is any documentation available for this new feature?
I could not find any at http://api.zeromq.org/master:zmq-socket-monitor

Thanks!
Francesco


Il giorno mer 9 dic 2020 alle ore 11:24 Luca Boccassi  ha
scritto:

> On Tue, 2020-12-08 at 22:05 +0100, Arnaud Loonstra wrote:
> > Hey all,
> >
> > I might me missing something but is there any way of detecting messages
> > being dropped due to the high watermark on a PUB socket?
> >
> > It's clear how to do this is on a socket that blocks in the mute state,
> > but dropping message on mute is not?
> >
> > for example:
> >
> > // test hwm
> > zsock_t *push = zsock_new(ZMQ_PUB);
> > zsock_t *pull = zsock_new(ZMQ_SUB);
> > zsock_set_rcvhwm(pull, 100);
> > zsock_set_sndhwm(push, 100);
> > zsock_set_sndtimeo(push,0);
> > zsock_bind(push, "inproc://test");
> > zsock_connect(pull, "inproc://test");
> > zsock_set_subscribe(pull, "");
> > zclock_sleep(10);
> > int rc = 0;
> > int count = 0;
> > while( rc == 0 && count < 1) {
> >  rc = zstr_send(push, "BOE");
> >  count++;
> > }
> >
> > The count will be 1 due to zstr_send dropping messages but returning 0;
> > If we then receive the buffered messages we will receive 200 messages
> > (rcvhwm + sndhwm)
> >
> > int count2 = 0;
> > char *m = "";
> > while ( m )
> > {
> >  m = zstr_recv_nowait(pull);
> >  count2++;
> > }
> >
> > Rg,
> >
> > Arnaud
>
> There is no deterministic way, and when you think about the conditions
> that might cause that to happen, it makes sense that there isn't.
>
> We have added queue stats capability to socket monitors last year,
> which will give you some insights - but it's of course asynchronous and
> best-effort, and thus useful for debugging and statistics but not for
> making logic decisions.
> The OS should provide you with data about the state of the underlying
> network interfaces/buffers.
>
> --
> Kind regards,
> Luca Boccassi
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>


[zeromq-dev] next release?

2020-06-08 Thread Francesco
Hi all,

I was going through the commit history, and it looks like there have been a
lot of commits since the last release, 4.3.2, so I'm just wondering whether
a new one will be made in the short term... thanks

Francesco


Re: [zeromq-dev] Performance results on 100Gbps direct link

2019-10-02 Thread Francesco
Il mer 2 ott 2019, 19:05 Doron Somech  ha scritto:

>
> You don't need to create multiple sockets, just call connect multiple
> times with same address.
>
Wow, really??
I wish I had known that; I already changed quite a bit of code to use
multiple zmq sockets to make better use of the background zmq threads!!

I will try connecting multiple times... At this point I suggest modifying
the benchmark utility to use this trick and updating the performance
graphs in the wiki with the new results!

Francesco


On Wed, Oct 2, 2019, 19:45 Brett Viren via zeromq-dev <
> zeromq-dev@lists.zeromq.org> wrote:
>
>> Hi Francesco,
>>
>> I confirm your benchmark using two systems with the same 100 Gbps
>> Mellanox NICs but with an intervening Juniper QFX5200 switch (100 Gbps
>> ports).
>>
>> To reach ~25 Gbps with the largest message sizes required "jumbo frame"
>> MTU.  The default mtu=1500 allows only ~20 Gbps.  I also tried two more
>> doubling of zmsg size in the benchmark and these produce no significant
>> increase in throughput.  OTOH, pinning the receiver (local_thr) to a CPU
>> gets it up to 33 Gbps.
>>
>> I note that iperf3 can achieve almost 40 Gbps (20 Gbps w MTU=1500).
>> Multiple simultaneous iperf3 tests can, in aggregate, use 90-100 Gbps.
>>
>> In both the ZMQ and singular iperf3 tests, it seems that CPU is the
>> bottleneck.  For ZeroMQ the receiver's I/O thread is pegged at 100%.
>> With iperf3 it's that of the client/sender.  The other ends in both
>> cases are at about 50%.
>>
>> The zguide suggests to use one I/O thread per GByte/s (faq says "Gbps")
>> so I tried the naive thing and hacked the ZMQ remote_thr.cpp and
>> local_thr.cpp so each use ten I/O threads.  While I see all ten threads
>> in "top -H", still only one thread uses any CPU and it remains pegged at
>> 100% on the receiver (local_thr) and about 50% on the sender
>> (remote_thr).  I think now that I misinterpreted this advice and it's
>> really relevant to the case of handling a very large number of
>> connections.
>>
>>
>> Any suggestions on how to let ZeroMQ get higher throughput at 100 Gbps?
>> If so, I'll give them a try.
>>
>>
>> Cheers,
>> -Brett.
>>
>> Francesco  writes:
>>
>> > Hi all,
>> >
>> > I placed here:
>> >   http://zeromq.org/results:100gbe-tests-v432
>> > the results I collected using 2 Mellanox ConnectX-5 linked by 100Gbps
>> > fiber cable.
>> >
>> > The results are not too much different from those at 10gpbs
>> > (http://zeromq.org/results:10gbe-tests-v432 )... the difference in TCP
>> > throughput is that
>> >  - even using 100kB-long messages we still cannot saturate the link
>> >  - latency is very much improved for messages > 10kB long
>> >
>> > Hopefully we will be able to improve performances in the future to
>> > improve these benchmarks...
>> >
>> > Francesco
>> > ___
>> > zeromq-dev mailing list
>> > zeromq-dev@lists.zeromq.org
>> > https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>> ___
>> zeromq-dev mailing list
>> zeromq-dev@lists.zeromq.org
>> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>>
>


Re: [zeromq-dev] Memory pool for zmq_msg_t

2019-08-17 Thread Francesco
Hi Luca,
Thanks for the explanation. It seems like there is no need to do memory
pooling for packet RX, right?
One allocation every ~19kB seems pretty efficient already (nice work! :))

Still, I wonder if we can somehow improve the performance of
zmq::v2_decoder_t::size_ready,
since that function appears to be the bottleneck in my latest performance
benchmarks (see my previous email).
My feeling is that if memory management is not a problem along the RX path,
then a single zmq background IO thread/core (on a fast CPU) should be able
to exceed the approx 2 Mpps limit that I found...
My concern is that this is a fundamental limit to zmq scalability: since a
single zmq socket is always handled by a single zmq background thread, that
means that even if I buy 100gbps of bandwidth, I will not be able to use
more than 2-3gbps when sending 64B-long messages on that socket.

Thanks for any hint or comment,
Francesco



Il ven 16 ago 2019, 17:20 Luca Boccassi  ha
scritto:

> The message structures themselves are always on the stack. The TCP
> receive is batched, and if there are multiple messages in an 8KB kernel
> buffer, each message's content_t simply points to the right place for its
> data in that shared buffer, which is refcounted. The content_t structures
> are also in the same memory zone, which is split to provide enough
> content_t for 8KB/minimum_size_msg+1 messages - so in practice there is
> one allocation of ~19KB which is shared by as many messages as can fit
> their data in the 8KB received in one TCP read.
>
> On Fri, 2019-08-16 at 16:46 +0200, Francesco wrote:
>
> Hi Doron,
> Ok the zmq_msg_init_allocator approach looks fine to me. I hope I have
> time to work on that in the next couple of weeks (unless someone else wants
> to step in of course :-) ).
>
> Anyway the current approach works for sending messages...I wonder how the
> Rx side works and if we could exploit memory pooling also for that... Is
> there any kind of documentation on how the engine works for Rx (or some
> email thread) perhaps?
>
> I know there is some zero copy mechanism in place but it's not totally
> clear to me: is the zmq_msg_t coming out of zmq API pointing directly to
> the kernel buffers?
>
> Thanks
> Francesco
>
>
> Il gio 15 ago 2019, 11:39 Doron Somech  ha scritto:
>
> maybe zmq_msg_init_allocator which accept the allocator.
>
> With that pattern we do need the release method, the zmq_msg will handle
> it internally and register the release method as the free method of the
> zmq_msg. They do need to have the same signature.
>
> On Thu, Aug 15, 2019 at 12:35 PM Francesco 
> wrote:
>
> Hi Doron, hi Jens,
> Yes the allocator method is a nice solution.
> I think it would be nice to have libzmq provide also a memory pool
> implementation but use as default the malloc/free implementation for
> backward compatibility.
>
> It's also important to have a smart allocator that internally contains not
> just  one but several pools for different packet size classes,to avoid
> memory waste. But I think this can fit easily in the allocator pattern
> sketched out by Jens.
>
> Btw another issue unrelated to the allocator API but regarding performance
> aspects: I think it's important to avoid not only the msg buffer but also
> the allocation of the content_t structure and indeed in my preliminary
> merge request I did modify zmq_msg_t of type_lmsg to use the first 40b
> inside the pooled buffer.
> Of course this approach is not backward compatible with the _init_data()
> semantics.
> How do you think this would best be approached?
> I guess we may have a new _init_data_and_controlblock() helper that does
> the trick of taking the first 40bytes of the provided buffer?
>
> Thanks
> Francesco
>
>
> Il mer 14 ago 2019, 22:23 Doron Somech  ha scritto:
>
> Jens I like the idea.
>
> We actually don't need the release method.
> The signature of the allocate should receive zmq_msg and allocate it.
>
> int (*allocate)(zmq_msg *msg, size_t size, void *obj);
>
> When the allocator will create the zmq_msg it will provide the release
> method to the zmq_msg in the constructor.
>
> This is important in order to forward messages between sockets, so the
> release method is part of the msg. This is already supported by zmq_msg
> which accept free method with a hint (obj in your example).
>
> The return value of allocate will be success indication, like the rest of
> zeromq methods.
>
> zeromq actually already support pool mechanism when sending, using zmq_msg
> api. Receiving is the problem, your suggestion solve it nicely.
>
> By the way, memory pool already supported in NetMQ in a very similar
> solution as you suggested. (It is global for all sockets without override)
>
>
>

Re: [zeromq-dev] Memory pool for zmq_msg_t

2019-08-16 Thread Francesco
Hi Doron,
Ok the zmq_msg_init_allocator approach looks fine to me. I hope I have time
to work on that in the next couple of weeks (unless someone else wants to
step in of course :-) ).

Anyway the current approach works for sending messages...I wonder how the
Rx side works and if we could exploit memory pooling also for that... Is
there any kind of documentation on how the engine works for Rx (or some
email thread) perhaps?

I know there is some zero copy mechanism in place but it's not totally
clear to me: is the zmq_msg_t coming out of zmq API pointing directly to
the kernel buffers?

Thanks
Francesco


Il gio 15 ago 2019, 11:39 Doron Somech  ha scritto:

> maybe zmq_msg_init_allocator which accept the allocator.
>
> With that pattern we do need the release method, the zmq_msg will handle
> it internally and register the release method as the free method of the
> zmq_msg. They do need to have the same signature.
>
> On Thu, Aug 15, 2019 at 12:35 PM Francesco 
> wrote:
>
>> Hi Doron, hi Jens,
>> Yes the allocator method is a nice solution.
>> I think it would be nice to have libzmq provide also a memory pool
>> implementation but use as default the malloc/free implementation for
>> backward compatibility.
>>
>> It's also important to have a smart allocator that internally contains
>> not just  one but several pools for different packet size classes,to avoid
>> memory waste. But I think this can fit easily in the allocator pattern
>> sketched out by Jens.
>>
>> Btw another issue unrelated to the allocator API but regarding
>> performance aspects: I think it's important to avoid not only the msg
>> buffer but also the allocation of the content_t structure and indeed in my
>> preliminary merge request I did modify zmq_msg_t of type_lmsg to use the
>> first 40b inside the pooled buffer.
>> Of course this approach is not backward compatible with the _init_data()
>> semantics.
>> How do you think this would best be approached?
>> I guess we may have a new _init_data_and_controlblock() helper that does
>> the trick of taking the first 40bytes of the provided buffer?
>>
>> Thanks
>> Francesco
>>
>>
>> Il mer 14 ago 2019, 22:23 Doron Somech  ha scritto:
>>
>>> Jens I like the idea.
>>>
>>> We actually don't need the release method.
>>> The signature of the allocate should receive zmq_msg and allocate it.
>>>
>>> int (*allocate)(zmq_msg *msg, size_t size, void *obj);
>>>
>>> When the allocator will create the zmq_msg it will provide the release
>>> method to the zmq_msg in the constructor.
>>>
>>> This is important in order to forward messages between sockets, so the
>>> release method is part of the msg. This is already supported by zmq_msg
>>> which accept free method with a hint (obj in your example).
>>>
>>> The return value of allocate will be success indication, like the rest
>>> of zeromq methods.
>>>
>>> zeromq actually already support pool mechanism when sending, using
>>> zmq_msg api. Receiving is the problem, your suggestion solve it nicely.
>>>
>>> By the way, memory pool already supported in NetMQ in a very similar
>>> solution as you suggested. (It is global for all sockets without override)
>>>
>>>
>>>
>>> On Wed, Aug 14, 2019, 22:41 Jens Auer  wrote:
>>>
>>>> Hi,
>>>>
>>>> Maybe this can be combined with a request that I have seen a couple of
>>>> times to be able to configure the allocator used in libzmq? I am thinking
>>>> of something like
>>>>
>>>> struct zmq_allocator {
>>>> void* obj;
>>>> void* (*allocate)(size_t n, void* obj);
>>>> void (*release)(void* ptr, void* obj);
>>>> };
>>>>
>>>> void* useMalloc(size_t n, void*) {return malloc(n);}
>>>> void freeMalloc(void* ptr) {free(ptr);}
>>>>
>>>> zmq_allocator& zmg_default_allocator() {
>>>> static zmg_allocator defaultAllocator = {nullptr, useMalloc,
>>>> freeMalloc};
>>>> Return defaultAllocator;
>>>> }
>>>>
>>>> The context could then store the allocator for libzmq, and users could
>>>> set a specific allocator as a context option, e.g. with a zmq_ctx_set. A
>>>> socket created for a context can then inherit the default allocator or set
>>>> a special allocator as a socket option.
>>>>
>>>> class MemoryPool {…}; // hopefully thread-safe
>>>> void* poolAllocate(size_t n) {r

Re: [zeromq-dev] Memory pool for zmq_msg_t

2019-08-15 Thread Francesco
Hi Doron, hi Jens,
Yes, the allocator approach is a nice solution.
I think it would be nice to have libzmq also provide a memory pool
implementation, but keep the malloc/free implementation as the default
for backward compatibility.

It's also important to have a smart allocator that internally contains not
just one but several pools for different packet-size classes, to avoid
memory waste. But I think this can fit easily in the allocator pattern
sketched out by Jens.

Btw, another issue unrelated to the allocator API but regarding performance:
I think it's important to avoid allocating not only the msg buffer but also
the content_t structure, and indeed in my preliminary merge request I did
modify zmq_msg_t of type_lmsg to place it in the first 40 bytes of the
pooled buffer.
Of course this approach is not backward compatible with the _init_data()
semantics.
How do you think this would best be approached?
I guess we may have a new _init_data_and_controlblock() helper that does
the trick of taking the first 40 bytes of the provided buffer?

Thanks
Francesco


On Wed, 14 Aug 2019 at 22:23, Doron Somech  wrote:

> Jens I like the idea.
>
> We actually don't need the release method.
> The signature of allocate should receive the zmq_msg and allocate it.
>
> int (*allocate)(zmq_msg *msg, size_t size, void *obj);
>
> When the allocator will create the zmq_msg it will provide the release
> method to the zmq_msg in the constructor.
>
> This is important in order to forward messages between sockets, so the
> release method is part of the msg. This is already supported by zmq_msg
> which accept free method with a hint (obj in your example).
>
> The return value of allocate will be a success indication, like the rest of
> the zeromq methods.
>
> zeromq actually already supports a pool mechanism when sending, using the
> zmq_msg api. Receiving is the problem; your suggestion solves it nicely.
>
> By the way, memory pool already supported in NetMQ in a very similar
> solution as you suggested. (It is global for all sockets without override)
>
>
>
> On Wed, Aug 14, 2019, 22:41 Jens Auer  wrote:
>
>> Hi,
>>
>> Maybe this can be combined with a request that I have seen a couple of
>> times to be able to configure the allocator used in libzmq? I am thinking
>> of something like
>>
>> struct zmq_allocator {
>> void* obj;
>> void* (*allocate)(size_t n, void* obj);
>> void (*release)(void* ptr, void* obj);
>> };
>>
>> void* useMalloc(size_t n, void*) {return malloc(n);}
>> void freeMalloc(void* ptr, void*) {free(ptr);}
>>
>> zmq_allocator& zmq_default_allocator() {
>> static zmq_allocator defaultAllocator = {nullptr, useMalloc,
>> freeMalloc};
>> return defaultAllocator;
>> }
>>
>> The context could then store the allocator for libzmq, and users could
>> set a specific allocator as a context option, e.g. with a zmq_ctx_set. A
>> socket created for a context can then inherit the default allocator or set
>> a special allocator as a socket option.
>>
>> class MemoryPool {…}; // hopefully thread-safe
>>
>> MemoryPool pool;
>>
>> void* allocatePool(size_t n, void* pool) {return
>> static_cast<MemoryPool*>(pool)->allocate(n);}
>> void releasePool(void* ptr, void* pool)
>> {static_cast<MemoryPool*>(pool)->release(ptr);}
>>
>> zmq_allocator pooledAllocator {
>> &pool, allocatePool, releasePool
>> };
>>
>> void* ctx = zmq_ctx_new();
>> zmq_ctx_set(ctx, ZMQ_ALLOCATOR, &pooledAllocator);
>>
>> Cheers,
>> Jens
>>
>> On 13 Aug 2019 at 13:24, Francesco  wrote:
>>
>> Hi all,
>>
>> today I've taken some time to attempt building a memory-pooling
>> mechanism in ZMQ local_thr/remote_thr benchmarking utilities.
>> Here's the result:
>> https://github.com/zeromq/libzmq/pull/3631
>> This PR is a work in progress and is a simple modification to show the
>> effects of avoiding malloc/free when creating zmq_msg_t with the
>> standard benchmark utils of ZMQ.
>>
>> In particular the very fast, zero-lock,
>> single-producer/single-consumer queue from:
>> https://github.com/cameron314/readerwriterqueue
>> is used to maintain between the "remote_thr" main thread and its ZMQ
>> background IO thread a list of free buffers that can be used.
>>
>> Here are the graphical results:
>> with mallocs / no memory pool:
>>
>> https://cdn1.imggmi.com/uploads/2019/8/13/9f009b91df394fa945cd2519fd993f50-full.png
>> with memory pool:
>>
>> https://cdn1.imggmi.com/uploads/2019/8/13/f3ae0d6d58e9721b63129c23fe7347a6-full.png
>>
>> Doing the math the memory pooled approach shows:

[zeromq-dev] Why UDP transport is just for radio/dish?

2019-08-14 Thread Francesco
Hi all,
Out of curiosity I would like to test performance using the UDP transport
instead of TCP (my wild guess is that zmq needs to do less
framing/unframing work since UDP is already packet-based)... But apparently
PUSH/PULL sockets cannot use the UDP transport... why?

Thanks
Francesco
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] New website for zeromq

2019-08-13 Thread Francesco
Hi Doron,
Very nice work! Indeed looks more modern.

Just a suggestion: I noticed that the Universal, Smart, High-speed,
Multi-Transport boxes in the homepage are not clickable.
I would make at least a couple of them clickable:
 - Universal: all language bindings,
 - High-speed: the performance page in the wiki

Moreover, perhaps I would make the entire box clickable: it took me a
while to notice that "Community" and "The Guide" were clickable
links :)

Last thing: I just opened a ticket about having a direct link to the API
reference manual. The Guide is fine for starters, but for day-to-day
development the reference manual is really important.

Thanks!
Francesco



On Tue, 13 Aug 2019 at 19:22, Doron Somech
 wrote:
>
> Hi All,
>
> The new zeromq website is up and running, check it out at:
> https://zeromq.org/
>
> Github repository:
> https://github.com/zeromq/zeromq.org
>
> If you want to contribute, some content is still missing, check-out the 
> issues page:
> https://github.com/zeromq/zeromq.org/issues
>
> Binaries and installation instructions to iOS and Android are missing, so if 
> anyone has experiences with those that will be a great contribution.
>
> More language pages or examples to existing language pages will also be great.
>
> We also have a new twitter account:
> https://twitter.com/libzmq
>
> Doron


Re: [zeromq-dev] Memory pool for zmq_msg_t

2019-08-13 Thread Francesco
Hi all,

today I've taken some time to attempt building a memory-pooling
mechanism in ZMQ local_thr/remote_thr benchmarking utilities.
Here's the result:
 https://github.com/zeromq/libzmq/pull/3631
This PR is a work in progress and is a simple modification to show the
effects of avoiding malloc/free when creating zmq_msg_t with the
standard benchmark utils of ZMQ.

In particular, the very fast, lock-free,
single-producer/single-consumer queue from:
https://github.com/cameron314/readerwriterqueue
is used to maintain a list of free buffers shared between the
"remote_thr" main thread and its ZMQ background IO thread.

Here are the graphical results:
with mallocs / no memory pool:
   
https://cdn1.imggmi.com/uploads/2019/8/13/9f009b91df394fa945cd2519fd993f50-full.png
with memory pool:
   
https://cdn1.imggmi.com/uploads/2019/8/13/f3ae0d6d58e9721b63129c23fe7347a6-full.png

Doing the math the memory pooled approach shows:

mostly the same performance for messages <= 32B
+15% pps/throughput increase @ 64B,
+60% pps/throughput increase @ 128B,
+70% pps/throughput increase @ 210B

[the tests were stopped at 210B because my current quick-and-dirty memory
pool approach has a fixed max msg size of about 210B].

Honestly this is not a huge speedup, even if still interesting.
Indeed, with these changes the performance now seems to be bounded by
the "local_thr" side rather than by "remote_thr": the zmq background
IO thread of local_thr is the only thread at 100% across the 2 systems,
and its "perf top" now shows:

  15,02%  libzmq.so.5.2.3 [.] zmq::metadata_t::add_ref
  14,91%  libzmq.so.5.2.3 [.] zmq::v2_decoder_t::size_ready
   8,94%  libzmq.so.5.2.3 [.] zmq::ypipe_t::write
   6,97%  libzmq.so.5.2.3 [.] zmq::msg_t::close
   5,48%  libzmq.so.5.2.3 [.] zmq::decoder_base_t<…>

… wrote:
>
> Hi Yan,
> Unfortunately I have interrupted my attempts in this area after getting some 
> strange results (possibly due to the fact that I tried in a complex 
> application context... I should probably try hacking a simple zeromq example 
> instead!).
>
> I'm also a bit surprised that nobody has tried and posted online a way to 
> achieve something similar (Memory pool zmq send) ... But anyway It remains in 
> my plans to try that out when I have a bit more spare time...
> If you manage to have some results earlier, I would be eager to know :-)
>
> Francesco
>
>
> Il ven 19 lug 2019, 04:02 Yan, Liming (NSB - CN/Hangzhou) 
>  ha scritto:
>>
>> Hi,  Francesco
>>Could you please share the final solution and benchmark result for plan 
>> 2?  Big Thanks.
>>I'm concerning this because I had tried the similar before with 
>> zmq_msg_init_data() and zmq_msg_send() but failed because of two issues.  1) 
>>  My process is running in background for long time and finally I found it 
>> occupies more and more memory, until it exhausted the system memory. It 
>> seems there's memory leak with this way.   2) I provided *ffn for 
>> deallocation but the memory freed back is much slower than consumer. So 
>> finally my own customized pool could also be exhausted. How do you solve 
>> this?
>>I had to turn back to use zmq_send(). I know it has memory copy penalty 
>> but it's the easiest and most stable way to send message. I'm still using 
>> 0MQ 4.1.x.
>>Thanks.
>>
>> BR
>> Yan Limin
>>
>> -Original Message-
>> From: zeromq-dev [mailto:zeromq-dev-boun...@lists.zeromq.org] On Behalf Of 
>> Luca Boccassi
>> Sent: Friday, July 05, 2019 4:58 PM
>> To: ZeroMQ development list 
>> Subject: Re: [zeromq-dev] Memory pool for zmq_msg_t
>>
>> There's no need to change the source for experimenting, you can just use 
>> _init_data without a callback and with a callback (yes the first case will 
>> leak memory but it's just a test), and measure the difference between the 
>> two cases. You can then immediately see if it's worth pursuing further 
>> optimisations or not.
>>
>> _external_storage is an implementation detail, and it's non-shared because 
>> it's used in the receive case only, as it's used with a reference to the TCP 
>> buffer used in the system call for zero-copy receives. Exposing that means 
>> that those kind of messages could not be used with pub-sub or radio-dish, as 
>> they can't have multiple references without copying them, which means there 
>> would be a semantic difference between the different message initialisation 
>> APIs, unlike now when the difference is only in who owns the buffer. It 
>> would make the API quite messy in my opinion, and be quite confusing as 
>> pub/sub is probably the most well known pattern.
>>
>> On Th

Re: [zeromq-dev] Message batching in zmq

2019-08-12 Thread Francesco
Hi Doron,

On Mon, 12 Aug 2019 at 12:13, Doron Somech
 wrote:
>
> It is not waiting to batch up.
> The background IO thread dequeues messages from an internal queue of
> messages waiting to be sent.
> Zeromq dequeues messages until that queue is empty or the buffer is full,
> so it is not waiting for anything.

Right, ok: I didn't mean to say that it was literally waiting in a
sleep(), but my naive reasoning would be that the ZMQ background IO
thread should always have its queue full of messages to send over TCP,
so that message batching up to 8KB should be happening all the time...
but then my question (why I don't get a flat curve up to 8kB message
sizes) applies :)

I did some further investigation and I found that, in the 10Gbps
environment setup I benchmarked
(http://zeromq.org/results:10gbe-tests-v432) the performances are
bounded by the remote_thr side, when sending 64B frames. Here is what
"perf top" reports on the 2 worker threads of the remote_thr app:

main remote_thr thread:

 23,33%  libzmq.so.5.2.3   [.] zmq::ypipe_t::flush
  22,86%  libc-2.17.so  [.] malloc
  20,00%  libc-2.17.so  [.] _int_malloc
  11,51%  libzmq.so.5.2.3   [.] zmq::pipe_t::write
   4,35%  libzmq.so.5.2.3   [.] zmq::ypipe_t::write
   2,38%  libzmq.so.5.2.3   [.] zmq::socket_base_t::send
   1,81%  libzmq.so.5.2.3   [.] zmq::lb_t::sendpipe
   1,36%  libzmq.so.5.2.3   [.] zmq::msg_t::init_size
   1,33%  libzmq.so.5.2.3   [.] zmq::pipe_t::flush

zmq bg IO remote_thr thread:

  38,35%  libc-2.17.so[.] _int_free
  13,61%  libzmq.so.5.2.3 [.] zmq::pipe_t::read
   9,24%  libc-2.17.so[.] __memcpy_ssse3_back
   8,99%  libzmq.so.5.2.3 [.] zmq::msg_t::size
   3,22%  libzmq.so.5.2.3 [.] zmq::encoder_base_t::encode
   2,34%  [kernel][k] sysret_check
   2,20%  libzmq.so.5.2.3 [.] zmq::ypipe_t::check_read
   2,15%  libzmq.so.5.2.3 [.] zmq::ypipe_t::read
   1,32%  libc-2.17.so[.] free

So my feeling is that even if the message batching is happening, right
now it's the zmq_msg_init_size() call that is actually limiting
performance.
This is the same problem I experienced in a more complex context and
that I described in this email thread:
  https://lists.zeromq.org/pipermail/zeromq-dev/2019-July/033012.html


> If we would support the zerocopy we can make the buffer larger than 8kb, and 
> when the buffer is full we would use the zerocopy flag.

Right. However before getting benefits from the new kernel zerocopy
flag I think we should somehow allow the libzmq users to use some kind
of memory pooling, otherwise my feeling is that the performance
benefit would be negligible... what do you think?

Thanks,
Francesco


Re: [zeromq-dev] Message batching in zmq

2019-08-12 Thread Francesco
Hi Doron,

On Sun, 11 Aug 2019 at 15:14, Doron Somech
 wrote:
> Actually, zeromq is not waiting for buffers to be filled; the engine
> batches as many messages as possible (or until the buffer is full).
>
> If a msg is larger than the buffer, the message will be sent as is, using
> the message's own buffer (zerocopy can benefit here as well).
> We will have to free the message buffer, or continue batching, only after
> the kernel frees the buffer.

ok, thanks for the clarification. I guess that what you are describing
is the code at
  zmq::stream_engine_base_t::out_event()
right?
It's not totally clear to me from reading that code, but I'm in no way
an expert on ZMQ internals :)

I just wonder if I can get some statistics on how much batching ZMQ is
actually doing... my feeling is that it's not doing much... because, as
I wrote in my other email: if the ZMQ engine is indeed waiting to batch
up to 8KB of data before going down the Linux kernel stack via the
send() API, is all the difference between sending 256B frames and 8KB
frames just the overhead of the ZMQ engine doing that batching?

Thanks,
Francesco


Re: [zeromq-dev] Message batching in zmq

2019-08-12 Thread Francesco
Hi Luca,

On Sun, 11 Aug 2019 at 13:00, Luca Boccassi
 wrote:
> Batching happens in the engine - the default data size is 8KB, tuned
> via src/config.hpp - although it's extremely tricky and it's very very
> easy to shoot oneself in the foot by changing those parameters.

Well, I dug a little more into the source code and apparently that 8KB
value is stored in the
options_t.out_batch_size
field, initialized in "options.cpp", and can even be modified
using zmq_setsockopt() with the undocumented ZMQ_OUT_BATCH_SIZE
option.
I guess the fact it's undocumented at
  http://api.zeromq.org/master:zmq-setsockopt
is because, as you say, it's easy to shoot oneself in the foot.

> Tweaking those can be done, but it needs a very very well defined test
> setup with a precise workload, which we don't really have.
>
> Yes adjusting the heuristics for MSG_ZEROCOPY is one of the TODOs to be
> able to use it effectively.
>
ok I see.

Still I don't understand one thing.
If the sockets in the benchmark utilities local_thr/remote_thr are
using that default 8KB value, then shouldn't I see an almost-flat
curve in this graph
   
http://zeromq.wdfiles.com/local--files/results:10gbe-tests-v432/pushpull_tcp_thr_results.png
in the message size range [0-8KB] ?

I mean: if the ZMQ engine is indeed waiting to batch up to 8KB of data
before going down the Linux kernel stack via the send() API, is all the
difference between sending 256B frames and 8KB frames just the
overhead of the ZMQ engine doing that batching?
What am I missing?

Thanks,
Francesco


Re: [zeromq-dev] Performance results on 100Gbps direct link

2019-08-12 Thread Francesco
Hi Benjamin,

On Sun, 11 Aug 2019 at 13:05, Benjamin Henrion
 wrote:
> Do you have a test which can show first that you can saturate the 100gbps 
> link?
>
> Like an iperf or a simple wget test?

I don't have a graph like the one I created with the ZMQ benchmark
utilities, but using a custom DPDK app generating packets of 256-512B
I've been able to saturate the 100Gbps link using the 16 CPU cores of
the first CPU, with hyperthreading off (the server has 2 CPUs/NUMA
nodes, each with 16 physical cores, which become 32 with HT, for a
total of 64).
This is not that far from the official Mellanox reports using DPDK:
  https://fast.dpdk.org/doc/perf/DPDK_19_05_Mellanox_NIC_performance_report.pdf
  (see chapter 5)
where they state that they can reach the line rate of a 100Gbps link using
12 cores (though they used Xeon Platinum CPUs while my server had Gold
CPUs, and moreover the DPDK application they used, l3fwd, is admittedly
very simple).
And yes: the line rate for 64B packets at 100Gbps is an astonishing
148 million packets per second.

Of course I don't think we can ever reach that performance using the
regular Linux kernel TCP stack (btw that DPDK example sends frames
over IPv4, but I don't know whether it uses UDP, TCP or something
else on top).
However currently ZMQ does about 1Mpps @ 64B, which is pretty far from
that 148Mpps... I wonder if that could be improved :)

As you point out, it would be interesting to run a test using iperf or
wget, which indeed use the Linux kernel TCP stack as well... when I
have access to that setup again, I can try.

Francesco


[zeromq-dev] Performance results on 100Gbps direct link

2019-08-09 Thread Francesco
Hi all,

I placed here:
  http://zeromq.org/results:100gbe-tests-v432
the results I collected using 2 Mellanox ConnectX-5 linked by 100Gbps
fiber cable.

The results are not too different from those at 10Gbps
(http://zeromq.org/results:10gbe-tests-v432)... the differences are that
 - even using 100kB-long messages we still cannot saturate the link
 - latency is much improved for messages > 10kB long

Hopefully we will be able to improve performance in the future and push
these benchmarks further...

Francesco


[zeromq-dev] Message batching in zmq

2019-08-09 Thread Francesco
Hi Luca,
[changing subject to make it easier to search this message in future]

On Wed, 7 Aug 2019 at 16:05, Luca Boccassi
 wrote:
>
> On Wed, 2019-08-07 at 14:50 +0200, Francesco wrote:
> > > Another improvement that can help is using the new zero-copy kernel
> > > TCP read/write APIs - I had started something a couple of years
> > > back, but again didn't have time to complete it.
> > >
> >
> > This looks very interesting as well.. do you have any pointer to
> > these new zero-copy kernel APIs?
> >
> > Thanks,
> > Francesco
>
> Kernel docs:
>
> https://www.kernel.org/doc/html/v5.2/networking/msg_zerocopy.html
>
> This is the initial experiment, very much incomplete:
>
> https://github.com/bluca/libzmq/commit/d021ea5f2c7526b388cb8f8005298e30b4cadd62

Thanks, this looks really interesting. However, the kernel docs state
very clearly that "MSG_ZEROCOPY is generally only effective at writes
over around 10 KB"... this raises the question: can we better tune the
ZMQ message batching algorithm?

I see in
zmq::tune_tcp_socket()
the following comment:
//  Disable Nagle's algorithm. We are doing data batching on 0MQ level,
//  so using Nagle wouldn't improve throughput in anyway, but it would
//  hurt latency.

but I did not find (I didn't look too hard, though) the place where ZMQ
actually does its own data batching. Nor could I find any option in
zmq_setsockopt() to tune that batching.

E.g. assuming I don't mind at all about message latency, can I improve
the throughput by forcing ZMQ somehow to only call the send() system
call when I have queued 10kB of data to the zmq socket?

Thanks!
Francesco


Re: [zeromq-dev] Anybody with updated performance results http://zeromq.org/area:results ?

2019-08-07 Thread Francesco
Hi Brett,

>- The page shows PUB/SUB on inproc://, how does PUB/SUB perform on 10  GbE
tcp://?

Unfortunately there is no benchmark utility for PUB/SUB, which is what I use
all the time - so I would be curious to try... would the ZeroMQ developers
accept a merge request to allow local_thr/remote_thr to take the socket pair
to use for testing from the command line (instead of hardcoding the
PUSH/PULL pair)?

> - How are dropped messages accounted in the PUB/SUB tests?

If you are referring to the PUB/SUB proxy throughput graph, note that it's
generated by this util
  https://github.com/zeromq/libzmq/blob/master/perf/proxy_thr.cpp
which uses ZMQ_XPUB_NODROP=1 so there are no drops at all :)

> And, one thing I don't understand (maybe just a curiosity): PUSH/PULL
shows somewhat better PPS throughput with tcp:// than with inproc:// for
> messages between 100-1000 Bytes.  Naively, I'd have thought shared memory
would beat network for message throughput for any given message size.

Yeah, good point... probably the graph needs some "zoom-in" to make that
more evident, but anyway I would propose the following explanation: the TCP
transport has much more buffering than the INPROC transport (because besides
the HWM buffers owned by ZMQ it also has the TCP kernel buffers to take into
account... on those boxes with plenty of RAM, the kernel socket buffers can
be as big as 16MB)... perhaps that explains why with TCP the NIC always has
its TX queues filled and thus achieves a higher PPS? Not sure...

Francesco




On Wed, 7 Aug 2019 at 17:32, Brett Viren  wrote:

> This is very useful information.  Thank you for sharing it.
>
> I have some 10 GbE hardware on order and hope to reproduce this myself
> and answer these questions but I'm curious to know:
>
> - The page shows PUB/SUB on inproc://, how does PUB/SUB perform on 10
>   GbE tcp://?
>
> - How are dropped messages accounted in the PUB/SUB tests?
>
>
> And, one thing I don't understand (maybe just a curiosity): PUSH/PULL
> shows somewhat better PPS throughput with tcp:// than with inproc:// for
> messages between 100-1000 Bytes.  Naively, I'd have thought shared
> memory would beat network for message throughput for any given message
> size.
>
> Thanks again,
> -Brett.
>
> Francesco  writes:
>
> > Hi Luca, Hi all,
> >
> > I generated the results graph and put all of them here:
> >
> >  http://zeromq.org/results:10gbe-tests-v432
> >
> > I would say the results are ok but perhaps there's room for improvements.
> > For example: the local_thr/remote_thr benchmarks show that ZeroMQ is
> able to fill the
> > 10Gbps link only using message sizes of about 10kB.
> > The CPUs of the test spiked at about 3.5 Mpps @ 16B message-size  which
> is a bit far
> > from the theoretical max of Ethernet that for 84B frames (on the wire)
> is 14.8Mpps
> > (see https://kb.juniper.net/InfoCenter/index?page=content=KB14737).
> >
> > I wonder how ZeroMQ message batching mechanism works for small messages
> (<1kB)
> > on TCP... anybody can shed some light on this? Thanks!
> >
> > Francesco
> >
> > PS: any project to use something like F-stack (http://www.f-stack.org/)
> on top of DPDK
> > as backend for ZeroMQ :) ?
> >
> > Il giorno dom 4 ago 2019 alle ore 20:41 Luca Boccassi <
> luca.bocca...@gmail.com> ha
> > scritto:
> >
> >  Looks great, thank you!
> >
> >  On Sun, 4 Aug 2019, 18:28 Francesco, 
> wrote:
> >
> >  Hi,
> >
> >   > There's nothing that I know of for that purpose
> >
> >  I wrote a 70lines bash script to automate the collection of performance
> results
> >  using "{local/remote/inproc/proxy}_{thr/lat}" ZMQ performance utils...
> >  I created a PR for that: https://github.com/zeromq/libzmq/pull/3607
> >
> >  Let me know if that works for you.
> >
> >  As soon as I have the HW available I will use them to generate the new
> >  graphs...
> >
> >  Thanks
> >
> >  Francesco
> >
> >  Il giorno sab 3 ago 2019 alle ore 11:39 Luca Boccassi
> >   ha scritto:
> >
> >  There's nothing that I know of for that purpose
> >
> >  On Sat, 3 Aug 2019, 10:24 Francesco, 
> >  wrote:
> >
> >  Hi Luca,
> >  I don't have a wikidot account... however I have a basic question before
> >  getting there:
> > local_thr / remote_thr
> >  utilities are just producing a text output... is there any script to:
> >  1) run them automatically to generate all points of the
> >  per-message-size graphs (http://zeromq.org/results:10gbe-tests-v031)
> >  ?
> >  2) produce the actual graph from the collected text ou

Re: [zeromq-dev] Anybody with updated performance results http://zeromq.org/area:results ?

2019-08-07 Thread Francesco
Hi Luca,

On Wed, 7 Aug 2019 at 11:48, Luca Boccassi <
luca.bocca...@gmail.com> wrote:

> Thank you, that's great!
>
I may also be able to repeat the same tests, still on a 10Gb optical link,
using 2 Mellanox CX5 NICs... I will post new results here if I can.

Getting anywhere close to 10gig line rate without bypassing the kernel TCP
> stack is not really likely - as you correctly pointed out, the way to do
> that would be to use a different stack based on DPDK (or XDP) like VPP or
> F-stack. I have briefly looked into this in the past, but didn't have time
> to do anything. It's a lot of integration work.
>

I agree - I wonder if things could be different if ZMQ had a sponsor to
support it...

Another improvement that can help is using the new zero-copy kernel TCP
> read/write APIs - I had started something a couple of years back, but again
> didn't have time to complete it.
>

This looks very interesting as well... do you have any pointers to these
new zero-copy kernel APIs?

Thanks,
Francesco




>
> On Wed, 2019-08-07 at 00:34 +0200, Francesco wrote:
>
> Hi Luca, Hi all,
>
> I generated the results graph and put all of them here:
>
>  http://zeromq.org/results:10gbe-tests-v432
>
> I would say the results are ok but perhaps there's room for improvements.
> For example: the local_thr/remote_thr benchmarks show that ZeroMQ is able
> to fill the 10Gbps link only using message sizes of about 10kB.
> The CPUs of the test spiked at about 3.5 Mpps @ 16B message-size  which is
> a bit far from the theoretical max of Ethernet that for 84B frames (on the
> wire) is 14.8Mpps (see
> https://kb.juniper.net/InfoCenter/index?page=content=KB14737).
>
> I wonder how ZeroMQ message batching mechanism works for small messages
> (<1kB) on TCP... anybody can shed some light on this? Thanks!
>
> Francesco
>
>
> PS: any project to use something like F-stack (http://www.f-stack.org/)
> on top of DPDK as backend for ZeroMQ :) ?
>
>
>
>
> Il giorno dom 4 ago 2019 alle ore 20:41 Luca Boccassi <
> luca.bocca...@gmail.com> ha scritto:
>
> Looks great, thank you!
>
> On Sun, 4 Aug 2019, 18:28 Francesco,  wrote:
>
> Hi,
>
>  > There's nothing that I know of for that purpose
>
> I wrote a 70lines bash script to automate the collection of performance
> results using "{local/remote/inproc/proxy}_{thr/lat}" ZMQ performance
> utils...
> I created a PR for that: https://github.com/zeromq/libzmq/pull/3607
>
> Let me know if that works for you.
>
> As soon as I have the HW available I will use them to generate the new
> graphs...
>
> Thanks
>
> Francesco
>
> Il giorno sab 3 ago 2019 alle ore 11:39 Luca Boccassi <
> luca.bocca...@gmail.com> ha scritto:
>
> There's nothing that I know of for that purpose
>
> On Sat, 3 Aug 2019, 10:24 Francesco,  wrote:
>
> Hi Luca,
> I don't have a wikidot account... however I have a basic question before
> getting there:
>local_thr / remote_thr
> utilities are just producing a text output... is there any script to:
> 1) run them automatically to generate all points of the per-message-size
> graphs (http://zeromq.org/results:10gbe-tests-v031) ?
> 2) produce the actual graph from the collected text outputs ?
>
> Thanks,
> Francesco
>
>
>
> Il giorno sab 3 ago 2019 alle ore 00:40 Luca Boccassi <
> luca.bocca...@gmail.com> ha scritto:
>
> Yes please!
>
> Do you have an account on wikidot to edit the page?
>
> On Fri, 2 Aug 2019, 21:54 Francesco,  wrote:
>
> Hi all,
> I noticed that all performance results reported at this page:
>   http://zeromq.org/area:results
> seem a bit outdated (most updated version looks like  ØMQ/2.0.6 !)... has
> anybody updated results?
> Alternatively I may be able to generate measurements on latest libzmq on
> 10G NICs... would you be interested in putting on that page updated results?
>
> Thanks,
> Francesco
>
>

Re: [zeromq-dev] Anybody with updated performance results http://zeromq.org/area:results ?

2019-08-06 Thread Francesco
Hi Luca, Hi all,

I generated the results graph and put all of them here:

 http://zeromq.org/results:10gbe-tests-v432

I would say the results are ok, but perhaps there's room for improvement.
For example: the local_thr/remote_thr benchmarks show that ZeroMQ is able
to fill the 10Gbps link only when using message sizes of about 10kB.
The throughput topped out at about 3.5 Mpps @ 16B message size, which is
quite far from the theoretical Ethernet maximum of 14.8Mpps for 84B frames
(on the wire) (see
https://kb.juniper.net/InfoCenter/index?page=content=KB14737).

I wonder how ZeroMQ's message batching mechanism works for small messages
(<1kB) on TCP... can anybody shed some light on this? Thanks!

Francesco


PS: any plans for a project to use something like F-stack (http://www.f-stack.org/)
on top of DPDK as a backend for ZeroMQ :) ?




On Sun, 4 Aug 2019 at 20:41, Luca Boccassi <luca.bocca...@gmail.com> wrote:

> Looks great, thank you!
>
> On Sun, 4 Aug 2019, 18:28 Francesco,  wrote:
>
>> Hi,
>>
>>  > There's nothing that I know of for that purpose
>>
>> I wrote a 70lines bash script to automate the collection of performance
>> results using "{local/remote/inproc/proxy}_{thr/lat}" ZMQ performance
>> utils...
>> I created a PR for that: https://github.com/zeromq/libzmq/pull/3607
>>
>> Let me know if that works for you.
>>
>> As soon as I have the HW available I will use them to generate the new
>> graphs...
>>
>> Thanks
>>
>> Francesco
>>
>> Il giorno sab 3 ago 2019 alle ore 11:39 Luca Boccassi <
>> luca.bocca...@gmail.com> ha scritto:
>>
>>> There's nothing that I know of for that purpose
>>>
>>> On Sat, 3 Aug 2019, 10:24 Francesco, 
>>> wrote:
>>>
>>>> Hi Luca,
>>>> I don't have a wikidot account... however I have a basic question
>>>> before getting there:
>>>>local_thr / remote_thr
>>>> utilities are just producing a text output... is there any script to:
>>>> 1) run them automatically to generate all points of the
>>>> per-message-size graphs (http://zeromq.org/results:10gbe-tests-v031) ?
>>>> 2) produce the actual graph from the collected text outputs ?
>>>>
>>>> Thanks,
>>>> Francesco
>>>>
>>>>
>>>>
>>>> Il giorno sab 3 ago 2019 alle ore 00:40 Luca Boccassi <
>>>> luca.bocca...@gmail.com> ha scritto:
>>>>
>>>>> Yes please!
>>>>>
>>>>> Do you have an account on wikidot to edit the page?
>>>>>
>>>>> On Fri, 2 Aug 2019, 21:54 Francesco, 
>>>>> wrote:
>>>>>
>>>>>> Hi all,
>>>>>> I noticed that all performance results reported at this page:
>>>>>>   http://zeromq.org/area:results
>>>>>> seem a bit outdated (most updated version looks like  ØMQ/2.0.6
>>>>>> !)... has anybody updated results?
>>>>>> Alternatively I may be able to generate measurements on latest libzmq
>>>>>> on 10G NICs... would you be interested in putting on that page updated
>>>>>> results?
>>>>>>
>>>>>> Thanks,
>>>>>> Francesco
>>>>>>
>>>>>>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] Anybody with updated performance results http://zeromq.org/area:results ?

2019-08-04 Thread Francesco
Hi,

 > There's nothing that I know of for that purpose

I wrote a 70-line bash script to automate the collection of performance
results using "{local/remote/inproc/proxy}_{thr/lat}" ZMQ performance
utils...
I created a PR for that: https://github.com/zeromq/libzmq/pull/3607

Let me know if that works for you.

As soon as I have the HW available I will use it to generate the new
graphs...

Thanks

Francesco

On Sat, 3 Aug 2019 at 11:39, Luca Boccassi <luca.bocca...@gmail.com> wrote:

> There's nothing that I know of for that purpose
>
> On Sat, 3 Aug 2019, 10:24 Francesco,  wrote:
>
>> Hi Luca,
>> I don't have a wikidot account... however I have a basic question before
>> getting there:
>>local_thr / remote_thr
>> utilities are just producing a text output... is there any script to:
>> 1) run them automatically to generate all points of the per-message-size
>> graphs (http://zeromq.org/results:10gbe-tests-v031) ?
>> 2) produce the actual graph from the collected text outputs ?
>>
>> Thanks,
>> Francesco
>>
>>
>>
>> Il giorno sab 3 ago 2019 alle ore 00:40 Luca Boccassi <
>> luca.bocca...@gmail.com> ha scritto:
>>
>>> Yes please!
>>>
>>> Do you have an account on wikidot to edit the page?
>>>
>>> On Fri, 2 Aug 2019, 21:54 Francesco, 
>>> wrote:
>>>
>>>> Hi all,
>>>> I noticed that all performance results reported at this page:
>>>>   http://zeromq.org/area:results
>>>> seem a bit outdated (most updated version looks like  ØMQ/2.0.6 !)...
>>>> has anybody updated results?
>>>> Alternatively I may be able to generate measurements on latest libzmq
>>>> on 10G NICs... would you be interested in putting on that page updated
>>>> results?
>>>>
>>>> Thanks,
>>>> Francesco
>>>>
>>>>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] Anybody with updated performance results http://zeromq.org/area:results ?

2019-08-03 Thread Francesco
Hi Luca,
I don't have a wikidot account... however, I have a basic question before
getting there: the
   local_thr / remote_thr
utilities just produce text output... is there any script to:
1) run them automatically to generate all points of the per-message-size
graphs (http://zeromq.org/results:10gbe-tests-v031) ?
2) produce the actual graph from the collected text outputs ?

Thanks,
Francesco
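
PS: lacking an existing tool, point 2) could be handled by scraping the
throughput numbers out of the text that local_thr prints and turning them
into plottable (message-size, msg/s) points. A sketch, assuming the perf
utilities' usual "message size: N [B]" / "mean throughput: N [msg/s]"
output lines; from there the points can be fed to gnuplot or matplotlib:

```python
import re

def parse_thr_output(text):
    """Extract (message_size_bytes, msgs_per_sec) pairs from local_thr output."""
    points = []
    size = None
    for line in text.splitlines():
        m = re.match(r"message size: (\d+) \[B\]", line)
        if m:
            size = int(m.group(1))
        m = re.match(r"mean throughput: (\d+) \[msg/s\]", line)
        if m and size is not None:
            points.append((size, int(m.group(1))))
            size = None
    return points

sample = """message size: 16 [B]
message count: 1000000
mean throughput: 3500000 [msg/s]
mean throughput: 448.000 [Mb/s]"""
print(parse_thr_output(sample))  # [(16, 3500000)]
```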



On Sat, 3 Aug 2019 at 00:40, Luca Boccassi <luca.bocca...@gmail.com> wrote:

> Yes please!
>
> Do you have an account on wikidot to edit the page?
>
> On Fri, 2 Aug 2019, 21:54 Francesco,  wrote:
>
>> Hi all,
>> I noticed that all performance results reported at this page:
>>   http://zeromq.org/area:results
>> seem a bit outdated (most updated version looks like  ØMQ/2.0.6 !)...
>> has anybody updated results?
>> Alternatively I may be able to generate measurements on latest libzmq on
>> 10G NICs... would you be interested in putting on that page updated results?
>>
>> Thanks,
>> Francesco
>>
>>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


[zeromq-dev] Anybody with updated performance results http://zeromq.org/area:results ?

2019-08-02 Thread Francesco
Hi all,
I noticed that all the performance results reported at this page:
  http://zeromq.org/area:results
seem quite outdated (the most recent version tested looks like ØMQ/2.0.6!)...
does anybody have updated results?
Alternatively, I may be able to generate measurements of the latest libzmq on
10G NICs... would you be interested in putting updated results on that page?

Thanks,
Francesco
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] differentiate send() ZMQError cases

2019-08-01 Thread Francesco
Hi Doug,
Just a suggestion about HWM: if you want to set it very low (like 1) for
testing, make sure you are NOT using the TCP transport: with TCP, the
kernel socket buffers also count toward the maximum number of messages you
can queue before hitting the HWM. That means that with TCP it is harder to
tell when the HWM is reached, especially because the kernel socket buffers
are sized in bytes rather than in ZMQ messages (unlike the HWM thresholds)!

HTH,
Francesco
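
PS: to make this concrete, here is a minimal pyzmq sketch (the inproc
endpoint name and HWM values are just for illustration). With the inproc
transport no kernel buffers are involved, so a non-blocking send starts
raising zmq.Again as soon as the pipe is full — and note the pipe's
effective capacity is roughly the sender's SNDHWM plus the receiver's
RCVHWM, not SNDHWM alone:

```python
import zmq

ctx = zmq.Context.instance()

pull = ctx.socket(zmq.PULL)
pull.setsockopt(zmq.RCVHWM, 2)
pull.bind("inproc://hwm-demo")

push = ctx.socket(zmq.PUSH)
push.setsockopt(zmq.SNDHWM, 2)
push.connect("inproc://hwm-demo")

# Nobody ever recv()s, so the pipe fills up at roughly SNDHWM + RCVHWM.
sent = 0
try:
    while True:
        push.send(b"x", flags=zmq.NOBLOCK)
        sent += 1
except zmq.Again:
    pass
print("queued before hitting HWM:", sent)  # a small number, not hundreds
ctx.destroy(linger=0)
```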

On Thu, 1 Aug 2019 at 18:52, Doug Meyer wrote:

> Dear zmq community,
>
> I've made a bit more progress. I note that when sending with
> zmq.ROUTER_MANDATORY set (1) and the peer not connected, I can observe that
> the ZMQError errno property has the value 65 and the strerror property is
> "Host unreachable" which is quite nice, though the errno is a mystery as it
> does not seem to be related to the POSIX errno values as I expected
> (EHOSTUNREACH is 113). As far as I could tell, it seems like libzmq uses
> the standard POSIX errno.h values, so I'm perplexed at the value 65 in this
> case. Does anybody have insight?
>
> I've not been able to see the case where the HWM is reached, in order to
> see what the errno value is for that case. Could somebody please provide me
> guidance for HWM setting, please? My case is a ROUTER sending to a DEALER.
> The router has zmq.ROUTER_MANDATORY set. On the DEALER side, the TCP connection is:
>
> slave = context.socket(zmq.DEALER)
>
> I've tried all of the following to get a HWM set to 1 message so that it
> is easy to see what happens when the master/ROUTER sends to the slave and
> the HWM is reached. The slave connects and then never does a recv().
>
> slave.set_hwm(1)
> slave.set(zmq.RCVHWM, 1)
> slave.sndhwm = slave.rcvhwm = 1
>
> No matter what I do, I can queue up a lot of messages. I've also attempted
> to set the send and receive queue size on the master/ROUTER side at the
> same time, but still I can queue up a large number of messages.
>
> Thanks so much for the help!
>
> Blessings,
> Doug
>
>
>
> On Wed, Jul 31, 2019 at 11:55 AM Doug Meyer  wrote:
>
>> Dear zmq community,
>>
>> I'm just getting my feet down on libzmq/pyzmq over here. I've read most
>> of the zguide and at least some of the pyzmq and libzmq API information.
>> We're python focused over here, and I'm running the following versions for
>> my proto work:
>>
>> python 3.7.1
>> libzmq 4.3.1
>> pyzmq 18.0.1
>>
>> I've been writing some very basic prototype code with a ROUTER-DEALER
>> pattern.
>>
>>- The Master creates a ROUTER socket and binds to a TCP port.
>>- The Slaves create a DEALER and connect to the TCP port.
>>
>> The code works fine at a basic level. And my basic discovery and protocol
>> stuff is good enough for the proof of concept that I'm building. ZMQ's very
>> straight-forward at this basic level, and I'm appreciative of the docs and
>> examples. Thanks to the ZMQ team!
>>
>> Where I'm getting stuck is related to being able to identify and handle
>> some exceptional cases when sending. My questions surround
>> blocking/nonblocking send in conjunction with the zmq.ROUTER_MANDATORY
>> socket option and the zero (0) and zmq.NOBLOCK send() flag.
>>
>> My understanding (and experience) is that if zmq.ROUTER_MANDATORY is NOT
>> set, then sends which cannot be sent either because the Identity has not
>> registered with the router, or the HWM has been reached, will be silently
>> dropped. I see that, and there's no issue there.
>>
>> There is discussion in the zmq library docs of the ability to block on
>> the send. However, I have not found any way to block on send with pyzmq. If
>> I set zmq.ROUTER_MANDATORY and issue the send() or send_multipart() without
>> flags, I get zmq.error.ZMQError. Is there a way to make the send block? Is
>> there any effective difference between flags=0 and flags=zmq.NOBLOCK,
>> because I can't detect a difference behaviorally, though those two flags
>> are listed separately in the pyzmq API info for send().
>>
>> Probably most critical for me, is there any way to differentiate between
>> these send() failures:
>> 1. HWM mark reached
>> 2. Identity is unknown (the slave with that Identity is not connected)
>> 3. All other errors than (1) and (2)
>>
>> I can imagine wanting to handle and report case (1) differently than (2),
>> as my decision tree and diagnostics are quite different and point to
>> distinct failure modes. Does pyzmq support any way to differentiate these?
>>
>> Thank you so much for your time.
>>
>> Blessings,
>> Doug
>>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] Memory pool for zmq_msg_t

2019-07-19 Thread Francesco
Hi Yan,
Unfortunately I have interrupted my attempts in this area after getting
some strange results (possibly due to the fact that I tried in a complex
application context... I should probably try hacking a simple zeromq
example instead!).

I'm also a bit surprised that nobody has tried and posted online a way to
achieve something similar (a memory pool for zmq send)... but anyway, it
remains in my plans to try it out when I have a bit more spare time...
If you manage to get some results earlier, I would be eager to know :-)

Francesco


On Fri, 19 Jul 2019 at 04:02, Yan, Liming (NSB - CN/Hangzhou) <liming@nokia-sbell.com> wrote:

> Hi,  Francesco
>Could you please share the final solution and benchmark result for plan
> 2?  Big Thanks.
>I'm concerned about this because I had tried something similar before with
> zmq_msg_init_data() and zmq_msg_send() but failed because of two issues.
> 1)  My process is running in background for long time and finally I found
> it occupies more and more memory, until it exhausted the system memory. It
> seems there's memory leak with this way.   2) I provided *ffn for
> deallocation but the memory freed back is much slower than consumer. So
> finally my own customized pool could also be exhausted. How do you solve
> this?
>I had to turn back to use zmq_send(). I know it has memory copy penalty
> but it's the easiest and most stable way to send message. I'm still using
> 0MQ 4.1.x.
>Thanks.
>
> BR
> Yan Limin
>
> -Original Message-
> From: zeromq-dev [mailto:zeromq-dev-boun...@lists.zeromq.org] On Behalf
> Of Luca Boccassi
> Sent: Friday, July 05, 2019 4:58 PM
> To: ZeroMQ development list 
> Subject: Re: [zeromq-dev] Memory pool for zmq_msg_t
>
> There's no need to change the source for experimenting, you can just use
> _init_data without a callback and with a callback (yes the first case will
> leak memory but it's just a test), and measure the difference between the
> two cases. You can then immediately see if it's worth pursuing further
> optimisations or not.
>
> _external_storage is an implementation detail, and it's non-shared because
> it's used in the receive case only, as it's used with a reference to the
> TCP buffer used in the system call for zero-copy receives. Exposing that
> means that those kind of messages could not be used with pub-sub or
> radio-dish, as they can't have multiple references without copying them,
> which means there would be a semantic difference between the different
> message initialisation APIs, unlike now when the difference is only in who
> owns the buffer. It would make the API quite messy in my opinion, and be
> quite confusing as pub/sub is probably the most well known pattern.
>
> On Thu, 2019-07-04 at 23:20 +0200, Francesco wrote:
> > Hi Luca,
> > thanks for the details. Indeed I understand why the "content_t" needs
> > to be allocated dynamically: it's just like the control block used by
> > STL's std::shared_ptr<>.
> >
> > And you're right: I'm not sure how much gain there is in removing 100%
> > of malloc operations from my TX path... still I would be curious to
> > find it out but right now it seems I need to patch ZMQ source code to
> > achieve that.
> >
> > Anyway I wonder if it could be possible to expose in the public API a
> > method like "zmq::msg_t::init_external_storage()" that, AFAICS, allows
> > to create a non-shared zero-copy long message... it appears to be used
> > only by v2 decoder internally right now...
> > Is there a specific reason why that's not accessible from the public
> > API?
> >
> > Thanks,
> > Francesco
> >
> >
> >
> >
> >
> > Il giorno gio 4 lug 2019 alle ore 20:25 Luca Boccassi <
> > luca.bocca...@gmail.com> ha scritto:
> > > Another reason for that small struct to be on the heap is so that it
> > > can be shared among all the copies of the message (eg: a pub socket
> > > has N copies of the message on the stack, one for each subscriber).
> > > The struct has an atomic counter in it, so that when all the copies
> > > of the message on the stack have been closed, the userspace buffer
> > > deallocation callback can be invoked. If the atomic counter were on
> > > the stack inlined in the message, this wouldn't work.
> > > So even if room were to be found, a malloc would still be needed.
> > >
> > > If you _really_ are worried about it, and testing shows it makes a
> > > difference, then one option could be to pre-allocate a set of these
> > > metadata structures at startup, and just assign them when the
> > > message is created. It's p

Re: [zeromq-dev] Memory pool for zmq_msg_t

2019-07-04 Thread Francesco
Hi Luca,
thanks for the details. Indeed I understand why the "content_t" needs to be
allocated dynamically: it's just like the control block used by STL's
std::shared_ptr<>.

And you're right: I'm not sure how much gain there is in removing 100% of
malloc operations from my TX path... still I would be curious to find it
out but right now it seems I need to patch ZMQ source code to achieve that.

Anyway I wonder if it could be possible to expose in the public API a
method like "zmq::msg_t::init_external_storage()" that, AFAICS, allows to
create a non-shared zero-copy long message... it appears to be used only by
v2 decoder internally right now...
Is there a specific reason why that's not accessible from the public API?

Thanks,
Francesco




On Thu, 4 Jul 2019 at 20:25, Luca Boccassi <luca.bocca...@gmail.com> wrote:

> Another reason for that small struct to be on the heap is so that it
> can be shared among all the copies of the message (eg: a pub socket has
> N copies of the message on the stack, one for each subscriber). The
> struct has an atomic counter in it, so that when all the copies of the
> message on the stack have been closed, the userspace buffer
> deallocation callback can be invoked. If the atomic counter were on the
> stack inlined in the message, this wouldn't work.
> So even if room were to be found, a malloc would still be needed.
>
> If you _really_ are worried about it, and testing shows it makes a
> difference, then one option could be to pre-allocate a set of these
> metadata structures at startup, and just assign them when the message
> is created. It's possible, but increases complexity quite a bit, so it
> needs to be worth it.
>
> On Thu, 2019-07-04 at 17:42 +0100, Luca Boccassi wrote:
> > The second malloc cannot be avoided, but it's tiny and fixed in size
> > at
> > compile time, so the compiler and glibc will be able to optimize it
> > to
> > death.
> >
> > The reason for that is that there's not enough room in the 64 bytes
> > to
> > store that structure, and increasing the message allocation on the
> > stack past 64 bytes means it will no longer fit in a single cache
> > line,
> > which will incur in a performance penalty far worse than the small
> > malloc (I tested this some time ago). That is of course unless you
> > are
> > running on s390 or a POWER with 256 bytes cacheline, but given it's
> > part of the ABI it would be a bit of a mess for the benefit of very
> > few
> > users if any.
> >
> > So I'd recommend to just go with the second plan, and compare what
> > the
> > result is when passing a deallocation function vs not passing it (yes
> > it will leak the memory but it's just for the test). My bet is that
> > the
> > difference will not be that large.
> >
> > On Thu, 2019-07-04 at 16:30 +0200, Francesco wrote:
> > > Hi Stephan, Hi Luca,
> > >
> > > thanks for your hints. However I inspected
> > >
> https://github.com/dasys-lab/capnzero/blob/master/capnzero/src/Publisher.cpp
> > >
> > >  and I don't think it's saving from malloc()...  see my point 2)
> > > below:
> > >
> > > Indeed I realized that probably current ZMQ API does not allow me
> > > to
> > > achieve the 100% of what I intended to do.
> > > Let me rephrase my target: my target is to be able to
> > >  - memory pool creation: do a large memory allocation of, say, 1M
> > > zmq_msg_t only at the start of my program; let's say I create all
> > > these zmq_msg_t of a size of 2k bytes each (let's assume this is
> > > the
> > > max size of message possible in my app)
> > >  - during application lifetime: call zmq_msg_send() at anytime
> > > always
> > > avoiding malloc() operations (just picking the first available
> > > unused
> > > entry of zmq_msg_t from the memory pool).
> > >
> > > Initially I thought that was possible but I think I have identified
> > > 2
> > > blocking issues:
> > > 1) If I try to recycle zmq_msg_t directly: in this case I will fail
> > > because I cannot really change only the "size" member of a
> > > zmq_msg_t
> > > without reallocating it... so that I'm forced (in my example) to
> > > always send 2k bytes out (!!)
> > > 2) if I do create only a memory pool of buffers of 2k bytes and
> > > then
> > > wrap the first available buffer inside a zmq_msg_t (allocated on
> > > the
> > > stack, not in the heap): in this case I need to know when the
> > > internals of ZMQ have completed using the 

Re: [zeromq-dev] Memory pool for zmq_msg_t

2019-07-04 Thread Francesco
Hi Stephan, Hi Luca,

thanks for your hints. However, I inspected
https://github.com/dasys-lab/capnzero/blob/master/capnzero/src/Publisher.cpp
and I don't think it avoids the malloc()... see my point 2) below:

Indeed I realized that probably current ZMQ API does not allow me to
achieve the 100% of what I intended to do.
Let me rephrase my target: my target is to be able to
 - memory pool creation: do a large memory allocation of, say, 1M zmq_msg_t
only at the start of my program; let's say I create all these zmq_msg_t of
a size of 2k bytes each (let's assume this is the max size of message
possible in my app)
 - during application lifetime: call zmq_msg_send() at anytime always
avoiding malloc() operations (just picking the first available unused entry
of zmq_msg_t from the memory pool).

Initially I thought that was possible but I think I have identified 2
blocking issues:
1) If I try to recycle zmq_msg_t directly: in this case I will fail because
I cannot really change only the "size" member of a zmq_msg_t without
reallocating it... so that I'm forced (in my example) to always send 2k
bytes out (!!)
2) if I do create only a memory pool of buffers of 2k bytes and then wrap
the first available buffer inside a zmq_msg_t (allocated on the stack, not
in the heap): in this case I need to know when the internals of ZMQ have
completed using the zmq_msg_t and thus when I can mark that buffer as
available again in my memory pool. However I see that zmq_msg_init_data()
ZMQ code contains:

//  Initialize constant message if there's no need to deallocate
if (ffn_ == NULL) {
...
_u.cmsg.data = data_;
_u.cmsg.size = size_;
...
} else {
...
_u.lmsg.content =
  static_cast<content_t *> (malloc (sizeof (content_t)));
...
_u.lmsg.content->data = data_;
_u.lmsg.content->size = size_;
_u.lmsg.content->ffn = ffn_;
_u.lmsg.content->hint = hint_;
new (&_u.lmsg.content->refcnt) zmq::atomic_counter_t ();
}

So that I skip malloc() operation only if I pass ffn_ == NULL. The problem
is that if I pass ffn_ == NULL, then I have no way to know when the
internals of ZMQ have completed using the zmq_msg_t...

Any way to workaround either issue 1) or issue 2) ?

I understand that the malloc is just of size(content_t)~= 40B... but still
I'd like to avoid it...

Thanks!
Francesco




On Thu, 4 Jul 2019 at 14:58, Stephan Opfer <op...@vs.uni-kassel.de> wrote:

>
> On 04.07.19 14:29, Luca Boccassi wrote:
> > How users make use of these primitives is up to them though, I don't
> > think anything special was shared before, as far as I remember.
>
> Some example can be found here:
> https://github.com/dasys-lab/capnzero/tree/master/capnzero/src
>
> The classes Publisher and Subscriber should replace the publisher and
> subscriber in a former Robot-Operating-System-based System. I hope that
> the subscriber is actually using the method Luca is talking about on the
> receiving side.
>
> The message data here is a Cap'n Proto container that we "simply"
> serialize and send via ZeroMQ -> therefore the name Cap'nZero ;-)
>
> --
> Distributed Systems Research Group
> Stephan Opfer  T. +49 561 804-6279  F. +49 561 804-6277
> Univ. Kassel,  FB 16,  Wilhelmshöher Allee 73,  D-34121 Kassel
> WWW: http://www.uni-kassel.de/go/vs_stephan-opfer/
>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


[zeromq-dev] Memory pool for zmq_msg_t

2019-07-04 Thread Francesco
Hi all,

I'm doing some benchmarking of a library I wrote based on ZMQ.
In most of my use cases if I do a "perf top" on my application thread I see
something like this:

  12,09%  [kernel]  [k] sysret_check
   7,48%  [kernel]  [k] system_call_after_swapgs
   5,64%  libc-2.25.so  [.] _int_malloc
   3,40%  libzmq.so.5.2.1   [.] zmq::socket_base_t::send
   3,20%  [kernel]  [k] do_sys_poll

That is, ignoring the calls to Linux kernel, I see that malloc() is the
most time-consuming operation my software is doing. After some investigation,
that turns out to be due to my use of zmq_msg_init_size().

Now I wonder: has anybody ever tried to avoid this kind of malloc() by
using the zmq_msg_init_data() API instead, with some sort of memory pool for
zmq_msg_t objects?

I've seen some proposal in this email thread:

https://lists.zeromq.org/mailman/private/zeromq-dev/2016-November/031131.html
but as far as I know nothing was submitted to the zmq community, right?

Thanks,
Francesco
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] zeromq website is not updated to zmq 4.3.x releases

2019-02-11 Thread Francesco
Thanks!

On Sun, 10 Feb 2019 at 19:56, Luca Boccassi <luca.bocca...@gmail.com> wrote:

> The websites were regenerated today
>
> On Sat, 2019-02-09 at 10:02 +0100, Francesco wrote:
> > Hi all,
> > Sorry to bother you but the API docs are not updated for zmq 4.3.x...
> > or
> > did I miss the new page?
> >
> > Thanks!
> > Francesco
> >
> >
> > Il giorno sab 2 feb 2019 alle ore 10:58 Francesco <
> > francesco.monto...@gmail.com> ha scritto:
> >
> > > Wonderful, thanks!
> > > I see the updated download page, not yet the API documentation page
> > > but I
> > > guess it's coming
> > >
> > > Thanks,
> > > Francesco
> > >
> > >
> > > Il giorno ven 1 feb 2019 alle ore 08:48 Luca Boccassi <
> > > luca.bocca...@gmail.com> ha scritto:
> > >
> > > > On Thu, 2019-01-31 at 20:28 +0100, Francesco wrote:
> > > > > Hi all,
> > > > > I just noticed that:
> > > > >  1) http://zeromq.org/intro:get-the-software  does not mention
> > > > > 4.3.x
> > > > > as
> > > > > stable release..but in github changelog
> > > > > https://github.com/zeromq/libzmq/releases that's declared as
> > > > > stable
> > > >
> > > > Hi,
> > > >
> > > > I've updated that page to avoid listing individual releases and
> > > > just
> > > > point to Github. No point in having to keep it up to date
> > > > manually.
> > > >
> > > > >  2) the documentation at http://api.zeromq.org does not contain
> > > > > a
> > > > > "4.3
> > > > > stable" option... indeed even if I choose "master" I do not see
> > > > > the
> > > > > same
> > > > > docs that are in git repo... probably they need to be
> > > > > regenerated...
> > > >
> > > > I'll try to regenerate that today.
> > > >
> > > > --
> > > > Kind regards,
> > > > Luca Boccassi
> > > >
> >
> --
> Kind regards,
> Luca Boccassi
>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] zeromq website is not updated to zmq 4.3.x releases

2019-02-09 Thread Francesco
Hi all,
Sorry to bother you but the API docs are not updated for zmq 4.3.x... or
did I miss the new page?

Thanks!
Francesco


On Sat, 2 Feb 2019 at 10:58, Francesco <francesco.monto...@gmail.com> wrote:

> Wonderful, thanks!
> I see the updated download page, not yet the API documentation page but I
> guess it's coming
>
> Thanks,
> Francesco
>
>
> Il giorno ven 1 feb 2019 alle ore 08:48 Luca Boccassi <
> luca.bocca...@gmail.com> ha scritto:
>
>> On Thu, 2019-01-31 at 20:28 +0100, Francesco wrote:
>> > Hi all,
>> > I just noticed that:
>> >  1) http://zeromq.org/intro:get-the-software  does not mention 4.3.x
>> > as
>> > stable release..but in github changelog
>> > https://github.com/zeromq/libzmq/releases that's declared as stable
>>
>> Hi,
>>
>> I've updated that page to avoid listing individual releases and just
>> point to Github. No point in having to keep it up to date manually.
>>
>> >  2) the documentation at http://api.zeromq.org does not contain a
>> > "4.3
>> > stable" option... indeed even if I choose "master" I do not see the
>> > same
>> > docs that are in git repo... probably they need to be regenerated...
>>
>> I'll try to regenerate that today.
>>
>> --
>> Kind regards,
>> Luca Boccassi
>>
>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


Re: [zeromq-dev] zeromq website is not updated to zmq 4.3.x releases

2019-02-02 Thread Francesco
Wonderful, thanks!
I see the updated download page, not yet the API documentation page but I
guess it's coming

Thanks,
Francesco


On Fri, 1 Feb 2019 at 08:48, Luca Boccassi <luca.bocca...@gmail.com> wrote:

> On Thu, 2019-01-31 at 20:28 +0100, Francesco wrote:
> > Hi all,
> > I just noticed that:
> >  1) http://zeromq.org/intro:get-the-software  does not mention 4.3.x
> > as
> > stable release..but in github changelog
> > https://github.com/zeromq/libzmq/releases that's declared as stable
>
> Hi,
>
> I've updated that page to avoid listing individual releases and just
> point to Github. No point in having to keep it up to date manually.
>
> >  2) the documentation at http://api.zeromq.org does not contain a
> > "4.3
> > stable" option... indeed even if I choose "master" I do not see the
> > same
> > docs that are in git repo... probably they need to be regenerated...
>
> I'll try to regenerate that today.
>
> --
> Kind regards,
> Luca Boccassi
>
___
zeromq-dev mailing list
zeromq-dev@lists.zeromq.org
https://lists.zeromq.org/mailman/listinfo/zeromq-dev


[zeromq-dev] zeromq website is not updated to zmq 4.3.x releases

2019-01-31 Thread Francesco
Hi all,
I just noticed that:
 1) http://zeromq.org/intro:get-the-software  does not mention 4.3.x as
stable release..but in github changelog
https://github.com/zeromq/libzmq/releases that's declared as stable

 2) the documentation at http://api.zeromq.org does not contain a "4.3
stable" option... indeed even if I choose "master" I do not see the same
docs that are in git repo... probably they need to be regenerated...

Just my 2 cents :)

Thanks,
Francesco


[zeromq-dev] Interface Redis - zeromq

2018-09-18 Thread Francesco
Hi all,
just curious: is there any open source library that you know of that allows
you to connect to Redis using zeromq (with a ZMQ_STREAM socket)?

I implemented my own zeromq layer that employs a portion of hiredis C
library for parsing Redis replies but avoids using hiredis TCP socket
handling code entirely.
However I would be interested in comparing my implementation with what's
out there...

Thanks,
Francesco


[zeromq-dev] Suggestion about online docs

2018-09-09 Thread Francesco
Hi all,
Is it just me, or every time I google a ZMQ API, Google points me to the v2.1
docs: e.g. if I search "ZMQ_PAIR" I get as first result:
   http://api.zeromq.org/2-1:zmq-socket

I think it would be nice and useful to have a link "go to this manpage for
ZMQ master branch" either at the top or bottom of older doc pages...

Just my thought

Francesco


Re: [zeromq-dev] Possible bug in ZMQ HWM handling?

2018-09-04 Thread Francesco
Hi all,
Just to follow up on this: I discovered why the HWMs were apparently
not working in the setup I described in my first email.
The reason is the following: in my setup the ZMQ proxy backend socket is a
TCP socket.
Now I discovered that when using the TCP transport, then if you set HWM=100
on the socket, you cannot expect the zmq_msg_send() to block on the 101-th
message: the reason is that the socket will block only if both ZMQ pipes
and kernel TCP buffers are full. Depending on the size of the TCP buffer,
that may happen after a lot more messages than 100.

Anyway I think that ZMQ HWM handling is neither very clear nor well documented.
I found some tickets still in OPEN status about this:
  https://github.com/zeromq/libzmq/issues/2724
  https://github.com/zeromq/libzmq/issues/1373

So I spent some time extending existing HWM unit test and creating a new
one to stress the ZMQ proxy when its HWM is reached.
I just created a work-in-progress PR here:
  https://github.com/zeromq/libzmq/pull/3242

If maintainers can take a look and let me know what they think, I can
complete that PR and put it in a form where it can be merged...

Thanks,
Francesco
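For contrast, the inproc transport enforces HWMs exactly, which makes the TCP-buffering effect described above easy to demonstrate. A minimal pyzmq sketch (the endpoint name and the PUSH/PULL pair are invented for the demo):

```python
import zmq

# With inproc the pipe capacity is roughly SNDHWM + RCVHWM; over tcp the
# kernel socket buffers absorb many more messages before send blocks.
ctx = zmq.Context()
push = ctx.socket(zmq.PUSH)
push.set(zmq.SNDHWM, 10)
push.bind("inproc://hwm-demo")

pull = ctx.socket(zmq.PULL)
pull.set(zmq.RCVHWM, 10)
pull.connect("inproc://hwm-demo")

sent = 0
try:
    while True:
        push.send(b"x", zmq.DONTWAIT)  # non-blocking send
        sent += 1
except zmq.Again:
    pass  # the pipe is full: roughly SNDHWM + RCVHWM messages queued

print(sent)  # about 20 here; an equivalent tcp loop runs far longer
ctx.destroy(linger=0)
```

Replacing inproc with tcp in the same sketch lets the loop run until both the ZMQ pipe and the kernel TCP buffers fill, which is the behaviour observed above.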








Il giorno ven 31 ago 2018 alle ore 19:36 Francesco <
francesco.monto...@gmail.com> ha scritto:

> I forgot to specify: this happens with ZeroMQ 4.2.3 on Linux.
>
> Il giorno ven 31 ago 2018 alle ore 18:54 Francesco <
> francesco.monto...@gmail.com> ha scritto:
>
>> Hi all,
>> I spent quite a bit of time to look into a weird issue.
>> Here's my setup:
>>  1 PUB socket  connected to a STEERABLE PROXY FRONTED socket, using
>> INPROC transport
>>  1 SUB socket  connected to the STEERABLE PROXY BACKEND socket, using
>> TCP transport
>>
>> Before starting stuff (i.e. calling zmq_connect() or zmq_bind()) I lower
>> all HWMs (for both TX/RX) of all sockets to just "10".  Moreover I set
>> ZMQ_XPUB_NODROP=1.
>> Now my expected behaviour is that after 20 messages sent from the PUB
>> socket, the zmq_send() on that PUB becomes blocking.
>> You may argue that since I have the XPUB/XSUB sockets of the proxy in the
>> middle, the value after it becomes blocking is 40. That's ok.
>>
>> The experimental result I found is that the PUB NEVER becomes blocking.
>> If I send 1 messages from the PUB they all go through.
>> In the SUB I sleep 1 second after EVERY message received. It takes a
>> while but in the end the SUB receives all the 1 messages...
>>
>> Is there an explanation for this? Who buffered all those messages
>> (against my will) ?
>>
>> Thanks for any hint!
>> Francesco
>>
>>
>>


Re: [zeromq-dev] Possible bug in ZMQ HWM handling?

2018-08-31 Thread Francesco
I forgot to specify: this happens with ZeroMQ 4.2.3 on Linux.

Il giorno ven 31 ago 2018 alle ore 18:54 Francesco <
francesco.monto...@gmail.com> ha scritto:

> Hi all,
> I spent quite a bit of time to look into a weird issue.
> Here's my setup:
>  1 PUB socket  connected to a STEERABLE PROXY FRONTED socket, using
> INPROC transport
>  1 SUB socket  connected to the STEERABLE PROXY BACKEND socket, using
> TCP transport
>
> Before starting stuff (i.e. calling zmq_connect() or zmq_bind()) I lower
> all HWMs (for both TX/RX) of all sockets to just "10".  Moreover I set
> ZMQ_XPUB_NODROP=1.
> Now my expected behaviour is that after 20 messages sent from the PUB
> socket, the zmq_send() on that PUB becomes blocking.
> You may argue that since I have the XPUB/XSUB sockets of the proxy in the
> middle, the value after it becomes blocking is 40. That's ok.
>
> The experimental result I found is that the PUB NEVER becomes blocking.
> If I send 1 messages from the PUB they all go through.
> In the SUB I sleep 1 second after EVERY message received. It takes a while
> but in the end the SUB receives all the 1 messages...
>
> Is there an explanation for this? Who buffered all those messages (against
> my will) ?
>
> Thanks for any hint!
> Francesco
>
>
>


[zeromq-dev] Possible bug in ZMQ HWM handling?

2018-08-31 Thread Francesco
Hi all,
I spent quite a bit of time looking into a weird issue.
Here's my setup:
 1 PUB socket connected to a STEERABLE PROXY FRONTEND socket, using
INPROC transport
 1 SUB socket connected to the STEERABLE PROXY BACKEND socket, using
TCP transport

Before starting stuff (i.e. calling zmq_connect() or zmq_bind()) I lower
all HWMs (for both TX/RX) of all sockets to just "10".  Moreover I set
ZMQ_XPUB_NODROP=1.
Now my expected behaviour is that after 20 messages sent from the PUB
socket, the zmq_send() on that PUB becomes blocking.
You may argue that since I have the XPUB/XSUB sockets of the proxy in the
middle, the value after it becomes blocking is 40. That's ok.

The experimental result I found is that the PUB NEVER becomes blocking.
If I send 1 messages from the PUB they all go through.
In the SUB I sleep 1 second after EVERY message received. It takes a while
but in the end the SUB receives all the 1 messages...

Is there an explanation for this? Who buffered all those messages (against
my will) ?

Thanks for any hint!
Francesco


Re: [zeromq-dev] HWM on ZMQ_PUB does not work for me (with code)

2018-08-23 Thread Francesco
Maybe the OP wants to use
 - ZMQ_XPUB_NODROP=1
 - ZMQ_SNDTIMEO=0
In that way as James wrote the slowest consumer will provoke zmq_send() to
return EAGAIN as soon as the slowest consumer's queue is full...

HTH,
Francesco
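A sketch of that combination, assuming an XPUB socket (ZMQ_XPUB_NODROP applies to PUB and XPUB; XPUB is used here so the subscription can be confirmed before sending). The endpoint name is invented:

```python
import zmq

ctx = zmq.Context()
pub = ctx.socket(zmq.XPUB)
pub.set(zmq.XPUB_NODROP, 1)  # don't silently drop at HWM...
pub.set(zmq.SNDTIMEO, 0)     # ...fail immediately with EAGAIN instead
pub.set(zmq.SNDHWM, 10)
pub.bind("inproc://nodrop-demo")

sub = ctx.socket(zmq.SUB)
sub.set(zmq.RCVHWM, 10)
sub.subscribe(b"")
sub.connect("inproc://nodrop-demo")
pub.recv()  # wait for the subscription message so sends aren't dropped

blocked = False
for _ in range(1000):
    try:
        pub.send(b"tick")
    except zmq.Again:
        blocked = True  # the slowest subscriber's queue is full
        break

print(blocked)  # True once roughly SNDHWM + RCVHWM messages are queued
ctx.destroy(linger=0)
```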



Il giorno gio 23 ago 2018 alle ore 09:51 James Harvey <
jamesdillonhar...@gmail.com> ha scritto:

> ZMQ_PUB does not block which makes sense as you could end up with one slow
> consumer slowing down everyone.
>
> Maybe ZMQ_CONFLATE is what you need?
>
> or If you only have one consumer try a different socket type that does
> block.
>
>
>
> On Wed, Aug 22, 2018 at 10:54 PM vincent freedom 
> wrote:
>
>> Not sure why I am not getting any responses.
>>
>> I did the googling. I know there is a ticket/task to improve the
>> documentation
>> on HWM.
>>
>> Anyone?
>> ___
>> zeromq-dev mailing list
>> zeromq-dev@lists.zeromq.org
>> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>>
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>


Re: [zeromq-dev] problem with sending and recieving in a simple PAIR setup

2018-08-18 Thread Francesco
Hi Michael,
Are you using the same socket from two different threads (to run the rtx
and ttx functions)?
That's not allowed... ZMQ socket objects are not thread safe...

HTH,
Francesco
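A minimal sketch of the thread-safe pattern: each socket is created and used by exactly one thread, and threads talk over an inproc PAIR (the endpoint name is invented):

```python
import threading
import zmq

ctx = zmq.Context()  # contexts ARE thread safe; sockets are not

def worker():
    s = ctx.socket(zmq.PAIR)   # this socket lives only in this thread
    s.connect("inproc://pipe")
    s.send(b"echo:" + s.recv())
    s.close()

main = ctx.socket(zmq.PAIR)    # this socket lives only in the main thread
main.bind("inproc://pipe")     # bind before the worker connects
t = threading.Thread(target=worker)
t.start()

main.send(b"hello")
reply = main.recv()
t.join()
main.close()
ctx.term()
print(reply)  # b'echo:hello'
```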

Il sab 18 ago 2018, 06:18 Michael Hansen  ha
scritto:

> Hi,
> I am not sure I see where there is missing some indentation.
> To me it looks okay regarding indentation.
> rtx is just a def with a while loop where recv is called and printed in
> every iteration.
>
> On Sat, Aug 18, 2018 at 12:44 AM, Tomer Eliyahu 
> wrote:
>
>> Aren't you missing indentation in rtx functions?
>>
>> On Sat, Aug 18, 2018, 01:20 Michael Hansen 
>> wrote:
>>
>>> Hello, I am quite new to ZMQ which seems like a really nice library.
>>> However i seem to be running into some issues already.
>>> I have to make a setup with just 2 peers who can send to each other.
>>> There is no order in how the clients communicate - however in generel
>>> data flows
>>> from one to the other and commands from the other to the first.
>>>
>>> I made a small test as below but communication seem to hang up and only
>>> one part is sending.
>>> Also - sometimes when i try to run this, i get the following error:
>>>
>>> 'Resource temporarily unavailable (bundled/zeromq/src/signaler.cpp:301)
>>>
>>> Abort trap: 6'
>>>
>>>
>>> Here are the 2 components, what am I doing wrong?
>>>
>>>
>>>
>>>
>>> # ==
>>> # = SERVER A
>>> # ==
>>>
>>>
>>> import threading
>>> import time
>>> import zmq
>>>
>>>
>>> port = 
>>> context = zmq.Context()
>>> socket = context.socket(zmq.PAIR)
>>> socket.connect("tcp://127.0.0.1:%s" % port)
>>>
>>>
>>> def rtx(socket):
>>>     print('A started receiving...')
>>>     while 1:
>>>         print(b'A RECEIVED : %s' % socket.recv())
>>>
>>>
>>> def ttx(socket):
>>>     print('A started transmitting...')
>>>     while 1:
>>>         socket.send(b'Message from A')
>>>         time.sleep(1)
>>>
>>>
>>> threading.Thread(target=rtx, args=(socket,)).start()
>>> threading.Thread(target=ttx, args=(socket,)).start()
>>>
>>>
>>>
>>>
>>> # ==
>>> # = SERVER B
>>> # ==
>>>
>>>
>>> import threading
>>> import time
>>> import zmq
>>>
>>>
>>> port = 
>>> context = zmq.Context()
>>> socket = context.socket(zmq.PAIR)
>>> socket.bind("tcp://127.0.0.1:%s" % port)
>>>
>>>
>>> def rtx(socket):
>>>     print('B started receiving...')
>>>     while 1:
>>>         print(b'B RECEIVED : %s' % socket.recv())
>>>
>>>
>>> def ttx(socket):
>>>     print('B started transmitting...')
>>>     while 1:
>>>         socket.send(b'Message from B')
>>>         time.sleep(1)
>>>
>>>
>>> threading.Thread(target=rtx, args=(socket,)).start()
>>> threading.Thread(target=ttx, args=(socket,)).start()
>>>
>>> ___
>>> zeromq-dev mailing list
>>> zeromq-dev@lists.zeromq.org
>>> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>>>
>>
>> ___
>> zeromq-dev mailing list
>> zeromq-dev@lists.zeromq.org
>> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>>
>>
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>


Re: [zeromq-dev] cppzmq - how to use ::monitor_t properly?

2018-05-25 Thread Francesco
Hi Attila,
I use the zmq::monitor_t::monitor() call from the context of a secondary
thread. In practice, every time I want to monitor a socket (mostly for
debugging) I create a new thread dedicated to its monitoring. This is an
easy solution, although probably not the best one, especially if you need
to monitor several sockets...

HTH,
Francesco
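The same per-socket monitor thread can be sketched with pyzmq rather than cppzmq (the tcp endpoint is arbitrary and has no listener; pyzmq's disable_monitor() ends the loop by emitting ZMQ_EVENT_MONITOR_STOPPED):

```python
import threading
import time
import zmq
from zmq.utils.monitor import recv_monitor_message

ctx = zmq.Context()
sock = ctx.socket(zmq.REQ)
mon = sock.get_monitor_socket()  # PAIR socket on an inproc monitor endpoint

events = []

def monitor_loop():
    while True:
        evt = recv_monitor_message(mon)  # blocks for the next event
        events.append(evt["event"])
        if evt["event"] == zmq.EVENT_MONITOR_STOPPED:
            break
    mon.close()

t = threading.Thread(target=monitor_loop)
t.start()

sock.connect("tcp://127.0.0.1:5557")  # no listener: CONNECT_DELAYED etc.
time.sleep(0.2)
sock.disable_monitor()  # emits MONITOR_STOPPED, letting the thread exit
t.join()
sock.close(linger=0)
ctx.term()
```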


2018-05-25 9:09 GMT+02:00 Attila Magyari <att...@gmail.com>:

> Hello,
>
> How do you use the cppzmq's zmq::monitor_t::monitor() call correctly? It
> is an infinitely blocking method, and I don't know how to exit from it
> gracefully.
>
> Is the use of socket monitoring discouraged? Are there better alternatives
> to get connection status of the socket?
>
> Thank you!
>
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
>


Re: [zeromq-dev] [REP-REQ] Sockets timeouts and connection reset

2018-04-14 Thread Francesco
Hi Victor,
did you look at ZMQ_REQ_RELAXED option?

With that I think you should manage to use REQ/REP without having to
destroy sockets and recreate them...

HTH,
Francesco
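A minimal pyzmq sketch of that option (the endpoint is hypothetical and nothing listens on it, which simulates the lost-reply case):

```python
import zmq

# ZMQ_REQ_RELAXED lets a REQ socket send a new request even though the
# previous reply never arrived; ZMQ_REQ_CORRELATE tags requests so a
# stale reply to an old request is dropped instead of mismatched.
ctx = zmq.Context()
req = ctx.socket(zmq.REQ)
req.set(zmq.REQ_RELAXED, 1)
req.set(zmq.REQ_CORRELATE, 1)
req.set(zmq.RCVTIMEO, 100)           # ms: give up on a lost reply quickly
req.connect("tcp://127.0.0.1:5559")  # hypothetical server address

req.send(b"ping")
retried = False
try:
    req.recv()                       # times out: the "lost reply" case
except zmq.Again:
    # A plain REQ socket would now be stuck in its state machine (EFSM);
    # with REQ_RELAXED this second send is legal on the same socket.
    req.send(b"ping")
    retried = True
req.close(linger=0)
ctx.term()
print(retried)
```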



2018-04-12 14:51 GMT+02:00 DUMAS, Victor <victor.du...@stago.com>:

> Francesco,
>
>
>
> Thanks for you reply.
>
> I need to reset the sockets periodically because setting no timeouts on
> them ends up in a REQ/REP deadlock where:
>
>- PeerA sends a message
>- PeerB receives it and sends a  ACK message
>- PeerA never receives the ACK
>- PeerB is waiting for the next message to arrive because the (REP)
>reply is not blocking
>
>
>
> From what I can see with wireshark the following sequence happens in a
> loop:
>
>- [SYN], PeerB -> PeerA
>- [RST,ACK], PeerA -> PeerB
>- [TCP Retransmission], PeerB -> PeerA
>- [RST,ACK], PeerA -> PeerB
>- [TCP Retransmission], PeerB -> PeerA
>- [RST,ACK], PeerA -> PeerB
>
>
>
> Victor
>
>
>
> *De : *Francesco <francesco.monto...@gmail.com>
> *Envoyé le :*Saturday, April 7, 2018 11:06 AM
> *À : *ZeroMQ development list <zeromq-dev@lists.zeromq.org>
> *Objet :*Re: [zeromq-dev] [REP-REQ] Sockets timeouts and connection reset
>
>
> Hi Victor,
>
>
> 2018-04-06 17:44 GMT+02:00 DUMAS, Victor <victor.du...@stago.com>:
>
>> In order to not block forever each socket has sending and receiving
>> timeouts. In case those timeouts are reached the sockets are destroyed and
>> recreated.
>>
>
> this statement has triggered my attention: why would you do that ?
> ZMQ sockets are meant to be long-lived objects from my understanding and
> if something bad happens at networking level (somebody pulls off the cable)
> they will automatically reconnect using ZMQ background threads once network
> connectivity is restored... I think that in your scenario destroying the
> sockets and recreating them is not really needed and does not really help:
> just keep retrying or (if you're using TCP transport and you have timeouts
> big enough) you can simply provide an error message of some kind (e.g.
> network failure)...
>
> Just my 2 cents,
> Francesco
>
>
>
>
>
>
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
>


Re: [zeromq-dev] User-space polling of ZMQ sockets

2018-04-12 Thread Francesco
Btw,
I realized that I actually already hit this "phenomenon" of poll() being
called so fast!!!
This is the link of the thread where I raised the issue:
   https://lists.zeromq.org/pipermail/zeromq-dev/2017-October/031974.html

At that time I solved by simply using the ZMQ_RCVTIMEO option as suggested
by Luca... this time it's slightly different because I'm already using that
option but now I have 2 zmq sockets to dequeue so that "inbound poll rate"
issue is doubled... I will check if changing that zmq non-configurable
internal parameter makes a performance difference or not!

Francesco
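The drain pattern referenced in this thread (one blocking poll, then non-blocking receives until EAGAIN) amortises the syscall cost across a batch of messages. A minimal sketch, with invented endpoint and message count:

```python
import zmq

ctx = zmq.Context()
pull = ctx.socket(zmq.PULL)
pull.bind("inproc://drain-demo")
push = ctx.socket(zmq.PUSH)
push.connect("inproc://drain-demo")
for i in range(50):
    push.send(b"msg")

poller = zmq.Poller()
poller.register(pull, zmq.POLLIN)

received = 0
while received < 50:
    events = dict(poller.poll(timeout=1000))  # one wait per batch
    if pull in events:
        while True:
            try:
                pull.recv(zmq.DONTWAIT)  # drain without blocking again
                received += 1
            except zmq.Again:
                break  # queue empty: go back to poll

print(received)  # 50
ctx.destroy(linger=0)
```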



2018-04-12 22:47 GMT+02:00 Francesco <francesco.monto...@gmail.com>:

> [sorry I hit "send" too early! here's the complete email]
>
> Hi Luca, Hi Bill,
> thanks for the answers.
> Actually I verified that the configure script of zmq seems to be using
> epoll:
>
>configure: Choosing polling system from 'kqueue epoll devpoll pollset
> poll select'...
>configure: Using 'epoll' polling system with CLOEXEC
>
> and indeed inside platform.hpp I found:
>
>   /* Use 'epoll' polling system */
>   #define ZMQ_USE_EPOLL 1
>
>   /* Use 'epoll' polling system with CLOEXEC */
>   #define ZMQ_USE_EPOLL_CLOEXEC 1
>
> I also read the links you sent me Bill: actually since I'm monitoring only
> 2 file descriptors (2 zmq sockets) I think that poll() and epoll()
> difference is hardly measurable.
> My feeling is just that whatever code ends up calling poll() so often will
> pay a cost in performance due to the syscall overhead:
>
>http://arkanis.de/weblog/2017-01-05-measurements-of-system-
> call-performance-and-overhead
>http://man7.org/linux/man-pages/man7/vdso.7.html
>
> So my question is actually the following one:
>In a ZMQ application, in which thread context are the "real" TCP socket
> file descriptors actually polled?
>Is that polling happening in ZMQ background threads?
>Or rather it happens inside the thread that calls the zmq_msg_recv() /
> zmq_poll() /  zmq_poller_wait_all()  ?
>
> Because from looking briefly at zmq code it looks like zmq_msg_recv()  is
> just talking with the zmq background thread via some kind of inter-thread
> signaler.
> So in my view I _guess_ that the real polling of the TCP socket should
> happen in zmq background threads and the application threads just dequeues
> some kind of queue that lives in those background threads. However that
> seem to be proved wrong by a simple test: I tried to put a breakpoint on
> the syscall poll()... the stacktrace I get is:
>
> #0  poll () at ../sysdeps/unix/syscall-template.S:84
> #1  0x763db0ba in zmq::signaler_t::wait(int) ()
> #2  0x763bb045 in zmq::mailbox_t::recv(zmq::command_t*, int) ()
> #3  0x763dba37 in zmq::socket_base_t::process_commands(int, bool)
> #4  0x763ddbca in zmq::socket_base_t::recv(zmq::msg_t*, int) ()
> #5  0x76400620 in zmq_msg_recv ()
> [my code stack frames]
>
> so it seems like it's the application threads that ends up calling that
> poll().
> Note that I set the RX timeout on my socket equal to 0 (ZMQ_DONTWAIT)...
>
>
> Thanks,
> Francesco
>
>
>
>
>
> 2018-04-12 22:39 GMT+02:00 Francesco <francesco.monto...@gmail.com>:
>
>> Hi Luca, Hi Bill,
>> thanks for the answers.
>> Actually I verified that the configure script of zmq seems to be using
>> epoll:
>>
>>configure: Choosing polling system from 'kqueue epoll devpoll pollset
>> poll select'...
>>configure: Using 'epoll' polling system with CLOEXEC
>>
>> and indeed inside platform.hpp I found:
>>
>>   /* Use 'epoll' polling system */
>>   #define ZMQ_USE_EPOLL 1
>>
>>   /* Use 'epoll' polling system with CLOEXEC */
>>   #define ZMQ_USE_EPOLL_CLOEXEC 1
>>
>> I also read the links you sent me Bill: actually since I'm monitoring
>> only 2 file descriptors (2 zmq sockets) I think that poll() and epoll()
>> difference is hardly measurable.
>> My feeling is just that whatever code ends up calling poll() so often
>> will pay a lot of
>>
>>http://arkanis.de/weblog/2017-01-05-measurements-of-system-
>> call-performance-and-overhead
>>
>>
>>
>>
>>
>>
>> 2018-04-11 21:05 GMT+02:00 Bill Torpey <wallstp...@gmail.com>:
>>
>>> Well, a little googling found this, which is a pretty good writeup:
>>> https://jvns.ca/blog/2017/06/03/async-io-on-linux--
>>> select--poll--and-epoll/
>>>
>>>
>>> On Apr 11, 2018, at 2:42 PM, Bill Torpey <wallstp...@gmail.com> wrote:
>>>
>>> So, are there any benchmark te

Re: [zeromq-dev] User-space polling of ZMQ sockets

2018-04-12 Thread Francesco
 [sorry I hit "send" too early! here's the complete email]

Hi Luca, Hi Bill,
thanks for the answers.
Actually I verified that the configure script of zmq seems to be using
epoll:

   configure: Choosing polling system from 'kqueue epoll devpoll pollset
poll select'...
   configure: Using 'epoll' polling system with CLOEXEC

and indeed inside platform.hpp I found:

  /* Use 'epoll' polling system */
  #define ZMQ_USE_EPOLL 1

  /* Use 'epoll' polling system with CLOEXEC */
  #define ZMQ_USE_EPOLL_CLOEXEC 1

I also read the links you sent me Bill: actually since I'm monitoring only
2 file descriptors (2 zmq sockets) I think that poll() and epoll()
difference is hardly measurable.
My feeling is just that whatever code ends up calling poll() so often will
pay a cost in performance due to the syscall overhead:

   http://arkanis.de/weblog/2017-01-05-measurements-of-
system-call-performance-and-overhead
   http://man7.org/linux/man-pages/man7/vdso.7.html

So my question is actually the following one:
   In a ZMQ application, in which thread context are the "real" TCP socket
file descriptors actually polled?
   Is that polling happening in ZMQ background threads?
   Or rather it happens inside the thread that calls the zmq_msg_recv() /
zmq_poll() /  zmq_poller_wait_all()  ?

Because from looking briefly at zmq code it looks like zmq_msg_recv()  is
just talking with the zmq background thread via some kind of inter-thread
signaler.
So in my view I _guess_ that the real polling of the TCP socket should
happen in zmq background threads and the application threads just dequeues
some kind of queue that lives in those background threads. However that
seem to be proved wrong by a simple test: I tried to put a breakpoint on
the syscall poll()... the stacktrace I get is:

#0  poll () at ../sysdeps/unix/syscall-template.S:84
#1  0x763db0ba in zmq::signaler_t::wait(int) ()
#2  0x763bb045 in zmq::mailbox_t::recv(zmq::command_t*, int) ()
#3  0x763dba37 in zmq::socket_base_t::process_commands(int, bool)
#4  0x763ddbca in zmq::socket_base_t::recv(zmq::msg_t*, int) ()
#5  0x76400620 in zmq_msg_recv ()
[my code stack frames]

so it seems like it's the application threads that ends up calling that
poll().
Note that I set the RX timeout on my socket equal to 0 (ZMQ_DONTWAIT)...


Thanks,
Francesco





2018-04-12 22:39 GMT+02:00 Francesco <francesco.monto...@gmail.com>:

> Hi Luca, Hi Bill,
> thanks for the answers.
> Actually I verified that the configure script of zmq seems to be using
> epoll:
>
>configure: Choosing polling system from 'kqueue epoll devpoll pollset
> poll select'...
>configure: Using 'epoll' polling system with CLOEXEC
>
> and indeed inside platform.hpp I found:
>
>   /* Use 'epoll' polling system */
>   #define ZMQ_USE_EPOLL 1
>
>   /* Use 'epoll' polling system with CLOEXEC */
>   #define ZMQ_USE_EPOLL_CLOEXEC 1
>
> I also read the links you sent me Bill: actually since I'm monitoring only
> 2 file descriptors (2 zmq sockets) I think that poll() and epoll()
> difference is hardly measurable.
> My feeling is just that whatever code ends up calling poll() so often will
> pay a lot of
>
>http://arkanis.de/weblog/2017-01-05-measurements-of-
> system-call-performance-and-overhead
>
>
>
>
>
>
> 2018-04-11 21:05 GMT+02:00 Bill Torpey <wallstp...@gmail.com>:
>
>> Well, a little googling found this, which is a pretty good writeup:
>> https://jvns.ca/blog/2017/06/03/async-io-on-linux--
>> select--poll--and-epoll/
>>
>>
>> On Apr 11, 2018, at 2:42 PM, Bill Torpey <wallstp...@gmail.com> wrote:
>>
>> So, are there any benchmark tests that can be used to quantify the
>> overhead of zmq_poll?  It seems like this question keeps coming up, and it
>> would certainly be nice to have some real numbers (and the code used to
>> generate them).
>>
>> Having said that, there are several mechanisms that zmq_poll can use, and
>> there are apparently significant performance differences between them.  My
>> understanding of the conventional wisdom is that epoll is preferable to
>> poll, which is preferable to select — but I don’t have any data to back
>> that up.
>>
>> In any event, you can examine the output of the libzmq build to see which
>> mechanism is being used:
>>
>> -- Looking for kqueue
>> -- Looking for kqueue - not found
>> -- Looking for epoll_create
>> -- Looking for epoll_create - found
>> -- Looking for epoll_create1
>> -- Looking for epoll_create1 - found
>> *-- Detected epoll polling method*
>>
>> I’ve found one (very old!) post that discusses these differences:
>> https://cs.uwaterloo.ca/~brecht/servers/epoll/.  If anyone can suggest
>> additio

Re: [zeromq-dev] User-space polling of ZMQ sockets

2018-04-12 Thread Francesco
Hi Luca, Hi Bill,
thanks for the answers.
Actually I verified that the configure script of zmq seems to be using
epoll:

   configure: Choosing polling system from 'kqueue epoll devpoll pollset
poll select'...
   configure: Using 'epoll' polling system with CLOEXEC

and indeed inside platform.hpp I found:

  /* Use 'epoll' polling system */
  #define ZMQ_USE_EPOLL 1

  /* Use 'epoll' polling system with CLOEXEC */
  #define ZMQ_USE_EPOLL_CLOEXEC 1

I also read the links you sent me Bill: actually since I'm monitoring only
2 file descriptors (2 zmq sockets) I think that poll() and epoll()
difference is hardly measurable.
My feeling is just that whatever code ends up calling poll() so often will
pay a lot of


http://arkanis.de/weblog/2017-01-05-measurements-of-system-call-performance-and-overhead






2018-04-11 21:05 GMT+02:00 Bill Torpey <wallstp...@gmail.com>:

> Well, a little googling found this, which is a pretty good writeup:
> https://jvns.ca/blog/2017/06/03/async-io-on-linux--
> select--poll--and-epoll/
>
>
> On Apr 11, 2018, at 2:42 PM, Bill Torpey <wallstp...@gmail.com> wrote:
>
> So, are there any benchmark tests that can be used to quantify the
> overhead of zmq_poll?  It seems like this question keeps coming up, and it
> would certainly be nice to have some real numbers (and the code used to
> generate them).
>
> Having said that, there are several mechanisms that zmq_poll can use, and
> there are apparently significant performance differences between them.  My
> understanding of the conventional wisdom is that epoll is preferable to
> poll, which is preferable to select — but I don’t have any data to back
> that up.
>
> In any event, you can examine the output of the libzmq build to see which
> mechanism is being used:
>
> -- Looking for kqueue
> -- Looking for kqueue - not found
> -- Looking for epoll_create
> -- Looking for epoll_create - found
> -- Looking for epoll_create1
> -- Looking for epoll_create1 - found
> *-- Detected epoll polling method*
>
> I’ve found one (very old!) post that discusses these differences:
> https://cs.uwaterloo.ca/~brecht/servers/epoll/.  If anyone can suggest
> additional sources, please do.
>
>
> On Apr 11, 2018, at 1:46 PM, Luca Boccassi <luca.bocca...@gmail.com>
> wrote:
>
> On Wed, 2018-04-11 at 18:47 +0200, Francesco wrote:
>
> Hi all,
>
> I'm using zmq_poller_wait_all() API in one of my threads because I
> need to
> poll over 2 ZMQ sockets.
> I'm receiving a lot of traffic on both these sockets.
> I think the performance of my software is bad and that may be due
> IMHO to
> the huge amount of poll() syscalls that my thread does... I think the
> overhead of doing the system call is what is killing me...
>
> so my question is: when zmq_poller_wait_all() polls zmq FDs it is
> actually
> polling the real socket FD or rather some ZMQ internal structure?
> If the latter is true, do I have a way to poll more than 1 zmq socket
> without doing a system call?
>
>
> THanks,
> Francesco
>
>
> I'm not familiar with the zmq_poller API so someone else might help -
> in general, a common optimisation is to drain the socket using non-
> blocking receives when poll wakes up (taking care of using heuristic to
> avoid starving the other sockets), so that you can process multiple
> messages per poll
>
> --
> Kind regards,
> Luca Boccassi___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
>
>
>
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
>


[zeromq-dev] User-space polling of ZMQ sockets

2018-04-11 Thread Francesco
Hi all,

I'm using zmq_poller_wait_all() API in one of my threads because I need to
poll over 2 ZMQ sockets.
I'm receiving a lot of traffic on both these sockets.
I think the performance of my software is bad, and IMHO that may be due to
the huge number of poll() syscalls that my thread makes... I think the
overhead of doing the system call is what is killing me...

so my question is: when zmq_poller_wait_all() polls zmq FDs it is actually
polling the real socket FD or rather some ZMQ internal structure?
If the latter is true, do I have a way to poll more than 1 zmq socket
without doing a system call?


THanks,
Francesco


Re: [zeromq-dev] [REP-REQ] Sockets timeouts and connection reset

2018-04-07 Thread Francesco
Hi Victor,


2018-04-06 17:44 GMT+02:00 DUMAS, Victor <victor.du...@stago.com>:

> In order to not block forever each socket has sending and receiving
> timeouts. In case those timeouts are reached the sockets are destroyed and
> recreated.
>

this statement caught my attention: why would you do that?
ZMQ sockets are meant to be long-lived objects from my understanding and if
something bad happens at networking level (somebody pulls off the cable)
they will automatically reconnect using ZMQ background threads once network
connectivity is restored... I think that in your scenario destroying the
sockets and recreating them is not really needed and does not really help:
just keep retrying or (if you're using TCP transport and you have timeouts
big enough) you can simply provide an error message of some kind (e.g.
network failure)...

Just my 2 cents,
Francesco


Re: [zeromq-dev] Test test_pair_tcp.cpp hangs at bounce()

2018-03-01 Thread Francesco
Hi,
maybe I'm wrong, but I think that ZMQ monitor sockets do not work on the
inproc transport...

HTH,
Francesco



2018-02-28 23:34 GMT+01:00 Manuel Segura <manuel.segu...@gmail.com>:

> I tried to add socket monitors to check the handshake, but it hangs when
> trying to read in an event from the monitor socket in the first call to
> zmq_msg_recv(), specifically this line:
>
> if (zmq_msg_recv (, monitor, 0) == -1)
>
> It appears to be the same problem with receiving when I called
> bounce() from testutils.hpp, only this time its with the inproc protocol.
>
> --Manuel
>
> On Wed, Feb 28, 2018 at 12:38 PM, Manuel Segura <manuel.segu...@gmail.com>
> wrote:
>
>> Hi Luca,
>>
>> The test_pair_ipc.cpp test fails as well in the same place. Those are the
>> only two I've tried so far.
>>
>> I'll add socket monitors and let you know about the handshake.
>>
>> Thanks,
>>
>> Manuel
>>
>> On Wed, Feb 28, 2018 at 12:08 PM, Luca Boccassi <luca.bocca...@gmail.com>
>> wrote:
>>
>>> On Wed, 2018-02-28 at 11:40 -0800, Manuel Segura wrote:
>>> > Hello,
>>> >
>>> > I'm porting libzmq to VxWorks and the test_pair_tcp.cpp test hangs
>>> > inside
>>> > the bounce() function call, specifically the first zmq_recv() call.
>>> > I've
>>> > traced this to zmq_recv() => s_recvmsg() => s_->recv() =>
>>> > process_commands()
>>> > => mailbox->recv() => signaler.wait() => select(). It seems xrecv()
>>> > fails
>>> > and it goes into a blocking wait.
>>> >
>>> > What would be some reasons that this would hang?
>>> >
>>> > Thank you,
>>> >
>>> > Manuel
>>>
>>> Hi,
>>>
>>> Do other tests fail? Or only that one?
>>>
>>> You can try and add socket monitors to check that the handshake
>>> succeeds:
>>>
>>> https://github.com/zeromq/libzmq/blob/master/doc/zmq_socket_monitor.txt
>>>
>>> Alternatively, given it's a tcp test, you can use wireshark/tshark with
>>> the zmtp dissector to snoop on the wire:
>>>
>>> https://github.com/whitequark/zmtp-wireshark
>>>
>>> --
>>> Kind regards,
>>> Luca Boccassi
>>> ___
>>> zeromq-dev mailing list
>>> zeromq-dev@lists.zeromq.org
>>> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>>>
>>>
>>
>
> ___
> zeromq-dev mailing list
> zeromq-dev@lists.zeromq.org
> https://lists.zeromq.org/mailman/listinfo/zeromq-dev
>
>


Re: [zeromq-dev] are zmq::atomic_ptr_t<> Helgrind warnings known?

2018-02-25 Thread Francesco
tion we have that ZMQ is race-free is based
> on “black-box” testing.  The good news is that ZMQ appears to be used
> widely enough that the code is well-tested.  (Of course, failure to find a
> bug doesn’t prove that there isn’t one, so having a rigorous analysis of
> ZMQ’s threading behavior would be A Good Thing — but a big job).
>

Yeah, I agree!


Thanks,
Francesco


Re: [zeromq-dev] are zmq::atomic_ptr_t<> Helgrind warnings known?

2018-02-24 Thread Francesco
Actually even building zeromq with -fsanitize=thread I still get a lot of
data races about ZMQ atomics:

WARNING: ThreadSanitizer: data race (pid=25770)
  Atomic write of size 4 at 0x7d417e48 by thread T4:
#0 __tsan_atomic32_fetch_add
../../../../gcc-5.3.0/libsanitizer/tsan/tsan_interface_atomic.cc:611
(libtsan.so.0+0x0005860f)
#1 zmq::atomic_counter_t::add(unsigned int) src/atomic_counter.hpp:116
(libzmq.so.5+0x0001a087)
#2 zmq::poller_base_t::adjust_load(int) src/poller_base.cpp:53
(libzmq.so.5+0x0006d20d)
#3 zmq::epoll_t::add_fd(int, zmq::i_poll_events*) src/epoll.cpp:91
(libzmq.so.5+0x0003c229)
#4 zmq::io_object_t::add_fd(int) src/io_object.cpp:66
(libzmq.so.5+0x000400ef)
#5 zmq::tcp_connecter_t::start_connecting() src/tcp_connecter.cpp:204
(libzmq.so.5+0x000abf30)
#6 zmq::tcp_connecter_t::process_plug() src/tcp_connecter.cpp:94
(libzmq.so.5+0x000ab6cb)
#7 zmq::object_t::process_command(zmq::command_t&) src/object.cpp:90
(libzmq.so.5+0x00059e37)
#8 zmq::io_thread_t::in_event() src/io_thread.cpp:85
(libzmq.so.5+0x00040cb3)
#9 zmq::epoll_t::loop() src/epoll.cpp:188 (libzmq.so.5+0x0003ce6d)
#10 zmq::epoll_t::worker_routine(void*) src/epoll.cpp:203
(libzmq.so.5+0x0003cfb6)
#11 thread_routine src/thread.cpp:109 (libzmq.so.5+0x000af00b)

  Previous read of size 4 at 0x7d417e48 by thread T2:
#0 zmq::atomic_counter_t::get() const src/atomic_counter.hpp:210
(libzmq.so.5+0x00061d20)
#1 zmq::poller_base_t::get_load() src/poller_base.cpp:47
(libzmq.so.5+0x0006d1c6)
#2 zmq::io_thread_t::get_load() src/io_thread.cpp:72
(libzmq.so.5+0x00040bef)
#3 zmq::ctx_t::choose_io_thread(unsigned long) src/ctx.cpp:451
(libzmq.so.5+0x000184ff)
#4 zmq::object_t::choose_io_thread(unsigned long) src/object.cpp:199
(libzmq.so.5+0x0005a67f)
#5 zmq::socket_base_t::bind(char const*) src/socket_base.cpp:610
(libzmq.so.5+0x0008bca6)
#6 zmq_bind src/zmq.cpp:326 (libzmq.so.5+0x000c8735)
   

or about ypipe:

WARNING: ThreadSanitizer: data race (pid=25770)
  Read of size 8 at 0x7d640001e0c0 by thread T4:
#0 zmq::ypipe_t<zmq::command_t, 16>::read(zmq::command_t*)
src/ypipe.hpp:170 (libzmq.so.5+0x00047e49)
#1 zmq::mailbox_t::recv(zmq::command_t*, int) src/mailbox.cpp:73
(libzmq.so.5+0x00047793)
#2 zmq::io_thread_t::in_event() src/io_thread.cpp:86
(libzmq.so.5+0x00040cd5)
#3 zmq::epoll_t::loop() src/epoll.cpp:188 (libzmq.so.5+0x0003ce6d)
#4 zmq::epoll_t::worker_routine(void*) src/epoll.cpp:203
(libzmq.so.5+0x0003cfb6)
#5 thread_routine src/thread.cpp:109 (libzmq.so.5+0x000af00b)

  Previous write of size 8 at 0x7d640001e0c0 by thread T2 (mutexes: write
M500):
#0 zmq::ypipe_t<zmq::command_t, 16>::write(zmq::command_t const&, bool)
src/ypipe.hpp:85 (libzmq.so.5+0x00047f05)
#1 zmq::mailbox_t::send(zmq::command_t const&) src/mailbox.cpp:62
(libzmq.so.5+0x000476dc)
#2 zmq::ctx_t::send_command(unsigned int, zmq::command_t const&)
src/ctx.cpp:438 (libzmq.so.5+0x00018420)
#3 zmq::object_t::send_command(zmq::command_t&) src/object.cpp:474
(libzmq.so.5+0x0005c07b)
#4 zmq::object_t::send_plug(zmq::own_t*, bool) src/object.cpp:220
(libzmq.so.5+0x0005a7ef)
#5 zmq::own_t::launch_child(zmq::own_t*) src/own.cpp:87
(libzmq.so.5+0x0006134f)
#6 zmq::socket_base_t::add_endpoint(char const*, zmq::own_t*,
zmq::pipe_t*) src/socket_base.cpp:1006 (libzmq.so.5+0x0008e081)
#7 zmq::socket_base_t::bind(char const*) src/socket_base.cpp:630
(libzmq.so.5+0x0008be89)
#8 zmq_bind src/zmq.cpp:326 (libzmq.so.5+0x000c8735)
...


I tried forcing the use of the C++11 internal atomics and all the warnings
about zmq::atomic_ptr_t<> disappear (I still get tons about ypipe and
"pipe")... maybe it's best to prefer C++11 atomics when available?

Btw I will try to write a suppression file for ThreadSanitizer and ZMQ... I'm
just starting to doubt: is all ZMQ code really race-free? :)


Thanks,
Francesco
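For reference, the suppression file mentioned above can be very small: the ThreadSanitizer wiki's own example simply suppresses every report whose stack touches libzmq. A hypothetical tsan.supp along those lines (the narrower symbol patterns are illustrative, matching the functions named in the reports above):

```
# tsan.supp - suppress ThreadSanitizer reports involving libzmq,
# as in the example from the ThreadSanitizer wiki
race:libzmq.so

# or, more selectively, by symbol pattern:
race:zmq::atomic_counter_t::*
race:zmq::ypipe_t*
```

The file is passed at run time, e.g. `TSAN_OPTIONS="suppressions=tsan.supp" ./my_app`.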




2018-02-24 16:44 GMT+01:00 Francesco <francesco.monto...@gmail.com>:

> Hi Bill,
> indeed I've found ThreadSanitizer to be more effective, i.e., it produces
> far fewer false positives than Helgrind... still I'm getting several
> race-condition warnings out of libzmq 4.2.3... so far I have built only my
> application code with -fsanitize=thread... do you know if I need to rebuild
> libzmq with that option as well? I will try to see if it makes any
> difference!
>
> Thanks,
> Francesco
>
>
>
>
> 2018-02-24 15:17 GMT+01:00 Bill Torpey <wallstp...@gmail.com>:
>
>> I’m using clang’s Thread Sanitizer for a similar purpose, and just
>> happened to notice that the TSAN docs use ZeroMQ as one of the example
>> suppressions:  https://github.com/google/sanitizers/wiki/ThreadSanitizerSuppressions

Re: [zeromq-dev] are zmq::atomic_ptr_t<> Helgrind warnings known?

2018-02-24 Thread Francesco
Hi Bill,
indeed I've found ThreadSanitizer to be more effective, i.e., it produces
far fewer false positives than Helgrind... still I'm getting several
race-condition warnings out of libzmq 4.2.3... so far I have built only my
application code with -fsanitize=thread... do you know if I need to rebuild
libzmq with that option as well? I will try to see if it makes any
difference!

Thanks,
Francesco
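(For what it's worth, the answer is yes: ThreadSanitizer only tracks memory accesses in code compiled with instrumentation, so races inside libzmq itself are only attributed correctly once the library is rebuilt too. An illustrative autotools invocation, flags hypothetical:)

```shell
# Illustrative only: rebuild libzmq itself with TSan instrumentation, so
# that accesses inside the library are tracked (uninstrumented code can
# hide races or produce confusing reports).
./configure CXXFLAGS="-fsanitize=thread -O1 -g" LDFLAGS="-fsanitize=thread"
make -j"$(nproc)"
```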




2018-02-24 15:17 GMT+01:00 Bill Torpey <wallstp...@gmail.com>:

> I’m using clang’s Thread Sanitizer for a similar purpose, and just
> happened to notice that the TSAN docs use ZeroMQ as one of the example
> suppressions:  https://github.com/google/sanitizers/wiki/ThreadSanitizerSuppressions
>
> I assume that the reason for suppressing libzmq.so has to do with (legacy)
> sockets not being thread-safe, so the code may exhibit race conditions that
> are irrelevant given that the code is not intended to be called from
> multiple threads.
>
> FWIW, you may want to check out the clang sanitizers — they have certain
> advantages over valgrind — (faster, multi-threaded, etc.) if you are able
> to instrument the code at build time.
>
>
> On Feb 23, 2018, at 6:52 AM, Luca Boccassi <luca.bocca...@gmail.com>
> wrote:
>
> On Fri, 2018-02-23 at 12:22 +0100, Francesco wrote:
>
> Hi all,
> I'm trying to further debug the problem I described in my earlier
> mail (
> https://lists.zeromq.org/pipermail/zeromq-dev/2018-February/032303.ht
> ml) so
> I decided to use Helgrind to find race conditions in my code.
>
> My problem is that apparently Helgrind 3.12.0 is reporting race
> conditions
> against zmq::atomic_ptr_t<> implementation.
> Now I know that Helgrind has troubles with C++11 atomics but by
> looking at
> the code I see that ZMQ is not using them (note: I do have
> ZMQ_ATOMIC_PTR_CXX11 defined but I also have ZMQ_ATOMIC_PTR_INTRINSIC
> defined, so the latter wins!).
>
> In particular Helgrind 3.12.0 tells me that:
>
>
> ==00:00:00:11.885 29399==
> ==00:00:00:11.885 29399== *Possible data race during read of size 8
> at
> 0xB373BF0 by thread #4*
> ==00:00:00:11.885 29399== Locks held: none
> ==00:00:00:11.885 29399==at 0x6BD79AB:
> *zmq::atomic_ptr_t::cas*(zmq::command_t*,
> zmq::command_t*)
> (atomic_ptr.hpp:150)
> ==00:00:00:11.885 29399==by 0x6BD7874:
> zmq::ypipe_t<zmq::command_t,
> 16>::check_read() (ypipe.hpp:147)
> ==00:00:00:11.885 29399==by 0x6BD7288:
> zmq::ypipe_t<zmq::command_t,
> 16>::read(zmq::command_t*) (ypipe.hpp:165)
> ==00:00:00:11.885 29399==by 0x6BD6FE7:
> zmq::mailbox_t::recv(zmq::command_t*, int) (mailbox.cpp:98)
> ==00:00:00:11.885 29399==by 0x6BD29FC:
> zmq::io_thread_t::in_event()
> (io_thread.cpp:81)
> ==00:00:00:11.885 29399==by 0x6BD05C1: zmq::epoll_t::loop()
> (epoll.cpp:188)
> ==00:00:00:11.885 29399==by 0x6BD06C3:
> zmq::epoll_t::worker_routine(void*) (epoll.cpp:203)
> ==00:00:00:11.885 29399==by 0x6C18BA5: thread_routine
> (thread.cpp:109)
> ==00:00:00:11.885 29399==by 0x4C2F837: mythread_wrapper
> (hg_intercepts.c:389)
> ==00:00:00:11.885 29399==by 0x6E72463: start_thread
> (pthread_create.c:334)
> ==00:00:00:11.885 29399==by 0x92F901C: clone (clone.S:109)
> ==00:00:00:11.885 29399==
> ==00:00:00:11.885 29399== This conflicts with a previous write of
> size 8 by
> thread #2
> ==00:00:00:11.885 29399== Locks held: 1, at address 0xB373C08
> ==00:00:00:11.885 29399==at 0x6BD77F4:
> *zmq::atomic_ptr_t::set*(zmq::command_t*)
> (atomic_ptr.hpp:90)
> ==00:00:00:11.885 29399==by 0x6BD7422:
> zmq::ypipe_t<zmq::command_t,
> 16>::flush() (ypipe.hpp:125)
> ==00:00:00:11.885 29399==by 0x6BD6DF5:
> zmq::mailbox_t::send(zmq::command_t const&) (mailbox.cpp:63)
> ==00:00:00:11.885 29399==by 0x6BB9128:
> zmq::ctx_t::send_command(unsigned int, zmq::command_t const&)
> (ctx.cpp:438)
> ==00:00:00:11.885 29399==by 0x6BE34CE:
> zmq::object_t::send_command(zmq::command_t&) (object.cpp:474)
> ==00:00:00:11.885 29399==by 0x6BE26F8:
> zmq::object_t::send_plug(zmq::own_t*, bool) (object.cpp:220)
> ==00:00:00:11.885 29399==by 0x6BE68E2:
> zmq::own_t::launch_child(zmq::own_t*) (own.cpp:87)
> ==00:00:00:11.885 29399==by 0x6C03D6C:
> zmq::socket_base_t::add_endpoint(char const*, zmq::own_t*,
> zmq::pipe_t*)
> (socket_base.cpp:1006)
> ==00:00:00:11.885 29399==  Address 0xb373bf0 is 128 bytes inside a
> block of
> size 224 alloc'd
> ==00:00:00:11.885 29399==at 0x4C2A6FD: operator new(unsigned
> long,
> std::nothrow_t const&) (vg_replace_malloc.c:376)
> ==00:00:00:11.885 29399==by 0x6BB8B8D:
> zmq::ctx_t::create_socket(int)
> (ctx.cpp:351)
> ==00:00:00:11.885 

[zeromq-dev] are zmq::atomic_ptr_t<> Helgrind warnings known?

2018-02-23 Thread Francesco
Hi all,
I'm trying to further debug the problem I described in my earlier mail (
https://lists.zeromq.org/pipermail/zeromq-dev/2018-February/032303.html) so
I decided to use Helgrind to find race conditions in my code.

My problem is that apparently Helgrind 3.12.0 is reporting race conditions
against zmq::atomic_ptr_t<> implementation.
Now I know that Helgrind has troubles with C++11 atomics but by looking at
the code I see that ZMQ is not using them (note: I do have
ZMQ_ATOMIC_PTR_CXX11 defined but I also have ZMQ_ATOMIC_PTR_INTRINSIC
defined, so the latter wins!).

In particular Helgrind 3.12.0 tells me that:


==00:00:00:11.885 29399==
==00:00:00:11.885 29399== *Possible data race during read of size 8 at
0xB373BF0 by thread #4*
==00:00:00:11.885 29399== Locks held: none
==00:00:00:11.885 29399==at 0x6BD79AB:
*zmq::atomic_ptr_t::cas*(zmq::command_t*, zmq::command_t*)
(atomic_ptr.hpp:150)
==00:00:00:11.885 29399==by 0x6BD7874: zmq::ypipe_t<zmq::command_t,
16>::check_read() (ypipe.hpp:147)
==00:00:00:11.885 29399==by 0x6BD7288: zmq::ypipe_t<zmq::command_t,
16>::read(zmq::command_t*) (ypipe.hpp:165)
==00:00:00:11.885 29399==by 0x6BD6FE7:
zmq::mailbox_t::recv(zmq::command_t*, int) (mailbox.cpp:98)
==00:00:00:11.885 29399==by 0x6BD29FC: zmq::io_thread_t::in_event()
(io_thread.cpp:81)
==00:00:00:11.885 29399==by 0x6BD05C1: zmq::epoll_t::loop()
(epoll.cpp:188)
==00:00:00:11.885 29399==by 0x6BD06C3:
zmq::epoll_t::worker_routine(void*) (epoll.cpp:203)
==00:00:00:11.885 29399==by 0x6C18BA5: thread_routine (thread.cpp:109)
==00:00:00:11.885 29399==by 0x4C2F837: mythread_wrapper
(hg_intercepts.c:389)
==00:00:00:11.885 29399==by 0x6E72463: start_thread
(pthread_create.c:334)
==00:00:00:11.885 29399==by 0x92F901C: clone (clone.S:109)
==00:00:00:11.885 29399==
==00:00:00:11.885 29399== This conflicts with a previous write of size 8 by
thread #2
==00:00:00:11.885 29399== Locks held: 1, at address 0xB373C08
==00:00:00:11.885 29399==at 0x6BD77F4:
*zmq::atomic_ptr_t::set*(zmq::command_t*)
(atomic_ptr.hpp:90)
==00:00:00:11.885 29399==by 0x6BD7422: zmq::ypipe_t<zmq::command_t,
16>::flush() (ypipe.hpp:125)
==00:00:00:11.885 29399==by 0x6BD6DF5:
zmq::mailbox_t::send(zmq::command_t const&) (mailbox.cpp:63)
==00:00:00:11.885 29399==by 0x6BB9128:
zmq::ctx_t::send_command(unsigned int, zmq::command_t const&) (ctx.cpp:438)
==00:00:00:11.885 29399==by 0x6BE34CE:
zmq::object_t::send_command(zmq::command_t&) (object.cpp:474)
==00:00:00:11.885 29399==by 0x6BE26F8:
zmq::object_t::send_plug(zmq::own_t*, bool) (object.cpp:220)
==00:00:00:11.885 29399==by 0x6BE68E2:
zmq::own_t::launch_child(zmq::own_t*) (own.cpp:87)
==00:00:00:11.885 29399==by 0x6C03D6C:
zmq::socket_base_t::add_endpoint(char const*, zmq::own_t*, zmq::pipe_t*)
(socket_base.cpp:1006)
==00:00:00:11.885 29399==  Address 0xb373bf0 is 128 bytes inside a block of
size 224 alloc'd
==00:00:00:11.885 29399==at 0x4C2A6FD: operator new(unsigned long,
std::nothrow_t const&) (vg_replace_malloc.c:376)
==00:00:00:11.885 29399==by 0x6BB8B8D: zmq::ctx_t::create_socket(int)
(ctx.cpp:351)
==00:00:00:11.885 29399==by 0x6C284D5: zmq_socket (zmq.cpp:267)
==00:00:00:11.885 29399==by 0x6143809:
ZmqClientSocket::Config(PubSubSocketConfig const&) (ZmqRequestReply.cpp:303)
==00:00:00:11.885 29399==by 0x6144069:
ZmqClientMultiSocket::Config(PubSubSocketConfig const&)
(ZmqRequestReply.cpp:407)
==00:00:00:11.885 29399==by 0x61684EF: client_thread_main(void*)
(ZmqRequestReplyUnitTests.cpp:132)
==00:00:00:11.886 29399==by 0x4C2F837: mythread_wrapper
(hg_intercepts.c:389)
==00:00:00:11.886 29399==by 0x6E72463: start_thread
(pthread_create.c:334)
==00:00:00:11.886 29399==by 0x92F901C: clone (clone.S:109)
==00:00:00:11.886 29399==  Block was alloc'd by thread #2


Is this a known (and ignorable) issue with  zmq::atomic_ptr_t<>?

Thanks,
Francesco


[zeromq-dev] ZMQ sporadic problem with REQ/REP sockets

2018-02-21 Thread Francesco
 know if the problem lies on the debug shell (REQ socket) or
on the daemon (REP socket)?
(everything seems to indicate the problem is in the daemon: only by
restarting it do I get back to a working condition!!)


Thanks,
Francesco


Re: [zeromq-dev] how to reliably wait for SUBscribers to connect before sending messages

2018-01-05 Thread Francesco
Hi Luca,

2018-01-05 12:23 GMT+01:00 Luca Boccassi <luca.bocca...@gmail.com>:

> On Fri, 2018-01-05 at 12:04 +0100, Francesco wrote:
> > Question for zmq developers: would be possible to add a new event to
> > the
> > socket monitor interface that tells you when the SUB side has
> > _really_
> > connected and is for sure ready to receive messages?
>
> Use XPUB - it will deliver the subscription message to the pub socket.
>
You mean that I can avoid the sleep() if I use an XPUB socket instead of
PUB and do:
 - zmq_bind(),
 - zmq_msg_recv() (waiting to receive a 1-byte subscription message),
 - zmq_msg_send(),
and the SUB socket will surely receive the same number of messages sent on
the XPUB side?

That would be a solution, even though at my stage of development changing
from PUB to XPUB may have side effects I cannot easily foresee right now...

Thanks,
Francesco


Re: [zeromq-dev] how to reliably wait for SUBscribers to connect before sending messages

2018-01-05 Thread Francesco
Hi Mykola,

>This approach may require quite some design changes, which is not always
welcome. If you need this only for testing purposes, you could try to go
with a socket monitor instead.
Yeah, indeed that's the case: I would need this change just for
unit-testing purposes... which seems too invasive to me.
I forgot to say: I already attempted using socket monitor:

> There are ZMQ_EVENT_ACCEPTED and ZMQ_EVENT_HANDSHAKE_SUCCEEDED events.
Listening for them on PUB socket will give you an idea when client
connects. But I believe there still will be some time window before client
subscription is handled, so you may still need some delay before starting
to send messages on PUB.
You're right: even if I receive the _ACCEPTED event on the socket monitor,
and thus can consider the PUB connected with the SUB, I still need to
sleep for some time after that, or otherwise the SUB will lose messages
anyway. That's really bothersome.
Question for zmq developers: would be possible to add a new event to the
socket monitor interface that tells you when the SUB side has _really_
connected and is for sure ready to receive messages?

Thanks!!
Francesco







2018-01-05 11:55 GMT+01:00 Mykola Ostrovskyy via zeromq-dev <
zeromq-dev@lists.zeromq.org>:

> Francesco,
>
> This is kind of a classic issue with PUB-SUB. You will have quite some
> results by googling for "zeromq reliable pub-sub".
>
> AFAIK, the proper solution is to have an additional communication channel
> with a stronger message delivery guarantee (e.g. REQ-REP). The PUB server
> can broadcast some dummy messages and wait on REP socket for replies. As
> soon as the client connects with SUB, subscribes, and starts to receive the
> dummy messages, it sends over REQ socket a request for the actual data.
>
> This approach may require quite some design changes, which is not always
> welcome. If you need this only for testing purposes, you could try to go
> with a socket monitor instead. There are ZMQ_EVENT_ACCEPTED and
> ZMQ_EVENT_HANDSHAKE_SUCCEEDED events. Listening for them on PUB socket will
> give you an idea when client connects. But I believe there still will be
> some time window before client subscription is handled, so you may still
> need some delay before starting to send messages on PUB.
>
>
> Regards,
> Mykola
>
>
> 2018-01-05 11:03 GMT+02:00 Francesco <francesco.monto...@gmail.com>:
>
>> Hi all,
>> I have some unit tests around the ZMQ wrappers I wrote for PUB/SUB
>> sockets (these wrappers integrate functionalities specific to my use case).
>>
>> In these unit tests I spawn a thread that creates a PUB socket and
>> another one creating a SUB socket. In the PUB thread I do zmq_bind(), then
>> sleep for a SETTLE_TIME and then I start sending messages with
>> zmq_msg_send().
>> In the SUB I just do zmq_connect() and then immediately zmq_msg_recv().
>>
>> The problem is when I run such unit tests under valgrind. In that case I
>> noticed that randomly my settle time of 1sec is not enough: the unit tests
>> fail because the SUB receives 0 messages instead of N>0.
>> A simple fix would be to increase the settle time. However since I repeat
>> that kind of tests hundreds of times, that means increasing testing time a
>> lot.
>>
>> I think my problem is that I need to wait after the zmq_bind() and before
>> zmq_msg_send() some time to allow ZMQ background threads to actually
>> connect the PUB and SUB sockets together.
>> Is there a better way to test if there are pending connections in ZMQ
>> background threads rather than waiting a (random) amount of time?
>>
>> Thanks,
>> Francesco
>>
>>
>>
>>
>>
>>
>
>
>


[zeromq-dev] how to reliably wait for SUBscribers to connect before sending messages

2018-01-05 Thread Francesco
Hi all,
I have some unit tests around the ZMQ wrappers I wrote for PUB/SUB sockets
(these wrappers integrate functionalities specific to my use case).

In these unit tests I spawn a thread that creates a PUB socket and another
one creating a SUB socket. In the PUB thread I do zmq_bind(), then sleep
for a SETTLE_TIME and then I start sending messages with zmq_msg_send().
In the SUB I just do zmq_connect() and then immediately zmq_msg_recv().

The problem is when I run such unit tests under valgrind. In that case I
noticed that randomly my settle time of 1sec is not enough: the unit tests
fail because the SUB receives 0 messages instead of N>0.
A simple fix would be to increase the settle time. However since I repeat
that kind of tests hundreds of times, that means increasing testing time a
lot.

I think my problem is that I need to wait after the zmq_bind() and before
zmq_msg_send() some time to allow ZMQ background threads to actually
connect the PUB and SUB sockets together.
Is there a better way to test if there are pending connections in ZMQ
background threads rather than waiting a (random) amount of time?

Thanks,
Francesco


Re: [zeromq-dev] Need a pattern suggestion

2017-12-09 Thread Francesco
Hi Mark,
I suggest you take a look at the ZeroMQ Guide and search for the "slow
joiner" issue... there are a few patterns for working around it...

HTH
Francesco

2017-12-08 21:01 GMT+01:00 Mark Holbrook <ma...@propel-labs.com>:

> Hi all,
>
>
>
> We are going to be writing a Windows Service app which will hopefully
> listen for incoming data from our main application and write it to a
> database then to the cloud.
>
>
>
> I’m struggling with one basic understanding problem on how to do this…
>
>
>
> The Windows Service app will likely startup well before our application
> will.  It needs to go into “listening” mode.
>
>
>
> When our app starts it needs to connect to the service and start sending
> it data.
>
>
>
> All seems logical, but in my tests using a PULL socket as the service and a
> PUSH in our app, if I start the service before the app they do not connect.
> I have to start the push socket first and then the pull, which is backwards
> from what we need.
>
>
>
> I also cannot see how I can tell in my app if connection to the service is
> successful.  The connect methods on the C# CLR Nuget offer no return
> indicating success or failure.
>
>
>
> So my question is:
>
>
>
> What pattern would allow the receiver to sit and listen and have the
> sender connect later and start sending.  We also need to support the case
> of the sender disconnecting and terminating then being restarted and
> reconnecting.
>
>
>
> Is there some way to know that a connection has successfully been made or
> is that usually done by like a REQ response thing?
>
>
>
> Thanks in advance!
>
>
>
>
>


  1   2   >