Hi,
Maybe this can be combined with a request that I have seen a couple of times to
be able to configure the allocator used in libzmq? I am thinking of something
like
struct zmq_allocator {
void* obj;
void* (*allocate)(size_t n, void* obj);
void (*free_)(void* ptr, void* obj);
};
void*
Hi,
I've also had good experiences with Google's gperftools
(https://github.com/gperftools/gperftools) / tcmalloc. I also saw a
nice speed-up with tcmalloc in addition to its use as a memory profiler. The
generated profile data can be viewed with kcachegrind.
Best wishes,
Jens
Hi,
I don’t think it is just syntactic sugar, even if the changes are not what I
was hoping for. Inline variables and guaranteed copy elision are semantic
changes, so there are some non-syntactic features. Most changes are small, but
I think they will have significant impact on the way that
make it hard to adopt the code,
instead requiring a complete rewrite:
2017-05-18 20:29 GMT+02:00 Jens Auer <jens.a...@betaversion.net>:
Hi,
I would be happy to contribute to such a project, even if many users will stay
with the "old" code. For me, it is a great way to le
Hi,
I would be happy to contribute to such a project, even if many users will stay
with the "old" code. For me, it is a great way to learn something. I would also
be happy to aim for C++14 or even C++17 once it is officially released. I think
structured bindings and the new if (init;
Hi,
quoting the manual from the link in Colin’s answer:
When a ZMQ_PUSH socket enters the mute state due to having reached the high
water mark for all downstream nodes, or if there are no downstream nodes at
all, then any zmq_send(3) operations on the socket shall block until the mute
Hi,
how do you send the messages? The default HWM is 1000 messages, and sending is
done asynchronously in the I/O thread. If you exceed this number of messages in
the queue, e.g. by sending a batch of much more than 1000 messages, you will
experience message drops.
Best wishes,
Jens
at 11:39 AM, Jens Auer <jens.a...@betaversion.net> wrote:
Hi,
yes and no. If you override it globally at compile time, every program on the
system has to use your custom implementation. So if you deliver your ZeroMQ
library with your program it will work, but what if my program
void* allocate(size_t n, void* obj) {
return static_cast<std::allocator<char>*>(obj)->allocate(n); }
void free_(void* ptr, size_t n, void* obj) {
static_cast<std::allocator<char>*>(obj)->deallocate(static_cast<char*>(ptr), n); }
std::allocator<char> a;
allocator_t zmqAlloc{
&a,
allocate,
free_
};
void* ctx = zmq_ctx_new_with_allocator(&zmqAlloc);
I think this should work?
Best wishes,
Jens
--
Dr. Jens Auer | CGI
Hi,
I think I have an idea why you are seeing unaligned messages, but this only
applies to messages where the payload is not stored in the msg_t object itself.
I think the threshold for this is 64 bytes. In ZeroMQ 4.1, receiving messages
was done by first receiving from the socket into a
Hi,
I am using 4.1.4. From the release notes, it seems that the fix is neither in
4.1.4 nor in 4.1.5. I will try with a patched version.
Thanks,
jens
-Original Message-
From: Fedor Sheremetyev [mailto:sheremet...@gmail.com]
Sent: Wednesday, 22 June 2016 00:16
To: Jens Auer
be the send.
On Sat, Jun 18, 2016 at 7:54 PM, Jens Auer <jens.a...@betaversion.net> wrote:
Hi,
I’m using the C++ binding from
https://github.com/zeromq/cppzmq/blob/master/zmq.hpp. It is quite simple. The
send method I use (there are several depending on the parameters) is:
inlin
number from the send method; most bindings
return the number of bytes sent and not the error code. It might be different
with the binding you are using, but I'm not sure.
Anyway, if that is the case, it might be that the number of bytes just equals
EAGAIN?
On Sat, Jun 18, 2016 at 11:59 AM, Je
got disconnected.
On Fri, Jun 17, 2016 at 8:05 PM, Jens Auer <jens.a...@betaversion.net> wrote:
Hi,
I am witnessing something in my system that I didn't expect. I am processing
data at a data rate of 100Mbps, with occasional peaks of 5s of data (=5 *
100Mbps) delivered as fast as po
Sorry for the HTML mail and the anti-virus spam it includes. I forgot to
change the Outlook settings.
Cheers,
Jens
From: zeromq-dev [mailto:zeromq-dev-boun...@lists.zeromq.org] On Behalf Of
Jens Auer
Sent: Friday, 17 June 2016 19:09
To: zeromq-dev@lists.zeromq.org
Subject
Hi,
I am using ZMQ_STREAM to receive and deliver data from/to non-zeroMQ
applications. For delivery, I have to send the data to a number of connected
clients. All clients receive a copy of the data. Since a ZMQ_STREAM socket
needs a first peer identity part, I have to manually add this every
Hi,
I am witnessing something in my system that I didn't expect. I am processing
data at a data rate of 100Mbps, with occasional peaks of 5s of data (=5 *
100Mbps) delivered as fast as possible because an internal buffer is
flushed. I use ZeroMQ internally, but I have to deliver the data on a TCP
MORE, then we won't be able to send
more */
/* message to any client */
}
std::cout << "Done" << std::endl;
int i;
std::cin >> i;
zmq_close(socket);
zmq_ctx_destroy(ctx);
return 0;
}
Cheers,
Jens
--
Jens Auer | CGI | Software-Engineer
CGI (Germany)
Hi,
it is not recommended to use memcpy'd structs in C (or, even worse, classes in
C++) as an exchange format. There are many reasons why this can go wrong:
- You do not take different byte orders into account
- The size of int, short etc. is not defined in C/C++. If you run the
...@lists.zeromq.org] On Behalf Of Thomas Rodgers
Sent: Tuesday, 2 June 2015 19:59
To: Jens Auer; ZeroMQ development list
Subject: Re: [zeromq-dev] (almost) zero-copy message receive
FWIW, I made a local change to msg_t::content_t to use a type erased
zmq::atomic_counter_t
on the existence of
two pointers of different type pointing to the same address. There is an
exception for char*, but here we would have a char array and an
atomic_counter_t pointer pointing to it. This is illegal.
Cheers,
Jens
--
Jens Auer | CGI | Software-Engineer
CGI (Germany) GmbH Co. KG
allocations with one allocation, and it gets rid of the
same number of memcpy operations for the message data. I implemented this in a
fork (https://github.com/jens-auer/libzmq/tree/zero_copy_receive). For
testing, I ran the throughput test (message size 100, 10 messages) locally
and profiled