Being clever in zeromq and unsetting HAVE_SOCK_CLOEXEC will not help, as the zeromq server will crash sooner or later when exiting a client.
Nothing will crash. It will leak a socket if you run an exec call, or fork then exec, which is usually avoidable in zeromq apps. To remove the memory leak
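For context, the close-on-exec mechanism being discussed works roughly like this. A minimal sketch (not zeromq's actual code) of marking a descriptor close-on-exec with fcntl(); SOCK_CLOEXEC just does the same thing atomically when the socket is created:

/* Minimal sketch: mark a descriptor close-on-exec so it is not
 * leaked into a fork+exec'd child. Not zeromq's actual code. */
#include <fcntl.h>
#include <sys/socket.h>

int make_cloexec(int fd)
{
    int flags = fcntl(fd, F_GETFD);
    if (flags == -1)
        return -1;
    return fcntl(fd, F_SETFD, flags | FD_CLOEXEC);
}

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd != -1)
        make_cloexec(fd);   /* children created by fork+exec
                               will not inherit this socket */
    return 0;
}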
Just tested ZeroMQ and Java NIO on the same machine.
The results:
- ZeroMQ:
message size: 13 [B]
roundtrip count: 10
average latency: 19.620 [us] == ONE-WAY LATENCY
- Java NIO Selector: (EPoll)
Average RTT (round-trip time) latency of a 13-byte message: 15.342 [us]
Min Time:
As far as I can see, you haven't included your test methodology or your test code. Without any information about your test, I can't form an opinion on your results. Maybe I missed an earlier email where you included information about your test environment and methodology?
Brian
On Wed, Aug 29, 2012
On Aug 29, 2012, at 10:13 AM, Julie Anderson wrote:
Just tested ZeroMQ and Java NIO on the same machine.
The results:
- ZeroMQ:
message size: 13 [B]
roundtrip count: 10
average latency: 19.620 [us] == ONE-WAY LATENCY
- Java NIO Selector: (EPoll)
Average RTT (round-trip
Hi All,
I am looking for a messaging pattern for the following scenario.
I have a Java NIO based server X, which has some threads processing client requests. These threads receive events asynchronously. Now, I want to send some of the events to another service (another server) Y in asynchronous
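One common way to wire up this kind of fan-out is a PUSH socket on X connecting to a PULL socket bound on Y, so sends are queued locally and delivered in the background. A minimal sketch using the libzmq C API (assuming libzmq 4.x; the endpoint name is made up, and the same wiring exists in the Java bindings):

/* Sketch: hand events to service Y asynchronously via PUSH/PULL.
 * Assumes libzmq 4.x; "tcp://serverY:5555" is hypothetical. */
#include <zmq.h>
#include <string.h>

int main(void)
{
    void *ctx = zmq_ctx_new();
    void *push = zmq_socket(ctx, ZMQ_PUSH);
    zmq_connect(push, "tcp://serverY:5555");

    /* zmq_send() queues the event and returns immediately;
     * the IO thread delivers it in the background. */
    const char *event = "event-payload";
    zmq_send(push, event, strlen(event), 0);

    zmq_close(push);
    zmq_ctx_term(ctx);
    return 0;
}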
New numbers (fun!). Firstly, to make sure I was comparing apples with apples, I modified my tests to compute the one-way trip instead of the round-trip. I can't paste code, but I am simply using Java NIO (non-blocking I/O) optimized with busy spinning to send and receive TCP data. This is *standard*
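The Java code was not shared, but "busy spinning" on a non-blocking socket generally means something like the following C sketch: rather than blocking in epoll/select, the thread loops on recv() until data arrives, trading CPU for latency (spin_recv is a made-up helper for illustration):

/* Sketch of busy-spin receive on a non-blocking socket.
 * An assumption about the technique, not the poster's code. */
#include <errno.h>
#include <fcntl.h>
#include <sys/socket.h>

ssize_t spin_recv(int fd, void *buf, size_t len)
{
    fcntl(fd, F_SETFL, fcntl(fd, F_GETFL) | O_NONBLOCK);
    for (;;) {
        ssize_t n = recv(fd, buf, len, 0);
        if (n >= 0)
            return n;              /* got data (or orderly close) */
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            return -1;             /* real error */
        /* else: spin and try again immediately */
    }
}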
Julie, it is a little exasperating that you keep posting these numbers (and
related questions) but, to date, have not shown the CODE used to get them. It
is not possible to give a meaningful answer to your questions without looking
at the EXACT code you are using. Furthermore, it would be very
Here are the UDP numbers, for whomever they may concern. As one would expect, they are much better than TCP.
RTT: (round-trip time)
Iterations: 1,000,000 | Avg Time: *10373.9 nanos* | Min Time: 8626 nanos |
Max Time: 136269 nanos | 75%: 10186 nanos | 90%: 10253 nanos | 99%: 10327
nanos | 99.999%: 10372 nanos
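The benchmark code itself was not posted, but a UDP round-trip measurement of this kind is roughly the following sketch (the address, port, and the existence of an echo server on the other side are all assumptions):

/* Sketch: time one UDP round trip of a 13-byte datagram.
 * Assumes an echo server at a hypothetical address/port. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <sys/socket.h>
#include <time.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(9999);                 /* hypothetical */
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    connect(fd, (struct sockaddr *)&addr, sizeof addr);

    char msg[13] = "hello-world!";               /* 13-byte payload */
    char buf[13];
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    send(fd, msg, sizeof msg, 0);
    recv(fd, buf, sizeof buf, 0);                /* wait for echo */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    long rtt_ns = (long)((t1.tv_sec - t0.tv_sec) * 1000000000L
                       + (t1.tv_nsec - t0.tv_nsec));
    printf("RTT: %ld nanos\n", rtt_ns);
    return 0;
}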
On Tue, Aug 28, 2012 at 2:16 PM, Ian Barber ian.bar...@gmail.com wrote:
Ah, I fixed a similar issue in master the other day, may well be the
same thing. I'll check and send a pull req when I get home.
Ian
That's all merged in now by the way, so give it another go.
Ian
On Wed, Aug 29, 2012 at 2:20 AM, lzqsmst lzqs...@qq.com wrote:
What's wrong with the site snapshot.zero.mq? I want to get the 0MQ PHP DLL for Windows, please.
The machine it's on has died - Mikko is looking into it, but the server
seems to have become rather unhappy.
Ian
On Wednesday 29, Julie Anderson wrote:
So questions remain:
1) What does ZeroMQ do under the hood that justifies so many extra clock cycles? (I am really curious to know)
ZeroMQ is using background IO threads to do the sending/receiving. So the extra latency is due to passing the messages
See my comments below:
On Wed, Aug 29, 2012 at 4:06 PM, Robert G. Jakabosky bo...@sharedrealm.com wrote:
On Wednesday 29, Julie Anderson wrote:
So questions remain:
1) What does ZeroMQ do under the hood that justifies so many extra clock cycles? (I am really curious to know)
ZeroMQ is
On Aug 29, 2012, at 4:46 PM, Julie Anderson wrote:
See my comments below:
And mine too.
On Wed, Aug 29, 2012 at 4:06 PM, Robert G. Jakabosky bo...@sharedrealm.com
wrote:
On Wednesday 29, Julie Anderson wrote:
So questions remain:
1) What does ZeroMQ do under the hood that justifies
See my comments below:
On Wed, Aug 29, 2012 at 5:28 PM, Chuck Remes li...@chuckremes.com wrote:
On Aug 29, 2012, at 4:46 PM, Julie Anderson wrote:
See my comments below:
And mine too.
On Wed, Aug 29, 2012 at 4:06 PM, Robert G. Jakabosky
bo...@sharedrealm.com wrote:
On Wednesday 29,
On Wednesday 29, Julie Anderson wrote:
See my comments below:
On Wed, Aug 29, 2012 at 4:06 PM, Robert G. Jakabosky bo...@sharedrealm.com wrote:
On Wednesday 29, Julie Anderson wrote:
So questions remain:
1) What does ZeroMQ do under the hood that justifies so many extra clock
On Wednesday 29, Julie Anderson wrote:
Nothing is perfect. I am just trying to understand the ZeroMQ approach and its overhead on top of the raw network latency. Maybe a single-threaded ZeroMQ implementation using non-blocking I/O in the future?
You might be interested in xsnano [1] which is an
On 29 August 2012 17:46, Julie Anderson julie.anderson...@gmail.com wrote:
ZeroMQ is using background IO threads to do the sending/receiving. So the extra latency is due to passing the messages between the application thread and the IO thread.
This kind of thread architecture sucks for
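To get a feel for the inter-thread handoff cost being described, one can time a ping-pong over an inproc PAIR between two threads. This is only an approximation; internally libzmq hands messages to its IO threads over lock-free pipes, not PAIR sockets. The sketch assumes libzmq 4.x and pthreads:

/* Rough sketch (not ZeroMQ internals): time two thread-to-thread
 * hops over an inproc PAIR to approximate handoff overhead. */
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <zmq.h>

static void *ctx;

static void *echo_thread(void *arg)
{
    (void)arg;
    void *s = zmq_socket(ctx, ZMQ_PAIR);
    zmq_connect(s, "inproc://handoff");
    char buf[16];
    int n = zmq_recv(s, buf, sizeof buf, 0);
    zmq_send(s, buf, n, 0);                      /* bounce it back */
    zmq_close(s);
    return NULL;
}

int main(void)
{
    ctx = zmq_ctx_new();
    void *s = zmq_socket(ctx, ZMQ_PAIR);
    zmq_bind(s, "inproc://handoff");             /* bind before connect */

    pthread_t tid;
    pthread_create(&tid, NULL, echo_thread, NULL);

    struct timespec t0, t1;
    char buf[16];
    clock_gettime(CLOCK_MONOTONIC, &t0);
    zmq_send(s, "ping", 4, 0);
    zmq_recv(s, buf, sizeof buf, 0);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    printf("two thread hops: %ld nanos\n",
           (long)((t1.tv_sec - t0.tv_sec) * 1000000000L
                + (t1.tv_nsec - t0.tv_nsec)));

    pthread_join(tid, NULL);
    zmq_close(s);
    zmq_ctx_term(ctx);
    return 0;
}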
On Wed, Aug 29, 2012 at 3:45 PM, Julie Anderson
julie.anderson...@gmail.com wrote:
I understand your frustration.
It's abundantly clear to me that, whatever expertise you have on the absolute fastest and most trivial way to send data between two programs, you do not understand Chuck's
If the ipc transport is used on unix, can I have one bind and multiple
connects, similar to how I would with the tcp transport? For some reason I
have this idea that unix shared pipes can only be 1 to 1, but I am not totally
sure on that.
Justin
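For what it's worth, ipc:// is built on a listening unix domain stream socket, so one bind can accept many connects, exactly like tcp://. A minimal sketch (libzmq 4.x assumed; the socket path is made up):

/* Sketch: one ipc bind, multiple connects. Path is hypothetical. */
#include <zmq.h>

int main(void)
{
    void *ctx = zmq_ctx_new();

    void *pull = zmq_socket(ctx, ZMQ_PULL);
    zmq_bind(pull, "ipc:///tmp/events.sock");   /* one bind */

    void *a = zmq_socket(ctx, ZMQ_PUSH);
    void *b = zmq_socket(ctx, ZMQ_PUSH);
    zmq_connect(a, "ipc:///tmp/events.sock");   /* ...multiple */
    zmq_connect(b, "ipc:///tmp/events.sock");   /* connects */

    zmq_send(a, "from-a", 6, 0);
    zmq_send(b, "from-b", 6, 0);

    char buf[16];
    zmq_recv(pull, buf, sizeof buf, 0);         /* receives from both */
    zmq_recv(pull, buf, sizeof buf, 0);

    zmq_close(a); zmq_close(b); zmq_close(pull);
    zmq_ctx_term(ctx);
    return 0;
}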
On Aug 29, 2012, at 6:52 PM, Justin Karneges wrote:
If the ipc transport is used on unix, can I have one bind and multiple
connects, similar to how I would with the tcp transport? For some reason I
have this idea that unix shared pipes can only be 1 to 1, but I am not
totally
sure on
Not sure I want to step into the middle of this, but here we go. I'd be
really hesitant to base any evaluation of ZMQ's suitability for a highly
scalable low latency application on local_lat/remote_lat. They appear to
be single-threaded synchronous tests, which seem very unlike the kinds of
See my comments below:
They appear to be single-threaded synchronous tests, which seem very unlike the kinds of applications being discussed (esp. if you're using NIO). More realistic is a network connection getting slammed with lots of concurrent sends and recvs, which is where lots of
On Wednesday 29, Stuart Brandt wrote:
Not sure I want to step into the middle of this, but here we go. I'd be
really hesitant to base any evaluation of ZMQ's suitability for a highly
scalable low latency application on local_lat/remote_lat. They appear to
be single-threaded synchronous tests
2) Do people agree that 11 microseconds is just too much?
Nope. Once you go cross-machine, those 11 microseconds become irrelevant.
The fastest exchange I'm aware of for high-frequency trading is 80 microseconds (+ transport costs) best case, so who are you talking to, and if you're not doing
On Thu, Aug 30, 2012 at 10:35 AM, Julie Anderson
julie.anderson...@gmail.com wrote:
See my comments below:
They appear to be single-threaded synchronous tests, which seem very unlike the kinds of applications being discussed (esp. if you're using NIO). More realistic is a network
Inline
On 8/29/2012 10:37 PM, Robert G. Jakabosky wrote:
echoloop*.c is testing throughput, not latency, since it sends all messages at once instead of sending one message and waiting for it to return before sending the next. Try comparing it with local_thr/remote_thr.
Echoloopcli
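The distinction is that a latency test sends one message and waits for it to come back before sending the next, so nothing ever queues behind anything else. Roughly, the client side looks like this sketch (libzmq 4.x REQ/REP assumed, endpoint made up, with an echo service such as local_lat on the other end):

/* Sketch: ping-pong latency loop, one message in flight at a time. */
#include <stdio.h>
#include <time.h>
#include <zmq.h>

int main(void)
{
    void *ctx = zmq_ctx_new();
    void *req = zmq_socket(ctx, ZMQ_REQ);
    zmq_connect(req, "tcp://127.0.0.1:5555");    /* hypothetical */

    const int roundtrips = 10000;
    char buf[13] = {0};                          /* 13-byte message */
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < roundtrips; i++) {
        zmq_send(req, buf, sizeof buf, 0);       /* one message... */
        zmq_recv(req, buf, sizeof buf, 0);       /* ...wait for it */
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double elapsed_us = (t1.tv_sec - t0.tv_sec) * 1e6
                      + (t1.tv_nsec - t0.tv_nsec) / 1e3;
    printf("one-way latency: %.3f us\n",
           elapsed_us / roundtrips / 2);         /* one-way = RTT / 2 */

    zmq_close(req);
    zmq_ctx_term(ctx);
    return 0;
}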
On 29.08.2012 at 22:54, Ian Barber wrote:
On Tue, Aug 28, 2012 at 2:16 PM, Ian Barber ian.bar...@gmail.com wrote:
Ah, I fixed a similar issue in master the other day, may well be the
same thing. I'll check and send a pull req when I get home.
Ian
That's all merged in now by the way, so
On Wednesday 29, Stuart Brandt wrote:
Inline
On 8/29/2012 10:37 PM, Robert G. Jakabosky wrote:
echoloop*.c is testing throughput, not latency, since it sends all messages at once instead of sending one message and waiting for it to return before sending the next. Try comparing it