2012/8/29 Julie Anderson julie.anderson...@gmail.com:
I understand your frustration. It's not that I don't want to post the code here; I am legally unable to. If you have a boss or employer you can understand that. :) I will try to come up with a simple version that does the same thing.
As I said in the text you quoted: I will try to come up with a simple
version to do the same thing.
But Stuart did that for me in C. My thanks to him.
I am not complaining about anything... just trying to understand why the extra latency is necessary. There are already some very good answers.
On Thu, Aug 30, 2012 at 12:13 AM, Julie Anderson
julie.anderson...@gmail.com wrote:
Just tested ZeroMQ and Java NIO on the same machine.
You're comparing apples to a factory that can process apples into
juice at the rate of millions a second.
For that extra latency in 0MQ you get things like
Just tested ZeroMQ and Java NIO on the same machine.
The results:
- ZeroMQ:
message size: 13 [B]
roundtrip count: 10
average latency: 19.620 [us] == ONE-WAY LATENCY
- Java NIO Selector (epoll):
Average RTT (round-trip time) latency of a 13-byte message: 15.342 [us]
Min Time:
As far as I can see, you haven't included your test methodology or your test code. Without any information about your test I can't form an opinion on your results. Maybe I missed an earlier email where you included information about your test environment and methodology?
Brian
New numbers (fun!). Firstly, to make sure I was comparing apples with apples, I modified my tests to compute the one-way trip instead of the round trip. I can't paste code, but I am simply using Java NIO (non-blocking I/O) optimized with busy spinning to send and receive TCP data. This is standard
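The busy-spinning NIO setup Julie describes might look roughly like this. This is a hypothetical reconstruction, not her actual test: the class name, framing, and iteration count are all my assumptions. No Selector is used; non-blocking channels are polled in a tight loop, trading CPU for latency.

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.ServerSocketChannel;
import java.nio.channels.SocketChannel;

// Sketch: TCP ping-pong over loopback with busy-spin reads on
// non-blocking channels (no Selector), measuring average RTT.
public class BusySpinPingPong {

    static final int MSG_SIZE = 13; // matches the 13-byte message in the thread

    // Echo side: spin until a full message arrives, then write it back.
    static void echoOnce(SocketChannel ch) throws IOException {
        ByteBuffer buf = ByteBuffer.allocateDirect(MSG_SIZE);
        while (buf.hasRemaining()) ch.read(buf);   // busy spin on read
        buf.flip();
        while (buf.hasRemaining()) ch.write(buf);  // echo back
    }

    // Client side: send one message, spin until the echo returns; RTT in nanos.
    static long roundTrip(SocketChannel ch, ByteBuffer msg) throws IOException {
        long t0 = System.nanoTime();
        msg.rewind();
        while (msg.hasRemaining()) ch.write(msg);
        ByteBuffer in = ByteBuffer.allocateDirect(MSG_SIZE);
        while (in.hasRemaining()) ch.read(in);     // busy spin on read
        return System.nanoTime() - t0;
    }

    public static long measure(int roundTrips) throws Exception {
        ServerSocketChannel srv = ServerSocketChannel.open();
        srv.bind(new InetSocketAddress("127.0.0.1", 0));
        int port = ((InetSocketAddress) srv.getLocalAddress()).getPort();

        Thread server = new Thread(() -> {
            try (SocketChannel ch = srv.accept()) {
                ch.configureBlocking(false);
                ch.socket().setTcpNoDelay(true);
                for (int i = 0; i < roundTrips; i++) echoOnce(ch);
            } catch (IOException e) { throw new RuntimeException(e); }
        });
        server.start();

        SocketChannel ch = SocketChannel.open(new InetSocketAddress("127.0.0.1", port));
        ch.configureBlocking(false);
        ch.socket().setTcpNoDelay(true);

        ByteBuffer msg = ByteBuffer.allocateDirect(MSG_SIZE);
        long total = 0;
        for (int i = 0; i < roundTrips; i++) {
            msg.clear();
            msg.put(new byte[MSG_SIZE]).flip();
            total += roundTrip(ch, msg);
        }
        server.join();
        ch.close();
        srv.close();
        return total / roundTrips; // average RTT in nanoseconds
    }

    public static void main(String[] args) throws Exception {
        System.out.println("avg RTT (ns): " + measure(10_000));
    }
}
```

Note that both ends burn a full core while spinning; that is the usual price of this style of low-latency test.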
On Behalf Of Julie Anderson
Sent: Wednesday, August 29, 2012 1:19 PM
To: ZeroMQ development list
Subject: Re: [zeromq-dev] Too much ZeroMQ overhead versus plain TCP Java NIO Epoll (with measurements)
Here are the UDP numbers for those interested. As one would expect, much better than TCP.
RTT (round-trip time):
Iterations: 1,000,000 | Avg Time: 10,373.9 ns | Min Time: 8,626 ns | Max Time: 136,269 ns | 75%: 10,186 ns | 90%: 10,253 ns | 99%: 10,327 ns | 99.999%: 10,372 ns
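A loopback UDP round trip can be measured along these lines. This is a sketch under my own assumptions (the thread's actual UDP test is not shown): blocking `DatagramChannel`s for brevity rather than busy spinning, and one message in flight at a time so the receive buffer never fills.

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.DatagramChannel;

// Sketch: average UDP round-trip time over loopback, one 13-byte
// datagram in flight, echoed back by a second thread.
public class UdpRtt {
    public static long avgRttNanos(int iterations) throws Exception {
        DatagramChannel server = DatagramChannel.open()
                .bind(new InetSocketAddress("127.0.0.1", 0));
        DatagramChannel client = DatagramChannel.open()
                .bind(new InetSocketAddress("127.0.0.1", 0));
        client.connect(server.getLocalAddress());
        server.connect(client.getLocalAddress());

        Thread echo = new Thread(() -> {
            try {
                ByteBuffer b = ByteBuffer.allocate(13);
                for (int i = 0; i < iterations; i++) {
                    b.clear();
                    server.read(b);   // blocking receive
                    b.flip();
                    server.write(b);  // echo back
                }
            } catch (Exception ignored) {}
        });
        echo.start();

        ByteBuffer msg = ByteBuffer.allocate(13); // 13-byte message as in the thread
        long total = 0;
        for (int i = 0; i < iterations; i++) {
            msg.clear();
            long t0 = System.nanoTime();
            client.write(msg);  // send
            msg.clear();
            client.read(msg);   // block until the echo returns
            total += System.nanoTime() - t0;
        }
        echo.join();
        client.close();
        server.close();
        return total / iterations;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("avg UDP RTT over loopback (ns): " + avgRttNanos(10_000));
    }
}
```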
On Wednesday 29, Julie Anderson wrote:
So questions remain:
1) What does ZeroMQ do under the hood that justifies so many extra clock cycles? (I am really curious to know)
ZeroMQ is using background I/O threads to do the sending/receiving. So the extra latency is due to passing the messages between the application thread and the I/O thread.
On Wednesday 29, Julie Anderson wrote:
Nothing is perfect. I am just trying to understand ZeroMQ's approach and its overhead on top of the raw network latency. Maybe a single-threaded ZeroMQ implementation in the future using non-blocking I/O?
You might be interested in xsnano [1] which is an
On 29 August 2012 17:46, Julie Anderson julie.anderson...@gmail.com wrote:
ZeroMQ is using background I/O threads to do the sending/receiving. So the extra latency is due to passing the messages between the application thread and the I/O thread.
This kind of thread architecture sucks for
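Robert's point about the application-thread/I/O-thread hand-off can be illustrated with a toy measurement. This is hypothetical code, not ZeroMQ's internals: ZeroMQ passes messages over lock-free pipes rather than a `BlockingQueue`, so its real hand-off is cheaper, but the shape of the overhead (wake another thread per message, versus doing the work in the calling thread) is the same.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch: per-operation cost of handing work to a second thread via a
// queue, compared with doing trivial work directly on the calling thread.
public class HandoffCost {

    // Direct: the caller does the work itself (stand-in for an inline send path).
    static long direct(int n) {
        long t0 = System.nanoTime();
        long sink = 0;
        for (int i = 0; i < n; i++) sink += i;
        if (sink == 42) System.out.println();  // keep the loop from being eliminated
        return (System.nanoTime() - t0) / n;
    }

    // Hand-off: each "message" crosses a queue to an I/O-style thread and an
    // acknowledgement crosses back, like an app thread <-> I/O thread pair.
    static long handoff(int n) throws InterruptedException {
        BlockingQueue<Integer> toIo = new ArrayBlockingQueue<>(1);
        BlockingQueue<Integer> fromIo = new ArrayBlockingQueue<>(1);
        Thread io = new Thread(() -> {
            try {
                for (int i = 0; i < n; i++) fromIo.put(toIo.take());
            } catch (InterruptedException ignored) {}
        });
        io.start();
        long t0 = System.nanoTime();
        for (int i = 0; i < n; i++) {
            toIo.put(i);    // hand the message to the "I/O thread"
            fromIo.take();  // wait for its acknowledgement
        }
        long perOp = (System.nanoTime() - t0) / n;
        io.join();
        return perOp;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("direct   ns/op: " + direct(1_000_000));
        System.out.println("hand-off ns/op: " + handoff(100_000));
    }
}
```

On typical hardware the hand-off path costs microseconds per operation (thread wake-ups and cache traffic) while the direct path costs nanoseconds, which is roughly the gap the thread is arguing about.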
On Wed, Aug 29, 2012 at 3:45 PM, Julie Anderson
julie.anderson...@gmail.com wrote:
I understand your frustration.
It's abundantly clear to me that whatever expertise you have on the
absolute fastest and most trivial way to send data between two
programs, you do not understand Chuck's
Not sure I want to step into the middle of this, but here we go. I'd be really hesitant to base any evaluation of ZMQ's suitability for a highly scalable, low-latency application on local_lat/remote_lat. They appear to be single-threaded synchronous tests, which seems very unlike the kinds of
See my comments below:
They appear to be single-threaded synchronous tests, which seems very unlike the kinds of applications being discussed (esp. if you're using NIO). More realistic is a network connection getting slammed with lots of concurrent sends and recvs, which is where lots of
2) Do people agree that 11 microseconds are just too much?
Nope. Once you go cross-machine, those 11 microseconds become irrelevant. The fastest exchange I'm aware of for high-frequency trading is 80 microseconds (+ transport costs) best case, so who are you talking to, and if you're not doing
Inline
On 8/29/2012 10:37 PM, Robert G. Jakabosky wrote:
echoloop*.c is testing throughput, not latency, since it sends all messages at once instead of sending one message and waiting for it to return before sending the next message. Try comparing it with local_thr/remote_thr.
Echoloopcli
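The distinction Robert draws between the two measurement styles can be sketched like this. It is a toy stand-in under my own assumptions: a pair of queues replaces the socket, and neither echoloop*.c nor local_lat/local_thr is reproduced. The ping-pong loop keeps one message in flight (latency); the pipelined loop fires everything up front (throughput).

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Sketch: ping-pong (latency) versus pipelined (throughput) measurement
// over an in-process echo "wire".
public class LatVsThr {
    static final int N = 50_000;

    public static long[] run() throws InterruptedException {
        BlockingQueue<byte[]> out = new ArrayBlockingQueue<>(N);
        BlockingQueue<byte[]> back = new ArrayBlockingQueue<>(N);
        Thread echo = new Thread(() -> {
            try {
                for (int i = 0; i < 2 * N; i++) back.put(out.take());
            } catch (InterruptedException ignored) {}
        });
        echo.start();
        byte[] msg = new byte[13];

        // Latency style: wait for each echo before sending the next message.
        long t0 = System.nanoTime();
        for (int i = 0; i < N; i++) {
            out.put(msg);
            back.take();
        }
        long latNs = (System.nanoTime() - t0) / N;

        // Throughput style: fire everything, then drain the echoes.
        t0 = System.nanoTime();
        for (int i = 0; i < N; i++) out.put(msg);
        for (int i = 0; i < N; i++) back.take();
        long thrNs = (System.nanoTime() - t0) / N;

        echo.join();
        return new long[] { latNs, thrNs }; // per-message cost in each style
    }

    public static void main(String[] args) throws Exception {
        long[] r = run();
        System.out.println("ping-pong ns/msg: " + r[0] + ", pipelined ns/msg: " + r[1]);
    }
}
```

The pipelined figure is typically far lower per message because the round-trip wait is amortized away, which is exactly why a throughput test cannot stand in for a latency test.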