Possibly it is a time-measurement problem. I have used zmq::clock_t::now_us, which relies on gettimeofday, and gettimeofday is known to be a poor choice for measuring elapsed time: it is not monotonic (http://blog.habets.pp.se/2010/09/gettimeofday-should-never-be-used-to-measure-time), and depending on the platform its resolution may be milliseconds rather than microseconds (http://stackoverflow.com/questions/7760951/gettimeofday-c-inconsistency).

The author of the first post suggests using clock_gettime(CLOCK_MONOTONIC, ...) instead, and provides a portable monotonic-clock library here: https://github.com/ThomasHabets/monotonic_clock.
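
A minimal sketch of what that would look like, assuming a POSIX platform
where clock_gettime is available (the helper name is mine, this is not
libzmq code):

    #include <time.h>
    #include <stdint.h>
    #include <stdio.h>

    //  Monotonic microsecond timestamp: immune to NTP and wall-clock
    //  jumps, unlike gettimeofday.
    static uint64_t now_monotonic_us (void)
    {
        struct timespec ts;
        clock_gettime (CLOCK_MONOTONIC, &ts);
        return (uint64_t) ts.tv_sec * 1000000 + (uint64_t) ts.tv_nsec / 1000;
    }

    int main (void)
    {
        uint64_t start = now_monotonic_us ();
        //  ... run the exchange phase under test ...
        printf ("duration: %llu us\n",
                (unsigned long long) (now_monotonic_us () - start));
        return 0;
    }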

I doubt this is the cause of my problem, since my measurements are around one minute long, but I am going to try it.

On 18/02/2014 17:40, Lindley French wrote:
Is it possible the CPU isn't the bottleneck here?


On Tue, Feb 18, 2014 at 7:26 AM, Laurent Alebarde <[email protected]> wrote:

    Hi Devs,

    Here are some tests on an 8-core CPU:
    500 client sockets sending 100 requests each, with CURVE.
    The measured duration covers only the exchange phase (not the
    preparation: socket creation, bind, connect).
    Client sockets are polled, and a new request is sent as soon as
    the previous reply is back.
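
    A rough sketch of that client loop, assuming DEALER client sockets,
    a placeholder endpoint, and a one-frame request; the CURVE setup and
    the server side are omitted (this is not the actual test code):

        #include <zmq.h>
        #include <vector>

        int main (void)
        {
            const int n_clients = 500, n_requests = 100;
            void *ctx = zmq_ctx_new ();
            std::vector<void *> clients (n_clients);
            std::vector<zmq_pollitem_t> items (n_clients);
            std::vector<int> pending (n_clients, n_requests);
            for (int i = 0; i < n_clients; i++) {
                clients [i] = zmq_socket (ctx, ZMQ_DEALER);
                zmq_connect (clients [i], "tcp://127.0.0.1:5555");
                items [i].socket = clients [i];
                items [i].events = ZMQ_POLLIN;
                zmq_send (clients [i], "ping", 4, 0);  //  prime first request
            }
            int live = n_clients;
            while (live > 0) {
                zmq_poll (&items [0], n_clients, -1);
                for (int i = 0; i < n_clients; i++) {
                    if (!(items [i].revents & ZMQ_POLLIN))
                        continue;
                    char buf [64];
                    zmq_recv (clients [i], buf, sizeof buf, 0);
                    if (--pending [i] > 0)
                        zmq_send (clients [i], "ping", 4, 0);
                    else {
                        items [i].events = 0;  //  this client is finished
                        live--;
                    }
                }
            }
            zmq_ctx_term (ctx);
            return 0;
        }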

    Same ctx for the client (1 thread, 500 sockets), the workers
    (6 threads, 200 sockets each), and the proxy (1 thread):

    I/O threads    Test duration (us)
    1              58,220,570
    2              62,541,507
    3              61,411,154
    4              58,612,643
    5              58,752,070
    6              53,754,941
    7              56,311,634
    8
    9
    10             58,363,975
    20             53,909,971    52,686,884    53,027,835
    40             56,675,445    54,128,019    54,111,137
    100            52,951,096    53,056,814    53,100,007
    1,000          53,791,720


    Separate ctx for the client, the workers, and the proxy (and thus
    a different number of I/O threads for each):

    I/O threads    Test duration (us)
    3/3/3        55,137,176    56,334,984    53,901,536

    All these figures are roughly the same. Apart from this test, my
    computer is quite idle; nothing CPU-intensive is running, and the
    CPU graph is flat before the test starts. When I run the test, I
    can see only one core significantly used, but I am not sure that
    graph is meaningful.

    Furthermore, in the libzmq tests, there is no performance test
    that demonstrates the ZMQ_IO_THREADS option works. There are only
    tests/test_ctx_options.cpp, which shows the option can be set, and
    tests/test_shutdown_stress.cpp, which sets 7 I/O threads but does
    not demonstrate any performance gain.

    This raises the question: has anyone used the ZMQ_IO_THREADS
    option successfully?
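
    For reference, the option in question is the one set with
    zmq_ctx_set before any socket is created, along these lines
    (a minimal sketch; the thread count of 4 is arbitrary):

        #include <zmq.h>
        #include <assert.h>

        int main (void)
        {
            void *ctx = zmq_ctx_new ();
            //  Takes effect only if set before the first zmq_socket ()
            //  call on this context; the default is 1 I/O thread.
            int rc = zmq_ctx_set (ctx, ZMQ_IO_THREADS, 4);
            assert (rc == 0);
            assert (zmq_ctx_get (ctx, ZMQ_IO_THREADS) == 4);
            zmq_ctx_term (ctx);
            return 0;
        }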

    All the tests were run with the release build from Eclipse CDT.
    When run directly in a console, I see a gain of about 30%, but
    that is still far from what is expected.

    Cheers,


    Laurent





_______________________________________________
zeromq-dev mailing list
[email protected]
http://lists.zeromq.org/mailman/listinfo/zeromq-dev
