On May 17, 2007, at 21:19, liusifan wrote:

bash$ java com/danga/MemCached/test/MemCachedThreadBench 500000 0 11211 1
Thread  start   runs    set time(ms)    get time(ms)
Main            500000  168924          149234
        ReqPerSecond    set - 2959      get - 3350


For comparison, I ran this test against my single-threaded Java client <http://bleu.west.spy.net/~dustin/projects/memcached/>:

        First, a quick control run (your test, reproduced as closely as I could):

Thread  start   runs    set time(ms)    get time(ms)
0       0       500000  197039          86337

Avg             500000  197039          86337

Total           500000  197039          86337
        ReqPerSecond    set - 2537      get - 5791

Main            500000  197085          86338
        ReqPerSecond    set - 2536      get - 5791
110.466u 83.273s 4:43.78 68.2%  0+0k 0+27io 0pf+0w


        And then my Java client (500000 0 11211 1):


Thread  start   runs    set time(ms)    get time(ms)
0       0       500000  18930           67679

Avg             500000  18930           67679

Total           500000  18930           67679
        ReqPerSecond    set - 26413     get - 7387

Main            500000  18932           67681
        ReqPerSecond    set - 26410     get - 7387
55.893u 32.035s 1:26.91 101.1%  0+0k 0+11io 0pf+0w



Note that the above run used async sets with a wait for flush after every 10,000 writes. If I synchronize the sets, it slows down a lot:

Thread  start   runs    set time(ms)    get time(ms)
0       0       500000  77528           70354

Avg             500000  77528           70354

Total           500000  77528           70354
        ReqPerSecond    set - 6449      get - 7106

Main            500000  77529           70355
        ReqPerSecond    set - 6449      get - 7106
82.571u 56.206s 2:28.17 93.6%   0+0k 0+17io 0pf+0w


Of course, unless you've got really high write rates, there's not much point in ever synchronizing the writes.
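
For reference, here's a rough sketch of the two write modes, assuming a client along the lines of net.spy.memcached.MemcachedClient whose set() returns a Future<Boolean> you can block on (the class name, keys, values, and expiration below are illustrative, not the benchmark's actual code):

import java.net.InetSocketAddress;
import java.util.concurrent.Future;

import net.spy.memcached.MemcachedClient;

public class SetModeSketch {
    public static void main(String[] args) throws Exception {
        MemcachedClient client =
            new MemcachedClient(new InetSocketAddress("localhost", 11211));
        int runs = 500000;

        // Async sets: queue each write and keep going.  Every 10,000 writes,
        // block on the most recent Future as a crude flush point so the
        // outstanding queue stays bounded.
        Future<Boolean> last = null;
        for (int i = 0; i < runs; i++) {
            last = client.set("key" + i, 3600, "value" + i);
            if (i % 10000 == 9999) {
                last.get();
            }
        }
        if (last != null) {
            last.get();
        }

        // Synchronized sets: block on every write before issuing the next
        // one.  Per the note above, this only really matters at very high
        // write rates.
        for (int i = 0; i < runs; i++) {
            client.set("key" + i, 3600, "value" + i).get();
        }

        client.shutdown();
    }
}

The async loop only blocks once per 10,000 writes, so the socket stays busy; the synchronized loop pays a full round trip on every set, which is what the last set of numbers shows.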

--
Dustin Sallings
