Eric, Jay:
Hi! I've just run sysbench on the newly implemented patch against the
trunk, and I have some good news :)

Configurations:
+--------+------------------------------------------+---------+----------+
| run_id |                           configuration  |  branch | revision |
+--------+------------------------------------------+---------+----------+
|      5 |                 innodb_1000K_readonly_ex |   trunk |     1097 |
|      6 | innodb_1000K_readonly_ex_pool_of_threads |   trunk |     1097 |
|      7 | innodb_1000K_readonly_ex_pool_of_threads | patched |     1098 |
+--------+------------------------------------------+---------+----------+
where the patched branch refers to lp:~mengbiping/rewriting-pool-of-threads.

Notes:
   innodb_1000K_readonly_ex is the same as the original
innodb_1000K_readonly configuration except that the concurrency range is
extended to 2 ~ 4096; innodb_1000K_readonly_ex_pool_of_threads is the same
as innodb_1000K_readonly_ex except that the scheduler option is set to
pool_of_threads.
   The trunk was most recently merged into branch
lp:~mengbiping/rewriting-pool-of-threads at revision 1097, so the branch is
up to date with trunk as of r1097; that is why I chose the trunk at r1097 as
the comparison baseline. In other words, the only difference between the two
branches is in the file plug/pool_of_threads.cc .
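   (In case anyone wants to verify that, here is a minimal sketch, assuming
bzr is installed and that lp:drizzle is the trunk; the diff should touch only
that one file:)

   bzr branch -r 1097 lp:drizzle trunk-r1097
   bzr branch lp:~mengbiping/rewriting-pool-of-threads patched
   # Compare the two trees; expect differences only in plug/pool_of_threads.cc.
   diff -ru --exclude=.bzr trunk-r1097 patched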

Platform:
   CPU: Athlon 64 3000+
   RAM: DDR2 1....@400MHz, dual-channel
   OS: Ubuntu-9.04-desktop-i386
   My machine is a little behind the times, so sysbench runs repeated on more
powerful hardware, especially multi-core platforms, are strongly welcome :).
However, the iteration at concurrency level 4096 requires at least 16GB of
RAM if the stack size on your system is 8192 kbytes. Shrink the stack size
with "ulimit -s NEW_STACK_SIZE_VALUE" before running sysbench if you do not
have enough RAM for the high-concurrency iterations (which is what I did to
make it fit in 1.5GB of RAM :)).
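   For example, a minimal sketch of what I mean (the 512 KB value is only an
illustration; stack memory grows roughly as threads x stack size, so pick a
value large enough for your workload):

   # Illustrative value: 512 KB per thread instead of the 8192 KB default.
   ulimit -s 512
   # ...then start drizzled and sysbench from this same shell so that both
   # inherit the reduced limit.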

Results:
   After the sysbench runs finished, I queried the results:
   mysql> select run_id, concurrency, AVG(tps),
       ->     AVG(read_write_req_per_second), AVG(avg_req_latency_ms)
       ->     from sysbench_run_iterations
       ->     where (run_id = 5 or run_id = 6 or run_id = 7)
       ->     group by run_id, concurrency
       ->     order by concurrency, run_id;
+--------+-------------+------------+--------------------------------+-------------------------+
| run_id | concurrency | AVG(tps)   | AVG(read_write_req_per_second) | AVG(avg_req_latency_ms) |
+--------+-------------+------------+--------------------------------+-------------------------+
|      5 |           2 | 168.986667 |                    4731.570000 |               11.813333 |
|      6 |           2 | 152.760000 |                    4277.240000 |               13.070000 |
|      7 |           2 | 153.980000 |                    4311.376667 |               12.963333 |
|      5 |           4 | 168.986667 |                    4731.620000 |               23.646667 |
|      6 |           4 | 152.113333 |                    4259.206667 |               26.270000 |
|      7 |           4 | 153.580000 |                    4300.256667 |               26.023333 |
|      5 |           8 | 167.196667 |                    4681.453333 |               47.803333 |
|      6 |           8 | 151.556667 |                    4243.513333 |               52.756667 |
|      7 |           8 | 153.143333 |                    4287.940000 |               52.206667 |
|      5 |          16 | 168.596667 |                    4720.756667 |               94.770000 |
|      6 |          16 | 140.063333 |                    3921.793333 |              114.153333 |
|      7 |          16 | 142.863333 |                    4000.233333 |              111.923333 |
|      5 |          32 | 164.643333 |                    4609.986667 |              193.840000 |
|      6 |          32 | 137.673333 |                    3854.913333 |              232.183333 |
|      7 |          32 | 141.813333 |                    3970.773333 |              225.396667 |
|      5 |          64 | 163.190000 |                    4569.310000 |              390.270000 |
|      6 |          64 | 137.490000 |                    3849.720000 |              464.530000 |
|      7 |          64 | 142.230000 |                    3982.346667 |              449.026667 |
|      5 |         128 | 156.630000 |                    4385.590000 |              810.570000 |
|      6 |         128 | 136.793333 |                    3830.240000 |              931.723333 |
|      7 |         128 | 139.423333 |                    3903.856667 |              914.216667 |
|      5 |         256 | 145.243333 |                    4066.860000 |             1728.346667 |
|      6 |         256 | 136.320000 |                    3816.950000 |             1864.210000 |
|      7 |         256 | 138.243333 |                    3870.910000 |             1838.416667 |
|      5 |         512 | 121.763333 |                    3409.396667 |             4061.426667 |
|      6 |         512 | 133.313333 |                    3732.836667 |             3811.883333 |
|      7 |         512 | 140.636667 |                    3937.760000 |             3612.410000 |
|      5 |        1024 | 109.216667 |                    3058.136667 |             8608.133333 |
|      6 |        1024 | 134.336667 |                    3761.523333 |             7505.293333 |
|      7 |        1024 | 140.370000 |                    3930.376667 |             7257.756667 |
|      5 |        2048 |  73.676667 |                    2062.883333 |            23895.490000 |
|      6 |        2048 | 122.756667 |                    3437.200000 |            16588.006667 |
|      7 |        2048 | 140.710000 |                    3939.933333 |            14483.366667 |
|      5 |        4096 |  63.123333 |                    1767.510000 |            58600.943333 |
|      6 |        4096 | 105.386667 |                    2950.810000 |            38585.280000 |
|      7 |        4096 | 135.816667 |                    3802.876667 |            29895.406667 |
+--------+-------------+------------+--------------------------------+-------------------------+
36 rows in set (0.00 sec)

Conclusions:
   Because the CPU has only a single core, throughput does not scale up with
increasing concurrency (the throughput peak is reached early). This should
not be the case on multi-core platforms.
   Even so, the results show that the patched pool_of_threads plugin (run_id
= 7) delivers more stable performance, especially at high concurrency
levels :).

Unsolved problems:
   I ran the "readwrite" series of sysbench tests as well, but encountered
errors at concurrency level 2048, so the "readwrite" results are incomplete.
   Currently the size of the pool of threads is fixed at 8 by default
(benchmarks on platforms with more than 8 cores should therefore produce
nearly the same results as on an 8-core platform, since cores beyond the
8th would idle most of the time). I think the size should be settable when
starting drizzled (e.g. by providing an option, say, --pool-size=XXX),
because the appropriate size is tightly coupled to the number of CPU cores
available, which varies from machine to machine. Thoughts?
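   For illustration only (--pool-size is the option I am proposing, not an
existing flag, and getconf is just one portable way to count online cores):

   CORES=$(getconf _NPROCESSORS_ONLN)   # number of online CPU cores
   drizzled --pool-size="$CORES"        # size the pool to match the cores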


--
Cheers.

Biping MENG