The results have just come out! Unfortunately, the patch does not overcome the
weak point of pool-of-threads on read-write workloads: the iterations took an
extremely long time to finish at high concurrency levels, and the numbers in
the last part of the result table can only be described as awful :(.

Configurations:
+--------+----------------------------------------+---------+----------+
| run_id |                          configuration |  branch | revision |
+--------+----------------------------------------+---------+----------+
|     11 |                 innodb_1000K_readwrite |   trunk |     1097 |
|     12 | innodb_1000K_readwrite_pool_of_threads |   trunk |     1097 |
|     13 | innodb_1000K_readwrite_pool_of_threads | patched |     1098 |
+--------+----------------------------------------+---------+----------+
where the patched branch refers to lp:~mengbiping/rewriting-pool-of-threads.

Platform:
Same as above.


Results:
   mysql> select run_id, concurrency, AVG(tps),
   AVG(read_write_req_per_second), AVG(avg_req_latency_ms)
   from sysbench_run_iterations
   where (run_id = 11 or run_id = 12 or run_id = 13)
   group by run_id, concurrency order by concurrency, run_id;
+--------+-------------+------------+--------------------------------+-------------------------+
| run_id | concurrency | AVG(tps)   | AVG(read_write_req_per_second) | AVG(avg_req_latency_ms) |
+--------+-------------+------------+--------------------------------+-------------------------+
|     11 |           2 | 134.226667 |                    4563.733333 |               14.876667 |
|     12 |           2 | 122.920000 |                    4179.390000 |               16.246667 |
|     13 |           2 | 122.116667 |                    4152.050000 |               16.353333 |
|     11 |           4 | 134.976667 |                    4589.166667 |               29.613333 |
|     12 |           4 | 122.156667 |                    4153.316667 |               32.720000 |
|     13 |           4 | 123.640000 |                    4203.976667 |               32.330000 |
|     11 |           8 | 136.430000 |                    4638.733333 |               58.596667 |
|     12 |           8 | 123.666667 |                    4204.696667 |               64.653333 |
|     13 |           8 | 123.256667 |                    4190.640000 |               64.880000 |
|     11 |          16 | 135.653333 |                    4612.316667 |              117.793333 |
|     12 |          16 | 111.990000 |                    3807.880000 |              142.813333 |
|     13 |          16 | 112.250000 |                    3817.026667 |              142.380000 |
|     11 |          32 | 134.296667 |                    4566.656667 |              237.710000 |
|     12 |          32 | 110.586667 |                    3759.930000 |              288.950000 |
|     13 |          32 | 111.640000 |                    3795.970000 |              286.343333 |
|     11 |          64 | 134.733333 |                    4581.350000 |              473.136667 |
|     12 |          64 | 109.463333 |                    3722.243333 |              583.426667 |
|     13 |          64 | 111.943333 |                    3806.363333 |              569.306667 |
|     11 |         128 | 133.226667 |                    4532.243333 |              953.073333 |
|     12 |         128 | 106.080000 |                    3608.616667 |             1201.373333 |
|     13 |         128 | 109.263333 |                    3715.666667 |             1166.730000 |
|     11 |         256 | 125.266667 |                    4261.603333 |             2019.030000 |
|     12 |         256 |   6.943333 |                     240.493333 |            36769.383333 |
|     13 |         256 |   7.043333 |                     243.213333 |            36260.420000 |
|     11 |         512 | 109.240000 |                    3722.010000 |             4524.813333 |
|     12 |         512 |   1.966667 |                      71.743333 |           259482.830000 |
|     13 |         512 |   1.860000 |                      68.143333 |           271795.930000 |
|     11 |        1024 |  90.330000 |                    3090.540000 |            10890.640000 |
|     12 |        1024 |   0.670000 |                      27.676667 |          1367618.170000 |
|     13 |        1024 |   0.696667 |                      28.483333 |          1329004.933333 |
+--------+-------------+------------+--------------------------------+-------------------------+
   30 rows in set (0.00 sec)
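To put a number on how badly pool-of-threads collapses past 256 connections, here is a small illustrative Python snippet (not part of the benchmark harness) that compares trunk (run 11) against unpatched pool-of-threads (run 12), using the AVG(tps) values from the table above:

```python
# AVG(tps) at the high concurrency levels, copied from the result table.
trunk = {256: 125.266667, 512: 109.240000, 1024: 90.330000}  # run 11
pool = {256: 6.943333, 512: 1.966667, 1024: 0.670000}        # run 12

# How many times higher trunk throughput is than pool-of-threads.
for c in sorted(trunk):
    ratio = trunk[c] / pool[c]
    print(f"concurrency {c:4d}: trunk tps is {ratio:.0f}x higher")
```

The gap grows from roughly 18x at 256 connections to well over 100x at 1024; the patched run (13) stays within a few percent of run 12 at every level, so the same ratios apply to it.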


On Thu, Jul 30, 2009 at 3:07 PM, Eric Day <[email protected]> wrote:

> Great work Biping!
>
> I'm very interested to see how things scale out with read-write
> workloads and mixed workloads. I can also help you run some of these
> tests on larger 16 core machines if you don't have access to any.


Ah, great! That will be very helpful! Thanks a lot for the offer! :)
One point that might be important: as I mentioned above, a 16-core machine may
be unfair to pool-of-threads scheduling, since only 8 of the 16 cores will be
used.
The size of the pool is currently hard-coded as follows:

   static DRIZZLE_SYSVAR_UINT(size, size,
                              PLUGIN_VAR_RQCMDARG,
                              N_("Size of Pool."),
                              NULL, NULL, 8, 1, 1024, 0);

I strongly suggest making the pool size settable when drizzled is started
(e.g. by providing an option like --pool-size=XXX) so that it fits different
machines. What do you think?


>
> Thanks!
> -Eric
>
> On Thu, Jul 30, 2009 at 09:49:45AM +0800, Biping MENG wrote:
> >    On Thu, Jul 30, 2009 at 1:03 AM, Jay Pipes <[email protected]> wrote:
> >
> >      Biping MENG wrote:
> >
> >        Unsolved problems:
> >          Actually I ran the "readwrite" series of sysbenches as well. But
> >        encountered errors at concurrency level of 2048. So the result is
> not
> >        complete for "readwrite".
> >
> >      Don't run readwrites on InnoDB with concurrency levels greater than
> >      1024.  There is a hard limit of 1024 transactions in the concurrent
> lock
> >      graph currently and deadlocks and undefined behaviour are known to
> occur
> >      when concurrency exceeds 1024.  This is a known issue that Oracle
> (or
> >      maybe Percona?) has developed a beta solution for and hopefully will
> >      find its way into the next plugin version, at which point Drizzle
> will
> >      of course pull in the improvements.
> >
> >    ah, I guess it is the famous InnoDB 1024 bug?
> >    Thanks for your explanation about it!
> >    I'm just making up this part of readwrite test. And will be posting it
> to
> >    this thread right after I got results:)
> >
> >
> >      Cheers!
> >      jay
> >
> >    --
> >    Cheers.
> >
> >    Biping MENG
>



-- 
Cheers.

Biping MENG
_______________________________________________
Mailing list: https://launchpad.net/~drizzle-discuss
Post to     : [email protected]
Unsubscribe : https://launchpad.net/~drizzle-discuss
More help   : https://help.launchpad.net/ListHelp
