On 05/06/15 14:42, Carmelo Ponti (CSCS) wrote:
On Wed, 2015-05-06 at 14:11 +0200, LEIBOVICI Thomas wrote:
DB_apply is currently the longest stage, at 0.4 ms per operation.
The other waiting operations you see are stacked in the pipeline, waiting
for a worker thread to process them (the queue length is controlled by the
max_pending_operations setting in your config; see the snippet below).
It is normal that many of them sit in the first stage when the changelog is full.
The read speed is a good indicator of how fast it goes.
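For reference, that queue depth is set in the EntryProcessor block of the
configuration; a minimal sketch (the value below is only an example, keep
whatever you currently use):

    EntryProcessor
    {
        # maximum number of changelog records queued in the pipeline at once
        max_pending_operations = 100000;
    }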
Did you get better results with the latest code and accounting disabled?
I cannot say for sure yet. After installing the latest code I disabled
accounting as you suggested and ran a small test before applying the other
tuning. The results were the following:
2015/05/06 13:06:35 robinhood@daintrbh01[11038/1] STATS | read speed = 2565.90 record/sec
2015/05/06 13:07:35 robinhood@daintrbh01[11038/1] STATS | read speed = 1209.03 record/sec
2015/05/06 13:08:35 robinhood@daintrbh01[11038/1] STATS | read speed = 1156.43 record/sec
2015/05/06 13:09:35 robinhood@daintrbh01[11038/1] STATS | read speed = 1229.23 record/sec
2015/05/06 13:10:35 robinhood@daintrbh01[11038/1] STATS | read speed = 1483.60 record/sec
2015/05/06 13:11:35 robinhood@daintrbh01[11038/1] STATS | read speed = 1351.02 record/sec
2015/05/06 13:12:35 robinhood@daintrbh01[11038/1] STATS | read speed = 1413.45 record/sec
2015/05/06 13:13:35 robinhood@daintrbh01[11038/1] STATS | read speed = 1410.87 record/sec
and DB_APPLY went down from 0.40 to ~0.25 ms/op.
Good sign.
Another good sign I noticed is that the CPU load of mysqld increased from
less than 60-80% to ~500% or more. In general I think everything is
better than before,
Yes, good sign.
but I cannot yet reach the performance I need (ca. 5000 records/sec).
Now I have:
Stage             |  Wait | Curr | Done |   Total | ms/op |
 0: GET_FID       |     0 |    0 |    0 |       0 |  0.00 |
 1: GET_INFO_DB   | 99999 |    1 |    0 | 2082860 |  0.25 |
 2: GET_INFO_FS   |     0 |    0 |    0 |  753109 |  0.58 |
 3: REPORTING     |     0 |    0 |    0 |    3752 |  0.00 |
 4: PRE_APPLY     |     0 |    0 |    0 |  562469 |  0.00 |
 5: DB_APPLY      |     0 |    0 |    0 |  562469 |  0.23 | 0.06% batched (avg batch size: 5.2)
 6: CHGLOG_CLR    |     0 |    0 |    0 | 1559175 |  0.03 |
 7: RM_OLD_ENTRIES|     0 |    0 |    0 |       0 |  0.00 |
Now the slowest stage looks to be the filesystem operation (GET_INFO_FS
averages 0.58 ms).
Maybe you will get better performance by increasing the number of
simultaneous filesystem operations:
try increasing EntryProcessor::nb_threads to 12, 16...
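As a rough estimate, at 0.58 ms per FS lookup a single busy worker handles
about 1700 lookups per second (1 / 0.58 ms), so running more of them in
parallel should raise the aggregate rate. A sketch of the change, again in
the EntryProcessor block (everything else stays as it is):

    EntryProcessor
    {
        # more worker threads => more simultaneous GET_INFO_FS calls;
        # try 12 or 16 and watch GET_INFO_FS ms/op and the read speed
        nb_threads = 16;
    }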
What do you think about "0.06% batched (avg batch size: 5.2)"? Should I
tune max_batch_size based on this?
No, the batch size is low because the rate of operations reaching this
stage is lower than the speed of the DB operation itself.
If we make FS operations faster, you'll get a better ratio.
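Very roughly, from the stats above:

    562469 / 2082860       ~ 27% of records reach DB_APPLY
    0.27 x ~1400 rec/sec   ~ 380 op/sec arriving at that stage
    1 / 0.23 ms            ~ 4300 op/sec the DB stage can absorb

so the queue in front of DB_APPLY stays nearly empty and there is rarely
anything to group into a batch.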
Regards,
Carmelo