Yes, it has to re-acquire the pg_lock today.
But between the journal write and initiating the ondisk ack, there is one context 
switch in the code path. So I guess the pg_lock is not the only thing 
causing this 1 ms delay.
Not sure increasing the finisher threads will help in the pg_lock case, since the 
callbacks will be more or less serialized by this pg_lock.
But increasing finisher threads for the other context switches I was talking 
about (see queue_completion_thru) may help.
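To illustrate the serialization point: if every ondisk callback has to take the same lock, adding finisher threads adds parallel workers but not parallel progress. This is a toy Python sketch, not Ceph code; the names pg_lock and ondisk_callback just mirror the discussion above.

```python
import threading

# One shared lock, standing in for the per-PG lock in the discussion.
pg_lock = threading.Lock()

# Bookkeeping to observe how many callbacks ever run concurrently
# inside the critical section.
state = {"inside": 0, "max_inside": 0, "done": 0}
state_lock = threading.Lock()

def ondisk_callback():
    # Like op_commit, the callback runs with the "pg lock" held.
    with pg_lock:
        with state_lock:
            state["inside"] += 1
            state["max_inside"] = max(state["max_inside"], state["inside"])
        # Simulate a little work done under the lock.
        for _ in range(1000):
            pass
        with state_lock:
            state["inside"] -= 1
            state["done"] += 1

def finisher_thread(n_callbacks):
    for _ in range(n_callbacks):
        ondisk_callback()

# Four "finisher" threads, 50 callbacks each: 200 callbacks total,
# but never more than one inside the critical section at a time.
threads = [threading.Thread(target=finisher_thread, args=(50,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(state["done"], state["max_inside"])
```

However many threads you add, max_inside stays at 1: the lock, not the thread count, bounds throughput for callbacks on the same PG. More threads would only help for callbacks that do not contend on the same lock.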

Thanks & Regards
Somnath

-----Original Message-----
From: ceph-devel-ow...@vger.kernel.org 
[mailto:ceph-devel-ow...@vger.kernel.org] On Behalf Of Ding Dinghua
Sent: Tuesday, August 04, 2015 3:00 AM
To: ceph-devel@vger.kernel.org
Subject: More ondisk_finisher thread?

Hi:
   We are doing some Ceph performance tuning work. Our setup has ten Ceph 
nodes, with SSDs for the journal and HDDs for the FileStore; the Ceph version is 0.80.9.
   We run fio in a virtual machine with a random 4 KB write workload, and we find 
that the ondisk_finisher takes about 1 ms on average, while the journal write only 
takes 0.4 ms, which seems unreasonable.
    Since the ondisk callback is called with the pg lock held, if the pg lock has 
been grabbed by another thread (for example, osd->op_wq), all ondisk callbacks will 
be delayed, and therefore all write ops will be delayed.
     I found that op_commit must be called with the pg lock held, so what about 
increasing the ondisk_finisher thread count, so that ondisk callbacks are less 
likely to be delayed?

--
Ding Dinghua
--

