On Fri, Jul 08, 2011 at 12:02:57AM -0700, Srinivas Eeda wrote:
Below is an excerpt from Joel's email on the same question :)
Currently, the orphan scan just iterates over all the slots and calls
ocfs2_queue_recovery_completion, but I don't think it is proper for a node
to query another mounted one, since that node will query it by
itself.
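The behavior Joel suggests can be modeled with a small sketch (names like `slots_to_scan` and `live` are illustrative, not ocfs2 identifiers; the real code iterates recovery slots in-kernel): a node would queue orphan-scan work only for its own slot and for slots with no mounted owner, since a mounted owner scans its own slot. Note that Sunil's point 2 below questions exactly this optimization.

```python
# Toy model of the slot iteration Joel describes (not the real ocfs2 code).

def slots_to_scan(my_slot, live):
    """Return the slots this node should queue orphan scans for.

    `live` maps slot number -> True if a mounted node currently owns
    that slot.  A mounted owner scans its own slot, so we skip it here.
    """
    return [s for s in live
            if s == my_slot or not live[s]]

# Example: 4 slots; slots 0 and 2 are owned by mounted nodes.
live = {0: True, 1: False, 2: True, 3: False}
print(slots_to_scan(my_slot=0, live=live))  # -> [0, 1, 3]
```

Node 0 scans its own slot plus the two unowned slots, and leaves slot 2 to the node mounted there.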
Node 1 has
On 07/06/2011 11:19 PM, Srinivas Eeda wrote:
On 7/5/2011 11:17 PM, Sunil Mushran wrote:
2. All nodes have to scan all slots, even live slots. I remember we did
that for a reason, and that reason should be in the comment in the patch
written by Srini.
When a node unlinks a file, it inserts an entry into its own orphan
slot. If another node
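The per-slot orphan mechanism can be sketched as follows (a toy model with hypothetical names; the real orphan directories are on-disk structures, one per slot, and my `still_open` reaping condition is an assumption about when an orphan becomes reclaimable):

```python
# Toy model (not ocfs2 code): each node owns one orphan slot; unlink
# records the inode in the unlinking node's own slot.

def unlink(orphan_slots, my_slot, inode):
    """Insert the unlinked inode into this node's own orphan slot."""
    orphan_slots.setdefault(my_slot, []).append(inode)

def scan_slot(orphan_slots, slot, still_open):
    """Reap orphans in `slot` whose inodes are no longer held open.

    Assumption for illustration: an orphan can be freed once no node
    holds the inode open anywhere in the cluster.
    """
    entries = orphan_slots.get(slot, [])
    reaped = [i for i in entries if i not in still_open]
    orphan_slots[slot] = [i for i in entries if i in still_open]
    return reaped

slots = {}
unlink(slots, my_slot=1, inode=42)  # node 1 unlinks inode 42
unlink(slots, my_slot=1, inode=43)
print(scan_slot(slots, slot=1, still_open={43}))  # -> [42]
```

Inode 43 is still open somewhere, so it stays in the slot; inode 42 can be reaped.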
On 11-07-06 14:41, Wengang Wang wrote:
On 11-07-05 23:17, Sunil Mushran wrote:
On 07/05/2011 09:38 PM, Wengang Wang wrote:
There is a use case where the app deletes a huge number (XX kilo) of files
every 5 minutes. The deletion of some specific files is extremely slow
(costing xx~xxx seconds). That is unacceptable.
Reading out the dir entries and the relevant inodes costs time, and we are
doing that with i_mutex held,
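The bottleneck Wengang describes can be illustrated with a toy sketch (not ocfs2 code; a plain Python lock stands in for the directory's i_mutex): while one thread holds the lock doing a slow read of entries and inodes, a concurrent unlink on the same directory must wait the whole time.

```python
# Toy illustration of serialization behind a held directory lock.
import threading
import time

i_mutex = threading.Lock()   # stands in for the directory's i_mutex
waited = []

def slow_scan():
    with i_mutex:            # dir entries/inodes are read under the lock
        time.sleep(0.2)      # pretend the read is expensive

def unlink_one():
    t0 = time.monotonic()
    with i_mutex:            # the unlink needs the same lock
        waited.append(time.monotonic() - t0)

scanner = threading.Thread(target=slow_scan)
scanner.start()
time.sleep(0.05)             # let the scan grab the lock first
unlinker = threading.Thread(target=unlink_one)
unlinker.start()
scanner.join()
unlinker.join()
print(f"unlink waited ~{waited[0]:.2f}s behind the scan")
```

With many files queued for deletion, these waits compound, which matches the xx~xxx second deletion times reported above.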