Hi Joel,
This reply may really be too late. :)
Joel Becker wrote:
On Wed, Jun 10, 2009 at 01:37:53PM +0800, Tao Ma wrote:
I also have some thoughts on it. I hope it isn't too late.
Well, if we come up with changes it will affect what I push, but
that's OK.
Srini,
I was re-reviewing these patches as part of the 1.4 merge and something
caught my eye. Specifically, this may slow down umount unnecessarily.
Consider the case where scan_work() fires a tick before scan_stop().
Currently, scan_stop() will cancel the newly queued scan_work() but do
nothing
Hi Srini/Joel/Sunil,
I also have some thoughts on it. I hope it isn't too late.
Currently, the orphan scan just iterates over all the slots and calls
ocfs2_queue_recovery_completion, but I don't think it is proper for a
node to query another mounted one, since that node will query it by
When a dentry is unlinked, the unlinking node takes an EX on the dentry lock
before moving the dentry to the orphan directory. The other nodes, which all had
a PR on the same dentry lock, flag the corresponding inode as MAYBE_ORPHANED
during the downconvert. The inode is finally deleted when the
In the current implementation, unlink is a two-step process.
1) The deleting node requests an EX on the dentry lock and places the file in the
orphan directory. The lock request causes other nodes to downconvert to NULL,
and flag the inode as orphaned.
2) Each node that has the inode cached will see