----- Original Message -----
(snip)
> Please take a look at this
> again and figure out what the problematic cycle of events is, and then
> work out how to avoid that happening in the first place. There is no
> point in replacing one problem with another one, particularly one which
> would likely be very tricky to debug,
> 
> Steve.

The problematic cycle of events is well known:
gfs2_clear_inode calls gfs2_glock_put() for the inode's glock,
but if it's the very last put, gfs2_glock_put calls into dlm to
drop the lock, and dlm can block. That's where we get into trouble.

The livelock goes like this:

1. A fence operation needs memory, so it blocks on memory allocation.
2. Memory allocation blocks on slab shrinker.
3. Slab shrinker calls into vfs inode shrinker to free inodes from memory.
4. vfs inode shrinker eventually calls gfs2_clear_inode to free an inode.
5. gfs2_clear_inode calls the final gfs2_glock_put to unlock the inode's glock.
6. gfs2_glock_put calls dlm unlock to unlock the glock.
7. dlm blocks on a pending fence operation. Goto 1.
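
To make steps 4 through 6 concrete, the shape of the code is roughly
this (a simplified sketch, not the actual source; the real functions
do a lot more):

static void gfs2_clear_inode(struct inode *inode)
{
        struct gfs2_inode *ip = GFS2_I(inode);

        /* ... tear down the inode's state ... */

        gfs2_glock_put(ip->i_gl);       /* may be the final put */
}

void gfs2_glock_put(struct gfs2_glock *gl)
{
        if (atomic_dec_and_test(&gl->gl_ref)) {
                /* Final put: call into dlm to drop the lock.  dlm
                 * can block behind a pending fence operation, which
                 * is fatal if we got here from the shrinker. */
                gdlm_put_lock(gl);
        }
}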

So we need to prevent gfs2_clear_inode from calling into dlm. Still,
somebody eventually has to do the final put and tell dlm to unlock the
inode's glock, which is why I've been trying to queue that work to the
delayed work queue.
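
Roughly what I have in mind (an untested sketch; GLF_FINAL_PUT is a
made-up flag name, and I'm glossing over the races):

void gfs2_glock_put(struct gfs2_glock *gl)
{
        /* Not the last reference: nothing to tell dlm about. */
        if (atomic_add_unless(&gl->gl_ref, -1, 1))
                return;

        /* Last reference.  Don't drop it here and don't call into
         * dlm (we may be deep inside the shrinker).  Instead, hand
         * the reference to the glock work function, which runs in a
         * context that is allowed to block in dlm. */
        set_bit(GLF_FINAL_PUT, &gl->gl_flags);  /* made-up flag */
        queue_delayed_work(glock_workqueue, &gl->gl_work, 0);
}

The work function would then see GLF_FINAL_PUT set, do the dlm unlock,
and free the glock from a context where blocking is harmless.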

If I can't do that, we're left with few alternatives. Perhaps the
quota daemon could take on a new function: run the glock lru list and
call dlm to unlock any glocks that have a special bit set. But that
just seems ugly.
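
For the sake of argument, that would look something like this
(GLF_UNLOCK_ME is a made-up bit; lru_list and lru_lock are the
existing glock lru, and I'm ignoring the lru accounting):

static void gfs2_scan_lru_for_unlocks(void)
{
        struct gfs2_glock *gl;

restart:
        spin_lock(&lru_lock);
        list_for_each_entry(gl, &lru_list, gl_lru) {
                if (!test_and_clear_bit(GLF_UNLOCK_ME, &gl->gl_flags))
                        continue;
                /* Take it off the lru and drop the spinlock before
                 * calling into dlm, which may block. */
                list_del_init(&gl->gl_lru);
                spin_unlock(&lru_lock);
                gdlm_put_lock(gl);
                goto restart;
        }
        spin_unlock(&lru_lock);
}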

I've thought of some other alternatives, but they seem a lot uglier and
harder to manage. I'll give it some more thought.

Regards,

Bob Peterson
Red Hat File Systems
