Hi,

On 15/08/2019 14:41, Bob Peterson wrote:
> I just noticed the parameter comments for gfs2_glock_async_wait
> are wrong, and I've fixed them in a newer version. I can post
> the new version after I get people's initial reactions.
>
> Bob

Overall this looks like a much better approach. We know that this doesn't happen very often, so the path which involves the timeout should be very rarely taken. The problem is how to select a suitable timeout... is 2 secs enough? Could we end up with a DoS in certain situations? Hopefully not, but we should take care.
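
(For concreteness, here is a minimal sketch of the kind of bounded wait
being discussed. The completion object, GLOCK_WAIT_TIMEOUT and the
example_timed_wait() helper are all invented for illustration; only the
two-second figure comes from the question above.)

#include <linux/completion.h>
#include <linux/errno.h>
#include <linux/jiffies.h>

/* Hypothetical cap on how long to wait for an asynchronous grant. */
#define GLOCK_WAIT_TIMEOUT (2 * HZ)	/* the "2 secs" in question */

static int example_timed_wait(struct completion *grant)
{
	/* Sleep until the grant arrives or the timeout expires;
	 * wait_for_completion_timeout() returns 0 on timeout. That
	 * is the rarely taken slow path: the caller should back off
	 * and retry rather than block forever. */
	if (!wait_for_completion_timeout(grant, GLOCK_WAIT_TIMEOUT))
		return -ESTALE;
	return 0;
}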

The shared wait queue might also be an issue in terms of contention, so it would be worth looking at how to avoid that. Generally though, this is looking very promising, I think,

Steve.


----- Original Message -----
Because s_vfs_rename_mutex is not cluster-wide, multiple nodes can
reverse the roles of which directories are "old" and which are "new"
for the purposes of rename. This can cause deadlocks where two nodes
can lock old-then-new but since their roles are reversed, they wait
for each other.
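
(To make the role reversal concrete, here is a hypothetical two-node
interleaving; the directory names are invented, and gfs2_glock_nq()
is the existing blocking enqueue:)

/* Node 1: rename("dirA/x", "dirB/x") */
gfs2_glock_nq(&gh_dirA);	/* granted */
gfs2_glock_nq(&gh_dirB);	/* blocks: node 2 holds dirB */

/* Node 2: rename("dirB/y", "dirA/y") */
gfs2_glock_nq(&gh_dirB);	/* granted */
gfs2_glock_nq(&gh_dirA);	/* blocks: node 1 holds dirA -- ABBA deadlock */

On a single node, s_vfs_rename_mutex serializes the two renames so this
interleaving cannot happen; across the cluster there is no equivalent
serialization.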

This patch fixes the problem by acquiring all of gfs2_rename's inode
glocks asynchronously and then waiting for all of them to be granted.
That way all the inodes get locked regardless of the order.

Signed-off-by: Bob Peterson <rpete...@redhat.com>
---
(snip)
+ * gfs2_glock_async_wait - wait on multiple asynchronous glock acquisitions
+ * @gh: the glock holder
(snip)
+int gfs2_glock_async_wait(unsigned int num_gh, struct gfs2_holder *ghs)
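
(As a rough usage sketch rather than the actual diff: with the existing
GL_ASYNC holder flag, gfs2_glock_nq() queues a request without blocking,
and the new helper then waits for the whole batch; the out_unlock label
is hypothetical and error handling is abbreviated.)

	int error, x;

	/* Queue every inode glock request up front; with GL_ASYNC set
	 * in the holder, gfs2_glock_nq() returns as soon as the request
	 * is submitted instead of sleeping until it is granted. */
	for (x = 0; x < num_gh; x++) {
		error = gfs2_glock_nq(ghs + x);
		if (error)
			goto out_unlock;
	}

	/* Wait for the whole batch at once. Because no node sleeps
	 * indefinitely while holding only some of the locks, the
	 * old-versus-new ordering no longer matters; a timeout returns
	 * an error so the caller can back out and retry. */
	error = gfs2_glock_async_wait(num_gh, ghs);
	if (error)
		goto out_unlock;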

