Hi,

On 18/12/2018 16:09, Mark Syms wrote:
Thanks Bob,

We believe we have seen these issues from time to time in our automated
testing, but I suspect they indicate a configuration problem with the backing
storage. For flexibility, a proportion of our purely functional testing will
use storage provided by a VM running a software iSCSI target, and those tests
seem to be somewhat susceptible to I/O errors, some of which will inevitably
end up being in the journal. If we start to see a lot of them, we'll need to
look at the config of the VMs first, I think.

        Mark.

I think there are a few things here... firstly, Bob is right that, in general, if we are going to retry I/O, it should be done at the block layer, by multipath for example. However, having a way to deal gracefully with failure, aside from fencing/rebooting a node, is useful.
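
To make "retry at the block layer" concrete, here is a minimal multipath.conf fragment along those lines; the values are only placeholders, not a recommendation for any particular setup:

defaults {
        # polling_interval: seconds between path checker runs.
        polling_interval  5
        # no_path_retry: how many polling intervals queued I/O keeps being
        # retried after all paths have failed, before errors are passed
        # back up to the filesystem.
        no_path_retry     12
}

Once no_path_retry expires the error still reaches GFS2, so the filesystem-side handling discussed below is still needed.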

One issue with that is tracking outstanding I/O. For the journal we do that anyway, since we count the number of in-flight I/Os. In other cases it is more difficult, for example where we use the VFS library functions for readpages/writepages. If we were able to track all the I/O that GFS2 produces, and be certain we could turn off future I/O (or at least writes) internally, then we could avoid the dm-based solution for withdraw that we currently have. That would be an improvement in terms of reliability.
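
As a rough illustration of the kind of tracking I mean, here is a sketch (only a sketch, not the existing GFS2 code; the io_tracker structure and function names are invented for the example) that counts each bio on submission, drops the count on completion, and refuses new writes once a shutdown flag has been set:

#include <linux/atomic.h>
#include <linux/bio.h>
#include <linux/bitops.h>
#include <linux/wait.h>

struct io_tracker {
        atomic_t          in_flight; /* bios submitted but not yet completed */
        unsigned long     flags;     /* bit 0: no further writes allowed */
        wait_queue_head_t wait;      /* woken when in_flight drops to zero */
};

#define IO_TRACKER_SHUTDOWN 0

static void tracked_end_io(struct bio *bio)
{
        struct io_tracker *tr = bio->bi_private;

        /* bio lifetime handling omitted for brevity */
        if (atomic_dec_and_test(&tr->in_flight))
                wake_up(&tr->wait);
}

/* Every GFS2 submission path would have to go through something like this. */
static int tracked_submit_bio(struct io_tracker *tr, struct bio *bio)
{
        if (op_is_write(bio_op(bio)) &&
            test_bit(IO_TRACKER_SHUTDOWN, &tr->flags))
                return -EIO;

        atomic_inc(&tr->in_flight);
        bio->bi_private = tr;
        bio->bi_end_io = tracked_end_io;
        submit_bio(bio);
        return 0;
}

/* On withdraw: turn off future writes, then wait out what is in flight. */
static void tracked_shut_down_writes(struct io_tracker *tr)
{
        set_bit(IO_TRACKER_SHUTDOWN, &tr->flags);
        wait_event(tr->wait, atomic_read(&tr->in_flight) == 0);
}

The difficult part is exactly the readpages/writepages case above: those bios are built by the generic VFS/block helpers, so they would all need to be routed through something like tracked_submit_bio() before the accounting could be considered complete.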

The other issue is the one that Bob has been looking at, namely a way to signal that recovery is due, but without requiring fencing. If we can solve both of those issues, then that would certainly go a long way towards improving this.

Steve.



-----Original Message-----
From: Bob Peterson <rpete...@redhat.com>
Sent: 18 December 2018 15:52
To: Mark Syms <mark.s...@citrix.com>
Cc: cluster-devel@redhat.com
Subject: Re: [Cluster-devel] [GFS2 PATCH] gfs2: Panic when an io error occurs writing

----- Original Message -----
Hi Bob,

I agree, it's a hard problem. I'm just trying to be sure that we've
done the absolute best we can, and that if this condition is hit then
the best solution really is to just kill the node. I guess it's also a
question of how common this actually ends up being. We now have
customers starting to use GFS2 for VM storage on XenServer, so I guess
we'll just have to see how many support calls we get on it.

Thanks,

Mark.
Hi Mark,

I don't expect the problem to be very common in the real world.
The user has to get IO errors while writing to the GFS2 journal, which is not 
very common. The patch is basically reacting to a phenomenon we recently 
started noticing, in which the HBA (qla2xxx) driver shuts down and stops 
accepting requests when you do abnormal reboots (which we sometimes do to test 
node recovery). In these cases, the node doesn't go down right away.
It stays up just long enough to cause IO errors and subsequent withdraws, 
which, we discovered, result in file system corruption.
Normal reboots, "/sbin/reboot -fin", and "echo b > /proc/sysrq-trigger" should 
not have this problem, nor should node fencing, etc.

And like I said, I'm open to suggestions on how to fix it. I wish there was a 
better solution.

As it is, I'd kind of like to get something into this merge window for the 
upstream kernel, but I'll need to submit the pull request for that probably 
tomorrow or Thursday. If we find a better solution, we can always revert these 
changes and implement a new one.

Regards,

Bob Peterson

