On 20.05.2025 at 16:03, Stefan Hajnoczi wrote:
> On Thu, May 15, 2025 at 05:02:46PM +0200, Kevin Wolf wrote:
> > On 15.05.2025 at 16:01, Stefan Hajnoczi wrote:
> > > On Thu, May 15, 2025 at 10:15:53AM +0200, Kevin Wolf wrote:
> > > > On 13.05.2025 at 15:51, Stefan Hajnoczi wrote:
> > > > > On Tue, May 13, 2025 at 01:37:30PM +0200, Kevin Wolf wrote:
> > > > > > When scsi-block is used on a host multipath device, it runs into the
> > > > > > problem that the kernel dm-mpath doesn't know anything about SCSI or
> > > > > > SG_IO and therefore can't decide if a SG_IO request returned an error
> > > > > > and needs to be retried on a different path. Instead of getting
> > > > > > working failover, an error is returned to scsi-block and handled
> > > > > > according to the configured error policy. Obviously, this is not what
> > > > > > users want; they want working failover.
> > > > > > 
> > > > > > QEMU can parse the SG_IO result and determine whether this could have
> > > > > > been a path error, but simply retrying the same request could send it
> > > > > > to the same failing path again and result in the same error.
> > > > > > 
> > > > > > With a kernel that supports the DM_MPATH_PROBE_PATHS ioctl on
> > > > > > dm-mpath block devices (queued in the device mapper tree for Linux
> > > > > > 6.16), we can tell the kernel to probe all paths and tell us if any
> > > > > > usable paths remained. If so, we can now retry the SG_IO ioctl and
> > > > > > expect it to be sent to a working path.
> > > > > > 
> > > > > > Signed-off-by: Kevin Wolf <kw...@redhat.com>
> > > > > > ---
> > > > > >  block/file-posix.c | 82 +++++++++++++++++++++++++++++++++++++++++++++-
> > > > > >  1 file changed, 81 insertions(+), 1 deletion(-)
> > > > > 
> > > > > Maybe the probability of retry success would be higher with a delay so
> > > > > that intermittent issues have time to resolve themselves. Either way,
> > > > > the patch looks good.
> > > > 
> > > > I don't think adding a delay here would be helpful. The point of
> > > > multipath isn't that you wait until a bad path comes back, but that you
> > > > just switch to a different path until it is restored.
> > > 
> > > That's not what this loop does. DM_MPATH_PROBE_PATHS probes all paths
> > > and fails when no paths are available. The delay would only apply in the
> > > case when there are no paths available.
> > > 
> > > If the point is not to wait until some path comes back, then why loop at
> > > all?
> > 
> > DM_MPATH_PROBE_PATHS can only send I/O to paths in the active path
> > group, so it doesn't fail over to different path groups. If there are no
> > usable paths left in the current path group, but there are some in
> > another one, then the ioctl returns 0 and the next SG_IO would switch to
> > a different path group, which may or may not succeed. If it fails, we
> > have to probe the paths in that group, too.
> 
> This wasn't obvious to me; can that be emphasized in the code via naming
> or comments? About retrying up to 5 times: is the assumption that there
> will be 5 or fewer path groups?

Originally, the thought behind the 5 was more about the case where
DM_MPATH_PROBE_PATHS offlines bad paths, but then another one goes down
before we retry SG_IO, so that it fails again.

But you're right that it would now apply to retrying in a different path
group. The assumption we make would then be that there will be 5 or
fewer path groups with no working path in them (rather than just 5 of
them existing). That doesn't seem like a completely unreasonable
assumption, but maybe we should increase the number now just to be on
the safe side?
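
For reference, the retry loop under discussion could be sketched like this. This is a self-contained model, not the actual file-posix.c code: the SG_IO and DM_MPATH_PROBE_PATHS ioctls are replaced by stubs over a hypothetical path-group table, and MAX_RETRIES stands in for the limit of 5.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

enum { MAX_RETRIES = 5 };

/* Hypothetical model of a dm-mpath device: a set of path groups that
 * are each either healthy or failed, plus the currently active group. */
struct mpath_state {
    int ngroups;
    bool healthy[8];
    int active;
};

/* Stub for the SG_IO ioctl: succeeds iff the active group is healthy. */
static bool sg_io(struct mpath_state *s)
{
    return s->healthy[s->active];
}

/* Stub for DM_MPATH_PROBE_PATHS: probes only the active path group.
 * If that group has no usable path but other groups remain, it still
 * returns 0 and the next I/O fails over to another group (which may
 * itself be dead, requiring another probe). Returns -1 only when no
 * usable path is left anywhere. */
static int probe_paths(struct mpath_state *s)
{
    bool any_usable = false;
    for (int i = 0; i < s->ngroups; i++) {
        any_usable |= s->healthy[i];
    }
    if (!any_usable) {
        return -1;
    }
    if (!s->healthy[s->active]) {
        /* model the kernel switching to the next path group */
        s->active = (s->active + 1) % s->ngroups;
    }
    return 0;
}

/* The retry loop: each failed SG_IO triggers a probe; each probe can
 * only move us one path group further, so MAX_RETRIES bounds how many
 * consecutive dead path groups can be traversed before giving up. */
static int sg_io_with_failover(struct mpath_state *s)
{
    for (int attempt = 0; attempt < MAX_RETRIES; attempt++) {
        if (sg_io(s)) {
            return 0;           /* request reached a working path */
        }
        if (probe_paths(s) < 0) {
            return -ENODEV;     /* no usable paths left at all */
        }
    }
    return -EIO;                /* retry budget exhausted */
}
```

In this model the retry count bounds the number of consecutive dead path groups that can be skipped over, not the total number of path groups, which matches the assumption described above.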

Ben, do you have an opinion on this?

Kevin
