On Mon, Nov 30, 2015 at 7:51 AM, Austin S Hemmelgarn
<[email protected]> wrote:

> General thoughts on this:
> 1. If there's a write error, we fail unconditionally right now.  It would be
> nice to have a configurable number of retries before failing.

I'm unconvinced. I pretty much immediately do not trust a block device
that fails even a single write, and I'd expect the file system to
quickly get confused if it can't rely on flushing pending writes to
that device. Unless Btrfs gets into the business of tracking bad
sectors (failed writes), the block device is a goner after a single
write failure, although it could still be reliable for reads.

Possibly reasonable is letting the user indicate a preference for what
happens after the max number of write failures is exceeded:

- Volume goes degraded: Faulty block device is ignored entirely,
degraded writes permitted.
- Volume goes ro: Faulty block device is still used for reads,
degraded writes not permitted.

As far as I know, md and lvm only do the former. And md/mdadm recently
gained support for bad block maps, so it can keep using drives that
have exhausted their reserve sectors (exhausted reserves are typically
the reason for write failures on conventional rotational drives).
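
To make that concrete, here's a rough userspace sketch of the policy
I'm describing (all names and knobs here are hypothetical; nothing
like this exists in btrfs today):

/* Hypothetical sketch, not btrfs code: what a per-device
 * write-failure policy could look like. Threshold and policy
 * names are made up for illustration. */
#include <stdio.h>

enum fail_policy {
	FAIL_POLICY_DEGRADE,	/* drop the device; allow degraded writes */
	FAIL_POLICY_RO,		/* keep the device for reads; refuse writes */
};

struct device_state {
	unsigned int write_errors;
	unsigned int max_write_errors;	/* user-configurable threshold */
	enum fail_policy policy;
	int writable;
	int readable;
};

static void on_write_error(struct device_state *dev)
{
	if (!dev->writable)
		return;		/* already failed out */
	if (++dev->write_errors <= dev->max_write_errors)
		return;		/* under the threshold; caller may retry */

	switch (dev->policy) {
	case FAIL_POLICY_DEGRADE:
		dev->writable = dev->readable = 0;
		printf("device ignored entirely; volume degraded\n");
		break;
	case FAIL_POLICY_RO:
		dev->writable = 0;
		printf("device kept for reads; new writes refused\n");
		break;
	}
}

int main(void)
{
	struct device_state dev = {
		.max_write_errors = 3,
		.policy = FAIL_POLICY_RO,
		.writable = 1,
		.readable = 1,
	};

	for (int i = 0; i < 5; i++)
		on_write_error(&dev);
	printf("writable=%d readable=%d\n", dev.writable, dev.readable);
	return 0;
}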



> 2. Similar for read errors, possibly with the ability to ignore them below
> some threshold.

Agreed. Maybe it should be an error rate, set as a ratio of errors to
total reads, rather than an absolute count?
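
Something along these lines, as a toy sketch (the ratio arithmetic is
the point; the names and threshold are made up):

/* Hypothetical sketch: flag a device as faulty only when its read
 * error ratio crosses a configurable threshold, not on any single
 * error. */
#include <stdio.h>

struct read_stats {
	unsigned long reads;
	unsigned long read_errors;
};

/* Returns 1 when errors/reads exceeds errs_per_million/1000000. */
static int over_error_ratio(const struct read_stats *s,
			    unsigned long errs_per_million)
{
	if (s->reads == 0)
		return 0;
	return s->read_errors * 1000000UL > s->reads * errs_per_million;
}

int main(void)
{
	/* 60 errors in 50000 reads = 1200 per million; threshold 1000 */
	struct read_stats s = { .reads = 50000, .read_errors = 60 };

	printf("faulty: %d\n", over_error_ratio(&s, 1000));
	return 0;
}

A ratio has the advantage that a big, busy device isn't condemned for
the same handful of errors that would rightly fail a small, mostly
idle one.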




-- 
Chris Murphy