On Mar 19, 2014, at 9:40 AM, Marc MERLIN <m...@merlins.org> wrote:
> After adding a drive, I couldn't quite tell if it was striping over 11
> drives or 10, but it felt that at least at times, it was striping over 11
> drives with write failures on the missing drive.
> I can't prove it, but I'm thinking the new data I was writing was being
> striped in degraded mode.

Well, it does sound fragile to add a drive to a degraded array, especially 
when btrfs isn't expressly treating the faulty drive as faulty. I think iotop 
will show which block devices are being written to. And in a VM it's easy 
(albeit rudimentary) to check with sparse files, as you can see them grow.

> Yes, although it's limited, you apparently only lose new data that was added
> after you went into degraded mode and only if you add another drive where
> you write more data.
> In real life this shouldn't be too common, even if it is indeed a bug.

It's entirely plausible for a drive's power or data cable to come loose and 
for the array to run degraded for hours before the wayward device is 
reseated. That will be common enough. It's definitely not OK for all of the 
data written in the interim to vanish just because the volume has returned 
from degraded to normal. Two states of data, normal vs. degraded, are scary. 
It sounds like totally silent data loss. So yeah, if it's reproducible it's 
worthy of a separate bug.

Chris Murphy

