Hi, Andy,

On Wed, Apr 01, 2015 at 03:11:14PM +0000, Andy Smith wrote:
> I have a 6 device RAID-1 filesystem:

[snip tale of a filesystem with out-of-date data on one copy of the RAID]

> I have now got a new enclosure and put this system back together
> with all six devices. I was not expecting this filesystem to mount
> without assistance on boot because of /dev/sdk being "stale"
> compared to the other devices. I suppose this incorrect view is a
> holdover from my experience with mdadm.
> 
> Anyway, I booted it and /srv/tank was mounted automatically with all
> six devices.  I got a bunch of these messages as soon as it was
> mounted:
> 
>     http://pastie.org/private/2ghahjwtzlcm6hwp66hkg
> 
> There's lots more of it but it's all like that. That paste is from
> the end of the log and there haven't been any more such messages
> since, so that's about 20 minutes (the times are in GMT).
> 
> Is that normal output indicating that btrfs is repairing the
> "staleness" of sdk from the other copy?

   Yes, exactly. That output is pretty much what I'd expect to see in
the situation you describe. In addition to the metadata messages
you're getting, you might also see some checksum errors being
corrected in the data.
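   A quick way to see whether any such corrections have been counted,
as a sketch, assuming the filesystem is mounted at /srv/tank as in
your message:

```shell
# Show per-device error counters (read/write/flush/corruption/
# generation). Non-zero corruption or generation counts here would
# correspond to the corrections appearing in your log.
# Assumes the mount point is /srv/tank.
btrfs device stats /srv/tank
```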

> I seem to be able to use the filesystem and a cursory inspection
> isn't turning up anything that I can't read or that seems
> corrupted. I will now run checksums against my last good backup.
> 
> Should I run a scrub as well?

   Yes. The output you've seen so far covers only the pieces the FS
has actually tried to read, which is the only place it's been able to
detect (and repair) the out-of-date data. A scrub will check and fix
everything.
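   Roughly, and assuming the same /srv/tank mount point as above, that
would look like:

```shell
# Start a background scrub of the whole filesystem; it reads every
# copy of every block, verifies checksums, and rewrites bad copies
# from a good one.
btrfs scrub start /srv/tank

# Check progress and the running total of corrected errors; repeat
# until the scrub reports it has finished.
btrfs scrub status /srv/tank
```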

   Hugo.

-- 
Hugo Mills             | My karma has run over my dogma.
hugo@... carfax.org.uk |
http://carfax.org.uk/  |
PGP: 65E74AC0          |
