One of the drives which I added to my array two days ago was most
likely already damaged when I bought it - 312 read errors while
scrubbing and lots of SMART errors. I want to take the drive out, go
to my hardware vendor and have it replaced. So I issued the command:
"btrfs dev del /dev/sdf /mnt/btrfs-raid". It has been going on for
about an hour now. It seems that removing the drive triggered a
balance operation: "btrfs fi show" shows /dev/sdf as having zero data
on it, while the size of the data stored on the other drives is
increasing. What I would like to know is where btrfs reads the data
from when re-replicating - from the other drives or from the device about to
be removed? I am using RAID1 for data, metadata and system, so there
should be a good copy elsewhere of everything stored on /dev/sdf. I just
want to be certain that the damaged data on /dev/sdf is not being spread
around the filesystem, in which case restoring my backup would be the best idea.
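For reference, this is the sequence of commands I have been using; the
device and mount-point names (/dev/sdf, /mnt/btrfs-raid) are specific to
my setup, and the scrub at the end is just my way of checking that the
surviving copies pass their checksums - not something the delete requires:

```shell
# Remove the failing device from the RAID1 array (requires root).
# This triggers the relocation that "btrfs fi show" reports as the
# device's usage dropping while the other drives fill up.
btrfs device delete /dev/sdf /mnt/btrfs-raid

# In another shell, watch the per-device usage while the delete runs:
btrfs filesystem show /mnt/btrfs-raid

# Once the delete finishes, scrub the filesystem (-B waits in the
# foreground) so checksum errors on the remaining copies would show up:
btrfs scrub start -B /mnt/btrfs-raid
btrfs scrub status /mnt/btrfs-raid
```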

Yours sincerely