On 2016-10-18 11:02, Stefan Malte Schumacher wrote:

One of the drives I added to my array two days ago was most likely
already damaged when I bought it - 312 read errors while scrubbing and
lots of SMART errors. I want to take the drive out, go to my hardware
vendor and have it replaced. So I issued the command:
"btrfs dev del /dev/sdf /mnt/btrfs-raid". It has been running for
about an hour now. It seems that removing the drive triggered a
balance operation - "btrfs fi show" shows /dev/sdf as having zero
data on it, while the amount of data stored on the other drives is
increasing. What I would like to know is where btrfs takes the data to
be replicated from - from the other drives or from the device about to
be removed? I am using RAID1 for data, metadata and system, so there
should be a good copy of everything that is on /dev/sdf. I just want to
be certain that the damaged data on /dev/sdf is not spread around the
filesystem, in which case restoring my backup would be the best idea.
This is expected behavior. Any data that was on the device being
removed needs to be relocated to the other devices in the array, and
the easiest way to do that is to leverage the balance code.

Because it uses the balance code, you should have no issues with data
corruption being propagated to the other devices: balance also
functionally scrubs data as it is moved. It has the same fallback
error handling that regular file accesses do, so if it finds a
checksum error, the other copy of that block will be used instead.
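For anyone following the same procedure, a rough sketch of the
removal-and-replacement workflow (the device names /dev/sdf and the
mount point are from the original report; /dev/sdg is a hypothetical
name for the replacement drive - adapt both to your setup):

    # Confirm the drive is actually failing (smartctl is from smartmontools)
    smartctl -a /dev/sdf

    # Remove the failing device; this triggers an implicit balance
    # that relocates its chunks onto the remaining devices
    btrfs device delete /dev/sdf /mnt/btrfs-raid

    # From another terminal, watch the data drain off /dev/sdf:
    # its usage drops toward zero as chunks are relocated
    watch -n 60 btrfs filesystem show /mnt/btrfs-raid

    # Once the old drive has been swapped for the new one,
    # add it back and rebalance so data spreads across all devices
    btrfs device add /dev/sdg /mnt/btrfs-raid
    btrfs balance start /mnt/btrfs-raid

If the replacement drive is already on hand and there is a spare port,
"btrfs replace start /dev/sdf /dev/sdg /mnt/btrfs-raid" is usually
faster than delete-then-add, since it copies chunks directly to the
new device instead of rebalancing twice.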