Hi James,

I found this for btrfs:

       device delete <dev> [<dev>..] <path>
              Remove device(s) from a filesystem identified by <path>.

       device add <dev> [<dev>..] <path>
              Add device(s) to the filesystem identified by <path>.
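For example, swapping out a disk on a mounted filesystem would look something like this (the device names and mount point are placeholders, not from your setup):

```shell
# Add the replacement disk to the mounted filesystem (hypothetical paths)
btrfs device add /dev/sdc /mnt/data

# Remove the old disk; btrfs relocates its data to the remaining devices
btrfs device delete /dev/sdb /mnt/data
```

Note that "device delete" has to move all of the data off the disk before it returns, so it can take a long time on a large filesystem.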

https://btrfs.wiki.kernel.org/index.php/Using_Btrfs_with_Multiple_Devices#Replacing_failed_devices

gives more details, including the "delete missing" option, which I cannot
find in the manpage on my CentOS 6 box that I just quoted.
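If the failed disk is already gone, the sequence described on the wiki is roughly the following (as I read it; the device names and mount point here are placeholders):

```shell
# Mount degraded, since a member device is absent (hypothetical paths)
mount -o degraded /dev/sdb /mnt/data

# Add a replacement disk to the filesystem
btrfs device add /dev/sdc /mnt/data

# "missing" tells btrfs to drop the absent device from the array
btrfs device delete missing /mnt/data
```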

I hope it helps. I looked it up out of curiosity; I am more used to ZFS,
with "zpool offline" and "zpool replace".
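For comparison, the ZFS workflow I am used to looks like this (the pool and device names are made up for the example):

```shell
# Take the failing disk offline in pool "tank"
zpool offline tank /dev/sdb

# Replace it with a new disk; ZFS starts resilvering automatically
zpool replace tank /dev/sdb /dev/sdc

# Watch the resilver progress
zpool status tank
```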

Regards
Peter


On Mon, Feb 1, 2016 at 10:52 PM, James Harper via luv-main <
[email protected]> wrote:

> Is there a way that I can, on a running system, mark a btrfs disk as
> having failed, so that it will now be “missing” and the array will be in a
> degraded state?
>
>
>
> I can obviously do it by using fdisk to delete the partition, then
> rebooting and mounting with the degraded option, but I want to do it
> without a reboot, and without having to tinker with the boot process
> remotely.
>
>
>
> I can also delete the device (move all the data off the device), then
> delete it, then add the new device, then rebalance to move all the data
> back, and that would be safer, but would be terribly slow. Also I’m not
> sure I have enough free space to allow this (maybe I do, but it would be
> tight enough that I can’t be sure there wouldn’t be some overhead I haven’t
> taken into account)
>
>
>
> Thanks
>
>
>
> James
>
> _______________________________________________
> luv-main mailing list
> [email protected]
> http://lists.luv.asn.au/listinfo/luv-main
>
>
