Package: btrfs-tools
Version: 0.19+20120328-7
Severity: critical
I have marked this as critical because more people are likely to use btrfs
with wheezy, and the bug results in filesystems not coming up at boot;
it is therefore a system-wide issue and not just a package issue. It
does not involve data loss.
Apparently the command

    btrfs dev scan

must be invoked before a btrfs RAID1 filesystem can be mounted.
btrfs-tools has some support for this in /lib/udev/rules.d/60-btrfs.rules
However, in my case the RAID1 is made up of two LVM logical volumes,

    mkfs.btrfs -m raid1 -d raid1 /dev/mapper/vg00-btrfsvol0_[01]

and the udev rule never seems to be triggered for them.
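As a workaround until the package handles this, the scan can be run by
hand before mounting. A sketch, using the device and mount point from
this report (requires root and an existing btrfs RAID1):

```shell
# Register all btrfs member devices with the kernel so it can
# assemble the multi-device filesystem before mount:
btrfs device scan

# The mount then succeeds:
mount /dev/mapper/vg00-btrfsvol0_0 /mnt/btrfs0
```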
When I try to mount the volume after a reboot:

    # mount /dev/mapper/vg00-btrfsvol0_0 /mnt/btrfs0
    mount: wrong fs type, bad option, bad superblock on
           /dev/mapper/vg00-btrfsvol0_0,
           missing codepage or helper program, or other error
           In some cases useful info is found in syslog - try
           dmesg | tail or so
I checked dmesg:

    [17216.145092] device fsid c959d4a5-0713-4685-b572-8a679ec37e20 devid 1
    transid 34 /dev/mapper/vg00-btrfsvol0_0
    [17216.145639] btrfs: disk space caching is enabled
    [17216.146987] btrfs: failed to read the system array on dm-100
    [17216.147556] btrfs: open_ctree failed
The feedback from the btrfs community is that:
a) `btrfs dev scan' should be run from an init script, and
b) the "Device mapper uevents" kernel option (CONFIG_DM_UEVENT) may be
   needed to trigger the udev rule when you enable your VG(s).
http://comments.gmane.org/gmane.comp.file-systems.btrfs/19271
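For reference, option (a) could look roughly like the following minimal
LSB-style init script. This is only a sketch, not a tested Debian
maintainer script; the script name, dependencies, and runlevels here are
my assumptions:

```shell
#!/bin/sh
### BEGIN INIT INFO
# Provides:          btrfs-scan
# Required-Start:    $local_fs udev lvm2
# Required-Stop:
# Default-Start:     S
# Default-Stop:
# Short-Description: Scan block devices for btrfs members
### END INIT INFO

# Register all btrfs member devices with the kernel before any
# multi-device btrfs filesystem is mounted from /etc/fstab.
case "$1" in
  start)
    /sbin/btrfs device scan
    ;;
  stop|restart|force-reload)
    # Nothing to undo; scanning is idempotent.
    ;;
esac
exit 0
```

The ordering matters: it must run after the LVM volume groups are
activated (hence the lvm2 dependency) but before local filesystems are
mounted.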