Frank Leonhardt wrote:

> > vm-bhyve keeps virtual machines on zfs volumes with volmode=dev. How
> > can I access/mount the filesystems within the volume when the
> > virtual host is offline?
> >
> > If I kept virtual disks in raw files, I could access them as devices
> > with mdconfig. But:
> >
> > root@newserv:~ # mdconfig -a -f /dev/zvol/zroot/vm/mail/disk0
> > mdconfig: /dev/zvol/zroot/vm/mail/disk0 is not a regular file
> > root@newserv:~ #
> >
> > Also, how can I exchange those zfs volumes for use with other
> > hypervisors? They are not real raw disk files so I cannot use
> > sysutils/vmdktool etc.
>
> I don't know this, but I'll guess(!)
>
> If you've set volmode to dev then you get a cdev device in devfs, and
> you'll never get it to mount. Try using geom instead (which IIRC is
> the default).

The default in vm-bhyve is volmode=dev, and I think this is reasonable.
Do you know if I can clone an existing volmode=dev volume into a
volmode=geom volume and then work with the clone?
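Something like this is what I have in mind (an untested sketch; the
snapshot and clone names are made up, and I'm assuming the guest has a
UFS filesystem on its second partition):

  zfs snapshot zroot/vm/mail/disk0@inspect
  zfs clone -o volmode=geom zroot/vm/mail/disk0@inspect zroot/inspect0
  # with volmode=geom, GEOM should taste the clone, so partition
  # nodes appear under /dev/zvol/zroot/, e.g. inspect0p2
  mount /dev/zvol/zroot/inspect0p2 /mnt

Afterwards "umount /mnt" and "zfs destroy -R zroot/vm/mail/disk0@inspect"
should clean up both the clone and the snapshot.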
> HOWEVER, I suspect you're doing this because you're hoping that a ZFS
> volume is faster than a file.

Well, not actually.

> I went through this, in the hope it wouldn't do CoW and would
> therefore be a lot better for databases. I was disappointed!
> Basically, it's no better than a ZFS file. If that was your plan, use
> a UFS partition.

A UFS partition? Where?

> I don't use ZFS volumes any more; I think they're more useful on
> Solaris. A md mapped on to a ZFS file seems to be the BSD way, and
> for VMs just use a file in its own dataset. You can then clone the
> dataset. Just what you need for nearly identical VMs.

I've preferred disk0_dev="zvol" VMs for aesthetic reasons since
vm-bhyve started supporting them. Those file-based VMs get in the way
while backing up $vm_dir, and their disks are not visible in
"zfs list -t volume".
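P.S. As for my other question, exchanging the volumes with other
hypervisors: since the zvol is at least visible as a device, I suppose
it could be dumped into a plain file and converted from there
(untested; the paths are only examples, and the destination needs as
much free space as the volume's volsize):

  dd if=/dev/zvol/zroot/vm/mail/disk0 of=/var/tmp/disk0.raw bs=1m
  qemu-img convert -f raw -O vmdk /var/tmp/disk0.raw /var/tmp/disk0.vmdk

sysutils/vmdktool should also be usable on the raw copy, and going the
other way would be a dd back onto a volume of matching volsize.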
-- 
Victor Sudakov, VAS4-RIPE, VAS47-RIPN
2:5005/49@fidonet http://vas.tomsk.ru/