Victor Sudakov wrote on 2019-02-11 17:46:
...
I've preferred disk0_dev="zvol" VMs for aesthetic reasons since
vm-bhyve started supporting them. File-based VMs get in the way
when backing up $vm_dir, and their disks are not visible in
"zfs list -t volume".
+1.
--
P Vixie
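For reference, zvol-backed vm-bhyve disks show up directly in a volume listing; the names and numbers below are hypothetical:

```shell
zfs list -t volume
# NAME                  USED  AVAIL  REFER  MOUNTPOINT
# zroot/vm/mail/disk0  20.6G   412G  9.84G  -
```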
Frank Leonhardt wrote:
> >
> > vm-bhyve keeps virtual machines on zfs volumes with volmode=dev. How can
> > I access/mount the filesystems within the volume when the virtual host
> > is offline?
> >
> > If I kept virtual disks in raw files, I could access them as devices
> > with mdconfig. But:
>
Paul Webster wrote:
> mdconfig -a -t vnode -f afaik
Sorry, this does not work.
root@newserv:~ # mdconfig -a -t vnode /dev/zvol/zroot/vm/mail/disk0
mdconfig: /dev/zvol/zroot/vm/mail/disk0 is not a regular file
There must be some other way.
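One approach that is often suggested for this (a sketch, not tested against this exact setup, and only safe while the VM is offline): switch the volume's volmode from dev to geom so GEOM tastes the volume and exposes the partitions inside it. The partition number below is an example.

```shell
# Assumption: the zvol holds an ordinary GPT-partitioned disk with a
# mountable filesystem. With volmode=geom its partitions appear under
# /dev/zvol/... and can be mounted like any disk.
zfs set volmode=geom zroot/vm/mail/disk0

# The new volmode may only take effect after the device node is
# recreated, e.g. via a rename (or a pool re-import/reboot):
zfs rename zroot/vm/mail/disk0 zroot/vm/mail/disk0.tmp
zfs rename zroot/vm/mail/disk0.tmp zroot/vm/mail/disk0

gpart show zvol/zroot/vm/mail/disk0         # inspect the partition table
mount /dev/zvol/zroot/vm/mail/disk0p2 /mnt  # p2 is just an example
```

Remember to unmount and set volmode back to dev before starting the VM again, since mounting a filesystem the guest also has open would corrupt it.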
> once they are mounted you could dd them
Indeed, I
> On Feb 11, 2019, at 6:32 AM, Christian Kratzer wrote:
>
> I am running FreeBSD VMs on Debian 10 buster with libvirt/kvm/qemu.
>
> I have several KVM hosts in the cluster, some with various Intel Xeon CPUs and
> others with AMD EPYC 7301 CPUs.
>
> FreeBSD VMs up to 11.2-RELEASE-p9 boot fine on all
Hi all-
Fairly new to bhyve virtualization, and I have a question about disk usage.
I set up a sparse zvol on FreeNAS 11.2-RELEASE to house the data for a
Windows 7 VM. I installed Win 7 Enterprise SP1 and have just finished
patching/updating to current, and my zvol is reporting 539GB of usage! I
have a
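A first diagnostic step for surprising zvol usage (dataset name below is a placeholder) is to compare the zvol's logical size with what ZFS is actually charging it: a refreservation on a non-sparse zvol, snapshots, and volblocksize padding overhead on RAIDZ vdevs can all inflate "used" far beyond what the guest sees.

```shell
# Placeholder dataset name; substitute the pool/zvol in question
zfs get volsize,used,referenced,refreservation,usedbysnapshots,volblocksize,compression tank/vms/win7-disk0
zfs list -t snapshot -r tank/vms/win7-disk0
```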
mdconfig -a -t vnode -f afaik
once they are mounted you could dd them over to the zvols and then resize
them from within the vm
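The dd-to-zvol step suggested above might look like this sketch; the image path, zvol name, and sizes are made up for illustration:

```shell
# Hypothetical names: disk0.img is the existing file-backed disk,
# zroot/vm/mail/disk0 is a new zvol at least as large as the image.
zfs create -V 20G zroot/vm/mail/disk0
dd if=/vm/mail/disk0.img of=/dev/zvol/zroot/vm/mail/disk0 bs=1m

# Afterwards the zvol can be grown, and the partition/filesystem
# resized from within the guest:
zfs set volsize=40G zroot/vm/mail/disk0
```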
On Mon, 11 Feb 2019 at 17:04, Victor Sudakov wrote:
> Dear Colleagues,
>
> vm-bhyve keeps virtual machines on zfs volumes with volmode=dev. How can
> I access/mount
Dear Colleagues,
vm-bhyve keeps virtual machines on zfs volumes with volmode=dev. How can
I access/mount the filesystems within the volume when the virtual host
is offline?
If I kept virtual disks in raw files, I could access them as devices
with mdconfig. But:
root@newserv:~ # mdconfig -a -f
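For comparison, the file-backed workflow referred to above looks roughly like this (image path and partition number are examples):

```shell
# Attach the raw image as a memory disk; mdconfig prints the unit, e.g. md0
mdconfig -a -t vnode -f /vm/mail/disk0.img
gpart show md0            # find the partition holding the filesystem
mount /dev/md0p2 /mnt     # p2 is just an example
# ... work on the files ...
umount /mnt
mdconfig -d -u 0          # detach when done
```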
Hi,
I am running FreeBSD VMs on Debian 10 buster with libvirt/kvm/qemu.
I have several KVM hosts in the cluster, some with various Intel Xeon CPUs and
others with AMD EPYC 7301 CPUs.
FreeBSD VMs up to 11.2-RELEASE-p9 boot fine on all systems when passing through
the host CPU using the following libvirt