Quoting Roger Pau Monné <roger....@citrix.com>:

On 18/09/15 at 19:41, Michael Reifenberger wrote:
Hi,
today I hit my first real Xen dom0 error so far:

I had a 20G ZFS volume with Windows on it (the guest has the PV
drivers installed).
The disk section of the cfg looks like:
...
disk =  [
        '/dev/zvol/zdata/VM/win81/root,raw,hda,rw',
        '/VM/ISO/W81.PRO.X64.MULTi8.ESD.Apr2015.iso,raw,hdc:cdrom,r'
        ]
boot="d"
...


This worked until I shut down the domU and extended the volume from
20G to 40G:

zfs set volsize=40G zdata/VM/win81/root
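
For completeness, the new size can be verified on the dom0 with:

# zfs get volsize zdata/VM/win81/root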

Now, when trying to start the guest, I get:

(vm)(root) # xl create win81.cfg
Parsing config from win81.cfg
libxl: error: libxl_device.c:950:device_backend_callback: unable to add
device with path /local/domain/0/backend/vbd/6/768
libxl: error: libxl_device.c:950:device_backend_callback: unable to add
device with path /local/domain/0/backend/vbd/6/5632
libxl: error: libxl_create.c:1153:domcreate_launch_dm: unable to add
disk devices
libxl: error: libxl_dm.c:1595:kill_device_model: unable to find device
model pid in /local/domain/6/image/device-model-pid
libxl: error: libxl.c:1608:libxl__destroy_domid:
libxl__destroy_device_model failed for 6
libxl: error: libxl_device.c:950:device_backend_callback: unable to
remove device with path /local/domain/0/backend/vbd/6/768
libxl: error: libxl_device.c:950:device_backend_callback: unable to
remove device with path /local/domain/0/backend/vbd/6/5632
libxl: error: libxl.c:1645:devices_destroy_cb: libxl__devices_destroy
failed for 6
libxl: info: libxl.c:1691:devices_destroy_cb: forked pid 2306 for
destroy of domain 6

Since I saw in syslog that GEOM did some automatic modifications to the disk, I ran:

`gpart commit zvol/zdata/VM/win81/root` on the dom0,
and `gpart resize -i 2 zvol/zdata/VM/win81/root`
but this didn't change the above failure.
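
For reference, what GEOM currently sees for the zvol can be inspected via
the same provider name (just a sketch for double-checking the state):

# gpart show zvol/zdata/VM/win81/root
# gpart status zvol/zdata/VM/win81/root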

The handbook's advice for bhyve when using ZVOLs is to create them with:

# zfs create -V16G -o volmode=dev zroot/linuxdisk0

Note the volmode=dev, which prevents GEOM from sniffing the partition table.


That's at least a workaround!
Sometimes it would be nice to be able to access or pre-fill domU
slices/partitions on the dom0 as well...
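
Something like this might already do it while the domU is shut down (an
untested sketch; it assumes the volmode change takes effect once the zvol
device links are re-created, e.g. after a zpool export/import or a reboot):

# zfs set volmode=geom zdata/VM/win81/root
(now /dev/zvol/zdata/VM/win81/rootp1, rootp2, ... should appear and can be
mounted or pre-filled from the dom0)
# zfs set volmode=dev zdata/VM/win81/root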

Only after a reboot can the guest be started, so somewhere there must be
a mismatch of cached data...

Any clues?

This is just from my own experience, but xen-blkback sometimes doesn't
recover from errors and ends up in some kind of locked state, waiting for
a device to disconnect. Not sure if that's the case here, but I wouldn't
be surprised.
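
If that's what happens here, the leftover entries should be visible from
the dom0 with the standard xenstore tools, e.g. (a sketch, using the vbd
paths from the error output above):

# xenstore-ls -f /local/domain/0/backend/vbd/6
# xenstore-read /local/domain/0/backend/vbd/6/768/state

where "state" follows the XenbusState enumeration (4 = Connected,
5 = Closing, 6 = Closed).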

How does xen-blkback construct this path: /local/domain/0/backend/vbd/6/768 or
/local/domain/0/backend/vbd/6/5632?
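
From the two numbers I'd guess the scheme is
/local/domain/<backend-domid>/backend/vbd/<frontend-domid>/<devnum>, with
devnum being the Linux-style device number of the virtual disk (just a
guess from the paths above):

# echo $(( (3 << 8) | 0 ))    # 768  = hda (Linux IDE major 3)
# echo $(( (22 << 8) | 0 ))   # 5632 = hdc (Linux IDE major 22)

Is that what blkback actually does?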

Is volmode=dev changeable after creation, or only at creation time?


BTW: Many thanks for supporting Xen dom0 under FreeBSD/ZFS.
So far it works surprisingly stably (except for some minor glitches like
the above) :-)

Thanks!

Greetings
---
Michael


