Re: [Xen-devel] Questions about PVHv2/HVMlite
On Thu, May 18, 2017 at 09:48:02AM -0500, Gary R Hook wrote:
> On 05/18/2017 03:16 AM, Roger Pau Monné wrote:
> > So using your example, the config file should look like:
> >
> > extra = "root=/dev/xvda1 console=hvc0"
> > kernel = "/root/64/vmlinuz-4.11.0-pvh+"
> > ramdisk = "/root/64/initrd.img-4.11.0-pvh+"
> > builder="hvm"
> > device_model_version="none"
> > memory = 4096
> > name = "sospv2"
> > vcpus = 8
> > vif = ['']
> > disk = ['phy:/dev/vg0/pvclient2,xvda,w']
>
> Well, huzzah!
>
> amd@sospvclient2:~$ dmesg | grep -i xen
> [0.00] Hypervisor detected: Xen
> [0.00] Xen version 4.9.
> [0.00] Xen Platform PCI: unrecognised magic value
> [0.00] ACPI: RSDP 0x000FFFC0 24 (v02 Xen )
> [0.00] ACPI: XSDT 0xFC007FA0 34 (v01 XenHVM HVML )
> [0.00] ACPI: FACP 0xFC007D70 00010C (v05 XenHVM HVML )
> [0.00] ACPI: DSDT 0xFC001050 006C9B (v05 XenHVM INTL 20140214)
> [0.00] ACPI: APIC 0xFC007E80 6C (v02 XenHVM HVML )
> [0.00] Booting paravirtualized kernel on Xen PVH
> [0.00] xen: PV spinlocks enabled
>               ^^^
>
> > This is a temporary interface, and it's not stable.
>
> "Stable" as in syntax and keywords are subject to change?

"Not stable" as in device_model_version="none" will stop working at some point (because xl/libxl will only understand pvh=1).

> > Long term PVH guests should be created using "pvh=1"; sadly this has
> > not yet been implemented.
>
> Do I understand this to mean that using "pvh=1" in the config file
> hasn't been wired up to do everything needed to create a PVH guest?

That's right, the pvh option doesn't exist ATM.

> Is there more to be done besides turning that parameter into
> builder="hvm" and device_model_version="none"?

Hm, no: pvh=1 should be a guest type in libxl, so there's more to it. It should have its own libxl_domain_type, and its own struct in libxl_domain_build_info. Boris was working on this; he might be able to share some more info.

> Or, better yet, are there any design notes on this?

The interface is not yet clear; it's very likely that we will have a design discussion about this at the upcoming Xen Summit.

Roger.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel
Re: [Xen-devel] Questions about PVHv2/HVMlite
On 05/18/2017 03:16 AM, Roger Pau Monné wrote:
> So using your example, the config file should look like:
>
> extra = "root=/dev/xvda1 console=hvc0"
> kernel = "/root/64/vmlinuz-4.11.0-pvh+"
> ramdisk = "/root/64/initrd.img-4.11.0-pvh+"
> builder="hvm"
> device_model_version="none"
> memory = 4096
> name = "sospv2"
> vcpus = 8
> vif = ['']
> disk = ['phy:/dev/vg0/pvclient2,xvda,w']

Well, huzzah!

amd@sospvclient2:~$ dmesg | grep -i xen
[0.00] Hypervisor detected: Xen
[0.00] Xen version 4.9.
[0.00] Xen Platform PCI: unrecognised magic value
[0.00] ACPI: RSDP 0x000FFFC0 24 (v02 Xen )
[0.00] ACPI: XSDT 0xFC007FA0 34 (v01 XenHVM HVML )
[0.00] ACPI: FACP 0xFC007D70 00010C (v05 XenHVM HVML )
[0.00] ACPI: DSDT 0xFC001050 006C9B (v05 XenHVM INTL 20140214)
[0.00] ACPI: APIC 0xFC007E80 6C (v02 XenHVM HVML )
[0.00] Booting paravirtualized kernel on Xen PVH
[0.00] xen: PV spinlocks enabled
              ^^^

> This is a temporary interface, and it's not stable.

"Stable" as in syntax and keywords are subject to change?

> Long term PVH guests should be created using "pvh=1"; sadly this has
> not yet been implemented.

Do I understand this to mean that using "pvh=1" in the config file hasn't been wired up to do everything needed to create a PVH guest? Is there more to be done besides turning that parameter into builder="hvm" and device_model_version="none"? Or, better yet, are there any design notes on this?

> Hope this helps, Roger.

It seems it did. Thank you very much!
Re: [Xen-devel] Questions about PVHv2/HVMlite
On Wed, May 17, 2017 at 07:07:25PM -0500, Gary R Hook wrote:
> On 5/16/2017 12:13 PM, Boris Ostrovsky wrote:
> > On 05/16/2017 11:52 AM, Gary R Hook wrote:
> > > > A PVH guest's config looks something like
> > > >
> > > > kernel="/root/64/vmlinux"
> > >
> > > May I ask from whence this kernel came?
> >
> > One of 4.11's rcs. Make sure you set CONFIG_XEN_PVH in your .config
> > file.
>
> Please excuse my lack of clarity. I meant, on what filesystem
> does this kernel reside? dom0. Got it.
>
> So here's where I stand:
>
> I have pulled the torvalds repo, found the "Linux 4.11" commit
> to create a branch, verified that the config parameters suggested
> in a different post are all enabled (CONFIG_XEN, CONFIG_XEN_PVH,
> etc.): they're all turned on. Built a kernel. Boot dom0 with it,
> and I have it in my guest, too (by booting the xvda in a PV guest
> and building it there... I'm hoping that's not a problem?). And the
> kernel and initrd are in /root/64 on dom0, per the above.
>
> I have this configuration (using a logical volume for my raw disk):
>
> extra = "root=/dev/xvda1 console=hvc0"
> kernel = "/root/64/vmlinuz-4.11.0-pvh+"
> ramdisk = "/root/64/initrd.img-4.11.0-pvh+"
> pvh = 1

This (pvh=1) is not yet available, so let me try to clarify the current situation. In order to create a PVH guest you need to add the following to your config file:

builder="hvm"
device_model_version="none"

So using your example, the config file should look like:

extra = "root=/dev/xvda1 console=hvc0"
kernel = "/root/64/vmlinuz-4.11.0-pvh+"
ramdisk = "/root/64/initrd.img-4.11.0-pvh+"
builder="hvm"
device_model_version="none"
memory = 4096
name = "sospv2"
vcpus = 8
vif = ['']
disk = ['phy:/dev/vg0/pvclient2,xvda,w']

This is a temporary interface, and it's not stable. Long term PVH guests should be created using "pvh=1"; sadly this has not yet been implemented.

Hope this helps, Roger.

NB: FWIW, I've tried PVH DomUs on AMD hardware in the past and they seemed to work fine.
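[Editorial sketch: the mapping Roger describes, from the not-yet-implemented "pvh = 1" option to the temporary builder="hvm" plus device_model_version="none" spelling, can be illustrated as a line-level config rewrite. The `translate_pvh` helper below is hypothetical and is not part of xl or libxl; it only shows the equivalence between the two spellings.]

```python
# Hypothetical helper: rewrite an xl-style config that uses the (not yet
# implemented) "pvh = 1" option into the temporary PVH interface that
# xl 4.9 actually understands. Illustrative only; not part of xl/libxl.
def translate_pvh(cfg_lines):
    out = []
    for line in cfg_lines:
        key = line.split("=", 1)[0].strip()
        if key == "pvh":
            # The temporary spelling of "this is a PVH guest".
            out.append('builder="hvm"')
            out.append('device_model_version="none"')
        else:
            out.append(line)
    return out
```

Running this over Gary's config would drop the unrecognized pvh line and emit the two temporary keys in its place.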
Re: [Xen-devel] Questions about PVHv2/HVMlite
On 5/16/2017 12:13 PM, Boris Ostrovsky wrote:
> > > A PVH guest's config looks something like
> > >
> > > kernel="/root/64/vmlinux"
> >
> > May I ask from whence this kernel came?
>
> One of 4.11's rcs. Make sure you set CONFIG_XEN_PVH in your .config file.

Please excuse my lack of clarity. I meant, on what filesystem does this kernel reside? dom0. Got it.

So here's where I stand:

I have pulled the torvalds repo, found the "Linux 4.11" commit to create a branch, verified that the config parameters suggested in a different post are all enabled (CONFIG_XEN, CONFIG_XEN_PVH, etc.): they're all turned on. Built a kernel. Boot dom0 with it, and I have it in my guest, too (by booting the xvda in a PV guest and building it there... I'm hoping that's not a problem?). And the kernel and initrd are in /root/64 on dom0, per the above.

I have this configuration (using a logical volume for my raw disk):

extra = "root=/dev/xvda1 console=hvc0"
kernel = "/root/64/vmlinuz-4.11.0-pvh+"
ramdisk = "/root/64/initrd.img-4.11.0-pvh+"
pvh = 1
device_model_version="none"
memory = 4096
name = "sospv2"
vcpus = 8
vif = ['']
disk = ['phy:/dev/vg0/pvclient2,xvda,w']

It boots, but I get:

$ dmesg | egrep -i 'xen|front'
[0.00] Linux version 4.11.0-pvh+ (a...@sosxen.amd.com) (gcc version 4.8.4 (Ubuntu 4.8.4-2ubuntu1~14.04.3) ) #10 SMP Tue May 16 16:36:14 CDT 2017
[0.00] Xen: [mem 0x-0x0009] usable
[0.00] Xen: [mem 0x000a-0x000f] reserved
[0.00] Xen: [mem 0x0010-0x] usable
[0.00] Hypervisor detected: Xen
[0.00] Booting paravirtualized kernel on Xen
[0.00] Xen version: 4.9-rc (preserve-AD)
[0.00] xen: PV spinlocks enabled

No PVH indication. :-( And /var/log/xen/xl-sospv2.log has only a "waiting for domain to die" message in it.

Please forgive my ignorance. What magic am I missing, or what have I not observed in this exchange? Guidance and expertise are greatly appreciated.
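[Editorial sketch: the check Gary is doing by eye, reading the "Booting paravirtualized kernel on ..." line, can be written down as a small helper. The marker strings are taken from the dmesg logs quoted in this thread; kernels may phrase them differently, so treat this as illustrative rather than authoritative.]

```python
# Classify a Xen guest type from its dmesg output, using the
# "Booting paravirtualized kernel on ..." marker lines seen in this
# thread. Check the most specific suffix first.
def xen_guest_type(dmesg: str) -> str:
    if "Booting paravirtualized kernel on Xen PVH" in dmesg:
        return "PVH"
    if "Booting paravirtualized kernel on Xen HVM" in dmesg:
        return "HVM (PV-on-HVM drivers)"
    if "Booting paravirtualized kernel on Xen" in dmesg:
        # A bare "on Xen" with no suffix: a PV guest, or (as in the
        # log above) a guest that did not enter via the PVH path.
        return "PV or plain Xen entry"
    return "no Xen paravirtualized boot line"
```

Against the log above this returns "PV or plain Xen entry", matching Gary's observation that there is no PVH indication.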
Re: [Xen-devel] Questions about PVHv2/HVMlite
On Tue, May 16, 2017 at 6:13 PM, Boris Ostrovsky wrote:
> On 05/16/2017 11:52 AM, Gary R Hook wrote:
>> On 05/15/2017 09:54 PM, Boris Ostrovsky wrote:
>>
>> Possibly stupid question time...
>>
>>> On 05/15/2017 03:51 PM, Gary R Hook wrote:
>>>
>>>> 2) Or, perhaps more importantly, what distinguishes said guest?
>>>
>>> Simplifying things a bit, it's an HVM guest that doesn't have a device
>>> model (i.e. qemu) and which is booted directly (i.e. without hvmloader)
>>
>> So, an unmodified/stock kernel which would rely upon a typical (i.e. its
>> own grub) bootloader. The magic comes from the PVH drivers?
>
> Typically there is no bootloader (or, one might say, the hypervisor is
> the bootloader). You indicate which kernel to boot in the config file
> (just like for PV guests).
>
> I believe there is some work going on with OVMF that will make it boot
> in PVH mode. It will then mount the guest filesystem and load the
> kernel. I think that's what Andrew was referring to.

All this is quite confusing to a typical user, because from Xen's perspective PV, HVM, and PVH aren't as different as they look from a user's perspective.

The most common mode of operation for HVM guests is to run SeaBIOS -> grub -> kernel inside the guest. But you *can* direct-boot an HVM guest. And the most common mode of operation for PV is to direct-boot it (usually by running pygrub in dom0). But you *can* also run grub inside the PV guest -- but only because grub has been ported to run in PV mode*.

In both cases, from the hypervisor's perspective, the only difference is which blob of data is written into the guest when it starts: a kernel, or something which loads a kernel.

The same will be true for PVH: you'll be able to direct-boot it (either by specifying a kernel or by using pygrub). We're also working on porting OVMF to PVH mode, so that nothing in dom0 has to interpret the guest's filesystem. In theory we could port grub or even SeaBIOS to run in PVH mode as well.

At the moment this is controlled by a mish-mash of different configuration parameters, and you have to know far too much about the internals. Once the 4.10 development window opens up we'll be trying to make a more sensible interface.

-George

* But you have to know ahead of time whether the kernel you will eventually boot will be 64-bit or 32-bit, because PV guests can't change mode.
Re: [Xen-devel] Questions about PVHv2/HVMlite
On 05/16/2017 11:52 AM, Gary R Hook wrote:
> On 05/15/2017 09:54 PM, Boris Ostrovsky wrote:
>
> Possibly stupid question time...
>
>> On 05/15/2017 03:51 PM, Gary R Hook wrote:
>>
>>> 2) Or, perhaps more importantly, what distinguishes said guest?
>>
>> Simplifying things a bit, it's an HVM guest that doesn't have a device
>> model (i.e. qemu) and which is booted directly (i.e. without hvmloader)
>
> So, an unmodified/stock kernel which would rely upon a typical (i.e. its
> own grub) bootloader. The magic comes from the PVH drivers?

Typically there is no bootloader (or, one might say, the hypervisor is the bootloader). You indicate which kernel to boot in the config file (just like for PV guests).

I believe there is some work going on with OVMF that will make it boot in PVH mode. It will then mount the guest filesystem and load the kernel. I think that's what Andrew was referring to.

>> domU PVH support has been added in the 4.11 kernel, so you don't have it.
>
> You refer to the drivers?

The drivers are the same as what we use for PV-HVM. It's the kernel itself (mostly the startup code) that was modified.

>> A PVH guest's config looks something like
>>
>> kernel="/root/64/vmlinux"
>
> May I ask from whence this kernel came?

One of 4.11's rcs. Make sure you set CONFIG_XEN_PVH in your .config file.

-boris
Re: [Xen-devel] Questions about PVHv2/HVMlite
On 05/16/2017 04:36 AM, Andrew Cooper wrote:
> On 16/05/17 03:54, Boris Ostrovsky wrote:
>>> 2) Or, perhaps more importantly, what distinguishes said guest?
>>
>> Simplifying things a bit, it's an HVM guest that doesn't have a device
>> model (i.e. qemu) and which is booted directly (i.e. without hvmloader)
>
> The "booted directly" isn't relevant here. While being able to boot a
> PVH kernel directly is useful for development purposes, it is
> problematic for production purposes.
>
> For production systems, mounting of the guest filesystem and parsing of
> the guest kernel should happen in guest context, rather than dom0
> context, to remove the security attack surfaces present in the PV guest
> model.

Okay, stupid question time (again). I interpret the above to mean that the (referenced) disk image would be used to find a boot loader and run it (e.g. grub2). No pygrub, no special boot kernel such as appears to be needed by a PV guest.

So if I install an OS (e.g. Ubuntu 14 or 16) onto a raw device (e.g. an LV on a VG on dom0), then build a 4.11 kernel and install it (on that xvda), that device would be bootable in a PVH guest. Yes/no?
Re: [Xen-devel] Questions about PVHv2/HVMlite
On 05/15/2017 09:54 PM, Boris Ostrovsky wrote:

Possibly stupid question time...

> On 05/15/2017 03:51 PM, Gary R Hook wrote:
>
>> 2) Or, perhaps more importantly, what distinguishes said guest?
>
> Simplifying things a bit, it's an HVM guest that doesn't have a device
> model (i.e. qemu) and which is booted directly (i.e. without hvmloader)

So, an unmodified/stock kernel which would rely upon a typical (i.e. its own grub) bootloader. The magic comes from the PVH drivers?

> domU PVH support has been added in the 4.11 kernel, so you don't have it.

You refer to the drivers?

> A PVH guest's config looks something like
>
> kernel="/root/64/vmlinux"

May I ask from whence this kernel came?

> builder="hvm"
> device_model_version="none"
> extra="root=/dev/xvda1 console=hvc0"
> memory=8192
> vcpus=2
> name = "pvh"
> disk=['/root/virt/f22.img,raw,xvda,rw']
>
> (note device_model_version)

I saw the comment from Roger. The overt statement of PVH intention would be a good thing.

I am, at the moment, building a 4.11 kernel in a guest, hoping to boot it in PVH mode.

Thanks,
Gary
Re: [Xen-devel] Questions about PVHv2/HVMlite
On 16/05/17 03:54, Boris Ostrovsky wrote:
>
>> 2) Or, perhaps more importantly, what distinguishes said guest?
>
> Simplifying things a bit, it's an HVM guest that doesn't have a device
> model (i.e. qemu) and which is booted directly (i.e. without hvmloader)

The "booted directly" isn't relevant here. While being able to boot a PVH kernel directly is useful for development purposes, it is problematic for production purposes.

For production systems, mounting of the guest filesystem and parsing of the guest kernel should happen in guest context, rather than dom0 context, to remove the security attack surfaces present in the PV guest model.

~Andrew
Re: [Xen-devel] Questions about PVHv2/HVMlite
On Mon, May 15, 2017 at 10:54:19PM -0400, Boris Ostrovsky wrote:
[...]
> A PVH guest's config looks something like
>
> kernel="/root/64/vmlinux"
> builder="hvm"
> device_model_version="none"
> extra="root=/dev/xvda1 console=hvc0"
> memory=8192
> vcpus=2
> name = "pvh"
> disk=['/root/virt/f22.img,raw,xvda,rw']
>
> (note device_model_version)

I would like to add that this (device_model_version="none") is a temporary interface, and that the idea is to use 'pvh=1' in order to create PVH guests, but this needs some work on the toolstack side.

Roger.
Re: [Xen-devel] Questions about PVHv2/HVMlite
On 05/15/2017 03:51 PM, Gary R Hook wrote:
> So I've been slogging through online docs and the code, trying to
> understand where things stand with PVH. I think my primary questions are:
>
> 1) How do I identify a PVHv2/HVMlite guest?

[root@dhcp-burlington7-2nd-B-east-10-152-55-52 ~]# dmesg | grep PVH
[0.00] Booting paravirtualized kernel on Xen PVH
[root@dhcp-burlington7-2nd-B-east-10-152-55-52 ~]#

> 2) Or, perhaps more importantly, what distinguishes said guest?

Simplifying things a bit, it's an HVM guest that doesn't have a device model (i.e. qemu) and which is booted directly (i.e. without hvmloader).

> I've got Xen 4.9 unstable built/installed/booted, and am running 4.10
> kernels on my dom0 and guests.

domU PVH support has been added in the 4.11 kernel, so you don't have it.

> I've gotten a guest booted, and a basic Ubuntu 14.04 installed from a
> distro ISO onto a raw disk (a logical volume). All good.
>
> If I use the example file /etc/xen/example.hvm to define a simple guest
> (but no VGA: nographic=1), I see that I have a qemu instance running,
> which I expect, along with some threads:

This is exactly the thing that PVH guests won't have. You are likely booting a regular HVM guest.

A PVH guest's config looks something like

kernel="/root/64/vmlinux"
builder="hvm"
device_model_version="none"
extra="root=/dev/xvda1 console=hvc0"
memory=8192
vcpus=2
name = "pvh"
disk=['/root/virt/f22.img,raw,xvda,rw']

(note device_model_version)

-boris
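[Editorial sketch: Boris's "exactly the thing that PVH guests won't have" can be checked from dom0 by looking for a qemu device model attached to the domain. The domid (17) is taken from the ps listing quoted later in the thread; the check itself is illustrative, not an official xl diagnostic.]

```shell
# Check from dom0 whether a qemu device model is running for a given
# domid; a PVH (or true HVM-without-DM) guest has none. Illustrative.
domid=17
if ps -ef | grep -v grep | grep -q "qemu-system-i386 -xen-domid ${domid}"; then
    echo "domid ${domid}: device model present (HVM with qemu)"
else
    echo "domid ${domid}: no device model found"
fi
```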
[Xen-devel] Questions about PVHv2/HVMlite
So I've been slogging through online docs and the code, trying to understand where things stand with PVH. I think my primary questions are:

1) How do I identify a PVHv2/HVMlite guest?

2) Or, perhaps more importantly, what distinguishes said guest?

I've got Xen 4.9 unstable built/installed/booted, and am running 4.10 kernels on my dom0 and guests.

I've gotten a guest booted, and a basic Ubuntu 14.04 installed from a distro ISO onto a raw disk (a logical volume). All good.

If I use the example file /etc/xen/example.hvm to define a simple guest (but no VGA: nographic=1), I see that I have a qemu instance running, which I expect, along with some threads:

root 8523 1 0 14:31 ? 00:00:03 /usr/local/lib/xen/bin/qemu-system-i386 -xen-domid 17 -chardev socket
root 8779 2 0 14:31 ? 00:00:00 [17.xvda-0]
root 8780 2 0 14:31 ? 00:00:00 [vif17.0-q0-gues]
root 8781 2 0 14:31 ? 00:00:00 [vif17.0-q0-deal]
root 8782 2 0 14:31 ? 00:00:00 [vif17.0-q1-gues]
root 8783 2 0 14:31 ? 00:00:00 [vif17.0-q1-deal]

All seems good. Now, I've read through the doc at https://wiki.xen.org/wiki/Xen_Linux_PV_on_HVM_drivers and when I log into the above guest and run

dmesg | egrep -i 'xen|front'

I get this output:

[0.00] DMI: Xen HVM domU, BIOS 4.9-rc 04/25/2017
[0.00] Hypervisor detected: Xen
[0.00] Xen version 4.9.
[0.00] Xen Platform PCI: I/O protocol version 1
[0.00] Netfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated NICs.
[0.00] Blkfront and the Xen platform PCI driver have been compiled for this kernel: unplug emulated disks.
[0.00] ACPI: RSDP 0x000F6800 24 (v02 Xen )
[0.00] ACPI: XSDT 0xFC00A5B0 54 (v01 XenHVM HVML )
[0.00] ACPI: FACP 0xFC00A2D0 F4 (v04 XenHVM HVML )
[0.00] ACPI: DSDT 0xFC0012A0 008FAC (v02 XenHVM INTL 20140214)
[0.00] ACPI: APIC 0xFC00A3D0 70 (v02 XenHVM HVML )
[0.00] ACPI: HPET 0xFC00A4C0 38 (v01 XenHVM HVML )
[0.00] ACPI: WAET 0xFC00A500 28 (v01 XenHVM HVML )
[0.00] ACPI: SSDT 0xFC00A530 31 (v02 XenHVM INTL 20140214)
[0.00] ACPI: SSDT 0xFC00A570 31 (v02 XenHVM INTL 20140214)
[0.00] Booting paravirtualized kernel on Xen HVM
[0.00] xen: PV spinlocks enabled
[0.00] xen:events: Using FIFO-based ABI
[0.00] xen:events: Xen HVM callback vector for event delivery is enabled
[0.156221] clocksource: xen: mask: 0x max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
[0.156244] Xen: using vcpuop timer interface
[0.156253] installing Xen timer for CPU 0
[0.157188] installing Xen timer for CPU 1
[0.248050] xenbus: xs_reset_watches failed: -38
[0.292506] xen: --> pirq=16 -> irq=9 (gsi=9)
[0.464822] xen:balloon: Initialising balloon driver
[0.468089] xen_balloon: Initialising balloon driver
[0.476131] clocksource: Switched to clocksource xen
[0.491289] xen: --> pirq=17 -> irq=8 (gsi=8)
[0.491405] xen: --> pirq=18 -> irq=12 (gsi=12)
[0.491511] xen: --> pirq=19 -> irq=1 (gsi=1)
[0.491622] xen: --> pirq=20 -> irq=6 (gsi=6)
[1.058087] xen: --> pirq=21 -> irq=24 (gsi=24)
[1.058369] xen:grant_table: Grant tables using version 1 layout
[1.091277] blkfront: xvda: flush diskcache: enabled; persistent grants: enabled; indirect descriptors: enabled;
[1.100218] xen_netfront: Initialising Xen virtual ethernet driver
[1.173298] xenbus_probe_frontend: Device with no driver: device/vkbd/0
[2.692397] systemd[1]: Detected virtualization xen.
[3.453534] input: Xen Virtual Keyboard as /devices/virtual/input/input5
[3.454923] input: Xen Virtual Pointer as /devices/virtual/input/input6

Current linux kernels contain PV drivers, as I understand it. And based on the referenced document, the above messages would seem to imply that this is a PVHv2 guest, at least according to how that document explains identifying a PVH guest. But shouldn't this be an HVM guest, per the example config file?

I get that the wiki is stale, so I gotta ask questions:

How do I identify/characterize a PVHv2/HVMlite guest on Xen 4.9?

What, precisely, -defines- one of these (PVHv2) guests?

Re: my prior question on documentation, how does the current tech preview define one of these hybrid guests? What are the salient aspects of said guests, and what is it that we want to do to create one?

My apologies if this is a simplistic question, but some clarification would be greatly appreciated.

Gary