Re: [Xen-devel] [XenSummit 2017] Notes from the PVH toolstack interface session

2017-07-19 Thread Roger Pau Monné
On Tue, Jul 18, 2017 at 10:37:53AM -0700, Stefano Stabellini wrote:
> On Mon, 17 Jul 2017, Roger Pau Monné wrote:
> > firmware = "ovmf | uefi | bios | seabios | rombios | pvgrub"
> > 
> > This allows loading firmware inside the guest and running it in guest
> > mode. Note that the firmware needs to support booting in PVH mode.
> 
> Probably we need to support absolute paths for firmware too. For
> example, pvgrub2 can only be built with Raisin, not from xen-unstable.
> Similarly OVMF for ARM can only be built with Raisin. In both cases, the
> resulting binary is loaded passing its path to the "kernel" vm config
> option.

Yes, this was a mistake on my side: it was already agreed that a path
to the firmware would be allowed; I just failed to add it to this
document.

Thanks, Roger.

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [XenSummit 2017] Notes from the PVH toolstack interface session

2017-07-18 Thread Stefano Stabellini
On Mon, 17 Jul 2017, Roger Pau Monné wrote:
> Hello,
> 
> I didn't actually take notes, so this is off the top of my head. If
> anyone took notes or remembers something different, please feel free to
> correct it.
>
> This is the output from the PVH toolstack interface session. The
> participants were: Ian Jackson, Wei Liu, George Dunlap, Vincent
> Legout and myself.
> 
> We agreed on the following interface for xl configuration files:
> 
> type = "hvm | pv | pvh"
> 
> This is going to supersede the "builder" option present in xl. Both
> options are mutually exclusive. The "builder" option is going to be
> marked as deprecated once the new "type" option is implemented.
> 
> In order to decide how to boot the guest the following options will be
> available. Note that they are mutually exclusive.
> 
> kernel = ""
> ramdisk = ""
> cmdline = ""
> 
> : relative or full path in the filesystem.
> 
> Boot directly into the kernel/ramdisk provided. In this case the
> kernel must be available somewhere in the toolstack filesystem
> hierarchy.
> 
> firmware = "ovmf | uefi | bios | seabios | rombios | pvgrub"
> 
> This allows loading firmware inside the guest and running it in guest
> mode. Note that the firmware needs to support booting in PVH mode.

Probably we need to support absolute paths for firmware too. For
example, pvgrub2 can only be built with Raisin, not from xen-unstable.
Similarly OVMF for ARM can only be built with Raisin. In both cases, the
resulting binary is loaded passing its path to the "kernel" vm config
option.


Re: [Xen-devel] [XenSummit 2017] Notes from the PVH toolstack interface session

2017-07-17 Thread George Dunlap
On 07/17/2017 04:08 PM, Ian Jackson wrote:
> Andrew Cooper writes ("Re: [Xen-devel] [XenSummit 2017] Notes from the PVH 
> toolstack interface session"):
>> On 17/07/17 10:36, Roger Pau Monné wrote:
>>> kernel = ""
>>> ramdisk = ""
>>> cmdline = ""
>>>
>>> : relative or full path in the filesystem.
>>
>> Please can xl or libxl's (not entirely sure which) path handling be
>> fixed as part of this work.  As noted in
>> http://xenbits.xen.org/docs/xtf/index.html#errata, path handling is
>> inconsistent as to whether it allows paths relative to the .cfg file. 
>> All paths should support being relative to the cfg file, as that is the
>> most convenient for the end user to use.
> 
> Domain config files are conventionally in /etc.  It does not make
> sense to look for images there.

I never put them there.  But anyone who does put them there will be
using absolute paths.  Having non-absolute paths be relative to the cfg
file makes it easier for people to put config files somewhere *outside*
of /etc.

 -George



Re: [Xen-devel] [XenSummit 2017] Notes from the PVH toolstack interface session

2017-07-17 Thread Ian Jackson
Andrew Cooper writes ("Re: [Xen-devel] [XenSummit 2017] Notes from the PVH 
toolstack interface session"):
> On 17/07/17 10:36, Roger Pau Monné wrote:
> > kernel = ""
> > ramdisk = ""
> > cmdline = ""
> >
> > : relative or full path in the filesystem.
> 
> Please can xl or libxl's (not entirely sure which) path handling be
> fixed as part of this work.  As noted in
> http://xenbits.xen.org/docs/xtf/index.html#errata, path handling is
> inconsistent as to whether it allows paths relative to the .cfg file. 
> All paths should support being relative to the cfg file, as that is the
> most convenient for the end user to use.

Domain config files are conventionally in /etc.  It does not make
sense to look for images there.  OTOH there should be a way to specify
a path which is relative to xl's cwd at startup.  I wouldn't mind some
kind of magic token system either, e.g. kernel = "%cfgdir%/image" or
something, if we can agree on a syntax.
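
A minimal sketch of how such a token could be expanded, purely
hypothetical (xl implements no %cfgdir% token today; all names here are
made up), written in Python for brevity:

```python
import os

CFGDIR_TOKEN = "%cfgdir%"  # hypothetical magic token, per the suggestion above

def expand_path(value, cfg_path):
    """Expand a leading %cfgdir% token relative to the cfg file's directory."""
    if value.startswith(CFGDIR_TOKEN):
        cfgdir = os.path.dirname(os.path.abspath(cfg_path))
        return os.path.normpath(cfgdir + value[len(CFGDIR_TOKEN):])
    return value

# e.g. with a config at /srv/guests/demo.cfg:
#   expand_path("%cfgdir%/image", "/srv/guests/demo.cfg") -> "/srv/guests/image"
#   expand_path("/boot/vmlinuz", "/srv/guests/demo.cfg")  -> "/boot/vmlinuz"
```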

> > Boot directly into the kernel/ramdisk provided. In this case the
> > kernel must be available somewhere in the toolstack filesystem
> > hierarchy.
> >
> > firmware = "ovmf | uefi | bios | seabios | rombios | pvgrub"
> 
> What is the purpose of having uefi and bios in there?  ovmf is the uefi
> implementation, and {rom,sea}bios are the bios implementations.

See Roger's comments below.

> How does someone specify ovmf + seabios as a CSM?

EXPN CSM

> > There's no plan to support any bios or pvgrub ATM for PVH, those
> > options are simply listed for completeness. Also, generic options like
> > uefi or bios would be aliases to a concrete implementation by the
> > toolstack, ie: uefi -> ovmf, bios -> seabios most likely.
> 
> Oh - here is the reason.  -1 to this idea.  We don't want to explicitly
> let people choose options which are liable to change under their feet if
> they were to boot the same .cfg file on a newer version of Xen, as their
> VM will inevitably break.

Most VMs will not break simply if booted with a different BIOS.  Your
logic leads inevitably to the libvirt config files, which specify
things in far too much detail and cause lots of trouble.  They can be
unportable across different versions of libvirt or qemu, let alone
different hypervisors.

> Instead of kernel= and ramdisk=, it would be better to generalise to
> something like modules=[...], perhaps with kernel being an alias for
> module[0] etc.  hvmloader already takes multiple binaries using the PVH
> module system, and PV guests are perfectly capable of multiple modules
> as well.  One specific example where an extra module would be very
> helpful is for providing the cloudinit install config file.

I don't think HVM guests can do direct boot of anything other than
kernel+ramdisk, can they?

Ian.



Re: [Xen-devel] [XenSummit 2017] Notes from the PVH toolstack interface session

2017-07-17 Thread George Dunlap
On 07/17/2017 11:37 AM, Roger Pau Monné wrote:
> On Mon, Jul 17, 2017 at 11:10:50AM +0100, Andrew Cooper wrote:
>> On 17/07/17 10:36, Roger Pau Monné wrote:
>>> Hello,
>>>
>>> I didn't actually take notes, so this is off the top of my head. If
>>> anyone took notes or remembers something different, please feel free to
>>> correct it.
>>>
>>> This is the output from the PVH toolstack interface session. The
>>> participants were: Ian Jackson, Wei Liu, George Dunlap, Vincent
>>> Legout and myself.
>>>
>>> We agreed on the following interface for xl configuration files:
>>>
>>> type = "hvm | pv | pvh"
>>>
>>> This is going to supersede the "builder" option present in xl. Both
>>> options are mutually exclusive. The "builder" option is going to be
>>> marked as deprecated once the new "type" option is implemented.
>>>
>>> In order to decide how to boot the guest the following options will be
>>> available. Note that they are mutually exclusive.
>>
>> I presume you mean the kernel/ramdisk/cmdline are mutually exclusive
>> with firmware?
> 
> Yes, sorry, that's confusing: you use either kernel, firmware, or
> bootloader.
> 
>>> kernel = ""
>>> ramdisk = ""
>>> cmdline = ""
>>>
>>> : relative or full path in the filesystem.
>>
>> Please can xl or libxl's (not entirely sure which) path handling be
>> fixed as part of this work.  As noted in
>> http://xenbits.xen.org/docs/xtf/index.html#errata, path handling is
>> inconsistent as to whether it allows paths relative to the .cfg file. 
>> All paths should support being relative to the cfg file, as that is the
>> most convenient for the end user to use.
>>
>>> Boot directly into the kernel/ramdisk provided. In this case the
>>> kernel must be available somewhere in the toolstack filesystem
>>> hierarchy.
>>>
>>> firmware = "ovmf | uefi | bios | seabios | rombios | pvgrub"
>>
>> What is the purpose of having uefi and bios in there?  ovmf is the uefi
>> implementation, and {rom,sea}bios are the bios implementations.
>>
>> How does someone specify ovmf + seabios as a CSM?
> 
> Hm, I have no idea. How is this usually done: is ovmf built with
> seabios support, or does ovmf fetch it from the uefi partition?
> 
>>> This allows loading firmware inside the guest and running it in guest
>>> mode. Note that the firmware needs to support booting in PVH mode.
>>>
>>> There's no plan to support any bios or pvgrub ATM for PVH, those
>>> options are simply listed for completeness. Also, generic options like
>>> uefi or bios would be aliases to a concrete implementation by the
>>> toolstack, ie: uefi -> ovmf, bios -> seabios most likely.
>>
>> Oh - here is the reason.  -1 to this idea.  We don't want to explicitly
>> let people choose options which are liable to change under their feet if
>> they were to boot the same .cfg file on a newer version of Xen, as their
>> VM will inevitably break.
> 
> Noted. I think not allowing bios or uefi is fine; I would rather
> document in the man page that our recommended bios implementation is
> seabios and the uefi one is ovmf.

We need both "I don't care much just choose the best one" options, and
"I want this specific version and not have it change" options.

You accurately describe the problem with having *only* "This is the
general idea but the implementation can change under my feet" options.
But there's also a problem with having only "I want this specific
version" options: Namely, that a lot of people really don't care much
and want the most reasonably up-to-date version, and don't want to know
the details below.

Having both allows us to be reasonably user-friendly to both "just make
it work" people and people who want to "get their hands greasy" knowing
all the technical inner workings.


>> Where does hvmloader fit into this mix?
> 
> Right, I wasn't planning on anyone using hvmloader, but there's no
> reason to prevent it. I guess it would fit into the "firmware" option, but
> then you should be able to use something like: firmware = "hvmloader +
> ovmf".
> 
> What would be the purpose of using hvmloader inside of a PVH guest?
> Hardware initialization?

AFAICT hvmloader is an internal implementation detail; the user should,
in general, not need to know anything about it (except in cases like
XTF, where you're deliberately abusing the system).

And as Roger said, the `firmware=` option should allow a user to specify
their own binary.

>> Instead of kernel= and ramdisk=, it would be better to generalise to
>> something like modules=[...], perhaps with kernel being an alias for
>> module[0] etc.  hvmloader already takes multiple binaries using the PVH
>> module system, and PV guests are perfectly capable of multiple modules
>> as well.  One specific example where an extra module would be very
>> helpful is for providing the cloudinit install config file.
> 
> I might prefer to keep the current kernel = "..." and convert ramdisk
> into a list named modules. Do you think (this also applies to xl/libxl
> maintainers) we 

Re: [Xen-devel] [XenSummit 2017] Notes from the PVH toolstack interface session

2017-07-17 Thread George Dunlap
On 07/17/2017 10:36 AM, Roger Pau Monné wrote:
> Hello,
> 
> I didn't actually take notes, so this is off the top of my head. If
> anyone took notes or remembers something different, please feel free to
> correct it.
>
> This is the output from the PVH toolstack interface session. The
> participants were: Ian Jackson, Wei Liu, George Dunlap, Vincent
> Legout and myself.
> 
> We agreed on the following interface for xl configuration files:
> 
> type = "hvm | pv | pvh"
> 
> This is going to supersede the "builder" option present in xl. Both
> options are mutually exclusive. The "builder" option is going to be
> marked as deprecated once the new "type" option is implemented.
> 
> In order to decide how to boot the guest the following options will be
> available. Note that they are mutually exclusive.
> 
> kernel = ""
> ramdisk = ""
> cmdline = ""
> 
> : relative or full path in the filesystem.
> 
> Boot directly into the kernel/ramdisk provided. In this case the
> kernel must be available somewhere in the toolstack filesystem
> hierarchy.
> 
> firmware = "ovmf | uefi | bios | seabios | rombios | pvgrub"
> 
> This allows loading firmware inside the guest and running it in guest
> mode. Note that the firmware needs to support booting in PVH mode.
> 
> There's no plan to support any bios or pvgrub ATM for PVH, those
> options are simply listed for completeness.

FYI there was a *lot* of interest in PVGRUB for PVH at the hackathon.
If we can prod the person who did the first PV grub port (pvgrub2, as
it's sometimes called) to do the same for PVH, I think it would be an
important feature.

Everything else looks good to me.

 -George



Re: [Xen-devel] [XenSummit 2017] Notes from the PVH toolstack interface session

2017-07-17 Thread Roger Pau Monné
On Mon, Jul 17, 2017 at 11:10:50AM +0100, Andrew Cooper wrote:
> On 17/07/17 10:36, Roger Pau Monné wrote:
> > Hello,
> >
> > I didn't actually take notes, so this is off the top of my head. If
> > anyone took notes or remembers something different, please feel free to
> > correct it.
> >
> > This is the output from the PVH toolstack interface session. The
> > participants were: Ian Jackson, Wei Liu, George Dunlap, Vincent
> > Legout and myself.
> >
> > We agreed on the following interface for xl configuration files:
> >
> > type = "hvm | pv | pvh"
> >
> > This is going to supersede the "builder" option present in xl. Both
> > options are mutually exclusive. The "builder" option is going to be
> > marked as deprecated once the new "type" option is implemented.
> >
> > In order to decide how to boot the guest the following options will be
> > available. Note that they are mutually exclusive.
> 
> I presume you mean the kernel/ramdisk/cmdline are mutually exclusive
> with firmware?

Yes, sorry, that's confusing: you use either kernel, firmware, or
bootloader.

> > kernel = ""
> > ramdisk = ""
> > cmdline = ""
> >
> > : relative or full path in the filesystem.
> 
> Please can xl or libxl's (not entirely sure which) path handling be
> fixed as part of this work.  As noted in
> http://xenbits.xen.org/docs/xtf/index.html#errata, path handling is
> inconsistent as to whether it allows paths relative to the .cfg file. 
> All paths should support being relative to the cfg file, as that is the
> most convenient for the end user to use.
> 
> > Boot directly into the kernel/ramdisk provided. In this case the
> > kernel must be available somewhere in the toolstack filesystem
> > hierarchy.
> >
> > firmware = "ovmf | uefi | bios | seabios | rombios | pvgrub"
> 
> What is the purpose of having uefi and bios in there?  ovmf is the uefi
> implementation, and {rom,sea}bios are the bios implementations.
> 
> How does someone specify ovmf + seabios as a CSM?

Hm, I have no idea. How is this usually done: is ovmf built with
seabios support, or does ovmf fetch it from the uefi partition?

> > This allows loading firmware inside the guest and running it in guest
> > mode. Note that the firmware needs to support booting in PVH mode.
> >
> > There's no plan to support any bios or pvgrub ATM for PVH, those
> > options are simply listed for completeness. Also, generic options like
> > uefi or bios would be aliases to a concrete implementation by the
> > toolstack, ie: uefi -> ovmf, bios -> seabios most likely.
> 
> Oh - here is the reason.  -1 to this idea.  We don't want to explicitly
> let people choose options which are liable to change under their feet if
> they were to boot the same .cfg file on a newer version of Xen, as their
> VM will inevitably break.

Noted. I think not allowing bios or uefi is fine; I would rather
document in the man page that our recommended bios implementation is
seabios and the uefi one is ovmf.

> > bootloader = "pygrub"
> >
> > Run a specific binary in the toolstack domain that's going to provide
> > a kernel, ramdisk and cmdline as output. This is mostly pygrub, which
> > accesses the guest disk image and extracts the kernel/ramdisk/cmdline
> > from it.
> >
> > We also spoke about the libxl interface. This is going to require
> > changes to libxl_domain_build_info, which obviously need to be
> > performed in an API compatible way.
> >
> > A new libxl_domain_type needs to be added (PVH) and the new "type"
> > config option is going to map to the "type" field in the
> > libxl_domain_create_info struct.
> >
> > While looking at the contents of the libxl_domain_build_info we
> > realized that there was a bunch of duplication between the
> > domain-specific fields and the top level ones. Ie: there's a top level
> > "kernel" field and one inside of the pv nested structure. It would be
> > interesting to prevent adding a new pvh structure, and instead move
> > all the fields to the top level structure (libxl_domain_build_info).
> >
> > I think that's all of it, as said in the beginning, if anything is
> > missing feel free to add it.
> >
> > Regarding the implementation work itself, I'm currently quite busy
> > with other PVH stuff, so I would really appreciate if someone could
> > take care of this.
> >
> > I think this should be merged in 4.10, so that the toolstack finally
> > has a stable interface to create PVH guests and we can start
> > announcing this. Without this work, even if the PVH DomU ABI is
> > stable, there's no way anyone is going to use it.
> 
> Some other questions.
> 
> Where does hvmloader fit into this mix?

Right, I wasn't planning on anyone using hvmloader, but there's no
reason to prevent it. I guess it would fit into the "firmware" option, but
then you should be able to use something like: firmware = "hvmloader +
ovmf".

What would be the purpose of using hvmloader inside of a PVH guest?
Hardware initialization?

> How does firmware_override= work in this new world?

Re: [Xen-devel] [XenSummit 2017] Notes from the PVH toolstack interface session

2017-07-17 Thread Andrew Cooper
On 17/07/17 10:36, Roger Pau Monné wrote:
> Hello,
>
> I didn't actually take notes, so this is off the top of my head. If
> anyone took notes or remembers something different, please feel free to
> correct it.
>
> This is the output from the PVH toolstack interface session. The
> participants were: Ian Jackson, Wei Liu, George Dunlap, Vincent
> Legout and myself.
>
> We agreed on the following interface for xl configuration files:
>
> type = "hvm | pv | pvh"
>
> This is going to supersede the "builder" option present in xl. Both
> options are mutually exclusive. The "builder" option is going to be
> marked as deprecated once the new "type" option is implemented.
>
> In order to decide how to boot the guest the following options will be
> available. Note that they are mutually exclusive.

I presume you mean the kernel/ramdisk/cmdline are mutually exclusive
with firmware?

> kernel = ""
> ramdisk = ""
> cmdline = ""
>
> : relative or full path in the filesystem.

Please can xl or libxl's (not entirely sure which) path handling be
fixed as part of this work.  As noted in
http://xenbits.xen.org/docs/xtf/index.html#errata, path handling is
inconsistent as to whether it allows paths relative to the .cfg file. 
All paths should support being relative to the cfg file, as that is the
most convenient for the end user to use.

> Boot directly into the kernel/ramdisk provided. In this case the
> kernel must be available somewhere in the toolstack filesystem
> hierarchy.
>
> firmware = "ovmf | uefi | bios | seabios | rombios | pvgrub"

What is the purpose of having uefi and bios in there?  ovmf is the uefi
implementation, and {rom,sea}bios are the bios implementations.

How does someone specify ovmf + seabios as a CSM?

> This allows loading firmware inside the guest and running it in guest
> mode. Note that the firmware needs to support booting in PVH mode.
>
> There's no plan to support any bios or pvgrub ATM for PVH, those
> options are simply listed for completeness. Also, generic options like
> uefi or bios would be aliases to a concrete implementation by the
> toolstack, ie: uefi -> ovmf, bios -> seabios most likely.

Oh - here is the reason.  -1 to this idea.  We don't want to explicitly
let people choose options which are liable to change under their feet if
they were to boot the same .cfg file on a newer version of Xen, as their
VM will inevitably break.

> bootloader = "pygrub"
>
> Run a specific binary in the toolstack domain that's going to provide
> a kernel, ramdisk and cmdline as output. This is mostly pygrub, which
> accesses the guest disk image and extracts the kernel/ramdisk/cmdline
> from it.
>
> We also spoke about the libxl interface. This is going to require
> changes to libxl_domain_build_info, which obviously need to be
> performed in an API compatible way.
>
> A new libxl_domain_type needs to be added (PVH) and the new "type"
> config option is going to map to the "type" field in the
> libxl_domain_create_info struct.
>
> While looking at the contents of the libxl_domain_build_info we
> realized that there was a bunch of duplication between the
> domain-specific fields and the top level ones. Ie: there's a top level
> "kernel" field and one inside of the pv nested structure. It would be
> interesting to prevent adding a new pvh structure, and instead move
> all the fields to the top level structure (libxl_domain_build_info).
>
> I think that's all of it, as said in the beginning, if anything is
> missing feel free to add it.
>
> Regarding the implementation work itself, I'm currently quite busy
> with other PVH stuff, so I would really appreciate if someone could
> take care of this.
>
> I think this should be merged in 4.10, so that the toolstack finally
> has a stable interface to create PVH guests and we can start
> announcing this. Without this work, even if the PVH DomU ABI is
> stable, there's no way anyone is going to use it.

Some other questions.

Where does hvmloader fit into this mix?

How does firmware_override= work in this new world?  How about firmware=
taking a <path> to allow for easy testing of custom binaries?

Instead of kernel= and ramdisk=, it would be better to generalise to
something like modules=[...], perhaps with kernel being an alias for
module[0] etc.  hvmloader already takes multiple binaries using the PVH
module system, and PV guests are perfectly capable of multiple modules
as well.  One specific example where an extra module would be very
helpful is for providing the cloudinit install config file.
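
As a purely illustrative sketch of that generalisation (no such option
exists today; the syntax, paths and alias behaviour are hypothetical):

```
# hypothetical syntax, not an implemented xl option
modules = [ "/boot/guest/vmlinuz", "/boot/guest/initrd.img",
            "/etc/xen/guest-cloudinit.cfg" ]
# kernel = "..." could then remain as an alias for modules[0]
```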

~Andrew



[Xen-devel] [XenSummit 2017] Notes from the PVH toolstack interface session

2017-07-17 Thread Roger Pau Monné
Hello,

I didn't actually take notes, so this is off the top of my head. If
anyone took notes or remembers something different, please feel free to
correct it.

This is the output from the PVH toolstack interface session. The
participants were: Ian Jackson, Wei Liu, George Dunlap, Vincent
Legout and myself.

We agreed on the following interface for xl configuration files:

type = "hvm | pv | pvh"

This is going to supersede the "builder" option present in xl. Both
options are mutually exclusive. The "builder" option is going to be
marked as deprecated once the new "type" option is implemented.

In order to decide how to boot the guest the following options will be
available. Note that they are mutually exclusive.

kernel = ""
ramdisk = ""
cmdline = ""

: relative or full path in the filesystem.

Boot directly into the kernel/ramdisk provided. In this case the
kernel must be available somewhere in the toolstack filesystem
hierarchy.
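
For illustration, a direct-boot PVH guest configuration could look like
this (paths and command line are examples only):

```
type = "pvh"
kernel = "/boot/guest/vmlinuz"
ramdisk = "/boot/guest/initrd.img"
cmdline = "root=/dev/xvda1 console=hvc0"
```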

firmware = "ovmf | uefi | bios | seabios | rombios | pvgrub"

This allows loading firmware inside the guest and running it in guest
mode. Note that the firmware needs to support booting in PVH mode.

There's no plan to support any bios or pvgrub ATM for PVH, those
options are simply listed for completeness. Also, generic options like
uefi or bios would be aliases to a concrete implementation by the
toolstack, ie: uefi -> ovmf, bios -> seabios most likely.

bootloader = "pygrub"

Run a specific binary in the toolstack domain that's going to provide
a kernel, ramdisk and cmdline as output. This is mostly pygrub, which
accesses the guest disk image and extracts the kernel/ramdisk/cmdline
from it.
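
The other two (mutually exclusive) boot methods would each replace the
kernel/ramdisk/cmdline block, e.g. (illustrative values):

```
type = "pvh"
firmware = "ovmf"        # firmware boot; the firmware must support PVH

# or, alternatively, run a bootloader in the toolstack domain:
# bootloader = "pygrub"
```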

We also spoke about the libxl interface. This is going to require
changes to libxl_domain_build_info, which obviously need to be
performed in an API compatible way.

A new libxl_domain_type needs to be added (PVH) and the new "type"
config option is going to map to the "type" field in the
libxl_domain_create_info struct.

While looking at the contents of the libxl_domain_build_info we
realized that there was a bunch of duplication between the
domain-specific fields and the top level ones. Ie: there's a top level
"kernel" field and one inside of the pv nested structure. It would be
interesting to prevent adding a new pvh structure, and instead move
all the fields to the top level structure (libxl_domain_build_info).
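
As a purely hypothetical sketch of that direction (the real field set
and naming in libxl_types.idl would be decided during implementation):

```
# Hypothetical libxl_types.idl direction: keep the boot fields at the
# top level of libxl_domain_build_info rather than adding a nested
# "pvh" structure.  Illustrative only.
libxl_domain_build_info = Struct("domain_build_info", [
    ("kernel",  string),
    ("ramdisk", string),
    ("cmdline", string),
    # ... all remaining existing fields unchanged ...
    ])
```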

I think that's all of it, as said in the beginning, if anything is
missing feel free to add it.

Regarding the implementation work itself, I'm currently quite busy
with other PVH stuff, so I would really appreciate if someone could
take care of this.

I think this should be merged in 4.10, so that the toolstack finally
has a stable interface to create PVH guests and we can start
announcing this. Without this work, even if the PVH DomU ABI is
stable, there's no way anyone is going to use it.

Thanks, Roger.
