[libvirt] [PATCH v2] libxl: Implement basic video device selection

2014-04-04 Thread Stefan Bader
Ok, I think indentation should be ok like that and I added error
handling for VIR_STRDUP. I think just returning -1 like the other
places should be ok as the only caller looks to handle the disposal
of the config structure.

-Stefan

---

From dfe579003e91137ecd824d2a08bcdc8f18725857 Mon Sep 17 00:00:00 2001
From: Stefan Bader stefan.ba...@canonical.com
Date: Thu, 27 Mar 2014 16:01:18 +0100
Subject: [PATCH] libxl: Implement basic video device selection

This started as an investigation into an issue where libvirt (using the
libxl driver) and the Xen host, like an old couple, could not agree on
who is responsible for selecting the VNC port to use.

Things usually (and a bit surprisingly) did work because, just like that
old couple, they had the same idea on what to do by default. However it
was possible that this ended up in a big argument.

The problem is that display information exists in two different places:
in the vfbs list and in the build info. And for launching the device model,
only the latter is used. But that never gets initialized from libvirt. So
Xen allows the device model to select a default port while libvirt thinks
it has told Xen that this is done by libvirt (through the vfbs config).

While fixing that, I made a stab at actually evaluating the configuration
of the video device. So that it is now possible to at least decide between
a Cirrus or standard VGA emulation and to modify the VRAM within certain
limits using libvirt.

Signed-off-by: Stefan Bader stefan.ba...@canonical.com
---
 src/libxl/libxl_conf.c |   93 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 93 insertions(+)

diff --git a/src/libxl/libxl_conf.c b/src/libxl/libxl_conf.c
index b8de72a..042f4da 100644
--- a/src/libxl/libxl_conf.c
+++ b/src/libxl/libxl_conf.c
@@ -1298,6 +1298,89 @@ libxlMakeCapabilities(libxl_ctx *ctx)
 return NULL;
 }
 
+static int
+libxlSetBuildGraphics(virDomainDefPtr def, libxl_domain_config *d_config)
+{
+    libxl_domain_build_info *b_info = &d_config->b_info;
+
+    /*
+     * Take the first defined video device (graphics card) to display
+     * on the first graphics device (display).
+     * Right now only type and vram info is used and anything beside
+     * type xen and vga is mapped to cirrus.
+     */
+    if (def->nvideos) {
+        unsigned int min_vram = 8 * 1024;
+
+        switch (def->videos[0]->type) {
+        case VIR_DOMAIN_VIDEO_TYPE_VGA:
+        case VIR_DOMAIN_VIDEO_TYPE_XEN:
+            b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_STD;
+            /*
+             * Libxl enforces a minimal VRAM size of 8M when using
+             * LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL or
+             * 16M for LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN.
+             * Avoid build failures and go with the minimum if less
+             * is specified.
+             */
+            switch (b_info->device_model_version) {
+            case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
+                min_vram = 8 * 1024;
+                break;
+            case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
+            default:
+                min_vram = 16 * 1024;
+            }
+            break;
+        default:
+            /*
+             * Ignore any other device type and use Cirrus. Again fix
+             * up the minimal VRAM to what libxl expects.
+             */
+            b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_CIRRUS;
+            switch (b_info->device_model_version) {
+            case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
+                min_vram = 4 * 1024; /* Actually the max, too */
+                break;
+            case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
+            default:
+                min_vram = 8 * 1024;
+            }
+        }
+        b_info->video_memkb = (def->videos[0]->vram >= min_vram) ?
+                               def->videos[0]->vram :
+                               LIBXL_MEMKB_DEFAULT;
+    } else {
+        libxl_defbool_set(&b_info->u.hvm.nographic, 1);
+    }
+
+    /*
+     * When making the list of displays, only VNC and SDL types were
+     * taken into account. So it seems sensible to connect the default
+     * video device to the first in the vfb list.
+     *
+     * FIXME: Copy the structures and fixing the strings feels a bit dirty.
+     */
+    if (d_config->num_vfbs) {
+        libxl_device_vfb *vfb0 = &d_config->vfbs[0];
+
+        b_info->u.hvm.vnc = vfb0->vnc;
+        if (VIR_STRDUP(b_info->u.hvm.vnc.listen, vfb0->vnc.listen) < 0)
+            return -1;
+        if (VIR_STRDUP(b_info->u.hvm.vnc.passwd, vfb0->vnc.passwd) < 0)
+            return -1;
+        b_info->u.hvm.sdl = vfb0->sdl;
+        if (VIR_STRDUP(b_info->u.hvm.sdl.display, vfb0->sdl.display) < 0)
+            return -1;
+        if 
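[Editor's note: the VRAM policy in the hunk above can be sketched in isolation. The snippet below is a minimal stand-alone mock; the enum and function names are invented for illustration and are not the real libxl types from libxl.h. It shows the clamp-to-minimum idea the comment describes, whereas the patch itself falls back to LIBXL_MEMKB_DEFAULT when the requested size is below the minimum.]

```c
/* Hypothetical stand-ins for the libxl enums used in the patch;
 * the real definitions live in libxl.h. */
enum dm_version { QEMU_XEN_TRADITIONAL, QEMU_XEN };
enum vga_kind { VGA_CIRRUS, VGA_STD };

/* Mirror of the patch's policy: the minimum VRAM libxl will accept
 * depends on both the VGA kind and the device model version.
 * Sizes are in KiB, matching video_memkb. */
static unsigned int min_vram_kib(enum vga_kind kind, enum dm_version dmv)
{
    if (kind == VGA_STD)
        return (dmv == QEMU_XEN_TRADITIONAL) ? 8 * 1024 : 16 * 1024;
    /* cirrus */
    return (dmv == QEMU_XEN_TRADITIONAL) ? 4 * 1024 : 8 * 1024;
}

/* Raise an undersized request to the minimum instead of failing the
 * domain build (the patch instead passes LIBXL_MEMKB_DEFAULT and lets
 * libxl pick). */
static unsigned int clamp_vram_kib(unsigned int requested,
                                   enum vga_kind kind, enum dm_version dmv)
{
    unsigned int min = min_vram_kib(kind, dmv);
    return requested >= min ? requested : min;
}
```
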

Re: [libvirt] [PATCH v2] libxl: Implement basic video device selection

2014-04-04 Thread Ian Campbell
On Fri, 2014-04-04 at 11:36 +0200, Stefan Bader wrote:
 +/*
 + * Take the first defined video device (graphics card) to display
 + * on the first graphics device (display).
 + * Right now only type and vram info is used and anything beside
 + * type xen and vga is mapped to cirrus.
 + */
 +if (def->nvideos) {
 +unsigned int min_vram = 8 * 1024;
 +
 +switch (def->videos[0]->type) {
 +case VIR_DOMAIN_VIDEO_TYPE_VGA:
 +case VIR_DOMAIN_VIDEO_TYPE_XEN:
 +b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_STD;
 +/*
 + * Libxl enforces a minimal VRAM size of 8M when using
 + * LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL or
 + * 16M for LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN.
 + * Avoid build failures and go with the minimum if less
 + * is specified.

Build failures? Do you mean domain build rather than libvirt build?

I'm not sure about this fixing up the GIGO from the user side, but
that's a libvirt policy decision I suppose? If it were me I would just
let libxl provide the error and expect people to fix their domain config
rather than silently giving them something other than what they asked
for. What if increasing the VRAM causes a cascading failure due e.g. to
lack of memory? That's going to be tricky to debug I think!


--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list


Re: [libvirt] [PATCH v2] libxl: Implement basic video device selection

2014-04-04 Thread Ian Campbell
On Fri, 2014-04-04 at 12:31 +0200, Stefan Bader wrote:
 On 04.04.2014 11:48, Ian Campbell wrote:
  On Fri, 2014-04-04 at 11:36 +0200, Stefan Bader wrote:
  [...]
  
  Build failures? Do you mean domain build rather than libvirt build?
  
  I'm not sure about this fixing up the GIGO from the user side, but
  that's a libvirt policy decision I suppose? If it were me I would just
  let libxl provide the error and expect people to fix their domain config
  rather than silently giving them something other than what they asked
  for. What if increasing the VRAM causes a cascading failure due e.g. to
  lack of memory? That's going to be tricky to debug I think!
 
 In the end it's 'start a domain with such a config'. Which seems to be what I
 would end up with in my testing with an admittedly older version of
 virt-manager connecting to a libvirtd running an initial version of this
 patch without that part.
 The error seen on the front-end was something along the lines of 'failed to
 get enough memory to start the guest' (the libxl log on the other side had
 the better error message). And the gui always reduces the memory below the
 minimum for both the options (VGA and Xen).
 That is the reason I went for meh, go for the minimum anyways.

Does the libvirt protocol require the client to provide all the sizes
etc with no provision for asking the server side to pick a sane default?

 And btw, it is already confusing enough: as with cirrus, I get a 9M by default
 which is passed on to qemu on the command line and then ignored by it, and one
 gets 32M anyway...

Lovely!

Ian.



Re: [libvirt] [PATCH v2] libxl: Implement basic video device selection

2014-04-04 Thread Stefan Bader
On 04.04.2014 12:34, Ian Campbell wrote:
 On Fri, 2014-04-04 at 12:31 +0200, Stefan Bader wrote:
 On 04.04.2014 11:48, Ian Campbell wrote:
 On Fri, 2014-04-04 at 11:36 +0200, Stefan Bader wrote:
 [...]
 
 Does the libvirt protocol require the client to provide all the sizes
 etc with no provision for asking the server side to pick a sane default?

I think libvirt itself would be ok with a config that does not enforce a certain
amount of VRAM (libvirt devs correct me if I am talking non-sense here).
But together with the virt-manager front-end that tries to be helpful halfway,
I get an incorrect setting which I cannot change from that version of the gui.
Again, newer versions might be better.
Just practically I expect some crazy people (like me *cough*) to try this from
an older Desktop release...

 




Re: [libvirt] [PATCH v2] libxl: Implement basic video device selection

2014-04-04 Thread Daniel P. Berrange
On Fri, Apr 04, 2014 at 11:34:17AM +0100, Ian Campbell wrote:
 On Fri, 2014-04-04 at 12:31 +0200, Stefan Bader wrote:
  On 04.04.2014 11:48, Ian Campbell wrote:
   On Fri, 2014-04-04 at 11:36 +0200, Stefan Bader wrote:
   [...]
 
 Does the libvirt protocol require the client to provide all the sizes
 etc with no provision for asking the server side to pick a sane default?

The XML does not have to include the VGA ram size. If it is omitted then
we fill in a default value after initial parsing is done. That default
can be hypervisor specific.

 
  And btw, it is already confusing enough as with cirrus, I get a 9M by default
  which is passed on to qemu on the command line and then ignored by it and one
  gets 32M in any way...

Yeah, we were silently ignoring the ram size for cirrus since QEMU doesn't let
it be configured. Only QXL and I think stdvga are configurable.


Regards,
Daniel
-- 
|: http://berrange.com  -o-http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org  -o- http://virt-manager.org :|
|: http://autobuild.org   -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org   -o-   http://live.gnome.org/gtk-vnc :|

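[Editor's note: Daniel's point about post-parse defaults can be illustrated with a small stand-alone sketch. The function name, the sentinel, and the concrete default sizes are hypothetical, not libvirt's actual API; the 9M figure merely echoes the cirrus default mentioned in this thread. The idea: a client may omit the vram size entirely, and the driver fills in a hypervisor-specific default only when nothing was specified.]

```c
#define VRAM_UNSET 0  /* hypothetical sentinel: vram omitted from the XML */

/* Hypothetical post-parse hook: respect an explicit vram value, otherwise
 * pick a default that can differ per hypervisor (sizes in KiB). */
static unsigned int fill_default_vram_kib(unsigned int parsed_vram,
                                          int is_xen_hvm)
{
    if (parsed_vram != VRAM_UNSET)
        return parsed_vram;                    /* explicit value wins */
    return is_xen_hvm ? 16 * 1024 : 9 * 1024;  /* per-hypervisor default */
}
```

This avoids the failure mode discussed above, where a front-end such as virt-manager guesses a size itself and sends a value the hypervisor rejects.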


Re: [libvirt] [PATCH v2] libxl: Implement basic video device selection

2014-04-04 Thread Ian Campbell
On Fri, 2014-04-04 at 14:51 +0200, Daniel P. Berrange wrote:
 On Fri, Apr 04, 2014 at 11:34:17AM +0100, Ian Campbell wrote:
  On Fri, 2014-04-04 at 12:31 +0200, Stefan Bader wrote:
   On 04.04.2014 11:48, Ian Campbell wrote:
On Fri, 2014-04-04 at 11:36 +0200, Stefan Bader wrote:
[...]
  
  Does the libvirt protocol require the client to provide all the sizes
  etc with no provision for asking the server side to pick a sane default?
 
 The XML does not have to include the VGA ram size. If it is omitted the
 we fill in a default value after initial parsing is done.

I guess the issue is that whatever client Stefan is using is including
the VGA ram size, with a value which it turns out is not allowed.

 That defualt can be hypervisor specific

Great!

Ian.




Re: [libvirt] [PATCH v2] libxl: Implement basic video device selection

2014-04-04 Thread Daniel P. Berrange
On Fri, Apr 04, 2014 at 01:56:09PM +0100, Ian Campbell wrote:
 On Fri, 2014-04-04 at 14:51 +0200, Daniel P. Berrange wrote:
  On Fri, Apr 04, 2014 at 11:34:17AM +0100, Ian Campbell wrote:
   On Fri, 2014-04-04 at 12:31 +0200, Stefan Bader wrote:
On 04.04.2014 11:48, Ian Campbell wrote:
 On Fri, 2014-04-04 at 11:36 +0200, Stefan Bader wrote:
  [...]
   
   Does the libvirt protocol require the client to provide all the sizes
   etc with no provision for asking the server side to pick a sane default?
  
  The XML does not have to include the VGA ram size. If it is omitted the
  we fill in a default value after initial parsing is done.
 
 I guess the issue is that whatever client Stefan is using is including
 the VGA ram size, with a value which it turns out is not allowed.

Yeah, that's probably a historical accident in virt-manager trying to guess
the default values itself rather than delegating.


Regards,
Daniel



Re: [libvirt] [PATCH v2] libxl: Implement basic video device selection

2014-04-04 Thread Stefan Bader
On 04.04.2014 14:56, Ian Campbell wrote:
 On Fri, 2014-04-04 at 14:51 +0200, Daniel P. Berrange wrote:
 On Fri, Apr 04, 2014 at 11:34:17AM +0100, Ian Campbell wrote:
 On Fri, 2014-04-04 at 12:31 +0200, Stefan Bader wrote:
 On 04.04.2014 11:48, Ian Campbell wrote:
 On Fri, 2014-04-04 at 11:36 +0200, Stefan Bader wrote:
 [...]

 Does the libvirt protocol require the client to provide all the sizes
 etc with no provision for asking the server side to pick a sane default?

 The XML does not have to include the VGA ram size. If it is omitted the
 we fill in a default value after initial parsing is done.
 
 I guess the issue is that whatever client Stefan is using is including
 the VGA ram size, with a value which it turns out is not allowed.

Right, and the current fixup code is in there because I am too lazy to be
bothered to use virsh to fix up the vram size all the time. And in some way I
expected other users to be of the same mind. Or even not to find out what went
wrong at all.
I am open to letting this drop on the upstream side and only carrying the delta
locally. Whichever sounds more suitable to the upstream maintainers.

-Stefan
 





Re: [libvirt] [PATCH v2] libxl: Implement basic video device selection

2014-04-04 Thread Stefan Bader
On 04.04.2014 15:17, Daniel P. Berrange wrote:
 On Fri, Apr 04, 2014 at 11:36:39AM +0200, Stefan Bader wrote:
 [...]
 +break;
 +default:
 +/*
 + * Ignore any other device type and use Cirrus. Again fix
 + * up the minimal VRAM to what libxl expects.
 + */
 
 We shouldn't do that 'default'. For any device type that Xen can't support
 we should report  VIR_ERR_CONFIG_UNSUPPORTED.

Ok, I could remove that. At some point it might be possible to enhance that
again, but that requires more careful thinking. At least with
device_model_version QEMU_XEN (not QEMU_XEN_TRADITIONAL, and not to be confused
with device_model, which just sets the path ;)) there should, at least in
theory, be the same support as for kvm... Anyway, just thinking out loud.

Would you also rather want the VRAM fixup to be removed as well? I will
probably wait a bit more for other feedback and then do a v3...

-Stefan
 
 Regards,
 Daniel
 




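[Editor's note: Daniel's review suggestion, reporting an error instead of silently mapping unknown video types to cirrus, can be sketched roughly as follows. The enum values and the -1 return are mock stand-ins for the real virDomainVideoType constants and a virReportError(VIR_ERR_CONFIG_UNSUPPORTED, ...) call.]

```c
/* Hypothetical stand-ins for libvirt's video type constants. */
enum video_type { VIDEO_VGA, VIDEO_XEN, VIDEO_CIRRUS, VIDEO_QXL };

/* Sketch of the suggested behaviour: accept only the types libxl can
 * express, and fail (mocked as -1, standing in for raising
 * VIR_ERR_CONFIG_UNSUPPORTED) for everything else instead of falling
 * through to cirrus. */
static int map_video_type(enum video_type t, int *vga_is_std)
{
    switch (t) {
    case VIDEO_VGA:
    case VIDEO_XEN:
        *vga_is_std = 1;   /* LIBXL_VGA_INTERFACE_TYPE_STD */
        return 0;
    case VIDEO_CIRRUS:
        *vga_is_std = 0;   /* LIBXL_VGA_INTERFACE_TYPE_CIRRUS */
        return 0;
    default:
        return -1;         /* would report VIR_ERR_CONFIG_UNSUPPORTED */
    }
}
```

The caller then propagates the error to the user, who fixes the domain config, rather than getting a device other than the one asked for.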

Re: [libvirt] [PATCH v2] libxl: Implement basic video device selection

2014-04-04 Thread Daniel P. Berrange
On Fri, Apr 04, 2014 at 03:36:38PM +0200, Stefan Bader wrote:
 On 04.04.2014 15:17, Daniel P. Berrange wrote:
  On Fri, Apr 04, 2014 at 11:36:39AM +0200, Stefan Bader wrote:
  [...]
  
  We shouldn't do that 'default'. For any device type that Xen can't support
  we should report VIR_ERR_CONFIG_UNSUPPORTED.
 
 Ok, I could remove that. At some point it might be possible to enhance that
 again. But that requires more careful thinking. At least with
 device_model_version QEMU_XEN (not QEMU_XEN_TRADITIONAL and not to confuse
 with device_model which just sets the path ;)) at least in theory there
 should be the same support as for kvm... Anyway, just loud thinking.
 
 Would you also rather want the VRAM fixup to be removed as well? I probably
 wait a bit more for other feedback and then would do a v3...

I'd suggest, do a first patch which removes the 'default:' case and sets an
error and honours the VRAM size. Then do a second patch which adds the VRAM
workaround, so we can discuss it separately from the main enablement.


Regards,
Daniel
