Re: Re: [PATCH 2/6] migration/multifd: Add zero pages and zero bytes counter to migration status interface.

2024-02-07 Thread Jiri Denemark
On Wed, Feb 07, 2024 at 12:37:15 +0800, Peter Xu wrote:
> On Wed, Feb 07, 2024 at 12:13:10PM +0800, Peter Xu wrote:
> > On Tue, Feb 06, 2024 at 11:19:04PM +, Hao Xiang wrote:
> > > This change extends the MigrationStatus interface to track zero pages
> > > and zero bytes counter.
> > > 
> > > Signed-off-by: Hao Xiang 
> > 
> > Reviewed-by: Peter Xu 
> 
> I'll need to scratch this, sorry..
> 
> The issue is I forgot we have "duplicate" which is exactly "zero
> page"s.. See:
> 
> info->ram->duplicate = stat64_get(&mig_stats.zero_pages);
> 
> If you think the name too confusing and want a replacement, maybe it's fine
> and maybe we can do that.  Then we can keep this zero page counter
> introduced, reporting the same value as duplicates, then with a follow up
> patch to deprecate "duplicate" parameter.  See an example on how to
> deprecate in 7b24d326348e1672.
> 
> One thing I'm not sure is whether Libvirt will be fine on losing
> "duplicates" after 2+ QEMU major releases.  Copy Jiri for this.  My
> understanding is that Libvirt should be keeping an eye on deprecation list
> and react, but I'd like to double check..

This should not be a big deal as we can internally map either one
(depending on what QEMU supports) to the same libvirt field. AFAIK
there is a consensus on Cc-ing libvirt-devel on patches that deprecate
QEMU interfaces so that we can update our code in time before the
deprecated interface is dropped.
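
A minimal sketch of that internal mapping on the libvirt side, assuming the
parsed "ram" member of a query-migrate reply; the "zero-pages" field name is
only illustrative and depends on what the QEMU patch ends up calling it:

    def constant_pages(ram_stats):
        # Prefer the new counter if this QEMU reports it, fall back to the
        # old "duplicate" field otherwise; both count the same thing.
        if "zero-pages" in ram_stats:
            return ram_stats["zero-pages"]
        return ram_stats.get("duplicate", 0)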

BTW, libvirt maps "duplicate" to:

/**
 * VIR_DOMAIN_JOB_MEMORY_CONSTANT:
 *
 * virDomainGetJobStats field: number of pages filled with a constant
 * byte (all bytes in a single page are identical) transferred since the
 * beginning of the migration job, as VIR_TYPED_PARAM_ULLONG.
 *
 * The most common example of such pages are zero pages, i.e., pages filled
 * with zero bytes.
 *
 * Since: 1.0.3
 */
# define VIR_DOMAIN_JOB_MEMORY_CONSTANT  "memory_constant"

Jirka




Re: [RFC 0/6] Migration deprecated parts

2023-06-13 Thread Jiri Denemark
On Mon, Jun 12, 2023 at 21:33:38 +0200, Juan Quintela wrote:
> Hi this series describe the migration parts that have to be deprecated.
> 
> - It is an rfc because I doubt that I did the deprecation process right. 
> Hello Markus O:-)
> 
> - skipped field: It is older than me, I have never known what it stands
>   for.  As far as I know it has always been zero.

Libvirt doesn't use this.

> - inc/blk migrate command options.  They are only used by block
>   migration (that I deprecate on the following patch).  And they are really 
> bad.
>   grep must_remove_block_options.
>
> - block migration.  block jobs, whatever they are called this week are
>   way more flexible.  Current code works, but we broke it here and
>   there, and really nobody has stood up to maintain it.  It is quite
>   contained and can be left there.  Is anyone really using it?

We prefer nbd for storage migration if it is available.

> - old compression method.  It doesn't work.  See the last try from Lukas to
>   make a test that works reliably.  I failed with the same task years
>   ago.  It is really slow, and if compression is good for you, multifd
>   + zlib is going to perform/compress way more.

We do support these methods, but only enable them if a user explicitly
requests so. In other words, the user will just get an error once these
methods are removed, which is fine.

> - -incoming 
> 
>   if you need to set parameters (multifd comes to mind, and preempt has
>   the same problem), you really need to use defer.  So what should we do
> here?
> 
>   This part is not urgent, because management apps have a working
>   alternative (they are already using "defer") and the code simplification
>   if we remove it is not so big.  So we can leave it until 9.0 or
>   whatever we think fit.

Right, libvirt already uses -incoming defer if supported.
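
For reference, a rough sketch of the defer flow on the destination, written as
the sequence of QMP commands a management app would send after starting QEMU
with "-incoming defer"; the capability, parameter and URI values are only
examples:

    incoming_setup = [
        # Capabilities and parameters must be set before the incoming
        # migration is actually started, which is what defer enables.
        ("migrate-set-capabilities",
         {"capabilities": [{"capability": "multifd", "state": True}]}),
        ("migrate-set-parameters", {"multifd-channels": 8}),
        ("migrate-incoming", {"uri": "tcp:0.0.0.0:49152"}),
    ]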

In other words, no objection to removing anything on this list and no
additional work is needed on libvirt side.

Jirka




Re: [PATCH v1 1/1] hw/pci: Disable PCI_ERR_UNCOR_MASK register for machine type < 8.0

2023-05-22 Thread Jiri Denemark
On Thu, May 11, 2023 at 13:43:47 +0200, Juan Quintela wrote:
> "Michael S. Tsirkin"  wrote:
> 
> [Added libvirt people to the party, see the end of the message ]

Sorry, I'm not that much into parties :-)

> That would fix the:
> 
> qemu-8.0 -M pc-7.2 -> qemu-8.0.1 -M pc-7.2
> 
> Is it worth it?  Dunno.  That is my question.
> 
> And knowing what qemu it has migrated from would not help.  We
> would need to add a new tweak and means:
> 
> This is a pc-7.2 machine that has been instantiated in a qemu-8.0 and
> has the PCIe AER bug.  But wait, we have _that_.
> 
> And it is called
> 
> +{ TYPE_PCI_DEVICE, "x-pcie-err-unc-mask", "off" },
> 
> from the patch.
> 
> We can teach libvirt about this glitch, and if it is migrating a pc-7.2
> machine running on qemu-8.0, and they want to migrate to a new qemu
> (call it qemu-8.1), it needs to be started:
> 
> qemu-8.1 -M pc-7.2,x-pcie-err-unc-mask="true"
> 
> Until the user reboots it and then that property can be reset to default
> value.

Hmm and what would happen if eventually this machine gets migrated back
to qemu-8.0? Or even when the machine is stopped, started again, and
then migrated to qemu-8.0?

Jirka




Re: [PATCH] spapr/kvm: Set default cpu model for all machine classes

2019-11-13 Thread Jiri Denemark
Hi David.

On Wed, Oct 30, 2019 at 17:32:43 +0100, David Gibson wrote:
> We have to set the default model of all machine classes, not just for the
> active one. Otherwise, "query-machines" will indicate the wrong CPU model
> ("qemu-s390x-cpu" instead of "host-s390x-cpu") as "default-cpu-type".
> 
> s390x already fixed this in de60a92e "s390x/kvm: Set default cpu model for
> all machine classes".  This patch applies a similar fix for the pseries-*
> machine types on ppc64.
> 
> Doing a
> {"execute":"query-machines"}
> under KVM now results in
> {
>   "hotpluggable-cpus": true,
>   "name": "pseries-4.2",
>   "numa-mem-supported": true,
>   "default-cpu-type": "host-powerpc64-cpu",
>   "is-default": true,
>   "cpu-max": 1024,
>   "deprecated": false,
>   "alias": "pseries"
> },
> {
>   "hotpluggable-cpus": true,
>   "name": "pseries-4.1",
>   "numa-mem-supported": true,
>   "default-cpu-type": "host-powerpc64-cpu",
>   "cpu-max": 1024,
>   "deprecated": false
> },
> ...
> 
> Libvirt probes all machines via "-machine none,accel=kvm:tcg" and will
> currently see the wrong CPU model under KVM.

Will this patch make it into 4.2.0?

Jirka




Re: [PATCH] spapr/kvm: Set default cpu model for all machine classes

2019-11-01 Thread Jiri Denemark
On Wed, Oct 30, 2019 at 17:32:43 +0100, David Gibson wrote:
> We have to set the default model of all machine classes, not just for the
> active one. Otherwise, "query-machines" will indicate the wrong CPU model
> ("qemu-s390x-cpu" instead of "host-s390x-cpu") as "default-cpu-type".
> 
> s390x already fixed this in de60a92e "s390x/kvm: Set default cpu model for
> all machine classes".  This patch applies a similar fix for the pseries-*
> machine types on ppc64.
> 
> Doing a
> {"execute":"query-machines"}
> under KVM now results in
> {
>   "hotpluggable-cpus": true,
>   "name": "pseries-4.2",
>   "numa-mem-supported": true,
>   "default-cpu-type": "host-powerpc64-cpu",
>   "is-default": true,
>   "cpu-max": 1024,
>   "deprecated": false,
>   "alias": "pseries"
> },
> {
>   "hotpluggable-cpus": true,
>   "name": "pseries-4.1",
>   "numa-mem-supported": true,
>   "default-cpu-type": "host-powerpc64-cpu",
>   "cpu-max": 1024,
>   "deprecated": false
> },
> ...
> 
> Libvirt probes all machines via "-machine none,accel=kvm:tcg" and will
> currently see the wrong CPU model under KVM.
> 
> Reported-by: Jiří Denemark 
> Cc: David Hildenbrand 
> Cc: Igor Mammedov 

Works as expected now, thanks.

Tested-by: Jiri Denemark 




Re: Default CPU models on s390x and ppc64

2019-10-18 Thread Jiri Denemark
On Thu, Oct 17, 2019 at 17:13:10 +0100, Peter Maydell wrote:
> On Thu, 17 Oct 2019 at 17:09, David Hildenbrand  wrote:
> > The default model under KVM is "host", under TCG it's "qemu". We should
> > not use "qemu" under KVM, although it might work on some setups ...
> 
> Possibly a tangent, but on Arm the approach we used to deal
> with "'-cpu host' only works for kvm" was to define a "-cpu max"
> meaning "best thing you can give me", which is an alias for
> -cpu host with KVM and an alias for a CPU with all the extra
> features we have emulation support under TCG. Then users can
> use '-cpu max' and have something that generally will DTRT
> regardless of accelerator.

This thread is not really about a CPU model that users could universally
use for both TCG and KVM. It's about checking what CPU model is used by
QEMU if no CPU model is specified on the command line.

BTW, I was told aarch64 is a bit different and QEMU fails to start with
KVM unless -cpu host is explicitly specified. Is that correct?

Jirka



Re: Default CPU models on s390x and ppc64

2019-10-18 Thread Jiri Denemark
On Thu, Oct 17, 2019 at 17:35:30 +0200, David Hildenbrand wrote:
> On 17.10.19 17:31, David Hildenbrand wrote:
> > On 17.10.19 17:18, David Hildenbrand wrote:
> >> On 17.10.19 17:16, Jiri Denemark wrote:
> >>> Hi David and David,
> >>>
> >>> I'm working on libvirt's support [1] for query-machines'
> >>> default-cpu-type, which is supposed to return the type of the default
> >>> CPU model that QEMU uses for each machine type. Rather than hard coding
> >>> the default in libvirt (which we currently do on x86), we ask QEMU for
> >>> the default CPU model and use it unless a user asks for a specific CPU
> >>> model explicitly.
> >>>
...
> >>> This situation on s390x is not so complicated, but not really better.
> >>> The default CPU is said to be "qemu" for all machine types, which works
> >>> fine for TCG domains, but it doesn't work on KVM because QEMU complains
> >>> that some features requested in the CPU model are not available. In
> >>> other words the "qemu" CPU model is not runnable on KVM. This is a bit
> >>> similar to what happens on x86_64, but QEMU just ignores missing
> >>> features and starts happily there.
> >>
> >> The default model under KVM is "host", under TCG it's "qemu". We should
> >> not use "qemu" under KVM, although it might work on some setups ...
> >>
> >> Where/how is this default model detected?

All this is about probing QEMU capabilities so that we can tell users
what to expect and let them change the default. E.g., qemu64 default CPU
model on x86_64 is never the right one to be used with KVM. By making
the default CPU model visible in the domain XML even before the domain
gets started, we help users detect such suboptimal configurations.

> > 
> > ... target/s390x/kvm.c:kvm_arch_init()
> > 
> > MachineClass *mc = MACHINE_GET_CLASS(ms);
> > 
> > mc->default_cpu_type = S390_CPU_TYPE_NAME("host");
> > 
> > 
> > I think the call order should make sure that "host" is set after "qemu"
> > is set. I'll go ahead and verify that.
> > 
> 
> configure_accelerator(current_machine, argv[0]) -> kvm_arch_init()
> 
> is called after
> 
> current_machine = MACHINE(object_new(object_class_get_name(
> OBJECT_CLASS(machine_class))));
> 
> and therefore after the .class_init function of the machine was called.
> 
> I don't see how the default cpu model could not be "host" if qemu was 
> started with "--enable-kvm"

I guess the problem could be that QEMU capabilities probing in libvirt
is done with -machine none. We first probe with KVM enabled and then for
a few specific commands reprobe with TCG only. We could do it with
query-machines too to get different default CPU models depending on the
accelerator, but it wouldn't actually help.

The full command line used for capabilities probing is

qemu-system-s390x \
-S \
-no-user-config \
-nodefaults \
-nographic \
-machine none,accel=kvm:tcg \
-qmp unix:/tmp/qmp-ivG4UN/qmp.monitor,server,nowait \
-pidfile /tmp/qmp-ivG4UN/qmp.pid \
-daemonize

One of the commands we send is

{"execute":"query-kvm","id":"libvirt-5"}

and the reply is

{"return": {"enabled": true, "present": true}, "id": "libvirt-5"}

Which means KVM is usable and enabled, but still query-machines returns

{
  "return": [
{
  "hotpluggable-cpus": true,
  "name": "s390-ccw-virtio-4.0",
  "numa-mem-supported": false,
  "default-cpu-type": "qemu-s390x-cpu",
  "cpu-max": 248,
  "deprecated": false
},
{
  "hotpluggable-cpus": true,
  "name": "s390-ccw-virtio-3.1",
  "numa-mem-supported": false,
  "default-cpu-type": "qemu-s390x-cpu",
  "cpu-max": 248,
  "deprecated": false
},
...
  ]
}

So it seems the code is run during the machine initialization and thus
it doesn't affect what query-machines returns with -machine none.

Jirka



Default CPU models on s390x and ppc64

2019-10-17 Thread Jiri Denemark
Hi David and David,

I'm working on libvirt's support [1] for query-machines'
default-cpu-type, which is supposed to return the type of the default
CPU model that QEMU uses for each machine type. Rather than hard coding
the default in libvirt (which we currently do on x86), we ask QEMU for
the default CPU model and use it unless a user asks for a specific CPU
model explicitly.

We use query-cpu-definitions for translating the default CPU type to the
actual CPU model we need to pass to -cpu by looking up the CPU model
with matching typename.
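
A minimal illustration of that lookup, assuming already parsed "return" lists
of query-machines and query-cpu-definitions:

    def default_cpu_model(machine_name, machines, cpu_definitions):
        # Find the requested machine type and its default CPU type ...
        machine = next((m for m in machines if m["name"] == machine_name),
                       None)
        if machine is None:
            return None
        cpu_type = machine.get("default-cpu-type")
        # ... and translate the type back to a CPU model name by matching
        # the "typename" member of query-cpu-definitions entries.
        for cpu in cpu_definitions:
            if cpu.get("typename") == cpu_type:
                return cpu["name"]
        return None  # no CPU model corresponds to the default type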

However, all this seems to work only with TCG on both s390x and ppc64.
The issues I met with KVM on each architecture are described below.

On ppc64 the default CPU type reported by query-machines is
power*-powerpc64-cpu, while IIUC QEMU would effectively use -cpu host by
default. In fact -cpu power8 is mostly just a fancy alias to -cpu host
on a Power8 machine. But QEMU even rewrites the typename of the current host
CPU model to host-powerpc64-cpu. Which means on a Power8 host the power8
CPU model would have typename=host-powerpc64-cpu while the default CPU
type would still use power8*-powerpc64-cpu. Thus we may just fail to
find any CPU model corresponding to the default CPU model.

And to make it even worse, the default CPU model changes with machine
type. E.g., pseries-3.1 uses power8_v2.0-powerpc64-cpu while pseries-4.2
uses power9_v2.0-powerpc64-cpu. However, starting QEMU with pseries-4.2
machine type and the reported default -cpu power9 fails on any
non-Power9 host. That said QEMU just lies about the default CPU model.

So for all combinations of (pseries-3.1, pseries-4.2) machine types and
(Power8, Power9) hosts, one will get the following mixed results:

- pseries-3.1 on Power8: no default CPU model will be detected, QEMU
  starts fine with its own default
- pseries-4.2 on Power9: same as above
- pseries-3.1 on Power9: -cpu power8 (not sure if this works, though)
- pseries-4.2 on Power8: -cpu power9, QEMU doesn't start


This situation on s390x is not so complicated, but not really better.
The default CPU is said to be "qemu" for all machine types, which works
fine for TCG domains, but it doesn't work on KVM because QEMU complains
that some features requested in the CPU model are not available. In
other words the "qemu" CPU model is not runnable on KVM. This a bit
similar to what happens on x86_64, but QEMU just ignores missing
features and starts happily there.


So what do you suggest we should do? Currently I just restricted all the
default CPU model stuff to TCG on both s390x and ppc64. So either we can
be happy with the current state or we need to come up with a solution
that would allow us to enable this even for KVM.

Thanks,
Jirka

[1] https://www.redhat.com/archives/libvir-list/2019-October/msg00872.html



Re: [libvirt] [RFC] cpu_map: Remove pconfig from Icelake-Server CPU model

2019-10-03 Thread Jiri Denemark
On Tue, Oct 01, 2019 at 11:20:42 +0200, Paolo Bonzini wrote:
> On 30/09/19 18:16, Jiri Denemark wrote:
> > On Mon, Sep 30, 2019 at 17:16:27 +0200, Paolo Bonzini wrote:
> >> On 30/09/19 16:31, Hu, Robert wrote:
> >>>> This might be a problem if there are plans to eventually make KVM support
> >>>> pconfig, though.  Paolo, Robert, are there plans to support pconfig in 
> >>>> KVM in the
> >>>> future?
> >>> [Robert Hoo] 
> >>> Thanks Eduardo for efforts in resolving this issue, introduced from my 
> >>> Icelake CPU
> >>> model patch.
> >>> I've no idea about PCONFIG's detail and plan. Let me sync with Huang, Kai 
> >>> and answer
> >>> you soon.
> >>
> >> It's really, really unlikely.  It's possible that some future processor
> >> overloads PCONFIG in such a way that it will become virtualizable, but
> >> not IceLake.
> > 
> > I guess, the likelihood of this happening would be similar to
> > reintroducing other features, such as osxsave or ospke, right?
> 
> No, having osxsave and ospke was a mistake in the first place (they are
> not CPU features at all; they are more like a special way to let
> unprivileged programs read some bits of CR4).  For pconfig, it's just
> very unlikely.
> 
> >> Would it make sense for libvirt to treat absent CPU flags as "default
> >> off" during migration, so that it can leave out the flag in the command
> >> line if it's off?  If it's on, libvirt would pass pconfig=on as usual.
> >> This is a variant of [2], but more generally applicable:
> >>
> >>> [2] However starting a domain with Icelake-Server so that it can be
> >>> migrated or saved/restored on QEMU in 3.1.1 and 4.0.0 would be
> >>> impossible. This can be solved by a different hack, which would drop
> >>> pconfig=off from QEMU command line.
> > 
> > The domain XML does not contain a complete list of all CPU features.
> > Features which are implicitly included in a CPU model are not listed in
> > the XML. Count in the differences in libvirt's vs QEMU's definitions of
> > a particular CPU model and you can see feat=off cannot be mechanically
> > dropped from the command line as the CPU model itself could turn it on
> > by default and thus feat=off is not redundant.
> 
> I think I wasn't very clear, I meant "unsupported by QEMU" when I said
> "absent".  Libvirt on the destination knows that from
> query-cpu-model-expansion, so it can leave off pconfig if it is not
> supported by the destination QEMU.

Oh yeah, we should do this (and I plan to do so), but it won't really
help us in this case. Although it could potentially save us some work in
case we end up in a similar situation.

Jirka



Re: [RFC] cpu_map: Remove pconfig from Icelake-Server CPU model

2019-09-30 Thread Jiri Denemark
On Mon, Sep 30, 2019 at 17:16:27 +0200, Paolo Bonzini wrote:
> On 30/09/19 16:31, Hu, Robert wrote:
> >> This might be a problem if there are plans to eventually make KVM support
> >> pconfig, though.  Paolo, Robert, are there plans to support pconfig in KVM 
> >> in the
> >> future?
> > [Robert Hoo] 
> > Thanks Eduardo for efforts in resolving this issue, introduced from my 
> > Icelake CPU
> > model patch.
> > I've no idea about PCONFIG's detail and plan. Let me sync with Huang, Kai 
> > and answer
> > you soon.
> 
> It's really, really unlikely.  It's possible that some future processor
> overloads PCONFIG in such a way that it will become virtualizable, but
> not IceLake.

I guess, the likelihood of this happening would be similar to
reintroducing other features, such as osxsave or ospke, right?

> Would it make sense for libvirt to treat absent CPU flags as "default
> off" during migration, so that it can leave out the flag in the command
> line if it's off?  If it's on, libvirt would pass pconfig=on as usual.
> This is a variant of [2], but more generally applicable:
> 
> > [2] However starting a domain with Icelake-Server so that it can be
> > migrated or saved/restored on QEMU in 3.1.1 and 4.0.0 would be
> > impossible. This can be solved by a different hack, which would drop
> > pconfig=off from QEMU command line.

The domain XML does not contain a complete list of all CPU features.
Features which are implicitly included in a CPU model are not listed in
the XML. Count in the differences in libvirt's vs QEMU's definitions of
a particular CPU model and you can see feat=off cannot be mechanically
dropped from the command line as the CPU model itself could turn it on
by default and thus feat=off is not redundant.

Jirka



Re: [Qemu-devel] [PATCH 1/1] x86: add CPU flags supported inside libvirt

2019-08-14 Thread Jiri Denemark
On Thu, Jul 18, 2019 at 16:45:37 +0300, Denis V. Lunev wrote:
> There are the following flags available in libvirt inside cpu_map.xml
> (the XML feature definitions for cmt, mbm_* and cvt16).
> 
> We have faced the problem that QEMU does not start once these flags are
> present in the domain.xml.

Libvirt should not add this to the XML by itself (when using host-model
CPU, for example) so the user must have asked for the feature
explicitly. Thus I don't see any problem with QEMU refusing to start
with such configuration. And the workaround is easy, just don't do it.

I'm not sure about cvt16, but IIRC cmt and mbm_* features were added as
a way to detect whether the host CPU supports perf monitoring counters.
I think it was not the brightest idea, but there's no reason why QEMU
should support enabling these features. Unless it actually makes sense
for QEMU.

If there are any issues with libvirt passing these features to QEMU
without explicit request from the user, we should address them in
libvirt.

Jirka



Re: [Qemu-devel] [PATCH 0/2] i386: "unavailable-features" QOM property

2019-05-22 Thread Jiri Denemark
On Fri, May 10, 2019 at 17:23:13 -0300, Eduardo Habkost wrote:
> On Fri, May 10, 2019 at 01:33:03PM +0200, Jiri Denemark wrote:
> > On Thu, May 09, 2019 at 13:08:25 -0300, Eduardo Habkost wrote:
> > > On Thu, May 09, 2019 at 05:26:16PM +0200, Jiri Denemark wrote:
> > > > On Thu, May 09, 2019 at 10:56:17 -0300, Eduardo Habkost wrote:
> > > > > On Thu, May 09, 2019 at 03:35:37PM +0200, Jiri Denemark wrote:
> > > > > > Would this unavailable-features property contain only canonical 
> > > > > > names of
> > > > > > the features or all possible aliases of all features? For example,
> > > > > > "tsc-adjust" can also be spelled as "tsc_adjust". When calling
> > > > > > query-cpu-model-expansion, we have a way to request all variants by
> > > > > > running full expansion on the result of a previous static 
> > > > > > expansion. Can
> > > > > > we get something like this for unavailable-features too?
> > > > > 
> > > > > I'd like to avoid that, and refer only to the canonical names.
> > > > > 
> > > > > Could you explain the use case you have in mind, so we can look
> > > > > for alternatives?
> > > > 
> > > > Libvirt only knows about a single spelling for each CPU feature and
> > > > quite often it is not the canonical variant. Thus libvirt would only
> > > > recognize features for which the name known by libvirt is the canonical
> > > > name.
> > > > 
> > > > We could theoretically make the translation in libvirt, but it's not
> > > > going to be future proof. If a new spelling is introduced, it's because
> > > > the old one is not considered correct and the new one becomes the
> > > > canonical version. When libvirt doesn't have the translation (libvirt is
> > > > older or just didn't catch up yet) we have a big problem.
> > > > 
> > > > I guess a good alternative would be some way of querying all CPU feature
> > > > names and their aliases. If I'm not mistaken, we can either get a list
> > > > of canonical names or all names, but without any clue which names
> > > > actually refer to a single feature.
> > > 
> > > Right.  But (as described below) if you want to know the status of a
> > > specific feature you already know about (even if you are using
> > > the old spelling), qom-get should work for you.
> > 
> > Yeah, since we'll be checking all features anyway, we can detect enabled
> > and disabled features at the same time. However, we don't know whether a
> > specific feature was disabled because we did not ask for it to be
> > enabled (remember, CPU model definition may differ between libvirt and
> > QEMU) or because it was filtered out.
> > 
> > Depending on the domain XML used to start a domain, libvirt may not (and
> > usually will not) refuse to start a domain for which QEMU filtered out
> > some CPU features. Of course, once the domain is running, the checking
> > becomes very strict and libvirt would refuse to start the domain on
> > another host during migration if any feature is filtered out.
> > 
> > Thus libvirt stores all features QEMU filtered out when a domain was
> > started in the non-strict way. So we need to match the features in the
> > unavailable-features list with our features. Just checking for the list
> > being empty is not sufficient.
> 
> OK, I understand you want to translate the canonical names on
> unavailable-features back to the old names in some cases.
> 
> But I really prefer Igor's suggestion of deprecating aliases and
> getting rid of them in the future, instead of increasing the
> complexity of our QMP interfaces just to accommodate the existing
> aliases.

OK, I think you can go on and implement the unavailable-features
property the way you suggested and we'll deal with the translation
internally in libvirt.

Jirka



Re: [Qemu-devel] [PATCH 0/2] i386: "unavailable-features" QOM property

2019-05-22 Thread Jiri Denemark
On Fri, May 10, 2019 at 17:32:03 -0300, Eduardo Habkost wrote:
> On Thu, May 09, 2019 at 06:36:18PM +0200, Jiri Denemark wrote:
> > On Thu, May 09, 2019 at 18:06:03 +0200, Igor Mammedov wrote:
> > > On Thu, 9 May 2019 10:56:17 -0300
> > > Eduardo Habkost  wrote:
> > > 
> > > > On Thu, May 09, 2019 at 03:35:37PM +0200, Jiri Denemark wrote:
> > > > > Would this unavailable-features property contain only canonical names 
> > > > > of
> > > > > the features or all possible aliases of all features? For example,
> > > > > "tsc-adjust" can also be spelled as "tsc_adjust". When calling
> > > > > query-cpu-model-expansion, we have a way to request all variants by
> > > > > running full expansion on the result of a previous static expansion. 
> > > > > Can
> > > > > we get something like this for unavailable-features too?
> > > > 
> > > > I'd like to avoid that, and refer only to the canonical names.
> > > 
> > > Can we deprecate aliases to avoid confusion in future?
> > > (there aren't that many of them that used pre-QOM name format)
> > 
> > If you come up with a way libvirt could use to detect which name it
> > should use when talking to QEMU...
> 
> The property names are part of the API, and deprecation would
> just be documented in the QEMU documentation.  Why would you need
> to enumerate them dynamically at runtime?

The tricky part is to know which variant of a particular feature name we
should use when talking to a specific version of QEMU. But I guess we
can use the new "unavailable-features" property for this purpose. When
the property is present, we can translate all feature names to their
canonical names (via a static translation table in libvirt). We'd be
using the old untranslated names when talking to any QEMU which does not
support the "unavailable-features" property.

But I hope we won't get into a situation when some CPU feature needs to
be renamed again, that would make a big mess.

Jirka



Re: [Qemu-devel] [PATCH 0/2] i386: "unavailable-features" QOM property

2019-05-10 Thread Jiri Denemark
On Thu, May 09, 2019 at 13:08:25 -0300, Eduardo Habkost wrote:
> On Thu, May 09, 2019 at 05:26:16PM +0200, Jiri Denemark wrote:
> > On Thu, May 09, 2019 at 10:56:17 -0300, Eduardo Habkost wrote:
> > > On Thu, May 09, 2019 at 03:35:37PM +0200, Jiri Denemark wrote:
> > > > Would this unavailable-features property contain only canonical names of
> > > > the features or all possible aliases of all features? For example,
> > > > "tsc-adjust" can also be spelled as "tsc_adjust". When calling
> > > > query-cpu-model-expansion, we have a way to request all variants by
> > > > running full expansion on the result of a previous static expansion. Can
> > > > we get something like this for unavailable-features too?
> > > 
> > > I'd like to avoid that, and refer only to the canonical names.
> > > 
> > > Could you explain the use case you have in mind, so we can look
> > > for alternatives?
> > 
> > Libvirt only knows about a single spelling for each CPU feature and
> > quite often it is not the canonical variant. Thus libvirt would only
> > recognize features for which the name known by libvirt is the canonical
> > name.
> > 
> > We could theoretically make the translation in libvirt, but it's not
> > going to be future proof. If a new spelling is introduced, it's because
> > the old one is not considered correct and the new one becomes the
> > canonical version. When libvirt doesn't have the translation (libvirt is
> > older or just didn't catch up yet) we have a big problem.
> > 
> > I guess a good alternative would be some way of querying all CPU feature
> > names and their aliases. If I'm not mistaken, we can either get a list
> > of canonical names or all names, but without any clue which names
> > actually refer to a single feature.
> 
> Right.  But (as described below) if you want to know the status of a
> specific feature you already know about (even if you are using
> the old spelling), qom-get should work for you.

Yeah, since we'll be checking all features anyway, we can detect enabled
and disabled features at the same time. However, we don't know whether a
specific feature was disabled because we did not ask for it to be
enabled (remember, CPU model definition may differ between libvirt and
QEMU) or because it was filtered out.

Depending on the domain XML used to start a domain, libvirt may not (and
usually will not) refuse to start a domain for which QEMU filtered out
some CPU features. Of course, once the domain is running, the checking
becomes very strict and libvirt would refuse to start the domain on
another host during migration if any feature is filtered out.

Thus libvirt stores all features QEMU filtered out when a domain was
started in the non-strict way. So we need to match the features in the
unavailable-features list with our features. Just checking for the list
being empty is not sufficient.
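
A sketch of that matching, assuming both lists already use the same
(canonical) spelling:

    def filtered_out(unavailable_features, requested):
        # Features libvirt asked to enable that QEMU filtered out when the
        # domain was started in the non-strict way.
        return sorted(set(unavailable_features) & set(requested))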

Jirka



Re: [Qemu-devel] [PATCH 0/2] i386: "unavailable-features" QOM property

2019-05-09 Thread Jiri Denemark
On Thu, May 09, 2019 at 18:06:03 +0200, Igor Mammedov wrote:
> On Thu, 9 May 2019 10:56:17 -0300
> Eduardo Habkost  wrote:
> 
> > On Thu, May 09, 2019 at 03:35:37PM +0200, Jiri Denemark wrote:
> > > Would this unavailable-features property contain only canonical names of
> > > the features or all possible aliases of all features? For example,
> > > "tsc-adjust" can also be spelled as "tsc_adjust". When calling
> > > query-cpu-model-expansion, we have a way to request all variants by
> > > running full expansion on the result of a previous static expansion. Can
> > > we get something like this for unavailable-features too?
> > 
> > I'd like to avoid that, and refer only to the canonical names.
> 
> Can we deprecate aliases to avoid confusion in future?
> (there aren't that many of them that used pre-QOM name format)

If you come up with a way libvirt could use to detect which name it
should use when talking to QEMU...

Jirka



Re: [Qemu-devel] [PATCH 0/2] i386: "unavailable-features" QOM property

2019-05-09 Thread Jiri Denemark
On Thu, May 09, 2019 at 10:56:17 -0300, Eduardo Habkost wrote:
> On Thu, May 09, 2019 at 03:35:37PM +0200, Jiri Denemark wrote:
> > Would this unavailable-features property contain only canonical names of
> > the features or all possible aliases of all features? For example,
> > "tsc-adjust" can also be spelled as "tsc_adjust". When calling
> > query-cpu-model-expansion, we have a way to request all variants by
> > running full expansion on the result of a previous static expansion. Can
> > we get something like this for unavailable-features too?
> 
> I'd like to avoid that, and refer only to the canonical names.
> 
> Could you explain the use case you have in mind, so we can look
> for alternatives?

Libvirt only knows about a single spelling for each CPU feature and
quite often it is not the canonical variant. Thus libvirt would only
recognize features for which the name known by libvirt is the canonical
name.

We could theoretically make the translation in libvirt, but it's not
going to be future proof. If a new spelling is introduced, it's because
the old one is not considered correct and the new one becomes the
canonical version. When libvirt doesn't have the translation (libvirt is
older or just didn't catch up yet) we have a big problem.

I guess a good alternative would be some way of querying all CPU feature
names and their aliases. If I'm not mistaken, we can either get a list
of canonical names or all names, but without any clue which names
actually refer to a single feature.

> > As you mentioned, there are two interesting QOM properties:
> > filtered-features and feature-words and both are used by libvirt. We use
> > feature-words to get CPU features which were enabled in the guest CPU on
> > top of what we expected. This is the case of, e.g., a feature added to a
> > given CPU model for new machine types. I guess we could switch to
> > checking QOM properties for individual features as a replacement for
> > feature-words which covers both CPUID and MSR features.
> 
> I guess it depends on your goal:
> 
> If you just want to know if one specific feature is missing for
> some reason, you can check the QOM properties directly.  That's
> OK, and it's even better than checking the `feature-words`
> property.
> 
> If you want to be 100% sure no property was missing when starting
> the VM (e.g. emulate the behavior of the "enforce" option), I
> suggest you check if `unavailable-features` is empty.

That's what filtered-features is used for.

As I tried to explain (and apparently failed :-)) we use feature-words
for a different thing. I guess it will be more clear using a specific
example. For example, when arat CPU feature was introduced, it was added
into several existing CPU models and thus libvirt's version of the CPU
model differs from the one in QEMU. (This is actually safer and better
than changing libvirt's CPU model too, since migration relies on both
hosts having the same definition of the CPU model used by a domain.) So,
for example, when a domain with Broadwell CPU model is started, libvirt
checks feature-words to see whether some CPU features not defined in
libvirt's definition of Broadwell CPU model were enabled. And it will
see arat. As a result the live domain definition will actually be
changed to Broadwell+arat to make sure arat is not lost after migration.

Is the usage clear now?

Anyway, I think we can just check all features we know about via CPU
properties to get the same result. It's not going to be as nice as using
feature-words, but it's doable.
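
A sketch of that per-feature check, assuming a helper which sends a QMP
command and returns its "return" value; the CPU QOM path and the "arat"
example are illustrative:

    def extra_enabled_features(qmp, expected, known_features,
                               cpu_path="/machine/unattached/device[0]"):
        # Report features enabled by QEMU on top of libvirt's definition of
        # the CPU model (e.g. "arat" added to Broadwell by newer QEMU).
        extra = set()
        for name in known_features:
            if name in expected:
                continue
            if qmp("qom-get", path=cpu_path, property=name):
                extra.add(name)
        return extra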

Jirka



Re: [Qemu-devel] [PATCH 0/2] i386: "unavailable-features" QOM property

2019-05-09 Thread Jiri Denemark
On Mon, Apr 22, 2019 at 20:47:40 -0300, Eduardo Habkost wrote:
> Currently, libvirt uses the "filtered-features" QOM property at
> runtime to ensure no feature was accidentally disabled on VCPUs
> because it's not available on the host.
> 
> However, the code for "feature-words" assumes that all missing
> features have a corresponding CPUID bit, which is not true for
> MSR-based features like the ones at FEAT_ARCH_CAPABILITIES.
> 
> We could extend X86CPUFeatureWordInfo to include information
> about MSR features, but it's impossible to do that while keeping
> compatibility with clients that (reasonably) expect all elements
> of "filtered-features" to have the cpuid-* fields.
> 
> We have a field in "query-cpu-definitions" that already describes
> all features that are missing on a CPU, including MSR features:
> CpuDefinitionInfo.unavailable-features.  The existing code for
> building the unavailable-features array even uses
> X86CPU::filtered_features to build the feature list.
> 
> This series adds a "unavailable-features" QOM property to X86CPU
> objects that have the same semantics of "unavailable-features" on
> query-cpu-definitions.  The new property has the same goal of
> "filtered-features", but is generic enough to let any kind of CPU
> feature to be listed there without relying on low level details
> like CPUID leaves or MSR numbers.

Thanks.

Would this unavailable-features property contain only canonical names of
the features or all possible aliases of all features? For example,
"tsc-adjust" can also be spelled as "tsc_adjust". When calling
query-cpu-model-expansion, we have a way to request all variants by
running full expansion on the result of a previous static expansion. Can
we get something like this for unavailable-features too?

As you mentioned, there are two interesting QOM properties:
filtered-features and feature-words and both are used by libvirt. We use
feature-words to get CPU features which were enabled in the guest CPU on
top of what we expected. This is the case of, e.g., a feature added to a
given CPU model for new machine types. I guess we could switch to
checking QOM properties for individual features as a replacement for
feature-words which covers both CPUID and MSR features.

Jirka



Re: [Qemu-devel] Dropped CPU feature names and backward compatibility

2018-09-18 Thread Jiri Denemark
On Tue, Sep 18, 2018 at 10:14:45 -0300, Eduardo Habkost wrote:
> On Tue, Sep 18, 2018 at 03:07:35PM +0200, Jiri Denemark wrote:
> > Sure, libvirt could just avoid passing feature=off for any feature which
> > is not supported by the QEMU binary it is about to start since such
> > feature should be disabled anyway. And if it actually is enabled even if
> > it is not supported (which can apparently happen according to the commit
> > message which removed ospke), we can at least rely on QEMU doing it
> > consistently. But what should we do if the user explicitly asked for the
> > feature to be enabled, QEMU enabled it, and suddenly the feature is not
> > supported by new QEMU?
> 
> I incorrectly assumed that nobody would be using the option
> names, because the options did nothing (ospke and osxsave are
> runtime CPU state flags exposed via CPUID, and were never
> configurable from the command-line).
> 
> If it broke something, we should restore the option names and
> declare them as deprecated.
> 
> > 
> > Any ideas on how we should deal with the two features which were removed
> > from QEMU 3.0 and how to make sure we don't have to discuss this again
> > when some other feature is removed?
> 
> Removing the flags without notice was a mistake.  Next time, we
> should follow the deprecation policy and communicate the option
> removal more clearly.

Fair enough; however, once they are declared as deprecated they will
actually be removed at some point. And we're back in this situation.
Deprecating or removing command line options or QMP commands is
different, because libvirt can detect what is (un)supported and use an
alternative command or option.

But if a CPU feature is removed (unfortunately it doesn't really matter
if it was doing something or not), we need to have a plan for what to do
in a situation when the feature is used in a domain XML. We can detect
that the feature is not supported, but does that mean the feature didn't
have any effect and thus was removed, or are we just talking to a QEMU
binary which does not support the feature (yet)?

Jirka



[Qemu-devel] Dropped CPU feature names and backward compatibility

2018-09-18 Thread Jiri Denemark
Hi,

I noticed two x86_64 CPU features were removed from QEMU in 3.0.0:
- ospke removed by 9ccb9784b57
- osxsave removed by f1a23522b03

More precisely, the CPUID bits are still there (and for example Icelake
CPU model has the ospke bit set), but the string representations were
removed. Thus QEMU will no longer report these features and it will not
accept them on the command line anymore. This is a regression which we
need to deal with somehow.

With QEMU < 3.0:

$ qemu-system-x86_64 -machine pc,accel=kvm -cpu Skylake-Client,osxsave=on
qemu-system-x86_64: warning: host doesn't support requested feature: 
CPUID.01H:ECX.osxsave [bit 27]

$ qemu-system-x86_64 -machine pc,accel=kvm -cpu Skylake-Client,osxsave=off


While new QEMU 3.0 complains:

$ qemu-system-x86_64 -machine pc,accel=kvm -cpu Skylake-Client,osxsave=off
qemu-system-x86_64: can't apply global Skylake-Client-x86_64-cpu.osxsave=off: 
Property '.osxsave' not found


I accept the argument that setting those two features was never actually
allowed, but that doesn't mean they should be just removed. It's quite
likely that somebody has a libvirt domain defined with either of the two
features explicitly enabled or disabled. It's likely because both
virt-manager and virt-install can be told to copy host CPU configuration
to the domain XML. This is normally not a problem, because it will just
ask QEMU to enable all features available on the host and QEMU will
enable only some of them. To make sure libvirt can provide stable guest
CPU ABI after migration or save/restore, we ask QEMU what CPU features
were enabled and disabled and update the active definition. This means
that an explicitly requested osxsave feature in the XML will likely turn
into an explicitly disabled one. In other words, the -cpu option will
have an explicit osxsave=off parameter during migration. As a result,
such a domain would happily migrate to QEMU
< 3.0, but fail to migrate to QEMU 3.0.

Sure, libvirt could just avoid passing feature=off for any feature which
is not supported by the QEMU binary it is about to start since such
feature should be disabled anyway. And if it actually is enabled even if
it is not supported (which can apparently happen according to the commit
message which removed ospke), we can at least rely on QEMU doing it
consistently. But what should we do if the user explicitly asked for the
feature to be enabled, QEMU enabled it, and suddenly the feature is not
supported by new QEMU?
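
A sketch of the "avoid passing feature=off" idea from the first paragraph,
assuming the set of feature names known to the target QEMU binary has already
been probed; it does not answer the explicitly-enabled question above:

    def filter_cpu_features(requested, qemu_features):
        # requested: feature name -> bool from the domain definition
        # qemu_features: names supported by the QEMU binary about to start
        kept = {}
        for name, enabled in requested.items():
            if name not in qemu_features and not enabled:
                continue  # QEMU would reject "name=off"; it is off anyway
            kept[name] = enabled
        return kept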

Any ideas on how we should deal with the two features which were removed
from QEMU 3.0 and how to make sure we don't have to discuss this again
when some other feature is removed?

Jirka



Re: [Qemu-devel] CPU model versioning separate from machine type versioning ?

2018-06-29 Thread Jiri Denemark
On Fri, Jun 29, 2018 at 11:14:17 +0100, Daniel P. Berrangé wrote:
> On Thu, Jun 28, 2018 at 04:52:27PM -0300, Eduardo Habkost wrote:
> > On Thu, Jun 28, 2018 at 04:45:02PM +0100, Daniel P. Berrangé wrote:
> > [...]
> > > What if we can borrow the concept of versioning from machine types and 
> > > apply
> > > it to CPU models directly. For example, considering the history of 
> > > "Haswell"
> > > in QEMU, if we had versioned things, we would by now have:
> > > 
> > >  Haswell-1.3.0 - first version (37507094f350b75c62dc059f998e7185de3ab60a)
> > >  Haswell-2.2.0 - added 'rdrand' (78a611f1936b3eac8ed78a2be2146a742a85212c)
> > >  Haswell-2.3.0 - removed 'hle' & 'rtm' (a356850b80b3d13b2ef737dad2acb05e6da03753)
> > >  Haswell-2.5.0 - added 'abm' (becb66673ec30cb604926d247ab9449a60ad8b11)
> > >  Haswell-2.12.0 - added 'spec-ctrl' (ac96c41354b7e4c70b756342d9b686e31ab87458)
> > >  Haswell-3.0.0  - added 'ssbd' (never done)
> > > 
> > > If we followed the machine type approach, then a bare "Haswell" would
> > > statically resolve at build time to the most recent Haswell-X.X.X version
> > > associated with the QEMU release. This is unhelpful as we have a direct
> > > dependancy on the host hardware features. Better would be for a bare
> > > "Haswell" to be dynamically resolved at runtime, picking the most recent
> > > version that is capable of launching given the current hardware, KVM/TCG 
> > > impl
> > > and QEMU version.
> > > 
> > >   ie -cpu  Haswell
> > > 
> > > should use Haswell-2.5.0  if on silicon with the TSX errata applied,
> > > but use Haswell-2.12.0 if the Spectre errata is applied in microcode,
> > > and use Haswell-3.0.0 once Intel finally releases SSBD microcode errata.
> > 
> > Doing this unconditionally would make
> > "-machine pc-q35-3.1 -cpu Haswell" unsafe for live migration, and
> > break existing usage.  But this behavior could be enabled
> > explicitly somehow.
> 
> True, for full back compat with existing libvirt we would probably
> want to opt-in to it.
> 
> eg  -cpu Haswell could pick a fixed Haswell--XXX version according
> to the machine type.  -cpu Haswell,best=on  could pick best version
> for the host with the caveat about migration between heterogenous
> hosts.

I was thinking we could even separate the CPU model version from the
name itself:

-cpu Haswell(the old, compatible way)
-cpu Haswell,version=best
-cpu Haswell,version=2.12.0

It would be slightly more work for the upper management layers, but IMHO
it would make more sense.

In any case, we have to think about keeping guest ABI stable.

I hope the automatic version selection would not cause any problems for
subsequent cold starts (such as Windows activation issues). It should be
very similar to updating CPU microcode which the guest OS is already
supposed to deal with in real hardware. However, in the past QEMU
changed CPU signature (family, model, stepping) for new machine types
and it is likely to happen with separately versioned CPU models too. I
believe CPU microcode updates do not touch these values. On the other
hand, it's similar to host-model and the user can always specify exact
version to avoid this slight change should it be a problem.

Once the domain starts, we need to keep stable ABI across migrations,
save/restores, or snapshots. Libvirt already does so by talking to QEMU
before starting vCPUs and checking for disabled/enabled features. Then
we store this information in the active domain XML to make sure we can
enforce the same CPU later. This concept would need to be enhanced to
include the CPU model version which QEMU would need to be able to
report.

Significantly more fun would result from letting libvirt use the
versioned CPU model stuff by default without an explicit knob in the
XML. But I guess you don't want to go that direction, do you?

Jirka



Re: [Qemu-devel] [libvirt] CPU model versioning separate from machine type versioning ?

2018-06-28 Thread Jiri Denemark
On Thu, Jun 28, 2018 at 16:23:53 -0300, Eduardo Habkost wrote:
> On Thu, Jun 28, 2018 at 07:59:38PM +0100, Dr. David Alan Gilbert wrote:
> [...]
> > > An application like virt-manager which wants a simple UI can forever be
> > > happy simply giving users a list of bare CPU model names, and allowing
> > > libvirt / QEMU to automatically expand to the best versioned model for
> > > their host.
> > > 
> > > An application like oVirt/OpenStack which wants direct control can allow
> > > the admin to choice if a bare name, or explicitly picking a versioned name
> > > if they need to cope with possibility of outdated hosts.
> > 
> > I fear people are going to find this out the hard way, when they add
> > a new system into their cluster, a little bit later it gets a VM started
> > on it, and then they try and migrate it to one of the older machines.
> > 
> > Now if there was something that could take the CPU definitions from all
> > the machines in the cluster and tell it which to use/which problems
> > they had then that might make sense.  It would be best for each
> > higher level not to reinvent that.
> 
> I think QEMU already provides enough info to allow that to be
> implemented.  I'm not sure sure if the libvirt API already
> provides all the info needed for this (I think it does).

Right, libvirt provides virConnectBaselineHypervisorCPU API which
accepts a list of host CPU models from several hosts and returns the
best CPU definition runnable on these hosts. However, OpenStack
clusters are too dynamic for this to be practical so the admin would
need to take care of this by setting an appropriate model statically.
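
A sketch of how that API can be driven from the libvirt Python binding,
assuming it mirrors the C virConnectBaselineHypervisorCPU() signature; the
connection URI is an example:

    import libvirt

    def baseline_for_hosts(host_cpu_xmls):
        # host_cpu_xmls: list of <cpu> XML strings gathered from the hosts
        conn = libvirt.open("qemu:///system")
        try:
            return conn.baselineHypervisorCPU(None, None, None, None,
                                              host_cpu_xmls, 0)
        finally:
            conn.close()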

> > Would you restrict the combinations to cut down the test matrix - e.g.
> > not allow Haswell-3.0.0 on anything prior to a 2.12 machine type?
> 
> Not sure if it would be worth the extra complexity: we would need
> an interface to tell libvirt which CPU models are usable on which
> machine-types.

In case we do this, libvirt should already be ready for it on the API
level for both reporting capabilities and CPU comparison/baseline. All
these APIs already accept machine type as an optional parameter so that
different results can be provided depending on machine type.

Jirka



Re: [Qemu-devel] [PATCH] migration: Don't activate block devices if using -S

2018-04-11 Thread Jiri Denemark
On Tue, Apr 10, 2018 at 16:47:56 +0200, Kevin Wolf wrote:
> Am 10.04.2018 um 16:22 hat Dr. David Alan Gilbert geschrieben:
> > * Kevin Wolf (kw...@redhat.com) wrote:
> > > Am 10.04.2018 um 12:40 hat Dr. David Alan Gilbert geschrieben:
> > > > Hmm; having chatted to Jiri I'm OK with reverting it, on the condition
> > > > that I actually understand how this alternative would work first.
> > > > 
> > > > I can't currently see how a block-inactivate would be used.
> > > > I also can't see how a block-activate unless it's also with the
> > > > change that you're asking to revert.
> > > > 
> > > > Can you explain the way you see it working?
> > > 
> > > The key is making the delayed activation of block devices (and probably
> > > delayed announcement of NICs? - you didn't answer that part) optional
> > > instead of making it the default.
> > 
> > NIC announcements are broken in similar but slightly different ways; we
> > did have a series on list to help a while ago but it never got merged;
> > I'd like to keep that mess separate.
> 
> Okay. I just thought that it would make sense to have clear migration
> phases that are the same for all external resources that the QEMU
> processes use.

I don't think NIC announcements should be delayed in this specific case
since we're dealing with a failure recovery which should be rare in
comparison to successful migration when we want NIC announcements to be
send early. In other words, any NIC issues should be solved separately
and Laine would likely be a better person for discussing them since he
has a broader knowledge of all the fancy network stuff which libvirt
needs to coordinate with.

Jirka



Re: [Qemu-devel] [PATCH] migration: Don't activate block devices if using -S

2018-04-10 Thread Jiri Denemark
On Mon, Apr 09, 2018 at 15:40:03 +0200, Kevin Wolf wrote:
> Am 09.04.2018 um 12:27 hat Dr. David Alan Gilbert geschrieben:
> > It's a fairly hairy failure case they had; if I remember correctly it's:
> >   a) Start migration
> >   b) Migration gets to completion point
> >   c) Destination is still paused
> >   d) Libvirt is restarted on the source
> >   e) Since libvirt was restarted it fails the migration (and hence knows
> >  the destination won't be started)
> >   f) It now tries to resume the qemu on the source
> > 
> > (f) fails because (b) caused the locks to be taken on the destination;
> > hence this patch stops doing that.  It's a case we don't really think
> > about - i.e. that the migration has actually completed and all the data
> > is on the destination, but libvirt decides for some other reason to
> > abandon migration.
> 
> If you do remember correctly, that scenario doesn't feel tricky at all.
> libvirt needs to quit the destination qemu, which will inactivate the
> images on the destination and release the lock, and then it can continue
> the source.
> 
> In fact, this is so straightforward that I wonder what else libvirt is
> doing. Is the destination qemu only shut down after trying to continue
> the source? That would be libvirt using the wrong order of steps.

There's no connection between the two libvirt daemons in the case we're
talking about so they can't really synchronize the actions. The
destination daemon will kill the new QEMU process and the source will
resume the old one, but the order is completely random.

...
> > Yes it was a 'block-activate' that I'd wondered about.  One complication
> > is that if this now under the control of the management layer then we
> > should stop asserting when the block devices aren't in the expected
> > state and just cleanly fail the command instead.
> 
> Requiring an explicit 'block-activate' on the destination would be an
> incompatible change, so you would have to introduce a new option for
> that. 'block-inactivate' on the source feels a bit simpler.

As I said in another email, the explicit block-activate command could
depend on a migration capability similarly to how pre-switchover state
works.

Jirka



Re: [Qemu-devel] [PATCH] migration: Don't activate block devices if using -S

2018-04-09 Thread Jiri Denemark
On Wed, Apr 04, 2018 at 12:03:03 +0200, Kevin Wolf wrote:
> Am 03.04.2018 um 22:52 hat Dr. David Alan Gilbert geschrieben:
> > * Kevin Wolf (kw...@redhat.com) wrote:
> > > Consider a case where the management tool keeps a mirror job with
> > > sync=none running to expose all I/O requests to some external process.
> > > It needs to shut down the old block job on the source in the
> > > 'pre-switchover' state, and start a new block job on the destination
> > > when the destination controls the images, but the VM doesn't run yet (so
> > > that it doesn't miss an I/O request). This patch removes the migration
> > > phase that the management tool needs to implement this correctly.
> > > 
> > > If we need a "neither side has control" phase, we might need to
> > > introduce it in addition to the existing phases rather than replacing a
> > > phase that is still needed in other cases.
> > 
> > This is yet another phase to be added.
> > IMHO this needs the managment tool to explicitly take control in the
> > case you're talking about.
> 
> What kind of mechanism do you have in mind there?
> 
> Maybe what could work would be separate QMP commands to inactivate (and
> possibly for symmetry activate) all block nodes. Then the management
> tool could use the pre-switchover phase to shut down its block jobs
> etc., inactivate all block nodes, transfer its own locks and then call
> migrate-continue.

Libvirt's migration protocol consists of several phases, the ones
relevant to QEMU are:

1. destination libvirtd starts a new QEMU process with -incoming
2. source libvirtd calls migrate QMP command and waits until migration
   completes; in this step we actually wait for the pre-switchover, shut
   down all block jobs, and call migrate-continue
3. destination libvirtd calls cont QMP command

The daemons only communicate between these steps, i.e., the result of
each step is transferred to the other side of migration. In other
words, transferring locks and other state can only happen at these
boundaries.

IIUC what you suggested would require step 2. to be split so that some
work can be done on the destination between "migrate" command and
completed migration. This would be pretty complicated and I don't think
the problem we're trying to solve would be worth it. Not to mention that
it would multiply the number of possible code paths in the migration
code.

However, I don't think the extra step is even necessary. We could just
have a separate QMP command to activate block nodes. Thus the source
would wait for pre-switchover, shut down all block jobs and call
migrate-continue. Once migration completes, the control would be
transferred to the destination which would call the new command to
activate block nodes followed by cont. That is, the "cont" command would
just be replaced with two commands. And similarly to the
pre-switchover state, this could be controlled by a new migration
capability to make sure all block nodes are activated automatically with
old libvirt which doesn't know anything about the new command. This
would solve compatibility with non-libvirt use cases too.
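
The proposed destination-side sequence written out as a sketch; the
block-activation command and the capability guarding it are hypothetical, as
described above:

    # With the new capability negotiated, the destination would no longer
    # activate block nodes implicitly; libvirt would do it explicitly:
    destination_finish = [
        ("block-activate", {}),  # hypothetical command activating block nodes
        ("cont", {}),            # only then resume vCPUs
    ]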

Jirka



Re: [Qemu-devel] [PATCH RESEND v1 1/2] i386: Add Intel Processor Trace feature support

2018-01-15 Thread Jiri Denemark
On Mon, Jan 15, 2018 at 12:04:55 -0200, Eduardo Habkost wrote:
> CCing libvirt developers.
...
> This case is slightly more problematic, however: the new feature
> is actually migratable (under very controlled circumstances)
> because of patch 2/2, but it is not migration-safe[1].  This
> means libvirt shouldn't include it in "host-model" expansion
> (which uses the query-cpu-model-expansion QMP command) until we
> make the feature migration-safe.
> 
> For QEMU, this means the feature shouldn't be returned by
> "query-cpu-model-expansion type=static model=max" (but it can be
> returned by "query-cpu-model-expansion type=full model=max").
> 
> Jiri, it looks like libvirt uses type=full on
> query-cpu-model-expansion on x86.  It needs to use
> type=static[2], or it will have no way to find out if a feature
> is migration-safe or not.
...
> [2] It looks like libvirt uses type=full because it wants to get
> all QOM property aliases returned.  In this case, one
> solution for libvirt is to use:
> 
> static_expansion = query_cpu_model_expansion(type=static, model)
> all_props = query_cpu_model_expansion(type=full, static_expansion)

This is exactly what libvirt is doing (with model = "host") ever since
query-cpu-model-expansion support was implemented for x86.
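
In QMP terms the two calls look roughly like this (property lists are
trimmed and the values purely illustrative; on x86 the static expansion
comes back as the featureless "base" model plus explicit properties):

-> { "execute": "query-cpu-model-expansion",
     "arguments": { "type": "static", "model": { "name": "host" } } }
<- { "return": { "model": { "name": "base",
                            "props": { "avx": true, "aes": true, ... } } } }
-> { "execute": "query-cpu-model-expansion",
     "arguments": { "type": "full",
                    "model": { "name": "base",
                               "props": { "avx": true, "aes": true, ... } } } }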

Jirka



Re: [Qemu-devel] [libvirt] [PATCH/QEMU] s390x/kvm: use cpu_model_available for guarded storage on compat machines

2017-10-27 Thread Jiri Denemark
On Fri, Oct 27, 2017 at 17:18:44 +0200, Halil Pasic wrote:
> On 10/27/2017 04:06 PM, Christian Borntraeger wrote:
...
> > I talked to several people and it seems that on x86 the host model will 
> > also enable new features
> > that are not known by older QEMUs and its considered works as designed. 
> > (see also Jiris mail)
> 
> Yes, I've seen that. It would be nice though if this design was easier to
> find in written. Unfortunately I can read minds only to a very limited extent,
> and the written stuff I've read did not give me a full understanding of the
> design -- although the entity to blame for this could be my limited intellect.

I think it's more likely the documentation is just not perfect. I'll
look at it and try to make it better. I know about some parts which need
to be clarified thanks to this discussion.

Jirka



Re: [Qemu-devel] [PATCH/QEMU] s390x/kvm: use cpu_model_available for guarded storage on compat machines

2017-10-27 Thread Jiri Denemark
On Thu, Oct 26, 2017 at 10:09:46 +0200, David Hildenbrand wrote:
> On 25.10.2017 18:45, Marc Hartmayer wrote:
> > On Wed, Oct 25, 2017 at 05:50 PM +0200, David Hildenbrand 
> >  wrote:
> >> On 25.10.2017 17:09, Boris Fiuczynski wrote:
> >>> David, I disagree if your proposal is to generally tolerate new cpu
> >>> features in old machine types. This *might* work for gs but how do you
> >>> guaranty that guests do not behave differently/wrong when suddenly new
> >>> cpu features are made available to guest when (re-)starting them.
> >>> That is my feedback for the qemu side of this mater.
> >>
> >> Just re-reading this section, so what you mean is:
> >>
> >> a) guest is started, host model is copied and used. guest works.
> >> b) guest is stopped.
> >> c) QEMU/KVM/HW is updated.
> >> d) guest is started, new host model is copied. guest no longer works.
> >>
> >> d) could be because there are now some additional features with e.g.
> >> broken guest implementation or now missing features.
> >>
> >>
> >> What you propose (if I am not wrong) is a to bind features somehow to a
> >> QEMU machine. I think that should never be done. You could not catch now
> >> missing features.
> > 
> > What exactly do you mean by the last sentence?
> 
> In general, up/downgrading QEMU/KVM/HW can lead to the removal of features.
> 
> Another example is the "nested" flag for KVM. toggling that can lead to
> the host feature looking differently (+/- SIE features).
> 
> So if you really want to make sure that a VM XML that once ran perfectly
> fine on a host will still run after any QEMU/KVM/HW changes on that host:
> 
> a) specify an explicit CPU model, not the host model (e.g. "z13")
> b) copy the host model to the XML persistently.
> 
> Linking any of that to the machine types is in my opinion the very wrong
> approach.

I agree, we should do that only if it's really impossible to even create
a machine with a given machine type in combination with some CPU models.
And I believe this is not the case.

The host-model CPU guarantees guest ABI only while a domain is running.
Once it stops and a user starts it again, the ABI seen by the guest OS
may be different, and this may sometimes mean the guest OS won't start.
It's pretty similar to what can happen when you change the machine type.
Of course, machine type doesn't change automatically while host-model
CPUs do change. But that's just how host-model is defined. If you don't
want this to happen, you should use a specific CPU model; you can copy
it from domain capabilities XML to make a persistent version of a
host-model CPU.

BTW, using host-model CPU may actually break migration to host with
older QEMU even though an old machine type is used. This is OK since
host-model may bring features which are not supported on the older host
and backward migration is only supported when no new features are used.
However, if a domain with a host-model CPU was started on the old host
and migrated to the new one, migrating it back to the old one is
supported and it will work.

Jirka



Re: [Qemu-devel] [libvirt] Question about the host-model CPU mode

2017-10-23 Thread Jiri Denemark
On Fri, Oct 20, 2017 at 15:04:57 +0200, David Hildenbrand wrote:
> On 20.10.2017 14:50, Jiri Denemark wrote:
> > The thing is libvirt calls query-cpu-model-expansion to check what the
> > host CPU is. This 'host-model' CPU is replaced with the probed CPU model
> > when a domain starts. The problem described by Marc is that the probed
> > CPU model cannot be used universally with all machine types. So starting
> > a domain on such host with machine type s390-virtio-ccw-2.10 works, but
> > a domain with machine type s390-virtio-ccw-2.9 fails to start with the
> > same probed CPU model.
> > 
> 
> My assumption would be that the CPU model is copied into the XML when
> the domain fist starts. This is what the documentation describes.

The CPU model is copied into the XML each time the domain starts.

> So when upgrading QEMU, the CPU model in the XML is still the same
> (z13), even though something different is now reported in the host info
> after upgrading QEMU (z14).
> 
> In this case it would continue to work.
> 
> The problem is that the CPU model is not copied into the XML doesn't
> remain there, right? It is suddenly replaced with a z14 host model.

Even preserving the actual CPU model in the XML from the first start
wouldn't solve the issue. You can always create a new domain with
s390-virtio-ccw-2.9 machine type even on the upgraded host.

Jirka



Re: [Qemu-devel] [libvirt] Question about the host-model CPU mode

2017-10-20 Thread Jiri Denemark
On Fri, Oct 20, 2017 at 13:37:42 +0200, David Hildenbrand wrote:
> On 20.10.2017 13:09, Marc Hartmayer wrote:
> > we recently encountered the problem that the 'host-model' [1] has to be
> > related to the machine type of a domain. We have following problem:
> > 
> >Let's assume we've a z13 system with a QEMU 2.9 and we define a
> >domain using the default s390-virtio-ccw machine together with the
> >host-model CPU mode [1]. The definition will have the machine
> >expanded to s390-virtio-ccw-2.9 but retain the host-model CPU mode
> >in the domain definition. In a next step we upgrade to QEMU 2.10
> >(first version to recognize z14). Everything is still fine, even
> >though the machine runs in 2.9 compatibility mode. Finally we
> >upgrade to a z14. As a consequence it is not possible to start the
> >domain anymore as the machine type doesn't support our CPU host
> >model (which is expanded at start time of the domain).
> 
> Yes, the host model may vary depending on QEMU version and to some
> degree also on compatibility machines (although I always try to push
> people to not introduce any new such stuff).
> 
> Quoting for the libvirt documentation: https://libvirt.org/formatdomain.html
> 
> "host-model
> The host-model mode is essentially a shortcut to copying host CPU
> definition from capabilities XML into domain XML. Since the CPU
> definition is copied just before starting a domain, exactly the same XML
> can be used on different hosts while still providing the best guest CPU
> each host supports."
> 
> You're telling me that that copying does not happen, which seems to be
> the problematic part about this in my opinion.
> 
> So I am not sure if probing anything else (as you noted below) is the
> correct thing to do. Copy it and you have a migration_safe and even
> static version of the _current_ host CPU.

The thing is libvirt calls query-cpu-model-expansion to check what the
host CPU is. This 'host-model' CPU is replaced with the probed CPU model
when a domain starts. The problem described by Marc is that the probed
CPU model cannot be used universally with all machine types. So starting
a domain on such host with machine type s390-virtio-ccw-2.10 works, but
a domain with machine type s390-virtio-ccw-2.9 fails to start with the
same probed CPU model.

Jirka



Re: [Qemu-devel] [libvirt] Question about the host-model CPU mode

2017-10-20 Thread Jiri Denemark
On Fri, Oct 20, 2017 at 13:09:26 +0200, Marc Hartmayer wrote:
> we recently encountered the problem that the 'host-model' [1] has to be
> related to the machine type of a domain. We have following problem:
> 
>Let's assume we've a z13 system with a QEMU 2.9 and we define a
>domain using the default s390-virtio-ccw machine together with the
>host-model CPU mode [1]. The definition will have the machine
>expanded to s390-virtio-ccw-2.9 but retain the host-model CPU mode
>in the domain definition. In a next step we upgrade to QEMU 2.10
>(first version to recognize z14). Everything is still fine, even
>though the machine runs in 2.9 compatibility mode. Finally we
>upgrade to a z14. As a consequence it is not possible to start the
>domain anymore as the machine type doesn't support our CPU host
>model (which is expanded at start time of the domain).
> 
> For determining the actual host-model the QMP command
> 'query-cpu-model-expansion' is used. This is done once per QEMU binary
> and the result of it is cached by libvirt. The problem with that is
> that libvirt queries with the newest machine type of the QEMU binary
> for the host CPU model.

No, libvirt probes QEMU with -machine none.

> We could now either probe the host CPU model for each QEMU binary +
> machine type combination and for this we've to start a new QEMU
> process each time.

This is not really a viable solution.

Jirka



Re: [Qemu-devel] [PATCH v3 0/7] migration: pause-before-switchover

2017-10-19 Thread Jiri Denemark
The libvirt changes which will make use of this new migration capability
can be found in migration-pause branch of my gitlab repository:

git fetch https://gitlab.com/jirkade/libvirt.git migration-pause

It's not properly split into patches, it has no commit message etc.,
but the functionality should be complete.

Feel free to test it and report any issues.

Thanks,

Jirka



Re: [Qemu-devel] [PATCH v3 4/7] migration: migrate-continue

2017-10-19 Thread Jiri Denemark
On Wed, Oct 18, 2017 at 18:40:10 +0100, Dr. David Alan Gilbert (git) wrote:
> From: "Dr. David Alan Gilbert" 
> 
> A new qmp command allows the caller to continue from a given
> paused state.
> 
> Signed-off-by: Dr. David Alan Gilbert 
...
> diff --git a/qapi/migration.json b/qapi/migration.json
> index b56f95db64..3b50afa6e8 100644
> --- a/qapi/migration.json
> +++ b/qapi/migration.json
> @@ -877,6 +877,23 @@
>  { 'command': 'migrate_cancel' }
>  
>  ##
> +# @migrate-continue:
> +#
> +# Continue migration when it's in a paused state.
> +#
> +# @state: The state the migration is currently expected to be in
> +#
> +# Returns: nothing on success
> +# Since: 2.11
> +# Example:
> +#
> +# -> { "execute": "migrate-continue" , "arguments":
> +#  { "state": "pause-before-device" } }

This example still uses the old name, it should be

 -> { "execute": "migrate-continue" , "arguments":
  { "state": "pre-switchover" } }

> +# <- { "return": {} }
> +##
> +{ 'command': 'migrate-continue', 'data': {'state': 'MigrationStatus'} }
> +
> +##
>  # @migrate_set_downtime:
>  #
>  # Set maximum tolerated downtime for migration.

Jirka



Re: [Qemu-devel] host doesn't support requested feature: CPUID.01H:EDX.ds [bit 21] Does this warn affect virtual machine performance or use?

2017-09-15 Thread Jiri Denemark
On Thu, Sep 14, 2017 at 20:48:49 +0800, Paul Schlacter wrote:
> this is my stackoverflow question:
> https://stackoverflow.com/questions/46219552/host-doesnt-support-requested-feature-cpuid-01hedx-ds-bit-21-does-this-warn
> 
> 
> I found a lot of warning from the VM qemu log, Does this warn affect
> virtual machine performance or use? , is my libvirt.xml file problem? Or
> support hot-plug will have these warnings
> 
> This is my libvirt xml configuration:
> 
> 
...
>   
> 
> 
>   
> 
>   
...
> 2017-09-14T12:16:03.441702Z qemu-kvm: warning: CPU(s) not present in
> any NUMA nodes: 2 3 4 5 6 7
> 2017-09-14T12:16:03.441773Z qemu-kvm: warning: All CPU(s) up to
> maxcpus should be described in NUMA config
> warning: host doesn't support requested feature: CPUID.01H:EDX.ds [bit 21]
> warning: host doesn't support requested feature: CPUID.01H:EDX.acpi [bit 22]
> warning: host doesn't support requested feature: CPUID.01H:EDX.ht [bit 28]
> warning: host doesn't support requested feature: CPUID.01H:EDX.tm [bit 29]
...

Similar warnings are expected with host-model CPUs on older libvirt
which didn't know how to ask QEMU what CPU features are supported on the
host. Thus libvirt just used CPUID and asked for everything and QEMU
filtered a lot of them. You need a recent QEMU and libvirt to avoid
these warnings.

Jirka



Re: [Qemu-devel] [PATCH for-2.9 v2 0/2] i386: Don't override -cpu options on -cpu host/max

2017-03-28 Thread Jiri Denemark
On Mon, Mar 27, 2017 at 11:48:13 -0300, Eduardo Habkost wrote:
> The existing code for "host" and "max" CPU models overrides every
> single feature in the CPU object at realize time, even the ones
> that were explicitly enabled or disabled by the user using
> "feat=on" or "feat=off", while features set using +feat/-feat are
> kept.
> 
> This means "-cpu host,+invtsc" works as expected, while
> "-cpu host,invtsc=on" doesn't.
> 
> This was a known bug, already documented in a comment inside
> x86_cpu_expand_features(). What makes this bug worse now is that
> libvirt 3.0.0 and newer now use "feat=on|off" instead of
> +feat/-feat when it detects a QEMU version that supports it (see
> libvirt commit d47db7b16dd5422c7e487c8c8ee5b181a2f9cd66).
> 
> This series fixes the bug.

Thanks.

Tested-by: Jiri Denemark 

Jirka



Re: [Qemu-devel] [PATCH for-2.9 15/17] target-i386: Define static "base" CPU model

2016-12-16 Thread Jiri Denemark
On Mon, Dec 05, 2016 at 21:57:45 -0200, Eduardo Habkost wrote:
> On Mon, Dec 05, 2016 at 07:18:47PM +0100, David Hildenbrand wrote:
> > Am 02.12.2016 um 22:18 schrieb Eduardo Habkost:
> > > The query-cpu-model-expand QMP command needs at least one static
> > > model, to allow the "static" expansion mode to be implemented.
> > > Instead of defining static versions of every CPU model, define a
> > > "base" CPU model that has absolutely no feature flag enabled.
> > > 
> > 
> > Introducing separate ones makes feature lists presented to the user much
> > shorter (and therefore easier to maintain). But I don't know how libvirt
> > wants to deal with models on x86 in the future.
> 
> I understand that having a larger set of static models would make
> expansions shorter. But I worry that by defining a complete set
> of static models on x86 would require extra maintenance work on
> the QEMU side with no visible benefit for libvirt.
> 
> I would like to hear from libvirt developers what they think. I
> still don't know what they plan to use the type=static expansion
> results for.

Currently we are mostly interested in the expansion of the "host" CPU
model. We're fine with the expansion based on the "base" static model
with no features. Returning some real model instead of "base" would be
OK as long as it would be one of the existing CPU models. Adding special
static models, such as Broadwell-base would actually be a complication
since we would need to provide some translation to the existing models
for backward compatibility. I'd appreciate if we could avoid doing this.

Jirka



Re: [Qemu-devel] [libvirt] [PATCH v1] qemu: command: rework cpu feature argument support

2016-11-16 Thread Jiri Denemark
On Tue, Nov 15, 2016 at 11:44:00 -0200, Eduardo Habkost wrote:
> CCing qemu-devel.
> 
> CCing Markus, in case he has any insights about the interface
> introspection.
> 
> On Tue, Nov 15, 2016 at 08:42:12AM +0100, Jiri Denemark wrote:
> > On Mon, Nov 14, 2016 at 18:02:29 -0200, Eduardo Habkost wrote:
> > > On Mon, Nov 14, 2016 at 02:26:03PM -0500, Collin L. Walling wrote:
> > > > cpu features are passed to the qemu command with feature=on/off
> > > > instead of +/-feature.
> > > > 
> > > > Signed-off-by: Collin L. Walling 
> > > 
> > > If I'm not mistaken, the "feature=on|off" syntax was added on
> > > QEMU 2.0.0. Does current libvirt support older QEMU versions?
> > 
> > Of course it does. I'd love to switch to feature=on|off, but how can we
> > check if QEMU supports it? We can't really start using this syntax
> > without it.
> 
> Actually, I was wrong, this was added in v2.4.0. "feat=on|off"
> needs two things to work (in x86):
> 
> * Translation of all "foo=bar" options to QOM property setting.
>   This was added in v2.0.0-rc0~162^2
> * The actual QOM properties for feature names to be present. They
>   were added in v2.4.0-rc0~101^2~1
> 
> So you can be sure "feat=on" is supported by checking if the
> feature flags are present in device-list-properties output for
> the CPU model. But device-list-properties is also messy[1].
> 
> Maybe we can use the availability of query-cpu-model-expansion to
> check if we can safely use the new "feat=on|off" system? It's
> easier than taking all the variables above into account.

Yeah, this could work since s390 already supports
query-cpu-model-expansion. It would cause feature=on|off not to be used
on x86_64 with QEMU older than 2.9.0, but I guess that's not a big deal,
is it?

Jirka



Re: [Qemu-devel] [PATCH v4 7/8] qmp: Support abstract classes on device-list-properties

2016-11-11 Thread Jiri Denemark
On Tue, Nov 08, 2016 at 08:29:41 +0100, Markus Armbruster wrote:
> Eduardo Habkost  writes:
> > libvirt wants to know if the QEMU binary supports a given -cpu
> > option (normally CPU features that can be enabled/disabled using
> > "+foo"/"-foo").
> 
> The obvious way to check whether a specific CPU supports it is to
> introspect that CPU.
> 
> The obvious way to check whether all CPUs of interest support it
> (assuming that is a productive question) is to introspect all CPUs of
> interest.  Works regardless of whether the option is inherited, which is
> really an implementation detail.

As Eduardo said, libvirt wants to know whether it can use a given CPU
feature with current QEMU binary. In -cpu help, we can see a list of
models and features QEMU understands, but we need to get similar info
via QMP. CPU models are easy, but how do we get the list of CPU
features? If introspection is the way to get it, I'm fine with that,
just beware that we don't actually know the name of the CPU object
(Westmere-x86_64-cpu), we only know the name of the CPU model
(Westmere). So if the object name is needed, we need to query the
mapping from CPU model names to CPU object names.
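
In other words, the query libvirt wants to do looks like this (output
trimmed, purely illustrative), but it first has to learn that the
object behind the "Westmere" model is called "Westmere-x86_64-cpu":

-> { "execute": "device-list-properties",
     "arguments": { "typename": "Westmere-x86_64-cpu" } }
<- { "return": [ { "name": "aes", "type": "bool" },
                 { "name": "ds-cpl", "type": "bool" },
                 ... ] }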

Jirka



Re: [Qemu-devel] QMP stubs: how to return "not implemented" errors?

2016-10-03 Thread Jiri Denemark
On Mon, Oct 03, 2016 at 16:04:42 -0300, Eduardo Habkost wrote:
> Hi,
> 
> When adding new QMP commands that are implemented by
> arch-specific code, we have been adding stubs that report
> QERR_UNSUPPORTED (see stubs/arch-query-cpu-model-expansion.c for
> an example).
> 
> But we are using GenericError for that, and this prevents clients
> from reliably checking if the command is really implemented by
> the QEMU binary.
> 
> What should be the right solution for this? Some of the options I
> have considered are:
> 
> 1) Using CommandNotFound as the error class in the stubs. This
>sounds wrong because the command exists (it is present in
>query-commands and in the QAPI schema).
> 2) Creating a CommandNotImplemented error class. Simple to do,
>but it would require clients to make two separate checks,
>before concluding that the command is available (checking
>query-commands or query-qmp-schema, and then checking for
>CommandNotImplemented errors).

Both options require an extra step to check whether a particular command
is implemented or not. It would be highly appreciated if we didn't have
to go this way since it would seriously complicate the probing process.
We'd need to run each command with some artificial, but sane enough
arguments to bypass arguments checking code:

(QEMU) query-cpu-model-expansion type=full
{"error": {"class": "GenericError", "desc": "Invalid parameter type for
'model', expected: QDict"}}

(QEMU) query-cpu-model-expansion type=full model={'name':'host'}
{"error": {"class": "GenericError", "desc": "this feature or command is
not currently supported"}}

(QEMU) query-cpu-model-expansion type=full model={'name':'Penryn'}
{"error": {"class": "GenericError", "desc": "this feature or command is
not currently supported"}}
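
Meanwhile the command itself still shows up in query-commands (output
trimmed), so its mere presence there tells us nothing:

(QEMU) query-commands
{"return": [{"name": "query-cpu-model-expansion"},
{"name": "query-commands"}, ...]}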


> 3.1) Removing the command from query-commands and from the QAPI
>schema on binaries that don't implement the command.
>Needlessly complex?
> 3.2) Removing the unimplemented command from query-commands only
>(by calling qmp_disable_command()), but keeping it on the QAPI
>schema. I am not sure it's OK to do that. If it is, this
>sounds like the simplest solution.

These options are both acceptable to libvirt. So it really depends what
is acceptable to QEMU...

From my POV (which is quite ignorant to QEMU internals in this area),
there are a few other options:

- implementing a new QMP command to list unsupported commands
  (e.g., query-unsupported-commands)
- adding a flag to qmp-schema which would indicate whether a command is
  supported or not
- even a new stupid command that would take another command as an
  argument and return whether it's supported or not would be better than
  having to run each command individually with special arguments

I'm not saying these options are clean or doable, I'm just brainstorming
here.

Jirka



Re: [Qemu-devel] [PATCH v4 01/11] tests: Add test case for x86 feature parsing compatibility

2016-09-30 Thread Jiri Denemark
On Fri, Sep 30, 2016 at 09:55:33 +0200, Paolo Bonzini wrote:
> 
> 
> On 29/09/2016 23:14, Eduardo Habkost wrote:
> > + * "-foo" overrides "+foo"
> > + * "[+-]foo" overrides "foo=..."
> 
> Is this something that people are actually using?  Can we detect it and
> deprecate it in 2.8, and drop it in 2.9?

Libvirt uses -cpu Model,+foo,-bar style, but we do not mix mix -foo and
+foo, or even [+-]foo and foo= if this is what you asked.

Jirka



Re: [Qemu-devel] [PATCH v4 05/11] target-i386: Remove underscores from property names

2016-09-30 Thread Jiri Denemark
On Thu, Sep 29, 2016 at 18:14:53 -0300, Eduardo Habkost wrote:
> Instead of translating the feature name entries when adding
> property names, store the actual property names in the feature
> name array.
> 
> Signed-off-by: Eduardo Habkost 
> ---
> Changes series v3 -> v4:
> * New patch added to series
> ---
>  target-i386/cpu.c | 31 ---
>  1 file changed, 16 insertions(+), 15 deletions(-)
> 
> diff --git a/target-i386/cpu.c b/target-i386/cpu.c
> index 4eaec0e..7795a7c 100644
> --- a/target-i386/cpu.c
> +++ b/target-i386/cpu.c
> @@ -279,11 +279,11 @@ static FeatureWordInfo feature_word_info[FEATURE_WORDS] 
> = {
>  [FEAT_1_ECX] = {
>  .feat_names = {
>  "pni|sse3" /* Intel,AMD sse3 */, "pclmulqdq|pclmuldq", "dtes64", 
> "monitor",
> -"ds_cpl", "vmx", "smx", "est",
> +"ds-cpl", "vmx", "smx", "est",
>  "tm2", "ssse3", "cid", NULL,
>  "fma", "cx16", "xtpr", "pdcm",
> -NULL, "pcid", "dca", "sse4.1|sse4_1",
> -"sse4.2|sse4_2", "x2apic", "movbe", "popcnt",
> +NULL, "pcid", "dca", "sse4.1|sse4-1",
> +"sse4.2|sse4-2", "x2apic", "movbe", "popcnt",
>  "tsc-deadline", "aes", "xsave", "osxsave",
>  "avx", "f16c", "rdrand", "hypervisor",
>  },

It wasn't quite obvious to me whether this means we can't use the names
with underscores when talking to QEMU. So I tried it and apparently
underscores are just silently translated to dashes. It's backward
compatible this way. However, QEMU will always give us the names with
dashes, which means we have even more differences between libvirt's
feature names and QEMU's feature names. So assuming we'll have an
interface for querying supported CPU properties (and their aliases),
shouldn't the old underscore names be added as aliases? This way, we
could actually know that "ds-cpl" means "ds_cpl".

Jirka



Re: [Qemu-devel] [RFC 00/28] s390x CPU models: exposing features

2016-06-22 Thread Jiri Denemark
On Wed, Jun 22, 2016 at 09:54:51 +0200, David Hildenbrand wrote:
> > On Wed, Jun 22, 2016 at 09:34:49 +0200, David Hildenbrand wrote:
> > > I think the coffee didn't do its work already :) . I wanted to write that 
> > > we can
> > > _with_ this additional query. Meaning the involved overhead would be ok - 
> > > in my
> > > opinion for s390x.
> > > 
> > > What we could do to avoid one compare operation would be:
> > > 
> > > a) Expand the host model
> > > b) Expand the target model (because on s390x we could have migration 
> > > unsafe
> > > model)
> > > c) Work with the runnability information returned via 
> > > query-cpu-definitions
> > > 
> > > But as we have to do b) either way on s390x, we can directly do a compare
> > > operation. (which makes implementation a lot simpler, because libvirt then
> > > doesn't have to deal with any feature/model names).  
> > 
> > But why do you even need to do any comparison? Isn't it possible to let
> > QEMU do it when a domain starts? The thing is we should avoid doing
> > completely different things on each architecture.
> > 
> 
> Sure, QEMU will of course double check when starting the guest! So trying to
> start and failing is of course an option! So no check is needed if that is
> acceptable.

Yeah, I think it's the safest and easiest option now.

Jirka



Re: [Qemu-devel] [RFC 00/28] s390x CPU models: exposing features

2016-06-22 Thread Jiri Denemark
On Wed, Jun 22, 2016 at 09:34:49 +0200, David Hildenbrand wrote:
> I think the coffee didn't do its work already :) . I wanted to write that we 
> can
> _with_ this additional query. Meaning the involved overhead would be ok - in 
> my
> opinion for s390x.
> 
> What we could do to avoid one compare operation would be:
> 
> a) Expand the host model
> b) Expand the target model (because on s390x we could have migration unsafe
> model)
> c) Work with the runnability information returned via query-cpu-definitions
> 
> But as we have to do b) either way on s390x, we can directly do a compare
> operation. (which makes implementation a lot simpler, because libvirt then
> doesn't have to deal with any feature/model names).

But why do you even need to do any comparison? Isn't it possible to let
QEMU do it when a domain starts? The thing is we should avoid doing
completely different things on each architecture.

Jirka



Re: [Qemu-devel] [RFC 00/28] s390x CPU models: exposing features

2016-06-22 Thread Jiri Denemark
On Wed, Jun 22, 2016 at 08:51:40 +0200, David Hildenbrand wrote:
> > On Tue, Jun 21, 2016 at 17:33:09 -0300, Eduardo Habkost wrote:
> > > On Tue, Jun 21, 2016 at 07:01:44PM +0200, David Hildenbrand wrote:  
> > > I still don't think we want to set in stone that "the result the
> > > guest sees when using -cpu host" is always the same as "what the
> > > host supports running".
> > > 
> > > For example: let's assume a given architecture have two features
> > > (A and B) that are both supported by the host but can never be
> > > enabled together. For actual "-cpu host" usage, QEMU would have
> > > to choose between enabling A and B. For querying host
> > > capabilities, we still want to let management software know that
> > > either A or B are supported.  
> > 
> > What libvirt is really interested in is the guest CPU which would be
> > used with -cpu host. This is actually what I thought query-host-cpu was
> > all about. Perhaps because there's no difference for x86.
> 
> That's also what I had in mind first. Let me explain why they are not the same
> on s390x and why it is problematic (actually both types are not the same!):
> 
> 1. Certain CPU features are only valid for LPAR guests, they can't be
> virtualized for KVM guests (remember that we always have at least one level of
> virtualization on s390x). So we will never be able to provide these features 
> to
> a KVM guest, therefore we will never be able to completely mimic the real host
> model.
> 
> 2. We have a whole lot of unpublished features. We would have to include them 
> in
> the "real host model", but we can't. For the "host" model, this is not a
> problem, because we simply don't virtualize them. But the "real host model"
> would then vary between say QEMZ versions, which looks really strage, because
> in essance, QEMU/KVM looks like the wrong interface to query for a "real host
> model".
> 
> 3. We don't have the kernel interfaces in place to query the "real host 
> model".
> We can only query what we are able to virtualize. And that is indeed different
> compared to what I said previously.
> 
> And as I can't see any use case for s390x, we most probably will never be able
> to support the interface you are suggesting here. Which is fine, if it makes
> sense for x86.

Well, as I said I always thought about query-host-cpu as a way to get
the CPU configuration QEMU would provide with -cpu host. Thanks to this
discussion I realized its semantics is different and thus what I really
need is actually the command you suggested, i.e.,
query-cpu-model-expansion.

> > > 2) Requiring a running QEMU instance to run
> > >query-cpu-model-comparison
> > > 
> > > With my previous query-host-cpu proposal, the task of comparing
> > > the configuration requested by the user with host capabilities
> > > can be done directly by libvirt. This way, no extra QEMU instance
> > > needs to be run before starting a VM.  
> > 
> > I think we can just easily get around this by not comparing a guest CPU
> > to host (except for the explicit virConnectCompareCPU, which is not very
> > useful in its current form anyway).
> 
> If there is some flexible way around that, great. But I think we (s390x) could
> life without this additional query.

So if I understand correctly, you say you don't need the API to compare
guest CPU to host CPU, right? If so, that's exactly what I said too.

Jirka



Re: [Qemu-devel] [libvirt] [RFC 00/28] s390x CPU models: exposing features

2016-06-22 Thread Jiri Denemark
On Tue, Jun 21, 2016 at 18:22:30 -0300, Eduardo Habkost wrote:
> On Tue, Jun 21, 2016 at 11:09:49PM +0200, Jiri Denemark wrote:
> [...]
> > > 1) "query-cpu-model-expansion model=host" vs "query-host-cpu":
> > > 
> > > I still don't think we want to set in stone that "the result the
> > > guest sees when using -cpu host" is always the same as "what the
> > > host supports running".
> > > 
> > > For example: let's assume a given architecture have two features
> > > (A and B) that are both supported by the host but can never be
> > > enabled together. For actual "-cpu host" usage, QEMU would have
> > > to choose between enabling A and B. For querying host
> > > capabilities, we still want to let management software know that
> > > either A or B are supported.
> > 
> > What libvirt is really interested in is the guest CPU which would be
> > used with -cpu host. This is actually what I thought query-host-cpu was
> > all about. Perhaps because there's no difference for x86.
> 
> In that case, I think it makes sense to just extend
> query-cpu-definitions or use "query-cpu-model-expansion
> model=host" instead of a query-host-cpu command.
> 
> Probably query-cpu-model-expansion is better than just extending
> query-cpu-definitions, because it would allow the expansion of
> extra CPU options, like "host,migratable=off".

Yeah, this would be even better.
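
E.g. something along these lines (just a sketch, the exact argument
shape was still being designed at this point):

-> { "execute": "query-cpu-model-expansion",
     "arguments": { "type": "full",
                    "model": { "name": "host",
                               "props": { "migratable": false } } } }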

Jirka



Re: [Qemu-devel] [RFC 00/28] s390x CPU models: exposing features

2016-06-21 Thread Jiri Denemark
On Tue, Jun 21, 2016 at 17:33:09 -0300, Eduardo Habkost wrote:
> On Tue, Jun 21, 2016 at 07:01:44PM +0200, David Hildenbrand wrote:
> > > (CCing libvirt people)
> > > 
> > > On Tue, Jun 21, 2016 at 03:02:05PM +0200, David Hildenbrand wrote:
> > > > This is our second attempt to implement CPU models for s390x. We 
> > > > realized
> > > > that we also want to have features exposed via the CPU model. While 
> > > > doing
> > > > that we realized that we want to have a better interface for libvirt.  
> > > 
> > > Before getting into the details, I would like to clarify how the
> > > following could be accomplished using the new commands:
> > > 
> > > Example:
> > > 
> > > 1) User configures libvirt with:
> > >    <cpu match='exact'>
> > >      <model>Westmere</model>
> > >      <feature policy='require' name='aes'/>
> > >    </cpu>
> > > 2) libvirt will translate that to:
> > >"-cpu Westmere,+aes" or "-cpu Westmere,aes=on"
> > > 3) libvirt wants to know if "-cpu Westmere,aes=on" is usable in
> > >the current host, before trying to start the VM.
> > > 
> > > How exactly would this be done using the new commands?
> > 
> > Hi Eduardo,
> > 
> > thanks for having a look - highly appreciated that you actually map this
> > to libvirt requirements!
> > 
> > That would map to a compare operation between "host" and "Westmere,aes=on".
> > 
> > Host could at that point already be expanded by libvirt. Doesn't matter at 
> > that
> > point.
> > 
> > If the result is "identica"l or "superset", it is runnable. If the result is
> > "subset" or "incompatible", details about the responsible properties is
> > indicated. (I actually took that idea from your patch for indicating
> > runnability).
> 
> So, I have two worries about the proposal:
> 
> 
> 1) "query-cpu-model-expansion model=host" vs "query-host-cpu":
> 
> I still don't think we want to set in stone that "the result the
> guest sees when using -cpu host" is always the same as "what the
> host supports running".
> 
> For example: let's assume a given architecture have two features
> (A and B) that are both supported by the host but can never be
> enabled together. For actual "-cpu host" usage, QEMU would have
> to choose between enabling A and B. For querying host
> capabilities, we still want to let management software know that
> either A or B are supported.

What libvirt is really interested in is the guest CPU which would be
used with -cpu host. This is actually what I thought query-host-cpu was
all about. Perhaps because there's no difference for x86.

> 2) Requiring a running QEMU instance to run
>query-cpu-model-comparison
> 
> With my previous query-host-cpu proposal, the task of comparing
> the configuration requested by the user with host capabilities
> can be done directly by libvirt. This way, no extra QEMU instance
> needs to be run before starting a VM.

I think we can just easily get around this by not comparing a guest CPU
to host (except for the explicit virConnectCompareCPU, which is not very
useful in its current form anyway).

Jirka



Re: [Qemu-devel] [RFC 00/28] s390x CPU models: exposing features

2016-06-21 Thread Jiri Denemark
On Tue, Jun 21, 2016 at 13:44:31 -0300, Eduardo Habkost wrote:
> (CCing libvirt people)
> 
> On Tue, Jun 21, 2016 at 03:02:05PM +0200, David Hildenbrand wrote:
> > This is our second attempt to implement CPU models for s390x. We realized
> > that we also want to have features exposed via the CPU model. While doing
> > that we realized that we want to have a better interface for libvirt.
> 
> Before getting into the details, I would like to clarify how the
> following could be accomplished using the new commands:
> 
> Example:
> 
> 1) User configures libvirt with:
>    <cpu match='exact'>
>      <model>Westmere</model>
>      <feature policy='require' name='aes'/>
>    </cpu>
> 2) libvirt will translate that to:
>"-cpu Westmere,+aes" or "-cpu Westmere,aes=on"
> 3) libvirt wants to know if "-cpu Westmere,aes=on" is usable in
>the current host, before trying to start the VM.

While this is what libvirt currently does, I don't think it's necessary
to keep doing that... see below.

> > a) "query-cpu-model-expansion" - tell us what the "host" or a migration
> >unsafe model looks like. Either falling back to a stable model or
> >completely exposing all properties. We are interested in stable models.
> > b) "query-cpu-model-comparison" - tell us how two CPU models compare,
> > indicating which properties were responsible for the decision.
> > c) "query-cpu-model-baseline" - create a new model out of two models,
> > taking a requested level of stability into account.

This looks like it copies current libvirt APIs, which I think is not a
very good idea. Both CPU compare and baseline APIs in libvirt will need
to be changed (or rather new APIs with similar functionality added) to
become useful. I think we should first focus on making guest CPU
configuration work the way it should work. For that I think we need some
higher level commands.

Let me sum up what libvirt is doing (or will be doing) with guest
CPUs... Libvirt supports three guest CPU configuration modes:

- host-passthrough -- this is easy, it's just asking for "-cpu host" and
  no fancy commands are required.

- host-model -- for this we need to know what CPU configuration we need
  to ask for to get as close to the host CPU as possible while being
  able to ask for the exact same CPU on a different host (after
  migration). And we need to be able to ask for this at probing time
  (when libvirtd starts rather than when starting a new domain) using
  "-machine none,accel=kvm:tcg", that is without actually creating any
  guest CPU. This is what Eduardo's query-host-cpu QMP command is useful
  for. In x86 world we could just query the guest CPU after running QEMU
  with "-cpu host", but that's unusable with "-machine none", which is
  why another way of getting this info is needed.

- custom -- the XML specifies an exact guest CPU configuration. We don't
  really need to know if that exact CPU is runnable on the current host
  in advance, we can just try to start the domain, check if the guest
  CPU matches what we asked for, and we may kill QEMU if the CPU does
  not meet the specification. Of course, higher level management wants
  to know a set of host where it can run a given domain for scheduling
  purposes, but since they logically want to avoid tons of different CPU
  configs, they would just stick to the CPU models predefined by QEMU.
  Thus giving them a way of checking what CPU models can be run on a
  given host with a given QEMU (using the unavailable features stuff for
  query-cpu-definitions usable with "-machine none", see the example
  after this list) should be enough and it's even better than having to
  ask for every single CPU model, which is what they need to do now.
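
The query-cpu-definitions output mentioned in the last item could look
roughly like this (illustrative and trimmed; the actual models and
unavailable features obviously depend on the host and QEMU version):

-> { "execute": "query-cpu-definitions" }
<- { "return": [ { "name": "Westmere", "unavailable-features": [] },
                 { "name": "Haswell",
                   "unavailable-features": [ "hle", "rtm" ] },
                 ... ] }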

There are configuration options which are somewhere between host-model
and custom, but they don't bring more requirements in addition to what
both of them needs.

This was basically all about starting a new domain. When the domain
successfully starts, we need to make sure the guest CPU does not change
during save/restore or migration to another host. To achieve this, we
need the same checking we need for custom mode, i.e., whether the guest
CPU we got is what we asked for. In addition to this, we need a way to
ask QEMU what the guest CPU looks like so that we can store it in the
domain XML and ask for it during migration.

I think all of this should be pretty much architecture agnostic. Of
course the actual data would be quite different for each architecture,
but I believe the entry points could fit all. Or did I miss anything?

Jirka



Re: [Qemu-devel] [PATCH v2 0/6] Add runnability info to query-cpu-definitions

2016-06-21 Thread Jiri Denemark
On Mon, Jun 20, 2016 at 17:09:18 -0300, Eduardo Habkost wrote:
> 
> Ping? No other feedback on this?

The interface is fine from my point of view and I even have a working
libvirt code that consumes this.

Jirka



Re: [Qemu-devel] [PATCH v2 4/6] qmp: Add runnability information to query-cpu-definitions

2016-06-09 Thread Jiri Denemark
On Mon, Jun 06, 2016 at 17:05:41 -0300, Eduardo Habkost wrote:
> Extend query-cpu-definitions schema to allow it to return two new
> optional fields: "runnable" and "unavailable-features".
> "runnable" will tell if the CPU model can be run in the current
> host. "unavailable-features" will contain a list of CPU
> properties that are preventing the CPU model from running in the
> current host.

The commit message still talks about both "runnable" and
"unavailable-features" fields even though the former has been removed in
version 2 of the series.

Jirka



Re: [Qemu-devel] [PATCH RFC 0/8] cpus: make "-cpu cpux, features" global properties

2016-06-03 Thread Jiri Denemark
On Thu, Jun 02, 2016 at 22:44:49 +0200, David Hildenbrand wrote:
> > Current CLI option -cpu cpux,features serves as template
> > for all created cpus of type: cpux. However QEMU parses
> > "features" every time it creates a cpu instance and applies
> > them to it while doing parsing.
> > 
> > That doesn't work well with -device/device_add infrastructure
> > as it has no idea about cpu specific hooks that's used for
> > parsing "features".
> > In order to make -device/device_add utilize "-cpu features"
> > template convert it into a set of global properties, so that
> > every new CPU created will have them applied automatically.
> > 
> > That also allows to parse features only once, instread of
> > doing it for every CPU instance created.
> 
> While I think this makes sense for most cases, we (s390x) are
> currently working on a mechanism to compare and baseline cpu models via
> a qmp interface, to not have to replicate CPU models in libvirt
> every time we do some changes.

BTW, we don't replicate any change to QEMU CPU models in libvirt. We
merely add new CPU models that match the current QEMU definition.
Existing CPU models are never changed. And while the CPU models are
currently used to check compatibility with host CPU, we will stop doing
that in the near future and our CPU models will mostly be used for
detecting what the host CPU is.

Jirka



Re: [Qemu-devel] [libvirt] inconsistent handling of "qemu64" CPU model

2016-05-26 Thread Jiri Denemark
On Wed, May 25, 2016 at 23:13:24 -0600, Chris Friesen wrote:
> Hi,
> 
> If I don't specify a virtual CPU model, it appears to give me a "qemu64" CPU, 
> and /proc/cpuinfo in the guest instance looks something like this:
> 
> processor 0
> vendor_id GenuineIntel
> cpu family 6
> model 6
> model name: QEMU Virtual CPU version 2.2.0
> stepping: 3
> microcode: 0x1
> flags: fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pse36 clflush
> mmx fxsr sse sse2 syscall nx lm rep_good nopl pni vmx cx16 x2apic popcnt 
> hypervisor lahf_lm abm vnmi ept
> 
> 
> However, if I explicitly specify a custom CPU model of "qemu64" the instance 
> refuses to boot and I get a log saying:
> 
> libvirtError: unsupported configuration: guest and host CPU are not 
> compatible: 
> Host CPU does not provide required features: svmlibvirtError: unsupported 
> configuration: guest and host CPU are not compatible: Host CPU does not 
> provide 
> required features: svm

The qemu64 CPU model contains svm and thus libvirt will always consider
it incompatible with any Intel CPUs (which have vmx instead of svm). On
the other hand, QEMU by default ignores features that are missing in the
host CPU and has no problem using the qemu64 CPU; the guest just won't
see some of the features defined in the qemu64 model.

In your case, you should be able to use

    <cpu match='exact'>
      <model>qemu64</model>
      <feature policy='disable' name='svm'/>
    </cpu>

to get the same CPU model you'd get by default (if not, you may need to
also add a <feature policy='disable'/> element for any other feature
libvirt reports as missing).

Alternatively

    <cpu match='exact'>
      <model>qemu64</model>
      <feature policy='optional' name='svm'/>
    </cpu>

should work too (and it would be better in case you use it on an AMD
host).

But why would you even want to use the qemu64 CPU in a domain XML
explicitly? If you're fine with that CPU, just let QEMU use the default
one. If not, use
a CPU model that fits your host/needs better.

BTW, using qemu64 with TCG (i.e., domain type='qemu' as oppose to
type='kvm') is fine because libvirt won't check it against host CPU and
QEMU will emulate all features so you'd get even the features that host
CPU does not support.

Jirka

P.S. Kashyap is right, the issue he mentioned is not related at all to
your case.



Re: [Qemu-devel] [libvirt] [PATCH 7/9] qmp: Add runnability information to query-cpu-definitions

2016-05-12 Thread Jiri Denemark
On Wed, May 11, 2016 at 16:35:50 -0300, Eduardo Habkost wrote:
> # @CpuDefinitionInfo:
> #
> # Virtual CPU definition.
> #
> # @name: the name of the CPU definition
> # @runnable: #optional. whether the CPU model us usable with the

s/ us / is /

> #current machine and accelerator. Omitted if we don't
> #know the answer. (since 2.7)
> # @unavailable-features: List of attributes that prevent the CPU
> #model from running in the current host.
> #(since 2.7)
> #
> # @unavailable-features is a list of QOM property names that
> # represent CPU model attributes that prevent the CPU from running.
> # If the QOM property is read-only, that means the CPU model can
> # never run in the current host. If the property is read-write, it
> # means that it MAY be possible to run the CPU model in the current
> # host if that property is changed. Management software can use it
> # as hints to suggest or choose an alternative for the user, or
> # just to generate meaningful error messages explaining why the CPU
> # model can't be used.

Any chance this could be extended to provide data about every single
machine type rather than just the current one?

Jirka



Re: [Qemu-devel] [PATCH 8/9] target-i386: Use "-" instead of "_" on all feature names

2016-05-10 Thread Jiri Denemark
On Tue, May 10, 2016 at 17:19:51 +0200, Igor Mammedov wrote:
> On Fri,  6 May 2016 15:11:31 -0300
> Eduardo Habkost  wrote:
> 
> > This makes the feature name tables in feature_word_info all match
> > the actual QOM property names we use.
> > 
> > This will make the command-line interface more consistent,
> > allowing the QOM property names to be used as "-cpu" arguments
> > directly.
> 
> wouldn't that change output of '-cpu help',
> can it break libvirt?

Don't worry, for any QEMU >= 1.2.0 libvirt uses only QMP commands when
probing capabilities. And currently we don't even parse feature names
anywhere, we only use feature names when constructing the -cpu command
line, and that part seems to be covered by this patch.

Jirka



Re: [Qemu-devel] [libvirt] [PATCH 7/9] qmp: Add runnability information to query-cpu-definitions

2016-05-10 Thread Jiri Denemark
On Tue, May 10, 2016 at 10:23:16 +0200, Markus Armbruster wrote:
> Eduardo Habkost  writes:
> 
> > On Mon, May 09, 2016 at 09:20:15AM -0600, Eric Blake wrote:
> >> On 05/06/2016 12:11 PM, Eduardo Habkost wrote:
> >> > Extend query-cpu-definitions schema to allow it to return two new
> >> > optional fields: "runnable" and "unavailable-features".
> >> > "runnable" will tell if the CPU model can be run in the current
> >> > host. "unavailable-features" will contain a list of CPU
> >> > properties that are preventing the CPU model from running in the
> >> > current host.
> >> > 
> >> > Cc: David Hildenbrand 
> >> > Cc: Michael Mueller 
> >> > Cc: Christian Borntraeger 
> >> > Cc: Cornelia Huck 
> >> > Cc: Jiri Denemark 
> >> > Cc: libvir-l...@redhat.com
> >> > Signed-off-by: Eduardo Habkost 
> >> > ---
> >> >  qapi-schema.json | 10 +-
> >> >  1 file changed, 9 insertions(+), 1 deletion(-)
> >> > 
> >> > diff --git a/qapi-schema.json b/qapi-schema.json
> >> > index 54634c4..450e6e7 100644
> >> > --- a/qapi-schema.json
> >> > +++ b/qapi-schema.json
> >> > @@ -2948,11 +2948,19 @@
> >> >  # Virtual CPU definition.
> >> >  #
> >> >  # @name: the name of the CPU definition
> >> > +# @runnable: true if the CPU model is runnable using the current
> >> > +#machine and accelerator. Optional. Since 2.6.
> >> 
> >> You've missed 2.6.  Also, the typical spelling for a per-member
> >> designation is '(since 2.7)', not 'Since 2.7'
> >
> > Oops! I meant 2.7, and I didn't notice that it was not using the
> > typical format. I will fix it, thanks.
> >
> >> 
> >> Why is it optional? Would it hurt to always be present in qemu new
> >> enough to understand why it is needed?
> >
> > It is optional because not all architectures will return the
> > field. This series implements it only for x86.
> 
> Its documentation seems to suggest missing runnable has the same meaning
> as runnable: false.  Is that correct?

I think it would be a bug if missing runnable had the same meaning as
runnable=false. To maintain backward compatibility with older QEMU
missing runnable should mean the CPU may or may not be runnable, i.e.,
we don't know which and the user has to try to figure it out.

Jirka



Re: [Qemu-devel] [PULL 24/28] migration: Make events a capability

2015-07-07 Thread Jiri Denemark
On Tue, Jul 07, 2015 at 15:09:05 +0200, Juan Quintela wrote:
> Make check fails with events.  THis is due to the parser/lexer that it
> uses.  Just in case that they are more broken parsers, just only send
> events when there are capabilities.
> 
> Signed-off-by: Juan Quintela 
> Reviewed-by: Dr. David Alan Gilbert 
...
> diff --git a/qapi-schema.json b/qapi-schema.json
> index 106008c..1285b8c 100644
> --- a/qapi-schema.json
> +++ b/qapi-schema.json
> @@ -523,6 +523,9 @@
>  #  minimize migration traffic. The feature is disabled by default.
>  #  (since 2.4 )
>  #
> +# @events: generate events for each migration state change
> +#  (since 2.4 )
> +#
>  # @auto-converge: If enabled, QEMU will automatically throttle down the guest
>  #  to speed up convergence of RAM migration. (since 1.6)
>  #
> @@ -530,7 +533,7 @@
>  ##
>  { 'enum': 'MigrationCapability',
>'data': ['xbzrle', 'rdma-pin-all', 'auto-converge', 'zero-blocks',
> -   'compress'] }
> +   'compress', 'events'] }
> 

Perhaps I messed something up, but I don't see this new capability
documented in qmp-commands.hx and
{"execute":"query-migrate-capabilities"} does not report its status:
{
  "return": [
{
  "state": false,
  "capability": "xbzrle"
},
{
  "state": false,
  "capability": "rdma-pin-all"
},
{
  "state": false,
  "capability": "auto-converge"
},
{
  "state": false,
  "capability": "zero-blocks"
},
{
  "state": false,
  "capability": "compress"
}
  ]
}

Blindly setting it does not work either:
{
  "execute": "migrate-set-capabilities",
  "arguments": {
"capabilities": [
  {
"capability": "events",
"state": "true"
  }
]
  }
}

returns
{
  "error": {
"class": "GenericError",
"desc": "Invalid parameter 'events'"
  }
}
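
For comparison, once the capability is actually wired up, the same
command should simply succeed:

{
  "execute": "migrate-set-capabilities",
  "arguments": {
    "capabilities": [
      {
        "capability": "events",
        "state": true
      }
    ]
  }
}

returning

{
  "return": {}
}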

Jirka



Re: [Qemu-devel] [PATCH 0/2] target-i386: "custom" CPU model + script to dump existing CPU models

2015-06-24 Thread Jiri Denemark
On Tue, Jun 23, 2015 at 14:32:00 +0200, Andreas Färber wrote:
> On 08.06.2015 at 22:18, Jiri Denemark wrote:
> >> To help libvirt in the transition, a x86-cpu-model-dump script is provided,
> >> that will generate a config file that can be loaded using -readconfig, 
> >> based on
> >> the -cpu and -machine options provided in the command-line.
> > 
> > Thanks Eduardo, I never was a big fan of moving (or copying) all the CPU
> > configuration data to libvirt, but now I think it actually makes sense.
> > We already have a partial copy of CPU model definitions in libvirt
> > anyway, but as QEMU changes some CPU models in some machine types (and
> > libvirt does not do that) we have no real control over the guest CPU
> > configuration. While what we really want is full control to enforce
> > stable guest ABI.
> 
> That sounds like FUD to me. Any concrete data points where QEMU does not
> have a stable ABI for x86 CPUs? That's what we have the pc*-x.y machines
> for.

QEMU provides stable ABI for x86 CPUs only if you use -cpu ...,enforce.
Without enforce the CPU may change every time a domain is started or
migrated. A small example: let's say a CPU model called "Model" includes
feature "xyz"; when QEMU is started with -cpu Model (no enforce) on a
host which supports xyz, the guest OS will see a CPU with xyz, but when
you migrate it to a host which does not support xyz, QEMU will just
silently drop xyz. In other words, we need to use enforce to make sure
CPU ABI does not change.

But the problem is we can't use enforce because we don't know what a
specific CPU model looks like for a given machine type. Remember, while
libvirt allows users to explicitly ask for a specific CPU model and
features, it also has a mode where libvirt itself computes the right CPU
model and features. And this is impossible with enforce without us
knowing all details about CPU models.

So there are two possible ways to address this:
1. enhance QEMU to give us all we need
- either by providing commands that would do all the computations
  (CPU model comparison, intersections or denominator, something
  like -cpu best)
- or provide a way to probe for all (currently 700+) combinations of
  a CPU model and a machine type without actually having to start
  QEMU with each CPU and a machine type separately

2. manage CPU models in libvirt (aka -cpu custom)

During the past several years Eduardo tried to do (1) without getting
anywhere close to something that QEMU would be willing to accept. On the
other hand (2) is a pretty minimal change to QEMU and is more flexible
than (1) because it allows CPU model versions to be decoupled from
machine types (but this was already discussed a lot in the other emails
in this thread).

Jirka



Re: [Qemu-devel] [PATCH 0/2] target-i386: "custom" CPU model + script to dump existing CPU models

2015-06-08 Thread Jiri Denemark
On Mon, Jun 08, 2015 at 16:07:38 -0300, Eduardo Habkost wrote:
...
> libvirt can solve this problem partially by making sure every feature in a CPU
> model is explicitly configured, instead of (incorrectly) expecting that a 
> named
> CPU model will never change in QEMU. But this doesn't solve the problem
> completely, because it is still possible that new features unknown to libvirt
> get enabled in the default CPU model in future machine-types (that's very
> likely to happen when we introduce new KVM features, for example).
> 
> So, to make sure no new feature will be ever enabled without the knowledge of
> libvirt, add a "-cpu custom" mode, where no CPU model data is loaded at all,
> and everything needs to be configured explicitly using CPU properties. That
> means no CPU features will ever change depending on machine-type or 
> accelerator
> capabilities when using "-cpu custom".
> 
>   * * *
> 
> I know that this is basically the opposite of what we were aiming at in the
> last few month^Wyears, where we were struggling to implement probing 
> interfaces
> to expose machine-type-dependent data for libvirt. But, at least the fact that
> we could implement the new approach using a 9-line patch means we were still
> going in the right direction. :)
> 
> To help libvirt in the transition, a x86-cpu-model-dump script is provided,
> that will generate a config file that can be loaded using -readconfig, based 
> on
> the -cpu and -machine options provided in the command-line.

Thanks Eduardo, I never was a big fan of moving (or copying) all the CPU
configuration data to libvirt, but now I think it actually makes sense.
We already have a partial copy of CPU model definitions in libvirt
anyway, but as QEMU changes some CPU models in some machine types (and
libvirt does not do that) we have no real control over the guest CPU
configuration. While what we really want is full control to enforce
stable guest ABI.

I will summarize my ideas on how libvirt should be using -cpu custom and
send them to libvirt-list soon.

Jirka



Re: [Qemu-devel] [PATCH v4 0/3] Migration Events

2015-06-02 Thread Jiri Denemark
On Wed, May 20, 2015 at 17:35:21 +0200, Juan Quintela wrote:
> Hi
> 
> By popular demand (i.e. libvirt) we have another round of Migration events 
> patches.
> 
> Notice that althought this is v4, code has changed enough since previous 
> submission that was done from scratch.
> (Pervious submisinon was from Wed, 5 May 2010).
> 
> In this series:
> - We remove extra assignment of MIGRATION_STATUS_SETUP
> - Migration events for source (we recycle the state from query-migrate)
> - Migration events for target (we recycle previous ones)
> 
> As you can see, we win the greener price of the year.
> 
> This patches are on top of my previous series for optional sections.  You can 
> get the whole bunch at:
> 
> 
> https://github.com/juanquintela/qemu/commits/migration-events

I compiled QEMU from Juan's github repository ^ and tested it with
libvirt patches I just sent to the list
(https://www.redhat.com/archives/libvir-list/2015-June/msg00064.html)
and it all works very well.

Thanks Juan, you can count this as

Tested-by: Jiri Denemark 

Jirka



Re: [Qemu-devel] [PATCH 2/3] migration: create migration event

2015-05-29 Thread Jiri Denemark
On Wed, May 20, 2015 at 17:35:23 +0200, Juan Quintela wrote:
> We have one argument that tells us what event has happened.
> 
> Signed-off-by: Juan Quintela 
> ---
>  docs/qmp/qmp-events.txt | 16 
>  migration/migration.c   | 12 
>  qapi/event.json | 14 ++
>  3 files changed, 42 insertions(+)

Hi Juan,

I patched libvirt to be able to consume this event and it all seems to
work fine except for events sent after migrate_cancel command:

{"execute":"migrate_cancel","id":"libvirt-33"}
{"timestamp": {"seconds": 1432899178, "microseconds": 844907}, "event":
"MIGRATION", "data": {"status": "cancelling"}}
{"return": {}, "id": "libvirt-33"}
{"timestamp": {"seconds": 1432899178, "microseconds": 845625}, "event":
"MIGRATION", "data": {"status": "failed"}}
{"timestamp": {"seconds": 1432899178, "microseconds": 846432}, "event":
"MIGRATION", "data": {"status": "cancelled"}}

In other words, the status first changes to "failed" and only then
changes to the correct "cancelled". Can you fix this weird behavior in
QEMU or do we have to work around it in libvirt?

Jirka



Re: [Qemu-devel] [Question] why x2apic's set by default without host support(on Nehalem CPU).

2013-07-23 Thread Jiri Denemark
On Tue, Jul 23, 2013 at 10:44:48 +0800, Peter Huang(Peng) wrote:
> > libvirt's "host-passthrough" uses "-cpu host', and it "-cpu host"
> > enables every feature that can be enabled on the host.
> From my test results, I found that even when use host-passthrough mode, VM's
> cpu features are very different from host, this doesn't match what 
> host-passthrough
> mode's explanation.
> 
> libvirt's option exlanation:
> With this mode, the CPU visible to the guest should be exactly  the  same as  
> the host 
> CPU even in  the aspects  that libvirt  does not understand.

The libvirt documentation is what needs to be updated. While
host-passthrough asks for a CPU which is as close as possible to the
real host CPU, there are features that need special handling before they
can be provided to a guest, and if the hypervisor does not implement
that handling, it may just filter such a feature out. Also, if some
features can be efficiently provided to a guest even though the host CPU
does not offer them (x2apic is an example of such a feature), they may
be exposed to the guest anyway.

Jirka



Re: [Qemu-devel] [PATCH V7 1/5] runstate: introduce prelaunch-migrate state

2013-03-11 Thread Jiri Denemark
On Thu, Mar 07, 2013 at 08:37:17 -0700, Eric Blake wrote:
> On 03/07/2013 01:23 AM, Jason Wang wrote:
> > Sometimes, we need track the state when guest is just about to start after
> > migration. There's not a accurate state available which do this accurately
> > (consider qemu may started with -S in destination).
> 
> s/may/may be/
> 
> and yes, libvirt _always_ starts qemu with -S in the destination.
> 
> > 
> > So this patch introduces a new state prelaunch-migrate which just tracks 
> > this
> > state, it covers the case both w/ and w/o -S in destination. The first user 
> > of
> > this is the support of doing announce by guest.
> > 
> > Signed-off-by: Jason Wang 
> > ---
> >  migration.c  |3 +--
> >  qapi-schema.json |5 -
> >  vl.c |4 +++-
> >  3 files changed, 8 insertions(+), 4 deletions(-)
> 
> I'm not sure if this patch will have any negative effects on existing
> libvirt migration or state reporting; adding Jirka to cc.

I don't see any issues this patch could cause to libvirt. The only place
where we ask qemu for its current state is when we reconnect to existing
qemu processes after libvirtd restart. And the only thing we care about
is whether the guest is running or not. We use our own state information
to detect if we were migrating or not.

Jirka



Re: [Qemu-devel] [PATCH] qdev: DEVICE_DELETED event

2013-03-08 Thread Jiri Denemark
On Fri, Mar 08, 2013 at 09:50:55 +0100, Markus Armbruster wrote:
> Osier Yang  writes:
> 
> > I'm wondering if it could take a long time for the device_del to
> > complete (AFAIK from previous bugs, it can, though it should be
> > fine in most cases). If it's too long, it will be a problem
> > for management, because it looks like hanging. We can have a timeout
> > for the device_del in libvirt, but the problem is that the device_del
> > can still be in progress in qemu, which could cause inconsistency,
> > unless qemu has some command to cancel the device_del.
> 
> I'm afraid cancelling isn't possible, at least not for PCI.

I don't think we need anything like that. We just need the device
deletion API to return immediately without actually removing anything from
the domain definition (unless the device was really removed fast enough,
e.g., USB devices are removed before device_del returns) and then remove
the device from the domain definition when we get the event from QEMU or
when libvirtd reconnects to a domain and sees that a particular device is no
longer present. After all, devices may be removed even if we didn't ask
for it (when the removal is initiated by the guest OS). And we should also
provide a similar event for higher level apps.

The question is whether we can make use of our existing API or if we
need to introduce a new one. But that's of little relevance to
qemu-devel I guess.

Jirka



Re: [Qemu-devel] libvirt<->QEMU interfaces for CPU models

2013-03-01 Thread Jiri Denemark
On Fri, Mar 01, 2013 at 12:02:07 -0300, Eduardo Habkost wrote:
> On Fri, Mar 01, 2013 at 02:12:38PM +0100, Jiri Denemark wrote:
> > Definitely, we plan to start using "enforce" flag as soon as we have
> > better CPU probing interface with QEMU. Since libvirt does not currently
> > consult CPU specs with QEMU, some configurations in fact rely on QEMU
> > dropping features it can't provide. Of course, that's bad for several
> > reasons but we don't want such configurations to suddenly stop working.
> > We want to first fix the CPU specs libvirt creates so that we know they
> > will work with "enforce".
> 
> Also: more important than fixing the CPU definitions from libvirt, is to
> ask QEMU for host capabilities and CPU model definitions. The whole
> point of this is to solve the CPU model data duplication/synchronization
> problems between libvirt and QEMU.
> 
> Once you are able to query CPU model definitions on runtime, you don't
> even need to make cpu_map.xml agree with QEMU. You can simply ask QEMU
> how each model looks like, and remove/add features from the command-line
> as necessary, so the resulting VM matches what the user asked for.

Right, that's what I had in mind. By libvirt providing correct CPU
definitions (probably better called the right -cpu command line option)
I meant that we need to actually probe QEMU rather than making the
definitions on our own from cpu_map.xml and host CPUID.

> > > Limitation: no proper machine-friendly interface to report which 
> > > features
> > > are missing.
> > > 
> > > Workaround: See "querying for host capabilities" below.
> > 
> > I doubt we will be ready to start using "enforce" before the machine
> > friendly interface is available...
> 
> If you query for the "-cpu host" capabilities first and ensure all
> features from a CPU model are available, enforce is supposed to not fail.
> 
> I understand that a machine-friendly error reporting for "enforce" would
> be very useful, but note that if "enforce" fails, it is probably already
> too late for libvirt, and that means that what libvirt thinks about host
> capabilities and CPU models is already incorrect.

But we still need to know details about each CPU model so that we
can choose the right one. I was trying to say that by the time we have
this probing and can start using enforce, the machine friendly reporting
could be available as well and the limitation will be gone.


> > > == Future plans ==
> > > 
> > > It would be interesting to get rid of the requirement for a live QEMU 
> > > process
> > > (with a complete machine being created) to be already running.
> > 
> > Hmm, so is this complete machine needed even for getting CPU models from
> > qom-list-types or only for querying exact definitions using
> > query-cpu-definitions command?
> 
> Maybe "complete machine" isn't the right expression, here. What I mean is:
> AFAIK, it is not possible to get a QMP monitor without actually having a
> machine being created by QEMU (even if it is a machine that will never run).

OK, I thought there was something special needed. We already start QEMU
in such a way that we can communicate with it using the QMP monitor. So this
is just a question of using the right machine if we need to know the
details about a given CPU model.

> > > = Getting information about CPU models =
> > > 
> > > Requirement: libvirt uses the predefined CPU models from QEMU, but it 
> > > needs to
> > > be able to query for CPU model details, to find out how it can create a 
> > > VM that
> > > matches what was requested by the user.
> > > 
> > > Current problem: libvirt has a copy of the CPU model definitions on its
> > > cpu_map.xml file, and the copy can be out of sync in case CPU models in 
> > > QEMU
> > > change. libvirt also assumes that the set of features on each model is 
> > > always
> > > the same on all machine-types, which is not true.
> > > 
> > > Challenge: the resulting CPU features depend on lots of factors, 
> > > including
> > > the machine-type.
> > > 
> > > Workaround: start a paused VM and query for the CPU device 
> > > information
> > > after the CPU was created.
> 
> I just noticed another problem here, but this gave me an idea that would
> help solve the "enforce" error reporting problem:
> 
>   Problem: "qemu -machine <machine-type> -cpu <model>" will create CPU objects
>   where the CPU features are _already_ filtered base

Re: [Qemu-devel] libvirt<->QEMU interfaces for CPU models

2013-03-01 Thread Jiri Denemark
On Thu, Feb 21, 2013 at 11:58:18 -0300, Eduardo Habkost wrote:
> = Querying host capabilities =
> 
> Requirement: libvirt needs to know which feature can really be enabled, before
> it tries to start a VM, and before it tries to start a live-migration process.
> 
> The set of available capabilities depend on:
> 
>   • Host CPU (hardware) capabilities;
>   • Kernel capabilities (reported by GET_SUPPORTED_CPUID);
>   • QEMU capabilities;
>   • Specific configuration options (e.g. in-kernel IRQ chip is required for
> some features).

Actually, one more thing. Can any of these requirements change while a
host is up and QEMU is not upgraded? I believe host CPU capabilities
can only change when the host starts. Kernel capabilities are a bit less
clear since I guess they could possibly change when the kvm module is
unloaded and loaded back with different options. QEMU capabilities
should only change when a different version is installed. And the specific
configuration options are the most unclear to me. The reason I'm asking
is whether libvirt could cache CPU definitions (including all model
details) at runtime in the same way we currently cache QEMU capabilities,
such as the availability of specific QMP commands.

Jirka



Re: [Qemu-devel] libvirt<->QEMU interfaces for CPU models

2013-03-01 Thread Jiri Denemark
On Thu, Feb 21, 2013 at 11:58:18 -0300, Eduardo Habkost wrote:
> Hi,
> 
> After a long time trying to figure out the proper modelling inside QEMU,
> I believe the plans are now clearer in QEMU, so it's time to coordinate
> more closely with libvirt to try to make use of the new stuff.
> 
> I tried to enumerate the libvirt requirements and current problems, and
> how we should be able to solve those problems using the X86CPU
> subclasses and properties, on the following wiki page:
> 
> http://wiki.qemu.org/Features/CPUModels#Interfaces.2Frequirements_from_libvirt

> = Ensuring predictable set of guest features =
> 
> Requirement: libvirt needs to ensure all features required on the command-line
> are present and exposed to the guest.
> 
> Current problem: libvirt doesn't use the "enforce" flag so it can't guarantee
> that a given feature will be actually exposed to the guest.
> 
> Solution: use the "enforce" flag on the "-cpu" option.

Definitely, we plan to start using "enforce" flag as soon as we have
better CPU probing interface with QEMU. Since libvirt does not currently
consult CPU specs with QEMU, some configurations in fact rely on QEMU
dropping features it can't provide. Of course, that's bad for several
reasons but we don't want such configurations to suddenly stop working.
We want to first fix the CPU specs libvirt creates so that we know they
will work with "enforce".

> Limitation: no proper machine-friendly interface to report which features
> are missing.
> 
> Workaround: See "querying for host capabilities" below.

I doubt we will be ready to start using "enforce" before the machine
friendly interface is available...


> = Listing CPU models =
> 
> Requirement: libvirt needs to know which CPU models are available to be used
> with the "-cpu" option.
> 
> Current problem: libvirt relies on help output parsing for that.
> 
> Solution: use QMP qom-list-types command.
> 
> Dependency: X86CPU subclasses.
> Limitation: needs a live QEMU process for the query.

No problem, we already run QEMU and use several QMP commands to probe
its capabilities. And "qom-list-types" is actually one of them. To get
the list of CPU models, we would just call

{
    "execute": "qom-list-types",
    "arguments": {
        "implements": "X86CPU"
    }
}

right? What about other non-x86 architectures? Will we need to use a
different class name or is there a generic CPU class that could be used
universally?
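
If there is a generic base class, I would assume the very same query can be
reused with just a different type name, along these lines (the "cpu" type
name here is purely my guess, I have not verified it exists):

{
    "execute": "qom-list-types",
    "arguments": {
        "implements": "cpu"
    }
}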

> Solution: use QMP query-cpu-definitions command.
> 
> Limitation: needs a live QEMU process for the query.

IIUC, the result of this command will depend on the machine type and we
can't use the -M none machine we currently use for probing, right?

> == Future plans ==
> 
> It would be interesting to get rid of the requirement for a live QEMU process
> (with a complete machine being created) to be already running.

Hmm, so is this complete machine needed even for getting CPU models from
qom-list-types or only for querying exact definitions using
query-cpu-definitions command?

Actually, what is query-cpu-definitions supposed to return? Currently it
seems it's just the CPU model names rather than details about all CPU
models. From the command name, one would expect to get more than just
names.
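
For illustration, the exchange I see today looks roughly like this
(abbreviated), i.e. nothing beyond the names:

-> { "execute": "query-cpu-definitions" }
<- { "return": [ { "name": "SandyBridge" }, { "name": "Westmere" },
                 { "name": "Nehalem" }, ... ] }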


> = Getting information about CPU models =
> 
> Requirement: libvirt uses the predefined CPU models from QEMU, but it needs to
> be able to query for CPU model details, to find out how it can create a VM 
> that
> matches what was requested by the user.
> 
> Current problem: libvirt has a copy of the CPU model definitions on its
> cpu_map.xml file, and the copy can be out of sync in case CPU models in QEMU
> change. libvirt also assumes that the set of features on each model is always
> the same on all machine-types, which is not true.
> 
> Challenge: the resulting CPU features depend on lots of factors, including
> the machine-type.
> 
> Workaround: start a paused VM and query for the CPU device information
> after the CPU was created.
> 
> Solution: start a paused VM with no devices, but with the right
> machine-type and right CPU model. Use QMP QOM commands to query for CPU
> flags (especially the properties starting with the "f-" prefix).
> 
> Dependency: X86CPU feature properties ("f-*" properties).
> Limitation: requires a live QEMU process with the right machine-type/
> CPU-model to be started, to make the query.

This would be very useful for ensuring the guest sees the exact same CPU
after it's been migrated or restored from a stored state or a snapshot.
Should we make sure the guest will always see the same CPU even after
shutdown or is it ok if the guest CPU changes a bit on next boot, e.g.,
in case the host kernel was upgraded and is able to provide more
features?
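
Just so we are on the same page, I imagine the probing itself would be a
couple of QOM queries against such a throwaway paused VM, something like the
following (the object path and the exact property name are only my guesses
at how the proposed "f-*" properties would be exposed):

-> { "execute": "qom-list",
     "arguments": { "path": "/machine/unattached/device[0]" } }
<- { "return": [ ..., { "name": "f-x2apic", "type": "bool" }, ... ] }
-> { "execute": "qom-get",
     "arguments": { "path": "/machine/unattached/device[0]",
                    "property": "f-x2apic" } }
<- { "return": true }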

However, probing several CPU definitions for compatibility with
host/kernel/QEMU would be quite inefficient. Although I guess we should
be able to limit doing 

Re: [Qemu-devel] [QEMU PATCH 3/3] x86: pc: versioned CPU model names & compatibility aliases

2012-07-26 Thread Jiri Denemark
On Wed, Jul 25, 2012 at 15:18:43 -0300, Eduardo Habkost wrote:
> This adds version number to CPU model names on the "pc-"
> machine-types, so we can create new models with bug fixes while keeping
> compatibility when using older machine-types.
> 
> When naming the existing models, I used the last QEMU version where the
> model was changed (see summary below), but by coincidence every single
> one was changed on QEMU-1.1.
> 
> - Conroe, Penryn, Nehalem, Opteron_G1, Opteron_G2, Opteron_G3:
>   added on 0.13, changed on 1.1
> - Westmere, SandyBridge, Opteron_G4: added on 1.1

Hi Eduardo,

What models would be listed by "qemu-system-x86_64 -cpu ?"? Only non-versioned
model aliases or full names with versions or both?

Jirka



Re: [Qemu-devel] [PATCH qemu v3 0/6] -no-user-config option, move CPU models to /usr/share

2012-05-11 Thread Jiri Denemark
On Fri, May 11, 2012 at 08:37:33 -0500, Anthony Liguori wrote:
> On 05/11/2012 08:34 AM, Eduardo Habkost wrote:
> >
> > I just noticed it didn't get into -rc1 either. So, this is definitely
> > not going to be in qemu-1.1, I guess?
> 
> I've got this applied and am testing it right now for -rc2.

Oh, having this in rc2 would be awesome. I already tested this patch set (at
the time it was submitted) with my patches for libvirt which make use of it
and it all worked very nicely and I was finally able to use all the new fancy
CPUs :-) If it helps, you can count this as

Tested-by: Jiri Denemark 

Jirka



Re: [Qemu-devel] [PATCH v2 06/24] qdev: fix -device foo,?

2012-04-27 Thread Jiri Denemark
On Wed, Apr 11, 2012 at 23:30:24 +0200, Paolo Bonzini wrote:
> Since most property types do not have a parse property now, this was
> broken.  Fix it by looking at the setter instead.
> 
> Signed-off-by: Paolo Bonzini 
> ---
>  hw/qdev-monitor.c |4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/hw/qdev-monitor.c b/hw/qdev-monitor.c
> index 4783366..0acfc82 100644
> --- a/hw/qdev-monitor.c
> +++ b/hw/qdev-monitor.c
> @@ -157,7 +157,7 @@ int qdev_device_help(QemuOpts *opts)
>   * for removal.  This conditional should be removed along with
>   * it.
>   */
> -if (!prop->info->parse) {
> +if (!prop->info->set) {
>  continue;   /* no way to set it, don't show */
>  }
>  error_printf("%s.%s=%s\n", driver, prop->name,
> @@ -165,7 +165,7 @@ int qdev_device_help(QemuOpts *opts)
>  }
>  if (info->bus_info) {
>  for (prop = info->bus_info->props; prop && prop->name; prop++) {
> -if (!prop->info->parse) {
> +if (!prop->info->set) {
>  continue;   /* no way to set it, don't show */
>  }
>  error_printf("%s.%s=%s\n", driver, prop->name,

Oops, this patch (or an equivalent fix) doesn't seem to have ever been
applied, which makes libvirt quite unhappy with current qemu. Bootindex
cannot be used through libvirt, to name just one issue resulting from the
broken -device foo,? output.

Jirka



Re: [Qemu-devel] [PATCH 0/5] -no-user-config option, move CPU models to /usr/share

2012-04-19 Thread Jiri Denemark
On Wed, Apr 18, 2012 at 17:02:35 -0300, Eduardo Habkost wrote:
> This is the first try of the new -no-user-config option.
> 
> Patches 1 to 3 just move some code around, patch 4 just adds the new option
> without adding any new config file. Patch 5 finally creates a /usr/share/qemu
> /cpus-x86_64.conf file, with the CPU models we currently have on Qemu.
> 
> Reference to previous discussion:
>  - http://marc.info/?l=qemu-devel&m=133278877315665
> 
> This series depends on the following:
>   Subject: [PATCH v5 00/14] configure: --with-confsuffix option
>   Message-Id: <1334778950-18660-1-git-send-email-ehabk...@redhat.com>
> 
> 
> Eduardo Habkost (5):
>   move code to read default config files to a separate function
>   eliminate arch_config_name variable
>   move list of default config files to an array
>   implement -no-user-config command-line option
>   move CPU definitions to /usr/share/qemu/cpus-x86_64.conf

This is a huge improvement from libvirt's point of view. Thanks Eduardo. I'll
prepare corresponding libvirt patches.

Jirka



Re: [Qemu-devel] [libvirt] Modern CPU models cannot be used with libvirt

2012-03-26 Thread Jiri Denemark
On Sun, Mar 25, 2012 at 10:26:57 -0500, Anthony Liguori wrote:
> On 03/25/2012 10:16 AM, Avi Kivity wrote:
> > On 03/25/2012 04:59 PM, Anthony Liguori wrote:
> So how about:
> 
> 1) Load ['@SYSCONFDIR@/qemu/qemu.cfg', '@SYSCONFDIR@/qemu/target-@ARCH@.cfg',
>   '@DATADIR@/system.cfg', '@DATADIR@/system-@ARCH@.cfg']
> 
> 2) system-@ARCH@.cfg will contain:
> 
> [system]
> readconfig=@DATADIR@/target-@a...@-cpus.cfg
> readconfig=@DATADIR@/target-@a...@-machine.cfg
> 
> 3) -nodefconfig will not load any configuration files from DATADIR or 
> SYSCONFDIR.  -no-user-config will not load any configuration files from 
> SYSCONFDIR.
> 
...
> > The command line becomes unstable if you use -nodefconfig.
> 
> -no-user-config solves this but I fully expect libvirt would continue to use 
> -nodefconfig.

Libvirt uses -nodefaults -nodefconfig because it wants to fully control what
the virtual machine will look like (mainly in terms of devices). In other
words, we don't want any devices to just magically appear without libvirt
knowing about them. -nodefaults gets rid of default devices that are built
directly into qemu. Since users can put arbitrary devices or command line
options (such as enable-kvm) into qemu configuration files in @SYSCONFDIR@,
we need to avoid reading those files as well. Hence we use -nodefconfig.
However, we would still like qemu to read CPU definitions, machine types,
etc. once they become externally loaded configuration (or whatever we decide
to call it). That said, when CPU definitions are moved into @DATADIR@ and
-no-user-config is introduced, I don't see any reason for libvirt to keep
using -nodefconfig.

I actually like
-no-user-config
more than
-nodefconfig -readconfig @DATADIR@/...
since it would avoid additional magic to detect which files libvirt should
explicitly pass to -readconfig, but basically any approach that would allow us
to read files only from @DATADIR@ is much better than what we have with
-nodefconfig now.

Jirka



Re: [Qemu-devel] QEMU fstatfs(2) and libvirt SELinux policy

2012-03-09 Thread Jiri Denemark
Hi.

On Fri, Mar 09, 2012 at 11:32:47 +, Stefan Hajnoczi wrote:
...
> static __inline__ int platform_test_xfs_fd(int fd)
> {
> struct statfs buf;
> if (fstatfs(fd, &buf) < 0)
> return 0;
> return (buf.f_type == 0x58465342);  /* XFSB */
> }
> 
> In other words, XFS detection will fail when SELinux is enabled.
> 
> I'm not familiar with libvirt's use of SELinux.  Can someone explain
> if we need to expand the policy in libvirt and how to do that?

Actually, there is no SELinux policy in libvirt. Libvirt merely uses an
appropriate security context when running qemu processes. The rules about
what such processes can do and what they are forbidden to do are described
in the SELinux policy, which is provided as a separate package (or packages
on some distros). So it's this policy (the selinux-policy package on Fedora
based distros) which would need to be expanded. Thus it should be negotiated
with the SELinux policy maintainers whether they are willing to allow the
svirt_t domain to call fstatfs.

Jirka



Re: [Qemu-devel] [libvirt] Qemu, libvirt, and CPU models

2012-03-08 Thread Jiri Denemark
On Wed, Mar 07, 2012 at 19:26:25 -0300, Eduardo Habkost wrote:
> Awesome. So, if Qemu and libvirt disagrees, libvirt will know that and
> add the necessary flags? That was my main worry. If disagreement between
> Qemu and libvirt is not a problem, it would make things much easier.
> 
> ...but:
> 
> Is that really implemented? I simply don't see libvirt doing that. I see
> code that calls "-cpu ?" to list the available CPU models, but no code
> calling "-cpu ?dump", or parsing the Qemu CPU definition config file. I
> even removed some random flags from the Nehalem model on my machine
> (running Fedora 16), and no additional flags were added.

Right, currently we only detect whether Qemu knows the requested CPU model
and use another one if not. We should really start using something like
-cpu ?dump. However, since qemu may decide to change parts of the model
according to, e.g., machine type, we would need something more dynamic.
Something like: "hey Qemu, this is the machine type and CPU model we want to
use, these are the features we want in this model, and we also want a few
additional features; please tell us what the resulting CPU configuration is
(if it is even possible to deliver such a CPU on the current host)". And the
result would be a complete CPU model, which may of course be different from
what qemu's configuration file says. We could then use the result to update
the domain XML (in a way similar to how we handle machine types) so that we
can guarantee the guest will always see the same CPU. Once the CPU is
updated, we could just check with Qemu if it can provide such a CPU and
start (or refuse to start) the domain. Does that seem reasonable?
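
To make the idea more concrete, such an exchange could look something like
this (both the command name and the reply layout are completely made up for
illustration, nothing like this exists in qemu today):

-> { "execute": "x-expand-cpu-model",
     "arguments": { "machine": "pc-1.0", "model": "Nehalem",
                    "features": [ "x2apic" ] } }
<- { "return": { "model": "Nehalem",
                 "features": [ "sse2", "cx16", "x2apic", ... ] } }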

> > We could go one step further and just write
> > out a cpu.conf file that we load in QEMU with -loadconfig.
> 
> Sounds good. Anyway, I want to make everything configurable on the
> cpudef config file configurable on the command-line too, so both options
> (command-line or config file) would work.

I'd be afraid of hitting the command line length limit if we specified all CPU
details in it :-)

> So, it looks like either I am missing something on my tests or libvirt
> is _not_ probing the Qemu CPU model definitions to make sure libvirt
> gets all the features it expects.
> 
> Also, I would like to ask if you have suggestions to implement
> the equivalent of "-cpu ?dump" in a more friendly and extensible way.
> Would a QMP command be a good alternative? Would a command-line option
> with json output be good enough?

I quite like the possible solution Anthony (or perhaps someone else) suggested
some time ago (though my memory of it may be imperfect): qemu could provide a
command line option that would take QMP command(s) and print the QMP
response(s) on stdout. We could use this interface for all kinds of probes
with easily parsed output.

> (Do we have any case of capability-querying being made using QMP before
> starting any actual VM, today?)

Not really. We only query QMP for the available QMP commands that we can
use later on when the domain is running.

> So, about the above: the cases where libvirt thinks a feature is
> available but Qemu knows it is not available are sort-of OK today,
> because Qemu would simply refuse to start and an error message would be
> returned to the user.

Really? In my experience qemu just ignored the feature it didn't know about
without any error message and started the domain happily. It might be because
libvirt doesn't use anything like -cpu ...,check or whatever is needed to make
it fail. However, I think we should fix it.

> But what about the features that are not available on the host CPU,
> libvirt will think it can't be enabled, but that _can_ be enabled?
> x2apic seems to be the only case today, but we may have others in the
> future.

I think qemu could tell us about those features during the probe phase (my
first paragraph) and we would either use them with policy='force' or something
similar.

Jirka



Re: [Qemu-devel] Modern CPU models cannot be used with libvirt

2011-12-15 Thread Jiri Denemark
On Thu, Dec 15, 2011 at 08:58:55 -0600, Anthony Liguori wrote:
> Pass '-readconfig /etc/qemu/target-x86_64.conf' to pick up those models and 
> if 
> you are absolutely insistent on not giving the user any ability to change 
> things 
> on their own, cp the file from qemu.git into libvirt.git and install it in a 
> safe place.

Ah, this looks like a good idea (and we could even generate that file
dynamically if we add support for family/stepping/... and other things that we
do not model now). However, separating these definitions from qemu may result
in incompatibilities with older qemu versions, I guess mainly because our
configuration file would mention a CPU feature that an installed qemu version
doesn't understand. Currently, qemu seems to just ignore such a feature
(although it prints an error) and continues happily without it. Is there
any way for us to ask qemu which CPU features it knows about so that we could
avoid using CPU models which include features qemu doesn't understand?

Jirka



[Qemu-devel] Modern CPU models cannot be used with libvirt

2011-12-15 Thread Jiri Denemark
Hi,

Recently I realized that all modern CPU models defined in
/etc/qemu/target-x86_64.conf are useless when qemu is used through libvirt.
That's because we start qemu with -nodefconfig which results in qemu ignoring
that file with CPU model definitions. We have a very good reason for using
-nodefconfig because we need to control the ABI presented to a guest OS and we
don't want any configuration file that can contain lots of things including
device definitions to be read by qemu. However, we would really like the new
CPU models to be understood by qemu even if used through libvirt. What would
be the best way to solve this?

I suspect this could have been already discussed in the past but obviously a
workable solution was either not found or just not implemented.

Jirka



Re: [Qemu-devel] libvirt doesn't work with qemu 1.0

2011-12-02 Thread Jiri Denemark
On Fri, Dec 02, 2011 at 08:50:11 -0600, Anthony Liguori wrote:
> On 12/02/2011 08:21 AM, Gerd Hoffmann wrote:
> >Hi,
> >
> > $subject says all.  The error message is:
> >
> > error: internal error cannot parse /home/kraxel/bin/qemu-default version
> > number in 'QEMU emulator version 1.0, Copyright (c) 2003-2008 Fabrice
> > Bellard'
> 
> Parsing help output for the version number is a really bad idea.  
> query-version 
> in QMP does the right thing:
> 
> (QEMU) query-version
> {u'return': {u'qemu': {u'micro': 0, u'major': 1, u'minor': 0}, u'package': 
> ''}}

Which of course only works with new enough QEMU that is known to support QMP
(which BTW we detect by checking the version number). Anyway, the fix is easy,
we just need to assume micro is 0 if it's missing instead of requiring it to
be present. A fix will come soon.

Jirka



Re: [Qemu-devel] [PATCH STABLE] build: Move QEMU_INCLUDES before QEMU_CFLAGS

2011-08-10 Thread Jiri Denemark
On Wed, Aug 10, 2011 at 16:04:15 +0300, Avi Kivity wrote:
> On 08/10/2011 01:04 PM, Jiri Denemark wrote:
> > This patch fixes the build when any of the include paths from QEMU_CFLAGS
> > contains a header file with a name similar to a header file in qemu
> > sources. I hit it with error.h included by qapi/qapi-types-core.h. GCC
> > decided to use /usr/include/alsa/error.h instead of qemu's error.h.
> >
> > Signed-off-by: Jiri Denemark
> > ---
> >   rules.mak |8 
> >   1 files changed, 4 insertions(+), 4 deletions(-)
> >
> > diff --git a/rules.mak b/rules.mak
> > index 612ae37..1a2622c 100644
> > --- a/rules.mak
> > +++ b/rules.mak
> > @@ -15,21 +15,21 @@ MAKEFLAGS += -rR
> >   QEMU_DGFLAGS += -MMD -MP -MT $@ -MF $(*D)/$(*F).d
> >
> >   %.o: %.c
> > -   $(call quiet-command,$(CC) $(QEMU_CFLAGS) $(QEMU_INCLUDES) 
> > $(QEMU_DGFLAGS) $(CFLAGS) -c -o $@ $<,"  CC$(TARGET_DIR)$@")
> > +   $(call quiet-command,$(CC) $(QEMU_INCLUDES) $(QEMU_CFLAGS) 
> > $(QEMU_DGFLAGS) $(CFLAGS) -c -o $@ $<,"  CC$(TARGET_DIR)$@")
> >
> >
> 
> Shouldn't we use -isystem instead of -I for system headers?

That would be ideal but unfortunately QEMU_CFLAGS also contains the output of
pkg-config --cflags, which uses -I for header paths. We would need to convert
those to -isystem. And is -isystem even portable to other compilers (in case we
care about that)?

IMHO just moving the paths to qemu sources first is easier and the result is
the same.

Jirka



[Qemu-devel] [PATCH STABLE] build: Move QEMU_INCLUDES before QEMU_CFLAGS

2011-08-10 Thread Jiri Denemark
This patch fixes the build when any of the include paths from QEMU_CFLAGS
contains a header file with a name similar to a header file in qemu
sources. I hit it with error.h included by qapi/qapi-types-core.h. GCC
decided to use /usr/include/alsa/error.h instead of qemu's error.h.

Signed-off-by: Jiri Denemark 
---
 rules.mak |8 
 1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/rules.mak b/rules.mak
index 612ae37..1a2622c 100644
--- a/rules.mak
+++ b/rules.mak
@@ -15,21 +15,21 @@ MAKEFLAGS += -rR
 QEMU_DGFLAGS += -MMD -MP -MT $@ -MF $(*D)/$(*F).d
 
 %.o: %.c
-   $(call quiet-command,$(CC) $(QEMU_CFLAGS) $(QEMU_INCLUDES) 
$(QEMU_DGFLAGS) $(CFLAGS) -c -o $@ $<,"  CC$(TARGET_DIR)$@")
+   $(call quiet-command,$(CC) $(QEMU_INCLUDES) $(QEMU_CFLAGS) 
$(QEMU_DGFLAGS) $(CFLAGS) -c -o $@ $<,"  CC$(TARGET_DIR)$@")
 
 ifeq ($(LIBTOOL),)
 %.lo: %.c
@echo "missing libtool. please install and rerun configure"; exit 1
 else
 %.lo: %.c
-   $(call quiet-command,libtool --mode=compile --quiet --tag=CC $(CC) 
$(QEMU_CFLAGS) $(QEMU_INCLUDES) $(QEMU_DGFLAGS) $(CFLAGS) -c -o $@ $<,"  lt CC 
$@")
+   $(call quiet-command,libtool --mode=compile --quiet --tag=CC $(CC) 
$(QEMU_INCLUDES) $(QEMU_CFLAGS) $(QEMU_DGFLAGS) $(CFLAGS) -c -o $@ $<,"  lt CC 
$@")
 endif
 
 %.o: %.S
-   $(call quiet-command,$(CC) $(QEMU_CFLAGS) $(QEMU_INCLUDES) 
$(QEMU_DGFLAGS) $(CFLAGS) -c -o $@ $<,"  AS$(TARGET_DIR)$@")
+   $(call quiet-command,$(CC) $(QEMU_INCLUDES) $(QEMU_CFLAGS) 
$(QEMU_DGFLAGS) $(CFLAGS) -c -o $@ $<,"  AS$(TARGET_DIR)$@")
 
 %.o: %.m
-   $(call quiet-command,$(CC) $(QEMU_CFLAGS) $(QEMU_INCLUDES) 
$(QEMU_DGFLAGS) $(CFLAGS) -c -o $@ $<,"  OBJC  $(TARGET_DIR)$@")
+   $(call quiet-command,$(CC) $(QEMU_INCLUDES) $(QEMU_CFLAGS) 
$(QEMU_DGFLAGS) $(CFLAGS) -c -o $@ $<,"  OBJC  $(TARGET_DIR)$@")
 
 LINK = $(call quiet-command,$(CC) $(QEMU_CFLAGS) $(CFLAGS) $(LDFLAGS) -o $@ 
$(1) $(LIBS),"  LINK  $(TARGET_DIR)$@")
 
-- 
1.7.6




Re: [Qemu-devel] [patch 4/4] QEMU live block copy

2011-06-16 Thread Jiri Denemark
On Wed, Jun 15, 2011 at 14:14:07 -0300, Marcelo Tosatti wrote:
> Index: qemu-block-copy/qmp-commands.hx
> ===
> --- qemu-block-copy.orig/qmp-commands.hx
> +++ qemu-block-copy/qmp-commands.hx
...
>  SQMP
> +query-block-copy
> +-
> +
> +Live block copy status.
> +
> +Each block copy instance information is stored in a json-object and the 
> returned
> +value is a json-array of all instances.
> +
> +Each json-object contains the following:
> +
> +- "device": device name (json-string)
> +- "status": block copy status (json-string)
> +- Possible values: "active", "failed", "mirrored", "completed", meaning:
> +- failed: block copy failed.
> +- stopped: block copy stopped.
> +- active: block copy active, copying to destination 
> image.
> +- mirrored: block copy active, finished copying to 
> destination
> +  image, writes are mirrored.
> +- completed: block copy completed.
> +
> +- "info": A json-object with the statistics information, if status is 
> "active":
> +- "percentage": percentage completed (json-int)
> +
> +Example:
> +
> +Block copy for "ide1-hd0" active and block copy for "ide1-hd1" failed:
> +
> +-> { "execute": "query-block-copy" }
> +<- {
> +  "return":[
> +{"device":"ide1-hd0",
> +"status":"active",
> +"info":{
> +   "percentage":23,
> +}
> +},
> +{"device":"ide1-hd1",
> + "status":"failed"
> +}
> +  ]
> +   }
> +
> +EQMP

This documentation doesn't reflect the changes to progress reporting you made
in the code.

> Index: qemu-block-copy/docs/block_copy.txt
> ===
> --- /dev/null
> +++ qemu-block-copy/docs/block_copy.txt
...
> +Migration
> +=
> +
> +It is necessary to specify active block copy instance in the destination
> +VM before migration is performed. Example:
> +
> +1) start VM in incoming mode.
> +2) for each active block copy instance on the source, run:
> +(qemu) block_copy device /path/to/image.dst [-i] -m
> +
> +

I guess I'm a bit behind but what exactly can this migration support in
block_copy be used for?

Jirka



Re: [Qemu-devel] [patch 6/7] QEMU live block copy (update)

2011-06-07 Thread Jiri Denemark
On Mon, Jun 06, 2011 at 14:03:44 -0300, Marcelo Tosatti wrote:
...
>  SQMP
> +query-block-copy
> +-
> +
> +Live block copy status.
> +
> +Each block copy instance information is stored in a json-object and the 
> returned
> +value is a json-array of all instances.
> +
> +Each json-object contains the following:
> +
> +- "device": device name (json-string)
> +- "status": block copy status (json-string)
> +- Possible values: "active", "failed", "mirrored", "completed", meaning:
> +- failed: block copy failed.
> +- active: block copy active, copying to destination 
> image.
> +- mirrored: block copy active, finished copying to 
> destination
> +  image, writes are mirrored.
> +- completed: block copy completed.
> +
> +- "info": A json-object with the statistics information, if status is 
> "active":
> +- "percentage": percentage completed (json-int)
> +
> +Example:
> +
> +Block copy for "ide1-hd0" active and block copy for "ide1-hd1" failed:
> +
> +-> { "execute": "query-block-copy" }
> +<- {
> +  "return":[
> +{"device":"ide1-hd0",
> +"status":"active",
> +"info":{
> +   "percentage":23,
> +}
> +},

What about using the same form of progress reporting as used by query-migrate?
That is, instead of

"info":{
"percentage":23
}

the following would be reported:

"disk":{
"transferred":123,
"remaining":123,
"total":246
}

One can trivially compute the percentage from that, but it's impossible to get
this kind of data when only a percentage is reported. And the total can even
change over time if needed (just like it changes during migration).
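
For reference, this is roughly the shape query-migrate already uses for RAM
progress (values made up):

-> { "execute": "query-migrate" }
<- { "return": { "status": "active",
                 "ram": { "transferred": 123, "remaining": 123,
                          "total": 246 } } }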

> +{"device":"ide1-hd1",
> + "status":"failed"
> +}

Is there any way we can get the exact error which made it fail, such as EPERM
or ENOSPC?

> +  ]
> +   }
> +
> +EQMP

Jirka



Re: [Qemu-devel] QMP: RFC: I/O error info & query-stop-reason

2011-06-02 Thread Jiri Denemark
On Thu, Jun 02, 2011 at 08:08:35 -0500, Anthony Liguori wrote:
> On 06/02/2011 04:06 AM, Daniel P. Berrange wrote:
> >>> B. query-stop-reason
> >>> 
> >>>
> >>> I also have a simple solution for item 2. The vm_stop() accepts a reason
> >>> argument, so we could store it somewhere and return it as a string, like:
> >>>
> >>> ->   { "execute": "query-stop-reason" }
> >>> <- { "return": { "reason": "user" } }
> >>>
> >>> Valid reasons could be: "user", "debug", "shutdown", "diskfull" (hey,
> >>> this should be "ioerror", no?), "watchdog", "panic", "savevm", "loadvm",
> >>> "migrate".
> >>>
> >>> Also note that we have a STOP event. It should be extended with the
> >>> stop reason too, for completeness.
> >>
> >>
> >> Can we just extend query-block?
> >
> > Primarily we want 'query-stop-reason' to tell us what caused the VM
> > CPUs to stop. If that reason was 'ioerror', then 'query-block' could
> > be used to find out which particular block device(s) caused the IO
> > error to occurr&  get the "reason" that was in the BLOCK_IO_ERROR
> > event.
> 
> My concern is that we're over abstracting here.  We're not going to add 
> additional stop reasons in the future.
> 
> Maybe just add an 'io-error': True to query-state.

Sure, adding a new field to the query-status response (which I assume is what
"query-state" refers to) would work as well. And it seems like a good idea to
me since one already needs to call query-status to check whether the CPUs are
stopped or not, so it makes sense to incorporate the additional information
there as well. And if you want to be safe for the future, the new field
doesn't have to be a boolean 'io-error' but it can be the string 'reason'
which Luiz suggested above.
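
Something along these lines, i.e. the current query-status reply with one
extra field (the 'reason' value is just a sketch of the proposal, not
anything query-status returns today):

-> { "execute": "query-status" }
<- { "return": { "running": false, "singlestep": false,
                 "reason": "io-error" } }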

Jirka



Re: [Qemu-devel] [PATCH] configure: Fix spice probe

2011-01-24 Thread Jiri Denemark
On Mon, Jan 24, 2011 at 15:17:17 +0100, Jiri Denemark wrote:
> On Mon, Jan 24, 2011 at 15:01:27 +0100, Gerd Hoffmann wrote:
> > On 01/24/11 13:20, Jiri Denemark wrote:
> > > From: Jiri Denemark
> > >
> > > Non-existent $pkgconfig instead of $pkg_config was used when configure
> > > probes for spice availability.
> > 
> > What tree you are looking at?  It _is_ $pkgconfig in mine ...
> 
> Yes, it is. And that's the problem, since $pkgconfig is never set inside the
> configure script. However, $pkg_config is set and used all over the script, so
> this patch makes the spice probe use the correct $pkg_config.

Looking at git://git.qemu.org/qemu.git tree.

Jirka



Re: [Qemu-devel] [PATCH] configure: Fix spice probe

2011-01-24 Thread Jiri Denemark
On Mon, Jan 24, 2011 at 15:01:27 +0100, Gerd Hoffmann wrote:
> On 01/24/11 13:20, Jiri Denemark wrote:
> > From: Jiri Denemark
> >
> > Non-existent $pkgconfig instead of $pkg_config was used when configure
> > probes for spice availability.
> 
> What tree you are looking at?  It _is_ $pkgconfig in mine ...

Yes, it is. And that's the problem, since $pkgconfig is never set inside the
configure script. However, $pkg_config is set and used all over the script, so
this patch makes the spice probe use the correct $pkg_config.

Jirka



[Qemu-devel] [PATCH] configure: Fix spice probe

2011-01-24 Thread Jiri Denemark
From: Jiri Denemark 

Non-existent $pkgconfig instead of $pkg_config was used when configure
probes for spice availability.

Signed-off-by: Jiri Denemark 
---
 configure |6 +++---
 1 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/configure b/configure
index 210670c..dc469b2 100755
--- a/configure
+++ b/configure
@@ -2207,9 +2207,9 @@ if test "$spice" != "no" ; then
 #include 
 int main(void) { spice_server_new(); return 0; }
 EOF
-  spice_cflags=$($pkgconfig --cflags spice-protocol spice-server 2>/dev/null)
-  spice_libs=$($pkgconfig --libs spice-protocol spice-server 2>/dev/null)
-  if $pkgconfig --atleast-version=0.5.3 spice-server >/dev/null 2>&1 && \
+  spice_cflags=$($pkg_config --cflags spice-protocol spice-server 2>/dev/null)
+  spice_libs=$($pkg_config --libs spice-protocol spice-server 2>/dev/null)
+  if $pkg_config --atleast-version=0.5.3 spice-server >/dev/null 2>&1 && \
  compile_prog "$spice_cflags" "$spice_libs" ; then
 spice="yes"
 libs_softmmu="$libs_softmmu $spice_libs"
-- 
1.7.4.rc2




[Qemu-devel] [PATCH] Fix CPU topology initialization

2010-01-05 Thread Jiri Denemark
Late initialization of the CPU topology in CPUState prevents KVM guests from
actually seeing the topology.

Signed-off-by: Jiri Denemark 
---
 vl.c |8 
 1 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/vl.c b/vl.c
index e881e45..a03d7a6 100644
--- a/vl.c
+++ b/vl.c
@@ -3484,10 +3484,10 @@ void qemu_init_vcpu(void *_env)
 {
 CPUState *env = _env;
 
-if (kvm_enabled())
-kvm_init_vcpu(env);
 env->nr_cores = smp_cores;
 env->nr_threads = smp_threads;
+if (kvm_enabled())
+kvm_init_vcpu(env);
 return;
 }
 
@@ -3813,12 +3813,12 @@ void qemu_init_vcpu(void *_env)
 {
 CPUState *env = _env;
 
+env->nr_cores = smp_cores;
+env->nr_threads = smp_threads;
 if (kvm_enabled())
 kvm_start_vcpu(env);
 else
 tcg_init_vcpu(env);
-env->nr_cores = smp_cores;
-env->nr_threads = smp_threads;
 }
 
 void qemu_notify_event(void)
-- 
1.6.6