No virtio devices in SeaBIOS VMs

2024-02-28 Thread Dario Faggioli
Hello everyone,

With QEMU 8.2, guests that:
 - use SeaBIOS
 - use something other than "-cpu host" OR don't use "host-phys-bits=on"
 - have more than 2815 MB of RAM

have problems with their virtio devices and, hence, malfunction in
various ways (e.g., if they're using a virtio disk, they don't find it,
and the VM does not boot).

This broke suddenly, as soon as we updated QEMU to 8.2.0 in
openSUSE, and we got some bug reports about it. E.g., this one [1]
includes some logs and info, but I can provide more if helpful.

I also tried master (as of last week, rather than just 8.2), and the
problem was still there.

I've then bisected it to SeaBIOS commit
96a8d130a8c2e908e357ce62cd713f2cc0b0a2eb ("be less conservative with
the 64bit pci io window"). In fact, falling back to an earlier SeaBIOS
version, before that commit, or even just reverting it [2], solves the
issue.

UEFI guests seem not to be affected in any way, regardless of the
amount of RAM or the CPU model (well, of course, since it's a SeaBIOS
commit! :-D What I mean is that there seems to be nothing in edk2 that
induces the same behavior).

One way to work around this (besides switching to UEFI or to -cpu host)
is to turn on host-phys-bits, e.g., with '' in the XML.
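For reference, this is roughly what the workaround looks like. A sketch, assuming a recent enough libvirt (the `<maxphysaddr mode='passthrough'/>` element, which maps to host-phys-bits=on, is available since libvirt 8.7.0); the CPU model is just an example:

```xml
<!-- libvirt domain XML sketch: have QEMU expose the host's physical
     address bits to the guest (equivalent to host-phys-bits=on) -->
<cpu mode='custom' match='exact'>
  <model fallback='forbid'>Skylake-Client</model> <!-- example model -->
  <maxphysaddr mode='passthrough'/>
</cpu>
```

On a bare QEMU command line the equivalent would be, e.g.,
'-cpu Skylake-Client,host-phys-bits=on'.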

It is, however, a bit impractical to have to do this for all the VMs
that one may have... Especially if there are a lot of them! :-)

I know that there have already been issues and discussions about these
changes (also related to virtio, I think). I don't know whether this is
the same or a related problem, but is there a way to avoid having to
ask people to change all their VMs' configs?

Thanks and Regards,
Dario

[1] https://bugzilla.suse.com/show_bug.cgi?id=1219977
[2] well, I actually reverted a6ed6b701f0a57db0569ab98b0661c12a6ec3ff8
too, for convenience
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)


signature.asc
Description: This is a digitally signed message part


[PATCH v2] pc: q35: Bump max_cpus to 1024

2022-11-18 Thread Dario Faggioli
Keep the old limit of 288 for machine versions 7.2 and earlier.

Signed-off-by: Dario Faggioli 
---
Cc: Paolo Bonzini 
Cc: Richard Henderson 
Cc: Eduardo Habkost 
Cc: "Michael S. Tsirkin" 
Cc: Marcel Apfelbaum 
---
Changes from v1:
- fix actually keeping the old max value for the 7.2 machine type,
  which was the original goal, but was done wrongly

---
This is related to:

https://lore.kernel.org/qemu-devel/c705d0d8d6ed1a520b1ff92cb2f83fef19522d30.ca...@suse.com/

With this applied to QEMU, I've been able to start a VM with as many as
980 vCPUs (even though I was on a host with 384 pCPUs, so everything
was super slow!). After that, I started to see messages like this:

"SMBIOS 2.1 table length 66822 exceeds 65535"

Based on the discussion happening in that thread, I'm going straight
to 1024, as it seems that it will soon be working well
(especially considering that this is meant for the next release, not
for 7.2)

Thanks and Regards
---
 hw/i386/pc_q35.c |3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index a496bd6e74..54804337e9 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -367,7 +367,7 @@ static void pc_q35_machine_options(MachineClass *m)
 machine_class_allow_dynamic_sysbus_dev(m, TYPE_INTEL_IOMMU_DEVICE);
 machine_class_allow_dynamic_sysbus_dev(m, TYPE_RAMFB_DEVICE);
 machine_class_allow_dynamic_sysbus_dev(m, TYPE_VMBUS_BRIDGE);
-m->max_cpus = 288;
+m->max_cpus = 1024;
 }
 
 static void pc_q35_7_2_machine_options(MachineClass *m)
@@ -375,6 +375,7 @@ static void pc_q35_7_2_machine_options(MachineClass *m)
 PCMachineClass *pcmc = PC_MACHINE_CLASS(m);
 pc_q35_machine_options(m);
 m->alias = "q35";
+m->max_cpus = 288;
 pcmc->default_cpu_version = 1;
 }
 





Re: [PATCH] pc: q35: Bump max_cpus to 1024

2022-11-17 Thread Dario Faggioli
Well...

On Thu, 2022-11-17 at 16:27 +0100, Dario Faggioli wrote:
> Keep the old limit of 288 for machine versions 7.2 and earlier.
> 
...At least, this was the idea...

> --- a/hw/i386/pc_q35.c
> +++ b/hw/i386/pc_q35.c
> @@ -386,6 +386,7 @@ static void
> pc_q35_7_1_machine_options(MachineClass *m)
>  PCMachineClass *pcmc = PC_MACHINE_CLASS(m);
>  pc_q35_7_2_machine_options(m);
>  m->alias = NULL;
> +    m->max_cpus = 288;
>
...But I managed to put this in the wrong function (xxx_7_1_yyy,
instead of xxx_7_2_yyy)! :-/

Sorry about that. I'll send a v2 (taking into account the feedback
that I got on my other email).

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)




[PATCH] pc: q35: Bump max_cpus to 1024

2022-11-17 Thread Dario Faggioli
Keep the old limit of 288 for machine versions 7.2 and earlier.

Signed-off-by: Dario Faggioli 
---
Cc: Paolo Bonzini 
Cc: Richard Henderson 
Cc: Eduardo Habkost 
Cc: "Michael S. Tsirkin" 
Cc: Marcel Apfelbaum 
---
This is related to:

https://lore.kernel.org/qemu-devel/c705d0d8d6ed1a520b1ff92cb2f83fef19522d30.ca...@suse.com/

With this applied to QEMU, I've been able to start a VM with as many as
980 vCPUs (even though I was on a host with 384 pCPUs, so everything
was super slow!). After that, I started to see messages like this:

"SMBIOS 2.1 table length 66822 exceeds 65535"

Thanks and Regards
---
 hw/i386/pc_q35.c |3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index a496bd6e74..d2a567a71f 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -367,7 +367,7 @@ static void pc_q35_machine_options(MachineClass *m)
 machine_class_allow_dynamic_sysbus_dev(m, TYPE_INTEL_IOMMU_DEVICE);
 machine_class_allow_dynamic_sysbus_dev(m, TYPE_RAMFB_DEVICE);
 machine_class_allow_dynamic_sysbus_dev(m, TYPE_VMBUS_BRIDGE);
-m->max_cpus = 288;
+m->max_cpus = 1024;
 }
 
 static void pc_q35_7_2_machine_options(MachineClass *m)
@@ -386,6 +386,7 @@ static void pc_q35_7_1_machine_options(MachineClass *m)
 PCMachineClass *pcmc = PC_MACHINE_CLASS(m);
 pc_q35_7_2_machine_options(m);
 m->alias = NULL;
+m->max_cpus = 288;
 pcmc->legacy_no_rng_seed = true;
 compat_props_add(m->compat_props, hw_compat_7_1, hw_compat_7_1_len);
 compat_props_add(m->compat_props, pc_compat_7_1, pc_compat_7_1_len);





How about increasing max_cpus for q35 ?

2022-11-09 Thread Dario Faggioli
Hello,

Sorry for the potentially naive question, but I'm not clear on what the
process would be if, say, I'd like to raise the maximum number of CPUs
a q35 VM can have.

So, right now we have:

void pc_q35_2_7_machine_options(MachineClass *m) {
  ...
  m->max_cpus = 255;
}

And:

void pc_q35_machine_options(MachineClass *m)
{
  ...
  m->max_cpus = 288;
}

Focusing on the latter, it comes from this commit:

https://gitlab.com/qemu-project/qemu/-/commit/00d0f9fd6602a27b204f672ef5bc8e69736c7ff1
  
  pc: q35: Bump max_cpus to 288

  Along with it for machine versions 2.7 and older keep
  it at 255.

So, it was 255 and is now 288. It seems to me this has been the case
since QEMU 2.8.0.

Now, as far as I understand, KVM can handle 1024, at least since this
commit (and a couple of other related ones):

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=074c82c8f7cf8a46c3b81965f122599e3a133450
"kvm: x86: Increase MAX_VCPUS to 1024"

Which basically does:

-#define KVM_MAX_VCPUS 288
+#define KVM_MAX_VCPUS 1024

And it's included in kernels >= 5.15.
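Putting the two together, the limit a guest can actually use is the smaller of the machine type's cap and the host kernel's KVM cap. A trivial sketch of that relationship (the helper name is made up):

```python
# A guest's usable vCPU count is capped by both the machine type's
# max_cpus and the host kernel's KVM_MAX_VCPUS.
def effective_vcpu_limit(machine_max_cpus: int, kvm_max_vcpus: int) -> int:
    return min(machine_max_cpus, kvm_max_vcpus)

# q35 today vs. a >= 5.15 kernel: the machine type is the bottleneck.
print(effective_vcpu_limit(288, 1024))  # -> 288
```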

So, what's the correct way of bumping the limit up again? Just changing
that assignment in pc_q35_machine_options()? Or do we want a new
version of the machine type, or something like that?

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)




Re: [PATCH] block/io_uring: revert "Use io_uring_register_ring_fd() to skip fd operations"

2022-10-07 Thread Dario Faggioli
Yes, we did hit this bug as well, in the QEMU 7.1 package, for openSUSE
Tumbleweed (more info
here: https://bugzilla.suse.com/show_bug.cgi?id=1204082)

FWIW, I can confirm that applying this patch fixes the issue, so this
can have:

On Sat, 2022-09-24 at 22:48 +0800, Sam Li wrote:
> Resolves: https://gitlab.com/qemu-project/qemu/-/issues/1193
> 
> [...]
>
> This reverts commit e2848bc574fe2715c694bf8fe9a1ba7f78a1125a
> and 77e3f038af1764983087e3551a0fde9951952c4d.
> 
> Signed-off-by: Sam Li 
>
Tested-by: Dario Faggioli 

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)




[RESEND PATCH 1/2] modules: introduces module_kconfig directive

2022-05-27 Thread Dario Faggioli
From: Jose R. Ziviani 

module_kconfig is a new directive that should be used together with
module_obj whenever the module depends on a Kconfig option being
enabled.

When the module is enabled in Kconfig, we can be sure that its
dependencies are enabled as well, and thus the module will load without
any problem.

The correct way to use module_kconfig is to pass it the Kconfig option
name (i.e., the name as it appears in *config-devices.mak, without the
CONFIG_ prefix).

Signed-off-by: Jose R. Ziviani 
Signed-off-by: Dario Faggioli 
---
Cc: Gerd Hoffmann 
Cc: John Snow 
Cc: Cleber Rosa 
Cc: Paolo Bonzini 
Cc: qemu-s3...@nongnu.org
---
 hw/display/qxl.c|1 +
 hw/display/vhost-user-gpu-pci.c |1 +
 hw/display/vhost-user-gpu.c |1 +
 hw/display/vhost-user-vga.c |1 +
 hw/display/virtio-gpu-base.c|1 +
 hw/display/virtio-gpu-gl.c  |1 +
 hw/display/virtio-gpu-pci-gl.c  |1 +
 hw/display/virtio-gpu-pci.c |1 +
 hw/display/virtio-gpu.c |1 +
 hw/display/virtio-vga-gl.c  |1 +
 hw/display/virtio-vga.c |1 +
 hw/s390x/virtio-ccw-gpu.c   |1 +
 hw/usb/ccid-card-emulated.c |1 +
 hw/usb/ccid-card-passthru.c |1 +
 hw/usb/host-libusb.c|1 +
 hw/usb/redirect.c   |1 +
 include/qemu/module.h   |   10 ++
 scripts/modinfo-generate.py |2 ++
 18 files changed, 28 insertions(+)

diff --git a/hw/display/qxl.c b/hw/display/qxl.c
index 2db34714fb..5b10f697f1 100644
--- a/hw/display/qxl.c
+++ b/hw/display/qxl.c
@@ -2515,6 +2515,7 @@ static const TypeInfo qxl_primary_info = {
 .class_init= qxl_primary_class_init,
 };
 module_obj("qxl-vga");
+module_kconfig(QXL);
 
 static void qxl_secondary_class_init(ObjectClass *klass, void *data)
 {
diff --git a/hw/display/vhost-user-gpu-pci.c b/hw/display/vhost-user-gpu-pci.c
index daefcf7101..d119bcae45 100644
--- a/hw/display/vhost-user-gpu-pci.c
+++ b/hw/display/vhost-user-gpu-pci.c
@@ -44,6 +44,7 @@ static const VirtioPCIDeviceTypeInfo vhost_user_gpu_pci_info = {
 .instance_init = vhost_user_gpu_pci_initfn,
 };
 module_obj(TYPE_VHOST_USER_GPU_PCI);
+module_kconfig(VHOST_USER_GPU);
 
 static void vhost_user_gpu_pci_register_types(void)
 {
diff --git a/hw/display/vhost-user-gpu.c b/hw/display/vhost-user-gpu.c
index 96e56c4467..3340ef9e5f 100644
--- a/hw/display/vhost-user-gpu.c
+++ b/hw/display/vhost-user-gpu.c
@@ -606,6 +606,7 @@ static const TypeInfo vhost_user_gpu_info = {
 .class_init = vhost_user_gpu_class_init,
 };
 module_obj(TYPE_VHOST_USER_GPU);
+module_kconfig(VHOST_USER_GPU);
 
 static void vhost_user_gpu_register_types(void)
 {
diff --git a/hw/display/vhost-user-vga.c b/hw/display/vhost-user-vga.c
index 072c9c65bc..0c146080fd 100644
--- a/hw/display/vhost-user-vga.c
+++ b/hw/display/vhost-user-vga.c
@@ -45,6 +45,7 @@ static const VirtioPCIDeviceTypeInfo vhost_user_vga_info = {
 .instance_init = vhost_user_vga_inst_initfn,
 };
 module_obj(TYPE_VHOST_USER_VGA);
+module_kconfig(VHOST_USER_VGA);
 
 static void vhost_user_vga_register_types(void)
 {
diff --git a/hw/display/virtio-gpu-base.c b/hw/display/virtio-gpu-base.c
index 8ba5da4312..790cec333c 100644
--- a/hw/display/virtio-gpu-base.c
+++ b/hw/display/virtio-gpu-base.c
@@ -260,6 +260,7 @@ static const TypeInfo virtio_gpu_base_info = {
 .abstract = true
 };
 module_obj(TYPE_VIRTIO_GPU_BASE);
+module_kconfig(VIRTIO_GPU);
 
 static void
 virtio_register_types(void)
diff --git a/hw/display/virtio-gpu-gl.c b/hw/display/virtio-gpu-gl.c
index 0bca887703..e06be60dfb 100644
--- a/hw/display/virtio-gpu-gl.c
+++ b/hw/display/virtio-gpu-gl.c
@@ -160,6 +160,7 @@ static const TypeInfo virtio_gpu_gl_info = {
 .class_init = virtio_gpu_gl_class_init,
 };
 module_obj(TYPE_VIRTIO_GPU_GL);
+module_kconfig(VIRTIO_GPU);
 
 static void virtio_register_types(void)
 {
diff --git a/hw/display/virtio-gpu-pci-gl.c b/hw/display/virtio-gpu-pci-gl.c
index 99b14a0718..a2819e1ca9 100644
--- a/hw/display/virtio-gpu-pci-gl.c
+++ b/hw/display/virtio-gpu-pci-gl.c
@@ -47,6 +47,7 @@ static const VirtioPCIDeviceTypeInfo virtio_gpu_gl_pci_info = {
 .instance_init = virtio_gpu_gl_initfn,
 };
 module_obj(TYPE_VIRTIO_GPU_GL_PCI);
+module_kconfig(VIRTIO_PCI);
 
 static void virtio_gpu_gl_pci_register_types(void)
 {
diff --git a/hw/display/virtio-gpu-pci.c b/hw/display/virtio-gpu-pci.c
index e36eee0c40..93f214ff58 100644
--- a/hw/display/virtio-gpu-pci.c
+++ b/hw/display/virtio-gpu-pci.c
@@ -65,6 +65,7 @@ static const TypeInfo virtio_gpu_pci_base_info = {
 .abstract = true
 };
 module_obj(TYPE_VIRTIO_GPU_PCI_BASE);
+module_kconfig(VIRTIO_PCI);
 
 #define TYPE_VIRTIO_GPU_PCI "virtio-gpu-pci"
 typedef struct VirtIOGPUPCI VirtIOGPUPCI;
diff --git a/hw/display/virtio-gpu.c b/hw/display/virtio-gpu.c
index 529b5246b2..cd4a56056f 100644
--- a/hw/display/virtio-gpu.c
+++ b/hw/display/virtio-gpu.c
@@ -1452,6 +1452,7 @@ static const TypeInfo virtio_gpu_info = {
 .class_init = virtio_gpu_class_i

[RESEND PATCH 2/2] modules: generates per-target modinfo

2022-05-27 Thread Dario Faggioli
From: Jose R. Ziviani 

This patch changes the way modinfo is generated and built. Instead of
a single modinfo.c, it generates one modinfo-<target>.c per softmmu
target. This allows fine-grained, per-target control of modules via
Kconfig.

Signed-off-by: Jose R. Ziviani 
Signed-off-by: Dario Faggioli 
---
Cc: Gerd Hoffmann 
Cc: John Snow 
Cc: Cleber Rosa 
Cc: Paolo Bonzini 
Cc: qemu-s3...@nongnu.org
---
 meson.build |   25 +
 scripts/modinfo-generate.py |   42 +-
 2 files changed, 42 insertions(+), 25 deletions(-)

diff --git a/meson.build b/meson.build
index df7c34b076..3744923aa7 100644
--- a/meson.build
+++ b/meson.build
@@ -3172,14 +3172,23 @@ foreach d, list : target_modules
 endforeach
 
 if enable_modules
-  modinfo_src = custom_target('modinfo.c',
-  output: 'modinfo.c',
-  input: modinfo_files,
-  command: [modinfo_generate, '@INPUT@'],
-  capture: true)
-  modinfo_lib = static_library('modinfo', modinfo_src)
-  modinfo_dep = declare_dependency(link_whole: modinfo_lib)
-  softmmu_ss.add(modinfo_dep)
+  foreach target : target_dirs
+if target.endswith('-softmmu')
+  config_target = config_target_mak[target]
+  config_devices_mak = target + '-config-devices.mak'
+  modinfo_src = custom_target('modinfo-' + target + '.c',
+  output: 'modinfo-' + target + '.c',
+  input: modinfo_files,
+  command: [modinfo_generate, '--devices', config_devices_mak, '@INPUT@'],
+  capture: true)
+
+  modinfo_lib = static_library('modinfo-' + target + '.c', modinfo_src)
+  modinfo_dep = declare_dependency(link_with: modinfo_lib)
+
+  arch = config_target['TARGET_NAME'] == 'sparc64' ? 'sparc64' : config_target['TARGET_BASE_ARCH']
+  hw_arch[arch].add(modinfo_dep)
+endif
+  endforeach
 endif
 
 nm = find_program('nm')
diff --git a/scripts/modinfo-generate.py b/scripts/modinfo-generate.py
index 689f33c0f2..a0c09edae1 100755
--- a/scripts/modinfo-generate.py
+++ b/scripts/modinfo-generate.py
@@ -32,7 +32,7 @@ def parse_line(line):
 continue
 return (kind, data)
 
-def generate(name, lines):
+def generate(name, lines, core_modules):
 arch = ""
 objs = []
 deps = []
@@ -49,7 +49,13 @@ def generate(name, lines):
 elif kind == 'arch':
 arch = data;
 elif kind == 'kconfig':
-pass # ignore
+# don't add a module which dependency is not enabled
+# in kconfig
+if data.strip() not in core_modules:
+print("/* module {} isn't enabled in Kconfig. */"
+  .format(data.strip()))
+print("/* },{ */")
+return []
 else:
 print("unknown:", kind)
 exit(1)
@@ -60,7 +66,7 @@ def generate(name, lines):
 print_array("objs", objs)
 print_array("deps", deps)
 print_array("opts", opts)
-print("},{");
+print("},{")
 return deps
 
 def print_pre():
@@ -74,26 +80,28 @@ def print_post():
 print("}};")
 
 def main(args):
+if len(args) < 3 or args[0] != '--devices':
+print('Expected: modinfo-generate.py --devices '
+  'config-device.mak [modinfo files]', file=sys.stderr)
+exit(1)
+
+# get all devices enabled in kconfig, from *-config-device.mak
+enabled_core_modules = set()
+with open(args[1]) as file:
+for line in file.readlines():
+config = line.split('=')
+if config[1].rstrip() == 'y':
+enabled_core_modules.add(config[0][7:]) # remove CONFIG_
+
 deps = {}
 print_pre()
-for modinfo in args:
+for modinfo in args[2:]:
 with open(modinfo) as f:
 lines = f.readlines()
 print("/* %s */" % modinfo)
-(basename, ext) = os.path.splitext(modinfo)
-deps[basename] = generate(basename, lines)
+(basename, _) = os.path.splitext(modinfo)
+deps[basename] = generate(basename, lines, enabled_core_modules)
 print_post()
 
-flattened_deps = {flat.strip('" ') for dep in deps.values() for flat in dep}
-error = False
-for dep in flattened_deps:
-if dep not in deps.keys():
-print("Dependency {} cannot be satisfied".format(dep),
-  file=sys.stderr)
-error = True
-
-if error:
-exit(1)
-
 if __name__ == "__main__":
 main(sys.argv[1:])
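As a standalone illustration of the filtering idea above (not the patch code itself; the function name and sample lines are made up), the new --devices logic boils down to:

```python
# Collect the Kconfig options set to 'y' in a *-config-devices.mak file,
# mirroring what modinfo-generate.py does with its --devices argument.
def enabled_options(lines):
    enabled = set()
    for line in lines:
        key, _, value = line.partition('=')
        if key.startswith('CONFIG_') and value.strip() == 'y':
            enabled.add(key[len('CONFIG_'):])  # drop the CONFIG_ prefix
    return enabled

print(enabled_options(['CONFIG_QXL=y', 'CONFIG_VHOST_USER_GPU=n']))  # -> {'QXL'}
```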





[RESEND PATCH 0/2] modules: Improve modinfo.c support

2022-05-27 Thread Dario Faggioli
Hello,

This is a RESEND of patch series "[PATCH v3 0/2] modules: Improve modinfo.c
support", from Sept 2021.

Message-ID: <20210928204628.20001-1-jzivi...@suse.de>
https://lore.kernel.org/qemu-devel/20210928204628.20001-1-jzivi...@suse.de/

Jose sent it because we were having issues building QEMU the way we do
it for openSUSE and SUSE Linux Enterprise.

It was, back then, Acked by Gerd (see Message-ID:
20210929050908.3fqf3wwbk6vrt...@sirius.home.kraxel.org), but then never picked
up. Well, since we are still having those build problems without it, I've
rebased and I'm resending it, as agreed with Gerd himself.

"Rebasing" was as easy as just reapplying the patches (no offsets, no fuzz).
Still, I removed the ack, assuming that the series needs to be re-looked at.

`make check` is happy. The CI, well, it looks fine to me. There are some
'Build' jobs that take too long to complete, and hence cause failures in the
corresponding 'Test' jobs, but that seems unrelated to the patches. I'll try
to restart them in the coming days and see if they manage to finish.

 https://gitlab.com/dfaggioli/qemu/-/pipelines/549884208

FWIW, we've also started to use it, as downstream patches, in our packages,
on top of various versions of QEMU.

Let me know if there's anything more or different that I should do.

Thanks and Regards
---
Jose R. Ziviani (2):
  modules: introduces module_kconfig directive
  modules: generates per-target modinfo

 hw/display/qxl.c|  1 +
 hw/display/vhost-user-gpu-pci.c |  1 +
 hw/display/vhost-user-gpu.c |  1 +
 hw/display/vhost-user-vga.c |  1 +
 hw/display/virtio-gpu-base.c|  1 +
 hw/display/virtio-gpu-gl.c  |  1 +
 hw/display/virtio-gpu-pci-gl.c  |  1 +
 hw/display/virtio-gpu-pci.c |  1 +
 hw/display/virtio-gpu.c |  1 +
 hw/display/virtio-vga-gl.c  |  1 +
 hw/display/virtio-vga.c |  1 +
 hw/s390x/virtio-ccw-gpu.c   |  1 +
 hw/usb/ccid-card-emulated.c |  1 +
 hw/usb/ccid-card-passthru.c |  1 +
 hw/usb/host-libusb.c|  1 +
 hw/usb/redirect.c   |  1 +
 include/qemu/module.h   | 10 
 meson.build | 25 +---
 scripts/modinfo-generate.py | 42 -
 19 files changed, 69 insertions(+), 24 deletions(-)
--
Signature




QEMU malfunctioning if built with FORTIFY_SOURCE=3

2022-05-27 Thread Dario Faggioli
Hello Everyone!

So, I'm not sure how interesting this is, but I thought I'd report it
anyway; let's see.

A few days ago we started building openSUSE Tumbleweed packages with
-D_FORTIFY_SOURCE=3 by default (it was =2 before, and it's back to =2
again now, at least for QEMU :-/).

It seemed fine, but then we discovered that a QEMU built that way does
not work properly. In fact, it crashes pretty early, displaying a
message like this: "*** buffer overflow detected ***"

I've had a look around and did not find anything about previous
attempts at doing this, or things to be aware of in general when
doing it.

For now, I don't have much other information myself either; just some
terminal logs from a few users and from our automated testing system,
like this:

$ sudo virsh start VM1
error: Failed to start domain 'VM1'
error: internal error: qemu unexpectedly closed the monitor: qxl_send_events: 
spice-server bug: guest stopped, ignoring
*** buffer overflow detected ***: terminated

Or this:

error: Failed to start domain 'vm-swtpm-legacy'
error: internal error: qemu unexpectedly closed the monitor: 2022-05-25T16:30:05.738186Z qemu-system-x86_64: -accel kvm: warning: Number of SMP cpus requested (2) exceeds the recommended cpus supported by KVM (1)
2022-05-25T16:30:05.738259Z qemu-system-x86_64: -accel kvm: warning: Number of hotpluggable cpus requested (2) exceeds the recommended cpus supported by KVM (1)
2022-05-25T16:30:05.742354Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(48FH).vmx-exit-load-perf-global-ctrl [bit 12]
2022-05-25T16:30:05.742369Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(490H).vmx-entry-load-perf-global-ctrl [bit 13]
2022-05-25T16:30:05.743989Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(48FH).vmx-exit-load-perf-global-ctrl [bit 12]
2022-05-25T16:30:05.744050Z qemu-system-x86_64: warning: host doesn't support requested feature: MSR(490H).vmx-entry-load-perf-global-ctrl [bit 13]
*** buffer overflow detected ***: terminated

Or this:
https://openqa.opensuse.org/tests/2375666#step/usr_sbin_dnsmasq/47
https://xenbits.xen.org/people/dariof/download.png (also here, in case
the image disappears from OpenQA)

I am planning to investigate this more, but not right away, and I can't
even tell for sure when I'll have time for it. So this is just to let
people know that this has been (quickly) attempted and that it
currently does not work, in case it's interesting for anyone else.

Of course, in case it's the other way around, i.e., someone already has
more info on the subject that I've not been able to find, feel free to
ping me. :-)

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)




[PATCH] tests/Makefile.include: Fix 'make check-help' output

2022-05-27 Thread Dario Faggioli
Since commit 3d2f73ef75e ("build: use "meson test" as the test harness"),
check-report.tap is no more, and we have check-report.junit.xml.

Update the output of 'make check-help', which was still listing
'check-report.tap', accordingly.

Fixes: 3d2f73ef75e
Signed-off-by: Dario Faggioli 
---
Cc: Paolo Bonzini 
---
 tests/Makefile.include |   30 +++---
 1 file changed, 15 insertions(+), 15 deletions(-)

diff --git a/tests/Makefile.include b/tests/Makefile.include
index ec84b2ebc0..5caa3836ad 100644
--- a/tests/Makefile.include
+++ b/tests/Makefile.include
@@ -3,28 +3,28 @@
 .PHONY: check-help
 check-help:
 	@echo "Regression testing targets:"
-	@echo " $(MAKE) check                Run block, qapi-schema, unit, softfloat, qtest and decodetree tests"
-	@echo " $(MAKE) bench                Run speed tests"
+	@echo " $(MAKE) check                  Run block, qapi-schema, unit, softfloat, qtest and decodetree tests"
+	@echo " $(MAKE) bench                  Run speed tests"
 	@echo
 	@echo "Individual test suites:"
-	@echo " $(MAKE) check-qtest-TARGET   Run qtest tests for given target"
-	@echo " $(MAKE) check-qtest          Run qtest tests"
-	@echo " $(MAKE) check-unit           Run qobject tests"
-	@echo " $(MAKE) check-qapi-schema    Run QAPI schema tests"
-	@echo " $(MAKE) check-block          Run block tests"
+	@echo " $(MAKE) check-qtest-TARGET     Run qtest tests for given target"
+	@echo " $(MAKE) check-qtest            Run qtest tests"
+	@echo " $(MAKE) check-unit             Run qobject tests"
+	@echo " $(MAKE) check-qapi-schema      Run QAPI schema tests"
+	@echo " $(MAKE) check-block            Run block tests"
 ifneq ($(filter $(all-check-targets), check-softfloat),)
-	@echo " $(MAKE) check-tcg            Run TCG tests"
-	@echo " $(MAKE) check-softfloat      Run FPU emulation tests"
+	@echo " $(MAKE) check-tcg              Run TCG tests"
+	@echo " $(MAKE) check-softfloat        Run FPU emulation tests"
 endif
-	@echo " $(MAKE) check-avocado        Run avocado (integration) tests for currently configured targets"
+	@echo " $(MAKE) check-avocado          Run avocado (integration) tests for currently configured targets"
 	@echo
-	@echo " $(MAKE) check-report.tap     Generates an aggregated TAP test report"
-	@echo " $(MAKE) check-venv           Creates a Python venv for tests"
-	@echo " $(MAKE) check-clean          Clean the tests and related data"
+	@echo " $(MAKE) check-report.junit.xml Generates an aggregated TAP test report"
+	@echo " $(MAKE) check-venv             Creates a Python venv for tests"
+	@echo " $(MAKE) check-clean            Clean the tests and related data"
 	@echo
 	@echo "The following are useful for CI builds"
-	@echo " $(MAKE) check-build          Build most test binaries"
-	@echo " $(MAKE) get-vm-images        Downloads all images used by avocado tests, according to configured targets (~350 MB each, 1.5 GB max)"
+	@echo " $(MAKE) check-build            Build most test binaries"
+	@echo " $(MAKE) get-vm-images          Downloads all images used by avocado tests, according to configured targets (~350 MB each, 1.5 GB max)"
 	@echo
 	@echo
 	@echo "The variable SPEED can be set to control the gtester speed setting."





Re: make -j check failing on master, interesting valgrind errors on qos-test vhost-user-blk-test/basic

2022-05-27 Thread Dario Faggioli
On Fri, 2022-05-27 at 10:18 +0200, Claudio Fontana wrote:
> On 5/27/22 9:26 AM, Dario Faggioli wrote:
> > > 
> > Yes, this kind of matches what I've also seen and reported about in
> > <5bcb5ceb44dd830770d66330e27de6a4345fcb69.ca...@suse.com>. If
> > enable/run just one of:
> > - reconnect
> > - flags_mismatch
> > - connect_fail
> > 
> > I see no issues.
> 
> On the contrary, for me just running a single one of those can fail.
> 
Well, but you said (or at least that's how I understood it) that
running the test for the first time works.

Then, when you run it multiple times, things start to fail.

That was, in fact, my point: I was drawing a parallel between the fact
that running only one of those tests works for me and the fact that
running the test for the first time works for you too.

And between the fact that running two tests, one after the other, fails
for me and the fact that running the same test multiple times fails
for you too.

:-)

> > However, Claudio, AFAIUI, you're seeing this with an older GCC and
> > without LTO, right?
> 
> Yes, to provide a different angle I tried on veteran OpenSUSE Leap
> 15.2, so gcc is based on 7.5.0.
> 
> I don't think LTO is being used in any way.
> 
Yep, agreed. Now I don't think it's related to LTO specifically either.

Although, it's at least a bit of a Heisenbug. I mean, we're seeing it
(with two different setups), but for others things work fine, I guess?

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)




Re: make -j check failing on master, interesting valgrind errors on qos-test vhost-user-blk-test/basic

2022-05-27 Thread Dario Faggioli
On Thu, 2022-05-26 at 20:18 +0200, Claudio Fontana wrote:
> Forget about this aspect, I think it is a separate problem.
> 
> valgind of qos-test when run restricted to those specific paths (-p
> /x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/virtio-net-pci/virtio-
> net/virtio-net-tests/vhost-user/reconnect for example)
> shows all clear,
> 
> and still the test fails when run in a while loop after a few
> attempts:
> 
Yes, this kind of matches what I've also seen and reported about in
<5bcb5ceb44dd830770d66330e27de6a4345fcb69.ca...@suse.com>. If
enable/run just one of:
- reconnect
- flags_mismatch
- connect_fail

I see no issues.

As soon as two of those are run, one after the other, the problem
starts to appear.

However, Claudio, AFAIUI, you're seeing this with an older GCC and
without LTO, right?

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)




Re: Problem running qos-test when building with gcc12 and LTO

2022-05-25 Thread Dario Faggioli
On Wed, 2022-05-25 at 09:40 +, Dario Faggioli wrote:
> On Wed, 2022-05-25 at 07:41 +0100, Alex Bennée wrote:
> 
> 
> > Does it still trigger errors with my latest virtio cleanup series
> > (which
> > adds more tests to qos-test):
> > 
> >   Subject: [PATCH  v2 00/15] virtio-gpio and various virtio
> > cleanups
> >   Date: Tue, 24 May 2022 16:40:41 +0100
> >   Message-Id: <20220524154056.2896913-1-alex.ben...@linaro.org>
> > 
> I'll try it. I know it fails on master (at least two days ago's
> master). I'll apply the series and re-test.
> 
Ok, so, yes: current master + the v2 of "virtio-gpio and various virtio
cleanups" still fails (with GCC 12 and LTO, of course), pretty much in
the same way I've described in this thread.

I've also tried current master + the above series + a revert of
8dcb404bff6d914, although I'm not sure that even makes sense... :-O

Anyway, it also fails, but differently:

MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))} \
  QTEST_QEMU_IMG=./qemu-img G_TEST_DBUS_DAEMON=../tests/dbus-vmstate-daemon.sh \
  QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-storage-daemon \
  QTEST_QEMU_BINARY=./qemu-system-x86_64 ./tests/qtest/qos-test --tap -k
# random seed: R02S69b7a984047f827959f7adb2e4161fb7
# starting QEMU: exec ./qemu-system-x86_64 -qtest unix:/tmp/qtest-31788.sock -qtest-log /dev/null -chardev socket,path=/tmp/qtest-31788.qmp,id=char0 -mon chardev=char0,mode=control -display none -machine none -accel qtest
1..101
# Start of x86_64 tests
# Start of pc tests
# Start of i440FX-pcihost tests
# Start of pci-bus-pc tests
# Start of pci-bus tests
# Start of vhost-user-gpio-pci tests
# Start of vhost-user-gpio tests
# Start of vhost-user-gpio-tests tests
# Start of read-guest-mem tests
qemu-system-x86_64: Failed to set msg fds.
qemu-system-x86_64: vhost VQ 0 ring restore failed: -22: Invalid argument (22)
qemu-system-x86_64: Failed to set msg fds.
qemu-system-x86_64: vhost VQ 1 ring restore failed: -22: Invalid argument (22)
qemu-system-x86_64: Failed to set msg fds.
qemu-system-x86_64: vhost_set_vring_call failed: Invalid argument (22)
qemu-system-x86_64: Failed to set msg fds.
qemu-system-x86_64: vhost_set_vring_call failed: Invalid argument (22)
qemu-system-x86_64: Failed to write msg. Wrote -1 instead of 20.
qemu-system-x86_64: vhost VQ 0 ring restore failed: -5: Input/output error (5)
../tests/qtest/libqtest.c:165: kill_qemu() detected QEMU death from signal 11 (Segmentation fault)
# child process (/x86_64/pc/i440FX-pcihost/pci-bus-pc/pci-bus/vhost-user-gpio-pci/vhost-user-gpio/vhost-user-gpio-tests/read-guest-mem/memfile/subprocess [31850]) killed by signal 6 (Aborted)
# child process (/x86_64/pc/i440FX-pcihost/pci

Re: Problem running qos-test when building with gcc12 and LTO

2022-05-25 Thread Dario Faggioli
On Wed, 2022-05-25 at 07:41 +0100, Alex Bennée wrote:
> Dario Faggioli  writes:
> > I'll try to dig further. Any idea/suggestion anyone has, feel free.
> > :-)
> 
> Sounds like there are still memory corruption/not initialised issues
> that are affected by moving things around.
> 
Right. In fact, I've just tried to enable the tests (re)introduced by
8dcb404bff6d9147765d7dd3e9c8493372186420 one by one and:

- with only one of them enabled, whichever one it is, things seem fine
- with vhost_user_test_setup_reconnect and 
  vhost_user_test_setup_connect_fail enabled (in this order) things
  fail
- with vhost_user_test_setup_connect_fail and 
  vhost_user_test_setup_flags_mismatch enabled (in this order) things 
  fail
- with vhost_user_test_setup_reconnect and 
  vhost_user_test_setup_flags_mismatch enabled (in this order) things 
  fail

Even if I keep vhost_user_test_setup_reconnect and
vhost_user_test_setup_connect_fail enabled, but change the order (i.e.,
I move the qos_add_test for connect-fail before the one for reconnect),
things also fail.

I haven't tried other combinations, so far...

> Does it still trigger errors with my latest virtio cleanup series
> (which
> adds more tests to qos-test):
> 
>   Subject: [PATCH  v2 00/15] virtio-gpio and various virtio cleanups
>   Date: Tue, 24 May 2022 16:40:41 +0100
>   Message-Id: <20220524154056.2896913-1-alex.ben...@linaro.org>
> 
I'll try it. I know it fails on master (at least as of two days ago).
I'll apply the series and re-test.

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)


signature.asc
Description: This is a digitally signed message part


Re: [PATCH v3 0/2] modules: Improve modinfo.c support

2022-05-25 Thread Dario Faggioli
On Wed, 2022-05-25 at 08:32 +0200, Gerd Hoffmann wrote:
> On Tue, May 24, 2022 at 01:49:41PM +0200, Dario Faggioli wrote:
> > Hello! Sorry for bringing up an old thread, but I'd have a question
> > about this series.
> > 
> > As far as I can see, the patches were fine, and they were Acked,
> > but
> > then the series was never committed... Is this correct?
> > 
> > If yes, can it be committed (I'm up for rebasing and resending, if
> > it's
> > necessary)? If not, would it be possible to know what's missing, so
> > that we can continue working on it?
> 
> rebase, run through ci, resend is probably the best way forward.
>
Ok, great, thanks! I'll do all that.

> Don't remember any problems, not sure why it wasn't picked up,
> maybe paolo (who does the meson + buildsystem stuff) was just busy
> so it fell through the cracks,
> 
Sure, and it's fine... It happens, I know it very well. :-)

I hope it's clear, but just in case, I wasn't complaining or anything.
I just wanted to know what we could do about it now, which is exactly
what you just told me. :-D

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)


signature.asc
Description: This is a digitally signed message part


Re: Problem running qos-test when building with gcc12 and LTO

2022-05-24 Thread Dario Faggioli
On Mon, 2022-05-23 at 19:19 +, Dario Faggioli wrote:
> As soon as I get rid of _both_ "-flto=auto" _and_ "--enable-lto", the
> above tests seem to work fine.
> 
> When they fail, they fail immediately, while creating the graph, like
> this:
> 
> MALLOC_PERTURB_=${MALLOC_PERTURB_:-$(( ${RANDOM:-0} % 255 + 1))}
> QTEST_QEMU_IMG=./qemu-img G_TEST_DBUS_DAEMON=../tests/dbus-vmstate-
> daemon.sh QTEST_QEMU_STORAGE_DAEMON_BINARY=./storage-daemon/qemu-
> storage-daemon QTEST_QEMU_BINARY=./qemu-system-x86_64
> ./tests/qtest/qos-test --tap -k
> # random seed: R02S90d4b61102dd94459f986c2367d6d375
> # starting QEMU: exec ./qemu-system-x86_64 -qtest unix:/tmp/qtest-
> 28822.sock -qtest-log /dev/null -chardev socket,path=/tmp/qtest-
> 28822.qmp,id=char0 -mon chardev=char0,mode=control -display none -
> machine none -accel qtest
> QOSStack: full stack, cannot push
> Aborted
> 
Ok, apparently, v6.2.0 works (with GCC 12 and LTO), while as said
v7.0.0 doesn't.

Therefore, I ran a bisect, and it pointed at:

8dcb404bff6d9147765d7dd3e9c8493372186420
tests/qtest: enable more vhost-user tests by default

I've also confirmed that on v7.0.0 with 8dcb404bff6d914 reverted, the
test actually works.

As far as downstream packaging is concerned, I'll revert it locally.
But I'd be happy to help figure out what is actually going wrong.

I'll try to dig further. Any idea/suggestion anyone has, feel free. :-)

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)


signature.asc
Description: This is a digitally signed message part


Re: [PATCH v3 0/2] modules: Improve modinfo.c support

2022-05-24 Thread Dario Faggioli
Hello! Sorry for bringing up an old thread, but I'd have a question
about this series.

As far as I can see, the patches were fine, and they were Acked, but
then the series was never committed... Is this correct?

If yes, can it be committed (I'm up for rebasing and resending, if it's
necessary)? If not, would it be possible to know what's missing, so
that we can continue working on it?

The reason I'm asking is that in our (openSUSE) build system, we're
still seeing the failures shown below; so far, we've had some rather
ugly downstream patches to deal with those, but we've recently
discovered they're not only ugly... they're also broken! :-/

I'm not sure whether (and, if so, why) this is a problem only for us,
but it'd be great to get rid of both the failures and the patches
(assuming that what is implemented in this series is also of general
use, and good for the project... which, AFAIUI, it should be).

Any kind of feedback would be greatly appreciated.

Thanks and Regards

[PS. I've removed Jose, as his SUSE email address is no longer valid]

On Wed, 2021-09-29 at 07:09 +0200, Gerd Hoffmann wrote:
> On Tue, Sep 28, 2021 at 05:46:26PM -0300, Jose R. Ziviani wrote:
> > This patchset introduces the modinfo_kconfig aiming for a fine-tune
> > control of module loading by simply checking Kconfig options during
> > the
> > compile time, then generates one modinfo--softmmu.c per
> > target.
> > 
> > The main reason of this change is to fix problems like:
> > $ ./qemu-system-s390x -nodefaults -display none -accel tcg -M none
> > -device help | head
> > Failed to open module: /.../hw-display-qxl.so: undefined symbol:
> > vga_ioport_read
> > Failed to open module: /.../hw-display-virtio-vga.so: undefined
> > symbol: vmstate_vga_common
> > Failed to open module: /.../hw-display-virtio-vga.so: undefined
> > symbol: vmstate_vga_common
> > Failed to open module: /.../hw-display-virtio-vga-gl.so: undefined
> > symbol: have_vga
> > Failed to open module: /.../hw-usb-smartcard.so: undefined symbol:
> > ccid_card_ccid_attach
> > Failed to open module: /.../hw-usb-redirect.so: undefined symbol:
> > vmstate_usb_device
> > Failed to open module: /.../hw-usb-host.so: undefined symbol:
> > vmstate_usb_device
> > 
> > With this patch, I ran this small script successfully:
> >     #!/bin/bash
> >     pushd ~/suse/virtualization/qemu/build
> >     for qemu in qemu-system-*
> >     do
> >         [[ -f "$qemu" ]] || continue
> >         res=$(./$qemu -nodefaults -display none -accel tcg -M none -device help 2>&1 | grep "Failed to" > /dev/null; echo $?)
> >         [[ $res -eq 0 ]] && echo "Error: $qemu"
> >     done
> >     popd
> > 
> > Also run 'make check' and 'check-acceptance' without any failures.
> 
> Acked-by: Gerd Hoffmann 
> 
> take care,
>   Gerd
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)


signature.asc
Description: This is a digitally signed message part


Re: [PATCH] hostmem: default the amount of prealloc-threads to smp-cpus

2022-05-18 Thread Dario Faggioli
On Wed, 2022-05-18 at 12:17 +0200, Igor Mammedov wrote:
> On Tue, 17 May 2022 20:46:50 +0200
> Paolo Bonzini  wrote:
> > > diff --git a/backends/hostmem.c b/backends/hostmem.c
> > > index a7bae3d713..624bb7ecd3 100644
> > > --- a/backends/hostmem.c
> > > +++ b/backends/hostmem.c
> > > @@ -274,7 +274,7 @@ static void host_memory_backend_init(Object
> > > *obj)
> > >   backend->merge = machine_mem_merge(machine);
> > >   backend->dump = machine_dump_guest_core(machine);
> > >   backend->reserve = true;
> > > -    backend->prealloc_threads = 1;
> > > +    backend->prealloc_threads = machine->smp.cpus;
> > >   }
> > >   
> > >   static void host_memory_backend_post_init(Object *obj)  
> > 
> > Queued, thanks.
> 
> PS:
> There is no good default in this case (whatever number is picked
> it could be good or bad depending on usecase).
> 
That is fair enough. What we observed, however, is that, with QEMU 5.2,
starting a 1024G VM takes ~34s.

Then you just update QEMU to > 5.2 (without doing or changing anything
else) and the same VM now takes ~4m30s to start.

If users are managing QEMU via Libvirt *and* have _at_least_ Libvirt
8.2, they can indeed set, e.g.,  (provided they can understand where the problem is, and
figure out that this is the solution).

If they have Libvirt < 8.2 (e.g., people/distros that have, say, QEMU
6.2 and Libvirt 8.0.0, or something like that), there's basically
nothing they can do... Except perhaps command line passthrough [1], but
that's really rather tricky!

So, I personally don't know where any default should be set and how,
but the above situation is not nice for users to have to handle.

[1] https://libvirt.org/kbase/qemu-passthrough-security.html
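For reference, this is roughly what the property looks like on a direct QEMU invocation (a hedged sketch: the backend object and property names come from QEMU's hostmem backends; the sizes and thread count are purely illustrative):

```shell
# Illustrative only -- not a drop-in command line.
qemu-system-x86_64 \
    -m 1024G \
    -object memory-backend-ram,id=mem0,size=1024G,prealloc=on,prealloc-threads=16 \
    -numa node,nodeid=0,memdev=mem0 \
    ...
```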

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)


signature.asc
Description: This is a digitally signed message part


SecureBoot and PCI passthrough with kernel lockdown in place (on Xen)

2022-02-14 Thread Dario Faggioli
Hello,

We have run into an issue when trying to use PCI passthrough for a Xen
VM running on an host where dom0 kernel is 5.14.21 (but we think it
could be any kernel > 5.4) and SecureBoot is enabled.

The error we get, when (for instance) trying to attach a device to an
(HVM) VM, on such system is:

# xl pci-attach 2-fv-sles15sp4beta2 :58:03.0 
libxl: error: libxl_qmp.c:1838:qmp_ev_parse_error_messages: Domain 12:Failed to 
initialize 12/15, type = 0x1, rc: -1
libxl: error: libxl_pci.c:1777:device_pci_add_done: Domain 
12:libxl__device_pci_add failed for PCI device 0:58:3.0 (rc -28)
libxl: error: libxl_device.c:1420:device_addrm_aocomplete: unable to add device

QEMU, is telling us the following:

[00:04.0] xen_pt_msix_init: Error: Can't open /dev/mem: Operation not permitted
[00:04.0] xen_pt_msix_size_init: Error: Internal error: Invalid xen_pt_msix_init.

And the kernel reports this:

Jan 27 16:20:53 narvi-sr860v2-bps-sles15sp4b2 kernel: Lockdown: qemu-system-i38: /dev/mem,kmem,port is restricted; see man kernel_lockdown.7

So, it's related to lockdown, which, AFAIUI, is consistent with the
fact that the problem only shows up when SecureBoot is enabled, as that
implies lockdown. It's also consistent with the fact that we don't seem
to have any problems doing the same with a 5.3.x dom0 kernel... as
there's no lockdown there!
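A quick way to confirm that lockdown is indeed what is biting (a hedged diagnostic sketch; the sysfs path comes from kernel_lockdown(7) and may be absent on kernels built without the feature):

```shell
# Prints the lockdown state, e.g. "none [integrity] confidentiality",
# or a fallback message where the interface does not exist.
lockdown_state=$(cat /sys/kernel/security/lockdown 2>/dev/null \
                 || echo "lockdown interface not available")
echo "$lockdown_state"
```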

Some digging revealed that QEMU tries to open /dev/mem in
xen_pt_msix_init():

fd = open("/dev/mem", O_RDWR);
...
msix->phys_iomem_base =
    mmap(NULL,
         total_entries * PCI_MSIX_ENTRY_SIZE + msix->table_offset_adjust,
         PROT_READ,
         MAP_SHARED | MAP_LOCKED,
         fd,
         msix->table_base + table_off - msix->table_offset_adjust);
close(fd);

This comes from commit:

commit 3854ca577dad92c4fe97b4a6ebce360e25407af7
Author: Jiang Yunhong 
Date:   Thu Jun 21 15:42:35 2012 +

Introduce Xen PCI Passthrough, MSI

A more complete history can be found here:
git://xenbits.xensource.com/qemu-xen-unstable.git

Signed-off-by: Jiang Yunhong 
Signed-off-by: Shan Haitao 
Signed-off-by: Anthony PERARD 
Acked-by: Stefano Stabellini 

Now, the questions:
- is this (i.e., PCI passthrough with a locked-down dom0 kernel)
  working for anyone? I've Cc-ed Marek, because I think I've read that
  it does work on QubesOS, but I'm not sure if the situation is the
  same...
- if it's working, how?

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
---
<> (Raistlin Majere)


signature.asc
Description: This is a digitally signed message part


Re: [Qemu-devel] [PATCH] docker: dockerfile for openSUSE Leap

2018-11-19 Thread Dario Faggioli
On Mon, 2018-11-19 at 01:02 +0100, Philippe Mathieu-Daudé wrote:
> Hi Dario,
> 
Hi,

> On Sun, Nov 18, 2018 at 10:54 PM Dario Faggioli 
> wrote:
> > On Sun, 2018-11-18 at 19:47 +, Alex Bennée wrote:
> > > This hasn't been tested because the docker image fails to build
> > > due
> > > to
> > > continuation breakage.
> > > 
> > It is indeed broken.
> > 
> > Basically, I tested it thoroughly, and I'm quite sure it works ok
> > (although, I'm no docker expert).
> 
> I quickly fixed/tried it.
>
Cool, thanks for having a look. :-)

> First this package list pulls many unuseful system packages.
> I added '--no-recommends' to reduce a bit:
> 
> RUN zypper ref && \
> zypper up -y && \
> zypper install -y --no-recommends $PACKAGES
> 
Right, that makes sense. I'll add it.

> This still install a bunch of unnecessary stuffs:
> 
> Creating group systemd-journal with gid 485.
> Creating group systemd-network with gid 484.
> Creating user systemd-network (systemd Network Management) with uid
> 484 and gid 484.
> Creating group systemd-coredump with gid 483.
> Creating user systemd-coredump (systemd Core Dumper) with uid 483 and
> gid 483.
> Creating group systemd-timesync with gid 482.
> Creating user systemd-timesync (systemd Time Synchronization) with
> uid
> 482 and gid 482.
> Failed to connect to bus: No such file or directory
> 
> Then the resulting image takes the same than without --no-recommends:
> 1.1GB. No idea why.
>
So, my "model" was the fedora.docker dockerfile, where quite a few
packages are also installed, much more than the bare minimum necessary
for a build to succeed (AFAICT, at least).

The idea would be to enable and include support for as many
features/backends/integrations/etc as possible, in order to test them.

Basically, I did this by trial-and-error, i.e., adding packages related
to entries in `./configure' which looked like 'no', and keeping them
(only) if they turned such entry to 'yes'.

Now, this is what I see on my system, and that's why the size did not
concern me: :-)

REPOSITORY   TAG IMAGE ID   CREATED   SIZE
qemu opensuse-leap   6bd6bc2ac139   4 hours ago   1.09GB
qemu fedora  1b05e254d0d9   5 hours ago   2.02GB

If this is not ok, I'm happy to try to shrink the image size a bit
(e.g., I'll double check if there are useless packages left in place),
and/or to put together another dockerfile (something like opensuse-
leap_minimal), with just what is necessary for the build (and the
tests) to succeed.

> At some point some package tried to talk to the network manager via
> DBus to start sshd...
> 
Ah, interesting. :-P

So, where did you see these logs (as well as the ones above)?
Running `make docker-***' with 'V=1', I'm guessing?

> Second, while this works on x86_64, it fails on aarch64:
> 
> Problem: libcurl-devel-7.59.0-lp150.1.2.aarch64 requires libcurl4 =
> 7.59.0, but this requirement cannot be provided
>   not installable providers: libcurl4-7.59.0-lp150.1.2.aarch64[repo-
> oss]
>  Solution 1: downgrade of libcurl4-7.60.0-lp150.2.15.1.aarch64 to
> libcurl4-7.59.0-lp150.1.2.aarch64
>  Solution 2: do not install libcurl-devel-7.59.0-lp150.1.2.aarch64
>  Solution 3: break libcurl-devel-7.59.0-lp150.1.2.aarch64 by ignoring
> some of its dependencies
> 
Ok, I see.

> Not your fault. So I suppose you are planning to use this image to
> compile QEMU for openSUSE/x86_64 only, is this correct?
> 
Yes, basically, x86_64 is my main usecase, and the (only) setup I can
test quickly and easily enough. However, I'm certainly up for trying to
improve this, and make this dockerfile useful for other arches as well.

I'll try to look into this. How did you test this on aarch64? On a
native ARM host, or using QEMU?

Thanks again and Regards,
Dario
-- 
<> (Raistlin Majere)
-
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Software Engineer @ SUSE https://www.suse.com/


signature.asc
Description: This is a digitally signed message part


Re: [Qemu-devel] [PATCH] docker: dockerfile for openSUSE Leap

2018-11-18 Thread Dario Faggioli
On Sun, 2018-11-18 at 19:47 +, Alex Bennée wrote:
> Dario Faggioli  writes:
> 
> > Dockerfile for building an openSUSE Leap container.
> > 
> > Tracks the latest release (at the time of writing this
> > patch, it is Leap 15).
> > 
> > Signed-off-by: Dario Faggioli 
> > ---
> > Cc: "Alex Bennée" 
> > Cc: Fam Zheng 
> > Cc: "Philippe Mathieu-Daudé" 
> > ---
> >  tests/docker/dockerfiles/opensuse-leap.docker |   62
> > +
> >  1 file changed, 62 insertions(+)
> >  create mode 100644 tests/docker/dockerfiles/opensuse-leap.docker
> > 
> > diff --git a/tests/docker/dockerfiles/opensuse-leap.docker
> > b/tests/docker/dockerfiles/opensuse-leap.docker
> > new file mode 100644
> > index 00..9d00861e66
> > --- /dev/null
> > +++ b/tests/docker/dockerfiles/opensuse-leap.docker
> > @@ -0,0 +1,62 @@
> > +FROM opensuse/leap
> > +ENV PACKAGES \
> 
> > +tar \
> > +usbredir-devel \
> > +virglrenderer-devel \
> > +vte-devel \
> > +which \
> > +xen-devel
> > +zlib-devel \
> 
> This hasn't been tested because the docker image fails to build due
> to
> continuation breakage.
>
It is indeed broken.

Basically, I tested it thoroughly, and I'm quite sure it works ok
(although, I'm no docker expert).

Then, when I was about to send the mail, I saw in another commit that
it was best to have the package list sorted; so I did that, and forgot
to adjust the continuation. :-(

Sorry.

I'll send a v2 with this fixed.

Thanks and Regards,
Dario
-- 
<> (Raistlin Majere)
-
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Software Engineer @ SUSE https://www.suse.com/


signature.asc
Description: This is a digitally signed message part


[Qemu-devel] [PATCH] docker: dockerfile for openSUSE Leap

2018-11-16 Thread Dario Faggioli
Dockerfile for building an openSUSE Leap container.

Tracks the latest release (at the time of writing this
patch, it is Leap 15).

Signed-off-by: Dario Faggioli 
---
Cc: "Alex Bennée" 
Cc: Fam Zheng 
Cc: "Philippe Mathieu-Daudé" 
---
 tests/docker/dockerfiles/opensuse-leap.docker |   62 +
 1 file changed, 62 insertions(+)
 create mode 100644 tests/docker/dockerfiles/opensuse-leap.docker

diff --git a/tests/docker/dockerfiles/opensuse-leap.docker 
b/tests/docker/dockerfiles/opensuse-leap.docker
new file mode 100644
index 00..9d00861e66
--- /dev/null
+++ b/tests/docker/dockerfiles/opensuse-leap.docker
@@ -0,0 +1,62 @@
+FROM opensuse/leap
+ENV PACKAGES \
+bc \
+bison \
+bluez-devel \
+brlapi-devel \
+bzip2 \
+flex \
+gcc \
+gcc-c++ \
+gettext-tools \
+git \
+glib2-devel \
+glusterfs-devel \
+gtk3-devel \
+gtkglext-devel \
+gzip \
+hostname \
+libSDL2-devel \
+libaio-devel \
+libasan4 \
+libcap-devel \
+libcap-ng-devel \
+libcurl-devel \
+libfdt-devel \
+libgcrypt-devel \
+libgnutls-devel \
+libjpeg62-devel \
+libnettle-devel \
+libnuma-devel \
+libpixman-1-0-devel \
+libpng16-devel \
+librbd-devel \
+libspice-server-devel \
+libssh2-devel \
+libtasn1-devel \
+libxml2-devel \
+lzo-devel \
+make \
+ncurses-devel \
+perl \
+pkg-config \
+python3 \
+python3-PyYAML \
+snappy-devel \
+sparse \
+tar \
+usbredir-devel \
+virglrenderer-devel \
+vte-devel \
+which \
+xen-devel
+zlib-devel \
+ENV QEMU_CONFIGURE_OPTS --python=/usr/bin/python3
+
+ENV LANG en_US.UTF-8
+ENV LANGUAGE en_US:en
+ENV LC_ALL en_US.UTF-8
+
+RUN zypper ref && zypper up -y
+RUN zypper install -y $PACKAGES
+RUN rpm -q $PACKAGES | sort > /packages.txt




Re: [Qemu-devel] [RFC PATCH 1/3] i386: add properties for customizing L2 and L3 caches size

2018-11-14 Thread Dario Faggioli
On Wed, 2018-11-14 at 08:14 -0600, Eric Blake wrote:
> On 11/14/18 4:56 AM, Dario Faggioli wrote:
> > ---
> >   0 files changed
> 
> That's an odd diffstat. Why is git not giving you the normal
> diffstat 
> with an actual summary of files changed?
> 
Ah, more weirdness about this submission. :-O

I've just tried re-sending the series to myself and, for this patch, I
do see a diffstat that makes sense:

 include/hw/i386/pc.h |8 
 target/i386/cpu.c|8 
 target/i386/cpu.h|3 +++
 3 files changed, 19 insertions(+)

And same is true for all other series I've sent around in the same way
and with the same tool (I went double checking a couple of them! :-P).

So, clearly, something went wrong this time. :-/

Regards,
Dario
-- 
<> (Raistlin Majere)
-----
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Software Engineer @ SUSE https://www.suse.com/


signature.asc
Description: This is a digitally signed message part


Re: [Qemu-devel] [RFC PATCH 0/3] Series short description

2018-11-14 Thread Dario Faggioli
On Wed, 2018-11-14 at 11:29 +, Daniel P. Berrangé wrote:
> On Wed, Nov 14, 2018 at 12:08:42PM +0100, Dario Faggioli wrote:
> > Wow... Mmm, not sure what went wrong... Anyway, this is the cover
> > letter I thought I had sent. Sorry :-/
> 
> No problem !
> 
Hello,

> If you have not come across it before, "git-publish" is a great addon
> tool for git to make sending patch series more pain-free
> 
>https://github.com/stefanha/git-publish
> 
> [...]
> 
Yes, I've heard of it. I'm already planning to check if it works well
with stgit, which I also use (and wish to continue to :-) ).

If it does, I'll definitely start using it.

> > --
> > Hello everyone,
> > 
> > This is Dario, from SUSE, and this is the first time I touch QEMU.
> > :-D
> 
> Welcome & thanks for your first patch(es) to QEMU.
> 
Thanks for the warm welcome. :-D

Regards,
Dario
-- 
<> (Raistlin Majere)
-
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Software Engineer @ SUSE https://www.suse.com/


signature.asc
Description: This is a digitally signed message part


Re: [Qemu-devel] [RFC PATCH 0/3] Series short description

2018-11-14 Thread Dario Faggioli
Wow... Mmm, not sure what went wrong... Anyway, this is the cover
letter I thought I had sent. Sorry :-/
--
Hello everyone,

This is Dario, from SUSE, and this is the first time I touch QEMU. :-D

So, basically, while playing with an AMD EPYC box, we came across a weird
performance regression between host and guest. It was happening with the
STREAM benchmark, and we tracked it down to non-temporal stores _not_ being
used, inside the guest.

More specifically, this was because the glibc version we were dealing with had
heuristics for deciding whether or not to use NT instructions. Basically, it
was checking how big the L2 and L3 caches are, as compared to how many
threads are actually sharing such caches.
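The shape of that heuristic can be sketched like this (hedged: the 3/4 threshold fraction and the overall structure are illustrative assumptions, not glibc's actual code):

```shell
# Decide whether a copy should use non-temporal (streaming) stores:
# if it would evict most of one thread's share of the shared cache,
# bypassing the cache is the better strategy.
use_nt_stores() {  # args: copy_bytes shared_cache_bytes threads_sharing
    per_thread=$(( $2 / $3 ))
    threshold=$(( per_thread * 3 / 4 ))
    if [ "$1" -gt "$threshold" ]; then echo yes; else echo no; fi
}
# Large copy, small guest L3: NT stores win.
big_copy=$(use_nt_stores $(( 64 * 1024 * 1024 )) $(( 8 * 1024 * 1024 )) 4)
# Small copy, big (host-cache-info) cache, few vcpus: heuristic says no.
small_copy=$(use_nt_stores $(( 1 * 1024 * 1024 )) $(( 64 * 1024 * 1024 )) 4)
echo "$big_copy $small_copy"
```

With a VM that has fewer vcpus than the host has pcpus, host-cache-info inflates the apparent per-thread share of the cache, and a heuristic of this shape flips away from NT stores, matching the regression described above.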

Currently, as far as cache layout and size are concerned, we only have the
following options:
- no L3 cache,
- emulated L3 cache, which means the default cache layout for the chosen CPU
  is used,
- host L3 cache info, which means the cache layout of the host is used.

Now, in our case, 'host-cache-info' made sense, because we were pinning vcpus
as well as doing other optimizations. However, as the VM had _less_ vcpus than
the host had pcpus, the result of the heuristics was to avoid non-temporal
stores, causing the unexpectedly high drop in performance. And, as you can
imagine, we could not fix things by using 'l3-cache=on' either.

This made us think this could be a general problem, and not only an issue for
our benchmarks, and here it comes this series. :-)

Basically, while we can, of course, control the number of vcpus a guest has
already --as well as how they are arranged within the guest topology-- we can't
control how big are the caches the guest sees. And this is what this series
tries to implement: giving the user the ability to tweak the actual size of the
L2 and L3 caches, to deal with all those cases when the guest OS or userspace
do check that, and behave differently depending on what they see.

Yes, this is not at all that common, but it happens, and hence the feature can
be considered useful, IMO. And yes, it is definitely something meant for those
cases where one is carefully tuning and highly optimizing, with things like
vcpu pinning, etc.

I've tested with many CPU models, and the cache info from inside the guest
looks consistent. I haven't re-run the benchmarks that triggered all this work,
as I don't have the proper hardware handy right now, but I'm planning to
(although, as said, this looks like a general problem to me).

I've got libvirt patches for exposing these new properties in the works, but
of course they only make sense if/when this series is accepted.

As I said, it's my first submission, and it's RFC because there are a couple
of things that I'm not sure I got right (details in the single patches).

Any comment or advice more than welcome. :-)

Thanks and Regards,
Dario
---
Dario Faggioli (3):
  i386: add properties for customizing L2 and L3 cache sizes
  i386: custom cache size in CPUID2 and CPUID4 descriptors
  i386: custom cache size in AMD's CPUID descriptors too

 include/hw/i386/pc.h |8 
 target/i386/cpu.c|   50 ++
 target/i386/cpu.h|3 +++
 3 files changed, 61 insertions(+)

-- 
<> (Raistlin Majere)
-
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Software Engineer @ SUSE https://www.suse.com/


signature.asc
Description: This is a digitally signed message part


[Qemu-devel] [RFC PATCH 3/3] i386: custom cache size in AMD's CPUID descriptors too

2018-11-14 Thread Dario Faggioli
If specified on the command line, alter the cache size(s)
properties accordingly, before encoding them in the AMD's
CPUID cache descriptors too (i.e., 0x80000006 and 0x8000001d).

Signed-off-by: Dario Faggioli 
---
Cc: Paolo Bonzini 
Cc: Richard Henderson 
Cc: Eduardo Habkost 
---
 0 files changed

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 17aff19561..4949d6b907 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -4490,6 +4490,12 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
(L2_DTLB_4K_ENTRIES << 16) | \
(AMD_ENC_ASSOC(L2_ITLB_4K_ASSOC) << 12) | \
(L2_ITLB_4K_ENTRIES);
+if (cpu->l2_cache_size > 0)
+set_custom_cache_size(env->cache_info_amd.l2_cache,
+  cpu->l2_cache_size);
+if (cpu->enable_l3_cache && cpu->l3_cache_size > 0)
+set_custom_cache_size(env->cache_info_amd.l3_cache,
+  cpu->l3_cache_size);
 encode_cache_cpuid80000006(env->cache_info_amd.l2_cache,
cpu->enable_l3_cache ?
env->cache_info_amd.l3_cache : NULL,
@@ -4546,10 +4552,16 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
eax, ebx, ecx, edx);
 break;
 case 2: /* L2 cache info */
+if (cpu->l2_cache_size > 0)
+set_custom_cache_size(env->cache_info_amd.l2_cache,
+  cpu->l2_cache_size * MiB);
 encode_cache_cpuid8000001d(env->cache_info_amd.l2_cache, cs,
eax, ebx, ecx, edx);
 break;
 case 3: /* L3 cache info */
+if (cpu->enable_l3_cache && cpu->l3_cache_size > 0)
+set_custom_cache_size(env->cache_info_amd.l3_cache,
+  cpu->l3_cache_size * MiB);
 encode_cache_cpuid8000001d(env->cache_info_amd.l3_cache, cs,
eax, ebx, ecx, edx);
 break;




[Qemu-devel] [RFC PATCH 1/3] i386: add properties for customizing L2 and L3 caches size

2018-11-14 Thread Dario Faggioli
Make it possible to specify a custom size for the L2 and
L3 caches, from the command line.

This can be useful in cases where applications or libraries
check, within the guest, the cache size and behave differently
depending on what they actually see.

Signed-off-by: Dario Faggioli 
---
I am not entirely sure I got the include/hw/i386 bits right (i.e.,
whether I should include the new properties in PC_COMPAT_3_0 and, if
yes, if the stanzas are correct). I'll dig further (and accept any
help/advice :-D )
---
Cc: "Michael S. Tsirkin" 
Cc: Marcel Apfelbaum 
Cc: Paolo Bonzini 
Cc: Richard Henderson 
Cc: Eduardo Habkost 
---
 0 files changed

diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index 136fe497b6..1094bba68c 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -308,6 +308,14 @@ bool e820_get_entry(int, uint32_t, uint64_t *, uint64_t *);
 .driver   = "Skylake-Server-IBRS" "-" TYPE_X86_CPU,\
 .property = "pku",\
 .value= "off",\
+},{\
+.driver   = TYPE_X86_CPU,\
+.property = "l3-cache-size",\
+.value= "off",\
+},{\
+.driver   = TYPE_X86_CPU,\
+.property = "l2-cache-size",\
+.value= "off",\
 },
 
 #define PC_COMPAT_2_12 \
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index f81d35e1f9..b8ccb2be04 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -5778,6 +5778,14 @@ static Property x86_cpu_properties[] = {
 DEFINE_PROP_INT32("x-hv-max-vps", X86CPU, hv_max_vps, -1),
 DEFINE_PROP_BOOL("x-hv-synic-kvm-only", X86CPU, hyperv_synic_kvm_only,
  false),
+
+/*
+ * Custom size for L2 and/or L3 cache. Default (0) means we use the
+ * default value for the CPU.
+ */
+DEFINE_PROP_SIZE("l2-cache-size", X86CPU, l2_cache_size, 0),
+DEFINE_PROP_SIZE("l3-cache-size", X86CPU, l3_cache_size, 0),
+
 DEFINE_PROP_END_OF_LIST()
 };
 
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index 9c52d0cbeb..ba0b913448 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1476,6 +1476,9 @@ struct X86CPU {
 int32_t core_id;
 int32_t thread_id;
 
+uint64_t l2_cache_size;
+uint64_t l3_cache_size;
+
 int32_t hv_max_vps;
 };
 




[Qemu-devel] [RFC PATCH 2/3] i386: custom cache size in CPUID2 and CPUID4 descriptors

2018-11-14 Thread Dario Faggioli
If specified on the command line, alter the cache(s) properties
accordingly, before encoding them in the CPUID descriptors.

Tweak the number of sets (if defined), to retain consistency.

Unless some specific size values are used (either by chance
or voluntarily), we won't find any matching CPUID-2 descriptor,
and 0xFF will be used. This shouldn't be a problem, as we have
CPUID-4.

Signed-off-by: Dario Faggioli 
---
I'm no CPUID expert. I'm not sure I've fully understood the relationship
between CPUID-2 and CPUID-4. The solution implemented here is the best
I could come up with, and it worked on all the CPU types that I've tried.
If it's wrong/suboptimal, I'm happy to think of something else/rework.
---
Cc: Paolo Bonzini 
Cc: Richard Henderson 
Cc: Eduardo Habkost 
---
 0 files changed

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index b8ccb2be04..17aff19561 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -426,6 +426,24 @@ static void encode_cache_cpuid8000001d(CPUCacheInfo *cache, CPUState *cs,
(cache->complex_indexing ? CACHE_COMPLEX_IDX : 0);
 }
 
+static void set_custom_cache_size(CPUCacheInfo *c, uint64_t sz)
+{
+/*
+ * Descriptors that have 'sets', also have 'partitions' initialized,
+ * so we can compute the new number of sets. For others, just tweak the
+ * size.
+ */
+assert(c->partitions > 0 || c->sets == 0);
+if (c->sets > 0) {
+uint32_t sets = sz / (c->line_size * c->associativity * c->partitions);
+
+if (sets == 0)
+return;
+c->sets = sets;
+}
+c->size = sz;
+}
+
 /* Data structure to hold the configuration info for a given core index */
 struct core_topology {
 /* core complex id of the current core index */
@@ -4193,8 +4211,14 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
 if (!cpu->enable_l3_cache) {
 *ecx = 0;
 } else {
+if (cpu->l3_cache_size > 0)
+set_custom_cache_size(env->cache_info_cpuid2.l3_cache,
+  cpu->l3_cache_size);
 *ecx = cpuid2_cache_descriptor(env->cache_info_cpuid2.l3_cache);
 }
+if (cpu->l2_cache_size > 0)
+set_custom_cache_size(env->cache_info_cpuid2.l2_cache,
+  cpu->l2_cache_size);
 *edx = (cpuid2_cache_descriptor(env->cache_info_cpuid2.l1d_cache) << 16) |
        (cpuid2_cache_descriptor(env->cache_info_cpuid2.l1i_cache) <<  8) |
        (cpuid2_cache_descriptor(env->cache_info_cpuid2.l2_cache));
@@ -4222,6 +4246,9 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
                                 eax, ebx, ecx, edx);
             break;
         case 2: /* L2 cache info */
+            if (cpu->l2_cache_size > 0)
+                set_custom_cache_size(env->cache_info_cpuid4.l2_cache,
+                                      cpu->l2_cache_size);
             encode_cache_cpuid4(env->cache_info_cpuid4.l2_cache,
                                 cs->nr_threads, cs->nr_cores,
                                 eax, ebx, ecx, edx);
@@ -4229,6 +4256,9 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
         case 3: /* L3 cache info */
             pkg_offset = apicid_pkg_offset(cs->nr_cores, cs->nr_threads);
             if (cpu->enable_l3_cache) {
+                if (cpu->l3_cache_size > 0)
+                    set_custom_cache_size(env->cache_info_cpuid4.l3_cache,
+                                          cpu->l3_cache_size);
                 encode_cache_cpuid4(env->cache_info_cpuid4.l3_cache,
                                     (1 << pkg_offset), cs->nr_cores,
                                     eax, ebx, ecx, edx);




[Qemu-devel] [RFC PATCH 0/3] Series short description

2018-11-14 Thread Dario Faggioli
The following series implements...

---

Dario Faggioli (3):
  i386: add properties for customizing L2 and L3 caches size
  i386: custom cache size in CPUID2 and CPUID4 descriptors
  i386: custom cache size in AMD's CPUID descriptors too


 0 files changed

--
Signature



Re: [Qemu-devel] [Xen-devel] [PATCH v2] libxl: usb2 and usb3 controller support for upstream qemu

2013-07-12 Thread Dario Faggioli
On Fri, 2013-07-12 at 10:43 +0200, Fabio Fantoni wrote:
> On 11/07/2013 17:56, Dario Faggioli wrote:
>> Signed-off-by: Fabio Fantoni fabio.fant...@m2r.biz
>>
>> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
>> index d218a2d..b4c6921 100644
>> --- a/tools/libxl/libxl_types.idl
>> +++ b/tools/libxl/libxl_types.idl
>> @@ -325,6 +325,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
>>      ("serial",           string),
>>      ("boot",             string),
>>      ("usb",              libxl_defbool),
>> +    ("usbversion",       integer),
>>      # usbdevice:
>>      #   - "tablet" for absolute mouse,
>>      #   - "mouse" for PS/2 protocol relative mouse
>>
>> I believe this calls for a `#define LIBXL_HAVE_USBVERSION' (or something
>> like that) in libxl.h, doesn't it?
>
> Is it necessary even if I just want to apply it to xen 4.4 (as a new
> feature) and not backport it to older versions?

Although API stability is something I can't claim to fully master, I would
say yes, it is necessary. :-)

> Or probably I don't understand exactly what the function of LIBXL_HAVE_* is.

The point is to allow people to write code suitable for _all_ versions
of libxl. So, imagine, in the super cool toolstack I'm writing on top of
libxl, I want to use your new parameter, if it's there, i.e., if
compiling on top of 4.4, but I also want the code to compile against
libxl 4.3.

With the LIBXL_HAVE_*, I can do something like this:

...
binfo.usb = true;
#ifdef LIBXL_HAVE_BUILDINFO_USBVERSION
binfo.usbversion = 3;
#endif
...

Does that make sense?

> I tried to grep all code for LIBXL_HAVE_* to see an example of use, but I
> found only one, and I don't understand it.

Try harder! :-)

http://xenbits.xen.org/gitweb/?p=xen.git;a=search;h=refs%2Fheads%2Fstaging;st=commit;s=LIBXL_HAVE_
http://xenbits.xen.org/gitweb/?p=xen.git;a=search;h=HEAD;st=grep;s=LIBXL_HAVE_

Actually, while at it, given LIBXL_HAVE_BUILDINFO_USBDEVICE_LIST is
added in ac16730d0339d41fd7d1, I'd go for
LIBXL_HAVE_BUILDINFO_USBVERSION for yours, it looks more consistent.

Regards,
Dario

-- 
This happens because I choose it to happen! (Raistlin Majere)
-
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)





Re: [Qemu-devel] [Xen-devel] [PATCH v4] libxl: usb2 and usb3 controller support for upstream qemu

2013-07-12 Thread Dario Faggioli
On Fri, 2013-07-12 at 15:58 +0200, Fabio Fantoni wrote:
> Usage: usbversion=1|2|3 (default=2)
> Specifies the type of the emulated USB bus in the guest: 1 for usb1,
> 2 for usb2 and 3 for usb3; it is available only with upstream qemu.
> Default is 2.
>
> Signed-off-by: Fabio Fantoni fabio.fant...@m2r.biz

FWIW, Acked-by: Dario Faggioli dario.faggi...@citrix.com
  
>   /*
> + * LIBXL_HAVE_BUILDINFO_USBVERSION
> + *
> + * If this is defined, then the libxl_domain_build_info structure will
> + * contain hvm.usbversion, an integer type that contains a USB
> + * controller version to specify on the qemu upstream command-line.
> + *
> + * If it is set, callers may use hvm.usbversion to specify if the usb
> + * controller is usb1, usb2 or usb3.
> + *
> + * If this is not defined, the usb controller is only usb1.
> + */
> +#define LIBXL_HAVE_BUILDINFO_USBVERSION 1
> +
> +/*

Yes, exactly what I (and George) meant. Thanks. :-)

Dario






Re: [Qemu-devel] [Xen-devel] [PATCH v2] libxl: usb2 and usb3 controller support for upstream qemu

2013-07-11 Thread Dario Faggioli
On Thu, 2013-07-11 at 12:33 +0200, Fabio Fantoni wrote:
> Usage: usbversion=1|2|3 (default=2)
> Specifies the type of the emulated USB bus in the guest: 1 for usb1,
> 2 for usb2 and 3 for usb3; it is available only with upstream qemu.
> Default is 2.
>
> Signed-off-by: Fabio Fantoni fabio.fant...@m2r.biz

> diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
> index d218a2d..b4c6921 100644
> --- a/tools/libxl/libxl_types.idl
> +++ b/tools/libxl/libxl_types.idl
> @@ -325,6 +325,7 @@ libxl_domain_build_info = Struct("domain_build_info",[
>      ("serial",           string),
>      ("boot",             string),
>      ("usb",              libxl_defbool),
> +    ("usbversion",       integer),
>      # usbdevice:
>      #   - "tablet" for absolute mouse,
>      #   - "mouse" for PS/2 protocol relative mouse

I believe this calls for a `#define LIBXL_HAVE_USBVERSION' (or something
like that) in libxl.h, doesn't it?

See the comment about libxl API compatibility in libxl.h itself for
more details.

Regards,
Dario



