Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Jan Beulich
>>> On 20.11.17 at 19:28,  wrote:
> On 20/11/17 17:14, Jan Beulich wrote:
> On 20.11.17 at 16:24,  wrote:
>>> On 20/11/17 15:20, Jan Beulich wrote:
>>> On 20.11.17 at 15:14,  wrote:
> On 20/11/17 14:56, Boris Ostrovsky wrote:
>> On 11/20/2017 06:50 AM, Jan Beulich wrote:
>> On 20.11.17 at 12:20,  wrote:
 Which restriction? I'm loading the RSDP table to its architecturally
 correct address if possible; otherwise it will be loaded to the same
 address as without my patch. So I'm not adding a restriction, but
 removing one.
>>> What "architecturally correct" means for PVH can't be read out of
>>> any spec other than what we write down ourselves. When there's no
>>> BIOS, placing anything right below the 1Mb boundary is at least
>>> bogus.
>>
>> Unless it's a UEFI boot -- where else would you put it? Aren't UEFI
>> and non-UEFI the only two options that the ACPI spec provides?
>
> I think Jan is right: for PVH it's _our_ job to define the correct
> placement. It can still be the same as in the BIOS case, making it
> easier to adapt guest systems.
>
> So I'd say: the RSDP address in the PVH case is passed to the guest in
> the PVH start info block. If there is no conflict with the physical
> load address of the guest kernel, the preferred address of the RSDP is
> right below the 1MB boundary.
>
> Would this wording be okay?
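The start-info route proposed above can be sketched as follows. The field names follow Xen's public header xen/include/public/arch-x86/hvm/start_info.h as of this time frame; the validation helper pvh_rsdp_paddr() is purely illustrative, not actual Xen or guest code:

```c
#include <stdint.h>

#define HVM_START_MAGIC_VALUE 0x336ec578U  /* "xEn3" with the 0x80 bit of the "E" set */

/* Layout following xen/include/public/arch-x86/hvm/start_info.h */
struct hvm_start_info {
    uint32_t magic;          /* must be HVM_START_MAGIC_VALUE           */
    uint32_t version;
    uint32_t flags;
    uint32_t nr_modules;
    uint64_t modlist_paddr;  /* physical address of the module list     */
    uint64_t cmdline_paddr;  /* physical address of the command line    */
    uint64_t rsdp_paddr;     /* physical address of the RSDP, 0 if none */
};

/* Return the RSDP physical address, or 0 if the block isn't valid. */
static uint64_t pvh_rsdp_paddr(const struct hvm_start_info *si)
{
    if (si->magic != HVM_START_MAGIC_VALUE)
        return 0;
    return si->rsdp_paddr;
}
```

A PVH guest receives the physical address of this block in %ebx at its entry point, so with rsdp_paddr filled in no memory scan is needed at all.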

 To be honest (and in case it wasn't sufficiently clear from my
 earlier replies) - I'm pretty much opposed to this below-1Mb thing.
 There ought to be just plain RAM there for PVH.
>>>
>>> So without my patch the RSDP table is loaded e.g. at about 6.5MB when
>>> I'm using grub2 (the loaded grub image is about 5.5MB in size and is
>>> loaded at 1MB).
>>>
>>> When I'm using the PVH Linux kernel directly RSDP is just below 1MB
>>> due to pure luck (the bzImage loader is still using the PV specific
>>> ELF notes and this results in the loader believing RSDP is loadable
>>> at this address, which is true, but the tests used to come to this
>>> conclusion are just not applicable for PVH).
>>>
>>> So in your opinion we should revoke PVH support from Xen 4.10, Linux
>>> and maybe BSD because the RSDP is loaded in the middle of the guest's
>>> RAM?
>> 
>> So what's wrong with putting it at whatever the loader determines the
>> next free memory location to be, just as is done for other
>> information, including modules (if any)?
> 
> The RSDP table is marked as "Reserved" in the memory map, so putting it
> somewhere in the middle of the guest's memory will force the guest to
> use 4kB pages instead of 2MB or even 1GB pages. I'd really like to
> avoid this, as we've been hit by the very same problem in HVM guests
> before, causing quite measurable performance drops.

This is a valid point.
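The cost at issue here is easy to quantify: a single reserved 4kB page in an otherwise free 1GB frame turns one 1GB mapping into 511 intact 2MB mappings plus 512 4kB mappings. A rough illustrative sketch of that arithmetic (not Xen code):

```c
#include <stdint.h>

#define PAGE_4K (1UL << 12)
#define PAGE_2M (1UL << 21)
#define PAGE_1G (1UL << 30)

/*
 * Mappings needed to cover one 1GB frame when the 4kB page at 'hole'
 * is reserved and must be mapped on its own: the 1GB frame splits
 * into 2MB mappings, and the 2MB frame containing the hole splits
 * into 4kB mappings.  Without the hole, one 1GB mapping would do.
 */
static unsigned long mappings_for_1g_with_hole(uint64_t hole)
{
    (void)hole;  /* the hole's position changes which frame splits, not the count */
    return (PAGE_1G / PAGE_2M - 1)   /* 511 intact 2MB mappings */
         + (PAGE_2M / PAGE_4K);      /* 512 4kB mappings around the hole */
}
```

So a reserved RSDP anywhere inside a 1GB frame costs 1023 mappings (plus extra page-table pages) where a single mapping would otherwise have sufficed.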

> So I'd rather put it in the first MB, as most kernels already have to
> deal with small pages at the beginning of RAM. An alternative would be
> to put it just below 4GB, where e.g. the console and Xenstore pages
> are located.

Putting it in the first Mb implies that mappings there will continue to
be 4k ones. I can't, however, see why that should be necessary for PVH:
there's no BIOS and nothing legacy that needs to live there, so unlike
HVM it could benefit from using a 1Gb mapping even at address zero (even
if this might not be achievable right away). So yes, if anything, the
allocation should be made top down, starting from 4Gb. Otoh, I don't see
a strict need for this area to live below 4Gb in the first place.

>>> Doing it in a proper way you are outlining above would render
>>> current PVH guests unusable.
>> 
>> I'm afraid I don't understand which outline of mine you refer to.
>> Iirc all I said was that placing it below 1Mb is bogus. Top of RAM
>> (as I think I saw being mentioned elsewhere) probably isn't much
>> better. Special locations should be used only if there's really no
>> other way to convey information.
> 
> I believe it is better to define where reserved memory areas are
> located, in order to make it easier for a kernel to deal with the
> consequences. A location like "somewhere in memory, either just after
> the loaded kernel, or after the boot loader which was loaded
> previously" doesn't seem to be a proper solution, especially as a
> change in the size of the boot loader could make it impossible to load
> the kernel at the desired location, as this would contradict the
> information returned by the hypervisor in the memory map.

Well, you imply here that the kernel has to search for the structure.
In that case having a well defined, reasonably narrow range is of
course helpful. But I think we should, just like EFI does, prefer to
avoid any need for searching here. See my reply to Boris from a few
minutes ago.

Jan
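For context, the legacy search being argued against works roughly like this: scan the EBDA and the 0xE0000-0xFFFFF BIOS area on 16-byte boundaries for the "RSD PTR " signature and verify the byte checksum over the ACPI 1.0 header. A sketch over a plain buffer (illustrative only; real code would scan mapped physical memory):

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* The byte checksum over the first 20 bytes (ACPI 1.0 RSDP) must be zero. */
static int rsdp_checksum_ok(const uint8_t *p)
{
    uint8_t sum = 0;
    for (size_t i = 0; i < 20; i++)
        sum += p[i];
    return sum == 0;
}

/* Scan 'buf' on 16-byte boundaries for a valid RSDP; return its offset or -1. */
static long rsdp_scan(const uint8_t *buf, size_t len)
{
    for (size_t off = 0; off + 20 <= len; off += 16)
        if (!memcmp(buf + off, "RSD PTR ", 8) && rsdp_checksum_ok(buf + off))
            return (long)off;
    return -1;
}
```

Passing the address directly (start info block, boot-protocol tag) avoids this scan entirely, which is Jan's point.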


Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Jan Beulich
>>> On 20.11.17 at 17:59,  wrote:
> On 11/20/2017 11:43 AM, Jan Beulich wrote:
> On 20.11.17 at 17:28,  wrote:
>>> On 11/20/2017 11:26 AM, Jan Beulich wrote:
>>> On 20.11.17 at 17:14,  wrote:
> What could cause grub2 to fail to find space for the pointer in the
> first page? Will we ever have anything in EBDA (which is one of the
> possible RSDP locations)?
 Well, the EBDA (see the B in its name) is again something that's
 meaningless without there being a BIOS.
>>> Exactly. So it should always be available for grub to copy the pointer
>>> there.
>> But what use would it be if grub copied it there? It just shouldn't
>> be there, neither before nor after grub (just like grub doesn't
>> introduce firmware into the system).
> 
> So that the guest can find it using standard methods. If Xen can't
> guarantee ACPI-compliant placement of the pointer then someone has to
> help the guest find it in the expected place. We can do it with a
> dedicated entry point by setting the pointer explicitly (although
> admittedly this is not done correctly now) or we need to have firmware
> (grub2) place it in the "right" location.
> 
> (It does look a bit hacky though)

Indeed. Of course ACPI without any actual firmware is sort of odd,
too. As to dedicated entry point and its alternatives: Xen itself
tells grub (aiui we're talking about a flavor of it running PVH itself)
where the RSDP is. Why can't grub forward that information in a
suitable way (e.g. via a new tag, or - for Linux - as a new entry
in the Linux boot header)?

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [Draft Design v3] ACPI/IORT Support in Xen.

2017-11-20 Thread Manish Jaggi


 ACPI/IORT Support in Xen.
 --
  Draft 3

 Revision History:

 Changes since v2:
 - Modified as per comments from Julien/Sameer/Andre

 Changes since v1:
 - Modified IORT Parsing data structures.
 - Added RID-StreamID and RID-DeviceID map as per Andre's suggestion.
 - Added reference code which can be read along with this document.
 - Removed domctl for DomU, it would be covered in PCI-PT design.

 Introduction:
 -

 I had sent out a patch series [0] to hide the SMMU from the Dom0 IORT.
 This document is a rework of that series, as it:
 (a) extends the scope by parsing the IORT table once and storing it in
 in-memory data structures, which can then be queried. This eliminates
 the need to parse the complete IORT table multiple times.

 (b) makes the generation of IORT tables for domains independent, using
 a set of helper routines.

 Index
 
 1. What is IORT? What are its components?
 2. Current Support in Xen
 3. IORT for Dom0
 4. IORT for DomU
 5. Parsing of IORT in Xen
 6. Generation of IORT
 7. Implementation Phases
 8. References

 1. IORT Structure
 
 IORT stands for IO Remapping Table. It is essentially used to find
 information about the IO topology (PCIRC-SMMU-ITS) and the
 relationships between devices.

 A general structure of IORT [1]:
 It has nodes for PCI RC, SMMU, ITS and platform devices. Using an IORT
 table, the relationship RID -> StreamID -> DeviceID can be obtained.
 The IORT table describes the topology: which device is behind which
 SMMU, and which interrupt controller.

 Some PCI RCs may not be behind an SMMU, and map RIDs directly to DeviceIDs.

 RID is the requester ID in the PCI context,
 StreamID is the ID of the device in the SMMU context,
 DeviceID is the ID programmed into the ITS.

 Each iort_node contains an ID map array to translate one ID into another.
 IDmap Entry {input_range, output_range, output_node_ref, id_count}
 This array is associated with PCI RC nodes, SMMU nodes and named
 component nodes, and can reference an SMMU or ITS node.

 2. Support of IORT
 ---
 It is proposed in this document to parse the IORT once and use the
 information to translate RIDs without traversing the IORT again and again.

 Xen also prepares an IORT table for Dom0, based on the host IORT.
 For DomU, an IORT table is required only in the case of device passthrough.

 3. IORT for Dom0
 -
 IORT for Dom0 is based on the host IORT; a few nodes may be removed or
 modified. For instance:
 - Host SMMU nodes should not be present, as only Xen should touch the SMMU.
 - Platform devices (named components) would be passed through as is. The
   visibility criterion for Dom0 is TBD.

 4. IORT for DomU
 -
 IORT for DomU should be generated by the toolstack. The IORT table is
 only present in the case of device passthrough.

 At a minimum, a DomU IORT should include a single PCIRC and ITS group.
 A similar PCIRC can be added in the DSDT.
 The exact structure of the DomU IORT will be covered in the PCI PT design.


 5. Parsing of IORT in Xen
 --
 IORT nodes can be saved in structures so that IORT table parsing is done
 once and reused by all Xen subsystems (ITS, SMMU, domain creation, etc.).

 The structures proposed to hold the IORT information are below. [4]

 struct rid_map_struct {
    void *pcirc_node;
    u16 input_base;
    u32 output_base;
    u16 id_count;
    struct list_head entry;
 };

Two global variables would hold the maps.
  struct list_head rid_streamid_map;
  struct list_head rid_deviceid_map;

 5.1 Functions to query StreamID and DeviceID from RID.

 void query_streamid(void *pcirc_node, u16 rid, u32 *streamid);
 void query_deviceid(void *pcirc_node, u16 rid, u32 *deviceid);

 Adding a mapping is done via helper functions

 int add_rid_streamid_map(void *pcirc_node, u32 ib, u32 ob, u32 idc)
 int add_rid_deviceid_map(void *pcirc_node, u32 ib, u32 ob, u32 idc)
 - The rid-streamid map is straightforward and is created using the PCI
   RC's idmap.
 - The rid-deviceid map is created by translating StreamIDs to DeviceIDs;
   the fixup_rid_deviceid_map function does that. (See [6])

To keep the API similar to Linux, iort_node_map_rid would be mapped to
query_streamid.
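The translation performed by the query functions is linear within each ID map entry: output = output_base + (rid - input_base) for rid in [input_base, input_base + id_count). A minimal sketch of the proposed list lookup (field names follow section 5; the simplified linkage and helper name are illustrative, not the actual patch):

```c
#include <stdint.h>
#include <stddef.h>

/* One entry of the proposed rid_streamid_map / rid_deviceid_map. */
struct rid_map_entry {
    void *pcirc_node;            /* owning PCI RC node                 */
    uint16_t input_base;         /* first RID covered by this entry    */
    uint32_t output_base;        /* StreamID/DeviceID for input_base   */
    uint16_t id_count;           /* number of IDs in the range         */
    struct rid_map_entry *next;  /* simplified list linkage            */
};

/* Translate 'rid' for the given PCI RC; returns 0 on success, -1 if unmapped. */
static int query_id(const struct rid_map_entry *map, void *pcirc_node,
                    uint16_t rid, uint32_t *out)
{
    for (; map; map = map->next) {
        if (map->pcirc_node != pcirc_node)
            continue;
        if (rid >= map->input_base &&
            rid < map->input_base + map->id_count) {
            *out = map->output_base + (rid - map->input_base);
            return 0;
        }
    }
    return -1;
}
```

query_streamid() and query_deviceid() would each run this lookup over their respective global list, so a RID is resolved without re-walking the IORT.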

6. IORT Generation
---
It is proposed to have a common helper library to generate IORT tables
for Dom0 and DomU.
Note: it is desirable to share the IORT generation code between the
toolstack and Xen.

a. For Dom0
 The rid_deviceid_map can be used directly to generate the Dom0 IORT table.
 Exclusion of nodes is still open for suggestions.

b. For DomU
 Minimal structure is discussed in section 4. It will be further discussed
 in the context of PCI PT design.

7. Implementation Phases
-
a. IORT Parsing and RID Query
b. IORT Generation for Dom0
c. IORT Generation for DomU.

8. References:
-
[0] https://www.mail-archive.com/xen-devel@lists.xen.org/msg121667.html
[1] ARM DEN0049C: 

[Xen-devel] [linux-next test] 116368: regressions - trouble: broken/fail/pass

2017-11-20 Thread osstest service owner
flight 116368 linux-next real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116368/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-libvirt-pair broken
 test-amd64-i386-libvirt-qcow2 broken
 test-amd64-amd64-xl-qemuu-ovmf-amd64 broken
 test-amd64-i386-libvirt-pair 5 host-install/dst_host(5) broken REGR. vs. 116316
 test-amd64-amd64-xl-qemuu-ovmf-amd64 4 host-install(4) broken REGR. vs. 116316
 test-amd64-i386-libvirt-qcow2  4 host-install(4)   broken REGR. vs. 116316
 test-amd64-amd64-xl-qemut-debianhvm-amd64  7 xen-bootfail REGR. vs. 116316
 test-amd64-amd64-xl-qemut-ws16-amd64  7 xen-boot fail REGR. vs. 116316
 test-amd64-amd64-pair10 xen-boot/src_hostfail REGR. vs. 116316
 test-amd64-amd64-pair11 xen-boot/dst_hostfail REGR. vs. 116316
 test-amd64-i386-xl-xsm7 xen-boot fail REGR. vs. 116316
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-boot  fail REGR. vs. 116316
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 116316
 test-amd64-amd64-amd64-pvgrub  7 xen-bootfail REGR. vs. 116316
 test-amd64-amd64-xl   7 xen-boot fail REGR. vs. 116316
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm  7 xen-boot fail REGR. vs. 116316
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-boot  fail REGR. vs. 116316
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 116316
 test-amd64-amd64-libvirt-vhd  7 xen-boot fail REGR. vs. 116316
 test-amd64-amd64-rumprun-amd64  7 xen-boot   fail REGR. vs. 116316
 test-amd64-i386-pair 10 xen-boot/src_hostfail REGR. vs. 116316
 test-amd64-i386-pair 11 xen-boot/dst_hostfail REGR. vs. 116316

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop  fail blocked in 116316
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-boot  fail like 116316
 test-amd64-amd64-xl-pvhv2-amd  7 xen-boot fail like 116316
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-boot  fail like 116316
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-bootfail like 116316
 test-amd64-i386-xl-raw7 xen-boot fail  like 116316
 test-amd64-i386-rumprun-i386  7 xen-boot fail  like 116316
 test-amd64-i386-examine   8 reboot   fail  like 116316
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm  7 xen-boot  fail like 116316
 test-amd64-i386-freebsd10-i386  7 xen-bootfail like 116316
 test-amd64-i386-libvirt-xsm   7 xen-boot fail  like 116316
 test-amd64-i386-freebsd10-amd64  7 xen-boot   fail like 116316
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-bootfail like 116316
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 116316
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 116316
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 116316
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 116316
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 116316
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 116316
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 116316
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   

[Xen-devel] [qemu-mainline test] 116369: trouble: broken/fail/pass

2017-11-20 Thread osstest service owner
flight 116369 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116369/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-pvhv2-amd broken
 test-armhf-armhf-xl-xsm  broken

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-pvhv2-amd  4 host-install(4) broken pass in 116339
 test-armhf-armhf-xl-xsm   4 host-install(4)  broken pass in 116339
 test-armhf-armhf-xl-credit2 16 guest-start/debian.repeat fail in 116339 pass in 116369
 test-armhf-armhf-xl   7 xen-boot fail in 116339 pass in 116369
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore.2 fail pass in 116339

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop  fail in 116339 like 116190
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start fail in 116339 never pass
 test-armhf-armhf-xl-xsm 13 migrate-support-check fail in 116339 never pass
 test-armhf-armhf-xl-xsm 14 saverestore-support-check fail in 116339 never pass
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 116190
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 116190
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 116190
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 116190
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 116190
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass

version targeted for testing:
 qemuu2e02083438962d26ef9dcc7100f3b378104183db
baseline version:
 qemuu1fa0f627d03cd0d0755924247cafeb42969016bf

Last test of basis   116190  2017-11-15 06:53:12 Z5 days
Failing since116227  2017-11-16 13:17:17 Z4 days5 attempts
Testing same since   116314  2017-11-18 15:17:45 Z2 days3 attempts


People who touched revisions under test:
  "Daniel P. Berrange" 
  Alex Bennée 
  Alexey Kardashevskiy 
  Anton Nefedov 
  BALATON Zoltan 
  Christian Borntraeger 
  Daniel Henrique Barboza 
  Daniel P. Berrange 
  Dariusz Stojaczyk 
  Dou Liyang 
  Dr. David Alan Gilbert 
  Emilio 

[Xen-devel] [xen-4.7-testing baseline-only test] 72467: regressions - FAIL

2017-11-20 Thread Platform Team regression test user
This run is configured for baseline tests only.

flight 72467 xen-4.7-testing real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/72467/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-amd64 21 leak-check/check fail REGR. vs. 72355
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsm 16 guest-localmigrate/x10 fail REGR. vs. 72355
 test-amd64-amd64-xl-qemut-win10-i386 16 guest-localmigrate/x10 fail REGR. vs. 72355

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-intel 17 debian-hvm-install/l1/l2 fail like 72355
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-midway   13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-midway   14 saverestore-support-checkfail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 10 windows-installfail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install fail never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-qemuu-nested-intel 18 capture-logs/l1(18) fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop  fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass

version targeted for testing:
 xen  259a5c3000d840f244dbb30f2b47b95f2dc0f80f
baseline version:
 xen  830224431b67fd2afad9bdc532dc1bede20032d5

Last test of basis72355  2017-10-26 09:18:07 Z   25 days
Testing same since72467  2017-11-20 14:45:02 Z0 days1 attempts


People who touched revisions under test:
  Andrew Cooper 
  Eric Chanudet 
  George Dunlap 
  Jan Beulich 
  Min He 
  Yi Zhang 
  Yu Zhang 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64-xtf  pass
 build-amd64  pass

[Xen-devel] [linux-linus bisection] complete test-amd64-i386-xl-qemut-debianhvm-amd64

2017-11-20 Thread osstest service owner
branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-qemut-debianhvm-amd64
testid xen-boot

Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  linux git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
  Bug introduced:  0192f17529fa3f8d78ca0181a2b2aaa7cbb0784d
  Bug not present: 0f07e10f8eebfd1081265f869cbb52a9d16e46f0
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/116386/


  (Revision log too long, omitted.)


For bisection revision-tuple graph see:
   
http://logs.test-lab.xenproject.org/osstest/results/bisect/linux-linus/test-amd64-i386-xl-qemut-debianhvm-amd64.xen-boot.html
Revision IDs in each graph node refer, respectively, to the Trees above.


Running cs-bisection-step --graph-out=/home/logs/results/bisect/linux-linus/test-amd64-i386-xl-qemut-debianhvm-amd64.xen-boot --summary-out=tmp/116386.bisection-summary --basis-template=115643 --blessings=real,real-bisect linux-linus test-amd64-i386-xl-qemut-debianhvm-amd64 xen-boot
Searching for failure / basis pass:
 116343 fail [host=pinot1] / 116226 [host=fiano0] 116215 [host=nobling1] 116182 [host=nobling0] 116164 [host=elbling1] 116152 [host=huxelrebe0] 116136 [host=nocera0] 116119 [host=chardonnay0] 116103 [host=chardonnay0] 115718 [host=merlot1] 115690 [host=nocera1] 115678 [host=elbling0] 115643 [host=chardonnay1] 115628 [host=huxelrebe1] 115615 [host=rimava1] 115599 [host=italia0] 115573 [host=baroque1] 115543 [host=fiano1] 115487 ok.
Failure / basis pass flights: 116343 / 115487
(tree with no url: minios)
(tree with no url: ovmf)
(tree with no url: seabios)
Tree: linux git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 0192f17529fa3f8d78ca0181a2b2aaa7cbb0784d 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
c8ea0457495342c417c3dc033bba25148b279f60 
b79708a8ed1b3d18bee67baeaf33b3fa529493e2 
b9ee1fd7b98064cf27d0f8f1adf1f5359b72c97f
Basis pass 0f07e10f8eebfd1081265f869cbb52a9d16e46f0 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
c8ea0457495342c417c3dc033bba25148b279f60 
5cd7ce5dde3f228b3b669ed9ca432f588947bd40 
bb2c1a1cc98a22e2d4c14b18421aa7be6c2adf0d
Generating revisions with ./adhoc-revtuple-generator  
git://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git#0f07e10f8eebfd1081265f869cbb52a9d16e46f0-0192f17529fa3f8d78ca0181a2b2aaa7cbb0784d
 
git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860
 
git://xenbits.xen.org/qemu-xen-traditional.git#c8ea0457495342c417c3dc033bba25148b279f60-c8ea0457495342c417c3dc033bba25148b279f60
 
git://xenbits.xen.org/qemu-xen.git#5cd7ce5dde3f228b3b669ed9ca432f588947bd40-b79708a8ed1b3d18bee67baeaf33b3fa529493e2
 
git://xenbits.xen.org/xen.git#bb2c1a1cc98a22e2d4c14b18421aa7be6c2adf0d-b9ee1fd7b98064cf27d0f8f1adf1f5359b72c97f
adhoc-revtuple-generator: tree discontiguous: linux-2.6
Loaded 2006 nodes in revision graph
Searching for test results:
 115321 [host=chardonnay0]
 115338 [host=elbling0]
 115353 [host=elbling1]
 115387 [host=italia1]
 115373 [host=nocera0]
 115469 [host=nobling1]
 115414 [host=huxelrebe0]
 115459 [host=nobling0]
 115438 [host=rimava0]
 115475 [host=fiano0]
 115487 pass 0f07e10f8eebfd1081265f869cbb52a9d16e46f0 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
c8ea0457495342c417c3dc033bba25148b279f60 
5cd7ce5dde3f228b3b669ed9ca432f588947bd40 
bb2c1a1cc98a22e2d4c14b18421aa7be6c2adf0d
 115599 [host=italia0]
 115543 [host=fiano1]
 115573 [host=baroque1]
 115615 [host=rimava1]
 115628 [host=huxelrebe1]
 115643 [host=chardonnay1]
 115678 [host=elbling0]
 115690 [host=nocera1]
 115718 [host=merlot1]
 116103 [host=chardonnay0]
 116152 [host=huxelrebe0]
 116117 [host=chardonnay0]
 116127 [host=chardonnay0]
 116119 [host=chardonnay0]
 116136 [host=nocera0]
 116164 [host=elbling1]
 116182 [host=nobling0]
 116215 [host=nobling1]
 116226 [host=fiano0]
 116268 fail irrelevant
 116316 fail irrelevant
 116371 fail 0192f17529fa3f8d78ca0181a2b2aaa7cbb0784d 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
c8ea0457495342c417c3dc033bba25148b279f60 
b79708a8ed1b3d18bee67baeaf33b3fa529493e2 
b9ee1fd7b98064cf27d0f8f1adf1f5359b72c97f
 116374 pass 0f07e10f8eebfd1081265f869cbb52a9d16e46f0 
c530a75c1e6a472b0eb9558310b518f0dfcd8860 
c8ea0457495342c417c3dc033bba25148b279f60 
5cd7ce5dde3f228b3b669ed9ca432f588947bd40 
1f61c07d79abda1e747d70d83edffe4efca48e17
 116375 pass 0f07e10f8eebfd1081265f869cbb52a9d16e46f0 

[Xen-devel] [xen-unstable test] 116366: regressions - trouble: broken/fail/pass

2017-11-20 Thread osstest service owner
flight 116366 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116366/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-xsm  broken
 test-armhf-armhf-xl-credit2  17 guest-start.2  fail in 116337 REGR. vs. 116214

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-xsm   4 host-install(4)  broken pass in 116337
 test-armhf-armhf-libvirt-raw  7 xen-boot   fail pass in 116337
 test-amd64-i386-xl-qemuu-win7-amd64 13 guest-saverestore   fail pass in 116337
 test-armhf-armhf-xl-credit2  16 guest-start/debian.repeat  fail pass in 116337

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail in 116337 like 116214
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop   fail in 116337 like 116214
 test-armhf-armhf-xl-xsm 13 migrate-support-check fail in 116337 never pass
 test-armhf-armhf-xl-xsm 14 saverestore-support-check fail in 116337 never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check fail in 116337 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail like 116199
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 116199
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 116214
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 116214
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 116214
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 116214
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 116214
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stopfail like 116214
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop  fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass

version targeted for testing:
 xen  eb0660c6950e08e44fdfeca3e29320382e2a1554
baseline version:
 xen  b9ee1fd7b98064cf27d0f8f1adf1f5359b72c97f

Last test of basis   116214  2017-11-16 02:14:29 Z4 days
Failing since116224  2017-11-16 11:51:35 Z4 days5 attempts
Testing same since   116261  2017-11-17 10:00:16 Z3 days4 attempts


People who touched revisions under test:
  Adrian Pop 
  Andrew Cooper 
  Jan Beulich 

[Xen-devel] [xtf test] 116370: all pass - PUSHED

2017-11-20 Thread osstest service owner
flight 116370 xtf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116370/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xtf  167052779c0546e99aadd26ebd848e10f91fb557
baseline version:
 xtf  4d18dd4a163b7879c262ac661ca983fa9266c308

Last test of basis   114652  2017-10-17 13:54:44 Z   34 days
Testing same since   116370  2017-11-20 10:46:19 Z0 days1 attempts


People who touched revisions under test:
  Andrew Cooper 

jobs:
 build-amd64-xtf  pass
 build-amd64  pass
 build-amd64-pvopspass
 test-xtf-amd64-amd64-1   pass
 test-xtf-amd64-amd64-2   pass
 test-xtf-amd64-amd64-3   pass
 test-xtf-amd64-amd64-4   pass
 test-xtf-amd64-amd64-5   pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To osst...@xenbits.xen.org:/home/xen/git/xtf.git
   4d18dd4..1670527  167052779c0546e99aadd26ebd848e10f91fb557 -> 
xen-tested-master

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [xen-4.5-testing test] 116356: regressions - FAIL

2017-11-20 Thread osstest service owner
flight 116356 xen-4.5-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116356/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop   fail REGR. vs. 115226

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt 16 guest-saverestore.2 fail pass in 116326
 test-amd64-i386-xl-qemuu-winxpsp3-vcpus1 16 guest-localmigrate/x10 fail pass in 116326
 test-amd64-i386-xl-qemut-winxpsp3-vcpus1 18 guest-start/win.repeat fail pass in 116326

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail REGR. vs. 115226
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail REGR. vs. 115226

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeat fail like 115191
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 115191
 test-amd64-amd64-xl-rtds 7 xen-boot fail like 115226
 test-xtf-amd64-amd64-2 60 leak-check/check fail like 115226
 test-xtf-amd64-amd64-3 60 leak-check/check fail like 115226
 test-xtf-amd64-amd64-4 60 leak-check/check fail like 115226
 test-xtf-amd64-amd64-1 60 leak-check/check fail like 115226
 test-xtf-amd64-amd64-5 60 leak-check/check fail like 115226
 test-armhf-armhf-libvirt 14 saverestore-support-check fail like 115226
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail like 115226
 test-xtf-amd64-amd64-2   19 xtf/test-hvm32-cpuid-faulting fail  never pass
 test-xtf-amd64-amd64-2 34 xtf/test-hvm32pae-cpuid-faulting fail never pass
 test-xtf-amd64-amd64-2 41 xtf/test-hvm32pse-cpuid-faulting fail never pass
 test-xtf-amd64-amd64-2   45 xtf/test-hvm64-cpuid-faulting fail  never pass
 test-xtf-amd64-amd64-4   19 xtf/test-hvm32-cpuid-faulting fail  never pass
 test-xtf-amd64-amd64-5   19 xtf/test-hvm32-cpuid-faulting fail  never pass
 test-xtf-amd64-amd64-1   19 xtf/test-hvm32-cpuid-faulting fail  never pass
 test-xtf-amd64-amd64-4 34 xtf/test-hvm32pae-cpuid-faulting fail never pass
 test-xtf-amd64-amd64-5 34 xtf/test-hvm32pae-cpuid-faulting fail never pass
 test-xtf-amd64-amd64-1 34 xtf/test-hvm32pae-cpuid-faulting fail never pass
 test-xtf-amd64-amd64-4 41 xtf/test-hvm32pse-cpuid-faulting fail never pass
 test-xtf-amd64-amd64-4   45 xtf/test-hvm64-cpuid-faulting fail  never pass
 test-xtf-amd64-amd64-5 41 xtf/test-hvm32pse-cpuid-faulting fail never pass
 test-xtf-amd64-amd64-1 41 xtf/test-hvm32pse-cpuid-faulting fail never pass
 test-xtf-amd64-amd64-5   45 xtf/test-hvm64-cpuid-faulting fail  never pass
 test-xtf-amd64-amd64-1   45 xtf/test-hvm64-cpuid-faulting fail  never pass
 test-amd64-i386-libvirt 13 migrate-support-check fail never pass
 test-xtf-amd64-amd64-2 59 xtf/test-hvm64-xsa-195 fail never pass
 test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
 test-xtf-amd64-amd64-3 59 xtf/test-hvm64-xsa-195 fail never pass
 test-xtf-amd64-amd64-4 59 xtf/test-hvm64-xsa-195 fail never pass
 test-xtf-amd64-amd64-5 59 xtf/test-hvm64-xsa-195 fail never pass
 test-xtf-amd64-amd64-1 59 xtf/test-hvm64-xsa-195 fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 14 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl 13 migrate-support-check fail never pass
 test-armhf-armhf-xl 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-armhf-armhf-libvirt 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 11 guest-start fail never pass
 test-armhf-armhf-libvirt-raw 11 guest-start fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 

[Xen-devel] [libvirt test] 116362: tolerable all pass - PUSHED

2017-11-20 Thread osstest service owner
flight 116362 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116362/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 14 saverestore-support-check fail like 116328
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-check fail like 116328
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail like 116328
 test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-libvirt 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check fail never pass

version targeted for testing:
 libvirt  3343ab0cd99c04761c17a36d9af354536df9e741
baseline version:
 libvirt  2f3054c22a85b06299f4c472ba71cb8760c7a67c

Last test of basis   116328  2017-11-19 04:22:51 Z1 days
Testing same since   116362  2017-11-20 04:20:14 Z0 days1 attempts


People who touched revisions under test:
  intrigeri 

jobs:
 build-amd64-xsm  pass
 build-armhf-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-armhf-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-libvirt-xsm pass
 test-armhf-armhf-libvirt-xsm pass
 test-amd64-i386-libvirt-xsm  pass
 test-amd64-amd64-libvirt pass
 test-armhf-armhf-libvirt pass
 test-amd64-i386-libvirt  pass
 test-amd64-amd64-libvirt-pairpass
 test-amd64-i386-libvirt-pair pass
 test-amd64-i386-libvirt-qcow2pass
 test-armhf-armhf-libvirt-raw pass
 test-amd64-amd64-libvirt-vhd pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To osst...@xenbits.xen.org:/home/xen/git/libvirt.git
   2f3054c..3343ab0  3343ab0cd99c04761c17a36d9af354536df9e741 -> 
xen-tested-master

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Juergen Gross
On 20/11/17 17:14, Boris Ostrovsky wrote:
> On 11/20/2017 10:27 AM, Juergen Gross wrote:
>> On 20/11/17 15:25, Boris Ostrovsky wrote:
>>> On 11/20/2017 09:14 AM, Juergen Gross wrote:
 On 20/11/17 14:56, Boris Ostrovsky wrote:
> On 11/20/2017 06:50 AM, Jan Beulich wrote:
> On 20.11.17 at 12:20,  wrote:
>>> Which restriction? I'm loading the RSDP table to its architecturally
>>> correct address if possible, otherwise it will be loaded to the same
>>> address as without my patch. So I'm not adding a restriction, but
>>> removing one.
>> What is "architecturally correct" in PVH can't be read out of
>> specs other than what we write down. When there's no BIOS,
>> placing anything right below the 1Mb boundary is at least
>> bogus.
> Unless it's a UEFI boot -- where else would you put it? Aren't these two
> (UEFI and non-UEFI) the only two options that the ACPI spec provides?
I think Jan is right: for PVH it's _our_ job to define the correct
 placement. 
>>> Yes, and if it is placed in a non-standard location then the guest will
>>> have to deal with it in a non-standard way. Which we can in Linux by
>>> setting acpi_rsdp pointer in the special PVH entry point, before jumping
>>> to Linux "standard" entry --- startup_{32|64}().
>>>
>>> But if your goal is to avoid that special entry point (and thus not set
>>> acpi_rsdp) then how do you expect kernel to find RSDP?
>>>
 Which still can be the same as in the BIOS case, making
 it easier to adapt any guest systems.

 So I'd say: The RSDP address in PVH case is passed in the PVH start
 info block to the guest. In case there is no conflict with the
 physical load address of the guest kernel the preferred address of
 the RSDP is right below the 1MB boundary.
>>> And what do we do if there *is* a conflict?
>> Either as without my patch: use the first available free memory page.
>>
>> Or: add a domain config parameter for specifying the RSDP address
>> (e.g. default: as today, top: end of RAM, legacy: just below 1MB, or
>> a specific value) and fail to load in case of a conflict.
> 
> This feels like a band-aid to work around a problem that we want to fix
> in the long term anyway.
> 
> What could cause grub2 to fail to find space for the pointer in the
> first page? Will we ever have anything in EBDA (which is one of the
> possible RSDP locations)?

This isn't something grub2 has to deal with. The RSDP is in a reserved
area of memory, so it can't be relocated by grub2.
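As an aside, the domain config knob floated earlier in this thread might look something like the sketch below; the "rsdp_addr" parameter name and its values are purely hypothetical and not an existing xl/libxl option:

```
# Hypothetical xl domain config sketch -- "rsdp_addr" is illustrative
# only, not an implemented option.
type = "pvh"
kernel = "/path/to/vmlinux"
memory = 1024
# default: first free page (current behaviour), top: end of RAM,
# legacy: just below 1MB, or an explicit guest-physical address;
# domain creation would fail on a conflict.
rsdp_addr = "legacy"
```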


Juergen

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Juergen Gross
On 20/11/17 17:14, Jan Beulich wrote:
 On 20.11.17 at 16:24,  wrote:
>> On 20/11/17 15:20, Jan Beulich wrote:
>> On 20.11.17 at 15:14,  wrote:
 On 20/11/17 14:56, Boris Ostrovsky wrote:
> On 11/20/2017 06:50 AM, Jan Beulich wrote:
> On 20.11.17 at 12:20,  wrote:
>>> Which restriction? I'm loading the RSDP table to its architecturally
>>> correct address if possible, otherwise it will be loaded to the same
>>> address as without my patch. So I'm not adding a restriction, but
>>> removing one.
>> What is "architecturally correct" in PVH can't be read out of
>> specs other than what we write down. When there's no BIOS,
>> placing anything right below the 1Mb boundary is at least
>> bogus.
>
> Unless it's a UEFI boot -- where else would you put it? Aren't these two
> (UEFI and non-UEFI) the only two options that the ACPI spec provides?

I think Jan is right: for PVH it's _our_ job to define the correct
 placement. Which still can be the same as in the BIOS case, making
 it easier to adapt any guest systems.

 So I'd say: The RSDP address in PVH case is passed in the PVH start
 info block to the guest. In case there is no conflict with the
 physical load address of the guest kernel the preferred address of
 the RSDP is right below the 1MB boundary.

 Would this wording be okay?
>>>
>>> To be honest (and in case it wasn't sufficiently clear from my
>>> earlier replies) - I'm pretty much opposed to this below-1Mb thing.
>>> There ought to be just plain RAM there for PVH.
>>
>> So without my patch the RSDP table is loaded e.g. at about 6.5MB when
>> I'm using grub2 (the loaded grub image is about 5.5MB in size and it
>> is being loaded at 1MB).
>>
>> When I'm using the PVH Linux kernel directly RSDP is just below 1MB
>> due to pure luck (the bzImage loader is still using the PV specific
>> ELF notes and this results in the loader believing RSDP is loadable
>> at this address, which is true, but the tests used to come to this
>> conclusion are just not applicable for PVH).
>>
>> So in your opinion we should revoke the PVH support from Xen 4.10,
>> Linux and maybe BSD because RSDP is loaded in the middle of the
>> guest's RAM?
> 
> So what's wrong with putting it wherever the loader determines the
> next free memory location to be, just like is done for other
> information, including modules (if any)?

The RSDP table is marked as "Reserved" in the memory map. So putting it
somewhere in the middle of the guest's memory will force the guest to
use 4kB pages instead of 2MB or even 1GB pages. I'd really like to avoid
this problem, as we've been hit by the very same issue in HVM guests
before, causing quite measurable performance drops.

So I'd rather put it in the first MB as most kernels have to deal with
small pages at beginning of RAM today. An alternative would be to put
it just below 4GB where e.g. the console and Xenstore page are located.
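To put a rough number on the huge-page argument above, here is a minimal sketch (not Xen code; names are illustrative): a reserved 4kB RSDP page at ~6.5MB falls inside the 2MB-aligned region [6MB, 8MB), so the guest has to map that whole region with 4kB pages instead of a single 2MB entry.

```c
/* Sketch: which 2MB-aligned huge-page region a given guest-physical
 * address falls into.  A single reserved 4kB page inside that region
 * forces the whole region to be mapped with 4kB pages. */
#include <stdint.h>

#define PAGE_2M (2ULL << 20)

/* Start of the 2MB-aligned region containing addr. */
static uint64_t huge_page_base(uint64_t addr)
{
    return addr & ~(PAGE_2M - 1);
}
```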

> 
>> Doing it in a proper way you are outlining above would render
>> current PVH guests unusable.
> 
> I'm afraid I don't understand which outline of mine you refer to.
> Iirc all I said was that placing it below 1Mb is bogus. Top of RAM
> (as I think I saw being mentioned elsewhere) probably isn't much
> better. Special locations should be used only if there's really no
> other way to convey information.

I believe it is better to define where reserved memory areas are located
in order to make it easier for a kernel to deal with the consequences. A
location like "somewhere in memory, either just after the loaded kernel,
or after the boot loader which was loaded previously" seems not to be a
proper solution. Especially as a change in size of the boot loader could
make it impossible to load the kernel at the desired location, as this
would contradict the information returned by the hypervisor in the
memory map.


Juergen

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] Next Xen Arm Community call - Wednesday 22nd November

2017-11-20 Thread Julien Grall
Replying to myself.

On 16 November 2017 at 11:54, Julien Grall  wrote:
> Hi all,
>
> Apologies I was meant to organize the call earlier.
>
> I would suggest having the next community call on Wednesday 22nd November
> at 5pm GMT. Does that sound good?
>
> Do you have any specific topic you would like to discuss?

I would like to discuss Power Saving when using Xen (e.g.
suspend, CPUFreq, idling).

I will send the details of the call tomorrow.

Cheers,

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH RFC v3 3/6] sched/idle: Add a generic poll before enter real idle path

2017-11-20 Thread Daniel Lezcano
On 20/11/2017 08:05, Quan Xu wrote:

[ ... ]

 But the irq_timings stuff is heading into the same direction, with a
 more
 complex prediction logic which should tell you pretty good how long
 that
 idle period is going to be and in case of an interrupt heavy workload
 this
 would skip the extra work of stopping and restarting the tick and
 provide a
 very good input into a polling decision.
>>>
>>> interesting. I have tested with IRQ_TIMINGS related code, which seems
>>> not to be working so far.
>> I don't know how you tested it, can you elaborate what you meant by
>> "seems not working so far" ?
> 
> Daniel, I tried to enable IRQ_TIMINGS* manually and used
> irq_timings_next_event() to return an estimation of the earliest
> interrupt. However, I got a constant.

The irq timings framework gives you an indication of the next interrupt deadline.

This information is one piece of the puzzle: you need to combine it with
the next timer expiration and the next scheduling event, then take the
earliest event on a common timeline.

Using the trivial scheme above will work well with workloads like video
or mp3 playback, but it will fail as soon as the interrupts are not
coming on a regular basis, and this is where the pattern recognition
algorithm must act.

>> There is still some work to do to be more efficient. The prediction
>> based on the irq timings is all right if the interrupts have a simple
>> periodicity. But as soon as there is a pattern, the current code can't
>> handle it properly and makes bad predictions.
>>
>> I'm working on a self-learning pattern detection which is too heavy for
>> the kernel, and with it we should be able to detect properly the
>> patterns and re-adjust the period if it changes. I'm in the process of
>> making it suitable for kernel code (both math and perf).
>>
>> One improvement which can be done right now and which can help you is
>> the interrupt rate on the CPU. It is possible to compute it, and that
>> will give accurate information for the polling decision.
>>
>>
> As tglx said, talk to each other / work together to make it usable for
> all use cases.
> Could you share how to enable it to get the interrupt rate on the CPU?
> I can try it in a cloud scenario. Of course, I'd like to work with you
> to improve it.

Sure, I will be glad if we can collaborate. I have some draft code, but
before sharing it I would like us to define what the rate is and what
kind of information we expect to infer from it. From my point of view it
is a value indicating the interrupt period per CPU; a small value
indicates a high number of interrupts on the CPU.

This value must decay with time; the question here is what decay
function we apply to the rate since the last timestamp?
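One candidate answer to the decay question above is a simple exponential scheme where the stored rate is halved for every fixed period of silence since the last update. This is only a sketch of the idea under discussion; the names and constants are made up, not the draft code mentioned in the thread:

```c
/* Sketch: per-CPU interrupt rate that decays exponentially with idle
 * time -- the rate is halved for every DECAY_PERIOD_NS elapsed since
 * the last update. */
#include <stdint.h>

#define DECAY_PERIOD_NS 1000000ULL   /* halve the rate per 1ms of silence */

struct irq_rate {
    uint64_t rate_x1000;   /* interrupts per second, scaled by 1000 */
    uint64_t last_ts_ns;   /* timestamp of the last update */
};

/* Apply the decay for the time elapsed since the last update; call
 * this before reading or refreshing the rate. */
static void irq_rate_decay(struct irq_rate *r, uint64_t now_ns)
{
    uint64_t halvings = (now_ns - r->last_ts_ns) / DECAY_PERIOD_NS;

    if (halvings >= 64)            /* shifting by >= width is undefined */
        r->rate_x1000 = 0;
    else
        r->rate_x1000 >>= halvings;
    r->last_ts_ns = now_ns;
}
```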




-- 
  Linaro.org │ Open source software for ARM SoCs



___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH 01/16] Introduce skeleton SUPPORT.md

2017-11-20 Thread Jan Beulich
>>> On 13.11.17 at 16:41,  wrote:
> Add a machine-readable file to describe what features are in what
> state of being 'supported', as well as information about how long this
> release will be supported, and so on.
> 
> The document should be formatted using "semantic newlines" [1], to make
> changes easier.
> 
> Begin with the basic framework.
> 
> Signed-off-by: Ian Jackson 
> Signed-off-by: George Dunlap 

Acked-by: Jan Beulich 
despite ...

> +We also provide security support for Xen-related code in Linux,
> +which is an external project but doesn't have its own security process.

... not fully agreeing with this part. But at least this way the state
of things is properly spelled out in a sufficiently official place.

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Boris Ostrovsky
On 11/20/2017 11:43 AM, Jan Beulich wrote:
 On 20.11.17 at 17:28,  wrote:
>> On 11/20/2017 11:26 AM, Jan Beulich wrote:
>> On 20.11.17 at 17:14,  wrote:
 What could cause grub2 to fail to find space for the pointer in the
 first page? Will we ever have anything in EBDA (which is one of the
 possible RSDP locations)?
>>> Well, the EBDA (see the B in its name) is again something that's
>>> meaningless without there being a BIOS.
>> Exactly. So it should always be available for grub to copy the pointer
>> there.
> But what use would it be if grub copied it there? It just shouldn't
> be there, neither before nor after grub (just like grub doesn't
> introduce firmware into the system).

So that the guest can find it using standard methods. If Xen can't
guarantee ACPI-compliant placement of the pointer then someone has to
help the guest find it in the expected place. We can do it with a
dedicated entry point by setting the pointer explicitly (although
admittedly this is not done correctly now) or we need to have firmware
(grub2) place it in the "right" location.

(It does look a bit hacky though)
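The "standard methods" referred to above are the ACPI-defined legacy search: scan the first 1KB of the EBDA and the BIOS area 0xE0000-0xFFFFF on 16-byte boundaries for the "RSD PTR " signature with a valid checksum. A minimal sketch of that scan, operating on a plain buffer standing in for guest physical memory:

```c
/* Sketch of the legacy RSDP search an OS performs per the ACPI spec:
 * look on 16-byte boundaries for the 8-byte "RSD PTR " signature and
 * verify that the 20 bytes of the ACPI 1.0 RSDP sum to zero mod 256. */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

static int rsdp_checksum_ok(const uint8_t *p, size_t len)
{
    uint8_t sum = 0;

    while (len--)
        sum += *p++;
    return sum == 0;
}

/* Scan [mem, mem + size) on 16-byte boundaries; return the offset of
 * a valid RSDP, or -1 if none is found. */
static long rsdp_scan(const uint8_t *mem, size_t size)
{
    size_t off;

    for (off = 0; off + 20 <= size; off += 16)
        if (!memcmp(mem + off, "RSD PTR ", 8) &&
            rsdp_checksum_ok(mem + off, 20))
            return (long)off;
    return -1;
}
```

In a real guest this scan runs over the EBDA segment and 0xE0000-0xFFFFF; the point of the thread is that a PVH guest has neither region populated unless the toolstack (or a boot loader acting as firmware) deliberately puts the RSDP there.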

-boris

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Jan Beulich
>>> On 20.11.17 at 17:28,  wrote:
> On 11/20/2017 11:26 AM, Jan Beulich wrote:
> On 20.11.17 at 17:14,  wrote:
>>> What could cause grub2 to fail to find space for the pointer in the
>>> first page? Will we ever have anything in EBDA (which is one of the
>>> possible RSDP locations)?
>> Well, the EBDA (see the B in its name) is again something that's
>> meaningless without there being a BIOS.
> 
> Exactly. So it should always be available for grub to copy the pointer
> there.

But what use would it be if grub copied it there? It just shouldn't
be there, neither before nor after grub (just like grub doesn't
introduce firmware into the system).

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Boris Ostrovsky
On 11/20/2017 11:26 AM, Jan Beulich wrote:
 On 20.11.17 at 17:14,  wrote:
>> What could cause grub2 to fail to find space for the pointer in the
>> first page? Will we ever have anything in EBDA (which is one of the
>> possible RSDP locations)?
> Well, the EBDA (see the B in its name) is again something that's
> meaningless without there being a BIOS.

Exactly. So it should always be available for grub to copy the pointer
there.

-boris

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [xen-4.6-testing test] 116350: regressions - trouble: blocked/broken/fail/pass

2017-11-20 Thread osstest service owner
flight 116350 xen-4.6-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116350/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt  broken
 build-armhf-libvirt   5 host-build-prep  fail REGR. vs. 115190
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop fail REGR. vs. 115190

Tests which are failing intermittently (not blocking):
 test-xtf-amd64-amd64-4 49 xtf/test-hvm64-lbr-tsx-vmentry fail in 116305 pass in 116350
 test-armhf-armhf-xl-credit2 16 guest-start/debian.repeat fail in 116325 pass in 116350
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail in 116325 pass in 116350
 test-xtf-amd64-amd64-3 49 xtf/test-hvm64-lbr-tsx-vmentry fail pass in 116305
 test-xtf-amd64-amd64-1 49 xtf/test-hvm64-lbr-tsx-vmentry fail pass in 116325

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-xsm  1 build-check(1)   blocked  n/a
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-check fail in 116305 like 115190
 test-armhf-armhf-libvirt 14 saverestore-support-check fail in 116305 like 115190
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check fail in 116305 like 115190
 test-armhf-armhf-libvirt-xsm 13 migrate-support-check fail in 116305 never pass
 test-armhf-armhf-libvirt 13 migrate-support-check fail in 116305 never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check fail in 116305 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop fail like 115190
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 115190
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail like 115190
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 115190
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeat fail like 115190
 test-xtf-amd64-amd64-5   73 xtf/test-pv32pae-xsa-194 fail   never pass
 test-xtf-amd64-amd64-2   73 xtf/test-pv32pae-xsa-194 fail   never pass
 test-xtf-amd64-amd64-4   73 xtf/test-pv32pae-xsa-194 fail   never pass
 test-amd64-amd64-libvirt 13 migrate-support-check fail never pass
 test-xtf-amd64-amd64-3 73 xtf/test-pv32pae-xsa-194 fail never pass
 test-xtf-amd64-amd64-1 73 xtf/test-pv32pae-xsa-194 fail never pass
 test-amd64-i386-libvirt 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale 14 saverestore-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-xsm 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail never pass
 test-armhf-armhf-xl-credit2 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2 14 saverestore-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 12 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd 13 saverestore-support-check fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-install fail never pass
 

Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Jan Beulich
>>> On 20.11.17 at 17:14,  wrote:
> What could cause grub2 to fail to find space for the pointer in the
> first page? Will we ever have anything in EBDA (which is one of the
> possible RSDP locations)?

Well, the EBDA (see the B in its name) is again something that's
meaningless without there being a BIOS.

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Boris Ostrovsky
On 11/20/2017 10:27 AM, Juergen Gross wrote:
> On 20/11/17 15:25, Boris Ostrovsky wrote:
>> On 11/20/2017 09:14 AM, Juergen Gross wrote:
>>> On 20/11/17 14:56, Boris Ostrovsky wrote:
 On 11/20/2017 06:50 AM, Jan Beulich wrote:
 On 20.11.17 at 12:20,  wrote:
>> Which restriction? I'm loading the RSDP table to its architecturally
>> correct address if possible, otherwise it will be loaded to the same
>> address as without my patch. So I'm not adding a restriction, but
>> removing one.
> What is "architecturally correct" in PVH can't be read out of
> specs other than what we write down. When there's no BIOS,
> placing anything right below the 1Mb boundary is at least
> bogus.
 Unless it's a UEFI boot -- where else would you put it? Aren't these two
 (UEFI and non-UEFI) the only two options that the ACPI spec provides?
>>> I think Jan is right: for PVH it's _our_ job to define the correct
>>> placement. 
>> Yes, and if it is placed in a non-standard location then the guest will
>> have to deal with it in a non-standard way. Which we can in Linux by
>> setting acpi_rsdp pointer in the special PVH entry point, before jumping
>> to Linux "standard" entry --- startup_{32|64}().
>>
>> But if your goal is to avoid that special entry point (and thus not set
>> acpi_rsdp) then how do you expect the kernel to find the RSDP?
>>
>>> Which still can be the same as in the BIOS case, making
>>> it easier to adapt any guest systems.
>>>
>>> So I'd say: The RSDP address in PVH case is passed in the PVH start
>>> info block to the guest. In case there is no conflict with the
>>> physical load address of the guest kernel the preferred address of
>>> the RSDP is right below the 1MB boundary.
>> And what do we do if there *is* a conflict?
> Either as without my patch: use the first available free memory page.
>
> Or: add a domain config parameter for specifying the RSDP address
> (e.g. default: as today, top: end of RAM, legacy: just below 1MB, or
> a specific value) and fail to load in case of a conflict.

This feels like a band-aid to work around a problem that we want to fix
in the long term anyway.

What could cause grub2 to fail to find space for the pointer in the
first page? Will we ever have anything in EBDA (which is one of the
possible RSDP locations)?

-boris
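For reference, the legacy lookup Boris alludes to -- the ACPI-specified scan of the EBDA and the 0xE0000-0xFFFFF window -- can be sketched as follows. This is a toy model in Python, not Xen or guest code; the memory buffer and the planted table are invented purely for illustration:

```python
# Sketch of the legacy BIOS RSDP search per the ACPI spec: scan the first
# 1 KiB of the EBDA and the region 0xE0000-0xFFFFF on 16-byte boundaries
# for the "RSD PTR " signature, validating the byte checksum. The PVH
# debate above is precisely about whether such a scan should ever be
# expected to work when there is no BIOS.

RSDP_SIG = b"RSD PTR "

def checksum_ok(table):
    """ACPI checksum: all bytes must sum to 0 modulo 256."""
    return sum(table) % 256 == 0

def find_rsdp(mem, region_start, region_end):
    """Scan [region_start, region_end) of 'mem' on 16-byte boundaries."""
    for paddr in range(region_start, region_end - 20, 16):
        candidate = mem[paddr:paddr + 20]      # ACPI 1.0 RSDP is 20 bytes
        if candidate[:8] == RSDP_SIG and checksum_ok(candidate):
            return paddr
    return None

# Toy "physical memory" with a valid RSDP planted at 0xE1000:
mem = bytearray(0x100000)
rsdp = bytearray(RSDP_SIG) + bytes(12)         # zeroed OEM/revision/rsdt fields
rsdp[8] = (256 - sum(rsdp) % 256) % 256        # fix up the checksum byte
mem[0xE1000:0xE1000 + 20] = rsdp

print(hex(find_rsdp(bytes(mem), 0xE0000, 0x100000)))   # -> 0xe1000
```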




Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Jan Beulich
>>> On 20.11.17 at 16:24,  wrote:
> On 20/11/17 15:20, Jan Beulich wrote:
> On 20.11.17 at 15:14,  wrote:
>>> On 20/11/17 14:56, Boris Ostrovsky wrote:
 On 11/20/2017 06:50 AM, Jan Beulich wrote:
 On 20.11.17 at 12:20,  wrote:
>> Which restriction? I'm loading the RSDP table to its architecturally
>> correct address if possible, otherwise it will be loaded to the same
>> address as without my patch. So I'm not adding a restriction, but
>> removing one.
> What is "architecturally correct" in PVH can't be read out of
> specs other than what we write down. When there's no BIOS,
> placing anything right below the 1Mb boundary is at least
> bogus.

 Unless it's a UEFI boot -- where else would you put it? Aren't these two
 (UEFI and non-UEFI) the only two options that the ACPI spec provides?
>>>
>>> I think Jan is right: for PVH it's _our_ job to define the correct
>>> placement. Which still can be the same as in the BIOS case, making
>>> it easier to adapt any guest systems.
>>>
>>> So I'd say: The RSDP address in PVH case is passed in the PVH start
>>> info block to the guest. In case there is no conflict with the
>>> physical load address of the guest kernel the preferred address of
>>> the RSDP is right below the 1MB boundary.
>>>
>>> Would this wording be okay?
>> 
>> To be honest (and in case it wasn't sufficiently clear from my
>> earlier replies) - I'm pretty much opposed to this below-1Mb thing.
>> There ought to be just plain RAM there for PVH.
> 
> So without my patch the RSDP table is loaded e.g. at about 6.5MB when
> I'm using grub2 (the loaded grub image is about 5.5MB in size and it
> is being loaded at 1MB).
> 
> When I'm using the PVH Linux kernel directly RSDP is just below 1MB
> due to pure luck (the bzImage loader is still using the PV specific
> ELF notes and this results in the loader believing RSDP is loadable
> at this address, which is true, but the tests used to come to this
> conclusion are just not applicable for PVH).
> 
> So in your opinion we should revoke the PVH support from Xen 4.10,
> Linux and maybe BSD because the RSDP is loaded in the middle of the
> guest's RAM?

So what's wrong with it being put wherever the next free memory
location is being determined to be by the loader, just like is being
done for other information, including modules (if any)?

> Doing it in the proper way you are outlining above would render
> current PVH guests unusable.

I'm afraid I don't understand which outline of mine you refer to.
IIRC, all I said was that placing it below 1Mb is bogus. Top of RAM
(as I think I saw being mentioned elsewhere) probably isn't much
better. Special locations should be used only if there's really no
other way to convey information.

Jan




Re: [Xen-devel] [xen-4.6-testing test] 116250: regressions - FAIL

2017-11-20 Thread Ian Jackson
osstest service owner writes ("[xen-4.6-testing test] 116250: regressions - 
FAIL"):
> flight 116250 xen-4.6-testing real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/116250/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stopfail REGR. vs. 
> 115190
> 
...
> version targeted for testing:
>  xen  9b0c2a223132a07f06f0be8e85da390defe998f5

Force pushed.

Ian.



Re: [Xen-devel] [xen-4.5-testing test] 116245: regressions - FAIL

2017-11-20 Thread Ian Jackson
osstest service owner writes ("[xen-4.5-testing test] 116245: regressions - 
FAIL"):
> flight 116245 xen-4.5-testing real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/116245/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop   fail REGR. vs. 
> 115226
>  test-amd64-i386-xl-qemuu-ws16-amd64 16 guest-localmigrate/x10 fail in 116223 
> REGR. vs. 115226
...
> version targeted for testing:
>  xen  41f6dd05d10fd1b4281c1722e2d8f29e378abe9a

Force pushed.

Ian.



Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Juergen Gross
On 20/11/17 15:25, Boris Ostrovsky wrote:
> On 11/20/2017 09:14 AM, Juergen Gross wrote:
>> On 20/11/17 14:56, Boris Ostrovsky wrote:
>>> On 11/20/2017 06:50 AM, Jan Beulich wrote:
>>> On 20.11.17 at 12:20,  wrote:
> Which restriction? I'm loading the RSDP table to its architecturally
> correct address if possible, otherwise it will be loaded to the same
> address as without my patch. So I'm not adding a restriction, but
> removing one.
 What is "architecturally correct" in PVH can't be read out of
 specs other than what we write down. When there's no BIOS,
 placing anything right below the 1Mb boundary is at least
 bogus.
>>> Unless it's a UEFI boot -- where else would you put it? Aren't these two
>>> (UEFI and non-UEFI) the only two options that the ACPI spec provides?
>> I think Jan is right: for PVH it's _our_ job to define the correct
>> placement. 
> 
> Yes, and if it is placed in a non-standard location then the guest will
> have to deal with it in a non-standard way. Which we can do in Linux by
> setting the acpi_rsdp pointer in the special PVH entry point, before jumping
> to Linux "standard" entry --- startup_{32|64}().
> 
> But if your goal is to avoid that special entry point (and thus not set
> acpi_rsdp) then how do you expect the kernel to find the RSDP?
> 
>> Which still can be the same as in the BIOS case, making
>> it easier to adapt any guest systems.
>>
>> So I'd say: The RSDP address in PVH case is passed in the PVH start
>> info block to the guest. In case there is no conflict with the
>> physical load address of the guest kernel the preferred address of
>> the RSDP is right below the 1MB boundary.
> 
> And what do we do if there *is* a conflict?

Either as without my patch: use the first available free memory page.

Or: add a domain config parameter for specifying the RSDP address
(e.g. default: as today, top: end of RAM, legacy: just below 1MB, or
a specific value) and fail to load in case of a conflict.


Juergen



Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Juergen Gross
On 20/11/17 15:20, Jan Beulich wrote:
 On 20.11.17 at 15:14,  wrote:
>> On 20/11/17 14:56, Boris Ostrovsky wrote:
>>> On 11/20/2017 06:50 AM, Jan Beulich wrote:
>>> On 20.11.17 at 12:20,  wrote:
> Which restriction? I'm loading the RSDP table to its architecturally
> correct address if possible, otherwise it will be loaded to the same
> address as without my patch. So I'm not adding a restriction, but
> removing one.
 What is "architecturally correct" in PVH can't be read out of
 specs other than what we write down. When there's no BIOS,
 placing anything right below the 1Mb boundary is at least
 bogus.
>>>
>>> Unless it's a UEFI boot -- where else would you put it? Aren't these two
>>> (UEFI and non-UEFI) the only two options that the ACPI spec provides?
>>
>> I think Jan is right: for PVH it's _our_ job to define the correct
>> placement. Which still can be the same as in the BIOS case, making
>> it easier to adapt any guest systems.
>>
>> So I'd say: The RSDP address in PVH case is passed in the PVH start
>> info block to the guest. In case there is no conflict with the
>> physical load address of the guest kernel the preferred address of
>> the RSDP is right below the 1MB boundary.
>>
>> Would this wording be okay?
> 
> To be honest (and in case it wasn't sufficiently clear from my
> earlier replies) - I'm pretty much opposed to this below-1Mb thing.
> There ought to be just plain RAM there for PVH.

So without my patch the RSDP table is loaded e.g. at about 6.5MB when
I'm using grub2 (the loaded grub image is about 5.5MB in size and it
is being loaded at 1MB).

When I'm using the PVH Linux kernel directly RSDP is just below 1MB
due to pure luck (the bzImage loader is still using the PV specific
ELF notes and this results in the loader believing RSDP is loadable
at this address, which is true, but the tests used to come to this
conclusion are just not applicable for PVH).

So in your opinion we should revoke the PVH support from Xen 4.10,
Linux and maybe BSD because the RSDP is loaded in the middle of the
guest's RAM? Doing it in the proper way you are outlining above would render
current PVH guests unusable.


Juergen



Re: [Xen-devel] [RFC v2 7/7] xen/iommu: smmu-v3: Add Xen specific code to enable the ported driver

2017-11-20 Thread Julien Grall

Hi,

On 20/11/17 15:19, Robin Murphy wrote:

On 20/11/17 14:25, Julien Grall wrote:
[...]



+    else {
   cpu_relax();


Hmmm I now see why you added cpu_relax() at the top. Well, on Xen 
cpu_relax is just a barrier. On Linux it is used to yield.


And that bit is worrying me. The Linux code will allow context 
switching to another tasks if the code is taking too much time.


Xen is not preemptible, so is it fine?
This is used when consuming the command queue and could be a 
potential performance issue if the queue is large. (This is never the 
case).

I am wondering if we should define a yield in the long run?


As I said before, Xen is not preemptible. In this particular case, 
there are spinlocks taken by the callers (e.g. any function assigning 
a device). So yield would just make it worse.


The arguments here don't make much sense - the "yield" instruction has 
nothing to do with software-level concepts of preemption. It is a hint 
to SMT *hardware* that this logical processor is doing nothing useful in 
the short term, so it might be a good idea to let other logical 
processor(s) have priority over shared execution resources if applicable.


Oh, sorry I thought this could also be used by the software to switch 
between threads. Please disregard my comment then.




Until SMT CPUs become commonly available, though, it's somewhat of a 
moot point and mostly just a future-proofing consideration.


Robin.


Cheers,

--
Julien Grall



Re: [Xen-devel] [RFC v2 7/7] xen/iommu: smmu-v3: Add Xen specific code to enable the ported driver

2017-11-20 Thread Robin Murphy

On 20/11/17 14:25, Julien Grall wrote:
[...]



+    else {
   cpu_relax();


Hmmm I now see why you added cpu_relax() at the top. Well, on Xen 
cpu_relax is just a barrier. On Linux it is used to yield.


And that bit is worrying me. The Linux code will allow context 
switching to another tasks if the code is taking too much time.


Xen is not preemptible, so is it fine?
This is used when consuming the command queue and could be a potential 
performance issue if the queue is large. (This is never the case).

I am wondering if we should define a yield in the long run?


As I said before, Xen is not preemptible. In this particular case, there 
are spinlocks taken by the callers (e.g. any function assigning a device). 
So yield would just make it worse.


The arguments here don't make much sense - the "yield" instruction has 
nothing to do with software-level concepts of preemption. It is a hint 
to SMT *hardware* that this logical processor is doing nothing useful in 
the short term, so it might be a good idea to let other logical 
processor(s) have priority over shared execution resources if applicable.


Until SMT CPUs become commonly available, though, it's somewhat of a 
moot point and mostly just a future-proofing consideration.


Robin.



Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Boris Ostrovsky
On 11/20/2017 09:36 AM, Andrew Cooper wrote:
> On 20/11/17 14:25, Boris Ostrovsky wrote:
>> On 11/20/2017 09:14 AM, Juergen Gross wrote:
>>> On 20/11/17 14:56, Boris Ostrovsky wrote:
 On 11/20/2017 06:50 AM, Jan Beulich wrote:
 On 20.11.17 at 12:20,  wrote:
>> Which restriction? I'm loading the RSDP table to its architecturally
>> correct address if possible, otherwise it will be loaded to the same
>> address as without my patch. So I'm not adding a restriction, but
>> removing one.
> What is "architecturally correct" in PVH can't be read out of
> specs other than what we write down. When there's no BIOS,
> placing anything right below the 1Mb boundary is at least
> bogus.
 Unless it's a UEFI boot -- where else would you put it? Aren't these two
 (UEFI and non-UEFI) the only two options that the ACPI spec provides?
>>> I think Jan is right: for PVH it's _our_ job to define the correct
>>> placement. 
>> Yes, and if it is placed in a non-standard location then the guest will
>> have to deal with it in a non-standard way. Which we can do in Linux by
>> setting the acpi_rsdp pointer in the special PVH entry point, before jumping
>> to Linux "standard" entry --- startup_{32|64}().
>>
>> But if your goal is to avoid that special entry point (and thus not set
>> acpi_rsdp) then how do you expect the kernel to find the RSDP?
>>
>>> Which still can be the same as in the BIOS case, making
>>> it easier to adapt any guest systems.
>>>
>>> So I'd say: The RSDP address in PVH case is passed in the PVH start
>>> info block to the guest. In case there is no conflict with the
>>> physical load address of the guest kernel the preferred address of
>>> the RSDP is right below the 1MB boundary.
>> And what do we do if there *is* a conflict?
> As a random alternative, what about writing up an RSDP reference into
> the zeropage?

zeropage is an ABI with no provision for ACPI.

>
> I'd be surprised if Xen PVH is the only software in this position of
> trying to use the native paths wherever possible, and not retaining
> legacy ideas of a PC system.


I am not aware of any other guests that completely avoid legacy stuff.
But as I mentioned in another reply, KVM people may be looking at this
as well now.

-boris



Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Andrew Cooper
On 20/11/17 14:25, Boris Ostrovsky wrote:
> On 11/20/2017 09:14 AM, Juergen Gross wrote:
>> On 20/11/17 14:56, Boris Ostrovsky wrote:
>>> On 11/20/2017 06:50 AM, Jan Beulich wrote:
>>> On 20.11.17 at 12:20,  wrote:
> Which restriction? I'm loading the RSDP table to its architecturally
> correct address if possible, otherwise it will be loaded to the same
> address as without my patch. So I'm not adding a restriction, but
> removing one.
 What is "architecturally correct" in PVH can't be read out of
 specs other than what we write down. When there's no BIOS,
 placing anything right below the 1Mb boundary is at least
 bogus.
>>> Unless it's a UEFI boot -- where else would you put it? Aren't these two
>>> (UEFI and non-UEFI) the only two options that the ACPI spec provides?
>> I think Jan is right: for PVH it's _our_ job to define the correct
>> placement. 
> Yes, and if it is placed in a non-standard location then the guest will
> have to deal with it in a non-standard way. Which we can do in Linux by
> setting the acpi_rsdp pointer in the special PVH entry point, before jumping
> to Linux "standard" entry --- startup_{32|64}().
>
> But if your goal is to avoid that special entry point (and thus not set
> acpi_rsdp) then how do you expect the kernel to find the RSDP?
>
>> Which still can be the same as in the BIOS case, making
>> it easier to adapt any guest systems.
>>
>> So I'd say: The RSDP address in PVH case is passed in the PVH start
>> info block to the guest. In case there is no conflict with the
>> physical load address of the guest kernel the preferred address of
>> the RSDP is right below the 1MB boundary.
> And what do we do if there *is* a conflict?

As a random alternative, what about writing up an RSDP reference into
the zeropage?

I'd be surprised if Xen PVH is the only software in this position of
trying to use the native paths wherever possible, and not retaining
legacy ideas of a PC system.

~Andrew



Re: [Xen-devel] [PATCH for-4.10] x86/hvm: Don't ignore unknown MSRs in the migration stream

2017-11-20 Thread Jan Beulich
>>> On 20.11.17 at 15:10,  wrote:
> On 17/11/17 12:10, Jan Beulich wrote:
> On 16.11.17 at 20:15,  wrote:
>>> Doing so amounts to silent state corruption, and must be avoided.
>> I think a little more explanation is needed on why the current code
>> is insufficient. Note specifically this
>>
>> for ( i = 0; !err && i < ctxt->count; ++i )
>> {
>> switch ( ctxt->msr[i].index )
>> {
>> default:
>> if ( !ctxt->msr[i]._rsvd )
>> err = -ENXIO;
>> break;
>> }
>> }
>>
>> in hvm_load_cpu_msrs(), intended to give vendor code a first
>> shot, but allowing for vendor-independent MSRs to be handled
>> here.
> 
> That is sufficiently subtle and non-obvious that I'm still having a hard
> time convincing myself that it's correct.  Also, this use of _rsvd really
> should be documented.

Well, from an abstract pov I agree. The field being defined in the
public interface though, I don't see a good place where to document
that - its point of declaration certainly isn't the right one in such a
case, as the public interface should not document implementation
details.

Jan




[Xen-devel] [xen-4.7-testing test] 116348: tolerable FAIL - PUSHED

2017-11-20 Thread osstest service owner
flight 116348 xen-4.7-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116348/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-qemut-rhel6hvm-amd 13 guest-start.2 fail in 116321 pass in 
116348
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail in 116321 
pass in 116348
 test-amd64-amd64-xl-qemuu-ovmf-amd64 16 guest-localmigrate/x10 fail in 116321 
pass in 116348
 test-xtf-amd64-amd64-1   49 xtf/test-hvm64-lbr-tsx-vmentry fail pass in 116321
 test-armhf-armhf-xl-cubietruck  6 xen-install  fail pass in 116321
 test-armhf-armhf-libvirt  6 xen-installfail pass in 116321
 test-armhf-armhf-libvirt-raw 15 guest-start/debian.repeat  fail pass in 116321
 test-amd64-amd64-xl-qemut-ws16-amd64 16 guest-localmigrate/x10 fail pass in 
116321

Regressions which are regarded as allowable (not blocking):
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stopfail REGR. vs. 115210
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stopfail REGR. vs. 115210

Tests which did not succeed, but are not blocking:
 test-xtf-amd64-amd64-4 49 xtf/test-hvm64-lbr-tsx-vmentry fail in 116321 like 
115189
 test-armhf-armhf-libvirt 14 saverestore-support-check fail in 116321 like 
115210
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop  fail in 116321 like 115210
 test-armhf-armhf-libvirt13 migrate-support-check fail in 116321 never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check fail in 116321 never 
pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check fail in 116321 
never pass
 test-xtf-amd64-amd64-5  49 xtf/test-hvm64-lbr-tsx-vmentry fail like 115189
 test-xtf-amd64-amd64-2  49 xtf/test-hvm64-lbr-tsx-vmentry fail like 115210
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop fail like 115210
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 115210
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 115210
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeatfail  like 115210
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stopfail like 115210
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 115210
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 115210
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 115210
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass
 test-amd64-amd64-xl-qemut-win10-i386 10 windows-installfail never pass
 test-amd64-i386-xl-qemut-win10-i386 10 windows-install fail never pass

version targeted for testing:
 xen  259a5c3000d840f244dbb30f2b47b95f2dc0f80f
baseline version:
 xen  830224431b67fd2afad9bdc532dc1bede20032d5

Last test of basis   115210  

Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Boris Ostrovsky
On 11/20/2017 09:14 AM, Juergen Gross wrote:
> On 20/11/17 14:56, Boris Ostrovsky wrote:
>> On 11/20/2017 06:50 AM, Jan Beulich wrote:
>> On 20.11.17 at 12:20,  wrote:
 Which restriction? I'm loading the RSDP table to its architecturally
 correct address if possible, otherwise it will be loaded to the same
 address as without my patch. So I'm not adding a restriction, but
 removing one.
>>> What is "architecturally correct" in PVH can't be read out of
>>> specs other than what we write down. When there's no BIOS,
>>> placing anything right below the 1Mb boundary is at least
>>> bogus.
>> Unless it's a UEFI boot -- where else would you put it? Aren't these two
>> (UEFI and non-UEFI) the only two options that the ACPI spec provides?
> I think Jan is right: for PVH it's _our_ job to define the correct
> placement. 

Yes, and if it is placed in a non-standard location then the guest will
have to deal with it in a non-standard way. Which we can do in Linux by
setting the acpi_rsdp pointer in the special PVH entry point, before jumping
to Linux "standard" entry --- startup_{32|64}().

But if your goal is to avoid that special entry point (and thus not set
acpi_rsdp) then how do you expect the kernel to find the RSDP?

> Which still can be the same as in the BIOS case, making
> it easier to adapt any guest systems.
>
> So I'd say: The RSDP address in PVH case is passed in the PVH start
> info block to the guest. In case there is no conflict with the
> physical load address of the guest kernel the preferred address of
> the RSDP is right below the 1MB boundary.

And what do we do if there *is* a conflict?


-boris




Re: [Xen-devel] [RFC v2 7/7] xen/iommu: smmu-v3: Add Xen specific code to enable the ported driver

2017-11-20 Thread Julien Grall

Hi Sameer,

On 19/11/17 07:45, Goel, Sameer wrote:

On 10/12/2017 10:36 AM, Julien Grall wrote:

+
+typedef paddr_t phys_addr_t;
+typedef paddr_t dma_addr_t;
+
+/* Alias to Xen device tree helpers */
+#define device_node dt_device_node
+#define of_phandle_args dt_phandle_args
+#define of_device_id dt_device_match
+#define of_match_node dt_match_node
+#define of_property_read_u32(np, pname, out) (!dt_property_read_u32(np, pname, 
out))
+#define of_property_read_bool dt_property_read_bool
+#define of_parse_phandle_with_args dt_parse_phandle_with_args
+#define mutex spinlock_t
+#define mutex_init spin_lock_init
+#define mutex_lock spin_lock
+#define mutex_unlock spin_unlock


mutex and spinlock are not the same. The former is sleeping whilst the latter is 
not.

Can you please explain why this is fine and possibly add that in a comment?


Mutex is used to protect the access to smmu device internal data structure when 
setting up the s2 config and installing stes for a given device in Linux. The 
ste programming operation can be comparatively long, but in the current 
testing, I did not see this blocking for too long. I will put in a comment.


Well, I don't think that this is a justification. You tested on one 
platform and did not explain how you performed the tests.


If I understand correctly, that mutex is only used when assigning 
device. So it might be ok to switch to spinlock. But that's not because 
the operation is not too long, it's just because it would only be performed 
by the toolstack (domctl) and will not be issued by the guest.





+
+/* Xen: Helpers to get device MMIO and IRQs */
+struct resource {
+    u64 addr;
+    u64 size;
+    unsigned int type;
+};


Likely we want a compat header for defining Linux helpers. This would avoid 
replicating it everywhere.

Agreed.


That should be



+
+#define resource_size(res) ((res)->size)
+
+#define platform_device device
+
+#define IORESOURCE_MEM 0
+#define IORESOURCE_IRQ 1
+
+static struct resource *platform_get_resource(struct platform_device *pdev,
+  unsigned int type,
+  unsigned int num)
+{
+    /*
+ * The resource is only used between 2 calls of platform_get_resource.
+ * It's quite ugly but it avoids adding too much code to the part
+ * imported from Linux
+ */
+    static struct resource res;
+    struct acpi_iort_node *iort_node;
+    struct acpi_iort_smmu_v3 *node_smmu_data;
+    int ret = 0;
+
+    res.type = type;
+
+    switch (type) {
+    case IORESOURCE_MEM:
+    if (pdev->type == DEV_ACPI) {
+    ret = 1;
+    iort_node = pdev->acpi_node;
+    node_smmu_data =
+    (struct acpi_iort_smmu_v3 *)iort_node->node_data;
+
+    if (node_smmu_data != NULL) {
+    res.addr = node_smmu_data->base_address;
+    res.size = SZ_128K;
+    ret = 0;
+    }
+    } else {
+    ret = dt_device_get_address(dev_to_dt(pdev), num,
+    , );
+    }
+
+    return ((ret) ? NULL : );
+
+    case IORESOURCE_IRQ:
+    ret = platform_get_irq(dev_to_dt(pdev), num);


No IRQ for ACPI?

For IRQs the code calls platform_get_irq_byname. So, the IORESOURCE_IRQ 
implementation is not needed at all. (DT or ACPI)


Please document it then.

[...]




+    udelay(sleep_us); \
+    } \
+    (cond) ? 0 : -ETIMEDOUT; \
+})
+
+#define readl_relaxed_poll_timeout(addr, val, cond, delay_us, timeout_us) \
+    readx_poll_timeout(readl_relaxed, addr, val, cond, delay_us, timeout_us)
+
+/* Xen: Helpers for IRQ functions */
+#define request_irq(irq, func, flags, name, dev) request_irq(irq, flags, func, 
name, dev)
+#define free_irq release_irq
+
+enum irqreturn {
+    IRQ_NONE    = (0 << 0),
+    IRQ_HANDLED    = (1 << 0),
+};
+
+typedef enum irqreturn irqreturn_t;
+
+/* Device logger functions */
+#define dev_print(dev, lvl, fmt, ...)    \
+ printk(lvl "smmu: " fmt, ## __VA_ARGS__)
+
+#define dev_dbg(dev, fmt, ...) dev_print(dev, XENLOG_DEBUG, fmt, ## 
__VA_ARGS__)
+#define dev_notice(dev, fmt, ...) dev_print(dev, XENLOG_INFO, fmt, ## 
__VA_ARGS__)
+#define dev_warn(dev, fmt, ...) dev_print(dev, XENLOG_WARNING, fmt, ## 
__VA_ARGS__)
+#define dev_err(dev, fmt, ...) dev_print(dev, XENLOG_ERR, fmt, ## __VA_ARGS__)
+#define dev_info(dev, fmt, ...) dev_print(dev, XENLOG_INFO, fmt, ## 
__VA_ARGS__)
+
+#define dev_err_ratelimited(dev, fmt, ...)    \
+ dev_print(dev, XENLOG_ERR, fmt, ## __VA_ARGS__)
+
+#define dev_name(dev) dt_node_full_name(dev_to_dt(dev))
+
+/* Alias to Xen allocation helpers */
+#define kfree xfree
+#define kmalloc(size, flags)    _xmalloc(size, sizeof(void *))
+#define kzalloc(size, flags)    _xzalloc(size, sizeof(void *))
+#define devm_kzalloc(dev, size, flags)    _xzalloc(size, sizeof(void *))
+#define kmalloc_array(size, n, flags)    _xmalloc_array(size, sizeof(void *), 
n)
+
+/* Compatibility defines */
+#undef WARN_ON

Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Jan Beulich
>>> On 20.11.17 at 15:14,  wrote:
> On 20/11/17 14:56, Boris Ostrovsky wrote:
>> On 11/20/2017 06:50 AM, Jan Beulich wrote:
>> On 20.11.17 at 12:20,  wrote:
 Which restriction? I'm loading the RSDP table to its architecturally
 correct address if possible, otherwise it will be loaded to the same
 address as without my patch. So I'm not adding a restriction, but
 removing one.
>>> What is "architecturally correct" in PVH can't be read out of
>>> specs other than what we write down. When there's no BIOS,
>>> placing anything right below the 1Mb boundary is at least
>>> bogus.
>> 
>> Unless it's a UEFI boot -- where else would you put it? Aren't these two
>> (UEFI and non-UEFI) the only two options that the ACPI spec provides?
> 
> I think Jan is right: for PVH it's _our_ job to define the correct
> placement. Which still can be the same as in the BIOS case, making
> it easier to adapt any guest systems.
> 
> So I'd say: The RSDP address in PVH case is passed in the PVH start
> info block to the guest. In case there is no conflict with the
> physical load address of the guest kernel the preferred address of
> the RSDP is right below the 1MB boundary.
> 
> Would this wording be okay?

To be honest (and in case it wasn't sufficiently clear from my
earlier replies) - I'm pretty much opposed to this below-1Mb thing.
There ought to be just plain RAM there for PVH.

Jan




Re: [Xen-devel] [PATCH for-4.10] x86/hvm: Don't ignore unknown MSRs in the migration stream

2017-11-20 Thread Andrew Cooper
On 17/11/17 12:10, Jan Beulich wrote:
 On 16.11.17 at 20:15,  wrote:
>> Doing so amounts to silent state corruption, and must be avoided.
> I think a little more explanation is needed on why the current code
> is insufficient. Note specifically this
>
> for ( i = 0; !err && i < ctxt->count; ++i )
> {
> switch ( ctxt->msr[i].index )
> {
> default:
> if ( !ctxt->msr[i]._rsvd )
> err = -ENXIO;
> break;
> }
> }
>
> in hvm_load_cpu_msrs(), intended to give vendor code a first
> shot, but allowing for vendor independent MSRs to be handled
> here.

That is sufficiently subtle and non-obvious that I'm still having a hard
time convincing myself that it's correct.  Also, this use of _rsvd really
should be documented.

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Juergen Gross
On 20/11/17 14:56, Boris Ostrovsky wrote:
> On 11/20/2017 06:50 AM, Jan Beulich wrote:
> On 20.11.17 at 12:20,  wrote:
>>> Which restriction? I'm loading the RSDP table to its architecturally
>>> correct address if possible, otherwise it will be loaded to the same
>>> address as without my patch. So I'm not adding a restriction, but
>>> removing one.
>> What is "architecturally correct" in PVH can't be read out of
>> specs other than what we write down. When there's no BIOS,
>> placing anything right below the 1Mb boundary is at least
>> bogus.
> 
> Unless it's a UEFI boot -- where else would you put it? Aren't these two
> (UEFI and non-UEFI) the only two options that the ACPI spec provides?

I think Jan is right: for PVH it's _our_ job to define the correct
placement. Which still can be the same as in the BIOS case, making
it easier to adapt any guest systems.

So I'd say: The RSDP address in the PVH case is passed in the PVH start
info block to the guest. In case there is no conflict with the
physical load address of the guest kernel the preferred address of
the RSDP is right below the 1MB boundary.

Would this wording be okay?


Juergen

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Jan Beulich
>>> On 20.11.17 at 14:56,  wrote:
> On 11/20/2017 06:50 AM, Jan Beulich wrote:
> On 20.11.17 at 12:20,  wrote:
>>> Which restriction? I'm loading the RSDP table to its architecturally
>>> correct address if possible, otherwise it will be loaded to the same
>>> address as without my patch. So I'm not adding a restriction, but
>>> removing one.
>> What is "architecturally correct" in PVH can't be read out of
>> specs other than what we write down. When there's no BIOS,
>> placing anything right below the 1Mb boundary is at least
>> bogus.
> 
> Unless it's a UEFI boot -- where else would you put it? Aren't these two
> (UEFI and non-UEFI) the only two options that the ACPI spec provides?

Of course - we can't really expect them to cater for something like
PVH. But this also means we can't always use the spec as a reference
point here.

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Boris Ostrovsky
On 11/20/2017 06:50 AM, Jan Beulich wrote:
 On 20.11.17 at 12:20,  wrote:
>> Which restriction? I'm loading the RSDP table to its architecturally
>> correct address if possible, otherwise it will be loaded to the same
>> address as without my patch. So I'm not adding a restriction, but
>> removing one.
> What is "architecturally correct" in PVH can't be read out of
> specs other than what we write down. When there's no BIOS,
> placing anything right below the 1Mb boundary is at least
> bogus.

Unless it's a UEFI boot -- where else would you put it? Aren't these two
(UEFI and non-UEFI) the only two options that the ACPI spec provides?

-boris

>
> As to the prior grub2 discussion you refer to - just like Andrew
> I don't think I was really involved here. If it resulted in any
> decisions affecting the PVH ABI, I think it would be a good idea
> to summarize the outcome, and perhaps even submit a patch
> adding respective documentation (e.g. by way of comment in
> public headers). That'll then allow non-grub Xen folks (like
> Andrew and me) to see what you're intending to do (and of
> course there would be the risk of someone disagreeing with
> what you had come up with while discussing this on the grub
> side).
>
> Jan
>
>
> ___
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> https://lists.xen.org/xen-devel


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Boris Ostrovsky
On 11/20/2017 06:20 AM, Juergen Gross wrote:
> On 20/11/17 11:57, Andrew Cooper wrote:
>> On 20/11/17 10:43, Juergen Gross wrote:
>>> On 20/11/17 11:21, Andrew Cooper wrote:
 On 20/11/17 10:04, Juergen Gross wrote:
> On 20/11/17 10:58, Andrew Cooper wrote:
>> On 20/11/2017 09:55, Juergen Gross wrote:
>>> On 20/11/17 10:51, Roger Pau Monné wrote:
 Adding xen-devel, dropped it on my reply. 

 Replying from my phone, sorry for the formatting. 


 On 20 Nov 2017 9:35, "Juergen Gross" wrote:

 For PVH domains loading of the ACPI RSDP table is done via allocating
 a domain loader segment after having loaded the kernel. This leads to
 the RSDP table being loaded at an arbitrary guest address instead of
 the architecturally correct address just below 1MB.


 AFAIK this is only true for legacy BIOS boot, when using UEFI the
 RSDP can be anywhere in memory, hence grub2 must already have an
 alternative way of finding the RSDP apart from scanning the low 1MB.
>>> The problem isn't grub2, but the loaded linux kernel. Without this
>>> patch Linux won't find the RSDP when booted in a PVH domain via grub2.
>>>
>>> I could modify grub2 even further to move the RSDP to the correct
>>> address, but I think doing it correctly on Xen side is the better
>>> option.
>> Why?  The PVH info block contains a pointer directly to the RSDP, and
>> Linux should be following this rather than scanning for it using the
>> legacy method.
> Oh no, please not this discussion again.
>
> We already had a very long discussion about how to do PVH support in grub2,
> and the outcome was to try to use the standard boot entry of the kernel
> instead of the PVH-specific one.
>
> The Linux kernel right now doesn't make use of the RSDP pointer in the
> PVH info block, so I think we shouldn't change this when using grub2.


As I mentioned in the other thread --- it will when we get to dom0 support.

 I clearly missed the previous discussion, and I don't advocate using yet
 another PVH-specific entry point, but how does Linux cope in other
 non-BIOS environments?  Does it genuinely rely exclusively on the legacy
 mechanism?

FYI (and this is not directly related to this thread) there was a
discussion with KVM engineers and they may be interested in doing a
PVH-like boot, using Xen PVH entry point.

Adding Maran who is looking at this.

-boris

>>> Looking at the code I think so, yes. Maybe there are cases where no RSDP
>>> is needed, but in the grub2/PVH case we need it to distinguish PVH from
>>> HVM.
>> In which case, being a Linux limitation, I think it is wrong to
>> unilaterally apply this restriction to all other PVH guests.
> Which restriction? I'm loading the RSDP table to its architecturally
> correct address if possible, otherwise it will be loaded to the same
> address as without my patch. So I'm not adding a restriction, but
> removing one.
>
>> Doing this in grub seems like the more appropriate place IMO.
> I don't think so.
>
>
> Juergen
>
> ___
> Xen-devel mailing list
> Xen-devel@lists.xen.org
> https://lists.xen.org/xen-devel


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v1 0/6] libxl: create standalone vkb device

2017-11-20 Thread Oleksandr Grytsov
On Tue, Nov 14, 2017 at 2:39 PM, Oleksandr Grytsov  wrote:

> On Wed, Nov 1, 2017 at 5:05 PM, Oleksandr Grytsov 
> wrote:
>
>> From: Oleksandr Grytsov 
>>
>> Changes since initial:
>>  * add setting backend-type to xenstore
>>  * add id field to identify the vkb device on backend side
>>
>> Oleksandr Grytsov (6):
>>   libxl: move vkb device to libxl_vkb.c
>>   libxl: fix vkb XS entry and type
>>   libxl: add backend type and id to vkb
>>   libxl: vkb add list and info functions
>>   xl: add vkb config parser and CLI
>>   docs: add vkb device to xl.cfg and xl
>>
>>  docs/man/xl.cfg.pod.5.in|  28 ++
>>  docs/man/xl.pod.1.in|  22 +
>>  tools/libxl/Makefile|   1 +
>>  tools/libxl/libxl.h |  10 ++
>>  tools/libxl/libxl_console.c |  53 ---
>>  tools/libxl/libxl_create.c  |   3 +
>>  tools/libxl/libxl_dm.c  |   1 +
>>  tools/libxl/libxl_types.idl |  19 
>>  tools/libxl/libxl_utils.h   |   3 +
>>  tools/libxl/libxl_vkb.c | 226 ++
>> ++
>>  tools/xl/Makefile   |   2 +-
>>  tools/xl/xl.h   |   3 +
>>  tools/xl/xl_cmdtable.c  |  15 +++
>>  tools/xl/xl_parse.c |  75 ++-
>>  tools/xl/xl_parse.h |   2 +-
>>  tools/xl/xl_vkb.c   | 142 
>>  16 files changed, 549 insertions(+), 56 deletions(-)
>>  create mode 100644 tools/libxl/libxl_vkb.c
>>  create mode 100644 tools/xl/xl_vkb.c
>>
>> --
>> 2.7.4
>>
>>
> ping
>
> --
> Best Regards,
> Oleksandr Grytsov.
>

ping

-- 
Best Regards,
Oleksandr Grytsov.
___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v1 0/5] libxl: add PV sound device

2017-11-20 Thread Oleksandr Grytsov
On Tue, Nov 14, 2017 at 2:38 PM, Oleksandr Grytsov  wrote:

> On Wed, Nov 1, 2017 at 5:04 PM, Oleksandr Grytsov 
> wrote:
>
>> From: Oleksandr Grytsov 
>>
>> This patch set adds PV sound device support to xl.cfg and xl.
>> See sndif.h for protocol implementation details.
>>
>> Changes since initial:
>>  * fix code style
>>  * change unique-id from int to string (to make id more user readable)
>>
>> Oleksandr Grytsov (5):
>>   libxl: add PV sound device
>>   libxl: add vsnd list and info
>>   xl: add PV sound config parser
>>   xl: add vsnd CLI commands
>>   docs: add PV sound device config
>>
>>  docs/man/xl.cfg.pod.5.in | 150 
>>  docs/man/xl.pod.1.in |  30 ++
>>  tools/libxl/Makefile |   2 +-
>>  tools/libxl/libxl.h  |  24 ++
>>  tools/libxl/libxl_create.c   |   1 +
>>  tools/libxl/libxl_internal.h |   1 +
>>  tools/libxl/libxl_types.idl  |  83 +
>>  tools/libxl/libxl_types_internal.idl |   1 +
>>  tools/libxl/libxl_utils.h|   3 +
>>  tools/libxl/libxl_vsnd.c | 699 ++
>> +
>>  tools/xl/Makefile|   2 +-
>>  tools/xl/xl.h|   3 +
>>  tools/xl/xl_cmdtable.c   |  15 +
>>  tools/xl/xl_parse.c  | 246 
>>  tools/xl/xl_parse.h  |   1 +
>>  tools/xl/xl_vsnd.c   | 203 ++
>>  16 files changed, 1462 insertions(+), 2 deletions(-)
>>  create mode 100644 tools/libxl/libxl_vsnd.c
>>  create mode 100644 tools/xl/xl_vsnd.c
>>
>> --
>> 2.7.4
>>
>>
> ping
>
> --
> Best Regards,
> Oleksandr Grytsov.
>

ping

-- 
Best Regards,
Oleksandr Grytsov.
___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-4.10] x86/hvm: Don't corrupt the HVM context stream when writing the MSR record

2017-11-20 Thread Andrew Cooper
On 17/11/17 12:15, Jan Beulich wrote:
 On 16.11.17 at 23:45,  wrote:
>> Ever since it was introduced in c/s bd1f0b45ff, hvm_save_cpu_msrs() has had a
>> bug whereby it corrupts the HVM context stream if some, but fewer than the
>> maximum number of MSRs are written.
>>
>> _hvm_init_entry() creates an hvm_save_descriptor with length for
>> msr_count_max, but in the case that we write fewer than max, h->cur only 
>> moves
>> forward by the amount of space used, causing the subsequent
>> hvm_save_descriptor to be written within the bounds of the previous one.
>>
>> To resolve this, reduce the length reported by the descriptor to match the
>> actual number of bytes used.
>>
>> A typical failure on the destination side looks like:
>>
>> (XEN) HVM4 restore: CPU_MSR 0
>> (XEN) HVM4.0 restore: not enough data left to read 56 MSR bytes
>> (XEN) HVM4 restore: failed to load entry 20/0
>>
>> Signed-off-by: Andrew Cooper 
> Reviewed-by: Jan Beulich 
>
>> --- a/xen/arch/x86/hvm/hvm.c
>> +++ b/xen/arch/x86/hvm/hvm.c
>> @@ -1330,6 +1330,7 @@ static int hvm_save_cpu_msrs(struct domain *d, 
>> hvm_domain_context_t *h)
>>  
>>  for_each_vcpu ( d, v )
>>  {
>> +struct hvm_save_descriptor *d = _p(&h->data[h->cur]);
>>  struct hvm_msr *ctxt;
>>  unsigned int i;
>>  
>> @@ -1348,8 +1349,13 @@ static int hvm_save_cpu_msrs(struct domain *d, 
>> hvm_domain_context_t *h)
>>  ctxt->msr[i]._rsvd = 0;
>>  
>>  if ( ctxt->count )
>> +{
>> +/* Rewrite length to indicate how much space we actually used. */
>> +d->length = HVM_CPU_MSR_SIZE(ctxt->count);
> Would of course be nice if we had a function to do this, such that
> the (sufficiently hidden) cast above also wouldn't be necessary to
> open code in places like this one.

This is the one and only case where we need to rewrite the length.  All
records (other than XSAVE) are fixed size, and XSAVE can calculate the
exact required size ahead of setting up descriptor in the first place.

(Also, this code isn't long for the world.  It will be changing when the
CPUID/MSR policy work is complete, and we can rearrange the migration
stream to put data in the order the destination needs to receive it.)

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-next] x86/vmx: Drop more PVHv1 remnants

2017-11-20 Thread Jan Beulich
>>> On 20.11.17 at 14:19,  wrote:
> Signed-off-by: Andrew Cooper 

Reviewed-by: Jan Beulich 



___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [PATCH for-next] x86/vmx: Drop more PVHv1 remnants

2017-11-20 Thread Andrew Cooper
Signed-off-by: Andrew Cooper 
---
CC: Jan Beulich 
CC: Jun Nakajima 
CC: Kevin Tian 
---
 xen/arch/x86/hvm/vmx/intr.c | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
index 4c0f1c8..eb9b288 100644
--- a/xen/arch/x86/hvm/vmx/intr.c
+++ b/xen/arch/x86/hvm/vmx/intr.c
@@ -229,7 +229,7 @@ void vmx_intr_assist(void)
 struct vcpu *v = current;
 unsigned int tpr_threshold = 0;
 enum hvm_intblk intblk;
-int pt_vector = -1;
+int pt_vector;
 
 /* Block event injection when single step with MTF. */
 if ( unlikely(v->arch.hvm_vcpu.single_step) )
@@ -240,8 +240,7 @@ void vmx_intr_assist(void)
 }
 
 /* Crank the handle on interrupt state. */
-if ( is_hvm_vcpu(v) )
-pt_vector = pt_update_irq(v);
+pt_vector = pt_update_irq(v);
 
 do {
 unsigned long intr_info;
-- 
2.1.4


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH] xen-netfront: remove warning when unloading module

2017-11-20 Thread Eduardo Otubo
On Mon, Nov 20, 2017 at 12:17:11PM +0100, Juergen Gross wrote:
> On 20/11/17 11:49, Wei Liu wrote:
> > CC netfront maintainers.
> > 
> > On Mon, Nov 20, 2017 at 11:41:09AM +0100, Eduardo Otubo wrote:
> >> When unloading module xen_netfront from guest, dmesg would output
> >> warning messages like below:
> >>
> >>   [  105.236836] xen:grant_table: WARNING: g.e. 0x903 still in use!
> >>   [  105.236839] deferring g.e. 0x903 (pfn 0x35805)
> >>
> >> This problem stems from netfront and netback being out of sync: by the time
> >> netfront revokes the g.e.'s, netback hasn't had enough time to free all of
> >> them, hence the warnings in dmesg.
> >>
> >> The trick here is to make netfront wait until netback has freed all the
> >> g.e.'s and only then continue the cleanup for module removal; this is
> >> done by manipulating both device states.
> >>
> >> Signed-off-by: Eduardo Otubo 
> >> ---
> >>  drivers/net/xen-netfront.c | 11 +++
> >>  1 file changed, 11 insertions(+)
> >>
> >> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> >> index 8b8689c6d887..b948e2a1ce40 100644
> >> --- a/drivers/net/xen-netfront.c
> >> +++ b/drivers/net/xen-netfront.c
> >> @@ -2130,6 +2130,17 @@ static int xennet_remove(struct xenbus_device *dev)
> >>  
> >>  dev_dbg(&dev->dev, "%s\n", dev->nodename);
> >>  
> >> +  xenbus_switch_state(dev, XenbusStateClosing);
> >> +  while (xenbus_read_driver_state(dev->otherend) != XenbusStateClosing){
> >> +  cpu_relax();
> >> +  schedule();
> >> +  }
> >> +  xenbus_switch_state(dev, XenbusStateClosed);
> >> +  while (dev->xenbus_state != XenbusStateClosed){
> >> +  cpu_relax();
> >> +  schedule();
> >> +  }
> 
> I really don't like the busy waits.
> 
> Can't you use e.g. a wait queue and wait_event_interruptible() instead?

I thought about using these, but I don't think the busy waits here are much of a
problem because it's just unloading a kernel module, not a very repetitive
action. But yes I can go for this approach on v2.

> 
> BTW: what happens if the device is already in closed state if you enter
> xennet_remove()? In case this is impossible, please add a comment to
> indicate you've thought about that case.

Looks like this is the same problem Paul Durrant mentioned in his comment. I'll
work on this as well in v2.

Thanks for the review and the help on IRC :-)

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH] xen-netfront: remove warning when unloading module

2017-11-20 Thread 'Eduardo Otubo'
On Mon, Nov 20, 2017 at 10:55:55AM +, Paul Durrant wrote:
> > -Original Message-
> > From: Eduardo Otubo [mailto:ot...@redhat.com]
> > Sent: 20 November 2017 10:41
> > To: xen-de...@lists.xenproject.org
> > Cc: net...@vger.kernel.org; Paul Durrant ; Wei
> > Liu ; linux-ker...@vger.kernel.org;
> > vkuzn...@redhat.com; cav...@redhat.com; che...@redhat.com;
> > mga...@redhat.com; Eduardo Otubo 
> > Subject: [PATCH] xen-netfront: remove warning when unloading module
> > 
> > When unloading module xen_netfront from guest, dmesg would output
> > warning messages like below:
> > 
> >   [  105.236836] xen:grant_table: WARNING: g.e. 0x903 still in use!
> >   [  105.236839] deferring g.e. 0x903 (pfn 0x35805)
> > 
> > This problem stems from netfront and netback being out of sync: by the time
> > netfront revokes the g.e.'s, netback hasn't had enough time to free all of
> > them, hence the warnings in dmesg.
> > 
> > The trick here is to make netfront wait until netback has freed all the
> > g.e.'s and only then continue the cleanup for module removal; this is
> > done by manipulating both device states.
> > 
> > Signed-off-by: Eduardo Otubo 
> > ---
> >  drivers/net/xen-netfront.c | 11 +++
> >  1 file changed, 11 insertions(+)
> > 
> > diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> > index 8b8689c6d887..b948e2a1ce40 100644
> > --- a/drivers/net/xen-netfront.c
> > +++ b/drivers/net/xen-netfront.c
> > @@ -2130,6 +2130,17 @@ static int xennet_remove(struct xenbus_device
> > *dev)
> > 
> > dev_dbg(&dev->dev, "%s\n", dev->nodename);
> > 
> > +   xenbus_switch_state(dev, XenbusStateClosing);
> > +   while (xenbus_read_driver_state(dev->otherend) !=
> > XenbusStateClosing){
> > +   cpu_relax();
> > +   schedule();
> > +   }
> > +   xenbus_switch_state(dev, XenbusStateClosed);
> > +   while (dev->xenbus_state != XenbusStateClosed){
> > +   cpu_relax();
> > +   schedule();
> > +   }
> > +
> 
> Waiting for closing should be ok, but waiting for closed is risky. As soon as
> a backend is in the closed state, a toolstack can completely remove the
> backend xenstore area, resulting in a state of XenbusStateUnknown, which would
> cause your second loop to spin forever.
> 
>   Paul

Well, that's a scenario I didn't foresee. I'll come up with a solution to
avoid this problem. Thanks for the review.

> 
> > xennet_disconnect_backend(info);
> > 
> > unregister_netdev(info->netdev);
> > --
> > 2.13.6
> 

-- 
Eduardo Otubo

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [seabios test] 116346: regressions - FAIL

2017-11-20 Thread osstest service owner
flight 116346 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116346/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop   fail REGR. vs. 115539

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 115539
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 115539
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop fail like 115539
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass

version targeted for testing:
 seabios  df46d10c8a7b88eb82f3ceb2aa31782dee15593d
baseline version:
 seabios  0ca6d6277dfafc671a5b3718cbeb5c78e2a888ea

Last test of basis   115539  2017-11-03 20:48:58 Z   16 days
Failing since115733  2017-11-10 17:19:59 Z9 days   16 attempts
Testing same since   116211  2017-11-16 00:20:45 Z4 days6 attempts


People who touched revisions under test:
  Kevin O'Connor 
  Stefan Berger 

jobs:
 build-amd64-xsm  pass
 build-i386-xsm   pass
 build-amd64  pass
 build-i386   pass
 build-amd64-libvirt  pass
 build-i386-libvirt   pass
 build-amd64-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm   pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsmpass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-xsmpass
 test-amd64-i386-xl-qemuu-debianhvm-amd64-xsm pass
 test-amd64-amd64-qemuu-nested-amdfail
 test-amd64-i386-qemuu-rhel6hvm-amd   pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64pass
 test-amd64-i386-xl-qemuu-debianhvm-amd64 pass
 test-amd64-amd64-xl-qemuu-win7-amd64 fail
 test-amd64-i386-xl-qemuu-win7-amd64  fail
 test-amd64-amd64-xl-qemuu-ws16-amd64 fail
 test-amd64-i386-xl-qemuu-ws16-amd64  fail
 test-amd64-amd64-xl-qemuu-win10-i386 fail
 test-amd64-i386-xl-qemuu-win10-i386  fail
 test-amd64-amd64-qemuu-nested-intel  pass
 test-amd64-i386-qemuu-rhel6hvm-intel pass



sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.


commit df46d10c8a7b88eb82f3ceb2aa31782dee15593d
Author: Stefan Berger 
Date:   Tue Nov 14 15:03:47 2017 -0500

tpm: Add support for TPM2 ACPI table

Add support for the TPM2 ACPI table. If we find it and its
of the appropriate size, we can get the log_area_start_address
and log_area_minimum_size from it.

The latest version of the spec can be found here:

https://trustedcomputinggroup.org/tcg-acpi-specification/

Signed-off-by: Stefan Berger 

commit 0541f2f0f246e77d7c726926976920e8072d1119
Author: Kevin O'Connor 
Date:   Fri Nov 10 12:20:35 2017 -0500

paravirt: Only enable sercon in NOGRAPHIC mode if no other console specified

Signed-off-by: Kevin O'Connor 

commit 9ce6778f08c632c52b25bc8f754291ef18710d53
Author: Kevin O'Connor 
Date:   Fri Nov 10 12:16:36 2017 -0500

docs: Add sercon-port to Runtime_config.md documentation

Signed-off-by: Kevin O'Connor 

[Xen-devel] [linux-linus test] 116343: regressions - FAIL

2017-11-20 Thread osstest service owner
flight 116343 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116343/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-win10-i386  7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-pygrub   7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 
115643
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-xl-pvhv2-amd  7 xen-bootfail REGR. vs. 115643
 test-amd64-i386-freebsd10-i386  7 xen-boot   fail REGR. vs. 115643
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-xl-xsm7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 
115643
 test-amd64-i386-freebsd10-amd64  7 xen-boot  fail REGR. vs. 115643
 test-amd64-i386-libvirt-xsm   7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-xl-qcow2 7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-examine   8 reboot   fail REGR. vs. 115643
 test-amd64-i386-xl7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-rumprun-i386  7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 
115643
 test-amd64-i386-xl-raw7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-libvirt-qcow2  7 xen-bootfail REGR. vs. 115643
 test-amd64-amd64-xl-multivcpu  7 xen-bootfail REGR. vs. 115643
 test-amd64-amd64-xl-credit2   7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-xl-xsm   7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-i386-pvgrub  7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-xl-qemut-debianhvm-amd64-xsm  7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-boot   fail REGR. vs. 115643
 test-amd64-amd64-xl-qemut-debianhvm-amd64-xsm 7 xen-boot fail REGR. vs. 115643
 test-amd64-amd64-xl-qemut-win7-amd64  7 xen-boot fail REGR. vs. 115643
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-boot  fail REGR. vs. 115643
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-boot   fail REGR. vs. 115643

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 115643
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 115643
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 115643
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail like 115643
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 115643
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 115643
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stopfail like 115643
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 115643
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 

Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Jan Beulich
>>> On 20.11.17 at 12:20,  wrote:
> Which restriction? I'm loading the RSDP table to its architecturally
> correct address if possible, otherwise it will be loaded to the same
> address as without my patch. So I'm not adding a restriction, but
> removing one.

What is "architecturally correct" in PVH can't be read out of
specs other than what we write down. When there's no BIOS,
placing anything right below the 1Mb boundary is at least
bogus.

As to the prior grub2 discussion you refer to - just like Andrew
I don't think I was really involved here. If it resulted in any
decisions affecting the PVH ABI, I think it would be a good idea
to summarize the outcome, and perhaps even submit a patch
adding respective documentation (e.g. by way of comment in
public headers). That'll then allow non-grub Xen folks (like
Andrew and me) to see what you're intending to do (and of
course there would be the risk of someone disagreeing with
what you had come up with while discussing this on the grub
side).

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH v8] x86/altp2m: support for setting restrictions for an array of pages

2017-11-20 Thread Jan Beulich
>>> On 20.11.17 at 10:35,  wrote:
> On Ma, 2017-10-24 at 13:19 +0300, Petre Pircalabu wrote:
>> From: Razvan Cojocaru 
>>
>> For the default EPT view we have xc_set_mem_access_multi(), which
>> is able to set an array of pages to an array of access rights with
>> a single hypercall. However, this functionality was lacking for the
>> altp2m subsystem, which could only set page restrictions for one
>> page at a time. This patch addresses the gap.
>>
>> HVMOP_altp2m_set_mem_access_multi has been added as a HVMOP (as
>> opposed to a
>> DOMCTL) for consistency with its HVMOP_altp2m_set_mem_access
>> counterpart (and
>> hence with the original altp2m design, where domains are allowed -
>> with the
>> proper altp2m access rights - to alter these settings), in the
>> absence of an
>> official position on the issue from the original altp2m designers.
>>
>> Signed-off-by: Razvan Cojocaru 
>> Signed-off-by: Petre Pircalabu 
> 
> Are there still any outstanding issues with this patch?

I for one don't know - I simply didn't get around to looking at it yet.
The tree's still frozen right now anyway.

Jan


___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Juergen Gross
On 20/11/17 11:57, Andrew Cooper wrote:
> On 20/11/17 10:43, Juergen Gross wrote:
>> On 20/11/17 11:21, Andrew Cooper wrote:
>>> On 20/11/17 10:04, Juergen Gross wrote:
 On 20/11/17 10:58, Andrew Cooper wrote:
> On 20/11/2017 09:55, Juergen Gross wrote:
>> On 20/11/17 10:51, Roger Pau Monné wrote:
>>> Adding xen-devel, dropped it on my reply. 
>>>
>>> Replying from my phone, sorry for the formatting. 
>>>
>>>
>>> On 20 Nov 2017 9:35, "Juergen Gross" wrote:
>>>
>>> For PVH domains loading of the ACPI RSDP table is done via allocating
>>> a domain loader segment after having loaded the kernel. This leads to
>>> the RSDP table being loaded at an arbitrary guest address instead of
>>> the architecturally correct address just below 1MB.
>>>
>>>
>>> AFAIK this is only true for legacy BIOS boot, when using UEFI the
>>> RSDP can be anywhere in memory, hence grub2 must already have an
>>> alternative way of finding the RSDP apart from scanning the low 1MB.
>> The problem isn't grub2, but the loaded linux kernel. Without this
>> patch Linux won't find the RSDP when booted in a PVH domain via grub2.
>>
>> I could modify grub2 even further to move the RSDP to the correct
>> address, but I think doing it correctly on Xen side is the better
>> option.
> Why?  The PVH info block contains a pointer directly to the RSDP, and
> Linux should be following this rather than scanning for it using the
> legacy method.
 Oh no, please not this discussion again.

 We already had a very long discussion about how to do PVH support in grub2,
 and the outcome was to try to use the standard boot entry of the kernel
 instead of the PVH-specific one.

 The Linux kernel right now doesn't make use of the RSDP pointer in the
 PVH info block, so I think we shouldn't change this when using grub2.
>>> I clearly missed the previous discussion, and I don't advocate using yet
>>> another PVH-specific entry point, but how does Linux cope in other
>>> non-BIOS environments?  Does it genuinely rely exclusively on the legacy
>>> mechanism?
>> Looking at the code I think so, yes. Maybe there are cases where no RSDP
>> is needed, but in the grub2/PVH case we need it to distinguish PVH from
>> HVM.
> 
> In which case, being a Linux limitation, I think it is wrong to
> unilaterally apply this restriction to all other PVH guests.

Which restriction? I'm loading the RSDP table at its architecturally
correct address if possible; otherwise it will be loaded at the same
address as without my patch. So I'm not adding a restriction, but
removing one.
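For reference, the legacy discovery mechanism the thread keeps coming back to amounts to scanning memory on 16-byte boundaries for the RSDP signature with a byte-wise checksum. A minimal user-space sketch (the function name and buffer handling are illustrative, not Linux's actual ACPI code):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Scan a memory window on 16-byte boundaries for an ACPI 1.0 RSDP:
 * the 8-byte "RSD PTR " signature followed by bytes making the first
 * 20 bytes sum to zero (mod 256). This mirrors the legacy scan of
 * the EBDA and the 0xE0000-0xFFFFF BIOS area below 1MB. */
static const uint8_t *find_rsdp(const uint8_t *base, size_t len)
{
    for (size_t off = 0; off + 20 <= len; off += 16) {
        const uint8_t *p = base + off;
        uint8_t sum = 0;

        if (memcmp(p, "RSD PTR ", 8) != 0)
            continue;
        for (size_t i = 0; i < 20; i++)
            sum += p[i];
        if (sum == 0)
            return p;   /* valid RSDP candidate */
    }
    return NULL;
}
```

A kernel that only implements this scan cannot find an RSDP placed at an arbitrary high guest address, which is exactly the failure mode described above.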

> Doing this in grub seems like the more appropriate place IMO.

I don't think so.


Juergen



Re: [Xen-devel] [PATCH] xen-netfront: remove warning when unloading module

2017-11-20 Thread Juergen Gross
On 20/11/17 11:49, Wei Liu wrote:
> CC netfront maintainers.
> 
> On Mon, Nov 20, 2017 at 11:41:09AM +0100, Eduardo Otubo wrote:
>> When unloading module xen_netfront from guest, dmesg would output
>> warning messages like below:
>>
>>   [  105.236836] xen:grant_table: WARNING: g.e. 0x903 still in use!
>>   [  105.236839] deferring g.e. 0x903 (pfn 0x35805)
>>
>> This problem stems from netfront and netback being out of sync. By the time
>> netfront revokes the g.e.'s, netback hasn't had enough time to free all of
>> them, hence the warnings in dmesg.
>>
>> The trick here is to make netfront wait until netback has freed all the
>> g.e.'s and only then continue with the cleanup for module removal, which is
>> done by manipulating both device states.
>>
>> Signed-off-by: Eduardo Otubo 
>> ---
>>  drivers/net/xen-netfront.c | 11 +++
>>  1 file changed, 11 insertions(+)
>>
>> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
>> index 8b8689c6d887..b948e2a1ce40 100644
>> --- a/drivers/net/xen-netfront.c
>> +++ b/drivers/net/xen-netfront.c
>> @@ -2130,6 +2130,17 @@ static int xennet_remove(struct xenbus_device *dev)
>>  
>>  dev_dbg(&dev->dev, "%s\n", dev->nodename);
>>  
>> +xenbus_switch_state(dev, XenbusStateClosing);
>> +while (xenbus_read_driver_state(dev->otherend) != XenbusStateClosing){
>> +cpu_relax();
>> +schedule();
>> +}
>> +xenbus_switch_state(dev, XenbusStateClosed);
>> +while (dev->xenbus_state != XenbusStateClosed){
>> +cpu_relax();
>> +schedule();
>> +}

I really don't like the busy waits.

Can't you use e.g. a wait queue and wait_event_interruptible() instead?
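A wait-queue based variant might look like the following sketch (illustrative only: the `module_unload_q` wait queue and a wake-up call in netfront's xenbus otherend-changed handler are assumed additions, not existing netfront code):

```c
/* Sketch, not a tested patch: assumes a global wait queue
 *   static DECLARE_WAIT_QUEUE_HEAD(module_unload_q);
 * which the otherend-changed callback wakes via wake_up_all()
 * whenever the backend state changes. */
static void xennet_sync_close(struct xenbus_device *dev)
{
	xenbus_switch_state(dev, XenbusStateClosing);
	wait_event(module_unload_q,
		   xenbus_read_driver_state(dev->otherend) ==
		   XenbusStateClosing);

	xenbus_switch_state(dev, XenbusStateClosed);
	wait_event(module_unload_q,
		   xenbus_read_driver_state(dev->otherend) ==
		   XenbusStateClosed ||
		   xenbus_read_driver_state(dev->otherend) ==
		   XenbusStateUnknown);
}
```

Blocking in `wait_event()` lets the task sleep until the backend actually changes state instead of burning CPU in a `cpu_relax()`/`schedule()` loop.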

BTW: what happens if the device is already in the closed state when
xennet_remove() is entered? In case this is impossible, please add a comment to
indicate you've thought about that case.

Other than that: you should run ./scripts/checkpatch.pl against your
patch to avoid common style problems.


Juergen




Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Andrew Cooper
On 20/11/17 10:43, Juergen Gross wrote:
> On 20/11/17 11:21, Andrew Cooper wrote:
>> On 20/11/17 10:04, Juergen Gross wrote:
>>> On 20/11/17 10:58, Andrew Cooper wrote:
 On 20/11/2017 09:55, Juergen Gross wrote:
> On 20/11/17 10:51, Roger Pau Monné wrote:
>> Adding xen-devel, dropped it on my reply. 
>>
>> Replying from my phone, sorry for the formatting. 
>>
>>
>> On 20 Nov 2017 9:35, "Juergen Gross" wrote:
>>
>> For PVH domains loading of the ACPI RSDP table is done via
>> allocating
>> a domain loader segment after having loaded the kernel. This
>> leads to
>> the RSDP table being loaded at an arbitrary guest address 
>> instead of
>> the architecturally correct address just below 1MB.
>>
>>
>> AFAIK this is only true for legacy BIOS boot, when using UEFI the
>> RSDP can be anywhere in memory, hence grub2 must already have an
>> alternative way of finding the RSDP apart from scanning the low 1MB.
> The problem isn't grub2, but the loaded linux kernel. Without this
> patch Linux won't find the RSDP when booted in a PVH domain via grub2.
>
> I could modify grub2 even further to move the RSDP to the correct
> address, but I think doing it correctly on Xen side is the better
> option.
 Why?  The PVH info block contains a pointer directly to the RSDP, and
 Linux should be following this rather than scanning for it using the
 legacy method.
>>> Oh no, please not this discussion again.
>>>
>>> We already had a very long discussion about how to do PVH support in grub2,
>>> and the outcome was to try to use the standard boot entry of the kernel
>>> instead of the PVH-specific one.
>>>
>>> The Linux kernel right now doesn't make use of the RSDP pointer in the
>>> PVH info block, so I think we shouldn't change this when using grub2.
>> I clearly missed the previous discussion, and I don't advocate using yet
>> another PVH-specific entry point, but how does Linux cope in other
>> non-BIOS environments?  Does it genuinely rely exclusively on the legacy
>> mechanism?
> Looking at the code I think so, yes. Maybe there are cases where no RSDP
> is needed, but in the grub2/PVH case we need it to distinguish PVH from
> HVM.

In which case, being a Linux limitation, I think it is wrong to
unilaterally apply this restriction to all other PVH guests.

Doing this in grub seems like the more appropriate place IMO.

~Andrew



Re: [Xen-devel] [PATCH] xen-netfront: remove warning when unloading module

2017-11-20 Thread Paul Durrant
> -Original Message-
> From: Eduardo Otubo [mailto:ot...@redhat.com]
> Sent: 20 November 2017 10:41
> To: xen-de...@lists.xenproject.org
> Cc: net...@vger.kernel.org; Paul Durrant ; Wei
> Liu ; linux-ker...@vger.kernel.org;
> vkuzn...@redhat.com; cav...@redhat.com; che...@redhat.com;
> mga...@redhat.com; Eduardo Otubo 
> Subject: [PATCH] xen-netfront: remove warning when unloading module
> 
> When unloading module xen_netfront from guest, dmesg would output
> warning messages like below:
> 
>   [  105.236836] xen:grant_table: WARNING: g.e. 0x903 still in use!
>   [  105.236839] deferring g.e. 0x903 (pfn 0x35805)
> 
> This problem stems from netfront and netback being out of sync. By the time
> netfront revokes the g.e.'s, netback hasn't had enough time to free all of
> them, hence the warnings in dmesg.
> 
> The trick here is to make netfront wait until netback has freed all the
> g.e.'s and only then continue with the cleanup for module removal, which
> is done by manipulating both device states.
> 
> Signed-off-by: Eduardo Otubo 
> ---
>  drivers/net/xen-netfront.c | 11 +++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index 8b8689c6d887..b948e2a1ce40 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -2130,6 +2130,17 @@ static int xennet_remove(struct xenbus_device
> *dev)
> 
>   dev_dbg(&dev->dev, "%s\n", dev->nodename);
> 
> + xenbus_switch_state(dev, XenbusStateClosing);
> + while (xenbus_read_driver_state(dev->otherend) !=
> XenbusStateClosing){
> + cpu_relax();
> + schedule();
> + }
> + xenbus_switch_state(dev, XenbusStateClosed);
> + while (dev->xenbus_state != XenbusStateClosed){
> + cpu_relax();
> + schedule();
> + }
> +

Waiting for closing should be ok, but waiting for closed is risky. As soon as a
backend is in the closed state, a toolstack can completely remove the
backend xenstore area, resulting in a state of XenbusStateUnknown, which would
cause your second loop to spin forever.
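The second wait's termination condition can be made robust against that teardown by also accepting the unknown state. A minimal, self-contained illustration (the enum values mirror Xen's public io/xenbus.h numbering; the helper name is hypothetical):

```c
#include <stdbool.h>

/* State values as defined in Xen's public io/xenbus.h. */
enum xenbus_state_v {
    XenbusStateUnknown = 0,
    XenbusStateClosing = 5,
    XenbusStateClosed  = 6,
};

/* The second wait should terminate when the backend reports Closed *or*
 * when the toolstack has already removed its xenstore area (Unknown). */
static bool backend_shutdown_done(enum xenbus_state_v s)
{
    return s == XenbusStateClosed || s == XenbusStateUnknown;
}
```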

  Paul

>   xennet_disconnect_backend(info);
> 
>   unregister_netdev(info->netdev);
> --
> 2.13.6




Re: [Xen-devel] [PATCH] xen-netfront: remove warning when unloading module

2017-11-20 Thread Wei Liu
CC netfront maintainers.

On Mon, Nov 20, 2017 at 11:41:09AM +0100, Eduardo Otubo wrote:
> When unloading module xen_netfront from guest, dmesg would output
> warning messages like below:
> 
>   [  105.236836] xen:grant_table: WARNING: g.e. 0x903 still in use!
>   [  105.236839] deferring g.e. 0x903 (pfn 0x35805)
> 
> This problem stems from netfront and netback being out of sync. By the time
> netfront revokes the g.e.'s, netback hasn't had enough time to free all of
> them, hence the warnings in dmesg.
> 
> The trick here is to make netfront wait until netback has freed all the
> g.e.'s and only then continue with the cleanup for module removal, which is
> done by manipulating both device states.
> 
> Signed-off-by: Eduardo Otubo 
> ---
>  drivers/net/xen-netfront.c | 11 +++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
> index 8b8689c6d887..b948e2a1ce40 100644
> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -2130,6 +2130,17 @@ static int xennet_remove(struct xenbus_device *dev)
>  
>   dev_dbg(&dev->dev, "%s\n", dev->nodename);
>  
> + xenbus_switch_state(dev, XenbusStateClosing);
> + while (xenbus_read_driver_state(dev->otherend) != XenbusStateClosing){
> + cpu_relax();
> + schedule();
> + }
> + xenbus_switch_state(dev, XenbusStateClosed);
> + while (dev->xenbus_state != XenbusStateClosed){
> + cpu_relax();
> + schedule();
> + }
> +
>   xennet_disconnect_backend(info);
>  
>   unregister_netdev(info->netdev);
> -- 
> 2.13.6
> 



Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Juergen Gross
On 20/11/17 11:21, Andrew Cooper wrote:
> On 20/11/17 10:04, Juergen Gross wrote:
>> On 20/11/17 10:58, Andrew Cooper wrote:
>>> On 20/11/2017 09:55, Juergen Gross wrote:
 On 20/11/17 10:51, Roger Pau Monné wrote:
> Adding xen-devel, dropped it on my reply. 
>
> Replying from my phone, sorry for the formatting. 
>
>
> On 20 Nov 2017 9:35, "Juergen Gross" wrote:
>
> For PVH domains loading of the ACPI RSDP table is done via
> allocating
> a domain loader segment after having loaded the kernel. This
> leads to
> the RSDP table being loaded at an arbitrary guest address instead 
> of
> the architecturally correct address just below 1MB.
>
>
> AFAIK this is only true for legacy BIOS boot, when using UEFI the
> RSDP can be anywhere in memory, hence grub2 must already have an
> alternative way of finding the RSDP apart from scanning the low 1MB.
 The problem isn't grub2, but the loaded linux kernel. Without this
 patch Linux won't find the RSDP when booted in a PVH domain via grub2.

 I could modify grub2 even further to move the RSDP to the correct
 address, but I think doing it correctly on Xen side is the better
 option.
>>> Why?  The PVH info block contains a pointer directly to the RSDP, and
>>> Linux should be following this rather than scanning for it using the
>>> legacy method.
>> Oh no, please not this discussion again.
>>
>> We already had a very long discussion about how to do PVH support in grub2,
>> and the outcome was to try to use the standard boot entry of the kernel
>> instead of the PVH-specific one.
>>
>> The Linux kernel right now doesn't make use of the RSDP pointer in the
>> PVH info block, so I think we shouldn't change this when using grub2.
> 
> I clearly missed the previous discussion, and I don't advocate using yet
> another PVH-specific entry point, but how does Linux cope in other
> non-BIOS environments?  Does it genuinely rely exclusively on the legacy
> mechanism?

Looking at the code I think so, yes. Maybe there are cases where no RSDP
is needed, but in the grub2/PVH case we need it to distinguish PVH from
HVM.

Juergen



[Xen-devel] [PATCH] xen-netfront: remove warning when unloading module

2017-11-20 Thread Eduardo Otubo
When unloading module xen_netfront from guest, dmesg would output
warning messages like below:

  [  105.236836] xen:grant_table: WARNING: g.e. 0x903 still in use!
  [  105.236839] deferring g.e. 0x903 (pfn 0x35805)

This problem stems from netfront and netback being out of sync. By the time
netfront revokes the g.e.'s, netback hasn't had enough time to free all of
them, hence the warnings in dmesg.

The trick here is to make netfront wait until netback has freed all the
g.e.'s and only then continue with the cleanup for module removal, which is
done by manipulating both device states.

Signed-off-by: Eduardo Otubo 
---
 drivers/net/xen-netfront.c | 11 +++
 1 file changed, 11 insertions(+)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 8b8689c6d887..b948e2a1ce40 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -2130,6 +2130,17 @@ static int xennet_remove(struct xenbus_device *dev)
 
 	dev_dbg(&dev->dev, "%s\n", dev->nodename);
 
+	xenbus_switch_state(dev, XenbusStateClosing);
+	while (xenbus_read_driver_state(dev->otherend) != XenbusStateClosing){
+		cpu_relax();
+		schedule();
+	}
+	xenbus_switch_state(dev, XenbusStateClosed);
+	while (dev->xenbus_state != XenbusStateClosed){
+		cpu_relax();
+		schedule();
+	}
+
 	xennet_disconnect_backend(info);
 
 	unregister_netdev(info->netdev);
-- 
2.13.6




Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Andrew Cooper
On 20/11/17 10:04, Juergen Gross wrote:
> On 20/11/17 10:58, Andrew Cooper wrote:
>> On 20/11/2017 09:55, Juergen Gross wrote:
>>> On 20/11/17 10:51, Roger Pau Monné wrote:
 Adding xen-devel, dropped it on my reply. 

 Replying from my phone, sorry for the formatting. 


 On 20 Nov 2017 9:35, "Juergen Gross" wrote:

 For PVH domains loading of the ACPI RSDP table is done via
 allocating
 a domain loader segment after having loaded the kernel. This
 leads to
 the RSDP table being loaded at an arbitrary guest address instead 
 of
 the architecturally correct address just below 1MB.


 AFAIK this is only true for legacy BIOS boot, when using UEFI the
 RSDP can be anywhere in memory, hence grub2 must already have an
 alternative way of finding the RSDP apart from scanning the low 1MB.
>>> The problem isn't grub2, but the loaded linux kernel. Without this
>>> patch Linux won't find the RSDP when booted in a PVH domain via grub2.
>>>
>>> I could modify grub2 even further to move the RSDP to the correct
>>> address, but I think doing it correctly on Xen side is the better
>>> option.
>> Why?  The PVH info block contains a pointer directly to the RSDP, and
>> Linux should be following this rather than scanning for it using the
>> legacy method.
> Oh no, please not this discussion again.
>
> We already had a very long discussion about how to do PVH support in grub2,
> and the outcome was to try to use the standard boot entry of the kernel
> instead of the PVH-specific one.
>
> The Linux kernel right now doesn't make use of the RSDP pointer in the
> PVH info block, so I think we shouldn't change this when using grub2.

I clearly missed the previous discussion, and I don't advocate using yet
another PVH-specific entry point, but how does Linux cope in other
non-BIOS environments?  Does it genuinely rely exclusively on the legacy
mechanism?

~Andrew

___
Xen-devel mailing list
Xen-devel@lists.xen.org
https://lists.xen.org/xen-devel


[Xen-devel] [distros-debian-sid test] 72466: tolerable FAIL

2017-11-20 Thread Platform Team regression test user
flight 72466 distros-debian-sid real [real]
http://osstest.xs.citrite.net/~osstest/testlogs/logs/72466/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-i386-sid-netboot-pvgrub 10 debian-di-install   fail like 72441
 test-armhf-armhf-armhf-sid-netboot-pygrub 10 debian-di-install fail like 72441
 test-amd64-i386-amd64-sid-netboot-pygrub 10 debian-di-install  fail like 72441
 test-amd64-amd64-amd64-sid-netboot-pvgrub 10 debian-di-install fail like 72441
 test-amd64-amd64-i386-sid-netboot-pygrub 10 debian-di-install  fail like 72441

baseline version:
 flight   72441

jobs:
 build-amd64  pass
 build-armhf  pass
 build-i386   pass
 build-amd64-pvopspass
 build-armhf-pvopspass
 build-i386-pvops pass
 test-amd64-amd64-amd64-sid-netboot-pvgrubfail
 test-amd64-i386-i386-sid-netboot-pvgrub  fail
 test-amd64-i386-amd64-sid-netboot-pygrub fail
 test-armhf-armhf-armhf-sid-netboot-pygrubfail
 test-amd64-amd64-i386-sid-netboot-pygrub fail



sg-report-flight on osstest.xs.citrite.net
logs: /home/osstest/logs
images: /home/osstest/images

Logs, config files, etc. are available at
http://osstest.xs.citrite.net/~osstest/testlogs/logs

Test harness code can be found at
http://xenbits.xensource.com/gitweb?p=osstest.git;a=summary


Push not applicable.




Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Juergen Gross
On 20/11/17 10:58, Andrew Cooper wrote:
> On 20/11/2017 09:55, Juergen Gross wrote:
>> On 20/11/17 10:51, Roger Pau Monné wrote:
>>> Adding xen-devel, dropped it on my reply. 
>>>
>>> Replying from my phone, sorry for the formatting. 
>>>
>>>
>>> On 20 Nov 2017 9:35, "Juergen Gross" wrote:
>>>
>>> For PVH domains loading of the ACPI RSDP table is done via
>>> allocating
>>> a domain loader segment after having loaded the kernel. This
>>> leads to
>>> the RSDP table being loaded at an arbitrary guest address instead of
>>> the architecturally correct address just below 1MB.
>>>
>>>
>>> AFAIK this is only true for legacy BIOS boot, when using UEFI the
>>> RSDP can be anywhere in memory, hence grub2 must already have an
>>> alternative way of finding the RSDP apart from scanning the low 1MB.
>> The problem isn't grub2, but the loaded linux kernel. Without this
>> patch Linux won't find the RSDP when booted in a PVH domain via grub2.
>>
>> I could modify grub2 even further to move the RSDP to the correct
>> address, but I think doing it correctly on Xen side is the better
>> option.
> 
> Why?  The PVH info block contains a pointer directly to the RSDP, and
> Linux should be following this rather than scanning for it using the
> legacy method.

Oh no, please not this discussion again.

We already had a very long discussion about how to do PVH support in grub2,
and the outcome was to try to use the standard boot entry of the kernel
instead of the PVH-specific one.

The Linux kernel right now doesn't make use of the RSDP pointer in the
PVH info block, so I think we shouldn't change this when using grub2.


Juergen



Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Andrew Cooper
On 20/11/2017 09:55, Juergen Gross wrote:
> On 20/11/17 10:51, Roger Pau Monné wrote:
>> Adding xen-devel, dropped it on my reply. 
>>
>> Replying from my phone, sorry for the formatting. 
>>
>>
>> On 20 Nov 2017 9:35, "Juergen Gross" wrote:
>>
>> For PVH domains loading of the ACPI RSDP table is done via
>> allocating
>> a domain loader segment after having loaded the kernel. This
>> leads to
>> the RSDP table being loaded at an arbitrary guest address instead of
>> the architecturally correct address just below 1MB.
>>
>>
>> AFAIK this is only true for legacy BIOS boot, when using UEFI the
>> RSDP can be anywhere in memory, hence grub2 must already have an
>> alternative way of finding the RSDP apart from scanning the low 1MB.
> The problem isn't grub2, but the loaded linux kernel. Without this
> patch Linux won't find the RSDP when booted in a PVH domain via grub2.
>
> I could modify grub2 even further to move the RSDP to the correct
> address, but I think doing it correctly on Xen side is the better
> option.

Why?  The PVH info block contains a pointer directly to the RSDP, and
Linux should be following this rather than scanning for it using the
legacy method.

~Andrew



Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Juergen Gross
On 20/11/17 10:51, Roger Pau Monné wrote:
> Adding xen-devel, dropped it on my reply. 
> 
> Replying from my phone, sorry for the formatting. 
> 
> 
> On 20 Nov 2017 9:35, "Juergen Gross" wrote:
> 
> For PVH domains loading of the ACPI RSDP table is done via
> allocating
> a domain loader segment after having loaded the kernel. This
> leads to
> the RSDP table being loaded at an arbitrary guest address instead of
> the architecturally correct address just below 1MB.
> 
> 
> AFAIK this is only true for legacy BIOS boot, when using UEFI the
> RSDP can be anywhere in memory, hence grub2 must already have an
> alternative way of finding the RSDP apart from scanning the low 1MB.

The problem isn't grub2, but the loaded linux kernel. Without this
patch Linux won't find the RSDP when booted in a PVH domain via grub2.

I could modify grub2 even further to move the RSDP to the correct
address, but I think doing it correctly on Xen side is the better
option.


Juergen



Re: [Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Roger Pau Monné
Adding xen-devel, dropped it on my reply.

Replying from my phone, sorry for the formatting.


On 20 Nov 2017 9:35, "Juergen Gross" wrote:

For PVH domains loading of the ACPI RSDP table is done via allocating
a domain loader segment after having loaded the kernel. This leads to
the RSDP table being loaded at an arbitrary guest address instead of
the architecturally correct address just below 1MB.


AFAIK this is only true for legacy BIOS boot, when using UEFI the RSDP can
be anywhere in memory, hence grub2 must already have an alternative way of
finding the RSDP apart from scanning the low 1MB.

Thanks, Roger.


[Xen-devel] [qemu-mainline test] 116339: regressions - FAIL

2017-11-20 Thread osstest service owner
flight 116339 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/116339/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-credit2 16 guest-start/debian.repeat fail REGR. vs. 116190

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail in 116314 
pass in 116339
 test-armhf-armhf-xl   7 xen-boot   fail pass in 116314

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl 13 migrate-support-check fail in 116314 never pass
 test-armhf-armhf-xl 14 saverestore-support-check fail in 116314 never pass
 test-armhf-armhf-libvirt-xsm 14 saverestore-support-checkfail  like 116190
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stopfail like 116190
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stopfail like 116190
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop fail like 116190
 test-armhf-armhf-libvirt 14 saverestore-support-checkfail  like 116190
 test-armhf-armhf-libvirt-raw 13 saverestore-support-checkfail  like 116190
 test-amd64-amd64-xl-pvhv2-intel 12 guest-start fail never pass
 test-amd64-i386-libvirt  13 migrate-support-checkfail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-checkfail   never pass
 test-amd64-amd64-xl-pvhv2-amd 12 guest-start  fail  never pass
 test-amd64-amd64-libvirt 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check 
fail never pass
 test-amd64-i386-libvirt-qcow2 12 migrate-support-checkfail  never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-checkfail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-rtds 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-xl-xsm  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-checkfail   never pass
 test-armhf-armhf-libvirt 13 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-checkfail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-checkfail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-checkfail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-checkfail  never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop  fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  12 migrate-support-checkfail   never pass
 test-armhf-armhf-xl-vhd  13 saverestore-support-checkfail   never pass
 test-amd64-i386-xl-qemuu-win10-i386 10 windows-install fail never pass
 test-amd64-amd64-xl-qemuu-win10-i386 10 windows-installfail never pass

version targeted for testing:
 qemuu                2e02083438962d26ef9dcc7100f3b378104183db
baseline version:
 qemuu                1fa0f627d03cd0d0755924247cafeb42969016bf

Last test of basis   116190  2017-11-15 06:53:12 Z5 days
Failing since116227  2017-11-16 13:17:17 Z3 days4 attempts
Testing same since   116314  2017-11-18 15:17:45 Z1 days2 attempts


People who touched revisions under test:
  "Daniel P. Berrange" 
  Alex Bennée 
  Alexey Kardashevskiy 
  Anton Nefedov 
  BALATON Zoltan 
  Christian Borntraeger 
  Daniel Henrique Barboza 
  Daniel P. Berrange 
  Dariusz Stojaczyk 
  Dou Liyang 
  Dr. David Alan Gilbert 
  Emilio G. Cota 
  Eric Blake 
  Gerd Hoffmann 
  Jindrich Makovicka 
  Kevin Wolf 
  linzhecheng 
  Marc-André Lureau 
  Marcel Apfelbaum 

Re: [Xen-devel] [PATCH v8] x86/altp2m: support for setting restrictions for an array of pages

2017-11-20 Thread Petre Ovidiu PIRCALABU
On Ma, 2017-10-24 at 13:19 +0300, Petre Pircalabu wrote:
> From: Razvan Cojocaru 
>
> For the default EPT view we have xc_set_mem_access_multi(), which
> is able to set an array of pages to an array of access rights with
> a single hypercall. However, this functionality was lacking for the
> altp2m subsystem, which could only set page restrictions for one
> page at a time. This patch addresses the gap.
>
> HVMOP_altp2m_set_mem_access_multi has been added as a HVMOP (as
> opposed to a
> DOMCTL) for consistency with its HVMOP_altp2m_set_mem_access
> counterpart (and
> hence with the original altp2m design, where domains are allowed -
> with the
> proper altp2m access rights - to alter these settings), in the
> absence of an
> official position on the issue from the original altp2m designers.
>
> Signed-off-by: Razvan Cojocaru 
> Signed-off-by: Petre Pircalabu 
>
Hello,

Are there still any outstanding issues with this patch?

Many thanks,
Petre




[Xen-devel] [PATCH for-4.10] libxc: load acpi RSDP table at correct address

2017-11-20 Thread Juergen Gross
For PVH domains, loading of the ACPI RSDP table is done by allocating
a domain loader segment after having loaded the kernel. This leads to
the RSDP table being loaded at an arbitrary guest address instead of
the architecturally correct address just below 1MB.

When using the Linux kernel this is currently not a problem, as the
bzImage loader is being used, which loads ACPI tables via an
alternative method.

Using grub2, however, exposes this problem, leaving the selected
kernel unable to find the RSDP table.

To solve this issue, allow loading the RSDP table below the already
loaded kernel if this space hasn't been used before.

Signed-off-by: Juergen Gross 
---
Not sure if this is acceptable for 4.10, but I think PVH guest support
should include the possibility to use grub2 as a boot loader.

So please consider this patch for 4.10.
---
 tools/libxc/xc_dom_hvmloader.c | 39 +++
 1 file changed, 31 insertions(+), 8 deletions(-)

diff --git a/tools/libxc/xc_dom_hvmloader.c b/tools/libxc/xc_dom_hvmloader.c
index 59f94e51e5..2284c7f9df 100644
--- a/tools/libxc/xc_dom_hvmloader.c
+++ b/tools/libxc/xc_dom_hvmloader.c
@@ -135,20 +135,43 @@ static int module_init_one(struct xc_dom_image *dom,
 {
     struct xc_dom_seg seg;
     void *dest;
+    xen_pfn_t start, end;
+    unsigned int page_size = XC_DOM_PAGE_SIZE(dom);
 
     if ( module->length )
     {
-        if ( xc_dom_alloc_segment(dom, &seg, name, 0, module->length) )
-            goto err;
-        dest = xc_dom_seg_to_ptr(dom, &seg);
-        if ( dest == NULL )
+        /*
+         * Check for module located below kernel.
+         * Make sure not to be fooled by a kernel based on virtual address.
+         */
+        if ( module->guest_addr_out && !(dom->kernel_seg.vstart >> 32) &&
+             module->guest_addr_out + module->length <= dom->kernel_seg.vstart )
         {
-            DOMPRINTF("%s: xc_dom_seg_to_ptr(dom, &seg) => NULL",
-                      __FUNCTION__);
-            goto err;
+            start = module->guest_addr_out / page_size;
+            end = (module->guest_addr_out + module->length + page_size - 1) /
+                  page_size;
+            dest = xc_dom_pfn_to_ptr(dom, start, end - start);
+            if ( dest == NULL )
+            {
+                DOMPRINTF("%s: xc_dom_pfn_to_ptr() => NULL", __FUNCTION__);
+                goto err;
+            }
+            dest += module->guest_addr_out - start * page_size;
+        }
+        else
+        {
+            if ( xc_dom_alloc_segment(dom, &seg, name, 0, module->length) )
+                goto err;
+            dest = xc_dom_seg_to_ptr(dom, &seg);
+            if ( dest == NULL )
+            {
+                DOMPRINTF("%s: xc_dom_seg_to_ptr(dom, &seg) => NULL",
+                          __FUNCTION__);
+                goto err;
+            }
+            module->guest_addr_out = seg.vstart;
         }
         memcpy(dest, module->data, module->length);
-        module->guest_addr_out = seg.vstart;
 
         assert(dom->mmio_start > 0 && dom->mmio_start < UINT32_MAX);
         if ( module->guest_addr_out > dom->mmio_start ||
-- 
2.12.3
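The key condition in the patch - only map the module in place when it has a preset address that sits wholly below a physically-addressed kernel - can be isolated as a small predicate for illustration (names are hypothetical, not libxc's):

```c
#include <stdbool.h>
#include <stdint.h>

/* Mirrors the patch's test: the module must have a caller-chosen
 * address, the kernel start must be a 32-bit physical address (a
 * high value would be a virtual address and fool the comparison),
 * and the module must end at or below the kernel's start. */
static bool module_fits_below_kernel(uint64_t guest_addr_out,
                                     uint64_t length,
                                     uint64_t kernel_vstart)
{
    return guest_addr_out != 0 &&
           (kernel_vstart >> 32) == 0 &&
           guest_addr_out + length <= kernel_vstart;
}
```

With an RSDP of a few dozen bytes placed just below 1MB and a kernel loaded at 1MB, the in-place path is taken; otherwise the code falls back to the old segment allocation.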

