On 16.06.2021 16:49, Jan Beulich wrote:
> On 16.06.2021 16:21, Anthony PERARD wrote:
>> On Wed, Jun 16, 2021 at 09:12:52AM +0200, Jan Beulich wrote:
>>> On 16.06.2021 08:54, osstest service owner wrote:
>>>> flight 162845 xen-unstable real [real]
>>>> flight 162853 xen-unstable real-retest [real]
>>>> http://logs.test-lab.xenproject.org/osstest/logs/162845/
>>>> http://logs.test-lab.xenproject.org/osstest/logs/162853/
>>>>
>>>> Regressions :-(
>>>>
>>>> Tests which did not succeed and are blocking,
>>>> including tests which could not be run:
>>>>  test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
>>>>  test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 162533
>>>
>>> There looks to still be an issue with the ovmf version used. I'm
>>> puzzled to find this flight reporting
>>>
>>> built_revision_ovmf e1999b264f1f9d7230edf2448f757c73da567832
>>>
>>> which isn't what the tree was recently rewound to, but about two
>>> dozen commits older. I hope one of you has a clue as to what is
>>> going on here.
>>
>> So this commit is "master" from https://xenbits.xen.org/git-http/ovmf.git
>> rather than "xen-tested-master" from 
>> https://xenbits.xen.org/git-http/osstest/ovmf.git
>>
>> master is what xen.git would have cloned. And "xen-tested-master" is the
>> commit that I was expecting osstest to pick up, but maybe that has been
>> set up only for stable trees?
>>
>> Anyway, after aad7b5c11d51 ("tools/firmware/ovmf: Use OvmfXen platform
>> file is exist"), it isn't the same OVMF that is being used. We used to
>> use OvmfX64, but now we are going to use OvmfXen. (Xen support in
>> OvmfX64 has been removed, so it can't be used anymore.)
>>
>>
>> So there may be an issue with OvmfXen, which doesn't need to block
>> xen-unstable flights.
>>
>>
>> As for the failure, I can think of one thing that is different:
>> OvmfXen maps the XENMAPSPACE_shared_info page as high as possible in
>> the guest physical memory, in order to avoid creating a hole in the
>> RAM, but a call to XENMEM_remove_from_physmap is done as well. Could
>> that actually cause issues with save/restore?
> 
> I don't think it should. But I now notice I should have looked at the
> logs of these tests:
> 
> xc: info: Saving domain 2, type x86 HVM
> xc: error: Unable to obtain the guest p2m size (1 = Operation not permitted): Internal error
> xc: error: Save failed (1 = Operation not permitted): Internal error
> 
> which looks suspiciously similar to the issue Jürgen's d21121685fac
> ("tools/libs/guest: fix save and restore of pv domains after 32-bit
> de-support") took care of, just that here we're dealing with a HVM
> guest. I'll have to go inspect what exactly the library is doing there,
> and hence where in Xen the -EPERM may be coming from all of the
> sudden (and only for OVMF).

The *-amd64-i386-* variant has

xc: info: Saving domain 2, type x86 HVM
xc: error: Cannot save this big a guest (7 = Argument list too long): Internal error

which to me hints at ...

> Of course the behavior you describe above may play into this, since
> aiui this might lead to an excessively large p2m (depending what
> exactly you mean with "as high as possible").

... a connection, but I'm not sure at all. XENMEM_maximum_gpfn returns
its result as the hypercall return value, so huge values could be a
problem at least for 32-bit tool stacks.
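
Just to illustrate the failure mode I have in mind (made-up numbers, and
not the actual libxenguest code - purely a sketch):

    /* Illustration only - not tool stack code. */
    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /*
         * Suppose the shared info page ends up at the very top of, say,
         * a 44-bit guest-physical address space.
         */
        uint64_t max_gfn = (1ULL << (44 - 12)) - 1;    /* 0xffffffff */

        /*
         * XENMEM_maximum_gpfn hands this back as the hypercall return
         * value; a 32-bit tool stack keeps it in a 32-bit signed long.
         */
        int32_t seen_by_32bit = (int32_t)max_gfn;      /* -1, looks like an error */

        printf("max gfn %#" PRIx64 " -> %" PRId32 "\n", max_gfn, seen_by_32bit);

        /*
         * And even where the value does fit, the p2m gets sized from it,
         * which presumably is what trips the "Cannot save this big a
         * guest" (E2BIG == 7, "Argument list too long") check in the
         * i386 log above.
         */
        return 0;
    }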

What page number are you mapping the shared info page at in OVMF?
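
For reference, this is what I understand the sequence to be from your
description - not checked against the OvmfXen sources, so take it merely
as a sketch; hypercall_memory_op() stands in for whatever wrapper OVMF
actually uses, and HIGH_GFN for the value I'm asking about:

    /* Sketch only - not taken from OvmfXen. */
    #include <xen/xen.h>       /* Xen public headers: DOMID_SELF */
    #include <xen/memory.h>    /* XENMEM_*, XENMAPSPACE_shared_info */

    #define HIGH_GFN 0xfffffULL            /* placeholder value */

    /* Stand-in for OVMF's real hypercall wrapper. */
    extern long hypercall_memory_op(unsigned int cmd, void *arg);

    static void map_and_unmap_shared_info(void)
    {
        struct xen_add_to_physmap xatp = {
            .domid = DOMID_SELF,
            .space = XENMAPSPACE_shared_info,
            .idx   = 0,
            .gpfn  = HIGH_GFN,
        };
        struct xen_remove_from_physmap xrfp = {
            .domid = DOMID_SELF,
            .gpfn  = HIGH_GFN,
        };

        hypercall_memory_op(XENMEM_add_to_physmap, &xatp);
        /* ... consume the shared info page ... */
        hypercall_memory_op(XENMEM_remove_from_physmap, &xrfp);

        /*
         * Even after the removal, the maximum GPFN Xen reports for the
         * domain may well stay at HIGH_GFN - hence the question above.
         */
    }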

Jan

