Re: [Qemu-devel] [qemu-s390x] [PATCH for-4.0?] exec: Only count mapped memory backends for qemu_getrampagesize()

2019-03-29 Thread David Hildenbrand
>> Once we support 2GB pages, we might have to think about what you
>> describe here, depending on what the KVM interface promises us. If the
>> interface promises "If 2GB are enabled, 1MB are enabled implicitly", we
>> are fine; otherwise we would have to check per mapped backend.
> 
> I guess.  I'm generally in favour of checking explicitly for the
> condition you need, rather than something that should be equivalent
> based on a bunch of assumptions, even if those assumptions are pretty
> solid.  At least if it's practical to do so, which explicitly
> iterating through the backends seems like it would be here.
> 
> But, when it comes down to it, I don't really care that much which
> solution you go with.

I guess it will all be easier to handle once we have all RAM
converted to (internal) memory devices. Then there is no need to check
for side conditions (e.g. mem-path) or for memory backends used for
purposes other than RAM.

> 
>>> It also occurs to me: why does this logic need to be in qemu at all?
>>> KVM must know what pagesizes it supports, and I assume it will throw
>>> an error if you try to put something with the wrong size into a
>>> memslot.  So can't qemu just report that error, rather than checking
>>> the pagesizes itself?
>>
>> There are multiple aspects to that:
>>
>> 1. "KVM must know what pagesizes it supports"
>>
>> Yes it does, and this is reflected via KVM capabilities (e.g.
>> KVM_CAP_S390_HPAGE_1M). To make use of
>> these capabilities, they have to be enabled by user space. Once we have
>> support for 2G pages, we'll have KVM_CAP_S390_HPAGE_2G.
>>
>> In case the capability is enabled, certain things have to be changed in KVM
>> - CMMA can no longer be used (user space has to properly take care of that)
>> - Certain HW assists (PFMF interpretation, Storage Key Facility) have to
>> be disabled early.
>>
>>
>> 2. "it will throw an error if you try to put something with the wrong
>> size into a memslot"
>>
>> An error will be reported when trying to map a huge page into the GMAP
>> (e.g. on a host page fault in virtualization mode). So not when the
>> memslot is configured, but during kvm_run.
> 
> Ah, ok, if we don't get the error at set memslot time then yes that's
> definitely something we'd need to check for in qemu in advance.
> 
>> Checking the memslot might be
>>
>> a) complicated (check all VMAs)
> 
> Yeah, maybe.
> 
>> b) waste of time (many VMAs)
> 
> I doubt that's really the case, but it doesn't matter because..
> 
>> c) incorrect - the content of a memslot can change at any time (KVM
>> synchronous MMU). Think of someone wanting to remap some pages that are
>> part of a memslot using huge pages.
> 
> ..good point.  Yeah, ok, that's not going to work.
> 
>> 3. Can you elaborate on "So can't qemu just report that error, rather
>> than checking the pagesizes itself?" We effectively check against the
>> capabilities of KVM and the page size. Based on that, we report the
>> error in QEMU. Reporting an error after the guest has already started
>> and crashed during kvm_run due to a huge page is way too late.
> 
> Agreed.
> 
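
To make point 3 concrete — checking the KVM capability against the backing
page size up front, instead of letting kvm_run fault later — here is a
minimal, self-contained sketch. The struct and function names are invented
for illustration (only KVM_CAP_S390_HPAGE_1M is a real capability name), and
the 4k/1M/2G sizes simply mirror the discussion above:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical model: 4k base pages always work; 1M backing is only usable
 * when a KVM_CAP_S390_HPAGE_1M-style capability is present and enabled. */
typedef struct {
    bool cap_hpage_1m;   /* stands in for KVM_CAP_S390_HPAGE_1M */
} kvm_caps;

/* Return true if a guest backed by pages of 'backing_size' bytes can be
 * started at all -- the check QEMU wants to perform before the guest runs. */
static bool backing_supported(const kvm_caps *caps, size_t backing_size)
{
    if (backing_size <= 4096) {
        return true;                 /* base pages always work */
    }
    if (backing_size <= 1024 * 1024) {
        return caps->cap_hpage_1m;   /* 1M needs the capability enabled */
    }
    return false;                    /* e.g. 2G: no capability modeled yet */
}
```

Reporting the error from this kind of check at startup is what keeps the
failure out of kvm_run, where it would otherwise surface only after the
guest has already booted.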


-- 

Thanks,

David / dhildenb



Re: [Qemu-devel] [qemu-s390x] [PATCH for-4.0?] exec: Only count mapped memory backends for qemu_getrampagesize()

2019-03-28 Thread David Gibson
On Thu, Mar 28, 2019 at 10:26:07AM +0100, David Hildenbrand wrote:
> On 28.03.19 02:18, David Gibson wrote:
> > On Wed, Mar 27, 2019 at 03:22:58PM +0100, David Hildenbrand wrote:
> >> On 27.03.19 10:03, David Gibson wrote:
> >>> On Wed, Mar 27, 2019 at 09:10:01AM +0100, David Hildenbrand wrote:
>  On 27.03.19 01:12, David Gibson wrote:
> > On Tue, Mar 26, 2019 at 06:02:51PM +0100, David Hildenbrand wrote:
> >> On 26.03.19 15:08, Igor Mammedov wrote:
> >>> On Tue, 26 Mar 2019 14:50:58 +1100
> >>> David Gibson  wrote:
> >>>
>  qemu_getrampagesize() works out the minimum host page size backing any of
>  guest RAM.  This is required in a few places, such as for POWER8 PAPR KVM
>  guests, because limitations of the hardware virtualization mean the guest
>  can't use pagesizes larger than the host pages backing its memory.
> 
>  However, it currently checks against *every* memory backend, whether or not
>  it is actually mapped into guest memory at the moment.  This is incorrect.
> 
>  This can cause a problem attempting to add memory to a POWER8 pseries KVM
>  guest which is configured to allow hugepages in the guest (e.g.
>  -machine cap-hpt-max-page-size=16m).  If you attempt to add non-hugepage,
>  you can (correctly) create a memory backend, however it (correctly) will
>  throw an error when you attempt to map that memory into the guest by
>  'device_add'ing a pc-dimm.
> 
>  What's not correct is that if you then reset the guest a startup check
>  against qemu_getrampagesize() will cause a fatal error because of the new
>  memory object, even though it's not mapped into the guest.
> >>> I'd say that the backend should be removed by the mgmt app since
> >>> device_add failed, instead of leaving it to hang around (but a fatal
> >>> error is not nice behavior on QEMU's part either).
> >>
> >> Indeed, it should be removed. Depending on the options (huge pages with
> >> prealloc?) memory might be consumed for unused memory. Undesired.
> >
> > Right, but if the guest initiates a reboot before the management gets
> > to that, we'll have a crash.
> >
> 
>  Yes, I agree.
> 
>  At least on s390x (extending on what Igor said):
> 
>  mc->init() -> s390_memory_init() ->
>  memory_region_allocate_system_memory() -> host_memory_backend_set_mapped()
> 
> 
>  ac->init_machine() -> kvm_arch_init() ->
>  kvm_s390_configure_mempath_backing() -> qemu_getrampagesize()
> 
> 
>  And in vl.c
> 
>  configure_accelerator(current_machine, argv[0]);
>  ...
>  machine_run_board_init()
> 
>  So memory is indeed not mapped before calling qemu_getrampagesize().
> 
> 
>  We *could* move the call to kvm_s390_configure_mempath_backing() to
>  s390_memory_init().
> 
>  cap_hpage_1m is not needed before we create VCPUs, so this would work fine.
> 
>  We could then eventually make qemu_getrampagesize() assert if no
>  backends are mapped at all, to catch other users that rely on this being
>  correct.
> >>>
> >>> So.. I had a look at the usage in kvm_s390_configure_mempath_backing()
> >>> and I'm pretty sure it's broken.  It will work in the case where
> >>> there's only one backend.  And if that's the default -mem-path rather
> >>> than an explicit memory backend then my patch won't break it any
> >>> further.
> >>
> >> On the second look, I think I get your point.
> >>
> >> 1. Why on earth does "find_max_supported_pagesize" find the "minimum
> >> page size"? What kind of nasty stuff is this?
> > 
> > Ah, yeah, the naming is bad because of history.
> > 
> > The original usecase of this was because on POWER (before POWER9) the
> > way MMU virtualization works, pages inserted into the guest MMU view
> > have to be host-contiguous: there's no 2nd level translation that lets
> > them be broken into smaller host pages.
> > 
> > The upshot is that a KVM guest can only use large pages if it's backed
> > by large pages on the host.  We have to advertise the availability of
> > large pages to the guest at boot time though, and there's no way to
> > restrict it to certain parts of guest RAM.
> > 
> > So, this code path was finding the _maximum_ page size the guest could
> > use... which depends on the _minimum_ page size used on the host.
> > When this was moved to (partly) generic code we didn't think to
> > improve all the names.
> > 
> >> 2. qemu_mempath_getpagesize() is not affected by your patch
> > 
> > Correct.
> > 
> >> and that
> >> seems to be the only thing used on s390x for now.
> > 
> > Uh.. what?
> > 
> >> I sent a patch to move the call on s390x. But we really have to detect
> 

Re: [Qemu-devel] [qemu-s390x] [PATCH for-4.0?] exec: Only count mapped memory backends for qemu_getrampagesize()

2019-03-28 Thread David Hildenbrand
On 28.03.19 02:18, David Gibson wrote:
> On Wed, Mar 27, 2019 at 03:22:58PM +0100, David Hildenbrand wrote:
>> On 27.03.19 10:03, David Gibson wrote:
>>> On Wed, Mar 27, 2019 at 09:10:01AM +0100, David Hildenbrand wrote:
 On 27.03.19 01:12, David Gibson wrote:
> On Tue, Mar 26, 2019 at 06:02:51PM +0100, David Hildenbrand wrote:
>> On 26.03.19 15:08, Igor Mammedov wrote:
>>> On Tue, 26 Mar 2019 14:50:58 +1100
>>> David Gibson  wrote:
>>>
 qemu_getrampagesize() works out the minimum host page size backing any of
 guest RAM.  This is required in a few places, such as for POWER8 PAPR KVM
 guests, because limitations of the hardware virtualization mean the guest
 can't use pagesizes larger than the host pages backing its memory.

 However, it currently checks against *every* memory backend, whether or not
 it is actually mapped into guest memory at the moment.  This is incorrect.

 This can cause a problem attempting to add memory to a POWER8 pseries KVM
 guest which is configured to allow hugepages in the guest (e.g.
 -machine cap-hpt-max-page-size=16m).  If you attempt to add non-hugepage,
 you can (correctly) create a memory backend, however it (correctly) will
 throw an error when you attempt to map that memory into the guest by
 'device_add'ing a pc-dimm.

 What's not correct is that if you then reset the guest a startup check
 against qemu_getrampagesize() will cause a fatal error because of the new
 memory object, even though it's not mapped into the guest.
>>> I'd say that the backend should be removed by the mgmt app since device_add
>>> failed, instead of leaving it to hang around (but a fatal error is not nice
>>> behavior on QEMU's part either).
>>
>> Indeed, it should be removed. Depending on the options (huge pages with
>> prealloc?) memory might be consumed for unused memory. Undesired.
>
> Right, but if the guest initiates a reboot before the management gets
> to that, we'll have a crash.
>

 Yes, I agree.

 At least on s390x (extending on what Igor said):

 mc->init() -> s390_memory_init() ->
 memory_region_allocate_system_memory() -> host_memory_backend_set_mapped()


 ac->init_machine() -> kvm_arch_init() ->
 kvm_s390_configure_mempath_backing() -> qemu_getrampagesize()


 And in vl.c

 configure_accelerator(current_machine, argv[0]);
 ...
 machine_run_board_init()

 So memory is indeed not mapped before calling qemu_getrampagesize().


 We *could* move the call to kvm_s390_configure_mempath_backing() to
 s390_memory_init().

 cap_hpage_1m is not needed before we create VCPUs, so this would work fine.

 We could then eventually make qemu_getrampagesize() assert if no
 backends are mapped at all, to catch other users that rely on this being
 correct.
>>>
>>> So.. I had a look at the usage in kvm_s390_configure_mempath_backing()
>>> and I'm pretty sure it's broken.  It will work in the case where
>>> there's only one backend.  And if that's the default -mem-path rather
>>> than an explicit memory backend then my patch won't break it any
>>> further.
>>
>> On the second look, I think I get your point.
>>
>> 1. Why on earth does "find_max_supported_pagesize" find the "minimum
>> page size"? What kind of nasty stuff is this?
> 
> Ah, yeah, the naming is bad because of history.
> 
> The original usecase of this was because on POWER (before POWER9) the
> way MMU virtualization works, pages inserted into the guest MMU view
> have to be host-contiguous: there's no 2nd level translation that lets
> them be broken into smaller host pages.
> 
> The upshot is that a KVM guest can only use large pages if it's backed
> by large pages on the host.  We have to advertise the availability of
> large pages to the guest at boot time though, and there's no way to
> restrict it to certain parts of guest RAM.
> 
> So, this code path was finding the _maximum_ page size the guest could
> use... which depends on the _minimum_ page size used on the host.
> When this was moved to (partly) generic code we didn't think to
> improve all the names.
> 
>> 2. qemu_mempath_getpagesize() is not affected by your patch
> 
> Correct.
> 
>> and that
>> seems to be the only thing used on s390x for now.
> 
> Uh.. what?
> 
>> I sent a patch to move the call on s390x. But we really have to detect
>> the maximum page size (what find_max_supported_pagesize promises), not
>> the minimum page size.
> 
> Well.. sort of.  In the ppc case it really is the minimum page size we
> care about, in the sense that if some part of RAM has a larger page
> size, that's fine - even if it's a weird size that we didn't expect.
> 
> IIUC for s390 the 

Re: [Qemu-devel] [qemu-s390x] [PATCH for-4.0?] exec: Only count mapped memory backends for qemu_getrampagesize()

2019-03-28 Thread David Hildenbrand
On 28.03.19 01:27, David Gibson wrote:
> On Wed, Mar 27, 2019 at 02:19:41PM +0100, David Hildenbrand wrote:
>> On 27.03.19 10:03, David Gibson wrote:
>>> On Wed, Mar 27, 2019 at 09:10:01AM +0100, David Hildenbrand wrote:
 On 27.03.19 01:12, David Gibson wrote:
> On Tue, Mar 26, 2019 at 06:02:51PM +0100, David Hildenbrand wrote:
>> On 26.03.19 15:08, Igor Mammedov wrote:
>>> On Tue, 26 Mar 2019 14:50:58 +1100
>>> David Gibson  wrote:
>>>
 qemu_getrampagesize() works out the minimum host page size backing any of
 guest RAM.  This is required in a few places, such as for POWER8 PAPR KVM
 guests, because limitations of the hardware virtualization mean the guest
 can't use pagesizes larger than the host pages backing its memory.

 However, it currently checks against *every* memory backend, whether or not
 it is actually mapped into guest memory at the moment.  This is incorrect.

 This can cause a problem attempting to add memory to a POWER8 pseries KVM
 guest which is configured to allow hugepages in the guest (e.g.
 -machine cap-hpt-max-page-size=16m).  If you attempt to add non-hugepage,
 you can (correctly) create a memory backend, however it (correctly) will
 throw an error when you attempt to map that memory into the guest by
 'device_add'ing a pc-dimm.

 What's not correct is that if you then reset the guest a startup check
 against qemu_getrampagesize() will cause a fatal error because of the new
 memory object, even though it's not mapped into the guest.
>>> I'd say that the backend should be removed by the mgmt app since device_add
>>> failed, instead of leaving it to hang around (but a fatal error is not nice
>>> behavior on QEMU's part either).
>>
>> Indeed, it should be removed. Depending on the options (huge pages with
>> prealloc?) memory might be consumed for unused memory. Undesired.
>
> Right, but if the guest initiates a reboot before the management gets
> to that, we'll have a crash.
>

 Yes, I agree.

 At least on s390x (extending on what Igor said):

 mc->init() -> s390_memory_init() ->
 memory_region_allocate_system_memory() -> host_memory_backend_set_mapped()


 ac->init_machine() -> kvm_arch_init() ->
 kvm_s390_configure_mempath_backing() -> qemu_getrampagesize()


 And in vl.c

 configure_accelerator(current_machine, argv[0]);
 ...
 machine_run_board_init()

 So memory is indeed not mapped before calling qemu_getrampagesize().


 We *could* move the call to kvm_s390_configure_mempath_backing() to
 s390_memory_init().

 cap_hpage_1m is not needed before we create VCPUs, so this would work fine.

 We could then eventually make qemu_getrampagesize() assert if no
 backends are mapped at all, to catch other users that rely on this being
 correct.
>>>
>>> So.. I had a look at the usage in kvm_s390_configure_mempath_backing()
>>> and I'm pretty sure it's broken.  It will work in the case where
>>> there's only one backend.  And if that's the default -mem-path rather
>>> than an explicit memory backend then my patch won't break it any
>>> further.
>>
>> It works for the current scenarios, where you only have one (maximum
>> two) backings of the same kind. Your patch would break that.
> 
> Actually it wouldn't.  My patch only affects checking of explicit
> backend objects - checking of the base -mem-path implicit backend
> remains the same.

Yes, you're right.

-- 

Thanks,

David / dhildenb



Re: [Qemu-devel] [qemu-s390x] [PATCH for-4.0?] exec: Only count mapped memory backends for qemu_getrampagesize()

2019-03-27 Thread David Gibson
On Wed, Mar 27, 2019 at 02:19:41PM +0100, David Hildenbrand wrote:
> On 27.03.19 10:03, David Gibson wrote:
> > On Wed, Mar 27, 2019 at 09:10:01AM +0100, David Hildenbrand wrote:
> >> On 27.03.19 01:12, David Gibson wrote:
> >>> On Tue, Mar 26, 2019 at 06:02:51PM +0100, David Hildenbrand wrote:
>  On 26.03.19 15:08, Igor Mammedov wrote:
> > On Tue, 26 Mar 2019 14:50:58 +1100
> > David Gibson  wrote:
> >
> >> qemu_getrampagesize() works out the minimum host page size backing any of
> >> guest RAM.  This is required in a few places, such as for POWER8 PAPR KVM
> >> guests, because limitations of the hardware virtualization mean the guest
> >> can't use pagesizes larger than the host pages backing its memory.
> >>
> >> However, it currently checks against *every* memory backend, whether or not
> >> it is actually mapped into guest memory at the moment.  This is incorrect.
> >>
> >> This can cause a problem attempting to add memory to a POWER8 pseries KVM
> >> guest which is configured to allow hugepages in the guest (e.g.
> >> -machine cap-hpt-max-page-size=16m).  If you attempt to add non-hugepage,
> >> you can (correctly) create a memory backend, however it (correctly) will
> >> throw an error when you attempt to map that memory into the guest by
> >> 'device_add'ing a pc-dimm.
> >>
> >> What's not correct is that if you then reset the guest a startup check
> >> against qemu_getrampagesize() will cause a fatal error because of the new
> >> memory object, even though it's not mapped into the guest.
> > I'd say that the backend should be removed by the mgmt app since device_add
> > failed, instead of leaving it to hang around (but a fatal error is not nice
> > behavior on QEMU's part either).
> 
>  Indeed, it should be removed. Depending on the options (huge pages with
>  prealloc?) memory might be consumed for unused memory. Undesired.
> >>>
> >>> Right, but if the guest initiates a reboot before the management gets
> >>> to that, we'll have a crash.
> >>>
> >>
> >> Yes, I agree.
> >>
> >> At least on s390x (extending on what Igor said):
> >>
> >> mc->init() -> s390_memory_init() ->
> >> memory_region_allocate_system_memory() -> host_memory_backend_set_mapped()
> >>
> >>
> >> ac->init_machine() -> kvm_arch_init() ->
> >> kvm_s390_configure_mempath_backing() -> qemu_getrampagesize()
> >>
> >>
> >> And in vl.c
> >>
> >> configure_accelerator(current_machine, argv[0]);
> >> ...
> >> machine_run_board_init()
> >>
> >> So memory is indeed not mapped before calling qemu_getrampagesize().
> >>
> >>
> >> We *could* move the call to kvm_s390_configure_mempath_backing() to
> >> s390_memory_init().
> >>
> >> cap_hpage_1m is not needed before we create VCPUs, so this would work fine.
> >>
> >> We could then eventually make qemu_getrampagesize() assert if no
> >> backends are mapped at all, to catch other users that rely on this being
> >> correct.
> > 
> > So.. I had a look at the usage in kvm_s390_configure_mempath_backing()
> > and I'm pretty sure it's broken.  It will work in the case where
> > there's only one backend.  And if that's the default -mem-path rather
> > than an explicit memory backend then my patch won't break it any
> > further.
> 
> It works for the current scenarios, where you only have one (maximum
> two) backings of the same kind. Your patch would break that.

Actually it wouldn't.  My patch only affects checking of explicit
backend objects - checking of the base -mem-path implicit backend
remains the same.

> > qemu_getrampagesize() returns the smallest host page size for any
> > memory backend.  That's what matters for ppc KVM (in several cases)
> > because we need certain things to be host-contiguous, not just
> > guest-contiguous.  Bigger host page sizes are fine for that purpose,
> > clearly.
> > 
> > AFAICT on s390 you're looking to determine if any backend is using
> > hugepages, because KVM may not support that.  The minimum host page
> > size isn't adequate to determine that, so qemu_getrampagesize() won't
> > tell you what you need.
> 
> Well, as long as we don't support DIMMs or anything like that, it works
> perfectly fine. But yes it is far from beautiful.
> 
> First of all, I'll prepare a patch to do the call from a different
> context. Then we can fine-tune to use something other than
> qemu_getrampagesize().
> 

-- 
David Gibson| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson




Re: [Qemu-devel] [qemu-s390x] [PATCH for-4.0?] exec: Only count mapped memory backends for qemu_getrampagesize()

2019-03-27 Thread David Gibson
On Wed, Mar 27, 2019 at 03:22:58PM +0100, David Hildenbrand wrote:
> On 27.03.19 10:03, David Gibson wrote:
> > On Wed, Mar 27, 2019 at 09:10:01AM +0100, David Hildenbrand wrote:
> >> On 27.03.19 01:12, David Gibson wrote:
> >>> On Tue, Mar 26, 2019 at 06:02:51PM +0100, David Hildenbrand wrote:
>  On 26.03.19 15:08, Igor Mammedov wrote:
> > On Tue, 26 Mar 2019 14:50:58 +1100
> > David Gibson  wrote:
> >
> >> qemu_getrampagesize() works out the minimum host page size backing any of
> >> guest RAM.  This is required in a few places, such as for POWER8 PAPR KVM
> >> guests, because limitations of the hardware virtualization mean the guest
> >> can't use pagesizes larger than the host pages backing its memory.
> >>
> >> However, it currently checks against *every* memory backend, whether or not
> >> it is actually mapped into guest memory at the moment.  This is incorrect.
> >>
> >> This can cause a problem attempting to add memory to a POWER8 pseries KVM
> >> guest which is configured to allow hugepages in the guest (e.g.
> >> -machine cap-hpt-max-page-size=16m).  If you attempt to add non-hugepage,
> >> you can (correctly) create a memory backend, however it (correctly) will
> >> throw an error when you attempt to map that memory into the guest by
> >> 'device_add'ing a pc-dimm.
> >>
> >> What's not correct is that if you then reset the guest a startup check
> >> against qemu_getrampagesize() will cause a fatal error because of the new
> >> memory object, even though it's not mapped into the guest.
> > I'd say that the backend should be removed by the mgmt app since device_add
> > failed, instead of leaving it to hang around (but a fatal error is not nice
> > behavior on QEMU's part either).
> 
>  Indeed, it should be removed. Depending on the options (huge pages with
>  prealloc?) memory might be consumed for unused memory. Undesired.
> >>>
> >>> Right, but if the guest initiates a reboot before the management gets
> >>> to that, we'll have a crash.
> >>>
> >>
> >> Yes, I agree.
> >>
> >> At least on s390x (extending on what Igor said):
> >>
> >> mc->init() -> s390_memory_init() ->
> >> memory_region_allocate_system_memory() -> host_memory_backend_set_mapped()
> >>
> >>
> >> ac->init_machine() -> kvm_arch_init() ->
> >> kvm_s390_configure_mempath_backing() -> qemu_getrampagesize()
> >>
> >>
> >> And in vl.c
> >>
> >> configure_accelerator(current_machine, argv[0]);
> >> ...
> >> machine_run_board_init()
> >>
> >> So memory is indeed not mapped before calling qemu_getrampagesize().
> >>
> >>
> >> We *could* move the call to kvm_s390_configure_mempath_backing() to
> >> s390_memory_init().
> >>
> >> cap_hpage_1m is not needed before we create VCPUs, so this would work fine.
> >>
> >> We could than eventually make qemu_getrampagesize() asssert if no
> >> backends are mapped at all, to catch other user that rely on this being
> >> correct.
> > 
> > So.. I had a look at the usage in kvm_s390_configure_mempath_backing()
> > and I'm pretty sure it's broken.  It will work in the case where
> > there's only one backend.  And if that's the default -mem-path rather
> > than an explicit memory backend then my patch won't break it any
> > further.
> 
> On the second look, I think I get your point.
> 
> 1. Why on earth does "find_max_supported_pagesize" find the "minimum
> page size"? What kind of nasty stuff is this?

Ah, yeah, the naming is bad because of history.

The original usecase of this was because on POWER (before POWER9) the
way MMU virtualization works, pages inserted into the guest MMU view
have to be host-contiguous: there's no 2nd level translation that lets
them be broken into smaller host pages.

The upshot is that a KVM guest can only use large pages if it's backed
by large pages on the host.  We have to advertise the availability of
large pages to the guest at boot time though, and there's no way to
restrict it to certain parts of guest RAM.

So, this code path was finding the _maximum_ page size the guest could
use... which depends on the _minimum_ page size used on the host.
When this was moved to (partly) generic code we didn't think to
improve all the names.
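
To make the min/max distinction concrete, here is a small sketch — the types
and helper names are invented for this example, not QEMU's actual data
structures — of the two different questions the callers ask:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative model: each memory backend reports the host page size
 * backing it, plus whether it is actually mapped into guest memory. */
typedef struct {
    size_t page_size;
    bool mapped;
} backend;

/* What qemu_getrampagesize() effectively computes: the minimum host page
 * size over the mapped backends -- which is what bounds the maximum page
 * size a pre-POWER9 KVM guest can use. */
static size_t min_backing_pagesize(const backend *b, int n, size_t base)
{
    size_t min = base;   /* default/base page size, e.g. 4k or 64k */
    for (int i = 0; i < n; i++) {
        if (b[i].mapped && b[i].page_size < min) {
            min = b[i].page_size;
        }
    }
    return min;
}

/* What the s390x caller actually needs: the maximum backing page size,
 * i.e. whether any mapped backend uses huge pages at all. */
static size_t max_backing_pagesize(const backend *b, int n, size_t base)
{
    size_t max = base;
    for (int i = 0; i < n; i++) {
        if (b[i].mapped && b[i].page_size > max) {
            max = b[i].page_size;
        }
    }
    return max;
}
```

Note how filtering on `mapped` also models the fix discussed in this thread:
a backend that was created but never device_add'ed no longer influences the
result.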

> 2. qemu_mempath_getpagesize() is not affected by your patch

Correct.

> and that
> seems to be the only thing used on s390x for now.

Uh.. what?

> I sent a patch to move the call on s390x. But we really have to detect
> the maximum page size (what find_max_supported_pagesize promises), not
> the minimum page size.

Well.. sort of.  In the ppc case it really is the minimum page size we
care about, in the sense that if some part of RAM has a larger page
size, that's fine - even if it's a weird size that we didn't expect.

IIUC for s390 the problem is that KVM doesn't necessarily support
putting large pages into the guest at all, and what large page 

Re: [Qemu-devel] [qemu-s390x] [PATCH for-4.0?] exec: Only count mapped memory backends for qemu_getrampagesize()

2019-03-27 Thread David Hildenbrand
On 27.03.19 18:08, Igor Mammedov wrote:
> On Wed, 27 Mar 2019 15:01:37 +0100
> David Hildenbrand  wrote:
> 
>> On 27.03.19 10:09, Igor Mammedov wrote:
>>> On Wed, 27 Mar 2019 09:10:01 +0100
>>> David Hildenbrand  wrote:
>>>
 On 27.03.19 01:12, David Gibson wrote:
> On Tue, Mar 26, 2019 at 06:02:51PM +0100, David Hildenbrand wrote:  
>> On 26.03.19 15:08, Igor Mammedov wrote:  
>>> On Tue, 26 Mar 2019 14:50:58 +1100
>>> David Gibson  wrote:
>>>  
 qemu_getrampagesize() works out the minimum host page size backing any of
 guest RAM.  This is required in a few places, such as for POWER8 PAPR KVM
 guests, because limitations of the hardware virtualization mean the guest
 can't use pagesizes larger than the host pages backing its memory.

 However, it currently checks against *every* memory backend, whether or not
 it is actually mapped into guest memory at the moment.  This is incorrect.

 This can cause a problem attempting to add memory to a POWER8 pseries KVM
 guest which is configured to allow hugepages in the guest (e.g.
 -machine cap-hpt-max-page-size=16m).  If you attempt to add non-hugepage,
 you can (correctly) create a memory backend, however it (correctly) will
 throw an error when you attempt to map that memory into the guest by
 'device_add'ing a pc-dimm.

 What's not correct is that if you then reset the guest a startup check
 against qemu_getrampagesize() will cause a fatal error because of the new
 memory object, even though it's not mapped into the guest.
>>> I'd say that the backend should be removed by the mgmt app since device_add
>>> failed, instead of leaving it to hang around (but a fatal error is not nice
>>> behavior on QEMU's part either).
>>
>> Indeed, it should be removed. Depending on the options (huge pages with
>> prealloc?) memory might be consumed for unused memory. Undesired.  
>
> Right, but if the guest initiates a reboot before the management gets
> to that, we'll have a crash.
>   

 Yes, I agree.

 At least on s390x (extending on what Igor said):

 mc->init() -> s390_memory_init() ->
 memory_region_allocate_system_memory() -> host_memory_backend_set_mapped()


 ac->init_machine() -> kvm_arch_init() ->
 kvm_s390_configure_mempath_backing() -> qemu_getrampagesize()


 And in vl.c

 configure_accelerator(current_machine, argv[0]);
>>> Looking more at it, it seems s390 is 'broken' anyway.
>>> We call qemu_getrampagesize() here with huge page backends on CLI
>>> but memory-backends are initialized later
>>>  qemu_opts_foreach(..., object_create_delayed, ...)
>>> so s390 doesn't take into account memory backends currently
>>
>> BTW that might indeed be true, we only check against --mem-path.
> 
> It's possible to break it with '-numa node,memdev=...' since we don't really
> have anything to block that call chain for s390 (I'd argue it's an invalid
> use of the CLI for s390, but it's effectively a "-mem-path on steroids"
> alternative).
> 

I remember that -numa on s390x is completely blocked, but my mind might
be playing tricks on me. Anyhow, detecting the biggest page size also has
to be fixed.

-- 

Thanks,

David / dhildenb



Re: [Qemu-devel] [qemu-s390x] [PATCH for-4.0?] exec: Only count mapped memory backends for qemu_getrampagesize()

2019-03-27 Thread Igor Mammedov
On Wed, 27 Mar 2019 15:01:37 +0100
David Hildenbrand  wrote:

> On 27.03.19 10:09, Igor Mammedov wrote:
> > On Wed, 27 Mar 2019 09:10:01 +0100
> > David Hildenbrand  wrote:
> > 
> >> On 27.03.19 01:12, David Gibson wrote:
> >>> On Tue, Mar 26, 2019 at 06:02:51PM +0100, David Hildenbrand wrote:  
>  On 26.03.19 15:08, Igor Mammedov wrote:  
> > On Tue, 26 Mar 2019 14:50:58 +1100
> > David Gibson  wrote:
> >  
> >> qemu_getrampagesize() works out the minimum host page size backing any of
> >> guest RAM.  This is required in a few places, such as for POWER8 PAPR KVM
> >> guests, because limitations of the hardware virtualization mean the guest
> >> can't use pagesizes larger than the host pages backing its memory.
> >>
> >> However, it currently checks against *every* memory backend, whether or not
> >> it is actually mapped into guest memory at the moment.  This is incorrect.
> >>
> >> This can cause a problem attempting to add memory to a POWER8 pseries KVM
> >> guest which is configured to allow hugepages in the guest (e.g.
> >> -machine cap-hpt-max-page-size=16m).  If you attempt to add non-hugepage,
> >> you can (correctly) create a memory backend, however it (correctly) will
> >> throw an error when you attempt to map that memory into the guest by
> >> 'device_add'ing a pc-dimm.
> >>
> >> What's not correct is that if you then reset the guest a startup check
> >> against qemu_getrampagesize() will cause a fatal error because of the new
> >> memory object, even though it's not mapped into the guest.
> > I'd say that the backend should be removed by the mgmt app since device_add
> > failed, instead of leaving it to hang around (but a fatal error is not nice
> > behavior on QEMU's part either).
> 
>  Indeed, it should be removed. Depending on the options (huge pages with
>  prealloc?) memory might be consumed for unused memory. Undesired.  
> >>>
> >>> Right, but if the guest initiates a reboot before the management gets
> >>> to that, we'll have a crash.
> >>>   
> >>
> >> Yes, I agree.
> >>
> >> At least on s390x (extending on what Igor said):
> >>
> >> mc->init() -> s390_memory_init() ->
> >> memory_region_allocate_system_memory() -> host_memory_backend_set_mapped()
> >>
> >>
> >> ac->init_machine() -> kvm_arch_init() ->
> >> kvm_s390_configure_mempath_backing() -> qemu_getrampagesize()
> >>
> >>
> >> And in vl.c
> >>
> >> configure_accelerator(current_machine, argv[0]);
> > Looking more at it, it seems s390 is 'broken' anyway.
> > We call qemu_getrampagesize() here with huge page backends on CLI
> > but memory-backends are initialized later
> >  qemu_opts_foreach(..., object_create_delayed, ...)
> > so s390 doesn't take into account memory backends currently
> 
> BTW that might indeed be true, we only check against --mem-path.

It's possible to break it with '-numa node,memdev=...' since we don't really have
anything to block that call chain for s390 (I'd argue it's an invalid use of the
CLI for s390, but it's effectively a '-mem-path on steroids' alternative).




Re: [Qemu-devel] [qemu-s390x] [PATCH for-4.0?] exec: Only count mapped memory backends for qemu_getrampagesize()

2019-03-27 Thread David Hildenbrand
On 27.03.19 10:03, David Gibson wrote:
> On Wed, Mar 27, 2019 at 09:10:01AM +0100, David Hildenbrand wrote:
>> On 27.03.19 01:12, David Gibson wrote:
>>> On Tue, Mar 26, 2019 at 06:02:51PM +0100, David Hildenbrand wrote:
 On 26.03.19 15:08, Igor Mammedov wrote:
> On Tue, 26 Mar 2019 14:50:58 +1100
> David Gibson  wrote:
>
>> qemu_getrampagesize() works out the minimum host page size backing any of
>> guest RAM.  This is required in a few places, such as for POWER8 PAPR KVM
>> guests, because limitations of the hardware virtualization mean the guest
>> can't use pagesizes larger than the host pages backing its memory.
>>
>> However, it currently checks against *every* memory backend, whether or not
>> it is actually mapped into guest memory at the moment.  This is incorrect.
>>
>> This can cause a problem attempting to add memory to a POWER8 pseries KVM
>> guest which is configured to allow hugepages in the guest (e.g.
>> -machine cap-hpt-max-page-size=16m).  If you attempt to add non-hugepage,
>> you can (correctly) create a memory backend, however it (correctly) will
>> throw an error when you attempt to map that memory into the guest by
>> 'device_add'ing a pc-dimm.
>>
>> What's not correct is that if you then reset the guest a startup check
>> against qemu_getrampagesize() will cause a fatal error because of the new
>> memory object, even though it's not mapped into the guest.
> I'd say that backend should be removed by mgmt app since device_add failed
> instead of leaving it to hang around. (but a fatal error is not nice
> behavior on QEMU's part either)

 Indeed, it should be removed. Depending on the options (huge pages with
 prealloc?) memory might be consumed for unused memory. Undesired.
>>>
>>> Right, but if the guest initiates a reboot before the management gets
>>> to that, we'll have a crash.
>>>
>>
>> Yes, I agree.
>>
>> At least on s390x (extending on what Igor said):
>>
>> mc->init() -> s390_memory_init() ->
>> memory_region_allocate_system_memory() -> host_memory_backend_set_mapped()
>>
>>
>> ac->init_machine() -> kvm_arch_init() ->
>> kvm_s390_configure_mempath_backing() -> qemu_getrampagesize()
>>
>>
>> And in vl.c
>>
>> configure_accelerator(current_machine, argv[0]);
>> ...
>> machine_run_board_init()
>>
>> So memory is indeed not mapped before calling qemu_getrampagesize().
>>
>>
>> We *could* move the call to kvm_s390_configure_mempath_backing() to
>> s390_memory_init().
>>
>> cap_hpage_1m is not needed before we create VCPUs, so this would work fine.
>>
>> We could then eventually make qemu_getrampagesize() assert if no
>> backends are mapped at all, to catch other users that rely on this being
>> correct.
> 
> So.. I had a look at the usage in kvm_s390_configure_mempath_backing()
> and I'm pretty sure it's broken.  It will work in the case where
> there's only one backend.  And if that's the default -mem-path rather
> than an explicit memory backend then my patch won't break it any
> further.

On second look, I think I get your point.

1. Why on earth does "find_max_supported_pagesize" find the "minimum
page size"? What kind of nasty stuff is this?

2. qemu_mempath_getpagesize() is not affected by your patch and that
seems to be the only thing used on s390x for now.

I sent a patch to move the call on s390x. But we really have to detect
the maximum page size (what find_max_supported_pagesize promises), not
the minimum page size.
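The naming complaint can be made concrete with a toy model (plain C; the fold functions and the flat array of page sizes are invented here for illustration and are not QEMU's API): the shipped callback folds a minimum into its accumulator, while s390x actually needs the maximum.

```c
#include <assert.h>
#include <limits.h>
#include <stddef.h>

/* Toy model of the traversal over memory-backend page sizes.
 * find_max_supported_pagesize(), despite its name, folds the *minimum*
 * into its accumulator; s390x wants the maximum. Both folds shown. */

static long fold_min_pagesize(const long *sizes, size_t n)
{
    long min = LONG_MAX;            /* mirrors hpsize_min = LONG_MAX */
    for (size_t i = 0; i < n; i++) {
        if (sizes[i] < min) {
            min = sizes[i];
        }
    }
    return min;
}

static long fold_max_pagesize(const long *sizes, size_t n)
{
    long max = 0;                   /* what s390x would want instead */
    for (size_t i = 0; i < n; i++) {
        if (sizes[i] > max) {
            max = sizes[i];
        }
    }
    return max;
}
```

With backends of 4 KiB and 1 MiB, the minimum fold answers 4 KiB -- which says nothing about whether 1 MiB pages are in use anywhere -- while the maximum fold answers 1 MiB.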

> 
> qemu_getrampagesize() returns the smallest host page size for any
> memory backend.  That's what matters for ppc KVM (in several cases)
> because we need certain things to be host-contiguous, not just
> guest-contiguous.  Bigger host page sizes are fine for that purpose,
> clearly.
> 
> AFAICT on s390 you're looking to determine if any backend is using
> hugepages, because KVM may not support that.  The minimum host page
> size isn't adequate to determine that, so qemu_getrampagesize() won't
> tell you what you need.
> 

Indeed.

-- 

Thanks,

David / dhildenb



Re: [Qemu-devel] [qemu-s390x] [PATCH for-4.0?] exec: Only count mapped memory backends for qemu_getrampagesize()

2019-03-27 Thread David Hildenbrand
On 27.03.19 10:09, Igor Mammedov wrote:
> On Wed, 27 Mar 2019 09:10:01 +0100
> David Hildenbrand  wrote:
> 
>> On 27.03.19 01:12, David Gibson wrote:
>>> On Tue, Mar 26, 2019 at 06:02:51PM +0100, David Hildenbrand wrote:  
 On 26.03.19 15:08, Igor Mammedov wrote:  
> On Tue, 26 Mar 2019 14:50:58 +1100
> David Gibson  wrote:
>  
>> qemu_getrampagesize() works out the minimum host page size backing any of
>> guest RAM.  This is required in a few places, such as for POWER8 PAPR KVM
>> guests, because limitations of the hardware virtualization mean the guest
>> can't use pagesizes larger than the host pages backing its memory.
>>
>> However, it currently checks against *every* memory backend, whether or not
>> it is actually mapped into guest memory at the moment.  This is incorrect.
>>
>> This can cause a problem attempting to add memory to a POWER8 pseries KVM
>> guest which is configured to allow hugepages in the guest (e.g.
>> -machine cap-hpt-max-page-size=16m).  If you attempt to add non-hugepage,
>> you can (correctly) create a memory backend, however it (correctly) will
>> throw an error when you attempt to map that memory into the guest by
>> 'device_add'ing a pc-dimm.
>>
>> What's not correct is that if you then reset the guest a startup check
>> against qemu_getrampagesize() will cause a fatal error because of the new
>> memory object, even though it's not mapped into the guest.  
> I'd say that backend should be removed by mgmt app since device_add failed
> instead of leaving it to hang around. (but a fatal error is not nice
> behavior on QEMU's part either)

 Indeed, it should be removed. Depending on the options (huge pages with
 prealloc?) memory might be consumed for unused memory. Undesired.  
>>>
>>> Right, but if the guest initiates a reboot before the management gets
>>> to that, we'll have a crash.
>>>   
>>
>> Yes, I agree.
>>
>> At least on s390x (extending on what Igor said):
>>
>> mc->init() -> s390_memory_init() ->
>> memory_region_allocate_system_memory() -> host_memory_backend_set_mapped()
>>
>>
>> ac->init_machine() -> kvm_arch_init() ->
>> kvm_s390_configure_mempath_backing() -> qemu_getrampagesize()
>>
>>
>> And in vl.c
>>
>> configure_accelerator(current_machine, argv[0]);
> Looking more at it, it seems s390 is 'broken' anyway.
> We call qemu_getrampagesize() here with huge page backends on the CLI,
> but memory backends are initialized later, in
>  qemu_opts_foreach(..., object_create_delayed, ...),
> so s390 currently doesn't take memory backends into account.

BTW that might indeed be true, we only check against --mem-path.


-- 

Thanks,

David / dhildenb



Re: [Qemu-devel] [qemu-s390x] [PATCH for-4.0?] exec: Only count mapped memory backends for qemu_getrampagesize()

2019-03-27 Thread David Hildenbrand
On 27.03.19 10:09, Igor Mammedov wrote:
> On Wed, 27 Mar 2019 09:10:01 +0100
> David Hildenbrand  wrote:
> 
>> On 27.03.19 01:12, David Gibson wrote:
>>> On Tue, Mar 26, 2019 at 06:02:51PM +0100, David Hildenbrand wrote:  
 On 26.03.19 15:08, Igor Mammedov wrote:  
> On Tue, 26 Mar 2019 14:50:58 +1100
> David Gibson  wrote:
>  
>> qemu_getrampagesize() works out the minimum host page size backing any of
>> guest RAM.  This is required in a few places, such as for POWER8 PAPR KVM
>> guests, because limitations of the hardware virtualization mean the guest
>> can't use pagesizes larger than the host pages backing its memory.
>>
>> However, it currently checks against *every* memory backend, whether or not
>> it is actually mapped into guest memory at the moment.  This is incorrect.
>>
>> This can cause a problem attempting to add memory to a POWER8 pseries KVM
>> guest which is configured to allow hugepages in the guest (e.g.
>> -machine cap-hpt-max-page-size=16m).  If you attempt to add non-hugepage,
>> you can (correctly) create a memory backend, however it (correctly) will
>> throw an error when you attempt to map that memory into the guest by
>> 'device_add'ing a pc-dimm.
>>
>> What's not correct is that if you then reset the guest a startup check
>> against qemu_getrampagesize() will cause a fatal error because of the new
>> memory object, even though it's not mapped into the guest.  
> I'd say that backend should be removed by mgmt app since device_add failed
> instead of leaving it to hang around. (but a fatal error is not nice
> behavior on QEMU's part either)

 Indeed, it should be removed. Depending on the options (huge pages with
 prealloc?) memory might be consumed for unused memory. Undesired.  
>>>
>>> Right, but if the guest initiates a reboot before the management gets
>>> to that, we'll have a crash.
>>>   
>>
>> Yes, I agree.
>>
>> At least on s390x (extending on what Igor said):
>>
>> mc->init() -> s390_memory_init() ->
>> memory_region_allocate_system_memory() -> host_memory_backend_set_mapped()
>>
>>
>> ac->init_machine() -> kvm_arch_init() ->
>> kvm_s390_configure_mempath_backing() -> qemu_getrampagesize()
>>
>>
>> And in vl.c
>>
>> configure_accelerator(current_machine, argv[0]);
> Looking more at it, it seems s390 is 'broken' anyway.
> We call qemu_getrampagesize() here with huge page backends on the CLI,
> but memory backends are initialized later, in
>  qemu_opts_foreach(..., object_create_delayed, ...),
> so s390 currently doesn't take memory backends into account.
> 
>> ...
>> machine_run_board_init()
>>
>> So memory is indeed not mapped before calling qemu_getrampagesize().
> 
> 
> 
>>
>>
>> We *could* move the call to kvm_s390_configure_mempath_backing() to
>> s390_memory_init().
>>
>> cap_hpage_1m is not needed before we create VCPUs, so this would work fine.
>>
>> We could then eventually make qemu_getrampagesize() assert if no
>> backends are mapped at all, to catch other users that rely on this being
>> correct.
> Looks like a reasonable way to fix the immediate crash in 4.0 with a
> mandatory assert (but see my other reply, about getting rid of
> qemu_getrampagesize())
> 

I'll send a patch to move the call for s390x. We can then decide how to
proceed with qemu_getrampagesize() in general.

-- 

Thanks,

David / dhildenb



Re: [Qemu-devel] [qemu-s390x] [PATCH for-4.0?] exec: Only count mapped memory backends for qemu_getrampagesize()

2019-03-27 Thread David Hildenbrand
On 27.03.19 10:03, David Gibson wrote:
> On Wed, Mar 27, 2019 at 09:10:01AM +0100, David Hildenbrand wrote:
>> On 27.03.19 01:12, David Gibson wrote:
>>> On Tue, Mar 26, 2019 at 06:02:51PM +0100, David Hildenbrand wrote:
 On 26.03.19 15:08, Igor Mammedov wrote:
> On Tue, 26 Mar 2019 14:50:58 +1100
> David Gibson  wrote:
>
>> qemu_getrampagesize() works out the minimum host page size backing any of
>> guest RAM.  This is required in a few places, such as for POWER8 PAPR KVM
>> guests, because limitations of the hardware virtualization mean the guest
>> can't use pagesizes larger than the host pages backing its memory.
>>
>> However, it currently checks against *every* memory backend, whether or not
>> it is actually mapped into guest memory at the moment.  This is incorrect.
>>
>> This can cause a problem attempting to add memory to a POWER8 pseries KVM
>> guest which is configured to allow hugepages in the guest (e.g.
>> -machine cap-hpt-max-page-size=16m).  If you attempt to add non-hugepage,
>> you can (correctly) create a memory backend, however it (correctly) will
>> throw an error when you attempt to map that memory into the guest by
>> 'device_add'ing a pc-dimm.
>>
>> What's not correct is that if you then reset the guest a startup check
>> against qemu_getrampagesize() will cause a fatal error because of the new
>> memory object, even though it's not mapped into the guest.
> I'd say that backend should be removed by mgmt app since device_add failed
> instead of leaving it to hang around. (but a fatal error is not nice
> behavior on QEMU's part either)

 Indeed, it should be removed. Depending on the options (huge pages with
 prealloc?) memory might be consumed for unused memory. Undesired.
>>>
>>> Right, but if the guest initiates a reboot before the management gets
>>> to that, we'll have a crash.
>>>
>>
>> Yes, I agree.
>>
>> At least on s390x (extending on what Igor said):
>>
>> mc->init() -> s390_memory_init() ->
>> memory_region_allocate_system_memory() -> host_memory_backend_set_mapped()
>>
>>
>> ac->init_machine() -> kvm_arch_init() ->
>> kvm_s390_configure_mempath_backing() -> qemu_getrampagesize()
>>
>>
>> And in vl.c
>>
>> configure_accelerator(current_machine, argv[0]);
>> ...
>> machine_run_board_init()
>>
>> So memory is indeed not mapped before calling qemu_getrampagesize().
>>
>>
>> We *could* move the call to kvm_s390_configure_mempath_backing() to
>> s390_memory_init().
>>
>> cap_hpage_1m is not needed before we create VCPUs, so this would work fine.
>>
>> We could then eventually make qemu_getrampagesize() assert if no
>> backends are mapped at all, to catch other users that rely on this being
>> correct.
> 
> So.. I had a look at the usage in kvm_s390_configure_mempath_backing()
> and I'm pretty sure it's broken.  It will work in the case where
> there's only one backend.  And if that's the default -mem-path rather
> than an explicit memory backend then my patch won't break it any
> further.

It works for the current scenarios, where you only have one (maximum
two) backings of the same kind. Your patch would break that.

> 
> qemu_getrampagesize() returns the smallest host page size for any
> memory backend.  That's what matters for ppc KVM (in several cases)
> because we need certain things to be host-contiguous, not just
> guest-contiguous.  Bigger host page sizes are fine for that purpose,
> clearly.
> 
> AFAICT on s390 you're looking to determine if any backend is using
> hugepages, because KVM may not support that.  The minimum host page
> size isn't adequate to determine that, so qemu_getrampagesize() won't
> tell you what you need.

Well, as long as we don't support DIMMs or anything like that, it works
perfectly fine. But yes, it is far from beautiful.

First of all, I'll prepare a patch to do the call from a different
context. Then we can fine-tune it to use something other than
qemu_getrampagesize().

-- 

Thanks,

David / dhildenb



Re: [Qemu-devel] [qemu-s390x] [PATCH for-4.0?] exec: Only count mapped memory backends for qemu_getrampagesize()

2019-03-27 Thread Igor Mammedov
On Wed, 27 Mar 2019 09:10:01 +0100
David Hildenbrand  wrote:

> On 27.03.19 01:12, David Gibson wrote:
> > On Tue, Mar 26, 2019 at 06:02:51PM +0100, David Hildenbrand wrote:  
> >> On 26.03.19 15:08, Igor Mammedov wrote:  
> >>> On Tue, 26 Mar 2019 14:50:58 +1100
> >>> David Gibson  wrote:
> >>>  
>  qemu_getrampagesize() works out the minimum host page size backing any of
>  guest RAM.  This is required in a few places, such as for POWER8 PAPR KVM
>  guests, because limitations of the hardware virtualization mean the guest
>  can't use pagesizes larger than the host pages backing its memory.
> 
>  However, it currently checks against *every* memory backend, whether or not
>  it is actually mapped into guest memory at the moment.  This is incorrect.
> 
>  This can cause a problem attempting to add memory to a POWER8 pseries KVM
>  guest which is configured to allow hugepages in the guest (e.g.
>  -machine cap-hpt-max-page-size=16m).  If you attempt to add non-hugepage,
>  you can (correctly) create a memory backend, however it (correctly) will
>  throw an error when you attempt to map that memory into the guest by
>  'device_add'ing a pc-dimm.
> 
>  What's not correct is that if you then reset the guest a startup check
>  against qemu_getrampagesize() will cause a fatal error because of the new
>  memory object, even though it's not mapped into the guest.  
> >>> I'd say that backend should be removed by mgmt app since device_add failed
> >>> instead of leaving it to hang around. (but a fatal error is not nice
> >>> behavior on QEMU's part either)
> >>
> >> Indeed, it should be removed. Depending on the options (huge pages with
> >> prealloc?) memory might be consumed for unused memory. Undesired.  
> > 
> > Right, but if the guest initiates a reboot before the management gets
> > to that, we'll have a crash.
> >   
> 
> Yes, I agree.
> 
> At least on s390x (extending on what Igor said):
> 
> mc->init() -> s390_memory_init() ->
> memory_region_allocate_system_memory() -> host_memory_backend_set_mapped()
> 
> 
> ac->init_machine() -> kvm_arch_init() ->
> kvm_s390_configure_mempath_backing() -> qemu_getrampagesize()
> 
> 
> And in vl.c
> 
> configure_accelerator(current_machine, argv[0]);
Looking more at it, it seems s390 is 'broken' anyway.
We call qemu_getrampagesize() here with huge page backends on the CLI,
but memory backends are initialized later, in
 qemu_opts_foreach(..., object_create_delayed, ...),
so s390 currently doesn't take memory backends into account.

> ...
> machine_run_board_init()
> 
> So memory is indeed not mapped before calling qemu_getrampagesize().



> 
> 
> We *could* move the call to kvm_s390_configure_mempath_backing() to
> s390_memory_init().
> 
> cap_hpage_1m is not needed before we create VCPUs, so this would work fine.
> 
> We could than eventually make qemu_getrampagesize() asssert if no
> backends are mapped at all, to catch other user that rely on this being
> correct.
Looks like a reasonable way to fix the immediate crash in 4.0 with a mandatory
assert (but see my other reply, about getting rid of qemu_getrampagesize())




Re: [Qemu-devel] [qemu-s390x] [PATCH for-4.0?] exec: Only count mapped memory backends for qemu_getrampagesize()

2019-03-27 Thread David Gibson
On Wed, Mar 27, 2019 at 09:10:01AM +0100, David Hildenbrand wrote:
> On 27.03.19 01:12, David Gibson wrote:
> > On Tue, Mar 26, 2019 at 06:02:51PM +0100, David Hildenbrand wrote:
> >> On 26.03.19 15:08, Igor Mammedov wrote:
> >>> On Tue, 26 Mar 2019 14:50:58 +1100
> >>> David Gibson  wrote:
> >>>
>  qemu_getrampagesize() works out the minimum host page size backing any of
>  guest RAM.  This is required in a few places, such as for POWER8 PAPR KVM
>  guests, because limitations of the hardware virtualization mean the guest
>  can't use pagesizes larger than the host pages backing its memory.
> 
 However, it currently checks against *every* memory backend, whether or not
 it is actually mapped into guest memory at the moment.  This is incorrect.
> 
>  This can cause a problem attempting to add memory to a POWER8 pseries KVM
>  guest which is configured to allow hugepages in the guest (e.g.
>  -machine cap-hpt-max-page-size=16m).  If you attempt to add non-hugepage,
>  you can (correctly) create a memory backend, however it (correctly) will
>  throw an error when you attempt to map that memory into the guest by
>  'device_add'ing a pc-dimm.
> 
>  What's not correct is that if you then reset the guest a startup check
>  against qemu_getrampagesize() will cause a fatal error because of the new
>  memory object, even though it's not mapped into the guest.
> >>> I'd say that backend should be removed by mgmt app since device_add failed
> >>> instead of leaving it to hang around. (but a fatal error is not nice
> >>> behavior on QEMU's part either)
> >>
> >> Indeed, it should be removed. Depending on the options (huge pages with
> >> prealloc?) memory might be consumed for unused memory. Undesired.
> > 
> > Right, but if the guest initiates a reboot before the management gets
> > to that, we'll have a crash.
> > 
> 
> Yes, I agree.
> 
> At least on s390x (extending on what Igor said):
> 
> mc->init() -> s390_memory_init() ->
> memory_region_allocate_system_memory() -> host_memory_backend_set_mapped()
> 
> 
> ac->init_machine() -> kvm_arch_init() ->
> kvm_s390_configure_mempath_backing() -> qemu_getrampagesize()
> 
> 
> And in vl.c
> 
> configure_accelerator(current_machine, argv[0]);
> ...
> machine_run_board_init()
> 
> So memory is indeed not mapped before calling qemu_getrampagesize().
> 
> 
> We *could* move the call to kvm_s390_configure_mempath_backing() to
> s390_memory_init().
> 
> cap_hpage_1m is not needed before we create VCPUs, so this would work fine.
> 
> We could then eventually make qemu_getrampagesize() assert if no
> backends are mapped at all, to catch other users that rely on this being
> correct.

So.. I had a look at the usage in kvm_s390_configure_mempath_backing()
and I'm pretty sure it's broken.  It will work in the case where
there's only one backend.  And if that's the default -mem-path rather
than an explicit memory backend then my patch won't break it any
further.

qemu_getrampagesize() returns the smallest host page size for any
memory backend.  That's what matters for ppc KVM (in several cases)
because we need certain things to be host-contiguous, not just
guest-contiguous.  Bigger host page sizes are fine for that purpose,
clearly.

AFAICT on s390 you're looking to determine if any backend is using
hugepages, because KVM may not support that.  The minimum host page
size isn't adequate to determine that, so qemu_getrampagesize() won't
tell you what you need.
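The point above can be illustrated with a hedged toy sketch (plain C; the predicate and the flat page-size array are invented names, not QEMU code): with a 4 KiB backend plus a 1 MiB hugepage backend, the minimum page size is 4 KiB, exactly the same answer as for a guest with no hugepage backing at all, so a dedicated predicate is needed.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical predicate sketching what s390x actually needs to know:
 * "does any backend use pages larger than the base page size?".
 * The minimum over all backends cannot answer this question. */
static bool any_backend_uses_hugepages(const long *sizes, size_t n,
                                       long base_pagesize)
{
    for (size_t i = 0; i < n; i++) {
        if (sizes[i] > base_pagesize) {
            return true;   /* hugepage backing present */
        }
    }
    return false;
}
```

For { 4 KiB, 1 MiB } the minimum is 4 KiB yet this predicate returns true; for { 4 KiB, 4 KiB } both agree that nothing special is needed.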

-- 
David Gibson| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson




Re: [Qemu-devel] [qemu-s390x] [PATCH for-4.0?] exec: Only count mapped memory backends for qemu_getrampagesize()

2019-03-27 Thread David Hildenbrand
On 27.03.19 01:12, David Gibson wrote:
> On Tue, Mar 26, 2019 at 06:02:51PM +0100, David Hildenbrand wrote:
>> On 26.03.19 15:08, Igor Mammedov wrote:
>>> On Tue, 26 Mar 2019 14:50:58 +1100
>>> David Gibson  wrote:
>>>
 qemu_getrampagesize() works out the minimum host page size backing any of
 guest RAM.  This is required in a few places, such as for POWER8 PAPR KVM
 guests, because limitations of the hardware virtualization mean the guest
 can't use pagesizes larger than the host pages backing its memory.

 However, it currently checks against *every* memory backend, whether or not
 it is actually mapped into guest memory at the moment.  This is incorrect.

 This can cause a problem attempting to add memory to a POWER8 pseries KVM
 guest which is configured to allow hugepages in the guest (e.g.
 -machine cap-hpt-max-page-size=16m).  If you attempt to add non-hugepage,
 you can (correctly) create a memory backend, however it (correctly) will
 throw an error when you attempt to map that memory into the guest by
 'device_add'ing a pc-dimm.

 What's not correct is that if you then reset the guest a startup check
 against qemu_getrampagesize() will cause a fatal error because of the new
 memory object, even though it's not mapped into the guest.
>>> I'd say that backend should be removed by mgmt app since device_add failed
>>> instead of leaving it to hang around. (but a fatal error is not nice
>>> behavior on QEMU's part either)
>>
>> Indeed, it should be removed. Depending on the options (huge pages with
>> prealloc?) memory might be consumed for unused memory. Undesired.
> 
> Right, but if the guest initiates a reboot before the management gets
> to that, we'll have a crash.
> 

Yes, I agree.

At least on s390x (extending on what Igor said):

mc->init() -> s390_memory_init() ->
memory_region_allocate_system_memory() -> host_memory_backend_set_mapped()


ac->init_machine() -> kvm_arch_init() ->
kvm_s390_configure_mempath_backing() -> qemu_getrampagesize()


And in vl.c

configure_accelerator(current_machine, argv[0]);
...
machine_run_board_init()

So memory is indeed not mapped before calling qemu_getrampagesize().


We *could* move the call to kvm_s390_configure_mempath_backing() to
s390_memory_init().

cap_hpage_1m is not needed before we create VCPUs, so this would work fine.

We could then eventually make qemu_getrampagesize() assert if no
backends are mapped at all, to catch other users that rely on this being
correct.

-- 

Thanks,

David / dhildenb



Re: [Qemu-devel] [qemu-s390x] [PATCH for-4.0?] exec: Only count mapped memory backends for qemu_getrampagesize()

2019-03-26 Thread David Gibson
On Tue, Mar 26, 2019 at 06:02:51PM +0100, David Hildenbrand wrote:
> On 26.03.19 15:08, Igor Mammedov wrote:
> > On Tue, 26 Mar 2019 14:50:58 +1100
> > David Gibson  wrote:
> > 
> >> qemu_getrampagesize() works out the minimum host page size backing any of
> >> guest RAM.  This is required in a few places, such as for POWER8 PAPR KVM
> >> guests, because limitations of the hardware virtualization mean the guest
> >> can't use pagesizes larger than the host pages backing its memory.
> >>
> >> However, it currently checks against *every* memory backend, whether or not
> >> it is actually mapped into guest memory at the moment.  This is incorrect.
> >>
> >> This can cause a problem attempting to add memory to a POWER8 pseries KVM
> >> guest which is configured to allow hugepages in the guest (e.g.
> >> -machine cap-hpt-max-page-size=16m).  If you attempt to add non-hugepage,
> >> you can (correctly) create a memory backend, however it (correctly) will
> >> throw an error when you attempt to map that memory into the guest by
> >> 'device_add'ing a pc-dimm.
> >>
> >> What's not correct is that if you then reset the guest a startup check
> >> against qemu_getrampagesize() will cause a fatal error because of the new
> >> memory object, even though it's not mapped into the guest.
> > I'd say that backend should be removed by mgmt app since device_add failed
> > instead of leaving it to hang around. (but a fatal error is not nice
> > behavior on QEMU's part either)
> 
> Indeed, it should be removed. Depending on the options (huge pages with
> prealloc?) memory might be consumed for unused memory. Undesired.

Right, but if the guest initiates a reboot before the management gets
to that, we'll have a crash.

-- 
David Gibson| I'll have my music baroque, and my code
david AT gibson.dropbear.id.au  | minimalist, thank you.  NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson




Re: [Qemu-devel] [qemu-s390x] [PATCH for-4.0?] exec: Only count mapped memory backends for qemu_getrampagesize()

2019-03-26 Thread David Hildenbrand
On 26.03.19 15:08, Igor Mammedov wrote:
> On Tue, 26 Mar 2019 14:50:58 +1100
> David Gibson  wrote:
> 
>> qemu_getrampagesize() works out the minimum host page size backing any of
>> guest RAM.  This is required in a few places, such as for POWER8 PAPR KVM
>> guests, because limitations of the hardware virtualization mean the guest
>> can't use pagesizes larger than the host pages backing its memory.
>>
>> However, it currently checks against *every* memory backend, whether or not
>> it is actually mapped into guest memory at the moment.  This is incorrect.
>>
>> This can cause a problem attempting to add memory to a POWER8 pseries KVM
>> guest which is configured to allow hugepages in the guest (e.g.
>> -machine cap-hpt-max-page-size=16m).  If you attempt to add non-hugepage,
>> you can (correctly) create a memory backend, however it (correctly) will
>> throw an error when you attempt to map that memory into the guest by
>> 'device_add'ing a pc-dimm.
>>
>> What's not correct is that if you then reset the guest a startup check
>> against qemu_getrampagesize() will cause a fatal error because of the new
>> memory object, even though it's not mapped into the guest.
> I'd say that backend should be removed by mgmt app since device_add failed
> instead of leaving it to hang around. (but a fatal error is not nice
> behavior on QEMU's part either)

Indeed, it should be removed. Depending on the options (huge pages with
prealloc?) memory might be consumed for unused memory. Undesired.

> 
>>
>> This patch corrects the problem by adjusting find_max_supported_pagesize()
>> (called from qemu_getrampagesize() via object_child_foreach) to exclude
>> non-mapped memory backends.
> I'm not sure it's OK to do so. It depends on where qemu_getrampagesize()
> is called from. For example, s390 calls it rather early when initializing
> KVM, so there isn't anything mapped yet.
> 
> And once I replace -mem-path with a hostmem backend and drop
> qemu_mempath_getpagesize(mem_path) /which btw isn't guaranteed to be mapped either/
> this patch might lead to incorrect results for initial memory as well.
> 
>>
>> Signed-off-by: David Gibson 
>> ---
>>  exec.c | 5 +++--
>>  1 file changed, 3 insertions(+), 2 deletions(-)
>>
>> This is definitely a bug, but it's not a regression.  I'm not sure if
>> this is 4.0 material at this stage of the freeze or not.
>>
>> diff --git a/exec.c b/exec.c
>> index 86a38d3b3b..6ab62f4eee 100644
>> --- a/exec.c
>> +++ b/exec.c
>> @@ -1692,9 +1692,10 @@ static int find_max_supported_pagesize(Object *obj, void *opaque)
>>      long *hpsize_min = opaque;
>>  
>>      if (object_dynamic_cast(obj, TYPE_MEMORY_BACKEND)) {
>> -        long hpsize = host_memory_backend_pagesize(MEMORY_BACKEND(obj));
>> +        HostMemoryBackend *backend = MEMORY_BACKEND(obj);
>> +        long hpsize = host_memory_backend_pagesize(backend);
>>  
>> -        if (hpsize < *hpsize_min) {
>> +        if (host_memory_backend_is_mapped(backend) && (hpsize < *hpsize_min)) {
>>              *hpsize_min = hpsize;
>>          }
>>      }
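The effect of the hunk above can be sketched as a standalone toy (struct and function names invented here; the `mapped` flag stands in for host_memory_backend_is_mapped(), and the real code walks QOM children rather than an array): only mapped backends contribute to the minimum.

```c
#include <assert.h>
#include <limits.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model of the patched fold; not QEMU's actual data structures. */
struct backend {
    long pagesize;
    bool mapped;     /* stands in for host_memory_backend_is_mapped() */
};

static long min_mapped_pagesize(const struct backend *b, size_t n)
{
    long min = LONG_MAX;
    for (size_t i = 0; i < n; i++) {
        /* The patch's change: unmapped backends no longer count. */
        if (b[i].mapped && b[i].pagesize < min) {
            min = b[i].pagesize;
        }
    }
    return min;
}
```

For the pseries scenario from the commit message -- guest RAM on 16 MiB hugepages plus a leftover unmapped 4 KiB backend from a failed device_add -- the filtered fold reports 16 MiB, while the unfiltered one would report 4 KiB and trip the fatal startup check on reboot.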
> 
> 


-- 

Thanks,

David / dhildenb