On 07/25/2018 02:56 PM, Andrew Cooper wrote:
> On 25/07/18 17:29, Juergen Gross wrote:
>> On 25/07/18 18:12, Roger Pau Monné wrote:
>>> On Wed, Jul 25, 2018 at 05:05:35PM +0300, berca...@amazon.com wrote:
>>>> On 07/25/2018 05:02 PM, Wei Liu wrote:
>>>>> On Wed, Jul 25, 2018 at 03:41:11PM +0200, Juergen Gross wrote:
>>>>>> On 25/07/18 15:35, Roger Pau Monné wrote:
>>>>>>>> What could be causing the available memory loss problem?
>>>>>>> That seems to be Linux aggressively ballooning out memory, you go from
>>>>>>> 7129M total memory to 246M. Are you creating a lot of domains?
>>>>>> This might be related to the tools thinking dom0 is a PV domain.
>>>>> Good point.
>>>>>
>>>>> In that case, xenstore-ls -fp would also be useful. The output should
>>>>> show the balloon target for Dom0.
>>>>>
>>>>> You can also try setting autoballoon to "off" in /etc/xen/xl.cfg to see
>>>>> if it makes any difference.
>>>>>
>>>>> Wei.
>>>> Also tried setting autoballooning off, but it had no effect.
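(For reference, and going from memory of xl.cfg(5) and the xenstore layout,
so please double-check the exact keys against the xenstore-ls -fp output:
the global knob is

    # /etc/xen/xl.cfg
    autoballoon="off"

and the target the toolstack last asked dom0 to balloon to can be read
directly, values in KiB:

    xenstore-read /local/domain/0/memory/target
    xenstore-read /local/domain/0/memory/static-max

the latter may not exist for dom0. If the target there is already tiny,
that would point at the toolstack side rather than the balloon driver.)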
>>> This is a Linux/libxl issue, and I'm not sure what's the best way to
>>> solve it. Linux has the following 'workaround' in the balloon driver:
>>>
>>> err = xenbus_scanf(XBT_NIL, "memory", "static-max", "%llu",
>>>                &static_max);
>>> if (err != 1)
>>>     static_max = new_target;
>>> else
>>>     static_max >>= PAGE_SHIFT - 10;
>>> target_diff = xen_pv_domain() ? 0
>>>             : static_max - balloon_stats.target_pages;
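(For context, and paraphrasing from memory rather than quoting verbatim: that
snippet sits in watch_target() in drivers/xen/xen-balloon.c, and the diff it
computes is then subtracted from every target the toolstack writes, roughly:

static void watch_target(struct xenbus_watch *watch,
                         const char *path, const char *token)
{
        unsigned long long new_target, static_max;
        static bool watch_fired;
        static long target_diff;
        int err;

        err = xenbus_scanf(XBT_NIL, "memory", "target", "%llu", &new_target);
        if (err != 1)
                return;                         /* no target yet; ok for dom0 */

        new_target >>= PAGE_SHIFT - 10;         /* xenstore value is in KiB */

        if (!watch_fired) {
                watch_fired = true;
                /* The 'workaround' quoted above: only PV gets a zero diff,
                 * HVM and PVH both get static-max minus the initial target. */
                err = xenbus_scanf(XBT_NIL, "memory", "static-max", "%llu",
                                   &static_max);
                if (err != 1)
                        static_max = new_target;
                else
                        static_max >>= PAGE_SHIFT - 10;
                target_diff = xen_pv_domain() ? 0
                                : static_max - balloon_stats.target_pages;
        }

        /* Every subsequent target from the toolstack is reduced by the diff. */
        balloon_set_new_target(new_target - target_diff);
}

So how far a PVH dom0 balloons down depends entirely on what the toolstack
writes into memory/target and memory/static-max.)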
>> Hmm, shouldn't PVH behave the same way as PV here? I don't think
>> there is memory missing for PVH, as opposed to HVM's firmware memory.
>>
>> Adding Boris for a second opinion.
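(A minimal sketch of what Juergen is suggesting, assuming xen_pvh_domain()
is the predicate we would want here, would be

    target_diff = (xen_pv_domain() || xen_pvh_domain()) ? 0
                    : static_max - balloon_stats.target_pages;

i.e. treat PVH like PV and only subtract the firmware gap for HVM.)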

(Notwithstanding Andrew's rant below ;-))

I am trying to remember --- what memory were we trying not to online for
HVM here?


-boris


> /sigh
>
> <rant>
>
> Ballooning and guest memory accounting are a known, growing clustermess
> of swamps.  The ballooning protocol itself is sufficiently broken as to
> be useless outside of contrived scenarios, owing to the lack of any
> ability to nack a request, and to the guest not knowing, or being able
> to work out, how much RAM it actually has.
>
> The Xen/toolstack/qemu-{trad,upstream}/hvmloader guessathon contributes
> to lots of corner cases where things explode spectacularly on migration,
> such as having more than 4 network cards, or having vram != 64M, or
> generally anything involving PCI Passthrough.
>
> Can we take this as a hint that maybe it's time to try fixing the problem
> properly rather than applying even more duct tape?  I'd like to remind
> people that there is a design which has been discussed at various
> conferences in the past, and not overly objected to.
>
> </rant>
>
> ~Andrew

