On 2019-08-12, Florian Weimer <fwei...@redhat.com> wrote:
> Do you use the built-in Intel graphics?  Can you test with something
> else?
Does it have any effect? It happens to me even with a discrete GPU.

As far as I know, integrated graphics does not share physical memory
from the point of view of the CPU address space. The physical memory is
split into GPU and CPU regions, and the CPU never sees the GPU's
physical memory. The IOMMU can be asked to map the GPU's memory into
the CPU's virtual address space, as with any PCI card, but the physical
memory is always separate. (Although it lives in the same memory
chips.) Some BIOSes allow defining the UMA split (the ratio between GPU
and CPU memory), but that is out of the operating system's control and
cannot be changed until reset.

What actually happens is that some CPU physical memory holds the text
of GUI programs and some holds the block device I/O cache. Linux
handles both uniformly. When physical memory is exhausted, the memory
allocator starts paging to a swap device. The evil thing is how memory
pages are selected to be swapped out: the algorithm evicts the least
recently used ones, and that is often the program text, not the block
cache. As a result your GUI becomes unresponsive, because all the
physical memory is filled with block cache and the program text has to
be reloaded from a block device. And what's worse, this happens even
without swap space, because program text pages are backed by a file and
can thus be dropped and loaded back from the file system later. I.e.,
program text is always swappable.

A cure would be a fairer memory allocator that could magically discover
that a user is more interested in the few megabytes of his window
manager than in the gigabytes of a transferred file. The issue is that
the allocator does not discriminate. A process can actually provide
some hints using madvise(2) and mlock(2), but those apply neither to
the program text nor to the block cache in kernel space. And even if
processes provided hints, there could always be some adversarial
program abusing the others. Maybe if ulimit were augmented with a
maximal block cache usage and an I/O scheduler accounted for that. That
could

-- Petr
devel mailing list -- devel@lists.fedoraproject.org
To unsubscribe send an email to devel-le...@lists.fedoraproject.org
List Guidelines: https://fedoraproject.org/wiki/Mailing_list_guidelines