On Thu, May 12, 2022 at 09:41:29AM +0200, Paolo Bonzini wrote:
> On 5/11/22 18:54, Daniel P. Berrangé wrote:
> > On Wed, May 11, 2022 at 01:07:47PM +0200, Paolo Bonzini wrote:
> > > On 5/11/22 12:10, Daniel P. Berrangé wrote:
> > > > I expect creating/deleting I/O threads is cheap in comparison to
> > > > the work done for preallocation. If libvirt is using -preconfig
> > > > and object-add […]
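Daniel's -preconfig suggestion would, roughly, let the management application create the I/O thread before machine init and then learn its TID so it can be pinned. A hedged sketch of that QMP exchange — `object-add` and `query-iothreads` are real QMP commands, but the object id `prealloc0`, the TID value, and the exact reply shape here are illustrative assumptions:

```
{ "execute": "object-add",
  "arguments": { "qom-type": "iothread", "id": "prealloc0" } }

{ "execute": "query-iothreads" }
{ "return": [ { "id": "prealloc0", "thread-id": 12345, ... } ] }
```

With the `thread-id` in hand, libvirt could apply its usual cgroup/affinity policy to the thread before any preallocation work is scheduled onto it.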
On 11.05.22 17:08, Daniel P. Berrangé wrote:
> On Wed, May 11, 2022 at 03:16:55PM +0200, Michal Prívozník wrote:
>> On 5/10/22 11:12, Daniel P. Berrangé wrote:
>>> On Tue, May 10, 2022 at 08:55:33AM +0200, Michal Privoznik wrote:
>>
>> The very last case is the only one where this patch can potentially
>> be beneficial. The problem is that because libvirt is in charge of
>> enforcing CPU affinity, IIRC, we explicitly block QEMU from doing
>> anything with CPU affinity. So AFAICT, this patch should result in
>> an error […]

On 5/11/22 12:10, Daniel P. Berrangé wrote:
If all we need is NUMA affinity, not CPU affinity, then it would
be sufficient to create 1 I/O thread per host NUMA node that the
VM needs to use. The job running in the I/O thread can spawn further
threads and inherit the NUMA affinity. This might be more […]
On Wed, May 11, 2022 at 12:03:24PM +0200, David Hildenbrand wrote:
On 11.05.22 11:34, Daniel P. Berrangé wrote:
> On Wed, May 11, 2022 at 11:31:23AM +0200, David Hildenbrand wrote:
>> Long story short, management application has no way of learning
>> TIDs of allocator threads so it can't make them run NUMA aware.
>
> This feels like the key issue. The preallocation threads are
> invisible to libvirt, regardless of whether we're doing coldplug
> or hotplug of […]
On Wed, May 11, 2022 at 09:34:07AM +0100, Dr. David Alan Gilbert wrote:
> * Michal Privoznik (mpriv...@redhat.com) wrote:
> > When allocating large amounts of memory the task is offloaded
> > onto threads. These threads then use various techniques to
> > allocate the memory fully (madvise(), writing into the memory).
> > However, these threads are free to run on any CPU, which becomes
> > problematic on NUMA machines because it may […]