It seems that this is an issue on 5.10.xxx kernels, but it is no longer the
case for 5.15.

On Wed, Nov 23, 2022 at 10:39 AM Ivan Volosyuk <ivan.volos...@gmail.com> wrote:
>
> I'm highly confident that I'm using preallocated hugetlb pages and that
> most of the RAM on this 64GB system is free. The problem appears only with:
> CONFIG_PREEMPT=y
>
> $ cat /proc/sys/vm/nr_hugepages
> 8192
> $ cat /proc/meminfo
> ..
> HugePages_Total:    8192
> HugePages_Free:        0
> $ mount |grep huge
> hugetlbfs on /dev/hugepages type hugetlbfs (rw,relatime,pagesize=2M)
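>
> One way to double-check that the guest RAM really ends up on the
> preallocated hugepages (the qemu binary name and paths here are just
> examples from my setup) would be something like:
>
> # page sizes backing the QEMU mappings; 2048 kB entries are the 2M hugepages
> $ grep KernelPageSize /proc/$(pidof qemu-system-x86_64)/smaps | sort | uniq -c
> # HugePages_Free should drop while the VM is running (16G / 2M = 8192 pages)
> $ grep HugePages_Free /proc/meminfo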
>
> # Allow realtime tasks to monopolize CPU
> echo -1 > /proc/sys/kernel/sched_rt_runtime_us
>
> QEMU realtime priority:
> chrt -r 5 /usr/bin/qemu-system-x86_64 -overcommit mem-lock=on
> -mem-path /dev/hugepages/qemu-mem [allowed to use 10 of 16 CPU threads]
> Hardware: i9-9900KS + RTX 2080 Ti + 64GB RAM
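>
> A more explicit equivalent of -mem-path, in case it matters for
> reproducing this (the size and object id below are placeholders for my
> 16G guest; -machine memory-backend= needs a reasonably recent QEMU):
>
> qemu-system-x86_64 \
>   -object memory-backend-file,id=hugemem0,size=16G,mem-path=/dev/hugepages,prealloc=on,share=on \
>   -machine memory-backend=hugemem0 ...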
>
> On Wed, Nov 23, 2022 at 2:42 AM Alex Williamson
> <alex.william...@redhat.com> wrote:
> >
> > On Tue, 22 Nov 2022 17:57:37 +1100
> > Ivan Volosyuk <ivan.volos...@gmail.com> wrote:
> >
> > > Is there something special about the pinning step? When I start a new
> > > VM with 16G in dedicated hugepages, my system becomes quite
> > > unresponsive for several seconds, with significant packet loss and
> > > random device-hang oopses, if I use a preemptive kernel.
> >
> > Pinning is blocking in the ioctl, but this should only affect the QEMU
> > process, not other host tasks; there are various schedule calls to
> > prevent this.  A 16GB hugepage VM certainly shouldn't cause such
> > problems.  Are you sure the VM is really using pre-allocated hugepages?
> > This sounds more like a host system that's being forced to swap to
> > accommodate the VM page pinning.  Thanks,
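> >
> > If you want to rule swapping in or out, watching the host while the VM
> > starts should show it; for example (just one way to observe it):
> >
> > $ vmstat 1      # si/so columns should stay 0 if nothing is being swapped
> > $ grep -E 'SwapTotal|SwapFree' /proc/meminfo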
> >
> > Alex
> >
