On Mon, Nov 05, 2018 at 02:51:41PM -0700, Alex Williamson wrote:
> On Mon, 5 Nov 2018 11:55:51 -0500
> Daniel Jordan wrote:
> > +static int vfio_pin_map_dma_chunk(unsigned long start_vaddr,
> > +                                  unsigned long end_vaddr,
> > +                                  struct [...]
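The quoted signature is cut off by the archive, but it shows the shape of
the change: the pinning work is split into [start_vaddr, end_vaddr) chunks
that can run concurrently. As a rough illustration only, here is a minimal
user-space sketch of that range-chunking pattern. pin_chunk, worker, and
the pthread driver are all hypothetical stand-ins, not the patch's code;
the kernel series uses its own multithreading machinery.

/*
 * Sketch: divide [start, end) into page-aligned per-thread chunks and
 * run a chunk function on each. Only the (start_vaddr, end_vaddr)
 * split mirrors the quoted helper; everything else is hypothetical.
 */
#include <pthread.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL
#define NTHREADS  4

struct chunk_work {
        unsigned long start_vaddr;      /* inclusive */
        unsigned long end_vaddr;        /* exclusive */
};

/* Stand-in for the real per-chunk pinning work. */
static int pin_chunk(unsigned long start_vaddr, unsigned long end_vaddr)
{
        printf("pinning [%#lx, %#lx)\n", start_vaddr, end_vaddr);
        return 0;
}

static void *worker(void *arg)
{
        struct chunk_work *w = arg;

        pin_chunk(w->start_vaddr, w->end_vaddr);
        return NULL;
}

int main(void)
{
        unsigned long start = 0x100000000UL;
        unsigned long end = start + 64 * PAGE_SIZE;
        unsigned long pages = (end - start) / PAGE_SIZE;
        unsigned long per_thread = (pages + NTHREADS - 1) / NTHREADS;
        struct chunk_work work[NTHREADS];
        pthread_t tids[NTHREADS];
        int i, n = 0;

        for (i = 0; i < NTHREADS; i++) {
                unsigned long s = start + i * per_thread * PAGE_SIZE;
                unsigned long e = s + per_thread * PAGE_SIZE;

                if (s >= end)
                        break;
                if (e > end)
                        e = end;
                work[n].start_vaddr = s;
                work[n].end_vaddr = e;
                pthread_create(&tids[n], NULL, worker, &work[n]);
                n++;
        }
        for (i = 0; i < n; i++)
                pthread_join(tids[i], NULL);
        return 0;
}

Build with "gcc -pthread"; each worker gets a disjoint sub-range, which is
the property that lets the pinning (and the page clearing it implies) use
more than one CPU.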
On Mon, 5 Nov 2018 11:55:51 -0500
Daniel Jordan wrote:
> When starting a large-memory kvm guest, it takes an excessively long
> time to start the boot process because qemu must pin all guest pages to
> accommodate DMA when VFIO is in use. Currently just one CPU is
> responsible for the page [...]
When starting a large-memory kvm guest, it takes an excessively long
time to start the boot process because qemu must pin all guest pages to
accommodate DMA when VFIO is in use. Currently just one CPU is
responsible for the page pinning, which usually boils down to page
clearing time-wise, so the [...]
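The claim that pinning "usually boils down to page clearing time-wise" can
be observed from user space: the first time anonymous memory is locked,
every page must be faulted in and zero-filled, while re-locking already
resident pages is comparatively cheap. A small self-contained illustration
follows; it is not from the thread, just plain mmap/mlock, and may need
"ulimit -l" raised to cover the locked size.

#include <stdio.h>
#include <sys/mman.h>
#include <time.h>

static double now_sec(void)
{
        struct timespec ts;

        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
}

int main(void)
{
        size_t len = 256UL << 20;       /* 256 MiB; shrink if mlock fails */
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        double t0, t1, t2;

        if (p == MAP_FAILED) {
                perror("mmap");
                return 1;
        }

        t0 = now_sec();
        if (mlock(p, len)) {            /* faults in and zeroes every page */
                perror("mlock");
                return 1;
        }
        t1 = now_sec();

        munlock(p, len);
        t2 = now_sec();
        if (mlock(p, len)) {            /* pages already resident: fast */
                perror("mlock");
                return 1;
        }

        printf("first mlock (incl. page clearing): %.3fs\n", t1 - t0);
        printf("second mlock (pages resident):     %.3fs\n",
               now_sec() - t2);
        munmap(p, len);
        return 0;
}

The gap between the two timings is the zero-fill cost that the patch
spreads across CPUs.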