On Mon, Jun 29, 2020 at 11:28:41AM -0400, Michael S. Tsirkin wrote:
> On Mon, Jun 29, 2020 at 10:26:46AM +0100, Stefan Hajnoczi wrote:
> > On Sun, Jun 28, 2020 at 02:34:37PM +0800, Jason Wang wrote:
> > > 
> > > On 2020/6/25 9:57 PM, Stefan Hajnoczi wrote:
> > > > These patches are not ready to be merged because I was unable to
> > > > measure a performance improvement. I'm publishing them so they are
> > > > archived in case someone picks up this work again in the future.
> > > > 
> > > > The goal of these patches is to allocate virtqueues and driver state
> > > > from the device's NUMA node for optimal memory access latency. Only
> > > > guests with a vNUMA topology and virtio devices spread across vNUMA
> > > > nodes benefit from this.  In other cases the memory placement is fine
> > > > and we don't need to take NUMA into account inside the guest.
> > > > 
> > > > These patches could be extended to virtio_net.ko and other devices in
> > > > the future. I only tested virtio_blk.ko.
> > > > 
> > > > The benchmark configuration was designed to trigger worst-case NUMA
> > > > placement:
> > > >   * Physical NVMe storage controller on host NUMA node 0
> 
> It's possible that NUMA is not such a big deal for NVMe.
> And it's possible that the BIOS misconfigures ACPI, reporting NUMA
> placement incorrectly.
> I think the best thing to try is to use a ramdisk on a specific NUMA
> node.

Using a ramdisk is an interesting idea, thanks.
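
In case it helps whoever picks this up: assuming the idea is a host-side
ramdisk standing in for the NVMe device, one way to approximate it is a
tmpfs file whose pages are bound to a single host NUMA node with mbind()
before they are faulted in. Untested sketch (link with -lnuma; the path,
size and node number are just placeholders):

  #include <err.h>
  #include <fcntl.h>
  #include <numaif.h>          /* mbind(), MPOL_BIND */
  #include <string.h>
  #include <sys/mman.h>
  #include <unistd.h>

  #define IMG_PATH  "/dev/shm/numa-test.img"   /* tmpfs-backed */
  #define IMG_SIZE  (1UL << 30)                /* 1 GiB */
  #define IMG_NODE  0

  int main(void)
  {
          unsigned long nodemask = 1UL << IMG_NODE;
          void *p;
          int fd;

          fd = open(IMG_PATH, O_RDWR | O_CREAT, 0600);
          if (fd < 0 || ftruncate(fd, IMG_SIZE) < 0)
                  err(1, "open/ftruncate");

          p = mmap(NULL, IMG_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED,
                   fd, 0);
          if (p == MAP_FAILED)
                  err(1, "mmap");

          /* Bind the mapping before touching it so pages land on IMG_NODE */
          if (mbind(p, IMG_SIZE, MPOL_BIND, &nodemask,
                    sizeof(nodemask) * 8, 0) < 0)
                  err(1, "mbind");

          /* Fault everything in now so the placement happens up front */
          memset(p, 0, IMG_SIZE);

          munmap(p, IMG_SIZE);
          close(fd);
          return 0;
  }

The resulting file can then back the virtio-blk device (or the whole
host-side setup can simply run under numactl --membind) so the "disk"
contents sit on the chosen node. Just one way to do it, of course.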
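
For anyone reading this in the archive later: the allocation change the
series aims at boils down to passing the device's node to the existing
node-aware allocators. Simplified illustration, not the actual patch (the
struct and function names are made up):

  #include <linux/device.h>
  #include <linux/slab.h>
  #include <linux/virtio.h>

  /* made-up per-device driver state */
  struct foo_state {
          unsigned int num_vqs;
  };

  static struct foo_state *foo_alloc_state(struct virtio_device *vdev)
  {
          /* dev_to_node() returns NUMA_NO_NODE when no node is known */
          int node = dev_to_node(&vdev->dev);

          /* kzalloc_node() falls back to the default policy in that case */
          return kzalloc_node(sizeof(struct foo_state), GFP_KERNEL, node);
  }

The virtqueue memory needs the same treatment, as the cover letter says;
this only shows the driver-state half.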

Stefan
