On 3/6/07, Ralf Baechle <[EMAIL PROTECTED]> wrote:
> Price question: why would this patch make a difference under VMware? :-)

Moving struct pcnet32_private out of the GFP_DMA32 allocation that holds
the init_block and into the GFP_KERNEL netdev allocation may be a win even
on systems where GFP_DMA32 memory is normally cached, because the private
area gets read ahead into cache whenever the netdev is touched.  (This
could be a bigger win still if the most often accessed members were moved
to the beginning of struct pcnet32_private.)
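
For concreteness, the probe path under the new layout would look roughly
like this (a minimal sketch of the 2.6.20-era pattern, with pdev being the
PCI device from the probe function; the init_block/init_dma_addr field
names are illustrative rather than the driver's exact members):

	struct net_device *dev;
	struct pcnet32_private *lp;

	/* Old layout (roughly): lp was carved out of the coherent DMA
	 * buffer that also held the init block, so every access to lp
	 * touched GFP_DMA32 memory:
	 *
	 *     lp = pci_alloc_consistent(pdev, sizeof(*lp), &lp_dma_addr);
	 *
	 * New layout: lp lives in the GFP_KERNEL netdev allocation and is
	 * pulled into cache together with struct net_device:
	 */
	dev = alloc_etherdev(sizeof(struct pcnet32_private));
	if (!dev)
		return -ENOMEM;
	lp = netdev_priv(dev);	/* private area sits right after the netdev */

	/* Only the init block still has to come from DMA-able memory: */
	lp->init_block = pci_alloc_consistent(pdev, sizeof(*lp->init_block),
					      &lp->init_dma_addr);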

On the other hand, VMware may engage in some sort of sleight of hand
to keep the low 16MB or more of the VM's memory contiguously allocated
and warm in the real MMU (locked hugepage TLB entries?  I'm
speculating).  So having allocated the private area as part of a
DMA-able page may have silently spared you a page fault on access.

On the third hand, the new layout will rarely be a problem if the
whole netdev (including private area) fits in one page, since if you
were going to take a page fault you took it when you looked into the
netdev.  So it's hard to see how it could cause a performance
regression unless VMware loses its timeslice (and the TLB entry for
the page containing the netdev) in the middle of pcnet32_rx, etc.
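
If you wanted to pin down the "fits in one page" assumption, a build-time
check along these lines would do it (hypothetical, not something the driver
actually carries; the 32-byte rounding mirrors how alloc_etherdev() pads
struct net_device before appending the private area):

	/* Hypothetical sanity check: break the build if the padded netdev
	 * plus the pcnet32 private area no longer fit in a single page. */
	BUILD_BUG_ON(ALIGN(sizeof(struct net_device), 32) +
		     sizeof(struct pcnet32_private) > PAGE_SIZE);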

Lennart is of course right that most VMware VMs are using vmxnet
instead, but they're also using distro kernels.  :-)  I find VMware
useful for certain kinds of kernel experimentation, and don't want to
fool with vmxnet every time I flip a kernel config switch.  Linus
kernels run just fine on VMware Workstation using piix, mptspi, and
pcnet32 (I'm running vanilla 2.6.20.1 right now).  I would think that
changes to those drivers should be regression-tested under VMware, and
I'm happy to help.

Cheers,
- Michael