On (12/12/14 11:16), Sowmini Varadhan wrote:
But getting back to Linux, 3 Gbps is a far cry from 10 Gbps.
I need to spend some time collecting data to convince myself that
this is purely because of HV/IOMMU inefficiency.
[e1000-devel has been Bcc'ed]
I collected the stats, and I have
On (12/11/14 15:27), David Miller wrote:
BTW, Solaris also does things which are remotely exploitable, so
these optimizations that get them line rate have a serious cost.
In their NIU driver, they recycle all buffers in an RX queue rather
than allocating new buffers.
This means that a
From: Sowmini Varadhan sowmini.varad...@oracle.com
Date: Thu, 11 Dec 2014 14:45:42 -0500
1. lockstat and perf report that iommu-lock is the hot-lock (in a typical
instance, I get about 21M contentions out of 27M acquisitions, 25 us
avg wait time). Even if I fix this issue (see below), I
On (12/11/14 15:09), David Miller wrote:
The real overhead is unavoidable due to the way the hypervisor access
to the IOMMU is implemented in sun4v.
If we had direct access to the hardware, we could avoid all of the
real overhead in 99% of all IOMMU mappings, as we do for pre-sun4v
From: Sowmini Varadhan sowmini.varad...@oracle.com
Date: Thu, 11 Dec 2014 15:21:00 -0500
All this may be true, but it would also be true for Solaris, which
manages to do line-speed (for the exact same setup), so there must be
some other bottleneck going on?
They have DMA mapping interfaces
From: David Miller da...@davemloft.net
Date: Thu, 11 Dec 2014 15:24:17 -0500 (EST)
From: Sowmini Varadhan sowmini.varad...@oracle.com
Date: Thu, 11 Dec 2014 15:21:00 -0500
All this may be true, but it would also be true for Solaris, which
manages to do line-speed (for the exact same setup),