Re: [PATCH net] xsk: remove cheap_dma optimization

2020-07-08 Thread Robin Murphy

On 2020-07-08 07:50, Christoph Hellwig wrote:

On Mon, Jun 29, 2020 at 04:41:16PM +0100, Robin Murphy wrote:

On 2020-06-28 18:16, Björn Töpel wrote:


On 2020-06-27 09:04, Christoph Hellwig wrote:

On Sat, Jun 27, 2020 at 01:00:19AM +0200, Daniel Borkmann wrote:

Given there is roughly a ~5 weeks window at max where this removal could
still be applied in the worst case, could we come up with a fix / proposal
first that moves this into the DMA mapping core? If there is something that
can be agreed upon by all parties, then we could avoid re-adding the 9%
slowdown. :/


I'd rather turn it upside down - this abuse of the internals blocks work
that has basically just missed the previous window and I'm not going
to wait weeks to sort out the API misuse.  But we can add optimizations
back later if we find a sane way.



I'm not super excited about the performance loss, but I do get
Christoph's frustration about gutting the DMA API making it harder for
DMA people to get work done. Let's try to solve this properly using
proper DMA APIs.



That being said I really can't see how this would make so much of a
difference.  What architecture and what dma_ops are you using for
those measurements?  What is the workload?



The 9% is for an AF_XDP (Fast raw Ethernet socket. Think AF_PACKET, but
faster.) benchmark: receive the packet from the NIC, and drop it. The DMA
syncs stand out in the perf top:

   28.63%  [kernel]   [k] i40e_clean_rx_irq_zc
   17.12%  [kernel]   [k] xp_alloc
    8.80%  [kernel]   [k] __xsk_rcv_zc
    7.69%  [kernel]   [k] xdp_do_redirect
    5.35%  bpf_prog_992d9ddc835e5629  [k] bpf_prog_992d9ddc835e5629
    4.77%  [kernel]   [k] xsk_rcv.part.0
    4.07%  [kernel]   [k] __xsk_map_redirect
    3.80%  [kernel]   [k] dma_direct_sync_single_for_cpu
    3.03%  [kernel]   [k] dma_direct_sync_single_for_device
    2.76%  [kernel]   [k] i40e_alloc_rx_buffers_zc
    1.83%  [kernel]   [k] xsk_flush
...

For this benchmark the dma_ops are NULL (dma_is_direct() == true), and
the main issue is that SWIOTLB is now unconditionally enabled [1] for
x86, and for each sync we have to check if is_swiotlb_buffer(), which
involves some costly indirection.

That was pretty much what my hack avoided. Instead we did all the checks
upfront, since AF_XDP has long-term DMA mappings, and just set a flag
for that.

Avoiding the whole "is this address swiotlb" check in
dma_direct_sync_single_for_{cpu,device}() per packet
would help a lot.


I'm pretty sure that's one of the things we hope to achieve with the
generic bypass flag :)


Somewhat related to the DMA API; it would have performance benefits for
AF_XDP if the DMA range of the mapped memory was linear, e.g. by
utilizing the IOMMU. I've started hacking a thing a little bit, but it
would be nice if such an API was part of the mapping core.

Input: array of pages Output: array of dma addrs (and obviously dev,
flags and such)

For non-IOMMU len(array of pages) == len(array of dma addrs)
For best-case IOMMU len(array of dma addrs) == 1 (large linear space)

But that's for later. :-)


FWIW you will typically get that behaviour from IOMMU-based implementations
of dma_map_sg() right now, although it's not strictly guaranteed. If you
can weather some additional setup cost of calling
sg_alloc_table_from_pages() plus walking the list after mapping to test
whether you did get a contiguous result, you could start taking advantage
of it as some of the dma-buf code in DRM and v4l2 does already (although
those cases actually treat it as a strict dependency rather than an
optimisation).


Yikes.


Heh, consider it as iommu_dma_alloc_remap() and 
vb2_dc_get_contiguous_size() having a beautiful baby ;)



I'm inclined to agree that if we're going to see more of these cases, a new
API call that did formally guarantee a DMA-contiguous mapping (either via
IOMMU or bounce buffering) or failure might indeed be handy.


I was planning on adding a dma-level API to add more pages to an
IOMMU batch, but was waiting for at least the intel IOMMU driver to be
converted to the dma-iommu code (and preferably arm32 and s390 as well).


FWIW I did finally get round to having an initial crack at arm32 
recently[1] - of course it needs significant rework already for all the 
IOMMU API motion, and I still need to attempt to test any of it (at 
least I do have a couple of 32-bit boards here), but with 

Re: [PATCH net] xsk: remove cheap_dma optimization

2020-07-08 Thread Christoph Hellwig
On Wed, Jul 08, 2020 at 07:57:23AM +, Song Bao Hua (Barry Song) wrote:
> > int dma_map_batch_start(struct device *dev, size_t rounded_len,
> > enum dma_data_direction dir, unsigned long attrs, dma_addr_t *addr);
> > int dma_map_batch_add(struct device *dev, dma_addr_t *addr, struct page
> > *page,
> > unsigned long offset, size_t size);
> > int dma_map_batch_end(struct device *dev, int ret, dma_addr_t start_addr);
> > 
> 
> Hello Christoph,
> 
> What is the difference between dma_map_batch_add() and adding the buffer to the sg list
> of dma_map_sg()?

There is no struct scatterlist involved in this API, avoiding the
overhead of allocating it (which is kinda the point).


RE: [PATCH net] xsk: remove cheap_dma optimization

2020-07-08 Thread Song Bao Hua (Barry Song)



> -Original Message-
> From: netdev-ow...@vger.kernel.org [mailto:netdev-ow...@vger.kernel.org]
> On Behalf Of Christoph Hellwig
> Sent: Wednesday, July 8, 2020 6:50 PM
> To: Robin Murphy 
> Cc: Björn Töpel ; Christoph Hellwig ;
> Daniel Borkmann ; maxi...@mellanox.com;
> konrad.w...@oracle.com; jonathan.le...@gmail.com;
> linux-ker...@vger.kernel.org; iommu@lists.linux-foundation.org;
> net...@vger.kernel.org; b...@vger.kernel.org; da...@davemloft.net;
> magnus.karls...@intel.com
> Subject: Re: [PATCH net] xsk: remove cheap_dma optimization
> 
> On Mon, Jun 29, 2020 at 04:41:16PM +0100, Robin Murphy wrote:
> > On 2020-06-28 18:16, Björn Töpel wrote:
> >>
> >> On 2020-06-27 09:04, Christoph Hellwig wrote:
> >>> On Sat, Jun 27, 2020 at 01:00:19AM +0200, Daniel Borkmann wrote:
> >>>> Given there is roughly a ~5 weeks window at max where this removal
> could
> >>>> still be applied in the worst case, could we come up with a fix /
> >>>> proposal
> >>>> first that moves this into the DMA mapping core? If there is something
> >>>> that
> >>>> can be agreed upon by all parties, then we could avoid re-adding the 9%
> >>>> slowdown. :/
> >>>
> >>> I'd rather turn it upside down - this abuse of the internals blocks work
> >>> that has basically just missed the previous window and I'm not going
> >>> to wait weeks to sort out the API misuse.  But we can add optimizations
> >>> back later if we find a sane way.
> >>>
> >>
> >> I'm not super excited about the performance loss, but I do get
> >> Christoph's frustration about gutting the DMA API making it harder for
> >> DMA people to get work done. Let's try to solve this properly using
> >> proper DMA APIs.
> >>
> >>
> >>> That being said I really can't see how this would make so much of a
> >>> difference.  What architecture and what dma_ops are you using for
> >>> those measurements?  What is the workload?
> >>>
> >>
> >> The 9% is for an AF_XDP (Fast raw Ethernet socket. Think AF_PACKET, but
> >> faster.) benchmark: receive the packet from the NIC, and drop it. The DMA
> >> syncs stand out in the perf top:
> >>
> >>    28.63%  [kernel]   [k] i40e_clean_rx_irq_zc
> >>    17.12%  [kernel]   [k] xp_alloc
> >>     8.80%  [kernel]   [k] __xsk_rcv_zc
> >>     7.69%  [kernel]   [k] xdp_do_redirect
> >>     5.35%  bpf_prog_992d9ddc835e5629  [k] bpf_prog_992d9ddc835e5629
> >>     4.77%  [kernel]   [k] xsk_rcv.part.0
> >>     4.07%  [kernel]   [k] __xsk_map_redirect
> >>     3.80%  [kernel]   [k] dma_direct_sync_single_for_cpu
> >>     3.03%  [kernel]   [k] dma_direct_sync_single_for_device
> >>     2.76%  [kernel]   [k] i40e_alloc_rx_buffers_zc
> >>     1.83%  [kernel]   [k] xsk_flush
> >> ...
> >>
> >> For this benchmark the dma_ops are NULL (dma_is_direct() == true), and
> >> the main issue is that SWIOTLB is now unconditionally enabled [1] for
> >> x86, and for each sync we have to check if is_swiotlb_buffer(), which
> >> involves some costly indirection.
> >>
> >> That was pretty much what my hack avoided. Instead we did all the checks
> >> upfront, since AF_XDP has long-term DMA mappings, and just set a flag
> >> for that.
> >>
> >> Avoiding the whole "is this address swiotlb" check in
> >> dma_direct_sync_single_for_{cpu,device}() per packet
> >> would help a lot.
> >
> > I'm pretty sure that's one of the things we hope to achieve with the
> > generic bypass flag :)
> >
> >> Somewhat related to the DMA API; it would have performance benefits for
> >> AF_XDP if the DMA range of the mapped memory was linear, e.g. by
> >> utilizing the IOMMU. I've started hacking a thing a little bit, but it
> >> would be nice if such an API was part of the mapping core.
> >>
> >> Input: array of pages Output: array of dma addrs (and obviously dev,
> >> flags and such)
> >>
> >> For non-IOMMU len(array of pages) == len(array of dma addrs)
> >> For best-case IOMMU len(array of dma addrs) == 1 (large linear space)
> >>
> >> But that's for later. :-)
> >
> > FWIW you will typically get that behaviour from IOMMU-based
>

Re: [PATCH net] xsk: remove cheap_dma optimization

2020-07-08 Thread Christoph Hellwig
On Mon, Jun 29, 2020 at 04:41:16PM +0100, Robin Murphy wrote:
> On 2020-06-28 18:16, Björn Töpel wrote:
>>
>> On 2020-06-27 09:04, Christoph Hellwig wrote:
>>> On Sat, Jun 27, 2020 at 01:00:19AM +0200, Daniel Borkmann wrote:
 Given there is roughly a ~5 weeks window at max where this removal could
 still be applied in the worst case, could we come up with a fix / 
 proposal
 first that moves this into the DMA mapping core? If there is something 
 that
 can be agreed upon by all parties, then we could avoid re-adding the 9%
 slowdown. :/
>>>
>>> I'd rather turn it upside down - this abuse of the internals blocks work
>>> that has basically just missed the previous window and I'm not going
>>> to wait weeks to sort out the API misuse.  But we can add optimizations
>>> back later if we find a sane way.
>>>
>>
>> I'm not super excited about the performance loss, but I do get
>> Christoph's frustration about gutting the DMA API making it harder for
>> DMA people to get work done. Let's try to solve this properly using
>> proper DMA APIs.
>>
>>
>>> That being said I really can't see how this would make so much of a
>>> difference.  What architecture and what dma_ops are you using for
>>> those measurements?  What is the workload?
>>>
>>
>> The 9% is for an AF_XDP (Fast raw Ethernet socket. Think AF_PACKET, but 
>> faster.) benchmark: receive the packet from the NIC, and drop it. The DMA 
>> syncs stand out in the perf top:
>>
>>    28.63%  [kernel]   [k] i40e_clean_rx_irq_zc
>>    17.12%  [kernel]   [k] xp_alloc
>>     8.80%  [kernel]   [k] __xsk_rcv_zc
>>     7.69%  [kernel]   [k] xdp_do_redirect
>>     5.35%  bpf_prog_992d9ddc835e5629  [k] bpf_prog_992d9ddc835e5629
>>     4.77%  [kernel]   [k] xsk_rcv.part.0
>>     4.07%  [kernel]   [k] __xsk_map_redirect
>>     3.80%  [kernel]   [k] dma_direct_sync_single_for_cpu
>>     3.03%  [kernel]   [k] dma_direct_sync_single_for_device
>>     2.76%  [kernel]   [k] i40e_alloc_rx_buffers_zc
>>     1.83%  [kernel]   [k] xsk_flush
>> ...
>>
>> For this benchmark the dma_ops are NULL (dma_is_direct() == true), and
>> the main issue is that SWIOTLB is now unconditionally enabled [1] for
>> x86, and for each sync we have to check if is_swiotlb_buffer(), which
>> involves some costly indirection.
>>
>> That was pretty much what my hack avoided. Instead we did all the checks
>> upfront, since AF_XDP has long-term DMA mappings, and just set a flag
>> for that.
>>
>> Avoiding the whole "is this address swiotlb" check in
>> dma_direct_sync_single_for_{cpu,device}() per packet
>> would help a lot.
>
> I'm pretty sure that's one of the things we hope to achieve with the 
> generic bypass flag :)
>
>> Somewhat related to the DMA API; it would have performance benefits for
>> AF_XDP if the DMA range of the mapped memory was linear, e.g. by
>> utilizing the IOMMU. I've started hacking a thing a little bit, but it
>> would be nice if such an API was part of the mapping core.
>>
>> Input: array of pages Output: array of dma addrs (and obviously dev,
>> flags and such)
>>
>> For non-IOMMU len(array of pages) == len(array of dma addrs)
>> For best-case IOMMU len(array of dma addrs) == 1 (large linear space)
>>
>> But that's for later. :-)
>
> FWIW you will typically get that behaviour from IOMMU-based implementations 
> of dma_map_sg() right now, although it's not strictly guaranteed. If you 
> can weather some additional setup cost of calling 
> sg_alloc_table_from_pages() plus walking the list after mapping to test 
> whether you did get a contiguous result, you could start taking advantage 
> of it as some of the dma-buf code in DRM and v4l2 does already (although 
> those cases actually treat it as a strict dependency rather than an 
> optimisation).

Yikes.

> I'm inclined to agree that if we're going to see more of these cases, a new 
> API call that did formally guarantee a DMA-contiguous mapping (either via 
> IOMMU or bounce buffering) or failure might indeed be handy.

I was planning on adding a dma-level API to add more pages to an
IOMMU batch, but was waiting for at least the intel IOMMU driver to be
converted to the dma-iommu code (and preferably arm32 and s390 as well).

Here is my old pseudo-code sketch for what I was aiming for from the
block/nvme perspective.  I haven't even implemented it yet, so there might
be some holes in the design:


/*
 * Returns 0 if batching is possible, positive number of segments required
 * if batching is not possible, or negative values on error.
 */
int dma_map_batch_start(struct device *dev, size_t rounded_len,
enum dma_data_direction dir, unsigned long attrs, dma_addr_t *addr);
int dma_map_batch_add(struct device *dev, dma_addr_t *addr, struct page *page,
unsigned long offset, size_t size);
int dma_map_batch_end(struct device *dev, int ret, dma_addr_t start_addr);
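
A rough usage sketch to make the intent concrete. Nothing here is implemented:
the helper name map_pool_batched() is made up, and the assumption that
dma_map_batch_add() advances *addr to the next IOVA slot is a guess.

/*
 * Hypothetical caller of the proposed batch API: map an array of pages
 * into one linear DMA range, bailing out on any error or if batching is
 * not possible for this device.
 */
static dma_addr_t map_pool_batched(struct device *dev, struct page **pages,
				   int nr_pages, enum dma_data_direction dir)
{
	dma_addr_t addr, start;
	int i, ret;

	ret = dma_map_batch_start(dev, (size_t)nr_pages * PAGE_SIZE, dir, 0,
				  &addr);
	if (ret)
		return DMA_MAPPING_ERROR; /* > 0: map per-segment instead */

	start = addr;
	for (i = 0; i < nr_pages; i++) {
		ret = dma_map_batch_add(dev, &addr, pages[i], 0, PAGE_SIZE);
		if (ret)
			break;
	}

	if (dma_map_batch_end(dev, ret, start))
		return DMA_MAPPING_ERROR;

	return start;	/* one contiguous DMA range covering all pages */
}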

Re: [PATCH net] xsk: remove cheap_dma optimization

2020-07-01 Thread Björn Töpel

On 2020-06-29 17:41, Robin Murphy wrote:

On 2020-06-28 18:16, Björn Töpel wrote:

[...]

Somewhat related to the DMA API; it would have performance benefits for
AF_XDP if the DMA range of the mapped memory was linear, e.g. by
utilizing the IOMMU. I've started hacking a thing a little bit, but it
would be nice if such an API was part of the mapping core.

Input: array of pages Output: array of dma addrs (and obviously dev,
flags and such)

For non-IOMMU len(array of pages) == len(array of dma addrs)
For best-case IOMMU len(array of dma addrs) == 1 (large linear space)

But that's for later. :-)


FWIW you will typically get that behaviour from IOMMU-based 
implementations of dma_map_sg() right now, although it's not strictly 
guaranteed. If you can weather some additional setup cost of calling 
sg_alloc_table_from_pages() plus walking the list after mapping to test 
whether you did get a contiguous result, you could start taking 
advantage of it as some of the dma-buf code in DRM and v4l2 does already 
(although those cases actually treat it as a strict dependency rather 
than an optimisation).


I'm inclined to agree that if we're going to see more of these cases, a 
new API call that did formally guarantee a DMA-contiguous mapping 
(either via IOMMU or bounce buffering) or failure might indeed be handy.




I forgot to reply to this one! My current hack is using the iommu code 
directly, similar to what vfio-pci does (hopefully not gutting the API 
this time ;-)).


Your approach sounds much nicer, and easier. I'll try that out! Thanks a 
lot for the pointers, and I might be back with more questions.



Cheers,
Björn


Robin.


Re: [PATCH net] xsk: remove cheap_dma optimization

2020-06-30 Thread Daniel Borkmann

On 6/30/20 7:07 AM, Christoph Hellwig wrote:

On Mon, Jun 29, 2020 at 05:18:38PM +0200, Daniel Borkmann wrote:

On 6/29/20 5:10 PM, Björn Töpel wrote:

On 2020-06-29 15:52, Daniel Borkmann wrote:


Ok, fair enough, please work with DMA folks to get this properly integrated and
restored then. Applied, thanks!


Daniel, you were too quick! Please revert this one; Christoph just submitted a 
4-patch series that addresses both the DMA API and the perf regression!


Nice, tossed from bpf tree then! (Looks like it didn't land on the bpf list yet,
but seems other mails are currently stuck as well on vger. I presume it will be
routed to Linus via Christoph?)


I sent the patches to the bpf list; did you get them now that vger
is unclogged?  Thinking about it, the best route might be through
bpf/net, so if that works for you please pick it up.


Yeah, that's fine, I just applied your series to the bpf tree. Thanks!

Re: [PATCH net] xsk: remove cheap_dma optimization

2020-06-29 Thread Christoph Hellwig
On Mon, Jun 29, 2020 at 05:18:38PM +0200, Daniel Borkmann wrote:
> On 6/29/20 5:10 PM, Björn Töpel wrote:
>> On 2020-06-29 15:52, Daniel Borkmann wrote:
>>>
>>> Ok, fair enough, please work with DMA folks to get this properly integrated 
>>> and
>>> restored then. Applied, thanks!
>>
>> Daniel, you were too quick! Please revert this one; Christoph just submitted 
>> a 4-patch series that addresses both the DMA API and the perf regression!
>
> Nice, tossed from bpf tree then! (Looks like it didn't land on the bpf list 
> yet,
> but seems other mails are currently stuck as well on vger. I presume it will 
> be
> routed to Linus via Christoph?)

I sent the patches to the bpf list; did you get them now that vger
is unclogged?  Thinking about it, the best route might be through
bpf/net, so if that works for you please pick it up.


Re: [PATCH net] xsk: remove cheap_dma optimization

2020-06-29 Thread Björn Töpel


On 2020-06-29 17:18, Daniel Borkmann wrote:
Nice, tossed from bpf tree then! (Looks like it didn't land on the bpf list yet,
but seems other mails are currently stuck as well on vger. I presume it will be
routed to Linus via Christoph?)


Thanks!

Christoph (according to the other mail) was OK taking the series via 
your bpf, Dave's net, or his dma-mapping tree.



Björn

Re: [PATCH net] xsk: remove cheap_dma optimization

2020-06-29 Thread Robin Murphy

On 2020-06-28 18:16, Björn Töpel wrote:


On 2020-06-27 09:04, Christoph Hellwig wrote:

On Sat, Jun 27, 2020 at 01:00:19AM +0200, Daniel Borkmann wrote:

Given there is roughly a ~5 weeks window at max where this removal could
still be applied in the worst case, could we come up with a fix / proposal
first that moves this into the DMA mapping core? If there is something that
can be agreed upon by all parties, then we could avoid re-adding the 9%
slowdown. :/


I'd rather turn it upside down - this abuse of the internals blocks work
that has basically just missed the previous window and I'm not going
to wait weeks to sort out the API misuse.  But we can add optimizations
back later if we find a sane way.



I'm not super excited about the performance loss, but I do get
Christoph's frustration about gutting the DMA API making it harder for
DMA people to get work done. Let's try to solve this properly using
proper DMA APIs.



That being said I really can't see how this would make so much of a
difference.  What architecture and what dma_ops are you using for
those measurements?  What is the workload?



The 9% is for an AF_XDP (Fast raw Ethernet socket. Think AF_PACKET, but 
faster.) benchmark: receive the packet from the NIC, and drop it. The 
DMA syncs stand out in the perf top:


   28.63%  [kernel]   [k] i40e_clean_rx_irq_zc
   17.12%  [kernel]   [k] xp_alloc
    8.80%  [kernel]   [k] __xsk_rcv_zc
    7.69%  [kernel]   [k] xdp_do_redirect
    5.35%  bpf_prog_992d9ddc835e5629  [k] bpf_prog_992d9ddc835e5629
    4.77%  [kernel]   [k] xsk_rcv.part.0
    4.07%  [kernel]   [k] __xsk_map_redirect
    3.80%  [kernel]   [k] dma_direct_sync_single_for_cpu
    3.03%  [kernel]   [k] dma_direct_sync_single_for_device
    2.76%  [kernel]   [k] i40e_alloc_rx_buffers_zc
    1.83%  [kernel]   [k] xsk_flush
...

For this benchmark the dma_ops are NULL (dma_is_direct() == true), and
the main issue is that SWIOTLB is now unconditionally enabled [1] for
x86, and for each sync we have to check if is_swiotlb_buffer(), which
involves some costly indirection.

That was pretty much what my hack avoided. Instead we did all the checks
upfront, since AF_XDP has long-term DMA mappings, and just set a flag
for that.

Avoiding the whole "is this address swiotlb" check in
dma_direct_sync_single_for_{cpu,device}() per packet
would help a lot.


I'm pretty sure that's one of the things we hope to achieve with the 
generic bypass flag :)



Somewhat related to the DMA API; it would have performance benefits for
AF_XDP if the DMA range of the mapped memory was linear, e.g. by
utilizing the IOMMU. I've started hacking a thing a little bit, but it
would be nice if such an API was part of the mapping core.

Input: array of pages Output: array of dma addrs (and obviously dev,
flags and such)

For non-IOMMU len(array of pages) == len(array of dma addrs)
For best-case IOMMU len(array of dma addrs) == 1 (large linear space)

But that's for later. :-)


FWIW you will typically get that behaviour from IOMMU-based 
implementations of dma_map_sg() right now, although it's not strictly 
guaranteed. If you can weather some additional setup cost of calling 
sg_alloc_table_from_pages() plus walking the list after mapping to test 
whether you did get a contiguous result, you could start taking 
advantage of it as some of the dma-buf code in DRM and v4l2 does already 
(although those cases actually treat it as a strict dependency rather 
than an optimisation).
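
A minimal sketch of that approach, under the assumption of a hypothetical 
helper name (map_pages_contig()) and with error handling kept short; the 
contiguity walk mirrors what vb2_dc_get_contiguous_size() does:

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Map a page array via dma_map_sg() and return a single DMA address if the
 * IOMMU happened to merge everything into one contiguous IOVA range;
 * otherwise undo the mapping and return DMA_MAPPING_ERROR so the caller can
 * fall back to per-page addresses. */
static dma_addr_t map_pages_contig(struct device *dev, struct page **pages,
				   unsigned int n_pages, struct sg_table *sgt)
{
	struct scatterlist *sg;
	dma_addr_t expected;
	size_t contig = 0;
	int i, nents;

	if (sg_alloc_table_from_pages(sgt, pages, n_pages, 0,
				      (unsigned long)n_pages * PAGE_SIZE,
				      GFP_KERNEL))
		return DMA_MAPPING_ERROR;

	nents = dma_map_sg(dev, sgt->sgl, sgt->orig_nents, DMA_BIDIRECTIONAL);
	if (!nents)
		goto free_table;

	/* Walk the mapped list to see whether a single IOVA range came back. */
	expected = sg_dma_address(sgt->sgl);
	for_each_sg(sgt->sgl, sg, nents, i) {
		if (sg_dma_address(sg) != expected)
			break;
		expected += sg_dma_len(sg);
		contig += sg_dma_len(sg);
	}

	if (contig == (size_t)n_pages * PAGE_SIZE)
		return sg_dma_address(sgt->sgl);  /* caller keeps sgt for unmap */

	dma_unmap_sg(dev, sgt->sgl, sgt->orig_nents, DMA_BIDIRECTIONAL);
free_table:
	sg_free_table(sgt);
	return DMA_MAPPING_ERROR;
}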


I'm inclined to agree that if we're going to see more of these cases, a 
new API call that did formally guarantee a DMA-contiguous mapping 
(either via IOMMU or bounce buffering) or failure might indeed be handy.


Robin.

Re: [PATCH net] xsk: remove cheap_dma optimization

2020-06-29 Thread Daniel Borkmann

On 6/29/20 5:10 PM, Björn Töpel wrote:

On 2020-06-29 15:52, Daniel Borkmann wrote:


Ok, fair enough, please work with DMA folks to get this properly integrated and
restored then. Applied, thanks!


Daniel, you were too quick! Please revert this one; Christoph just submitted a 
4-patch series that addresses both the DMA API and the perf regression!


Nice, tossed from bpf tree then! (Looks like it didn't land on the bpf list yet,
but seems other mails are currently stuck as well on vger. I presume it will be
routed to Linus via Christoph?)

Thanks,
Daniel

Re: [PATCH net] xsk: remove cheap_dma optimization

2020-06-29 Thread Björn Töpel

On 2020-06-29 15:52, Daniel Borkmann wrote:


Ok, fair enough, please work with DMA folks to get this properly integrated and
restored then. Applied, thanks!


Daniel, you were too quick! Please revert this one; Christoph just 
submitted a 4-patch series that addresses both the DMA API and the perf 
regression!



Björn

Re: [PATCH net] xsk: remove cheap_dma optimization

2020-06-29 Thread Daniel Borkmann

On 6/28/20 7:16 PM, Björn Töpel wrote:

On 2020-06-27 09:04, Christoph Hellwig wrote:

On Sat, Jun 27, 2020 at 01:00:19AM +0200, Daniel Borkmann wrote:

Given there is roughly a ~5 weeks window at max where this removal could
still be applied in the worst case, could we come up with a fix / proposal
first that moves this into the DMA mapping core? If there is something that
can be agreed upon by all parties, then we could avoid re-adding the 9%
slowdown. :/


I'd rather turn it upside down - this abuse of the internals blocks work
that has basically just missed the previous window and I'm not going
to wait weeks to sort out the API misuse.  But we can add optimizations
back later if we find a sane way.


I'm not super excited about the performance loss, but I do get
Christoph's frustration about gutting the DMA API making it harder for
DMA people to get work done. Let's try to solve this properly using
proper DMA APIs.


Ok, fair enough, please work with DMA folks to get this properly integrated and
restored then. Applied, thanks!

Re: [PATCH net] xsk: remove cheap_dma optimization

2020-06-28 Thread Björn Töpel


On 2020-06-27 09:04, Christoph Hellwig wrote:

On Sat, Jun 27, 2020 at 01:00:19AM +0200, Daniel Borkmann wrote:

Given there is roughly a ~5 weeks window at max where this removal could
still be applied in the worst case, could we come up with a fix / proposal
first that moves this into the DMA mapping core? If there is something that
can be agreed upon by all parties, then we could avoid re-adding the 9%
slowdown. :/


I'd rather turn it upside down - this abuse of the internals blocks work
that has basically just missed the previous window and I'm not going
to wait weeks to sort out the API misuse.  But we can add optimizations
back later if we find a sane way.



I'm not super excited about the performance loss, but I do get
Christoph's frustration about gutting the DMA API making it harder for
DMA people to get work done. Let's try to solve this properly using
proper DMA APIs.



That being said I really can't see how this would make so much of a
difference.  What architecture and what dma_ops are you using for
those measurements?  What is the workload?



The 9% is for an AF_XDP (Fast raw Ethernet socket. Think AF_PACKET, but 
faster.) benchmark: receive the packet from the NIC, and drop it. The 
DMA syncs stand out in the perf top:


  28.63%  [kernel]   [k] i40e_clean_rx_irq_zc
  17.12%  [kernel]   [k] xp_alloc
   8.80%  [kernel]   [k] __xsk_rcv_zc
   7.69%  [kernel]   [k] xdp_do_redirect
   5.35%  bpf_prog_992d9ddc835e5629  [k] bpf_prog_992d9ddc835e5629
   4.77%  [kernel]   [k] xsk_rcv.part.0
   4.07%  [kernel]   [k] __xsk_map_redirect
   3.80%  [kernel]   [k] dma_direct_sync_single_for_cpu
   3.03%  [kernel]   [k] dma_direct_sync_single_for_device
   2.76%  [kernel]   [k] i40e_alloc_rx_buffers_zc
   1.83%  [kernel]   [k] xsk_flush
...

For this benchmark the dma_ops are NULL (dma_is_direct() == true), and
the main issue is that SWIOTLB is now unconditionally enabled [1] for
x86, and for each sync we have to check if is_swiotlb_buffer(), which
involves some costly indirection.

That was pretty much what my hack avoided. Instead we did all the checks
upfront, since AF_XDP has long-term DMA mappings, and just set a flag
for that.

Avoiding the whole "is this address swiotlb" check in
dma_direct_sync_single_for_{cpu,device}() per packet
would help a lot.
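
To spell out the idea behind that hack (a sketch only, not the actual AF_XDP
code): decide once, at mapping time, whether syncs can ever be needed, cache
the answer in a flag, and test only the flag in the hot path instead of doing
the swiotlb address lookup per packet. The dma_need_sync() helper below is an
assumption about what the DMA core could export, not something available here:

#include <linux/dma-mapping.h>

struct xsk_frame_map {
	dma_addr_t addr;
	bool need_sync;		/* decided once, at mapping time */
};

static int xsk_frame_map_page(struct device *dev, struct xsk_frame_map *m,
			      struct page *page)
{
	m->addr = dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, m->addr))
		return -ENOMEM;
	/* Pay the swiotlb/coherency check once, not per packet. */
	m->need_sync = dma_need_sync(dev, m->addr);
	return 0;
}

static void xsk_frame_sync_for_cpu(struct device *dev,
				   struct xsk_frame_map *m, size_t len)
{
	if (m->need_sync)	/* hot path: a flag test, no address lookup */
		dma_sync_single_for_cpu(dev, m->addr, len, DMA_FROM_DEVICE);
}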

Somewhat related to the DMA API; it would have performance benefits for
AF_XDP if the DMA range of the mapped memory was linear, e.g. by
utilizing the IOMMU. I've started hacking a thing a little bit, but it
would be nice if such an API was part of the mapping core.

Input: array of pages Output: array of dma addrs (and obviously dev,
flags and such)

For non-IOMMU len(array of pages) == len(array of dma addrs)
For best-case IOMMU len(array of dma addrs) == 1 (large linear space)

But that's for later. :-)
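
To make that concrete, one possible shape for such a helper; the name and
signature are purely hypothetical:

/*
 * Map an array of pages in one call.  Returns the number of entries
 * written to dma_addrs, or a negative error:
 *   - direct mapping / no IOMMU: one address per page (nr_pages entries)
 *   - best-case IOMMU:           one large linear range (a single entry)
 */
int dma_map_page_array(struct device *dev, struct page **pages, int nr_pages,
		       enum dma_data_direction dir, unsigned long attrs,
		       dma_addr_t *dma_addrs);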


Björn


[1] commit: 09230cbc1bab ("swiotlb: move the SWIOTLB config symbol to 
lib/Kconfig")



Re: [PATCH net] xsk: remove cheap_dma optimization

2020-06-27 Thread Christoph Hellwig
On Sat, Jun 27, 2020 at 01:00:19AM +0200, Daniel Borkmann wrote:
> Given there is roughly a ~5 weeks window at max where this removal could
> still be applied in the worst case, could we come up with a fix / proposal
> first that moves this into the DMA mapping core? If there is something that
> can be agreed upon by all parties, then we could avoid re-adding the 9%
> slowdown. :/

I'd rather turn it upside down - this abuse of the internals blocks work
that has basically just missed the previous window and I'm not going
to wait weeks to sort out the API misuse.  But we can add optimizations
back later if we find a sane way.

That being said I really can't see how this would make so much of a
difference.  What architecture and what dma_ops are you using for
those measurements?  What is the workload?


Re: [PATCH net] xsk: remove cheap_dma optimization

2020-06-26 Thread Daniel Borkmann

On 6/26/20 3:43 PM, Björn Töpel wrote:

From: Björn Töpel 

When the AF_XDP buffer allocation API was introduced it had an
optimization, "cheap_dma". The idea was that when the umem was DMA
mapped, the pool also checked whether the mapping required a
synchronization (CPU to device, and vice versa). If not, it would be
marked as "cheap_dma" and the synchronization would be elided.

In [1] Christoph points out that the optimization above breaks the DMA
API abstraction, and should be removed. Further, Christoph points out
that optimizations like this should be done within the DMA mapping
core, and not elsewhere.

Unfortunately this has implications for the packet rate
performance. The AF_XDP rxdrop scenario shows a 9% decrease in packets
per second.

[1] https://lore.kernel.org/netdev/20200626074725.ga21...@lst.de/

Cc: Christoph Hellwig 
Fixes: 2b43470add8c ("xsk: Introduce AF_XDP buffer allocation API")
Signed-off-by: Björn Töpel 


Given there is roughly a ~5 weeks window at max where this removal could
still be applied in the worst case, could we come up with a fix / proposal
first that moves this into the DMA mapping core? If there is something that
can be agreed upon by all parties, then we could avoid re-adding the 9%
slowdown. :/

Thanks,
Daniel

Re: [PATCH net] xsk: remove cheap_dma optimization

2020-06-26 Thread Jonathan Lemon
On Fri, Jun 26, 2020 at 03:43:58PM +0200, Björn Töpel wrote:
> From: Björn Töpel 
> 
> When the AF_XDP buffer allocation API was introduced it had an
> optimization, "cheap_dma". The idea was that when the umem was DMA
> mapped, the pool also checked whether the mapping required a
> synchronization (CPU to device, and vice versa). If not, it would be
> marked as "cheap_dma" and the synchronization would be elided.
> 
> In [1] Christoph points out that the optimization above breaks the DMA
> API abstraction, and should be removed. Further, Christoph points out
> that optimizations like this should be done within the DMA mapping
> core, and not elsewhere.
> 
> Unfortunately this has implications for the packet rate
> performance. The AF_XDP rxdrop scenario shows a 9% decrease in packets
> per second.
> 
> [1] https://lore.kernel.org/netdev/20200626074725.ga21...@lst.de/
> 
> Cc: Christoph Hellwig 
> Fixes: 2b43470add8c ("xsk: Introduce AF_XDP buffer allocation API")
> Signed-off-by: Björn Töpel 

Acked-by: Jonathan Lemon 