On 24.11.2016 at 18:55, Logan Gunthorpe wrote:
Hey,

On 24/11/16 02:45 AM, Christian König wrote:
E.g. it can happen that PCI device A exports its BAR using ZONE_DEVICE.
Now PCI device B (a SATA device) can directly read/write to it because
it is on the same bus segment, but PCI device C (a network card for
example) can't because it is on a different bus segment and the bridge
can't handle P2P transactions.
Yeah, that could be an issue but in our experience we have yet to see
it. We've tested with two separate PCI buses on different CPUs connected
through QPI links and it works fine. (It is rather slow but I understand
Intel has improved the bottleneck in newer CPUs than the ones we tested.)

Well, Serguei sent me a couple of documents about QPI when we started to discuss this internally as well, and that's exactly one of the cases I had in mind when writing this.

If I understood it correctly, P2P is technically possible on such systems, but not necessarily a good idea. Usually it is faster to just use a bounce buffer when the peers are a bit "farther" apart.

That this problem is solved on newer hardware is good, but it doesn't help us at all if we want to support at least systems from the last five years or so.

It may just be older hardware that has this issue. I expect that as long
as a failed transfer can be handled gracefully by the initiator, I don't
see a need to predetermine whether a device can see another device's memory.

I don't want to predetermine whether a device can see another device's memory at get_user_pages() time.

My thinking was going more in the direction of a whitelist, to figure out at dma_map_single()/dma_map_sg() time whether we should use a bounce buffer or not.

Christian.



Logan


