On 11/30/2016 7:28 PM, Serguei Sagalovitch wrote:
> On 2016-11-30 11:23 AM, Jason Gunthorpe wrote:
>>> Yes, that sounds fine. Can we simply kill the process from the GPU driver?
>>> Or do we need to extend the OOM killer to manage GPU pages?
>> I don't know..
> We could use send_sig_info() to send a signal from the kernel to user
> space, so in theory the GPU driver could issue a KILL signal to some
> process.
> 
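Right. As a rough sketch of what that could look like in the driver
(gpu_kill_owner() and the idea of caching the owning task_struct at
allocation time are made up here for illustration):

#include <linux/kernel.h>
#include <linux/sched.h>
#include <linux/signal.h>
#include <linux/string.h>

/* Hypothetical helper: forcibly kill the process that owns a GPU
 * allocation, e.g. when the driver runs out of GPU memory.  Assumes
 * the driver took a reference on the owning task_struct when the
 * buffer was allocated. */
static void gpu_kill_owner(struct task_struct *owner)
{
        struct siginfo info;

        memset(&info, 0, sizeof(info));
        info.si_signo = SIGKILL;
        info.si_code = SI_KERNEL;

        /* SIGKILL cannot be caught or ignored, so the process is
         * reliably torn down, and the driver's release hook can then
         * free the GPU pages. */
        send_sig_info(SIGKILL, &info, owner);
}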
>> On Wed, Nov 30, 2016 at 12:45:58PM +0200, Haggai Eran wrote:
>>> I think we can satisfy the kernel's needs with ZONE_DEVICE and
>>> DMA-API support for peer to peer. I'm not sure we need vmap. We need
>>> a way to have a scatterlist of MMIO pfns, and ZONE_DEVICE allows that.
> I do not think that using the DMA-API as it is (at least in its
> current form) is the best solution:
> 
> - It deals with handles/fds for the whole allocation, but a client
> could/will use sub-allocation, and it is theoretically possible to
> "merge" several allocations into one from the GPU's perspective.
> - It requires knowing in advance what to export, but because "sharing"
> is controlled from user space, that means we must "export" every
> allocation by default.
> - It deals with fds/handles, but a user application may work with
> addresses/pointers.

Aren't you confusing DMABUF and the DMA-API? The DMA-API is how you
program the IOMMU (dma_map_page()/dma_map_sg()/etc.). The comment above
is just about the need to extend this API to allow mapping peer device
pages to bus addresses.
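To make it concrete, this is the kind of call I mean (a minimal sketch;
map_for_dma() is just an illustrative wrapper name):

#include <linux/dma-mapping.h>
#include <linux/scatterlist.h>

/* Map a scatterlist for DMA by 'dev'.  dma_map_sg() programs the
 * IOMMU (if there is one) and fills in the bus addresses that are
 * read back with sg_dma_address()/sg_dma_len().  Today this path
 * assumes the entries refer to ordinary system memory pages; peer
 * to peer support would teach it about MMIO pfns as well. */
static int map_for_dma(struct device *dev, struct scatterlist *sgl,
                       int nents)
{
        int count = dma_map_sg(dev, sgl, nents, DMA_BIDIRECTIONAL);

        return count ? count : -ENOMEM;
}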

In the past I sent an RFC for using DMABUF for peer to peer. I think it
had some advantages for legacy devices. I agree that working with
addresses and pointers through something like HMM/ODP is much more
flexible and easier to program from user space. For legacy devices,
DMABUF would have given you a way to pin the pages so the GPU knows not
to move them; however, that can probably also be achieved simply via the
reference count on ZONE_DEVICE pages. The other nice thing about DMABUF
is that it can migrate the buffer itself during attachment according to
the requirements of the attaching device, so the exporter can
automatically decide whether to use p2p or a staging buffer.
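Roughly, the importer side would look like this (a sketch;
import_peer_buffer() is an illustrative name, and the migration
decision lives in the exporter's attach/map_dma_buf callbacks):

#include <linux/dma-buf.h>
#include <linux/err.h>

static struct sg_table *import_peer_buffer(struct dma_buf *buf,
                                           struct device *dev)
{
        struct dma_buf_attachment *attach;
        struct sg_table *sgt;

        attach = dma_buf_attach(buf, dev);
        if (IS_ERR(attach))
                return ERR_CAST(attach);

        /* The exporter sees which device is attaching and can hand
         * out peer bus addresses or migrate the buffer to a staging
         * area before mapping. */
        sgt = dma_buf_map_attachment(attach, DMA_BIDIRECTIONAL);
        if (IS_ERR(sgt))
                dma_buf_detach(buf, attach);
        return sgt;
}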

> 
> Also, the current DMA-API forces you to redo all the DMA table
> programming every time, regardless of whether the location actually
> changed. With a vma / mmu we are able to install a notifier to
> intercept changes in location and update the translation tables only
> as needed (and we do not need to keep pages pinned via
> get_user_pages()).
I agree.
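A rough sketch of that notifier-based approach (gpu_unmap_range() is a
hypothetical driver hook; the rest is the stock mmu_notifier
machinery):

#include <linux/kernel.h>
#include <linux/mmu_notifier.h>

struct gpu_mirror {
        struct mmu_notifier mn;
        /* ... driver translation state ... */
};

static void gpu_invalidate_range_start(struct mmu_notifier *mn,
                                       struct mm_struct *mm,
                                       unsigned long start,
                                       unsigned long end)
{
        struct gpu_mirror *m = container_of(mn, struct gpu_mirror, mn);

        /* Only [start, end) changed, so only those GPU translations
         * need to be torn down (and rebuilt on the next fault); no
         * long-lived get_user_pages() pin is required. */
        gpu_unmap_range(m, start, end);
}

static const struct mmu_notifier_ops gpu_mn_ops = {
        .invalidate_range_start = gpu_invalidate_range_start,
};

/* At init: m->mn.ops = &gpu_mn_ops;
 * mmu_notifier_register(&m->mn, current->mm); */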