On Tue, Apr 05, 2016 at 10:02:07AM -0700, John Baldwin wrote:
> For the ioctl I planned to either 1) call vm_mmap_object() or the like
> and return the virtual address to the user, or 2) return the mmap offset
> from the ioctl, which the user would then supply to mmap() and which
> d_mmap_single would use to find the object created by the ioctl. 1) is
> probably simpler and is
> what I was leaning towards. Still, I want to be able to handle invalidations
> either by pinning the BAR while the object is mapped, or by being able to
> invalidate the object. Given that you can eject a hotplug PCI device, I
> think explicit
> invalidation is the better route in this case. I would create a VM object for
> each BAR on the first mmap request and save a reference to it in the PCI bus
> ivars. If the BAR is ever cleared I would be able to find the object and
> invalidate it ensuring any programs that then tried to access it would get a
> page fault instead of accessing some other random thing.
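For concreteness, a d_mmap_single handler for the scheme described above
might look roughly like the sketch below. This is not compilable as-is;
pci_bar_find() and struct pci_bar_mapping are invented names standing in
for however the driver records the per-BAR object and its fake offset
range:

```c
/*
 * Sketch only: map the fake offset back to the per-BAR VM object
 * that the ioctl created.  pci_bar_find() and the bm_* fields are
 * hypothetical.
 */
static int
pci_bar_mmap_single(struct cdev *cdev, vm_ooffset_t *offset, vm_size_t size,
    struct vm_object **object, int nprot)
{
	struct pci_bar_mapping *bm;

	bm = pci_bar_find(cdev, *offset, size);	/* decode the fake offset */
	if (bm == NULL)
		return (EINVAL);

	/* The caller consumes one reference on the returned object. */
	vm_object_reference(bm->bm_object);
	*object = bm->bm_object;
	*offset -= bm->bm_base;		/* offset within the BAR's object */
	return (0);
}
```

Invalidation on BAR clear or device ejection would then operate on
bm->bm_object, so stale mappings fault instead of touching freed resources.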
Option 2) is what I discussed, and what has been used for GEM and TTM.
It allows creating an object per buffer (per BAR for the /dev/pci case),
and you indeed can easily iterate over the managed pages belonging to a
given buffer/BAR because they are on the object's page queue.
This scheme utilizes d_mmap_single() on the 'global' device node (/dev/pci,
/dev/dri/cardN etc.), which takes the offset and decodes it into the
corresponding per-buffer object.
In my opinion, it is prettier than an explicit call to vm_mmap_object(),
since it leaves all the high-level work to the VM subsystem proper. The
driver only needs to create the suitable object (and manage the offsets).
On the other hand, GEM has to emulate another Linux interface, where the
ioctl() really performs the mapping. But again, there it is simpler
to ensure that the vm_object for the buffer/BAR is created, and then call
vm_map_find() directly, not even touching the middle level of
vm_mmap_object(). See
freebsd-current@freebsd.org mailing list
To unsubscribe, send any mail to "freebsd-current-unsubscr...@freebsd.org"