On Fri, Feb 15, 2019 at 2:51 AM Brian Starkey <brian.star...@arm.com> wrote:
>
> Hi John,
>
> On Thu, Feb 14, 2019 at 09:38:29AM -0800, John Stultz wrote:
> >
> [snip]
>
> > Some thoughts, as this ABI break has the potential to be pretty painful.
> >
> > 1) Unfortunately, this ABI is exposed *through* libion via
> > ion_alloc/ion_alloc_fd out to gralloc implementations, which means it
> > will have a wider impact on vendor userland code.
>
> I figured libion could fairly easily loop through all the set bits in
> heap_mask and call the ioctl for each one until an allocation succeeds.
> That preserves the old behaviour from the libion clients' perspective.

Potentially, though that still implicitly caps the number of heaps at
32, so I'm not sure what the net benefit would be.
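
For illustration, the fallback Brian describes might look something
like the sketch below. ion_alloc_one_heap() is just a placeholder for
whatever the eventual single-heap allocation call turns out to be, not
an existing libion function:

#include <errno.h>
#include <stddef.h>

/* Placeholder for the eventual new-ABI, single-heap allocation call. */
int ion_alloc_one_heap(int ion_fd, size_t len, unsigned int heap_id,
                       unsigned int flags, int *dmabuf_fd);

/*
 * Walk the set bits of the legacy heap_mask and try each heap in turn,
 * preserving the old "any of these heaps" semantics for libion clients.
 */
static int ion_alloc_mask_compat(int ion_fd, size_t len,
                                 unsigned int heap_mask, unsigned int flags,
                                 int *dmabuf_fd)
{
        unsigned int heap_id;

        for (heap_id = 0; heap_id < 32; heap_id++) {
                if (!(heap_mask & (1u << heap_id)))
                        continue;
                /* First heap that can satisfy the request wins. */
                if (ion_alloc_one_heap(ion_fd, len, heap_id, flags,
                                       dmabuf_fd) == 0)
                        return 0;
        }
        return -ENODEV;
}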


> > 2) For patches that cause ABI breaks, it might be good to make the
> > userspace impact clear in the commit message, possibly with an
> > example, so the poor folks who bisect the change down as breaking
> > their system in a year or so have a clear picture of what they need
> > to change in their code.
> >
> > 3) Also, it's not clear how a given userland should distinguish
> > between the different ABIs.  We already have logic in libion to
> > distinguish between pre-4.12 legacy and post-4.12 implementations
> > (using the implicit ion_free() behavior). I don't see any such check
> > we can make with this code. Adding another ABI version may require us
> > to provide an actual interface version ioctl.
> >
>
> A slightly fragile/ugly approach might be to attempt a small
> allocation with a heap_mask of 0xffffffff. On an "old" implementation
> you'd expect that to succeed, whereas it could be made to fail in the
> "new" one.
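
For concreteness, that probe might look roughly like the sketch below
(assuming the post-4.12 staging ion UAPI; whether a "new" kernel would
actually reject the full mask is exactly the assumption that makes it
fragile):

#include <sys/ioctl.h>
#include <unistd.h>
#include "ion.h"   /* staging UAPI: drivers/staging/android/uapi/ion.h */

/*
 * Try a tiny allocation with every heap bit set. An "old" mask-based
 * kernel should satisfy it from some heap, while a "new" implementation
 * could be made to reject it. The 4096-byte probe size is arbitrary.
 */
static int ion_abi_looks_old(int ion_fd)
{
        struct ion_allocation_data data = {
                .len = 4096,
                .heap_id_mask = 0xffffffff,
                .flags = 0,
        };

        if (ioctl(ion_fd, ION_IOC_ALLOC, &data) < 0)
                return 0;       /* rejected: treat as the "new" ABI */

        close(data.fd);         /* drop the probe buffer again */
        return 1;               /* succeeded: the "old" mask-based ABI */
}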

Yeah, I think having a proper ION_IOC_VERSION ioctl is going to be
necessary.

I'm hoping to send out an ugly first stab at the kernel side for
switching to per-heap devices (with a config option to keep /dev/ion
around for legacy compat), which should address the same core issue
this patch does (moving away from heap masks to specifically requested
heaps).
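
As a purely hypothetical illustration of what that could look like from
userspace (the per-heap node name below is made up, not from any posted
patch):

#include <fcntl.h>

/*
 * Try a per-heap device node first and fall back to the legacy /dev/ion
 * that the compat config option would keep around.
 */
static int open_system_heap(void)
{
        int fd = open("/dev/ion_heap_system", O_RDWR | O_CLOEXEC);

        if (fd >= 0)
                return fd;      /* new-style per-heap node */

        /* Legacy compat node. */
        return open("/dev/ion", O_RDWR | O_CLOEXEC);
}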

thanks
-john