Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)

2021-06-18 Thread Christian König

Hi Daniel,

thanks for jumping in here.

And yes, you are absolutely right that we need to get this fixed and not yell
at each other because we have different understandings of things.


Your proposal sounds sane to me, but I wouldn't call it slots. Rather 
something like "use cases" since we can have multiple fences for each 
category I think.


And I see at least four here:

1. Internal kernel memory management. Everybody needs to wait for this;
it's the equivalent of bo->moving.

2. Writers for implicit sync, implicit sync readers should wait for them.
3. Readers for implicit sync, implicit sync writers should wait for them.
4. Things like TLB flushes and page table updates, no implicit sync but 
memory management must take them into account before moving/freeing 
backing store.
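
Sketched as a plain enum, purely to make those four categories concrete
(illustrative names only, not an existing kernel API):

enum resv_fence_usage {
	RESV_USAGE_KERNEL,   /* 1. internal memory management, like bo->moving */
	RESV_USAGE_WRITE,    /* 2. writers for implicit sync */
	RESV_USAGE_READ,     /* 3. readers for implicit sync */
	RESV_USAGE_BOOKKEEP, /* 4. TLB flushes, page table updates */
};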


Happy weekend, and hopefully not too much heat, guys.

Cheers,
Christian.

On 18.06.21 at 20:20, Daniel Stone wrote:

Sorry for the mobile reply, but V4L2 is absolutely not write-only; there has 
never been an intersection of V4L2 supporting dmabuf and not supporting reads.

I see your point about the heritage of dma_resv but it’s a red herring. It 
doesn’t matter who’s right, or who was first, or where the code was extracted 
from.

It’s clear that amdgpu defines resv to be one thing, that every other
non-TTM user defines it to be something very different, and that the other TTM
users define it to be something in the middle.

We’ll never get to anything workable if we keep arguing who’s right. Everyone 
is wrong, because dma_resv doesn’t globally mean anything.

It seems clear that there are three classes of synchronisation barrier (not 
using the ‘f’ word here), in descending exclusion order:
   - memory management barriers (amdgpu exclusive fence / ttm_bo->moving)
   - implicit synchronisation write barriers (everyone else’s exclusive fences, 
amdgpu’s shared fences)
   - implicit synchronisation read barriers (everyone else’s shared fences, 
also amdgpu’s shared fences sometimes)

I don’t see a world in which these three uses can be reduced to two slots. What 
also isn’t clear to me though, is how the memory-management barriers can 
exclude all other access in the original proposal with purely userspace CS. 
Retaining the three separate modes also seems like a hard requirement to not 
completely break userspace, but then I don’t see how three separate slots would 
work if they need to be temporally ordered. amdgpu fixed this by redefining the 
meaning of the two slots, others fixed this by not doing one of the three modes.

So how do we square the circle without encoding a DAG into the kernel? Do the 
two slots need to become a single list which is ordered by time + ‘weight’ and 
flattened whenever modified? Something else?

Have a great weekend.

-d


On 18 Jun 2021, at 5:43 pm, Christian König  wrote:

On 18.06.21 at 17:17, Daniel Vetter wrote:

[SNIP]
Ignoring _all_ fences is officially ok for pinned dma-buf. This is
what v4l does. Aside from that, it's definitely not just i915 that does this
even on the drm side; we have a few more drivers nowadays.

No, it seriously isn't. If drivers are doing this, they are more than broken.

See the comment in dma-resv.h

  * Based on bo.c which bears the following copyright notice,
  * but is dual licensed:



The handling in ttm_bo.c is and always was that the exclusive fence is used for 
buffer moves.

As I said multiple times now the *MAIN* purpose of the dma_resv object is 
memory management and *NOT* synchronization.

Those restrictions come from the original design of TTM where the dma_resv 
object originated from.

The resulting consequences are that:

a) If you access the buffer without waiting for the exclusive fence you run 
into a potential information leak.
 We kind of let that slip for V4L since they only access the buffers for 
writes, so you can't do any harm there.

b) If you overwrite the exclusive fence with a new one without waiting for the 
old one to signal you open up the possibility for userspace to access freed up 
memory.
 This is a complete show stopper since it means that taking over the system 
is just a typing exercise.


What you have done by allowing this in is ripping open a major security hole 
for any DMA-buf import in i915 from all TTM-based drivers.

This needs to be fixed ASAP, either by waiting in i915 and all other drivers 
doing this for the exclusive fence while importing a DMA-buf or by marking i915 
and all other drivers as broken.
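
As a minimal sketch of the first option, assuming the dma_resv API of this
era (locking and error handling elided; wait_all=false waits on the exclusive
fence only):

#include <linux/dma-resv.h>
#include <linux/sched.h>

static long wait_exclusive_before_import(struct dma_resv *resv)
{
	/* block until a pending buffer move/clear has signaled */
	return dma_resv_wait_timeout_rcu(resv, false, true,
					 MAX_SCHEDULE_TIMEOUT);
}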

Sorry, but if you allowed that in you seriously have no idea what you are 
talking about here and where all of this originated from.

Regards,
Christian.




Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)

2021-06-18 Thread Christian König

On 18.06.21 at 19:20, Daniel Vetter wrote:

On Fri, Jun 18, 2021 at 6:43 PM Christian König
 wrote:

On 18.06.21 at 17:17, Daniel Vetter wrote:

[SNIP]
Ignoring _all_ fences is officially ok for pinned dma-buf. This is
what v4l does. Aside from that, it's definitely not just i915 that does this
even on the drm side; we have a few more drivers nowadays.

No, it seriously isn't. If drivers are doing this, they are more than broken.

See the comment in dma-resv.h

   * Based on bo.c which bears the following copyright notice,
   * but is dual licensed:



The handling in ttm_bo.c is and always was that the exclusive fence is
used for buffer moves.

As I said multiple times now the *MAIN* purpose of the dma_resv object
is memory management and *NOT* synchronization.

Those restrictions come from the original design of TTM where the
dma_resv object originated from.

The resulting consequences are that:

a) If you access the buffer without waiting for the exclusive fence you
run into a potential information leak.
  We kind of let that slip for V4L since they only access the buffers
for writes, so you can't do any harm there.

b) If you overwrite the exclusive fence with a new one without waiting
for the old one to signal you open up the possibility for userspace to
access freed up memory.
  This is a complete show stopper since it means that taking over the
system is just a typing exercise.


What you have done by allowing this in is ripping open a major security
hole for any DMA-buf import in i915 from all TTM-based drivers.

This needs to be fixed ASAP, either by waiting in i915 and all other
drivers doing this for the exclusive fence while importing a DMA-buf or
by marking i915 and all other drivers as broken.

Sorry, but if you allowed that in you seriously have no idea what you
are talking about here and where all of this originated from.

Dude, get a grip, seriously. dma-buf landed in 2011

commit d15bd7ee445d0702ad801fdaece348fdb79e6581
Author: Sumit Semwal 
Date:   Mon Dec 26 14:53:15 2011 +0530

dma-buf: Introduce dma buffer sharing mechanism

and drm prime landed in the same year

commit 3248877ea1796915419fba7c89315fdbf00cb56a
(airlied/drm-prime-dmabuf-initial)
Author: Dave Airlie 
Date:   Fri Nov 25 15:21:02 2011 +

drm: base prime/dma-buf support (v5)

dma-resv was extracted much later

commit 786d7257e537da0674c02e16e3b30a44665d1cee
Author: Maarten Lankhorst 
Date:   Thu Jun 27 13:48:16 2013 +0200

reservation: cross-device reservation support, v4

Maarten's patch only extracted the dma_resv stuff so it's there,
optionally. There was never any effort to roll this out to all the
existing drivers, of which there were plenty.

It is, and has been for 10 years, totally fine to access dma-buf
without looking at any fences at all. From your pov of a ttm driver
dma-resv is mainly used for memory management and not sync, but I
think that's also due to some reinterpretation of the actual sync
rules on your side. For everyone else the dma_resv attached to a
dma-buf has been about implicit sync only, nothing else.


No, that was way before my time.

The whole thing was introduced with this commit here:

commit f2c24b83ae90292d315aa7ac029c6ce7929e01aa
Author: Maarten Lankhorst 
Date:   Wed Apr 2 17:14:48 2014 +0200

    drm/ttm: flip the switch, and convert to dma_fence

    Signed-off-by: Maarten Lankhorst 

 int ttm_bo_move_accel_cleanup(struct ttm_buffer_object *bo,

-   bo->sync_obj = driver->sync_obj_ref(sync_obj);
+   reservation_object_add_excl_fence(bo->resv, fence);
    if (evict) {

Maarten replaced the bo->sync_obj reference with the dma_resv exclusive 
fence.


This means that we need to apply the sync_obj semantics to all drivers
using a DMA-buf with its dma_resv object; otherwise you break imports
from TTM drivers.


Since then and up till now the exclusive fence must be waited on and 
never replaced with anything which signals before the old fence.
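
One way to keep that guarantee when installing a new exclusive fence is to
chain it behind the old one, so the result only signals after both have; a
minimal sketch using dma_fence_chain (illustrative only, reference handling
simplified):

#include <linux/dma-fence-chain.h>
#include <linux/slab.h>

static struct dma_fence *chain_exclusive(struct dma_fence *old_excl,
					 struct dma_fence *new_excl)
{
	struct dma_fence_chain *chain = kmalloc(sizeof(*chain), GFP_KERNEL);

	if (!chain)
		return NULL;
	/* dma_fence_chain_init() takes over both fence references */
	dma_fence_chain_init(chain, old_excl, new_excl, 1);
	return &chain->base;
}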


Maarten and, I think, Thomas did that, and I was always assuming that you
knew about this design decision.


It's absolutely not that this is my invention; I'm just telling you how
it has always been.


Anyway, this means we have a serious misunderstanding, and yes, now some
of our discussions about dynamic P2P suddenly make much more sense.


Regards,
Christian.




_only_ when you have a dynamic importer/exporter can you assume that
the dma_resv fences must actually be obeyed. That's one of the reasons
why we had to make this a completely new mode (the other one was
locking, but they really tie together).

Wrt your problems:
a) needs to be fixed in drivers exporting buffers and failing to make
sure the memory is there by the time dma_buf_map_attachment returns.
b) needs to be fixed in the importers, and there's quite a few of
those. There's more than i915 here, which is why I think we should
have the dma_resv_add_shared_exclusive helper extracted from amdgpu.
Avoids hand-rolling this about 5 times (6 if we include the import ioctl from Jason).

Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)

2021-06-18 Thread Christian König

On 18.06.21 at 17:17, Daniel Vetter wrote:

[SNIP]
Ignoring _all_ fences is officially ok for pinned dma-buf. This is
what v4l does. Aside from that, it's definitely not just i915 that does this
even on the drm side; we have a few more drivers nowadays.


No, it seriously isn't. If drivers are doing this, they are more than broken.

See the comment in dma-resv.h

 * Based on bo.c which bears the following copyright notice,
 * but is dual licensed:



The handling in ttm_bo.c is and always was that the exclusive fence is 
used for buffer moves.


As I said multiple times now the *MAIN* purpose of the dma_resv object 
is memory management and *NOT* synchronization.


Those restrictions come from the original design of TTM where the 
dma_resv object originated from.


The resulting consequences are that:

a) If you access the buffer without waiting for the exclusive fence you 
run into a potential information leak.
    We kind of let that slip for V4L since they only access the buffers 
for writes, so you can't do any harm there.


b) If you overwrite the exclusive fence with a new one without waiting 
for the old one to signal you open up the possibility for userspace to 
access freed up memory.
    This is a complete show stopper since it means that taking over the 
system is just a typing exercise.



What you have done by allowing this in is ripping open a major security 
hole for any DMA-buf import in i915 from all TTM-based drivers.


This needs to be fixed ASAP, either by waiting in i915 and all other 
drivers doing this for the exclusive fence while importing a DMA-buf or 
by marking i915 and all other drivers as broken.


Sorry, but if you allowed that in you seriously have no idea what you 
are talking about here and where all of this originated from.


Regards,
Christian.


Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)

2021-06-18 Thread Daniel Vetter
On Fri, Jun 18, 2021 at 8:02 PM Christian König
 wrote:
>
> On 18.06.21 at 19:20, Daniel Vetter wrote:
> > On Fri, Jun 18, 2021 at 6:43 PM Christian König
> >  wrote:
> >> On 18.06.21 at 17:17, Daniel Vetter wrote:
> >>> [SNIP]
> >>> Ignoring _all_ fences is officially ok for pinned dma-buf. This is
> >>> what v4l does. Aside from that, it's definitely not just i915 that does
> >>> this even on the drm side; we have a few more drivers nowadays.
> >> No, it seriously isn't. If drivers are doing this, they are more than broken.
> >>
> >> See the comment in dma-resv.h
> >>
> >>* Based on bo.c which bears the following copyright notice,
> >>* but is dual licensed:
> >> 
> >>
> >>
> >> The handling in ttm_bo.c is and always was that the exclusive fence is
> >> used for buffer moves.
> >>
> >> As I said multiple times now the *MAIN* purpose of the dma_resv object
> >> is memory management and *NOT* synchronization.
> >>
> >> Those restrictions come from the original design of TTM where the
> >> dma_resv object originated from.
> >>
> >> The resulting consequences are that:
> >>
> >> a) If you access the buffer without waiting for the exclusive fence you
> >> run into a potential information leak.
> >>   We kind of let that slip for V4L since they only access the buffers
> >> for writes, so you can't do any harm there.
> >>
> >> b) If you overwrite the exclusive fence with a new one without waiting
> >> for the old one to signal you open up the possibility for userspace to
> >> access freed up memory.
> >>   This is a complete show stopper since it means that taking over the
> >> system is just a typing exercise.
> >>
> >>
> >> What you have done by allowing this in is ripping open a major security
> >> hole for any DMA-buf import in i915 from all TTM-based drivers.
> >>
> >> This needs to be fixed ASAP, either by waiting in i915 and all other
> >> drivers doing this for the exclusive fence while importing a DMA-buf or
> >> by marking i915 and all other drivers as broken.
> >>
> >> Sorry, but if you allowed that in you seriously have no idea what you
> >> are talking about here and where all of this originated from.
> > Dude, get a grip, seriously. dma-buf landed in 2011
> >
> > commit d15bd7ee445d0702ad801fdaece348fdb79e6581
> > Author: Sumit Semwal 
> > Date:   Mon Dec 26 14:53:15 2011 +0530
> >
> > dma-buf: Introduce dma buffer sharing mechanism
> >
> > and drm prime landed in the same year
> >
> > commit 3248877ea1796915419fba7c89315fdbf00cb56a
> > (airlied/drm-prime-dmabuf-initial)
> > Author: Dave Airlie 
> > Date:   Fri Nov 25 15:21:02 2011 +
> >
> > drm: base prime/dma-buf support (v5)
> >
> > dma-resv was extracted much later
> >
> > commit 786d7257e537da0674c02e16e3b30a44665d1cee
> > Author: Maarten Lankhorst 
> > Date:   Thu Jun 27 13:48:16 2013 +0200
> >
> > reservation: cross-device reservation support, v4
> >
> > Maarten's patch only extracted the dma_resv stuff so it's there,
> > optionally. There was never any effort to roll this out to all the
> > existing drivers, of which there were plenty.
> >
> > It is, and has been for 10 years, totally fine to access dma-buf
> > without looking at any fences at all. From your pov of a ttm driver
> > dma-resv is mainly used for memory management and not sync, but I
> > think that's also due to some reinterpretation of the actual sync
> > rules on your side. For everyone else the dma_resv attached to a
> > dma-buf has been about implicit sync only, nothing else.
>
> No, that was way before my time.
>
> The whole thing was introduced with this commit here:
>
> commit f2c24b83ae90292d315aa7ac029c6ce7929e01aa
> Author: Maarten Lankhorst 
> Date:   Wed Apr 2 17:14:48 2014 +0200
>
>  drm/ttm: flip the switch, and convert to dma_fence
>
>  Signed-off-by: Maarten Lankhorst 
>
>   int ttm_bo_move_accel_cleanup(struct ttm_buffer_object *bo,
> 
> -   bo->sync_obj = driver->sync_obj_ref(sync_obj);
> +   reservation_object_add_excl_fence(bo->resv, fence);
>  if (evict) {
>
> Maarten replaced the bo->sync_obj reference with the dma_resv exclusive
> fence.
>
> This means that we need to apply the sync_obj semantics to all drivers
> using a DMA-buf with its dma_resv object; otherwise you break imports
> from TTM drivers.
>
> Since then and up till now the exclusive fence must be waited on and
> never replaced with anything which signals before the old fence.
>
> Maarten and, I think, Thomas did that, and I was always assuming that you
> knew about this design decision.

Surprisingly I do actually know this.

Still the commit you cite did _not_ change any of the rules around
dma_buf: Importers have _no_ obligation to obey the exclusive fence,
because the buffer is pinned. None of the work that Maarten has done
has fundamentally changed this contract in any way.

If amdgpu (or any other ttm based driver) hands back an sgt without
waiting for ttm_bo->moving or the exclusive fence first, then that's a
bug we need to fix 

Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)

2021-06-18 Thread Christian König

On 18.06.21 at 16:31, Daniel Vetter wrote:

[SNIP]

And that drivers choose to ignore the exclusive fence is an absolutely
no-go from a memory management and security point of view. Exclusive
access means exclusive access. Ignoring that won't work.

Yeah, this is why I've been going all over the place about lifting
ttm_bo->moving to dma_resv. And also that I flat out don't trust your
audit; if you haven't found these drivers then very clearly you didn't
audit much at all :-)


I just didn't think that anybody could be so stupid as to allow such a
thing in.



The only thing which saved us so far is the fact that drivers doing this
are not that complex.

BTW: How does it even work? I mean then you would run into the same
problem as amdgpu with its page table update fences, e.g. that your
shared fences might signal before the exclusive one.

So we don't ignore any fences when we rip out the backing storage.

And yes there's currently a bug in all these drivers that if you set
both the "ignore implicit fences" and the "set the exclusive fence"
flag, then we just break this. Which is why I think we want to have a
dma_fence_add_shared_exclusive() helper extracted from your amdgpu
code, which we can then use everywhere to plug this.


Daniel, do you realize what you are talking about here? Does that also
apply to imported DMA-bufs?


If yes, then that is a security hole you can push an elephant through.

Can you point me to the code using that?


For dma-buf this isn't actually a problem, because dma-buf are pinned. You
can't move them while other drivers are using them, hence there's not
actually a ttm_bo->moving fence we can ignore.

p2p dma-buf aka dynamic dma-buf is a different beast, and i915 (and fwiw
these other drivers) need to change before they can do dynamic dma-buf.


Otherwise we have an information leak worth a CVE and that is certainly not
something we want.

Because yes otherwise we get a CVE. But right now I don't think we have
one.

Yeah, agree. But this is just because of coincidence and not because of
good engineering :)

Well the good news is that I think we're now talking slightly less
past each other than in the past few weeks :-)


We do have a quite big confusion on what exactly the signaling ordering is
supposed to be between exclusive and the collective set of shared fences,
and there's some unifying that needs to happen here. But I think what
Jason implements here in the import ioctl is the most defensive version
possible, so really can't break any driver. It really works like you have
an ad-hoc gpu engine that does nothing itself, but waits for the current
exclusive fence and then sets the exclusive fence with its "CS" completion
fence.

That's imo perfectly legit use-case.
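
As a concrete picture of those import semantics, a hypothetical userspace
sketch; the struct and ioctl names follow this patch series and may differ
from the final uAPI:

#include <linux/dma-buf.h>
#include <sys/ioctl.h>

/* Attach a sync_file fence to a dma-buf as if it were a CS completion. */
static int attach_sync_file(int dmabuf_fd, int sync_file_fd)
{
	struct dma_buf_import_sync_file arg = {
		.flags = DMA_BUF_SYNC_WRITE, /* behave like a writer */
		.fd = sync_file_fd,
	};

	return ioctl(dmabuf_fd, DMA_BUF_IOCTL_IMPORT_SYNC_FILE, &arg);
}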

The use case is certainly legit, but I'm not sure if merging this at the
moment is a good idea.

Your note that drivers are already ignoring the exclusive fence in the
dma_resv object was eye-opening to me. And I now have the very strong
feeling that the synchronization and the design of the dma_resv object
are even messier than I thought.

To summarize, we can count ourselves really lucky that it didn't blow up
in our faces already.

I don't think there was that much luck involved (ok I did find a
possible bug in i915 already around cpu cache flushing) - for SoC the
exclusive slot in dma_resv really is only used for implicit sync and
nothing else. The fun only starts when you throw in pipelined backing
storage movement.

I guess this also explains why you just seemed to ignore me when I was
asking for a memory management exclusive fence for the p2p stuff, or
some other way to specifically handling movements (like ttm_bo->moving
or whatever it is). From my pov we clearly needed that to make p2p
dma-buf work well enough, mixing up the memory management exclusive
slot with the implicit sync exclusive slot never looked like a bright
idea to me.

I think at least we now have some understanding here.


Well to be honest what you have just told me means that i915 is 
seriously broken.


Ignoring the exclusive fence on an imported DMA-buf is an absolutely 
*NO-GO* even without P2P.


What you have stitched together here allows anybody to basically read 
any memory on the system with both i915 and nouveau, radeon or amdgpu.


We need to fix that ASAP!

Regards,
Christian.


Same for the export one. Waiting for a previous snapshot of implicit
fences is imo perfectly ok use-case and useful for compositors - client
might soon start more rendering, and on some drivers that always results
in the exclusive slot being set, so if you don't take a snapshot you
oversync real bad for your atomic flip.

The export use case is unproblematic as far as I can see.
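
For reference, the export side of the series as a hypothetical userspace
sketch; again, names follow the patches and may differ from the final uAPI:

#include <linux/dma-buf.h>
#include <sys/ioctl.h>

/* Snapshot the current implicit fences of a dma-buf as a sync_file. */
static int snapshot_fences(int dmabuf_fd)
{
	struct dma_buf_export_sync_file arg = {
		.flags = DMA_BUF_SYNC_READ, /* the fences a reader must wait on */
		.fd = -1,
	};

	if (ioctl(dmabuf_fd, DMA_BUF_IOCTL_EXPORT_SYNC_FILE, &arg) < 0)
		return -1;
	return arg.fd; /* e.g. a compositor waits on this before flipping */
}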


Those changes are years in the past. If we have a real problem here (not sure
on that yet), then we'll have to figure out how to fix it without nuking
uAPI.

Well, that was the basic idea of attaching flags to the fences in the
dma_resv object.

In other words you clearly 

Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)

2021-06-18 Thread Daniel Stone
Sorry for the mobile reply, but V4L2 is absolutely not write-only; there has 
never been an intersection of V4L2 supporting dmabuf and not supporting reads.

I see your point about the heritage of dma_resv but it’s a red herring. It 
doesn’t matter who’s right, or who was first, or where the code was extracted 
from.

It’s clear that amdgpu defines resv to be one thing, that every other
non-TTM user defines it to be something very different, and that the other TTM
users define it to be something in the middle.

We’ll never get to anything workable if we keep arguing who’s right. Everyone 
is wrong, because dma_resv doesn’t globally mean anything.

It seems clear that there are three classes of synchronisation barrier (not 
using the ‘f’ word here), in descending exclusion order:
  - memory management barriers (amdgpu exclusive fence / ttm_bo->moving)
  - implicit synchronisation write barriers (everyone else’s exclusive fences, 
amdgpu’s shared fences)
  - implicit synchronisation read barriers (everyone else’s shared fences, also 
amdgpu’s shared fences sometimes)
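
To make the classes concrete, a purely hypothetical layout (not any real
kernel type):

struct three_class_resv {
	struct dma_fence *mm_barrier;     /* memory management barrier */
	struct dma_fence *implicit_write; /* implicit sync write barrier */
	struct dma_fence **implicit_read; /* implicit sync read barriers */
	unsigned int read_count;
};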

I don’t see a world in which these three uses can be reduced to two slots. What 
also isn’t clear to me though, is how the memory-management barriers can 
exclude all other access in the original proposal with purely userspace CS. 
Retaining the three separate modes also seems like a hard requirement to not 
completely break userspace, but then I don’t see how three separate slots would 
work if they need to be temporally ordered. amdgpu fixed this by redefining the 
meaning of the two slots, others fixed this by not doing one of the three modes.

So how do we square the circle without encoding a DAG into the kernel? Do the 
two slots need to become a single list which is ordered by time + ‘weight’ and 
flattened whenever modified? Something else?
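
One purely speculative shape for such a flattened list, with weights invented
to match the three classes above:

#include <linux/dma-fence.h>
#include <linux/list.h>

struct barrier_entry {
	struct dma_fence *fence;
	u64 seq;               /* temporal (submission) order */
	unsigned int weight;   /* 2 = memory management, 1 = write, 0 = read */
	struct list_head node; /* list kept sorted by (seq, weight) */
};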

Have a great weekend.

-d

> On 18 Jun 2021, at 5:43 pm, Christian König  wrote:
> 
> On 18.06.21 at 17:17, Daniel Vetter wrote:
>> [SNIP]
>> Ignoring _all_ fences is officially ok for pinned dma-buf. This is
>> what v4l does. Aside from that, it's definitely not just i915 that does
>> this even on the drm side; we have a few more drivers nowadays.
> 
> No, it seriously isn't. If drivers are doing this, they are more than broken.
> 
> See the comment in dma-resv.h
> 
>  * Based on bo.c which bears the following copyright notice,
>  * but is dual licensed:
> 
> 
> 
> The handling in ttm_bo.c is and always was that the exclusive fence is used 
> for buffer moves.
> 
> As I said multiple times now the *MAIN* purpose of the dma_resv object is 
> memory management and *NOT* synchronization.
> 
> Those restrictions come from the original design of TTM where the dma_resv 
> object originated from.
> 
> The resulting consequences are that:
> 
> a) If you access the buffer without waiting for the exclusive fence you run 
> into a potential information leak.
> We kind of let that slip for V4L since they only access the buffers for 
> writes, so you can't do any harm there.
> 
> b) If you overwrite the exclusive fence with a new one without waiting for 
> the old one to signal you open up the possibility for userspace to access 
> freed up memory.
> This is a complete show stopper since it means that taking over the 
> system is just a typing exercise.
> 
> 
> What you have done by allowing this in is ripping open a major security hole 
> for any DMA-buf import in i915 from all TTM-based drivers.
> 
> This needs to be fixed ASAP, either by waiting in i915 and all other drivers 
> doing this for the exclusive fence while importing a DMA-buf or by marking 
> i915 and all other drivers as broken.
> 
> Sorry, but if you allowed that in you seriously have no idea what you are 
> talking about here and where all of this originated from.
> 
> Regards,
> Christian.



Re: [Mesa-dev] [PATCH 2/2] drm/doc/rfc: i915 new parallel submission uAPI plan

2021-06-18 Thread Ye, Tony
Acked-by: Tony Ye 

Regards,
Tony

On 6/11/2021 4:40 PM, Matthew Brost wrote:
> Add entry for i915 new parallel submission uAPI plan.
> 
> v2:
>   (Daniel Vetter):
>- Expand logical order explanation
>- Add dummy header
>- Only allow N BBs in execbuf IOCTL
>- Configure parallel submission per slot not per gem context
> v3:
>   (Marcin Ślusarz):
>- Lots of typos / bad English fixed
>   (Tvrtko Ursulin):
>- Consistent pseudo code, clean up wording in descriptions
> v4:
>   (Daniel Vetter)
>- Drop flags
>- Add kernel doc
>- Reword a few things / fix typos
>   (Tvrtko)
>- Reword a few things / fix typos
> 
> Cc: Tvrtko Ursulin 
> Cc: Tony Ye 
> CC: Carl Zhang 
> Cc: Daniel Vetter 
> Cc: Jason Ekstrand 
> Signed-off-by: Matthew Brost 
> Acked-by: Daniel Vetter 
> ---
>   Documentation/gpu/rfc/i915_parallel_execbuf.h | 117 ++
>   Documentation/gpu/rfc/i915_scheduler.rst  |  59 -
>   2 files changed, 175 insertions(+), 1 deletion(-)
>   create mode 100644 Documentation/gpu/rfc/i915_parallel_execbuf.h
> 
> diff --git a/Documentation/gpu/rfc/i915_parallel_execbuf.h 
> b/Documentation/gpu/rfc/i915_parallel_execbuf.h
> new file mode 100644
> index ..c22af3a359e4
> --- /dev/null
> +++ b/Documentation/gpu/rfc/i915_parallel_execbuf.h
> @@ -0,0 +1,117 @@
> +#define I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT 2 /* see i915_context_engines_parallel_submit */
> +
> +/**
> + * struct drm_i915_context_engines_parallel_submit - Configure engine for
> + * parallel submission.
> + *
> + * Setup a slot in the context engine map to allow multiple BBs to be
> + * submitted in a single execbuf IOCTL. Those BBs will then be scheduled to
> + * run on the GPU in parallel. Multiple hardware contexts are created
> + * internally in i915 to run these BBs. Once a slot is configured for N BBs,
> + * only N BBs can be submitted in each execbuf IOCTL, and this is implicit
> + * behavior, e.g. the user doesn't tell the execbuf IOCTL there are N BBs;
> + * the execbuf IOCTL knows how many BBs there are based on the slot's
> + * configuration. The N BBs are the last N buffer objects, or the first N if
> + * I915_EXEC_BATCH_FIRST is set.
> + *
> + * The default placement behavior is to create implicit bonds between each
> + * context if each context maps to more than 1 physical engine (e.g. the
> + * context is a virtual engine). Also, we only allow contexts of the same
> + * engine class, and these contexts must be in logically contiguous order.
> + * Examples of the placement behavior are described below. Lastly, the
> + * default is to not allow BBs to be preempted mid-BB; rather, coordinated
> + * preemption is inserted on all hardware contexts between each set of BBs.
> + * Flags may be added in the future to change both of these default
> + * behaviors.
> + *
> + * Returns -EINVAL if the hardware context placement configuration is invalid
> + * or if the placement configuration isn't supported on the platform /
> + * submission interface.
> + * Returns -ENODEV if the extension isn't supported on the platform /
> + * submission interface.
> + *
> + * .. code-block::
> + *
> + *   Example 1 pseudo code:
> + *   CS[X] = generic engine of same class, logical instance X
> + *   INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE
> + *   set_engines(INVALID)
> + *   set_parallel(engine_index=0, width=2, num_siblings=1,
> + *                engines=CS[0],CS[1])
> + *
> + *   Results in the following valid placement:
> + *   CS[0], CS[1]
> + *
> + *   Example 2 pseudo code:
> + *   CS[X] = generic engine of same class, logical instance X
> + *   INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE
> + *   set_engines(INVALID)
> + *   set_parallel(engine_index=0, width=2, num_siblings=2,
> + *                engines=CS[0],CS[2],CS[1],CS[3])
> + *
> + *   Results in the following valid placements:
> + *   CS[0], CS[1]
> + *   CS[2], CS[3]
> + *
> + *   This can also be thought of as 2 virtual engines, described by a 2-D
> + *   array in the engines field, with bonds placed between each index of the
> + *   virtual engines, e.g. CS[0] is bonded to CS[1] and CS[2] is bonded to
> + *   CS[3].
> + *   VE[0] = CS[0], CS[2]
> + *   VE[1] = CS[1], CS[3]
> + *
> + *   Example 3 pseudo code:
> + *   CS[X] = generic engine of same class, logical instance X
> + *   INVALID = I915_ENGINE_CLASS_INVALID, I915_ENGINE_CLASS_INVALID_NONE
> + *   set_engines(INVALID)
> + *   set_parallel(engine_index=0, width=2, num_siblings=2,
> + *                engines=CS[0],CS[1],CS[1],CS[3])
> + *
> + *   Results in the following valid and invalid placements:
> + *   CS[0], CS[1]
> + *   CS[1], CS[3] - Not logically contiguous, returns -EINVAL
> + */
> +struct drm_i915_context_engines_parallel_submit {
> + /**
> +  * @base: base user extension.
> +  */
> + struct i915_user_extension base;
> +
> + /**
> +  * @engine_index: slot for parallel engine
> +  */
> + __u16 
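
The archive truncates the struct here. For Example 1 above, a hypothetical
userspace fill-in; only engine_index is visible in the quoted struct, so the
remaining field names are inferred from the pseudo code and may not match the
final uAPI:

struct drm_i915_context_engines_parallel_submit ext = {
	.base.name = I915_CONTEXT_ENGINES_EXT_PARALLEL_SUBMIT,
	.engine_index = 0, /* slot in the context engine map */
	.width = 2,        /* 2 BBs per execbuf IOCTL */
	.num_siblings = 1, /* one placement: CS[0], CS[1] */
	/* followed by the engines array: CS[0], CS[1] */
};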

Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)

2021-06-18 Thread Daniel Vetter
On Fri, Jun 18, 2021 at 6:43 PM Christian König
 wrote:
>
> On 18.06.21 at 17:17, Daniel Vetter wrote:
> > [SNIP]
> > Ignoring _all_ fences is officially ok for pinned dma-buf. This is
> > what v4l does. Aside from that, it's definitely not just i915 that does
> > this even on the drm side; we have a few more drivers nowadays.
>
> No, it seriously isn't. If drivers are doing this, they are more than broken.
>
> See the comment in dma-resv.h
>
>   * Based on bo.c which bears the following copyright notice,
>   * but is dual licensed:
> 
>
>
> The handling in ttm_bo.c is and always was that the exclusive fence is
> used for buffer moves.
>
> As I said multiple times now the *MAIN* purpose of the dma_resv object
> is memory management and *NOT* synchronization.
>
> Those restrictions come from the original design of TTM where the
> dma_resv object originated from.
>
> The resulting consequences are that:
>
> a) If you access the buffer without waiting for the exclusive fence you
> run into a potential information leak.
>  We kind of let that slip for V4L since they only access the buffers
> for writes, so you can't do any harm there.
>
> b) If you overwrite the exclusive fence with a new one without waiting
> for the old one to signal you open up the possibility for userspace to
> access freed up memory.
>  This is a complete show stopper since it means that taking over the
> system is just a typing exercise.
>
>
> What you have done by allowing this in is ripping open a major security
> hole for any DMA-buf import in i915 from all TTM-based drivers.
>
> This needs to be fixed ASAP, either by waiting in i915 and all other
> drivers doing this for the exclusive fence while importing a DMA-buf or
> by marking i915 and all other drivers as broken.
>
> Sorry, but if you allowed that in you seriously have no idea what you
> are talking about here and where all of this originated from.

Dude, get a grip, seriously. dma-buf landed in 2011

commit d15bd7ee445d0702ad801fdaece348fdb79e6581
Author: Sumit Semwal 
Date:   Mon Dec 26 14:53:15 2011 +0530

   dma-buf: Introduce dma buffer sharing mechanism

and drm prime landed in the same year

commit 3248877ea1796915419fba7c89315fdbf00cb56a
(airlied/drm-prime-dmabuf-initial)
Author: Dave Airlie 
Date:   Fri Nov 25 15:21:02 2011 +

   drm: base prime/dma-buf support (v5)

dma-resv was extracted much later

commit 786d7257e537da0674c02e16e3b30a44665d1cee
Author: Maarten Lankhorst 
Date:   Thu Jun 27 13:48:16 2013 +0200

   reservation: cross-device reservation support, v4

Maarten's patch only extracted the dma_resv stuff so it's there,
optionally. There was never any effort to roll this out to all the
existing drivers, of which there were plenty.

It is, and has been for 10 years, totally fine to access dma-buf
without looking at any fences at all. From your pov of a ttm driver
dma-resv is mainly used for memory management and not sync, but I
think that's also due to some reinterpretation of the actual sync
rules on your side. For everyone else the dma_resv attached to a
dma-buf has been about implicit sync only, nothing else.

_only_ when you have a dynamic importer/exporter can you assume that
the dma_resv fences must actually be obeyed. That's one of the reasons
why we had to make this a completely new mode (the other one was
locking, but they really tie together).

Wrt your problems:
a) needs to be fixed in drivers exporting buffers and failing to make
sure the memory is there by the time dma_buf_map_attachment returns.
b) needs to be fixed in the importers, and there's quite a few of
those. There's more than i915 here, which is why I think we should
have the dma_resv_add_shared_exclusive helper extracted from amdgpu.
Avoids hand-rolling this about 5 times (6 if we include the import
ioctl from Jason).
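
A sketch of what such a helper could look like (not an existing API; the
caller holds the reservation lock, and the quirk that
dma_resv_add_excl_fence() resets the shared list is exactly why the ordering
below matters):

static int dma_resv_add_shared_exclusive(struct dma_resv *resv,
					 struct dma_fence *fence)
{
	int ret = dma_resv_reserve_shared(resv, 1);

	if (ret)
		return ret;
	dma_resv_add_excl_fence(resv, fence);   /* resets the shared list */
	dma_resv_add_shared_fence(resv, fence); /* ...so re-add as shared too */
	return 0;
}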

Also I've like been trying to explain this ever since the entire
dynamic dma-buf thing started.
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


[Mesa-dev] [ANNOUNCE] mesa 21.1.3

2021-06-18 Thread Eric Engestrom
Hello everyone!

The third bugfix release for the 21.1 branch is finally here, a couple of 
days late because life got in the way. This one contains mostly AMD fixes.

The next bugfix release is scheduled for two weeks from now, on June 30th.

Cheers,
  Eric

---

Adam Jackson (1):
  classic/xlib: Fix the build after !9817

Bas Nieuwenhuizen (3):
  radv: Don't skip barriers that only change queues.
  radv: Actually return correct value for read-only DCC compressedness.
  radv: Allow DCC images to be compressed with foreign queues.

Dave Airlie (1):
  llvmpipe: add the interesting bit of cpu detection to the cache.

Duncan Hopkins (1):
  zink: Correct compiler issue with have_moltenvk member having been moved.

Eric Engestrom (5):
  .pick_status.json: Update to db83dc619c96c35a039f2d8a32e1a179c0f00d64
  .pick_status.json: Update to f884c2e3be363903a59dbee01868c7ad0bf0f346
  .pick_status.json: Update to 561f9ae74b2b7da06bb4830aaca8d017a3dd2746
  docs: add release notes for 21.1.3
  VERSION: bump for 21.1.3 release

Erik Faye-Lund (1):
  llvmpipe: fix edge-rule logic for lines

Felix DeGrood (1):
  anv: Clear all pending stall after pipe flush

Ian Romanick (1):
  util: Zero out all of mask in util_set_thread_affinity

Icecream95 (1):
  panfrost: Use first_tiler to check if tiling is needed

Jason Ekstrand (2):
  intel/vec4: Also use MOV_FOR_SCRATCH for swizzle resolves
  anv: Handle OOM in the pinned path in anv_reloc_list_add

Matt Turner (1):
  sparc: Avoid some redefinition warnings

Mike Blumenkrantz (6):
  zink: ci updates
  anv: fix availability for copying timestamp query results
  util/vbuf: fix buffer overrun in attribute conversions
  zink: fix caching of shader variants with inlined uniforms
  zink: use scissor region for discarding clears during blit
  zink: fix typo that's definitely not at all embarrassing or anything like 
that

Neha Bhende (1):
  svga: Initialize pipe_shader_state for transform shaders

Petr Vaněk (1):
  docs/install: remove one extra when

Pierre-Eric Pelloux-Prayer (6):
  frontend/dri: set PIPE_BIND_PROTECTED later
  frontend/dri: fix bool/int comparison
  radeonsi: fix encryption check for buffers
  radeonsi: add a gfx10 bug workaround for NOT_EOP
  radeonsi: dirty msaa_config on rs->multisample_enable change
  winsys/amdgpu: don't read bo->u.slab.entry after pb_slab_free

Rhys Perry (3):
  aco: do not clause NSA instructions
  aco: don't create 4 and 5 dword NSA instructions on GFX10
  aco: use v1b/v2b for ds_read_u8/ds_read_u16

Rob Clark (2):
  egl: zero is a valid fd
  freedreno/ir3: Fix use after free

Samuel Pitoiset (6):
  radv: enable RADV_DEBUG=invariantgeom for SotTR DX11/DX12 versions
  radv: emit PA_SC_CONSERVATIVE_RASTERIZATION_CNTL only on GFX9+
  aco: fix range checking for SSBO loads/stores with SGPR offset on GFX6-7
  aco: fix emitting literal offsets with SMEM on GFX7
  radv: do not launch an IB2 for secondary cmdbuf with INDIRECT_MULTI on 
GFX7
  radv: fix aligning the image offset by using align64()

Sergii Melikhov (1):
  util/format: Change the pointer offset.

Tony Wasserka (1):
  aco/ra: Fix off-by-one-error in print_regs

Vinson Lee (1):
  travis: Download XQuartz from GitHub.

git tag: mesa-21.1.3

https://mesa.freedesktop.org/archive/mesa-21.1.3.tar.xz
SHA256: cbe221282670875ffd762247b6a2c95dcee91d0a34c29802c75ef761fc891e69  mesa-21.1.3.tar.xz
SHA512: 8ca6d5516035484ea2a63bc6338794003ef167239ab0c220f8d3693f97f9725b46fc9d9a704c4ba11b83197d4b8e5f658d65ef0cce1e0957f5e58bd13726b9e0  mesa-21.1.3.tar.xz
PGP:  https://mesa.freedesktop.org/archive/mesa-21.1.3.tar.xz.sig





Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)

2021-06-18 Thread Daniel Vetter
On Fri, Jun 18, 2021 at 4:42 PM Christian König
 wrote:
>
> On 18.06.21 at 16:31, Daniel Vetter wrote:
> > [SNIP]
> >> And that drivers choose to ignore the exclusive fence is an absolutely
> >> no-go from a memory management and security point of view. Exclusive
> >> access means exclusive access. Ignoring that won't work.
> > Yeah, this is why I've been going all over the place about lifting
> > ttm_bo->moving to dma_resv. And also that I flat out don't trust your
> > audit; if you haven't found these drivers then very clearly you didn't
> > audit much at all :-)
>
> I just didn't think that anybody could be so stupid as to allow such a
> thing in.
>
> >> The only thing which saved us so far is the fact that drivers doing this
> >> are not that complex.
> >>
> >> BTW: How does it even work? I mean then you would run into the same
> >> problem as amdgpu with its page table update fences, e.g. that your
> >> shared fences might signal before the exclusive one.
> > So we don't ignore any fences when we rip out the backing storage.
> >
> > And yes there's currently a bug in all these drivers that if you set
> > both the "ignore implicit fences" and the "set the exclusive fence"
> > flag, then we just break this. Which is why I think we want to have a
> > dma_fence_add_shared_exclusive() helper extracted from your amdgpu
> > code, which we can then use everywhere to plug this.
>
> Daniel, do you realize what you are talking about here? Does that also
> apply to imported DMA-bufs?
>
> If yes, then that is a security hole you can push an elephant through.
>
> Can you point me to the code using that?
>
> >>> For dma-buf this isn't actually a problem, because dma-buf are pinned. You
> >>> can't move them while other drivers are using them, hence there's not
> >>> actually a ttm_bo->moving fence we can ignore.
> >>>
> >>> p2p dma-buf aka dynamic dma-buf is a different beast, and i915 (and fwiw
> >>> these other drivers) need to change before they can do dynamic dma-buf.
> >>>
>  Otherwise we have an information leak worth a CVE and that is certainly
>  not something we want.
> >>> Because yes otherwise we get a CVE. But right now I don't think we have
> >>> one.
> >> Yeah, agree. But this is just because of coincidence and not because of
> >> good engineering :)
> > Well the good news is that I think we're now talking slightly less
> > past each other than in the past few weeks :-)
> >
> >>> We do have a quite big confusion on what exactly the signaling ordering is
> >>> supposed to be between exclusive and the collective set of shared fences,
> >>> and there's some unifying that needs to happen here. But I think what
> >>> Jason implements here in the import ioctl is the most defensive version
> >>> possible, so really can't break any driver. It really works like you have
> >>> an ad-hoc gpu engine that does nothing itself, but waits for the current
> >>> exclusive fence and then sets the exclusive fence with its "CS" completion
> >>> fence.
> >>>
> >>> That's imo perfectly legit use-case.
> >> The use case is certainly legit, but I'm not sure if merging this at the
> >> moment is a good idea.
> >>
> >> Your note that drivers are already ignoring the exclusive fence in the
> >> dma_resv object was eye-opening to me. And I now have the very strong
> >> feeling that the synchronization and the design of the dma_resv object
> >> are even messier than I thought.
> >>
> >> To summarize, we can count ourselves really lucky that it didn't blow
> >> up in our faces already.
> > I don't think there was that much luck involved (ok I did find a
> > possible bug in i915 already around cpu cache flushing) - for SoC the
> > exclusive slot in dma_resv really is only used for implicit sync and
> > nothing else. The fun only starts when you throw in pipelined backing
> > storage movement.
> >
> > I guess this also explains why you just seemed to ignore me when I was
> > asking for a memory management exclusive fence for the p2p stuff, or
> > some other way to specifically handling movements (like ttm_bo->moving
> > or whatever it is). From my pov we clearly needed that to make p2p
> > dma-buf work well enough, mixing up the memory management exclusive
> > slot with the implicit sync exclusive slot never looked like a bright
> > idea to me.
> >
> > I think at least we now have some understanding here.
>
> Well, to be honest, what you have just told me means that i915 is
> seriously broken.
>
> Ignoring the exclusive fence on an imported DMA-buf is an absolutely
> *NO-GO* even without P2P.
>
> What you have stitched together here allows anybody to basically read
> any memory on the system with both i915 and nouveau, radeon or amdgpu.
>
> We need to fix that ASAP!

Ignoring _all_ fences is officially ok for pinned dma-buf. This is
what v4l does. Aside from that, it's definitely not just i915 that does this
even on the drm side; we have a few more drivers nowadays.

The rules are that after you've called dma_buf_map_attachment the

Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)

2021-06-18 Thread Daniel Vetter
On Fri, Jun 18, 2021 at 11:15 AM Christian König
 wrote:
>
> On 17.06.21 at 21:58, Daniel Vetter wrote:
> > On Thu, Jun 17, 2021 at 09:37:36AM +0200, Christian König wrote:
> >> [SNIP]
> >>> But, to the broader point, maybe?  I'm a little fuzzy on exactly where
> >>> i915 inserts and/or depends on fences.
> >>>
>  When you combine that with complex drivers which use TTM and buffer
>  moves underneath you can construct an information leak using this and
>  give userspace access to memory which is allocated to the driver, but
>  not yet initialized.
> 
>  This way you can leak things like page tables, passwords, kernel data
>  etc... in large amounts to userspace and is an absolutely no-go for
>  security.
> >>> Ugh...  Unfortunately, I'm really out of my depth on the implications
> >>> going on here but I think I see your point.
> >>>
>  That's why I'm said we need to get this fixed before we upstream this
>  patch set here and especially the driver change which is using that.
> >>> Well, i915 has had uAPI for a while to ignore fences.
> >> Yeah, exactly that's illegal.
> > You're a few years too late with closing that barn door. The following
> > drivers have this concept
> > - i915
> > - msm
> > - etnaviv
> >
> > Because you can't write a competent vulkan driver without this.
>
> WHAT? ^^
>
> > This was discussed at absolute epic length in various xdcs iirc. We did
> > ignore the vram/ttm/bo-moving problem a bit because all the people present
> > were hacking on integrated gpus (see list above), but that just means we
> > need to
> > treat the ttm_bo->moving fence properly.
>
> I should have visited more XDCs in the past, the problem is much larger
> than this.
>
> But I now start to understand what you are doing with that design and
> why it looks so messy to me, amdgpu is just currently the only driver
> which does Vulkan and complex memory management at the same time.
>
> >> At least the kernel internal fences like moving or clearing a buffer object
> >> needs to be taken into account before a driver is allowed to access a
> >> buffer.
> > Yes i915 needs to make sure it never ignores ttm_bo->moving.
>
> No, that is only the tip of the iceberg. See TTM for example also puts
> fences which drivers need to wait for into the shared slots. Same thing
> for use cases like clear on release, etc.
>
>  From my point of view the main purpose of the dma_resv object is to
> serve memory management, synchronization for command submission is just
> a secondary use case.
>
> And that drivers choose to ignore the exclusive fence is an absolutely
> no-go from a memory management and security point of view. Exclusive
> access means exclusive access. Ignoring that won't work.

Yeah, this is why I've been going all over the place about lifting
ttm_bo->moving to dma_resv. And also that I flat out don't trust your
audit; if you haven't found these drivers then very clearly you didn't
audit much at all :-)
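
What "never ignore ttm_bo->moving" boils down to for an accessor, as a
minimal sketch against the TTM structures of this era (locking and error
paths elided):

#include <drm/ttm/ttm_bo_api.h>
#include <linux/dma-fence.h>

static long wait_for_bo_move(struct ttm_buffer_object *bo)
{
	/* wait for a pending buffer move before touching the memory */
	return bo->moving ? dma_fence_wait(bo->moving, true) : 0;
}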

> The only thing which saved us so far is the fact that drivers doing this
> are not that complex.
>
> BTW: How does it even work? I mean then you would run into the same
> problem as amdgpu with its page table update fences, e.g. that your
> shared fences might signal before the exclusive one.

So we don't ignore any fences when we rip out the backing storage.

And yes there's currently a bug in all these drivers that if you set
both the "ignore implicit fences" and the "set the exclusive fence"
flag, then we just break this. Which is why I think we want to have a
dma_fence_add_shared_exclusive() helper extracted from your amdgpu
code, which we can then use everywhere to plug this.

> > For dma-buf this isn't actually a problem, because dma-buf are pinned. You
> > can't move them while other drivers are using them, hence there's not
> > actually a ttm_bo->moving fence we can ignore.
> >
> > p2p dma-buf aka dynamic dma-buf is a different beast, and i915 (and fwiw
> > these other drivers) need to change before they can do dynamic dma-buf.
> >
> >> Otherwise we have an information leak worth a CVE and that is certainly not
> >> something we want.
> > Because yes otherwise we get a CVE. But right now I don't think we have
> > one.
>
> Yeah, agree. But this is just because of coincidence and not because of
> good engineering :)

Well the good news is that I think we're now talking slightly less
past each other than in the past few weeks :-)

> > We do have a quite big confusion on what exactly the signaling ordering is
> > supposed to be between exclusive and the collective set of shared fences,
> > and there's some unifying that needs to happen here. But I think what
> > Jason implements here in the import ioctl is the most defensive version
> > possible, so really can't break any driver. It really works like you have
> > an ad-hoc gpu engine that does nothing itself, but waits for the current
> > exclusive fence and then sets the exclusive fence with its "CS" completion
> > fence.
> 

Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)

2021-06-18 Thread Jason Ekstrand
On Fri, Jun 18, 2021 at 4:15 AM Christian König
 wrote:
>
> On 17.06.21 at 21:58, Daniel Vetter wrote:
> > On Thu, Jun 17, 2021 at 09:37:36AM +0200, Christian König wrote:
> >> [SNIP]
> >>> But, to the broader point, maybe?  I'm a little fuzzy on exactly where
> >>> i915 inserts and/or depends on fences.
> >>>
>  When you combine that with complex drivers which use TTM and buffer
>  moves underneath you can construct an information leak using this and
>  give userspace access to memory which is allocated to the driver, but
>  not yet initialized.
> 
>  This way you can leak things like page tables, passwords, kernel data
>  etc... in large amounts to userspace and is an absolutely no-go for
>  security.
> >>> Ugh...  Unfortunately, I'm really out of my depth on the implications
> >>> going on here but I think I see your point.
> >>>
>  That's why I'm said we need to get this fixed before we upstream this
>  patch set here and especially the driver change which is using that.
> >>> Well, i915 has had uAPI for a while to ignore fences.
> >> Yeah, exactly that's illegal.
> > You're a few years too late with closing that barn door. The following
> > drivers have this concept
> > - i915
> > - msm
> > - etnaviv
> >
> > Because you can't write a competent vulkan driver without this.
>
> WHAT? ^^

I think it's fair to say that you can't write a competent Vulkan
driver with implicit sync getting in the way.  Since AMD removes all
the implicit sync internally, this solves most of the problems there.
RADV does suffer some heartache around WSI which is related but I'd
hardly say that makes it incompetent.

> > This was discussed at absolute epic length in various xdcs iirc. We did
> > ignore the vram/ttm/bo-moving problem a bit because all the people present
> > were hacking on integrated gpus (see list above), but that just means we
> > need to
> > treat the ttm_bo->moving fence properly.
>
> I should have visited more XDCs in the past, the problem is much larger
> than this.
>
> But I now start to understand what you are doing with that design and
> why it looks so messy to me, amdgpu is just currently the only driver
> which does Vulkan and complex memory management at the same time.

I'm reading "complex memory management" here and elsewhere as "has
VRAM".  All memory management is complex; shuffling to/from VRAM just
adds more layers.

> >> At least the kernel internal fences like moving or clearing a buffer object
> >> needs to be taken into account before a driver is allowed to access a
> >> buffer.
> > Yes i915 needs to make sure it never ignores ttm_bo->moving.
>
> No, that is only the tip of the iceberg. See TTM for example also puts
> fences which drivers need to wait for into the shared slots. Same thing
> for use cases like clear on release, etc.
>
>  From my point of view the main purpose of the dma_resv object is to
> serve memory management, synchronization for command submission is just
> a secondary use case.
>
> And that drivers choose to ignore the exclusive fence is an absolutely
> no-go from a memory management and security point of view. Exclusive
> access means exclusive access. Ignoring that won't work.
>
> The only thing which saved us so far is the fact that drivers doing this
> are not that complex.

I think there's something important in Daniel's list above with
drivers that have a "no implicit sync uAPI": None of them are TTM
based.  We (i915) have been doing our own thing for memory management
for a while and it may not follow your TTM mental model.  Sure,
there's a dma_resv in our BOs and we can import/export dma-buf but
that doesn't mean that, internally, we think of it the same way.  I
say this in very generic terms because there are a whole lot of
details that I don't know.  What I do know is that, whatever we're
doing, it's been pretty robust for many years.

That said, we are moving to TTM so, if I'm right that this is a GEM
<-> TTM conflict, we've got some thinking to do.

> BTW: How does it even work? I mean then you would run into the same
> problem as amdgpu with its page table update fences, e.g. that your
> shared fences might signal before the exclusive one.
>
> > For dma-buf this isn't actually a problem, because dma-buf are pinned. You
> > can't move them while other drivers are using them, hence there's not
> > actually a ttm_bo->moving fence we can ignore.
> >
> > p2p dma-buf aka dynamic dma-buf is a different beast, and i915 (and fwiw
> > these other drivers) need to change before they can do dynamic dma-buf.
> >
> >> Otherwise we have an information leak worth a CVE and that is certainly not
> >> something we want.
> > Because yes otherwise we get a CVE. But right now I don't think we have
> > one.
>
> Yeah, agree. But this is just because of coincidence and not because of
> good engineering :)
>
> > We do have a quite big confusion on what exactly the signaling ordering is
> > supposed to be between exclusive and the 

Re: [Mesa-dev] [PATCH 0/6] dma-buf: Add an API for exporting sync files (v12)

2021-06-18 Thread Christian König

On 17.06.21 at 21:58, Daniel Vetter wrote:

On Thu, Jun 17, 2021 at 09:37:36AM +0200, Christian König wrote:

[SNIP]

But, to the broader point, maybe?  I'm a little fuzzy on exactly where
i915 inserts and/or depends on fences.


When you combine that with complex drivers which use TTM and buffer
moves underneath you can construct an information leak using this and
give userspace access to memory which is allocated to the driver, but
not yet initialized.

This way you can leak things like page tables, passwords, kernel data
etc... in large amounts to userspace and is an absolutely no-go for
security.

Ugh...  Unfortunately, I'm really out of my depth on the implications
going on here but I think I see your point.


That's why I'm said we need to get this fixed before we upstream this
patch set here and especially the driver change which is using that.

Well, i915 has had uAPI for a while to ignore fences.

Yeah, exactly that's illegal.

You're a few years too late with closing that barn door. The following
drivers have this concept
- i915
- msm
- etnaviv

Because you can't write a competent vulkan driver without this.


WHAT? ^^


This was discussed at absolute epic length in various xdcs iirc. We did ignore
the vram/ttm/bo-moving problem a bit because all the people present were
hacking on integrated gpus (see list above), but that just means we need to
treat the ttm_bo->moving fence properly.


I should have visited more XDCs in the past, the problem is much larger 
than this.


But I now start to understand what you are doing with that design and 
why it looks so messy to me, amdgpu is just currently the only driver 
which does Vulkan and complex memory management at the same time.



At least the kernel internal fences like moving or clearing a buffer object
needs to be taken into account before a driver is allowed to access a
buffer.

Yes i915 needs to make sure it never ignores ttm_bo->moving.


No, that is only the tip of the iceberg. See TTM for example also puts 
fences which drivers need to wait for into the shared slots. Same thing
for use cases like clear on release, etc.


From my point of view the main purpose of the dma_resv object is to 
serve memory management, synchronization for command submission is just 
a secondary use case.


And that drivers choose to ignore the exclusive fence is an absolutely 
no-go from a memory management and security point of view. Exclusive 
access means exclusive access. Ignoring that won't work.


The only thing which saved us so far is the fact that drivers doing this 
are not that complex.


BTW: How does it even work? I mean then you would run into the same 
problem as amdgpu with its page table update fences, e.g. that your 
shared fences might signal before the exclusive one.



For dma-buf this isn't actually a problem, because dma-buf are pinned. You
can't move them while other drivers are using them, hence there's not
actually a ttm_bo->moving fence we can ignore.

p2p dma-buf aka dynamic dma-buf is a different beast, and i915 (and fwiw
these other drivers) need to change before they can do dynamic dma-buf.


Otherwise we have an information leak worth a CVE and that is certainly not
something we want.

Because yes otherwise we get a CVE. But right now I don't think we have
one.


Yeah, agree. But this is just because of coincidence and not because of
good engineering :)



We do have a quite big confusion on what exactly the signaling ordering is
supposed to be between exclusive and the collective set of shared fences,
and there's some unifying that needs to happen here. But I think what
Jason implements here in the import ioctl is the most defensive version
possible, so really can't break any driver. It really works like you have
an ad-hoc gpu engine that does nothing itself, but waits for the current
exclusive fence and then sets the exclusive fence with its "CS" completion
fence.

That's imo perfectly legit use-case.


The use case is certainly legit, but I'm not sure if merging this at the 
moment is a good idea.


Your note that drivers are already ignoring the exclusive fence in the 
dma_resv object was eye-opening to me. And I now have the very strong
feeling that the synchronization and the design of the dma_resv object
are even messier than I thought.


To summarize, we can count ourselves really lucky that it didn't blow up
in our faces already.



Same for the export one. Waiting for a previous snapshot of implicit
fences is imo perfectly ok use-case and useful for compositors - client
might soon start more rendering, and on some drivers that always results
in the exclusive slot being set, so if you don't take a snapshot you
oversync real bad for your atomic flip.


The export use case is unproblematic as far as I can see.


Those changes are years in the past. If we have a real problem here (not sure
on that yet), then we'll have to figure out how to fix it without nuking
uAPI.

Well, that was the basic idea of attaching flags to the fences in the dma_resv object.