On Sun, Sep 14, 2014 at 12:36:43PM +0200, Christian König wrote:
> Yeah, right. Providing the fd to reassign to a fence would indeed reduce
> the create/close overhead.
>
> But it would still be more overhead than for example a simple on demand
> growing ring buffer which then uses 64bit sequence numbers in userspace
> to refer to a fence in the kernel.
>
> Apart from

BTW, we can recycle fences in userspace just like we recycle buffers.
That should make the create/close overhead non-existent.
Marek
On Sat, Sep 13, 2014 at 2:25 PM, Christian König wrote:

> Doing such combining and cleaning up fds as soon as they have been
> passed on should keep each application's fd usage fairly small.

Yeah, but this is exactly what we wanted to avoid internally because of
the IOCTL overhead.

And thinking more about it for our driver internal use we will
On Fri, 12 Sep 2014 18:08:23 +0200, Christian König wrote:

> As Daniel said using fd is most likely the way we want to do it but this
> remains vague.

Separating out the discussion of whether it should be an fd or not:
using an fd sounds fine to me in general, but I have some concerns as
well.

For example, what was the maximum number of open FDs per process again?
On Fri, Sep 12, 2014 at 05:58:09PM +0200, Christian König wrote:
> pass in a list of fences to wait for before beginning a command
> submission.

The Android implementation has a mechanism for combining multiple sync
points into a brand new single sync pt. Thus APIs only ever need to take
in a
On Fri, Sep 12, 2014 at 03:23:22PM +0200, Christian König wrote:

Hello everyone,

to allow concurrent buffer access by different engines beyond the
multiple readers/single writer model that we currently use in radeon and
other drivers we need some kind of synchronization object exposed to
userspace.

My initial patch set for this used (or rather abused) zero