On Fri, Mar 13, 2015 at 12:36 PM, Daniel Vetter <daniel at ffwll.ch> wrote:
> On Fri, Mar 13, 2015 at 06:46:33PM +0900, Michel Dänzer wrote:
>> On 13.03.2015 18:11, Daniel Vetter wrote:
>> > On Thu, Mar 12, 2015 at 06:02:56PM +0900, Michel Dänzer wrote:
>> >> On 12.03.2015 06:14, Alex Deucher wrote:
>> >>> On Wed, Mar 11, 2015 at 4:51 PM, Alex Deucher <alexdeucher at gmail.com> 
>> >>> wrote:
>> >>>> On Wed, Mar 11, 2015 at 2:21 PM, Christian König
>> >>>> <deathsimple at vodafone.de> wrote:
>> >>>>> On 11.03.2015 16:44, Alex Deucher wrote:
>> >>>>>>
>> >>>>>> radeon_bo_create() calls radeon_ttm_placement_from_domain()
>> >>>>>> before ttm_bo_init() is called.  radeon_ttm_placement_from_domain()
>> >>>>>> uses the ttm bo size to determine when to select top down
>> >>>>>> allocation but since the ttm bo is not initialized yet the
>> >>>>>> check is always false.
>> >>>>>>
>> >>>>>> Noticed-by: Oded Gabbay <oded.gabbay at amd.com>
>> >>>>>> Signed-off-by: Alex Deucher <alexander.deucher at amd.com>
>> >>>>>> Cc: stable at vger.kernel.org
>> >>>>>
>> >>>>>
>> >>>>> And I was already wondering why the heck the BOs always made this 
>> >>>>> ping/pong
>> >>>>> in memory after creation.
>> >>>>>
>> >>>>> Patch is Reviewed-by: Christian König <christian.koenig at amd.com>
>> >>>>
>> >>>> And fixing that promptly broke VCE due to vram location requirements.
>> >>>> Updated patch attached.  Thoughts?
>> >>>
>> >>> And one more take to make things a bit more explicit for static kernel
>> >>> driver allocations.
>> >>
>> >> struct ttm_place::lpfn is honoured even with TTM_PL_FLAG_TOPDOWN, so
>> >> the latter should work with RADEON_GEM_CPU_ACCESS. It sounds like the
>> >> problem is really that some BOs are expected to be within a certain
>> >> range from the beginning of VRAM, but lpfn isn't set accordingly. It
>> >> would be better to fix that by setting lpfn directly than indirectly via
>> >> RADEON_GEM_CPU_ACCESS.
>> >>
>> >>
>> >> Anyway, since this isn't the first bug which prevents
>> >> TTM_PL_FLAG_TOPDOWN from working as intended in the radeon driver, I
>> >> wonder if its performance impact should be re-evaluated. Lauri?
>> >
>> > Topdown allocation in drm_mm is just a hint/bias really, it won't try too
>> > hard to segregate things. If you depend upon perfect topdown allocation
>> > for correctness then this won't work well. The trouble starts once you've
>> > split your free space up - it's not going to look for the topmost hole
>> > first but still picks just the one on top of the stack.
>>
>> TTM_PL_FLAG_TOPDOWN sets DRM_MM_SEARCH_BELOW as well as
>> DRM_MM_CREATE_TOP. So it should traverse the list of holes in reverse
>> order, right?
>
> Sure that additional segregation helps a bit more, but in the end if you
> split things badly and are a bit unlucky then the buffer can end up pretty
> much anywhere. Just wanted to mention that in case someone gets confused
> when the buffers end up in unexpected places.

There's no explicit requirement that they be at the top or bottom per
se; it's just that the buffers in question have a specific restricted
location requirement, and they are set up at driver init time and not
moved for the life of the driver, so I'd rather not put them somewhere
too sub-optimal.

Alex
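
The ordering bug from the original commit message can be modeled in a
few lines: radeon_ttm_placement_from_domain() read the size out of the
embedded TTM BO to decide on top-down allocation, but radeon_bo_create()
called it before ttm_bo_init() had filled that size in, so the check
always saw zero. This is a minimal userspace sketch of that pattern; the
struct names, the threshold, and the "pass the size explicitly" variant
are illustrative assumptions, not the actual kernel structures or the
patch from this thread.

```c
#include <assert.h>
#include <stddef.h>

#define TOPDOWN_THRESHOLD (512 * 1024)	/* hypothetical size cutoff */

/* Stand-in for the embedded struct ttm_buffer_object. */
struct fake_ttm_bo {
	size_t size;	/* stays 0 until the init step runs */
};

/* Stand-in for struct radeon_bo, which embeds the TTM BO. */
struct fake_radeon_bo {
	struct fake_ttm_bo tbo;
};

/* Buggy shape: decide top-down from the embedded BO, which has not
 * been initialized yet at the point this is called. */
static int want_topdown_from_bo(const struct fake_radeon_bo *rbo)
{
	return rbo->tbo.size > TOPDOWN_THRESHOLD;
}

/* One possible fix shape: the caller already knows the requested
 * size, so take it as an explicit parameter instead. */
static int want_topdown_from_size(size_t size)
{
	return size > TOPDOWN_THRESHOLD;
}
```

Before the init step the embedded size is still zero, so the first
helper never requests top-down placement no matter how large the
allocation actually is, while the explicit-size variant does.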

> -Daniel
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> +41 (0) 79 365 57 48 - http://blog.ffwll.ch
> _______________________________________________
> dri-devel mailing list
> dri-devel at lists.freedesktop.org
> http://lists.freedesktop.org/mailman/listinfo/dri-devel
