On 04/04/18 13:28, Laurent Pinchart wrote:
> Hi Tomi,
> On Wednesday, 4 April 2018 13:02:04 EEST Tomi Valkeinen wrote:
>> On 04/04/18 12:51, Laurent Pinchart wrote:
>>> On Wednesday, 4 April 2018 10:37:05 EEST Tomi Valkeinen wrote:
>>>> On 04/04/18 00:11, Laurent Pinchart wrote:
>>>>> I assume access to DMM-mapped buffers to be way more frequent than
>>>>> access to the DMM registers. If that's the case, this partial workaround
>>>>> should only slightly lower the probability of system lock-up. Do you
>>>>> have plans to implement a workaround that will fix the problem
>>>>> completely ?
>>>> CPU only accesses memory via DMM when using TILER 2D buffers, which are
>>>> not officially supported. For non-2D, the pages are mapped directly to
>>>> the CPU without DMM in between.
>>> What is the DMM used for with non-2D then ? Does it need to be setup at
>>> all ?
>> It creates a contiguous view of memory for IPs without IOMMUs, like DSS.
> OK, got it. In that case the CPU accesses don't need to go through the DMM,
> only the device accesses do, as the CPU will go through the MMU. Sorry for
> the noise.

Slightly related, just thinking out loud:

This is the first part of the workaround. The other part would be to
make TILER 2D available to the CPU via some kind of indirect access.
TILER 2D memory is already mapped to the CPU in a custom way (if I
recall right, only two pages are mapped at once, with a custom DMM
mapping for those).

I think sDMA would be the choice there too, allocating two pages as a
"cache" and using sDMA to fill and flush those pages.
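As a rough illustration of the idea, here is a minimal userspace-style sketch of such a two-page "cache" in plain C. All names here (page_cache, sdma_copy, tiler_mem) are hypothetical; sdma_copy is just a memcpy standing in for a real sDMA transfer, and there is no actual DMM or kernel API involved. It only shows the fill/flush mechanics: on a miss, a slot is evicted (flushed back if dirty) and refilled from backing memory.

```c
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE   4096
#define CACHE_PAGES 2   /* two pages mapped at once, as in the current mapping */

/* Hypothetical backing store standing in for TILER 2D memory. */
static uint8_t tiler_mem[16 * PAGE_SIZE];

/* Stand-in for an sDMA transfer; a real implementation would submit a
 * DMA descriptor instead of doing a CPU copy. */
static void sdma_copy(uint8_t *dst, const uint8_t *src, size_t len)
{
	memcpy(dst, src, len);
}

struct page_cache {
	uint8_t data[CACHE_PAGES][PAGE_SIZE];
	long    pfn[CACHE_PAGES];   /* which backing page is cached; -1 = empty */
	int     dirty[CACHE_PAGES];
	int     next_victim;        /* trivial round-robin replacement */
};

static void cache_init(struct page_cache *c)
{
	for (int i = 0; i < CACHE_PAGES; i++) {
		c->pfn[i] = -1;
		c->dirty[i] = 0;
	}
	c->next_victim = 0;
}

/* Write a slot's page back to backing memory if it was modified. */
static void cache_flush_slot(struct page_cache *c, int slot)
{
	if (c->pfn[slot] >= 0 && c->dirty[slot]) {
		sdma_copy(&tiler_mem[c->pfn[slot] * PAGE_SIZE],
			  c->data[slot], PAGE_SIZE);
		c->dirty[slot] = 0;
	}
}

/* Return a pointer to the cached copy of byte `offset`, filling the
 * cache from backing memory on a miss. */
static uint8_t *cache_access(struct page_cache *c, size_t offset, int write)
{
	long pfn = offset / PAGE_SIZE;
	int slot;

	for (slot = 0; slot < CACHE_PAGES; slot++)
		if (c->pfn[slot] == pfn)
			goto hit;

	/* Miss: evict a slot (flushing it first) and refill it via "sDMA". */
	slot = c->next_victim;
	c->next_victim = (c->next_victim + 1) % CACHE_PAGES;
	cache_flush_slot(c, slot);
	sdma_copy(c->data[slot], &tiler_mem[pfn * PAGE_SIZE], PAGE_SIZE);
	c->pfn[slot] = pfn;
hit:
	if (write)
		c->dirty[slot] = 1;
	return &c->data[slot][offset % PAGE_SIZE];
}
```

In the real workaround the fills and flushes would of course go through sDMA to the DMM-mapped area instead of a plain copy, which is the whole point of avoiding direct CPU accesses.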

I haven't spent any time on that, as TILER 2D has other issues and is
not very usable.

