25.01.2019 11:57, Mikko Perttunen writes:
> On 24.1.2019 23.53, Dmitry Osipenko wrote:
>> 24.01.2019 21:02, Thierry Reding writes:
>>> From: Thierry Reding <tred...@nvidia.com>
>>>
>>> Tegra186 and later are different from earlier generations in that they
>>> use an ARM SMMU rather than the Tegra SMMU. The ARM SMMU driver behaves
>>> slightly differently in that the geometry for IOMMU domains is set only
>>> after a device has been attached to it. This is to make sure that the
>>> SMMU instance that the domain belongs to is known, because each
>>> instance can have a different input address space (i.e. geometry).
>>>
>>> Work around this by moving all IOVA allocations to a point where the
>>> geometry of the domain is properly initialized.
>>>
>>> This second version of the series addresses all review comments and
>>> adds a number of patches that will actually allow host1x to work with
>>> an SMMU enabled on Tegra186. The patches also add the programming
>>> required to address the full 40 bits of address space.
>>>
>>> This supersedes the following patch:
>>>
>>> https://patchwork.kernel.org/patch/10775579/
>>
>> Secondly, it seems there are additional restrictions for host1x jobs on
>> T186; at least the T186 TRM suggests so. In particular, it looks like
>> each client is hardwired to a specific sync point and to a specific
>> channel. Or maybe there is an assumption that the upstream kernel can
>> work only in hypervisor mode or with all protections disabled. Could
>> you please clarify?
>>
>
> There are no such syncpoint/channel restrictions. The upstream driver
> indeed currently only supports the case where there is no "hypervisor"
> (that is, a server process that allocates host1x resources) running and
> the kernel has access to the Host1x COMMON/"hypervisor" register
> aperture.
> Adding support for the situation where this is not the case shouldn't
> be very difficult, but we currently don't have any upstream platforms
> where the Host1x server exists (it's only there on automotive
> platforms).
Thank you very much for the clarification!

_______________________________________________
dri-devel mailing list
dri-devel@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/dri-devel