On Thu, Nov 29, 2018 at 10:57 AM Christoph Hellwig wrote:
>
> On Thu, Nov 29, 2018 at 03:43:50PM +0100, Daniel Vetter wrote:
> > Yeah we had patches to add manual cache management code to drm, so we
> > don't have to abuse the dma streaming api anymore. Got shouted down.
> > Abusing the dma
On Thu, Nov 29, 2018 at 12:03 PM Robin Murphy wrote:
>
> On 29/11/2018 19:57, Tomasz Figa wrote:
> > On Thu, Nov 29, 2018 at 11:40 AM Jordan Crouse
> > wrote:
> >>
> >> On Thu, Nov 29, 2018 at 01:48:15PM -0500, Rob Clark wrote:
> >>> On Thu, Nov 29, 2018 at 10:54 AM Christoph Hellwig wrote:
>
On Tue, Nov 20, 2018 at 02:04:14PM -0800, Jeykumar Sankaran wrote:
> On 2018-11-07 07:55, Sean Paul wrote:
> > On Tue, Nov 06, 2018 at 02:36:30PM -0800, Jeykumar Sankaran wrote:
> > > msm maintains a separate structure to define vblank
> > > work definitions and a list to track events submitted
>
Hi Christoph,
On Thu, Nov 29, 2018 at 05:57:15PM +0100, Christoph Hellwig wrote:
>
> As for the buffer sharing: at least for the DMA API side I want to
> move the current buffer sharing users away from dma_alloc_coherent
> (and coherent dma_alloc_attrs users) and the remapping done in there
>
On 29/11/2018 19:57, Tomasz Figa wrote:
On Thu, Nov 29, 2018 at 11:40 AM Jordan Crouse wrote:
On Thu, Nov 29, 2018 at 01:48:15PM -0500, Rob Clark wrote:
On Thu, Nov 29, 2018 at 10:54 AM Christoph Hellwig wrote:
On Thu, Nov 29, 2018 at 09:42:50AM -0500, Rob Clark wrote:
Maybe the thing we
On Thu, Nov 29, 2018 at 11:40 AM Jordan Crouse wrote:
>
> On Thu, Nov 29, 2018 at 01:48:15PM -0500, Rob Clark wrote:
> > On Thu, Nov 29, 2018 at 10:54 AM Christoph Hellwig wrote:
> > >
> > > On Thu, Nov 29, 2018 at 09:42:50AM -0500, Rob Clark wrote:
> > > > Maybe the thing we need to do is just
On Thu, Nov 29, 2018 at 01:48:15PM -0500, Rob Clark wrote:
> On Thu, Nov 29, 2018 at 10:54 AM Christoph Hellwig wrote:
> >
> > On Thu, Nov 29, 2018 at 09:42:50AM -0500, Rob Clark wrote:
> > > Maybe the thing we need to do is just implement a blacklist of
> > > compatible strings for devices which
On Thu, Nov 29, 2018 at 12:24 PM Tomasz Figa wrote:
>
> [CC Marek]
>
> On Thu, Nov 29, 2018 at 9:09 AM Daniel Vetter wrote:
> >
> > On Thu, Nov 29, 2018 at 5:57 PM Christoph Hellwig wrote:
> > >
> > > Note that one thing I'd like to avoid is exposing these functions directly
> > > to drivers, as
On Thu, Nov 29, 2018 at 10:54 AM Christoph Hellwig wrote:
>
> On Thu, Nov 29, 2018 at 09:42:50AM -0500, Rob Clark wrote:
> > Maybe the thing we need to do is just implement a blacklist of
> > compatible strings for devices which should skip the automatic
> > iommu/dma hookup. Maybe a bit ugly,
On Thu, Nov 29, 2018 at 10:53 AM Christoph Hellwig wrote:
>
> On Thu, Nov 29, 2018 at 09:25:43AM -0500, Rob Clark wrote:
> > > As I told you before: hell no. If you spent the slightest amount of
> > > time actually trying to understand what you are doing here you'd know this
> > > can't work. Just
On Thu, Nov 29, 2018 at 05:33:03PM +, Brian Starkey wrote:
> This sounds very useful for ion, to avoid CPU cache maintenance as
> long as the buffer stays in device-land.
>
> One question though: How would you determine "the last user to unmap"
> to know when to do the final "make visible to
On Thu, Nov 29, 2018 at 09:24:17AM -0800, Tomasz Figa wrote:
> Whether the cache maintenance operation needs to actually do anything
> or not is a function of `dev`. We can have some devices that are
> coherent with CPU caches, and some that are not, on the same system.
Yes, but that part is not
On Thu, Nov 29, 2018 at 06:09:05PM +0100, Daniel Vetter wrote:
> What kind of abuse do you expect? It could very well be that gpu folks
> call that "standard use case" ... At least on x86 with the i915 driver
> we rely pretty heavily on architectural guarantees for how cache
> flushes work.
(grr - resending because I didn't actually send the correct patch.
My fingers were going faster than my brain).
This is an updated version of [1] adding interconnect support
without OPP bindings to get maximum performance from the GPU.
Big delta here is that I stupidly confused Bps and KBps and
Try to get the interconnect path for the GPU and vote for the maximum
bandwidth to support all frequencies. This is needed for performance.
Later we will want to scale the bandwidth based on the frequency to
also optimize for power but that will require some device tree
infrastructure that does
[CC Marek]
On Thu, Nov 29, 2018 at 9:09 AM Daniel Vetter wrote:
>
> On Thu, Nov 29, 2018 at 5:57 PM Christoph Hellwig wrote:
> > On Thu, Nov 29, 2018 at 05:28:07PM +0100, Daniel Vetter wrote:
> > > Just spent a bit of time reading through the implementations already
> > > merged. Is the struct
Try to get the interconnect path for the GPU and vote for the maximum
bandwidth to support all frequencies. This is needed for performance.
Later we will want to scale the bandwidth based on the frequency to
also optimize for power but that will require some device tree
infrastructure that does
This is an updated version of [1] adding interconnect support
without OPP bindings to get maximum performance from the GPU.
Big delta here is that I stupidly confused Bps and KBps and
passed a value that overflowed the API. Correct bandwidth
values are now passed.
[1]
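The Bps vs KBps mix-up described in the cover letter can be sketched as follows. The interconnect framework takes bandwidth votes in kBps as a 32-bit value, so a raw bytes-per-second figure must be scaled down before the vote or it can overflow. The `gpu_vote_kbps` helper and the 7.2 GB/s figure below are illustrative; the conversion macro mirrors the kernel's `Bps_to_icc()`:

```c
#include <stdint.h>

/* Mirrors the kernel's Bps_to_icc() conversion macro from
 * include/linux/interconnect.h: bandwidth votes are in kBps. */
#define Bps_to_icc(x)  ((x) / 1000)

/* Convert a peak bandwidth in bytes/s to the u32 kBps value the
 * interconnect vote expects. 7.2 GB/s expressed in Bps would not
 * fit in a u32, which is the overflow described above. */
static uint32_t gpu_vote_kbps(uint64_t peak_bps)
{
        return (uint32_t)Bps_to_icc(peak_bps);
}
```

A vote of 7216000000 Bps (about 7.2 GB/s) becomes 7216000 kBps, which fits comfortably in 32 bits.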
On Thu, Nov 29, 2018 at 5:57 PM Christoph Hellwig wrote:
> On Thu, Nov 29, 2018 at 05:28:07PM +0100, Daniel Vetter wrote:
> > Just spent a bit of time reading through the implementations already
> > merged. Is the struct device *dev parameter actually needed anywhere?
> > dma-api definitely needs
On Thu, Nov 29, 2018 at 05:28:07PM +0100, Daniel Vetter wrote:
> Just spent a bit of time reading through the implementations already
> merged. Is the struct device *dev parameter actually needed anywhere?
> dma-api definitely needs it, because we need that to pick the right iommu.
> But for cache
On Thu, Nov 29, 2018 at 04:57:58PM +0100, Christoph Hellwig wrote:
> On Thu, Nov 29, 2018 at 03:43:50PM +0100, Daniel Vetter wrote:
> > Yeah we had patches to add manual cache management code to drm, so we
> > don't have to abuse the dma streaming api anymore. Got shouted down.
> > Abusing the dma
Am 29.11.18 um 11:05 schrieb Sharat Masetty:
> This patch adds two new functions to help client drivers suspend and
> resume the scheduler job timeout. This can be useful in cases where the
> hardware has preemption support enabled. Using this, it is possible to have
> the timeout active only for
Am 29.11.18 um 11:05 schrieb Sharat Masetty:
> In cases where the scheduler instance is used as a base object of another
> driver object, it's not clear if the driver can call scheduler cleanup on the
> fail path. So, set sched->thread to NULL, so that the driver can safely
> call
On Thu, Nov 29, 2018 at 03:43:50PM +0100, Daniel Vetter wrote:
> Yeah we had patches to add manual cache management code to drm, so we
> don't have to abuse the dma streaming api anymore. Got shouted down.
> Abusing the dma streaming api also gets shouted down. It's a gpu, any
> idea of these
On Thu, Nov 29, 2018 at 09:42:50AM -0500, Rob Clark wrote:
> Maybe the thing we need to do is just implement a blacklist of
> compatible strings for devices which should skip the automatic
> iommu/dma hookup. Maybe a bit ugly, but it would also solve a problem
> preventing us from enabling
On Thu, Nov 29, 2018 at 09:25:43AM -0500, Rob Clark wrote:
> > As I told you before: hell no. If you spent the slightest amount of
> > time actually trying to understand what you are doing here you'd know this
> > can't work. Just turn on dma debugging and this will blow up in your
> > face.
>
>
On Wed, Nov 28, 2018 at 10:07 PM Robin Murphy wrote:
>
> On 28/11/2018 16:24, Stephen Boyd wrote:
> > Quoting Vivek Gautam (2018-11-27 02:11:41)
> >> @@ -1966,6 +1970,23 @@ static const struct of_device_id
> >> arm_smmu_of_match[] = {
> >> };
> >> MODULE_DEVICE_TABLE(of, arm_smmu_of_match);
On Thu, Nov 29, 2018 at 3:26 PM Rob Clark wrote:
>
> On Thu, Nov 29, 2018 at 9:14 AM Christoph Hellwig wrote:
> >
> > On Thu, Nov 29, 2018 at 07:33:15PM +0530, Vivek Gautam wrote:
> > > dma_map_sg() expects a DMA domain. However, the drm devices
> > > have traditionally been using unmanaged
On Thu, Nov 29, 2018 at 9:25 AM Rob Clark wrote:
>
> On Thu, Nov 29, 2018 at 9:14 AM Christoph Hellwig wrote:
> >
> > On Thu, Nov 29, 2018 at 07:33:15PM +0530, Vivek Gautam wrote:
> > > dma_map_sg() expects a DMA domain. However, the drm devices
> > > have traditionally been using unmanaged
On Thu, Nov 29, 2018 at 07:33:15PM +0530, Vivek Gautam wrote:
> dma_map_sg() expects a DMA domain. However, the drm devices
> have traditionally been using an unmanaged iommu domain, which
> is a non-dma type. Using dma mapping APIs with that domain is bad.
>
> Replace dma_map_sg() calls with
The error check on ret for a negative error return always fails because
the return value of iommu_map_sg() is unsigned and can never be negative.
Detected with Coccinelle:
drivers/gpu/drm/msm/msm_iommu.c:69:9-12: WARNING: Unsigned expression
compared with zero: ret < 0
Signed-off-by: Wen Yang
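The bug Coccinelle flags can be reduced to a few lines: `iommu_map_sg()` returns the mapped length as an unsigned `size_t` (0 on failure), never a negative errno, so a `ret < 0` check is dead code. A minimal standalone illustration, with a stub standing in for the real kernel call:

```c
#include <stddef.h>
#include <stdbool.h>

/* Stand-in for iommu_map_sg(), which returns the mapped length as
 * an unsigned size_t -- 0 on failure, never a negative errno. */
static size_t stub_iommu_map_sg(size_t bytes_mapped)
{
        return bytes_mapped;
}

/* Buggy form from msm_iommu.c: comparing an unsigned value against
 * zero with '<' is always false, so failure is silently ignored. */
static bool map_failed_buggy(size_t ret)
{
        return ret < 0;
}

/* Correct form for an unsigned "mapped length" return value. */
static bool map_failed_fixed(size_t ret)
{
        return ret == 0;
}
```

Even a total mapping failure (`ret == 0`) slips past the buggy check, while the fixed check catches it.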
On Thu, Nov 29, 2018 at 9:14 AM Christoph Hellwig wrote:
>
> On Thu, Nov 29, 2018 at 07:33:15PM +0530, Vivek Gautam wrote:
> > dma_map_sg() expects a DMA domain. However, the drm devices
> > have traditionally been using an unmanaged iommu domain, which
> > is a non-dma type. Using dma mapping APIs
dma_map_sg() expects a DMA domain. However, the drm devices
have traditionally been using an unmanaged iommu domain, which
is a non-dma type. Using dma mapping APIs with that domain is bad.
Replace dma_map_sg() calls with dma_sync_sg_for_device{|cpu}()
to do the cache maintenance.
Signed-off-by: Vivek
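The shape of the change described in this commit message can be sketched as follows. The DMA calls are stubbed so the sketch stands alone; in the real driver these are the kernel's `dma_sync_sg_for_device()`/`dma_sync_sg_for_cpu()`, and the wrapper name is illustrative:

```c
#include <stddef.h>

/* Simplified stand-ins, not the real kernel definitions. */
struct scatterlist { void *page; size_t length; };
enum dma_data_direction { DMA_BIDIRECTIONAL };

static int cache_cleaned;

/* Stub: in the kernel this performs only cache maintenance on the
 * pages backing the sg list, without touching IOVA allocation. */
static void dma_sync_sg_for_device(void *dev, struct scatterlist *sg,
                                   int nents, enum dma_data_direction dir)
{
        (void)dev; (void)sg; (void)nents; (void)dir;
        cache_cleaned = 1;
}

/* Before: dma_map_sg(dev, sg, nents, dir) -- wrong for an unmanaged,
 * non-dma iommu domain, since the DMA API may also try to manage the
 * mapping itself. After: only the cache maintenance step remains. */
static void msm_gem_sync_to_device(void *dev, struct scatterlist *sg,
                                   int nents)
{
        dma_sync_sg_for_device(dev, sg, nents, DMA_BIDIRECTIONAL);
}
```

The point of the patch is exactly this split: the driver keeps managing its own iommu domain and uses the sync calls purely for cache maintenance.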
In cases where the scheduler instance is used as a base object of another
driver object, it's not clear if the driver can call scheduler cleanup on the
fail path. So, set sched->thread to NULL, so that the driver can safely
call drm_sched_fini() during cleanup.
Signed-off-by: Sharat Masetty
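The fix described in this commit message boils down to a NULL sentinel: if init fails partway, `sched->thread` is set to NULL so a later cleanup call can tell whether a worker thread was ever started. A simplified sketch with illustrative names, not the real `drm_gpu_scheduler` layout:

```c
#include <stddef.h>

struct gpu_sched {
        void *thread;   /* worker thread handle, NULL if never started */
};

static int sched_init(struct gpu_sched *sched, int simulate_failure)
{
        if (simulate_failure) {
                sched->thread = NULL;   /* the fix: mark as not started */
                return -1;
        }
        sched->thread = (void *)1;      /* pretend a kthread was created */
        return 0;
}

/* Safe on both the success and failure paths: only tears down the
 * thread if one actually exists, mirroring drm_sched_fini(). */
static void sched_fini(struct gpu_sched *sched)
{
        if (sched->thread)
                sched->thread = NULL;   /* would stop the kthread here */
}
```

With the sentinel in place, a driver embedding the scheduler can unconditionally call its fini helper on the error path.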
This patch adds two new functions to help client drivers suspend and
resume the scheduler job timeout. This can be useful in cases where the
hardware has preemption support enabled. Using this, it is possible to have
the timeout active only for the ring which is active on the ringbuffer.
This
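The suspend/resume-timeout idea described here can be sketched as a pair of helpers: with hardware preemption, only the ring currently running on the hardware keeps its job timeout armed, and a preemption switch moves the timeout from the old ring to the new one. Names below are illustrative, not the real drm_sched API:

```c
#include <stdbool.h>

struct ring {
        bool timeout_active;    /* is this ring's job timeout armed? */
};

static void sched_suspend_timeout(struct ring *r)
{
        r->timeout_active = false;
}

static void sched_resume_timeout(struct ring *r)
{
        r->timeout_active = true;
}

/* On a preemption switch, disarm the timeout on the ring leaving the
 * hardware and arm it on the ring taking over. */
static void on_preempt_switch(struct ring *from, struct ring *to)
{
        sched_suspend_timeout(from);
        sched_resume_timeout(to);
}
```

This keeps a preempted ring from hitting a spurious timeout while it is not actually making (or expected to make) progress on the hardware.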
On Wed, Nov 28, 2018 at 6:09 PM Rob Clark wrote:
>
> On Wed, Nov 28, 2018 at 2:39 AM Christoph Hellwig wrote:
> >
> > > + /*
> > > + * dma_sync_sg_*() flush the physical pages, so point
> > > + * sg->dma_address to the physical ones for the right
> > >
36 matches