On Thu, Aug 08, 2019 at 09:44:32AM -0700, Rob Clark wrote:
> > GFP_HIGHUSER basically just means that this is an allocation that could
> > dip into highmem, in which case it would not have a kernel mapping.
> > This can happen on arm + LPAE, but not on arm64.
>
> Just a dumb question, but why is
On Thu, Aug 08, 2019 at 01:58:08PM +0200, Daniel Vetter wrote:
> > > We use shmem to get at swappable pages. We generally just assume that
> > > the gpu can get at those pages, but things fall apart in fun ways:
> > > - some setups somehow inject bounce buffers. Some drivers just give
> > > up,
On Thu, Aug 08, 2019 at 09:58:27AM +0200, Christoph Hellwig wrote:
> On Wed, Aug 07, 2019 at 05:49:59PM +0100, Mark Rutland wrote:
> > For arm64, we can tear down portions of the linear map, but that has to
> > be done explicitly, and this is only possible when using rodata_full. If
> > not using
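The explicit teardown being discussed could look roughly like the following kernel-context sketch; it is not a standalone program, and the set_direct_map_* helper names are an assumption based on the contemporaneous direct-map work:

```c
#include <linux/mm.h>
#include <linux/set_memory.h>

/* Kernel-context sketch, not a standalone program. Drop and restore a
 * page's cacheable linear-map (direct-map) alias. On arm64 this only
 * works when the linear map was built at page granularity, i.e. with
 * rodata_full. */
static int hide_page_from_linear_map(struct page *page)
{
    unsigned long addr = (unsigned long)page_address(page);
    int ret = set_direct_map_invalid_noflush(page);

    if (ret)
        return ret;     /* e.g. linear map not page-granular */

    /* the caller must also flush the stale TLB entry */
    flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
    return 0;
}

static void restore_linear_map(struct page *page)
{
    set_direct_map_default_noflush(page);
}
```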
On Wed, Aug 07, 2019 at 09:09:53AM -0700, Rob Clark wrote:
> > > (Eventually I'd like to support pages passed in from userspace.. but
> > > that is down the road.)
> >
> > Eww. Please talk to the iommu list before starting on that.
>
> This is more of a long term goal, we can't do it until we
On Wed, Aug 07, 2019 at 10:30:04AM -0700, Rob Clark wrote:
> So, we do end up using GFP_HIGHUSER, which appears to get passed thru
> when shmem gets to the point of actually allocating pages.. not sure
> if that just ends up being a hint, or if it guarantees that we don't
> get something in the
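A sketch of how the mask can get "passed thru": gem initialization stores it on the shmem file's address_space, and shmem's page allocation later reads it back from there. Kernel-context sketch, not standalone-runnable; it mirrors what drm_gem_object_init() does, but treat the details as assumptions:

```c
#include <linux/pagemap.h>
#include <linux/shmem_fs.h>

/* Kernel-context sketch: set up a shmem backing file and record the
 * gfp mask on its mapping. shmem_read_mapping_page() consults the
 * mapping's gfp mask when it actually allocates pages. */
static struct file *gem_shmem_setup(size_t size)
{
    struct file *filp = shmem_file_setup("gem object", size, VM_NORESERVE);

    if (!IS_ERR(filp))
        mapping_set_gfp_mask(filp->f_mapping,
                             GFP_HIGHUSER | __GFP_RECLAIMABLE);
    return filp;
}
```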
On Wed, Aug 07, 2019 at 05:49:59PM +0100, Mark Rutland wrote:
> I'm fairly confident that the linear/direct map cacheable alias is not
> torn down when pages are allocated. The generic page allocation code
> doesn't do so, and I see nothing in the shmem code to do so.
It is not torn down anywhere.
On Wed, Aug 07, 2019 at 01:38:08PM +0100, Mark Rutland wrote:
> > I *believe* that there are not alias mappings (that I don't control
> > myself) for pages coming from
> > shmem_file_setup()/shmem_read_mapping_page()..
>
> AFAICT, that's regular anonymous memory, so there will be a cacheable
>
On Tue, Aug 06, 2019 at 07:11:41AM -0700, Rob Clark wrote:
> Agreed that drm_cflush_* isn't a great API. In this particular case
> (IIUC), I need wb+inv so that there aren't dirty cache lines that drop
> out to memory later, and so that I don't get a cache hit on
> uncached/wc mmap'ing.
So what
On Tue, Aug 06, 2019 at 11:38:16AM +0200, Daniel Vetter wrote:
> I just read through all the arch_sync_dma_for_device/cpu functions and
> none seem to use the struct *dev argument. Iirc you've said that's on the
> way out?
Not actively on the way out yet, but now that we support all
architectures
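For reference, the hooks under discussion had these prototypes at the time; the struct device argument is the one Daniel notes is unused (treat the exact header placement as an assumption):

```c
#include <linux/dma-direction.h>

/* dma-noncoherent declarations circa this thread; the unused dev
 * argument was subsequently dropped from these hooks */
void arch_sync_dma_for_device(struct device *dev, phys_addr_t paddr,
                              size_t size, enum dma_data_direction dir);
void arch_sync_dma_for_cpu(struct device *dev, phys_addr_t paddr,
                           size_t size, enum dma_data_direction dir);
```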
On Tue, Aug 06, 2019 at 10:48:21AM +0200, Christoph Hellwig wrote:
> This goes in the wrong direction. drm_cflush_* are a bad API we need to
> get rid of, not add use of it. The reason for that is two-fold:
>
> a) it doesn't address how cache maintenance actually works in most
> platforms. When talking about a cache we have three fundamental
> operations:
> 1)
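A hedged kernel-context sketch (not standalone-runnable) of how such operations are usually reached through the streaming DMA API, which is the wb+inv pairing Rob describes above; the function names here are ours:

```c
#include <linux/dma-mapping.h>

/* Kernel-context sketch. The direction argument expresses which cache
 * operation is required:
 *   for_device, DMA_TO_DEVICE    -> write back dirty lines
 *   for_cpu,    DMA_FROM_DEVICE  -> invalidate stale lines
 *   DMA_BIDIRECTIONAL            -> write back + invalidate */
static void cpu_hand_off_to_gpu(struct device *dev, dma_addr_t dma,
                                size_t size)
{
    dma_sync_single_for_device(dev, dma, size, DMA_BIDIRECTIONAL);
}

static void cpu_take_back_from_gpu(struct device *dev, dma_addr_t dma,
                                   size_t size)
{
    dma_sync_single_for_cpu(dev, dma, size, DMA_FROM_DEVICE);
}
```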