On Mon, Sep 11, 2023 at 09:24:42AM -0600, Jeffrey Hugo wrote:
> On 9/1/2023 10:48 AM, Stanislaw Gruszka wrote:
> > From: Jacek Lawrynowicz <jacek.lawrynow...@linux.intel.com>
> > 
> > Usages of DRM_IVPU_BO_UNCACHED should be replaced by DRM_IVPU_BO_WC.
> > There is no functional benefit from DRM_IVPU_BO_UNCACHED when these
> > buffers are never mapped into the host VM.
> > 
> > This allows cutting the buffer handling code in the kernel driver
> > in half.
> > 
> > Signed-off-by: Jacek Lawrynowicz <jacek.lawrynow...@linux.intel.com>
> > Signed-off-by: Stanislaw Gruszka <stanislaw.grus...@linux.intel.com>
> > ---
> >   drivers/accel/ivpu/ivpu_fw.c  | 2 +-
> >   drivers/accel/ivpu/ivpu_gem.c | 3 ---
> >   include/uapi/drm/ivpu_accel.h | 2 +-
> >   3 files changed, 2 insertions(+), 5 deletions(-)
> > 
> > diff --git a/drivers/accel/ivpu/ivpu_fw.c b/drivers/accel/ivpu/ivpu_fw.c
> > index 2fef9fe154aa..8ab0f3225205 100644
> > --- a/drivers/accel/ivpu/ivpu_fw.c
> > +++ b/drivers/accel/ivpu/ivpu_fw.c
> > @@ -248,7 +248,7 @@ static int ivpu_fw_mem_init(struct ivpu_device *vdev)
> >     if (fw->shave_nn_size) {
> >             fw->mem_shave_nn = ivpu_bo_alloc_internal(vdev, vdev->hw->ranges.shave.start,
> > -                                                     fw->shave_nn_size, DRM_IVPU_BO_UNCACHED);
> > +                                                     fw->shave_nn_size, DRM_IVPU_BO_WC);
> >             if (!fw->mem_shave_nn) {
> >                     ivpu_err(vdev, "Failed to allocate shavenn buffer\n");
> >                     ret = -ENOMEM;
> > diff --git a/drivers/accel/ivpu/ivpu_gem.c b/drivers/accel/ivpu/ivpu_gem.c
> > index 915c53d7bb97..2a91eb1e3627 100644
> > --- a/drivers/accel/ivpu/ivpu_gem.c
> > +++ b/drivers/accel/ivpu/ivpu_gem.c
> > @@ -89,8 +89,6 @@ static int __must_check shmem_alloc_pages_locked(struct ivpu_bo *bo)
> >     if (bo->flags & DRM_IVPU_BO_WC)
> >             set_pages_array_wc(pages, npages);
> > -   else if (bo->flags & DRM_IVPU_BO_UNCACHED)
> > -           set_pages_array_uc(pages, npages);
> >     bo->pages = pages;
> >     return 0;
> > @@ -366,7 +364,6 @@ ivpu_bo_alloc(struct ivpu_device *vdev, u64 size, u32 flags, const struct ivpu_b
> >     switch (flags & DRM_IVPU_BO_CACHE_MASK) {
> >     case DRM_IVPU_BO_CACHED:
> > -   case DRM_IVPU_BO_UNCACHED:
> >     case DRM_IVPU_BO_WC:
> >             break;
> >     default:
> > diff --git a/include/uapi/drm/ivpu_accel.h b/include/uapi/drm/ivpu_accel.h
> > index 262db0c3beee..de1944e42c65 100644
> > --- a/include/uapi/drm/ivpu_accel.h
> > +++ b/include/uapi/drm/ivpu_accel.h
> > @@ -196,7 +196,7 @@ struct drm_ivpu_bo_create {
> >      *
> >      * %DRM_IVPU_BO_UNCACHED:
> >      *
> > -    * Allocated BO will not be cached on host side nor snooped on the VPU side.
> > +    * Not supported. Use DRM_IVPU_BO_WC instead.
> >      *
> >      * %DRM_IVPU_BO_WC:
> >      *
> 
> Feels like this will break existing userspace.  You could check whether
> userspace specified UNCACHED and remap it to WC before processing the
> request.  Maybe also use WARN_ONCE to indicate that userspace should be
> updated.
> 
> Or is it the case that userspace never actually used this?

Usage of those buffers was removed some time ago:
https://github.com/intel/linux-vpu-driver/commit/c473c9826cb28fa3dcf8883fc804b1e84c6b5fb1

It will not be part of the first user-mode driver release, so I think we
can safely make this change.

Regards
Stanislaw
