Re: [Linaro-mm-sig] [PATCH v2] dma-buf/sw_sync: Avoid recursive lock during fence signal

2023-08-27 Thread Christian König

Am 22.08.23 um 19:15 schrieb Rob Clark:

On Tue, Aug 22, 2023 at 6:01 AM Christian König
 wrote:

Am 18.08.23 um 16:59 schrieb Rob Clark:

From: Rob Clark 

If a signal callback releases the sw_sync fence, that will trigger a
deadlock as the timeline_fence_release recurses onto the fence->lock
(used both for signaling and the timeline tree).

To avoid that, temporarily hold an extra reference to the signalled
fences until after we drop the lock.

(This is an alternative implementation of 
https://patchwork.kernel.org/patch/11664717/
which avoids some potential UAF issues with the original patch.)

v2: Remove now obsolete comment, use list_move_tail() and
  list_del_init()

Reported-by: Bas Nieuwenhuizen 
Fixes: d3c6dd1fb30d ("dma-buf/sw_sync: Synchronize signal vs syncpt free")
Signed-off-by: Rob Clark 

Reviewed-by: Christian König 

Thanks, any chance you could take this via drm-misc?


I've already pushed this quite a while ago.

At the moment I have a problem answering because AMD has a new security 
policy which makes it impossible to push patches and access mails at the 
same time.


We are working with our IT to get this fixed, but at the moment it's 
eating my time.


Sorry for the delay,
Christian.



BR,
-R


---
   drivers/dma-buf/sw_sync.c | 18 +-
   1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/drivers/dma-buf/sw_sync.c b/drivers/dma-buf/sw_sync.c
index 63f0aeb66db6..f0a35277fd84 100644
--- a/drivers/dma-buf/sw_sync.c
+++ b/drivers/dma-buf/sw_sync.c
@@ -191,6 +191,7 @@ static const struct dma_fence_ops timeline_fence_ops = {
*/
   static void sync_timeline_signal(struct sync_timeline *obj, unsigned int inc)
   {
+ LIST_HEAD(signalled);
   struct sync_pt *pt, *next;

   trace_sync_timeline(obj);
@@ -203,21 +204,20 @@ static void sync_timeline_signal(struct sync_timeline *obj, unsigned int inc)
 		if (!timeline_fence_signaled(&pt->base))
 			break;
 
-		list_del_init(&pt->link);
+		dma_fence_get(&pt->base);
+
+		list_move_tail(&pt->link, &signalled);
 		rb_erase(&pt->node, &obj->pt_tree);
 
-		/*
-		 * A signal callback may release the last reference to this
-		 * fence, causing it to be freed. That operation has to be
-		 * last to avoid a use after free inside this loop, and must
-		 * be after we remove the fence from the timeline in order to
-		 * prevent deadlocking on timeline->lock inside
-		 * timeline_fence_release().
-		 */
 		dma_fence_signal_locked(&pt->base);
 	}
 
 	spin_unlock_irq(&obj->lock);
+
+	list_for_each_entry_safe(pt, next, &signalled, link) {
+		list_del_init(&pt->link);
+		dma_fence_put(&pt->base);
+	}
   }

   /**



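For readers following the thread: with the fix applied, the signalling path reads
roughly as below (assembled from the hunks quoted above plus the unchanged
surroundings of sync_timeline_signal()). The extra reference is taken while
obj->lock is held and only dropped after the lock is released, so a final
dma_fence_put() can no longer recurse into timeline_fence_release() under the
same lock.

static void sync_timeline_signal(struct sync_timeline *obj, unsigned int inc)
{
	LIST_HEAD(signalled);
	struct sync_pt *pt, *next;

	trace_sync_timeline(obj);

	spin_lock_irq(&obj->lock);

	obj->value += inc;

	list_for_each_entry_safe(pt, next, &obj->pt_list, link) {
		if (!timeline_fence_signaled(&pt->base))
			break;

		/* Keep the fence alive until after the lock is dropped. */
		dma_fence_get(&pt->base);

		list_move_tail(&pt->link, &signalled);
		rb_erase(&pt->node, &obj->pt_tree);

		dma_fence_signal_locked(&pt->base);
	}

	spin_unlock_irq(&obj->lock);

	/* The final reference may be dropped here, outside obj->lock. */
	list_for_each_entry_safe(pt, next, &signalled, link) {
		list_del_init(&pt->link);
		dma_fence_put(&pt->base);
	}
}
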

RE: [RFC v1 1/3] mm/mmu_notifier: Add a new notifier for mapping updates (new pages)

2023-08-27 Thread Kasireddy, Vivek
Hi Alistair,

> 
> > >
> > >> >> > > > No, adding HMM_PFN_REQ_WRITE still doesn't help in fixing the
> > >> issue.
> > >> >> > > > Although, I do not have THP enabled (or built-in), shmem does
> > not
> > >> evict
> > >> >> > > > the pages after hole punch as noted in the comment in
> > >> >> shmem_fallocate():
> > >> >> > >
> > >> >> > > This is the source of all your problems.
> > >> >> > >
> > >> >> > > Things that are mm-centric are supposed to track the VMAs and
> > >> changes
> > >> >> to
> > >> >> > > the PTEs. If you do something in userspace and it doesn't cause
> the
> > >> >> > > CPU page tables to change then it certainly shouldn't cause any
> > mmu
> > >> >> > > notifiers or hmm_range_fault changes.
> > >> >> > I am not doing anything out of the blue in the userspace. I think 
> > >> >> > the
> > >> >> behavior
> > >> >> > I am seeing with shmem (where an invalidation event
> > >> >> (MMU_NOTIFY_CLEAR)
> > >> >> > does occur because of a hole punch but the PTEs don't really get
> > >> updated)
> > >> >> > can arguably be considered an optimization.
> > >> >>
> > >> >> Your explanations don't make sense.
> > >> >>
> > >> >> If MMU_NOTIFER_CLEAR was sent but the PTEs were left present
> then:
> > >> >>
> > >> >> > > There should still be an invalidation notifier at some point when
> the
> > >> >> > > CPU tables do eventually change, whenever that is. Missing that
> > >> >> > > notification would be a bug.
> > >> >> > I clearly do not see any notification getting triggered (from both
> > >> >> shmem_fault()
> > >> >> > and hugetlb_fault()) when the PTEs do get updated as the hole is
> > refilled
> > >> >> > due to writes. Are you saying that there needs to be an invalidation
> > >> event
> > >> >> > (MMU_NOTIFY_CLEAR?) dispatched at this point?
> > >> >>
> > >> >> You don't get to get shmem_fault in the first place.
> > >> > What I am observing is that even after MMU_NOTIFY_CLEAR (hole
> > punch)
> > >> is sent,
> > >> > hmm_range_fault() finds that the PTEs associated with the hole are 
> > >> > still
> > >> pte_present().
> > >> > I think it remains this way as long as there are reads on the hole. 
> > >> > Once
> > >> there are
> > >> > writes, it triggers shmem_fault() which results in PTEs getting updated
> > but
> > >> without
> > >> > any notification.
> > >>
> > >> Oh wait, this is shmem. The read from hmm_range_fault() (assuming
> you
> > >> specified HMM_PFN_REQ_FAULT) will trigger shmem_fault() due to the
> > >> missing PTE.
> > > When running one of the udmabuf subtests (introduced in the third patch
> > of
> > > this series), I see that MMU_NOTIFY_CLEAR is sent when a hole is
> punched.
> > > As a response, hmm_range_fault() is called from the udmabuf invalidate
> > callback,
> >
> > Actually I'm suprised that works. If you've setup an interval notifier
> > and are updating the notifier sequence numbers correctly I would expect
> > hmm_range_fault() to return -EBUSY until
> > mmu_notifier_invalidate_range_end() is called.
> >
> > It might be helpful to post the code you're testing with somewhere but
> > are you calling mmu_interval_read_begin() to start the critical section
> > and mmu_interval_set_seq() to update the sequence in another notifier?
> > I'm not at all convinced calling hmm_range_fault() from a notifier can
> > be made to work though.
Turns out, calling hmm_range_fault() from the invalidate callback was indeed
a problem and the reason why new pages were not faulted-in. In other words,
it looks like the invalidate callback is not the right place to invoke
hmm_range_fault(), as the PTEs may not have been cleared yet.

> That could be part of the problem. I mean the way hmm_range_fault()
> is invoked from the invalidate callback is probably incorrect as you are
> suggesting. Anyway, here is the code I am testing with:
> static bool invalidate_udmabuf(struct mmu_interval_notifier *mn,
>const struct mmu_notifier_range *range_mn,
>unsigned long cur_seq)
> {
> struct udmabuf_vma_range *range =
> container_of(mn, struct udmabuf_vma_range, range_mn);
> struct udmabuf *ubuf = range->ubuf;
> struct hmm_range hrange = {0};
> unsigned long *pfns, num_pages, timeout;
> int i, ret;
> 
> printk("invalidate; start = %lu, end = %lu\n",
>range->start, range->end);
> 
> hrange.notifier = mn;
> hrange.default_flags = HMM_PFN_REQ_FAULT;
> hrange.start = max(range_mn->start, range->start);
> hrange.end = min(range_mn->end, range->end);
> num_pages = (hrange.end - hrange.start) >> PAGE_SHIFT;
> 
> pfns = kmalloc_array(num_pages, sizeof(*pfns), GFP_KERNEL);
> if (!pfns)
> return true;
> 
> printk("invalidate; num pages = %lu\n", num_pages);
> 
> hrange.hmm_pfns = pfns;
> timeout = jiffies + msecs_to_jiffies(HMM_RANGE_DEFAULT_TIMEOUT);
> do {
>
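
For reference, hmm_range_fault() is normally driven outside the invalidate
callback, in a begin/retry loop against the interval notifier. A minimal sketch
of that pattern, adapted from the in-tree HMM documentation; notifier, mm, pfns
and driver_lock are placeholders rather than names from the code above:

	struct hmm_range range = {
		.notifier	= &notifier,	/* struct mmu_interval_notifier */
		.start		= start,
		.end		= end,
		.hmm_pfns	= pfns,		/* unsigned long array */
		.default_flags	= HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
	};
	int ret;

again:
	range.notifier_seq = mmu_interval_read_begin(&notifier);

	mmap_read_lock(mm);
	ret = hmm_range_fault(&range);
	mmap_read_unlock(mm);
	if (ret) {
		if (ret == -EBUSY)	/* raced with an invalidation, retry */
			goto again;
		return ret;
	}

	mutex_lock(&driver_lock);
	if (mmu_interval_read_retry(&notifier, range.notifier_seq)) {
		mutex_unlock(&driver_lock);
		goto again;
	}
	/* pfns[] is valid here; update the device/udmabuf mapping. */
	mutex_unlock(&driver_lock);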

Re: [V10 5/8] drm/amd/pm: setup the framework to support Wifi RFI mitigation feature

2023-08-27 Thread Lazar, Lijo
[AMD Official Use Only - General]

> 'j' was initially set as 'num_of_wbrf_ranges - 1'. So, I suppose 
> 'num_of_wbrf_ranges' should be set as 'j' instead of 'j - 1'. Right?

Yes.

Thanks,
Lijo

From: Quan, Evan 
Sent: Monday, August 28, 2023 7:23:55 AM
To: Lazar, Lijo ; l...@kernel.org ; 
johan...@sipsolutions.net ; da...@davemloft.net 
; eduma...@google.com ; 
k...@kernel.org ; pab...@redhat.com ; 
Deucher, Alexander ; raf...@kernel.org 
; Limonciello, Mario 
Cc: linux-ker...@vger.kernel.org ; 
linux-a...@vger.kernel.org ; 
amd-...@lists.freedesktop.org ; 
dri-devel@lists.freedesktop.org ; 
linux-wirel...@vger.kernel.org ; 
net...@vger.kernel.org 
Subject: RE: [V10 5/8] drm/amd/pm: setup the framework to support Wifi RFI 
mitigation feature

[AMD Official Use Only - General]

> -Original Message-
> From: Lazar, Lijo 
> Sent: Friday, August 25, 2023 10:09 PM
> To: Quan, Evan ; l...@kernel.org;
> johan...@sipsolutions.net; da...@davemloft.net; eduma...@google.com;
> k...@kernel.org; pab...@redhat.com; Deucher, Alexander
> ; raf...@kernel.org; Limonciello, Mario
> 
> Cc: linux-ker...@vger.kernel.org; linux-a...@vger.kernel.org; amd-
> g...@lists.freedesktop.org; dri-devel@lists.freedesktop.org; linux-
> wirel...@vger.kernel.org; net...@vger.kernel.org
> Subject: Re: [V10 5/8] drm/amd/pm: setup the framework to support Wifi
> RFI mitigation feature
>
>
>
> On 8/25/2023 2:08 PM, Evan Quan wrote:
> > With WBRF feature supported, as a driver responding to the
> > frequencies, amdgpu driver is able to do shadow pstate switching to
> > mitigate possible interference(between its (G-)DDR memory clocks and
> > local radio module frequency bands used by Wifi 6/6e/7).
> >
> > Signed-off-by: Evan Quan 
> > Reviewed-by: Mario Limonciello 
> > --
> > v1->v2:
> >- update the prompt for feature support(Lijo)
> > v8->v9:
> >- update parameter document for smu_wbrf_event_handler(Simon)
> > v9->v10:
> >   - correct the logics for wbrf range sorting(Lijo)
> > ---
> >   drivers/gpu/drm/amd/amdgpu/amdgpu.h   |   2 +
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   |  17 ++
> >   drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c | 195
> ++
> >   drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h |  23 +++
> >   drivers/gpu/drm/amd/pm/swsmu/smu_internal.h   |   3 +
> >   5 files changed, 240 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> > index a3b86b86dc47..2bfc9111ab00 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> > @@ -247,6 +247,8 @@ extern int amdgpu_sg_display;
> >
> >   extern int amdgpu_user_partt_mode;
> >
> > +extern int amdgpu_wbrf;
> > +
> >   #define AMDGPU_VM_MAX_NUM_CTX 4096
> >   #define AMDGPU_SG_THRESHOLD   (256*1024*1024)
> >   #define AMDGPU_WAIT_IDLE_TIMEOUT_IN_MS3000
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> > index 0593ef8fe0a6..1c574bd3b60d 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> > @@ -195,6 +195,7 @@ int amdgpu_use_xgmi_p2p = 1;
> >   int amdgpu_vcnfw_log;
> >   int amdgpu_sg_display = -1; /* auto */
> >   int amdgpu_user_partt_mode =
> AMDGPU_AUTO_COMPUTE_PARTITION_MODE;
> > +int amdgpu_wbrf = -1;
> >
> >   static void amdgpu_drv_delayed_reset_work_handler(struct work_struct
> > *work);
> >
> > @@ -981,6 +982,22 @@ module_param_named(user_partt_mode,
> amdgpu_user_partt_mode, uint, 0444);
> >   module_param(enforce_isolation, bool, 0444);
> >   MODULE_PARM_DESC(enforce_isolation, "enforce process isolation
> > between graphics and compute . enforce_isolation = on");
> >
> > +/**
> > + * DOC: wbrf (int)
> > + * Enable Wifi RFI interference mitigation feature.
> > + * Due to electrical and mechanical constraints there may be likely
> > +interference of
> > + * relatively high-powered harmonics of the (G-)DDR memory clocks
> > +with local radio
> > + * module frequency bands used by Wifi 6/6e/7. To mitigate the
> > +possible RFI interference,
> > + * with this feature enabled, PMFW will use either “shadowed P-State”
> > +or “P-State” based
> > + * on active list of frequencies in-use (to be avoided) as part of
> > +initial setting or
> > + * P-state transition. However, there may be potential performance
> > +impact with this
> > + * feature enabled.
> > + * (0 = disabled, 1 = enabled, -1 = auto (default setting, will be
> > +enabled if supported))  */ MODULE_PARM_DESC(wbrf,
> > +   "Enable Wifi RFI interference mitigation (0 = disabled, 1 = enabled,
> > +-1 = auto(default)"); module_param_named(wbrf, amdgpu_wbrf, int,
> > +0444);
> > +
> >   /* These devices are not supported by amdgpu.
> >* They are supported by the mach64, r128, radeon drivers
> >*/
> > diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> > 

RE: [V10 1/8] ACPI: Add support for AMD ACPI based Wifi band RFI mitigation feature

2023-08-27 Thread Quan, Evan
[AMD Official Use Only - General]

> -Original Message-
> From: Simon Horman 
> Sent: Sunday, August 27, 2023 11:43 PM
> To: Quan, Evan 
> Cc: l...@kernel.org; johan...@sipsolutions.net; da...@davemloft.net;
> eduma...@google.com; k...@kernel.org; pab...@redhat.com; Deucher,
> Alexander ; raf...@kernel.org; Lazar, Lijo
> ; Limonciello, Mario ;
> linux-ker...@vger.kernel.org; linux-a...@vger.kernel.org; amd-
> g...@lists.freedesktop.org; dri-devel@lists.freedesktop.org; linux-
> wirel...@vger.kernel.org; net...@vger.kernel.org
> Subject: Re: [V10 1/8] ACPI: Add support for AMD ACPI based Wifi band RFI
> mitigation feature
>
> On Fri, Aug 25, 2023 at 04:38:39PM +0800, Evan Quan wrote:
> > Due to electrical and mechanical constraints in certain platform
> > designs there may be likely interference of relatively high-powered
> > harmonics of the (G-)DDR memory clocks with local radio module
> > frequency bands used by Wifi 6/6e/7.
> >
> > To mitigate this, AMD has introduced a mechanism that devices can use
> > to notify active use of particular frequencies so that other devices
> > can make relative internal adjustments as necessary to avoid this resonance.
> >
> > Signed-off-by: Evan Quan 
>
> ...
>
> > diff --git a/drivers/acpi/amd_wbrf.c b/drivers/acpi/amd_wbrf.c
>
> ...
>
> > +/**
> > + * acpi_amd_wbrf_add_exclusion - broadcast the frequency band the
> device
> > + *   is using
> > + *
> > + * @dev: device pointer
> > + * @in: input structure containing the frequency band the device is
> > +using
> > + *
> > + * Broadcast to other consumers the frequency band the device starts
> > + * to use. Underneath the surface the information is cached into an
> > + * internal buffer first. Then a notification is sent to all those
> > + * registered consumers. So then they can retrieve that buffer to
> > + * know the latest active frequency bands. The benifit with such
> > +design
>
> nit: ./checkpatch.pl --codespell suggests benifit -> benefit.
Thanks, will fix that.

Evan
>
> > + * is for those consumers which have not been registered yet, they
> > +can
> > + * still have a chance to retrieve such information later.
> > + */
> > +int acpi_amd_wbrf_add_exclusion(struct device *dev,
> > +   struct wbrf_ranges_in_out *in)
> > +{
> > +   struct acpi_device *adev = ACPI_COMPANION(dev);
> > +   int ret;
> > +
> > +   if (!adev)
> > +   return -ENODEV;
> > +
> > +   ret = wbrf_record(adev, WBRF_RECORD_ADD, in);
> > +   if (ret)
> > +   return ret;
> > +
> > +   blocking_notifier_call_chain(&wbrf_chain_head,
> > +WBRF_CHANGED,
> > +NULL);
> > +
> > +   return 0;
> > +}
> > +EXPORT_SYMBOL_GPL(acpi_amd_wbrf_add_exclusion);
>
> ...
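
For context, a consumer of this notifier chain would look roughly like the
sketch below. The register/retrieve helper names here are assumptions for
illustration only and are not taken from the quoted patch:

/* Hypothetical consumer: react to changes of the cached exclusion ranges. */
static int my_wbrf_notify(struct notifier_block *nb, unsigned long action,
			  void *data)
{
	struct wbrf_ranges_in_out ranges = {0};

	if (action != WBRF_CHANGED)
		return NOTIFY_DONE;

	/* Assumed helper: copies the internally cached in-use ranges. */
	if (acpi_amd_wbrf_retrieve_exclusions(my_dev, &ranges))
		return NOTIFY_DONE;

	/* Walk the returned ranges and steer clocks away from them. */
	return NOTIFY_OK;
}

static struct notifier_block my_wbrf_nb = {
	.notifier_call = my_wbrf_notify,
};

/* Assumed helper: hooks my_wbrf_nb into the blocking notifier chain. */
/* acpi_amd_wbrf_register_notifier(&my_wbrf_nb); */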


RE: [V10 7/8] drm/amd/pm: enable Wifi RFI mitigation feature support for SMU13.0.0

2023-08-27 Thread Quan, Evan
[AMD Official Use Only - General]

> -Original Message-
> From: Lazar, Lijo 
> Sent: Friday, August 25, 2023 10:13 PM
> To: Quan, Evan ; l...@kernel.org;
> johan...@sipsolutions.net; da...@davemloft.net; eduma...@google.com;
> k...@kernel.org; pab...@redhat.com; Deucher, Alexander
> ; raf...@kernel.org; Limonciello, Mario
> 
> Cc: linux-ker...@vger.kernel.org; linux-a...@vger.kernel.org; amd-
> g...@lists.freedesktop.org; dri-devel@lists.freedesktop.org; linux-
> wirel...@vger.kernel.org; net...@vger.kernel.org
> Subject: Re: [V10 7/8] drm/amd/pm: enable Wifi RFI mitigation feature
> support for SMU13.0.0
>
>
>
> On 8/25/2023 2:08 PM, Evan Quan wrote:
> > Fulfill the SMU13.0.0 support for Wifi RFI mitigation feature.
> >
> > Signed-off-by: Evan Quan 
> > Reviewed-by: Mario Limonciello 
> > ---
> >   drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h |  3 +
> >   drivers/gpu/drm/amd/pm/swsmu/inc/smu_types.h  |  3 +-
> >   drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h  |  3 +
> >   .../gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c|  9 +++
> >   .../drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c  | 60
> +++
> >   5 files changed, 77 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
> > index 60d595344c45..a081e6bb27c4 100644
> > --- a/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
> > +++ b/drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h
> > @@ -325,6 +325,7 @@ enum smu_table_id
> > SMU_TABLE_PACE,
> > SMU_TABLE_ECCINFO,
> > SMU_TABLE_COMBO_PPTABLE,
> > +   SMU_TABLE_WIFIBAND,
> > SMU_TABLE_COUNT,
> >   };
> >
> > @@ -1501,6 +1502,8 @@ enum smu_baco_seq {
> >  __dst_size);  \
> >   })
> >
> > +#define HZ_IN_MHZ  1000000U
> > +
> >   #if !defined(SWSMU_CODE_LAYER_L2)
> && !defined(SWSMU_CODE_LAYER_L3)
> && !defined(SWSMU_CODE_LAYER_L4)
> >   int smu_get_power_limit(void *handle,
> > uint32_t *limit,
> > diff --git a/drivers/gpu/drm/amd/pm/swsmu/inc/smu_types.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_types.h
> > index 297b70b9388f..5bbb60289a79 100644
> > --- a/drivers/gpu/drm/amd/pm/swsmu/inc/smu_types.h
> > +++ b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_types.h
> > @@ -245,7 +245,8 @@
> > __SMU_DUMMY_MAP(AllowGpo),  \
> > __SMU_DUMMY_MAP(Mode2Reset),\
> > __SMU_DUMMY_MAP(RequestI2cTransaction), \
> > -   __SMU_DUMMY_MAP(GetMetricsTable),
> > +   __SMU_DUMMY_MAP(GetMetricsTable), \
> > +   __SMU_DUMMY_MAP(EnableUCLKShadow),
> >
> >   #undef __SMU_DUMMY_MAP
> >   #define __SMU_DUMMY_MAP(type) SMU_MSG_##type
> > diff --git a/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
> > b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
> > index 355c156d871a..dd70b56aa71e 100644
> > --- a/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
> > +++ b/drivers/gpu/drm/amd/pm/swsmu/inc/smu_v13_0.h
> > @@ -299,5 +299,8 @@ int smu_v13_0_update_pcie_parameters(struct
> smu_context *smu,
> >  uint32_t pcie_gen_cap,
> >  uint32_t pcie_width_cap);
> >
> > +int smu_v13_0_enable_uclk_shadow(struct smu_context *smu,
> > +bool enablement);
> > +
> >   #endif
> >   #endif
> > diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
> > b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
> > index 9b62b45ebb7f..6a5cb582aa92 100644
> > --- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
> > +++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0.c
> > @@ -2472,3 +2472,12 @@ int smu_v13_0_update_pcie_parameters(struct
> > smu_context *smu,
> >
> > return 0;
> >   }
> > +
> > +int smu_v13_0_enable_uclk_shadow(struct smu_context *smu,
> > +bool enablement)
> > +{
> > +   return smu_cmn_send_smc_msg_with_param(smu,
> > +  SMU_MSG_EnableUCLKShadow,
> > +  enablement,
> > +  NULL);
> > +}
> > diff --git a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
> > b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
> > index 3d188616ba24..fd3ac18653ed 100644
> > --- a/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
> > +++ b/drivers/gpu/drm/amd/pm/swsmu/smu13/smu_v13_0_0_ppt.c
> > @@ -154,6 +154,7 @@ static struct cmn2asic_msg_mapping
> smu_v13_0_0_message_map[SMU_MSG_MAX_COUNT] =
> > MSG_MAP(AllowGpo,   PPSMC_MSG_SetGpoAllow,
> 0),
> > MSG_MAP(AllowIHHostInterrupt,
>   PPSMC_MSG_AllowIHHostInterrupt,   0),
> > MSG_MAP(ReenableAcDcInterrupt,
>   PPSMC_MSG_ReenableAcDcInterrupt,   0),
> > +   MSG_MAP(EnableUCLKShadow,
>   PPSMC_MSG_EnableUCLKShadow,0),
> >   };
> >
> >   static struct cmn2asic_mapping
> smu_v13_0_0_clk_map[SMU_CLK_COUNT] =
> > { @@ -237,6 +238,7 @@ static struct cmn2asic_mapping
> 

RE: [V10 5/8] drm/amd/pm: setup the framework to support Wifi RFI mitigation feature

2023-08-27 Thread Quan, Evan
[AMD Official Use Only - General]

> -Original Message-
> From: Lazar, Lijo 
> Sent: Friday, August 25, 2023 10:09 PM
> To: Quan, Evan ; l...@kernel.org;
> johan...@sipsolutions.net; da...@davemloft.net; eduma...@google.com;
> k...@kernel.org; pab...@redhat.com; Deucher, Alexander
> ; raf...@kernel.org; Limonciello, Mario
> 
> Cc: linux-ker...@vger.kernel.org; linux-a...@vger.kernel.org; amd-
> g...@lists.freedesktop.org; dri-devel@lists.freedesktop.org; linux-
> wirel...@vger.kernel.org; net...@vger.kernel.org
> Subject: Re: [V10 5/8] drm/amd/pm: setup the framework to support Wifi
> RFI mitigation feature
>
>
>
> On 8/25/2023 2:08 PM, Evan Quan wrote:
> > With WBRF feature supported, as a driver responding to the
> > frequencies, amdgpu driver is able to do shadow pstate switching to
> > mitigate possible interference(between its (G-)DDR memory clocks and
> > local radio module frequency bands used by Wifi 6/6e/7).
> >
> > Signed-off-by: Evan Quan 
> > Reviewed-by: Mario Limonciello 
> > --
> > v1->v2:
> >- update the prompt for feature support(Lijo)
> > v8->v9:
> >- update parameter document for smu_wbrf_event_handler(Simon)
> > v9->v10:
> >   - correct the logics for wbrf range sorting(Lijo)
> > ---
> >   drivers/gpu/drm/amd/amdgpu/amdgpu.h   |   2 +
> >   drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c   |  17 ++
> >   drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c | 195
> ++
> >   drivers/gpu/drm/amd/pm/swsmu/inc/amdgpu_smu.h |  23 +++
> >   drivers/gpu/drm/amd/pm/swsmu/smu_internal.h   |   3 +
> >   5 files changed, 240 insertions(+)
> >
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> > index a3b86b86dc47..2bfc9111ab00 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h
> > @@ -247,6 +247,8 @@ extern int amdgpu_sg_display;
> >
> >   extern int amdgpu_user_partt_mode;
> >
> > +extern int amdgpu_wbrf;
> > +
> >   #define AMDGPU_VM_MAX_NUM_CTX 4096
> >   #define AMDGPU_SG_THRESHOLD   (256*1024*1024)
> >   #define AMDGPU_WAIT_IDLE_TIMEOUT_IN_MS3000
> > diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> > b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> > index 0593ef8fe0a6..1c574bd3b60d 100644
> > --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> > +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
> > @@ -195,6 +195,7 @@ int amdgpu_use_xgmi_p2p = 1;
> >   int amdgpu_vcnfw_log;
> >   int amdgpu_sg_display = -1; /* auto */
> >   int amdgpu_user_partt_mode =
> AMDGPU_AUTO_COMPUTE_PARTITION_MODE;
> > +int amdgpu_wbrf = -1;
> >
> >   static void amdgpu_drv_delayed_reset_work_handler(struct work_struct
> > *work);
> >
> > @@ -981,6 +982,22 @@ module_param_named(user_partt_mode,
> amdgpu_user_partt_mode, uint, 0444);
> >   module_param(enforce_isolation, bool, 0444);
> >   MODULE_PARM_DESC(enforce_isolation, "enforce process isolation
> > between graphics and compute . enforce_isolation = on");
> >
> > +/**
> > + * DOC: wbrf (int)
> > + * Enable Wifi RFI interference mitigation feature.
> > + * Due to electrical and mechanical constraints there may be likely
> > +interference of
> > + * relatively high-powered harmonics of the (G-)DDR memory clocks
> > +with local radio
> > + * module frequency bands used by Wifi 6/6e/7. To mitigate the
> > +possible RFI interference,
> > + * with this feature enabled, PMFW will use either “shadowed P-State”
> > +or “P-State” based
> > + * on active list of frequencies in-use (to be avoided) as part of
> > +initial setting or
> > + * P-state transition. However, there may be potential performance
> > +impact with this
> > + * feature enabled.
> > + * (0 = disabled, 1 = enabled, -1 = auto (default setting, will be
> > +enabled if supported))  */ MODULE_PARM_DESC(wbrf,
> > +   "Enable Wifi RFI interference mitigation (0 = disabled, 1 = enabled,
> > +-1 = auto(default)"); module_param_named(wbrf, amdgpu_wbrf, int,
> > +0444);
> > +
> >   /* These devices are not supported by amdgpu.
> >* They are supported by the mach64, r128, radeon drivers
> >*/
> > diff --git a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> > b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> > index ce41a8309582..bdfd234d1558 100644
> > --- a/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> > +++ b/drivers/gpu/drm/amd/pm/swsmu/amdgpu_smu.c
> > @@ -1228,6 +1228,174 @@ static int
> smu_get_thermal_temperature_range(struct smu_context *smu)
> > return ret;
> >   }
> >
> > +/**
> > + * smu_wbrf_handle_exclusion_ranges - consume the wbrf exclusion
> > +ranges
> > + *
> > + * @smu: smu_context pointer
> > + *
> > + * Retrieve the wbrf exclusion ranges and send them to PMFW for proper
> handling.
> > + * Returns 0 on success, error on failure.
> > + */
> > +static int smu_wbrf_handle_exclusion_ranges(struct smu_context *smu)
> > +{
> > +   struct wbrf_ranges_in_out wbrf_exclusion = {0};
> > +   struct exclusion_range *wifi_bands = 

[Bug 217664] Laptop doesnt wake up from suspend mode.

2023-08-27 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=217664

--- Comment #39 from Mario Limonciello (AMD) (mario.limoncie...@amd.com) ---
> 2023-08-24T10:52:38.933500+01:00 Crawler-E25 kernel: [3.671686] ahci
> :06:00.0: AHCI 0001.0301 32 slots 1 ports 6 Gbps 0x1 impl SATA mode
> 2023-08-24T10:52:38.933501+01:00 Crawler-E25 kernel: [3.671690] ahci
> :06:00.0: flags: 64bit ncq sntf ilck pm led clo only pmp fbs pio slum
> part
> 2023-08-24T10:28:47.625144+01:00 Crawler-E25 kernel: [4.672965] ata1.00:
> ATA-11: Samsung SSD 860 EVO 250GB, RVT04B6Q, max UDMA/133
> 2023-08-24T10:28:47.625144+01:00 Crawler-E25 kernel: [4.677878] ata1.00:
> Features: Trust Dev-Sleep NCQ-sndrcv

1) What do you have SATA_MOBILE_LPM_POLICY set to in your kernel?

2) Can you please try to remove your SATA disks from the system and only run
with the NVME?

-- 
You may reply to this email to add a comment.

You are receiving this mail because:
You are watching the assignee of the bug.

RE: [PATCH v1 0/3] udmabuf: Add support for page migration out of movable zone or CMA

2023-08-27 Thread Kasireddy, Vivek
Hi Jason, David,

> > > Sure, we can simply always fail when we detect ZONE_MOVABLE or
> > MIGRATE_CMA.
> > > Maybe that keeps at least some use cases working.
> >
> > That seems fairly reasonable
> AFAICS, failing udmabuf_create() if we detect one or more pages are in
> ZONE_MOVABLE or MIGRATE_CMA would not be a recoverable failure --
> as it would result in the failure of Guest GUI (or compositor).
> 
> I think it makes sense to have a generic version of
> And, since check_and_migrate_movable_pages() is GUP-specific, would
> it be ok to create a generic version of that (in mm/migrate.c) which can be
> used by udmabuf and/or other drivers in the future?
Sorry, I accidentally sent that earlier email before finishing it.
What I meant to say is that, since the same situation (inadvertently pinning
pages in a movable zone) may well arise with another driver in the future,
I think it makes sense to have a generic (non-GUP) version of
check_and_migrate_movable_pages() available in migration.h that drivers can
use to make sure they don't accidentally break memory hotunplug.

Thanks,
Vivek

> 
> Thanks,
> Vivek
> 
> >
> > Jason
> 
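
A minimal sketch of the "always fail" option discussed above, assuming the
pages were just pinned with pin_user_pages_fast() and a recent kernel where
is_zone_movable_page()/is_migrate_cma_page() are available; the helper name is
illustrative, not an existing udmabuf function:

#include <linux/mm.h>
#include <linux/mmzone.h>

/* Reject pages whose long-term pin would block memory hot-unplug or CMA. */
static bool udmabuf_page_unpinnable(struct page *page)
{
	return is_zone_movable_page(page) || is_migrate_cma_page(page);
}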



RE: [PATCH v1 0/3] udmabuf: Add support for page migration out of movable zone or CMA

2023-08-27 Thread Kasireddy, Vivek
Hi Jason, David,

> 
> > Sure, we can simply always fail when we detect ZONE_MOVABLE or
> MIGRATE_CMA.
> > Maybe that keeps at least some use cases working.
> 
> That seems fairly reasonable
AFAICS, failing udmabuf_create() if we detect one or more pages are in
ZONE_MOVABLE or MIGRATE_CMA would not be a recoverable failure --
as it would result in the failure of Guest GUI (or compositor).

I think it makes sense to have a generic version of 
And, since check_and_migrate_movable_pages() is GUP-specific, would
it be ok to create a generic version of that (in mm/migrate.c) which can be
used by udmabuf and/or other drivers in the future?

Thanks,
Vivek

> 
> Jason



[PATCH v15 22/23] drm/virtio: Support memory shrinking

2023-08-27 Thread Dmitry Osipenko
Support the generic drm-shmem memory shrinker and add a new madvise IOCTL
to the VirtIO-GPU driver. The BO cache manager of the Mesa driver will mark
BOs as "don't need" using the new IOCTL, letting the shrinker purge the
marked BOs on OOM; the shrinker will also evict unpurgeable shmem BOs from
memory if the guest has a swap file or partition.

Acked-by: Gerd Hoffmann 
Signed-off-by: Daniel Almeida 
Signed-off-by: Dmitry Osipenko 
---
 drivers/gpu/drm/virtio/virtgpu_drv.h| 13 +-
 drivers/gpu/drm/virtio/virtgpu_gem.c| 35 ++
 drivers/gpu/drm/virtio/virtgpu_ioctl.c  | 25 ++
 drivers/gpu/drm/virtio/virtgpu_kms.c|  8 
 drivers/gpu/drm/virtio/virtgpu_object.c | 61 +
 drivers/gpu/drm/virtio/virtgpu_vq.c | 40 
 include/uapi/drm/virtgpu_drm.h  | 14 ++
 7 files changed, 195 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h 
b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 8c82530eae82..a34da2036221 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -278,7 +278,7 @@ struct virtio_gpu_fpriv {
 };
 
 /* virtgpu_ioctl.c */
-#define DRM_VIRTIO_NUM_IOCTLS 12
+#define DRM_VIRTIO_NUM_IOCTLS 13
 extern struct drm_ioctl_desc virtio_gpu_ioctls[DRM_VIRTIO_NUM_IOCTLS];
 void virtio_gpu_create_context(struct drm_device *dev, struct drm_file *file);
 
@@ -316,6 +316,8 @@ void virtio_gpu_array_put_free_delayed(struct 
virtio_gpu_device *vgdev,
 void virtio_gpu_array_put_free_work(struct work_struct *work);
 int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev,
 struct virtio_gpu_object_array *objs);
+int virtio_gpu_gem_host_mem_release(struct virtio_gpu_object *bo);
+int virtio_gpu_gem_madvise(struct virtio_gpu_object *obj, int madv);
 int virtio_gpu_gem_pin(struct virtio_gpu_object *bo);
 void virtio_gpu_gem_unpin(struct virtio_gpu_object *bo);
 
@@ -329,6 +331,8 @@ void virtio_gpu_cmd_create_resource(struct 
virtio_gpu_device *vgdev,
struct virtio_gpu_fence *fence);
 void virtio_gpu_cmd_unref_resource(struct virtio_gpu_device *vgdev,
   struct virtio_gpu_object *bo);
+int virtio_gpu_cmd_release_resource(struct virtio_gpu_device *vgdev,
+   struct virtio_gpu_object *bo);
 void virtio_gpu_cmd_transfer_to_host_2d(struct virtio_gpu_device *vgdev,
uint64_t offset,
uint32_t width, uint32_t height,
@@ -349,6 +353,9 @@ void virtio_gpu_object_attach(struct virtio_gpu_device 
*vgdev,
  struct virtio_gpu_object *obj,
  struct virtio_gpu_mem_entry *ents,
  unsigned int nents);
+void virtio_gpu_object_detach(struct virtio_gpu_device *vgdev,
+ struct virtio_gpu_object *obj,
+ struct virtio_gpu_fence *fence);
 int virtio_gpu_attach_status_page(struct virtio_gpu_device *vgdev);
 int virtio_gpu_detach_status_page(struct virtio_gpu_device *vgdev);
 void virtio_gpu_cursor_ping(struct virtio_gpu_device *vgdev,
@@ -499,4 +506,8 @@ void virtio_gpu_vram_unmap_dma_buf(struct device *dev,
 int virtio_gpu_execbuffer_ioctl(struct drm_device *dev, void *data,
struct drm_file *file);
 
+/* virtgpu_gem_shrinker.c */
+int virtio_gpu_gem_shrinker_init(struct virtio_gpu_device *vgdev);
+void virtio_gpu_gem_shrinker_fini(struct virtio_gpu_device *vgdev);
+
 #endif
diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c 
b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 97e67064c97e..748f7bbb0e6d 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -147,10 +147,20 @@ void virtio_gpu_gem_object_close(struct drm_gem_object 
*obj,
struct virtio_gpu_device *vgdev = obj->dev->dev_private;
struct virtio_gpu_fpriv *vfpriv = file->driver_priv;
struct virtio_gpu_object_array *objs;
+   struct virtio_gpu_object *bo;
 
if (!vgdev->has_virgl_3d)
return;
 
+   bo = gem_to_virtio_gpu_obj(obj);
+
+   /*
+* Purged BO was already detached and released, the resource ID
+* is invalid by now.
+*/
+   if (!virtio_gpu_gem_madvise(bo, VIRTGPU_MADV_WILLNEED))
+   return;
+
objs = virtio_gpu_array_alloc(1);
if (!objs)
return;
@@ -315,6 +325,31 @@ int virtio_gpu_array_prepare(struct virtio_gpu_device 
*vgdev,
return ret;
 }
 
+int virtio_gpu_gem_madvise(struct virtio_gpu_object *bo, int madv)
+{
+   if (virtio_gpu_is_shmem(bo))
+   return drm_gem_shmem_object_madvise(&bo->base.base, madv);
+
+   return 1;
+}
+
+int virtio_gpu_gem_host_mem_release(struct virtio_gpu_object *bo)
+{
+   struct virtio_gpu_device *vgdev = bo->base.base.dev->dev_private;
+   int err;
+
+   

[PATCH v15 16/23] drm/shmem-helper: Use kref for vmap_use_count

2023-08-27 Thread Dmitry Osipenko
Use the kref helper for vmap_use_count to make refcounting consistent with
pages_use_count and pages_pin_count, which already use kref. This will allow
optimizing unlocked vmappings by skipping reservation locking when the
refcount is above 1.

Suggested-by: Boris Brezillon 
Signed-off-by: Dmitry Osipenko 
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 37 ++
 include/drm/drm_gem_shmem_helper.h |  2 +-
 2 files changed, 21 insertions(+), 18 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c 
b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 17a0177acb5d..d96fee3d6166 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -144,7 +144,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
} else if (!shmem->imported_sgt) {
dma_resv_lock(shmem->base.resv, NULL);
 
-   drm_WARN_ON(obj->dev, shmem->vmap_use_count);
+   drm_WARN_ON(obj->dev, kref_read(&shmem->vmap_use_count));
 
if (shmem->sgt) {
dma_unmap_sgtable(obj->dev->dev, shmem->sgt,
@@ -359,23 +359,25 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object 
*shmem,
 
dma_resv_assert_held(shmem->base.resv);
 
-   if (shmem->vmap_use_count++ > 0) {
+   if (kref_get_unless_zero(&shmem->vmap_use_count)) {
iosys_map_set_vaddr(map, shmem->vaddr);
return 0;
}
 
ret = drm_gem_shmem_pin_locked(shmem);
if (ret)
-   goto err_zero_use;
+   return ret;
 
if (shmem->map_wc)
prot = pgprot_writecombine(prot);
shmem->vaddr = vmap(shmem->pages, obj->size >> PAGE_SHIFT,
VM_MAP, prot);
-   if (!shmem->vaddr)
+   if (!shmem->vaddr) {
ret = -ENOMEM;
-   else
+   } else {
iosys_map_set_vaddr(map, shmem->vaddr);
+   kref_init(&shmem->vmap_use_count);
+   }
}
 
if (ret) {
@@ -388,13 +390,22 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object 
*shmem,
 err_put_pages:
if (!obj->import_attach)
drm_gem_shmem_unpin_locked(shmem);
-err_zero_use:
-   shmem->vmap_use_count = 0;
 
return ret;
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_vmap_locked);
 
+static void drm_gem_shmem_kref_vunmap(struct kref *kref)
+{
+   struct drm_gem_shmem_object *shmem;
+
+   shmem = container_of(kref, struct drm_gem_shmem_object,
+vmap_use_count);
+
+   vunmap(shmem->vaddr);
+   drm_gem_shmem_unpin_locked(shmem);
+}
+
 /*
  * drm_gem_shmem_vunmap_locked - Unmap a virtual mapping for a shmem GEM object
  * @shmem: shmem GEM object
@@ -416,15 +427,7 @@ void drm_gem_shmem_vunmap_locked(struct 
drm_gem_shmem_object *shmem,
dma_buf_vunmap(obj->import_attach->dmabuf, map);
} else {
dma_resv_assert_held(shmem->base.resv);
-
-   if (drm_WARN_ON_ONCE(obj->dev, !shmem->vmap_use_count))
-   return;
-
-   if (--shmem->vmap_use_count > 0)
-   return;
-
-   vunmap(shmem->vaddr);
-   drm_gem_shmem_unpin_locked(shmem);
+   kref_put(&shmem->vmap_use_count, drm_gem_shmem_kref_vunmap);
}
 
shmem->vaddr = NULL;
@@ -663,7 +666,7 @@ void drm_gem_shmem_print_info(const struct 
drm_gem_shmem_object *shmem,
return;
 
drm_printf_indent(p, indent, "pages_use_count=%u\n", 
kref_read(>pages_use_count));
-   drm_printf_indent(p, indent, "vmap_use_count=%u\n", 
shmem->vmap_use_count);
+   drm_printf_indent(p, indent, "vmap_use_count=%u\n", 
kref_read(>vmap_use_count));
drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_print_info);
diff --git a/include/drm/drm_gem_shmem_helper.h 
b/include/drm/drm_gem_shmem_helper.h
index 400ecd63f45f..0e0ccd380f66 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -81,7 +81,7 @@ struct drm_gem_shmem_object {
 * Reference count on the virtual address.
 * The address are un-mapped when the count reaches zero.
 */
-   unsigned int vmap_use_count;
+   struct kref vmap_use_count;
 
/**
 * @got_sgt:
-- 
2.41.0
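
The locking win described in the commit message comes from the usual kref
fast-path idiom: the reservation lock is only needed for the 0 -> 1 transition,
and every later user can take a reference without it. A generic sketch of that
idiom with placeholder names, not the driver code itself:

static int obj_get_mapping(struct my_obj *obj)
{
	int ret = 0;

	/* Fast path: already set up, take a reference without locking. */
	if (kref_get_unless_zero(&obj->use_count))
		return 0;

	dma_resv_lock(obj->resv, NULL);
	/* Re-check under the lock in case another thread won the race. */
	if (!kref_get_unless_zero(&obj->use_count)) {
		ret = do_setup(obj);			/* placeholder */
		if (!ret)
			kref_init(&obj->use_count);	/* count becomes 1 */
	}
	dma_resv_unlock(obj->resv);

	return ret;
}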



[PATCH v15 23/23] drm/panfrost: Switch to generic memory shrinker

2023-08-27 Thread Dmitry Osipenko
Replace Panfrost's custom memory shrinker with a common drm-shmem
memory shrinker.

Tested-by: Steven Price  # Firefly-RK3288
Reviewed-by: Steven Price 
Signed-off-by: Dmitry Osipenko 
---
 drivers/gpu/drm/panfrost/Makefile |   1 -
 drivers/gpu/drm/panfrost/panfrost_device.h|   4 -
 drivers/gpu/drm/panfrost/panfrost_drv.c   |  27 ++--
 drivers/gpu/drm/panfrost/panfrost_gem.c   |  30 ++--
 drivers/gpu/drm/panfrost/panfrost_gem.h   |   9 --
 .../gpu/drm/panfrost/panfrost_gem_shrinker.c  | 129 --
 drivers/gpu/drm/panfrost/panfrost_job.c   |  18 ++-
 include/drm/drm_gem_shmem_helper.h|   7 -
 8 files changed, 47 insertions(+), 178 deletions(-)
 delete mode 100644 drivers/gpu/drm/panfrost/panfrost_gem_shrinker.c

diff --git a/drivers/gpu/drm/panfrost/Makefile 
b/drivers/gpu/drm/panfrost/Makefile
index 7da2b3f02ed9..11622e22cf15 100644
--- a/drivers/gpu/drm/panfrost/Makefile
+++ b/drivers/gpu/drm/panfrost/Makefile
@@ -5,7 +5,6 @@ panfrost-y := \
panfrost_device.o \
panfrost_devfreq.o \
panfrost_gem.o \
-   panfrost_gem_shrinker.o \
panfrost_gpu.o \
panfrost_job.o \
panfrost_mmu.o \
diff --git a/drivers/gpu/drm/panfrost/panfrost_device.h 
b/drivers/gpu/drm/panfrost/panfrost_device.h
index b0126b9fbadc..dcc2571c092b 100644
--- a/drivers/gpu/drm/panfrost/panfrost_device.h
+++ b/drivers/gpu/drm/panfrost/panfrost_device.h
@@ -116,10 +116,6 @@ struct panfrost_device {
atomic_t pending;
} reset;
 
-   struct mutex shrinker_lock;
-   struct list_head shrinker_list;
-   struct shrinker shrinker;
-
struct panfrost_devfreq pfdevfreq;
 };
 
diff --git a/drivers/gpu/drm/panfrost/panfrost_drv.c 
b/drivers/gpu/drm/panfrost/panfrost_drv.c
index 175443eacead..8cf338c2a03b 100644
--- a/drivers/gpu/drm/panfrost/panfrost_drv.c
+++ b/drivers/gpu/drm/panfrost/panfrost_drv.c
@@ -170,7 +170,6 @@ panfrost_lookup_bos(struct drm_device *dev,
break;
}
 
-   atomic_inc(&bo->gpu_usecount);
job->mappings[i] = mapping;
}
 
@@ -395,7 +394,6 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, 
void *data,
 {
struct panfrost_file_priv *priv = file_priv->driver_priv;
struct drm_panfrost_madvise *args = data;
-   struct panfrost_device *pfdev = dev->dev_private;
struct drm_gem_object *gem_obj;
struct panfrost_gem_object *bo;
int ret = 0;
@@ -408,11 +406,15 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, 
void *data,
 
bo = to_panfrost_bo(gem_obj);
 
+   if (bo->is_heap) {
+   args->retained = 1;
+   goto out_put_object;
+   }
+
ret = dma_resv_lock_interruptible(bo->base.base.resv, NULL);
if (ret)
goto out_put_object;
 
-   mutex_lock(&pfdev->shrinker_lock);
	mutex_lock(&bo->mappings.lock);
if (args->madv == PANFROST_MADV_DONTNEED) {
struct panfrost_gem_mapping *first;
@@ -438,17 +440,8 @@ static int panfrost_ioctl_madvise(struct drm_device *dev, 
void *data,
 
	args->retained = drm_gem_shmem_madvise_locked(&bo->base, args->madv);
 
-	if (args->retained) {
-		if (args->madv == PANFROST_MADV_DONTNEED)
-			list_move_tail(&bo->base.madv_list,
-				       &pfdev->shrinker_list);
-		else if (args->madv == PANFROST_MADV_WILLNEED)
-			list_del_init(&bo->base.madv_list);
-	}
-
 out_unlock_mappings:
	mutex_unlock(&bo->mappings.lock);
-	mutex_unlock(&pfdev->shrinker_lock);
dma_resv_unlock(bo->base.base.resv);
 out_put_object:
drm_gem_object_put(gem_obj);
@@ -577,9 +570,6 @@ static int panfrost_probe(struct platform_device *pdev)
ddev->dev_private = pfdev;
pfdev->ddev = ddev;
 
-	mutex_init(&pfdev->shrinker_lock);
-	INIT_LIST_HEAD(&pfdev->shrinker_list);
-
err = panfrost_device_init(pfdev);
if (err) {
if (err != -EPROBE_DEFER)
@@ -601,10 +591,14 @@ static int panfrost_probe(struct platform_device *pdev)
if (err < 0)
goto err_out1;
 
-   panfrost_gem_shrinker_init(ddev);
+   err = drmm_gem_shmem_init(ddev);
+   if (err < 0)
+   goto err_out2;
 
return 0;
 
+err_out2:
+   drm_dev_unregister(ddev);
 err_out1:
pm_runtime_disable(pfdev->dev);
panfrost_device_fini(pfdev);
@@ -620,7 +614,6 @@ static void panfrost_remove(struct platform_device *pdev)
struct drm_device *ddev = pfdev->ddev;
 
drm_dev_unregister(ddev);
-   panfrost_gem_shrinker_cleanup(ddev);
 
pm_runtime_get_sync(pfdev->dev);
pm_runtime_disable(pfdev->dev);
diff --git a/drivers/gpu/drm/panfrost/panfrost_gem.c 
b/drivers/gpu/drm/panfrost/panfrost_gem.c
index 59c8c73c6a59..00165fca7f3d 100644
--- a/drivers/gpu/drm/panfrost/panfrost_gem.c
+++ 

[PATCH v15 19/23] drm/shmem-helper: Export drm_gem_shmem_get_pages_sgt_locked()

2023-08-27 Thread Dmitry Osipenko
Export drm_gem_shmem_get_pages_sgt_locked() that will be used by virtio-gpu
shrinker during GEM swap-in operation done under the held reservation lock.

Signed-off-by: Dmitry Osipenko 
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 3 ++-
 include/drm/drm_gem_shmem_helper.h | 1 +
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c 
b/drivers/gpu/drm/drm_gem_shmem_helper.c
index f0f708e0ff00..62958af90383 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -888,7 +888,7 @@ struct sg_table *drm_gem_shmem_get_sg_table(struct 
drm_gem_shmem_object *shmem)
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_get_sg_table);
 
-static struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct 
drm_gem_shmem_object *shmem)
+struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct 
drm_gem_shmem_object *shmem)
 {
	struct drm_gem_object *obj = &shmem->base;
int ret;
@@ -927,6 +927,7 @@ static struct sg_table 
*drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
drm_gem_shmem_put_pages_locked(shmem);
return ERR_PTR(ret);
 }
+EXPORT_SYMBOL_GPL(drm_gem_shmem_get_pages_sgt_locked);
 
 /**
  * drm_gem_shmem_get_pages_sgt - Pin pages, dma map them, and return a
diff --git a/include/drm/drm_gem_shmem_helper.h 
b/include/drm/drm_gem_shmem_helper.h
index 112dbe5208c0..e10ba533f74d 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -161,6 +161,7 @@ void drm_gem_shmem_purge_locked(struct drm_gem_shmem_object 
*shmem);
 
 struct sg_table *drm_gem_shmem_get_sg_table(struct drm_gem_shmem_object 
*shmem);
 struct sg_table *drm_gem_shmem_get_pages_sgt(struct drm_gem_shmem_object 
*shmem);
+struct sg_table *drm_gem_shmem_get_pages_sgt_locked(struct 
drm_gem_shmem_object *shmem);
 
 void drm_gem_shmem_print_info(const struct drm_gem_shmem_object *shmem,
  struct drm_printer *p, unsigned int indent);
-- 
2.41.0



[PATCH v15 20/23] drm/virtio: Pin display framebuffer BO

2023-08-27 Thread Dmitry Osipenko
Prepare for the addition of memory shrinker support by pinning display
framebuffer BO pages in memory while they are in use by the display on the
host. The shrinker is free to relocate framebuffer BO pages if it doesn't
know the pages are in use, so pin the pages to prevent the shrinker from
moving them.

Acked-by: Gerd Hoffmann 
Signed-off-by: Dmitry Osipenko 
---
 drivers/gpu/drm/virtio/virtgpu_drv.h   |  2 ++
 drivers/gpu/drm/virtio/virtgpu_gem.c   | 19 +++
 drivers/gpu/drm/virtio/virtgpu_plane.c | 17 +++--
 3 files changed, 36 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h 
b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 4126c384286b..5a4b74b7b318 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -313,6 +313,8 @@ void virtio_gpu_array_put_free(struct 
virtio_gpu_object_array *objs);
 void virtio_gpu_array_put_free_delayed(struct virtio_gpu_device *vgdev,
   struct virtio_gpu_object_array *objs);
 void virtio_gpu_array_put_free_work(struct work_struct *work);
+int virtio_gpu_gem_pin(struct virtio_gpu_object *bo);
+void virtio_gpu_gem_unpin(struct virtio_gpu_object *bo);
 
 /* virtgpu_vq.c */
 int virtio_gpu_alloc_vbufs(struct virtio_gpu_device *vgdev);
diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c 
b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 7db48d17ee3a..625c05d625bf 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -294,3 +294,22 @@ void virtio_gpu_array_put_free_work(struct work_struct 
*work)
}
	spin_unlock(&vgdev->obj_free_lock);
 }
+
+int virtio_gpu_gem_pin(struct virtio_gpu_object *bo)
+{
+   int err;
+
+   if (virtio_gpu_is_shmem(bo)) {
+   err = drm_gem_shmem_pin(&bo->base);
+   if (err)
+   return err;
+   }
+
+   return 0;
+}
+
+void virtio_gpu_gem_unpin(struct virtio_gpu_object *bo)
+{
+   if (virtio_gpu_is_shmem(bo))
+   drm_gem_shmem_unpin(&bo->base);
+}
diff --git a/drivers/gpu/drm/virtio/virtgpu_plane.c 
b/drivers/gpu/drm/virtio/virtgpu_plane.c
index a2e045f3a000..def57b01a826 100644
--- a/drivers/gpu/drm/virtio/virtgpu_plane.c
+++ b/drivers/gpu/drm/virtio/virtgpu_plane.c
@@ -238,20 +238,28 @@ static int virtio_gpu_plane_prepare_fb(struct drm_plane 
*plane,
struct virtio_gpu_device *vgdev = dev->dev_private;
struct virtio_gpu_framebuffer *vgfb;
struct virtio_gpu_object *bo;
+   int err;
 
if (!new_state->fb)
return 0;
 
vgfb = to_virtio_gpu_framebuffer(new_state->fb);
bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
-   if (!bo || (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob))
+
+   err = virtio_gpu_gem_pin(bo);
+   if (err)
+   return err;
+
+   if (plane->type == DRM_PLANE_TYPE_PRIMARY && !bo->guest_blob)
return 0;
 
if (bo->dumb && (plane->state->fb != new_state->fb)) {
vgfb->fence = virtio_gpu_fence_alloc(vgdev, 
vgdev->fence_drv.context,
 0);
-   if (!vgfb->fence)
+   if (!vgfb->fence) {
+   virtio_gpu_gem_unpin(bo);
return -ENOMEM;
+   }
}
 
return 0;
@@ -261,15 +269,20 @@ static void virtio_gpu_plane_cleanup_fb(struct drm_plane 
*plane,
struct drm_plane_state *state)
 {
struct virtio_gpu_framebuffer *vgfb;
+   struct virtio_gpu_object *bo;
 
if (!state->fb)
return;
 
vgfb = to_virtio_gpu_framebuffer(state->fb);
+   bo = gem_to_virtio_gpu_obj(vgfb->base.obj[0]);
+
if (vgfb->fence) {
	dma_fence_put(&vgfb->fence->f);
vgfb->fence = NULL;
}
+
+   virtio_gpu_gem_unpin(bo);
 }
 
 static void virtio_gpu_cursor_plane_update(struct drm_plane *plane,
-- 
2.41.0



[PATCH v15 21/23] drm/virtio: Attach shmem BOs dynamically

2023-08-27 Thread Dmitry Osipenko
Prepare for the addition of memory shrinker support by attaching shmem pages
to the host dynamically on first use. The attachment vq command wasn't fenced
and there was no vq kick made in the BO creation code path, so the attachment
was already happening dynamically, but implicitly. Making the attachment
explicitly dynamic will allow more code to be simplified and reused once the
shrinker is added. virtio_gpu_object_shmem_init() now runs under the held
reservation lock, which will be important for the shrinker.

Acked-by: Gerd Hoffmann 
Signed-off-by: Dmitry Osipenko 
---
 drivers/gpu/drm/virtio/virtgpu_drv.h|  7 +++
 drivers/gpu/drm/virtio/virtgpu_gem.c| 26 
 drivers/gpu/drm/virtio/virtgpu_ioctl.c  | 32 ++
 drivers/gpu/drm/virtio/virtgpu_object.c | 80 -
 drivers/gpu/drm/virtio/virtgpu_submit.c | 15 -
 5 files changed, 132 insertions(+), 28 deletions(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_drv.h 
b/drivers/gpu/drm/virtio/virtgpu_drv.h
index 5a4b74b7b318..8c82530eae82 100644
--- a/drivers/gpu/drm/virtio/virtgpu_drv.h
+++ b/drivers/gpu/drm/virtio/virtgpu_drv.h
@@ -89,6 +89,7 @@ struct virtio_gpu_object {
uint32_t hw_res_handle;
bool dumb;
bool created;
+   bool detached;
bool host3d_blob, guest_blob;
uint32_t blob_mem, blob_flags;
 
@@ -313,6 +314,8 @@ void virtio_gpu_array_put_free(struct 
virtio_gpu_object_array *objs);
 void virtio_gpu_array_put_free_delayed(struct virtio_gpu_device *vgdev,
   struct virtio_gpu_object_array *objs);
 void virtio_gpu_array_put_free_work(struct work_struct *work);
+int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev,
+struct virtio_gpu_object_array *objs);
 int virtio_gpu_gem_pin(struct virtio_gpu_object *bo);
 void virtio_gpu_gem_unpin(struct virtio_gpu_object *bo);
 
@@ -458,6 +461,10 @@ int virtio_gpu_object_create(struct virtio_gpu_device 
*vgdev,
 
 bool virtio_gpu_is_shmem(struct virtio_gpu_object *bo);
 
+int virtio_gpu_reattach_shmem_object_locked(struct virtio_gpu_object *bo);
+
+int virtio_gpu_reattach_shmem_object(struct virtio_gpu_object *bo);
+
 int virtio_gpu_resource_id_get(struct virtio_gpu_device *vgdev,
   uint32_t *resid);
 /* virtgpu_prime.c */
diff --git a/drivers/gpu/drm/virtio/virtgpu_gem.c 
b/drivers/gpu/drm/virtio/virtgpu_gem.c
index 625c05d625bf..97e67064c97e 100644
--- a/drivers/gpu/drm/virtio/virtgpu_gem.c
+++ b/drivers/gpu/drm/virtio/virtgpu_gem.c
@@ -295,6 +295,26 @@ void virtio_gpu_array_put_free_work(struct work_struct 
*work)
	spin_unlock(&vgdev->obj_free_lock);
 }
 
+int virtio_gpu_array_prepare(struct virtio_gpu_device *vgdev,
+struct virtio_gpu_object_array *objs)
+{
+   struct virtio_gpu_object *bo;
+   int ret = 0;
+   u32 i;
+
+   for (i = 0; i < objs->nents; i++) {
+   bo = gem_to_virtio_gpu_obj(objs->objs[i]);
+
+   if (virtio_gpu_is_shmem(bo) && bo->detached) {
+   ret = virtio_gpu_reattach_shmem_object_locked(bo);
+   if (ret)
+   break;
+   }
+   }
+
+   return ret;
+}
+
 int virtio_gpu_gem_pin(struct virtio_gpu_object *bo)
 {
int err;
@@ -303,6 +323,12 @@ int virtio_gpu_gem_pin(struct virtio_gpu_object *bo)
		err = drm_gem_shmem_pin(&bo->base);
if (err)
return err;
+
+   err = virtio_gpu_reattach_shmem_object(bo);
+   if (err) {
+   drm_gem_shmem_unpin(&bo->base);
+   return err;
+   }
}
 
return 0;
diff --git a/drivers/gpu/drm/virtio/virtgpu_ioctl.c 
b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
index b24b11f25197..070c29cea26a 100644
--- a/drivers/gpu/drm/virtio/virtgpu_ioctl.c
+++ b/drivers/gpu/drm/virtio/virtgpu_ioctl.c
@@ -246,6 +246,10 @@ static int virtio_gpu_transfer_from_host_ioctl(struct 
drm_device *dev,
if (ret != 0)
goto err_put_free;
 
+   ret = virtio_gpu_array_prepare(vgdev, objs);
+   if (ret)
+   goto err_unlock;
+
fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0);
if (!fence) {
ret = -ENOMEM;
@@ -288,11 +292,25 @@ static int virtio_gpu_transfer_to_host_ioctl(struct 
drm_device *dev, void *data,
goto err_put_free;
}
 
+   ret = virtio_gpu_array_lock_resv(objs);
+   if (ret != 0)
+   goto err_put_free;
+
+   ret = virtio_gpu_array_prepare(vgdev, objs);
+   if (ret)
+   goto err_unlock;
+
+   fence = virtio_gpu_fence_alloc(vgdev, vgdev->fence_drv.context, 0);
+   if (!fence) {
+   ret = -ENOMEM;
+   goto err_unlock;
+   }
+
if (!vgdev->has_virgl_3d) {
virtio_gpu_cmd_transfer_to_host_2d

[PATCH v15 12/23] drm/shmem-helper: Add and use pages_pin_count

2023-08-27 Thread Dmitry Osipenko
Add a separate pages_pin_count for tracking whether drm-shmem pages are
movable or not. With the addition of memory shrinker support to drm-shmem,
pages_use_count will no longer determine whether pages are hard-pinned
in memory, but whether pages exist and are soft-pinned (and could be swapped
out). A pages_pin_count > 0 will hard-pin pages in memory.

Suggested-by: Boris Brezillon 
Signed-off-by: Dmitry Osipenko 
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 22 +-
 include/drm/drm_gem_shmem_helper.h | 10 ++
 2 files changed, 27 insertions(+), 5 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c 
b/drivers/gpu/drm/drm_gem_shmem_helper.c
index d545d3d227d7..1a7e5c332fd8 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -234,14 +234,22 @@ static int drm_gem_shmem_pin_locked(struct 
drm_gem_shmem_object *shmem)
 
dma_resv_assert_held(shmem->base.resv);
 
+	if (kref_get_unless_zero(&shmem->pages_pin_count))
+		return 0;
+
	ret = drm_gem_shmem_get_pages_locked(shmem);
+	if (!ret)
+		kref_init(&shmem->pages_pin_count);
 
return ret;
 }
 
-static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
+static void drm_gem_shmem_kref_unpin_pages(struct kref *kref)
 {
-   dma_resv_assert_held(shmem->base.resv);
+   struct drm_gem_shmem_object *shmem;
+
+   shmem = container_of(kref, struct drm_gem_shmem_object,
+pages_pin_count);
 
drm_gem_shmem_put_pages_locked(shmem);
 }
@@ -263,6 +271,9 @@ int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
 
drm_WARN_ON(obj->dev, obj->import_attach);
 
+   if (kref_get_unless_zero(&shmem->pages_pin_count))
+   return 0;
+
ret = dma_resv_lock_interruptible(shmem->base.resv, NULL);
if (ret)
return ret;
@@ -286,9 +297,10 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object 
*shmem)
 
drm_WARN_ON(obj->dev, obj->import_attach);
 
-   dma_resv_lock(shmem->base.resv, NULL);
-   drm_gem_shmem_unpin_locked(shmem);
-   dma_resv_unlock(shmem->base.resv);
+   if (kref_put_dma_resv(&shmem->pages_pin_count,
+ drm_gem_shmem_kref_unpin_pages,
+ obj->resv, NULL))
+   dma_resv_unlock(obj->resv);
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin);
 
diff --git a/include/drm/drm_gem_shmem_helper.h 
b/include/drm/drm_gem_shmem_helper.h
index ec2d8b24e3cf..afb7cd671e2a 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -39,6 +39,16 @@ struct drm_gem_shmem_object {
 */
unsigned int pages_use_count;
 
+   /**
+* @pages_pin_count:
+*
+* Reference count on the pinned pages table.
+* The pages allowed to be evicted and purged by memory
+* shrinker only when the count is zero, otherwise pages
+* are hard-pinned in memory.
+*/
+   struct kref pages_pin_count;
+
/**
 * @madv: State for madvise
 *
-- 
2.41.0



[PATCH v15 07/23] drm/shmem-helper: Make all exported symbols GPL

2023-08-27 Thread Dmitry Osipenko
Make all drm-shmem exported symbols GPL to make them consistent with
the rest of drm-shmem symbols.

Signed-off-by: Dmitry Osipenko 
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 16 
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c 
b/drivers/gpu/drm/drm_gem_shmem_helper.c
index db20b9123891..575704f38808 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -226,7 +226,7 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object 
*shmem)
  shmem->pages_mark_accessed_on_put);
shmem->pages = NULL;
 }
-EXPORT_SYMBOL(drm_gem_shmem_put_pages);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages);
 
 static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
 {
@@ -271,7 +271,7 @@ int drm_gem_shmem_pin(struct drm_gem_shmem_object *shmem)
 
return ret;
 }
-EXPORT_SYMBOL(drm_gem_shmem_pin);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_pin);
 
 /**
  * drm_gem_shmem_unpin - Unpin backing pages for a shmem GEM object
@@ -290,7 +290,7 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
drm_gem_shmem_unpin_locked(shmem);
dma_resv_unlock(shmem->base.resv);
 }
-EXPORT_SYMBOL(drm_gem_shmem_unpin);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin);
 
 /*
  * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
@@ -360,7 +360,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
 
return ret;
 }
-EXPORT_SYMBOL(drm_gem_shmem_vmap);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_vmap);
 
 /*
  * drm_gem_shmem_vunmap - Unmap a virtual mapping for a shmem GEM object
@@ -396,7 +396,7 @@ void drm_gem_shmem_vunmap(struct drm_gem_shmem_object 
*shmem,
 
shmem->vaddr = NULL;
 }
-EXPORT_SYMBOL(drm_gem_shmem_vunmap);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_vunmap);
 
 static int
 drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
@@ -435,7 +435,7 @@ int drm_gem_shmem_madvise(struct drm_gem_shmem_object 
*shmem, int madv)
 
return (madv >= 0);
 }
-EXPORT_SYMBOL(drm_gem_shmem_madvise);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_madvise);
 
 void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
 {
@@ -467,7 +467,7 @@ void drm_gem_shmem_purge(struct drm_gem_shmem_object *shmem)
 
invalidate_mapping_pages(file_inode(obj->filp)->i_mapping, 0, 
(loff_t)-1);
 }
-EXPORT_SYMBOL(drm_gem_shmem_purge);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_purge);
 
 /**
  * drm_gem_shmem_dumb_create - Create a dumb shmem buffer object
@@ -636,7 +636,7 @@ void drm_gem_shmem_print_info(const struct 
drm_gem_shmem_object *shmem,
drm_printf_indent(p, indent, "vmap_use_count=%u\n", 
shmem->vmap_use_count);
drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
 }
-EXPORT_SYMBOL(drm_gem_shmem_print_info);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_print_info);
 
 /**
  * drm_gem_shmem_get_sg_table - Provide a scatter/gather table of pinned
-- 
2.41.0



[PATCH v15 15/23] drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin

2023-08-27 Thread Dmitry Osipenko
Vmapped pages must be pinned in memory, and previously get/put_pages()
were implicitly hard-pinning/unpinning the pages. This will no longer be
the case with the addition of the memory shrinker, because pages_use_count > 0
will no longer determine whether pages are hard-pinned (they will only be
soft-pinned), while the new pages_pin_count will do the hard-pinning. Switch
vmap/vunmap() to the pin/unpin() functions in preparation for adding memory
shrinker support to drm-shmem.

Signed-off-by: Dmitry Osipenko 
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 13 ++---
 include/drm/drm_gem_shmem_helper.h |  2 +-
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c 
b/drivers/gpu/drm/drm_gem_shmem_helper.c
index f386289c24fc..17a0177acb5d 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -274,6 +274,13 @@ static void drm_gem_shmem_kref_unpin_pages(struct kref 
*kref)
drm_gem_shmem_put_pages_locked(shmem);
 }
 
+static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
+{
+   dma_resv_assert_held(shmem->base.resv);
+
+   kref_put(&shmem->pages_pin_count, drm_gem_shmem_kref_unpin_pages);
+}
+
 /**
  * drm_gem_shmem_pin - Pin backing pages for a shmem GEM object
  * @shmem: shmem GEM object
@@ -357,7 +364,7 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object 
*shmem,
return 0;
}
 
-   ret = drm_gem_shmem_get_pages_locked(shmem);
+   ret = drm_gem_shmem_pin_locked(shmem);
if (ret)
goto err_zero_use;
 
@@ -380,7 +387,7 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object 
*shmem,
 
 err_put_pages:
if (!obj->import_attach)
-   drm_gem_shmem_put_pages_locked(shmem);
+   drm_gem_shmem_unpin_locked(shmem);
 err_zero_use:
shmem->vmap_use_count = 0;
 
@@ -417,7 +424,7 @@ void drm_gem_shmem_vunmap_locked(struct 
drm_gem_shmem_object *shmem,
return;
 
vunmap(shmem->vaddr);
-   drm_gem_shmem_put_pages_locked(shmem);
+   drm_gem_shmem_unpin_locked(shmem);
}
 
shmem->vaddr = NULL;
diff --git a/include/drm/drm_gem_shmem_helper.h 
b/include/drm/drm_gem_shmem_helper.h
index a5a3c193cc8f..400ecd63f45f 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -137,7 +137,7 @@ int drm_gem_shmem_madvise_locked(struct 
drm_gem_shmem_object *shmem, int madv);
 static inline bool drm_gem_shmem_is_purgeable(struct drm_gem_shmem_object 
*shmem)
 {
return (shmem->madv > 0) &&
-   !shmem->vmap_use_count && shmem->sgt &&
+   !kref_read(&shmem->pages_pin_count) && shmem->sgt &&
!shmem->base.dma_buf && !shmem->base.import_attach;
 }
 
-- 
2.41.0



[PATCH v15 13/23] drm/shmem-helper: Use kref for pages_use_count

2023-08-27 Thread Dmitry Osipenko
Use the atomic kref helper for pages_use_count to optimize the pin/unpin
functions by skipping reservation locking while the GEM's pin refcount is
greater than 1.
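
A minimal sketch of the fast path this enables (illustrative only; the exact
lockless helpers land in later patches of this series):

    static int example_pin(struct drm_gem_shmem_object *shmem)
    {
            int ret;

            /* fast path: refcount already non-zero, no reservation lock needed */
            if (kref_get_unless_zero(&shmem->pages_use_count))
                    return 0;

            /* slow path: the 0 -> 1 transition allocates pages under the lock */
            dma_resv_lock(shmem->base.resv, NULL);
            ret = drm_gem_shmem_pin_locked(shmem);
            dma_resv_unlock(shmem->base.resv);

            return ret;
    }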

Suggested-by: Boris Brezillon 
Signed-off-by: Dmitry Osipenko 
---
 drivers/gpu/drm/drm_gem_shmem_helper.c  | 48 ++---
 drivers/gpu/drm/lima/lima_gem.c |  2 +-
 drivers/gpu/drm/panfrost/panfrost_mmu.c |  2 +-
 include/drm/drm_gem_shmem_helper.h  |  2 +-
 4 files changed, 30 insertions(+), 24 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c 
b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 1a7e5c332fd8..5a2e37b3e51d 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -155,7 +155,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
if (shmem->got_sgt)
drm_gem_shmem_put_pages_locked(shmem);
 
-   drm_WARN_ON(obj->dev, shmem->pages_use_count);
+   drm_WARN_ON(obj->dev, kref_read(&shmem->pages_use_count));
 
dma_resv_unlock(shmem->base.resv);
}
@@ -172,14 +172,13 @@ static int drm_gem_shmem_get_pages_locked(struct 
drm_gem_shmem_object *shmem)
 
dma_resv_assert_held(shmem->base.resv);
 
-   if (shmem->pages_use_count++ > 0)
+   if (kref_get_unless_zero(&shmem->pages_use_count))
return 0;
 
pages = drm_gem_get_pages(obj);
if (IS_ERR(pages)) {
drm_dbg_kms(obj->dev, "Failed to get pages (%ld)\n",
PTR_ERR(pages));
-   shmem->pages_use_count = 0;
return PTR_ERR(pages);
}
 
@@ -195,26 +194,20 @@ static int drm_gem_shmem_get_pages_locked(struct 
drm_gem_shmem_object *shmem)
 
shmem->pages = pages;
 
+   kref_init(&shmem->pages_use_count);
+
return 0;
 }
 
-/*
- * drm_gem_shmem_put_pages_locked - Decrease use count on the backing pages 
for a shmem GEM object
- * @shmem: shmem GEM object
- *
- * This function decreases the use count and puts the backing pages when use 
drops to zero.
- */
-void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
-{
-   struct drm_gem_object *obj = &shmem->base;
-
-   dma_resv_assert_held(shmem->base.resv);
 
-   if (drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
-   return;
+static void drm_gem_shmem_kref_release_pages(struct kref *kref)
+{
+   struct drm_gem_shmem_object *shmem;
+   struct drm_gem_object *obj;
 
-   if (--shmem->pages_use_count > 0)
-   return;
+   shmem = container_of(kref, struct drm_gem_shmem_object,
+pages_use_count);
+   obj = &shmem->base;
 
 #ifdef CONFIG_X86
if (shmem->map_wc)
@@ -226,6 +219,19 @@ void drm_gem_shmem_put_pages_locked(struct 
drm_gem_shmem_object *shmem)
  shmem->pages_mark_accessed_on_put);
shmem->pages = NULL;
 }
+
+/*
+ * drm_gem_shmem_put_pages_locked - Decrease use count on the backing pages 
for a shmem GEM object
+ * @shmem: shmem GEM object
+ *
+ * This function decreases the use count and puts the backing pages when use 
drops to zero.
+ */
+void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
+{
+   dma_resv_assert_held(shmem->base.resv);
+
+   kref_put(&shmem->pages_use_count, drm_gem_shmem_kref_release_pages);
+}
 EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);
 
 static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
@@ -556,8 +562,8 @@ static void drm_gem_shmem_vm_open(struct vm_area_struct 
*vma)
 * mmap'd, vm_open() just grabs an additional reference for the new
 * mm the vma is getting copied into (ie. on fork()).
 */
-   if (!drm_WARN_ON_ONCE(obj->dev, !shmem->pages_use_count))
-   shmem->pages_use_count++;
+   drm_WARN_ON_ONCE(obj->dev,
+    !kref_get_unless_zero(&shmem->pages_use_count));
 
dma_resv_unlock(shmem->base.resv);
 
@@ -638,7 +644,7 @@ void drm_gem_shmem_print_info(const struct 
drm_gem_shmem_object *shmem,
if (shmem->base.import_attach)
return;
 
-   drm_printf_indent(p, indent, "pages_use_count=%u\n", 
shmem->pages_use_count);
+   drm_printf_indent(p, indent, "pages_use_count=%u\n", 
kref_read(&shmem->pages_use_count));
drm_printf_indent(p, indent, "vmap_use_count=%u\n", 
shmem->vmap_use_count);
drm_printf_indent(p, indent, "vaddr=%p\n", shmem->vaddr);
 }
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 7d74c71f5558..a5f015d188cd 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -47,7 +47,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
}
 
bo->base.pages = pages;
-   bo->base.pages_use_count = 1;
+   kref_init(&bo->base.pages_use_count);
 
mapping_set_unevictable(mapping);
}
diff --git a/drivers/gpu/drm/panfrost/panfrost_mmu.c 

[PATCH v15 18/23] drm/shmem-helper: Add memory shrinker

2023-08-27 Thread Dmitry Osipenko
Introduce a common drm-shmem shrinker for DRM drivers.

To start using the drm-shmem shrinker, drivers should do the following:

1. Implement evict() callback of GEM object where driver should check
   whether object is purgeable or evictable using drm-shmem helpers and
   perform the shrinking action

2. Initialize drm-shmem internals using drmm_gem_shmem_init(drm_device),
   which will register drm-shmem shrinker

3. Implement madvise IOCTL that will use drm_gem_shmem_madvise()
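
A rough driver-side sketch of the three steps above (driver symbols are
hypothetical and helper names are approximate to this series):

    static int my_gem_evict(struct drm_gem_object *obj)
    {
            struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);

            if (!drm_gem_shmem_is_purgeable(shmem))
                    return -EBUSY;

            /* invoked by the shrinker with the reservation lock held */
            drm_gem_shmem_purge_locked(shmem);

            return 0;
    }

    static int my_driver_init(struct drm_device *drm)
    {
            /* step 2: registers the common drm-shmem shrinker for this device */
            return drmm_gem_shmem_init(drm);
    }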

Signed-off-by: Daniel Almeida 
Signed-off-by: Dmitry Osipenko 
---
 drivers/gpu/drm/drm_gem_shmem_helper.c| 415 +-
 .../gpu/drm/panfrost/panfrost_gem_shrinker.c  |   9 +-
 include/drm/drm_device.h  |  10 +-
 include/drm/drm_gem_shmem_helper.h|  71 ++-
 4 files changed, 474 insertions(+), 31 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c 
b/drivers/gpu/drm/drm_gem_shmem_helper.c
index ca5da976aafa..f0f708e0ff00 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -20,6 +20,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 #include 
 
@@ -88,8 +89,6 @@ __drm_gem_shmem_create(struct drm_device *dev, size_t size, 
bool private)
if (ret)
goto err_release;
 
-   INIT_LIST_HEAD(&shmem->madv_list);
-
if (!private) {
/*
 * Our buffers are kept pinned, so allocating them
@@ -142,7 +141,42 @@ static void drm_gem_shmem_resv_assert_held(struct 
drm_gem_shmem_object *shmem)
 * refcount drops to zero, we pretend it is already locked.
 */
if (kref_read(&shmem->base.refcount))
-   drm_gem_shmem_resv_assert_held(shmem);
+   dma_resv_assert_held(shmem->base.resv);
+}
+
+static bool drm_gem_shmem_is_evictable(struct drm_gem_shmem_object *shmem)
+{
+   drm_gem_shmem_resv_assert_held(shmem);
+
+   return (shmem->madv >= 0) && shmem->base.funcs->evict &&
+   kref_read(&shmem->pages_use_count) &&
+   !kref_read(&shmem->pages_pin_count) &&
+   !shmem->base.dma_buf && !shmem->base.import_attach &&
+   shmem->sgt && !shmem->evicted;
+}
+
+static void
+drm_gem_shmem_update_pages_state_locked(struct drm_gem_shmem_object *shmem)
+{
+   struct drm_gem_object *obj = &shmem->base;
+   struct drm_gem_shmem *shmem_mm = obj->dev->shmem_mm;
+   struct drm_gem_shmem_shrinker *shmem_shrinker = &shmem_mm->shrinker;
+
+   drm_gem_shmem_resv_assert_held(shmem);
+
+   if (!shmem_shrinker || obj->import_attach)
+   return;
+
+   if (shmem->madv < 0)
+   drm_gem_lru_remove(&shmem->base);
+   else if (drm_gem_shmem_is_evictable(shmem) || drm_gem_shmem_is_purgeable(shmem))
+   drm_gem_lru_move_tail(&shmem_shrinker->lru_evictable, &shmem->base);
+   else if (shmem->evicted)
+   drm_gem_lru_move_tail(&shmem_shrinker->lru_evicted, &shmem->base);
+   else if (!shmem->pages)
+   drm_gem_lru_remove(&shmem->base);
+   else
+   drm_gem_lru_move_tail(&shmem_shrinker->lru_pinned, &shmem->base);
 }
 
 /**
@@ -159,6 +193,9 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
if (obj->import_attach) {
drm_prime_gem_destroy(obj, shmem->sgt);
} else if (!shmem->imported_sgt) {
+   /* take out shmem GEM object from the memory shrinker */
+   drm_gem_shmem_madvise_locked(shmem, -1);
+
drm_WARN_ON(obj->dev, kref_read(&shmem->vmap_use_count));
 
if (shmem->sgt) {
@@ -178,15 +215,26 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object 
*shmem)
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
 
-static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
+static int
+drm_gem_shmem_acquire_pages(struct drm_gem_shmem_object *shmem, bool init)
 {
struct drm_gem_object *obj = &shmem->base;
struct page **pages;
 
drm_gem_shmem_resv_assert_held(shmem);
 
-   if (kref_get_unless_zero(&shmem->pages_use_count))
+   if (shmem->madv < 0) {
+   drm_WARN_ON(obj->dev, shmem->pages);
+   return -ENOMEM;
+   }
+
+   if (shmem->pages) {
+   drm_WARN_ON(obj->dev, !shmem->evicted);
return 0;
+   }
+
+   if (drm_WARN_ON(obj->dev, !(init ^ kref_read(&shmem->pages_use_count))))
+   return -EINVAL;
 
pages = drm_gem_get_pages(obj);
if (IS_ERR(pages)) {
@@ -207,20 +255,20 @@ static int drm_gem_shmem_get_pages_locked(struct 
drm_gem_shmem_object *shmem)
 
shmem->pages = pages;
 
-   kref_init(&shmem->pages_use_count);
-
return 0;
 }
 
-
-static void drm_gem_shmem_kref_release_pages(struct kref *kref)
+static void
+drm_gem_shmem_release_pages_locked(struct drm_gem_shmem_object *shmem)
 {
-   struct drm_gem_shmem_object *shmem;
-   struct drm_gem_object *obj;
+   struct drm_gem_object *obj = &shmem->base;
 
-   shmem = container_of(kref, struct drm_gem_shmem_object,
-

[PATCH v15 10/23] locking/refcount, kref: Add kref_put_ww_mutex()

2023-08-27 Thread Dmitry Osipenko
Introduce a kref_put_ww_mutex() helper that handles the wait-wound
mutex auto-locking on kref_put(). This helper is wanted by DRM drivers
that extensively use dma-reservation locking, which in turn uses ww-mutex.
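
A hypothetical usage sketch (struct my_obj is made up for illustration): the
final unref takes the ww-mutex before the release callback runs, so release()
tears the object down with the lock held and is responsible for unlocking it.

    struct my_obj {
            struct kref refcount;
            struct ww_mutex lock;
    };

    static void my_obj_release(struct kref *kref)
    {
            struct my_obj *obj = container_of(kref, struct my_obj, refcount);

            /* the ww-mutex is held here; the release callback must unlock it */
            ww_mutex_unlock(&obj->lock);
            kfree(obj);
    }

    static void my_obj_put(struct my_obj *obj, struct ww_acquire_ctx *ctx)
    {
            kref_put_ww_mutex(&obj->refcount, my_obj_release, &obj->lock, ctx);
    }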

Signed-off-by: Dmitry Osipenko 
---
 include/linux/kref.h | 12 
 include/linux/refcount.h |  5 +
 lib/refcount.c   | 34 ++
 3 files changed, 51 insertions(+)

diff --git a/include/linux/kref.h b/include/linux/kref.h
index d32e21a2538c..b2d8dc6e9ae0 100644
--- a/include/linux/kref.h
+++ b/include/linux/kref.h
@@ -90,6 +90,18 @@ static inline int kref_put_lock(struct kref *kref,
return 0;
 }
 
+static inline int kref_put_ww_mutex(struct kref *kref,
+   void (*release)(struct kref *kref),
+   struct ww_mutex *lock,
+   struct ww_acquire_ctx *ctx)
+{
+   if (refcount_dec_and_ww_mutex_lock(&kref->refcount, lock, ctx)) {
+   release(kref);
+   return 1;
+   }
+   return 0;
+}
+
 /**
  * kref_get_unless_zero - Increment refcount for object unless it is zero.
  * @kref: object.
diff --git a/include/linux/refcount.h b/include/linux/refcount.h
index a62fcca97486..be9ad272bc77 100644
--- a/include/linux/refcount.h
+++ b/include/linux/refcount.h
@@ -99,6 +99,8 @@
 #include 
 
 struct mutex;
+struct ww_mutex;
+struct ww_acquire_ctx;
 
 /**
  * typedef refcount_t - variant of atomic_t specialized for reference counts
@@ -366,4 +368,7 @@ extern __must_check bool refcount_dec_and_lock(refcount_t 
*r, spinlock_t *lock)
 extern __must_check bool refcount_dec_and_lock_irqsave(refcount_t *r,
   spinlock_t *lock,
   unsigned long *flags) 
__cond_acquires(lock);
+extern __must_check bool refcount_dec_and_ww_mutex_lock(refcount_t *r,
+   struct ww_mutex *lock,
+   struct ww_acquire_ctx *ctx) __cond_acquires(&lock->base);
 #endif /* _LINUX_REFCOUNT_H */
diff --git a/lib/refcount.c b/lib/refcount.c
index a207a8f22b3c..3f6fd0ceed02 100644
--- a/lib/refcount.c
+++ b/lib/refcount.c
@@ -6,6 +6,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 
 #define REFCOUNT_WARN(str) WARN_ONCE(1, "refcount_t: " str ".\n")
@@ -184,3 +185,36 @@ bool refcount_dec_and_lock_irqsave(refcount_t *r, 
spinlock_t *lock,
return true;
 }
 EXPORT_SYMBOL(refcount_dec_and_lock_irqsave);
+
+/**
+ * refcount_dec_and_ww_mutex_lock - return holding ww-mutex if able to
+ *  decrement refcount to 0
+ * @r: the refcount
+ * @lock: the ww-mutex to be locked
+ * @ctx: wait-wound context
+ *
+ * Similar to atomic_dec_and_lock(), it will WARN on underflow and fail to
+ * decrement when saturated at REFCOUNT_SATURATED.
+ *
+ * Provides release memory ordering, such that prior loads and stores are done
+ * before, and provides a control dependency such that free() must come after.
+ * See the comment on top.
+ *
+ * Return: true and hold ww-mutex lock if able to decrement refcount to 0,
+ * false otherwise
+ */
+bool refcount_dec_and_ww_mutex_lock(refcount_t *r, struct ww_mutex *lock,
+   struct ww_acquire_ctx *ctx)
+{
+   if (refcount_dec_not_one(r))
+   return false;
+
+   ww_mutex_lock(lock, ctx);
+   if (!refcount_dec_and_test(r)) {
+   ww_mutex_unlock(lock);
+   return false;
+   }
+
+   return true;
+}
+EXPORT_SYMBOL(refcount_dec_and_ww_mutex_lock);
-- 
2.41.0



[PATCH v15 11/23] dma-resv: Add kref_put_dma_resv()

2023-08-27 Thread Dmitry Osipenko
Add a simple kref_put_dma_resv() helper that wraps kref_put_ww_mutex()
for drivers that need to lock dma-resv on kref_put().

It's not possible to easily add this helper to kref.h because of a header
inclusion dependency, hence add it to dma-resv.h.
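
A hypothetical usage sketch (struct my_bo is made up for illustration): the
release callback runs with the object's dma-resv lock held and must drop it.

    struct my_bo {
            struct drm_gem_object base;
            struct kref refcount;
    };

    static void my_bo_release(struct kref *kref)
    {
            struct my_bo *bo = container_of(kref, struct my_bo, refcount);

            /* bo->base.resv is locked here; drop it before freeing */
            dma_resv_unlock(bo->base.resv);
            kfree(bo);
    }

    static void my_bo_put(struct my_bo *bo, struct ww_acquire_ctx *ctx)
    {
            kref_put_dma_resv(&bo->refcount, my_bo_release, bo->base.resv, ctx);
    }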

Signed-off-by: Dmitry Osipenko 
---
 include/linux/dma-resv.h | 9 +
 1 file changed, 9 insertions(+)

diff --git a/include/linux/dma-resv.h b/include/linux/dma-resv.h
index 8d0e34dad446..c5cf302e4194 100644
--- a/include/linux/dma-resv.h
+++ b/include/linux/dma-resv.h
@@ -41,6 +41,7 @@
 
 #include 
 #include 
+#include 
 #include 
 #include 
 #include 
@@ -464,6 +465,14 @@ static inline void dma_resv_unlock(struct dma_resv *obj)
ww_mutex_unlock(&obj->lock);
 }
 
+static inline int kref_put_dma_resv(struct kref *kref,
+   void (*release)(struct kref *kref),
+   struct dma_resv *resv,
+   struct ww_acquire_ctx *ctx)
+{
+   return kref_put_ww_mutex(kref, release, &resv->lock, ctx);
+}
+
 void dma_resv_init(struct dma_resv *obj);
 void dma_resv_fini(struct dma_resv *obj);
 int dma_resv_reserve_fences(struct dma_resv *obj, unsigned int num_fences);
-- 
2.41.0



[PATCH v15 09/23] drm/shmem-helper: Remove obsoleted is_iomem test

2023-08-27 Thread Dmitry Osipenko
Everything that uses the mapped buffer should be agnostic to is_iomem.
The only reason for the is_iomem test is that we're setting shmem->vaddr
to the returned map->vaddr. Now that the shmem->vaddr code is gone, remove
the obsolete is_iomem test to clean up the code.

Suggested-by: Thomas Zimmermann 
Signed-off-by: Dmitry Osipenko 
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 6 --
 1 file changed, 6 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c 
b/drivers/gpu/drm/drm_gem_shmem_helper.c
index f053dc511508..d545d3d227d7 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -315,12 +315,6 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object 
*shmem,
 
if (obj->import_attach) {
ret = dma_buf_vmap(obj->import_attach->dmabuf, map);
-   if (!ret) {
-   if (drm_WARN_ON(obj->dev, map->is_iomem)) {
-   dma_buf_vunmap(obj->import_attach->dmabuf, map);
-   return -EIO;
-   }
-   }
} else {
pgprot_t prot = PAGE_KERNEL;
 
-- 
2.41.0



[PATCH v15 08/23] drm/shmem-helper: Refactor locked/unlocked functions

2023-08-27 Thread Dmitry Osipenko
Add the _locked postfix to and remove the _unlocked postfix from drm-shmem
function names, making the names consistent with the drm/gem core code.

Suggested-by: Boris Brezillon 
Signed-off-by: Dmitry Osipenko 
---
 drivers/gpu/drm/drm_gem_shmem_helper.c| 64 +--
 drivers/gpu/drm/lima/lima_gem.c   |  8 +--
 drivers/gpu/drm/panfrost/panfrost_drv.c   |  2 +-
 drivers/gpu/drm/panfrost/panfrost_gem.c   |  6 +-
 .../gpu/drm/panfrost/panfrost_gem_shrinker.c  |  2 +-
 drivers/gpu/drm/panfrost/panfrost_mmu.c   |  2 +-
 drivers/gpu/drm/v3d/v3d_bo.c  |  4 +-
 drivers/gpu/drm/virtio/virtgpu_object.c   |  4 +-
 include/drm/drm_gem_shmem_helper.h| 36 +--
 9 files changed, 64 insertions(+), 64 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c 
b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 575704f38808..f053dc511508 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -43,8 +43,8 @@ static const struct drm_gem_object_funcs drm_gem_shmem_funcs 
= {
.pin = drm_gem_shmem_object_pin,
.unpin = drm_gem_shmem_object_unpin,
.get_sg_table = drm_gem_shmem_object_get_sg_table,
-   .vmap = drm_gem_shmem_object_vmap,
-   .vunmap = drm_gem_shmem_object_vunmap,
+   .vmap = drm_gem_shmem_object_vmap_locked,
+   .vunmap = drm_gem_shmem_object_vunmap_locked,
.mmap = drm_gem_shmem_object_mmap,
.vm_ops = &drm_gem_shmem_vm_ops,
 };
@@ -153,7 +153,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
kfree(shmem->sgt);
}
if (shmem->got_sgt)
-   drm_gem_shmem_put_pages(shmem);
+   drm_gem_shmem_put_pages_locked(shmem);
 
drm_WARN_ON(obj->dev, shmem->pages_use_count);
 
@@ -165,7 +165,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_free);
 
-static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
+static int drm_gem_shmem_get_pages_locked(struct drm_gem_shmem_object *shmem)
 {
struct drm_gem_object *obj = &shmem->base;
struct page **pages;
@@ -199,12 +199,12 @@ static int drm_gem_shmem_get_pages(struct 
drm_gem_shmem_object *shmem)
 }
 
 /*
- * drm_gem_shmem_put_pages - Decrease use count on the backing pages for a 
shmem GEM object
+ * drm_gem_shmem_put_pages_locked - Decrease use count on the backing pages 
for a shmem GEM object
  * @shmem: shmem GEM object
  *
  * This function decreases the use count and puts the backing pages when use 
drops to zero.
  */
-void drm_gem_shmem_put_pages(struct drm_gem_shmem_object *shmem)
+void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 {
struct drm_gem_object *obj = &shmem->base;
 
@@ -226,7 +226,7 @@ void drm_gem_shmem_put_pages(struct drm_gem_shmem_object 
*shmem)
  shmem->pages_mark_accessed_on_put);
shmem->pages = NULL;
 }
-EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages);
+EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);
 
 static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
 {
@@ -234,7 +234,7 @@ static int drm_gem_shmem_pin_locked(struct 
drm_gem_shmem_object *shmem)
 
dma_resv_assert_held(shmem->base.resv);
 
-   ret = drm_gem_shmem_get_pages(shmem);
+   ret = drm_gem_shmem_get_pages_locked(shmem);
 
return ret;
 }
@@ -243,7 +243,7 @@ static void drm_gem_shmem_unpin_locked(struct 
drm_gem_shmem_object *shmem)
 {
dma_resv_assert_held(shmem->base.resv);
 
-   drm_gem_shmem_put_pages(shmem);
+   drm_gem_shmem_put_pages_locked(shmem);
 }
 
 /**
@@ -293,7 +293,7 @@ void drm_gem_shmem_unpin(struct drm_gem_shmem_object *shmem)
 EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin);
 
 /*
- * drm_gem_shmem_vmap - Create a virtual mapping for a shmem GEM object
+ * drm_gem_shmem_vmap_locked - Create a virtual mapping for a shmem GEM object
  * @shmem: shmem GEM object
  * @map: Returns the kernel virtual address of the SHMEM GEM object's backing
  *   store.
@@ -302,13 +302,13 @@ EXPORT_SYMBOL_GPL(drm_gem_shmem_unpin);
  * exists for the buffer backing the shmem GEM object. It hides the differences
  * between dma-buf imported and natively allocated objects.
  *
- * Acquired mappings should be cleaned up by calling drm_gem_shmem_vunmap().
+ * Acquired mappings should be cleaned up by calling 
drm_gem_shmem_vunmap_locked().
  *
  * Returns:
  * 0 on success or a negative error code on failure.
  */
-int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
-  struct iosys_map *map)
+int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object *shmem,
+ struct iosys_map *map)
 {
struct drm_gem_object *obj = &shmem->base;
int ret = 0;
@@ -331,7 +331,7 @@ int drm_gem_shmem_vmap(struct drm_gem_shmem_object *shmem,
return 0;
}
 
-   

[PATCH v15 17/23] drm/shmem-helper: Add and use drm_gem_shmem_resv_assert_held() helper

2023-08-27 Thread Dmitry Osipenko
In preparation for adding the drm-shmem memory shrinker, move all reservation
locking lockdep checks to the new drm_gem_shmem_resv_assert_held() helper.
This resolves a spurious lockdep warning about wrong locking order vs the
fs_reclaim code paths during freeing of a shmem GEM, where lockdep isn't
aware that locking contention with fs_reclaim is impossible at this special
time.

Signed-off-by: Dmitry Osipenko 
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 37 +-
 1 file changed, 25 insertions(+), 12 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c 
b/drivers/gpu/drm/drm_gem_shmem_helper.c
index d96fee3d6166..ca5da976aafa 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -128,6 +128,23 @@ struct drm_gem_shmem_object *drm_gem_shmem_create(struct 
drm_device *dev, size_t
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_create);
 
+static void drm_gem_shmem_resv_assert_held(struct drm_gem_shmem_object *shmem)
+{
+   /*
+* Destroying the object is a special case.. drm_gem_shmem_free()
+* calls many things that WARN_ON if the obj lock is not held.  But
+* acquiring the obj lock in drm_gem_shmem_free() can cause a locking
+* order inversion between reservation_ww_class_mutex and fs_reclaim.
+*
+* This deadlock is not actually possible, because no one should
+* be already holding the lock when drm_gem_shmem_free() is called.
+* Unfortunately lockdep is not aware of this detail.  So when the
+* refcount drops to zero, we pretend it is already locked.
+*/
+   if (kref_read(&shmem->base.refcount))
+   drm_gem_shmem_resv_assert_held(shmem);
+}
+
 /**
  * drm_gem_shmem_free - Free resources associated with a shmem GEM object
  * @shmem: shmem GEM object to free
@@ -142,8 +159,6 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
if (obj->import_attach) {
drm_prime_gem_destroy(obj, shmem->sgt);
} else if (!shmem->imported_sgt) {
-   dma_resv_lock(shmem->base.resv, NULL);
-
drm_WARN_ON(obj->dev, kref_read(&shmem->vmap_use_count));
 
if (shmem->sgt) {
@@ -156,8 +171,6 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
drm_gem_shmem_put_pages_locked(shmem);
 
drm_WARN_ON(obj->dev, kref_read(&shmem->pages_use_count));
-
-   dma_resv_unlock(shmem->base.resv);
}
 
drm_gem_object_release(obj);
@@ -170,7 +183,7 @@ static int drm_gem_shmem_get_pages_locked(struct 
drm_gem_shmem_object *shmem)
struct drm_gem_object *obj = &shmem->base;
struct page **pages;
 
-   dma_resv_assert_held(shmem->base.resv);
+   drm_gem_shmem_resv_assert_held(shmem);
 
if (kref_get_unless_zero(&shmem->pages_use_count))
return 0;
@@ -228,7 +241,7 @@ static void drm_gem_shmem_kref_release_pages(struct kref 
*kref)
  */
 void drm_gem_shmem_put_pages_locked(struct drm_gem_shmem_object *shmem)
 {
-   dma_resv_assert_held(shmem->base.resv);
+   drm_gem_shmem_resv_assert_held(shmem);
 
kref_put(&shmem->pages_use_count, drm_gem_shmem_kref_release_pages);
 }
@@ -252,7 +265,7 @@ static int drm_gem_shmem_pin_locked(struct 
drm_gem_shmem_object *shmem)
 {
int ret;
 
-   dma_resv_assert_held(shmem->base.resv);
+   drm_gem_shmem_resv_assert_held(shmem);
 
if (kref_get_unless_zero(&shmem->pages_pin_count))
return 0;
@@ -276,7 +289,7 @@ static void drm_gem_shmem_kref_unpin_pages(struct kref 
*kref)
 
 static void drm_gem_shmem_unpin_locked(struct drm_gem_shmem_object *shmem)
 {
-   dma_resv_assert_held(shmem->base.resv);
+   drm_gem_shmem_resv_assert_held(shmem);
 
kref_put(&shmem->pages_pin_count, drm_gem_shmem_kref_unpin_pages);
 }
@@ -357,7 +370,7 @@ int drm_gem_shmem_vmap_locked(struct drm_gem_shmem_object 
*shmem,
} else {
pgprot_t prot = PAGE_KERNEL;
 
-   dma_resv_assert_held(shmem->base.resv);
+   drm_gem_shmem_resv_assert_held(shmem);
 
if (kref_get_unless_zero(&shmem->vmap_use_count)) {
iosys_map_set_vaddr(map, shmem->vaddr);
@@ -426,7 +439,7 @@ void drm_gem_shmem_vunmap_locked(struct 
drm_gem_shmem_object *shmem,
if (obj->import_attach) {
dma_buf_vunmap(obj->import_attach->dmabuf, map);
} else {
-   dma_resv_assert_held(shmem->base.resv);
+   drm_gem_shmem_resv_assert_held(shmem);
kref_put(&shmem->vmap_use_count, drm_gem_shmem_kref_vunmap);
}
 
@@ -462,7 +475,7 @@ drm_gem_shmem_create_with_handle(struct drm_file *file_priv,
  */
 int drm_gem_shmem_madvise_locked(struct drm_gem_shmem_object *shmem, int madv)
 {
-   dma_resv_assert_held(shmem->base.resv);
+   drm_gem_shmem_resv_assert_held(shmem);
 
if (shmem->madv >= 0)
shmem->madv = madv;
@@ -478,7 +491,7 @@ void 

[PATCH v15 14/23] drm/shmem-helper: Add and use lockless drm_gem_shmem_get_pages()

2023-08-27 Thread Dmitry Osipenko
Add a lockless drm_gem_shmem_get_pages() helper that skips taking the
reservation lock if pages_use_count is non-zero, leveraging the atomicity of
the kref counter. Make drm_gem_shmem_mmap() use the new helper.

Suggested-by: Boris Brezillon 
Signed-off-by: Dmitry Osipenko 
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 19 +++
 1 file changed, 15 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c 
b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 5a2e37b3e51d..f386289c24fc 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -234,6 +234,20 @@ void drm_gem_shmem_put_pages_locked(struct 
drm_gem_shmem_object *shmem)
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_put_pages_locked);
 
+static int drm_gem_shmem_get_pages(struct drm_gem_shmem_object *shmem)
+{
+   int ret;
+
+   if (kref_get_unless_zero(&shmem->pages_use_count))
+   return 0;
+
+   dma_resv_lock(shmem->base.resv, NULL);
+   ret = drm_gem_shmem_get_pages_locked(shmem);
+   dma_resv_unlock(shmem->base.resv);
+
+   return ret;
+}
+
 static int drm_gem_shmem_pin_locked(struct drm_gem_shmem_object *shmem)
 {
int ret;
@@ -616,10 +630,7 @@ int drm_gem_shmem_mmap(struct drm_gem_shmem_object *shmem, 
struct vm_area_struct
return ret;
}
 
-   dma_resv_lock(shmem->base.resv, NULL);
-   ret = drm_gem_shmem_get_pages_locked(shmem);
-   dma_resv_unlock(shmem->base.resv);
-
+   ret = drm_gem_shmem_get_pages(shmem);
if (ret)
return ret;
 
-- 
2.41.0



[PATCH v15 06/23] drm/virtio: Replace drm_gem_shmem_free() with drm_gem_object_put()

2023-08-27 Thread Dmitry Osipenko
Prepare virtio_gpu_object_create() for the addition of memory shrinker support
by replacing the open-coded drm_gem_shmem_free() with drm_gem_object_put(),
which decrements the GEM refcount to 0. This becomes important for drm-shmem
because it will start to use the GEM refcount at shmem BO freeing time in
order to prevent a spurious lockdep warning about resv lock ordering vs the
fs_reclaim code paths.

Signed-off-by: Dmitry Osipenko 
---
 drivers/gpu/drm/virtio/virtgpu_object.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/virtio/virtgpu_object.c 
b/drivers/gpu/drm/virtio/virtgpu_object.c
index c7e74cf13022..343b13428125 100644
--- a/drivers/gpu/drm/virtio/virtgpu_object.c
+++ b/drivers/gpu/drm/virtio/virtgpu_object.c
@@ -244,6 +244,6 @@ int virtio_gpu_object_create(struct virtio_gpu_device 
*vgdev,
 err_put_id:
virtio_gpu_resource_id_put(vgdev, bo->hw_res_handle);
 err_free_gem:
-   drm_gem_shmem_free(shmem_obj);
+   drm_gem_object_put(&bo->base.base);
return ret;
 }
-- 
2.41.0



[PATCH v15 05/23] drm/v3d: Replace open-coded drm_gem_shmem_free() with drm_gem_object_put()

2023-08-27 Thread Dmitry Osipenko
drm_gem_shmem_free() doesn't put the GEM's kref to zero, which becomes
important with the addition of shrinker support to drm-shmem: it will use
kref=0 to avoid taking the lock during the special GEM-freeing time,
preventing a spurious lockdep warning about locking order vs the fs_reclaim
code paths.

Replace the open-coded drm_gem_shmem_free() with drm_gem_object_put(), which
drops the kref to zero before freeing the GEM.
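
A sketch of the error-path pattern this switches to (illustrative only; the
driver-specific setup step is a placeholder):

    static struct drm_gem_object *example_bo_create(struct drm_device *dev, size_t size)
    {
            struct drm_gem_shmem_object *shmem;
            int ret;

            shmem = drm_gem_shmem_create(dev, size);
            if (IS_ERR(shmem))
                    return ERR_CAST(shmem);

            ret = 0; /* driver-specific setup of the BO would happen here */
            if (ret) {
                    /* drop the GEM reference instead of calling drm_gem_shmem_free() */
                    drm_gem_object_put(&shmem->base);
                    return ERR_PTR(ret);
            }

            return &shmem->base;
    }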

Signed-off-by: Dmitry Osipenko 
---
 drivers/gpu/drm/v3d/v3d_bo.c | 22 --
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/drivers/gpu/drm/v3d/v3d_bo.c b/drivers/gpu/drm/v3d/v3d_bo.c
index 8b3229a37c6d..70c1095d6eec 100644
--- a/drivers/gpu/drm/v3d/v3d_bo.c
+++ b/drivers/gpu/drm/v3d/v3d_bo.c
@@ -33,16 +33,18 @@ void v3d_free_object(struct drm_gem_object *obj)
struct v3d_dev *v3d = to_v3d_dev(obj->dev);
struct v3d_bo *bo = to_v3d_bo(obj);
 
-   v3d_mmu_remove_ptes(bo);
+   if (drm_mm_node_allocated(&bo->node)) {
+   v3d_mmu_remove_ptes(bo);
 
-   mutex_lock(&v3d->bo_lock);
-   v3d->bo_stats.num_allocated--;
-   v3d->bo_stats.pages_allocated -= obj->size >> PAGE_SHIFT;
-   mutex_unlock(&v3d->bo_lock);
+   mutex_lock(&v3d->bo_lock);
+   v3d->bo_stats.num_allocated--;
+   v3d->bo_stats.pages_allocated -= obj->size >> PAGE_SHIFT;
+   mutex_unlock(&v3d->bo_lock);
 
-   spin_lock(&v3d->mm_lock);
-   drm_mm_remove_node(&bo->node);
-   spin_unlock(&v3d->mm_lock);
+   spin_lock(&v3d->mm_lock);
+   drm_mm_remove_node(&bo->node);
+   spin_unlock(&v3d->mm_lock);
+   }
 
/* GPU execution may have dirtied any pages in the BO. */
bo->base.pages_mark_dirty_on_put = true;
@@ -142,7 +144,7 @@ struct v3d_bo *v3d_bo_create(struct drm_device *dev, struct 
drm_file *file_priv,
return bo;
 
 free_obj:
-   drm_gem_shmem_free(shmem_obj);
+   drm_gem_object_put(&shmem_obj->base);
return ERR_PTR(ret);
 }
 
@@ -160,7 +162,7 @@ v3d_prime_import_sg_table(struct drm_device *dev,
 
ret = v3d_bo_create_finish(obj);
if (ret) {
-   drm_gem_shmem_free(&to_v3d_bo(obj)->base);
+   drm_gem_object_put(obj);
return ERR_PTR(ret);
}
 
-- 
2.41.0



[PATCH v15 03/23] drm/gem: Change locked/unlocked postfix of drm_gem_v/unmap() function names

2023-08-27 Thread Dmitry Osipenko
Make the drm/gem API function names consistent by having locked functions
use the _locked postfix in their names, while the unlocked variants drop
the _unlocked postfix. Rename the drm_gem_v/unmap() functions to
make them consistent with the rest of the API functions.
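
A small sketch of the convention after the rename (illustrative only): the
_locked variant expects the caller to hold the reservation lock, while the
plain name locks and unlocks internally.

    static int example_map_unmap(struct drm_gem_object *obj)
    {
            struct iosys_map map;
            int ret;

            dma_resv_lock(obj->resv, NULL);
            ret = drm_gem_vmap_locked(obj, &map);   /* caller already holds the resv lock */
            dma_resv_unlock(obj->resv);
            if (ret)
                    return ret;

            drm_gem_vunmap(obj, &map);              /* takes and drops the resv lock itself */

            return 0;
    }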

Suggested-by: Boris Brezillon 
Signed-off-by: Dmitry Osipenko 
---
 drivers/gpu/drm/drm_client.c |  6 +++---
 drivers/gpu/drm/drm_gem.c| 20 ++--
 drivers/gpu/drm/drm_gem_framebuffer_helper.c |  6 +++---
 drivers/gpu/drm/drm_internal.h   |  4 ++--
 drivers/gpu/drm/drm_prime.c  |  4 ++--
 drivers/gpu/drm/lima/lima_sched.c|  4 ++--
 drivers/gpu/drm/panfrost/panfrost_dump.c |  4 ++--
 drivers/gpu/drm/panfrost/panfrost_perfcnt.c  |  6 +++---
 include/drm/drm_gem.h|  4 ++--
 9 files changed, 29 insertions(+), 29 deletions(-)

diff --git a/drivers/gpu/drm/drm_client.c b/drivers/gpu/drm/drm_client.c
index 037e36f2049c..29306657117a 100644
--- a/drivers/gpu/drm/drm_client.c
+++ b/drivers/gpu/drm/drm_client.c
@@ -265,7 +265,7 @@ void drm_client_dev_restore(struct drm_device *dev)
 static void drm_client_buffer_delete(struct drm_client_buffer *buffer)
 {
if (buffer->gem) {
-   drm_gem_vunmap_unlocked(buffer->gem, &buffer->map);
+   drm_gem_vunmap(buffer->gem, &buffer->map);
drm_gem_object_put(buffer->gem);
}
 
@@ -349,7 +349,7 @@ drm_client_buffer_vmap(struct drm_client_buffer *buffer,
 * fd_install step out of the driver backend hooks, to make that
 * final step optional for internal users.
 */
-   ret = drm_gem_vmap_unlocked(buffer->gem, map);
+   ret = drm_gem_vmap(buffer->gem, map);
if (ret)
return ret;
 
@@ -371,7 +371,7 @@ void drm_client_buffer_vunmap(struct drm_client_buffer 
*buffer)
 {
struct iosys_map *map = &buffer->map;
 
-   drm_gem_vunmap_unlocked(buffer->gem, map);
+   drm_gem_vunmap(buffer->gem, map);
 }
 EXPORT_SYMBOL(drm_client_buffer_vunmap);
 
diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index 6129b89bb366..fae5832bb0bd 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1173,7 +1173,7 @@ void drm_gem_unpin(struct drm_gem_object *obj)
obj->funcs->unpin(obj);
 }
 
-int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
+int drm_gem_vmap_locked(struct drm_gem_object *obj, struct iosys_map *map)
 {
int ret;
 
@@ -1190,9 +1190,9 @@ int drm_gem_vmap(struct drm_gem_object *obj, struct 
iosys_map *map)
 
return 0;
 }
-EXPORT_SYMBOL(drm_gem_vmap);
+EXPORT_SYMBOL(drm_gem_vmap_locked);
 
-void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
+void drm_gem_vunmap_locked(struct drm_gem_object *obj, struct iosys_map *map)
 {
dma_resv_assert_held(obj->resv);
 
@@ -1205,27 +1205,27 @@ void drm_gem_vunmap(struct drm_gem_object *obj, struct 
iosys_map *map)
/* Always set the mapping to NULL. Callers may rely on this. */
iosys_map_clear(map);
 }
-EXPORT_SYMBOL(drm_gem_vunmap);
+EXPORT_SYMBOL(drm_gem_vunmap_locked);
 
-int drm_gem_vmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map)
+int drm_gem_vmap(struct drm_gem_object *obj, struct iosys_map *map)
 {
int ret;
 
dma_resv_lock(obj->resv, NULL);
-   ret = drm_gem_vmap(obj, map);
+   ret = drm_gem_vmap_locked(obj, map);
dma_resv_unlock(obj->resv);
 
return ret;
 }
-EXPORT_SYMBOL(drm_gem_vmap_unlocked);
+EXPORT_SYMBOL(drm_gem_vmap);
 
-void drm_gem_vunmap_unlocked(struct drm_gem_object *obj, struct iosys_map *map)
+void drm_gem_vunmap(struct drm_gem_object *obj, struct iosys_map *map)
 {
dma_resv_lock(obj->resv, NULL);
-   drm_gem_vunmap(obj, map);
+   drm_gem_vunmap_locked(obj, map);
dma_resv_unlock(obj->resv);
 }
-EXPORT_SYMBOL(drm_gem_vunmap_unlocked);
+EXPORT_SYMBOL(drm_gem_vunmap);
 
 /**
  * drm_gem_lock_reservations - Sets up the ww context and acquires
diff --git a/drivers/gpu/drm/drm_gem_framebuffer_helper.c 
b/drivers/gpu/drm/drm_gem_framebuffer_helper.c
index 3bdb6ba37ff4..3808f47310bf 100644
--- a/drivers/gpu/drm/drm_gem_framebuffer_helper.c
+++ b/drivers/gpu/drm/drm_gem_framebuffer_helper.c
@@ -362,7 +362,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, struct 
iosys_map *map,
ret = -EINVAL;
goto err_drm_gem_vunmap;
}
-   ret = drm_gem_vmap_unlocked(obj, &map[i]);
+   ret = drm_gem_vmap(obj, &map[i]);
if (ret)
goto err_drm_gem_vunmap;
}
@@ -384,7 +384,7 @@ int drm_gem_fb_vmap(struct drm_framebuffer *fb, struct 
iosys_map *map,
obj = drm_gem_fb_get_obj(fb, i);
if (!obj)
continue;
-   drm_gem_vunmap_unlocked(obj, &map[i]);
+   drm_gem_vunmap(obj, &map[i]);
}

[PATCH v15 04/23] drm/gem: Add _locked postfix to functions that have unlocked counterpart

2023-08-27 Thread Dmitry Osipenko
Add the _locked postfix to drm_gem functions that have unlocked counterparts
to make GEM function naming more consistent and intuitive with regard to
the locking requirements.

Suggested-by: Boris Brezillon 
Signed-off-by: Dmitry Osipenko 
---
 drivers/gpu/drm/drm_gem.c | 6 +++---
 include/drm/drm_gem.h | 2 +-
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/gpu/drm/drm_gem.c b/drivers/gpu/drm/drm_gem.c
index fae5832bb0bd..8c0268944199 100644
--- a/drivers/gpu/drm/drm_gem.c
+++ b/drivers/gpu/drm/drm_gem.c
@@ -1488,10 +1488,10 @@ drm_gem_lru_scan(struct drm_gem_lru *lru,
 EXPORT_SYMBOL(drm_gem_lru_scan);
 
 /**
- * drm_gem_evict - helper to evict backing pages for a GEM object
+ * drm_gem_evict_locked - helper to evict backing pages for a GEM object
  * @obj: obj in question
  */
-int drm_gem_evict(struct drm_gem_object *obj)
+int drm_gem_evict_locked(struct drm_gem_object *obj)
 {
dma_resv_assert_held(obj->resv);
 
@@ -1503,4 +1503,4 @@ int drm_gem_evict(struct drm_gem_object *obj)
 
return 0;
 }
-EXPORT_SYMBOL(drm_gem_evict);
+EXPORT_SYMBOL(drm_gem_evict_locked);
diff --git a/include/drm/drm_gem.h b/include/drm/drm_gem.h
index f338f8cfacf7..e78e6d817451 100644
--- a/include/drm/drm_gem.h
+++ b/include/drm/drm_gem.h
@@ -542,7 +542,7 @@ unsigned long drm_gem_lru_scan(struct drm_gem_lru *lru,
   unsigned long *remaining,
   bool (*shrink)(struct drm_gem_object *obj));
 
-int drm_gem_evict(struct drm_gem_object *obj);
+int drm_gem_evict_locked(struct drm_gem_object *obj);
 
 #ifdef CONFIG_LOCKDEP
 /**
-- 
2.41.0



[PATCH v15 01/23] drm/shmem-helper: Fix UAF in error path when freeing SGT of imported GEM

2023-08-27 Thread Dmitry Osipenko
Freeing a drm-shmem GEM right after creating it using
drm_gem_shmem_prime_import_sg_table() frees the SGT of the imported dma-buf,
and then dma-buf frees this SGT a second time.

v3d_prime_import_sg_table() is an example of an error code path where the
dma-buf's SGT is freed by drm-shmem and then freed a second time by
dma_buf_unmap_attachment() in drm_gem_prime_import_dev().

Add a drm-shmem GEM flag telling that this is an imported SGT that shall not
be treated as the GEM's own SGT, fixing the use-after-free bug.

Cc: sta...@vger.kernel.org
Fixes: 2194a63a818d ("drm: Add library for shmem backed GEM objects")
Signed-off-by: Dmitry Osipenko 
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 3 ++-
 include/drm/drm_gem_shmem_helper.h | 7 +++
 2 files changed, 9 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c 
b/drivers/gpu/drm/drm_gem_shmem_helper.c
index a783d2245599..78d9cf2355a5 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -141,7 +141,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
 
if (obj->import_attach) {
drm_prime_gem_destroy(obj, shmem->sgt);
-   } else {
+   } else if (!shmem->imported_sgt) {
dma_resv_lock(shmem->base.resv, NULL);
 
drm_WARN_ON(obj->dev, shmem->vmap_use_count);
@@ -758,6 +758,7 @@ drm_gem_shmem_prime_import_sg_table(struct drm_device *dev,
return ERR_CAST(shmem);
 
shmem->sgt = sgt;
+   shmem->imported_sgt = true;
 
drm_dbg_prime(dev, "size = %zu\n", size);
 
diff --git a/include/drm/drm_gem_shmem_helper.h 
b/include/drm/drm_gem_shmem_helper.h
index bf0c31aa8fbe..ec70a98a8fe1 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -73,6 +73,13 @@ struct drm_gem_shmem_object {
 */
unsigned int vmap_use_count;
 
+   /**
+* @imported_sgt:
+*
+* True if SG table belongs to imported dma-buf.
+*/
+   bool imported_sgt : 1;
+
/**
 * @pages_mark_dirty_on_put:
 *
-- 
2.41.0



[PATCH v15 02/23] drm/shmem-helper: Use flag for tracking page count bumped by get_pages_sgt()

2023-08-27 Thread Dmitry Osipenko
Use a separate flag for tracking the page count bumped by shmem->sgt to avoid
an imbalanced page counter at drm_gem_shmem_free() time. It's fragile to
assume that a populated shmem->pages at freeing time means that the count was
bumped by drm_gem_shmem_get_pages_sgt(); using a flag removes the ambiguity.

Signed-off-by: Dmitry Osipenko 
---
 drivers/gpu/drm/drm_gem_shmem_helper.c | 3 ++-
 drivers/gpu/drm/lima/lima_gem.c| 1 +
 include/drm/drm_gem_shmem_helper.h | 7 +++
 3 files changed, 10 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/drm_gem_shmem_helper.c 
b/drivers/gpu/drm/drm_gem_shmem_helper.c
index 78d9cf2355a5..db20b9123891 100644
--- a/drivers/gpu/drm/drm_gem_shmem_helper.c
+++ b/drivers/gpu/drm/drm_gem_shmem_helper.c
@@ -152,7 +152,7 @@ void drm_gem_shmem_free(struct drm_gem_shmem_object *shmem)
sg_free_table(shmem->sgt);
kfree(shmem->sgt);
}
-   if (shmem->pages)
+   if (shmem->got_sgt)
drm_gem_shmem_put_pages(shmem);
 
drm_WARN_ON(obj->dev, shmem->pages_use_count);
@@ -687,6 +687,7 @@ static struct sg_table 
*drm_gem_shmem_get_pages_sgt_locked(struct drm_gem_shmem_
if (ret)
goto err_free_sgt;
 
+   shmem->got_sgt = true;
shmem->sgt = sgt;
 
return sgt;
diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
index 4f9736e5f929..28602302c281 100644
--- a/drivers/gpu/drm/lima/lima_gem.c
+++ b/drivers/gpu/drm/lima/lima_gem.c
@@ -89,6 +89,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
}
 
*bo->base.sgt = sgt;
+   bo->base.got_sgt = true;
 
if (vm) {
ret = lima_vm_map_bo(vm, bo, old_size >> PAGE_SHIFT);
diff --git a/include/drm/drm_gem_shmem_helper.h 
b/include/drm/drm_gem_shmem_helper.h
index ec70a98a8fe1..f87124629bb5 100644
--- a/include/drm/drm_gem_shmem_helper.h
+++ b/include/drm/drm_gem_shmem_helper.h
@@ -73,6 +73,13 @@ struct drm_gem_shmem_object {
 */
unsigned int vmap_use_count;
 
+   /**
+* @got_sgt:
+*
+* True if SG table was retrieved using drm_gem_shmem_get_pages_sgt()
+*/
+   bool got_sgt : 1;
+
/**
 * @imported_sgt:
 *
-- 
2.41.0



[PATCH v15 00/23] Add generic memory shrinker to VirtIO-GPU and Panfrost DRM drivers

2023-08-27 Thread Dmitry Osipenko
This series:

  1. Adds common drm-shmem memory shrinker
  2. Enables shrinker for VirtIO-GPU driver
  3. Switches Panfrost driver to the common shrinker

Changelog:

v15:- Moved drm-shmem reference counters to use kref that allows to
  optimize unlocked functions, like was suggested by Boris Brezillon.

- Changed drm/gem/shmem function names to use _locked postfix and
  dropped the _unlocked, making the naming scheme consistent across
  DRM code, like was suggested by Boris Brezillon.

- Added patch that fixes UAF in drm-shmem for drivers that import
  dma-buf and then release buffer in the import error code path.

- Added patch that makes drm-shmem use new flag for SGT's get_pages()
  refcounting, preventing unbalanced refcounting when GEM is freed.

- Fixed guest blob pinning in virtio-gpu driver that was missed
  previously in the shrinker patch.

- Moved VC4 and virtio-gpu drivers to use drm_gem_put() in
  GEM-creation error code paths, which is now required by drm-shmem
  and was missed in a previous patch versions.

- Virtio-GPU now attaches shmem pages to host on first use and not
  when BO is created. In older patch versions there was a potential
  race condition in the BO creation code path where both
  get_sgt()+object_attach() should've been made under same resv lock,
  otherwise pages could be evicted before attachment is invoked.

- Virtio-GPU and drm-shmem shrinker patches are split into smaller
  ones.

v14:- All the prerequisite reservation locking patches landed upstream,
  previously were a part of this series in v13 and older.


https://lore.kernel.org/dri-devel/20230529223935.2672495-1-dmitry.osipe...@collabora.com/

- Added patches to improve locked/unlocked function names, like was
  suggested by Boris Brezillon for v13.

- Made all exported drm-shmem symbols GPL, like was previously
  discussed with Thomas Zimmermann on this series.

- Improved virtio-gpu shrinker patch. Now it won't detach purged BO
  when userspace closes GEM. Crosvm (and not qemu) checks res_id on
  CMD_CTX_DETACH_RESOURCE and prints noisy error message if ID is
  invalid, which wasn't noticed before.

v13:- Updated virtio-gpu shrinker patch to use drm_gem_shmem_object_pin()
  directly instead of drm_gem_pin() and dropped patch that exported
  drm_gem_pin() functions, like was requested by Thomas Zimmermann in
  v12.

v12:- Fixed the "no previous prototype for function" warning reported by
  kernel build bot for v11.

- Fixed the missing reservation lock reported by Intel CI for VGEM
  driver. Other drivers using drm-shmem were affected similarly to
  VGEM. The problem was in the dma-buf attachment code path that led
  to drm-shmem pinning function which assumed the held reservation lock
  by drm_gem_pin(). In the past that code path was causing trouble for
  i915 driver and we've changed the locking scheme for the attachment
  code path in the dma-buf core to let exporters to handle the locking
  themselves. After a closer investigation, I realized that my assumption
  about testing of dma-buf export code path using Panfrost driver was
  incorrect. Now I created an additional local test to exercise the Panfrost
  export path. I also reproduced the issue reported by the Intel CI for
  v10. It's all fixed now by making the drm_gem_shmem_pin() to take the
  resv lock by itself.

- Patches are based on top of drm-tip, CC'd intel-gfx CI for testing.

v11:- Rebased on a recent linux-next. Added new patch as a result:

drm/shmem-helper: Export drm_gem_shmem_get_pages_sgt_locked()

It's needed by the virtio-gpu driver to swap-in/unevict shmem
object, previously get_pages_sgt() didn't use locking.

- Separated the "Add memory shrinker" patch into smaller parts to ease
  the reviewing, as was requested by Thomas Zimmermann:

drm/shmem-helper: Factor out pages alloc/release from
  drm_gem_shmem_get/put_pages()
drm/shmem-helper: Add pages_pin_count field
drm/shmem-helper: Switch drm_gem_shmem_vmap/vunmap to use pin/unpin
drm/shmem-helper: Factor out unpinning part from drm_gem_shmem_purge()

- Addressed the v10 review comments from Thomas Zimmermann: return errno
  instead of bool, sort code alphabetically, rename functions and other
  minor changes.

- Added new patch to remove the "map->is_iomem" from drm-shmem, as
  was suggested by Thomas Zimmermann.

- Added acks and r-b's that were given to v10.

v10:- Was partially applied to misc-fixes/next.

  
https://lore.kernel.org/dri-devel/6c16f303-81df-7ebe-85e9-51bb40a8b...@collabora.com/T/

Dmitry Osipenko (23):
  drm/shmem-helper: Fix UAF in error path when freeing SGT of imported
GEM
  drm/shmem-helper: Use flag for tracking page count bumped by
get_pages_sgt()
  drm/gem: Change 

Re: [V10 1/8] ACPI: Add support for AMD ACPI based Wifi band RFI mitigation feature

2023-08-27 Thread Simon Horman
On Fri, Aug 25, 2023 at 04:38:39PM +0800, Evan Quan wrote:
> Due to electrical and mechanical constraints in certain platform designs
> there may be likely interference of relatively high-powered harmonics of
> the (G-)DDR memory clocks with local radio module frequency bands used
> by Wifi 6/6e/7.
> 
> To mitigate this, AMD has introduced a mechanism that devices can use to
> notify active use of particular frequencies so that other devices can make
> relative internal adjustments as necessary to avoid this resonance.
> 
> Signed-off-by: Evan Quan 

...

> diff --git a/drivers/acpi/amd_wbrf.c b/drivers/acpi/amd_wbrf.c

...

> +/**
> + * acpi_amd_wbrf_add_exclusion - broadcast the frequency band the device
> + *   is using
> + *
> + * @dev: device pointer
> + * @in: input structure containing the frequency band the device is using
> + *
> + * Broadcast to other consumers the frequency band the device starts
> + * to use. Underneath the surface the information is cached into an
> + * internal buffer first. Then a notification is sent to all those
> + * registered consumers. So then they can retrieve that buffer to
> + * know the latest active frequency bands. The benifit with such design

nit: ./checkpatch.pl --codespell suggests benifit -> benefit.

> + * is for those consumers which have not been registered yet, they can
> + * still have a chance to retrieve such information later.
> + */
> +int acpi_amd_wbrf_add_exclusion(struct device *dev,
> + struct wbrf_ranges_in_out *in)
> +{
> + struct acpi_device *adev = ACPI_COMPANION(dev);
> + int ret;
> +
> + if (!adev)
> + return -ENODEV;
> +
> + ret = wbrf_record(adev, WBRF_RECORD_ADD, in);
> + if (ret)
> + return ret;
> +
> + blocking_notifier_call_chain(&wbrf_chain_head,
> +  WBRF_CHANGED,
> +  NULL);
> +
> + return 0;
> +}
> +EXPORT_SYMBOL_GPL(acpi_amd_wbrf_add_exclusion);

...


[syzbot] Monthly dri report (Aug 2023)

2023-08-27 Thread syzbot
Hello dri maintainers/developers,

This is a 31-day syzbot report for the dri subsystem.
All related reports/information can be found at:
https://syzkaller.appspot.com/upstream/s/dri

During the period, 3 new issues were detected and 0 were fixed.
In total, 11 issues are still open and 30 have been fixed so far.

Some of the still happening issues:

Ref Crashes Repro Title
<1> 345 Yes   WARNING in drm_wait_one_vblank
  https://syzkaller.appspot.com/bug?extid=6f7fe2dbc479dca0ed17
<2> 62  Yes   WARNING in vkms_get_vblank_timestamp (2)
  https://syzkaller.appspot.com/bug?extid=93bd128a383695391534
<3> 33  Yes   inconsistent lock state in sync_info_debugfs_show
  https://syzkaller.appspot.com/bug?extid=007bfe0f3330f6e1e7d1
<4> 4   Yes   divide error in drm_mode_vrefresh
  https://syzkaller.appspot.com/bug?extid=622bba18029bcde672e1

---
This report is generated by a bot. It may contain errors.
See https://goo.gl/tpsmEJ for more information about syzbot.
syzbot engineers can be reached at syzkal...@googlegroups.com.

To disable reminders for individual bugs, reply with the following command:
#syz set  no-reminders

To change bug's subsystems, reply with:
#syz set  subsystems: new-subsystem

You may send multiple commands in a single email message.


Re: [PATCH 1/2] dt-bindings: display/lvds-codec: add ti,sn65lvds94

2023-08-27 Thread Krzysztof Kozlowski
On 27/08/2023 14:19, Conor Dooley wrote:
> On Sun, Aug 27, 2023 at 12:54:28AM +0300, Dmitry Baryshkov wrote:
>> Add compatible strings for TI sn65lvds94, LVDS serdes receiver.
>>
>> Signed-off-by: Dmitry Baryshkov 
> 
> Acked-by: Conor Dooley 

For the record, patch looks good, but was not tested by automation.
Missing Cc-list.

Best regards,
Krzysztof



Re: [PATCH 1/2] dt-bindings: display/lvds-codec: add ti,sn65lvds94

2023-08-27 Thread Conor Dooley
On Sun, Aug 27, 2023 at 12:54:28AM +0300, Dmitry Baryshkov wrote:
> Add compatible strings for TI sn65lvds94, LVDS serdes receiver.
> 
> Signed-off-by: Dmitry Baryshkov 

Acked-by: Conor Dooley 

Thanks,
Conor.

> ---
>  Documentation/devicetree/bindings/display/bridge/lvds-codec.yaml | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/Documentation/devicetree/bindings/display/bridge/lvds-codec.yaml 
> b/Documentation/devicetree/bindings/display/bridge/lvds-codec.yaml
> index 84aafcbf0919..6ceeed76e88e 100644
> --- a/Documentation/devicetree/bindings/display/bridge/lvds-codec.yaml
> +++ b/Documentation/devicetree/bindings/display/bridge/lvds-codec.yaml
> @@ -41,6 +41,7 @@ properties:
>- enum:
>- ti,ds90cf364a # For the DS90CF364A FPD-Link LVDS Receiver
>- ti,ds90cf384a # For the DS90CF384A FPD-Link LVDS Receiver
> +  - ti,sn65lvds94 # For the SN65DS94 LVDS serdes
>- const: lvds-decoder # Generic LVDS decoders compatible fallback
>- enum:
>- thine,thc63lvdm83d # For the THC63LVDM83D LVDS serializer
> -- 
> 2.39.2
> 




[PATCH] drm/panel/panel-sitronix-st7701: Move init sequence from prepare() to enable()

2023-08-27 Thread Mimoja
The struct drm_panel_funcs are offering a prepare() and an enable()
entrypoint for panels. According to drm/panel.h:

"The .prepare() function is typically called before the display controller
starts to transmit video data."
and
"After the display controller has started transmitting video data, it's safe
 to call the .enable() function."

The st7701 driver currently does not respect this, queuing DSI control commands
during enable.
While generally fine, this can lead to the transmission queue filling up before
the transmission is set up on certain DSI bridges.
This issue can also be seen on downstream imx8m* kernels.
By moving the init sequence into the enable function we not only circumvent the
issue but also properly soft-reset the panel on enable().

Signed-off-by: Mimoja 

Cc: Marek Vasut 
Cc: Guido Günther 
Cc: Jagan Teki 
Cc: Laurent Pinchart 
Cc: Linus Walleij 
Cc: Sam Ravnborg 
Cc: Thierry Reding 
---
 drivers/gpu/drm/panel/panel-sitronix-st7701.c | 14 +++---
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/gpu/drm/panel/panel-sitronix-st7701.c 
b/drivers/gpu/drm/panel/panel-sitronix-st7701.c
index 7eae83aa0ea1..18c5a8d97cc8 100644
--- a/drivers/gpu/drm/panel/panel-sitronix-st7701.c
+++ b/drivers/gpu/drm/panel/panel-sitronix-st7701.c
@@ -439,6 +439,13 @@ static int st7701_prepare(struct drm_panel *panel)
gpiod_set_value(st7701->reset, 1);
msleep(150);
 
+   return 0;
+}
+
+static int st7701_enable(struct drm_panel *panel)
+{
+   struct st7701 *st7701 = panel_to_st7701(panel);
+
st7701_init_sequence(st7701);
 
if (st7701->desc->gip_sequence)
@@ -447,13 +454,6 @@ static int st7701_prepare(struct drm_panel *panel)
/* Disable Command2 */
st7701_switch_cmd_bkx(st7701, false, 0);
 
-   return 0;
-}
-
-static int st7701_enable(struct drm_panel *panel)
-{
-   struct st7701 *st7701 = panel_to_st7701(panel);
-
ST7701_DSI(st7701, MIPI_DCS_SET_DISPLAY_ON, 0x00);
 
return 0;
-- 
2.39.2



Re: [PATCH v3 1/1] backlight: hid_bl: Add VESA VCP HID backlight driver

2023-08-27 Thread Thomas Weißschuh
On 2023-08-20 11:41:18+0200, Julius Zint wrote:
> [..]

> diff --git a/drivers/video/backlight/Kconfig b/drivers/video/backlight/Kconfig
> index 51387b1ef012..b964a820956d 100644
> --- a/drivers/video/backlight/Kconfig
> +++ b/drivers/video/backlight/Kconfig
> @@ -472,6 +472,14 @@ config BACKLIGHT_LED
> If you have a LCD backlight adjustable by LED class driver, say Y
> to enable this driver.
>  
> +config BACKLIGHT_HID
> + tristate "VESA VCP HID Backlight Driver"
> + depends on HID
> + help
> +   If you have an external display with VESA compliant HID brightness
> +   controls then say Y to enable this backlight driver. Currently the
> +   only supported device is the Apple Studio Display.

Is the last sentence needed?
It will go out of date soon, requiring updates to the Kconfig.

> +
>  endif # BACKLIGHT_CLASS_DEVICE
>  
>  endmenu

> [..]

> diff --git a/drivers/video/backlight/hid_bl.c 
> b/drivers/video/backlight/hid_bl.c
> new file mode 100644
> index ..b40f8f412ee2
> --- /dev/null
> +++ b/drivers/video/backlight/hid_bl.c
> @@ -0,0 +1,269 @@
> +// SPDX-License-Identifier: GPL-2.0
> +#include 
> +#include 
> +#include 
> +#include 
> +
> +#define APPLE_STUDIO_DISPLAY_VENDOR_ID  0x05ac
> +#define APPLE_STUDIO_DISPLAY_PRODUCT_ID 0x1114

Use hid-ids.h.  The vendor ID already has an entry.

> +
> +#define HID_USAGE_MONITOR_CTRL   0x81
> +#define HID_USAGE_VESA_VCP_BRIGHTNESS0x820010

> [..]

> +static int hid_bl_probe(struct hid_device *hdev, const struct hid_device_id 
> *id)
> +{

> [..]

> +
> + memset(&props, 0, sizeof(props));
> + props.type = BACKLIGHT_RAW;

Wouldn't this be more a BACKLIGHT_FIRMWARE?

> + props.max_brightness = data->max_brightness - data->min_brightness;
> +
> + bl = devm_backlight_device_register(&hdev->dev, "vesa_vcp",

It's non-obvious that the "vesa_vcp" backlight comes from the
"hid_backlight" driver. Maybe align the names.

What happens when multiple compatible devices are used?
That seems to be possible with external monitors.

Can existing userspace figure out which display the backlight device
belongs to?
(I don't know either)

> + &hdev->dev, data,
> + &hid_bl_ops,
> + &props);

> [..]


[PATCH] drm: bridge: it66121: Fix invalid connector dereference

2023-08-27 Thread Jai Luthra
Fix the NULL pointer dereference when no monitor is connected and the
sound card is opened from userspace.

Instead, return an error, as EDID information cannot be provided to
the sound framework if there is no connector attached.

Fixes: e0fd83dbe924 ("drm: bridge: it66121: Add audio support")
Reported-by: Nishanth Menon 
Closes: https://lore.kernel.org/all/20230825105849.crhon42qndxqif4i@gondola/
Signed-off-by: Jai Luthra 
---
 drivers/gpu/drm/bridge/ite-it66121.c | 5 +
 1 file changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/bridge/ite-it66121.c 
b/drivers/gpu/drm/bridge/ite-it66121.c
index 466641c77fe9..d6fa00dea464 100644
--- a/drivers/gpu/drm/bridge/ite-it66121.c
+++ b/drivers/gpu/drm/bridge/ite-it66121.c
@@ -1446,6 +1446,11 @@ static int it66121_audio_get_eld(struct device *dev, 
void *data,
 {
struct it66121_ctx *ctx = dev_get_drvdata(dev);
 
+   if (!ctx->connector) {
+   dev_dbg(dev, "No connector present, cannot provide EDID data");
+   return -EINVAL;
+   }
+
mutex_lock(&ctx->lock);
 
memcpy(buf, ctx->connector->eld,

---
base-commit: 6269320850097903b30be8f07a5c61d9f7592393
change-id: 20230825-it66121_edid-6ee98517808b

Best regards,
-- 
Jai Luthra 



[PATCH] spi: tegra: Fix missing IRQ check in tegra_slink_probe()

2023-08-27 Thread Zhang Shurong
This function misses checking the result of the platform_get_irq() call and
may pass a negative error code to request_irq(), which takes an unsigned
IRQ #, causing it to fail with -EINVAL and overriding the original error code.

Fix this by not calling request_irq() with an invalid IRQ #.

Fixes: dc4dc3605639 ("spi: tegra: add spi driver for SLINK controller")
Signed-off-by: Zhang Shurong 
---
 drivers/spi/spi-tegra20-slink.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/drivers/spi/spi-tegra20-slink.c b/drivers/spi/spi-tegra20-slink.c
index 4d6db6182c5e..f5cd365c913a 100644
--- a/drivers/spi/spi-tegra20-slink.c
+++ b/drivers/spi/spi-tegra20-slink.c
@@ -1086,6 +1086,8 @@ static int tegra_slink_probe(struct platform_device *pdev)
reset_control_deassert(tspi->rst);
 
spi_irq = platform_get_irq(pdev, 0);
+   if (spi_irq < 0)
+   return spi_irq;
tspi->irq = spi_irq;
ret = request_threaded_irq(tspi->irq, tegra_slink_isr,
   tegra_slink_isr_thread, IRQF_ONESHOT,
-- 
2.30.2
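The general shape of the pattern applied above, as a condensed sketch
(hypothetical example driver, not the literal tegra probe code):

```
#include <linux/interrupt.h>
#include <linux/platform_device.h>

static irqreturn_t example_isr(int irq, void *data)
{
	return IRQ_HANDLED;
}

static int example_probe(struct platform_device *pdev)
{
	int irq;

	/* platform_get_irq() returns a negative errno on failure */
	irq = platform_get_irq(pdev, 0);
	if (irq < 0)
		return irq;	/* propagate e.g. -EPROBE_DEFER or -ENXIO */

	return devm_request_irq(&pdev->dev, irq, example_isr, 0,
				dev_name(&pdev->dev), NULL);
}
```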



Re: [PATCH] drm/panel/panel-sitronix-st7701: Move init sequence from prepare() to enable()

2023-08-27 Thread Mimoja

I appreciate you taking the time to respond!

On 26.08.23 17:18, Marek Vasut wrote:

On 8/26/23 11:55, Mimoja wrote:
"The .prepare() function is typically called before the display 
controller

starts to transmit video data."
and
"After the display controller has started transmitting video data, 
it's safe

  to call the .enable() function."


DSI commands are not DSI video, so this should be OK ?


You are correct, my commit message is mixing things up here. I wanted to
roughly emphasize the thought of "when enable() is called, the DSI core is
expected to have its clock initialized". I will take note to clarify this
if I succeed in making a case for this patch below :)

While generally fine, this can lead to a fill-up of the transmission queue
before the transmission is set up on certain DSI bridges.
This issue can also be seen on downstream imx8m* kernels.


Can you reproduce this with current mainline Linux or linux-next tree ?
I recall the display pipeline in the NXP downstream stuff is very different from mainline.


You are very much correct. The NXP downstream kernel is completely different
from the upstream one and is really a great example to show the issue
(code cleaned up for readability):


https://github.com/varigit/linux-imx/blob/5.15-2.0.x-imx_var01/drivers/gpu/drm/bridge/sec-dsim.c#L1368
```
    ret = drm_panel_prepare(dsim->panel);
    if (unlikely(ret)) [...]

    /* config esc clock, byte clock and etc */
    sec_mipi_dsim_config_clkctrl(dsim);

    ret = drm_panel_enable(dsim->panel);
    if (unlikely(ret)) [...]

```
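For illustration, a minimal sketch of what moving the init sequence from
.prepare() to .enable() looks like in a panel driver, assuming a generic DSI
panel using the standard DCS helpers (not the actual st7701 code):

```
#include <drm/drm_mipi_dsi.h>
#include <drm/drm_panel.h>
#include <linux/delay.h>
#include <linux/regulator/consumer.h>

struct my_panel {
	struct drm_panel panel;
	struct mipi_dsi_device *dsi;
	struct regulator_bulk_data supplies[2];
};

static inline struct my_panel *to_my_panel(struct drm_panel *panel)
{
	return container_of(panel, struct my_panel, panel);
}

static int my_panel_prepare(struct drm_panel *panel)
{
	struct my_panel *ctx = to_my_panel(panel);

	/* only power rails (and reset handling) before the host is running */
	return regulator_bulk_enable(ARRAY_SIZE(ctx->supplies), ctx->supplies);
}

static int my_panel_enable(struct drm_panel *panel)
{
	struct my_panel *ctx = to_my_panel(panel);
	int ret;

	/* init sequence deferred here, after the DSI host set up its clocks */
	ret = mipi_dsi_dcs_exit_sleep_mode(ctx->dsi);
	if (ret)
		return ret;
	msleep(120);

	return mipi_dsi_dcs_set_display_on(ctx->dsi);
}

static const struct drm_panel_funcs my_panel_funcs = {
	.prepare = my_panel_prepare,
	.enable  = my_panel_enable,
};
```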


Which SoC does have this problem ?
Sadly I don't have any SoCs available which would work perfectly with linux-next, let alone ones confirmed to be affected :/


I was able to make my Kingway panel work (a custom one, so far unsupported
by the st7701 driver) with this patch on downstream 5.4 and 5.15 imx8mn as
well as on a Raspberry Pi CM4 with 6.1. However, raspberrypi/linux brings
SPI support to the st7701 driver, which should not affect this, but I would
just like to document it here.
I could not find any success story with st7701 and the rpi on 6.1 online
after a short search (and only one reference with 5.10, which seems to me a
bit different in a short comparison), but again I can only offer
circumstantial evidence. Sorry :/

Thank you again
~Mimoja



Patch "drm/i915: Fix HPD polling, reenabling the output poll work as needed" has been added to the 6.4-stable tree

2023-08-27 Thread gregkh


This is a note to let you know that I've just added the patch titled

drm/i915: Fix HPD polling, reenabling the output poll work as needed

to the 6.4-stable tree which can be found at:

http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
 drm-i915-fix-hpd-polling-reenabling-the-output-poll-work-as-needed.patch
and it can be found in the queue-6.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let  know about it.


>From 1dcc437427bbcebc8381226352f7ade08a271191 Mon Sep 17 00:00:00 2001
From: Imre Deak 
Date: Tue, 22 Aug 2023 14:30:15 +0300
Subject: drm/i915: Fix HPD polling, reenabling the output poll work as needed
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Imre Deak 

commit 1dcc437427bbcebc8381226352f7ade08a271191 upstream.

After the commit in the Fixes: line below, HPD polling stopped working
on i915, since after that change calling drm_kms_helper_poll_enable()
doesn't restart drm_mode_config::output_poll_work if the work was
stopped (no connectors needing polling) and enabling polling for a
connector (during runtime suspend or detecting an HPD IRQ storm).

After the above change calling drm_kms_helper_poll_enable() is a nop
after it's been called already and polling for some connectors was
disabled/re-enabled.

Fix this by calling drm_kms_helper_poll_reschedule() added in the
previous patch instead, which reschedules the work whenever expected.

Fixes: d33a54e3991d ("drm/probe_helper: sort out poll_running vs poll_enabled")
CC: sta...@vger.kernel.org # 6.4+
Cc: Dmitry Baryshkov 
Cc: dri-devel@lists.freedesktop.org
Reviewed-by: Jouni Högander 
Signed-off-by: Imre Deak 
Link: https://patchwork.freedesktop.org/patch/msgid/20230822113015.41224-2-imre.d...@intel.com
(cherry picked from commit 50452f2f76852322620b63e62922b85e955abe94)
Signed-off-by: Rodrigo Vivi 
Signed-off-by: Greg Kroah-Hartman 
---
 drivers/gpu/drm/i915/display/intel_hotplug.c |4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

--- a/drivers/gpu/drm/i915/display/intel_hotplug.c
+++ b/drivers/gpu/drm/i915/display/intel_hotplug.c
@@ -210,7 +210,7 @@ intel_hpd_irq_storm_switch_to_polling(st
 
/* Enable polling and queue hotplug re-enabling. */
if (hpd_disabled) {
-   drm_kms_helper_poll_enable(&dev_priv->drm);
+   drm_kms_helper_poll_reschedule(&dev_priv->drm);
mod_delayed_work(system_wq, 
&dev_priv->display.hotplug.reenable_work,
 msecs_to_jiffies(HPD_STORM_REENABLE_DELAY));
}
@@ -644,7 +644,7 @@ static void i915_hpd_poll_init_work(stru
drm_connector_list_iter_end(&conn_iter);
 
if (enabled)
-   drm_kms_helper_poll_enable(&dev_priv->drm);
+   drm_kms_helper_poll_reschedule(&dev_priv->drm);
 
mutex_unlock(&dev_priv->drm.mode_config.mutex);
 


Patches currently in stable-queue which might be from imre.d...@intel.com are

queue-6.4/drm-i915-fix-hpd-polling-reenabling-the-output-poll-work-as-needed.patch
queue-6.4/drm-add-an-hpd-poll-helper-to-reschedule-the-poll-work.patch


Patch "drm: Add an HPD poll helper to reschedule the poll work" has been added to the 6.4-stable tree

2023-08-27 Thread gregkh


This is a note to let you know that I've just added the patch titled

drm: Add an HPD poll helper to reschedule the poll work

to the 6.4-stable tree which can be found at:

http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
 drm-add-an-hpd-poll-helper-to-reschedule-the-poll-work.patch
and it can be found in the queue-6.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let  know about it.


>From a94e7ccfc400c024976f3c2f31689ed843498b7c Mon Sep 17 00:00:00 2001
From: Imre Deak 
Date: Tue, 22 Aug 2023 14:30:14 +0300
Subject: drm: Add an HPD poll helper to reschedule the poll work
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Imre Deak 

commit a94e7ccfc400c024976f3c2f31689ed843498b7c upstream.

Add a helper to reschedule drm_mode_config::output_poll_work after
polling has been enabled for a connector (and needing a reschedule,
since previously polling was disabled for all connectors and hence
output_poll_work was not running).

This is needed by the next patch fixing HPD polling on i915.

CC: sta...@vger.kernel.org # 6.4+
Cc: Dmitry Baryshkov 
Cc: dri-devel@lists.freedesktop.org
Reviewed-by: Jouni Högander 
Reviewed-by: Dmitry Baryshkov 
Signed-off-by: Imre Deak 
Link: https://patchwork.freedesktop.org/patch/msgid/20230822113015.41224-1-imre.d...@intel.com
(cherry picked from commit fe2352fd64029918174de4b460dfe6df0c6911cd)
Signed-off-by: Rodrigo Vivi 
Signed-off-by: Greg Kroah-Hartman 
---
 drivers/gpu/drm/drm_probe_helper.c | 68 --
 include/drm/drm_probe_helper.h |  1 +
 2 files changed, 47 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/drm_probe_helper.c b/drivers/gpu/drm/drm_probe_helper.c
index 2fb9bf901a2c..3f479483d7d8 100644
--- a/drivers/gpu/drm/drm_probe_helper.c
+++ b/drivers/gpu/drm/drm_probe_helper.c
@@ -262,6 +262,26 @@ static bool drm_kms_helper_enable_hpd(struct drm_device *dev)
 }
 
 #define DRM_OUTPUT_POLL_PERIOD (10*HZ)
+static void reschedule_output_poll_work(struct drm_device *dev)
+{
+   unsigned long delay = DRM_OUTPUT_POLL_PERIOD;
+
+   if (dev->mode_config.delayed_event)
+   /*
+* FIXME:
+*
+* Use short (1s) delay to handle the initial delayed event.
+* This delay should not be needed, but Optimus/nouveau will
+* fail in a mysterious way if the delayed event is handled as
+* soon as possible like it is done in
+* drm_helper_probe_single_connector_modes() in case the poll
+* was enabled before.
+*/
+   delay = HZ;
+
+   schedule_delayed_work(&dev->mode_config.output_poll_work, delay);
+}
+
 /**
  * drm_kms_helper_poll_enable - re-enable output polling.
  * @dev: drm_device
@@ -279,37 +299,41 @@ static bool drm_kms_helper_enable_hpd(struct drm_device *dev)
  */
 void drm_kms_helper_poll_enable(struct drm_device *dev)
 {
-   bool poll = false;
-   unsigned long delay = DRM_OUTPUT_POLL_PERIOD;
-
if (!dev->mode_config.poll_enabled || !drm_kms_helper_poll ||
dev->mode_config.poll_running)
return;
 
-   poll = drm_kms_helper_enable_hpd(dev);
-
-   if (dev->mode_config.delayed_event) {
-   /*
-* FIXME:
-*
-* Use short (1s) delay to handle the initial delayed event.
-* This delay should not be needed, but Optimus/nouveau will
-* fail in a mysterious way if the delayed event is handled as
-* soon as possible like it is done in
-* drm_helper_probe_single_connector_modes() in case the poll
-* was enabled before.
-*/
-   poll = true;
-   delay = HZ;
-   }
-
-   if (poll)
-   schedule_delayed_work(&dev->mode_config.output_poll_work, delay);
+   if (drm_kms_helper_enable_hpd(dev) ||
+   dev->mode_config.delayed_event)
+   reschedule_output_poll_work(dev);
 
dev->mode_config.poll_running = true;
 }
 EXPORT_SYMBOL(drm_kms_helper_poll_enable);
 
+/**
+ * drm_kms_helper_poll_reschedule - reschedule the output polling work
+ * @dev: drm_device
+ *
+ * This function reschedules the output polling work, after polling for a
+ * connector has been enabled.
+ *
+ * Drivers must call this helper after enabling polling for a connector by
+ * setting %DRM_CONNECTOR_POLL_CONNECT / %DRM_CONNECTOR_POLL_DISCONNECT flags
+ * in drm_connector::polled. Note that after disabling polling by clearing these
+ * flags for a connector will stop the output polling work automatically if
+ * the polling is disabled for all other connectors as well.
+ *
+ * The function can be called only after polling has been enabled by calling
+ * drm_kms_helper_poll_init() 
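A minimal usage sketch based on the kerneldoc above (hypothetical driver code,
not part of the queued patch): re-enable polling for a connector, then make
sure the poll work is actually rescheduled.

```
#include <drm/drm_connector.h>
#include <drm/drm_device.h>
#include <drm/drm_probe_helper.h>

/* Hypothetical driver snippet: mark the connector as needing polling again,
 * then reschedule the output poll work as the new helper requires. */
static void example_switch_to_polling(struct drm_device *dev,
				      struct drm_connector *connector)
{
	connector->polled = DRM_CONNECTOR_POLL_CONNECT |
			    DRM_CONNECTOR_POLL_DISCONNECT;

	drm_kms_helper_poll_reschedule(dev);
}
```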

[Bug 217664] Laptop doesnt wake up from suspend mode.

2023-08-27 Thread bugzilla-daemon
https://bugzilla.kernel.org/show_bug.cgi?id=217664

--- Comment #38 from popus_czy_to_ty (pentelja...@o2.pl) ---
Created attachment 304948
  --> https://bugzilla.kernel.org/attachment.cgi?id=304948&action=edit
test nr 2

-- 
You may reply to this email to add a comment.

You are receiving this mail because:
You are watching the assignee of the bug.