Re: [PATCH] drm/lima: Use delayed timer as default in devfreq profile

2021-02-07 Thread Qiang Yu
Applied to drm-misc-next.

Regards,
Qiang

On Thu, Feb 4, 2021 at 10:24 PM Lukasz Luba  wrote:
>
>
>
> On 2/4/21 1:39 PM, Robin Murphy wrote:
> > On 2021-02-03 02:01, Qiang Yu wrote:
> >> On Tue, Feb 2, 2021 at 10:02 PM Lukasz Luba  wrote:
> >>>
> >>>
> >>>
> >>> On 2/2/21 1:01 AM, Qiang Yu wrote:
> >>>> Hi Lukasz,
> >>>>
> >>>> Thanks for the explanation. So the deferred timer option makes a
> >>>> mistake when the GPU goes from idle to busy for only one poll
> >>>> period, in this case 50ms, right?
> >>>
> >>> Not exactly. Driver sets the polling interval to 50ms (in this case)
> >>> because it needs a ~3-frame average load (at 60fps). I have discovered
> >>> the issue quite recently that on systems with 2 CPUs or more, the
> >>> devfreq core is not monitoring the devices even for seconds. Therefore,
> >>> we might end up with quite a big amount of work that the GPU is doing,
> >>> but we don't know about it. The devfreq core didn't check <- the timer
> >>> didn't fire. Then suddenly the CPU which had the deferred timer
> >>> registered last time wakes up, and the timer triggers a check of our
> >>> device. We get the stats, but they might be showing load from 1sec, not
> >>> 50ms. We feed them into the governor. The governor sees the new load,
> >>> but was tested and configured for 50ms, so it might try to raise the
> >>> frequency to max. The GPU work might already be lower and there is no
> >>> need for such a freq. Then the CPU goes idle again, so no devfreq core
> >>> check for the next e.g. 1sec, but the frequency stays at max OPP and we
> >>> burn power.
> >>>
> >>> So, it's completely unreliable. We might get stuck at the min frequency
> >>> and suffer frame drops, or sometimes get stuck at the max freq and burn
> >>> more power when there is no such need.
> >>>
> >>> The same applies to the thermal governor, which is confused by these
> >>> stale, long-period stats covering more than 50ms.
> >>>
> >>> Stats from the last e.g. ~1sec tell you nothing about the real recent
> >>> GPU workload.
> >> Oh, right, I missed this case.
> >>
> >>>
> >>>> But a delayed timer will wake up the CPU every 50ms even when the
> >>>> system is idle; will this cause more power consumption for a case
> >>>> like phone suspend?
> >>>
> >>> No, in case of phone suspend it won't increase the power consumption.
> >>> The device won't be woken up, it will stay in suspend.
> >> I mean the CPU is woken up frequently by the timer when the phone is
> >> suspended, not the whole device (like the display).
> >>
> >> It seems better to have the deferred timer when the device is suspended
> >> for power saving, and the delayed timer when the device is in a working
> >> state. The user knows this and can use sysfs to change it.
> >
> > Doesn't devfreq_suspend_device() already cancel any timer work either
> > way in that case?
>
> Correct, the governor should pause the monitoring mechanism (and timer).
>
> Regards,
> Lukasz


Re: [PATCH v2] drm/lima: add governor data with pre-defined thresholds

2021-02-07 Thread Qiang Yu
Applied to drm-misc-next.

Regards,
Qiang

On Tue, Feb 2, 2021 at 9:04 AM Qiang Yu  wrote:
>
> OK, I see. Patch is also:
> Reviewed-by: Qiang Yu 
>
> Regards,
> Qiang
>
> On Mon, Feb 1, 2021 at 5:59 PM Lukasz Luba  wrote:
> >
> >
> >
> > On 1/30/21 1:57 PM, Qiang Yu wrote:
> > > This patch gets a minor improvement on glmark2 (160->162).
> >
> > It has a bigger impact when the load is changing and the frequency
> > is stuck at min w/o this patch.
> >
> > >
> > > Seems there's no way for the user to change this value, is there?
> > > Or is there work pending to expose it to sysfs?
> >
> > True, there is no user sysfs. I've proposed a patch to export these via
> > sysfs. Chanwoo is going to work on it; it will probably take a few
> > months to land mainline. So for now, the fix makes sense.
> >
> > Regards,
> > Lukasz
> >


Re: [PATCH] drm/lima: Use delayed timer as default in devfreq profile

2021-02-02 Thread Qiang Yu
On Tue, Feb 2, 2021 at 10:02 PM Lukasz Luba  wrote:
>
>
>
> On 2/2/21 1:01 AM, Qiang Yu wrote:
> > Hi Lukasz,
> >
> > Thanks for the explanation. So the deferred timer option makes a
> > mistake when the GPU goes from idle to busy for only one poll
> > period, in this case 50ms, right?
>
> Not exactly. Driver sets the polling interval to 50ms (in this case)
> because it needs a ~3-frame average load (at 60fps). I have discovered
> the issue quite recently that on systems with 2 CPUs or more, the
> devfreq core is not monitoring the devices even for seconds. Therefore,
> we might end up with quite a big amount of work that the GPU is doing,
> but we don't know about it. The devfreq core didn't check <- the timer
> didn't fire. Then suddenly the CPU which had the deferred timer
> registered last time wakes up, and the timer triggers a check of our
> device. We get the stats, but they might be showing load from 1sec, not
> 50ms. We feed them into the governor. The governor sees the new load,
> but was tested and configured for 50ms, so it might try to raise the
> frequency to max. The GPU work might already be lower and there is no
> need for such a freq. Then the CPU goes idle again, so no devfreq core
> check for the next e.g. 1sec, but the frequency stays at max OPP and we
> burn power.
>
> So, it's completely unreliable. We might get stuck at the min frequency
> and suffer frame drops, or sometimes get stuck at the max freq and burn
> more power when there is no such need.
>
> The same applies to the thermal governor, which is confused by these
> stale, long-period stats covering more than 50ms.
>
> Stats from the last e.g. ~1sec tell you nothing about the real recent
> GPU workload.
Oh, right, I missed this case.

>
> > But a delayed timer will wake up the CPU every 50ms even when the
> > system is idle; will this cause more power consumption for a case like
> > phone suspend?
>
> No, in case of phone suspend it won't increase the power consumption.
> The device won't be woken up, it will stay in suspend.
I mean the CPU is woken up frequently by the timer when the phone is
suspended, not the whole device (like the display).

It seems better to have the deferred timer when the device is suspended
for power saving, and the delayed timer when the device is in a working
state. The user knows this and can use sysfs to change it.

Setting the delayed timer as default is reasonable, so the patch is:
Reviewed-by: Qiang Yu 

Regards,
Qiang

>
> Regards,
> Lukasz
>
>
> >
> > Regards,
> > Qiang
> >
> >
> > On Mon, Feb 1, 2021 at 5:53 PM Lukasz Luba  wrote:
> >>
> >> Hi Qiang,
> >>
> >> On 1/30/21 1:51 PM, Qiang Yu wrote:
> >>> Thanks for the patch. But I can't observe any difference on glmark2
> >>> with or without this patch.
> >>> Maybe you can provide another test which can benefit from it.
> >>
> >> This is a design problem and has an impact on the whole system.
> >> There are a few issues. When the device is not checked and there are
> >> long delays between the last check and the current one, the history is
> >> broken. It confuses the devfreq governor and the thermal governor
> >> (Intelligent Power Allocation (IPA)). The thermal governor works on
> >> stale stats and makes poor decisions, because there are no new stats
> >> (device not checked). Similar applies to the devfreq simple_ondemand
> >> governor, which 'tries' to work on a long period, even 3sec, and makes
> >> a prediction for the next frequency based on it (which is broken).
> >>
> >> How it should be done: a constant, reliable check is needed, then:
> >> - the period is guaranteed and has a fixed size, e.g. 50ms or 100ms
> >> - the device status is quite recent, so thermal devfreq cooling
> >>   provides 'fresh' data to the thermal governor
> >>
> >> This would prevent the odd behavior and solve the broken cases.
> >>
> >>>
> >>> Considering it will wake up the CPU more frequently, and the user may
> >>> choose to change this via sysfs, I'd prefer not to apply it.
> >>
> >> The deferred timer is the wrong option for a GPU; for UFS or eMMC it
> >> makes more sense. It's also not recommended for NoC busses. I
> >> discovered that some time ago and proposed an option to switch into
> >> the delayed timer. Trust me, it wasn't obvious to find out that this
> >> missing check has those impacts. So other engineers or users might not
> >> know that some problems they face (especially when the device load is
> >> changing) are due to this delayed vs deferred timer, and they will
> >> change it in sysfs.
> >>
> >> Regards,
> >> Lukasz
> >>
> >>>
> >>> Regards,
> >>> Qiang
> >>>


Re: [PATCH v2] drm/lima: add governor data with pre-defined thresholds

2021-02-01 Thread Qiang Yu
OK, I see. Patch is also:
Reviewed-by: Qiang Yu 

Regards,
Qiang

On Mon, Feb 1, 2021 at 5:59 PM Lukasz Luba  wrote:
>
>
>
> On 1/30/21 1:57 PM, Qiang Yu wrote:
> > This patch gets a minor improvement on glmark2 (160->162).
>
> It has a bigger impact when the load is changing and the frequency
> is stuck at min w/o this patch.
>
> >
> > Seems there's no way for the user to change this value, is there?
> > Or is there work pending to expose it to sysfs?
>
> True, there is no user sysfs. I've proposed a patch to export these via
> sysfs. Chanwoo is going to work on it; it will probably take a few
> months to land mainline. So for now, the fix makes sense.
>
> Regards,
> Lukasz
>


Re: [PATCH] drm/lima: Use delayed timer as default in devfreq profile

2021-02-01 Thread Qiang Yu
Hi Lukasz,

Thanks for the explanation. So the deferred timer option makes a mistake
when the GPU goes from idle to busy for only one poll period, in this
case 50ms, right?
But a delayed timer will wake up the CPU every 50ms even when the system
is idle; will this cause more power consumption for a case like phone suspend?

Regards,
Qiang


On Mon, Feb 1, 2021 at 5:53 PM Lukasz Luba  wrote:
>
> Hi Qiang,
>
> On 1/30/21 1:51 PM, Qiang Yu wrote:
> > Thanks for the patch. But I can't observe any difference on glmark2
> > with or without this patch.
> > Maybe you can provide another test which can benefit from it.
>
> This is a design problem and has an impact on the whole system.
> There are a few issues. When the device is not checked and there are
> long delays between the last check and the current one, the history is
> broken. It confuses the devfreq governor and the thermal governor
> (Intelligent Power Allocation (IPA)). The thermal governor works on
> stale stats and makes poor decisions, because there are no new stats
> (device not checked). Similar applies to the devfreq simple_ondemand
> governor, which 'tries' to work on a long period, even 3sec, and makes
> a prediction for the next frequency based on it (which is broken).
>
> How it should be done: a constant, reliable check is needed, then:
> - the period is guaranteed and has a fixed size, e.g. 50ms or 100ms
> - the device status is quite recent, so thermal devfreq cooling
>   provides 'fresh' data to the thermal governor
>
> This would prevent the odd behavior and solve the broken cases.
>
> >
> > Considering it will wake up the CPU more frequently, and the user may
> > choose to change this via sysfs, I'd prefer not to apply it.
>
> The deferred timer is the wrong option for a GPU; for UFS or eMMC it
> makes more sense. It's also not recommended for NoC busses. I
> discovered that some time ago and proposed an option to switch into
> the delayed timer. Trust me, it wasn't obvious to find out that this
> missing check has those impacts. So other engineers or users might not
> know that some problems they face (especially when the device load is
> changing) are due to this delayed vs deferred timer, and they will
> change it in sysfs.
>
> Regards,
> Lukasz
>
> >
> > Regards,
> > Qiang
> >


Re: [PATCH v2] drm/lima: add governor data with pre-defined thresholds

2021-01-30 Thread Qiang Yu
This patch gets a minor improvement on glmark2 (160->162).

Seems there's no way for the user to change this value, is there?
Or is there work pending to expose it to sysfs?

Regards,
Qiang

On Thu, Jan 28, 2021 at 3:40 AM Christian Hewitt
 wrote:
>
> This patch adapts the panfrost pre-defined thresholds change [0] to the
> lima driver to improve real-world performance. The upthreshold value has
> been set to ramp GPU frequency to max freq faster (compared to panfrost)
> to compensate for the lower overall performance of utgard devices.
>
> [0] 
> https://patchwork.kernel.org/project/dri-devel/patch/20210121170445.19761-1-lukasz.l...@arm.com/
>
> Signed-off-by: Christian Hewitt 
> ---
> Change since v1: increased upthreshold from 20 to 30, with a soft
> dependency on Lukasz delayed timer patch [0]
>
> [0] https://lore.kernel.org/lkml/20210127105121.20345-1-lukasz.l...@arm.com/
>
>  drivers/gpu/drm/lima/lima_devfreq.c | 10 +-
>  drivers/gpu/drm/lima/lima_devfreq.h |  2 ++
>  2 files changed, 11 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/lima/lima_devfreq.c 
> b/drivers/gpu/drm/lima/lima_devfreq.c
> index 5686ad4aaf7c..c9854315a0b5 100644
> --- a/drivers/gpu/drm/lima/lima_devfreq.c
> +++ b/drivers/gpu/drm/lima/lima_devfreq.c
> @@ -163,8 +163,16 @@ int lima_devfreq_init(struct lima_device *ldev)
> lima_devfreq_profile.initial_freq = cur_freq;
> dev_pm_opp_put(opp);
>
> +   /*
> +* Setup default thresholds for the simple_ondemand governor.
> +* The values are chosen based on experiments.
> +*/
> +   ldevfreq->gov_data.upthreshold = 30;
> +   ldevfreq->gov_data.downdifferential = 5;
> +
> devfreq = devm_devfreq_add_device(dev, &lima_devfreq_profile,
> - DEVFREQ_GOV_SIMPLE_ONDEMAND, NULL);
> + DEVFREQ_GOV_SIMPLE_ONDEMAND,
> + &ldevfreq->gov_data);
> if (IS_ERR(devfreq)) {
> dev_err(dev, "Couldn't initialize GPU devfreq\n");
> ret = PTR_ERR(devfreq);
> diff --git a/drivers/gpu/drm/lima/lima_devfreq.h 
> b/drivers/gpu/drm/lima/lima_devfreq.h
> index 2d9b3008ce77..b0c7c736e81a 100644
> --- a/drivers/gpu/drm/lima/lima_devfreq.h
> +++ b/drivers/gpu/drm/lima/lima_devfreq.h
> @@ -4,6 +4,7 @@
>  #ifndef __LIMA_DEVFREQ_H__
>  #define __LIMA_DEVFREQ_H__
>
> +#include <linux/devfreq.h>
>  #include 
>  #include 
>
> @@ -18,6 +19,7 @@ struct lima_devfreq {
> struct opp_table *clkname_opp_table;
> struct opp_table *regulators_opp_table;
> struct thermal_cooling_device *cooling;
> +   struct devfreq_simple_ondemand_data gov_data;
>
> ktime_t busy_time;
> ktime_t idle_time;
> --
> 2.17.1
>


Re: [PATCH] drm/lima: Use delayed timer as default in devfreq profile

2021-01-30 Thread Qiang Yu
Thanks for the patch. But I can't observe any difference on glmark2
with or without this patch.
Maybe you can provide another test which can benefit from it.

Considering it will wake up the CPU more frequently, and the user may
choose to change this via sysfs, I'd prefer not to apply it.

Regards,
Qiang

On Wed, Jan 27, 2021 at 6:51 PM Lukasz Luba  wrote:
>
> Devfreq framework supports 2 modes for monitoring devices.
> Use delayed timer as default instead of deferrable timer
> in order to monitor the GPU status regardless of CPU idle.
>
> Signed-off-by: Lukasz Luba 
> ---
> Hi all,
>
> I've missed the Lima driver while working on Panfrost patch for fixing
> the issue with default devfreq framework polling mode. More about this
> and the patch, can be found here [1].
>
> Regards,
> Lukasz Luba
>
> [1] https://lore.kernel.org/lkml/20210105164111.30122-1-lukasz.l...@arm.com/
>
>  drivers/gpu/drm/lima/lima_devfreq.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/drivers/gpu/drm/lima/lima_devfreq.c 
> b/drivers/gpu/drm/lima/lima_devfreq.c
> index 5686ad4aaf7c..f1c9eb3e71bd 100644
> --- a/drivers/gpu/drm/lima/lima_devfreq.c
> +++ b/drivers/gpu/drm/lima/lima_devfreq.c
> @@ -81,6 +81,7 @@ static int lima_devfreq_get_dev_status(struct device *dev,
>  }
>
>  static struct devfreq_dev_profile lima_devfreq_profile = {
> +   .timer = DEVFREQ_TIMER_DELAYED,
> .polling_ms = 50, /* ~3 frames */
> .target = lima_devfreq_target,
> .get_dev_status = lima_devfreq_get_dev_status,
> --
> 2.17.1
>


Re: [PATCH] drm/lima: fix reference leak in lima_pm_busy

2021-01-30 Thread Qiang Yu
Thanks, applied to drm-misc-next.

Regards,
Qiang

On Fri, Nov 27, 2020 at 5:42 PM Qinglang Miao  wrote:
>
> pm_runtime_get_sync will increment the pm usage counter even when it
> fails. Forgetting the put operation will result in a reference leak
> here.
>
> A new function, pm_runtime_resume_and_get, was introduced in [0] to keep
> the usage counter balanced. So we fix the reference leak by replacing
> pm_runtime_get_sync with the new function.
>
> [0] dd8088d5a896 ("PM: runtime: Add pm_runtime_resume_and_get to deal
> with usage counter")
>
> Fixes: 50de2e9ebbc0 ("drm/lima: enable runtime pm")
> Reported-by: Hulk Robot 
> Signed-off-by: Qinglang Miao 
> ---
>  drivers/gpu/drm/lima/lima_sched.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/lima/lima_sched.c 
> b/drivers/gpu/drm/lima/lima_sched.c
> index dc6df9e9a..f6e7a88a5 100644
> --- a/drivers/gpu/drm/lima/lima_sched.c
> +++ b/drivers/gpu/drm/lima/lima_sched.c
> @@ -200,7 +200,7 @@ static int lima_pm_busy(struct lima_device *ldev)
> int ret;
>
> /* resume GPU if it has been suspended by runtime PM */
> -   ret = pm_runtime_get_sync(ldev->dev);
> +   ret = pm_runtime_resume_and_get(ldev->dev);
> if (ret < 0)
> return ret;
>
> --
> 2.23.0
>


Re: [PATCH 5/7] drm/lima: dev_pm_opp_put_*() accepts NULL argument

2020-11-15 Thread Qiang Yu
Looks good to me, patch is:
Reviewed-by: Qiang Yu 

Regards,
Qiang

On Fri, Nov 6, 2020 at 3:05 PM Viresh Kumar  wrote:
>
> The dev_pm_opp_put_*() APIs now accept a NULL opp_table pointer, so
> there is no need for us to carry the extra checks. Drop them.
>
> Signed-off-by: Viresh Kumar 
> ---
>  drivers/gpu/drm/lima/lima_devfreq.c | 13 -
>  1 file changed, 4 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/gpu/drm/lima/lima_devfreq.c 
> b/drivers/gpu/drm/lima/lima_devfreq.c
> index bbe02817721b..e7b7b8dfd792 100644
> --- a/drivers/gpu/drm/lima/lima_devfreq.c
> +++ b/drivers/gpu/drm/lima/lima_devfreq.c
> @@ -110,15 +110,10 @@ void lima_devfreq_fini(struct lima_device *ldev)
> devfreq->opp_of_table_added = false;
> }
>
> -   if (devfreq->regulators_opp_table) {
> -   dev_pm_opp_put_regulators(devfreq->regulators_opp_table);
> -   devfreq->regulators_opp_table = NULL;
> -   }
> -
> -   if (devfreq->clkname_opp_table) {
> -   dev_pm_opp_put_clkname(devfreq->clkname_opp_table);
> -   devfreq->clkname_opp_table = NULL;
> -   }
> +   dev_pm_opp_put_regulators(devfreq->regulators_opp_table);
> +   dev_pm_opp_put_clkname(devfreq->clkname_opp_table);
> +   devfreq->regulators_opp_table = NULL;
> +   devfreq->clkname_opp_table = NULL;
>  }
>
>  int lima_devfreq_init(struct lima_device *ldev)
> --
> 2.25.0.rc1.19.g042ed3e048af
>


Re: [PATCH V2 Resend 1/2] drm/lima: Unconditionally call dev_pm_opp_of_remove_table()

2020-11-15 Thread Qiang Yu
Applied to drm-misc-next.

On Wed, Oct 28, 2020 at 2:44 PM Viresh Kumar  wrote:
>
> dev_pm_opp_of_remove_table() doesn't report any errors when it fails to
> find the OPP table with error -ENODEV (i.e. OPP table not present for
> the device). And we can call dev_pm_opp_of_remove_table()
> unconditionally here.
>
> Reviewed-by: Qiang Yu 
> Signed-off-by: Viresh Kumar 
>
> ---
> V2: Applied Reviewed by tag.
> ---
>  drivers/gpu/drm/lima/lima_devfreq.c | 6 +-
>  drivers/gpu/drm/lima/lima_devfreq.h | 1 -
>  2 files changed, 1 insertion(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/lima/lima_devfreq.c 
> b/drivers/gpu/drm/lima/lima_devfreq.c
> index bbe02817721b..cd290d866a04 100644
> --- a/drivers/gpu/drm/lima/lima_devfreq.c
> +++ b/drivers/gpu/drm/lima/lima_devfreq.c
> @@ -105,10 +105,7 @@ void lima_devfreq_fini(struct lima_device *ldev)
> devfreq->devfreq = NULL;
> }
>
> -   if (devfreq->opp_of_table_added) {
> -   dev_pm_opp_of_remove_table(ldev->dev);
> -   devfreq->opp_of_table_added = false;
> -   }
> +   dev_pm_opp_of_remove_table(ldev->dev);
>
> if (devfreq->regulators_opp_table) {
> dev_pm_opp_put_regulators(devfreq->regulators_opp_table);
> @@ -162,7 +159,6 @@ int lima_devfreq_init(struct lima_device *ldev)
> ret = dev_pm_opp_of_add_table(dev);
> if (ret)
> goto err_fini;
> -   ldevfreq->opp_of_table_added = true;
>
> lima_devfreq_reset(ldevfreq);
>
> diff --git a/drivers/gpu/drm/lima/lima_devfreq.h 
> b/drivers/gpu/drm/lima/lima_devfreq.h
> index 5eed2975a375..2d9b3008ce77 100644
> --- a/drivers/gpu/drm/lima/lima_devfreq.h
> +++ b/drivers/gpu/drm/lima/lima_devfreq.h
> @@ -18,7 +18,6 @@ struct lima_devfreq {
> struct opp_table *clkname_opp_table;
> struct opp_table *regulators_opp_table;
> struct thermal_cooling_device *cooling;
> -   bool opp_of_table_added;
>
> ktime_t busy_time;
> ktime_t idle_time;
> --
> 2.25.0.rc1.19.g042ed3e048af
>


Re: [PATCH 15/40] drm/lima/lima_drv: Demote kernel-doc formatting abuse

2020-11-15 Thread Qiang Yu
Applied to drm-misc-next.

On Fri, Nov 13, 2020 at 9:50 PM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  drivers/gpu/drm/lima/lima_drv.c:264: warning: cannot understand function 
> prototype: 'const struct drm_driver lima_drm_driver = '
>
> Cc: Qiang Yu 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: dri-de...@lists.freedesktop.org
> Cc: l...@lists.freedesktop.org
> Signed-off-by: Lee Jones 
> ---
>  drivers/gpu/drm/lima/lima_drv.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/lima/lima_drv.c b/drivers/gpu/drm/lima/lima_drv.c
> index d497af91d8505..7b8d7178d09aa 100644
> --- a/drivers/gpu/drm/lima/lima_drv.c
> +++ b/drivers/gpu/drm/lima/lima_drv.c
> @@ -255,7 +255,7 @@ static const struct drm_ioctl_desc 
> lima_drm_driver_ioctls[] = {
>
>  DEFINE_DRM_GEM_FOPS(lima_drm_driver_fops);
>
> -/**
> +/*
>   * Changelog:
>   *
>   * - 1.1.0 - add heap buffer support
> --
> 2.25.1
>


Re: [PATCH 23/40] drm/lima/lima_sched: Remove unused and unnecessary variable 'ret'

2020-11-15 Thread Qiang Yu
Applied to drm-misc-next.

On Fri, Nov 13, 2020 at 9:50 PM Lee Jones  wrote:
>
> Fixes the following W=1 kernel build warning(s):
>
>  drivers/gpu/drm/lima/lima_sched.c: In function ‘lima_sched_run_job’:
>  drivers/gpu/drm/lima/lima_sched.c:227:20: warning: variable ‘ret’ set but 
> not used [-Wunused-but-set-variable]
>
> Cc: Qiang Yu 
> Cc: David Airlie 
> Cc: Daniel Vetter 
> Cc: Sumit Semwal 
> Cc: "Christian König" 
> Cc: dri-de...@lists.freedesktop.org
> Cc: l...@lists.freedesktop.org
> Cc: linux-me...@vger.kernel.org
> Cc: linaro-mm-...@lists.linaro.org
> Signed-off-by: Lee Jones 
> ---
>  drivers/gpu/drm/lima/lima_sched.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/lima/lima_sched.c 
> b/drivers/gpu/drm/lima/lima_sched.c
> index a070a85f8f368..63b4c5643f9cd 100644
> --- a/drivers/gpu/drm/lima/lima_sched.c
> +++ b/drivers/gpu/drm/lima/lima_sched.c
> @@ -224,7 +224,6 @@ static struct dma_fence *lima_sched_run_job(struct 
> drm_sched_job *job)
> struct lima_sched_pipe *pipe = to_lima_pipe(job->sched);
> struct lima_device *ldev = pipe->ldev;
> struct lima_fence *fence;
> -   struct dma_fence *ret;
> int i, err;
>
> /* after GPU reset */
> @@ -246,7 +245,7 @@ static struct dma_fence *lima_sched_run_job(struct 
> drm_sched_job *job)
> /* for caller usage of the fence, otherwise irq handler
>  * may consume the fence before caller use it
>  */
> -   ret = dma_fence_get(task->fence);
> +   dma_fence_get(task->fence);
>
> pipe->current_task = task;
>
> --
> 2.25.1
>


Re: [PATCH -next] drm/lima: simplify the return expression of lima_devfreq_target

2020-11-15 Thread Qiang Yu
Applied to drm-misc-next.

On Sat, Sep 19, 2020 at 6:43 PM Qiang Yu  wrote:
>
> Looks good to me, patch is:
> Reviewed-by: Qiang Yu 
>
> Regards,
> Qiang
>
> On Sat, Sep 19, 2020 at 5:47 PM Liu Shixin  wrote:
> >
> > Simplify the return expression.
> >
> > Signed-off-by: Liu Shixin 
> > ---
> >  drivers/gpu/drm/lima/lima_devfreq.c | 7 +--
> >  1 file changed, 1 insertion(+), 6 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/lima/lima_devfreq.c 
> > b/drivers/gpu/drm/lima/lima_devfreq.c
> > index bbe02817721b..5914442936ed 100644
> > --- a/drivers/gpu/drm/lima/lima_devfreq.c
> > +++ b/drivers/gpu/drm/lima/lima_devfreq.c
> > @@ -35,18 +35,13 @@ static int lima_devfreq_target(struct device *dev, 
> > unsigned long *freq,
> >u32 flags)
> >  {
> > struct dev_pm_opp *opp;
> > -   int err;
> >
> > opp = devfreq_recommended_opp(dev, freq, flags);
> > if (IS_ERR(opp))
> > return PTR_ERR(opp);
> > dev_pm_opp_put(opp);
> >
> > -   err = dev_pm_opp_set_rate(dev, *freq);
> > -   if (err)
> > -   return err;
> > -
> > -   return 0;
> > +   return dev_pm_opp_set_rate(dev, *freq);
> >  }
> >
> >  static void lima_devfreq_reset(struct lima_devfreq *devfreq)
> > --
> > 2.25.1
> >


Re: [PATCH -next] drm/lima: simplify the return expression of lima_devfreq_target

2020-09-19 Thread Qiang Yu
Looks good to me, patch is:
Reviewed-by: Qiang Yu 

Regards,
Qiang

On Sat, Sep 19, 2020 at 5:47 PM Liu Shixin  wrote:
>
> Simplify the return expression.
>
> Signed-off-by: Liu Shixin 
> ---
>  drivers/gpu/drm/lima/lima_devfreq.c | 7 +--
>  1 file changed, 1 insertion(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/lima/lima_devfreq.c 
> b/drivers/gpu/drm/lima/lima_devfreq.c
> index bbe02817721b..5914442936ed 100644
> --- a/drivers/gpu/drm/lima/lima_devfreq.c
> +++ b/drivers/gpu/drm/lima/lima_devfreq.c
> @@ -35,18 +35,13 @@ static int lima_devfreq_target(struct device *dev, 
> unsigned long *freq,
>u32 flags)
>  {
> struct dev_pm_opp *opp;
> -   int err;
>
> opp = devfreq_recommended_opp(dev, freq, flags);
> if (IS_ERR(opp))
> return PTR_ERR(opp);
> dev_pm_opp_put(opp);
>
> -   err = dev_pm_opp_set_rate(dev, *freq);
> -   if (err)
> -   return err;
> -
> -   return 0;
> +   return dev_pm_opp_set_rate(dev, *freq);
>  }
>
>  static void lima_devfreq_reset(struct lima_devfreq *devfreq)
> --
> 2.25.1
>


Re: [PATCH 2/8] drm/lima: Unconditionally call dev_pm_opp_of_remove_table()

2020-08-22 Thread Qiang Yu
Looks good to me, patch is:
Reviewed-by: Qiang Yu 

Regards,
Qiang

On Thu, Aug 20, 2020 at 6:44 PM Viresh Kumar  wrote:
>
> dev_pm_opp_of_remove_table() doesn't report any errors when it fails to
> find the OPP table with error -ENODEV (i.e. OPP table not present for
> the device). And we can call dev_pm_opp_of_remove_table()
> unconditionally here.
>
> Signed-off-by: Viresh Kumar 
> ---
>  drivers/gpu/drm/lima/lima_devfreq.c | 6 +-
>  drivers/gpu/drm/lima/lima_devfreq.h | 1 -
>  2 files changed, 1 insertion(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/lima/lima_devfreq.c 
> b/drivers/gpu/drm/lima/lima_devfreq.c
> index bbe02817721b..cd290d866a04 100644
> --- a/drivers/gpu/drm/lima/lima_devfreq.c
> +++ b/drivers/gpu/drm/lima/lima_devfreq.c
> @@ -105,10 +105,7 @@ void lima_devfreq_fini(struct lima_device *ldev)
> devfreq->devfreq = NULL;
> }
>
> -   if (devfreq->opp_of_table_added) {
> -   dev_pm_opp_of_remove_table(ldev->dev);
> -   devfreq->opp_of_table_added = false;
> -   }
> +   dev_pm_opp_of_remove_table(ldev->dev);
>
> if (devfreq->regulators_opp_table) {
> dev_pm_opp_put_regulators(devfreq->regulators_opp_table);
> @@ -162,7 +159,6 @@ int lima_devfreq_init(struct lima_device *ldev)
> ret = dev_pm_opp_of_add_table(dev);
> if (ret)
> goto err_fini;
> -   ldevfreq->opp_of_table_added = true;
>
> lima_devfreq_reset(ldevfreq);
>
> diff --git a/drivers/gpu/drm/lima/lima_devfreq.h 
> b/drivers/gpu/drm/lima/lima_devfreq.h
> index 5eed2975a375..2d9b3008ce77 100644
> --- a/drivers/gpu/drm/lima/lima_devfreq.h
> +++ b/drivers/gpu/drm/lima/lima_devfreq.h
> @@ -18,7 +18,6 @@ struct lima_devfreq {
> struct opp_table *clkname_opp_table;
> struct opp_table *regulators_opp_table;
> struct thermal_cooling_device *cooling;
> -   bool opp_of_table_added;
>
> ktime_t busy_time;
> ktime_t idle_time;
> --
> 2.25.0.rc1.19.g042ed3e048af
>


[PATCH] arm64: dts: allwinner: h5: remove Mali GPU PMU module

2020-08-22 Thread Qiang Yu
H5's Mali GPU PMU is not present or not working correctly, although the
H5 datasheet records its interrupt vector.

Adding this module misleads the lima driver into trying to shut the PMU
down and hitting a wait timeout. This problem was not exposed before
lima runtime PM support was added.

Fixes: bb39ed07e55b ("arm64: dts: allwinner: h5: Add device node for Mali-450 
GPU")
Signed-off-by: Qiang Yu 
---
 arch/arm64/boot/dts/allwinner/sun50i-h5.dtsi | 6 ++
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/arch/arm64/boot/dts/allwinner/sun50i-h5.dtsi 
b/arch/arm64/boot/dts/allwinner/sun50i-h5.dtsi
index 6735e316a39c..6c6053a18413 100644
--- a/arch/arm64/boot/dts/allwinner/sun50i-h5.dtsi
+++ b/arch/arm64/boot/dts/allwinner/sun50i-h5.dtsi
@@ -139,8 +139,7 @@ mali: gpu@1e8 {
 ,
 ,
 ,
-,
-;
+;
interrupt-names = "gp",
  "gpmmu",
  "pp",
@@ -151,8 +150,7 @@ mali: gpu@1e8 {
  "pp2",
  "ppmmu2",
  "pp3",
- "ppmmu3",
- "pmu";
+ "ppmmu3";
clocks = <&ccu CLK_BUS_GPU>, <&ccu CLK_GPU>;
clock-names = "bus", "core";
resets = <&ccu RST_BUS_GPU>;
-- 
2.25.1



Re: [PATCH] drm/lima: Expose job_hang_limit module parameter

2020-07-13 Thread Qiang Yu
Applied to drm-misc-next:
https://cgit.freedesktop.org/drm/drm-misc/

Sorry for the late response.

Regards,
Qiang

On Tue, Jul 7, 2020 at 12:17 AM Andrey Lebedev  wrote:
>
> Hello guys,
>
> What is the status of this patch? Was this committed to any branch? Is
> it pending for merge to the mainline? Do I have to do anything in order
> to make it mergeable?
>
> On 6/19/20 10:58 AM, Andrey Lebedev wrote:
> > From: Andrey Lebedev 
> >
> > Some pp or gp jobs can be successfully repeated even after they time
> > out. Introduce a lima module parameter to specify the number of times
> > a job can hang before being dropped.
> >
> > Signed-off-by: Andrey Lebedev 
> > ---
> >
> > Now all types are correct (uint).
> >
> >   drivers/gpu/drm/lima/lima_drv.c   | 4 
> >   drivers/gpu/drm/lima/lima_drv.h   | 1 +
> >   drivers/gpu/drm/lima/lima_sched.c | 5 +++--
> >   3 files changed, 8 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/gpu/drm/lima/lima_drv.c 
> > b/drivers/gpu/drm/lima/lima_drv.c
> > index a831565af813..ab460121fd52 100644
> > --- a/drivers/gpu/drm/lima/lima_drv.c
> > +++ b/drivers/gpu/drm/lima/lima_drv.c
> > @@ -19,6 +19,7 @@
> >   int lima_sched_timeout_ms;
> >   uint lima_heap_init_nr_pages = 8;
> >   uint lima_max_error_tasks;
> > +uint lima_job_hang_limit;
> >
> >   MODULE_PARM_DESC(sched_timeout_ms, "task run timeout in ms");
> >   module_param_named(sched_timeout_ms, lima_sched_timeout_ms, int, 0444);
> > @@ -29,6 +30,9 @@ module_param_named(heap_init_nr_pages, 
> > lima_heap_init_nr_pages, uint, 0444);
> >   MODULE_PARM_DESC(max_error_tasks, "max number of error tasks to save");
> >   module_param_named(max_error_tasks, lima_max_error_tasks, uint, 0644);
> >
> > +MODULE_PARM_DESC(job_hang_limit, "number of times to allow a job to hang 
> > before dropping it (default 0)");
> > +module_param_named(job_hang_limit, lima_job_hang_limit, uint, 0444);
> > +
> >   static int lima_ioctl_get_param(struct drm_device *dev, void *data, 
> > struct drm_file *file)
> >   {
> >   struct drm_lima_get_param *args = data;
> > diff --git a/drivers/gpu/drm/lima/lima_drv.h 
> > b/drivers/gpu/drm/lima/lima_drv.h
> > index fdbd4077c768..c738d288547b 100644
> > --- a/drivers/gpu/drm/lima/lima_drv.h
> > +++ b/drivers/gpu/drm/lima/lima_drv.h
> > @@ -11,6 +11,7 @@
> >   extern int lima_sched_timeout_ms;
> >   extern uint lima_heap_init_nr_pages;
> >   extern uint lima_max_error_tasks;
> > +extern uint lima_job_hang_limit;
> >
> >   struct lima_vm;
> >   struct lima_bo;
> > diff --git a/drivers/gpu/drm/lima/lima_sched.c 
> > b/drivers/gpu/drm/lima/lima_sched.c
> > index e6cefda00279..1602985dfa04 100644
> > --- a/drivers/gpu/drm/lima/lima_sched.c
> > +++ b/drivers/gpu/drm/lima/lima_sched.c
> > @@ -503,8 +503,9 @@ int lima_sched_pipe_init(struct lima_sched_pipe *pipe, 
> > const char *name)
> >
> >   INIT_WORK(>recover_work, lima_sched_recover_work);
> >
> > - return drm_sched_init(&pipe->base, &lima_sched_ops, 1, 0,
> > -   msecs_to_jiffies(timeout), name);
> > + return drm_sched_init(&pipe->base, &lima_sched_ops, 1,
> > +   lima_job_hang_limit, msecs_to_jiffies(timeout),
> > +   name);
> >   }
> >
> >   void lima_sched_pipe_fini(struct lima_sched_pipe *pipe)
> >
>
> --
> Andrey Lebedev aka -.- . -.. -.. . .-.
> Software engineer
> Homepage: http://lebedev.lt/


Re: [PATCH] drm/lima: Expose job_hang_limit module parameter

2020-06-18 Thread Qiang Yu
On Thu, Jun 18, 2020 at 10:58 PM Andrey Lebedev
 wrote:
>
> From: Andrey Lebedev 
>
> Some pp or gp jobs can be successfully repeated even after they time out.
> Introduce lima module parameter to specify number of times a job can hang
> before being dropped.
>
> Signed-off-by: Andrey Lebedev 
> ---
>
> Fixes for the embarrassing build error
> Reported-by: kernel test robot 
>
>  drivers/gpu/drm/lima/lima_drv.c   | 4 
>  drivers/gpu/drm/lima/lima_drv.h   | 1 +
>  drivers/gpu/drm/lima/lima_sched.c | 5 +++--
>  3 files changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/lima/lima_drv.c b/drivers/gpu/drm/lima/lima_drv.c
> index a831565af813..2400b8d52d92 100644
> --- a/drivers/gpu/drm/lima/lima_drv.c
> +++ b/drivers/gpu/drm/lima/lima_drv.c
> @@ -19,6 +19,7 @@
>  int lima_sched_timeout_ms;
>  uint lima_heap_init_nr_pages = 8;
>  uint lima_max_error_tasks;
> +uint lima_job_hang_limit;
>
>  MODULE_PARM_DESC(sched_timeout_ms, "task run timeout in ms");
>  module_param_named(sched_timeout_ms, lima_sched_timeout_ms, int, 0444);
> @@ -29,6 +30,9 @@ module_param_named(heap_init_nr_pages, 
> lima_heap_init_nr_pages, uint, 0444);
>  MODULE_PARM_DESC(max_error_tasks, "max number of error tasks to save");
>  module_param_named(max_error_tasks, lima_max_error_tasks, uint, 0644);
>
> +MODULE_PARM_DESC(job_hang_limit, "number of times to allow a job to hang 
> before dropping it (default 0)");
> +module_param_named(job_hang_limit, lima_job_hang_limit, int, 0444);
> +
Still missed changing this "int" to "uint".

Regards,
Qiang

>  static int lima_ioctl_get_param(struct drm_device *dev, void *data, struct 
> drm_file *file)
>  {
> struct drm_lima_get_param *args = data;
> diff --git a/drivers/gpu/drm/lima/lima_drv.h b/drivers/gpu/drm/lima/lima_drv.h
> index fdbd4077c768..c738d288547b 100644
> --- a/drivers/gpu/drm/lima/lima_drv.h
> +++ b/drivers/gpu/drm/lima/lima_drv.h
> @@ -11,6 +11,7 @@
>  extern int lima_sched_timeout_ms;
>  extern uint lima_heap_init_nr_pages;
>  extern uint lima_max_error_tasks;
> +extern uint lima_job_hang_limit;
>
>  struct lima_vm;
>  struct lima_bo;
> diff --git a/drivers/gpu/drm/lima/lima_sched.c 
> b/drivers/gpu/drm/lima/lima_sched.c
> index e6cefda00279..1602985dfa04 100644
> --- a/drivers/gpu/drm/lima/lima_sched.c
> +++ b/drivers/gpu/drm/lima/lima_sched.c
> @@ -503,8 +503,9 @@ int lima_sched_pipe_init(struct lima_sched_pipe *pipe, 
> const char *name)
>
> INIT_WORK(&pipe->recover_work, lima_sched_recover_work);
>
> -   return drm_sched_init(&pipe->base, &lima_sched_ops, 1, 0,
> - msecs_to_jiffies(timeout), name);
> +   return drm_sched_init(&pipe->base, &lima_sched_ops, 1,
> + lima_job_hang_limit, msecs_to_jiffies(timeout),
> + name);
>  }
>
>  void lima_sched_pipe_fini(struct lima_sched_pipe *pipe)
> --
> 2.25.1
>
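For reference, a knob exposed via module_param_named() with mode 0444 can be set at module load time and read back through sysfs. The following fragment is illustrative usage, not taken from the thread:

```shell
# Allow each job to hang twice before it is dropped (illustrative value)
modprobe lima job_hang_limit=2

# Equivalent when lima is built in: kernel command line option
#   lima.job_hang_limit=2

# Mode 0444 makes the value readable (but not writable) at runtime
cat /sys/module/lima/parameters/job_hang_limit
```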


Re: [PATCH] drm/lima: Expose job_hang_limit module parameter

2020-06-18 Thread Qiang Yu
On Thu, Jun 18, 2020 at 1:57 AM Andrey Lebedev  wrote:
>
> From: Andrey Lebedev 
>
> Some pp or gp jobs can be successfully repeated even after they time out.
> Introduce lima module parameter to specify number of times a job can hang
> before being dropped.
>
> Signed-off-by: Andrey Lebedev 
> ---
>
> Hello,
>
> This patch allows working around a freezing problem as discussed in
> https://gitlab.freedesktop.org/lima/linux/-/issues/33
>
>  drivers/gpu/drm/lima/lima_drv.c   | 4 
>  drivers/gpu/drm/lima/lima_drv.h   | 1 +
>  drivers/gpu/drm/lima/lima_sched.c | 5 +++--
>  3 files changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/lima/lima_drv.c b/drivers/gpu/drm/lima/lima_drv.c
> index a831565af813..2807eba26c55 100644
> --- a/drivers/gpu/drm/lima/lima_drv.c
> +++ b/drivers/gpu/drm/lima/lima_drv.c
> @@ -19,6 +19,7 @@
>  int lima_sched_timeout_ms;
>  uint lima_heap_init_nr_pages = 8;
>  uint lima_max_error_tasks;
> +int lima_job_hang_limit;

Better be an "uint" to avoid negative check. With this fixed, patch is:
Reviewed-by: Qiang Yu 

Regards,
Qiang

>
>  MODULE_PARM_DESC(sched_timeout_ms, "task run timeout in ms");
>  module_param_named(sched_timeout_ms, lima_sched_timeout_ms, int, 0444);
> @@ -29,6 +30,9 @@ module_param_named(heap_init_nr_pages, 
> lima_heap_init_nr_pages, uint, 0444);
>  MODULE_PARM_DESC(max_error_tasks, "max number of error tasks to save");
>  module_param_named(max_error_tasks, lima_max_error_tasks, uint, 0644);
>
> +MODULE_PARM_DESC(job_hang_limit, "number of times to allow a job to hang 
> before dropping it (default 0)");
> +module_param_named(job_hang_limit, lima_job_hang_limit, int, 0444);
> +
>  static int lima_ioctl_get_param(struct drm_device *dev, void *data, struct 
> drm_file *file)
>  {
> struct drm_lima_get_param *args = data;
> diff --git a/drivers/gpu/drm/lima/lima_drv.h b/drivers/gpu/drm/lima/lima_drv.h
> index fdbd4077c768..39fd98e3b14d 100644
> --- a/drivers/gpu/drm/lima/lima_drv.h
> +++ b/drivers/gpu/drm/lima/lima_drv.h
> @@ -11,6 +11,7 @@
>  extern int lima_sched_timeout_ms;
>  extern uint lima_heap_init_nr_pages;
>  extern uint lima_max_error_tasks;
> +extern int lima_job_hang_limit;
>
>  struct lima_vm;
>  struct lima_bo;
> diff --git a/drivers/gpu/drm/lima/lima_sched.c 
> b/drivers/gpu/drm/lima/lima_sched.c
> index e6cefda00279..1602985dfa04 100644
> --- a/drivers/gpu/drm/lima/lima_sched.c
> +++ b/drivers/gpu/drm/lima/lima_sched.c
> @@ -503,8 +503,9 @@ int lima_sched_pipe_init(struct lima_sched_pipe *pipe, 
> const char *name)
>
> INIT_WORK(&pipe->recover_work, lima_sched_recover_work);
>
> -   return drm_sched_init(&pipe->base, &lima_sched_ops, 1, 0,
> - msecs_to_jiffies(timeout), name);
> +   return drm_sched_init(&pipe->base, &lima_sched_ops, 1,
> + lima_job_hang_limit, msecs_to_jiffies(timeout),
> + name);
>  }
>
>  void lima_sched_pipe_fini(struct lima_sched_pipe *pipe)
> --
> 2.25.1
>


Re: [PATCH v5 13/38] drm: lima: fix common struct sg_table related issues

2020-05-17 Thread Qiang Yu
Looks good to me, patch is:
Reviewed-by: Qiang Yu 

Regards,
Qiang

On Wed, May 13, 2020 at 9:33 PM Marek Szyprowski
 wrote:
>
> The Documentation/DMA-API-HOWTO.txt states that the dma_map_sg() function
> returns the number of the created entries in the DMA address space.
> However the subsequent calls to the dma_sync_sg_for_{device,cpu}() and
> dma_unmap_sg must be called with the original number of the entries
> passed to the dma_map_sg().
>
> struct sg_table is a common structure used for describing a non-contiguous
> memory buffer, used commonly in the DRM and graphics subsystems. It
> consists of a scatterlist with memory pages and DMA addresses (sgl entry),
> as well as the number of scatterlist entries: CPU pages (orig_nents entry)
> and DMA mapped pages (nents entry).
>
> It turned out that it was a common mistake to misuse nents and orig_nents
> entries, calling DMA-mapping functions with a wrong number of entries or
> ignoring the number of mapped entries returned by the dma_map_sg()
> function.
>
> To avoid such issues, lets use a common dma-mapping wrappers operating
> directly on the struct sg_table objects and use scatterlist page
> iterators where possible. This, almost always, hides references to the
> nents and orig_nents entries, making the code robust, easier to follow
> and copy/paste safe.
>
> Signed-off-by: Marek Szyprowski 
> ---
> For more information, see '[PATCH v5 00/38] DRM: fix struct sg_table nents
> vs. orig_nents misuse' thread:
> https://lore.kernel.org/linux-iommu/20200513132114.6046-1-m.szyprow...@samsung.com/T/
> ---
>  drivers/gpu/drm/lima/lima_gem.c | 11 ---
>  drivers/gpu/drm/lima/lima_vm.c  |  5 ++---
>  2 files changed, 10 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/gpu/drm/lima/lima_gem.c b/drivers/gpu/drm/lima/lima_gem.c
> index 5404e0d..cda43f6 100644
> --- a/drivers/gpu/drm/lima/lima_gem.c
> +++ b/drivers/gpu/drm/lima/lima_gem.c
> @@ -69,8 +69,7 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
> return ret;
>
> if (bo->base.sgt) {
> -   dma_unmap_sg(dev, bo->base.sgt->sgl,
> -bo->base.sgt->nents, DMA_BIDIRECTIONAL);
> +   dma_unmap_sgtable(dev, bo->base.sgt, DMA_BIDIRECTIONAL, 0);
> sg_free_table(bo->base.sgt);
> } else {
> bo->base.sgt = kmalloc(sizeof(*bo->base.sgt), GFP_KERNEL);
> @@ -80,7 +79,13 @@ int lima_heap_alloc(struct lima_bo *bo, struct lima_vm *vm)
> }
> }
>
> -   dma_map_sg(dev, sgt.sgl, sgt.nents, DMA_BIDIRECTIONAL);
> +   ret = dma_map_sgtable(dev, &sgt, DMA_BIDIRECTIONAL, 0);
> +   if (ret) {
> +   sg_free_table(&sgt);
> +   kfree(bo->base.sgt);
> +   bo->base.sgt = NULL;
> +   return ret;
> +   }
>
> *bo->base.sgt = sgt;
>
> diff --git a/drivers/gpu/drm/lima/lima_vm.c b/drivers/gpu/drm/lima/lima_vm.c
> index 5b92fb8..2b2739a 100644
> --- a/drivers/gpu/drm/lima/lima_vm.c
> +++ b/drivers/gpu/drm/lima/lima_vm.c
> @@ -124,7 +124,7 @@ int lima_vm_bo_add(struct lima_vm *vm, struct lima_bo 
> *bo, bool create)
> if (err)
> goto err_out1;
>
> -   for_each_sg_dma_page(bo->base.sgt->sgl, &sg_iter, 
> bo->base.sgt->nents, 0) {
> +   for_each_sgtable_dma_page(bo->base.sgt, &sg_iter, 0) {
> err = lima_vm_map_page(vm, sg_page_iter_dma_address(&sg_iter),
>bo_va->node.start + offset);
> if (err)
> @@ -298,8 +298,7 @@ int lima_vm_map_bo(struct lima_vm *vm, struct lima_bo 
> *bo, int pageoff)
> mutex_lock(&vm->lock);
>
> base = bo_va->node.start + (pageoff << PAGE_SHIFT);
> -   for_each_sg_dma_page(bo->base.sgt->sgl, &sg_iter,
> -bo->base.sgt->nents, pageoff) {
> +   for_each_sgtable_dma_page(bo->base.sgt, &sg_iter, pageoff) {
> err = lima_vm_map_page(vm, sg_page_iter_dma_address(&sg_iter),
>base + offset);
> if (err)
> --
> 1.9.1
>
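The pitfall the commit message describes can be summarized in a kernel-context sketch (not standalone code) of the before/after pattern:

```c
/* Before: the caller juggles two counts. dma_map_sg() returns the number
 * of DMA entries (nents), which may be smaller than the number of CPU
 * pages (orig_nents) -- yet unmap/sync must be called with orig_nents. */
nents = dma_map_sg(dev, sgt->sgl, sgt->orig_nents, DMA_BIDIRECTIONAL);
if (nents == 0)
	return -ENOMEM;
/* ... */
dma_unmap_sg(dev, sgt->sgl, sgt->orig_nents, DMA_BIDIRECTIONAL);

/* After: the sg_table wrappers pick the right field internally and
 * return 0 or a negative errno instead of an entry count. */
ret = dma_map_sgtable(dev, sgt, DMA_BIDIRECTIONAL, 0);
if (ret)
	return ret;
/* ... */
dma_unmap_sgtable(dev, sgt, DMA_BIDIRECTIONAL, 0);
```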


Re: [Lima] [PATCH -next] MAINTAINERS: mark lima mailing list as moderated

2019-04-08 Thread Qiang Yu
On Mon, Apr 8, 2019 at 8:00 PM Neil Armstrong  wrote:
>
> On 08/04/2019 03:37, Qiang Yu wrote:
> > Looks good to me, patch is:
> > Reviewed-by: Qiang Yu 
>
> Also:
> Reviewed-by: Neil Armstrong 
>
> >
> > Should I apply this patch to drm-misc in this case? Or this patch will be
> > submitted in other kernel tree and back merged to drm-misc?
>
> You can push it yourself on the drm-misc-next branch now.
>
> (Then you can reply to the original patch when you applied it)

Thanks for the guidance. I'll push it soon.

Regards,
Qiang

>
> Neil
>
> >
> > Regards,
> > Qiang
> >
> > On Mon, Apr 8, 2019 at 3:12 AM Randy Dunlap  wrote:
> >>
> >> From: Randy Dunlap 
> >>
> >> Note that the lima mailing list is moderated.
> >>
> >> Signed-off-by: Randy Dunlap 
> >> Cc: Qiang Yu 
> >> Cc: dri-de...@lists.freedesktop.org
> >> Cc: l...@lists.freedesktop.org
> >> ---
> >>  MAINTAINERS |2 +-
> >>  1 file changed, 1 insertion(+), 1 deletion(-)
> >>
> >> --- mmotm-2019-0405-1828.orig/MAINTAINERS
> >> +++ mmotm-2019-0405-1828/MAINTAINERS
> >> @@ -5226,7 +5226,7 @@ F:Documentation/devicetree/bindings/dis
> >>  DRM DRIVERS FOR LIMA
> >>  M: Qiang Yu 
> >>  L: dri-de...@lists.freedesktop.org
> >> -L: l...@lists.freedesktop.org
> >> +L: l...@lists.freedesktop.org (moderated for non-subscribers)
> >>  S: Maintained
> >>  F: drivers/gpu/drm/lima/
> >>  F: include/uapi/drm/lima_drm.h
> >>
> >>
> > ___
> > lima mailing list
> > l...@lists.freedesktop.org
> > https://lists.freedesktop.org/mailman/listinfo/lima
> >
>


[PATCH] drm/lima: adopt xa_alloc API change

2019-04-02 Thread Qiang Yu
Cc: Stephen Rothwell 
Cc: Matthew Wilcox 
Cc: Daniel Vetter 
Signed-off-by: Qiang Yu 
---
 drivers/gpu/drm/lima/lima_ctx.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/lima/lima_ctx.c b/drivers/gpu/drm/lima/lima_ctx.c
index c8d12f7c6894..22fff6caa961 100644
--- a/drivers/gpu/drm/lima/lima_ctx.c
+++ b/drivers/gpu/drm/lima/lima_ctx.c
@@ -23,7 +23,7 @@ int lima_ctx_create(struct lima_device *dev, struct 
lima_ctx_mgr *mgr, u32 *id)
goto err_out0;
}
 
-   err = xa_alloc(&mgr->handles, id, UINT_MAX, ctx, GFP_KERNEL);
+   err = xa_alloc(&mgr->handles, id, ctx, xa_limit_32b, GFP_KERNEL);
if (err < 0)
goto err_out0;
 
-- 
2.17.1



Re: linux-next: build failure after merge of the drm-misc tree

2019-04-02 Thread Qiang Yu
On Tue, Apr 2, 2019 at 3:57 PM Daniel Vetter  wrote:
>
> On Tue, Apr 02, 2019 at 01:55:03PM +0800, Qiang Yu wrote:
> > Thanks, patch is:
> > Reviewed-by: Qiang Yu 
>
> Good time to get started with committing patches? In general it's kinda
> confusing if the maintainer r-bs a patch, but doesn't say whether/when/how
> it gets merged. Big chance the patch will get lost in limbo and fall
> through cracks.

Thanks for the reminder. This patch should only be applied to the drm-misc-next
branch when the 5.1-rcx branch gets merged to drm-misc-next for the new
xa_alloc API. So I expect whoever does the 5.1-rcx merge to apply
this patch. Who will do the merge?

Regards,
Qiang

>
> >
> > Regards,
> > Qiang
> >
> > On Tue, Apr 2, 2019 at 7:50 AM Stephen Rothwell  
> > wrote:
> > >
> > > Hi all,
> > >
> > > After merging the drm-misc tree, today's linux-next build (x86_64
> > > allmodconfig) failed like this:
> > >
> > > In file included from include/linux/kernel.h:7,
> > >  from include/asm-generic/bug.h:18,
> > >  from arch/x86/include/asm/bug.h:83,
> > >  from include/linux/bug.h:5,
> > >  from include/linux/mmdebug.h:5,
> > >  from include/linux/gfp.h:5,
> > >  from include/linux/slab.h:15,
> > >  from drivers/gpu/drm/lima/lima_ctx.c:4:
> > > drivers/gpu/drm/lima/lima_ctx.c: In function 'lima_ctx_create':
> > > include/linux/limits.h:13:18: warning: passing argument 3 of 'xa_alloc' 
> > > makes pointer from integer without a cast [-Wint-conversion]
> > >  #define UINT_MAX (~0U)
> > >   ^
> > > drivers/gpu/drm/lima/lima_ctx.c:26:36: note: in expansion of macro 
> > > 'UINT_MAX'
> > >   err = xa_alloc(&mgr->handles, id, UINT_MAX, ctx, GFP_KERNEL);
> > > ^~~~
> > > In file included from include/linux/radix-tree.h:31,
> > >  from include/linux/idr.h:15,
> > >  from include/drm/drm_device.h:7,
> > >  from drivers/gpu/drm/lima/lima_device.h:7,
> > >  from drivers/gpu/drm/lima/lima_ctx.c:6:
> > > include/linux/xarray.h:817:9: note: expected 'void *' but argument is of 
> > > type 'unsigned int'
> > >void *entry, struct xa_limit limit, gfp_t gfp)
> > >~~^
> > > drivers/gpu/drm/lima/lima_ctx.c:26:46: error: incompatible type for 
> > > argument 4 of 'xa_alloc'
> > >   err = xa_alloc(&mgr->handles, id, UINT_MAX, ctx, GFP_KERNEL);
> > >   ^~~
> > > In file included from include/linux/radix-tree.h:31,
> > >  from include/linux/idr.h:15,
> > >  from include/drm/drm_device.h:7,
> > >  from drivers/gpu/drm/lima/lima_device.h:7,
> > >  from drivers/gpu/drm/lima/lima_ctx.c:6:
> > > include/linux/xarray.h:817:32: note: expected 'struct xa_limit' but 
> > > argument is of type 'struct lima_ctx *'
> > >void *entry, struct xa_limit limit, gfp_t gfp)
> > > ^
> > >
> > > Caused by commit
> > >
> > >   a1d2a6339961 ("drm/lima: driver for ARM Mali4xx GPUs")
> > >
> > > interacting with commit
> > >
> > >   a3e4d3f97ec8 ("XArray: Redesign xa_alloc API")
> > >
> > > from Linus' tree (v5.1-rc1).
> > >
> > > I have applied the following patch for today.  It could be applied as
> > > part of a merge of v5.1-rc1 into drm-misc.
> > >
> > > From: Stephen Rothwell 
> > > Date: Tue, 2 Apr 2019 10:45:32 +1100
> > > Subject: [PATCH] drm/lima: update for xa_alloc API change
> > >
> > > Signed-off-by: Stephen Rothwell 
> > > ---
> > >  drivers/gpu/drm/lima/lima_ctx.c | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/gpu/drm/lima/lima_ctx.c 
> > > b/drivers/gpu/drm/lima/lima_ctx.c
> > > index c8d12f7c6894..bafa00d74cc5 100644
> > > --- a/drivers/gpu/drm/lima/lima_ctx.c
> > > +++ b/drivers/gpu/drm/lima/lima_ctx.c
> > > @@ -23,7 +23,7 @@ int lima_ctx_create(struct lima_device *dev, struct 
> > > lima_ctx_mgr *mgr, u32 *id)
> > > goto err_out0;
> > > }
> > >
> > > -   err = xa_alloc(&mgr->handles, id, UINT_MAX, ctx, GFP_KERNEL);
> > > +   err = xa_alloc(&mgr->handles, id, ctx, XA_LIMIT(*id, UINT_MAX), 
> > > GFP_KERNEL);
> > > if (err < 0)
> > > goto err_out0;
> > >
> > > --
> > > 2.20.1
> > >
> > > --
> > > Cheers,
> > > Stephen Rothwell
>
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
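For readers hitting the same conflict: the signature change that broke the build is, in kernel-context sketch form (not standalone code),

```c
/* Pre-5.1 xa_alloc(): the maximum id is a bare integer, entry comes after. */
err = xa_alloc(&mgr->handles, id, UINT_MAX, ctx, GFP_KERNEL);

/* 5.1+ xa_alloc(): the entry moves up and the range becomes a
 * struct xa_limit, e.g. xa_limit_32b for the full 32-bit id space. */
err = xa_alloc(&mgr->handles, id, ctx, xa_limit_32b, GFP_KERNEL);
```

which is why both fixups in this thread only reorder the arguments and swap the integer limit for an xa_limit value.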