Re: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

2021-11-11 Thread Andrey Grodzovsky

Re: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

2021-11-10 Thread Rob Clark

Re: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

2021-11-10 Thread Daniel Vetter

Re: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

2021-11-10 Thread Andrey Grodzovsky

Re: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

2021-11-10 Thread Christian König


Re: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

2021-11-10 Thread Daniel Vetter

Re: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

2021-11-09 Thread Rob Clark

Re: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

2021-11-09 Thread Daniel Vetter
On Mon, Nov 08, 2021 at 03:39:17PM -0800, Rob Clark wrote:
> I stumbled across this thread when I ran into the same issue, while
> working out how to move drm/msm to use scheduler's retire +
> timeout/recovery (and get rid of our own mirror list of in-flight
> jobs).  We already have hw error detection enabled, and it can signal
> quite fast, so assuming the first job on the list is the guilty job
> just won't work.
> 
> But I was considering a slightly different approach to fixing this,
> instead just handling it all in drm_sched_main() and getting rid of
> the complicated kthread parking gymnastics.  Ie. something along the
> lines of:

So handling timeouts in the main sched thread won't work as soon as you
have multiple engines and a reset that impacts multiple engines:

- Nothing is simplified since you still need to stop the other scheduler
  threads.

- You get deadlocks if 2 schedulers time out at the same time, and both
  want to stop the other one.

Hence workqueue. Now the rule for the wq is that you can only have one per
reset domain, so
- single engine you just take the one drm/sched provides
- if reset affects all your engines in the chip, then you allocate one in
  the drm_device and pass that to all
- if you have a complex of gpus all interconnected (e.g. xgmi hive for
  amd), then it's one wq for the entire hive

_All_ reset-related things must be run on that workqueue or things break,
which means if you get a hw fault, that also needs to be run there. I guess
we should either patch drm/sched to check that you call that function from the
right workqueue, or just handle it internally.
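
To make that concrete, here is a rough sketch of the driver-side setup,
assuming the drm_sched_init() variant that takes a timeout_wq argument; all
the my_* names, MY_* constants and the two-ring layout are made up for
illustration, not taken from any real driver:

#include <linux/workqueue.h>
#include <drm/gpu_scheduler.h>

/* Sketch only: one ordered wq shared by every scheduler in the reset
 * domain, so all timedout_job() handlers serialize against each other.
 */
struct my_ring {
	struct drm_gpu_scheduler sched;
	const char *name;
};

static struct workqueue_struct *reset_domain_wq;

static int my_driver_init_scheds(struct my_ring *rings, int num_rings)
{
	int i, ret;

	/* one wq for the whole reset domain (chip-wide in this example) */
	reset_domain_wq = alloc_ordered_workqueue("my-gpu-tdr", 0);
	if (!reset_domain_wq)
		return -ENOMEM;

	for (i = 0; i < num_rings; i++) {
		/* my_sched_ops is assumed to be defined elsewhere */
		ret = drm_sched_init(&rings[i].sched, &my_sched_ops,
				     MY_HW_SUBMISSION_LIMIT, MY_HANG_LIMIT,
				     msecs_to_jiffies(MY_TIMEOUT_MS),
				     reset_domain_wq, NULL, rings[i].name);
		if (ret)
			return ret;
	}

	return 0;
}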
-Daniel

> 
> -
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c
> b/drivers/gpu/drm/scheduler/sched_main.c
> index 67382621b429..4d6ce775c316 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -764,6 +764,45 @@ static bool drm_sched_blocked(struct
> drm_gpu_scheduler *sched)
> return false;
>  }
> 
> +static bool handle_timeout(struct drm_gpu_scheduler *sched)
> +{
> +   struct drm_sched_job *bad;
> +
> +   if (!sched->has_timeout)
> +   return false;
> +
> +   sched->has_timeout = false;
> +
> +   spin_lock(&sched->job_list_lock);
> +   bad = list_first_entry_or_null(&sched->pending_list,
> +  struct drm_sched_job, list);
> +
> +   if (!bad) {
> +   spin_unlock(&sched->job_list_lock);
> +   return false;
> +   }
> +
> +   spin_unlock(&sched->job_list_lock);
> +
> +   if (sched->timeout_wq == system_wq) {
> +   /*
> +* If driver has no specific requirements about serializing
> +* reset wrt. other engines, just call timedout_job() directly
> +*/
> +   sched->ops->timedout_job(bad);
> +   } else {
> +   /*
> +* Otherwise queue it on timeout_wq and wait for it to 
> complete
> +*/
> +   ... more typing needed here ...
> +   }
> +
> +   if (sched->free_guilty) {
> +   sched->ops->free_job(bad);
> +   sched->free_guilty = false;
> +   }
> +}
> +
>  /**
>   * drm_sched_main - main scheduler thread
>   *
> @@ -787,6 +826,7 @@ static int drm_sched_main(void *param)
> 
> wait_event_interruptible(sched->wake_up_worker,
>  (cleanup_job =
> drm_sched_get_cleanup_job(sched)) ||
> +handle_timeout(sched) ||
>  (!drm_sched_blocked(sched) &&
>   (entity =
> drm_sched_select_entity(sched))) ||
>  kthread_should_stop());
> -
> 
> drm_sched_fault() and the sw timeout handler would just set
> sched->has_timeout and kick sched->wake_up_worker.
> 
> And since we handle the timeout case after
> drm_sched_get_cleanup_job(), we know that all of the successfully
> completed jobs have already been popped off the list, and won't be
> unfairly maligned.
> 
> BR,
> -R
> 
> On Tue, Aug 31, 2021 at 6:29 PM Liu, Monk  wrote:
> >
> > [AMD Official Use Only]
> >
> > Okay, I will reprepare this patch
> >
> > Thanks
> >
> > --------------
> > Monk Liu | Cloud-GPU Core team
> > --
> >
> > -Original Message-
> > From: Daniel Vetter 
> > Sent: Tuesday, August 31, 2021 9:02 PM
> > To: Liu, Monk 
> > Cc: amd-g

Re: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

2021-11-08 Thread Rob Clark
I stumbled across this thread when I ran into the same issue, while
working out how to move drm/msm to use scheduler's retire +
timeout/recovery (and get rid of our own mirror list of in-flight
jobs).  We already have hw error detection enabled, and it can signal
quite fast, so assuming the first job on the list is the guilty job
just won't work.

But I was considering a slightly different approach to fixing this,
instead just handling it all in drm_sched_main() and getting rid of
the complicated kthread parking gymnastics.  Ie. something along the
lines of:

-
diff --git a/drivers/gpu/drm/scheduler/sched_main.c
b/drivers/gpu/drm/scheduler/sched_main.c
index 67382621b429..4d6ce775c316 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -764,6 +764,45 @@ static bool drm_sched_blocked(struct
drm_gpu_scheduler *sched)
return false;
 }

+static bool handle_timeout(struct drm_gpu_scheduler *sched)
+{
+   struct drm_sched_job *bad;
+
+   if (!sched->has_timeout)
+   return false;
+
+   sched->has_timeout = false;
+
+   spin_lock(&sched->job_list_lock);
+   bad = list_first_entry_or_null(&sched->pending_list,
+  struct drm_sched_job, list);
+
+   if (!bad) {
+   spin_unlock(&sched->job_list_lock);
+   return false;
+   }
+
+   spin_unlock(&sched->job_list_lock);
+
+   if (sched->timeout_wq == system_wq) {
+   /*
+* If driver has no specific requirements about serializing
+* reset wrt. other engines, just call timedout_job() directly
+*/
+   sched->ops->timedout_job(bad);
+   } else {
+   /*
+* Otherwise queue it on timeout_wq and wait for it to complete
+*/
+   ... more typing needed here ...
+   }
+
+   if (sched->free_guilty) {
+   sched->ops->free_job(bad);
+   sched->free_guilty = false;
+   }
+}
+
 /**
  * drm_sched_main - main scheduler thread
  *
@@ -787,6 +826,7 @@ static int drm_sched_main(void *param)

wait_event_interruptible(sched->wake_up_worker,
 (cleanup_job =
drm_sched_get_cleanup_job(sched)) ||
+handle_timeout(sched) ||
 (!drm_sched_blocked(sched) &&
  (entity =
drm_sched_select_entity(sched))) ||
 kthread_should_stop());
-

drm_sched_fault() and the sw timeout handler would just set
sched->has_timeout and kick sched->wake_up_worker.
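
Roughly, that part would look like the sketch below; note that has_timeout is
the hypothetical flag from the diff above, not an existing drm_gpu_scheduler
member:

/* Sketch only: fault and sw-timeout paths just flag the scheduler and
 * wake its thread; handle_timeout() above does the actual work.
 */
void drm_sched_fault(struct drm_gpu_scheduler *sched)
{
	sched->has_timeout = true;
	wake_up_interruptible(&sched->wake_up_worker);
}

static void drm_sched_job_timedout(struct work_struct *work)
{
	struct drm_gpu_scheduler *sched =
		container_of(work, struct drm_gpu_scheduler, work_tdr.work);

	sched->has_timeout = true;
	wake_up_interruptible(&sched->wake_up_worker);
}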

And since we handle the timeout case after
drm_sched_get_cleanup_job(), we know that all of the successfully
completed jobs have already been popped off the list, and won't be
unfairly maligned.

BR,
-R

On Tue, Aug 31, 2021 at 6:29 PM Liu, Monk  wrote:
>
> [AMD Official Use Only]
>
> Okay, I will reprepare this patch
>
> Thanks
>
> --
> Monk Liu | Cloud-GPU Core team
> --
>
> -Original Message-
> From: Daniel Vetter 
> Sent: Tuesday, August 31, 2021 9:02 PM
> To: Liu, Monk 
> Cc: amd-gfx@lists.freedesktop.org; dri-de...@lists.freedesktop.org; Chen, 
> Jingwen 
> Subject: Re: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler
>
> On Tue, Aug 31, 2021 at 02:59:02PM +0200, Daniel Vetter wrote:
> > Can we please have some actual commit message here, with detailed
> > explanation of the race/bug/whatever, how you fix it and why this is
> > the best option?
> >
> > On Tue, Aug 31, 2021 at 06:35:39PM +0800, Monk Liu wrote:
> > > tested-by: jingwen chen 
> > > Signed-off-by: Monk Liu 
> > > Signed-off-by: jingwen chen 
> > > ---
> > >  drivers/gpu/drm/scheduler/sched_main.c | 24
> > > 
> > >  1 file changed, 4 insertions(+), 20 deletions(-)
> > >
> > > diff --git a/drivers/gpu/drm/scheduler/sched_main.c
> > > b/drivers/gpu/drm/scheduler/sched_main.c
> > > index ecf8140..894fdb24 100644
> > > --- a/drivers/gpu/drm/scheduler/sched_main.c
> > > +++ b/drivers/gpu/drm/scheduler/sched_main.c
> > > @@ -319,19 +319,17 @@ static void drm_sched_job_timedout(struct 
> > > work_struct *work)
> > > sched = container_of(work, struct drm_gpu_scheduler,
> > > work_tdr.work);
> > >
> > > /* Protects against concurrent deletion in
> > > drm_sched_get_cleanup_job */
> > > +   if (!__kthread_should_park(sched->thread))
> >
> > This is a __ function, i.e. considered inter

RE: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

2021-08-31 Thread Liu, Monk
[AMD Official Use Only]

Okay, I will reprepare this patch

Thanks 

--
Monk Liu | Cloud-GPU Core team
--

-Original Message-
From: Daniel Vetter  
Sent: Tuesday, August 31, 2021 9:02 PM
To: Liu, Monk 
Cc: amd-gfx@lists.freedesktop.org; dri-de...@lists.freedesktop.org; Chen, 
Jingwen 
Subject: Re: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

On Tue, Aug 31, 2021 at 02:59:02PM +0200, Daniel Vetter wrote:
> Can we please have some actual commit message here, with detailed 
> explanation of the race/bug/whatever, how you fix it and why this is 
> the best option?
> 
> On Tue, Aug 31, 2021 at 06:35:39PM +0800, Monk Liu wrote:
> > tested-by: jingwen chen 
> > Signed-off-by: Monk Liu 
> > Signed-off-by: jingwen chen 
> > ---
> >  drivers/gpu/drm/scheduler/sched_main.c | 24 
> > 
> >  1 file changed, 4 insertions(+), 20 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
> > b/drivers/gpu/drm/scheduler/sched_main.c
> > index ecf8140..894fdb24 100644
> > --- a/drivers/gpu/drm/scheduler/sched_main.c
> > +++ b/drivers/gpu/drm/scheduler/sched_main.c
> > @@ -319,19 +319,17 @@ static void drm_sched_job_timedout(struct work_struct 
> > *work)
> > sched = container_of(work, struct drm_gpu_scheduler, 
> > work_tdr.work);
> >  
> > /* Protects against concurrent deletion in 
> > drm_sched_get_cleanup_job */
> > +   if (!__kthread_should_park(sched->thread))
> 
> This is a __ function, i.e. considered internal, and it's lockless 
> atomic, i.e. unordered. And you're not explaining why this works.
> 
> Iow it's probably buggy, and an just unconditionally parking the 
> kthread is probably the right thing to do. If it's not the right thing 
> to do, there's a bug here for sure.

Also why don't we reuse the function drivers already have to stop a scheduler 
thread? We seem to have two kthread_park now, that's probably one too much.
-Daniel

> > +   kthread_park(sched->thread);
> > +
> > spin_lock(&sched->job_list_lock);
> > job = list_first_entry_or_null(&sched->pending_list,
> >struct drm_sched_job, list);
> >  
> > if (job) {
> > -   /*
> > -* Remove the bad job so it cannot be freed by concurrent
> > -* drm_sched_cleanup_jobs. It will be reinserted back after 
> > sched->thread
> > -* is parked at which point it's safe.
> > -*/
> > -   list_del_init(&job->list);
> > spin_unlock(&sched->job_list_lock);
> >  
> > +   /* vendor's timeout_job should call drm_sched_start() */
> > status = job->sched->ops->timedout_job(job);
> >  
> > /*
> > @@ -393,20 +391,6 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, 
> > struct drm_sched_job *bad)
> > kthread_park(sched->thread);
> >  
> > /*
> > -* Reinsert back the bad job here - now it's safe as
> > -* drm_sched_get_cleanup_job cannot race against us and release the
> > -* bad job at this point - we parked (waited for) any in progress
> > -* (earlier) cleanups and drm_sched_get_cleanup_job will not be called
> > -* now until the scheduler thread is unparked.
> > -*/
> > -   if (bad && bad->sched == sched)
> > -   /*
> > -* Add at the head of the queue to reflect it was the earliest
> > -* job extracted.
> > -*/
> > -   list_add(&bad->list, &sched->pending_list);
> > -
> > -   /*
> >  * Iterate the job list from later to  earlier one and either deactive
> >  * their HW callbacks or remove them from pending list if they already
> >  * signaled.
> > --
> > 2.7.4
> > 
> 
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


RE: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

2021-08-31 Thread Liu, Monk
[AMD Official Use Only]

>> Also why don't we reuse the function drivers already have to stop a 
>> scheduler thread? We seem to have two kthread_park now, that's probably one 
>> too much.
Are you referring to drm_sched_stop?

That's different: we don't need the logic from it, since it goes through the
pending list and removes all callbacks, etc., while the vendor's timeout
callback will call drm_sched_stop in a proper way.
All we want in my patch is to simply park the scheduler.
Besides, even if you call drm_sched_stop in job_timeout you still run into the
warning issue I hit.

Thanks 

--
Monk Liu | Cloud-GPU Core team
--

-Original Message-
From: Daniel Vetter  
Sent: Tuesday, August 31, 2021 9:02 PM
To: Liu, Monk 
Cc: amd-gfx@lists.freedesktop.org; dri-de...@lists.freedesktop.org; Chen, 
Jingwen 
Subject: Re: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

On Tue, Aug 31, 2021 at 02:59:02PM +0200, Daniel Vetter wrote:
> Can we please have some actual commit message here, with detailed 
> explanation of the race/bug/whatever, how you fix it and why this is 
> the best option?
> 
> On Tue, Aug 31, 2021 at 06:35:39PM +0800, Monk Liu wrote:
> > tested-by: jingwen chen 
> > Signed-off-by: Monk Liu 
> > Signed-off-by: jingwen chen 
> > ---
> >  drivers/gpu/drm/scheduler/sched_main.c | 24 
> > 
> >  1 file changed, 4 insertions(+), 20 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
> > b/drivers/gpu/drm/scheduler/sched_main.c
> > index ecf8140..894fdb24 100644
> > --- a/drivers/gpu/drm/scheduler/sched_main.c
> > +++ b/drivers/gpu/drm/scheduler/sched_main.c
> > @@ -319,19 +319,17 @@ static void drm_sched_job_timedout(struct work_struct 
> > *work)
> > sched = container_of(work, struct drm_gpu_scheduler, 
> > work_tdr.work);
> >  
> > /* Protects against concurrent deletion in 
> > drm_sched_get_cleanup_job */
> > +   if (!__kthread_should_park(sched->thread))
> 
> This is a __ function, i.e. considered internal, and it's lockless 
> atomic, i.e. unordered. And you're not explaining why this works.
> 
> Iow it's probably buggy, and an just unconditionally parking the 
> kthread is probably the right thing to do. If it's not the right thing 
> to do, there's a bug here for sure.

Also why don't we reuse the function drivers already have to stop a scheduler 
thread? We seem to have two kthread_park now, that's probably one too much.
-Daniel

> > +   kthread_park(sched->thread);
> > +
> > spin_lock(&sched->job_list_lock);
> > job = list_first_entry_or_null(&sched->pending_list,
> >struct drm_sched_job, list);
> >  
> > if (job) {
> > -   /*
> > -* Remove the bad job so it cannot be freed by concurrent
> > -* drm_sched_cleanup_jobs. It will be reinserted back after 
> > sched->thread
> > -* is parked at which point it's safe.
> > -*/
> > -   list_del_init(&job->list);
> > spin_unlock(&sched->job_list_lock);
> >  
> > +   /* vendor's timeout_job should call drm_sched_start() */
> > status = job->sched->ops->timedout_job(job);
> >  
> > /*
> > @@ -393,20 +391,6 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, 
> > struct drm_sched_job *bad)
> > kthread_park(sched->thread);
> >  
> > /*
> > -* Reinsert back the bad job here - now it's safe as
> > -* drm_sched_get_cleanup_job cannot race against us and release the
> > -* bad job at this point - we parked (waited for) any in progress
> > -* (earlier) cleanups and drm_sched_get_cleanup_job will not be called
> > -* now until the scheduler thread is unparked.
> > -*/
> > -   if (bad && bad->sched == sched)
> > -   /*
> > -* Add at the head of the queue to reflect it was the earliest
> > -* job extracted.
> > -*/
> > -   list_add(&bad->list, &sched->pending_list);
> > -
> > -   /*
> >  * Iterate the job list from later to  earlier one and either deactive
> >  * their HW callbacks or remove them from pending list if they already
> >  * signaled.
> > --
> > 2.7.4
> > 
> 
> --
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch
>

RE: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

2021-08-31 Thread Liu, Monk
[AMD Official Use Only]

>> This is a __ function, i.e. considered internal, and it's lockless atomic, 
>> i.e. unordered. And you're not explaining why this works.

It's not a traditional habit, from what I can see, to put the explanation in
the code, but we can do that in mails.
We want to park the scheduler in job_timeout to serialize the job access
from both the sched thread and the TO handler; but inside the vendor's
timeout_job callback, at least both panfrost and amd call drm_sched_stop()
on all schedulers.

If we unconditionally call "kthread_park" in job_timedout, then the bailing
job's timeout will try to call "kthread_park" again on its scheduler and
introduce a "warning".

The scenario is:
1. Job1 on sched1 triggers a timeout, and sched1 is parked.
2. The vendor callback runs; it will usually stop all schedulers.
3. Job2 on sched2 triggers a timeout, so job_timedout also tries to park
sched2, but sched2 was already stopped by the step above. (Job2's timeout is
introduced by job1, or by another VF.)
   ---So there will be a "warning" in the kernel log from the step above...
with this "__kthread_should_park()" check we can avoid the warning; that's the
only reason I need this __ function.
4. Before the vendor callback exits, it will unpark all schedulers.

On the other hand, if we don't do the kthread_park() and still delete the job
here (dropping the deleting/reinserting of the job from pending_list is what we
want), we still have a small window to hit the race issue:
cleanup_job from the sched thread frees the job while the job is still being
processed by job_timedout or the vendor's callback.

And the reason we want to avoid deleting/reinserting the timedout job is
because we (the amd vendor) have our own way to run a diagnostic on all jobs in
the pending lists of all schedulers: we want to cherry-pick the real bad job
that caused this JOB TIMEOUT from all schedulers' pending lists.
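
Roughly speaking, that kind of cross-scheduler diagnostic looks like the
sketch below (my_job_is_hung() and the scheduler array are only illustrative
names, not our actual amdgpu code):

/* Sketch only: walk every scheduler's pending list and ask the hw which
 * job really hung, instead of blaming the first entry of one list.
 */
static struct drm_sched_job *
my_find_guilty_job(struct drm_gpu_scheduler **scheds, int num_scheds)
{
	struct drm_sched_job *job, *guilty = NULL;
	int i;

	for (i = 0; i < num_scheds && !guilty; i++) {
		spin_lock(&scheds[i]->job_list_lock);
		list_for_each_entry(job, &scheds[i]->pending_list, list) {
			if (my_job_is_hung(job)) { /* driver-specific check */
				guilty = job;
				break;
			}
		}
		spin_unlock(&scheds[i]->job_list_lock);
	}

	return guilty;
}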

Besides, it is also much more reasonable to park the scheduler while
job_timedout is running; they should have exclusive access to those common
members like sched_job (since the spin_lock is dropped before running into the
vendor's callback).

Hope I explained ourselves well enough.

Thanks 

--
Monk Liu | Cloud-GPU Core team
--

-Original Message-
From: Daniel Vetter  
Sent: Tuesday, August 31, 2021 8:59 PM
To: Liu, Monk 
Cc: amd-gfx@lists.freedesktop.org; dri-de...@lists.freedesktop.org; Chen, 
Jingwen 
Subject: Re: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

Can we please have some actual commit message here, with detailed explanation 
of the race/bug/whatever, how you fix it and why this is the best option?

On Tue, Aug 31, 2021 at 06:35:39PM +0800, Monk Liu wrote:
> tested-by: jingwen chen 
> Signed-off-by: Monk Liu 
> Signed-off-by: jingwen chen 
> ---
>  drivers/gpu/drm/scheduler/sched_main.c | 24 
>  1 file changed, 4 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
> b/drivers/gpu/drm/scheduler/sched_main.c
> index ecf8140..894fdb24 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -319,19 +319,17 @@ static void drm_sched_job_timedout(struct work_struct 
> *work)
>   sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
>  
>   /* Protects against concurrent deletion in drm_sched_get_cleanup_job 
> */
> + if (!__kthread_should_park(sched->thread))

This is a __ function, i.e. considered internal, and it's lockless atomic, i.e. 
unordered. And you're not explaining why this works.

Iow it's probably buggy, and an just unconditionally parking the kthread is 
probably the right thing to do. If it's not the right thing to do, there's a 
bug here for sure.
-Daniel

> + kthread_park(sched->thread);
> +
>   spin_lock(&sched->job_list_lock);
>   job = list_first_entry_or_null(&sched->pending_list,
>  struct drm_sched_job, list);
>  
>   if (job) {
> - /*
> -  * Remove the bad job so it cannot be freed by concurrent
> -  * drm_sched_cleanup_jobs. It will be reinserted back after 
> sched->thread
> -  * is parked at which point it's safe.
> -  */
> - list_del_init(&job->list);
>   spin_unlock(&sched->job_list_lock);
>  
> + /* vendor's timeout_job should call drm_sched_start() */
>   status = job->sched->ops->timedout_job(job);
>  
>   /*
> @@ -393,20 +391,6 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, 
> struct drm_sched_job *bad)
>   kthread_park(sched->thread);
>  
>   /*
> -  * Reinsert back the bad job here - now

[PATCH 2/2] drm/sched: serialize job_timeout and scheduler

2021-08-31 Thread Monk Liu
tested-by: jingwen chen 
Signed-off-by: Monk Liu 
Signed-off-by: jingwen chen 
---
 drivers/gpu/drm/scheduler/sched_main.c | 24 
 1 file changed, 4 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
b/drivers/gpu/drm/scheduler/sched_main.c
index 3e0bbc7..87d72e9 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -319,19 +319,17 @@ static void drm_sched_job_timedout(struct work_struct 
*work)
sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
 
/* Protects against concurrent deletion in drm_sched_get_cleanup_job */
+   if (!__kthread_should_park(sched->thread))
+   kthread_park(sched->thread);
+
spin_lock(&sched->job_list_lock);
job = list_first_entry_or_null(&sched->pending_list,
   struct drm_sched_job, list);
 
if (job) {
-   /*
-* Remove the bad job so it cannot be freed by concurrent
-* drm_sched_cleanup_jobs. It will be reinserted back after 
sched->thread
-* is parked at which point it's safe.
-*/
-   list_del_init(&job->list);
spin_unlock(&sched->job_list_lock);
 
+   /* vendor's timeout_job should call drm_sched_start() */
status = job->sched->ops->timedout_job(job);
 
/*
@@ -393,20 +391,6 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, 
struct drm_sched_job *bad)
kthread_park(sched->thread);
 
/*
-* Reinsert back the bad job here - now it's safe as
-* drm_sched_get_cleanup_job cannot race against us and release the
-* bad job at this point - we parked (waited for) any in progress
-* (earlier) cleanups and drm_sched_get_cleanup_job will not be called
-* now until the scheduler thread is unparked.
-*/
-   if (bad && bad->sched == sched)
-   /*
-* Add at the head of the queue to reflect it was the earliest
-* job extracted.
-*/
-   list_add(&bad->list, &sched->pending_list);
-
-   /*
 * Iterate the job list from later to  earlier one and either deactive
 * their HW callbacks or remove them from pending list if they already
 * signaled.
-- 
2.7.4



Re: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

2021-08-31 Thread Luben Tuikov
On 2021-08-31 16:56, Andrey Grodzovsky wrote:
> On 2021-08-31 12:01 p.m., Luben Tuikov wrote:
>> On 2021-08-31 11:23, Andrey Grodzovsky wrote:
>>> On 2021-08-31 10:38 a.m., Daniel Vetter wrote:
 On Tue, Aug 31, 2021 at 10:20:40AM -0400, Andrey Grodzovsky wrote:
> On 2021-08-31 10:03 a.m., Daniel Vetter wrote:
>> On Tue, Aug 31, 2021 at 09:53:36AM -0400, Andrey Grodzovsky wrote:
>>> It's says patch [2/2] but i can't find patch 1
>>>
>>> On 2021-08-31 6:35 a.m., Monk Liu wrote:
 tested-by: jingwen chen 
 Signed-off-by: Monk Liu 
 Signed-off-by: jingwen chen 
 ---
  drivers/gpu/drm/scheduler/sched_main.c | 24 
 
  1 file changed, 4 insertions(+), 20 deletions(-)

 diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
 b/drivers/gpu/drm/scheduler/sched_main.c
 index ecf8140..894fdb24 100644
 --- a/drivers/gpu/drm/scheduler/sched_main.c
 +++ b/drivers/gpu/drm/scheduler/sched_main.c
 @@ -319,19 +319,17 @@ static void drm_sched_job_timedout(struct 
 work_struct *work)
sched = container_of(work, struct drm_gpu_scheduler, 
 work_tdr.work);
/* Protects against concurrent deletion in 
 drm_sched_get_cleanup_job */
 +  if (!__kthread_should_park(sched->thread))
 +  kthread_park(sched->thread);
 +
>>> As mentioned before, without serializing against other TDR handlers from
>>> other
>>> schedulers you just race here against them, e.g. you parked it now but
>>> another
>>> one in progress will unpark it as part of calling  drm_sched_start for 
>>> other
>>> rings[1]
>>> Unless I am missing something since I haven't found patch [1/2]
>>>
>>> [1] - 
>>> https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c#L5041
>> You need to have your own wq and run all your tdr work on the same wq if
>> your reset has any cross-engine impact.
> IMHO what is problematic in serializing vs. locking (with trylock and bail
> out like we do in [1]) is for multiple TO events arising from same reason
> like maybe one job just waits for another and once first is hanged the
> second will also appear to be hanged triggering it's own TO event.
> In this case multiple TOs event will trigger multiple resets if we 
> serialize
> but if we use lock with trylock the second one will quietly bail out.
>
> [1] 
> https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c#L4903
 Hm so I guess a single wq here, that will hold up all other TO. And they
 should recheck whether the job is moving meanwhile.
>>> Can you clarify about this ? What job should be moving ? The dependent job ?
>>>
>>>
 Also unless you use hw semaphores the job shouldn't even start before the
 deps are singalled, so not sure how this goes wrong?
>>> What about a simple example where
>>> we actually can submit a shader on one ring and a simple
>>> WAIT_REG_MEM packet on another to wait for the shader to write
>>> a specific value to specific memory location. Here you have both of them
>>> started
>>> in close proximity and no explicit dependencies involved (at the
>>> scheduler level)
>>> and yet if the shader hangs also the WAIT_REG_MEM job will hang.
>>>
>>>
 The vm_id flush stuff can make things a bit more fun for your specific
 case, but in your specific case you have to run all TO handlers on the
 same ordered workqueue anyway (because trying to paper over this in other
 ways doesn't work imo).
>>> I didn't get this one.
>> So, awhile back I tried to "serialize" this by moving timed-out jobs
>> into their own timed-out-dedicated list, then freeing them asynchronously,
>> but I never got it to work reliably due to races with low-level drivers and
>> assumptions made way back.
>>
>> My idea was to atomic-move timed-out jobs into their own list, at the time of
>> timeout, and later asynchronously to free them (or better yet, inquire about
>> their state, and free them or move them back--ideally the 

Re: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

2021-08-31 Thread Andrey Grodzovsky



On 2021-08-31 12:01 p.m., Luben Tuikov wrote:

On 2021-08-31 11:23, Andrey Grodzovsky wrote:

On 2021-08-31 10:38 a.m., Daniel Vetter wrote:

On Tue, Aug 31, 2021 at 10:20:40AM -0400, Andrey Grodzovsky wrote:

On 2021-08-31 10:03 a.m., Daniel Vetter wrote:

On Tue, Aug 31, 2021 at 09:53:36AM -0400, Andrey Grodzovsky wrote:

It's says patch [2/2] but i can't find patch 1

On 2021-08-31 6:35 a.m., Monk Liu wrote:

tested-by: jingwen chen 
Signed-off-by: Monk Liu 
Signed-off-by: jingwen chen 
---
 drivers/gpu/drm/scheduler/sched_main.c | 24 
 1 file changed, 4 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
b/drivers/gpu/drm/scheduler/sched_main.c
index ecf8140..894fdb24 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -319,19 +319,17 @@ static void drm_sched_job_timedout(struct work_struct 
*work)
sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
/* Protects against concurrent deletion in drm_sched_get_cleanup_job */
+   if (!__kthread_should_park(sched->thread))
+   kthread_park(sched->thread);
+

As mentioned before, without serializing against other TDR handlers from
other
schedulers you just race here against them, e.g. you parked it now but
another
one in progress will unpark it as part of calling  drm_sched_start for other
rings[1]
Unless I am missing something since I haven't found patch [1/2]

[1] - 
https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c#L5041

You need to have your own wq and run all your tdr work on the same wq if
your reset has any cross-engine impact.

IMHO what is problematic in serializing vs. locking (with trylock and bail
out like we do in [1]) is the case of multiple TO events arising from the same
root cause, e.g. one job just waits for another, and once the first hangs the
second will also appear to be hung, triggering its own TO event.
In this case multiple TO events will trigger multiple resets if we serialize,
but if we use a lock with trylock the second one will quietly bail out.

[1] 
https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c#L4903
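
(For illustration, a minimal sketch of the trylock-and-bail-out option being
described here, assuming a driver-private device struct with a reset_lock
mutex and a my_device_reset() helper, both hypothetical; the real amdgpu code
in [1] is more involved:)

static enum drm_gpu_sched_stat my_timedout_job(struct drm_sched_job *job)
{
	struct my_device *mydev = to_my_device(job->sched);

	/* Another TO handler, e.g. one triggered by a job that only hung
	 * because it waits on the first hung job, already owns the reset,
	 * so quietly bail out instead of resetting a second time. */
	if (!mutex_trylock(&mydev->reset_lock))
		return DRM_GPU_SCHED_STAT_NOMINAL;

	my_device_reset(mydev);		/* hypothetical full device reset */

	mutex_unlock(&mydev->reset_lock);
	return DRM_GPU_SCHED_STAT_NOMINAL;
}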

Hm so I guess a single wq here, that will hold up all other TO. And they
should recheck whether the job is moving meanwhile.

Can you clarify about this ? What job should be moving ? The dependent job ?



Also unless you use hw semaphores the job shouldn't even start before the
deps are signalled, so not sure how this goes wrong?

What about a simple example where
we actually can submit a shader on one ring and a simple
WAIT_REG_MEM packet on another to wait for the shader to write
a specific value to a specific memory location. Here you have both of them
started
in close proximity and no explicit dependencies involved (at the
scheduler level)
and yet if the shader hangs, the WAIT_REG_MEM job will hang as well.



The vm_id flush stuff can make things a bit more fun for your specific
case, but in your specific case you have to run all TO handlers on the
same ordered workqueue anyway (because trying to paper over this in other
ways doesn't work imo).

I didn't get this one.

So, a while back I tried to "serialize" this by moving timed-out jobs
into their own timed-out-dedicated list, then freeing them asynchronously,
but I never got it to work reliably due to races with low-level drivers and
assumptions made way back.

My idea was to atomically move timed-out jobs into their own list at the time of
timeout, and later free them asynchronously (or better yet, inquire about
their state, and free them or move them back--ideally the inquiry is atomic
and done at timeout time before being moved to the timeout list). Anyway...
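
(As an illustration, the "atomically move at timeout time" part could look
roughly like the sketch below, with a hypothetical timedout_list sitting next
to pending_list; the later inspect-and-free step is left out.)

	spin_lock(&sched->job_list_lock);
	job = list_first_entry_or_null(&sched->pending_list,
				       struct drm_sched_job, list);
	if (job)
		/* Move rather than delete: the job is always owned by exactly
		 * one list, so a concurrent cleanup cannot free it under us. */
		list_move(&job->list, &sched->timedout_list);
	spin_unlock(&sched->job_list_lock);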

But I found out that all these knobs and levers weren't in place and I was
getting problems with it and it never materialized.

The paradigm was loosely "let someone else do it", like, "on an event,
move it to a list, and let someone else handle it", or "on an event, mark
it, and let someone else handle it". (loosely borrowed from an iSCSI target
I did many many years ago--it worked well and there were no races, even with
out-of-order executions.)

If 

Re: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

2021-08-31 Thread Luben Tuikov
On 2021-08-31 11:23, Andrey Grodzovsky wrote:
> On 2021-08-31 10:38 a.m., Daniel Vetter wrote:
>> On Tue, Aug 31, 2021 at 10:20:40AM -0400, Andrey Grodzovsky wrote:
>>> On 2021-08-31 10:03 a.m., Daniel Vetter wrote:
 On Tue, Aug 31, 2021 at 09:53:36AM -0400, Andrey Grodzovsky wrote:
> It says patch [2/2] but I can't find patch 1
>
> On 2021-08-31 6:35 a.m., Monk Liu wrote:
>> tested-by: jingwen chen 
>> Signed-off-by: Monk Liu 
>> Signed-off-by: jingwen chen 
>> ---
>> drivers/gpu/drm/scheduler/sched_main.c | 24 
>> 1 file changed, 4 insertions(+), 20 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
>> b/drivers/gpu/drm/scheduler/sched_main.c
>> index ecf8140..894fdb24 100644
>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>> @@ -319,19 +319,17 @@ static void drm_sched_job_timedout(struct 
>> work_struct *work)
>>  sched = container_of(work, struct drm_gpu_scheduler, 
>> work_tdr.work);
>>  /* Protects against concurrent deletion in 
>> drm_sched_get_cleanup_job */
>> +if (!__kthread_should_park(sched->thread))
>> +kthread_park(sched->thread);
>> +
> As mentioned before, without serializing against other TDR handlers from
> other
> schedulers you just race here against them, e.g. you parked it now but
> another
> one in progress will unpark it as part of calling  drm_sched_start for 
> other
> rings[1]
> Unless I am missing something since I haven't found patch [1/2]
>
> [1] - 
> https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c#L5041
 You need to have your own wq and run all your tdr work on the same wq if
 your reset has any cross-engine impact.
>>> IMHO what is problematic in serializing vs. locking (with trylock and bail
>>> out like we do in [1]) is the case of multiple TO events arising from the same
>>> root cause, e.g. one job just waits for another, and once the first hangs the
>>> second will also appear to be hung, triggering its own TO event.
>>> In this case multiple TO events will trigger multiple resets if we serialize,
>>> but if we use a lock with trylock the second one will quietly bail out.
>>>
>>> [1] 
>>> https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c#L4903
>> Hm so I guess a single wq here, that will hold up all other TO. And they
>> should recheck whether the job is moving meanwhile.
>
> Can you clarify about this ? What job should be moving ? The dependent job ?
>
>
>> Also unless you use hw semaphores the job shouldn't even start before the
>> deps are signalled, so not sure how this goes wrong?
>
> What about a simple example where
> we actually can submit a shader on one ring and a simple
> WAIT_REG_MEM packet on another to wait for the shader to write
> a specific value to a specific memory location. Here you have both of them
> started
> in close proximity and no explicit dependencies involved (at the 
> scheduler level)
> and yet if the shader hangs, the WAIT_REG_MEM job will hang as well.
>
>
>> The vm_id flush stuff can make things a bit more fun for your specific
>> case, but in your specific case you have to run all TO handlers on the
>> same ordered workqueue anyway (because trying to paper over this in other
>> ways doesn't work imo).
>
> I didn't get this one.

So, a while back I tried to "serialize" this by moving timed-out jobs
into their own timed-out-dedicated list, then freeing them asynchronously,
but I never got it to work reliably due to races with low-level drivers and
assumptions made way back.

My idea was to atomically move timed-out jobs into their own list at the time of
timeout, and later free them asynchronously (or better yet, inquire about
their state, and free them or move them back--ideally the inquiry is atomic
and done at timeout time before being moved to the timeout list). Anyway...

But I found out that all these knobs and levers weren't in place and I was
getting problems with it and it never materialized.

The paradigm was loosely "let someone else do 

Re: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

2021-08-31 Thread Andrey Grodzovsky



On 2021-08-31 10:38 a.m., Daniel Vetter wrote:

On Tue, Aug 31, 2021 at 10:20:40AM -0400, Andrey Grodzovsky wrote:

On 2021-08-31 10:03 a.m., Daniel Vetter wrote:

On Tue, Aug 31, 2021 at 09:53:36AM -0400, Andrey Grodzovsky wrote:

It says patch [2/2] but I can't find patch 1

On 2021-08-31 6:35 a.m., Monk Liu wrote:

tested-by: jingwen chen 
Signed-off-by: Monk Liu 
Signed-off-by: jingwen chen 
---
drivers/gpu/drm/scheduler/sched_main.c | 24 
1 file changed, 4 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
b/drivers/gpu/drm/scheduler/sched_main.c
index ecf8140..894fdb24 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -319,19 +319,17 @@ static void drm_sched_job_timedout(struct work_struct 
*work)
sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
/* Protects against concurrent deletion in drm_sched_get_cleanup_job */
+   if (!__kthread_should_park(sched->thread))
+   kthread_park(sched->thread);
+

As mentioned before, without serializing against other TDR handlers from
other
schedulers you just race here against them, e.g. you parked it now but
another
one in progress will unpark it as part of calling  drm_sched_start for other
rings[1]
Unless I am missing something since I haven't found patch [1/2]

[1] - 
https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c#L5041

You need to have your own wq and run all your tdr work on the same wq if
your reset has any cross-engine impact.


IMHO what is problematic in serializing vs. locking (with trylock and bail
out like we do in [1]) is the case of multiple TO events arising from the same
root cause, e.g. one job just waits for another, and once the first hangs the
second will also appear to be hung, triggering its own TO event.
In this case multiple TO events will trigger multiple resets if we serialize,
but if we use a lock with trylock the second one will quietly bail out.

[1] 
https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c#L4903

Hm so I guess a single wq here, that will hold up all other TO. And they
should recheck whether the job is moving meanwhile.



Can you clarify about this ? What job should be moving ? The dependent job ?




Also unless you use hw semaphores the job shouldn't even start before the
deps are signalled, so not sure how this goes wrong?



What about a simple example where
we actually can submit a shader on one ring and a simple
WAIT_REG_MEM packet on another to wait for the shader to write
a specific value to a specific memory location. Here you have both of them
started
in close proximity and no explicit dependencies involved (at the 
scheduler level)

and yet if the shader hangs, the WAIT_REG_MEM job will hang as well.




The vm_id flush stuff can make things a bit more fun for your specific
case, but in your specific case you have to run all TO handlers on the
same ordered workqueue anyway (because trying to paper over this in other
ways doesn't work imo).



I didn't get this one.

Andrey




So I think this should all work, no need for tricky cross-scheduler
locking.
-Daniel


Andrey



See

https://dri.freedesktop.org/docs/drm/gpu/drm-mm.html#c.drm_sched_backend_ops

for the ->timeout_job callback docs. I thought I brought this up already?
-Daniel


Yes, this discussion is a continuation of your comment about serializing, I
mentioned before that you proposed it.

Andrey



Andrey



spin_lock(&sched->job_list_lock);
job = list_first_entry_or_null(&sched->pending_list,
   struct drm_sched_job, list);
if (job) {
-   /*
-* Remove the bad job so it cannot be freed by concurrent
-* 

Re: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

2021-08-31 Thread Luben Tuikov
On 2021-08-31 08:59, Daniel Vetter wrote:
> Can we please have some actual commit message here, with detailed
> explanation of the race/bug/whatever, how you fix it and why this is the
> best option?

I agree with Daniel--a narrative form of a commit message is so much easier
for humans to digest. The "[what]"/"[why]"/"[how]" and "issue"/"fix" format is
somewhat dry and uninformative, and leaves much to be desired.

Regards,
Luben

>
> On Tue, Aug 31, 2021 at 06:35:39PM +0800, Monk Liu wrote:
>> tested-by: jingwen chen 
>> Signed-off-by: Monk Liu 
>> Signed-off-by: jingwen chen 
>> ---
>>  drivers/gpu/drm/scheduler/sched_main.c | 24 
>>  1 file changed, 4 insertions(+), 20 deletions(-)
>>
>> diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
>> b/drivers/gpu/drm/scheduler/sched_main.c
>> index ecf8140..894fdb24 100644
>> --- a/drivers/gpu/drm/scheduler/sched_main.c
>> +++ b/drivers/gpu/drm/scheduler/sched_main.c
>> @@ -319,19 +319,17 @@ static void drm_sched_job_timedout(struct work_struct 
>> *work)
>>  sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
>>  
>>  /* Protects against concurrent deletion in drm_sched_get_cleanup_job */
>> +if (!__kthread_should_park(sched->thread))
> This is a __ function, i.e. considered internal, and it's lockless atomic,
> i.e. unordered. And you're not explaining why this works.
>
> Iow it's probably buggy, and just unconditionally parking the kthread
> is probably the right thing to do. If it's not the right thing to do,
> there's a bug here for sure.
> -Daniel
>
>> +kthread_park(sched->thread);
>> +
>>  spin_lock(&sched->job_list_lock);
>>  job = list_first_entry_or_null(&sched->pending_list,
>> struct drm_sched_job, list);
>>  
>>  if (job) {
>> -/*
>> - * Remove the bad job so it cannot be freed by concurrent
>> - * drm_sched_cleanup_jobs. It will be reinserted back after 
>> sched->thread
>> - * is parked at which point it's safe.
>> - */
>> -list_del_init(&job->list);
>>  spin_unlock(&sched->job_list_lock);
>>  
>> +/* vendor's timeout_job should call drm_sched_start() */
>>  status = job->sched->ops->timedout_job(job);
>>  
>>  /*
>> @@ -393,20 +391,6 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, 
>> struct drm_sched_job *bad)
>>  kthread_park(sched->thread);
>>  
>>  /*
>> - * Reinsert back the bad job here - now it's safe as
>> - * drm_sched_get_cleanup_job cannot race against us and release the
>> - * bad job at this point - we parked (waited for) any in progress
>> - * (earlier) cleanups and drm_sched_get_cleanup_job will not be called
>> - * now until the scheduler thread is unparked.
>> - */
>> -if (bad && bad->sched == sched)
>> -/*
>> - * Add at the head of the queue to reflect it was the earliest
>> - * job extracted.
>> - */
>> -list_add(&bad->list, &sched->pending_list);
>> -
>> -/*
>>   * Iterate the job list from later to  earlier one and either deactive
>>   * their HW callbacks or remove them from pending list if they already
>>   * signaled.
>> -- 
>> 2.7.4
>>



Re: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

2021-08-31 Thread Daniel Vetter
On Tue, Aug 31, 2021 at 10:20:40AM -0400, Andrey Grodzovsky wrote:
> 
> On 2021-08-31 10:03 a.m., Daniel Vetter wrote:
> > On Tue, Aug 31, 2021 at 09:53:36AM -0400, Andrey Grodzovsky wrote:
> > > It says patch [2/2] but I can't find patch 1
> > > 
> > > On 2021-08-31 6:35 a.m., Monk Liu wrote:
> > > > tested-by: jingwen chen 
> > > > Signed-off-by: Monk Liu 
> > > > Signed-off-by: jingwen chen 
> > > > ---
> > > >drivers/gpu/drm/scheduler/sched_main.c | 24 
> > > >1 file changed, 4 insertions(+), 20 deletions(-)
> > > > 
> > > > diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
> > > > b/drivers/gpu/drm/scheduler/sched_main.c
> > > > index ecf8140..894fdb24 100644
> > > > --- a/drivers/gpu/drm/scheduler/sched_main.c
> > > > +++ b/drivers/gpu/drm/scheduler/sched_main.c
> > > > @@ -319,19 +319,17 @@ static void drm_sched_job_timedout(struct 
> > > > work_struct *work)
> > > > sched = container_of(work, struct drm_gpu_scheduler, 
> > > > work_tdr.work);
> > > > /* Protects against concurrent deletion in 
> > > > drm_sched_get_cleanup_job */
> > > > +   if (!__kthread_should_park(sched->thread))
> > > > +   kthread_park(sched->thread);
> > > > +
> > > 
> > > As mentioned before, without serializing against other TDR handlers from
> > > other
> > > schedulers you just race here against them, e.g. you parked it now but
> > > another
> > > one in progress will unpark it as part of calling  drm_sched_start for 
> > > other
> > > rings[1]
> > > Unless I am missing something since I haven't found patch [1/2]
> > > 
> > > [1] - 
> > > https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c#L5041
> > You need to have your own wq and run all your tdr work on the same wq if
> > your reset has any cross-engine impact.
> 
> 
> IMHO what is problematic in serializing vs. locking (with trylock and bail
> out like we do in [1]) is the case of multiple TO events arising from the same
> root cause, e.g. one job just waits for another, and once the first hangs the
> second will also appear to be hung, triggering its own TO event.
> In this case multiple TO events will trigger multiple resets if we serialize,
> but if we use a lock with trylock the second one will quietly bail out.
> 
> [1] 
> https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c#L4903

Hm so I guess a single wq here, that will hold up all other TO. And they
should recheck whether the job is moving meanwhile.
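
(A rough sketch of such a recheck at the top of the timeout handler, assuming
the hung job's hw fence is reachable through s_fence->parent and leaving out
how the timeout would be re-armed; illustrative only:)

	struct dma_fence *hw_fence = job->s_fence->parent;

	/* The job may have completed while this handler was queued behind
	 * another engine's reset; if so, it is not actually hung. */
	if (hw_fence && dma_fence_is_signaled(hw_fence))
		return DRM_GPU_SCHED_STAT_NOMINAL;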

Also unless you use hw semaphores the job shouldn't even start before the
deps are signalled, so not sure how this goes wrong?

The vm_id flush stuff can make things a bit more fun for your specific
case, but in your specific case you have to run all TO handlers on the
same ordered workqueue anyway (because trying to paper over this in other
ways doesn't work imo).
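
(For illustration, the "same ordered workqueue" option could be sketched like
this on the driver side: one device-wide ordered wq so at most one reset runs
at a time, in queuing order. my_device, reset_wq and reset_work are
hypothetical names.)

	/* at device init time */
	mydev->reset_wq = alloc_ordered_workqueue("my-gpu-tdr", 0);

static enum drm_gpu_sched_stat my_timedout_job(struct drm_sched_job *job)
{
	struct my_device *mydev = to_my_device(job->sched);

	/* Defer the actual reset to the shared ordered wq so that TO
	 * handling for different rings is serialized by construction. */
	queue_work(mydev->reset_wq, &mydev->reset_work);
	return DRM_GPU_SCHED_STAT_NOMINAL;
}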

So I think this should all work, no need for tricky cross-scheduler
locking.
-Daniel

> 
> Andrey
> 
> 
> > 
> > See
> > 
> > https://dri.freedesktop.org/docs/drm/gpu/drm-mm.html#c.drm_sched_backend_ops
> > 
> > for the ->timeout_job callback docs. I thought I brought this up already?
> > -Daniel
> 
> 
> Yes, this discussion is a continuation of your comment about serializing, I
> mentioned before that you proposed it.
> 
> Andrey
> 
> 
> > 
> > > Andrey
> > > 
> > > 
> > > > spin_lock(&sched->job_list_lock);
> > > > job = list_first_entry_or_null(&sched->pending_list,
> > > >struct drm_sched_job, list);
> > > > if (job) {
> > > > -   /*
> > > > -* Remove the bad job so it cannot be freed by 
> > > > concurrent
> > > > -* drm_sched_cleanup_jobs. It will be reinserted back 
> > > > after sched->thread
> > > > -* is parked at which point it's safe.
> > > > -*/
> > > > -   list_del_init(&job->list);
> > > > spin_unlock(&sched->job_list_lock);
> > > > +   /* vendor's timeout_job should call drm_sched_start() */
> > > > status = job->sched->ops->timedout_job(job);
> > > > /*
> > > > @@ -393,20 +391,6 @@ void drm_sched_stop(struct drm_gpu_scheduler 
> > > > 

Re: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

2021-08-31 Thread Andrey Grodzovsky



On 2021-08-31 10:03 a.m., Daniel Vetter wrote:

On Tue, Aug 31, 2021 at 09:53:36AM -0400, Andrey Grodzovsky wrote:

It says patch [2/2] but I can't find patch 1

On 2021-08-31 6:35 a.m., Monk Liu wrote:

tested-by: jingwen chen 
Signed-off-by: Monk Liu 
Signed-off-by: jingwen chen 
---
   drivers/gpu/drm/scheduler/sched_main.c | 24 
   1 file changed, 4 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
b/drivers/gpu/drm/scheduler/sched_main.c
index ecf8140..894fdb24 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -319,19 +319,17 @@ static void drm_sched_job_timedout(struct work_struct 
*work)
sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
/* Protects against concurrent deletion in drm_sched_get_cleanup_job */
+   if (!__kthread_should_park(sched->thread))
+   kthread_park(sched->thread);
+


As mentioned before, without serializing against other TDR handlers from
other
schedulers you just race here against them, e.g. you parked it now but
another
one in progress will unpark it as part of calling  drm_sched_start for other
rings[1]
Unless I am missing something since I haven't found patch [1/2]

[1] - 
https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c#L5041

You need to have your own wq and run all your tdr work on the same wq if
your reset has any cross-engine impact.



IMHO what is problematic in serializing vs. locking (with trylock and
bail out like we do in [1]) is the case of multiple TO events arising from the
same root cause, e.g. one job just waits for another, and once the first hangs
the second will also appear to be hung, triggering its own TO event.
In this case multiple TO events will trigger multiple resets if we
serialize, but if we use a lock with trylock the second one will quietly
bail out.


[1] 
https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c#L4903


Andrey




See

https://dri.freedesktop.org/docs/drm/gpu/drm-mm.html#c.drm_sched_backend_ops

for the ->timeout_job callback docs. I thought I brought this up already?
-Daniel



Yes, this discussion is a continuation of your comment about 
serializing, I mentioned before that you proposed it.


Andrey





Andrey



spin_lock(&sched->job_list_lock);
job = list_first_entry_or_null(&sched->pending_list,
   struct drm_sched_job, list);
if (job) {
-   /*
-* Remove the bad job so it cannot be freed by concurrent
-* drm_sched_cleanup_jobs. It will be reinserted back after 
sched->thread
-* is parked at which point it's safe.
-*/
-   list_del_init(&job->list);
spin_unlock(&sched->job_list_lock);
+   /* vendor's timeout_job should call drm_sched_start() */
status = job->sched->ops->timedout_job(job);
/*
@@ -393,20 +391,6 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, 
struct drm_sched_job *bad)
kthread_park(sched->thread);
/*
-* Reinsert back the bad job here - now it's safe as
-* drm_sched_get_cleanup_job cannot race against us and release the
-* bad job at this point - we parked (waited for) any in progress
-* (earlier) cleanups and drm_sched_get_cleanup_job will not be called
-* now until the scheduler thread is unparked.
-*/
-   if (bad && bad->sched == sched)
-   /*
-* Add at the head of the queue to reflect it was the earliest
-* job extracted.
-*/
-   list_add(&bad->list, &sched->pending_list);
-
-   /*
 * Iterate the job list from later to  earlier one and either deactive
 * their HW callbacks or remove them from pending list if they already
 * signaled.


Re: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

2021-08-31 Thread Daniel Vetter
On Tue, Aug 31, 2021 at 09:53:36AM -0400, Andrey Grodzovsky wrote:
> It says patch [2/2] but I can't find patch 1
> 
> On 2021-08-31 6:35 a.m., Monk Liu wrote:
> > tested-by: jingwen chen 
> > Signed-off-by: Monk Liu 
> > Signed-off-by: jingwen chen 
> > ---
> >   drivers/gpu/drm/scheduler/sched_main.c | 24 
> >   1 file changed, 4 insertions(+), 20 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
> > b/drivers/gpu/drm/scheduler/sched_main.c
> > index ecf8140..894fdb24 100644
> > --- a/drivers/gpu/drm/scheduler/sched_main.c
> > +++ b/drivers/gpu/drm/scheduler/sched_main.c
> > @@ -319,19 +319,17 @@ static void drm_sched_job_timedout(struct work_struct 
> > *work)
> > sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
> > /* Protects against concurrent deletion in drm_sched_get_cleanup_job */
> > +   if (!__kthread_should_park(sched->thread))
> > +   kthread_park(sched->thread);
> > +
> 
> 
> As mentioned before, without serializing against other TDR handlers from
> other
> schedulers you just race here against them, e.g. you parked it now but
> another
> one in progress will unpark it as part of calling  drm_sched_start for other
> rings[1]
> Unless I am missing something since I haven't found patch [1/2]
> 
> [1] - 
> https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c#L5041

You need to have your own wq and run all your tdr work on the same wq if
your reset has any cross-engine impact.

See

https://dri.freedesktop.org/docs/drm/gpu/drm-mm.html#c.drm_sched_backend_ops

for the ->timeout_job callback docs. I thought I brought this up already?
-Daniel

> 
> Andrey
> 
> 
> > spin_lock(&sched->job_list_lock);
> > job = list_first_entry_or_null(&sched->pending_list,
> >struct drm_sched_job, list);
> > if (job) {
> > -   /*
> > -* Remove the bad job so it cannot be freed by concurrent
> > -* drm_sched_cleanup_jobs. It will be reinserted back after 
> > sched->thread
> > -* is parked at which point it's safe.
> > -*/
> > -   list_del_init(&job->list);
> > spin_unlock(&sched->job_list_lock);
> > +   /* vendor's timeout_job should call drm_sched_start() */
> > status = job->sched->ops->timedout_job(job);
> > /*
> > @@ -393,20 +391,6 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, 
> > struct drm_sched_job *bad)
> > kthread_park(sched->thread);
> > /*
> > -* Reinsert back the bad job here - now it's safe as
> > -* drm_sched_get_cleanup_job cannot race against us and release the
> > -* bad job at this point - we parked (waited for) any in progress
> > -* (earlier) cleanups and drm_sched_get_cleanup_job will not be called
> > -* now until the scheduler thread is unparked.
> > -*/
> > -   if (bad && bad->sched == sched)
> > -   /*
> > -* Add at the head of the queue to reflect it was the earliest
> > -* job extracted.
> > -*/
> > -   list_add(&bad->list, &sched->pending_list);
> > -
> > -   /*
> >  * Iterate the job list from later to  earlier one and either deactive
> >  * their HW callbacks or remove them from pending list if they already
> >  * signaled.

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

2021-08-31 Thread Andrey Grodzovsky

It says patch [2/2] but I can't find patch 1

On 2021-08-31 6:35 a.m., Monk Liu wrote:

tested-by: jingwen chen 
Signed-off-by: Monk Liu 
Signed-off-by: jingwen chen 
---
  drivers/gpu/drm/scheduler/sched_main.c | 24 
  1 file changed, 4 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
b/drivers/gpu/drm/scheduler/sched_main.c
index ecf8140..894fdb24 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -319,19 +319,17 @@ static void drm_sched_job_timedout(struct work_struct 
*work)
sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
  
  	/* Protects against concurrent deletion in drm_sched_get_cleanup_job */

+   if (!__kthread_should_park(sched->thread))
+   kthread_park(sched->thread);
+



As mentioned before, without serializing against other TDR handlers from 
other
schedulers you just race here against them, e.g. you parked it now but 
another
one in progress will unpark it as part of calling  drm_sched_start for 
other rings[1]

Unless I am missing something since I haven't found patch [1/2]

[1] - 
https://elixir.bootlin.com/linux/latest/source/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c#L5041


Andrey



spin_lock(&sched->job_list_lock);
job = list_first_entry_or_null(&sched->pending_list,
   struct drm_sched_job, list);
  
  	if (job) {

-   /*
-* Remove the bad job so it cannot be freed by concurrent
-* drm_sched_cleanup_jobs. It will be reinserted back after 
sched->thread
-* is parked at which point it's safe.
-*/
-   list_del_init(&job->list);
spin_unlock(&sched->job_list_lock);
  
+		/* vendor's timeout_job should call drm_sched_start() */

status = job->sched->ops->timedout_job(job);
  
  		/*

@@ -393,20 +391,6 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, 
struct drm_sched_job *bad)
kthread_park(sched->thread);
  
  	/*

-* Reinsert back the bad job here - now it's safe as
-* drm_sched_get_cleanup_job cannot race against us and release the
-* bad job at this point - we parked (waited for) any in progress
-* (earlier) cleanups and drm_sched_get_cleanup_job will not be called
-* now until the scheduler thread is unparked.
-*/
-   if (bad && bad->sched == sched)
-   /*
-* Add at the head of the queue to reflect it was the earliest
-* job extracted.
-*/
-   list_add(&bad->list, &sched->pending_list);
-
-   /*
 * Iterate the job list from later to  earlier one and either deactive
 * their HW callbacks or remove them from pending list if they already
 * signaled.


Re: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

2021-08-31 Thread Daniel Vetter
On Tue, Aug 31, 2021 at 02:59:02PM +0200, Daniel Vetter wrote:
> Can we please have some actual commit message here, with detailed
> explanation of the race/bug/whatever, how you fix it and why this is the
> best option?
> 
> On Tue, Aug 31, 2021 at 06:35:39PM +0800, Monk Liu wrote:
> > tested-by: jingwen chen 
> > Signed-off-by: Monk Liu 
> > Signed-off-by: jingwen chen 
> > ---
> >  drivers/gpu/drm/scheduler/sched_main.c | 24 
> >  1 file changed, 4 insertions(+), 20 deletions(-)
> > 
> > diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
> > b/drivers/gpu/drm/scheduler/sched_main.c
> > index ecf8140..894fdb24 100644
> > --- a/drivers/gpu/drm/scheduler/sched_main.c
> > +++ b/drivers/gpu/drm/scheduler/sched_main.c
> > @@ -319,19 +319,17 @@ static void drm_sched_job_timedout(struct work_struct 
> > *work)
> > sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
> >  
> > /* Protects against concurrent deletion in drm_sched_get_cleanup_job */
> > +   if (!__kthread_should_park(sched->thread))
> 
> This is a __ function, i.e. considered internal, and it's lockless atomic,
> i.e. unordered. And you're not explaining why this works.
> 
> Iow it's probably buggy, and just unconditionally parking the kthread
> is probably the right thing to do. If it's not the right thing to do,
> there's a bug here for sure.

Also why don't we reuse the function drivers already have to stop a
scheduler thread? We seem to have two kthread_park now, that's probably
one too many.
-Daniel

> > +   kthread_park(sched->thread);
> > +
> > spin_lock(&sched->job_list_lock);
> > job = list_first_entry_or_null(&sched->pending_list,
> >struct drm_sched_job, list);
> >  
> > if (job) {
> > -   /*
> > -* Remove the bad job so it cannot be freed by concurrent
> > -* drm_sched_cleanup_jobs. It will be reinserted back after 
> > sched->thread
> > -* is parked at which point it's safe.
> > -*/
> > -   list_del_init(&job->list);
> > spin_unlock(&sched->job_list_lock);
> >  
> > +   /* vendor's timeout_job should call drm_sched_start() */
> > status = job->sched->ops->timedout_job(job);
> >  
> > /*
> > @@ -393,20 +391,6 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, 
> > struct drm_sched_job *bad)
> > kthread_park(sched->thread);
> >  
> > /*
> > -* Reinsert back the bad job here - now it's safe as
> > -* drm_sched_get_cleanup_job cannot race against us and release the
> > -* bad job at this point - we parked (waited for) any in progress
> > -* (earlier) cleanups and drm_sched_get_cleanup_job will not be called
> > -* now until the scheduler thread is unparked.
> > -*/
> > -   if (bad && bad->sched == sched)
> > -   /*
> > -* Add at the head of the queue to reflect it was the earliest
> > -* job extracted.
> > -*/
> > -   list_add(&bad->list, &sched->pending_list);
> > -
> > -   /*
> >  * Iterate the job list from later to  earlier one and either deactive
> >  * their HW callbacks or remove them from pending list if they already
> >  * signaled.
> > -- 
> > 2.7.4
> > 
> 
> -- 
> Daniel Vetter
> Software Engineer, Intel Corporation
> http://blog.ffwll.ch

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


Re: [PATCH 2/2] drm/sched: serialize job_timeout and scheduler

2021-08-31 Thread Daniel Vetter
Can we please have some actual commit message here, with detailed
explanation of the race/bug/whatever, how you fix it and why this is the
best option?

On Tue, Aug 31, 2021 at 06:35:39PM +0800, Monk Liu wrote:
> tested-by: jingwen chen 
> Signed-off-by: Monk Liu 
> Signed-off-by: jingwen chen 
> ---
>  drivers/gpu/drm/scheduler/sched_main.c | 24 
>  1 file changed, 4 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
> b/drivers/gpu/drm/scheduler/sched_main.c
> index ecf8140..894fdb24 100644
> --- a/drivers/gpu/drm/scheduler/sched_main.c
> +++ b/drivers/gpu/drm/scheduler/sched_main.c
> @@ -319,19 +319,17 @@ static void drm_sched_job_timedout(struct work_struct 
> *work)
>   sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
>  
>   /* Protects against concurrent deletion in drm_sched_get_cleanup_job */
> + if (!__kthread_should_park(sched->thread))

This is a __ function, i.e. considered internal, and it's lockless atomic,
i.e. unordered. And you're not explaining why this works.

Iow it's probably buggy, and just unconditionally parking the kthread
is probably the right thing to do. If it's not the right thing to do,
there's a bug here for sure.
-Daniel

> + kthread_park(sched->thread);
> +
>   spin_lock(&sched->job_list_lock);
>   job = list_first_entry_or_null(&sched->pending_list,
>  struct drm_sched_job, list);
>  
>   if (job) {
> - /*
> -  * Remove the bad job so it cannot be freed by concurrent
> -  * drm_sched_cleanup_jobs. It will be reinserted back after 
> sched->thread
> -  * is parked at which point it's safe.
> -  */
> - list_del_init(&job->list);
>   spin_unlock(&sched->job_list_lock);
>  
> + /* vendor's timeout_job should call drm_sched_start() */
>   status = job->sched->ops->timedout_job(job);
>  
>   /*
> @@ -393,20 +391,6 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, 
> struct drm_sched_job *bad)
>   kthread_park(sched->thread);
>  
>   /*
> -  * Reinsert back the bad job here - now it's safe as
> -  * drm_sched_get_cleanup_job cannot race against us and release the
> -  * bad job at this point - we parked (waited for) any in progress
> -  * (earlier) cleanups and drm_sched_get_cleanup_job will not be called
> -  * now until the scheduler thread is unparked.
> -  */
> - if (bad && bad->sched == sched)
> - /*
> -  * Add at the head of the queue to reflect it was the earliest
> -  * job extracted.
> -  */
> - list_add(&bad->list, &sched->pending_list);
> -
> - /*
>* Iterate the job list from later to  earlier one and either deactive
>* their HW callbacks or remove them from pending list if they already
>* signaled.
> -- 
> 2.7.4
> 

-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


[PATCH 2/2] drm/sched: serialize job_timeout and scheduler

2021-08-31 Thread Monk Liu
tested-by: jingwen chen 
Signed-off-by: Monk Liu 
Signed-off-by: jingwen chen 
---
 drivers/gpu/drm/scheduler/sched_main.c | 24 
 1 file changed, 4 insertions(+), 20 deletions(-)

diff --git a/drivers/gpu/drm/scheduler/sched_main.c 
b/drivers/gpu/drm/scheduler/sched_main.c
index ecf8140..894fdb24 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -319,19 +319,17 @@ static void drm_sched_job_timedout(struct work_struct 
*work)
sched = container_of(work, struct drm_gpu_scheduler, work_tdr.work);
 
/* Protects against concurrent deletion in drm_sched_get_cleanup_job */
+   if (!__kthread_should_park(sched->thread))
+   kthread_park(sched->thread);
+
spin_lock(&sched->job_list_lock);
job = list_first_entry_or_null(&sched->pending_list,
   struct drm_sched_job, list);
 
if (job) {
-   /*
-* Remove the bad job so it cannot be freed by concurrent
-* drm_sched_cleanup_jobs. It will be reinserted back after 
sched->thread
-* is parked at which point it's safe.
-*/
-   list_del_init(&job->list);
spin_unlock(&sched->job_list_lock);
 
+   /* vendor's timeout_job should call drm_sched_start() */
status = job->sched->ops->timedout_job(job);
 
/*
@@ -393,20 +391,6 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, 
struct drm_sched_job *bad)
kthread_park(sched->thread);
 
/*
-* Reinsert back the bad job here - now it's safe as
-* drm_sched_get_cleanup_job cannot race against us and release the
-* bad job at this point - we parked (waited for) any in progress
-* (earlier) cleanups and drm_sched_get_cleanup_job will not be called
-* now until the scheduler thread is unparked.
-*/
-   if (bad && bad->sched == sched)
-   /*
-* Add at the head of the queue to reflect it was the earliest
-* job extracted.
-*/
-   list_add(&bad->list, &sched->pending_list);
-
-   /*
 * Iterate the job list from later to  earlier one and either deactive
 * their HW callbacks or remove them from pending list if they already
 * signaled.
-- 
2.7.4
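
(As an illustration of what "vendor's timeout_job should call drm_sched_start()"
asks of the driver, a bare-bones sketch of such a callback. my_device and
my_device_reset() are hypothetical, amdgpu's real handler does considerably
more, and the interaction with the kthread_park() already done in
drm_sched_job_timedout() above is glossed over.)

static enum drm_gpu_sched_stat my_timedout_job(struct drm_sched_job *job)
{
	struct drm_gpu_scheduler *sched = job->sched;
	struct my_device *mydev = to_my_device(sched);

	drm_sched_stop(sched, job);	/* park the thread, detach done callbacks */
	my_device_reset(mydev);		/* hypothetical hw reset */
	drm_sched_resubmit_jobs(sched);	/* re-queue jobs that never completed */
	drm_sched_start(sched, true);	/* unpark and restart the scheduler */

	return DRM_GPU_SCHED_STAT_NOMINAL;
}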