On 01/11/2019 05:03 PM, Andrey Grodzovsky wrote:
>
>
> On 01/11/2019 02:11 PM, Koenig, Christian wrote:
>> Am 11.01.19 um 16:37 schrieb Grodzovsky, Andrey:
>>> On 01/11/2019 04:42 AM, Koenig, Christian wrote:
>>>> Am 10.01.19 um 16:56 schrieb Grodzovsky, Andrey:
>>>>> [SNIP]
>>>>>>>> But we will not be adding the cb back in drm_sched_stop anymore;
>>>>>>>> now we are only going to add the cb back in drm_sched_start after
>>>>>>>> rerunning those jobs in drm_sched_resubmit_jobs and assigning them
>>>>>>>> a new parent there anyway.
>>>>>>> Yeah, but when we find that we don't need to reset anything anymore
>>>>>>> then adding the callbacks again won't be possible any more.
>>>>>>>
>>>>>>> Christian.
>>>>>> I am not sure I understand it, can you point me to an example of how
>>>>>> this will happen? I am attaching my latest patches, which wait only
>>>>>> for the last job's fence, just so we are on the same page regarding
>>>>>> the code.
>>>> Well the whole idea is to prepare all schedulers, then check once more
>>>> if the offending job hasn't completed in the meantime.
>>>>
>>>> If the job completed we need to be able to rollback everything and
>>>> continue as if nothing had happened.
>>>>
>>>> Christian.
>>> Oh, but this piece of functionality - skipping the HW ASIC reset in
>>> case the guilty job is done - is totally missing from this patch series
>>> and so needs to be added. So what you are saying is that for the case
>>> where we skip the HW ASIC reset because the guilty job did complete, we
>>> also need to skip resubmitting the jobs in drm_sched_resubmit_jobs and
>>> hence preserve the old parent fence pointer for reuse? If so I would
>>> like to add all this functionality as a third patch, since the first 2
>>> patches are more about resolving a race condition with jobs in flight
>>> while doing a reset - what do you think?
>> Yeah, sounds perfectly fine to me.
>>
>> Christian.
>
> I realized there is another complication now for the XGMI hive use case:
> we currently skip gpu recovery for an adev if another gpu recovery for a
> different adev in the same hive is already running, under the assumption
> that we are going to reset all devices in the hive anyway and that this
> covers our own dev too. But if we now choose to skip the HW ASIC reset
> because our guilty job did finish, we will also not HW reset any other
> device in the hive, even if one of them might actually have had a bad
> job, wanted to do gpu recovery, but skipped it because our recovery was
> in progress at the time.
> My general idea is to keep a list of guilty jobs per hive: when you start
> gpu recovery you first add your guilty job to the hive and trylock
> hive->reset_lock. The owner of hive->reset_lock (the gpu recovery in
> progress), once it has finished its recovery and released
> hive->reset_lock, should go over hive->bad_jobs_list and, if at least one
> of the jobs is still not signaled (not done), trigger another gpu
> recovery, and so on. If you do manage to trylock you also go over the
> list, clean it and perform the recovery. This list probably needs to be
> protected with a per-hive lock.
> I also think we can for now at least finish reviewing the first 2 patches
> and submit them since, as I said before, they are not dealing with this
> topic but with fixing existing race conditions. If you are OK with that I
> can send for review the last iteration of the first 2 patches, where I
> wait for the last fence in the ring mirror list.
>
> Andrey

I implemented HW reset avoidance, including the XGMI use case, according to
the plan I specified. The patch is attached, but I can't test it yet due to
an XGMI regression in PSP which is supposed to be fixed soon. Please take a
look.
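
To make the overall flow easier to review, here is a rough outline of what
amdgpu_device_gpu_recover() does with this patch applied. This is only a
simplified sketch of the attached code: the per-device and per-ring loops
over the hive are collapsed into single calls, error handling and adev
locking are dropped, and the final recheck of delegated jobs is folded into
a hypothetical helper just for readability.

	/* Rough outline only - see the attached patch for the real code */
	if (hive && adev->gmc.xgmi.num_physical_nodes > 1) {
		/* Publish our guilty job so the reset owner can evaluate it */
		if (job) {
			mutex_lock(&hive->hung_job_list_lock);
			list_add_tail(&job->node, &hive->hung_job_list);
			mutex_unlock(&hive->hung_job_list_lock);
		}

		/* Someone else already resets the hive - delegate to them */
		if (!mutex_trylock(&hive->reset_lock))
			return 0;
	}

	/* Stop schedulers, wait for in-flight fences, sample the guilty job */
	amdgpu_device_pre_asic_reset(adev, job, &need_full_reset, &job_done);

	/* Skip the HW ASIC reset if the guilty job signaled in the meantime */
	if (!job || !job_done)
		amdgpu_do_asic_reset(hive, device_list_handle, &need_full_reset);

	/* Resubmit only if we really did reset, then restart the schedulers */
	amdgpu_device_post_asic_reset(adev, job, job_done);

	if (hive)
		mutex_unlock(&hive->reset_lock);

	/*
	 * If the HW reset was skipped, go over the hung jobs other nodes
	 * delegated to us and trigger another recovery for any of them that
	 * is still not signaled (hypothetical helper name, for brevity).
	 */
	if (job_done && hive)
		recheck_hive_hung_jobs(hive);

The recheck at the end is what guarantees that a node which delegated its
recovery to us still gets a real HW reset if its own job never completed.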

Andrey

>
>>
>>> Andrey
>>>>>> Andrey
>>>>>>
>

From 60085547eec4002fe447783be68e7f14342944ff Mon Sep 17 00:00:00 2001
From: Andrey Grodzovsky <andrey.grodzov...@amd.com>
Date: Tue, 15 Jan 2019 15:39:38 -0500
Subject: drm/sched: Skip HW reset if guilty job is signaled.

Also take care of the XGMI use case - special care needs to be taken
of other devices in the hive whose jobs timed out but delegated their
reset to the current reset instance, assuming it would do the HW reset
for them. In that case the reset in progress, after finishing (and if
the HW reset was optimized away), will inspect a per-hive list of jobs
and issue another reset for any job which has still not finished.

Signed-off-by: Andrey Grodzovsky <andrey.grodzov...@amd.com>
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 100 ++++++++++++++++++++++-------
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.c    |  29 +++++++++
 drivers/gpu/drm/amd/amdgpu/amdgpu_job.h    |   2 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c   |   3 +
 drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.h   |   4 +-
 drivers/gpu/drm/etnaviv/etnaviv_sched.c    |   2 +-
 drivers/gpu/drm/scheduler/sched_main.c     |  18 ++++--
 drivers/gpu/drm/v3d/v3d_sched.c            |   2 +-
 include/drm/gpu_scheduler.h                |   3 +-
 9 files changed, 129 insertions(+), 34 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index e2f578a..a772db9 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
@@ -3301,7 +3301,8 @@ bool amdgpu_device_should_recover_gpu(struct amdgpu_device *adev)
 
 static int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
 					struct amdgpu_job *job,
-					bool *need_full_reset_arg)
+					bool *need_full_reset_arg,
+					bool *job_done)
 {
 	int i, r = 0;
 	bool need_full_reset  = *need_full_reset_arg;
@@ -3313,13 +3314,14 @@ static int amdgpu_device_pre_asic_reset(struct amdgpu_device *adev,
 		if (!ring || !ring->sched.thread)
 			continue;
 
-		drm_sched_stop(&ring->sched, job ? &job->base : NULL);
+		drm_sched_stop(&ring->sched, job ? &job->base : NULL,
+				job ? job_done : NULL);
 
 		/* after all hw jobs are reset, hw fence is meaningless, so force_completion */
 		amdgpu_fence_driver_force_completion(ring);
 	}
 
-	if(job)
+	if(job && job_done && !(*job_done))
 		drm_sched_increase_karma(&job->base);
 
 
@@ -3457,7 +3459,8 @@ static int amdgpu_do_asic_reset(struct amdgpu_hive_info *hive,
 }
 
 static void amdgpu_device_post_asic_reset(struct amdgpu_device *adev,
-					  struct amdgpu_job *job)
+					  struct amdgpu_job *job,
+					  bool job_done)
 {
 	int i;
 
@@ -3467,7 +3470,7 @@ static void amdgpu_device_post_asic_reset(struct amdgpu_device *adev,
 		if (!ring || !ring->sched.thread)
 			continue;
 
-		if (!adev->asic_reset_res)
+		if (!adev->asic_reset_res && (!job || !job_done))
 			drm_sched_resubmit_jobs(&ring->sched);
 
 		drm_sched_start(&ring->sched, !adev->asic_reset_res);
@@ -3517,7 +3520,7 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
 {
 	int r;
 	struct amdgpu_hive_info *hive = NULL;
-	bool need_full_reset = false;
+	bool need_full_reset = false, job_done = false;
 	struct amdgpu_device *tmp_adev = NULL;
 	struct list_head device_list, *device_list_handle =  NULL;
 
@@ -3529,17 +3532,35 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
 	 * In case of XGMI hive disallow concurrent resets to be triggered
 	 * by different nodes. No point also since the one node already executing
 	 * reset will also reset all the other nodes in the hive.
+	 *
+	 * In case someone else is already executing the reset for you, let
+	 * them know that your job also timed out so they can evaluate it
+	 * after completing the reset.
 	 */
 	hive = amdgpu_get_xgmi_hive(adev, 0);
-	if (hive && adev->gmc.xgmi.num_physical_nodes > 1 &&
-	    !mutex_trylock(&hive->reset_lock))
-		return 0;
+	if (hive && adev->gmc.xgmi.num_physical_nodes > 1) {
+		if (job) {
+			mutex_lock(&hive->hung_job_list_lock);
+			list_add_tail(&job->node, &hive->hung_job_list);
+			mutex_unlock(&hive->hung_job_list_lock);
+		}
+
+		if (mutex_trylock(&hive->reset_lock)) {
+			mutex_lock(&hive->hung_job_list_lock);
+			list_del_init(&job->node);
+			mutex_unlock(&hive->hung_job_list_lock);
+		}
+		else {
+			return 0;
+		}
+	}
 
 	/* Start with adev pre asic reset first for soft reset check.*/
 	amdgpu_device_lock_adev(adev);
 	r = amdgpu_device_pre_asic_reset(adev,
 					 job,
-					 &need_full_reset);
+					 &need_full_reset,
+					 &job_done);
 	if (r) {
 		/*TODO Should we stop ?*/
 		DRM_ERROR("GPU pre asic reset failed with err, %d for drm dev, %s ",
@@ -3574,7 +3595,8 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
 		amdgpu_device_lock_adev(tmp_adev);
 		r = amdgpu_device_pre_asic_reset(tmp_adev,
 						 NULL,
-						 &need_full_reset);
+						 &need_full_reset,
+						 NULL);
 		/*TODO Should we stop ?*/
 		if (r) {
 			DRM_ERROR("GPU pre asic reset failed with err, %d for drm dev, %s ",
@@ -3583,21 +3605,26 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
 		}
 	}
 
-	/* Actual ASIC resets if needed.*/
-	/* TODO Implement XGMI hive reset logic for SRIOV */
-	if (amdgpu_sriov_vf(adev)) {
-		r = amdgpu_device_reset_sriov(adev, job ? false : true);
-		if (r)
-			adev->asic_reset_res = r;
-	} else {
-		r  = amdgpu_do_asic_reset(hive, device_list_handle, &need_full_reset);
-		if (r && r == -EAGAIN)
-			goto retry;
+	/* Skip HW reset if by now guilty job managed to finish */
+	if (!job || !job_done) {
+		/* Actual ASIC resets if needed.*/
+		/* TODO Implement XGMI hive reset logic for SRIOV */
+		if (amdgpu_sriov_vf(adev)) {
+			r = amdgpu_device_reset_sriov(adev, job ? false : true);
+			if (r)
+				adev->asic_reset_res = r;
+		} else {
+			r  = amdgpu_do_asic_reset(hive, device_list_handle, &need_full_reset);
+			if (r && r == -EAGAIN)
+				goto retry;
+		}
 	}
 
 	/* Post ASIC reset for all devs .*/
 	list_for_each_entry(tmp_adev, device_list_handle, gmc.xgmi.head) {
-		amdgpu_device_post_asic_reset(tmp_adev, tmp_adev == adev ? job : NULL);
+		amdgpu_device_post_asic_reset(tmp_adev,
+					      tmp_adev == adev ? job : NULL,
+					      job_done);
 
 		if (r) {
 			/* bad news, how to tell it to userspace ? */
@@ -3613,8 +3640,35 @@ int amdgpu_device_gpu_recover(struct amdgpu_device *adev,
 	if (hive && adev->gmc.xgmi.num_physical_nodes > 1)
 		mutex_unlock(&hive->reset_lock);
 
-	if (r)
+	/*
+	 * In case of XGMI and an optimized reset (no HW reset), special care
+	 * needs to be taken of other devices in the hive whose jobs timed out
+	 * but delegated the reset to THIS instance assuming it would reset them.
+	 */
+	if (r) {
 		dev_info(adev->dev, "GPU reset end with ret = %d\n", r);
+	} else if (job_done && hive && adev->gmc.xgmi.num_physical_nodes > 1) {
+		struct amdgpu_job *job, *tmp_job;
+		bool repeat = false;
+
+		mutex_lock(&hive->hung_job_list_lock);
+		list_for_each_entry_safe(job, tmp_job, &hive->hung_job_list, node) {
+			if (!dma_fence_is_signaled(&job->base.s_fence->finished)) {
+				repeat = true;
+				break;
+			}
+			else
+				list_del_init(&job->node);
+		}
+		mutex_unlock(&hive->hung_job_list_lock);
+
+		/* I assume recursion can't be too deep here */
+		if (repeat) {
+			tmp_adev = (to_amdgpu_ring(job->base.sched))->adev;
+			r = amdgpu_device_gpu_recover(tmp_adev, job);
+		}
+	}
+
 	return r;
 }
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
index 0a17fb1..cd6221b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.c
@@ -27,6 +27,7 @@
 #include <drm/drmP.h>
 #include "amdgpu.h"
 #include "amdgpu_trace.h"
+#include "amdgpu_xgmi.h"
 
 static void amdgpu_job_timedout(struct drm_sched_job *s_job)
 {
@@ -81,6 +82,8 @@ int amdgpu_job_alloc(struct amdgpu_device *adev, unsigned num_ibs,
 	(*job)->vram_lost_counter = atomic_read(&adev->vram_lost_counter);
 	(*job)->vm_pd_addr = AMDGPU_BO_INVALID_OFFSET;
 
+	INIT_LIST_HEAD(&(*job)->node);
+
 	return 0;
 }
 
@@ -116,7 +119,33 @@ void amdgpu_job_free_resources(struct amdgpu_job *job)
 static void amdgpu_job_free_cb(struct drm_sched_job *s_job)
 {
 	struct amdgpu_ring *ring = to_amdgpu_ring(s_job->sched);
+	struct amdgpu_device *tmp_adev, *adev = ring->adev;
 	struct amdgpu_job *job = to_amdgpu_job(s_job);
+	struct amdgpu_hive_info *hive = amdgpu_get_xgmi_hive(adev, 0);
+
+	/*
+	 * In case of an XGMI hive we must flush any reset work still in
+	 * progress in the entire hive if this job caused a timeout, because
+	 * the reset work might still access this job through hung_job_list.
+	 */
+	if (hive && adev->gmc.xgmi.num_physical_nodes > 1) {
+		int i;
+		struct amdgpu_ring *ring;
+
+		/* Flush only if this job is on hive's hung_job_list */
+		if (!list_empty_careful(&s_job->node)) {
+			list_for_each_entry(tmp_adev, &hive->device_list, gmc.xgmi.head) {
+				for (i = 0; i < AMDGPU_MAX_RINGS; i++) {
+					ring = tmp_adev->rings[i];
+
+					if (!ring || !ring->sched.thread)
+						continue;
+
+					flush_delayed_work(&ring->sched.work_tdr);
+				}
+			}
+		}
+	}
 
 	drm_sched_job_cleanup(s_job);
 
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
index e1b46a6..5b8cef8 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_job.h
@@ -60,6 +60,8 @@ struct amdgpu_job {
 	uint64_t		uf_addr;
 	uint64_t		uf_sequence;
 
+	/* XGMI hive list node */
+	struct list_head		node;
 };
 
 int amdgpu_job_alloc(struct amdgpu_device *adev, unsigned num_ibs,
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
index dac18745..1f27f4b 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.c
@@ -70,6 +70,8 @@ struct amdgpu_hive_info *amdgpu_get_xgmi_hive(struct amdgpu_device *adev, int lo
 	INIT_LIST_HEAD(&tmp->device_list);
 	mutex_init(&tmp->hive_lock);
 	mutex_init(&tmp->reset_lock);
+	mutex_init(&tmp->hung_job_list_lock);
+	INIT_LIST_HEAD(&tmp->hung_job_list);
 	if (lock)
 		mutex_lock(&tmp->hive_lock);
 
@@ -179,6 +181,7 @@ void amdgpu_xgmi_remove_device(struct amdgpu_device *adev)
 	if (!(hive->number_devices--)) {
 		mutex_destroy(&hive->hive_lock);
 		mutex_destroy(&hive->reset_lock);
+		mutex_destroy(&hive->hung_job_list_lock);
 	} else {
 		mutex_unlock(&hive->hive_lock);
 	}
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.h
index 14bc606..d4f0bfb 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.h
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_xgmi.h
@@ -29,8 +29,8 @@ struct amdgpu_hive_info {
 	struct list_head	device_list;
 	struct psp_xgmi_topology_info	topology_info;
 	int number_devices;
-	struct mutex hive_lock,
-		     reset_lock;
+	struct mutex hive_lock, reset_lock, hung_job_list_lock;
+	struct list_head	hung_job_list;
 };
 
 struct amdgpu_hive_info *amdgpu_get_xgmi_hive(struct amdgpu_device *adev, int lock);
diff --git a/drivers/gpu/drm/etnaviv/etnaviv_sched.c b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
index 6f1268f..3556976 100644
--- a/drivers/gpu/drm/etnaviv/etnaviv_sched.c
+++ b/drivers/gpu/drm/etnaviv/etnaviv_sched.c
@@ -109,7 +109,7 @@ static void etnaviv_sched_timedout_job(struct drm_sched_job *sched_job)
 	}
 
 	/* block scheduler */
-	drm_sched_stop(&gpu->sched, sched_job);
+	drm_sched_stop(&gpu->sched, sched_job, NULL);
 
 	if(sched_job)
 		drm_sched_increase_karma(sched_job);
diff --git a/drivers/gpu/drm/scheduler/sched_main.c b/drivers/gpu/drm/scheduler/sched_main.c
index 16c6363..54264c8 100644
--- a/drivers/gpu/drm/scheduler/sched_main.c
+++ b/drivers/gpu/drm/scheduler/sched_main.c
@@ -370,14 +370,14 @@ EXPORT_SYMBOL(drm_sched_increase_karma);
  *
  * @sched: scheduler instance
  * @bad: bad scheduler job
- *
+ * @bad_signaled: optional output, set to true if the guilty job already signaled
  */
-void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
+void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad,
+		    bool *bad_signaled)
 {
 	struct drm_sched_job *s_job;
-	unsigned long flags;
 	struct dma_fence *last_fence =  NULL;
-	int r;
+	unsigned long flags;
 
 	kthread_park(sched->thread);
 
@@ -387,14 +387,12 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
 	 * also all the previous jobs that were in flight also already singaled
 	 * and removed from the list.
 	 */
-retry_wait:
 	spin_lock_irqsave(&sched->job_list_lock, flags);
 	list_for_each_entry_reverse(s_job, &sched->ring_mirror_list, node) {
 		if (s_job->s_fence->parent &&
 		    dma_fence_remove_callback(s_job->s_fence->parent,
 					      &s_job->cb)) {
 			dma_fence_put(s_job->s_fence->parent);
-			s_job->s_fence->parent = NULL;
 			atomic_dec(&sched->hw_rq_count);
 		} else {
 			 last_fence = dma_fence_get(&s_job->s_fence->finished);
@@ -407,6 +405,13 @@ void drm_sched_stop(struct drm_gpu_scheduler *sched, struct drm_sched_job *bad)
 		dma_fence_wait(last_fence, false);
 		dma_fence_put(last_fence);
 	}
+
+	/*
+	 * We waited above for the last in-flight fence, so sampling the
+	 * guilty job's state here gives the caller a stable answer.
+	 */
+	if (bad && bad_signaled)
+		*bad_signaled = dma_fence_is_signaled(&bad->s_fence->finished);
 }
 
 EXPORT_SYMBOL(drm_sched_stop);
@@ -613,6 +618,7 @@ static void drm_sched_process_job(struct dma_fence *f, struct dma_fence_cb *cb)
 
 	drm_sched_fence_finished(s_fence);
 
+	s_fence->parent = NULL;
 	trace_drm_sched_process_job(s_fence);
 	wake_up_interruptible(&sched->wake_up_worker);
 
diff --git a/drivers/gpu/drm/v3d/v3d_sched.c b/drivers/gpu/drm/v3d/v3d_sched.c
index f76d9ed..5a8813e 100644
--- a/drivers/gpu/drm/v3d/v3d_sched.c
+++ b/drivers/gpu/drm/v3d/v3d_sched.c
@@ -179,7 +179,7 @@ v3d_job_timedout(struct drm_sched_job *sched_job)
 		struct drm_gpu_scheduler *sched = &v3d->queue[q].sched;
 
 		drm_sched_stop(sched, (sched_job->sched == sched ?
-					       sched_job : NULL));
+					       sched_job : NULL), NULL);
 
 		if(sched_job)
 			drm_sched_increase_karma(sched_job);
diff --git a/include/drm/gpu_scheduler.h b/include/drm/gpu_scheduler.h
index 62c2352..186f256 100644
--- a/include/drm/gpu_scheduler.h
+++ b/include/drm/gpu_scheduler.h
@@ -297,7 +297,8 @@ int drm_sched_job_init(struct drm_sched_job *job,
 void drm_sched_job_cleanup(struct drm_sched_job *job);
 void drm_sched_wakeup(struct drm_gpu_scheduler *sched);
 void drm_sched_stop(struct drm_gpu_scheduler *sched,
-		    struct drm_sched_job *job);
+		    struct drm_sched_job *job,
+		    bool *bad_signaled);
 void drm_sched_start(struct drm_gpu_scheduler *sched, bool full_recovery);
 void drm_sched_resubmit_jobs(struct drm_gpu_scheduler *sched);
 void drm_sched_increase_karma(struct drm_sched_job *bad);
-- 
2.7.4
