[PATCH] drm/radeon: Always flush VM again on < CIK

2014-08-18 Thread Michel Dänzer
On 15.08.2014 23:52, Christian König wrote:
> 
> I think I've figured out what the cause of the remaining issues is while
> working on the implicit sync stuff.
> 
> The issue happens when the flush is done because of a CS to the DMA ring,
> directly followed by a CS to the GFX ring which depends on the DMA
> submission being finished.
> 
> In this situation we insert a semaphore command so that the GFX ring waits
> for the DMA ring to finish execution, and normally we don't flush on the GFX
> ring a second time (the flush should be done on the DMA ring, and we
> waited for that to finish).
> 
> The problem here is that semaphores can't be executed on the PFP, so the
> PFP doesn't wait for the semaphore to complete and happily starts
> fetching commands while the flush on the DMA ring isn't completed.
> 
> @Michel: can you give this branch a try and see if it now works for you:
> http://cgit.freedesktop.org/~deathsimple/linux/log/?h=vm-flushing

Unfortunately not; in fact, it seems to make the problem occur even faster,
after just hundreds of piglit tests instead of after thousands.

However, based on your description above, I came up with the patch below,
which fixes the problem for me, with or without your 'drop
RADEON_FENCE_SIGNALED_SEQ' patch.


From: Michel Dänzer
Date: Mon, 18 Aug 2014 17:29:17 +0900
Subject: [PATCH] drm/radeon: Sync ME and PFP after CP semaphore waits on >=
 Cayman
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Fixes lockups due to CP read GPUVM faults when running piglit on Cape
Verde.

Signed-off-by: Michel Dänzer
---

If the PACKET3_PFP_SYNC_ME packet was already supported before Cayman,
it might be a good idea to do this wherever possible, to avoid any
other issues the PFP running ahead of semaphore waits might cause.

 drivers/gpu/drm/radeon/cik.c | 17 +
 drivers/gpu/drm/radeon/ni.c  | 33 +
 drivers/gpu/drm/radeon/nid.h |  2 ++
 drivers/gpu/drm/radeon/radeon_asic.c |  4 ++--
 drivers/gpu/drm/radeon/radeon_asic.h |  4 
 5 files changed, 58 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
index 81d07e6..49707ac 100644
--- a/drivers/gpu/drm/radeon/cik.c
+++ b/drivers/gpu/drm/radeon/cik.c
@@ -3920,6 +3920,17 @@ void cik_fence_compute_ring_emit(struct radeon_device 
*rdev,
radeon_ring_write(ring, 0);
 }

+/**
+ * cik_semaphore_ring_emit - emit a semaphore on the CP ring
+ *
+ * @rdev: radeon_device pointer
+ * @ring: radeon ring buffer object
+ * @semaphore: radeon semaphore object
+ * @emit_wait: Is this a semaphore wait?
+ *
+ * Emits a semaphore signal/wait packet to the CP ring and prevents the PFP
+ * from running ahead of semaphore waits.
+ */
 bool cik_semaphore_ring_emit(struct radeon_device *rdev,
 struct radeon_ring *ring,
 struct radeon_semaphore *semaphore,
@@ -3932,6 +3943,12 @@ bool cik_semaphore_ring_emit(struct radeon_device *rdev,
radeon_ring_write(ring, lower_32_bits(addr));
radeon_ring_write(ring, (upper_32_bits(addr) & 0xffffffff) | sel);

+   if (emit_wait) {
+   /* Prevent the PFP from running ahead of the semaphore wait */
+   radeon_ring_write(ring, PACKET3(PACKET3_PFP_SYNC_ME, 0));
+   radeon_ring_write(ring, 0x0);
+   }
+
return true;
 }

diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
index ba89375..4e586c7 100644
--- a/drivers/gpu/drm/radeon/ni.c
+++ b/drivers/gpu/drm/radeon/ni.c
@@ -1363,6 +1363,39 @@ void cayman_fence_ring_emit(struct radeon_device *rdev,
radeon_ring_write(ring, 0);
 }

+
+/**
+ * cayman_semaphore_ring_emit - emit a semaphore on the CP ring
+ *
+ * @rdev: radeon_device pointer
+ * @ring: radeon ring buffer object
+ * @semaphore: radeon semaphore object
+ * @emit_wait: Is this a semaphore wait?
+ *
+ * Emits a semaphore signal/wait packet to the CP ring and prevents the PFP
+ * from running ahead of semaphore waits.
+ */
+bool cayman_semaphore_ring_emit(struct radeon_device *rdev,
+   struct radeon_ring *ring,
+   struct radeon_semaphore *semaphore,
+   bool emit_wait)
+{
+   uint64_t addr = semaphore->gpu_addr;
+   unsigned sel = emit_wait ? PACKET3_SEM_SEL_WAIT : PACKET3_SEM_SEL_SIGNAL;
+
+   radeon_ring_write(ring, PACKET3(PACKET3_MEM_SEMAPHORE, 1));
+   radeon_ring_write(ring, lower_32_bits(addr));
+   radeon_ring_write(ring, (upper_32_bits(addr) & 0xff) | sel);
+
+   if (emit_wait) {
+   /* Prevent the PFP from running ahead of the semaphore wait */
+   radeon_ring_write(ring, PACKET3(PACKET3_PFP_SYNC_ME, 0));
+   radeon_ring_write(ring, 0x0);
+   }
+
+   return true;
+}
+
 void cayman_ring_ib_execute(struct 

[PATCH] drm/radeon: Always flush VM again on < CIK

2014-08-18 Thread Christian König
On 18.08.2014 at 11:10, Michel Dänzer wrote:
> On 15.08.2014 23:52, Christian König wrote:
>> I think I've figured out what the cause of the remaining issues is while
>> working on the implicit sync stuff.
>>
>> The issue happens when the flush is done because of a CS to the DMA ring,
>> directly followed by a CS to the GFX ring which depends on the DMA
>> submission being finished.
>>
>> In this situation we insert a semaphore command so that the GFX ring waits
>> for the DMA ring to finish execution, and normally we don't flush on the GFX
>> ring a second time (the flush should be done on the DMA ring, and we
>> waited for that to finish).
>>
>> The problem here is that semaphores can't be executed on the PFP, so the
>> PFP doesn't wait for the semaphore to complete and happily starts
>> fetching commands while the flush on the DMA ring isn't completed.
>>
>> @Michel: can you give this branch a try and see if it now works for you:
>> http://cgit.freedesktop.org/~deathsimple/linux/log/?h=vm-flushing
> Unfortunately not; in fact, it seems to make the problem occur even faster,
> after just hundreds of piglit tests instead of after thousands.
>
> However, based on your description above, I came up with the patch below,
> which fixes the problem for me, with or without your 'drop
> RADEON_FENCE_SIGNALED_SEQ' patch.

Oh, yes of course! That's indeed much simpler, and the PFP_SYNC_ME packet 
should be available even on R600. I'm going to take care of this and 
supply patches for all hardware generations we have.

Thanks for pointing me to the right solution,
Christian.

>
>
> From: Michel Dänzer
> Date: Mon, 18 Aug 2014 17:29:17 +0900
> Subject: [PATCH] drm/radeon: Sync ME and PFP after CP semaphore waits on >=
>   Cayman
> MIME-Version: 1.0
> Content-Type: text/plain; charset=UTF-8
> Content-Transfer-Encoding: 8bit
>
> Fixes lockups due to CP read GPUVM faults when running piglit on Cape
> Verde.
>
> Signed-off-by: Michel Dänzer
> ---
>
> If the PACKET3_PFP_SYNC_ME packet was already supported before Cayman,
> it might be a good idea to do this wherever possible, to avoid any
> other issues the PFP running ahead of semaphore waits might cause.
>
>   drivers/gpu/drm/radeon/cik.c | 17 +
>   drivers/gpu/drm/radeon/ni.c  | 33 +
>   drivers/gpu/drm/radeon/nid.h |  2 ++
>   drivers/gpu/drm/radeon/radeon_asic.c |  4 ++--
>   drivers/gpu/drm/radeon/radeon_asic.h |  4 
>   5 files changed, 58 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/gpu/drm/radeon/cik.c b/drivers/gpu/drm/radeon/cik.c
> index 81d07e6..49707ac 100644
> --- a/drivers/gpu/drm/radeon/cik.c
> +++ b/drivers/gpu/drm/radeon/cik.c
> @@ -3920,6 +3920,17 @@ void cik_fence_compute_ring_emit(struct radeon_device 
> *rdev,
>   radeon_ring_write(ring, 0);
>   }
>   
> +/**
> + * cik_semaphore_ring_emit - emit a semaphore on the CP ring
> + *
> + * @rdev: radeon_device pointer
> + * @ring: radeon ring buffer object
> + * @semaphore: radeon semaphore object
> + * @emit_wait: Is this a semaphore wait?
> + *
> + * Emits a semaphore signal/wait packet to the CP ring and prevents the PFP
> + * from running ahead of semaphore waits.
> + */
>   bool cik_semaphore_ring_emit(struct radeon_device *rdev,
>struct radeon_ring *ring,
>struct radeon_semaphore *semaphore,
> @@ -3932,6 +3943,12 @@ bool cik_semaphore_ring_emit(struct radeon_device 
> *rdev,
>   radeon_ring_write(ring, lower_32_bits(addr));
>   radeon_ring_write(ring, (upper_32_bits(addr) & 0xffffffff) | sel);
>   
> + if (emit_wait) {
> + /* Prevent the PFP from running ahead of the semaphore wait */
> + radeon_ring_write(ring, PACKET3(PACKET3_PFP_SYNC_ME, 0));
> + radeon_ring_write(ring, 0x0);
> + }
> +
>   return true;
>   }
>   
> diff --git a/drivers/gpu/drm/radeon/ni.c b/drivers/gpu/drm/radeon/ni.c
> index ba89375..4e586c7 100644
> --- a/drivers/gpu/drm/radeon/ni.c
> +++ b/drivers/gpu/drm/radeon/ni.c
> @@ -1363,6 +1363,39 @@ void cayman_fence_ring_emit(struct radeon_device *rdev,
>   radeon_ring_write(ring, 0);
>   }
>   
> +
> +/**
> + * cayman_semaphore_ring_emit - emit a semaphore on the CP ring
> + *
> + * @rdev: radeon_device pointer
> + * @ring: radeon ring buffer object
> + * @semaphore: radeon semaphore object
> + * @emit_wait: Is this a semaphore wait?
> + *
> + * Emits a semaphore signal/wait packet to the CP ring and prevents the PFP
> + * from running ahead of semaphore waits.
> + */
> +bool cayman_semaphore_ring_emit(struct radeon_device *rdev,
> + struct radeon_ring *ring,
> + struct radeon_semaphore *semaphore,
> + bool emit_wait)
> +{
> + uint64_t addr = semaphore->gpu_addr;
> + unsigned sel = emit_wait ? PACKET3_SEM_SEL_WAIT : 
> PACKET3_SEM_SEL_SIGNAL;
> +
> + 

[PATCH] drm/radeon: Always flush VM again on < CIK

2014-08-15 Thread Christian König
Hey guys,

I think I've figured out what the cause of the remaining issues is while 
working on the implicit sync stuff.

The issue happens when the flush is done because of a CS to the DMA ring, 
directly followed by a CS to the GFX ring which depends on the DMA 
submission being finished.

In this situation we insert a semaphore command so that the GFX ring waits 
for the DMA ring to finish execution, and normally we don't flush on the GFX 
ring a second time (the flush should be done on the DMA ring, and we 
waited for that to finish).

The problem here is that semaphores can't be executed on the PFP, so the 
PFP doesn't wait for the semaphore to complete and happily starts 
fetching commands while the flush on the DMA ring isn't completed.

@Michel: can you give this branch a try and see if it now works for you: 
http://cgit.freedesktop.org/~deathsimple/linux/log/?h=vm-flushing

We should keep that behavior in mind should we switch to putting IBs into 
normal BOs, because when those are swapped out, the synchronization won't 
wait for swapping them back in using the DMA either.

Thanks,
Christian.

On 12.08.2014 at 11:05, Christian König wrote:
> On 11.08.2014 at 17:00, Alex Deucher wrote:
>> On Mon, Aug 11, 2014 at 4:42 AM, Michel Dänzer  
>> wrote:
>>> On 08.08.2014 22:34, Alex Deucher wrote:
 On Fri, Aug 8, 2014 at 9:31 AM, Alex Deucher 
  wrote:
> On Fri, Aug 8, 2014 at 4:50 AM, Michel Dänzer  
> wrote:
>> On 08.08.2014 17:44, Christian König wrote:
>> On Thu, Aug 7, 2014 at 3:59 PM, Alex Deucher 
>> 
>> wrote:
>>> We should be using PFP as much as possible.  Does the attached
>>> patch help?
 Unfortunately not.
>>> Maybe add a readback of the VM base addr pointer to make sure 
>>> that the
>>> write has really reached the SBRM?
>> I'm not sure what exactly you're thinking of, but I'm happy to 
>> test any
>> patches you guys come up with. :)
>>
> Maybe some variant of this patch?
 Ignore that one.  typo.  Try this one instead.
>>> Thanks, but still no luck.
>> I'm out of ideas at the moment.  I'll apply your patch unless
>> Christian can think of anything else.
>
> Unfortunately not, so apply the patch for now.
>
> Christian.
>
>>
>> Alex
>> ___
>> dri-devel mailing list
>> dri-devel at lists.freedesktop.org
>> http://lists.freedesktop.org/mailman/listinfo/dri-devel
>



[PATCH] drm/radeon: Always flush VM again on < CIK

2014-08-12 Thread Christian König
On 11.08.2014 at 17:00, Alex Deucher wrote:
> On Mon, Aug 11, 2014 at 4:42 AM, Michel Dänzer  wrote:
>> On 08.08.2014 22:34, Alex Deucher wrote:
>>> On Fri, Aug 8, 2014 at 9:31 AM, Alex Deucher  
>>> wrote:
 On Fri, Aug 8, 2014 at 4:50 AM, Michel Dänzer  
 wrote:
> On 08.08.2014 17:44, Christian König wrote:
> On Thu, Aug 7, 2014 at 3:59 PM, Alex Deucher  gmail.com>
> wrote:
>> We should be using PFP as much as possible.  Does the attached
>> patch help?
>>> Unfortunately not.
>> Maybe add a readback of the VM base addr pointer to make sure that the
>> write has really reached the SBRM?
> I'm not sure what exactly you're thinking of, but I'm happy to test any
> patches you guys come up with. :)
>
 Maybe some variant of this patch?
>>> Ignore that one.  typo.  Try this one instead.
>> Thanks, but still no luck.
> I'm out of ideas at the moment.  I'll apply your patch unless
> Christian can think of anything else.

Unfortunately not, so apply the patch for now.

Christian.

>
> Alex



[PATCH] drm/radeon: Always flush VM again on < CIK

2014-08-11 Thread Michel Dänzer
On 08.08.2014 22:34, Alex Deucher wrote:
> On Fri, Aug 8, 2014 at 9:31 AM, Alex Deucher  wrote:
>> On Fri, Aug 8, 2014 at 4:50 AM, Michel Dänzer  wrote:
>>> On 08.08.2014 17:44, Christian König wrote:
>>> On Thu, Aug 7, 2014 at 3:59 PM, Alex Deucher 
>>> wrote:
 We should be using PFP as much as possible.  Does the attached
 patch help?
> Unfortunately not.

 Maybe add a readback of the VM base addr pointer to make sure that the
 write has really reached the SBRM?
>>>
>>> I'm not sure what exactly you're thinking of, but I'm happy to test any
>>> patches you guys come up with. :)
>>>
>>
>> Maybe some variant of this patch?
> 
> Ignore that one.  typo.  Try this one instead.

Thanks, but still no luck.


-- 
Earthling Michel Dänzer|  http://www.amd.com
Libre software enthusiast  |Mesa and X developer


[PATCH] drm/radeon: Always flush VM again on < CIK

2014-08-11 Thread Alex Deucher
On Mon, Aug 11, 2014 at 4:42 AM, Michel Dänzer  wrote:
> On 08.08.2014 22:34, Alex Deucher wrote:
>> On Fri, Aug 8, 2014 at 9:31 AM, Alex Deucher  
>> wrote:
>>> On Fri, Aug 8, 2014 at 4:50 AM, Michel Dänzer  wrote:
 On 08.08.2014 17:44, Christian König wrote:
 On Thu, Aug 7, 2014 at 3:59 PM, Alex Deucher 
 wrote:
> We should be using PFP as much as possible.  Does the attached
> patch help?
>> Unfortunately not.
>
> Maybe add a readback of the VM base addr pointer to make sure that the
> write has really reached the SBRM?

 I'm not sure what exactly you're thinking of, but I'm happy to test any
 patches you guys come up with. :)

>>>
>>> Maybe some variant of this patch?
>>
>> Ignore that one.  typo.  Try this one instead.
>
> Thanks, but still no luck.

I'm out of ideas at the moment.  I'll apply your patch unless
Christian can think of anything else.

Alex


[PATCH] drm/radeon: Always flush VM again on < CIK

2014-08-08 Thread Michel Dänzer
On 08.08.2014 17:44, Christian König wrote:
 On Thu, Aug 7, 2014 at 3:59 PM, Alex Deucher 
 wrote:
> We should be using PFP as much as possible.  Does the attached
> patch help?
>> Unfortunately not.
> 
> Maybe add a readback of the VM base addr pointer to make sure that the
> write has really reached the SBRM?

I'm not sure what exactly you're thinking of, but I'm happy to test any
patches you guys come up with. :)


-- 
Earthling Michel D?nzer|  http://www.amd.com
Libre software enthusiast  |Mesa and X developer


[PATCH] drm/radeon: Always flush VM again on < CIK

2014-08-08 Thread Michel Dänzer
On 08.08.2014 00:55, Alex Deucher wrote:
> 
> Note that there is no PFP (or CE) on the compute queues so we can't
> use PFP (or CE) for compute.

AFAICT cik_hdp_flush_cp_ring_emit() always uses the PFP though.


> Note also that the engine bit is not always consistent (for some packets 0
> = ME, 1 = PFP and for others 1= ME and 0 = PFP).

Ugh. Then we should probably use explicit *_ENGINE_PFP/ME macros instead
of *_ENGINE(lucky_number). :)


>> On Thu, Aug 7, 2014 at 3:59 PM, Alex Deucher  
>> wrote:
>>>
>>> We should be using PFP as much as possible.  Does the attached patch help?

Unfortunately not.


-- 
Earthling Michel Dänzer|  http://www.amd.com
Libre software enthusiast  |Mesa and X developer


[PATCH] drm/radeon: Always flush VM again on < CIK

2014-08-08 Thread Christian König
>>> On Thu, Aug 7, 2014 at 3:59 PM, Alex Deucher  
>>> wrote:
 We should be using PFP as much as possible.  Does the attached patch help?
> Unfortunately not.

Maybe add a readback of the VM base addr pointer to make sure that the 
write has really reached the SRBM?

Otherwise I'm out of ideas as well,
Christian.


On 08.08.2014 at 04:38, Michel Dänzer wrote:
> On 08.08.2014 00:55, Alex Deucher wrote:
>> Note that there is no PFP (or CE) on the compute queues so we can't
>> use PFP (or CE) for compute.
> AFAICT cik_hdp_flush_cp_ring_emit() always uses the PFP though.
>
>
>> Note also that the engine bit is not always consistent (for some packets 0
>> = ME, 1 = PFP and for others 1= ME and 0 = PFP).
> Ugh. Then we should probably use explicit *_ENGINE_PFP/ME macros instead
> of *_ENGINE(lucky_number). :)
>
>
>>> On Thu, Aug 7, 2014 at 3:59 PM, Alex Deucher  
>>> wrote:
 We should be using PFP as much as possible.  Does the attached patch help?
> Unfortunately not.
>
>



[PATCH] drm/radeon: Always flush VM again on < CIK

2014-08-08 Thread Alex Deucher
On Fri, Aug 8, 2014 at 9:31 AM, Alex Deucher  wrote:
> On Fri, Aug 8, 2014 at 4:50 AM, Michel Dänzer  wrote:
>> On 08.08.2014 17:44, Christian König wrote:
>> On Thu, Aug 7, 2014 at 3:59 PM, Alex Deucher 
>> wrote:
>>> We should be using PFP as much as possible.  Does the attached
>>> patch help?
 Unfortunately not.
>>>
>>> Maybe add a readback of the VM base addr pointer to make sure that the
>>> write has really reached the SBRM?
>>
>> I'm not sure what exactly you're thinking of, but I'm happy to test any
>> patches you guys come up with. :)
>>
>
> Maybe some variant of this patch?

Ignore that one.  typo.  Try this one instead.

Alex
-- next part --
diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c
index dbd9d81..565201d 100644
--- a/drivers/gpu/drm/radeon/si.c
+++ b/drivers/gpu/drm/radeon/si.c
@@ -5007,6 +5007,7 @@ static void si_vm_decode_fault(struct radeon_device *rdev,
 void si_vm_flush(struct radeon_device *rdev, int ridx, struct radeon_vm *vm)
 {
struct radeon_ring *ring = &rdev->ring[ridx];
+   u32 reg;

if (vm == NULL)
return;
@@ -5017,15 +5018,23 @@ void si_vm_flush(struct radeon_device *rdev, int ridx, 
struct radeon_vm *vm)
 WRITE_DATA_DST_SEL(0)));

if (vm->id < 8) {
-   radeon_ring_write(ring,
- (VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (vm->id 
<< 2)) >> 2);
+   reg = (VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (vm->id << 2)) >> 2;
} else {
-   radeon_ring_write(ring,
- (VM_CONTEXT8_PAGE_TABLE_BASE_ADDR + ((vm->id 
- 8) << 2)) >> 2);
+   reg = (VM_CONTEXT8_PAGE_TABLE_BASE_ADDR + ((vm->id - 8) << 2)) 
>> 2;
}
+   radeon_ring_write(ring, reg);
radeon_ring_write(ring, 0);
radeon_ring_write(ring, vm->pd_gpu_addr >> 12);

+   /* wait for the address change to go through */
+   radeon_ring_write(ring, PACKET3(PACKET3_WAIT_REG_MEM, 5));
+   radeon_ring_write(ring, 3); /* == */
+   radeon_ring_write(ring, reg);
+   radeon_ring_write(ring, 0);
+   radeon_ring_write(ring, vm->pd_gpu_addr >> 12);
+   radeon_ring_write(ring, 0x0fffffff);
+   radeon_ring_write(ring, 10);
+
/* flush hdp cache */
radeon_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(1) |
@@ -5034,6 +5043,14 @@ void si_vm_flush(struct radeon_device *rdev, int ridx, 
struct radeon_vm *vm)
radeon_ring_write(ring, 0);
radeon_ring_write(ring, 0x1);

+   /* clear the response reg */
+   radeon_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
+   radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(1) |
+WRITE_DATA_DST_SEL(0)));
+   radeon_ring_write(ring, VM_INVALIDATE_RESPONSE >> 2);
+   radeon_ring_write(ring, 0);
+   radeon_ring_write(ring, 0);
+
/* bits 0-15 are the VM contexts0-15 */
radeon_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(1) |
@@ -5042,6 +5059,15 @@ void si_vm_flush(struct radeon_device *rdev, int ridx, 
struct radeon_vm *vm)
radeon_ring_write(ring, 0);
radeon_ring_write(ring, 1 << vm->id);

+   /* wait for the invalidate */
+   radeon_ring_write(ring, PACKET3(PACKET3_WAIT_REG_MEM, 5));
+   radeon_ring_write(ring, 3); /* == */
+   radeon_ring_write(ring, VM_INVALIDATE_RESPONSE >> 2);
+   radeon_ring_write(ring, 0);
+   radeon_ring_write(ring, 1 << vm->id);
+   radeon_ring_write(ring, 1 << vm->id);
+   radeon_ring_write(ring, 10);
+
/* sync PFP to ME, otherwise we might get invalid PFP reads */
radeon_ring_write(ring, PACKET3(PACKET3_PFP_SYNC_ME, 0));
radeon_ring_write(ring, 0x0);


[PATCH] drm/radeon: Always flush VM again on < CIK

2014-08-08 Thread Alex Deucher
On Fri, Aug 8, 2014 at 4:50 AM, Michel Dänzer  wrote:
> On 08.08.2014 17:44, Christian König wrote:
> On Thu, Aug 7, 2014 at 3:59 PM, Alex Deucher 
> wrote:
>> We should be using PFP as much as possible.  Does the attached
>> patch help?
>>> Unfortunately not.
>>
>> Maybe add a readback of the VM base addr pointer to make sure that the
>> write has really reached the SBRM?
>
> I'm not sure what exactly you're thinking of, but I'm happy to test any
> patches you guys come up with. :)
>

Maybe some variant of this patch?

Alex
-- next part --
diff --git a/drivers/gpu/drm/radeon/si.c b/drivers/gpu/drm/radeon/si.c
index dbd9d81..0855da0 100644
--- a/drivers/gpu/drm/radeon/si.c
+++ b/drivers/gpu/drm/radeon/si.c
@@ -5007,6 +5007,7 @@ static void si_vm_decode_fault(struct radeon_device *rdev,
 void si_vm_flush(struct radeon_device *rdev, int ridx, struct radeon_vm *vm)
 {
struct radeon_ring *ring = &rdev->ring[ridx];
+   u32 reg;

if (vm == NULL)
return;
@@ -5017,15 +5018,23 @@ void si_vm_flush(struct radeon_device *rdev, int ridx, 
struct radeon_vm *vm)
 WRITE_DATA_DST_SEL(0)));

if (vm->id < 8) {
-   radeon_ring_write(ring,
- (VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (vm->id 
<< 2)) >> 2);
+   reg = (VM_CONTEXT0_PAGE_TABLE_BASE_ADDR + (vm->id << 2)) >> 2;
} else {
-   radeon_ring_write(ring,
- (VM_CONTEXT8_PAGE_TABLE_BASE_ADDR + ((vm->id 
- 8) << 2)) >> 2);
+   reg = (VM_CONTEXT8_PAGE_TABLE_BASE_ADDR + ((vm->id - 8) << 2)) 
>> 2;
}
+   radeon_ring_write(ring, reg);
radeon_ring_write(ring, 0);
radeon_ring_write(ring, vm->pd_gpu_addr >> 12);

+   /* wait for the address change to go through */
+   radeon_ring_write(ring, PACKET3(PACKET3_WAIT_REG_MEM, 5));
+   radeon_ring_write(ring, 3); /* == */
+   radeon_ring_write(ring, reg);
+   radeon_ring_write(ring, 0);
+   radeon_ring_write(ring, vm->pd_gpu_addr >> 12);
+   radeon_ring_write(ring, 0x0fffffff);
+   radeon_ring_write(ring, 10);
+
/* flush hdp cache */
radeon_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(1) |
@@ -5034,6 +5043,14 @@ void si_vm_flush(struct radeon_device *rdev, int ridx, 
struct radeon_vm *vm)
radeon_ring_write(ring, 0);
radeon_ring_write(ring, 0x1);

+   /* clear the response reg */
+   radeon_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
+   radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(1) |
+WRITE_DATA_DST_SEL(0)));
+   radeon_ring_write(ring, VM_INVALIDATE_RESPONSE >> 2);
+   radeon_ring_write(ring, 0);
+   radeon_ring_write(ring, 1 << vm->id);
+
/* bits 0-15 are the VM contexts0-15 */
radeon_ring_write(ring, PACKET3(PACKET3_WRITE_DATA, 3));
radeon_ring_write(ring, (WRITE_DATA_ENGINE_SEL(1) |
@@ -5042,6 +5059,15 @@ void si_vm_flush(struct radeon_device *rdev, int ridx, 
struct radeon_vm *vm)
radeon_ring_write(ring, 0);
radeon_ring_write(ring, 1 << vm->id);

+   /* wait for the invalidate */
+   radeon_ring_write(ring, PACKET3(PACKET3_WAIT_REG_MEM, 5));
+   radeon_ring_write(ring, 3); /* == */
+   radeon_ring_write(ring, VM_INVALIDATE_RESPONSE >> 2);
+   radeon_ring_write(ring, 0);
+   radeon_ring_write(ring, 1 << vm->id);
+   radeon_ring_write(ring, 1 << vm->id);
+   radeon_ring_write(ring, 10);
+
/* sync PFP to ME, otherwise we might get invalid PFP reads */
radeon_ring_write(ring, PACKET3(PACKET3_PFP_SYNC_ME, 0));
radeon_ring_write(ring, 0x0);


[PATCH] drm/radeon: Always flush VM again on < CIK

2014-08-07 Thread Christian König
The GFX CP is split up into two different engines - the PFP and the ME 
(starting with SI you additionally get the CE as well).

The PFP is responsible for reading commands out of memory and forwarding 
them to the ME (or the CE). Some commands can be executed on the PFP as 
well, like simple register writes, but most commands can only run on the ME.

The PFP and the ME are connected through an 8-entry ring buffer (IIRC), 
so when you do something on the ME that the PFP depends on, you need to 
block the PFP until the ME finishes its operation.

Whether we should use the PFP or the ME strongly depends on what we want 
to do, but in most cases (like writing to memory) it's only the ME that 
can do the operation anyway.

Regards,
Christian.

On 07.08.2014 at 17:38, Marek Olšák wrote:
> So what's the difference between WRITE_DATA with PFP vs ME? Would it also
> be preferable for DMA_DATA and COPY_DATA?
>
> Marek
>
> On Thu, Aug 7, 2014 at 3:59 PM, Alex Deucher  wrote:
>> On Thu, Aug 7, 2014 at 3:46 AM, Michel Dänzer  wrote:
>>> From: Michel Dänzer 
>>>
>>> Not doing this causes piglit hangs[0] on my Cape Verde card. No issues on
>>> Bonaire and Kaveri though.
>>>
>>> [0] Same symptoms as those fixed on CIK by 'drm/radeon: set VM base addr
>>> using the PFP v2'.
>>>
>>> Signed-off-by: Michel Dänzer 
>> We should be using PFP as much as possible.  Does the attached patch help?
>>
>> Alex
>>
>>> ---
>>>   drivers/gpu/drm/radeon/radeon_vm.c | 4 +++-
>>>   1 file changed, 3 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/gpu/drm/radeon/radeon_vm.c 
>>> b/drivers/gpu/drm/radeon/radeon_vm.c
>>> index ccae4d9..898cbb7 100644
>>> --- a/drivers/gpu/drm/radeon/radeon_vm.c
>>> +++ b/drivers/gpu/drm/radeon/radeon_vm.c
>>> @@ -238,7 +238,9 @@ void radeon_vm_flush(struct radeon_device *rdev,
>>>  uint64_t pd_addr = radeon_bo_gpu_offset(vm->page_directory);
>>>
>>>  /* if we can't remember our last VM flush then flush now! */
>>> -   if (!vm->last_flush || pd_addr != vm->pd_gpu_addr) {
>>> +   /* XXX figure out why we have to flush all the time before CIK */
>>> +   if (rdev->family < CHIP_BONAIRE ||
>>> +   !vm->last_flush || pd_addr != vm->pd_gpu_addr) {
>>>  trace_radeon_vm_flush(pd_addr, ring, vm->id);
>>>  vm->pd_gpu_addr = pd_addr;
>>>  radeon_ring_vm_flush(rdev, ring, vm);
>>> --
>>> 2.0.1
>>>



[PATCH] drm/radeon: Always flush VM again on < CIK

2014-08-07 Thread Marek Olšák
So what's the difference between WRITE_DATA with PFP vs ME? Would it also
be preferable for DMA_DATA and COPY_DATA?

Marek

On Thu, Aug 7, 2014 at 3:59 PM, Alex Deucher  wrote:
> On Thu, Aug 7, 2014 at 3:46 AM, Michel Dänzer  wrote:
>> From: Michel Dänzer 
>>
>> Not doing this causes piglit hangs[0] on my Cape Verde card. No issues on
>> Bonaire and Kaveri though.
>>
>> [0] Same symptoms as those fixed on CIK by 'drm/radeon: set VM base addr
>> using the PFP v2'.
>>
>> Signed-off-by: Michel Dänzer 
>
> We should be using PFP as much as possible.  Does the attached patch help?
>
> Alex
>
>> ---
>>  drivers/gpu/drm/radeon/radeon_vm.c | 4 +++-
>>  1 file changed, 3 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/gpu/drm/radeon/radeon_vm.c 
>> b/drivers/gpu/drm/radeon/radeon_vm.c
>> index ccae4d9..898cbb7 100644
>> --- a/drivers/gpu/drm/radeon/radeon_vm.c
>> +++ b/drivers/gpu/drm/radeon/radeon_vm.c
>> @@ -238,7 +238,9 @@ void radeon_vm_flush(struct radeon_device *rdev,
>> uint64_t pd_addr = radeon_bo_gpu_offset(vm->page_directory);
>>
>> /* if we can't remember our last VM flush then flush now! */
>> -   if (!vm->last_flush || pd_addr != vm->pd_gpu_addr) {
>> +   /* XXX figure out why we have to flush all the time before CIK */
>> +   if (rdev->family < CHIP_BONAIRE ||
>> +   !vm->last_flush || pd_addr != vm->pd_gpu_addr) {
>> trace_radeon_vm_flush(pd_addr, ring, vm->id);
>> vm->pd_gpu_addr = pd_addr;
>> radeon_ring_vm_flush(rdev, ring, vm);
>> --
>> 2.0.1
>>
>


[PATCH] drm/radeon: Always flush VM again on < CIK

2014-08-07 Thread Michel Dänzer
From: Michel Dänzer 

Not doing this causes piglit hangs[0] on my Cape Verde card. No issues on
Bonaire and Kaveri though.

[0] Same symptoms as those fixed on CIK by 'drm/radeon: set VM base addr
using the PFP v2'.

Signed-off-by: Michel Dänzer 
---
 drivers/gpu/drm/radeon/radeon_vm.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/radeon/radeon_vm.c 
b/drivers/gpu/drm/radeon/radeon_vm.c
index ccae4d9..898cbb7 100644
--- a/drivers/gpu/drm/radeon/radeon_vm.c
+++ b/drivers/gpu/drm/radeon/radeon_vm.c
@@ -238,7 +238,9 @@ void radeon_vm_flush(struct radeon_device *rdev,
uint64_t pd_addr = radeon_bo_gpu_offset(vm->page_directory);

/* if we can't remember our last VM flush then flush now! */
-   if (!vm->last_flush || pd_addr != vm->pd_gpu_addr) {
+   /* XXX figure out why we have to flush all the time before CIK */
+   if (rdev->family < CHIP_BONAIRE ||
+   !vm->last_flush || pd_addr != vm->pd_gpu_addr) {
trace_radeon_vm_flush(pd_addr, ring, vm->id);
vm->pd_gpu_addr = pd_addr;
radeon_ring_vm_flush(rdev, ring, vm);
-- 
2.0.1



[PATCH] drm/radeon: Always flush VM again on < CIK

2014-08-07 Thread Alex Deucher
On Thu, Aug 7, 2014 at 11:38 AM, Marek Olšák  wrote:
> So what's the difference between WRITE_DATA with PFP vs ME? Would it also
> be preferable for DMA_DATA and COPY_DATA?

The PFP comes before the ME in the pipeline.  Note that there is no
PFP (or CE) on the compute queues so we can't use PFP (or CE) for
compute.  According to the internal gfx teams, we should use PFP
whenever possible since the PFP is rarely as busy as the ME.  Note
also that the engine bit is not always consistent (for some packets 0
= ME, 1 = PFP and for others 1 = ME and 0 = PFP).

Alex

>
> Marek
>
> On Thu, Aug 7, 2014 at 3:59 PM, Alex Deucher  wrote:
>> On Thu, Aug 7, 2014 at 3:46 AM, Michel Dänzer  wrote:
>>> From: Michel Dänzer 
>>>
>>> Not doing this causes piglit hangs[0] on my Cape Verde card. No issues on
>>> Bonaire and Kaveri though.
>>>
>>> [0] Same symptoms as those fixed on CIK by 'drm/radeon: set VM base addr
>>> using the PFP v2'.
>>>
>>> Signed-off-by: Michel Dänzer 
>>
>> We should be using PFP as much as possible.  Does the attached patch help?
>>
>> Alex
>>
>>> ---
>>>  drivers/gpu/drm/radeon/radeon_vm.c | 4 +++-
>>>  1 file changed, 3 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/gpu/drm/radeon/radeon_vm.c 
>>> b/drivers/gpu/drm/radeon/radeon_vm.c
>>> index ccae4d9..898cbb7 100644
>>> --- a/drivers/gpu/drm/radeon/radeon_vm.c
>>> +++ b/drivers/gpu/drm/radeon/radeon_vm.c
>>> @@ -238,7 +238,9 @@ void radeon_vm_flush(struct radeon_device *rdev,
>>> uint64_t pd_addr = radeon_bo_gpu_offset(vm->page_directory);
>>>
>>> /* if we can't remember our last VM flush then flush now! */
>>> -   if (!vm->last_flush || pd_addr != vm->pd_gpu_addr) {
>>> +   /* XXX figure out why we have to flush all the time before CIK */
>>> +   if (rdev->family < CHIP_BONAIRE ||
>>> +   !vm->last_flush || pd_addr != vm->pd_gpu_addr) {
>>> trace_radeon_vm_flush(pd_addr, ring, vm->id);
>>> vm->pd_gpu_addr = pd_addr;
>>> radeon_ring_vm_flush(rdev, ring, vm);
>>> --
>>> 2.0.1
>>>


[PATCH] drm/radeon: Always flush VM again on < CIK

2014-08-07 Thread Alex Deucher
On Thu, Aug 7, 2014 at 3:46 AM, Michel Dänzer  wrote:
> From: Michel Dänzer 
>
> Not doing this causes piglit hangs[0] on my Cape Verde card. No issues on
> Bonaire and Kaveri though.
>
> [0] Same symptoms as those fixed on CIK by 'drm/radeon: set VM base addr
> using the PFP v2'.
>
> Signed-off-by: Michel Dänzer 

We should be using PFP as much as possible.  Does the attached patch help?

Alex

> ---
>  drivers/gpu/drm/radeon/radeon_vm.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/gpu/drm/radeon/radeon_vm.c 
> b/drivers/gpu/drm/radeon/radeon_vm.c
> index ccae4d9..898cbb7 100644
> --- a/drivers/gpu/drm/radeon/radeon_vm.c
> +++ b/drivers/gpu/drm/radeon/radeon_vm.c
> @@ -238,7 +238,9 @@ void radeon_vm_flush(struct radeon_device *rdev,
> uint64_t pd_addr = radeon_bo_gpu_offset(vm->page_directory);
>
> /* if we can't remember our last VM flush then flush now! */
> -   if (!vm->last_flush || pd_addr != vm->pd_gpu_addr) {
> +   /* XXX figure out why we have to flush all the time before CIK */
> +   if (rdev->family < CHIP_BONAIRE ||
> +   !vm->last_flush || pd_addr != vm->pd_gpu_addr) {
> trace_radeon_vm_flush(pd_addr, ring, vm->id);
> vm->pd_gpu_addr = pd_addr;
> radeon_ring_vm_flush(rdev, ring, vm);
> --
> 2.0.1
>
-- next part --
A non-text attachment was scrubbed...
Name: 0001-drm-radeon-use-pfp-for-all-vm_flush-related-updates.patch
Type: text/x-diff
Size: 3275 bytes
Desc: not available
URL: