Christian,
Looks like we need to discuss this more…
Here is your approach:
1. Stop the scheduler from feeding more jobs to the hardware when a job
completes. // this is where I agree with you
2. Then call hw_job_reset to remove the connection between job and hardware
fence.
3. Test if job is
On 2017年05月09日 19:16, Christian König wrote:
What's the background of this change? E.g. why is it needed?
Good question; maybe I should describe it more in the patch comment. I think
this is an improvement over the previous patch: the sched_sync stores
fences that could be skipped as scheduled, when
On 03/05/17 09:46 PM, Christian König wrote:
> Am 02.05.2017 um 22:04 schrieb SF Markus Elfring:
>> From: Markus Elfring
>> Date: Tue, 2 May 2017 22:00:02 +0200
>>
>> Three update suggestions were taken into account
>> from static source code analysis.
>>
>> Markus Elfring (3):
>> Use seq_putc(
On Wed, Apr 26, 2017 at 7:57 AM, Christian König
wrote:
> Am 26.04.2017 um 11:57 schrieb Dave Airlie:
>
>> On 26 April 2017 at 18:45, Christian König
>> wrote:
>>
>>> Am 26.04.2017 um 05:28 schrieb Dave Airlie:
>>>
Okay I've gone around the sun with these a few times, and
pretty much i
Hi,
Please review the patch set that supports amdgpu VM update via CPU. This
feature provides improved performance for compute (HSA), where mapping /
unmapping is carried out (by the kernel) independently of command submissions
(done directly by user space). This version doesn't support shadow copy of
On Mon, May 8, 2017 at 7:26 PM, Dave Airlie wrote:
> On 4 May 2017 at 18:16, Chris Wilson wrote:
> > On Wed, Apr 26, 2017 at 01:28:29PM +1000, Dave Airlie wrote:
> >> +#include
> >
> > I wonder if Daniel has already split everything used here into its own
> > headers?
>
> not sure, if drm_file
Programming CP_HQD_QUEUE_PRIORITY enables a queue to take priority over
other queues on the same pipe. Multiple queues on a pipe are timesliced
so this gives us full precedence over other queues.
Programming CP_HQD_PIPE_PRIORITY changes the SPI_ARB_PRIORITY of the
wave as follows:
0x2: CS_
Add a new context creation parameter to express a global context priority.
The priority ranking in descending order is as follows:
* AMDGPU_CTX_PRIORITY_HIGH
* AMDGPU_CTX_PRIORITY_NORMAL
* AMDGPU_CTX_PRIORITY_LOW
The driver will attempt to schedule work to the hardware according to
the priorit
Add an initial framework for changing the HW priorities of rings. The
framework allows requesting priority changes for the lifetime of an
amdgpu_job. After the job completes the priority will decay to the next
lowest priority for which a request is still valid.
A new ring function set_priority() c
New in v9:
* Changed CU reservation into pipe resource reservation
* The priority_get/put routines are now sleep safe
* Removed requirement on srbm spinlock patch
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/
On 2017-05-09 04:24 AM, Daniel Vetter wrote:
On Mon, May 08, 2017 at 02:54:22PM -0400, Harry Wentland wrote:
Hi Daniel,
Thanks for taking the time to look at DC.
I had a couple more questions/comments in regard to the patch you posted on
IRC: http://paste.debian.net/plain/930704
My impressi
Reviewed-by: Felix Kuehling
Kent, can you make sure we pick this up with our next merge?
Thanks,
Felix
On 17-05-09 01:09 PM, Alex Deucher wrote:
> Signed-off-by: Alex Deucher
> ---
> drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 2 ++
> 1 file changed, 2 insertions(+)
>
> diff --git a/drivers/
Signed-off-by: Alex Deucher
---
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 416908a..b429f11 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
+++ b/driver
> -Original Message-
> From: Daniel Drake [mailto:dr...@endlessm.com]
> Sent: Tuesday, May 09, 2017 12:55 PM
> To: dri-devel; amd-gfx@lists.freedesktop.org; Deucher, Alexander
> Cc: Chris Chiu; Linux Upstreaming Team
> Subject: amdgpu display corruption and hang on AMD A10-9620P
>
> Hi,
>
Hi,
We are working with new laptops that have the AMD Bristol Ridge
chipset with this SoC:
AMD A10-9620P RADEON R5, 10 COMPUTE CORES 4C+6G
I think this is the Bristol Ridge chipset.
During boot, the display becomes unusable at the point where the
amdgpu driver loads. You can see at least two ho
On Wed, Apr 26, 2017 at 01:28:29PM +1000, Dave Airlie wrote:
> From: Dave Airlie
>
> Sync objects are a new toplevel drm object that contains a
> pointer to a fence. This fence can be updated via command
> submission ioctls via drivers.
>
> There is also a generic wait obj API modelled on the vulk
> -Original Message-
> From: amd-gfx [mailto:amd-gfx-boun...@lists.freedesktop.org] On Behalf
> Of Tom St Denis
> Sent: Tuesday, May 09, 2017 10:31 AM
> To: amd-gfx@lists.freedesktop.org
> Cc: StDenis, Tom
> Subject: [PATCH] drm/amd/amdgpu: Find correct min clocks for vega10
>
> Fixes: 1fe
Fixes: 1fe8f78d00589904b830a0ebd092c7810f625f00
Signed-off-by: Tom St Denis
---
drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c | 8 ++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
b/drivers/gpu/drm/amd/powerplay/hwmgr
On Mon, May 8, 2017 at 1:01 PM, Gustavo A. R. Silva
wrote:
> Local variable use_doorbell is assigned to a constant value and it is never
> updated again. Remove this variable and the dead code it guards.
>
> Addresses-Coverity-ID: 1401828
> Signed-off-by: Gustavo A. R. Silva
This code is already
[ML] if the job completes, the job’s sched fence callback will take
this spin_lock and remove itself from mirror_list, so we are still
safe to call amd_sched_job_kickout(), and it will do nothing if so
Indeed, but I still think that this is a bad approach because we then
reset the hardware withou
Hi all,
I'm seeing some very strange errors on Verde with CPU readback from
GART, and am pretty much out of ideas. Some help would be very much
appreciated.
The error manifests with the
GL45-CTS.gtf32.GL3Tests.packed_pixels.packed_pixels_pbo test on amdgpu,
but *not* on radeon. Here's what
You are missing that it is entirely possible that the job will complete while
we are trying to kick it out.
[ML] if the job completes, the job’s sched fence callback will take this
spin_lock and remove itself from mirror_list, so we are still safe to call
amd_sched_job_kickout(), and it will do
Am 08.05.2017 um 18:41 schrieb Gustavo A. R. Silva:
Local variable use_doorbell is assigned to a constant value and it is never
updated again. Remove this variable and the dead code it guards.
Addresses-Coverity-ID: 1401837
Signed-off-by: Gustavo A. R. Silva
Acked-by: Christian König for thi
What's the background of this change? E.g. why is it needed?
Christian.
Am 09.05.2017 um 10:14 schrieb Chunming Zhou:
Change-Id: I26d3a2794272ba94b25753d4bf367326d12f6939
Signed-off-by: Chunming Zhou
---
drivers/gpu/drm/amd/amdgpu/amdgpu.h | 1 +
drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c
Am 09.05.2017 um 10:14 schrieb Chunming Zhou:
The problem is that executing the jobs in the right order doesn't give you the
right result
because consecutive jobs executed on the same engine are pipelined.
In other words, job B does its buffer read before job A has written its result.
Change-Id:
Am 09.05.2017 um 10:33 schrieb Zhang, Jerry (Junwei):
On 05/09/2017 04:14 PM, Chunming Zhou wrote:
Change-Id: Iced391f5c24a79ad7aecae33e22ff089f68f1337
Signed-off-by: Chunming Zhou
Good catch!
Reviewed-by: Junwei Zhang
Indeed, nice catch.
Reviewed-by: Christian König
---
drivers/gp
Am 09.05.2017 um 03:26 schrieb Alex Xie:
Signed-off-by: Alex Xie
---
drivers/gpu/drm/amd/amdgpu/amdgpu_device.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_device.c
index 2704f88..4
Am 08.05.2017 um 22:43 schrieb Harry Wentland:
On 2017-05-08 02:32 PM, Alex Deucher wrote:
On Fri, May 5, 2017 at 10:27 AM, Alex Deucher
wrote:
Update the scratch reg for when the engine is hung.
Signed-off-by: Alex Deucher
ping on this series.
I'm not an expert on this and haven't had
On Tue, May 09, 2017 at 12:26:34PM +1000, Dave Airlie wrote:
> On 4 May 2017 at 18:16, Chris Wilson wrote:
> > On Wed, Apr 26, 2017 at 01:28:29PM +1000, Dave Airlie wrote:
> >> +#include
> >
> > I wonder if Daniel has already split everything used here into its own
> > headers?
>
> not sure, if
On 05/09/2017 04:19 PM, Chunming Zhou wrote:
Change-Id: I26d3a2794272ba94b25753d4bf367326d12f6939
Signed-off-by: Chunming Zhou
---
drivers/gpu/drm/amd/amdgpu/amdgpu.h | 1 +
drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 7 ++-
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 5 -
3 files c
On 05/09/2017 04:14 PM, Chunming Zhou wrote:
The problem is that executing the jobs in the right order doesn't give you the
right result
because consecutive jobs executed on the same engine are pipelined.
In other words, job B does its buffer read before job A has written its result.
Change-Id:
On 05/09/2017 04:14 PM, Chunming Zhou wrote:
Change-Id: Iced391f5c24a79ad7aecae33e22ff089f68f1337
Signed-off-by: Chunming Zhou
Good catch!
Reviewed-by: Junwei Zhang
---
drivers/gpu/drm/amd/scheduler/gpu_scheduler.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drive
On Mon, May 08, 2017 at 02:54:22PM -0400, Harry Wentland wrote:
> Hi Daniel,
>
> Thanks for taking the time to look at DC.
>
> I had a couple more questions/comments in regard to the patch you posted on
> IRC: http://paste.debian.net/plain/930704
>
> My impression is that this item is the most i
Change-Id: I26d3a2794272ba94b25753d4bf367326d12f6939
Signed-off-by: Chunming Zhou
---
drivers/gpu/drm/amd/amdgpu/amdgpu.h | 1 +
drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 7 ++-
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 5 -
3 files changed, 11 insertions(+), 2 deletions(-)
diff --gi
On Mon, May 08, 2017 at 03:50:36PM -0400, Harry Wentland wrote:
>
>
> On 2017-05-08 03:07 PM, Dave Airlie wrote:
> > On 9 May 2017 at 04:54, Harry Wentland wrote:
> > > Hi Daniel,
> > >
> > > Thanks for taking the time to look at DC.
> > >
> > > I had a couple more questions/comments in regard
Change-Id: I26d3a2794272ba94b25753d4bf367326d12f6939
Signed-off-by: Chunming Zhou
---
drivers/gpu/drm/amd/amdgpu/amdgpu.h | 1 +
drivers/gpu/drm/amd/amdgpu/amdgpu_ib.c | 6 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_job.c | 5 -
3 files changed, 10 insertions(+), 2 deletions(-)
diff --git
The problem is that executing the jobs in the right order doesn't give you the
right result
because consecutive jobs executed on the same engine are pipelined.
In other words, job B does its buffer read before job A has written its result.
Change-Id: I9065abf5bafbda7a92702ca11477315d25c9a347
Signe
Change-Id: Iced391f5c24a79ad7aecae33e22ff089f68f1337
Signed-off-by: Chunming Zhou
---
drivers/gpu/drm/amd/scheduler/gpu_scheduler.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
b/drivers/gpu/drm/amd/scheduler/gpu_scheduler.c
in
[ML] It's really not necessary; we have a spin_lock to protect the
mirror-list, so nothing will be messed up ...
You are missing that it is entirely possible that the job will complete
while we are trying to kick it out.
[ML] Why not touch the hardware fence at all? The original/bare-metal
gpu reset als