On 06/12/2017 07:08 AM, Maarten Lankhorst wrote:
Op 09-06-17 om 23:30 schreef Andrey Grodzovsky:
Problem:
While running the IGT kms_atomic_transition test suite I encountered
a hang in drmHandleEvent immediately following an atomic_commit.
After dumping the atomic state I realized that in this case
on user side will happen.
Fix:
Fail the atomic_commit explicitly and early in
drm_mode_atomic_commit, where such a problem can be identified.
Signed-off-by: Andrey Grodzovsky <andrey.grodzov...@amd.com>
---
drivers/gpu/drm/drm_atomic.c | 13 -
1 file changed, 12 insertions(+), 1 de
where such a problem can be identified.
v2:
Fix typos and extra newlines.
Change-Id: I3ee28ffae35fd1e8bfe553146c44da53da02e6f8
Signed-off-by: Andrey Grodzovsky <andrey.grodzov...@amd.com>
---
drivers/gpu/drm/drm_atomic.c | 11 ++-
1 file changed, 10 insertions(+), 1 deletion(-)
diff
Just a reminder.
Thanks.
On 06/09/2017 05:30 PM, Andrey Grodzovsky wrote:
Problem:
While running the IGT kms_atomic_transition test suite I encountered
a hang in drmHandleEvent immediately following an atomic_commit.
After dumping the atomic state I realized that in this case there was
not even one
Reviewed-by: Andrey Grodzovsky <andrey.grodzov...@amd.com>
Thanks,
Andrey
On 11/27/2017 09:57 AM, Gustavo A. R. Silva wrote:
dm_new_crtc_state->stream and disconnected_acrtc are being dereferenced
before they are null checked, hence there is a potential null pointer
dereference.
On 05/01/2018 10:35 AM, Oleg Nesterov wrote:
On 04/30, Andrey Grodzovsky wrote:
On 04/30/2018 12:00 PM, Oleg Nesterov wrote:
On 04/30, Andrey Grodzovsky wrote:
What about changing PF_SIGNALED to PF_EXITING in
drm_sched_entity_do_release
- if ((current->flags & PF_S
On 04/26/2018 08:34 AM, Andrey Grodzovsky wrote:
On 04/25/2018 08:01 PM, Eric W. Biederman wrote:
Andrey Grodzovsky <andrey.grodzov...@amd.com> writes:
On 04/25/2018 01:17 PM, Oleg Nesterov wrote:
On 04/25, Andrey Grodzovsky wrote:
here (drm_sched_entity_fini) is also a bad idea,
On 04/30/2018 08:08 AM, Christian König wrote:
Hi Eric,
sorry for the late response, was on vacation last week.
Am 26.04.2018 um 02:01 schrieb Eric W. Biederman:
Andrey Grodzovsky <andrey.grodzov...@amd.com> writes:
On 04/25/2018 01:17 PM, Oleg Nesterov wrote:
On 04/25, Andrey Grod
On 04/30/2018 12:00 PM, Oleg Nesterov wrote:
On 04/30, Andrey Grodzovsky wrote:
What about changing PF_SIGNALED to PF_EXITING in
drm_sched_entity_do_release
- if ((current->flags & PF_SIGNALED) && current->exit_code == SIGKILL)
+ if ((current->flags &
On 04/30/2018 12:25 PM, Eric W. Biederman wrote:
Christian König <ckoenig.leichtzumer...@gmail.com> writes:
Hi Eric,
sorry for the late response, was on vacation last week.
Am 26.04.2018 um 02:01 schrieb Eric W. Biederman:
Andrey Grodzovsky <andrey.grodzov...@amd.com> writes
On 04/30/2018 02:29 PM, Christian König wrote:
Am 30.04.2018 um 18:10 schrieb Andrey Grodzovsky:
On 04/30/2018 12:00 PM, Oleg Nesterov wrote:
On 04/30, Andrey Grodzovsky wrote:
What about changing PF_SIGNALED to PF_EXITING in
drm_sched_entity_do_release
- if ((current->fl
This allows device drivers to specify an additional badness for the OOM
when they allocate memory on behalf of userspace.
Signed-off-by: Andrey Grodzovsky <andrey.grodzov...@amd.com>
---
include/linux/fs.h | 1 +
1 file changed, 1 insertion(+)
diff --git a/include/linux/fs.h b/include
Hi, this series is a revised version of an RFC sent by Christian König
a few years ago. The original RFC can be found at
https://lists.freedesktop.org/archives/dri-devel/2015-September/089778.html
This is the same idea and I've just addressed his concern from the original RFC
and switched to a
Signed-off-by: Andrey Grodzovsky <andrey.grodzov...@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
b/drivers/gpu/drm/amd/amdgpu/amdgpu_drv.c
index 46a0c93..6a733cdc8 100644
--- a/drivers/gpu/d
Try to make better decisions about which process to kill, based on
per-file OOM badness
Signed-off-by: Andrey Grodzovsky <andrey.grodzov...@amd.com>
---
mm/oom_kill.c | 23 +++
1 file changed, 23 insertions(+)
diff --git a/mm/oom_kill.c b/mm/oom_kill.c
index 29f8555..825ed52
This patch gives the OOM killer another hint about which process is
holding how many resources.
Signed-off-by: Andrey Grodzovsky <andrey.grodzov...@amd.com>
---
drivers/gpu/drm/drm_file.c | 12
drivers/gpu/drm/drm_gem.c | 8
include/drm/drm_file.h | 4
3 files chang
That's definitely what I planned; I just didn't want to clutter the RFC with
multiple repeated changes.
Thanks,
Andrey
On 01/30/2018 04:24 AM, Daniel Vetter wrote:
On Thu, Jan 18, 2018 at 11:47:52AM -0500, Andrey Grodzovsky wrote:
Signed-off-by: Andrey Grodzovsky <andrey.grodzov...@amd.
On 03/12/2018 06:22 AM, David Binderman wrote:
hello there,
Source code is
for (i = 0; i < dc->res_pool->pipe_count; i++) {
        if (res_ctx->pipe_ctx[i].stream) {
                pipe_ctx = &res_ctx->pipe_ctx[i];
                *pipe_idx = i;
                break;
        }
}
Indeed
The check before alone is not enough for the case where another bug is
introduced so that
context->stream_count is out of sync with the actual number of streams
across the entire resource_context.
At least an assert should indeed be there.
Andrey
On 03/12/2018 07:06 PM, Li, Roman wrote:
There
On 04/24/2018 05:21 PM, Eric W. Biederman wrote:
Andrey Grodzovsky <andrey.grodzov...@amd.com> writes:
On 04/24/2018 03:44 PM, Daniel Vetter wrote:
On Tue, Apr 24, 2018 at 05:46:52PM +0200, Michel Dänzer wrote:
Adding the dri-devel list, since this is driver independent code.
On 2
On 04/24/2018 03:44 PM, Daniel Vetter wrote:
On Tue, Apr 24, 2018 at 05:46:52PM +0200, Michel Dänzer wrote:
Adding the dri-devel list, since this is driver independent code.
On 2018-04-24 05:30 PM, Andrey Grodzovsky wrote:
Avoid calling wait_event_killable when you are possibly being
On 04/24/2018 12:23 PM, Eric W. Biederman wrote:
Andrey Grodzovsky <andrey.grodzov...@amd.com> writes:
Avoid calling wait_event_killable when you are possibly being called
from the get_signal routine, since in that case you end up in a deadlock
where you are already blocked in signal proc
On 04/24/2018 12:14 PM, Eric W. Biederman wrote:
Andrey Grodzovsky <andrey.grodzov...@amd.com> writes:
If the ring is hanging for some reason, allow recovering the waiter
by sending a fatal signal.
Originally-by: David Panariti <david.panar...@amd.com>
Signed-off-by: Andre
On 04/24/2018 12:42 PM, Eric W. Biederman wrote:
Andrey Grodzovsky <andrey.grodzov...@amd.com> writes:
Currently calling wait_event_killable as part of an exiting process
will stall forever, since SIGKILL generation is suppressed by PF_EXITING.
In our particular case the AMDGPU driver wants to
On 04/25/2018 03:14 AM, Daniel Vetter wrote:
On Tue, Apr 24, 2018 at 05:37:08PM -0400, Andrey Grodzovsky wrote:
On 04/24/2018 05:21 PM, Eric W. Biederman wrote:
Andrey Grodzovsky <andrey.grodzov...@amd.com> writes:
On 04/24/2018 03:44 PM, Daniel Vetter wrote:
On Tue, Apr 24, 2018
On 04/24/2018 05:40 PM, Daniel Vetter wrote:
On Tue, Apr 24, 2018 at 05:02:40PM -0400, Andrey Grodzovsky wrote:
On 04/24/2018 03:44 PM, Daniel Vetter wrote:
On Tue, Apr 24, 2018 at 05:46:52PM +0200, Michel Dänzer wrote:
Adding the dri-devel list, since this is driver independent code
On 04/25/2018 09:55 AM, Oleg Nesterov wrote:
On 04/24, Eric W. Biederman wrote:
Let me respectfully suggest that the wait_event_killable on that code
path is wrong.
I tend to agree even if I don't know this code.
But if it can be called from f_op->release() then any usage of "current" or
this to the kernel mailing list mainly because of the first patch,
the 2 others are intended more for amd-...@lists.freedesktop.org and
are given here just to provide more context for the problem we try to solve.
Andrey Grodzovsky (3):
signals: Allow generation of SIGKILL to exiting task.
dr
it and avoid a process in D state.
Signed-off-by: Andrey Grodzovsky <andrey.grodzov...@amd.com>
---
kernel/signal.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/signal.c b/kernel/signal.c
index c6e4c83..c49c706 100644
--- a/kernel/signal.c
+++ b/kernel/si
Avoid calling wait_event_killable when you are possibly being called
from the get_signal routine, since in that case you end up in a deadlock
where you are already blocked in signal processing and trying to wait
on a new signal.
Signed-off-by: Andrey Grodzovsky <andrey.grodzov...@amd.com>
---
d
If the ring is hanging for some reason, allow recovering the waiter
by sending a fatal signal.
Originally-by: David Panariti <david.panar...@amd.com>
Signed-off-by: Andrey Grodzovsky <andrey.grodzov...@amd.com>
---
drivers/gpu/drm/amd/amdgpu/amdgpu_ctx.c | 14 ++
1 file
On 04/24/2018 11:46 AM, Michel Dänzer wrote:
Adding the dri-devel list, since this is driver independent code.
Thanks, so many addresses that this one slipped out...
On 2018-04-24 05:30 PM, Andrey Grodzovsky wrote:
Avoid calling wait_event_killable when you are possibly being called
from
if fence is not
signaled even when interrupted by a non-fatal signal.
Kind of a dma_fence_wait_killable, except that we don't have such an API
(maybe worth adding?)
Andrey
-Original Message-
From: Andrey Grodzovsky <andrey.grodzov...@amd.com>
Sent: Tuesday, April 24, 2018 11
On 04/24/2018 12:30 PM, Eric W. Biederman wrote:
"Panariti, David" <david.panar...@amd.com> writes:
Andrey Grodzovsky <andrey.grodzov...@amd.com> writes:
Kind of dma_fence_wait_killable, except that we don't have such API
(maybe worth adding ?)
Depends on how many pla
On 04/25/2018 01:17 PM, Oleg Nesterov wrote:
On 04/25, Andrey Grodzovsky wrote:
here (drm_sched_entity_fini) is also a bad idea, but we still want to be
able to exit immediately
and not wait for GPU jobs completion when the reason for reaching this
code is a KILL signal to the user process
On 04/25/2018 08:01 PM, Eric W. Biederman wrote:
Andrey Grodzovsky <andrey.grodzov...@amd.com> writes:
On 04/25/2018 01:17 PM, Oleg Nesterov wrote:
On 04/25, Andrey Grodzovsky wrote:
here (drm_sched_entity_fini) is also a bad idea, but we still want to be
able to exit immed
On 04/25/2018 04:55 PM, Eric W. Biederman wrote:
Andrey Grodzovsky <andrey.grodzov...@amd.com> writes:
On 04/24/2018 12:30 PM, Eric W. Biederman wrote:
"Panariti, David" <david.panar...@amd.com> writes:
Andrey Grodzovsky <andrey.grodzov...@amd.com> writes:
Kin
On 04/26/2018 11:57 AM, Eric W. Biederman wrote:
Andrey Grodzovsky <andrey.grodzov...@amd.com> writes:
On 04/26/2018 08:34 AM, Andrey Grodzovsky wrote:
On 04/25/2018 08:01 PM, Eric W. Biederman wrote:
Andrey Grodzovsky <andrey.grodzov...@amd.com> writes:
On 04/25/2018 01
We actually do; it's currently on hold since I switched to working on
surprise insertion for some time, but it would still be helpful if you
could give it a try.
https://cgit.freedesktop.org/~agrodzov/linux/log/?h=drm-misc-next
Andrey
On 2021-03-15 11:01 p.m., Nicholas Johnson wrote:
On