On 9/3/20 1:32 PM, Chris Wilson wrote:
Quoting Thomas Hellström (Intel) (2020-09-03 10:50:45)
On 9/2/20 4:02 PM, Thomas Hellström (Intel) wrote:
Hi, Chris,
On 8/26/20 3:28 PM, Chris Wilson wrote:
Use the wait_queue_entry.flags to denote the special fence behaviour
(flattening continuations
Is there something specific
you want me to change for an R-B?
Thanks,
Thomas
___
Intel-gfx mailing list
Intel-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/intel-gfx
))
return -EINVAL;
- pending = 0;
+ pending = I915_SW_FENCE_FLAG_FENCE;
if (!wq) {
wq = kmalloc(sizeof(*wq), gfp);
if (!wq) {
On 9/2/20 4:02 PM, Thomas Hellström (Intel) wrote:
Hi, Chris,
On 8/26/20 3:28 PM, Chris Wilson wrote:
Use the wait_queue_entry.flags to denote the special fence behaviour
(flattening continuations along fence chains, and for propagating
errors) rather than trying to detect ordinary waiters
;userptr.lock?
/Thomas
ther checks in the function.
Furthermore the sleep is uninterruptible. We probably need a core change
to get this right.
pvec = kvmalloc_array(num_pages, sizeof(struct page *), GFP_KERNEL);
);
if (eb.trampoline)
i915_vma_unpin(eb.trampoline);
WARN_ON(err == -EDEADLK);
/Thomas
On 10/19/20 9:30 AM, Thomas Hellström (Intel) wrote:
On 10/16/20 12:43 PM, Maarten Lankhorst wrote:
Instead of doing what we do currently, which will never work with
PROVE_LOCKING, do the same as AMD does, and something similar to
relocation slowpath. When all locks are dropped, we acquire
On 10/19/20 10:10 AM, Maarten Lankhorst wrote:
Op 19-10-2020 om 09:30 schreef Thomas Hellström (Intel):
On 10/16/20 12:43 PM, Maarten Lankhorst wrote:
Instead of doing what we do currently, which will never work with
PROVE_LOCKING, do the same as AMD does, and something similar to
relocation
the ww dance.
Changes since v1:
- Do not use ww locking in i915_gem_set_caching_ioctl (Thomas).
Signed-off-by: Maarten Lankhorst
Reviewed-by: Thomas Hellström
the order of eb_parse slightly, to ensure
we pass ww at a point where we could still handle -EDEADLK safely.
Signed-off-by: Maarten Lankhorst
Reviewed-by: Thomas Hellström
means we no longer need to worry about a
potential -EDEADLK at a point where we are ready to submit.
Signed-off-by: Maarten Lankhorst
Reviewed-by: Thomas Hellström
On 9/22/20 11:12 AM, Tvrtko Ursulin wrote:
On 17/09/2020 19:59, Thomas Hellström (Intel) wrote:
From: Thomas Hellström
With the huge number of sites where multiple-object locking is
needed in the driver, it becomes difficult to avoid recursive
ww_acquire_ctx initialization, and the function
Fix that.
ww.c
b/drivers/gpu/drm/i915/i915_gem_ww.c
new file mode 100644
index ..3490b72cf613
--- /dev/null
+++ b/drivers/gpu/drm/i915/i915_gem_ww.c
@@ -0,0 +1,72 @@
+// SPDX-License-Identifier: MIT
+/*
+ * Copyright © 2020 Intel Corporation
+ */
+#include
+#include
+#include "i915_gem_ww.h"
/drm/i915/i915_gem_ww.c
index 3490b72cf613..6247af1dba87 100644
--- a/drivers/gpu/drm/i915/i915_gem_ww.c
+++ b/drivers/gpu/drm/i915/i915_gem_ww.c
@@ -1,10 +1,12 @@
// SPDX-License-Identifier: MIT
/*
- * Copyright © 2020 Intel Corporation
+ * Copyright © 2019 Intel Corporation
*/
+#include
/i915_gem_ww.c
+++ b/drivers/gpu/drm/i915/i915_gem_ww.c
@@ -1,10 +1,12 @@
// SPDX-License-Identifier: MIT
/*
- * Copyright © 2020 Intel Corporation
+ * Copyright © 2019 Intel Corporation
*/
+#include
#include
#include
#include "i915_gem_ww.h"
+#include "i915_globals.h"
ack of the current implementation is the use of the hash
table and corresponding performance cost, but as mentioned in
patch 2, a core variant could probably do this in a much more
efficient way.
-...@lists.freedesktop.org
Cc: intel-gfx@lists.freedesktop.org
Cc: Chris Wilson
Cc: Maarten Lankhorst
Cc: Christian König
Signed-off-by: Daniel Vetter
LGTM. Perhaps add some in-code documentation on how the new functions
are meant to be used.
Otherwise for patch 2 and 3,
Reviewed-by: Thomas Hellstrom
On 10/27/20 5:25 PM, Thomas Hellström (Intel) wrote:
+
+ if (WARN_ON(!i915_gem_object_trylock(tl->hwsp_ggtt->obj)))
+ return -EBUSY;
I think we should either annotate this properly as an isolated lock,
or allow a silent -EBUSY.
This is done in a controlled selftest where w
urn -EBUSY;
I think we should either annotate this properly as an isolated lock, or
allow a silent -EBUSY.
/Thomas
should.
Signed-off-by: Maarten Lankhorst
Reviewed-by: Thomas Hellström
On 10/27/20 3:31 PM, Maarten Lankhorst wrote:
Op 27-10-2020 om 12:03 schreef Thomas Hellström (Intel):
On 10/15/20 1:25 PM, Maarten Lankhorst wrote:
We're starting to require the reservation lock for pinning,
so wait until we have that.
Update the selftests to handle this correctly
On 2020-07-21 15:59, Christian König wrote:
Am 21.07.20 um 12:47 schrieb Thomas Hellström (Intel):
...
Yes, we can't do magic. As soon as an indefinite batch makes it to
such hardware we've lost. But since we can break out while the batch
is stuck in the scheduler waiting, what I believe we
On 2020-07-22 13:39, Daniel Vetter wrote:
On Wed, Jul 22, 2020 at 12:31 PM Thomas Hellström (Intel)
wrote:
On 2020-07-22 11:45, Daniel Vetter wrote:
On Wed, Jul 22, 2020 at 10:05 AM Thomas Hellström (Intel)
wrote:
On 2020-07-22 09:11, Daniel Vetter wrote:
On Wed, Jul 22, 2020 at 8:45 AM
return it;
+ }
+#endif
+ }
BUILD_BUG_ON(offsetof(typeof(*it), node));
On 2020-07-22 11:45, Daniel Vetter wrote:
On Wed, Jul 22, 2020 at 10:05 AM Thomas Hellström (Intel)
wrote:
On 2020-07-22 09:11, Daniel Vetter wrote:
On Wed, Jul 22, 2020 at 8:45 AM Thomas Hellström (Intel)
wrote:
On 2020-07-22 00:45, Dave Airlie wrote:
On Tue, 21 Jul 2020 at 18:47
On 2020-07-22 09:11, Daniel Vetter wrote:
On Wed, Jul 22, 2020 at 8:45 AM Thomas Hellström (Intel)
wrote:
On 2020-07-22 00:45, Dave Airlie wrote:
On Tue, 21 Jul 2020 at 18:47, Thomas Hellström (Intel)
wrote:
On 7/21/20 9:45 AM, Christian König wrote:
Am 21.07.20 um 09:41 schrieb Daniel
On 2020-07-22 00:45, Dave Airlie wrote:
On Tue, 21 Jul 2020 at 18:47, Thomas Hellström (Intel)
wrote:
On 7/21/20 9:45 AM, Christian König wrote:
Am 21.07.20 um 09:41 schrieb Daniel Vetter:
On Mon, Jul 20, 2020 at 01:15:17PM +0200, Thomas Hellström (Intel)
wrote:
Hi,
On 7/9/20 2:33 PM
insertions(+), 11 deletions(-)
LGTM. Reviewed-by: Thomas Hellström
On 2020-07-22 16:23, Christian König wrote:
Am 22.07.20 um 16:07 schrieb Daniel Vetter:
On Wed, Jul 22, 2020 at 3:12 PM Thomas Hellström (Intel)
wrote:
On 2020-07-22 14:41, Daniel Vetter wrote:
I'm pretty sure there's more bugs, I just haven't heard from them yet.
Also due to the opt
h feedback aside from amdgpu and
intel, and those two drivers pretty much need to sort out their memory
fence issues anyway (because of userptr and stuff like that).
The only other issues outside of these two drivers I'm aware of:
- various scheduler drivers doing allocations in the drm/schedule
,
fix that up.
Signed-off-by: Maarten Lankhorst
Reviewed-by: Tvrtko Ursulin
LGTM. Reviewed-by: Thomas Hellström
parsing in a separate function which can get called from execbuf
relocation fast and slowpath.
Signed-off-by: Maarten Lankhorst
Reviewed-by: Thomas Hellström
age-table-entry splitting may happen under the i_mmap_lock
from unmap_mapping_range() it might be worth figuring out how new page
directory pages are allocated, though.
/Thomas
pinning pages, triggers a shrinker on the
other driver
5. Other driver shrinker blocks on the second DMA-fence,
6. Deadlock.
Or do I misread the i915 userptr code?
/Thomas
On 7/28/20 4:50 PM, Chris Wilson wrote:
Quoting Thomas Hellström (Intel) (2020-07-27 10:24:24)
Hi, Chris,
It appears to me like this series is doing a lot of different things:
- Various optimizations
- Locking rework
- Adding schedulers
- Other misc fixes
Could you please separate out
via its objects. By only requiring that lock as the context
is activated, it is both reduced in frequency and reduced in duration
(as compared to execbuf).
Signed-off-by: Chris Wilson
Reviewed-by: Thomas Hellström
, creating a small number of queues for each context to
limit the number of concurrent tasks.
The implementation relies on only scheduling one unbind operation per
vma as we use the unbound vma->node location to track the stale PTE.
Closes: https://gitlab.freedesktop.org/drm/intel/issues/1402
Signed-
On 7/15/20 1:51 PM, Chris Wilson wrote:
Obsolete, last user removed.
Signed-off-by: Chris Wilson
Reviewed-by: Thomas Hellström
On 7/15/20 1:51 PM, Chris Wilson wrote:
Pull the cmdparser allocations in to the reservation phase, and then
they are included in the common vma pinning pass.
Signed-off-by: Chris Wilson
Reviewed-by: Thomas Hellström
On 7/28/20 5:08 PM, Chris Wilson wrote:
Quoting Thomas Hellström (Intel) (2020-07-27 19:19:19)
On 7/15/20 1:51 PM, Chris Wilson wrote:
It is illegal to wait on another vma while holding the vm->mutex, as
that easily leads to ABBA deadlocks (we wait on a second vma that waits
on
directly. It is done in i915_gem_ww_ctx_fini.
Changes since v1:
- Change ww_ctx and obj order in locking functions (Jonas Lahtinen)
Signed-off-by: Maarten Lankhorst
Reviewed-by: Thomas Hellström
On 7/31/20 3:28 PM, Chris Wilson wrote:
Quoting Thomas Hellström (Intel) (2020-07-31 10:03:59)
On 7/15/20 1:50 PM, Chris Wilson wrote:
Currently, if an error is raised we always call the cleanup locally
[and skip the main work callback]. However, some future users
Could you add an example
On 8/10/20 12:31 PM, Maarten Lankhorst wrote:
Signed-off-by: Maarten Lankhorst
Commit message, please.
Otherwise, looks good.
, for intel_context_pin_ww().
Signed-off-by: Maarten Lankhorst
Reviewed-by: Thomas Hellström
ce);
+ ce->ops->post_unpin(ce);
What's protecting ops->unpin() here, running concurrently with ops->pin
in __intel_context_do_pin()? Do the ops functions have to implement
their own locking if needed?
Otherwise LGTM
Reviewed-by: Thomas Hellström
all selftests.
Changes since v1:
- Add intel_engine_pm_get/put() calls to fix use-after-free when using
intel_engine_get_pool().
Signed-off-by: Maarten Lankhorst
LGTM.
Reviewed-by: Thomas Hellström
now.
Signed-off-by: Maarten Lankhorst
Ugh. We should probably fix this properly as soon as possible to avoid
copy-pasting of self-tests that aren't fixed yet.
For the hack:
Acked-by: Thomas Hellström
.
Changes since v11:
- Remove relocation chaining, pain to make it work.
Signed-off-by: Maarten Lankhorst
Reviewed-by: Thomas Hellström
fini();
+
Hmm. Didn't we keep an intel_context_pin() that does exactly the above
without recoding the whole ww transaction? Or do you plan to remove that?
With that taken into account,
Reviewed-by: Thomas Hellström
-by: Maarten Lankhorst
LGTM.
Reviewed-by: Thomas Hellström
x_backoff();
+ if (!ret)
+ goto retry;
+ }
+ i915_gem_ww_ctx_fini();
Why a ww transaction for a single lock?
/Thomas
ion.
+ if (!err && ce->ring->vma->obj)
+ err = i915_gem_object_lock(ce->ring->vma->obj, ww);
+ if (!err && ce->state)
+ err = i915_gem_object_lock(ce->state->obj, ww);
Could these three locks be made interruptible?
+int i915_ggtt_pin(struct i915_vma *vma, struct i915_gem_ww_ctx *ww,
+ u32 align, unsigned int flags);
static inline int i915_vma_pin_count(const struct i915_vma *vma)
{
/Thomas
On 8/12/20 9:32 PM, Thomas Hellström (Intel) wrote:
+ if (!err && ce->ring->vma->obj)
+ err = i915_gem_object_lock(ce->ring->vma->obj, ww);
+ if (!err && ce->state)
+ err = i915_gem_object_lock(ce->state->obj, ww);
Could these
);
- if (err)
- return err;
-
if (eb->args->flags & I915_EXEC_GEN7_SOL_RESET) {
err = i915_reset_gen7_sol_offsets(eb->request);
if (err)
@@ -3636,6 +3616,9 @@ i915_gem_do_execbuffer(struct drm_device *dev,
goto err_engin
xing series
follow the locking rework. That async work introduces a bunch of code
complexity and it would be beneficial to see a discussion of the
tradeoffs and how it aligns with the upstream proposed dma-fence
annotations
Thanks,
Thomas
have a renderstate object.
Signed-off-by: Maarten Lankhorst
Reviewed-by: Thomas Hellström
#define __EXEC_OBJECT_NO_RESERVE BIT(31)
+
int __must_check __i915_vma_move_to_active(struct i915_vma *vma,
struct i915_request *rq);
int __must_check i915_vma_move_to_active(struct i915_vma *vma,
/i915_gem_context.c | 22 +++-
2 files changed, 48 insertions(+), 29 deletions(-)
Reviewed-by: Thomas Hellström
On 6/23/20 1:22 PM, Thomas Hellström (Intel) wrote:
Hi, Chris,
On 6/22/20 11:59 AM, Chris Wilson wrote:
In order to actually handle eviction and what not, we need to process
all the objects together under a common lock, reservation_ww_class. As
such, do a memory reservation pass after looking
reservation_ww_class. And all memory
pinning seems to be in the fence critical path as well?
/Thomas
eb->mm_fence)
+ return -ENOMEM;
Where are the proxy fence functions defined?
Thanks,
Thomas
On 6/23/20 4:01 PM, Chris Wilson wrote:
Quoting Thomas Hellström (Intel) (2020-06-23 13:57:06)
On 6/23/20 1:22 PM, Thomas Hellström (Intel) wrote:
Hi, Chris,
On 6/22/20 11:59 AM, Chris Wilson wrote:
In order to actually handle eviction and what not, we need to process
all the objects
On 6/23/20 6:17 PM, Chris Wilson wrote:
Quoting Thomas Hellström (Intel) (2020-06-23 16:09:08)
You can't take the dma_resv_lock inside a fence critical section.
I much prefer the alternative interpretation, you can't wait inside a
dma_resv_lock.
-Chris
I respect your point of view, although
On 6/23/20 12:03 PM, Chris Wilson wrote:
Quoting Thomas Hellström (Intel) (2020-06-23 10:33:20)
Hi, Chris!
On 6/22/20 11:59 AM, Chris Wilson wrote:
In order to actually handle eviction and what not, we need to process
all the objects together under a common lock, reservation_ww_class
On 6/23/20 6:36 PM, Chris Wilson wrote:
Quoting Thomas Hellström (Intel) (2020-06-23 12:22:11)
Hi, Chris,
On 6/22/20 11:59 AM, Chris Wilson wrote:
In order to actually handle eviction and what not, we need to process
all the objects together under a common lock, reservation_ww_class
On 6/23/20 11:15 PM, Chris Wilson wrote:
Quoting Thomas Hellström (Intel) (2020-06-23 21:31:38)
On 6/23/20 8:41 PM, Chris Wilson wrote:
Quoting Thomas Hellström (Intel) (2020-06-23 19:21:28)
On 6/23/20 6:36 PM, Chris Wilson wrote:
Quoting Thomas Hellström (Intel) (2020-06-23 12:22:11)
Hi
to),
perhaps add a debug warning if, during object destruction, this isn't an
empty list head?
Other than that, this patch looks good to me.
Reviewed-by: Thomas Hellström
-isolated
/Thomas
Hi, Chris,
On 6/24/20 9:43 AM, Chris Wilson wrote:
Quoting Thomas Hellström (Intel) (2020-06-24 08:10:43)
Hi, Maarten,
On 6/23/20 4:28 PM, Maarten Lankhorst wrote:
i915_gem_ww_ctx is used to lock all gem bo's for pinning and memory
eviction. We don't use it yet, but lets start adding
On 6/24/20 10:08 AM, Chris Wilson wrote:
Quoting Thomas Hellström (Intel) (2020-06-24 06:42:33)
On 6/23/20 11:15 PM, Chris Wilson wrote:
Quoting Thomas Hellström (Intel) (2020-06-23 21:31:38)
On 6/23/20 8:41 PM, Chris Wilson wrote:
Quoting Thomas Hellström (Intel) (2020-06-23 19:21:28
e the normal rules for w/w handling can be
used for eb parsing as well. :)
~Maarten
I meant the changed assignment of the batch variable?
/Thomas
+ sf = 20 - 1;
+
+ return slice << sf;
+}
+
Is this the same deadline calculation as used in the BFS? Could you
perhaps add a pointer to some documentation?
/Thomas
Hi,
On 6/16/20 12:12 PM, Chris Wilson wrote:
Quoting Thomas Hellström (Intel) (2020-06-16 10:07:28)
Hi, Chris,
Some comments and questions:
On 6/8/20 12:21 AM, Chris Wilson wrote:
The first "scheduler" was a topographical sorting of requests into
priority order. The execu
(+), 27 deletions(-)
LGTM. Reviewed-by: Thomas Hellström
-by: Thomas Hellström
rr_parse;
}
batch = vma;
+ } else {
+ batch = eb.batch->vma;
}
Hmm, it's late Friday afternoon so that might be the cause, but I fail
to see what the above hunk is trying to achieve?
/* All GPU relocation batches must be
On 6/23/20 8:41 PM, Chris Wilson wrote:
Quoting Thomas Hellström (Intel) (2020-06-23 19:21:28)
On 6/23/20 6:36 PM, Chris Wilson wrote:
Quoting Thomas Hellström (Intel) (2020-06-23 12:22:11)
Hi, Chris,
On 6/22/20 11:59 AM, Chris Wilson wrote:
In order to actually handle eviction and what
case would perhaps be to call madvise(MADV_DONTNEED)
on a subpart of a transhuge page. That would IIRC trigger a page split
and interesting mmu notifier calls....
Thanks,
Thomas
Reviewed-by: Thomas Hellstrom
Kuoppala
Cc: Thomas Hellstrom
Cc: linux-me...@vger.kernel.org
Cc: linaro-mm-...@lists.linaro.org
Cc: linux-r...@vger.kernel.org
Cc: amd-...@lists.freedesktop.org
Cc: intel-gfx@lists.freedesktop.org
Cc: Chris Wilson
Cc: Maarten Lankhorst
Cc: Christian König
Signed-off-by: Daniel Vetter
-...@lists.freedesktop.org
Cc: intel-gfx@lists.freedesktop.org
Cc: Chris Wilson
Cc: Maarten Lankhorst
Cc: Christian König
Signed-off-by: Daniel Vetter
---
Documentation/driver-api/dma-buf.rst | 6
drivers/dma-buf/dma-fence.c | 41
drivers/dma-buf/dma-resv.c
Hellstrom
Cc: linux-me...@vger.kernel.org
Cc: linaro-mm-...@lists.linaro.org
Cc: linux-r...@vger.kernel.org
Cc: amd-...@lists.freedesktop.org
Cc: intel-gfx@lists.freedesktop.org
Cc: Chris Wilson
Cc: Maarten Lankhorst
Cc: Christian König
Signed-off-by: Daniel Vetter
---
Documentation/driver
On 7/15/20 1:50 PM, Chris Wilson wrote:
Remove the stub i915_vma_pin() used for incrementally pining objects for
s/pining/pinning/
On 7/23/20 6:09 PM, Thomas Hellström (Intel) wrote:
On 2020-07-15 13:50, Chris Wilson wrote:
Our timeline lock is our defence against a concurrent execbuf
interrupting our request construction. We need to hold it throughout or,
for example, a second thread may interject a relocation request
pping operation blocks on the notifier_lock
in the mmu notifier?
/Thomas
If you understand amdgpu better please share some insights. I
certainly only looked at it briefly today so may be wrong.
Regards,
Tvrtko
)
} while (!err);
mutex_unlock(>mutex);
+ /* Wait for all barriers to complete (remote CPU) before we check */
+ i915_active_unlock_wait(>active);
return err;
}
Reviewed-by: Thomas Hellström
On 7/28/20 1:17 PM, Thomas Hellström (Intel) wrote:
On 7/16/20 5:53 PM, Tvrtko Ursulin wrote:
On 15/07/2020 16:43, Maarten Lankhorst wrote:
Op 15-07-2020 om 13:51 schreef Chris Wilson:
Our goal is to pull all memory reservations (next iteration
obj->ops->get_pages()) under a ww