On 30.08.24 at 03:22, Li Zetao wrote:
When a value needs to be constrained to a certain interval, using clamp()
makes the code easier to understand than min(max()).
Signed-off-by: Li Zetao
This patch and #1 are a nice cleanup and Reviewed-by: Christian König
But as Alex also pointed out
On 22.07.24 at 23:01, Danilo Krummrich wrote:
On 7/18/24 7:53 AM, Ben Skeggs wrote:
On 19/7/24 02:58, Danilo Krummrich wrote:
Hi Christian,
Those three patches should unblock your series to use GEM references
instead of TTM ones.
@Lyude, Dave: Can you please double check?
Hi Danilo,
Th
: ab9ccb96a6e6 ("drm/nouveau: use prime helpers")
Signed-off-by: Danilo Krummrich
Thanks for looking into this, feel free to add Reviewed-by: Christian
König to this patch.
But since patch #3 especially is not something I can fully judge the
correctness of, I can only give an Acked-by to the
.
Fixes: 141b15e59175 ("drm/nouveau: move io_reserve_lru handling into the driver v5")
Cc: Christian König
Signed-off-by: Dave Airlie
Acked-by: Christian König
---
drivers/gpu/drm/nouveau/nouveau_bo.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/drivers/gpu/drm/nouveau/nouveau
space functionality as it is for now, only add
new handling for ttm_bo_validate as suggested by Thomas
v5: fix bug pointed out by Matthew
Signed-off-by: Christian König
Reviewed-by: Zack Rusin
v3
---
drivers/gpu/drm/ttm/ttm_bo.c | 231 +
drivers/gpu/dr
Try to fill up VRAM as well by setting the busy flag on GTT allocations.
This fixes the issue that when VRAM was evacuated for suspend it's never
filled up again unless the application is restarted.
Signed-off-by: Christian König
Reviewed-by: Zack Rusin
---
drivers/gpu/drm/amd/a
On 27.02.24 at 19:14, Dmitry Osipenko wrote:
Hello,
Thank you for the patches!
On 2/27/24 13:14, Thomas Zimmermann wrote:
Dma-buf locking semantics require the caller of pin and unpin to hold
the buffer's reservation lock. Fix DRM to adhere to the specs. This
makes it possible to fix the locking in DRM
Nice, looks totally valid to me.
Feel free to add to patch #2, #9, #10, #11 and #12 Reviewed-by:
Christian König
And Acked-by: Christian König to the rest.
Regards,
Christian.
On 27.02.24 at 11:14, Thomas Zimmermann wrote:
Dma-buf locking semantics require the caller of pin and unpin to
On 27.02.24 at 09:12, Matthew Auld wrote:
On 26/02/2024 20:21, Thomas Hellström wrote:
Hi, Christian
On Fri, 2024-02-23 at 15:30 +0100, Christian König wrote:
On 06.02.24 at 13:56, Christian König wrote:
On 06.02.24 at 13:53, Thomas Hellström wrote:
Hi, Christian,
On Fri, 2024-01-26 at
On 06.02.24 at 13:56, Christian König wrote:
On 06.02.24 at 13:53, Thomas Hellström wrote:
Hi, Christian,
On Fri, 2024-01-26 at 15:09 +0100, Christian König wrote:
Previously we would never try to move a BO into the preferred
placements
once it had landed in a busy placement, since those
On 06.02.24 at 13:53, Thomas Hellström wrote:
Hi, Christian,
On Fri, 2024-01-26 at 15:09 +0100, Christian König wrote:
Previously we would never try to move a BO into the preferred
placements
once it had landed in a busy placement, since those were considered
compatible.
Rework the whole
Try to fill up VRAM as well by setting the busy flag on GTT allocations.
This fixes the issue that when VRAM was evacuated for suspend it's never
filled up again unless the application is restarted.
Signed-off-by: Christian König
Reviewed-by: Zack Rusin
---
drivers/gpu/drm/amd/a
space functionality as it is for now, only add
new handling for ttm_bo_validate as suggested by Thomas
Signed-off-by: Christian König
Reviewed-by: Zack Rusin
v3
---
drivers/gpu/drm/ttm/ttm_bo.c | 231 +
drivers/gpu/drm/ttm/ttm_resource.c | 16 +-
include/dr
Hi guys,
so I pushed the first few patches from this series. I hope that I
correctly managed to resolve the silent Xe merge conflict in drm-tip,
but it would be nice if somebody could double-check.
Then for the two remaining patches I've implemented most of what
Thomas suggested, e.g. the existing funct
On 24.01.24 at 12:04, Alistair Popple wrote:
"Zhou, Xianrong" writes:
[AMD Official Use Only - General]
The vmf_insert_pfn_prot could cause unnecessary double faults on a
device pfn, because currently vmf_insert_pfn_prot does not
make the pfn writable, so the pte entry is normally read-o
On 24.01.24 at 03:43, Zhou, Xianrong wrote:
[AMD Official Use Only - General]
The vmf_insert_pfn_prot could cause unnecessary double faults on a
device pfn, because currently vmf_insert_pfn_prot does not make
the pfn writable, so the pte entry is normally read-only or dirty
catching.
What?
On 23.01.24 at 09:33, Zhou, Xianrong wrote:
[AMD Official Use Only - General]
The vmf_insert_pfn_prot could cause unnecessary double faults on a
device pfn, because currently vmf_insert_pfn_prot does not make
the pfn writable, so the pte entry is normally read-only or dirty
catching.
What?
On 22.01.24 at 04:32, Xianrong Zhou wrote:
The vmf_insert_pfn_prot could cause unnecessary double faults
on a device pfn, because currently vmf_insert_pfn_prot does
not make the pfn writable, so the pte entry is normally read-only
or dirty catching.
What? How did you get to this conclusion?
On 12.01.24 at 13:51, Christian König wrote:
Hi guys,
just a gentle ping on this.
Zack any more comments for the VMWGFX parts?
Thanks,
Christian.
same as the last time. Things I've changed:
Implemented the requirements from Zack to correctly fill in the busy
placements for V
Try to fill up VRAM as well by setting the busy flag on GTT allocations.
This fixes the issue that when VRAM was evacuated for suspend it's never
filled up again unless the application is restarted.
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 6
Only convert it to ENOMEM in ttm_bo_validate.
This allows ttm_bo_validate to distinguish between an out-of-memory
situation and simply being out of space in a placement domain.
Signed-off-by: Christian König
---
drivers/gpu/drm/ttm/ttm_bo.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff
then
use the busy placement if that didn't work.
The drawback is that we now always try the idle placement first for each
validation, which might cause some additional CPU overhead on overcommit.
v2: fix kerneldoc warning and coding style
v3: take care of XE as well
Signed-off-by: Christian
out by checkpatch
v5: cleanup some rebase problems with VMWGFX
v6: implement some missing VMWGFX functionality pointed out by Zack,
rename the flags as suggested by Michel, rebase on drm-tip and
adjust XE as well
Signed-off-by: Christian König
Signed-off-by: Somalapuram Amaranath
Seems to be unused.
Signed-off-by: Christian König
---
drivers/gpu/drm/vmwgfx/vmwgfx_drv.h| 1 -
drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c | 28 --
2 files changed, 29 deletions(-)
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
b/drivers/gpu/drm/vmwgfx
Hi guys,
same as the last time. Things I've changed:
Implemented the requirements from Zack to correctly fill in the busy
placements for VMWGFX.
Renamed the placement flags to desired and fallback as suggested by
Michel.
Rebased on drm-tip instead of drm-misc-next and fixed XE as well.
Please
On 09.01.24 at 09:14, Thomas Hellström wrote:
Hi, Christian
On Tue, 2024-01-09 at 08:47 +0100, Christian König wrote:
Hi guys,
I've been trying to make this functionality a bit more useful for years now,
since we have multiple reports that the behavior of drivers can be suboptimal
when multiple place
Try to fill up VRAM as well by setting the busy flag on GTT allocations.
This fixes the issue that when VRAM was evacuated for suspend it's never
filled up again unless the application is restarted.
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 6
Only convert it to ENOMEM in ttm_bo_validate.
Signed-off-by: Christian König
---
drivers/gpu/drm/ttm/ttm_bo.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index edf10618fe2b..8c1eaa74fa21 100644
--- a/drivers
then
use the busy placement if that didn't work.
The drawback is that we now always try the idle placement first for each
validation, which might cause some additional CPU overhead on overcommit.
v2: fix kerneldoc warning and coding style
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/a
out by checkpatch
v5: cleanup some rebase problems with VMWGFX
Signed-off-by: Christian König
Signed-off-by: Somalapuram Amaranath
---
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 6 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c| 11 +---
drivers/gpu/drm/drm_gem_vram_helper.c | 2
Hi guys,
I've been trying to make this functionality a bit more useful for years now,
since we have multiple reports that the behavior of drivers can be suboptimal
when multiple placements are given.
So basically instead of hacking around the TTM behavior in the driver
once more I've gone ahead and changed the id
Seems to be unused.
Signed-off-by: Christian König
---
drivers/gpu/drm/vmwgfx/vmwgfx_drv.h| 1 -
drivers/gpu/drm/vmwgfx/vmwgfx_ttm_buffer.c | 28 --
2 files changed, 29 deletions(-)
diff --git a/drivers/gpu/drm/vmwgfx/vmwgfx_drv.h
b/drivers/gpu/drm/vmwgfx
On 04.01.24 at 21:02, Zack Rusin wrote:
On Thu, Jan 4, 2024 at 10:05 AM Christian König
wrote:
From: Somalapuram Amaranath
Instead of a list of separate busy placements, add flags which indicate
that a placement should only be used when there is room or if we need to
evict.
v2: add missing
Try to fill up VRAM as well by setting the busy flag on GTT allocations.
This fixes the issue that when VRAM was evacuated for suspend it's never
filled up again unless the application is restarted.
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 6
then
use the busy placement if that didn't work.
The drawback is that we now always try the idle placement first for each
validation, which might cause some additional CPU overhead on overcommit.
v2: fix kerneldoc warning and coding style
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/a
Only convert it to ENOMEM in ttm_bo_validate.
Signed-off-by: Christian König
---
drivers/gpu/drm/ttm/ttm_bo.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index edf10618fe2b..8c1eaa74fa21 100644
--- a/drivers
out by checkpatch
Signed-off-by: Christian König
Signed-off-by: Somalapuram Amaranath
---
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 6 +-
drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c| 11 +--
drivers/gpu/drm/drm_gem_vram_helper.c | 2 -
drivers/gpu/drm/i915/gem/i915_gem_ttm.c| 37
Hi guys,
I've been trying to make this functionality a bit more useful for years now,
since we have multiple reports that the behavior of drivers can be suboptimal
when multiple placements are given.
So basically instead of hacking around the TTM behavior in the driver
once more I've gone ahead and changed the id
Only convert it to ENOMEM in ttm_bo_validate.
Signed-off-by: Christian König
---
drivers/gpu/drm/ttm/ttm_bo.c | 5 -
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/drivers/gpu/drm/ttm/ttm_bo.c b/drivers/gpu/drm/ttm/ttm_bo.c
index edf10618fe2b..8c1eaa74fa21 100644
--- a/drivers
On 27.11.23 at 17:47, Bhardwaj, Rajneesh wrote:
[AMD Official Use Only - General]
-Original Message-
From: amd-gfx On Behalf Of Hamza Mahfooz
Sent: Monday, November 27, 2023 10:53 AM
To: Christian König ; jani.nik...@linux.intel.com;
kher...@redhat.com; d...@redhat.com; za
From: Somalapuram Amaranath
Instead of a list of separate busy placements, add flags which indicate
that a placement should only be used when there is room or if we need to
evict.
v2: add missing TTM_PL_FLAG_IDLE for i915
v3: fix auto build test ERROR on drm-tip/drm-tip
Signed-off-by: Christian
Try to fill up VRAM as well by setting the busy flag on GTT allocations.
This fixes the issue that when VRAM was evacuated for suspend it's never
filled up again unless the application is restarted.
Signed-off-by: Christian König
---
drivers/gpu/drm/amd/amdgpu/amdgpu_object.c | 6
Hi guys,
TTM has a feature which allows specifying placements for normal operation as
well as for when all domains are "busy" and don't have free space.
It is not very widely used since it was a bit inflexible and required making
multiple placement lists. Replace the multiple lists with flags and start t
it's
connected to a Thunderbolt controller or USB4 router.
Signed-off-by: Mario Limonciello
Acked-by: Christian König for this one.
---
v2->v3:
* Update commit message
---
drivers/gpu/drm/radeon/radeon_device.c | 4 ++--
drivers/gpu/drm/radeon/radeon_kms.c| 2 +-
2 files changed,
On 10.11.23 at 17:57, Danilo Krummrich wrote:
On 11/10/23 09:50, Christian König wrote:
[SNIP]
Another issue Christian brought up is that something intended to
be embeddable (a base class) shouldn't really have its own
refcount. I think that's a valid point. If you at some poin
On 10.11.23 at 10:39, Thomas Hellström wrote:
[SNIP]
I was thinking more of the general design of a base-class that needs
to be refcounted. Say a driver vm that inherits from gpu-vm,
gem_object and yet another base-class that supplies its own refcount.
What's the best-practice way to do re
On 09.11.23 at 19:34, Danilo Krummrich wrote:
On 11/9/23 17:03, Christian König wrote:
On 09.11.23 at 16:50, Thomas Hellström wrote:
[SNIP]
Did we get any resolution on this?
FWIW, my take on this is that it would be possible to get GPUVM to
work both with and without internal
On 09.11.23 at 16:50, Thomas Hellström wrote:
[SNIP]
Did we get any resolution on this?
FWIW, my take on this is that it would be possible to get GPUVM to
work both with and without internal refcounting; If with, the driver
needs a vm close to resolve cyclic references, if without that's n
On 06.11.23 at 15:11, Danilo Krummrich wrote:
On Mon, Nov 06, 2023 at 02:05:13PM +0100, Christian König wrote:
On 06.11.23 at 13:16, Danilo Krummrich wrote:
[SNIP]
This reference count just prevents the VM from being freed as long as other
resources are attached to it that carry a VM pointer
On 06.11.23 at 13:16, Danilo Krummrich wrote:
[SNIP]
This reference count just prevents the VM from being freed as long as other
resources are attached to it that carry a VM pointer, such as mappings and
VM_BOs. The motivation for that are VM_BOs. For mappings it's indeed a bit
paranoid, but it do
On 03.11.23 at 16:34, Danilo Krummrich wrote:
[SNIP]
Especially we most likely don't want the VM to live longer than the
application which originally used it. If you make the GPUVM an
independent object you actually open up driver abuse for the lifetime
of this.
Right, we don't want that.
On 03.11.23 at 14:14, Danilo Krummrich wrote:
On Fri, Nov 03, 2023 at 08:18:35AM +0100, Christian König wrote:
On 02.11.23 at 00:31, Danilo Krummrich wrote:
Implement reference counting for struct drm_gpuvm.
From the design point of view what is that good for?
It was discussed in this
On 02.11.23 at 00:31, Danilo Krummrich wrote:
Implement reference counting for struct drm_gpuvm.
From the design point of view what is that good for?
Background is that the most common use case I see is that this object is
embedded into something else and a reference count is then not really
On 30.10.23 at 14:38, Rob Clark wrote:
On Mon, Oct 30, 2023 at 1:05 AM Christian König
wrote:
On 27.10.23 at 18:58, Rob Clark wrote:
From: Rob Clark
In cases where the # is known ahead of time, it is silly to do the table
resize dance.
Ah, yes that was my initial implementation as well
+ exec->objects = kmalloc(sz, GFP_KERNEL);
Please use k*v*malloc() here since we can't predict how large that will be.
With that fixed the patch is Reviewed-by: Christian König
.
Regards,
Christian.
/* If allocation here fails, just delay that till the first use
On 23.10.23 at 22:16, Danilo Krummrich wrote:
Use drm_WARN() and drm_WARN_ON() variants to indicate to drivers the
context the failing VM resides in.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/drm/drm_gpuvm.c| 32 ++
drivers/gpu/drm/nouveau/nouveau_u
On 12.10.23 at 12:33, Dave Airlie wrote:
On Wed, 11 Oct 2023 at 17:07, Christian König wrote:
On 10.10.23 at 22:23, Dave Airlie wrote:
I think we're then optimizing for different scenarios. Our compute
driver will use mostly external objects only, and if shared, I don't
foresee the
On 10.10.23 at 22:23, Dave Airlie wrote:
I think we're then optimizing for different scenarios. Our compute
driver will use mostly external objects only, and if shared, I don't
foresee them bound to many VMs. What saves us currently here is that in
compute mode we only really traverse the extobj
On 02.10.23 at 20:22, Kees Cook wrote:
On Mon, Oct 02, 2023 at 08:11:41PM +0200, Christian König wrote:
On 02.10.23 at 20:08, Kees Cook wrote:
On Mon, Oct 02, 2023 at 08:01:57PM +0200, Christian König wrote:
On 02.10.23 at 18:53, Kees Cook wrote:
On Mon, Oct 02, 2023 at 11:06:19AM -0400
On 02.10.23 at 20:08, Kees Cook wrote:
On Mon, Oct 02, 2023 at 08:01:57PM +0200, Christian König wrote:
On 02.10.23 at 18:53, Kees Cook wrote:
On Mon, Oct 02, 2023 at 11:06:19AM -0400, Alex Deucher wrote:
On Mon, Oct 2, 2023 at 5:20 AM Christian König
wrote:
On 29.09.23 at 21:33,
On 02.10.23 at 18:53, Kees Cook wrote:
On Mon, Oct 02, 2023 at 11:06:19AM -0400, Alex Deucher wrote:
On Mon, Oct 2, 2023 at 5:20 AM Christian König
wrote:
On 29.09.23 at 21:33, Kees Cook wrote:
On Fri, 22 Sep 2023 10:32:05 -0700, Kees Cook wrote:
This is a batch of patches touching drm
On 29.09.23 at 21:33, Kees Cook wrote:
On Fri, 22 Sep 2023 10:32:05 -0700, Kees Cook wrote:
This is a batch of patches touching drm for preparing for the coming
implementation by GCC and Clang of the __counted_by attribute. Flexible
array members annotated with __counted_by can have their acces
On 27.09.23 at 14:11, Danilo Krummrich wrote:
On 9/27/23 13:54, Christian König wrote:
On 26.09.23 at 09:11, Boris Brezillon wrote:
On Mon, 25 Sep 2023 19:55:21 +0200
Christian König wrote:
On 25.09.23 at 14:55, Boris Brezillon wrote:
+The imagination team, who's probably intereste
On 26.09.23 at 09:11, Boris Brezillon wrote:
On Mon, 25 Sep 2023 19:55:21 +0200
Christian König wrote:
On 25.09.23 at 14:55, Boris Brezillon wrote:
+The imagination team, who's probably interested too.
On Mon, 25 Sep 2023 00:43:06 +0200
Danilo Krummrich wrote:
Currently, job
On 25.09.23 at 14:55, Boris Brezillon wrote:
+The imagination team, who's probably interested too.
On Mon, 25 Sep 2023 00:43:06 +0200
Danilo Krummrich wrote:
Currently, job flow control is implemented simply by limiting the number
of jobs in flight. Therefore, a scheduler is initialized w
Cc: Alex Deucher
Cc: "Christian König"
Cc: "Pan, Xinhui"
Cc: David Airlie
Cc: Daniel Vetter
Cc: Xiaojian Du
Cc: Huang Rui
Cc: Kevin Wang
Cc: amd-...@lists.freedesktop.org
Cc: dri-de...@lists.freedesktop.org
Signed-off-by: Kees Cook
Acked-by: Alex Deucher
Mhm, I
On 21.09.23 at 16:25, Boris Brezillon wrote:
On Thu, 21 Sep 2023 15:34:44 +0200
Danilo Krummrich wrote:
On 9/21/23 09:39, Christian König wrote:
On 20.09.23 at 16:42, Danilo Krummrich wrote:
Provide a common dma-resv for GEM objects not being used outside of this
GPU-VM. This is used
On 21.09.23 at 15:34, Danilo Krummrich wrote:
On 9/21/23 09:39, Christian König wrote:
On 20.09.23 at 16:42, Danilo Krummrich wrote:
Provide a common dma-resv for GEM objects not being used outside of
this
GPU-VM. This is used in a subsequent patch to generalize dma-resv,
external and
On 20.09.23 at 16:42, Danilo Krummrich wrote:
Provide a common dma-resv for GEM objects not being used outside of this
GPU-VM. This is used in a subsequent patch to generalize dma-resv,
external and evicted object handling and GEM validation.
Signed-off-by: Danilo Krummrich
---
drivers/gpu/d
On 20.09.23 at 16:42, Danilo Krummrich wrote:
Rename struct drm_gpuva_manager to struct drm_gpuvm including
corresponding functions. This way the GPUVA manager's structures align
very well with the documentation of VM_BIND [1] and VM_BIND locking [2].
It also provides a better foundation for th
On 20.09.23 at 16:02, Thomas Hellström wrote:
[SNIP]
Do you by "relocation" list refer to what gpuvm calls "evict" list
or something else? Like the relocaton/validation list that used to
be sent from user-space for non-VM_BIND vms?
The BOs send into the kernel with each command submission on
On 20.09.23 at 15:38, Thomas Hellström wrote:
On 9/20/23 15:06, Christian König wrote:
On 20.09.23 at 14:06, Thomas Hellström wrote:
On 9/20/23 12:51, Christian König wrote:
On 20.09.23 at 09:44, Thomas Hellström wrote:
Hi,
On 9/20/23 07:37, Christian König wrote:
On 19.09.23 at 17
On 20.09.23 at 14:06, Thomas Hellström wrote:
On 9/20/23 12:51, Christian König wrote:
On 20.09.23 at 09:44, Thomas Hellström wrote:
Hi,
On 9/20/23 07:37, Christian König wrote:
On 19.09.23 at 17:23, Thomas Hellström wrote:
On 9/19/23 17:16, Danilo Krummrich wrote:
On 9/19/23 14:21
On 20.09.23 at 09:44, Thomas Hellström wrote:
Hi,
On 9/20/23 07:37, Christian König wrote:
On 19.09.23 at 17:23, Thomas Hellström wrote:
On 9/19/23 17:16, Danilo Krummrich wrote:
On 9/19/23 14:21, Thomas Hellström wrote:
Hi Christian
On 9/19/23 14:07, Christian König wrote:
On 13.09.23
On 19.09.23 at 17:23, Thomas Hellström wrote:
On 9/19/23 17:16, Danilo Krummrich wrote:
On 9/19/23 14:21, Thomas Hellström wrote:
Hi Christian
On 9/19/23 14:07, Christian König wrote:
On 13.09.23 at 17:46, Danilo Krummrich wrote:
On 9/13/23 17:33, Christian König wrote:
On 13.09.23 at
On 13.09.23 at 17:46, Danilo Krummrich wrote:
On 9/13/23 17:33, Christian König wrote:
On 13.09.23 at 17:15, Danilo Krummrich wrote:
On 9/13/23 16:26, Christian König wrote:
On 13.09.23 at 14:16, Danilo Krummrich wrote:
As mentioned in a different mail thread, the reply is based on the
On 13.09.23 at 17:15, Danilo Krummrich wrote:
On 9/13/23 16:26, Christian König wrote:
On 13.09.23 at 14:16, Danilo Krummrich wrote:
As mentioned in a different mail thread, the reply is based on the
assumption
that we don't support anything other than GPUVM updates from the IOCTL.
I
On 13.09.23 at 17:13, Thomas Hellström wrote:
Hi Christian
On 9/13/23 16:26, Christian König wrote:
On 13.09.23 at 14:16, Danilo Krummrich wrote:
As mentioned in a different mail thread, the reply is based on the
assumption
that we don't support anything other than GPUVM updates fro
On 13.09.23 at 14:16, Danilo Krummrich wrote:
As mentioned in a different mail thread, the reply is based on the assumption
that we don't support anything other than GPUVM updates from the IOCTL.
I think that this assumption is incorrect.
Vulkan is just one specific use case, but this here sh
: Christian König for this one here.
I hope that I can get somebody to work on the remaining patches with the
end goal of using this in amdgpu as well.
Regards,
Christian.
---
drivers/gpu/drm/Kconfig | 7 +++
drivers/gpu/drm/Makefile| 2 +-
drivers/gpu/drm/drm_gpuvm.c
On 07.09.23 at 18:33, suijingfeng wrote:
Hi,
On 2023/9/7 17:08, Christian König wrote:
I strongly suggest that you just completely drop this here
Dropping this is OK, no problem. Then I will go develop something else.
This version was not originally intended to be merged, as it's an RFC.
On 07.09.23 at 17:26, suijingfeng wrote:
[SNIP]
Then, I'll give you another example; see below for an elaborate description.
I have one AMD BC160 GPU, see[1] to get what it looks like.
The GPU doesn't have a display connector interface exported.
It actually can be seen as a render-only GPU or comp
On 07.09.23 at 14:32, suijingfeng wrote:
Hi,
On 2023/9/7 17:08, Christian König wrote:
Well, I have over 25 years of experience with display hardware and
what you describe here was never an issue.
I want to give you an example to let you know more.
I have an ASRock AD2550B-ITX board[1
On 07.09.23 at 04:30, Sui Jingfeng wrote:
Hi,
On 2023/9/6 17:40, Christian König wrote:
On 06.09.23 at 11:08, suijingfeng wrote:
Well, you're welcome to correct me if I'm wrong.
You seem to have some very basic misunderstandings here.
The term framebuffer describes some VRAM memory use
On 06.09.23 at 12:31, Sui Jingfeng wrote:
Hi,
On 2023/9/6 14:45, Christian König wrote:
The firmware framebuffer device already gets killed by the
drm_aperture_remove_conflicting_pci_framebuffers()
function (or its siblings). So, this series is definitely not to
interact with the firmware
On 06.09.23 at 11:08, suijingfeng wrote:
Well, you're welcome to correct me if I'm wrong.
You seem to have some very basic misunderstandings here.
The term framebuffer describes some VRAM memory used for scanout.
This framebuffer is exposed to userspace through some framebuffer
driver, on UEFI pla
On 05.09.23 at 16:28, Sui Jingfeng wrote:
Hi,
On 2023/9/5 21:28, Christian König wrote:
2) Typically, those non-x86 machines don't have good UEFI firmware
support, which doesn't support selecting the primary GPU at the firmware
stage.
Even on x86, there are old UEFI firmwares whi
On 05.09.23 at 15:30, suijingfeng wrote:
Hi,
On 2023/9/5 18:45, Thomas Zimmermann wrote:
Hi
On 04.09.23 at 21:57, Sui Jingfeng wrote:
From: Sui Jingfeng
On a machine with multiple GPUs, a Linux user has no control over which
one is primary at boot time. This series tries to solve the above-m
On 05.09.23 at 12:38, Jani Nikula wrote:
On Tue, 05 Sep 2023, Sui Jingfeng wrote:
From: Sui Jingfeng
On a machine with multiple GPUs, a Linux user has no control over which
one is primary at boot time. This series tries to solve the above-mentioned
problem by introducing the ->be_primary() functi
On 04.09.23 at 21:57, Sui Jingfeng wrote:
From: Sui Jingfeng
On a machine with multiple GPUs, a Linux user has no control over which one
is primary at boot time.
The question is: why is that useful? Should we give users the ability to
control that?
I don't see a use case for this.
Regards,
C
On 20.08.23 at 23:53, Danilo Krummrich wrote:
So far the DRM GPUVA manager offers common infrastructure to track GPU VA
allocations and mappings, generically connect GPU VA mappings to their
backing buffers and perform more complex mapping operations on the GPU VA
space.
However, there are m
On 20.08.23 at 23:53, Danilo Krummrich wrote:
drm_exec must always be builtin for the DRM GPUVA manager to depend on
it.
You should probably go the other way around and not always build in the
GPUVA manager.
We have intentionally and with quite a bit of work moved the DRM_EXEC
and DRM_BUDD
On 10.08.23 at 00:17, Danilo Krummrich wrote:
With the current mental model every GPU scheduler instance represents
a single HW ring, while every entity represents a software queue feeding
into one or multiple GPU scheduler instances and hence into one or
multiple HW rings.
This does not really
On 09.08.23 at 05:44, Ruan Jinjie wrote:
The NULL initialization of the pointers that are later assigned by kzalloc()
is not necessary: if kzalloc() fails, the pointers will be assigned NULL
anyway, and otherwise it works as usual. So remove it.
Signed-off-by: Ruan Jinjie
Reviewed-by: Christian
On 07.08.23 at 20:54, Danilo Krummrich wrote:
Hi Christian,
On 8/7/23 20:07, Christian König wrote:
On 03.08.23 at 18:52, Danilo Krummrich wrote:
The new (VM_BIND) UAPI exports DMA fences through DRM syncobjs. Hence,
in order to emit fences within DMA fence signalling critical sections
On 03.08.23 at 18:52, Danilo Krummrich wrote:
The new (VM_BIND) UAPI exports DMA fences through DRM syncobjs. Hence,
in order to emit fences within DMA fence signalling critical sections
(e.g. as typically done in the DRM GPU schedulers run_job() callback) we
need to separate fence allocation an
On 12.07.23 at 11:46, Uwe Kleine-König wrote:
Hello,
while debugging an issue in the imx-lcdc driver I was constantly
irritated by struct drm_device pointer variables being named "dev",
because with that name I usually expect a struct device pointer.
I think there is a big benefit when thes
On 12.07.23 at 15:38, Uwe Kleine-König wrote:
Hello Maxime,
On Wed, Jul 12, 2023 at 02:52:38PM +0200, Maxime Ripard wrote:
On Wed, Jul 12, 2023 at 01:02:53PM +0200, Uwe Kleine-König wrote:
Background is that this makes merge conflicts easier to handle and detect.
Really?
FWIW, I agree with
On 23.06.23 at 15:55, Danilo Krummrich wrote:
[SNIP]
How do you efficiently find only the mappings of a BO in one VM?
Actually, I think this case should even be more efficient than with
a BO having a list of GPUVAs (or mappings):
*than with a BO having a list of VMs:
Having a list of GP
On 22.06.23 at 17:07, Danilo Krummrich wrote:
On 6/22/23 17:04, Danilo Krummrich wrote:
On 6/22/23 16:42, Christian König wrote:
On 22.06.23 at 16:22, Danilo Krummrich wrote:
On 6/22/23 15:54, Christian König wrote:
On 20.06.23 at 14:23, Danilo Krummrich wrote:
Hi Christian,
On 6/20/23