[RFC 6/7] drm/amdgpu: Map userqueue into HW

2022-12-23 Thread Shashank Sharma
This patch adds the functions to map/unmap the usermode queue into the HW, using the prepared MQD and other objects. After this mapping, the queue will be ready to accept workloads. Cc: Alex Deucher Cc: Christian Koenig Signed-off-by: Shashank Sharma --- drivers/gpu/drm/amd/amdgpu/amdgpu_use
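A minimal sketch of what such a mapping helper might look like, assuming hypothetical field, struct, and callback names on the MES side (this is not the patch itself, only an illustration of handing a prepared MQD to the HW scheduler):

static int userq_map_example(struct amdgpu_device *adev,
			     struct amdgpu_usermode_queue *queue)
{
	/* All names below are assumptions for illustration only */
	struct mes_add_queue_input input = {0};
	int r;

	input.process_context_addr = queue->proc_ctx_gpu_addr;	/* see RFC 5/7 */
	input.gang_context_addr    = queue->gang_ctx_gpu_addr;
	input.mqd_addr             = queue->mqd_gpu_addr;	/* see RFC 3/7 */
	input.doorbell_offset      = queue->doorbell_index;	/* see RFC 4/7 */

	/* Ask the MES firmware to start scheduling work from this queue */
	r = adev->mes.funcs->add_hw_queue(&adev->mes, &input);
	if (r)
		DRM_ERROR("Failed to map user queue into HW (%d)\n", r);
	return r;
}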

[RFC 7/7] drm/amdgpu: Secure semaphore for usermode queue

2022-12-23 Thread Shashank Sharma
From: Arunpravin Paneer Selvam This is a WIP patch, which adds a kernel implementation of secure semaphores for the usermode queues. The UAPI for the same is yet to be implemented. The idea is to create an RO page and map it to each process requesting a user mode queue, and give them a unique off
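As a rough illustration of the idea (all names here are assumptions, not the WIP patch): each process that opens a user queue could be handed a unique offset into the shared read-only page, e.g. via an IDA, so the HW can write semaphore values that the process can only read:

#include <linux/idr.h>

#define USERQ_SEM_SLOT_SIZE	8	/* one 64-bit semaphore value per process */

/* Hypothetical per-device bookkeeping for the shared RO semaphore page */
struct userq_sem_page_example {
	struct ida slot_ida;
	void *cpu_ptr;		/* kernel mapping of the RO page */
};

static int userq_sem_assign_slot(struct userq_sem_page_example *page,
				 u64 *offset_out)
{
	int slot = ida_alloc_max(&page->slot_ida,
				 PAGE_SIZE / USERQ_SEM_SLOT_SIZE - 1,
				 GFP_KERNEL);
	if (slot < 0)
		return slot;

	*offset_out = slot * USERQ_SEM_SLOT_SIZE;	/* unique per process */
	return 0;
}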

[RFC 3/7] drm/amdgpu: Create MQD for userspace queue

2022-12-23 Thread Shashank Sharma
From: Arvind Yadav MQD describes the properties of a user queue to the HW, and allows it to accurately configure the queue while mapping it into GPU HW. This patch adds: - A new header file which contains the MQD definition - A new function which creates an MQD object and fills it with userqueue d
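A sketch of what MQD creation could look like (the queue structure and the init hook are assumptions; amdgpu_bo_create_kernel() is an existing helper): allocate backing memory for the MQD and let IP-specific code fill it from the user queue properties:

static int userq_create_mqd_example(struct amdgpu_device *adev,
				    struct amdgpu_usermode_queue *queue)
{
	int r;

	/* One page is assumed to be enough for the MQD in this sketch */
	r = amdgpu_bo_create_kernel(adev, PAGE_SIZE, PAGE_SIZE,
				    AMDGPU_GEM_DOMAIN_GTT,
				    &queue->mqd_obj,
				    &queue->mqd_gpu_addr,
				    &queue->mqd_cpu_ptr);
	if (r)
		return r;

	/* Hypothetical IP-specific hook: writes ring address, doorbell,
	 * priority and the rest of the queue properties into the MQD. */
	return adev->userq_funcs->init_mqd(queue->mqd_cpu_ptr, queue);
}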

[RFC 5/7] drm/amdgpu: Create context for usermode queue

2022-12-23 Thread Shashank Sharma
The FW expects us to allocate at least one page as process context space, and one for gang context space. This patch adds some objects for the same. Cc: Alex Deucher Cc: Christian Koenig Signed-off-by: Shashank Sharma --- drivers/gpu/drm/amd/amdgpu/amdgpu_userqueue.c | 57 +++ .
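A minimal sketch of such an allocation, assuming hypothetical fields on the queue object (amdgpu_bo_create_kernel()/amdgpu_bo_free_kernel() are existing helpers): reserve one page of GTT for the process context and one for the gang context:

static int userq_create_ctx_example(struct amdgpu_device *adev,
				    struct amdgpu_usermode_queue *queue)
{
	int r;

	r = amdgpu_bo_create_kernel(adev, PAGE_SIZE, PAGE_SIZE,
				    AMDGPU_GEM_DOMAIN_GTT,
				    &queue->proc_ctx_bo,
				    &queue->proc_ctx_gpu_addr, NULL);
	if (r)
		return r;

	r = amdgpu_bo_create_kernel(adev, PAGE_SIZE, PAGE_SIZE,
				    AMDGPU_GEM_DOMAIN_GTT,
				    &queue->gang_ctx_bo,
				    &queue->gang_ctx_gpu_addr, NULL);
	if (r)	/* unwind the process context page on failure */
		amdgpu_bo_free_kernel(&queue->proc_ctx_bo,
				      &queue->proc_ctx_gpu_addr, NULL);
	return r;
}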

[RFC 4/7] drm/amdgpu: Allocate doorbell slot for user queue

2022-12-23 Thread Shashank Sharma
This patch allocates a doorbell slot in the BAR for the usermode queue. We are using the unique queue-id to get this slot from MES. Cc: Alex Deucher Cc: Christian Koenig Signed-off-by: Shashank Sharma --- drivers/gpu/drm/amd/amdgpu/amdgpu_userqueue.c | 28 +++ 1 file changed,
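Conceptually (the names and the size below are assumptions, not the MES interface): a free slot is picked from a per-device doorbell bitmap and recorded against the queue so the offset into the doorbell BAR can be derived from it:

#define MAX_USERQ_DOORBELLS_EXAMPLE	512	/* arbitrary bound for the sketch */

static int userq_get_doorbell_example(unsigned long *doorbell_bitmap,
				      struct amdgpu_usermode_queue *queue)
{
	unsigned int slot;

	slot = find_first_zero_bit(doorbell_bitmap, MAX_USERQ_DOORBELLS_EXAMPLE);
	if (slot >= MAX_USERQ_DOORBELLS_EXAMPLE)
		return -EBUSY;

	__set_bit(slot, doorbell_bitmap);
	queue->doorbell_index = slot;	/* doorbell BAR offset derives from this */
	return 0;
}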

[RFC 2/7] drm/amdgpu: Add usermode queue for gfx work

2022-12-23 Thread Shashank Sharma
This patch adds skeleton code for usermode queue creation. It typically contains: - A new structure to keep all the user queue data in one place. - An IOCTL function to create/free a usermode queue. - A function to generate unique index for the queue. - A global ptr in amdgpu_dev Cc: Alex Deucher
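For the unique-index part, one natural shape (names assumed, not the patch itself) is an IDR protected by a mutex, so later IOCTLs can look the queue up by its id:

#include <linux/idr.h>
#include <linux/mutex.h>

static int userq_alloc_index_example(struct idr *userq_idr,
				     struct mutex *userq_mutex,
				     struct amdgpu_usermode_queue *queue)
{
	int id;

	mutex_lock(userq_mutex);
	/* ids start at 1 so 0 can be reserved as 'invalid' */
	id = idr_alloc(userq_idr, queue, 1, 0, GFP_KERNEL);
	mutex_unlock(userq_mutex);

	if (id < 0)
		return id;

	queue->queue_id = id;
	return 0;
}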

[RFC 1/7] drm/amdgpu: UAPI for user queue management

2022-12-23 Thread Shashank Sharma
From: Alex Deucher This patch introduces a new UAPI/IOCTL for usermode graphics queues. The userspace app will fill this structure and request the graphics driver to add a graphics work queue for it. The output of this UAPI is a queue id. This UAPI maps the queue into the GPU, so the graphics app can s
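The snippet does not show the structure itself; a hypothetical shape of such an IOCTL payload might look like this (field names are guesses for illustration, not the real UAPI): userspace describes the ring it owns and gets back a queue id:

struct drm_amdgpu_userq_example {
	__u32 op;		/* create or free the queue */
	__u32 flags;
	__u64 queue_va;		/* GPU VA of the user-owned ring buffer */
	__u64 queue_size;	/* ring size in bytes */
	__u64 rptr_va;		/* GPU VA where the HW publishes the read pointer */
	__u64 wptr_va;		/* GPU VA where userspace bumps the write pointer */
	__u32 queue_id;		/* out: handle used for doorbell ring and free */
	__u32 pad;
};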

[RFC 0/7] RFC: Usermode queue for AMDGPU driver

2022-12-23 Thread Shashank Sharma
This is an RFC series to implement usermode graphics queues for the AMDGPU driver (Navi 3X and above). The idea of a usermode graphics queue is to allow direct workload submission from a userspace graphics process which has an amdgpu graphics context. Once we have some initial feedback on the design, we will

Re: [PATCH 16/16] drm/amd/display: Don't restrict bpc to 8 bpc

2022-12-23 Thread Harry Wentland
On 12/14/22 04:01, Pekka Paalanen wrote: > On Tue, 13 Dec 2022 18:20:59 +0100 > Michel Dänzer wrote: > >> On 12/12/22 19:21, Harry Wentland wrote: >>> This will let us pass kms_hdr.bpc_switch. >>> >>> I don't see any good reasons why we still need to >>> limit bpc to 8 bpc and doing so is prob

Re: [PATCH 2/2] drm/amd: Re-create firmware framebuffer on failure to probe

2022-12-23 Thread Ernst Sjöstrand
What about a system with multiple GPUs? Hybrid graphics? Headless systems? Regards //Ernst On Thu, 22 Dec 2022 at 19:30, Mario Limonciello < mario.limoncie...@amd.com> wrote: > If the probe sequence fails then the user is stuck with a frozen > screen and can only really recover via SSH or by re

Re: [PATCH 0/2] Recover from failure to probe GPU

2022-12-23 Thread Mario Limonciello
On 12/22/22 13:41, Javier Martinez Canillas wrote: [adding Thomas Zimmermann to CC list] Hello Mario, Interesting case. On 12/22/22 19:30, Mario Limonciello wrote: One of the first things that KMS drivers do during initialization is destroy the system firmware framebuffer by means of `drm_aper

Re: amdgpu refcount saturation

2022-12-23 Thread Borislav Petkov
On Thu, Dec 22, 2022 at 10:20:37PM +0100, Michal Kubecek wrote: > Unfortunately, just like Boris, I always seem to have multiple stack > traces tangled together. See if this fixes it: https://lore.kernel.org/r/20221219104718.21677-1-christian.koe...@amd.com Thx. -- Regards/Gruss, Boris. h

Re: amdgpu refcount saturation

2022-12-23 Thread Michal Kubecek
On Mon, Dec 19, 2022 at 09:23:05AM +0100, Christian König wrote: > On 17.12.22 at 12:53, Borislav Petkov wrote: > > Hi folks, > > > > this is with Linus' tree from Wed: > > > > 041fae9c105a ("Merge tag 'f2fs-for-6.2-rc1' of > > git://git.kernel.org/pub/scm/linux/kernel/git/jaegeuk/f2fs") > > >

Re: [PATCH] drm/amdgpu: grab extra fence reference for drm_sched_job_add_dependency

2022-12-23 Thread Michal Kubecek
On Mon, Dec 19, 2022 at 11:47:18AM +0100, Christian König wrote: > That function consumes the reference. > > Signed-off-by: Christian König > Fixes: aab9cf7b6954 ("drm/amdgpu: use scheduler dependencies for VM updates") Tested-by: Michal Kubecek I can still see weird artefacts in some windows
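A minimal illustration of the fix being discussed, under the premise stated above that drm_sched_job_add_dependency() consumes the fence reference passed to it: take an extra reference when the caller still needs the fence afterwards:

static int add_dependency_keep_fence(struct drm_sched_job *job,
				     struct dma_fence *fence)
{
	/* drm_sched_job_add_dependency() takes ownership of the reference
	 * passed in, so hand it its own reference and keep ours. */
	return drm_sched_job_add_dependency(job, dma_fence_get(fence));
}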

[PATCH v2] drm/amdgpu: Retry DDC probing on DVI on failure if we got an HPD interrupt

2022-12-23 Thread xurui
HPD signals on DVI ports can be fired off before the pins required for DDC probing actually make contact, due to the pins for HPD making contact first. This results in an HPD signal being asserted but DDC probing failing, resulting in hotplugging occasionally failing. Rescheduling the hotplug work
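A conceptual sketch of that retry (the connector structure and probe helpers are hypothetical, only the workqueue API is real): if HPD is asserted but DDC probing still fails, re-arm the delayed hotplug work instead of giving up:

#include <linux/workqueue.h>

#define DDC_RETRIES_EXAMPLE	5	/* arbitrary bound for the sketch */

struct dvi_conn_example {
	struct delayed_work hpd_work;
	bool hpd_asserted;
	int ddc_retries;
};

static bool dvi_ddc_probe_example(struct dvi_conn_example *conn);	/* hypothetical */

static void dvi_hotplug_work_example(struct work_struct *work)
{
	struct dvi_conn_example *conn =
		container_of(to_delayed_work(work), struct dvi_conn_example, hpd_work);

	/* HPD pins made contact first; the DDC pins may not be seated yet */
	if (conn->hpd_asserted && !dvi_ddc_probe_example(conn) &&
	    conn->ddc_retries++ < DDC_RETRIES_EXAMPLE) {
		schedule_delayed_work(&conn->hpd_work, msecs_to_jiffies(100));
		return;
	}

	conn->ddc_retries = 0;
	/* ...report the final connector state to userspace here... */
}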

Re: [PATCH v6] drm: Optimise for continuous memory allocation

2022-12-23 Thread Dan Carpenter
Hi xinhui, https://git-scm.com/docs/git-format-patch#_base_tree_information] url: https://github.com/intel-lab-lkp/linux/commits/xinhui-pan/drm-Optimise-for-continuous-memory-allocation/20221218-145922 base: git://anongit.freedesktop.org/drm/drm-misc drm-misc-next patch link: https://lo