Re: [PATCH 5/5] drm/amdgpu: replace iova debugfs file with iomem
I haven't tried the patch but just like to point out this breaks umr :-) I'll have to craft something on Monday to support this and iova in parallel until the iova kernels are realistically EOL'ed.

On the other hand I support this idea since it eliminates the need for an fmem hack. So much appreciated.

Cheers,
Tom

From: amd-gfx on behalf of Christian König
Sent: Friday, February 2, 2018 14:09
To: amd-gfx@lists.freedesktop.org; dri-de...@lists.freedesktop.org
Subject: [PATCH 5/5] drm/amdgpu: replace iova debugfs file with iomem

This allows access to pages allocated through the driver with optional IOMMU mapping.

Signed-off-by: Christian König
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 57 +++++++++++++++++++-----------
 1 file changed, 35 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 648c449aaa79..795ceaeb82d5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -1929,38 +1929,51 @@ static const struct file_operations amdgpu_ttm_gtt_fops = {
 #endif
 
-static ssize_t amdgpu_iova_to_phys_read(struct file *f, char __user *buf,
-					size_t size, loff_t *pos)
+static ssize_t amdgpu_iomem_read(struct file *f, char __user *buf,
+				 size_t size, loff_t *pos)
 {
 	struct amdgpu_device *adev = file_inode(f)->i_private;
-	int r;
-	uint64_t phys;
 	struct iommu_domain *dom;
+	ssize_t result = 0;
+	int r;
 
-	// always return 8 bytes
-	if (size != 8)
-		return -EINVAL;
+	dom = iommu_get_domain_for_dev(adev->dev);
 
-	// only accept page addresses
-	if (*pos & 0xFFF)
-		return -EINVAL;
+	while (size) {
+		phys_addr_t addr = *pos & PAGE_MASK;
+		loff_t off = *pos & ~PAGE_MASK;
+		size_t bytes = PAGE_SIZE - off;
+		unsigned long pfn;
+		struct page *p;
+		void *ptr;
 
-	dom = iommu_get_domain_for_dev(adev->dev);
-	if (dom)
-		phys = iommu_iova_to_phys(dom, *pos);
-	else
-		phys = *pos;
+		addr = dom ? iommu_iova_to_phys(dom, addr) : addr;
 
-	r = copy_to_user(buf, &phys, 8);
-	if (r)
-		return -EFAULT;
+		pfn = addr >> PAGE_SHIFT;
+		if (!pfn_valid(pfn))
+			return -EPERM;
+
+		p = pfn_to_page(pfn);
+		if (p->mapping != adev->mman.bdev.dev_mapping)
+			return -EPERM;
+
+		ptr = kmap(p);
+		r = copy_to_user(buf, ptr, bytes);
+		kunmap(p);
+		if (r)
+			return -EFAULT;
 
-	return 8;
+		size -= bytes;
+		*pos += bytes;
+		result += bytes;
+	}
+
+	return result;
 }
 
-static const struct file_operations amdgpu_ttm_iova_fops = {
+static const struct file_operations amdgpu_ttm_iomem_fops = {
 	.owner = THIS_MODULE,
-	.read = amdgpu_iova_to_phys_read,
+	.read = amdgpu_iomem_read,
 	.llseek = default_llseek
 };
 
@@ -1973,7 +1986,7 @@ static const struct {
 #ifdef CONFIG_DRM_AMDGPU_GART_DEBUGFS
 	{ "amdgpu_gtt", &amdgpu_ttm_gtt_fops, TTM_PL_TT },
 #endif
-	{ "amdgpu_iova", &amdgpu_ttm_iova_fops, TTM_PL_SYSTEM },
+	{ "amdgpu_iomem", &amdgpu_ttm_iomem_fops, TTM_PL_SYSTEM },
 };
 
 #endif
--
2.14.1
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
Re: [PATCH 3/9] drm/amdkfd: Make IOMMUv2 code conditional
The attached patch is my attempt to keep most of the IOMMU code in one place (new kfd_iommu.c) to avoid #ifdefs all over the place. This way I can still conditionally compile a bunch of KFD code that is only needed for IOMMU handling, with stub functions for kernel configs without IOMMU support. About 300 lines of conditionally compiled code got moved to kfd_iommu.c.

The only piece I didn't move into kfd_iommu.c is kfd_signal_iommu_event. I prefer to keep that in kfd_events.c because it doesn't call any IOMMU driver functions, and because it's closely related to the rest of the event handling logic. It could be compiled unconditionally, but it would be dead code without IOMMU support.

And I moved pdd->bound to a place where it doesn't consume extra space (on 64-bit systems due to structure alignment) instead of making it conditional.

This is only compile-tested for now. If you like this approach, I'll do more testing and squash it with "Make IOMMUv2 code conditional".

Regards,
  Felix

On 2018-01-31 10:00 AM, Oded Gabbay wrote:
> On Wed, Jan 31, 2018 at 4:56 PM, Oded Gabbay wrote:
>> Hi Felix,
>> Please don't spread 19 #ifdefs throughout the code.
>> I suggest to put one #ifdef in linux/amd-iommu.h itself around all the
>> functions declarations and in the #else section put macros with empty
>> implementations. This is much more readable and maintainable.
>>
>> Oded
> To emphasize my point, there is a call to amd_iommu_bind_pasid in
> kfd_bind_processes_to_device() which isn't wrapped with the #ifdef so
> the compilation breaks. Putting the #ifdefs around the calls is simply
> not scalable.
>
> Oded
>
>>
>> On Fri, Jan 5, 2018 at 12:17 AM, Felix Kuehling wrote:
>>> dGPUs work without IOMMUv2. Make IOMMUv2 initialization dependent on
>>> ASIC information. Also allow building KFD without IOMMUv2 support.
>>> This is still useful for dGPUs and prepares for enabling KFD on
>>> architectures that don't support AMD IOMMUv2.
>>>
>>> Signed-off-by: Felix Kuehling
>>> ---
>>>  drivers/gpu/drm/amd/amdkfd/Kconfig        |  2 +-
>>>  drivers/gpu/drm/amd/amdkfd/kfd_crat.c     |  8 +++-
>>>  drivers/gpu/drm/amd/amdkfd/kfd_device.c   | 62 +--
>>>  drivers/gpu/drm/amd/amdkfd/kfd_events.c   |  2 +
>>>  drivers/gpu/drm/amd/amdkfd/kfd_priv.h     |  5 +++
>>>  drivers/gpu/drm/amd/amdkfd/kfd_process.c  | 17 ++---
>>>  drivers/gpu/drm/amd/amdkfd/kfd_topology.c |  2 +
>>>  drivers/gpu/drm/amd/amdkfd/kfd_topology.h |  2 +
>>>  8 files changed, 74 insertions(+), 26 deletions(-)
>>>
>>> diff --git a/drivers/gpu/drm/amd/amdkfd/Kconfig b/drivers/gpu/drm/amd/amdkfd/Kconfig
>>> index bc5a294..5bbeb95 100644
>>> --- a/drivers/gpu/drm/amd/amdkfd/Kconfig
>>> +++ b/drivers/gpu/drm/amd/amdkfd/Kconfig
>>> @@ -4,6 +4,6 @@
>>>
>>>  config HSA_AMD
>>>         tristate "HSA kernel driver for AMD GPU devices"
>>> -       depends on DRM_AMDGPU && AMD_IOMMU_V2 && X86_64
>>> +       depends on DRM_AMDGPU && X86_64
>>>         help
>>>           Enable this if you want to use HSA features on AMD GPU devices.
>>> diff --git a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
>>> index 2bc2816..3478270 100644
>>> --- a/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
>>> +++ b/drivers/gpu/drm/amd/amdkfd/kfd_crat.c
>>> @@ -22,7 +22,9 @@
>>>
>>>  #include
>>>  #include
>>> +#if defined(CONFIG_AMD_IOMMU_V2_MODULE) || defined(CONFIG_AMD_IOMMU_V2)
>>>  #include
>>> +#endif
>>>  #include "kfd_crat.h"
>>>  #include "kfd_priv.h"
>>>  #include "kfd_topology.h"
>>> @@ -1037,15 +1039,17 @@ static int kfd_create_vcrat_image_gpu(void *pcrat_image,
>>>         struct crat_subtype_generic *sub_type_hdr;
>>>         struct crat_subtype_computeunit *cu;
>>>         struct kfd_cu_info cu_info;
>>> -       struct amd_iommu_device_info iommu_info;
>>>         int avail_size = *size;
>>>         uint32_t total_num_of_cu;
>>>         int num_of_cache_entries = 0;
>>>         int cache_mem_filled = 0;
>>>         int ret = 0;
>>> +#if defined(CONFIG_AMD_IOMMU_V2_MODULE) || defined(CONFIG_AMD_IOMMU_V2)
>>> +       struct amd_iommu_device_info iommu_info;
>>>         const u32 required_iommu_flags = AMD_IOMMU_DEVICE_FLAG_ATS_SUP |
>>>                                          AMD_IOMMU_DEVICE_FLAG_PRI_SUP |
>>>                                          AMD_IOMMU_DEVICE_FLAG_PASID_SUP;
>>> +#endif
>>>         struct kfd_local_mem_info local_mem_info;
>>>
>>>         if (!pcrat_image || avail_size < VCRAT_SIZE_FOR_GPU)
>>> @@ -1106,12 +1110,14 @@ static int kfd_create_vcrat_image_gpu(void *pcrat_image,
>>>         /* Check if this node supports IOMMU. During parsing this flag will
>>>          * translate to HSA_CAP_ATS_PRESENT
>>>          */
>>> +#if defined(CONFIG_AMD_IOMMU_V2_MODULE) || defined(CONFIG_AMD_IOMMU_V2)
>>>         iommu_info.flags = 0;
>>>         if (amd_iommu_device_info(kdev->pdev, &iommu_info) == 0) {
>>>
[PATCH 2/2] drm/amdgpu: clear the shadow fence as well
It also needs to be initialized.

Signed-off-by: Christian König
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c | 5 +
 1 file changed, 5 insertions(+)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
index b43098f02a40..18ce47608bf1 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_vm.c
@@ -336,6 +336,11 @@ static int amdgpu_vm_clear_bo(struct amdgpu_device *adev,
 	amdgpu_bo_fence(bo, fence, true);
 	dma_fence_put(fence);
+
+	if (bo->shadow)
+		return amdgpu_vm_clear_bo(adev, vm, bo->shadow,
+					  level, pte_support_ats);
+
 	return 0;
 
 error_free:
--
2.14.1
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
[PATCH 4/5] drm/ttm: set page mapping during allocation
To aid debugging, set the page mapping during allocation instead of during VM faults.

Signed-off-by: Christian König
---
 drivers/gpu/drm/ttm/ttm_bo_vm.c |  1 -
 drivers/gpu/drm/ttm/ttm_tt.c    | 18 +-
 2 files changed, 17 insertions(+), 2 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index 610d6714042a..121f017ac7ca 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -257,7 +257,6 @@ static int ttm_bo_vm_fault(struct vm_fault *vmf)
 		} else if (unlikely(!page)) {
 			break;
 		}
-		page->mapping = vma->vm_file->f_mapping;
 		page->index = drm_vma_node_start(&bo->vma_node) +
 			page_offset;
 		pfn = page_to_pfn(page);
diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
index 39c44e301c72..9fd7115a013a 100644
--- a/drivers/gpu/drm/ttm/ttm_tt.c
+++ b/drivers/gpu/drm/ttm/ttm_tt.c
@@ -392,12 +392,28 @@ int ttm_tt_swapout(struct ttm_tt *ttm, struct file *persistent_swap_storage)
 	return ret;
 }
 
+static void ttm_tt_add_mapping(struct ttm_tt *ttm)
+{
+	pgoff_t i;
+
+	if (ttm->page_flags & TTM_PAGE_FLAG_SG)
+		return;
+
+	for (i = 0; i < ttm->num_pages; ++i)
+		ttm->pages[i]->mapping = ttm->bdev->dev_mapping;
+}
+
 int ttm_tt_populate(struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
 {
+	int ret;
+
 	if (ttm->state != tt_unpopulated)
 		return 0;
 
-	return ttm->bdev->driver->ttm_tt_populate(ttm, ctx);
+	ret = ttm->bdev->driver->ttm_tt_populate(ttm, ctx);
+	if (!ret)
+		ttm_tt_add_mapping(ttm);
+
+	return ret;
 }
 
 static void ttm_tt_clear_mapping(struct ttm_tt *ttm)
--
2.14.1
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
[PATCH 1/5] drm/ttm: add ttm_tt_populate wrapper
Stop calling the driver callback directly.

Signed-off-by: Christian König
---
 drivers/gpu/drm/ttm/ttm_bo_util.c | 12 +++++-------
 drivers/gpu/drm/ttm/ttm_bo_vm.c   |  2 +-
 drivers/gpu/drm/ttm/ttm_tt.c      | 10 +++++++++-
 include/drm/ttm/ttm_bo_driver.h   |  9 +++++++++
 4 files changed, 24 insertions(+), 9 deletions(-)

diff --git a/drivers/gpu/drm/ttm/ttm_bo_util.c b/drivers/gpu/drm/ttm/ttm_bo_util.c
index 33ffe286f3a5..38da6903cae9 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_util.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_util.c
@@ -375,8 +375,8 @@ int ttm_bo_move_memcpy(struct ttm_buffer_object *bo,
 	/*
 	 * TTM might be null for moves within the same region.
 	 */
-	if (ttm && ttm->state == tt_unpopulated) {
-		ret = ttm->bdev->driver->ttm_tt_populate(ttm, ctx);
+	if (ttm) {
+		ret = ttm_tt_populate(ttm, ctx);
 		if (ret)
 			goto out1;
 	}
@@ -557,11 +557,9 @@ static int ttm_bo_kmap_ttm(struct ttm_buffer_object *bo,
 	BUG_ON(!ttm);
 
-	if (ttm->state == tt_unpopulated) {
-		ret = ttm->bdev->driver->ttm_tt_populate(ttm, &ctx);
-		if (ret)
-			return ret;
-	}
+	ret = ttm_tt_populate(ttm, &ctx);
+	if (ret)
+		return ret;
 
 	if (num_pages == 1 && (mem->placement & TTM_PL_FLAG_CACHED)) {
 		/*
diff --git a/drivers/gpu/drm/ttm/ttm_bo_vm.c b/drivers/gpu/drm/ttm/ttm_bo_vm.c
index 716e724ac710..610d6714042a 100644
--- a/drivers/gpu/drm/ttm/ttm_bo_vm.c
+++ b/drivers/gpu/drm/ttm/ttm_bo_vm.c
@@ -234,7 +234,7 @@ static int ttm_bo_vm_fault(struct vm_fault *vmf)
 					cvma.vm_page_prot);
 
 	/* Allocate all page at once, most common usage */
-	if (ttm->bdev->driver->ttm_tt_populate(ttm, &ctx)) {
+	if (ttm_tt_populate(ttm, &ctx)) {
 		ret = VM_FAULT_OOM;
 		goto out_io_unlock;
 	}
diff --git a/drivers/gpu/drm/ttm/ttm_tt.c b/drivers/gpu/drm/ttm/ttm_tt.c
index 95a77dab8cc9..39c44e301c72 100644
--- a/drivers/gpu/drm/ttm/ttm_tt.c
+++ b/drivers/gpu/drm/ttm/ttm_tt.c
@@ -276,7 +276,7 @@ int ttm_tt_bind(struct ttm_tt *ttm, struct ttm_mem_reg *bo_mem,
 	if (ttm->state == tt_bound)
 		return 0;
 
-	ret = ttm->bdev->driver->ttm_tt_populate(ttm, ctx);
+	ret = ttm_tt_populate(ttm, ctx);
 	if (ret)
 		return ret;
@@ -392,6 +392,14 @@ int ttm_tt_swapout(struct ttm_tt *ttm, struct file *persistent_swap_storage)
 	return ret;
 }
 
+int ttm_tt_populate(struct ttm_tt *ttm, struct ttm_operation_ctx *ctx)
+{
+	if (ttm->state != tt_unpopulated)
+		return 0;
+
+	return ttm->bdev->driver->ttm_tt_populate(ttm, ctx);
+}
+
 static void ttm_tt_clear_mapping(struct ttm_tt *ttm)
 {
 	pgoff_t i;
diff --git a/include/drm/ttm/ttm_bo_driver.h b/include/drm/ttm/ttm_bo_driver.h
index 9b417eb2df20..2bac25a6cf90 100644
--- a/include/drm/ttm/ttm_bo_driver.h
+++ b/include/drm/ttm/ttm_bo_driver.h
@@ -700,6 +700,15 @@ int ttm_tt_swapin(struct ttm_tt *ttm);
 int ttm_tt_set_placement_caching(struct ttm_tt *ttm, uint32_t placement);
 int ttm_tt_swapout(struct ttm_tt *ttm, struct file *persistent_swap_storage);
 
+/**
+ * ttm_tt_populate - allocate pages for a ttm
+ *
+ * @ttm: Pointer to the ttm_tt structure
+ *
+ * Calls the driver method to allocate pages for a ttm
+ */
+int ttm_tt_populate(struct ttm_tt *ttm, struct ttm_operation_ctx *ctx);
+
 /**
  * ttm_tt_unpopulate - free pages from a ttm
  *
--
2.14.1
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
[PATCH 2/5] drm/amdgpu: remove extra TT unpopulated check
The subsystem should check that, not the driver.

Signed-off-by: Christian König
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 95f990140f2a..648c449aaa79 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -997,9 +997,6 @@ static int amdgpu_ttm_tt_populate(struct ttm_tt *ttm,
 	struct amdgpu_ttm_tt *gtt = (void *)ttm;
 	bool slave = !!(ttm->page_flags & TTM_PAGE_FLAG_SG);
 
-	if (ttm->state != tt_unpopulated)
-		return 0;
-
 	if (gtt && gtt->userptr) {
 		ttm->sg = kzalloc(sizeof(struct sg_table), GFP_KERNEL);
 		if (!ttm->sg)
--
2.14.1
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
[PATCH 3/5] drm/radeon: remove extra TT unpopulated check
The subsystem should check that, not the driver.

Signed-off-by: Christian König
---
 drivers/gpu/drm/radeon/radeon_ttm.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/gpu/drm/radeon/radeon_ttm.c b/drivers/gpu/drm/radeon/radeon_ttm.c
index a0a839bc39bf..42e3ee81a96e 100644
--- a/drivers/gpu/drm/radeon/radeon_ttm.c
+++ b/drivers/gpu/drm/radeon/radeon_ttm.c
@@ -728,9 +728,6 @@ static int radeon_ttm_tt_populate(struct ttm_tt *ttm,
 	struct radeon_device *rdev;
 	bool slave = !!(ttm->page_flags & TTM_PAGE_FLAG_SG);
 
-	if (ttm->state != tt_unpopulated)
-		return 0;
-
 	if (gtt && gtt->userptr) {
 		ttm->sg = kzalloc(sizeof(struct sg_table), GFP_KERNEL);
 		if (!ttm->sg)
--
2.14.1
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
[PATCH 5/5] drm/amdgpu: replace iova debugfs file with iomem
This allows access to pages allocated through the driver with optional IOMMU mapping.

Signed-off-by: Christian König
---
 drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c | 57 +++++++++++++++++++-----------
 1 file changed, 35 insertions(+), 22 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
index 648c449aaa79..795ceaeb82d5 100644
--- a/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
+++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_ttm.c
@@ -1929,38 +1929,51 @@ static const struct file_operations amdgpu_ttm_gtt_fops = {
 #endif
 
-static ssize_t amdgpu_iova_to_phys_read(struct file *f, char __user *buf,
-					size_t size, loff_t *pos)
+static ssize_t amdgpu_iomem_read(struct file *f, char __user *buf,
+				 size_t size, loff_t *pos)
 {
 	struct amdgpu_device *adev = file_inode(f)->i_private;
-	int r;
-	uint64_t phys;
 	struct iommu_domain *dom;
+	ssize_t result = 0;
+	int r;
 
-	// always return 8 bytes
-	if (size != 8)
-		return -EINVAL;
+	dom = iommu_get_domain_for_dev(adev->dev);
 
-	// only accept page addresses
-	if (*pos & 0xFFF)
-		return -EINVAL;
+	while (size) {
+		phys_addr_t addr = *pos & PAGE_MASK;
+		loff_t off = *pos & ~PAGE_MASK;
+		size_t bytes = PAGE_SIZE - off;
+		unsigned long pfn;
+		struct page *p;
+		void *ptr;
 
-	dom = iommu_get_domain_for_dev(adev->dev);
-	if (dom)
-		phys = iommu_iova_to_phys(dom, *pos);
-	else
-		phys = *pos;
+		addr = dom ? iommu_iova_to_phys(dom, addr) : addr;
 
-	r = copy_to_user(buf, &phys, 8);
-	if (r)
-		return -EFAULT;
+		pfn = addr >> PAGE_SHIFT;
+		if (!pfn_valid(pfn))
+			return -EPERM;
+
+		p = pfn_to_page(pfn);
+		if (p->mapping != adev->mman.bdev.dev_mapping)
+			return -EPERM;
+
+		ptr = kmap(p);
+		r = copy_to_user(buf, ptr, bytes);
+		kunmap(p);
+		if (r)
+			return -EFAULT;
 
-	return 8;
+		size -= bytes;
+		*pos += bytes;
+		result += bytes;
+	}
+
+	return result;
 }
 
-static const struct file_operations amdgpu_ttm_iova_fops = {
+static const struct file_operations amdgpu_ttm_iomem_fops = {
 	.owner = THIS_MODULE,
-	.read = amdgpu_iova_to_phys_read,
+	.read = amdgpu_iomem_read,
 	.llseek = default_llseek
 };
 
@@ -1973,7 +1986,7 @@ static const struct {
 #ifdef CONFIG_DRM_AMDGPU_GART_DEBUGFS
 	{ "amdgpu_gtt", &amdgpu_ttm_gtt_fops, TTM_PL_TT },
 #endif
-	{ "amdgpu_iova", &amdgpu_ttm_iova_fops, TTM_PL_SYSTEM },
+	{ "amdgpu_iomem", &amdgpu_ttm_iomem_fops, TTM_PL_SYSTEM },
 };
 
 #endif
--
2.14.1
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
Re: [PATCH libdrm] admgpu: add amdgpu_query_sw_info for querying high bits of 32-bit address space
Reviewed-by: Christian König

On 02.02.2018 at 18:34, Marek Olšák wrote:

From: Marek Olšák
---
 amdgpu/amdgpu.h          | 21 +++++++++++++++++++++
 amdgpu/amdgpu_device.c   | 14 ++++++++++++++
 amdgpu/amdgpu_internal.h |  1 +
 3 files changed, 36 insertions(+)

diff --git a/amdgpu/amdgpu.h b/amdgpu/amdgpu.h
index 2eb03bf..928b2a6 100644
--- a/amdgpu/amdgpu.h
+++ b/amdgpu/amdgpu.h
@@ -87,20 +87,24 @@ enum amdgpu_bo_handle_type {
 	amdgpu_bo_handle_type_dma_buf_fd = 2
 };
 
 /** Define known types of GPU VM VA ranges */
 enum amdgpu_gpu_va_range {
 	/** Allocate from "normal"/general range */
 	amdgpu_gpu_va_range_general = 0
 };
 
+enum amdgpu_sw_info {
+	amdgpu_sw_info_address32_hi = 0,
+};
+
 /*--*/
 /* -- Datatypes --- */
 /*--*/
 
 /**
  * Define opaque pointer to context associated with fd.
  * This context will be returned as the result of
  * "initialize" function and should be pass as the first
  * parameter to any API call
  */
@@ -1079,20 +1083,37 @@ int amdgpu_query_gpu_info(amdgpu_device_handle dev,
  * \param value   - \c [out] Pointer to the return value.
  *
  * \return   0 on success\n
  *          <0 - Negative POSIX error code
  *
 */
 int amdgpu_query_info(amdgpu_device_handle dev, unsigned info_id,
 		      unsigned size, void *value);
 
 /**
+ * Query hardware or driver information.
+ *
+ * The return size is query-specific and depends on the "info_id" parameter.
+ * No more than "size" bytes is returned.
+ *
+ * \param   dev     - \c [in] Device handle. See #amdgpu_device_initialize()
+ * \param   info    - \c [in] amdgpu_sw_info_*
+ * \param   value   - \c [out] Pointer to the return value.
+ *
+ * \return   0 on success\n
+ *          <0 - Negative POSIX error code
+ *
+*/
+int amdgpu_query_sw_info(amdgpu_device_handle dev, enum amdgpu_sw_info info,
+			 void *value);
+
+/**
  * Query information about GDS
  *
  * \param   dev      - \c [in] Device handle. See #amdgpu_device_initialize()
  * \param   gds_info - \c [out] Pointer to structure to get GDS information
  *
  * \return   0 on success\n
  *          <0 - Negative POSIX Error code
  *
 */
 int amdgpu_query_gds_info(amdgpu_device_handle dev,
diff --git a/amdgpu/amdgpu_device.c b/amdgpu/amdgpu_device.c
index f34e27a..6ee25a9 100644
--- a/amdgpu/amdgpu_device.c
+++ b/amdgpu/amdgpu_device.c
@@ -268,20 +268,21 @@ int amdgpu_device_initialize(int fd,
 		start = dev->dev_info.high_va_offset;
 		max = dev->dev_info.high_va_max;
 	} else {
 		start = dev->dev_info.virtual_address_offset;
 		max = dev->dev_info.virtual_address_max;
 	}
 
 	max = MIN2(max, (start & ~0xffffffffULL) + 0x100000000ULL);
 	amdgpu_vamgr_init(&dev->vamgr_32, start, max,
 			  dev->dev_info.virtual_address_alignment);
+	dev->address32_hi = start >> 32;
 
 	start = max;
 	if (dev->dev_info.high_va_offset && dev->dev_info.high_va_max)
 		max = dev->dev_info.high_va_max;
 	else
 		max = dev->dev_info.virtual_address_max;
 
 	amdgpu_vamgr_init(&dev->vamgr, start, max,
 			  dev->dev_info.virtual_address_alignment);
 
 	amdgpu_parse_asic_ids(dev);
@@ -305,10 +306,23 @@ cleanup:
 int amdgpu_device_deinitialize(amdgpu_device_handle dev)
 {
 	amdgpu_device_reference(&dev, NULL);
 	return 0;
 }
 
 const char *amdgpu_get_marketing_name(amdgpu_device_handle dev)
 {
 	return dev->marketing_name;
 }
+
+int amdgpu_query_sw_info(amdgpu_device_handle dev, enum amdgpu_sw_info info,
+			 void *value)
+{
+	uint32_t *val32 = (uint32_t*)value;
+
+	switch (info) {
+	case amdgpu_sw_info_address32_hi:
+		*val32 = dev->address32_hi;
+		return 0;
+	}
+	return -EINVAL;
+}
diff --git a/amdgpu/amdgpu_internal.h b/amdgpu/amdgpu_internal.h
index 3e044f1..802b162 100644
--- a/amdgpu/amdgpu_internal.h
+++ b/amdgpu/amdgpu_internal.h
@@ -68,20 +68,21 @@ struct amdgpu_va {
 	enum amdgpu_gpu_va_range range;
 	struct amdgpu_bo_va_mgr *vamgr;
 };
 
 struct amdgpu_device {
 	atomic_t refcount;
 	int fd;
 	int flink_fd;
 	unsigned major_version;
 	unsigned minor_version;
+	uint32_t address32_hi;
 
 	char *marketing_name;
 	/** List of buffer handles. Protected by bo_table_mutex. */
 	struct util_hash_table *bo_handles;
 	/** List of buffer GEM flink names. Protected by bo_table_mutex. */
 	struct util_hash_table *bo_flink_names;
 	/** This protects all hash tables. */
 	pthread_mutex_t bo_table_mutex;
Re: Deadlocks with multiple applications on AMD RX 460 and RX 550 - Update 2
Hi Christian, Alexander,

I have enabled kmemleak, but kmemleak didn't detect anything special. In fact this time, I don't know why, I didn't get any allocation failure at all, but the GPU did hang after around 4h 6m of uptime with Xorg. The log can be found in attachment. I will try again to see if the allocation failure reappears, or if it has become less apparent due to kmemleak scans.

The kernel stack trace is similar to the GPU hangs I was getting on earlier kernel versions with Kodi, or Firefox when watching videos with either one, but if I left Xorg idle, it would remain up and available without hanging for more than one day. This stack trace also looks quite similar to what Daniel Andersson reported in "[BUG] Intermittent hang/deadlock when opening browser tab with Vega gpu", which looks like another demonstration of the same bug on different architectures.

Regards,
Luís

On Fri, Feb 2, 2018 at 7:48 AM, Christian König wrote:
> Hi Luis,
>
> please enable kmemleak in your build and watch out for any suspicious
> messages in the system log.
>
> Regards,
> Christian.
>
> On 02.02.2018 at 00:03, Luís Mendes wrote:
>>
>> Hi Alexander,
>>
>> I didn't notice improvements on this issue with that particular patch
>> applied. It still ends up failing to allocate kernel memory after a
>> few hours of uptime with Xorg.
>>
>> I will try to upgrade to mesa 18.0.0-rc3 and to amd-staging-drm-next
>> head, to see if the issue still occurs with those versions.
>>
>> If you have additional suggestions I'll be happy to try them.
>>
>> Regards,
>> Luís Mendes
>>
>> On Thu, Feb 1, 2018 at 2:30 AM, Alex Deucher wrote:
>>>
>>> On Wed, Jan 31, 2018 at 6:57 PM, Luís Mendes wrote:
>>>> Hi everyone,
>>>>
>>>> I am getting a new issue with amdgpu with RX460, that is, now I can
>>>> play any videos with Kodi or play web videos with firefox and run
>>>> OpenGL applications without running into any issues, however after
>>>> some uptime with Xorg even when almost inactive I get a kmalloc
>>>> allocation failure, normally followed by a GPU hang a while after
>>>> the allocation failure.
>>>>
>>>> I had a terminal window under Ubuntu Mate 17.10 and I was compiling
>>>> code when I got the kernel messages that can be found in attachment.
>>>> I am using the kernel as identified on my previous email, which can
>>>> be found below.
>>>
>>> does this patch help?
>>> https://patchwork.freedesktop.org/patch/198258/
>>>
>>> Alex
>>>
>>>> Regards,
>>>> Luís Mendes
>>>>
>>>> On Wed, Jan 31, 2018 at 12:47 PM, Luís Mendes wrote:
>>>>>
>>>>> Hi Alexander,
>>>>>
>>>>> I've cherry picked the patch you pointed out into kernel from
>>>>> amd-drm-next-4.17-wip at commit
>>>>> 9ab2894122275a6d636bb2654a157e88a0f7b9e2 (drm/amdgpu: set
>>>>> DRIVER_ATOMIC flag early) and tested it on ARMv7l and the problem
>>>>> has gone indeed.
>>>>>
>>>>> Working great on ARMv7l with AMD RX460.
>>>>>
>>>>> Thanks,
>>>>> Luís Mendes
>>>>>
>>>>> On Tue, Jan 30, 2018 at 6:44 PM, Deucher, Alexander wrote:
>>>>>>
>>>>>> Fixed with this patch:
>>>>>>
>>>>>> https://lists.freedesktop.org/archives/amd-gfx/2018-January/018472.html
>>>>>>
>>>>>> Alex
>>>>>>
>>>>>> ___
>>>>>> amd-gfx mailing list
>>>>>> amd-gfx@lists.freedesktop.org
>>>>>> https://lists.freedesktop.org/mailman/listinfo/amd-gfx

Feb 2 16:29:29 localhost kernel: [14801.740467] [drm:amdgpu_job_timedout [amdgpu]] *ERROR* ring gfx timeout, last signaled seq=831006, last emitted seq=831008
Feb 2 16:29:29 localhost kernel: [14801.751557] [drm] IP block:gmc_v8_0 is hung!
Feb 2 16:29:29 localhost kernel: [14801.751563] [drm] IP block:gfx_v8_0 is hung!
Feb 2 16:29:29 localhost kernel: [14801.751611] [drm] GPU recovery disabled.
Feb 2 16:44:53 localhost kernel: [15725.856181] INFO: task amdgpu_cs:0:3803 blocked for more than 120 seconds.
Feb 2 16:44:53 localhost kernel: [15725.863085]       Not tainted 4.15.0-rc8-next2g-g9ab2894-dirty #3
Feb 2 16:44:53 localhost kernel: [15725.869213] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
Feb 2 16:44:53 localhost kernel: [15725.877078] amdgpu_cs:0     D    0  3803   3091 0x
Feb 2 16:44:53 localhost kernel: [15725.877084] Backtrace:
Feb 2 16:44:53 localhost kernel: [15725.877096] [<80b571c8>] (__schedule) from [<80b578cc>] (schedule+0x44/0xa4)
Feb 2 16:44:53 localhost kernel: [15725.877102] r10:600f0013 r9:b45b6000 r8:b45b7bd4 r7: r6:7fff r5:81004c48
Feb 2 16:44:53 localhost kernel: [15725.877104] r4:e000
Feb 2 16:44:53 localhost kernel: [15725.877110] [<80b57888>]
Re: [PATCH 1/2] drm/amdgpu/dce: fix mask in dce_v*_0_is_in_vblank
On 2018-02-02 06:33 PM, Alex Deucher wrote:
> Using the wrong mask.
>
> Noticed-by: Hans de Ruiter
> Signed-off-by: Alex Deucher

The series is Reviewed-by: Michel Dänzer

Thanks for taking care of this!

--
Earthling Michel Dänzer | http://www.amd.com
Libre software enthusiast | Mesa and X developer
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
[PATCH libdrm] admgpu: add amdgpu_query_sw_info for querying high bits of 32-bit address space
From: Marek Olšák
---
 amdgpu/amdgpu.h          | 21 +++++++++++++++++++++
 amdgpu/amdgpu_device.c   | 14 ++++++++++++++
 amdgpu/amdgpu_internal.h |  1 +
 3 files changed, 36 insertions(+)

diff --git a/amdgpu/amdgpu.h b/amdgpu/amdgpu.h
index 2eb03bf..928b2a6 100644
--- a/amdgpu/amdgpu.h
+++ b/amdgpu/amdgpu.h
@@ -87,20 +87,24 @@ enum amdgpu_bo_handle_type {
 	amdgpu_bo_handle_type_dma_buf_fd = 2
 };
 
 /** Define known types of GPU VM VA ranges */
 enum amdgpu_gpu_va_range {
 	/** Allocate from "normal"/general range */
 	amdgpu_gpu_va_range_general = 0
 };
 
+enum amdgpu_sw_info {
+	amdgpu_sw_info_address32_hi = 0,
+};
+
 /*--*/
 /* -- Datatypes --- */
 /*--*/
 
 /**
  * Define opaque pointer to context associated with fd.
  * This context will be returned as the result of
  * "initialize" function and should be pass as the first
  * parameter to any API call
  */
@@ -1079,20 +1083,37 @@ int amdgpu_query_gpu_info(amdgpu_device_handle dev,
  * \param value   - \c [out] Pointer to the return value.
  *
  * \return   0 on success\n
  *          <0 - Negative POSIX error code
  *
 */
 int amdgpu_query_info(amdgpu_device_handle dev, unsigned info_id,
 		      unsigned size, void *value);
 
 /**
+ * Query hardware or driver information.
+ *
+ * The return size is query-specific and depends on the "info_id" parameter.
+ * No more than "size" bytes is returned.
+ *
+ * \param   dev     - \c [in] Device handle. See #amdgpu_device_initialize()
+ * \param   info    - \c [in] amdgpu_sw_info_*
+ * \param   value   - \c [out] Pointer to the return value.
+ *
+ * \return   0 on success\n
+ *          <0 - Negative POSIX error code
+ *
+*/
+int amdgpu_query_sw_info(amdgpu_device_handle dev, enum amdgpu_sw_info info,
+			 void *value);
+
+/**
  * Query information about GDS
  *
  * \param   dev      - \c [in] Device handle. See #amdgpu_device_initialize()
  * \param   gds_info - \c [out] Pointer to structure to get GDS information
  *
  * \return   0 on success\n
  *          <0 - Negative POSIX Error code
  *
 */
 int amdgpu_query_gds_info(amdgpu_device_handle dev,
diff --git a/amdgpu/amdgpu_device.c b/amdgpu/amdgpu_device.c
index f34e27a..6ee25a9 100644
--- a/amdgpu/amdgpu_device.c
+++ b/amdgpu/amdgpu_device.c
@@ -268,20 +268,21 @@ int amdgpu_device_initialize(int fd,
 		start = dev->dev_info.high_va_offset;
 		max = dev->dev_info.high_va_max;
 	} else {
 		start = dev->dev_info.virtual_address_offset;
 		max = dev->dev_info.virtual_address_max;
 	}
 
 	max = MIN2(max, (start & ~0xffffffffULL) + 0x100000000ULL);
 	amdgpu_vamgr_init(&dev->vamgr_32, start, max,
 			  dev->dev_info.virtual_address_alignment);
+	dev->address32_hi = start >> 32;
 
 	start = max;
 	if (dev->dev_info.high_va_offset && dev->dev_info.high_va_max)
 		max = dev->dev_info.high_va_max;
 	else
 		max = dev->dev_info.virtual_address_max;
 
 	amdgpu_vamgr_init(&dev->vamgr, start, max,
 			  dev->dev_info.virtual_address_alignment);
 
 	amdgpu_parse_asic_ids(dev);
@@ -305,10 +306,23 @@ cleanup:
 int amdgpu_device_deinitialize(amdgpu_device_handle dev)
 {
 	amdgpu_device_reference(&dev, NULL);
 	return 0;
 }
 
 const char *amdgpu_get_marketing_name(amdgpu_device_handle dev)
 {
 	return dev->marketing_name;
 }
+
+int amdgpu_query_sw_info(amdgpu_device_handle dev, enum amdgpu_sw_info info,
+			 void *value)
+{
+	uint32_t *val32 = (uint32_t*)value;
+
+	switch (info) {
+	case amdgpu_sw_info_address32_hi:
+		*val32 = dev->address32_hi;
+		return 0;
+	}
+	return -EINVAL;
+}
diff --git a/amdgpu/amdgpu_internal.h b/amdgpu/amdgpu_internal.h
index 3e044f1..802b162 100644
--- a/amdgpu/amdgpu_internal.h
+++ b/amdgpu/amdgpu_internal.h
@@ -68,20 +68,21 @@ struct amdgpu_va {
 	enum amdgpu_gpu_va_range range;
 	struct amdgpu_bo_va_mgr *vamgr;
 };
 
 struct amdgpu_device {
 	atomic_t refcount;
 	int fd;
 	int flink_fd;
 	unsigned major_version;
 	unsigned minor_version;
+	uint32_t address32_hi;
 
 	char *marketing_name;
 	/** List of buffer handles. Protected by bo_table_mutex. */
 	struct util_hash_table *bo_handles;
 	/** List of buffer GEM flink names. Protected by bo_table_mutex. */
 	struct util_hash_table *bo_flink_names;
 	/** This protects all hash tables. */
 	pthread_mutex_t bo_table_mutex;
 
 	struct drm_amdgpu_info_device dev_info;
 	struct amdgpu_gpu_info info;
--
2.7.4
___
[PATCH 1/2] drm/amdgpu/dce: fix mask in dce_v*_0_is_in_vblank
Using the wrong mask.

Noticed-by: Hans de Ruiter
Signed-off-by: Alex Deucher
---
 drivers/gpu/drm/amd/amdgpu/dce_v10_0.c | 2 +-
 drivers/gpu/drm/amd/amdgpu/dce_v11_0.c | 2 +-
 drivers/gpu/drm/amd/amdgpu/dce_v8_0.c  | 2 +-
 3 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c b/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
index c7d1ef00f9a4..8161b6579715 100644
--- a/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c
@@ -193,7 +193,7 @@ static void dce_v10_0_audio_endpt_wreg(struct amdgpu_device *adev,
 static bool dce_v10_0_is_in_vblank(struct amdgpu_device *adev, int crtc)
 {
 	if (RREG32(mmCRTC_STATUS + crtc_offsets[crtc]) &
-	    CRTC_V_BLANK_START_END__CRTC_V_BLANK_START_MASK)
+	    CRTC_STATUS__CRTC_V_BLANK_MASK)
 		return true;
 	else
 		return false;
diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c b/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
index 99bc1f36c96b..00b3df281207 100644
--- a/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/dce_v11_0.c
@@ -210,7 +210,7 @@ static void dce_v11_0_audio_endpt_wreg(struct amdgpu_device *adev,
 static bool dce_v11_0_is_in_vblank(struct amdgpu_device *adev, int crtc)
 {
 	if (RREG32(mmCRTC_STATUS + crtc_offsets[crtc]) &
-	    CRTC_V_BLANK_START_END__CRTC_V_BLANK_START_MASK)
+	    CRTC_STATUS__CRTC_V_BLANK_MASK)
 		return true;
 	else
 		return false;
diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c b/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
index 823a8c331da5..6fc3e05aadbc 100644
--- a/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
+++ b/drivers/gpu/drm/amd/amdgpu/dce_v8_0.c
@@ -143,7 +143,7 @@ static void dce_v8_0_audio_endpt_wreg(struct amdgpu_device *adev,
 static bool dce_v8_0_is_in_vblank(struct amdgpu_device *adev, int crtc)
 {
 	if (RREG32(mmCRTC_STATUS + crtc_offsets[crtc]) &
-	    CRTC_V_BLANK_START_END__CRTC_V_BLANK_START_MASK)
+	    CRTC_STATUS__CRTC_V_BLANK_MASK)
 		return true;
 	else
 		return false;
--
2.13.6
___
amd-gfx mailing list
amd-gfx@lists.freedesktop.org
https://lists.freedesktop.org/mailman/listinfo/amd-gfx
[PATCH 2/2] drm/amdgpu: remove unused display_vblank_wait interface
No longer used since we changed the MC programming sequence. Signed-off-by: Alex Deucher--- drivers/gpu/drm/amd/amdgpu/amdgpu.h | 1 - drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h | 2 - drivers/gpu/drm/amd/amdgpu/dce_v10_0.c| 61 --- drivers/gpu/drm/amd/amdgpu/dce_v11_0.c| 61 --- drivers/gpu/drm/amd/amdgpu/dce_v6_0.c | 59 -- drivers/gpu/drm/amd/amdgpu/dce_v8_0.c | 61 --- drivers/gpu/drm/amd/amdgpu/dce_virtual.c | 14 -- drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm.c | 1 - 8 files changed, 260 deletions(-) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu.h b/drivers/gpu/drm/amd/amdgpu/amdgpu.h index 02b1c954e31b..ebe4595c8897 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu.h @@ -1803,7 +1803,6 @@ amdgpu_get_sdma_instance(struct amdgpu_ring *ring) #define amdgpu_ih_decode_iv(adev, iv) (adev)->irq.ih_funcs->decode_iv((adev), (iv)) #define amdgpu_ih_set_rptr(adev) (adev)->irq.ih_funcs->set_rptr((adev)) #define amdgpu_display_vblank_get_counter(adev, crtc) (adev)->mode_info.funcs->vblank_get_counter((adev), (crtc)) -#define amdgpu_display_vblank_wait(adev, crtc) (adev)->mode_info.funcs->vblank_wait((adev), (crtc)) #define amdgpu_display_backlight_set_level(adev, e, l) (adev)->mode_info.funcs->backlight_set_level((e), (l)) #define amdgpu_display_backlight_get_level(adev, e) (adev)->mode_info.funcs->backlight_get_level((e)) #define amdgpu_display_hpd_sense(adev, h) (adev)->mode_info.funcs->hpd_sense((adev), (h)) diff --git a/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h b/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h index ea1bd75bef35..d9533bbc467c 100644 --- a/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h +++ b/drivers/gpu/drm/amd/amdgpu/amdgpu_mode.h @@ -267,8 +267,6 @@ struct amdgpu_display_funcs { void (*bandwidth_update)(struct amdgpu_device *adev); /* get frame count */ u32 (*vblank_get_counter)(struct amdgpu_device *adev, int crtc); - /* wait for vblank */ - void (*vblank_wait)(struct amdgpu_device *adev, int crtc); /* set backlight level */ 
void (*backlight_set_level)(struct amdgpu_encoder *amdgpu_encoder, u8 level); diff --git a/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c b/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c index 8161b6579715..7ea900010702 100644 --- a/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c +++ b/drivers/gpu/drm/amd/amdgpu/dce_v10_0.c @@ -190,66 +190,6 @@ static void dce_v10_0_audio_endpt_wreg(struct amdgpu_device *adev, spin_unlock_irqrestore(>audio_endpt_idx_lock, flags); } -static bool dce_v10_0_is_in_vblank(struct amdgpu_device *adev, int crtc) -{ - if (RREG32(mmCRTC_STATUS + crtc_offsets[crtc]) & - CRTC_STATUS__CRTC_V_BLANK_MASK) - return true; - else - return false; -} - -static bool dce_v10_0_is_counter_moving(struct amdgpu_device *adev, int crtc) -{ - u32 pos1, pos2; - - pos1 = RREG32(mmCRTC_STATUS_POSITION + crtc_offsets[crtc]); - pos2 = RREG32(mmCRTC_STATUS_POSITION + crtc_offsets[crtc]); - - if (pos1 != pos2) - return true; - else - return false; -} - -/** - * dce_v10_0_vblank_wait - vblank wait asic callback. - * - * @adev: amdgpu_device pointer - * @crtc: crtc to wait for vblank on - * - * Wait for vblank on the requested crtc (evergreen+). - */ -static void dce_v10_0_vblank_wait(struct amdgpu_device *adev, int crtc) -{ - unsigned i = 100; - - if (crtc >= adev->mode_info.num_crtc) - return; - - if (!(RREG32(mmCRTC_CONTROL + crtc_offsets[crtc]) & CRTC_CONTROL__CRTC_MASTER_EN_MASK)) - return; - - /* depending on when we hit vblank, we may be close to active; if so, -* wait for another frame. 
-*/ - while (dce_v10_0_is_in_vblank(adev, crtc)) { - if (i++ == 100) { - i = 0; - if (!dce_v10_0_is_counter_moving(adev, crtc)) - break; - } - } - - while (!dce_v10_0_is_in_vblank(adev, crtc)) { - if (i++ == 100) { - i = 0; - if (!dce_v10_0_is_counter_moving(adev, crtc)) - break; - } - } -} - static u32 dce_v10_0_vblank_get_counter(struct amdgpu_device *adev, int crtc) { if (crtc >= adev->mode_info.num_crtc) @@ -3602,7 +3542,6 @@ static void dce_v10_0_encoder_add(struct amdgpu_device *adev, static const struct amdgpu_display_funcs dce_v10_0_display_funcs = { .bandwidth_update = _v10_0_bandwidth_update, .vblank_get_counter = _v10_0_vblank_get_counter, - .vblank_wait = _v10_0_vblank_wait, .backlight_set_level =
Re: [BUG] Intermittent hang/deadlock when opening browser tab with Vega gpu
On Fri, Feb 2, 2018 at 6:24 AM, Daniel Anderssonwrote: > Hi, > > I have an intermittent deadlock/hang in the amdgpu driver. It seems to > happen when I open a new tab in qutebrowser(v1.1.1), while I am doing other > stuff, like watching youtube through mpv or playing dota 2. It seems to be > pretty arbitrary how often it happens. Sometimes it is once a week and > sometimes multiple times a day. I have a vega 64. > > What happens is that the screen freezes but I still hear sound and can ssh > in to the box. If I reboot it remotely, I get dropped back to tty and it > tries to reboot but it gets stuck on blocking processes(mpv etc) so I have > to reset it manually. > Repro steps: > > * run qutebrowser > * Do a bunch of other stuff, videos, games etc > * Switch back to qutebrowser and hit "Ctrl+t" & be "lucky" > > This seems to happen on all release candidates for 4.15 and 4.15 itself: > > 4.15: > [ 2211.463021] INFO: task amdgpu_cs:0:1053 blocked for more than 120 > seconds. > [ 2211.463026] Not tainted 4.15.0-ARCH+ #1 > [ 2211.463028] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables > this message. > [ 2211.463030] amdgpu_cs:0 D0 1053 1051 0x > [ 2211.463032] Call Trace: > [ 2211.463040] ? __schedule+0x297/0x8b0 > [ 2211.463043] schedule+0x2f/0x90 > [ 2211.463045] schedule_timeout+0x1fd/0x3a0 > [ 2211.463085] ? amdgpu_job_alloc+0x37/0xc0 [amdgpu] > [ 2211.463088] dma_fence_default_wait+0x1cc/0x270 > [ 2211.463090] ? dma_fence_release+0xa0/0xa0 > [ 2211.463092] dma_fence_wait_timeout+0x39/0x110 > [ 2211.463119] amdgpu_ctx_wait_prev_fence+0x46/0x80 [amdgpu] > [ 2211.463145] amdgpu_cs_ioctl+0x98/0x1ac0 [amdgpu] > [ 2211.463149] ? dequeue_entity+0xdc/0x460 > [ 2211.463174] ? amdgpu_cs_find_mapping+0xc0/0xc0 [amdgpu] > [ 2211.463185] drm_ioctl_kernel+0x5b/0xb0 [drm] > [ 2211.463194] drm_ioctl+0x2ae/0x350 [drm] > [ 2211.463218] ? 
amdgpu_cs_find_mapping+0xc0/0xc0 [amdgpu] > [ 2211.463239] amdgpu_drm_ioctl+0x49/0x80 [amdgpu] > [ 2211.463243] do_vfs_ioctl+0xa4/0x630 > [ 2211.463246] ? SyS_futex+0x12d/0x180 > [ 2211.463248] SyS_ioctl+0x74/0x80 > [ 2211.463251] entry_SYSCALL_64_fastpath+0x20/0x83 > [ 2211.463254] RIP: 0033:0x7f21b27b6d87 > [ 2211.463255] RSP: 002b:7f21a83acab8 EFLAGS: 0246 > [ 2334.343027] INFO: task amdgpu_cs:0:1053 blocked for more than 120 > seconds. > [ 2334.343032] Not tainted 4.15.0-ARCH+ #1 > [ 2334.343034] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables > this message. > [ 2334.343036] amdgpu_cs:0 D0 1053 1051 0x > [ 2334.343039] Call Trace: > [ 2334.343046] ? __schedule+0x297/0x8b0 > [ 2334.343049] schedule+0x2f/0x90 > [ 2334.343051] schedule_timeout+0x1fd/0x3a0 > [ 2334.343091] ? amdgpu_job_alloc+0x37/0xc0 [amdgpu] > [ 2334.343095] dma_fence_default_wait+0x1cc/0x270 > [ 2334.343097] ? dma_fence_release+0xa0/0xa0 > [ 2334.343098] dma_fence_wait_timeout+0x39/0x110 > [ 2334.343125] amdgpu_ctx_wait_prev_fence+0x46/0x80 [amdgpu] > [ 2334.343151] amdgpu_cs_ioctl+0x98/0x1ac0 [amdgpu] > [ 2334.343155] ? dequeue_entity+0xdc/0x460 > [ 2334.343181] ? amdgpu_cs_find_mapping+0xc0/0xc0 [amdgpu] > [ 2334.343191] drm_ioctl_kernel+0x5b/0xb0 [drm] > [ 2334.343200] drm_ioctl+0x2ae/0x350 [drm] > [ 2334.343224] ? amdgpu_cs_find_mapping+0xc0/0xc0 [amdgpu] > [ 2334.343245] amdgpu_drm_ioctl+0x49/0x80 [amdgpu] > [ 2334.343249] do_vfs_ioctl+0xa4/0x630 > [ 2334.343252] ? SyS_futex+0x12d/0x180 > [ 2334.343254] SyS_ioctl+0x74/0x80 > [ 2334.343257] entry_SYSCALL_64_fastpath+0x20/0x83 > [ 2334.343259] RIP: 0033:0x7f21b27b6d87 > [ 2334.343260] RSP: 002b:7f21a83acab8 EFLAGS: 0246 > [ 2457.222859] INFO: task amdgpu_cs:0:1053 blocked for more than 120 > seconds. > [ 2457.222862] Not tainted 4.15.0-ARCH+ #1 > [ 2457.222863] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables > this message. > [ 2457.222864] amdgpu_cs:0 D0 1053 1051 0x > [ 2457.222866] Call Trace: > [ 2457.222872] ? 
__schedule+0x297/0x8b0 > [ 2457.222873] schedule+0x2f/0x90 > [ 2457.222875] schedule_timeout+0x1fd/0x3a0 > [ 2457.222900] ? amdgpu_job_alloc+0x37/0xc0 [amdgpu] > [ 2457.222902] dma_fence_default_wait+0x1cc/0x270 > [ 2457.222903] ? dma_fence_release+0xa0/0xa0 > [ 2457.222904] dma_fence_wait_timeout+0x39/0x110 > [ 2457.222918] amdgpu_ctx_wait_prev_fence+0x46/0x80 [amdgpu] > [ 2457.222932] amdgpu_cs_ioctl+0x98/0x1ac0 [amdgpu] > [ 2457.222935] ? dequeue_entity+0xdc/0x460 > [ 2457.222948] ? amdgpu_cs_find_mapping+0xc0/0xc0 [amdgpu] > [ 2457.222955] drm_ioctl_kernel+0x5b/0xb0 [drm] > [ 2457.222960] drm_ioctl+0x2ae/0x350 [drm] > [ 2457.222972] ? amdgpu_cs_find_mapping+0xc0/0xc0 [amdgpu] > [ 2457.222983] amdgpu_drm_ioctl+0x49/0x80 [amdgpu] > [ 2457.222986] do_vfs_ioctl+0xa4/0x630 > [ 2457.222989] ? SyS_futex+0x12d/0x180 > [ 2457.222989] SyS_ioctl+0x74/0x80 > [ 2457.222991] entry_SYSCALL_64_fastpath+0x20/0x83 > [ 2457.222993] RIP: 0033:0x7f21b27b6d87 > [
Re: All games maded with Unity engine in steam catalog crashes on AMD GPU, but worked on Intel GPU on same machine.
On Fri, Feb 2, 2018 at 5:13 PM, mikhailwrote: > On Thu, 2018-02-01 at 21:31 +0100, Marek Olšák wrote: >> >> Valgrind doesn't show any memory-related issue with Mesa. It does show >> an issue with "New Unity Project". This can corrupt the heap and cause >> a random crash on the next call of malloc/free/new/delete: >> >> ==17721== Mismatched free() / delete / delete [] >> ==17721==at 0x4C311E8: operator delete(void*) (vg_replace_malloc.c:576) >> ==17721==by 0xD4482C: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0xD315D8: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x4FD79A: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x7E6350: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x8197CC: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x4608A1: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x5D7B009: (below main) (libc-start.c:308) >> ==17721== Address 0x1f6bf470 is 0 bytes inside a block of size 220 alloc'd >> ==17721==at 0x4C308B7: operator new[](unsigned long) >> (vg_replace_malloc.c:423) >> ==17721==by 0xD47CF3: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0xD3EF39: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x4FD712: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x7E2B8D: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x818B52: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x4603F1: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x5D7B009: (below main) (libc-start.c:308) >> ==17721== >> ==17721== Mismatched free() / delete / delete [] >> ==17721==at 0x4C311E8: operator delete(void*) (vg_replace_malloc.c:576) >> ==17721==by 0xD44843: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0xD315D8: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x4FD79A: ??? 
(in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x7E6350: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x8197CC: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x4608A1: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x5D7B009: (below main) (libc-start.c:308) >> ==17721== Address 0x1f6bf590 is 0 bytes inside a block of size 220 alloc'd >> ==17721==at 0x4C308B7: operator new[](unsigned long) >> (vg_replace_malloc.c:423) >> ==17721==by 0xD47ED7: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0xD3EF39: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x4FD712: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x7E2B8D: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x818B52: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x4603F1: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x5D7B009: (below main) (libc-start.c:308) >> >> Marek > > > Tak said: We'll investigate, but if that were the case, wouldn't it affect > the intel driver the same way? There are several factors making radeonsi more vulnerable to heap corruption: - radeonsi has a lot of running threads, unlike intel - radeonsi uses LLVM, which is a big user of new/delete to the extent of making new/delete/malloc/free a little slower for Mesa and Unity. Closed-source drivers use their own allocator for memory allocations (usually jemalloc), so they are immune to such Unity bugs. Marek ___ amd-gfx mailing list amd-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/amd-gfx
Re: [AMDGPU] module does not link without DEBUG_FS configuration option
On 2018-02-02 05:29 PM, sylvain.bertr...@gmail.com wrote:
> Hi,
>
> I did not look into details, but on amd-staging-drm-next
> (495e9e174feaec6e5aeb6f8224f0d3bda4c96114), linking the amdgpu module fails if
> DEBUG_FS is not enabled (probably some weird things happen in the code with
> the CONFIG_DEBUG_FS macro).
>
> Saw passing something about an amd-gfx mailing list. Is this list still valid
> for amdgpu related work?

Yes, moving there.

--
Earthling Michel Dänzer | http://www.amd.com
Libre software enthusiast | Mesa and X developer
Re: All games maded with Unity engine in steam catalog crashes on AMD GPU, but worked on Intel GPU on same machine.
On 2018-02-02 05:13 PM, mikhail wrote: > On Thu, 2018-02-01 at 21:31 +0100, Marek Olšák wrote: >> >> Valgrind doesn't show any memory-related issue with Mesa. It does show >> an issue with "New Unity Project". This can corrupt the heap and cause >> a random crash on the next call of malloc/free/new/delete: >> >> ==17721== Mismatched free() / delete / delete [] >> ==17721==at 0x4C311E8: operator delete(void*) (vg_replace_malloc.c:576) >> ==17721==by 0xD4482C: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0xD315D8: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x4FD79A: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x7E6350: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x8197CC: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x4608A1: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x5D7B009: (below main) (libc-start.c:308) >> ==17721== Address 0x1f6bf470 is 0 bytes inside a block of size 220 alloc'd >> ==17721==at 0x4C308B7: operator new[](unsigned long) >> (vg_replace_malloc.c:423) >> ==17721==by 0xD47CF3: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0xD3EF39: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x4FD712: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x7E2B8D: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x818B52: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x4603F1: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x5D7B009: (below main) (libc-start.c:308) >> ==17721== >> ==17721== Mismatched free() / delete / delete [] >> ==17721==at 0x4C311E8: operator delete(void*) (vg_replace_malloc.c:576) >> ==17721==by 0xD44843: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0xD315D8: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x4FD79A: ??? 
(in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x7E6350: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x8197CC: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x4608A1: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x5D7B009: (below main) (libc-start.c:308) >> ==17721== Address 0x1f6bf590 is 0 bytes inside a block of size 220 alloc'd >> ==17721==at 0x4C308B7: operator new[](unsigned long) >> (vg_replace_malloc.c:423) >> ==17721==by 0xD47ED7: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0xD3EF39: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x4FD712: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x7E2B8D: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x818B52: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x4603F1: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) >> ==17721==by 0x5D7B009: (below main) (libc-start.c:308) >> >> Marek > > > Tak said: We'll investigate, but if that were the case, wouldn't it affect > the intel driver the same way? > > I also checked same Unity project under Valgrind on Intel GPU and see same > errors in log: It could be luck that the Unity engine's errors flagged by valgrind result in a crash with radeonsi but not with i965. The best course of action at this point is to fix the errors in the Unity engine, then check if it still crashes with radeonsi. -- Earthling Michel Dänzer | http://www.amd.com Libre software enthusiast | Mesa and X developer ___ amd-gfx mailing list amd-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/amd-gfx
Re: [PATCH] drm/amd/display: fix incompatible structure layouts
On Fri, Feb 2, 2018 at 4:39 PM, Harry Wentland wrote:
> On 2018-02-02 07:31 AM, Arnd Bergmann wrote:
>> Building the amd display driver with link-time optimizations revealed a bug
>
> Curious how I'd go about building with link-time optimizations.

I got the idea from last week's LWN article on the topic, see
https://lwn.net/Articles/744507/. I needed the latest gcc version to avoid
some compiler bugs, and a few dozen kernel patches on top to get a clean
build in random configurations (posted them all today). Most normal
configurations probably work out of the box, but I have not actually tried
running any ;-)

Arnd
Re: All games maded with Unity engine in steam catalog crashes on AMD GPU, but worked on Intel GPU on same machine.
On Thu, 2018-02-01 at 21:31 +0100, Marek Olšák wrote: > > Valgrind doesn't show any memory-related issue with Mesa. It does show > an issue with "New Unity Project". This can corrupt the heap and cause > a random crash on the next call of malloc/free/new/delete: > > ==17721== Mismatched free() / delete / delete [] > ==17721==at 0x4C311E8: operator delete(void*) (vg_replace_malloc.c:576) > ==17721==by 0xD4482C: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) > ==17721==by 0xD315D8: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) > ==17721==by 0x4FD79A: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) > ==17721==by 0x7E6350: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) > ==17721==by 0x8197CC: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) > ==17721==by 0x4608A1: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) > ==17721==by 0x5D7B009: (below main) (libc-start.c:308) > ==17721== Address 0x1f6bf470 is 0 bytes inside a block of size 220 alloc'd > ==17721==at 0x4C308B7: operator new[](unsigned long) > (vg_replace_malloc.c:423) > ==17721==by 0xD47CF3: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) > ==17721==by 0xD3EF39: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) > ==17721==by 0x4FD712: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) > ==17721==by 0x7E2B8D: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) > ==17721==by 0x818B52: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) > ==17721==by 0x4603F1: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) > ==17721==by 0x5D7B009: (below main) (libc-start.c:308) > ==17721== > ==17721== Mismatched free() / delete / delete [] > ==17721==at 0x4C311E8: operator delete(void*) (vg_replace_malloc.c:576) > ==17721==by 0xD44843: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) > ==17721==by 0xD315D8: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) > ==17721==by 0x4FD79A: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) > ==17721==by 0x7E6350: ??? 
(in /home/mikhail/New Unity Project/aaa.x86_64) > ==17721==by 0x8197CC: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) > ==17721==by 0x4608A1: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) > ==17721==by 0x5D7B009: (below main) (libc-start.c:308) > ==17721== Address 0x1f6bf590 is 0 bytes inside a block of size 220 alloc'd > ==17721==at 0x4C308B7: operator new[](unsigned long) > (vg_replace_malloc.c:423) > ==17721==by 0xD47ED7: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) > ==17721==by 0xD3EF39: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) > ==17721==by 0x4FD712: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) > ==17721==by 0x7E2B8D: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) > ==17721==by 0x818B52: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) > ==17721==by 0x4603F1: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) > ==17721==by 0x5D7B009: (below main) (libc-start.c:308) > > Marek Tak said: We'll investigate, but if that were the case, wouldn't it affect the intel driver the same way? I also checked same Unity project under Valgrind on Intel GPU and see same errors in log: ==3407== Mismatched free() / delete / delete [] ==3407==at 0x4C311E8: operator delete(void*) (vg_replace_malloc.c:576) ==3407==by 0xD4482C: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) ==3407==by 0xD315D8: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) ==3407==by 0x4FD79A: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) ==3407==by 0x7E6350: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) ==3407==by 0x8197CC: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) ==3407==by 0x4608A1: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) ==3407==by 0x5D7B009: (below main) (libc-start.c:308) ==3407== Address 0x17065190 is 0 bytes inside a block of size 220 alloc'd ==3407==at 0x4C308B7: operator new[](unsigned long) (vg_replace_malloc.c:423) ==3407==by 0xD47CF3: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) ==3407==by 0xD3EF39: ??? 
(in /home/mikhail/New Unity Project/aaa.x86_64) ==3407==by 0x4FD712: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) ==3407==by 0x7E2B8D: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) ==3407==by 0x818B52: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) ==3407==by 0x4603F1: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) ==3407==by 0x5D7B009: (below main) (libc-start.c:308) ==3407== Mismatched free() / delete / delete [] ==3407==at 0x4C311E8: operator delete(void*) (vg_replace_malloc.c:576) ==3407==by 0xD44843: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) ==3407==by 0xD315D8: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) ==3407==by 0x4FD79A: ??? (in /home/mikhail/New Unity Project/aaa.x86_64) ==3407==by 0x7E6350: ??? (in /home/mikhail/New Unity Project/aaa.x86_64)
Re: [PATCH] drm/amdgpu/display: fix wrong enum type for ddc_result
On 2018-02-01 08:55 PM, db...@chromium.org wrote: > From: Dominik Behr> > v2: now with fixed result comparison and spelling fixes > > Signed-off-by: Dominik Behr > --- > drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c | 2 +- > drivers/gpu/drm/amd/display/dc/core/dc_link.c | 2 +- > drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c | 4 ++-- > drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c| 4 ++-- > drivers/gpu/drm/amd/display/include/ddc_service_types.h | 2 +- > 5 files changed, 7 insertions(+), 7 deletions(-) > > diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c > b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c > index 1e8a21b67df7..3b05900d 100644 > --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c > +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c > @@ -130,7 +130,7 @@ static ssize_t dm_dp_aux_transfer(struct drm_dp_aux *aux, >msg->address, >msg->buffer, >msg->size, > - r == DDC_RESULT_SUCESSFULL); > + r == DDC_RESULT_SUCCESSFUL); > #endif > > return msg->size; > diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c > b/drivers/gpu/drm/amd/display/dc/core/dc_link.c > index 0023754e034b..3abd0f1a287f 100644 > --- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c > +++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c > @@ -1176,7 +1176,7 @@ static void dpcd_configure_panel_mode( > _config_set.raw, > sizeof(edp_config_set.raw)); > > - ASSERT(result == DDC_RESULT_SUCESSFULL); > + ASSERT(result == DDC_RESULT_SUCCESSFUL); > } > } > dm_logger_write(link->ctx->logger, LOG_DETECTION_DP_CAPS, > diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c > b/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c > index d5294798b0a5..6b69b339dba2 100644 > --- a/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c > +++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c > @@ -661,7 +661,7 @@ enum ddc_result dal_ddc_service_read_dpcd_data( > ddc->ctx->i2caux, > ddc->ddc_pin, > )) > - return 
DDC_RESULT_SUCESSFULL; > + return DDC_RESULT_SUCCESSFUL; > > return DDC_RESULT_FAILED_OPERATION; > } > @@ -698,7 +698,7 @@ enum ddc_result dal_ddc_service_write_dpcd_data( > ddc->ctx->i2caux, > ddc->ddc_pin, > )) > - return DDC_RESULT_SUCESSFULL; > + return DDC_RESULT_SUCCESSFUL; > > return DDC_RESULT_FAILED_OPERATION; > } > diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c > b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c > index 33d91e4474ea..cc067d04505d 100644 > --- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c > +++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c > @@ -1926,7 +1926,7 @@ bool dc_link_handle_hpd_rx_irq(struct dc_link *link, > union hpd_irq_data *out_hpd > { > union hpd_irq_data hpd_irq_dpcd_data = 0; > union device_service_irq device_service_clear = { { 0 } }; > - enum dc_status result = DDC_RESULT_UNKNOWN; > + enum ddc_result result = DDC_RESULT_UNKNOWN; Result gets the return value from read_hpd_rx_irq_data which is dc_status. This line should be enum dc_status result = DC_OK; > bool status = false; > /* For use cases related to down stream connection status change, >* PSR and device auto test, refer to function handle_sst_hpd_irq > @@ -1946,7 +1946,7 @@ bool dc_link_handle_hpd_rx_irq(struct dc_link *link, > union hpd_irq_data *out_hpd > if (out_hpd_irq_dpcd_data) > *out_hpd_irq_dpcd_data = hpd_irq_dpcd_data; > > - if (result != DC_OK) { > + if (result != DDC_RESULT_SUCCESSFUL) { We should keep the result != DC_OK check here as read_hpd_rx_irq_data returns dc_status. 
Harry > dm_logger_write(link->ctx->logger, LOG_HW_HPD_IRQ, > "%s: DPCD read failed to obtain irq data\n", > __func__); > diff --git a/drivers/gpu/drm/amd/display/include/ddc_service_types.h > b/drivers/gpu/drm/amd/display/include/ddc_service_types.h > index 019e7a095ea1..f3bf749b3636 100644 > --- a/drivers/gpu/drm/amd/display/include/ddc_service_types.h > +++ b/drivers/gpu/drm/amd/display/include/ddc_service_types.h > @@ -32,7 +32,7 @@ > > enum ddc_result { > DDC_RESULT_UNKNOWN = 0, > - DDC_RESULT_SUCESSFULL, > + DDC_RESULT_SUCCESSFUL, > DDC_RESULT_FAILED_CHANNEL_BUSY, > DDC_RESULT_FAILED_TIMEOUT, > DDC_RESULT_FAILED_PROTOCOL_ERROR, > ___ amd-gfx mailing list amd-gfx@lists.freedesktop.org https://lists.freedesktop.org/mailman/listinfo/amd-gfx
Re: [PATCH] drm/amd/display: fix incompatible structure layouts
On 2018-02-02 07:31 AM, Arnd Bergmann wrote: > Building the amd display driver with link-time optimizations revealed a bug Curious how I'd go about building with link-time optimizations. > that caused dal_cmd_tbl_helper_dce80_get_table() and > dal_cmd_tbl_helper_dce110_get_table() get called with an incompatible > return type between the two callers in command_table_helper.c and > command_table_helper2.c: > > drivers/gpu/drm/amd/amdgpu/../display/dc/bios/dce80/command_table_helper_dce80.h:31: > error: type of 'dal_cmd_tbl_helper_dce80_get_table' does not match original > declaration [-Werror=lto-type-mismatch] > const struct command_table_helper *dal_cmd_tbl_helper_dce80_get_table(void); > > drivers/gpu/drm/amd/amdgpu/../display/dc/bios/dce80/command_table_helper_dce80.c:351: > note: 'dal_cmd_tbl_helper_dce80_get_table' was previously declared here > const struct command_table_helper *dal_cmd_tbl_helper_dce80_get_table(void) > > drivers/gpu/drm/amd/amdgpu/../display/dc/bios/dce110/command_table_helper_dce110.h:32: > error: type of 'dal_cmd_tbl_helper_dce110_get_table' does not match original > declaration [-Werror=lto-type-mismatch] > const struct command_table_helper *dal_cmd_tbl_helper_dce110_get_table(void); > > drivers/gpu/drm/amd/amdgpu/../display/dc/bios/dce110/command_table_helper_dce110.c:361: > note: 'dal_cmd_tbl_helper_dce110_get_table' was previously declared here > const struct command_table_helper *dal_cmd_tbl_helper_dce110_get_table(void) > > The two versions of the structure are obviously derived from the same > one, but have diverged over time, before they got added to the kernel. > > This moves the structure to a new shared header file and uses the superset > of the members, to ensure the interfaces are all compatible. > > Fixes: ae79c310b1a6 ("drm/amd/display: Add DCE12 bios parser support") > Signed-off-by: Arnd BergmannThanks for the fix. 
Reviewed-by: Harry Wentland Harry > --- > .../drm/amd/display/dc/bios/command_table_helper.h | 33 +-- > .../amd/display/dc/bios/command_table_helper2.h| 30 +- > .../display/dc/bios/command_table_helper_struct.h | 66 > ++ > 3 files changed, 68 insertions(+), 61 deletions(-) > create mode 100644 > drivers/gpu/drm/amd/display/dc/bios/command_table_helper_struct.h > > diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table_helper.h > b/drivers/gpu/drm/amd/display/dc/bios/command_table_helper.h > index 1fab634b66be..4c3789df253d 100644 > --- a/drivers/gpu/drm/amd/display/dc/bios/command_table_helper.h > +++ b/drivers/gpu/drm/amd/display/dc/bios/command_table_helper.h > @@ -29,38 +29,7 @@ > #include "dce80/command_table_helper_dce80.h" > #include "dce110/command_table_helper_dce110.h" > #include "dce112/command_table_helper_dce112.h" > - > -struct command_table_helper { > - bool (*controller_id_to_atom)(enum controller_id id, uint8_t *atom_id); > - uint8_t (*encoder_action_to_atom)( > - enum bp_encoder_control_action action); > - uint32_t (*encoder_mode_bp_to_atom)(enum signal_type s, > - bool enable_dp_audio); > - bool (*engine_bp_to_atom)(enum engine_id engine_id, > - uint32_t *atom_engine_id); > - void (*assign_control_parameter)( > - const struct command_table_helper *h, > - struct bp_encoder_control *control, > - DIG_ENCODER_CONTROL_PARAMETERS_V2 *ctrl_param); > - bool (*clock_source_id_to_atom)(enum clock_source_id id, > - uint32_t *atom_pll_id); > - bool (*clock_source_id_to_ref_clk_src)( > - enum clock_source_id id, > - uint32_t *ref_clk_src_id); > - uint8_t (*transmitter_bp_to_atom)(enum transmitter t); > - uint8_t (*encoder_id_to_atom)(enum encoder_id id); > - uint8_t (*clock_source_id_to_atom_phy_clk_src_id)( > - enum clock_source_id id); > - uint8_t (*signal_type_to_atom_dig_mode)(enum signal_type s); > - uint8_t (*hpd_sel_to_atom)(enum hpd_source_id id); > - uint8_t (*dig_encoder_sel_to_atom)(enum engine_id engine_id); > - uint8_t 
(*phy_id_to_atom)(enum transmitter t); > - uint8_t (*disp_power_gating_action_to_atom)( > - enum bp_pipe_control_action action); > - bool (*dc_clock_type_to_atom)(enum bp_dce_clock_type id, > - uint32_t *atom_clock_type); > -uint8_t (*transmitter_color_depth_to_atom)(enum transmitter_color_depth > id); > -}; > +#include "command_table_helper_struct.h" > > bool dal_bios_parser_init_cmd_tbl_helper(const struct command_table_helper > **h, > enum dce_version dce); > diff --git a/drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.h > b/drivers/gpu/drm/amd/display/dc/bios/command_table_helper2.h > index 9f587c91d843..785fcb20a1b9 100644 > ---
Re: [PATCH] drm/amd/pp: Restore power profile mode in auto dpm level on Vega10
Reviewed-by: Alex Deucher

From: amd-gfx on behalf of Rex Zhu
Sent: Friday, February 2, 2018 4:17 AM
To: amd-gfx@lists.freedesktop.org
Cc: Zhu, Rex
Subject: [PATCH] drm/amd/pp: Restore power profile mode in auto dpm level on Vega10

As the auto power profile mode is still not supported on Vega10, just restore the default profile mode in the auto dpm level.

Change-Id: I36359c1d11a48308bc9482f7aafe4c98767ac715
Signed-off-by: Rex Zhu
---
 drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c | 10 +++---
 drivers/gpu/drm/amd/powerplay/inc/hwmgr.h          |  1 +
 2 files changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
index dfaadab..9a9a24e 100644
--- a/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
+++ b/drivers/gpu/drm/amd/powerplay/hwmgr/vega10_hwmgr.c
@@ -761,6 +761,7 @@ static int vega10_hwmgr_backend_init(struct pp_hwmgr *hwmgr)
 	hwmgr->backend = data;

 	hwmgr->power_profile_mode = PP_SMC_POWER_PROFILE_VIDEO;
+	hwmgr->default_power_profile_mode = PP_SMC_POWER_PROFILE_VIDEO;

 	vega10_set_default_registry_data(hwmgr);
@@ -4228,6 +4229,11 @@ static int vega10_dpm_force_dpm_level(struct pp_hwmgr *hwmgr,
 		break;
 	case AMD_DPM_FORCED_LEVEL_AUTO:
 		ret = vega10_unforce_dpm_levels(hwmgr);
+		if (hwmgr->default_power_profile_mode != hwmgr->power_profile_mode) {
+			smum_send_msg_to_smc_with_parameter(hwmgr, PPSMC_MSG_SetWorkloadMask,
+					1 << hwmgr->default_power_profile_mode);
+			hwmgr->power_profile_mode = hwmgr->default_power_profile_mode;
+		}
 		break;
 	case AMD_DPM_FORCED_LEVEL_PROFILE_STANDARD:
 	case AMD_DPM_FORCED_LEVEL_PROFILE_MIN_SCLK:
@@ -4251,6 +4257,7 @@ static int vega10_dpm_force_dpm_level(struct pp_hwmgr *hwmgr,
 	else if (level != AMD_DPM_FORCED_LEVEL_PROFILE_PEAK &&
 			hwmgr->dpm_level == AMD_DPM_FORCED_LEVEL_PROFILE_PEAK)
 		vega10_set_fan_control_mode(hwmgr, AMD_FAN_CTRL_AUTO);
 	}
+
 	return ret;
 }
@@ -5068,9 +5075,6 @@ static int vega10_set_power_profile_mode(struct pp_hwmgr *hwmgr, long *input, ui
 	uint8_t use_rlc_busy;
 	uint8_t min_active_level;

-	if (input[size] == PP_SMC_POWER_PROFILE_AUTO)
-		return 0; /* TO DO auto wattman feature not enabled */
-
 	hwmgr->power_profile_mode = input[size];

 	smum_send_msg_to_smc_with_parameter(hwmgr, PPSMC_MSG_SetWorkloadMask,
diff --git a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
index 376af67..231c9be 100644
--- a/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
+++ b/drivers/gpu/drm/amd/powerplay/inc/hwmgr.h
@@ -758,6 +758,7 @@ struct pp_hwmgr {
 	enum amd_pp_profile_type current_power_profile;
 	bool en_umd_pstate;
 	uint32_t power_profile_mode;
+	uint32_t default_power_profile_mode;
 	uint32_t pstate_sclk;
 	uint32_t pstate_mclk;
 	bool od_enabled;
--
1.9.1
[BUG] Intermittent hang/deadlock when opening browser tab with Vega gpu
Hi, I have an intermittent deadlock/hang in the amdgpu driver. It seems to happen when I open a new tab in qutebrowser(v1.1.1), while I am doing other stuff, like watching youtube through mpv or playing dota 2. It seems to be pretty arbitrary how often it happens. Sometimes it is once a week and sometimes multiple times a day. I have a vega 64. What happens is that the screen freezes but I still hear sound and can ssh in to the box. If I reboot it remotely, I get dropped back to tty and it tries to reboot but it gets stuck on blocking processes(mpv etc) so I have to reset it manually. Repro steps: * run qutebrowser * Do a bunch of other stuff, videos, games etc * Switch back to qutebrowser and hit "Ctrl+t" & be "lucky" This seems to happen on all release candidates for 4.15 and 4.15 itself: 4.15: [ 2211.463021] INFO: task amdgpu_cs:0:1053 blocked for more than 120 seconds. [ 2211.463026] Not tainted 4.15.0-ARCH+ #1 [ 2211.463028] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 2211.463030] amdgpu_cs:0 D0 1053 1051 0x [ 2211.463032] Call Trace: [ 2211.463040] ? __schedule+0x297/0x8b0 [ 2211.463043] schedule+0x2f/0x90 [ 2211.463045] schedule_timeout+0x1fd/0x3a0 [ 2211.463085] ? amdgpu_job_alloc+0x37/0xc0 [amdgpu] [ 2211.463088] dma_fence_default_wait+0x1cc/0x270 [ 2211.463090] ? dma_fence_release+0xa0/0xa0 [ 2211.463092] dma_fence_wait_timeout+0x39/0x110 [ 2211.463119] amdgpu_ctx_wait_prev_fence+0x46/0x80 [amdgpu] [ 2211.463145] amdgpu_cs_ioctl+0x98/0x1ac0 [amdgpu] [ 2211.463149] ? dequeue_entity+0xdc/0x460 [ 2211.463174] ? amdgpu_cs_find_mapping+0xc0/0xc0 [amdgpu] [ 2211.463185] drm_ioctl_kernel+0x5b/0xb0 [drm] [ 2211.463194] drm_ioctl+0x2ae/0x350 [drm] [ 2211.463218] ? amdgpu_cs_find_mapping+0xc0/0xc0 [amdgpu] [ 2211.463239] amdgpu_drm_ioctl+0x49/0x80 [amdgpu] [ 2211.463243] do_vfs_ioctl+0xa4/0x630 [ 2211.463246] ? 
SyS_futex+0x12d/0x180 [ 2211.463248] SyS_ioctl+0x74/0x80 [ 2211.463251] entry_SYSCALL_64_fastpath+0x20/0x83 [ 2211.463254] RIP: 0033:0x7f21b27b6d87 [ 2211.463255] RSP: 002b:7f21a83acab8 EFLAGS: 0246 [ 2334.343027] INFO: task amdgpu_cs:0:1053 blocked for more than 120 seconds. [ 2334.343032] Not tainted 4.15.0-ARCH+ #1 [ 2334.343034] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 2334.343036] amdgpu_cs:0 D0 1053 1051 0x [ 2334.343039] Call Trace: [ 2334.343046] ? __schedule+0x297/0x8b0 [ 2334.343049] schedule+0x2f/0x90 [ 2334.343051] schedule_timeout+0x1fd/0x3a0 [ 2334.343091] ? amdgpu_job_alloc+0x37/0xc0 [amdgpu] [ 2334.343095] dma_fence_default_wait+0x1cc/0x270 [ 2334.343097] ? dma_fence_release+0xa0/0xa0 [ 2334.343098] dma_fence_wait_timeout+0x39/0x110 [ 2334.343125] amdgpu_ctx_wait_prev_fence+0x46/0x80 [amdgpu] [ 2334.343151] amdgpu_cs_ioctl+0x98/0x1ac0 [amdgpu] [ 2334.343155] ? dequeue_entity+0xdc/0x460 [ 2334.343181] ? amdgpu_cs_find_mapping+0xc0/0xc0 [amdgpu] [ 2334.343191] drm_ioctl_kernel+0x5b/0xb0 [drm] [ 2334.343200] drm_ioctl+0x2ae/0x350 [drm] [ 2334.343224] ? amdgpu_cs_find_mapping+0xc0/0xc0 [amdgpu] [ 2334.343245] amdgpu_drm_ioctl+0x49/0x80 [amdgpu] [ 2334.343249] do_vfs_ioctl+0xa4/0x630 [ 2334.343252] ? SyS_futex+0x12d/0x180 [ 2334.343254] SyS_ioctl+0x74/0x80 [ 2334.343257] entry_SYSCALL_64_fastpath+0x20/0x83 [ 2334.343259] RIP: 0033:0x7f21b27b6d87 [ 2334.343260] RSP: 002b:7f21a83acab8 EFLAGS: 0246 [ 2457.222859] INFO: task amdgpu_cs:0:1053 blocked for more than 120 seconds. [ 2457.222862] Not tainted 4.15.0-ARCH+ #1 [ 2457.222863] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 2457.222864] amdgpu_cs:0 D0 1053 1051 0x [ 2457.222866] Call Trace: [ 2457.222872] ? __schedule+0x297/0x8b0 [ 2457.222873] schedule+0x2f/0x90 [ 2457.222875] schedule_timeout+0x1fd/0x3a0 [ 2457.222900] ? amdgpu_job_alloc+0x37/0xc0 [amdgpu] [ 2457.222902] dma_fence_default_wait+0x1cc/0x270 [ 2457.222903] ? 
dma_fence_release+0xa0/0xa0 [ 2457.222904] dma_fence_wait_timeout+0x39/0x110 [ 2457.222918] amdgpu_ctx_wait_prev_fence+0x46/0x80 [amdgpu] [ 2457.222932] amdgpu_cs_ioctl+0x98/0x1ac0 [amdgpu] [ 2457.222935] ? dequeue_entity+0xdc/0x460 [ 2457.222948] ? amdgpu_cs_find_mapping+0xc0/0xc0 [amdgpu] [ 2457.222955] drm_ioctl_kernel+0x5b/0xb0 [drm] [ 2457.222960] drm_ioctl+0x2ae/0x350 [drm] [ 2457.222972] ? amdgpu_cs_find_mapping+0xc0/0xc0 [amdgpu] [ 2457.222983] amdgpu_drm_ioctl+0x49/0x80 [amdgpu] [ 2457.222986] do_vfs_ioctl+0xa4/0x630 [ 2457.222989] ? SyS_futex+0x12d/0x180 [ 2457.222989] SyS_ioctl+0x74/0x80 [ 2457.222991] entry_SYSCALL_64_fastpath+0x20/0x83 [ 2457.222993] RIP: 0033:0x7f21b27b6d87 [ 2457.222993] RSP: 002b:7f21a83acab8 EFLAGS: 0246 [ 2580.102828] INFO: task amdgpu_cs:0:1053 blocked for more than 120 seconds. [ 2580.102831] Not tainted 4.15.0-ARCH+ #1 [ 2580.102832] "echo 0 >
Re: MCLK defaults high on second card
On 02/02/18 12:16 AM, Zhu, Rex wrote:
> Hi Tom,
> The attached patch should be able to fix this issue.
> Best Regards
> Rex

Hi Rex,

Yup, works fine with the patch.

Cheers,
Tom

-----Original Message-----
From: StDenis, Tom
Sent: Friday, February 02, 2018 2:58 AM
To: Deucher, Alexander; amd-gfx mailing list
Cc: Zhu, Rex; Lazare, Jordan; Wentland, Harry
Subject: Re: MCLK defaults high on second card

On 01/02/18 01:54 PM, Deucher, Alexander wrote:
> Does it also work properly with dc disabled? I suspect a bug somewhere in the display info that dc generated when no displays are attached. See smu7_program_display_gap().

Yup, with dc=0 the clock is correctly set.

I'll take a peek (tomorrow) but I suspect it'll probably be more fruitful to hand it over to the display team.

Thanks,
Tom

> Alex

From: StDenis, Tom
Sent: Thursday, February 1, 2018 1:43 PM
To: Deucher, Alexander; amd-gfx mailing list
Cc: Zhu, Rex
Subject: Re: MCLK defaults high on second card

On 01/02/18 01:36 PM, Deucher, Alexander wrote:
> Are there any displays attached to the secondary card with the mclks stuck high? If not, does attaching a display help?

Stealing the display from my primary (CZ in this case) does help, and then putting it back the MCLK remains low.

Tom

> Alex

From: amd-gfx on behalf of Tom St Denis
Sent: Thursday, February 1, 2018 1:14:43 PM
To: amd-gfx mailing list
Cc: Zhu, Rex
Subject: MCLK defaults high on second card

Hi,

I have a setup with a CZ + Polaris10 and on the Polaris10 the SCLK idles low and the MCLK stays in the 2nd state (1750MHz), but on my workstation, which has a single 560 in it, the card idles at 300MHz (with the stock FC27 kernel).

Is there an issue with non-primary cards idling properly? Doing

echo 0 > /sys/devices/pci:00/:00:03.1/:21:00.0/pp_dpm_mclk

doesn't result in low clock rates either.
Tom
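For anyone retracing the manual-clock experiment above: on amdgpu, a state index written to pp_dpm_mclk normally only takes effect after the performance level has been set to manual. A guarded sketch of that sequence (the card0 path is an assumption; substitute your device's node under /sys/class/drm/, and note writes require root):

```shell
#!/bin/sh
# Hypothetical device path -- replace card0 with your card's node.
P=/sys/class/drm/card0/device

if [ -e "$P/pp_dpm_mclk" ]; then
	cat "$P/pp_dpm_mclk"                                  # list mclk states; '*' marks the active one
	echo manual > "$P/power_dpm_force_performance_level"  # allow forcing individual states
	echo 0 > "$P/pp_dpm_mclk"                             # pin the lowest mclk state
	cat "$P/pp_dpm_mclk"                                  # verify the '*' moved to state 0
else
	echo "amdgpu sysfs interface not present on this machine"
fi
```

Writing `auto` back to power_dpm_force_performance_level returns clock selection to the driver, which is the behaviour the thread reports as broken on the secondary card.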
Re: [PATCH] drm/amdgpu/display: fix wrong enum type for ddc_result
Adding the amd-gfx list, please always send amdgpu patches there. On 2018-02-02 02:55 AM, db...@chromium.org wrote: > From: Dominik Behr> > v2: now with fixed result comparison and spelling fixes > > Signed-off-by: Dominik Behr > --- > drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c | 2 +- > drivers/gpu/drm/amd/display/dc/core/dc_link.c | 2 +- > drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c | 4 ++-- > drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c| 4 ++-- > drivers/gpu/drm/amd/display/include/ddc_service_types.h | 2 +- > 5 files changed, 7 insertions(+), 7 deletions(-) > > diff --git a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c > b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c > index 1e8a21b67df7..3b05900d 100644 > --- a/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c > +++ b/drivers/gpu/drm/amd/display/amdgpu_dm/amdgpu_dm_mst_types.c > @@ -130,7 +130,7 @@ static ssize_t dm_dp_aux_transfer(struct drm_dp_aux *aux, >msg->address, >msg->buffer, >msg->size, > - r == DDC_RESULT_SUCESSFULL); > + r == DDC_RESULT_SUCCESSFUL); > #endif > > return msg->size; > diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link.c > b/drivers/gpu/drm/amd/display/dc/core/dc_link.c > index 0023754e034b..3abd0f1a287f 100644 > --- a/drivers/gpu/drm/amd/display/dc/core/dc_link.c > +++ b/drivers/gpu/drm/amd/display/dc/core/dc_link.c > @@ -1176,7 +1176,7 @@ static void dpcd_configure_panel_mode( > _config_set.raw, > sizeof(edp_config_set.raw)); > > - ASSERT(result == DDC_RESULT_SUCESSFULL); > + ASSERT(result == DDC_RESULT_SUCCESSFUL); > } > } > dm_logger_write(link->ctx->logger, LOG_DETECTION_DP_CAPS, > diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c > b/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c > index d5294798b0a5..6b69b339dba2 100644 > --- a/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c > +++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_ddc.c > @@ -661,7 +661,7 @@ enum ddc_result 
dal_ddc_service_read_dpcd_data( > ddc->ctx->i2caux, > ddc->ddc_pin, > )) > - return DDC_RESULT_SUCESSFULL; > + return DDC_RESULT_SUCCESSFUL; > > return DDC_RESULT_FAILED_OPERATION; > } > @@ -698,7 +698,7 @@ enum ddc_result dal_ddc_service_write_dpcd_data( > ddc->ctx->i2caux, > ddc->ddc_pin, > )) > - return DDC_RESULT_SUCESSFULL; > + return DDC_RESULT_SUCCESSFUL; > > return DDC_RESULT_FAILED_OPERATION; > } > diff --git a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c > b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c > index 33d91e4474ea..cc067d04505d 100644 > --- a/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c > +++ b/drivers/gpu/drm/amd/display/dc/core/dc_link_dp.c > @@ -1926,7 +1926,7 @@ bool dc_link_handle_hpd_rx_irq(struct dc_link *link, > union hpd_irq_data *out_hpd > { > union hpd_irq_data hpd_irq_dpcd_data = 0; > union device_service_irq device_service_clear = { { 0 } }; > - enum dc_status result = DDC_RESULT_UNKNOWN; > + enum ddc_result result = DDC_RESULT_UNKNOWN; > bool status = false; > /* For use cases related to down stream connection status change, >* PSR and device auto test, refer to function handle_sst_hpd_irq > @@ -1946,7 +1946,7 @@ bool dc_link_handle_hpd_rx_irq(struct dc_link *link, > union hpd_irq_data *out_hpd > if (out_hpd_irq_dpcd_data) > *out_hpd_irq_dpcd_data = hpd_irq_dpcd_data; > > - if (result != DC_OK) { > + if (result != DDC_RESULT_SUCCESSFUL) { > dm_logger_write(link->ctx->logger, LOG_HW_HPD_IRQ, > "%s: DPCD read failed to obtain irq data\n", > __func__); > diff --git a/drivers/gpu/drm/amd/display/include/ddc_service_types.h > b/drivers/gpu/drm/amd/display/include/ddc_service_types.h > index 019e7a095ea1..f3bf749b3636 100644 > --- a/drivers/gpu/drm/amd/display/include/ddc_service_types.h > +++ b/drivers/gpu/drm/amd/display/include/ddc_service_types.h > @@ -32,7 +32,7 @@ > > enum ddc_result { > DDC_RESULT_UNKNOWN = 0, > - DDC_RESULT_SUCESSFULL, > + DDC_RESULT_SUCCESSFUL, > DDC_RESULT_FAILED_CHANNEL_BUSY, > 
DDC_RESULT_FAILED_TIMEOUT,
> DDC_RESULT_FAILED_PROTOCOL_ERROR,

--
Earthling Michel Dänzer | http://www.amd.com
Libre software enthusiast | Mesa and X developer