Re: [RFC PATCH] powercap: Add Hygon Fam18h RAPL support
On 2021/3/1 22:20, Rafael J. Wysocki wrote:
> On Mon, Mar 1, 2021 at 3:18 AM Wen Pu wrote:
>>
>> On 2021/2/28 23:42, Srinivas Pandruvada wrote:
>>> On Thu, 2021-02-25 at 21:01 +0800, Pu Wen wrote:
>>>> Enable Hygon Fam18h RAPL support for the power capping framework.
>>>>
>>> If this patch is tested and works on this processor, not sure why this
>>> is RFC?
>>
>> This patch is tested and works on Hygon processors. The 'RFC' was
>> automatically generated by my script ;)
>
> Well, care to resend as non-RFC, then?

OK, already resent. Thanks!

--
Regards,
Pu Wen
Re: [RFC PATCH] powercap: Add Hygon Fam18h RAPL support
On 2021/2/28 23:42, Srinivas Pandruvada wrote:
> On Thu, 2021-02-25 at 21:01 +0800, Pu Wen wrote:
>> Enable Hygon Fam18h RAPL support for the power capping framework.
>>
> If this patch is tested and works on this processor, not sure why this
> is RFC?

This patch is tested and works on Hygon processors. The 'RFC' was
automatically generated by my script ;)

Thanks,
Pu Wen

> Thanks,
> Srinivas
>
>> Signed-off-by: Pu Wen
>> ---
>>  drivers/powercap/intel_rapl_common.c | 1 +
>>  drivers/powercap/intel_rapl_msr.c    | 1 +
>>  2 files changed, 2 insertions(+)
>>
>> diff --git a/drivers/powercap/intel_rapl_common.c b/drivers/powercap/intel_rapl_common.c
>> index fdda2a737186..73cf68af9770 100644
>> --- a/drivers/powercap/intel_rapl_common.c
>> +++ b/drivers/powercap/intel_rapl_common.c
>> @@ -1069,6 +1069,7 @@ static const struct x86_cpu_id rapl_ids[] __initconst = {
>>
>>  	X86_MATCH_VENDOR_FAM(AMD, 0x17, &rapl_defaults_amd),
>>  	X86_MATCH_VENDOR_FAM(AMD, 0x19, &rapl_defaults_amd),
>> +	X86_MATCH_VENDOR_FAM(HYGON, 0x18, &rapl_defaults_amd),
>>  	{}
>>  };
>>  MODULE_DEVICE_TABLE(x86cpu, rapl_ids);
>> diff --git a/drivers/powercap/intel_rapl_msr.c b/drivers/powercap/intel_rapl_msr.c
>> index 78213d4b5b16..cc3b22881bfe 100644
>> --- a/drivers/powercap/intel_rapl_msr.c
>> +++ b/drivers/powercap/intel_rapl_msr.c
>> @@ -150,6 +150,7 @@ static int rapl_msr_probe(struct platform_device *pdev)
>>  	case X86_VENDOR_INTEL:
>>  		rapl_msr_priv = &rapl_msr_priv_intel;
>>  		break;
>> +	case X86_VENDOR_HYGON:
>>  	case X86_VENDOR_AMD:
>>  		rapl_msr_priv = &rapl_msr_priv_amd;
>>  		break;
>
Re: [git pull] drm next pull for 5.10-rc1
On 2020/10/15 9:33, Dave Airlie wrote:
> drm/vram-helper: stop using TTM placement flags

This commit (7053e0eab473) produces a call trace for me, as below:

[ 64.782340] WARNING: CPU: 51 PID: 1964 at drivers/gpu/drm/drm_gem_vram_helper.c:284 drm_gem_vram_offset+0x35/0x40 [drm_vram_helper]
[ 64.782411] CPU: 51 PID: 1964 Comm: Xorg Not tainted 5.10.0-rc3 #12
[ 64.782413] Hardware name: To be filled.
[ 64.782419] RIP: 0010:drm_gem_vram_offset+0x35/0x40 [drm_vram_helper]
[ 64.782424] Code: 00 48 89 e5 85 c0 74 17 48 83 bf 78 01 00 00 00 74 18 48 8b 87 80 01 00 00 5d 48 c1 e0 0c c3 0f 0b 48 c7 c0 ed ff ff ff 5d c3 <0f> 0b 31 c0 5d c3 0f 1f 44 00 00 0f 1f 44 00 00 55 48 8b 87 18 06
[ 64.782427] RSP: 0018:a9128909fa68 EFLAGS: 00010246
[ 64.782431] RAX: 0002 RBX: 95a5c25e1ec0 RCX: c02b6600
[ 64.782433] RDX: 959e49824000 RSI: 95a5c25e0b40 RDI: 959e4b1c2c00
[ 64.782434] RBP: a9128909fa68 R08: 0040 R09: 95a9c5dcb688
[ 64.782436] R10: R11: 0001 R12: 959e49824000
[ 64.782437] R13: R14: R15: 95a5c5c56f00
[ 64.782440] FS: 7f485d466a80() GS:95a9afcc() knlGS:
[ 64.782442] CS: 0010 DS: ES: CR0: 80050033
[ 64.782444] CR2: 7f485e202000 CR3: 000c82a0e000 CR4: 003506e0
[ 64.782446] Call Trace:
[ 64.782455]  ast_cursor_page_flip+0x22/0x100 [ast]
[ 64.782460]  ast_cursor_plane_helper_atomic_update+0x46/0x70 [ast]
[ 64.782477]  drm_atomic_helper_commit_planes+0xbd/0x220 [drm_kms_helper]
[ 64.782493]  drm_atomic_helper_commit_tail_rpm+0x3a/0x70 [drm_kms_helper]
[ 64.782507]  commit_tail+0x99/0x130 [drm_kms_helper]
[ 64.782521]  drm_atomic_helper_commit+0x123/0x150 [drm_kms_helper]
[ 64.782551]  drm_atomic_commit+0x4a/0x50 [drm]
[ 64.782565]  drm_atomic_helper_update_plane+0xe7/0x140 [drm_kms_helper]
[ 64.782592]  __setplane_atomic+0xcc/0x110 [drm]
[ 64.782619]  drm_mode_cursor_universal+0x13e/0x260 [drm]
[ 64.782647]  drm_mode_cursor_common+0xef/0x220 [drm]
[ 64.782654]  ? tomoyo_path_number_perm+0x6f/0x200
[ 64.782680]  ? drm_mode_cursor_ioctl+0x60/0x60 [drm]
[ 64.782706]  drm_mode_cursor2_ioctl+0xe/0x10 [drm]
[ 64.782727]  drm_ioctl_kernel+0xae/0xf0 [drm]
[ 64.782749]  drm_ioctl+0x241/0x3f0 [drm]
[ 64.782774]  ? drm_mode_cursor_ioctl+0x60/0x60 [drm]
[ 64.782781]  ? tomoyo_file_ioctl+0x19/0x20
[ 64.782787]  __x64_sys_ioctl+0x91/0xc0
[ 64.782792]  do_syscall_64+0x38/0x90
[ 64.782797]  entry_SYSCALL_64_after_hwframe+0x44/0xa9
[ 64.782800] RIP: 0033:0x7f485d7c637b
[ 64.782804] Code: 0f 1e fa 48 8b 05 15 3b 0d 00 64 c7 00 26 00 00 00 48 c7 c0 ff ff ff ff c3 66 0f 1f 44 00 00 f3 0f 1e fa b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 8b 0d e5 3a 0d 00 f7 d8 64 89 01 48
[ 64.782805] RSP: 002b:7fff78682a28 EFLAGS: 0246 ORIG_RAX: 0010
[ 64.782808] RAX: ffda RBX: 7fff78682a60 RCX: 7f485d7c637b
[ 64.782810] RDX: 7fff78682a60 RSI: c02464bb RDI: 000c
[ 64.782811] RBP: c02464bb R08: 0040 R09: 0004
[ 64.782813] R10: 0002 R11: 0246 R12: 558647745e40
[ 64.782814] R13: 000c R14: 0002 R15: 02af
[ 64.782820] CPU: 51 PID: 1964 Comm: Xorg Not tainted 5.10.0-rc3 #12
[ 64.782821] Hardware name: To be filled.
[ 64.782822] Call Trace:
[ 64.782828]  dump_stack+0x74/0x92
[ 64.782832]  ? drm_gem_vram_offset+0x35/0x40 [drm_vram_helper]
[ 64.782836]  __warn.cold+0x24/0x3f
[ 64.782840]  ? drm_gem_vram_offset+0x35/0x40 [drm_vram_helper]
[ 64.782844]  report_bug+0xd6/0x100
[ 64.782847]  handle_bug+0x39/0x80
[ 64.782850]  exc_invalid_op+0x19/0x70
[ 64.782853]  asm_exc_invalid_op+0x12/0x20
..

I hacked up the patch and found that this hunk in particular introduced the call trace:

@@ -135,20 +135,23 @@ static void ttm_buffer_object_destroy(struct ttm_buffer_object *bo)
..
+	if (pl_flag & DRM_GEM_VRAM_PL_FLAG_TOPDOWN)
+		pl_flag = TTM_PL_FLAG_TOPDOWN;

It seems that these two lines lead to gbo->placements[c].mem_type being forcibly set to TTM_PL_SYSTEM in the subsequent hunks, which caused the problem, even though pl_flag is DRM_GEM_VRAM_PL_FLAG_VRAM & DRM_GEM_VRAM_PL_FLAG_TOPDOWN.
If I comment out these two lines, there will be no call trace any more. -- Regards, Pu Wen