Hi Stephane,

Well, I had never run these test cases before now...

When I run either rtop or syst, I get an error because Perfmon2 thinks
there are other contexts running when in fact there aren't. Something
isn't getting cleaned up: I can increase the number of 'conflicting
sessions' simply by running 'self' repeatedly, and every run bumps the
count by one.

For example:

sicily ~ # dmesg | grep conflict
perfmon: pfm_reserve_session.250: CPU0 [2549]: system wide imppossible,
8 conflictingtask_sessions
sicily ~ # ~phil/libpfm/examples/self
[CP0_25_0(pmc0)=0x18 event_mask=0x0 usr=1 os=0 sup=0 exl=0 int=1] CYCLES
[CP0_25_0(pmd0)]
[CP0_25_2(pmc1)=0x118 event_mask=0x8 usr=1 os=0 sup=0 exl=0 int=1]
INSNS_COMPLETE
[CP0_25_2(pmd1)]
PMD0            21139069063 CYCLES
PMD1            15000092402 INSNS_COMPLETE
sicily ~ # ~phil/libpfm/examples/syst
pfm_load_context error errno 16
sicily ~ # dmesg | grep conflict
perfmon: pfm_reserve_session.250: CPU0 [2563]: system wide imppossible,
9 conflictingtask_sessions
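
For what it's worth, errno 16 from pfm_load_context is EBUSY, which
lines up with the rejected reservation in the dmesg output. To make sure
we're talking about the same thing, here is a rough sketch of the
reserve/release accounting I'd expect; the names on the reserve side
come from the trace, the release side is just my guess at the
counterpart and not the actual perfmon code. If the close path for a
per-thread context never gives its session back, the count can only
grow and every later system-wide load gets refused:

/*
 * Not the real perfmon code, just a sketch of the accounting I would
 * expect, using the names that show up in the dmesg trace.  Every path
 * that bumps task_sessions in pfm_reserve_session() needs a matching
 * decrement on context teardown; if the teardown on this platform skips
 * the release, the counter only grows and later system-wide loads fail.
 */
#include <errno.h>

struct pfm_sessions_sketch {
        unsigned int task_sessions;     /* per-thread contexts in use  */
        unsigned int sys_sessions;      /* system-wide contexts in use */
};

static struct pfm_sessions_sketch sessions;  /* lock-protected in real code */

static int reserve_session_sketch(int syswide)
{
        if (syswide) {
                /* "system wide impossible, N conflicting task_sessions" */
                if (sessions.task_sessions)
                        return -EBUSY;
                sessions.sys_sessions++;
        } else {
                if (sessions.sys_sessions)
                        return -EBUSY;
                sessions.task_sessions++;
        }
        return 0;
}

static void release_session_sketch(int syswide)
{
        /*
         * This is the step I suspect never runs (or runs with the wrong
         * flag) when 'self' exits, so task_sessions never comes back down.
         */
        if (syswide)
                sessions.sys_sessions--;
        else
                sessions.task_sessions--;
}

So my suspicion is that the close/unload path for a self-monitoring
context isn't reaching the release step on this port.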

Any ideas? Let me know where it should be fixed, because I'll fix it in
my code base as well.

Here is a more complete log from one of the earlier failures.

Phil


Nov 29 11:34:03 localhost perfmon: pfm_alloc_fd.739: CPU0 [2501]: new
inode ino=3890 @98000000014d3f30
Nov 29 11:34:03 localhost perfmon: pfm_find_set.499: CPU0 [2501]:
looking for set=0
Nov 29 11:34:03 localhost perfmon: pfm_find_set.571: CPU0 [2501]:
set_id=0 size=2584 view=980000000fb3df70 remap=0 mmap_offs=0
Nov 29 11:34:03 localhost perfmon: pfm_init_evtset.468: CPU0 [2501]:
set0 pmc0=0x10
Nov 29 11:34:03 localhost perfmon: pfm_init_evtset.468: CPU0 [2501]:
set0 pmc1=0x10
Nov 29 11:34:03 localhost perfmon: __pfm_create_context.1625: CPU0
[2501]: ctx=980000000f50e000 flags=0x2 system=1 notify_block=0 no_msg=0
use_fmt=0 remap=0 ctx_fd=3 mode=0
Nov 29 11:34:03 localhost perfmon: pfm_check_task_state.192: CPU0
[2501]: state=1 check_mask=0x0
Nov 29 11:34:03 localhost perfmon: __pfm_getinfo_evtsets.676: CPU0
[2501]: set0 flags=0x0 eff_usec=0 runs=0
Nov 29 11:34:03 localhost perfmon: pfm_check_task_state.192: CPU0
[2501]: state=1 check_mask=0x1
Nov 29 11:34:03 localhost perfmon: pfm_find_set.499: CPU0 [2501]:
looking for set=0
Nov 29 11:34:03 localhost perfmon: __pfm_write_pmcs.465: CPU0 [2501]:
set0 pmc0=0x19 a_pmu=0 u_pmcs=0x1 nu_pmcs=1
Nov 29 11:34:03 localhost perfmon: __pfm_write_pmcs.465: CPU0 [2501]:
set0 pmc1=0x119 a_pmu=0 u_pmcs=0x3 nu_pmcs=2
Nov 29 11:34:03 localhost perfmon: pfm_check_task_state.192: CPU0
[2501]: state=1 check_mask=0x1
Nov 29 11:34:03 localhost perfmon: pfm_find_set.499: CPU0 [2501]:
looking for set=0
Nov 29 11:34:03 localhost perfmon: __pfm_write_pmds.299: CPU0 [2501]:
set0 pmd0=0x0 flags=0x0 a_pmu=0 hw_pmd=0x0 ctx_pmd=0x0 s_reset=0x0
l_reset=0x0 u_pmds=0x1 nu_pmds=1 s_pmds=0x0 r_pmds=0x0 o_pmds=0x0
o_thres=0 compat=0 eventid=0
Nov 29 11:34:03 localhost perfmon: __pfm_write_pmds.299: CPU0 [2501]:
set0 pmd1=0x0 flags=0x0 a_pmu=0 hw_pmd=0x0 ctx_pmd=0x0 s_reset=0x0
l_reset=0x0 u_pmds=0x3 nu_pmds=2 s_pmds=0x0 r_pmds=0x0 o_pmds=0x0
o_thres=0 compat=0 eventid=0
Nov 29 11:34:03 localhost perfmon: pfm_check_task_state.192: CPU0
[2501]: state=1 check_mask=0x2
Nov 29 11:34:03 localhost perfmon: pfm_find_set.499: CPU0 [2501]:
looking for set=0
Nov 29 11:34:03 localhost perfmon: pfm_prepare_sets.112: CPU0 [2501]:
set0 sw_next=0
Nov 29 11:34:03 localhost perfmon: __pfm_load_context.1245: CPU0 [2501]:
load_pid=0 set=0 set_flags=0x0
Nov 29 11:34:03 localhost perfmon: pfm_reserve_session.241: CPU0 [2501]:
in sys_sessions=0 task_sessions=2 syswide=1 cpu=0
Nov 29 11:34:03 localhost perfmon: pfm_reserve_session.250: CPU0 [2501]:
system wide imppossible, 2 conflictingtask_sessions
Nov 29 11:34:03 localhost
Nov 29 11:34:03 localhost perfmon: __pfm_close.531: CPU0 [2501]: state=1
Nov 29 11:34:03 localhost perfmon: pfm_context_free.116: CPU0 [2501]:
free ctx @980000000f50e000

