On Thu, Aug 10, 2017 at 02:48:52PM -0400, Vince Weaver wrote:
> 
> So I was working on my perf_event_tests on ARM/ARM64 (the end goal was to 
> get ARM64 rdpmc support working, but apparently those patches never made 
> it upstream?)

IIUC by 'rdpmc' you mean direct userspace counter access?

Patches for that never made it upstream. Last I saw, there were no
patches in a suitable state for review.

There are also difficulties (e.g. big.LITTLE systems where the number of
counters can differ across CPUs) which have yet to be solved.

> anyway one test was failing due to an x86/arm difference, which is 
> possibly only tangentially perf related.
> 
> On x86 you can mmap() a perf_event_open() file descriptor multiple times 
> and it works.
> 
> On ARM/ARM64 you can only mmap() it once, any other attempts fail.

Interesting. Which platform(s) are you testing on, with which kernel
version(s)?

> Is this expected behavior?

I'm not sure, but it sounds surprising.

> You can run the
>       tests/record_sample/mmap_multiple
> test in the current git of my perf_event_tests testsuite for a testcase.

This appears to work for me:

nanook@ribbensteg:~/src/perf_event_tests/tests/record_sample$ ./mmap_multiple 
Trying to mmap same perf_event fd multiple times...        PASSED

nanook@ribbensteg:~/src/perf_event_tests/tests/record_sample$ git log --oneline HEAD~1..
c82c4dd tests: huge_grou_start: add info that this was fixed in Linux 4.3
nanook@ribbensteg:~/src/perf_event_tests/tests/record_sample$ uname -a
Linux ribbensteg 4.13.0-rc4-00010-g2ce1491 #229 SMP PREEMPT Thu Aug 10 17:06:56 BST 2017 aarch64 aarch64 aarch64 GNU/Linux

nanook@ribbensteg:~/src/perf_event_tests/tests/record_sample$ strace ./mmap_multiple 
execve("./mmap_multiple", ["./mmap_multiple"], [/* 18 vars */]) = 0
brk(0)                                  = 0x2d9aa000
faccessat(AT_FDCWD, "/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xffff9d10e000
faccessat(AT_FDCWD, "/etc/ld.so.preload", R_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=42361, ...}) = 0
mmap(NULL, 42361, PROT_READ, MAP_PRIVATE, 3, 0) = 0xffff9d103000
close(3)                                = 0
faccessat(AT_FDCWD, "/etc/ld.so.nohwcap", F_OK) = -1 ENOENT (No such file or directory)
openat(AT_FDCWD, "/lib/aarch64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\0\0\0\0\0\0\0\0\0\3\0\267\0\1\0\0\0(\17\2\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=1283776, ...}) = 0
mmap(NULL, 1356664, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xffff9cf9b000
mprotect(0xffff9d0ce000, 61440, PROT_NONE) = 0
mmap(0xffff9d0dd000, 24576, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x132000) = 0xffff9d0dd000
mmap(0xffff9d0e3000, 13176, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0xffff9d0e3000
close(3)                                = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xffff9cf9a000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xffff9cf99000
mprotect(0xffff9d0dd000, 16384, PROT_READ) = 0
mprotect(0x412000, 4096, PROT_READ)     = 0
mprotect(0xffff9d112000, 4096, PROT_READ) = 0
munmap(0xffff9d103000, 42361)           = 0
perf_event_open(0xfffffbff0310, 0, -1, -1, 0) = 3
mmap(NULL, 36864, PROT_READ|PROT_WRITE, MAP_SHARED, 3, 0) = 0xffff9d105000
mmap(NULL, 36864, PROT_READ|PROT_WRITE, MAP_SHARED, 3, 0) = 0xffff9cf90000
ioctl(1, SNDCTL_TMR_TIMEBASE or SNDRV_TIMER_IOCTL_NEXT_DEVICE or TCGETS, {B38400 opost isig icanon echo ...}) = 0
fstat(1, {st_mode=S_IFCHR|0620, st_rdev=makedev(136, 1), ...}) = 0
mmap(NULL, 65536, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xffff9cf80000
write(1, "Trying to mmap same perf_event f"..., 77Trying to mmap same perf_event fd multiple times...        PASSED
) = 77
exit_group(0)                           = ?
+++ exited with 0 +++

Thanks,
Mark.
