On Fri, Nov 12, 2010 at 2:22 PM, heechul Yun <heec...@illinois.edu> wrote:
> It was because of a bug in kernel/perf_event.c, shown below:
>
> static int perf_event_period(struct perf_event *event, u64 __user *arg)
> {
>        ...
>        size = copy_from_user(&value, arg, sizeof(value));
>        if (size != sizeof(value))  /* <--- bug: copy_from_user() returns the
>                                     * number of bytes NOT copied, i.e. 0 on
>                                     * success, so this check always fails.
>                                     * It should be: if (size) return -EFAULT; */
>                return -EFAULT;
>        ...
>

Ok, I will post a fix for this on LKML, unless you want to do it.
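
For reference, here is a minimal sketch of the corrected check, mirroring
the snippet quoted above (the key point is that copy_from_user() returns
the number of bytes it could NOT copy, so 0 means success):

        size = copy_from_user(&value, arg, sizeof(value));
        /* a non-zero return means the read from user space failed */
        if (size)
                return -EFAULT;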

> I found that this bug was present in all kernel versions until it was
> finally fixed in 2.6.36.
> Heechul
>
> On Thu, Nov 11, 2010 at 9:23 PM, heechul Yun <heec...@illinois.edu> wrote:
>>
>> Hi,
>> I wanted to adjust the sampling period using the PERF_EVENT_IOC_PERIOD
>> ioctl, so I modified the notify_group example to change the period at
>> runtime as follows:
>> diff --git a/perf_examples/notify_group.c b/perf_examples/notify_group.c
>> index 76869b0..d8ff5e1 100644
>> --- a/perf_examples/notify_group.c
>> +++ b/perf_examples/notify_group.c
>> @@ -54,6 +54,7 @@ sigio_handler(int n, struct siginfo *info, struct sigcontext *sc)
>>        struct perf_event_mmap_page *hdr;
>>        struct perf_event_header ehdr;
>>        uint64_t ip;
>> +       uint64_t new_period;
>>        int id, ret;
>>
>>        id = perf_fd2event(fds, num_fds, info->si_fd);
>> @@ -87,6 +88,11 @@ skip:
>>        /*
>>         * rearm the counter for one more shot
>>         */
>> +       new_period = SMPL_PERIOD * 2;
>> +       ret = ioctl(info->si_fd, PERF_EVENT_IOC_PERIOD, new_period);
>> +       if (ret == -1)
>> +               err(1, "cannot set");
>> +
>>        ret = ioctl(info->si_fd, PERF_EVENT_IOC_REFRESH, 1);
>>        if (ret == -1)
>>                err(1, "cannot refresh");
>> It failed, however, as shown below:
>> $ ./notify_group
>> i=0 disabled=0
>> i=1 disabled=1
>> i=2 disabled=1
>> Notification 1: 0x8048f63 fd=3 PERF_COUNT_HW_CPU_CYCLES
>> notify_group: cannot set: Invalid argument
>>
>> Any idea?
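>>
>> For reference, a minimal sketch of the corrected call: the third ioctl
>> argument must be a pointer to the u64 period, because the kernel reads it
>> with copy_from_user(); si_fd and SMPL_PERIOD are as in the diff above:
>>
>>        uint64_t new_period = SMPL_PERIOD * 2;
>>
>>        /* pass the address of the new period, not its value */
>>        ret = ioctl(info->si_fd, PERF_EVENT_IOC_PERIOD, &new_period);
>>        if (ret == -1)
>>                err(1, "cannot set period");
>>
>> Even in this form, kernels before 2.6.36 fail with EFAULT because of the
>> broken copy_from_user() check discussed at the top of this thread.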
>>
>> BTW, I found an error in libpfm4's perf_event.h which prevents the code
>> from compiling. The following diff is the fix:
>> diff --git a/include/perfmon/perf_event.h b/include/perfmon/perf_event.h
>> index cfddef0..7f2889e 100644
>> --- a/include/perfmon/perf_event.h
>> +++ b/include/perfmon/perf_event.h
>> @@ -13,7 +13,7 @@
>>  */
>>  #ifndef _LINUX_PERF_EVENT_H
>>  #define _LINUX_PERF_EVENT_H
>> -
>> +#include <linux/types.h>
>>  #include <sys/types.h>
>>  #include <inttypes.h>
>>  #ifndef PR_TASK_PERF_EVENTS_DISABLE
>> @@ -236,7 +236,7 @@ typedef struct perf_event_attr {
>>  #define PERF_EVENT_IOC_DISABLE         _IO ('$', 1)
>>  #define PERF_EVENT_IOC_REFRESH         _IO ('$', 2)
>>  #define PERF_EVENT_IOC_RESET           _IO ('$', 3)
>> -#define PERF_EVENT_IOC_PERIOD          _IOW('$', 4, u64)
>> +#define PERF_EVENT_IOC_PERIOD          _IOW('$', 4, __u64)
>>  #define PERF_EVENT_IOC_SET_OUTPUT      _IO ('$', 5)
>>  #define PERF_EVENT_IOC_SET_FILTER      _IOW('$', 6, char *)
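>>
>> As a sanity check, a minimal Linux-only program (using _IOW() and
>> _IOC_SIZE() from the ioctl headers) can confirm the encoding after this
>> change; u64 is a kernel-internal typedef, which is why the original line
>> does not compile in user space, while __u64 comes from <linux/types.h>:
>>
>>        #include <stdio.h>
>>        #include <sys/ioctl.h>
>>        #include <linux/types.h>
>>
>>        int main(void)
>>        {
>>                /* same encoding as PERF_EVENT_IOC_PERIOD above */
>>                unsigned int req = _IOW('$', 4, __u64);
>>
>>                /* _IOW() folds sizeof(__u64) into the request code,
>>                 * so this should print an argument size of 8 */
>>                printf("req=0x%x arg size=%u\n", req, _IOC_SIZE(req));
>>                return 0;
>>        }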
>>
>>
>> - Heechul