On Fri, Feb 25, 2011 at 12:42 PM, Enrico Granata <[email protected]> wrote:
>
>
> I modified the source code to show exactly how many clock ticks it is taking
> for each call. It seems that the behavior hinted at by Mauro Romano Trajber is
> actually there:
> [enrico@espresso ~]$ ./syscallperf 15
> 4925
> 1190
> 942
> 942
> 935
> 942
> 636
> 577
> 627
> 621
> 580
> 591
> 565
> 580
> 565
> I am starting to wonder if this depends on the syscall itself OR on some
> call optimization... any gcc experts around?
From the getpid(2) manpage:
"Since glibc version 2.3.4, the glibc wrapper function for getpid()
caches PIDs, so as to avoid additional system calls when a process
calls getpid() repeatedly."
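
I don't have the attached syscallperf.c in front of me, so purely as a
hedged sketch of the kind of test being discussed (the rdtsc helper, the
iteration count, and the file name below are my own assumptions, not the
original code), something like this compares the glibc wrapper with a raw
syscall(SYS_getpid) on an x86 box:

#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <unistd.h>
#include <sys/syscall.h>

/* Read the CPU's timestamp counter (x86/x86_64 only). */
static inline uint64_t rdtsc(void)
{
        uint32_t lo, hi;
        __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
        return ((uint64_t)hi << 32) | lo;
}

int main(int argc, char *argv[])
{
        int i, n = (argc > 1) ? atoi(argv[1]) : 15;
        uint64_t before, after;

        /* glibc wrapper: after the first call the cached PID is returned
         * without entering the kernel (glibc >= 2.3.4, per the manpage). */
        printf("glibc getpid():\n");
        for (i = 0; i < n; i++) {
                before = rdtsc();
                getpid();
                after = rdtsc();
                printf("%llu\n", (unsigned long long)(after - before));
        }

        /* Raw syscall: every iteration traps into the kernel. */
        printf("syscall(SYS_getpid):\n");
        for (i = 0; i < n; i++) {
                before = rdtsc();
                syscall(SYS_getpid);
                after = rdtsc();
                printf("%llu\n", (unsigned long long)(after - before));
        }

        return 0;
}

Build with something like "gcc -O2 -o getpid_sketch getpid_sketch.c" and
pass the iteration count as the first argument. If the expensive-first-call
pattern also shows up for the raw syscall, the glibc cache alone can't
explain it; warming up of the instruction/data caches and branch predictor
on the first trip through the syscall path is the more likely suspect,
which would fit the numbers Mauro and Enrico are seeing.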
> Enrico Granata
> Computer Science & Engineering Department (EBU3B) - Room 3240
> office phone 858 534 9914
> University of California, San Diego
> On Feb 25, 2011, at 12:30 PM, Mauro Romano Trajber wrote:
>
> Sure, the code is attached.
>
> On Fri, Feb 25, 2011 at 5:15 PM, Daniel Baluta <[email protected]>
> wrote:
>>
>> On Fri, Feb 25, 2011 at 8:22 PM, Mauro Romano Trajber <[email protected]>
>> wrote:
>> > Thanks Enrico and Daniel, you're right. glibc was caching getpid(), but
>> > that is not the root cause of this behavior.
>> > Going further, I decided to call getpid() without glibc, using
>> > syscall(SYS_getpid) to test this behavior, and it happened again.
>> > Calling it once, the test consumes about 7k CPU cycles, while 10 calls
>> > consume about 10k CPU cycles.
>> > Any ideas?
>>
>> Can you post a pointer to your code and information about how you got
>> these numbers?
>>
>> thanks,
>> Daniel.
>
> <syscallperf.c>
>
--
Jim Kukunas
[email protected]
http://member.acm.org/~treak007
_______________________________________________
Kernelnewbies mailing list
[email protected]
http://lists.kernelnewbies.org/mailman/listinfo/kernelnewbies