Quoting from the Linux man page:

  The clock() function returns an approximation of processor time used
  by the program.

What you want is `man 2 time`. time() returns wall-clock seconds since
the Epoch, so the difference of two calls gives elapsed real time.
Not sure of its availability on Windows though. (As for why clock()
differs: Microsoft's C runtime implements clock() as wall time elapsed
since process start, a documented deviation from the ISO C behavior
that glibc follows.)

Owen

On Thu, May 18, 2006 at 01:22:17PM -0400, Randall Barlow wrote:
> Hi,
> 
>    I'm developing a C++ program that times itself, and needs to run on 
> Windows (yeah, I know) as well as Linux.  Now, this isn't a big problem, 
> but it makes me curious.  I use the clock() function from time.h to get 
> the time at the beginning of my simulation and at the end, and then use 
> the difference to measure how long the simulation ran.  I'm noticing a 
> difference in behavior between Windows and Linux.
> 
>    In Windows, I will end up getting how much real time the simulation 
> took (i.e., if I suspend the machine for half a day and then let the 
> simulation finish, that half a day would be in the result).  However, in 
> Linux, it seems to give me how much processor time the simulation took.  
> Can anybody explain why the same function from the same header file 
> behaves so differently?
> 
> Randy
-- 
TriLUG mailing list        : http://www.trilug.org/mailman/listinfo/trilug
TriLUG Organizational FAQ  : http://trilug.org/faq/
TriLUG Member Services FAQ : http://members.trilug.org/services_faq/