On Wed, Apr 1, 2015 at 2:47 AM, Christian Peel <[email protected]> wrote:
> Thanks for your comments.
>
> I am just passively collecting metrics about the time that certain commands
> take. One of the experiments I'm doing is to measure the time it takes to
> multiply two 128x128 matrices in Julia. I check the time using the user time
> from uv_rusage, and return the mean over ten runs.  This process is done
> once an hour.  The times are somewhat stable but occasionally jump up above
> the nominal time, even though the code doesn't change.   The occasions when
> the times are high seem to be correlated with the load in the single-core
> container on the 32-core system that I'm trying to use.
>
> You noted that if the system is overcommitted that something would have to
> give; my thought was that any time spent on other processes should not be in
> the *user* time (which I'm using) but in the system time in the uv_rusage
> return structure.  Does this match your understanding?

The ru_stime field represents kernel CPU time spent on behalf of the
user process.  For example, if you read 100 MB of data from
/dev/urandom, the CPU time for generating that data will be attributed
to your process and you will see a jump in ru_stime.*

* In theory.  In practice, the operating system may put the process to
sleep and farm out the work to a kernel thread.

> I guess the container-based system could be messing things up; I'll
> investigate that further.   I emailed this list because I was worried that I
> was using the wrong libuv command.
>
> Thank you for your recommendation and other info.

No problem.  Happy to help.

-- 
You received this message because you are subscribed to the Google Groups 
"libuv" group.