Thanks for your comments. I am just passively collecting metrics about the time that certain commands take. One of the experiments I'm running measures the time it takes to multiply two 128x128 matrices in Julia: I read the user time from uv_getrusage before and after the multiply, and report the mean over ten runs. This process runs once an hour. The times are fairly stable but occasionally jump well above the nominal time, even though the code doesn't change. The occasions when the times are high seem to be correlated with the load on the 32-core system hosting the single-core container I'm using.
You noted that if the system is overcommitted, something would have to give; my thought was that any time spent on other processes should not show up in the *user* time (which I'm using) but rather in the system time in the uv_rusage_t structure that uv_getrusage returns. Does this match your understanding? I guess the container-based system could be messing things up; I'll investigate that further. I emailed this list because I was worried that I was using the wrong libuv command. Thank you for your recommendation and other info.

On Tue, Mar 31, 2015 at 4:03 PM, Ben Noordhuis <[email protected]> wrote:
> On Wed, Apr 1, 2015 at 12:42 AM, Christian Peel <[email protected]> wrote:
>> I'm looking to time parts of a program using libuv commands. I call one
>> of the following time commands before and after a chunk of code is run
>> and take the difference to get the time for the code. My goal is to find
>> a time metric which isn't influenced by the load on the machine while
>> executing the code. Here are some options in libuv, with links to the
>> libuv code:
>> * Use uv_hrtime, which returns clock time with nanosecond resolution. In
>>   Linux, this calls clock_gettime; I'm not certain how important the
>>   _COARSE business is or not. http://bit.ly/1DorhB0
>> * Use uv_getrusage, which returns user and system time with microsecond
>>   resolution. In Linux, this eventually calls task_utime and task_stime.
>>   http://fla.st/1Ff7umH
>> * Finally, I can call uv_cpu_info, which on Linux reads from /proc/stat
>>   in the function read_times. See line 658 of http://bit.ly/1DorhB0
>>
>> I'm calling libuv from Julia, and am working to check the impact of
>> garbage collection. I don't think there is significant overhead for
>> calling C from Julia, but will continue to investigate that. I'm trying
>> to test code that is on the order of tens of us to hundreds of ms.
>>
>> The first two options above (uv_hrtime and uv_getrusage) still seem to
>> be influenced by the system load on the 32-core machines that I'm
>> testing on. For uv_getrusage I tried both the sum of the system and
>> user time and just the user time. The third option (uv_cpu_info) seems
>> heavy since I'm just trying to time short chunks of code. Do any of you
>> have suggestions for how to time a short section of code in a way that
>> is independent of the system load?
>>
>> Thanks
>
> If you are actively profiling code, you should really be using
> something like perf or oprofile.
>
> If you are passively collecting metrics, use uv_hrtime() for tracking
> wall clock time and uv_getrusage() for CPU time. uv_getrusage() is
> also good for tracking metrics like major/minor page faults and
> voluntary/involuntary context switches.
>
> I'm not sure what you mean with "seem to be influenced by the system
> load". There is only so much CPU time to go around. If the system is
> overcommitted, something's gotta give, right?

--
[email protected]
