On Sep 30, 1:18 pm, Matt Brown <mrbrow...@gmail.com> wrote:
> Hi.
>
> Thanks all, for your comments.
>
> > You need to use a bigger number than 1000 for these results to be 
> > meaningful.
>
> Out of curiosity, why is this?
> Does the JIT do something different after 1000 iterations?
> Or is the concern simply that the variance of the mean estimate is too
> high with 1000 vs. 10^6 iterations (due to OS background processes,
> etc.)? I originally reported ranges for average execution times (over
> about a dozen runs) to address this particular concern.
>

Generally you get some error introduced by the GC and OS background
processes.

Averaging over a dozen runs is similar to using on the order of 10^5
iterations, but that still runs for less than a second total, and the
result is still open to interpretation.

But anyway, running it for different periods of time also lets you see
that there is something else going on, and raises more meaningful
questions, like: why do I run this 10 million times and get (close
enough to) the same result as running it 1000 times?

> > FWIW, I've run both on my Toshiba dual core laptop with ubuntu, and
> > they return approximately the same values.
>
> > (and there is some JIT trickery going on, as I got:
> > user=> (myavgtime (+ 1 2 3) 1000 mytime1)
> > 0.0029811580000000306
> > user=> (myavgtime (+ 1 2 3) 100000 mytime1)
> > 0.0017426112899993846
> > user=> (myavgtime (+ 1 2 3) 1e8 mytime1)
> > 0.0015456479935035251
>
> > Although the last one ran for quite a bit longer than 0.0015)
>
> Thanks for posting this!
> One explanation for a decreasing mean execution time with an
> increasing number of iterations is this: The first iteration's
> execution time is relatively large because it's not JIT optimized
> (0.018 msec on my system). Increasing the number of iterations means
> you're averaging a larger number of small, JIT optimized individual
> execution times (reported as 0.000-0.001 msec on my system) into that
> initial larger value. The mean therefore becomes asymptotically
> smaller with larger numbers of iterations. Is there something else
> going on here as well, though (e.g. JIT stuff)?
>

Most likely it is because the JIT compiler has completely eliminated
the operation, so you are really just measuring the time elapsed
between two System/nanoTime calls.

System/nanoTime has limited granularity, so it could be that doing a
print forces each measurement to cost at least one tick of
System/nanoTime, whereas without the print several measurements can
complete within the same tick.

In other words, the counter behind System/nanoTime doesn't necessarily
advance between calls, but doing the print forces each measurement
onto at least one separate tick.
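You can make the granularity visible directly. A rough sketch (the
`nano-deltas` name is my own, not from the thread): sample back-to-back
System/nanoTime differences and count how often two calls land on the
same tick (a delta of 0). Counts will vary by platform and JVM.

```clojure
(defn nano-deltas
  "Sample n back-to-back System/nanoTime differences and
  return a frequency map of the observed deltas in nanosecs."
  [n]
  (frequencies
    (repeatedly n
      #(let [t0 (System/nanoTime)
             t1 (System/nanoTime)]
         (- t1 t0)))))

;; If 0 appears as a key, two nanoTime calls fit inside one tick.
(nano-deltas 1000)
```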

> Using a million iterations, I got:
> (myavgtime (+ 1 2 3) 1e6 mytime1) -> 0.00027 - 0.00029 msec (over a
> dozen repeats)
> (myavgtime (+ 1 2 3) 1e6 mytime2) -> 0.00068 msec (single run,
> printing 10^6 lines takes a long time, I was too impatient for
> repeats)
>
> So, using mytime1 is still just over 2x faster than mytime2 with 10^6
> iterations.
>
> cheers
> Matt
>
Perhaps you could retry with some other side-effecting function, like
making a Java array of one boolean and flipping the boolean back and
forth? (And also try something with a longer timeframe.)
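Something like this sketch of the boolean-flipping idea (names are my
own): the mutation of the array slot should make it harder for the JIT
to discard the work entirely, though I can't promise it won't find a
way.

```clojure
;; A one-element primitive boolean array whose slot is flipped
;; on every call, giving the timed expression a real side effect.
(def flip-cell (boolean-array 1))

(defn flip!
  "Flip the boolean in flip-cell and return the new value."
  []
  (aset-boolean flip-cell 0 (not (aget flip-cell 0))))
```

Then time it with, say, (myavgtime (flip!) 1e6 mytime1) and compare
against the (+ 1 2 3) numbers.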

When measuring, I think it makes more sense to time the loop itself
than to time the individual calls and sum them.

Looping overhead is small, and with a non-trivial operation, it should
be outweighed by whatever else is going on.
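A sketch of what I mean, in the style of your myavgtime (the
avgtime-loop name is mine): take two timestamps around the whole loop
and divide once, so per-call System/nanoTime overhead drops out of the
measurement.

```clojure
(defmacro avgtime-loop
  "Run expr n times inside a single timed loop and return the
  average time per iteration in millisecs."
  [expr n]
  `(let [n#  (long ~n)
         t0# (System/nanoTime)]
     (loop [i# 0]
       (when (< i# n#)
         ~expr
         (recur (unchecked-inc i#))))
     ;; one subtraction for the whole run, then average
     (/ (double (- (System/nanoTime) t0#)) 1000000.0 n#)))

;; e.g. (avgtime-loop (+ 1 2 3) 1e6)
```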

>
>
> > On Sep 29, 12:08 pm, Matt <mrbrow...@gmail.com> wrote:
>
> > > Hi. I'm getting a three-fold difference in timing results when I add a
> > > seemingly trivial println to observe what's going on. Consider:
>
> > > (defmacro mytime1
> > >   "Returns execution time of expr in millisecs"
> > >   [expr]
> > >   `(let [time0# (. System nanoTime)
> > >          exprval# ~expr
> > >          time1# (/ (double (- (. System nanoTime) time0#)) 1000000.0)]
> > >     time1#))
>
> > > (defmacro mytime2
> > >   "Prints out execution time of expr in millisecs and returns it"
> > >   [expr]
> > >   `(let [time0# (. System nanoTime)
> > >          exprval# ~expr
> > >          time1# (/ (double (- (. System nanoTime) time0#)) 1000000.0)]
> > >     (println "elapsed time (msec): " time1#)
> > >     time1#))
>
> > > Timing macros mytime1 and mytime2 differ only in that mytime2 has the
> > > println expression in the second last line. The println in mytime2
> > > comes after time1# is assigned, so the println expression's execution
> > > time shouldn't be counted. I confirmed this assumption by testing.
> > > (mytime1 (+ 1 2 3)) and (mytime2 (+ 1 2 3)) both return values in the
> > > 0.05 to 0.08 msec range (on a single call, i.e. without Hotspot
> > > optimization).
>
> > > (defmacro myavgtime
> > >   "Calls timefunc on expr n times and returns average of the n
> > > execution times"
> > >   [expr n timefunc]
> > >   `(loop [cumsum# 0 i# ~n]
> > >     (if (<= i# 0)
> > >       (/ cumsum# ~n )
> > >       (recur (+ cumsum# (~timefunc ~expr)) (dec i#) ))))
>
> > > Results:
> > > (myavgtime (+ 1 2 3) 1000 mytime1) returns average execution times in
> > > the 0.0005 - 0.0008 msec range.
>
> > > (myavgtime (+ 1 2 3) 1000 mytime2) returns average execution times in
> > > the 0.0014 - 0.0018 msec range, after printing 1000 lines:
> > > elapsed time (msec):  0.0870
> > > elapsed time (msec):  0.0010
> > > elapsed time (msec):  0.0020
> > > elapsed time (msec):  0.0010
> > > elapsed time (msec):  0.0010
> > > ...
> > > <990 similar output lines not shown>
> > > ...
> > > elapsed time (msec):  0.0010
> > > elapsed time (msec):  0.0010
> > > elapsed time (msec):  0.0010
> > > elapsed time (msec):  0.0010
> > > elapsed time (msec):  0.0010
>
> > > So, using mytime2 with the myavgtime macro gives average execution
> > > times for the expression (+ 1 2 3) of 2 to 3 times longer than when
> > > using mytime1. Why is this? Does the JIT optimize differently with all
> > > those println's when using mytime2? (Kind of "quantum mechanics-y" -
> > > observing what's going on changes it.)
>
> > > thanks for any insight here!
> > > Matt
>
> > > System specs:
> > > MacBook Pro, Core2Duo 2.33GHz, 2GB RAM
> > > OSX 10.5.8 Leopard
> > > Clojure 1.1.0-alpha-SNAPSHOT
> > > java version "1.6.0_15"
> > > Java(TM) SE Runtime Environment (build 1.6.0_15-b03-226)
> > > Java HotSpot(TM) 64-Bit Server VM (build 14.1-b02-92, mixed mode)
You received this message because you are subscribed to the Google
Groups "Clojure" group.