I understand the O(n) aspect; however, this was also true for 1.3.0. And
since nth is used to obtain arguments in every single destructuring call,
it seems to me that its performance should not suddenly be allowed to
drop, especially this drastically. New versions of software should be
Hi,
On Friday, 5 October 2012 10:58:05 UTC+2, Karsten Schmidt wrote:
I've also seen the checks for RandomAccess and Sequential, and by simply
moving them to the top of the long if-then cascade in RT.nthFrom() (just
after the null check), the old performance is restored. I will open a ticket
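The effect of that reordering can be sketched in plain Java. This is a simplified illustration, not Clojure's actual RT.nthFrom: the class name, method body, and branch set here are invented for the sketch. The point is only that in a long instanceof cascade, collections with O(1) indexed access should be tested before the generic fall-through branches, so they never pay for walking the whole cascade on every call:

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;
import java.util.RandomAccess;

// Simplified sketch of an nth-style dispatch cascade (not the real RT.nthFrom).
public class NthSketch {
    static Object nthFrom(Object coll, int n) {
        // Fast path first: RandomAccess lists get direct O(1) indexing.
        if (coll instanceof RandomAccess && coll instanceof List) {
            return ((List<?>) coll).get(n);
        }
        // Generic fallback: linear O(n) walk for anything merely Iterable.
        if (coll instanceof Iterable) {
            int i = 0;
            for (Object o : (Iterable<?>) coll) {
                if (i++ == n) return o;
            }
            throw new IndexOutOfBoundsException(String.valueOf(n));
        }
        throw new UnsupportedOperationException("nth not supported on: " + coll);
    }

    public static void main(String[] args) {
        List<Integer> vec = new ArrayList<>(List.of(10, 20, 30));
        List<Integer> lst = new LinkedList<>(List.of(10, 20, 30));
        System.out.println(nthFrom(vec, 2)); // hits the O(1) fast path
        System.out.println(nthFrom(lst, 2)); // falls through to the linear walk
    }
}
```

If the RandomAccess check instead sat at the bottom of the cascade, every indexed lookup would first fail all the preceding instanceof tests, which is per-call overhead that destructuring-heavy code multiplies enormously.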
Okay, after several hours of further testing and profiling, it seems the culprit
is the implementation of (nth) - and IMHO this is quite a biggie:
(use 'macrochrono.core) ; only used for bench macro below
(def t [[0 0] [100 0] [100 100] [[0 0] 1 100]])
(defn foo [[a b c [[dx dy] r x2]] [px py]]
  ;; body elided in the original post; the argument destructuring
  ;; alone compiles into a series of nth calls
  [a b c dx dy r x2 px py])
nth only promises O(n) performance for all things sequential. However,
the implementation on master in RT.java appears to special-case
indexed and random-access collections for faster access, so I'm not
sure why you're seeing such a difference. You could try using get in
place of nth, though from
Today, I decided to finally switch one of my projects from Clojure
1.3.0 to 1.4.0 (while also test-driving the 1.5.0 snapshot), but quickly found
some discouraging performance effects. The project
involves a lot of geometry, and I'm using vanilla vectors for all
vector math. So far I've *not*
I'm not aware of any changes made in 1.4 that could cause this performance
degradation.
Out of curiosity, are you willing to share your code for performance profiling
of future Clojure versions? I.e., is it already open source (so that
wouldn't be a problem), or is it closed source?
Have you
Hi Andy, the timings were collected with two little macros I've written,
which are available here:
http://hg.postspectacular.com/macrochrono/src/tip/src/macrochrono.clj
The actual project in question will be released in the next few
months, once things are more stable.
I haven't run a profiler
time-action seems to assume that the elements of a vector literal are
evaluated sequentially in order of index - is that guaranteed?
On Tue, Oct 2, 2012 at 6:35 PM, Karsten Schmidt toxmeis...@gmail.com wrote:
Hi Andy, the timings were collected with two little macros I've written
and which are
I'd have thought that vectors are eval'd sequentially, are they not?
FWIW I get the same kind of slowdown between versions when
just using the built-in time macro... will report back after the
profiling session, but it all seems v. odd!
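As an aside on the evaluation-order question raised above: the analogous question in Java does have a guaranteed answer - the JLS specifies that array-initializer expressions are evaluated left to right. A side-effecting helper makes that visible (a standalone Java illustration with an invented record() helper; whether Clojure's vector literals make the same guarantee is exactly the open question in this thread):

```java
import java.util.ArrayList;
import java.util.List;

public class EvalOrder {
    // Records the order in which each element expression actually runs.
    static List<Integer> seen = new ArrayList<>();

    static int record(int x) {
        seen.add(x);
        return x;
    }

    public static void main(String[] args) {
        // JLS guarantees array-initializer expressions evaluate left to right.
        int[] a = { record(1), record(2), record(3) };
        System.out.println(seen); // prints [1, 2, 3]
    }
}
```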
On 3 October 2012 02:36, Ben Wolfson