I can't look at all that code, but one quick thought: can you use 
ImmutableArrays.jl? For operations on small matrices/vectors, it's quite a lot 
faster.
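
A minimal sketch of what that might look like, assuming ImmutableArrays.jl exports fixed-size types like `Vector3` and the usual `dot`/`cross` operations (names per its README at the time; treat them as assumptions):

```julia
# Sketch: fixed-size immutable vectors avoid heap-allocating temporaries
# for 3-component ray math (assumes ImmutableArrays.jl is installed).
using ImmutableArrays

const Vec = Vector3{Float64}   # 3-component immutable vector type

origin = Vec(0.0, 0.0, 0.0)
dir    = Vec(1.0, 0.0, 0.0)

# Arithmetic returns new immutable values instead of mutating buffers,
# so you can drop the pre-allocated-memory workarounds.
p = origin + 2.0 * dir         # point along the ray
d = dot(dir, dir)              # squared length of the direction
```

Since the values are immutable and fixed-size, the compiler can keep them on the stack, which is often a bigger win for small 3-vectors than manual pre-allocation.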

When you say you didn't get a lot out of the profiling information, do you mean 
it was hard to interpret, or it just wasn't very informative? If the former, 
ProfileView.jl might help.
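
The workflow is roughly (assuming ProfileView.jl is installed, and using a hypothetical `trace_scene()` as a stand-in for your render entry point):

```julia
using ProfileView

trace_scene()                  # hypothetical entry point; run once to compile
@profile trace_scene()         # collect samples on the real run
ProfileView.view()             # opens an interactive flame graph
```

The flame graph makes it much easier to spot which functions dominate than reading the raw `Profile.print()` output.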

--Tim

On Monday, May 19, 2014 06:14:30 PM mike c wrote:
> Hi all, (first post)
> 
> I'm having my first look at julia and translated a pathtracer written in
> python into julia. (A pathtracer is a kind of raytracer)
> 
> The translation was relatively painless, and I've mimicked the python
> classes that I lost by using julia's compound types.  Code is here =>
> https://bitbucket.org/mikefc/julia-pathtracer/overview
> 
> However, after going through performance tips and looking for low hanging
> fruit, the julia version is still 3x slower than the python version.  Note:
> the python version does have core matrix/vector and intersection functions
> written in C, and I run it using PyPy.
> 
> What are some ways I can make the julia code faster?
> 
> * Is there something in the way I'm systematically handling arrays/vectors
> which is leading to slowdown?
> * The algorithms/techniques are a direct translation of the python code. Is
> this style "not liked" by the compiler?
> * Are there better ways of passing around config information? (see
> Config.jl)
> 
> The key difference between the python and julia code is that I've
> aggressively removed temporary vectors by doing nearly all calculations in
> pre-allocated memory.  I started going down the same road with julia, and
> while I did see some improvements I didn't see enough improvement to
> implement it everywhere (it makes the code quite messy).
> 
> I've toyed with parallel rendering (on multiple cores), but for the moment
> I'm just comparing single-core rendering speed.  I've also had a look at
> the profiling information, but I didn't get much out of it.
> 
> Any help appreciated!
> 
> thanks
> mike.
