It's the whole "global variable" thing. Put all this in a function (with 
locally-defined T and n_classes), and you won't see any difference.
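
For example, a minimal sketch along these lines, with the bounds from your
second timing (5000 and 10) passed in as arguments (the name bench is just a
placeholder):

function bench(T::Int, n_classes::Int)
    # Same nested loops, but T and n_classes are now local arguments
    # instead of non-const globals, so the loop bounds are type-stable.
    for t = 2:T
        for n = 1:n_classes
            for j = 1:n_classes
            end
        end
    end
end

bench(5000, 10)        # run once to compile
@time bench(5000, 10)  # should report essentially no allocation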

--Tim

On Wednesday, November 26, 2014 10:28:31 AM Colin Lea wrote:
> Thanks to you both! However, there is still another odd issue.
> 
> These two blocks should behave the same, but they take very different amounts
> of time and memory. Both 'T' and 'n_classes' are of type Int64.
> 
> # first version: loop bounds come from the globals T and n_classes
> @time (
>     for t = 2:T
>         for n = 1:n_classes
>             for j = 1:n_classes
>             end
>         end
>     end
> )
> 
> # second version: identical loops with literal bounds
> @time (
>     for t = 2:5000
>         for n = 1:10
>             for j = 1:10
>             end
>         end
>     end
> )
> 
> elapsed time: 0.063186286 seconds (18190040 bytes allocated)
> elapsed time: 0.002261641 seconds (71824 bytes allocated)
> 
> Any insight on this?
> 
> On Wednesday, November 26, 2014 12:37:12 PM UTC-5, Tim Holy wrote:
> > Nice job using track-allocation to figure out where the problem is.
> > 
> > If you really don't want allocation, then you should investigate Devec.jl or
> > InPlaceOps.jl, or write out these steps using loops to access each element of
> > those matrices.
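> > 
> > For example, a rough sketch of the explicit-loop route, with add!, A, B, and
> > out as placeholder names (out preallocated to the same size as A and B):
> > 
> > function add!(out, A, B)
> >     # elementwise sum written out by hand; no temporary array is created
> >     for i = 1:length(A)
> >         out[i] = A[i] + B[i]
> >     end
> >     return out
> > end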
> > 
> > --Tim
> > 
> > On Wednesday, November 26, 2014 07:55:59 AM Colin Lea wrote:
> > > I'm implementing an inference algorithm and am running into memory
> > > allocation issues that are slowing it down. I created a minimal example
> > > that resembles my algorithm, and I see that the problem persists there.
> > > 
> > > The issue is that Julia is allocating a lot of extra memory when adding
> > > matrices together. This happens regardless of whether or not I preallocate
> > > the output matrix.
> > > 
> > > Minimal example:
> > > https://gist.github.com/colincsl/ab44884c5542539f813d
> > > 
> > > Memory output of minimal example (using julia --track-allocation=user):
> > > https://gist.github.com/colincsl/c9c9dd86fca277705873
> > > 
> > > Am I misunderstanding something? Should I be performing the operation
> > > differently?
> > > 
> > > One thing I've played with is the matrix C. The indices are a sliding
> > > window (e.g. use C[t-10:t] for all t). When I remove C from the equation
> > > the performance increases by a factor of 2.5. However, it still uses more
> > > memory than expected. Could this be the primary issue?
