It does sound like a memory leak. Two options:
- Consider testing on Julia 0.4. There have been deep changes to how memory is
managed in 0.4.
- Can you run it under valgrind?
Best,
--Tim
On Wednesday, March 11, 2015 12:14:03 PM Bartolomeo Stellato wrote:
> Thank you for the quick replies and for the suggestions!
>
> I checked which lines give more allocation with --track-allocation=user and
> the amount of memory I posted is from the OSX Activity monitor.
> Even if not all of it is actively used, the operating system is forced to
> kill Julia if it grows too large.
>
> I slightly edited the code in order to *simulate the closed loop 6 times*
> (for different parameters of N and lambdau). I attach the files. The
> *allocated memory* reported by the OSX Activity Monitor is now *2 GB*.
> If I run the code twice with a clear_malloc_data() in between to save the
> --track-allocation=user information, I get something around 3.77 GB!
>
> Are there problems in my code that make the allocated memory grow? I can't
> understand why simply running the same function 6 times increases the memory
> so much. Unfortunately I need to do it hundreds of times, which is
> impossible this way.
>
> Do you think that using the push! function, together with reducing the
> vector computations, could significantly reduce this amount of allocated
> memory?
>
>
> Bartolomeo
>
> On Wednesday, March 11, 2015 at 17:07:23 UTC, Tim Holy wrote:
> > --track-allocation doesn't report the _net_ memory allocated; it reports
> > the _gross_ memory allocation. In other words, every allocate/free pair
> > adds to the tally, even if all of the memory is eventually freed.
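To make the gross-vs-net distinction concrete, here is a small hypothetical sketch (invented code, not taken from the attached files):

```julia
# Each iteration allocates a fresh 1000-element array that becomes
# garbage immediately, so --track-allocation tallies roughly n x 8000
# bytes of *gross* allocation even though almost nothing is retained.
function gross_demo(n)
    s = 0.0
    for i in 1:n
        tmp = rand(1000)   # allocated here...
        s += tmp[1]        # ...and dropped right after this line
    end
    return s               # net retained result: a single Float64
end
```

After a gc(), the memory retained by this function's work is essentially zero, yet the allocation tally keeps growing with n.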
> >
> > If you're still concerned about memory allocation and its likely impact
> > on performance, there are some things you can do. From glancing at your
> > code very briefly, a couple of comments:
> > - My crystal ball tells me you will soon come to adore the push! function
> > :-)
> > - If you wish (and it's your choice), you can reduce allocations by doing
> > more operations with scalars. For example, in computeReferenceCurrents,
> > instead of computing the tpu and iref arrays outside the loop, consider
> > performing the equivalent operations on scalar values inside the loop.
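To illustrate the second point, here is a hypothetical sketch (Julia 0.3-era syntax; `refcurrents_arrays`, `refcurrents_scalars`, `t`, and `Tpu` are invented names standing in for the actual code in computeReferenceCurrents):

```julia
# Array version: builds whole temporary arrays before the loop.
function refcurrents_arrays(t, Tpu)
    tpu  = t ./ Tpu           # allocates one temporary array
    iref = sin(2pi .* tpu)    # allocates another
    s = 0.0
    for k in 1:length(iref)
        s += iref[k]
    end
    return s
end

# Scalar version: the same arithmetic on scalars inside the loop,
# so no intermediate arrays are allocated at all.
function refcurrents_scalars(t, Tpu)
    s = 0.0
    for k in 1:length(t)
        tpu_k = t[k] / Tpu
        s += sin(2pi * tpu_k)
    end
    return s
end
```

Both versions compute the same result; the scalar one simply trades the two temporary arrays for per-iteration scalar work, which costs no heap allocation.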
> >
> > Best,
> > --Tim
> >
> > On Wednesday, March 11, 2015 07:41:19 AM Bartolomeo Stellato wrote:
> > > Hi all,
> > >
> > > I recently started using Julia for my *closed-loop MPC simulations*. I
> > > found it very interesting that I was able to do almost everything I was
> > > doing in MATLAB with Julia. Unfortunately, when I started working on
> > > more complex simulations I noticed a *memory allocation problem*.
> > >
> > > I am using OSX Yosemite and Julia 0.3.6. I attached a MWE that can be
> > > executed with include("simulation.jl")
> > >
> > > The code executes a single simulation of the closed-loop system with an
> > > *MPC controller* solving an optimization problem at each time step via
> > > the *Gurobi interface*. At the end of the simulation I am interested in
> > > only *two performance indices* (floating-point numbers).
> > > The simulation, however, takes more than 600 MB of memory and, even
> > > though most of the declared variables are local to different functions,
> > > I can't get rid of them afterwards with the garbage collector: gc()
> > >
> > > I analyzed the memory allocation with julia --track-allocation=user and
> > > included the generated .mem files. Probably my code is not optimized,
> > > but I can't understand *why all that memory doesn't get deallocated
> > > after the simulation*.
> > >
> > > Is there anyone who could give me an explanation or a suggestion to
> > > solve this problem? I need to perform several of these simulations, and
> > > it is impossible for me to allocate more than 600 MB for each one.
> > >
> > >
> > > Thank you!
> > >
> > > Bartolomeo