Hmm, looks borked. I may be able to try sometime, but it could be a few days 
until I get to it. You posted all the code earlier in this thread?

Jim Garrison is probably the current expert on combining Julia & valgrind.

--Tim

On Thursday, March 12, 2015 03:01:59 AM Bartolomeo Stellato wrote:
> I installed valgrind 3.11 SVN with Homebrew and tried to run the code, but I
> am not familiar with the generated output.
> I added a valgrind-julia.supp file and used the command parameters explained
> here <https://github.com/JuliaLang/julia/blob/master/doc/devdocs/valgrind.rst>
> 
> I executed:
> 
> valgrind --smc-check=all-non-file --suppressions=valgrind-julia.supp
> /Applications/Julia-dev.app/Contents/Resources/julia/bin/julia
> simulation.jl a.out > log.txt 2>&1
> I attach the generated log.txt file.
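> 
> A quick, generic way to pull out the part of the (very long) log that
> matters is memcheck's leak summary near the end of the file, e.g.:
> 
> grep -A 7 "LEAK SUMMARY" log.txt
> grep "definitely lost" log.txt
> 
> Blocks reported as "definitely lost" are the ones that point to a real leak;
> "still reachable" memory at exit is usually harmless.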
> 
> Bartolomeo
> 
> On Thursday, March 12, 2015 at 03:10:10 UTC, Tim Holy wrote:
> > On Wednesday, March 11, 2015 05:33:10 PM Bartolomeo Stellato wrote:
> > > I also tried with Julia 0.4.0-dev+3752 and I encountered the same problem.
> > 
> > Hm. If you're sure there's a leak, this should be investigated. Any chance
> > you can try valgrind?
> > 
> > --Tim
> > 
> > > On Wednesday, March 11, 2015 at 22:51:18 UTC, Tony Kelman wrote:
> > > > The majority of the memory allocation is almost definitely coming from
> > > > the problem setup here. You're using a dense block-triangular
> > > > formulation of MPC, eliminating states and only solving for inputs with
> > > > inequality constraints. Since you're converting your problem data to
> > > > sparse initially, you're doing a lot of extra allocation, integer
> > > > arithmetic, and consuming more memory to represent a large dense matrix
> > > > in sparse format. Reformulate your problem to include both states and
> > > > inputs as unknowns, and enforce the dynamics as equality constraints.
> > > > This will result in a block-banded problem structure and maintain
> > > > sparsity much better. The matrices within the blocks are not sparse
> > > > here since you're doing an exact discretization with expm, but a banded
> > > > problem will scale much better to longer horizons than a triangular one.
> > > > 
> > > > You also should be able to reuse the problem data, with the exception
> > > > of bounds or maybe vector coefficients, between different MPC
> > > > iterations.
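> > > > 
> > > > For illustration, a rough sketch of how the equality-constrained,
> > > > block-banded formulation could be assembled (Julia 0.3-era syntax; N,
> > > > nx, nu, A, B, x0 below are made-up placeholders, not your actual model):
> > > > 
> > > > # Keep states and inputs as unknowns,
> > > > # z = [x_1; u_0; x_2; u_1; ...; x_N; u_{N-1}], and enforce the dynamics
> > > > # x_{k+1} = A*x_k + B*u_k as sparse equality constraints Aeq*z = beq.
> > > > N      = 20                   # horizon (illustrative)
> > > > nx, nu = 4, 2                 # state/input dimensions (illustrative)
> > > > A, B   = sprand(nx, nx, 0.5), sprand(nx, nu, 0.5)
> > > > x0     = rand(nx)
> > > > 
> > > > # Block-banded constraint matrix: diagonal blocks [-I  B],
> > > > # sub-diagonal blocks [A  0]; the nonzero count grows linearly with N.
> > > > Aeq = kron(speye(N), [-speye(nx) B]) +
> > > >       kron(spdiagm(ones(N-1), -1), [A spzeros(nx, nu)])
> > > > beq = zeros(N*nx)
> > > > beq[1:nx] = -A*x0             # the initial state enters the first block row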
> > > > 
> > > > On Wednesday, March 11, 2015 at 12:14:03 PM UTC-7, Bartolomeo Stellato wrote:
> > > >> Thank you for the quick replies and for the suggestions!
> > > >> 
> > > >> I checked which lines give more allocation with --track-allocation=user,
> > > >> and the amount of memory I posted is from the OSX Activity Monitor.
> > > >> Even if it is not all necessarily used, if it grows too much the
> > > >> operating system is forced to kill Julia.
> > > >> 
> > > >> I slightly edited the code in order to *simulate the closed loop 6
> > > >> times* (for different parameters of N and lambdau). I attach the files.
> > > >> The *allocated memory* with the OSX Activity Monitor is *2 GB now*.
> > > >> If I run the code twice with a clear_malloc_data() in between to save
> > > >> the --track-allocation=user information, I get something around 3.77 GB!
> > > >> 
> > > >> Are there maybe problems with my code that make the allocated memory
> > > >> increase? I can't understand why, by simply running the same function 6
> > > >> times, the memory increases so much. Unfortunately I need to do it
> > > >> hundreds of times, and this way it is impossible.
> > > >> 
> > > >> Do you think that using the push! function together with reducing the
> > > >> vector computations could significantly reduce this big amount of
> > > >> allocated memory?
> > > >> 
> > > >> 
> > > >> Bartolomeo
> > > >> 
> > > >> On Wednesday, March 11, 2015 at 17:07:23 UTC, Tim Holy wrote:
> > > >>> --track-allocation doesn't report the _net_ memory allocated, it
> > > >>> reports the _gross_ memory allocation. In other words, allocate/free
> > > >>> adds to the tally, even if all memory is eventually freed.
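> > > >>> 
> > > >>> A tiny made-up example (the function below is purely illustrative,
> > > >>> nothing from your code):
> > > >>> 
> > > >>> # Each call adds ~8 MB to the --track-allocation tally, but the array
> > > >>> # becomes garbage as soon as the function returns, so the net memory
> > > >>> # use does not grow across calls.
> > > >>> function gross_not_net()
> > > >>>     x = rand(1000, 1000)    # ~8 MB allocated here
> > > >>>     return sum(x)           # x is collectable once this returns
> > > >>> end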
> > > >>> 
> > > >>> If you're still concerned about memory allocation and its likely
> > > >>> impact on performance: there are some things you can do. From glancing
> > > >>> at your code very briefly, a couple of comments:
> > > >>> - My crystal ball tells me you will soon come to adore the push!
> > > >>> function :-)
> > > >>> - If you wish (and it's your choice), you can reduce allocations by
> > > >>> doing more operations with scalars. For example, in
> > > >>> computeReferenceCurrents, instead of computing tpu and iref arrays
> > > >>> outside the loop, consider performing the equivalent operations on
> > > >>> scalar values inside the loop.
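> > > >>> 
> > > >>> Schematically, something along these lines (0.3-style vectorized
> > > >>> syntax; period, Imax and the sine formula are placeholders, not your
> > > >>> actual computeReferenceCurrents):
> > > >>> 
> > > >>> period, Imax = 0.02, 10.0        # illustrative parameters
> > > >>> t = linspace(0, 0.1, 1000)
> > > >>> 
> > > >>> # Array style: tpu and iref are whole temporary vectors.
> > > >>> tpu  = t ./ period
> > > >>> iref = Imax .* sin(2pi .* tpu)
> > > >>> 
> > > >>> # Scalar style: the same quantities computed element by element inside
> > > >>> # the loop, with push! growing the result only where it is kept.
> > > >>> iref2 = Float64[]
> > > >>> for tk in t
> > > >>>     tpu_k = tk / period
> > > >>>     push!(iref2, Imax * sin(2pi * tpu_k))
> > > >>> end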
> > > >>> 
> > > >>> Best,
> > > >>> --Tim
> > > >>> 
> > > >>> On Wednesday, March 11, 2015 07:41:19 AM Bartolomeo Stellato wrote:
> > > >>> > Hi all,
> > > >>> > 
> > > >>> > I recently started using Julia for my *Closed loop MPC simulations.*
> > > >>> > I found it very interesting that I was able to do almost everything
> > > >>> > I was doing in MATLAB with Julia. Unfortunately, when I started
> > > >>> > working on more complex simulations I noticed a *memory allocation
> > > >>> > problem*.
> > > >>> > 
> > > >>> > I am using OSX Yosemite and Julia 0.3.6. I attached a MWE that can
> > > >>> > be executed with include("simulation.jl")
> > > >>> > 
> > > >>> > The code executes a single simulation of the closed loop system with
> > > >>> > an *MPC controller* solving an optimization problem at each time
> > > >>> > step via the *Gurobi interface*. At the end of the simulation I am
> > > >>> > interested in only *two performance indices* (float numbers).
> > > >>> > The simulation, however, takes more than 600 MB of memory and, even
> > > >>> > if most of the declared variables are local to different functions,
> > > >>> > I can't get rid of them afterwards with the garbage collector: gc()
> > > >>> > 
> > > >>> > I analyzed the memory allocation with julia --track-allocation=user
> > > >>> > and I included the generated .mem files. Probably my code is not
> > > >>> > optimized, but I can't understand *why all that memory doesn't get
> > > >>> > deallocated after the simulation*.
> > > >>> > 
> > > >>> > Is there anyone who could give me any explanation or suggestion to
> > > >>> > solve that problem? I need to perform several of these simulations
> > > >>> > and it is impossible for me to allocate for each one more than
> > > >>> > 600 MB.
> > > >>> > 
> > > >>> > 
> > > >>> > Thank you!
> > > >>> > 
> > > >>> > Bartolomeo
