Hi Jim,

A few points:
1) Maybe I'm missing something, but you appear to be calculating the same inverse twice on every iteration of your loop. That is, inv(Om_e[(j-1)*nvar+1:j*nvar,:]) gets called twice per iteration. Computing it once, binding it to a local variable, and reusing it halves the number of matrix inversions.

2) As Mauro points out, slicing into 2d arrays currently triggers memory allocation on v0.3.x. You can read more about this at the following StackOverflow question: http://stackoverflow.com/questions/28271308/avoid-memory-allocation-when-indexing-an-array-in-julia. Since you are indexing with ranges, my understanding is that in v0.4 you should be able to avoid the allocation. In the meantime, you could try performing each slice once per iteration, assigning it to a new variable, and then using that variable in your matrix calls.

3) I'll strongly second Mauro's suggestion that you pass in to your function everything that is not explicitly defined as a global constant. This should provide a significant performance improvement. For more reading on this, check out the first item in the "Performance Tips" section of the official docs (http://julia.readthedocs.org/en/latest/manual/performance-tips/). I've appended a small toy example of this point below the quoted message at the bottom of this post.

So, taking all these things together, my version of your function would look something like this:

function bb_update(bbj, bbcovj, capZ, nvar, Om_e, yyy)
    for j = 1:size(yyy, 2)
        # slice and invert once per iteration, then reuse the local variables
        currentInverse = inv(Om_e[(j-1)*nvar+1:j*nvar, :])
        currentZSlice  = capZ[(j-1)*nvar+1:j*nvar, :]
        bbcovj += currentZSlice' * currentInverse * currentZSlice
        bbj    += currentZSlice' * currentInverse * yyy[:, j]
    end
    return (bbj, bbcovj)
end

I've also appended, below the quoted message, a rough sketch of how you might set everything up and call this version.

A final question: are you planning on implementing the Model Confidence Set in Julia any time soon? (I'm making the possibly incorrect assumption that you're the same James Nason as in Hansen, Lunde, and Nason (2011).) Bit of a coincidence, but I was hoping to implement the Model Confidence Set in Julia sometime in the next few weeks as part of a forecast evaluation package. If you're interested, I can point you to the github source once it is done.

Cheers,

Colin

On Wednesday, 17 June 2015 14:11:49 UTC+10, [email protected] wrote:
>
> Hi All:
>
> I am a novice using Julia. As a way to learn Julia, my project is to
> convert MatLab code that estimates Bayesian vector autoregressions. The
> estimator uses Gibbs sampling.
>
> The Julia code is running, but is slower than MatLab. Thus, I am doing
> something very inefficiently in Julia.
>
> The Profile.view package and the @time function indicate the problem is
> located in a loop of the function bb_update(bbj, bbcovj), which appears
> below.
>
> bb_update(.,.) represents a step in the Gibbs sampling procedure. This
> part of the Gibbs sampler updates the column vector of coefficients, bbj,
> which are to be estimated and its covariance matrix, bbcovj, which is psd.
> bb_update(bbj, bbcovj) returns bbcovj and bbj.
>
> bbj is (nvar*nvar*plags + nvar) x 1 and bbcovj is (nvar*nvar*plags + nvar)
> x (nvar*nvar*plags + nvar).
>
> The (global) constants are nvar = 6, number of variables (Int64) and plags
> = 6, number of lags in the vector autoregression (Int64), which are defined
> previously in the programs.
>
> The loop in bb_update(.,.) involves actual data, capZ and yyy, and a
> different covariance matrix, Om_e, that for the purposes of the loops is
> fixed. capZ and Om_e are defined outside bb_update(.,.). bbj is estimated
> conditional on Om_e.
>
> capZ = a data matrix (Array Float64, 2), which is (nvar x obs) x
> (nvar*nvar*plags + nvar), where the (global) constant obs = 223, number of
> observations (Int64).
>
> yyy is nvar x obs (Array Float64, 2).
>
> The covariance matrix Om_e is (nvar x obs) x nvar (Array Float64, 2).
>
> Prior to calling bb_update(.,.), the program sets bbj = zeros(
> nvar*nvar*plags + nvar, 1) and bbcovj = zeros( nvar*nvar*plags + nvar,
> nvar*nvar*plags), respectively. These are input into bb_update(.,.).
>
> The loop for j = 1:obs picks off
>
> 1) a nvar x (nvar*nvar*plags + nvar) block of capZ to form a quadratic
> around a nvar x nvar block of Om_e to construct bbcovj
>
> 2) the transpose of the same block of capZ, the same block of Om_e, and a
> nvar x 1 column of yyy are multiplied to compute bbj.
>
> The @time function suggests 80 to 90% of the run time of the entire Gibbs
> sampling procedure is tied up in the loop of bb_update(.,.) and that 55% of
> the run time of bb_update(.,.) is devoted to gc() activities.
>
> I am running version 0.3.8 of Julia and the OS is Xubuntu64, v14.04.
>
> Your help/advice is much appreciated.
>
> Sincerely,
>
> Jim Nason
>
> ======================================================
>
> function bb_update(bbj, bbcovj)
>
>     for j = 1:obs
>         bbcovj += capZ[(j-1)*nvar+1:j*nvar,:]' * ( inv(Om_e[(j-1)*nvar+1:j*nvar,:]) ) * capZ[(j-1)*nvar+1:j*nvar,:] ;
>         bbj += capZ[(j-1)*nvar+1:j*nvar,:]' * ( inv(Om_e[(j-1)*nvar+1:j*nvar,:]) ) * yyy[:,j] ;
>     end
>
>     return (bbj, bbcovj)
> end
>
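
Here is the toy example for point 3 that I mentioned above (a minimal sketch; the names sum_global and sum_arg are just made up for illustration). Both functions run the same loop, but the first reads the non-constant global x while the second takes the data as an argument, so the compiler can infer its type. The second @time call should run much faster and allocate far less.

x = randn(10000)           # a non-constant global

function sum_global()      # reads the global x on every iteration
    s = 0.0
    for i = 1:length(x)
        s += x[i]
    end
    return s
end

function sum_arg(v)        # identical loop, but the data comes in as an argument
    s = 0.0
    for i = 1:length(v)
        s += v[i]
    end
    return s
end

sum_global(); sum_arg(x)   # run each once so compilation isn't included in the timings
@time sum_global()
@time sum_arg(x)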

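And here is the rough sketch, also mentioned above, of how you might set everything up and call my version of bb_update. The arrays below are just placeholders with the dimensions you described (nvar = 6, plags = 6, obs = 223); in your program capZ, Om_e and yyy of course come from your data and the previous Gibbs steps, and I've filled Om_e with identity blocks purely so that each nvar x nvar block is invertible.

nvar  = 6
plags = 6
obs   = 223
k     = nvar*nvar*plags + nvar          # length of bbj

capZ = randn(nvar*obs, k)               # stacked regressor blocks (placeholder)
yyy  = randn(nvar, obs)                 # data matrix (placeholder)

# Om_e stacked as obs blocks of size nvar x nvar (identity placeholders)
Om_e = zeros(nvar*obs, nvar)
for j = 1:obs
    Om_e[(j-1)*nvar+1:j*nvar, :] = eye(nvar)
end

bbj    = zeros(k, 1)
bbcovj = zeros(k, k)

(bbj, bbcovj) = bb_update(bbj, bbcovj, capZ, nvar, Om_e, yyy)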