If you'll only ever be working with them as distinct matrices, then a vector of matrices sounds fine. Are there computations that require the full stacked matrix?
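For concreteness, here is a minimal sketch of the vector-of-matrices layout (the names F, J, and n, and the random data, are only placeholders for illustration, not proper stochastic matrices):

    # Hypothetical sizes: J actions and n states (placeholder values).
    J, n = 3, 100

    # One n-by-n matrix per action; F[a] is the transition matrix given action a.
    F = [rand(n, n) for a in 1:J]

    v = rand(n)
    # F[a] is just a reference to the a-th matrix, so nothing is copied in the loop.
    EV = [F[a] * v for a in 1:J]

Looping over actions then reads as naturally as the math, and passing F[a] to a function does not allocate a copy.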
On Thu, Sep 17, 2015 at 11:11 PM, Patrick Kofod Mogensen <[email protected]> wrote:

> I know the title is extremely general, so let me explain myself.
>
> I am an economist, and I often work with discrete choice models. Some agent can take an action, a, indexed by j = 1 ... J. When solving the model, there are often a lot of features which depend on the action - a transition matrix, a payoff, and so on. Let's go with the transition matrix. Say given a = 1 the state transition from the current period to the next is governed by F1, but given a = 2 it is governed by another F2.
>
> Now, I know of two ways to deal with this.
>
> First, I can store the two transition matrices next to each other (or above each other) in a matrix with double the number of columns (rows) that would be needed for one of them. My main concern here is that every time I have to use either, I am pretty sure that a temporary will necessarily be created (when indexing into F[1:n, :] and F[n+1:end, :], for example). Also, I have to fiddle with indexes (but I can live with that if necessary). I guess stacking on top of each other is probably smartest performance-wise.
>
> Second, I can create a vector, store [F1; F2] as F, and simply do F[1] to access F given a = 1, and F[2] to access F given a = 2. I think it makes the code so much easier to read when looping over A = {1, 2, ..., J} - but I am not sure if it is a good idea performance-wise. Since I've done this so many times, I really just wanted to put my ignorance out there, and ask if there is a particular reason why I shouldn't be doing this, and if there is some other great way to do it.
>
> Best,
> Patrick
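On the temporaries you mention for the stacked layout: slicing with F[1:n, :] does allocate a copy, but indexing through a view avoids that. A rough sketch, assuming the J matrices are stacked vertically in one array (using @view from current Julia; at the time of this thread the equivalent was sub):

    J, n = 3, 100
    Fstacked = rand(J * n, n)    # J blocks of n rows, stacked vertically (placeholder data)

    # Returns the block of rows for action a as a SubArray, without copying.
    action_block(Fstacked, a, n) = @view Fstacked[(a - 1) * n + 1 : a * n, :]

    v = rand(n)
    for a in 1:J
        F_a = action_block(Fstacked, a, n)   # no temporary matrix is created here
        w = F_a * v                          # multiply directly through the view
    end

So the stacked layout is workable without copies, but it buys you little unless some computation actually needs the full stacked matrix at once.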
