Hi Michael,
> The approach to use Eigen data structures for the complete grid sounds very 
> promising -- I will try out this one.
> 
If you decide to go this road, you could also use the unsupported Tensor
module (https://eigen.tuxfamily.org/dox-devel/unsupported/eigen_tensors.html).
If you use Eigen::Tensor<Scalar, 3>, each grid matrix can be stored as a slice
along the third dimension. You can then access grid matrix i via
tensor.chip(i, 2), which can also be used as an lvalue. This could be easier
in terms of abstraction.
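A minimal sketch of what that could look like (N, num_gridpoints, and the
update value are just example values, not anything from your code):

#include <unsupported/Eigen/CXX11/Tensor>

int main()
{
    const int N = 4;                // matrix size (example value)
    const int num_gridpoints = 100; // number of grid points (example value)

    // All grid matrices in one contiguous block, dimensions (N, N, num_gridpoints).
    Eigen::Tensor<double, 3> grid(N, N, num_gridpoints);
    grid.setZero();

    // Grid matrix i is the slice along the third dimension. chip(i, 2) can be
    // used as an lvalue, so the update can be assigned to it directly.
    Eigen::Tensor<double, 2> update(N, N);
    update.setConstant(1.0);
    int i = 42;
    grid.chip(i, 2) = grid.chip(i, 2) + update; // element-wise update of matrix i
}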

However, some caveats: even though chip() removes a dimension, its result is
not a matrix type, and a lot of the algorithms Eigen provides for dense
matrices won't be available. You could probably fix that by using an
Eigen::Map constructed from tensor.data(). If you only perform basic
operations on the grid matrices, such as multiplication or addition, the
tensor module itself would be sufficient.
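A sketch of that workaround (matrix_view is just a name I made up; it assumes
the default column-major layout of Eigen::Tensor, so slice i starts i*N*N
doubles into the contiguous storage):

#include <Eigen/Dense>
#include <unsupported/Eigen/CXX11/Tensor>

// View grid matrix i as a regular dense matrix, without copying.
Eigen::Map<Eigen::MatrixXd> matrix_view(Eigen::Tensor<double, 3> &grid, int i)
{
    const auto N = grid.dimension(0);
    return Eigen::Map<Eigen::MatrixXd>(grid.data() + i * N * N, N, N);
}

Such a map gives you the full dense-matrix interface back, e.g.
matrix_view(grid, i) * matrix_view(grid, j) or matrix_view(grid, i).inverse().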

Finally, to address a question from your very first email:

> Is having a contiguous memory region actually beneficial or does it not make 
> any difference? 


A cache-friendly memory layout is absolutely crucial for good performance, and
memory accesses are often a performance bottleneck in modern applications.
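
To make that concrete for your use case: with dynamic-size matrices, each
Eigen::MatrixXd owns its own heap buffer, so an array of them is only an array
of small handles pointing to scattered memory. A single flat matrix (as Janos
suggested) or the tensor above keeps all grid matrices in one allocation. A
minimal sketch, with example values for N and num_gridpoints:

#include <Eigen/Dense>
#include <vector>

int main()
{
    const int N = 8;                 // example values
    const int num_gridpoints = 1000;

    // Each element allocates its own buffer -> data scattered over the heap.
    std::vector<Eigen::MatrixXd> scattered(num_gridpoints,
                                           Eigen::MatrixXd::Zero(N, N));

    // One allocation for everything: matrix i lives in columns [i*N, (i+1)*N).
    Eigen::MatrixXd contiguous = Eigen::MatrixXd::Zero(N, N * num_gridpoints);
    contiguous.block(0, 5 * N, N, N).setIdentity(); // update matrix i = 5 in place
}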

Cheers,
David

> On 29. Jul 2019, at 14:51, Michael Riesch <[email protected]> wrote:
> 
> Dear Janos,
> 
> Thank you for your answer. The approach to use Eigen data structures for the 
> complete grid sounds very promising -- I will try out this one.
> 
> Best regards,
> Michael
> 
> 
> On 07/24/2019 05:11 PM, Janos Meny wrote:
>> My idea for the second point was wrong, sorry for that. But I think you
>> could do the following: you could construct a matrix of type
>> 
>> MatrixXd storage(N, k*N); 
>> 
>> and then map the storage matrix like so 
>> 
>> auto matrix = storage.block(0, i*N, N, N); // (i+1)-th matrix for i smaller than k
>> 
>> Best regards
>> 
>> Janos
>> 
>> On Wed, Jul 24, 2019 at 16:59, Janos Meny <[email protected]> wrote:
>> Hey Michael,
>> 
>> I think you should be careful writing something like:
>> 
>> typedef Eigen::Matrix<double, N, N> my_matrix_t;
>> my_matrix_t *matrices = new my_matrix_t[num_gridpoints];
>> 
>> If you are using a C++ standard older than C++17, the alignment of the
>> pointer matrices will, I think, not necessarily be correct. One option is 
>> to use std::vector<my_matrix_t, Eigen::aligned_allocator<my_matrix_t>>.
>> 
>> Concerning the second point, one idea could be to write your own stack
>> allocator (see for example https://howardhinnant.github.io/stack_alloc.html)
>> and then pass it as a second argument to std::vector.
>> But I don't think the performance benefits will be worth the hassle.
>> 
>> Best regards
>> 
>> Janos
>> 
>> On Wed, Jul 24, 2019 at 16:34, Michael Riesch <[email protected]> wrote:
>> Hello all,
>> 
>> In my simulation tool I have a 1D grid, and for each grid point there is
>> an equally sized square matrix. For each matrix a certain update
>> operation is subsequently carried out. Currently, the matrix size is 
>> known at compile time, i.e., something like
>> 
>> typedef Eigen::Matrix<double, N, N> my_matrix_t;
>> 
>> Using this type, I allocate the memory using
>> 
>> my_matrix_t *matrices = new my_matrix_t[num_gridpoints];
>> 
>> Now I would like to address matrices whose sizes are only known at run 
>> time, i.e.,
>> 
>> typedef Eigen::Matrix<double, Dynamic, Dynamic> my_matrix_t;
>> 
>> (but still square). The allocation procedure remains the same and the
>> code seems to work. However, I assume that the array "matrices" contains 
>> only pointers to each individual matrix's storage and the overall
>> performance will degrade as the memory has to be collected from random 
>> places before the operation on each matrix can be carried out.
>> 
>> Does anyone have experience with such a use case? Is having a contiguous 
>> memory region actually beneficial or does it not make any difference? 
>> Can I allocate memory for the complete array of matrices so that I have 
>> a contiguous memory region and, if yes, how?
>> 
>> Thanks in advance for your comments!
>> 
>> Best regards,
>> Michael
>> 
>> 
>> 
>> 
> 
