> Given the benefits of massively parallel processors on the GPU and the
> push by NVIDIA amongst others, is there any plan to incorporate the use of
> OpenCL and/or CUDA in deal.II. I just thought it'd be a nice issue to
> discuss.

Like so many other things, this is mainly a question of manpower of which we 
have little.

But there's also the question of what operations would be worthwhile to put on 
a GPU. GPUs are really really good at a small number of operations, namely 
operating on large (or many smaller) dense matrices and vectors. This is not 
typically what one does in unstructured/adaptive finite element computations 
with implicit solvers, however, unless you use high-order methods (say, of 
order 8 or 10). So I'm not entirely sure how much one would gain.

There is one matrix class in deal.II, the ChunkSparseMatrix, that stores its 
elements not one by one but in whole (small, dense) blocks of the matrix at 
once. This is the class that would probably benefit most from experiments 
with GPUs. It would certainly be interesting to see someone try this class 
with, say, OpenCL or CUDA.
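To make the idea concrete, here is a minimal sketch of the storage scheme -- note that this is NOT deal.II's actual ChunkSparseMatrix interface, just an illustration of the chunk idea (all names here are made up): a CSR-like layout where each stored entry is a chunk_size x chunk_size dense block, so that a matrix-vector product decomposes into many small dense block products, which is exactly the kind of regular work one could hand to a GPU kernel:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical chunk-based sparse matrix (not deal.II's real class).
// Instead of storing scalar entries individually (plain CSR), we store
// small dense chunks: stored "entry" k is a chunk_size x chunk_size
// block. The inner dense loops are the GPU-friendly part.
struct ChunkMatrix {
  std::size_t chunk_size;              // edge length of each dense chunk
  std::size_t n_block_rows;            // number of chunk rows
  std::vector<std::size_t> row_start;  // CSR-style pointers into col_index
  std::vector<std::size_t> col_index;  // chunk-column index of each chunk
  std::vector<double> values;          // chunks stored densely, row-major

  // y = A * x, with x and y of length n_block_rows * chunk_size
  // (assuming a square chunk layout for simplicity).
  std::vector<double> vmult(const std::vector<double> &x) const {
    const std::size_t c = chunk_size;
    std::vector<double> y(n_block_rows * c, 0.0);
    for (std::size_t br = 0; br < n_block_rows; ++br)
      for (std::size_t k = row_start[br]; k < row_start[br + 1]; ++k) {
        const double *chunk = &values[k * c * c];
        const std::size_t bc = col_index[k];
        // Dense chunk times vector slice: regular, batchable work
        // that maps naturally onto one GPU thread block per chunk.
        for (std::size_t i = 0; i < c; ++i)
          for (std::size_t j = 0; j < c; ++j)
            y[br * c + i] += chunk[i * c + j] * x[bc * c + j];
      }
    return y;
  }
};
```

On a GPU one would launch the outer loops in parallel (one work-group per chunk row, say) and keep the chunk in shared/local memory; the point is only that the chunked storage turns an irregular sparse product into a batch of small dense ones.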

W.

-------------------------------------------------------------------------
Wolfgang Bangerth                email:            [email protected]
                                 www: http://www.math.tamu.edu/~bangerth/

_______________________________________________
dealii mailing list http://poisson.dealii.org/mailman/listinfo/dealii