> On Wed, 25 Jul 2007, leppert, jan wrote:
>> Second, will there be an implementation in MPB/Meep to use the GPU (for
>> example with CUDA from NVIDIA) for the simulation, and how large would the
>> gain in simulation time be?
>
> We have no plans to implement Meep on CUDA or any other GPU, at least with
> the current state of the technology.  I have little personal interest in
> card-specific programming for limited low-level architectures.

Could you explain for newbies why Meep wouldn't benefit that much from GPUs?

> MPB could potentially benefit more easily, because almost all of its
> performance depends only on FFTW and the BLAS.  For example, if you had a
> BLAS library for CUDA, that would speed things up somewhat (I have no idea
> what the speedup would be).  Currently, GPUs are only for single-precision
> calculations, as far as I know, so you would have to forgo double
> precision.
>
> Steven
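The single- vs. double-precision trade-off Steven mentions can be illustrated with a toy accumulation. The sketch below is not Meep/MPB code; it is pure Python that emulates IEEE-754 single precision with the `struct` module, and shows increments that a double-precision sum tracks but a single-precision sum loses entirely:

```python
import struct

def to_f32(x):
    """Round a Python float (double precision) to IEEE-754 single precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

INCREMENT = 1e-8   # below half an ulp of 1.0 in single precision (~6e-8)
STEPS = 1_000_000

double_sum = 1.0
single_sum = to_f32(1.0)
for _ in range(STEPS):
    double_sum += INCREMENT
    single_sum = to_f32(single_sum + to_f32(INCREMENT))

print(double_sum)  # ~1.01: the double-precision sum accumulates the increments
print(single_sum)  # 1.0 exactly: each increment rounds away in single precision
```

In a real FDTD or eigensolver run the per-step quantities are rarely this pathological, but the same rounding behavior is why forgoing double precision can matter over many time steps or iterations.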

I'm really not an expert in scientific computing, but according to
this article from Intel, GPUs are at least three times faster than
CPUs for FFTs. Maybe I'm wrong, but it looks to me (from the article)
like they handle double precision, too.
http://dx.doi.org/10.1145/1815961.1816021

Miguel

_______________________________________________
meep-discuss mailing list
[email protected]
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss
