Dear Miguel!

MEEP, or FDTD in general, would indeed benefit a lot from GPUs.
I tried it once: there was a freeware solution called "fastFDTD" that used 
nVidia GPUs. The speed was incredibly high; you gain a factor of 10 in speed 
very easily. But the software was completely unstable, full of bugs, and had 
no support, so I went back to MEEP.
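
The reason FDTD maps so well onto GPUs is that every grid point is updated
independently from the same small stencil. Just to illustrate the idea, here
is a minimal sketch of a 1D E-field update kernel in CUDA (the names, sizes,
and coefficient are made up, and a real code also needs the H update, sources,
and boundaries):

/* fdtd1d.cu -- minimal sketch of a 1D FDTD E-field update on the GPU.
 * Names, sizes, and the coefficient are made up for illustration; a
 * real code also needs the H update, sources, and boundaries. */
#include <cstdio>
#include <cuda_runtime.h>

__global__ void update_e(float *ez, const float *hy, float c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i > 0 && i < n)                      /* every interior point is independent */
        ez[i] += c * (hy[i] - hy[i - 1]);    /* curl of H drives E */
}

int main()
{
    const int n = 1 << 20;                   /* hypothetical grid size */
    float *ez, *hy;
    cudaMalloc(&ez, n * sizeof(float));
    cudaMalloc(&hy, n * sizeof(float));
    cudaMemset(ez, 0, n * sizeof(float));
    cudaMemset(hy, 0, n * sizeof(float));

    for (int t = 0; t < 1000; ++t)           /* time-stepping loop */
        update_e<<<(n + 255) / 256, 256>>>(ez, hy, 0.5f, n);

    cudaDeviceSynchronize();
    printf("done: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(ez);
    cudaFree(hy);
    return 0;
}

One kernel launch advances the whole grid by one time step, and threads only
share stencil neighbors; that is exactly the kind of workload GPUs are built for.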

In my opinion, CUDA is a very powerful platform, and I would be among the 
first to switch to a solution like MEEP that uses GPUs.

Still, I see two problems with CUDA:
1) At the moment there is a strong push from nVidia to promote CUDA, 
especially for scientific computing, but the future is uncertain. nVidia may 
change its mind, and in two years you could be left with a program written 
for an obsolete technology.

2) There are also several technological limitations. You are limited by the 
memory on the graphics card (1.5 or 2 GB per card at the moment), so such a 
solution is only usable for small simulations. You can combine up to four 
graphics cards on one mainboard (yes, I know there are mainboards with up to 
six PCIe 16x slots, but that doesn't matter) using SLI, but I have no idea 
how much effort this takes for the programmers or how it affects simulation 
performance. Nevertheless, more than 8 GB per computer is, at the moment, out 
of reach. And if you then try to link several computers over a LAN, as is 
already possible now, things get really complicated and you are limited by 
the speed of your network.
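
By the way, you can check the per-card limit yourself through the CUDA 
runtime; a minimal sketch (compile with nvcc):

/* meminfo.cu -- query the per-card memory limit with the CUDA runtime. */
#include <cstdio>
#include <cuda_runtime.h>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int d = 0; d < count; ++d) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, d);
        /* totalGlobalMem is the hard per-card limit described above */
        printf("device %d: %s, %.1f GB\n", d, prop.name,
               prop.totalGlobalMem / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}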

Hope this answer helps!
Best regards,
Roman



-----Original Message-----
From: [email protected] 
[mailto:[email protected]] On behalf of Miguel Rubio-Roy
Sent: Friday, 25 June 2010 10:47
To: [email protected]
Subject: Re: [Meep-discuss] [MPB-discuss] Multicore-CPU and GPUlocalhost/

> On Wed, 25 Jul 2007, leppert, jan wrote:
>> Second, will there be an implementation in mpb/meep to use the GPU (for
>> example with CUDA from nvidia) for the simulation, and how much would
>> the gain in simulation time be?
>
> We have no plans to implement Meep on CUDA or any other GPU, at least with
> the current state of the technology.  I have little personal interest in
> card-specific programming for limited low-level architectures.

Could you explain for newbies why Meep wouldn't benefit that much from GPUs?

> MPB could potentially benefit more easily, because almost all of its
> performance depends only on FFTW and the BLAS.  For example, if you had a
> BLAS library for CUDA, that would speed things up somewhat (I have no idea
> what the speedup would be).  Currently, GPUs are only for single-precision
> calculations, as far as I know, so you would have to forgo double
> precision.
>
> Steven
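
For what it's worth, nVidia does ship a BLAS for CUDA (cuBLAS). I can't say
how much it would help MPB, but a single-precision matrix multiply through it
looks roughly like this sketch (untested; the size is made up, and you link
with -lcublas):

/* cublas_sgemm.cu -- sketch of a single-precision matrix multiply via
 * cuBLAS; the matrix size is made up. Link with -lcublas. */
#include <cuda_runtime.h>
#include <cublas_v2.h>

int main()
{
    const int n = 1024;                        /* hypothetical size */
    const size_t bytes = (size_t)n * n * sizeof(float);
    float *a, *b, *c;
    cudaMalloc(&a, bytes);
    cudaMalloc(&b, bytes);
    cudaMalloc(&c, bytes);
    cudaMemset(a, 0, bytes);
    cudaMemset(b, 0, bytes);

    cublasHandle_t h;
    cublasCreate(&h);
    const float alpha = 1.0f, beta = 0.0f;
    /* C = alpha*A*B + beta*C, all single precision -- note Steven's
     * caveat about having to forgo double precision */
    cublasSgemm(h, CUBLAS_OP_N, CUBLAS_OP_N, n, n, n,
                &alpha, a, n, b, n, &beta, c, n);
    cudaDeviceSynchronize();

    cublasDestroy(h);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}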

I'm really not an expert on scientific computing, but according to
this article from Intel, GPUs are at least three times faster than
CPUs for FFT. Maybe I'm wrong, but it looks to me (from the article)
like they do double precision, too:
http://dx.doi.org/10.1145/1815961.1816021
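
And for comparison with FFTW, a single-precision transform through nVidia's
cuFFT looks roughly like this (again only a sketch with a made-up size; link
with -lcufft):

/* cufft_demo.cu -- sketch of a 1D single-precision FFT with cuFFT;
 * the transform size is made up. Link with -lcufft. */
#include <cuda_runtime.h>
#include <cufft.h>

int main()
{
    const int n = 1 << 20;                          /* hypothetical size */
    cufftComplex *data;
    cudaMalloc(&data, n * sizeof(cufftComplex));
    cudaMemset(data, 0, n * sizeof(cufftComplex));

    cufftHandle plan;
    cufftPlan1d(&plan, n, CUFFT_C2C, 1);            /* plan once, like FFTW */
    cufftExecC2C(plan, data, data, CUFFT_FORWARD);  /* in-place forward FFT */
    cudaDeviceSynchronize();

    cufftDestroy(plan);
    cudaFree(data);
    return 0;
}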

Miguel

_______________________________________________
meep-discuss mailing list
[email protected]
http://ab-initio.mit.edu/cgi-bin/mailman/listinfo/meep-discuss
