[Bug middle-end/40028] RFE - Add GPU acceleration library to gcc

2021-08-27 Thread pinskia at gcc dot gnu.org via Gcc-bugs
https://gcc.gnu.org/bugzilla/show_bug.cgi?id=40028

--- Comment #5 from Andrew Pinski  ---
GCC supports offloading to some GPUs now; I don't know whether that is enough to
close this bug. It has supported this feature for a few years now.
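
For example, offloading works through OpenMP (and OpenACC) when GCC is built
with an offload target such as nvptx. A minimal sketch (the file name, array
size, and expected output here are illustrative, not from this report):

/* saxpy.c - offload a loop to the GPU with GCC's OpenMP support.
   Assumes a GCC build configured with the nvptx offload target.
   Compile: gcc -O2 -fopenmp -foffload=nvptx-none saxpy.c */
#include <stdio.h>

#define N 1000000

int main(void)
{
    static float x[N], y[N];   /* static: too large for the stack */
    const float a = 2.0f;

    for (int i = 0; i < N; i++) {
        x[i] = 1.0f;
        y[i] = 2.0f;
    }

    /* Map the arrays to the device and run the loop there. */
    #pragma omp target teams distribute parallel for map(to: x) map(tofrom: y)
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[0] = %f\n", y[0]);   /* expect 4.000000 */
    return 0;
}

If no usable device is present at run time, the same binary falls back to
running the region on the host.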

[Bug middle-end/40028] RFE - Add GPU acceleration library to gcc

2009-10-07 Thread rob1weld at aol dot com


--- Comment #4 from rob1weld at aol dot com  2009-10-07 11:21 ---
(In reply to comment #1)
> Yes, GPU libraries would be nice, but this needs a lot of work to begin with.
> First you have to support the GPUs. This also amounts to doubling the
> support. If you really want them, since this is open source, start contributing.


Here is a contribution from my buds at NVidia ...


Quote from the Article:

... support for native execution of C++. For the first time in history, a GPU
can run C++ code with no major issues or performance penalties ...


nVidia GT300's Fermi architecture unveiled: 512 cores, up to 6GB GDDR5 
http://www.brightsideofnews.com/news/2009/9/30/nvidia-gt300s-fermi-architecture-unveiled-512-cores2c-up-to-6gb-gddr5.aspx


That should be more than 3/4 of the job done; it only took 6 months.

Rob


[Bug middle-end/40028] RFE - Add GPU acceleration library to gcc

2009-05-20 Thread rob1weld at aol dot com


--- Comment #3 from rob1weld at aol dot com  2009-05-20 13:10 ---
> Some of the newest cards will run at over a PetaFLOP ...
I meant a TeraFLOP :(.


[Bug middle-end/40028] RFE - Add GPU acceleration library to gcc

2009-05-18 Thread rob1weld at aol dot com


--- Comment #2 from rob1weld at aol dot com  2009-05-18 17:36 ---
(In reply to comment #1)
> Yes, GPU libraries would be nice, but this needs a lot of work to begin with.
> First you have to support the GPUs. This also amounts to doubling the
> support. If you really want them, since this is open source, start contributing.

I'm planning a full hardware upgrade in the coming months and intend
to get an expensive Graphics Card to try this. Some of the newest
cards will run at over a PetaFLOP (only for embarrassingly parallel
code - http://en.wikipedia.org/wiki/Embarrassingly_parallel).
Some of the newest Motherboards will accept _FOUR_ Graphics Cards.

It seems less expensive to use GPUs and recompile a few apps than to
purchase a Motherboard with multiple CPUs or to find a chip faster
than the 'i7'.

If this endeavor merely doubled our Computer's speed it would be well
worth doing. I suspect that Fortran's vector math could be converted
easily and would benefit greatly (see the sketch below).
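
As a sketch of what such a conversion could look like (using the
directive-based OpenACC support that GCC later gained via -fopenacc;
the file name and sizes are illustrative), the only change to an
existing vector loop is the annotation:

/* dotprod.c - converting existing vector math for the GPU by
   annotation alone.  Assumes an offload-enabled GCC build.
   Compile: gcc -O2 -fopenacc dotprod.c */
#include <stdio.h>

#define N 1000000

int main(void)
{
    static double x[N], y[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++) {
        x[i] = 0.5;
        y[i] = 2.0;
    }

    /* One directive turns the loop into a GPU kernel; the compiler
       handles the data movement and the reduction. */
    #pragma acc parallel loop copyin(x, y) reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += x[i] * y[i];

    printf("dot product = %f\n", sum);   /* expect 1000000.000000 */
    return 0;
}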

Look for this feature in gcc in a few years (sooner with everyone's help).

Rob


[Bug middle-end/40028] RFE - Add GPU acceleration library to gcc

2009-05-05 Thread pinskia at gcc dot gnu dot org


--- Comment #1 from pinskia at gcc dot gnu dot org  2009-05-05 16:25 ---
Yes, GPU libraries would be nice, but this needs a lot of work to begin with.
First you have to support the GPUs. This also amounts to doubling the
support. If you really want them, since this is open source, start contributing.

