Nikhil Patil <nikhilpatil222...@gmail.com> asked

> > I'm fairly new to the world of compilers and trying to understand how they
> > work in more depth. Recently, I started exploring the idea of *parallelizing
> > the internal steps of compilation* — such as parsing, code generation, and
> > optimization — instead of the usual serial approach, and potentially
> > leveraging *GPU acceleration* (like CUDA) for this.
> >
> > I was wondering if this concept has been explored in GCC, or if there are
> > any existing resources, discussions, or directions I could look into?
> >
> > Apologies if this isn’t the right channel — I’m still getting familiar with
> > the community and couldn’t find another communication method. Please let me
> > know if there’s a better place to ask such questions.


First, some versions of GCC can already offload code to GPUs (via OpenMP and OpenACC).

Then GCC (I mean cc1plus, for C++ compilation) is still, currently and
unfortunately, a single-threaded application. In my experience this is not a
practical issue (use make -j). In a few cases some C++ source file might take
too long to compile; in those cases the file can be split into several smaller
ones, either manually or assisted by your own open-source GCC plugin (improved
from https://arxiv.org/abs/1109.0779 and
https://github.com/bstarynk/bismon, which I no longer maintain).

(But GNU gold is multithreaded, which is relevant for link-time optimization.)

Finally, GCC compilation time is acceptable compared to the competition. But
you could try Clang/LLVM if you wish.

(GCC compilation time is bad on pathological C++ template code.)

Remember that GCC is a huge compiler (about ten million lines of code). You
would have to improve a lot of it to reach your goals (and incremental
development might not be practical).

In some cases, generating (dlopen-ed) code at runtime can be helpful. 

For example, you could use partial evaluation techniques and then use libgccjit
or GNU lightning to generate "better" code at runtime.

But my belief is that your goal is excessively ambitious and could take you
several years of work, perhaps ten. (Who would fund that task?)

In many cases, parallel compiling on (and for) a supercomputer could be more
effective.

And the GPU is not very useful for compilation-related tasks. And in the
details, GPU programming (OpenCL, CUDA) is very hardware specific.

Remember also that some system header files contain asm instructions, and that
some GCC builtins generate asm (and such builtins can appear in system header
files).

Also, the C (or C++, or even GIMPLE) code that GCC compiles can be generated
with the help of external programs (you could experiment with generating it,
even from OCaml, Python, or Common Lisp code).
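To illustrate that last point with a toy sketch (the generator name emit_dot
is my invention): an external Python program can emit C source, here an
unrolled dot product over N elements, which GCC then compiles like any
hand-written code.

```python
def emit_dot(n):
    # Emit a C function computing the dot product of two n-element
    # arrays, fully unrolled at generation time.
    body = " + ".join(f"a[{i}]*b[{i}]" for i in range(n))
    return (f"double dot{n}(const double *a, const double *b)"
            f" {{ return {body}; }}\n")

print(emit_dot(4))
# -> double dot4(const double *a, const double *b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2] + a[3]*b[3]; }
```

Write the string to a .c file and feed it to gcc; the same scheme works for
emitting GIMPLE dumps if you target GCC's intermediate representation instead.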


-- 
Basile STARYNKEVITCH                            <bas...@starynkevitch.net>
8 rue de la Faïencerie                       http://starynkevitch.net/Basile/  
92340 Bourg-la-Reine                         https://github.com/bstarynk
France                                https://github.com/RefPerSys/RefPerSys
