On Fri, Jun 05, 2020 at 03:13:42PM -0400, y2s1982 . wrote:
> > > LLVM's repository for OMPD development is at this GitHub repo
> > > <https://github.com/OpenMPToolsInterface/LLVM-openmp/tree/ompd-tests>,
> > > under the ompd-tests branch.
> > > The OMPD documentation
> > > <https://www.openmp.org/spec-html/5.0/openmpse43.html#x242-16540005.1>
> > > states that omp-tools.h must be available.
> >
> > I believe it is
> > https://github.com/OpenMP/sources/blob/master/include/omp-tools.h
> >
> > It should be referenced somewhere from openmp.org but possibly
> > isn't.
> >
> 
> Hmm, does this mean the file should be, or already is, imported via autoconf?

No.  We need our own version of omp-tools.h in the libgomp/ directory,
with roughly the same content as the one from the OpenMP/sources repo and
cross-checked against what the LLVM omp-tools.h.var contains, but with the
omp.h.in code formatting, a similar style of multiple-inclusion guard, the
license boilerplate, the __GOMP_NOTHROW macros, etc.

I think that should be your first coding task.  The next one would be to
tweak libgomp/Makefile.am so that it also builds the libgompd shared
library, initially containing just a single function or two from the
interfaces; then you'd start adding further ones.
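
As an illustration of that "single function or two" starting point, the
initial libgompd source could be as small as this (a hypothetical sketch;
the placeholder typedefs would really come from the new omp-tools.h, and
ompd_get_api_version is one of the simplest entry points in the spec):

  /* ompd-init.c sketch (hypothetical file name).  */
  #include <stddef.h>
  #include <stdint.h>

  typedef int64_t ompd_word_t;   /* placeholder; the real typedef belongs
                                    in the new omp-tools.h */
  typedef enum { ompd_rc_ok = 0, ompd_rc_bad_input = 3 } ompd_rc_t;

  /* Report the OMPD API version; 201811 corresponds to OpenMP 5.0.  */
  ompd_rc_t
  ompd_get_api_version (ompd_word_t *version)
  {
    if (version == NULL)
      return ompd_rc_bad_input;
    *version = 201811;
    return ompd_rc_ok;
  }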

And yes, we'll also need to test it, which can be done using gdb with some
Python scripts that will load either the libgomp.so shared library directly
or perhaps some wrapper library that will make it usable from Python.
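
To sketch the idea, a minimal standalone smoke test could look like this
(plain C rather than the eventual gdb/Python setup; the libgompd.so file
name and the symbol it looks up are assumptions):

  /* Build with: gcc test-libgompd.c -ldl  */
  #include <dlfcn.h>
  #include <stdint.h>
  #include <stdio.h>

  int
  main (void)
  {
    /* Load the new shared library by hand, like a wrapper would.  */
    void *h = dlopen ("./libgompd.so", RTLD_NOW);
    if (h == NULL)
      {
        fprintf (stderr, "dlopen: %s\n", dlerror ());
        return 1;
      }
    int (*get_version) (int64_t *)
      = (int (*) (int64_t *)) dlsym (h, "ompd_get_api_version");
    if (get_version != NULL)
      {
        int64_t version = 0;
        int rc = get_version (&version);
        printf ("rc %d, version %lld\n", rc, (long long) version);
      }
    dlclose (h);
    return 0;
  }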

> This week, I spent more time on understanding libgomp and gimple.
> For libgomp, I started from GOMP_parallel() and followed a chain of
> function calls, macros, and various structs used. The structs were the most
> interesting of the three, and I still feel I have much to digest.
> 
> I also spent a few days looking at gimple. Just to be sure, is the
> following progression correct?
> .gimple -> ... -> .omplower -> .ompexp -> .ompexpssa2 -> ... -> .optimized
> The style of code seems to transition a bit between omplower and ompexp.
> 
> Between the gimple documentation and just reading through the gimple files,
> I think I have a rough understanding of how things go. I do have a few
> questions, likely with many more to come:
> 1. When compiling the for-1.C test file with -fdump-tree-all, I came
> across a variable declaration like this in the .gimple file: struct I & D.3626;
> In C++, I would think that's some sort of reference, but then it would
> need to be initialized at its declaration.
> The declaration disappears in .omplower, though the variable is still used.
> 
> 2. When compiling taskloop-5.C with the same flag, I saw in the .gimple
> file a for loop that seems to appear twice.
> 
> Original:
>   #pragma omp taskloop firstprivate (b, f)
>     for (int i = 0; i < 30; i++)
>       {
> 
> .gimple:
>   #pragma omp taskloop private(D.2631)
>   for (D.2631 = 0; D.2631 < 30; D.2631 = D.2631 + 1)
>     {
>       {
>         #pragma omp taskloop firstprivate(f) firstprivate(b) private(i)
>           {
>             #pragma omp taskloop
>             for (i = 0; i < 30; i = i + 1)
>               {
> I was hoping someone could explain why this happens so I can better
> understand how the problem gets broken down.

This is just a temporary way to express the taskloop complexity until
ompexp is done with it.  For taskloop, we need to be able to compute the
number of iterations in the encountering thread, which is what the outer
taskloop represents.  Then it behaves as an explicit task (many of them,
in fact), for which we need its data-sharing effects; that is the middle
construct, which really is the same GIMPLE as an explicit task, just with
an extra flag.  And finally we need something in the task that will
iterate over all the iterations that were assigned to the task, make sure
the iteration variable is properly initialized, etc., and that is
represented by the innermost taskloop.
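
If it helps, here is a rough C-level intuition for the division of labor
between the three nested constructs (only an illustration, not what GCC or
libgomp actually emit; the grainsize choice is made up):

  #include <stdio.h>

  /* Innermost construct: one task iterating over its assigned subrange.  */
  static void
  task_body (long lo, long hi)
  {
    for (long i = lo; i < hi; i++)
      printf ("iteration %ld\n", i);
  }

  int
  main (void)
  {
    long n = 30;          /* outer construct: iteration count computed
                             in the encountering thread */
    long grainsize = 10;  /* made-up chunking */
    /* Middle construct: one explicit task per chunk, carrying the
       data-sharing clauses.  */
    for (long lo = 0; lo < n; lo += grainsize)
      {
        long hi = lo + grainsize < n ? lo + grainsize : n;
        task_body (lo, hi);   /* in reality a task is spawned here */
      }
    return 0;
  }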

What you care more about is what the ompexp dump (or the optimized dump)
looks like.
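
For instance (a command-line sketch; the flags are as in your
-fdump-tree-all run, just narrowed to the two interesting dumps, and you
may need the usual testsuite include paths for taskloop-5.C):

  g++ -fopenmp -c -fdump-tree-ompexp -fdump-tree-optimized taskloop-5.C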

        Jakub
