On Wed, Nov 12, 2014 at 02:33:43PM +0100, Thomas Schwinge wrote:
> Months later, with months' worth of GCC internals experience, I now came
> to realize that maybe this has not actually been a useful thing to do
> (and likewise for the GIMPLE_OACC_KERNELS also added later on,
> <http://news.gmane.org/find-root.php?message_id=%3C1393579386-11666-1-git-send-email-thomas%40codesourcery.com%3E>).
> All handling of GIMPLE_OACC_PARALLEL and GIMPLE_OACC_KERNELS closely
> follows that of GIMPLE_OMP_TARGET's GF_OMP_TARGET_KIND_REGION, with only
> minor divergence.  What I did not understand back then, what had not
> been obvious to me, was that the underlying structure of all those codes will
> in fact be the same (as already made apparent by using the one
> GIMPLE_OMP_TARGET for all of: OpenMP target offloading regions, OpenMP
> target data regions, OpenMP target data maintenance "executable"
> statements), and any "customization" then happens via the clauses
> attached to GIMPLE_OMP_TARGET.

I'm fine with merging them into kinds, just please make sure we'll have
some tests on mixing OpenMP and OpenACC directives in the same functions
(it is fine if we error out on combinations that don't make sense or are
too hard to support).
E.g. supporting the OpenACC counterpart of #pragma omp target inside
of #pragma omp parallel or #pragma omp task should presumably be fine;
supporting OpenACC inside of #pragma omp target should IMHO just be
diagnosed; mixing target data and OpenACC is generically hard to diagnose,
perhaps only at runtime; and supporting #pragma omp directives inside of
OpenACC regions is not needed (perhaps there are exceptions you want to
support?).

> So, sanity check: should we now merge GIMPLE_OACC_PARALLEL and
> GIMPLE_OACC_KERNELS into being "subtypes" of GIMPLE_OMP_TARGET (like
> GF_OMP_TARGET_KIND_REGION), as already done for
> GF_OMP_TARGET_KIND_OACC_DATA (like GF_OMP_TARGET_KIND_DATA), and
> GF_OMP_TARGET_KIND_OACC_UPDATE and
> GF_OMP_TARGET_KIND_OACC_ENTER_EXIT_DATA (like GF_OMP_TARGET_KIND_UPDATE).

Yep.

        Jakub
