> -----Original Message-----
> From: gcc-patches-ow...@gcc.gnu.org [mailto:gcc-patches-
> ow...@gcc.gnu.org] On Behalf Of Jakub Jelinek
> Sent: Friday, February 7, 2014 9:03 AM
> To: Iyer, Balaji V
> Cc: 'Jason Merrill'; 'Jeff Law'; 'Aldy Hernandez'; 'gcc-patches@gcc.gnu.org';
> 'r...@redhat.com'
> Subject: Re: [PING] [PATCH] _Cilk_for for C and C++
> 
> On Wed, Feb 05, 2014 at 05:27:26AM +0000, Iyer, Balaji V wrote:
> >     Attached, please find a fixed patch (diff.txt) that will do as you
> requested (model _Cilk_for like a #pragma omp parallel for). Along with this,
> I have also attached two Changelog entries (1 for C and 1 for C++).
> >     It passes all the tests on my x86_64 box (both 32 and 64 bit modes)
> and does not affect any other tests in the testsuite.
> >     Is this Ok for trunk?
> 
> A step in the right direction, but I still see issues just from looking at the
> *.gimple dump:
> 
> For the first testcase, I see:
>             iter = std::vector<int>::begin (array); [return slot optimization]
>             iter.1 = iter;
>             D.13615 = std::vector<int>::end (array); [return slot optimization]
>             try
>               {
>                 retval.0 = __gnu_cxx::operator-<int*, std::vector<int> > (&D.13615, &iter);
>               }
>             finally
>               {
>                 D.13615 = {CLOBBER};
>               }
>             #pragma omp parallel schedule(cilk-for grain,0) if(retval.0)
>             #shared(iter.1) shared(D.13632) shared(D.13615) shared(iter)
>               {
>                 difference_type retval.2;
>                 const difference_type D.13633;
>                 int D.13725;
>                 struct __normal_iterator & D.13726;
>                 bool retval.3;
>                 int & D.13728;
>                 int D.13729;
>                 int & D.13732;
> 
>                 iter = iter.1;
>                  private(D.13631)
>                 _Cilk_for (D.13631 = 0; D.13631 != retval.2; D.13631 = D.13631 + 1)
>                                       D.13725 = D.13631 - D.13632;
> 
> So, the issues I see:
> 1) what is iter.1, why do you have it at all, and, after all, the iterator is 
> a class
> that needs to be constructed/destructed in the general way, so creating any
> further copies of something is both costly and undesirable
> 

Well, to get the loop count, I need to compute it using operator- (array.end (), &iter).

Now, if I do that, iter has already been set by the loop's init statement, so I need to reset it back to the original value (array.begin ()) inside the child function. That is why I introduced the temporary variable iter.1.
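To make that concrete, here is a rough hand-written C++ sketch of the scheme I am aiming for (saved_begin, count and the plain for loop are illustrative stand-ins, not the actual gimplifier output):

#include <vector>
#include <cstddef>

void zero_all (std::vector<int> &array)
{
  std::vector<int>::iterator iter = array.begin ();
  std::vector<int>::iterator saved_begin = iter;   // plays the role of iter.1

  std::ptrdiff_t count = array.end () - iter;      // loop count via operator-

  for (std::ptrdiff_t i = 0; i < count; ++i)
    {
      std::vector<int>::iterator it = saved_begin; // reset to the saved begin ()
      it += i;                                     // advance to this iteration's element
      *it = 0;
    }
}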



> 2) the schedule clause doesn't belong on the omp parallel, but on the
> _Cilk_for
> 

What if grain is a variable, say "x"? If I put the clause on the _Cilk_for, won't the lowering create omp_data_i->x? That would not be correct; it should just emit "x". But let me look into this to make sure...
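The kind of source I am worried about is roughly this (a minimal sketch; compute_grain, a and n are made-up placeholders):

extern int compute_grain (void);

void scale (int *a, int n)
{
  int x = compute_grain ();   /* grain comes from a variable, not a literal */

  /* My worry: with the clause hanging off the _Cilk_for, the lowering
     would rewrite the use of "x" into omp_data_i->x instead of just "x".  */
  #pragma cilk grainsize = x
  _Cilk_for (int i = 0; i < n; ++i)
    a[i] *= 2;
}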

> 3) iter should be firstprivate, and there should be no explicit private var 
> with
> assignment during gimplification, just handle it like any other firstprivate
> during omp lowering
> 

Do you mean I should manually insert a firstprivate clause for iter, rather than have the compiler figure out that it is shared?
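For comparison, this is how I read the plain-OpenMP analogue of what you are asking for (again only a sketch, with a stand-in loop body):

#include <vector>
#include <cstddef>

void zero_all_omp (std::vector<int> &array)
{
  std::vector<int>::iterator iter = array.begin ();
  std::ptrdiff_t count = array.end () - iter;

  /* firstprivate: every thread gets its own copy of iter, initialized from
     the value it had on entry to the parallel region, instead of all
     threads sharing (and racing on) a single iterator object.  */
  #pragma omp parallel for firstprivate(iter)
  for (std::ptrdiff_t i = 0; i < count; ++i)
    *(iter + i) = 0;
}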


> 4) the printing looks weird for _Cilk_for, as I said earlier, the clauses 
> should
> probably be printed after the closing ) of _Cilk_for rather than after nothing
> on the previous line; also, there is no {} printed around the _Cilk_for body
> and the next line is weirdly indented
> 

OK, I will look into this.

> But more importantly, if I create some testcase with a generic C++
> conforming iterator (copied over from libgomp/testsuite/libgomp.c++/for-
> 1.C), as in the second testcase, the *.gimple dump shows that _Cilk_for is 
> still
> around the #pragma omp parallel.
> The intent of the second testcase is that you can really eyeball all the
> ctors/dtors/copy ctors etc. that should happen, and for -O0 shouldn't be
> really inlined.
> 
>       Jakub
