Re: omp workshare (PR35423) & beginner questions

2008-04-20 Thread Vasilis Liaskovitis
Hi,

Thanks for the help. Some more questions:

1) I am trying to workshare reduction operators, currently working on
SUM.

      INTEGER N
      REAL AA(N), MYSUM
!$OMP PARALLEL
!$OMP WORKSHARE
      MYSUM = SUM(AA)
!$OMP END WORKSHARE
!$OMP END PARALLEL

To compute SUM, the scalarizer creates a temporary variable (let's call
it val2) for accumulating the sum.

In order to workshare the sum, I am attempting to create an OMP_FOR loop
with an omp reduction clause for the temporary val2. In pseudocode this
would be

!$OMP DO REDUCTION(+:val2)
   DO I=1,N
      val2 = val2 + AA(I)
   END DO
!$OMP END DO

The problem is that I get an error from the gimplifier: "reduction
variable val.2 is private in outer context". I think this is because the
parallel region assumes val2 is a private variable.

I have tried creating an extra explicit 'shared' clause for val2:

sharedreduction = build_omp_clause (OMP_CLAUSE_SHARED);
OMP_CLAUSE_DECL (sharedreduction) = reduction_variable;

where reduction_variable is the tree node for val2. I am attaching this
clause to the clauses of the OMP_PARALLEL construct.

Doing this breaks the following assertion in gimplify.c:omp_add_variable

  /* The only combination of data sharing classes we should see is
     FIRSTPRIVATE and LASTPRIVATE.  */
  nflags = n->value | flags;
  gcc_assert ((nflags & GOVD_DATA_SHARE_CLASS)
              == (GOVD_FIRSTPRIVATE | GOVD_LASTPRIVATE));

I think this happens because val2 is first added with GOVD_SHARED |
GOVD_EXPLICIT flags because of my shared clause, and later re-added
(from the default parallel construct handling?) with GOVD_LOCAL |
GOVD_SEEN attributes.

If I ignore this, another assertion fails, in expr.c:

  /* Variables inherited from containing functions should have
     been lowered by this point.  */
  context = decl_function_context (exp);
  gcc_assert (!context
              || context == current_function_decl
              || TREE_STATIC (exp)
              /* ??? C++ creates functions that are not TREE_STATIC.  */
              || TREE_CODE (exp) == FUNCTION_DECL);

I guess val2 is not lowered properly? Ignoring this assertion as well
triggers an RTL error (mismatched machine modes, DImode vs. SFmode), so
something is definitely wrong.
Do I need to attach val2's tree node declaration somewhere else?

2) Again for the reduction operators, I would subsequently have one
thread perform the scalar assignment MYSUM = val2 using omp single. Is
there a better way? I don't think I can use the program-defined MYSUM as
the reduction variable inside the sum loop, because the rhs needs to be
evaluated before the lhs is assigned to.

3) gfc_check_dependency seems to be an appropriate helper function for
the dependence analysis of the statements inside the workshare block. If
you have other suggestions, let me know.

thanks,

- Vasilis

On Mon, Apr 14, 2008 at 6:47 AM, Jakub Jelinek <[EMAIL PROTECTED]> wrote:
> Hi!
>
>
>  On Wed, Apr 09, 2008 at 11:29:24PM -0500, Vasilis Liaskovitis wrote:
>  > I am a beginner interested in learning gcc internals and contributing
>  > to the community.
>
>  Thanks for showing interest in this area!
>
>
>  > I have started implementing PR35423 - omp workshare in the fortran
>  > front-end. I have some questions - any guidance and suggestions are
>  > welcome:
>  >
>  > - For scalar assignments, wrapping them in OMP_SINGLE clause.
>
>  Yes, though if there are a couple of adjacent scalar assignments which don't
>  involve function calls and won't take too long to execute, you want
>  to put them all into one OMP_SINGLE.  If the assignments may take long
>  because of function calls and there are several such ones adjacent,
>  you can use OMP_WORKSHARE.
>
>  Furthermore, for all statements, not just the scalar ones, you want to
>  do dependency analysis between all the statements within !$omp workshare,
>  and make OMP_SINGLE, OMP_FOR or OMP_SECTIONS and add OMP_CLAUSE_NOWAIT
>  to them where no barrier is needed.
>
>
>  > - Array/subarray assignments: For assignments handled by the
>  > scalarizer,  I now create an OMP_FOR loop instead of a LOOP_EXPR for
>  > the outermost scalarized loop. This achieves worksharing at the
>  > outermost loop level.
>
>  Yes, though on gomp-3_0-branch you actually could use collapsed OMP_FOR
>  loop too.  Just bear in mind that for best performance at least with
>  static OMP_FOR scheduling ideally the same memory (part of array in this
>  case) is accessed by the same thread, as then it is in that CPU's caches.
>  Of course that's not always possible, but if it can be done, gfortran
>  should try that.
>
>
>  > Some array assignments are handled by functions (e.g.
>  > gfc_build_memcpy_call generates calls to memcpy). For these, I believe
>  > we need to divide the arrays into chunks and have each thread call the
>  > builtin function on its own chunk. E.g. If we have the following call
>  > in a parallel workshare construct:
>  >
>  > memcpy(dst, src, len)
>  >
>  > I generate this pseudocode:
>  >
>  > {

looking for loongson support

2008-04-20 Thread Eric Fisher
hi

  Is there anyone working on loongson support in gcc? I notice that
loongson support has been added to the currently developing binutils.

  ericfisher


Re: Official GCC git repository

2008-04-20 Thread Bernie Innocenti
On Fri, 2008-04-18 at 21:07 +0200, Samuel Tardieu wrote:

> I think the mistake is to have them (git & hg) hosted on the same
> machine as svn. Having them on "hg.gcc.gnu.org" and "git.gcc.gnu.org"
> would allow splitting the load between machines (even if
> "hg.gcc.gnu.org"
> and "git.gcc.gnu.org" are the same machines originally).

I have not measured, but we're certainly not dealing with enough load
to require multiple dedicated servers.

-- 
  \___/
  |___|  Bernie Innocenti - http://www.codewiz.org/
   \___\ CTO OLPC Europe  - http://www.laptop.org/



Re: US-CERT Vulnerability Note VU#162289

2008-04-20 Thread Nicola Musatti

Rupert Wood wrote:

Nicola Musatti wrote:


_main   PROC

; 12   :     char * b = "0123456789";
; 13   :     for ( int l = 0; l < 1 << 30; ++l )
; 14   :         f(b, l);
; 15   : }

        xor     eax, eax
        ret     0
_main   ENDP


Note that it optimised away your whole program! It could blank out
f() because it never needed to call it.


That's true, although f() was still compiled to the equivalent of 
'return 0;'.

This can be made more evident by changing f() to

#include <iostream>

int f(char *buf, int len) {
    int res = 0;
    len = 1 << 30;
    if (buf + len < buf)
        res = 1;
    std::cout << res << '\n';
    return res;
}

The resulting f() amounts to

std::cout << 0 << '\n';
return 0;

which is still inlined into main().
Cheers,
Nicola
--
Nicola.Musatti  gmail  com
Home: http://nicola.musatti.googlepages.com/home
Blog: http://wthwdik.wordpress.com/



RE: US-CERT Vulnerability Note VU#162289

2008-04-20 Thread Rupert Wood
Nicola Musatti wrote:

> _main PROC
>
> ; 12   :  char * b = "0123456789";
> ; 13   :  for ( int l = 0; l < 1 << 30; ++l )
> ; 14   :  f(b, l);
> ; 15   : }
> 
>   xor eax, eax
>   ret 0
> _main ENDP

Note that it optimised away your whole program! It could blank out f() because 
it never needed to call it. Try marking it __declspec(dllexport) and you'll see 
it *does not* get optimised away.

That said, from my experiments VC will optimise f() away to 0 when inlining it.

Rupert.




Re: US-CERT Vulnerability Note VU#162289

2008-04-20 Thread Nicola Musatti

David Edelsohn wrote:

Nicola,

Please send the project files to Robert Seacord.


Done.

Cheers,
Nicola
--
Nicola.Musatti  gmail  com
Home: http://nicola.musatti.googlepages.com/home
Blog: http://wthwdik.wordpress.com/



Re: Official GCC git repository

2008-04-20 Thread Kirill A. Shutemov
Some branches are not located directly under the /branches directory in
svn. For example, the redhat branches live under /branches/redhat. In
the git repository there is only one branch, 'redhat', which contains
all the redhat branches as directories. That is not very convenient. Is
it possible to track these branches separately in git?

-- 
Regards,  Kirill A. Shutemov
 + Belarus, Minsk
 + ALT Linux Team, http://www.altlinux.com/

