http://gcc.gnu.org/bugzilla/show_bug.cgi?id=48636
--- Comment #9 from Tobias Burnus <burnus at gcc dot gnu.org> 2011-04-20 15:39:47 UTC ---

> But do we actually do this? I did some tests a while ago, and IIRC for assumed
> shape dummy arguments the procedure always calculates new bounds such that
> they start from 1. That is, the procedure assumes that the actual argument
> descriptor may have lower bounds != 1.
> So my argument is basically that with the new descriptor it might make sense
> to switch the responsibility around such that it's the caller who makes sure
> that all lower bounds are 0 (as we must have the capability to do this anyway
> in order to call inter-operable procedures, no?) instead of the callee.

No, the conversion is already done in the caller:

subroutine bar(B)
  interface
    subroutine foo(a)
      integer :: a(:)
    end subroutine foo
  end interface
  integer :: B(:)
  call foo(B)
end subroutine bar

shows:

  parm.4.dim[0].lbound = 1;
  [...]
  foo (&parm.4);

For assumed-shape actual arguments, creating a new descriptor is actually not
needed - only for deferred-shape ones - or if one does not have a full array
ref. Cf. gfc_conv_array_parameter, which is called by gfc_conv_procedure_call.

However, some additional calculation is also done in the callee to determine
the stride and offset, e.g.

  ubound.0 = (b->dim[0].ubound - b->dim[0].lbound) + 1;

Again, if the dummy argument is not deferred-shape (allocatable or pointer),
one actually knows that "b->dim[0].lbound" == 1. I think we have some
redundancy here -> a missed optimization.