https://gcc.gnu.org/bugzilla/show_bug.cgi?id=65841

Paul Thomas <pault at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|UNCONFIRMED                 |NEW
   Last reconfirmed|                            |2015-04-22
                 CC|                            |pault at gcc dot gnu.org
     Ever confirmed|0                           |1

--- Comment #1 from Paul Thomas <pault at gcc dot gnu.org> ---
Confirmed

type a
  real, allocatable :: f
end type
type b
  type(a), allocatable :: g
end type
type(b) c,d
c%g = a(1.)
d=c
d%g%f = 2.0 ! This causes the segfault too
end

However, if I make c and d allocatable

  type a
    real, allocatable :: f
  end type
  type b
    type(a), allocatable :: g
  end type
  type(b), allocatable :: c, d ! Note allocatable now
  allocate (c)
  c%g = a (1.)
  d = c
  d = c
end


... it does not segfault. Note that both must be allocatable for this to work. That should be a clue, but I still do not see what is wrong with the code produced for the segfaulting cases. More staring at it tonight!

Thanks for the report

Paul
