https://gcc.gnu.org/bugzilla/show_bug.cgi?id=92596

--- Comment #3 from Richard Biener <rguenth at gcc dot gnu.org> ---
So for

patt_17 = (long int) patt_18;

for example, vect_get_vector_types_for_stmt now computes V2DI and V2SI as
vectype and nunits_vectype respectively.  I think that's undesirable.
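
For context, a statement like this arises from a widening conversion in the
input; a minimal sketch of the kind of source involved (not the original
t2.c, just an illustration) would be:

  long n[2];
  int m[2];

  void
  f (void)
  {
    /* The implicit int -> long int conversions are roughly what the
       vectorizer represents as patt_17 = (long int) patt_18 above
       (the patt_* form comes from pattern recognition).  */
    n[0] = m[0];
    n[1] = m[1];
  }

The code computing nunits_vectype in vect_get_vector_types_for_stmt now reads: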

      scalar_type = vect_get_smallest_scalar_type (stmt_info, &dummy, &dummy);
      if (scalar_type != TREE_TYPE (vectype))
        {
          if (dump_enabled_p ())
            dump_printf_loc (MSG_NOTE, vect_location,
                             "get vectype for smallest scalar type: %T\n",
                             scalar_type);
          nunits_vectype = get_vectype_for_scalar_type (vinfo, scalar_type,
                                                        group_size);

This used to result in a vector type of the same size (the earlier code used
a simple get_vectype_for_scalar_type call, without the group-size argument).
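
For concreteness, assuming 128-bit vectors (say x86_64 with SSE2): the
smallest scalar type in the conversion is int, so the old path computed

  vectype        = V2DI  (2 units, 16 bytes)
  nunits_vectype = V4SI  (4 units, 16 bytes)

i.e. the two types differed in the number of units but not in size, while
the new group-size-aware path computes

  vectype        = V2DI  (2 units, 16 bytes)
  nunits_vectype = V2SI  (2 units,  8 bytes)

which matches the units but not the size.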

I'm not sure how we should handle max_nunits with the new scheme of
allowing mixed-size vectors.
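
(For the example above this means max_nunits used to be 4, from the V4SI
nunits_vectype, whereas with mixed-size vectors both V2DI and V2SI
contribute only 2; since the SLP unrolling factor is derived from
max_nunits, presumably that changes as well.)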

Certainly there seem to be inconsistencies that lead to this ICE: during
SLP build we always arrive at a V2{DI,SI}mode vector type, but during
analysis we also see

t2.c:10:1: note:   Build SLP for _8->n[4] = _10;
t2.c:10:1: note:   get vectype for scalar type (group size 1): long int
t2.c:10:1: note:   vectype: vector(1) long int
t2.c:10:1: note:   nunits = 1

(fail)

so eventually the vector type mismatch is introduced by

t2.c:10:1: note:   Split group into 2 and 1

and also because

t2.c:10:1: missed:   Build SLP failed: incompatible vector types for: c.0_1 = c;
t2.c:10:1: note:       old vector type: vector(2) int
t2.c:10:1: note:       new vector type: vector(1) int

(but that's an external node, for which we can build the vector operand from
scalars in whatever type is required, so the mismatch should not really
matter there)
