https://gcc.gnu.org/bugzilla/show_bug.cgi?id=95052
Martin Sebor <msebor at gcc dot gnu.org> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
                 CC|                            |msebor at gcc dot gnu.org
          Component|c                           |target

--- Comment #4 from Martin Sebor <msebor at gcc dot gnu.org> ---
The optimization in g:23aa9f7c4637ad51587e536e245ae6adb5391bbc should only
convert into a STRING_CST the initial portion of the braced initializer list,
up to just past the last non-nul character.  The remaining nuls should be
skipped.  In other words, this

  char buf[1*1024*1024] = { 0 };

should result in the same IL as this equivalent:

  char buf[1*1024*1024] = "";

I don't expect the commit above to have changed anything for the latter form,
and I would expect each back end to choose the same optimal code to emit in
both cases.  So I don't think the commit above is a regression; it just
exposed an inefficiency that was already present.