https://gcc.gnu.org/bugzilla/show_bug.cgi?id=113495

--- Comment #28 from JuzheZhong <juzhe.zhong at rivai dot ai> ---
(In reply to Robin Dapp from comment #27)
> Following up on this:
> 
> I'm seeing the same thing Patrick does.  We create a lot of large non-sparse
> sbitmaps that amount to around 33G in total.
> 
> I did local experiments replacing all sbitmaps that are not needed for LCM
> with regular bitmaps.  Apart from output differences vs. the original
> version, the testsuite results are unchanged.
> 
> As expected, wrf now takes longer to compile, 8 mins vs. 4ish mins before,
> and we still use 2.7G of RAM for this single file (likely because of the
> remaining sbitmaps) compared to a max of 1.2ish G that the rest of the
> compilation uses.
> 
> One possibility to get the best of both worlds would be to threshold based
> on num_bbs * num_exprs: once we exceed it, switch to the bitmap-based
> variant of the pass, otherwise keep sbitmaps for performance (see the
> sketch after this quote).
> 
> From messaging with Juzhe offline, his best guess for the LICM time is that
> he had dataflow checking enabled, which slows down this particular
> compilation by a lot.  Therefore it doesn't look like a generic problem.

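To make the thresholding idea above concrete, here is a minimal,
self-contained sketch.  All names and numbers in it (use_dense_sbitmaps,
DENSE_THRESHOLD, the BB/expression counts) are hypothetical illustrations,
not the actual GCC sbitmap/bitmap API or this pass's real sizes:

#include <cstdint>
#include <cstdio>

/* Made-up cutoff for switching representations; purely illustrative.  */
constexpr uint64_t DENSE_THRESHOLD = UINT64_C (1) << 30;

static bool
use_dense_sbitmaps (uint64_t num_bbs, uint64_t num_exprs)
{
  /* A dense sbitmap set costs num_bbs * num_exprs bits per tracked
     property (antloc, transp, avin, avout, ...), no matter how sparse
     the sets really are.  For example, 500k BBs x 100k exprs is about
     5.8 GiB per property, which is how a single file can reach tens
     of GB across several properties.  */
  return num_bbs * num_exprs <= DENSE_THRESHOLD;
}

int
main ()
{
  uint64_t num_bbs = 500000, num_exprs = 100000;
  if (use_dense_sbitmaps (num_bbs, num_exprs))
    printf ("small problem: keep dense sbitmaps for speed\n");
  else
    printf ("large problem: fall back to sparse bitmaps to cap memory\n");
  return 0;
}
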
Thanks. I don't think replacing sbitmap is the best solution.
Let me first disable the DF check and reproduce the 33G memory consumption
on my local machine.

I think the best way to reduce the memory consumption is to optimize the
VSETVL pass algorithm and code.  I have an idea for the optimization and
am going to work on it.

Thanks for reporting.