Yes, I think so.

Regards,
Wei.

On Wed, Jun 19, 2013 at 2:00 PM, Xinliang David Li <davi...@google.com> wrote:
> Should the patch be ported to the 4.8 branch?
>
> thanks,
>
> David
>
> On Wed, Jun 19, 2013 at 11:46 AM, Vladimir Makarov <vmaka...@redhat.com> 
> wrote:
>> On 13-06-19 1:23 AM, Wei Mi wrote:
>>>
>>> Ping.
>>>
>>> On Wed, Jun 12, 2013 at 2:44 PM, Wei Mi <w...@google.com> wrote:
>>>>
>>>> Hi,
>>>>
>>>> http://gcc.gnu.org/bugzilla/show_bug.cgi?id=57518
>>>>
>>>> PR57518 happened because update_equiv_regs in IRA marked a reg as
>>>> equivalent to a mem, lowered its mem_cost in scan_one_insn, and set
>>>> its rclass to NO_REGS, but didn't consider that the reg was used in
>>>> a paradoxical subreg, which prevented the reg from being replaced by
>>>> the mem in the LRA phase (illustrated after the quoted thread).
>>>>
>>>> This patch makes update_equiv_regs check whether a reg is used in a
>>>> paradoxical subreg before the reg is set as equivalent to a mem (a
>>>> sketch of such a check follows the quoted thread).
>>>>
>>>> Bootstrap and regression tests on x86_64-linux-gnu are OK. Is it OK
>>>> for trunk and the gcc-4.8 branch?
>>>>
>>>>
>> Thanks for working on this PR, Wei, and sorry for the delay in answering
>> (I was on vacation).
>>
>> In general, the PR analysis and the proposed solution look ok.  I only
>> worry that you are adding an additional full scan of all the RTL code.
>> It might add 0.5% to GCC compilation time when the data cache gets
>> overwritten (which will happen for moderate-size or big functions).  It
>> would be nice to piggyback on some other existing RTL traversal.
>> Unfortunately, this info is calculated later (reg_max_width in reload
>> or biggest_mode in LRA).  I doubt that the other solutions I see now
>> are better:
>>
>>   o calculate this info in a regstat_... function and store it in
>> reg_info_p
>>   o calculate it during update_equiv_regs and use it to invalidate the
>> equiv info later
>>
>> The first one increases the reg_info_p footprint, and the calculation
>> is done many times although the result is used only once.
>> The second one results in complicated code.
>>
>> So I think the current patch is ok to commit.
>>
>> Thanks, again.
>>
>>
>>
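For background on the paradoxical-subreg issue above: a paradoxical
subreg is a SUBREG whose outer mode is wider than the mode of the
register it wraps, e.g. (subreg:DI (reg:SI 100) 0). If reg 100 were
replaced by an equivalent 4-byte memory slot, that DImode access would
read past the end of the slot, which is why the equivalence must not be
recorded. Below is a minimal, size-based sketch of the test, assuming
GCC's internal headers; the real in-tree helper is paradoxical_subreg_p
in rtlanal.c, which compares mode precisions rather than sizes.

  /* Sketch only: a size-based paradoxical-subreg test.  */
  #include "config.h"
  #include "system.h"
  #include "coretypes.h"
  #include "tm.h"
  #include "rtl.h"

  static bool
  wider_than_inner_reg_p (const_rtx x)
  {
    /* (subreg:DI (reg:SI 100) 0) is paradoxical: the outer DImode
       access is wider than the SImode pseudo inside it.  */
    return (GET_CODE (x) == SUBREG
            && GET_MODE_SIZE (GET_MODE (x))
               > GET_MODE_SIZE (GET_MODE (SUBREG_REG (x))));
  }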

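And here is a hedged sketch of the kind of check the patch description
calls for; the names (used_in_pdx_subreg, note_pdx_subreg,
scan_for_pdx_subregs) are illustrative, not taken from the actual patch.
One extra pass over the insn stream flags every pseudo that appears
inside a paradoxical subreg, using the GCC 4.8-era walker for_each_rtx
(removed in later releases); update_equiv_regs would then consult the
flags before recording a reg-mem equivalence. Same headers as in the
previous sketch.

  /* Indexed by pseudo REGNO; allocated below, freed by the caller.  */
  static bool *used_in_pdx_subreg;

  /* for_each_rtx callback: flag the inner reg of a paradoxical subreg.  */
  static int
  note_pdx_subreg (rtx *loc, void *data ATTRIBUTE_UNUSED)
  {
    rtx x = *loc;
    if (GET_CODE (x) == SUBREG
        && REG_P (SUBREG_REG (x))
        && paradoxical_subreg_p (x))
      used_in_pdx_subreg[REGNO (SUBREG_REG (x))] = true;
    return 0;  /* 0 = continue the walk.  */
  }

  /* One full scan of the insn stream, done once before equivalences
     are recorded.  */
  static void
  scan_for_pdx_subregs (void)
  {
    rtx insn;

    used_in_pdx_subreg = XCNEWVEC (bool, max_reg_num ());
    for (insn = get_insns (); insn; insn = NEXT_INSN (insn))
      if (INSN_P (insn))
        for_each_rtx (&PATTERN (insn), note_pdx_subreg, NULL);
  }

In update_equiv_regs the guard would then be along the lines of
"if (used_in_pdx_subreg[regno]) continue;" before the reg-mem
equivalence is recorded.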