On 04.05.20 09:27, Richard Biener wrote:
> On Mon, May 4, 2020 at 8:14 AM Andreas Krebbel <[email protected]> wrote:
>>
>> On 30.04.20 18:33, Richard Biener wrote:
>>> On Thu, Apr 30, 2020 at 5:14 PM Andreas Krebbel <[email protected]>
>>> wrote:
>>>>
>>>> On 30.04.20 08:25, Richard Biener via Gcc-patches wrote:
>>>>> On Wed, Apr 29, 2020 at 5:56 PM Jeff Law <[email protected]> wrote:
>>>>>>
>>>>>> On Tue, 2020-04-28 at 11:44 +0200, Richard Biener via Gcc-patches wrote:
>>>>>>>
>>>>>>> Btw, does s390 have different inlining parameters somehow?
>>>>>> I think so. We saw a ton of backend warnings that were specific to the
>>>>>> s390 builds in Fedora that appeared to be triggered by different inlining
>>>>>> decisions.
>>>>>>
>>>>>> It happened enough that one of the ideas we're going to explore is seeing
>>>>>> if we can tune the x86_64 port to use the same heuristics. That way we can
>>>>>> try to find more of these issues in our tester without having to
>>>>>> dramatically increase s390x resourcing. This hasn't started, but I expect
>>>>>> we'll be looking at it in the fall (it's unclear if we can just use flags
>>>>>> or will need hackery to the x86 port, but we're happy to share whatever we
>>>>>> find with the wider community).
>>>>>
>>>>> Hmm, OK. Guess if it becomes an issue for openSUSE I'll simply revert those
>>>>> s390 backend changes.
>>>>
>>>> Too bad. I would prefer to fix the root cause of the warnings instead. Isn't
>>>> it a good thing that, thanks to the more aggressive inlining, actual bugs in
>>>> the code get revealed? Or do you see other issues with that?
>>>
>>> I'm obviously more concerned about latent GCC issues only popping up on s390.
>>> Basically, with the different s390 limits, much of the QA done on x86_64
>>> targets does not carry over to s390.
>> That's indeed a problem. It isn't only about GCC itself. We also see more
>> bugs in distro packages revealed on s390. But in the end these often are
>> actual bugs and should be fixed one way or the other. Wouldn't it be better
>> to find more of these bugs upfront?
>>
>> Would it perhaps make sense to introduce an optimization level which tries
>> to maximize the number of warnings produced? I know this might be tricky,
>> since some optimizations tend to produce a certain set of warnings while
>> reducing warnings in another area at the same time. But I still think there
>> would be some value in picking extreme params with that option and having
>> it consistent across archs.
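(To experiment with that across archs one can at least pin the relevant params
explicitly today, e.g.

  gcc -O2 --param max-inline-insns-auto=100 --param inline-min-speedup=4 ...

The param names are the real ones; the values above are just placeholders, not
our tuned s390 numbers.)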
>>
>>> For example we do burn-in testing of a new GCC release (10.1 here) in
>>> openSUSE Factory by using it as the system compiler. Then with GCC 10.2 we
>>> ship it as a compiler for their own use to SLES users and ISVs. But
>>> obviously all the burn-in testing on openSUSE Factory is very
>>> x86_64-centric these days.
>> I think Matthias is doing this with Ubuntu for other archs as well. Thanks
>> to him we are usually able to fix a bunch of problems beforehand. Kudos to
>> him for doing that!
>
> For build testing yes, but does it really receive much usage?
Right, it is mostly build testing plus the package testsuites, which are run as
part of the package build.
>
>>>>>
>>>>> As general advice to backend developers, I _strongly_ discourage adjusting
>>>>> --param defaults that affect GIMPLE.
>>>>
>>>> We did tons of measurements to find these values recently and got a nice
>>>> speedup after increasing them. We also had a look at the size increase of
>>>> binaries. It was significant, but we decided that the performance benefit
>>>> is worth it. On IBM Z the function call overhead is bigger than on other
>>>> archs, so inlining in general helps us.
>>>>
>>>> I'm not sure I understand your recommendation to leave these values as is.
>>>> I assumed that such parameters are supposed to be used to deal with the
>>>> characteristics of a platform. Are there better ways to achieve the same
>>>> effect?
>>>
>>> Get more inlining than other archs? No, I don't think so. But if the call
>>> overhead is bigger, then you should adjust the cost of calls instead?
>> Is there a dedicated knob for adjusting call costs? We played with
>> branch-costs in the past, but this is not dedicated to just calls.
>> Increasing it usually had negative effects for our target, although we
>> might need to revisit that one as well.
>
> There's no --param for call cost; there are some knobs for prologue/epilogue
> costs and also the magic early-inline-insns (which in the early days was
> supposed to be the call overhead).
> There's
>
> /* Initializes weights used by estimate_num_insns.  */
>
> void
> init_inline_once (void)
> {
>   eni_size_weights.call_cost = 1;
>   eni_size_weights.indirect_call_cost = 3;
>   ...
>   /* Estimating time for call is difficult, since we have no idea what the
>      called function does.  In the current uses of eni_time_weights,
>      underestimating the cost does less harm than overestimating it, so
>      we choose a rather small value here.  */
>   eni_time_weights.call_cost = 10;
>   eni_time_weights.indirect_call_cost = 15;
>
> which would hint at a place to introduce target dependences. Note there's also
> individual call argument cost handling - not sure where exactly the call cost
> on z appears; see estimate_num_insns ().
Thanks, I'll have a look.
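One direction I could imagine - purely a sketch, the hook name and signature
below are invented for illustration and nothing like it exists today - would be
to let the back end scale those weights from init_inline_once ():

  /* Hypothetical target hook (invented name/signature) letting the back end
     scale the call-cost weights after the generic defaults are set, e.g. to
     model the higher call overhead on IBM Z.  */
  targetm.adjust_inline_weights (&eni_size_weights, &eni_time_weights);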
Andreas
>>> Also the default limits are a delicate compromise between speed and size -
>>> of course if you disregard size you get more speed. You also only adjusted
>>> two out of very many params, which are not really as independent as one
>>> would think... (and you forgot to re-tune after last year's big
>>> changes/parameter space splits).
>> Oh, we set other inlining params as well. It is just that with z13 we tuned
>> these two in particular. Since we don't have measurements on other CPU
>> levels, we didn't want to risk producing regressions and just left the
>> values as they are for those.
>>
>>> As said, you should at least think of making the adjustments relative to
>>> the default value rather than absolute.
>> Ok.
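Something like the following in our option-override code, I assume (untested
sketch; the factor is purely illustrative, and SET_OPTION_IF_UNSET still lets
an explicit --param from the user win):

  /* Scale the common default instead of overriding it with an absolute
     number, so the target adjustment follows future retunings of the
     generic parameter space.  The factor 5 is a placeholder, not a tuned
     value.  */
  SET_OPTION_IF_UNSET (opts, opts_set, param_max_inline_insns_auto,
                       opts->x_param_max_inline_insns_auto * 5);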
>>
>> Andreas
>>>
>>> Richard.
>>>
>>>> Andreas
>>>>
>>>>>
>>>>> Richard.
>>>>>
>>>>>> jeff
>>>>>>
>>>>>>
>>>>>>
>>>>
>>