On 13/09/2019 11.00, Linus Torvalds wrote:
> On Fri, Sep 13, 2019 at 8:22 AM Borislav Petkov <[email protected]> wrote:
>>
>> since the merge window is closing in and y'all are at a conference, I
>> thought I should take another stab at it. It being something which Ingo,
>> Linus and Peter have suggested in the past at least once.
>>
>> Instead of calling memset:
>>
>> ffffffff8100cd8d:       e8 0e 15 7a 00          callq  ffffffff817ae2a0 <__memset>
>>
>> and having a JMP inside it depending on the feature supported, let's simply
>> have the REP; STOSB directly in the code:
> 
> That's probably fine for when the memset *is* a call, but:
> 
>> The result is this:
>>
>> static __always_inline void *memset(void *dest, int c, size_t n)
>> {
>>         void *ret, *dummy;
>>
>>         asm volatile(ALTERNATIVE_2_REVERSE("rep; stosb",
> 
> Forcing this code means that if you do
> 
>      struct { long hi, low; } a;
>      memset(&a, 0, sizeof(a));
> 
> you force that "rep stosb". Which is HORRID.
> 
> The compiler should turn it into just one single 8-byte store. But
> because you took over all of memset(), now that doesn't happen.

OK, that answers my question.

> So we do need to have gcc do the __builtin_memset() for the simple cases..

Something like

        if (__builtin_constant_p(c) && __builtin_constant_p(n) && n <= 32)
                return __builtin_memset(dest, c, n);

might be enough? Of course it would be sad if 32 was so high that this
turned into a memset() call, but there's -mmemset-strategy= if one wants
complete control. Though that's of course a build-time option, so it
can't account for differences between CPU models.
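
For concreteness, here's a minimal compilable sketch of how such a guard
could sit in front of the inline-asm path. It's only an illustration: a
plain "rep; stosb" stands in for the ALTERNATIVE_2_REVERSE-patched
sequence from the actual patch, and the function is renamed
sketch_memset so it builds in userspace against libc:

#include <stddef.h>
#include <stdio.h>

#define __always_inline inline __attribute__((always_inline))

/* Stand-in for the patched kernel memset(): a plain "rep; stosb"
 * replaces the ALTERNATIVE_2_REVERSE-patched sequence, and the name
 * is changed so this compiles in userspace. */
static __always_inline void *sketch_memset(void *dest, int c, size_t n)
{
	void *ret, *dummy;

	/* Small constant-size case: let gcc expand it to plain stores. */
	if (__builtin_constant_p(c) && __builtin_constant_p(n) && n <= 32)
		return __builtin_memset(dest, c, n);

	/* rep stosb: RDI = dest, RCX = count, AL = fill byte. */
	asm volatile("rep; stosb"
		     : "=&D" (ret), "=&c" (dummy)
		     : "0" (dest), "a" (c), "1" (n)
		     : "memory");
	return dest;
}

int main(void)
{
	struct { long hi, low; } a;
	char buf[256];

	/* 16 constant bytes: takes the __builtin_memset() branch. */
	sketch_memset(&a, 0, sizeof(a));
	/* 256 bytes: falls through to the rep stosb path. */
	sketch_memset(buf, 'x', sizeof(buf));

	printf("%ld %c\n", a.hi, buf[0]);
	return 0;
}

Building that with gcc -O2 and inspecting the disassembly, the 16-byte
case (Linus' two-long example) should become plain stores again (two
8-byte stores, or a single 16-byte one, depending on tuning), while the
256-byte case keeps the string instruction. Where exactly the cutoff
should sit is the same question -mmemset-strategy= tries to answer at
build time.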

Rasmus

