> From: Linus Torvalds
> Sent: 03 November 2015 20:45
> On Tue, Nov 3, 2015 at 12:05 PM, Linus Torvalds
> <torva...@linux-foundation.org> wrote:
> >      result = add_overflow(
> >         mul_overflow(sec, SEC_CONVERSION, &overflow),
> >         mul_overflow(nsec, NSEC_CONVERSION, &overflow),
> >         &overflow);
> >
> >      return overflow ? MAX_JIFFIES : result;
> 
> Thinking more about this example, I think the gcc interface for
> multiplication overflow is fine.
> 
> It would end up something like
> 
>     if (mul_overflow(sec, SEC_CONVERSION, &sec))
>         return MAX_JIFFY_OFFSET;
>     if (mul_overflow(nsec, NSEC_CONVERSION, &nsec))
>         return MAX_JIFFY_OFFSET;
>     sum = sec + nsec;
>     if (sum < sec || sum > MAX_JIFFY_OFFSET)
>         return MAX_JIFFY_OFFSET;
>     return sum;
> 
> and that doesn't look horribly ugly to me.

If mul_overflow() is a real (out-of-line) function, you've just forced some
of the values out to memory, generating a 'clobber' for all memory
(unless 'strict-aliasing' is enabled) and making a mess of other
optimisations.
(If it is a static inline that might not happen.)
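
(For illustration, a minimal sketch of the static inline version, assuming
gcc 5+'s __builtin_mul_overflow and a made-up name; once it is inlined the
values can stay in registers, so no memory clobber is forced:)

    /* Sketch only: after inlining no address actually escapes to memory */
    static inline bool mul_overflow(unsigned long a, unsigned long b,
                                    unsigned long *res)
    {
            return __builtin_mul_overflow(a, b, res);
    }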

If you assume that no one is stupid enough to multiply very large
values by 1 and not get an error, you could have mul_overflow()
return the largest prime if the multiply overflowed (a prime can only
legitimately be the product of itself and 1, so nothing else can hit
that value).
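
(A rough sketch of that sentinel style, again assuming
__builtin_mul_overflow and a 64-bit unsigned long; the constant is
2^64 - 59, which I believe is the largest 64-bit prime, but treat it as
illustrative:)

    /* A prime can only be the product of itself and 1, so an
     * ordinary multiply is very unlikely to produce this value.
     */
    #define MUL_OVERFLOWED 18446744073709551557UL    /* 2^64 - 59 */

    static inline unsigned long mul_sat(unsigned long a, unsigned long b)
    {
            unsigned long res;

            if (__builtin_mul_overflow(a, b, &res))
                    return MUL_OVERFLOWED;
            return res;
    }

The caller then just compares the result against the sentinel instead of
passing a flag pointer, at the cost of giving up one value of the result
space.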

        David
