On 07/18, Oleg Nesterov wrote:
>
> + * NOTE! currently the only user is cputime_adjust() and thus
> + *
> + *   stime < total && rtime > total
> + *
> + * this means that the end result is always precise and the additional
> + * div64_u64_rem() inside the main loop is called at most once.

Ah, I just noticed that the comment is not 100% correct... in theory we
can drop precision and even call div64_u64_rem() more than once, but that
can only happen if stime or total = stime + utime is "really" huge, and I
don't think this can happen in practice...

We can probably just do

        static u64 scale_stime(u64 stime, u64 rtime, u64 total)
        {
                u64 res = 0, div, rem;

                if (ilog2(stime) + ilog2(rtime) > 62) {
                        div = div64_u64_rem(rtime, total, &rem);
                        res += div * stime;
                        rtime = rem;

                        int shift = ilog2(stime) + ilog2(rtime) - 62;
                        if (shift > 0) {
                                rtime >>= shift;
                                total >>= shift;
                                if (!total)
                                        return res;
                        }
                }

                return res + div64_u64(stime * rtime, total);
        }

but this way the code looks less generic.

Oleg.
