On Thu, May 25, 2017 at 3:59 AM, Palmer Dabbelt <pal...@dabbelt.com> wrote:
> On Tue, 23 May 2017 04:19:42 PDT (-0700), Arnd Bergmann wrote:
>> On Tue, May 23, 2017 at 2:41 AM, Palmer Dabbelt <pal...@dabbelt.com> wrote:

>> Also, it would be good to replace the multiply+div64
>> with a single multiplication here, see how x86 and arm do it
>> (for the tsc/__timer_delay case).
>
> Makes sense.  I think this should do it
>
>
>   https://github.com/riscv/riscv-linux/commit/d397332f6ebff42f3ecb385e9cf3284fdeda6776
>
> but I'm finding this hard to test, as this only works for 2ms sleeps.  It
> seems at least in the right ballpark.

+	if (usecs > MAX_UDELAY_US) {
+		__delay((u64)usecs * riscv_timebase / 1000000ULL);
+		return;
+	}

You still do the 64-bit division here. What I meant was to avoid the
division entirely: precompute a scale factor once at boot, then each
delay needs only a multiply+shift.

Also, you don't need to base anything on HZ, as you do not rely
on the delay calibration but always use a timer.
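
Something along these lines would do it (an untested sketch to show the
idea; setup_udelay, udelay_mult, and UDELAY_SHIFT are made-up names, not
anything from your tree):

/* riscv_timebase and __delay() as in your patch; USEC_PER_SEC = 1000000 */

#define UDELAY_SHIFT	32

static u64 udelay_mult;	/* (riscv_timebase << UDELAY_SHIFT) / USEC_PER_SEC */

void __init setup_udelay(void)
{
	/* Pay for the div64 once at boot instead of on every udelay(). */
	udelay_mult = ((u64)riscv_timebase << UDELAY_SHIFT) / USEC_PER_SEC;
}

void udelay(unsigned long usecs)
{
	/*
	 * cycles = usecs * timebase / 10^6, with no division at run time.
	 * The multiply only overflows u64 once the delay itself exceeds
	 * 2^32 timer cycles, far beyond anything udelay() should see.
	 */
	__delay(((u64)usecs * udelay_mult) >> UDELAY_SHIFT);
}

This also gets rid of the HZ dependence: the scale factor comes straight
from the timer frequency, not from loops_per_jiffy.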

       Arnd
