On Sun, 2011-11-13 at 20:51 -0500, Charles Lepple wrote:
> On Nov 13, 2011, at 3:32 PM, Regid Ichira wrote:
>
> > --- On Sat, 11/12/11, Arnaud Quette <[email protected]> wrote:
> >
> >>>>> -	usleep(250000);
> >>
> >>>>> +	struct timespec delay = {0, 250e6}; nanosleep(&delay, NULL);
> >>
> >>>> Would it be better to define a local version of usleep in terms of
> >>>> nanosleep? I suspect the library version already does that, but if the
> >>>> library version is going away, a local version is much more concise and
> >>>> readable than calling nanosleep directly. If there are concerns about
> >>>> linking, the local version could be, e.g., u_sleep, since all the calls
> >>>> are getting touched anyway.
> >>
> >>> Using AC_REPLACE_FUNCS(... usleep ...) in configure.in and providing a
> >>> common/usleep.c->usleep() replacement implementation, in case the
> >>> system doesn't provide it, is a better way to go. At least for now.
> >>> That way, we avoid regression, while supporting systems that do not
> >>> provide usleep.
> >>
> >> What would be more worthwhile (IMHO) is to modify the code to make use
> >> of the remaining time returned by nanosleep. Otherwise, I am not sure
> >> I see the benefit of this change.
> >>
> >
> > Does that summarize to:
> >
> > 1. Have valid C code for:
> >
> >    int
> >    substitution_usleep(delay, remaining_time)
>
> My point was that if we are going to change everything, we should *use* the
> remaining time. But given that the delays are very short to begin with, and
> given that the drivers tend not to rely on the precision or accuracy of
> usleep(), I wonder why we would bother replacing all of these calls to begin
> with?
>
> Before we delve into discussions of implementation, I still think it is
> useful to find out what is going on in the Win32 libraries. If the underlying
> code sleeps with millisecond precision, using nanosleep() is a step in the
> wrong direction, IMHO.
What I understand from the MSDN docs is that the "sleep" functions (Sleep, WaitFor*) in the API only accept milliseconds. And by default the actual granularity is even coarser than a millisecond. You can lower this granularity to its minimum (via the timeBeginPeriod function), but that affects the whole system, so it has to be used carefully.

-- 
Team Open Source Eaton - http://powerquality.eaton.com
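P.S. For reference, Arnaud's AC_REPLACE_FUNCS suggestion would amount to something like the following configure.in fragment (a sketch; the exact placement in NUT's build files may differ):

```m4
dnl configure.in (sketch): if the C library lacks usleep(),
dnl compile and link common/usleep.c as a drop-in replacement.
AC_REPLACE_FUNCS([usleep])
```

Autoconf then defines HAVE_USLEEP on systems that already provide the function, and adds usleep.o to LIBOBJS on systems that do not, so existing callers need no changes.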
