On Tue, Jan 13, 2009 at 05:58:23PM +0300, Kirill Smelkov wrote:

> Arjen, Arnaud,
>
> First of all, I'm sorry for my late reply.
>
> If it's not too late, here is the updated al175:
>
>
> On Fri, Dec 26, 2008 at 11:37:04PM +0100, Arjen de Korte wrote:
> > Citeren Kirill Smelkov <[email protected]>:
> >
> > [...]
> >
> >> Yes, "All the world is not an x86 Linux box," and I've tried to make
> >> all the world happy.
> >
> > I raised several other issues as well, of which the most important one
> > isn't addressed. The driver still uses 'alloca', which isn't portable (it
> > is not in POSIX). I also indicated how this could be solved by using
> > automatic variables. In any case I'm not convinced there is no
> > alternative here, so this is a show stopper for now.
>
> I've reworked it to not use alloca, and to use auto variables plus xmalloc
> in one place.
>
> Now it usually looks like
>
>     raw_data reply;
>     byte_t reply_buf[11];
>
>     ...
>
>     raw_alloc_onstack(&reply, reply_buf);
>
> > Also, something I really don't like in this driver is the use of alarm().
> > Although this is presently used in the 'net-xml' driver (in case an older
> > neon library is detected), there is no need for this here. We have
> > timeouts on *all* ser_get* functions and these *must* be used instead.
> > Note that with using timeouts I mean something different than setting
> > them to '999' seconds and setting an alarm() to break out early. Unless
> > there is a really good reason to use an alarm() (i.e. there is no
> > alternative), this shouldn't be used.
>
> I used alarm in the first place for the following reason:
>
> Let's look at e.g. upsdrv_updateinfo() -- it reads several al175
> registers, and each read_register consists of a chat between the host
> and the al175.
>
> And I want the whole transaction to take less than, say, 5 seconds. What
> should I do? Manually calculate the read/write time budget left for
> _each_ read or write?
>
> I think this is not practical.
>
> That's why I simply
>
> 1. arm the timer via alarm(T) to fire after T seconds,
> 2. do all the I/O, logic, etc.,
> 3. if the I/O stalls at *any* place, it is *always* aborted right
>    after T seconds from the transaction start (thanks to alarm), and
> 4. once the whole I/O transaction is done -- reset the timer.
>
> And exactly for this reason, 999 is used as infinity, meaning 'much
> longer than _any_ T we'll ever pass to alarm()'. If there is a way to
> actually pass 'please never time out on select', I'd be happy to rework
> the code.
>
> Also, according to the man pages, alarm(2) is in SVr4, POSIX.1-2001 and
> 4.3BSD. That's why I think using alarm() makes sense.
>
> But if there is similar functionality in NUT to set a whole-transaction
> timeout, I'll be happy to rework the code to use what NUT provides.
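> To make that concrete, here is a rough, untested sketch of the pattern
> (the handler name and the read_register() calls are illustrative, not
> the driver's actual code):
>
>     #include <signal.h>
>     #include <string.h>
>     #include <unistd.h>
>
>     static void on_alarm(int sig)
>     {
>         (void)sig;   /* nothing to do: just interrupt the pending I/O */
>     }
>
>     static void transaction(void)
>     {
>         struct sigaction sa;
>
>         memset(&sa, 0, sizeof(sa));
>         sigemptyset(&sa.sa_mask);
>         sa.sa_handler = on_alarm;        /* no SA_RESTART, so a blocked */
>         sigaction(SIGALRM, &sa, NULL);   /* read/select returns EINTR   */
>
>         alarm(5);    /* 1. the whole transaction gets 5 seconds         */
>
>         /* 2./3. chat with the UPS; each call uses the "999 s" timeout,
>          *       so a stall anywhere is cut short by the alarm          */
>         /* read_register(...); read_register(...); ...                  */
>
>         alarm(0);    /* 4. finished in time -- disarm the timer         */
>     }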
> > Even in the updated driver, I see quite a few fprintf() lines.
> > Although these are in most (all?) cases encased in an 'if
> > (nut_debug_level > 0)', we have the upsdebug* functions for that. Do
> > yourself (and your fellow developers) a favor and use those instead.
>
> > Another thing related to this is the XTRACE macro. This will only use a
> > single (1) debug level to show output. Please use a layered approach,
> > where various levels of debugging information are used.
>
> I've split debug/trace output into several levels:
>
>     1   /* user-level trace (status, instcmd, etc.) */
>     2   /* proto handling info */
>     3   /* IO trace */
>
> so with -D the user will see only the high-level stuff, with -DD --
> the high-level info plus any information regarding COMLI proto handling,
> and with -DDD -- everything, including the full IO trace.
>
> I can't test it because I don't have the hardware now, but I hope this
> is ok.
>
> (One place still uses fprintf(stderr, ...), but that's because it is
> difficult to construct one line through several calls to upsdebug,
> although this is certainly doable, like upsdebug_hex does. Mine has
> pretty-printing for ASCII control characters, and I'd rather leave it
> there, because as I recall this was useful for debugging. I hope this
> is not a show-stopper.)
>
> > It also helps in understanding the driver if you keep the number of
> > macros limited.
>
> The reason XTRACE was introduced in the first place is to automatically
> put the function name as a prefix into the trace output through the use
> of __FUNCTION__. If it is (again) a portability problem, we could always
> work around it as follows (taken from the gcc manual):
>
>     #if __STDC_VERSION__ < 199901L
>     # if __GNUC__ >= 2
>     #  define __func__ __FUNCTION__
>     # else
>     #  define __func__ "<unknown>"
>     # endif
>     #endif
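> To show how the pieces fit together, here is a rough sketch (not
> necessarily the exact macro in the patch) of XTRACE as a thin wrapper
> over NUT's upsdebugx(), using the three levels described above; it
> assumes C99 variadic macros, and the fallback above takes care of
> __func__ on pre-C99 GCC:
>
>     #include "common.h"   /* NUT's upsdebugx() */
>
>     #define XTRACE(level, fmt, ...) \
>             upsdebugx(level, "%s: " fmt, __func__, __VA_ARGS__)
>
>     static void poll_status(void)
>     {
>         XTRACE(1, "polling %d registers", 4);      /* shown with -D   */
>         XTRACE(2, "COMLI: message type %c", '2');  /* shown with -DD  */
>         XTRACE(3, "IO: sent %d bytes", 11);        /* shown with -DDD */
>     }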
> Here is the interdiff wrt the recent version as posted to the list, and
> the full patch:

[...]

Arnaud, Arjen,

So what should we do about this patch? I've tried to address the issues
you've raised, and to describe my rationale behind alarm().

If it can't go in -- that's ok with me, but could you please at least
tell me what your objections (if any) are now.

Thanks,
Kirill

_______________________________________________
Nut-upsdev mailing list
[email protected]
http://lists.alioth.debian.org/mailman/listinfo/nut-upsdev
