> Optimization use in select: If all "interesting" file id's are known
> to be below "n", then only the first "n" bits in a FD_ISSET need to
> be examined. As soon as the bits are scattered, it takes MUCH longer
> to check for activity....

That's an optimization, not a correctness issue.
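
(For the record, what that optimization amounts to is bounding the
FD_ISSET() scan by the highest fd you ever handed to FD_SET(), roughly
like the sketch below; scan_ready() and handle_input() are made-up
names, and maxfd is whatever maximum you tracked when opening:)

	#include <sys/types.h>
	#include <sys/time.h>
	#include <unistd.h>

	/* Sketch only: check descriptors 0..maxfd instead of the whole set. */
	static void scan_ready(fd_set *active, int maxfd,
			       void (*handle_input)(int))
	{
		int fd;

		for (fd = 0; fd <= maxfd; fd++)
			if (FD_ISSET(fd, active))
				handle_input(fd);	/* caller-supplied handler */
	}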

>     for (i = 0; i <= MAX_LOG && i < argc; i++) {
>         sprintf(str,"/tmp/%s",pnames[i]);
>         mkfifo(str,0600);       /* assume it exists */
>         inlogfd[i] = open(str,O_RDONLY | O_NDELAY);
>         FD_SET(inlogfd[i],&in_files);
>     }

This works regardless of what the open() returns. What does not work
is using MAX_LOG (assuming it is constant) later in the following form:

>         while (select(MAX_LOG,&active,NULL,NULL,NULL) >= 0) {

I see no way around computing the maximum of the inlogfd[i] values, plus one.
(Which can of course be done right after the opens above. Note that the
last fd opened _is_ guaranteed to get the highest number, since open()
always returns the lowest free descriptor and nothing is closed in
between; FD_SET is one of the library routines where you can be pretty
confident it doesn't open fds...)
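
Something along these lines (just a sketch; MAX_LOG, pnames[] and
inlogfd[] are the ones from your code, while the function name, the
str[] size and the open() error check are my additions, the latter so
FD_SET() never sees -1):

	#include <stdio.h>
	#include <fcntl.h>
	#include <sys/stat.h>
	#include <sys/types.h>
	#include <sys/time.h>
	#include <unistd.h>

	/* Fills *in_files and returns the highest fd opened, or -1 if none. */
	static int open_logs(int argc, char **pnames, int *inlogfd,
			     fd_set *in_files)
	{
		char str[256];			/* assumed big enough */
		int i, fd, maxfd = -1;

		FD_ZERO(in_files);
		for (i = 0; i <= MAX_LOG && i < argc; i++) {
			sprintf(str, "/tmp/%s", pnames[i]);
			mkfifo(str, 0600);	/* may exist already */
			fd = open(str, O_RDONLY | O_NDELAY);
			if (fd < 0)
				continue;	/* skip it, don't FD_SET(-1) */
			inlogfd[i] = fd;
			FD_SET(fd, in_files);
			if (fd > maxfd)
				maxfd = fd;	/* remember the real maximum */
		}
		return maxfd;
	}

and the select() in the quoted loop then becomes
select(maxfd + 1, &active, NULL, NULL, NULL).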

Btw. there are two problems even assuming you do get contiguous fds:
- an off-by-one error in the case argc > MAX_LOG: the first
  argument of select() is the maximum fd _plus one_
- from an optimization POV it is highly advisable to pass only the
  real maximum anyway.

Olaf