Randall Gellens on Wed  2/05 15:15 -0700:
> Qpopper does not keep the spool locked for the duration of the
> session.  Qpopper only locks the spool at the beginning and end of
> sessions.  In non-server mode, this is fine, as Qpopper makes no
> assumptions about the state of the spool between times.  But in server
> mode, you are asserting that the only modifications to the spool are
> done by a final delivery agent, which only appends.  If a shell user
> edits the spool in any other way during a POP session, and Qpopper is
> in server mode, you've got spool corruption.

OK, I've looked at the code.  It looks like in server mode the cache is
created in pop_updt() and left around after process exit.  Then, with
each new session, during pop_dropcopy() the cache is read if it exists
(and init_dropinfo() is bypassed) and used to fill in the MsgInfoList
without having to parse the spool:

        else { /* SERVER_MODE */
            /*$
             * Save the temporary drop FILE and fid values
             */
            p->hold = p->drop;
            p->drop = fdopen ( mfd, "r+" );

...
            /*
             * If we have a usable cache file, that saves us from having to
             * process the spool.
             */
            rslt = cache_read ( p, fileno(p->drop), &p->mlp );
            if ( rslt == POP_FAILURE ) {
                rslt = init_dropinfo ( p, p->drop_name, p->drop, time_locked );
                if ( rslt != POP_SUCCESS )
                    goto bailout;
            /*
             * If the cache file is smaller than the spool, we need to
             * process the new messages.
             */
            }
        }

It looks like the comment at the end indicates that corruption between
sessions is caught by the checks in cache_read():

    if ( header.spool_end > stbuf.st_size ) {
        pop_log ( p, POP_PRIORITY, HERE,
                  "spool smaller than indicated in cache file %s "
                  "(expected %ld; found %ld)",
                  cache_name, (long) header.spool_end, (long) stbuf.st_size );
        close ( cache_fd );
        return POP_FAILURE;
    }

    /*
     * For now, exit if the spool has been modified or is larger than
     * indicated in the cache.  This is temporary.
     */
    if ( header.spool_end != stbuf.st_size ||
         header.modtime   != stbuf.st_mtime ) {
        DEBUG_LOG5 ( p, "spool modified or larger than indicated in "
                     "cache: %s (%ld %ld vs %ld %ld)",
                     cache_name, (long) header.spool_end, header.modtime,
                     (long) stbuf.st_size, stbuf.st_mtime );
        close ( cache_fd );
        return POP_FAILURE;
    }


    /*
     * Now we know that the cache information is probably compatible
     * with the spool.  Note that the local delivery agent may have
     * added messages to the spool since the cache was written,
     * in which case the new messages need to be processed normally.
     */

When that happens, init_dropinfo() is called anew in pop_dropcopy() for
safety.  Now, we hold the lock on the file this whole time, so why do
we run

    if ( p->server_mode )
        flock ( mfd, LOCK_UN );

next, allowing the spool to change before pop_updt() is called?

It shouldn't really be a performance hit to move the unlock to quit
time, instead of unlocking after pop_dropcopy() does its thing, leaving
the window open during the session under the server-mode assumption,
and then locking again before the update.  While the lock is held and
the POP session is active, MDAs should do the right thing and keep
retrying.  If not, or if the MDA times out waiting for the lock, the
MTA is responsible for handling the temporary-failure codes returned by
the MDA.  And a user with a shell MUA that modifies the spool directly
would be expected to have a client that respects the locks as well.
This keeps the advantages of the cache file without disallowing shell
access and cooperative use of the spool.

Am I missing something?  If not, it should be easy to make a quick
patch that locks around the whole session instead of just the dropcopy
and the update.
