Ah, I missed that.  I see it's in the doxygen docs for the random module.
However, the sources aren't under random/, they're under misc/.  I was
switching between the doxygen docs and the sources, and I probably missed
it because the sources live under misc/ rather than random/.

For the standard module: you can keep the unique_id size the same, but
still have a root of 10 bytes, by getting rid of thread_index.
thread_index is very wasteful: it uses 32 bits when only a few bits are
ever needed.

Change:

typedef struct {
    unsigned int stamp;
    unsigned int in_addr;
    unsigned int pid;
    unsigned short counter;
    unsigned int thread_index;
} unique_id_rec;

to:

typedef struct {
    apr_uint32_t stamp;
    apr_uint32_t counter;
    char root[ROOT_SIZE];
} unique_id_rec;

Have the two ints first for alignment purposes, so there is no padding in
the struct.

With the counter field widened to apr_uint32_t, you can use
apr_atomic_inc32() to do the increments.  If the counter is incremented
atomically, thread_index is no longer needed to keep threads from
colliding.

Initializing the counter with random data also gives you more
per-process randomness than the 10 bytes of root alone.



On Fri, Jul 5, 2013 at 8:04 PM, Stefan Fritsch <s...@sfritsch.de> wrote:

> On Wednesday 26 June 2013, Daniel Lescohier wrote:
> > When I looked into the ap random functions, I didn't like the
> > implementation, because I didn't see anywhere in the httpd codebase
> > that entropy is periodically added to the entropy pool.  After
> > reading the details of how the Linux entropy pool works
> > (https://lwn
> > .net/Articles/525204/), I decided to use /dev/urandom instead,
> > since Linux is periodically adding entropy to it.  This code is
> > not portable, but this was for a private Apache module that is
> > only used on Linux.
> >
> > To preserve entropy on the web server machine, I also only generate
> > a random number once per apache child, then increment an uint32
> > portion of it for each unique id call.  I also have seconds and
> > microseconds, so that's why I think it's OK to do increments from
> > the random base, instead of generating a new random id on each
> > request.
>
> The "insecure" in ap_random_insecure_bytes is there for a reason. But
> if you only use it once per process, anyway, it should be sufficient.
> The fact that several consumers (especially with multi-threaded mpms)
> pull from the same pool in undefined order adds some entropy, too.
>
> FWIW, there is apr_generate_random_bytes() which can do the reading of
> /dev/urandom for you.
>
