Ok, we have real problems with the palloc in the new design of apr_poll().
But it's still fundamentally better than falling back on poll(), because
that design just sucks, performance-wise.  Ryan speaks of alloca, but that
won't work on all platforms, most especially stack-impaired platforms such
as NetWare, which has no ability to grow the stack.

So here's a design suggestion.  Take the apr_pollfd_t element:

struct apr_pollfd_t {
    apr_pool_t *p;              /* pool this element was allocated from */
    apr_datatype_e desc_type;   /* socket or file descriptor? */
    apr_int16_t reqevents;      /* events the caller asked about */
    apr_int16_t rtnevents;      /* events the poll returned */
    apr_descriptor desc;        /* the descriptor itself */
};

If we pull out the apr_pool_t *p [which is WAY overkill if you have several
dozen to hundreds of descriptors you might be polling against] and
create a brand new apr_pollfd_set_t [also transparent!]...

typedef struct {
    apr_pool_t *p;                    /* pool for APR's internal allocations */
    apr_pollfd_internal_t *internal;  /* APR-private state, grown on demand */
    apr_pollfd_t *first;              /* first element we care about */
    int count;                        /* how many elements we care about */
} apr_pollfd_set_t;

Modify apr_poll_setup() to return a new apr_pollfd_set_t, with *first
already pointing at a newly allocated apr_pollfd_t array, and with the
*internal pointer initialized to NULL.
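
Roughly, the setup call might then look like this (the signature here is
just my sketch of the idea, nothing settled):

APR_DECLARE(apr_status_t) apr_poll_setup(apr_pollfd_set_t **set,
                                         apr_int32_t num,
                                         apr_pool_t *cont)
{
    apr_pollfd_set_t *newset = apr_palloc(cont, sizeof(*newset));

    newset->p = cont;
    newset->internal = NULL;    /* APR allocates this lazily, on first poll */
    newset->first = apr_pcalloc(cont, sizeof(apr_pollfd_t) * num);
    newset->count = num;

    *set = newset;
    return APR_SUCCESS;
}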

The *internal pointer tracks APR's internal count and the arrays that it
needs.  If the count passed to an invocation of apr_poll() exceeds our
internally allocated count, THEN we go and allocate a larger internal
structure.  But not on every pass into apr_poll().
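
Something like this inside apr_poll(), where apr_pollfd_internal_t is a
made-up stand-in for whatever platform array we end up caching (a
struct pollfd array in the Unix case):

#include <poll.h>    /* struct pollfd, for the Unix flavor of this sketch */

typedef struct apr_pollfd_internal_t {
    struct pollfd *pollset;   /* platform array handed straight to poll() */
    int alloc;                /* entries currently allocated */
} apr_pollfd_internal_t;

static void grow_internal(apr_pollfd_set_t *set, int needed)
{
    if (set->internal == NULL || set->internal->alloc < needed) {
        apr_pollfd_internal_t *priv = apr_palloc(set->p, sizeof(*priv));

        priv->pollset = apr_palloc(set->p, sizeof(struct pollfd) * needed);
        priv->alloc = needed;
        set->internal = priv;
        /* the outgrown array stays behind in the pool, but growth is
           bounded: once we hit the high-water count we never allocate
           again */
    }
}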

ADVANTAGES

  . we stop the leak.  Once we grow to some given *internal pollset
    size, we won't be growing anymore.

  . we lighten the apr_pollfd_t elements by one pool pointer.

  . using *first, we can change our offset into a huge array of elements.

  . using count, we can resize the array we are interested in.

  . neat feature: we actually can have several apr_pollfd_set_t
    indexes floating around, neatly pointing into different parts
    of a huge apr_pollfd_t array [sketch below].
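
For illustration only, using the layout above:

void windows_example(apr_pool_t *pool, apr_pollfd_t *huge /* 1000 elems */)
{
    apr_pollfd_set_t listeners, clients;

    listeners.p = pool;   listeners.internal = NULL;
    listeners.first = &huge[0];    listeners.count = 10;

    clients.p = pool;     clients.internal = NULL;
    clients.first = &huge[10];     clients.count = 990;

    /* each set now polls its own window of the same underlying array */
}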

DISADVANTAGES

  . if not using apr_poll_setup(), the user is absolutely responsible
    for initializing *internal to NULL [one-liner below].

  . two structures to think about.  Not complex structures, but two,
    none the less.
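
i.e., if you build the set by hand (my_pool, my_array, and n are just
placeholders):

apr_pollfd_set_t set;

set.p = my_pool;
set.internal = NULL;      /* the one initialization you MUST NOT skip */
set.first = my_array;
set.count = n;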

Comments or ideas?  This was idle brainstorming.

Bill