> On 14 Mar 2020, at 01:18, Steven Fosdick <stevenfosd...@gmail.com> wrote:
> while (!apr_dbd_get_row(drv, rpool, res, &row, -1)) {
>     for (int c = 0; c < cols; c++) {
>         const char *name = apr_dbd_get_name(drv, res, c);
>         const char *value = apr_dbd_get_entry(drv, row, c);
>         printf("%s=%s\n", name, value);
>     }
>     apr_pool_clear(rpool);
>     putchar('\n');
> }
Clearing the pool from within a loop that's still using it? Yes, that will segfault!
> I can obviously resolve this by initialising row to NULL each time:
Indeed. Or if you're dealing with millions of rows, you might consider
a hybrid approach: clear and reset to NULL every $count rows?
> It seems to me that if there is some data to be retained between
> fetching one row and the next it should really be allocated from the
> memory pool given to apr_dbd_select, not the one given to
> apr_dbd_get_row. What do you think?
Indeed, I wonder if you're the first to try that particular pool usage?
It's your choice of pools. Are you suggesting apr_dbd_get_row should
take two pool arguments, or that apr_dbd_results_t should remember
its pool and allocate rows out of it? Mightn't that itself become a nasty
gotcha for programmers, and risk blowing up existing programs?
--
Nick Kew