> do we want (size_t) members of apr_table nelts values, or are we happy
> to have them int?

I'll vote size_t -- int suggests that it's possible for us to have
negative-sized arrays, which strikes me as kind of silly.

I've always been fond of size_t for nelts-type members, because when I
write *static* arrays, I use this code pattern frequently:

    for (i = 0; i < (sizeof(array) / sizeof(array[0])); i++)
        do_stuff(array[i]);

Besides which, if it gets made into an unsigned quantity, it will
eliminate a *pile* of casts (added only to silence signed/unsigned
comparison warnings) in my code. ;)
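To make that concrete, here's a small self-contained version of the
pattern (the names array and do_stuff are just placeholders): with a
size_t index the comparison against the sizeof arithmetic compiles
clean, while an int index would need a cast on one side:

    #include <stdio.h>

    static const char *names[] = { "red", "green", "blue" };

    static void do_stuff(const char *s)   /* placeholder */
    {
        puts(s);
    }

    int main(void)
    {
        size_t i;   /* unsigned, like the sizeof() expression below */

        /* sizeof(array) / sizeof(array[0]) yields a size_t; an int
           index here would draw a signed/unsigned comparison warning
           and force a cast */
        for (i = 0; i < (sizeof(names) / sizeof(names[0])); i++)
            do_stuff(names[i]);

        return 0;
    }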
> I think int is fine.  If apr_table_t were rewritten to be scalable to
> > 2^31 elements, it would probably acquire a different iterator
> interface.

I'm not sure that's a valid concern, unless I'm missing something
non-obvious.  Are you worried only about the cost of running comp() more
than two billion times?  If so -- I would suggest that if you're storing
that many records in an apr_table_t, and need to apr_table_do() over
them regularly, you have simply chosen the wrong storage abstraction and
basically deserve what you get...

I currently store about 150K highly volatile records in an apr hash
table (with frequent disk flushes), and recognize that it's not the best
choice (although the allocation changes made to it about six months ago,
combined with fiddling with the initial alloc quantity, made a BIG
difference)...  I'm looking at either stuffing those records into Oracle
or a Berkeley-style DBM (haven't decided which yet; I have beta code
written for both, so I can pick whichever solution agrees best with
real life).
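For reference, this is roughly the iterator interface in question -- a
minimal sketch of apr_table_do() with a comp()-style callback (the
print_entry function and the table contents are made up for
illustration):

    #include <stdio.h>

    #include "apr_general.h"
    #include "apr_pools.h"
    #include "apr_tables.h"

    /* comp()-style callback: called once per visited entry;
       returning 0 stops the iteration, non-zero continues it */
    static int print_entry(void *rec, const char *key, const char *val)
    {
        (void)rec;
        printf("%s: %s\n", key, val);
        return 1;
    }

    int main(void)
    {
        apr_pool_t  *pool;
        apr_table_t *t;

        apr_initialize();
        apr_pool_create(&pool, NULL);

        t = apr_table_make(pool, 4);
        apr_table_set(t, "Host", "example.com");
        apr_table_set(t, "Accept", "*/*");

        /* a NULL-terminated empty key list means "visit every entry" */
        apr_table_do(print_entry, NULL, t, NULL);

        apr_pool_destroy(pool);
        apr_terminate();
        return 0;
    }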
Wes

--
Wesley W. Garland
Director, Product Development
PageMail, Inc.
+1 613 542 2787 x 102