Shannon, Bryan wrote:

> The hash building does cause a huge difference in mem-usage. (obvious).
It depends. If you are making millions of hashes (as opposed to millions of
arrays), it will make a difference. If you're iterating through the rows and
thus making one hash vs. one array, it will not matter.

> The constant folding really does the trick

That is a nice addition.

> fetchrow_hashref() actually doesn't give you the same hashref each time as
> fetchrow_arrayref() does... So until that is fixed, I have to double-up on
> the hash copying into my "iterator" ...

Seems like you'd just be slinging references around, which is pretty
efficient.

> But maybe I'll set my eyes on DBI's implementation and see if constant
> folding might better be done there... Though beyond the scope of this
> mailing list, I could see potential for a "fetchrow_resultset()" that gives
> you hash-like access (and indeed could be tied) but would use constant
> folding transparently...

You can code one of those yourself pretty easily (just copy the data into a
hash), but I don't recommend using tied variables. They're slow.

> or... Maybe I'll make my own incredibly unsafe and insecure (but very quick)
> Stash replacement that accesses the data directly.....

That would actually make a big difference, and you should consider it if you
have serious performance problems. There was some discussion about an
alternate simple Stash implementation at TPC last year. If you make some
assumptions about the data structures that will be handed to TT (disallowing
objects and code refs, for example), you can make a huge difference in the
performance of the stash.

- Perrin
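To make the row-reuse point above concrete, here is a minimal sketch of
fetching rows through DBI while copying each row into one reused hash, so
only a single hash is ever built regardless of how many rows are iterated.
The SQLite DSN, table, and column names are hypothetical placeholders; the
only DBI behaviour relied on is that fetchrow_arrayref() returns the same
internal array for every row.

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Hypothetical connection details, purely for illustration.
    my $dbh = DBI->connect('dbi:SQLite:dbname=example.db', '', '',
                           { RaiseError => 1 });

    my $sth = $dbh->prepare('SELECT id, name, price FROM products');
    $sth->execute;

    # fetchrow_arrayref() reuses one internal array for every row, so the
    # only per-row cost is copying the column values into %row -- unlike
    # fetchrow_hashref(), which builds a fresh hash for each row.
    my @names = @{ $sth->{NAME_lc} };
    my %row;
    while ( my $ary = $sth->fetchrow_arrayref ) {
        @row{@names} = @$ary;    # hash-slice copy into the reused hash
        printf "%s costs %.2f\n", $row{name}, $row{price};
    }

    $sth->finish;
    $dbh->disconnect;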

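And a rough illustration of the kind of simplified stash Perrin mentions.
This is a sketch, not Template Toolkit's actual Template::Stash API: it
assumes templates only ever see plain nested hashes and arrays (no objects,
no code refs, no virtual methods) and resolves dotted paths by walking the
data directly. The package name and method are hypothetical.

    package My::FastStash;    # hypothetical name, not part of Template Toolkit
    use strict;
    use warnings;

    # A deliberately minimal stash: plain nested hashes and arrays only.
    # Disallowing objects and code refs is exactly the assumption that
    # buys the speed, at the cost of TT features.
    sub new {
        my ($class, $data) = @_;
        return bless { data => $data || {} }, $class;
    }

    # Resolve a dotted path like "user.orders.0.total" by walking the
    # structure directly instead of dispatching through Template::Stash.
    sub get {
        my ($self, $path) = @_;
        my $node = $self->{data};
        for my $key (split /\./, $path) {
            if ( ref $node eq 'HASH' ) {
                $node = $node->{$key};
            }
            elsif ( ref $node eq 'ARRAY' && $key =~ /^\d+$/ ) {
                $node = $node->[$key];
            }
            else {
                return '';    # TT-style: unresolvable paths become empty
            }
        }
        return defined $node ? $node : '';
    }

    1;

Because there is no method dispatch, no argument handling, and no
virtual-method lookup, each get() is just a loop of hash and array lookups,
which is where the speed would come from under those assumptions.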