I found a performance problem with prefetching yesterday: at some point performance degraded catastrophically (a >20x slowdown).
What happened was as follows:

- A query was executed with prefetch. The various to-many relationships were resolved with one query per relationship, very nice.
- The size of the query result plus the prefetched records eventually broke the default cache size limit.
- The to-many relationships were then resolved one at a time (one SQL query per record) in addition to the one query per relationship.

In my case the solution was simple: increase the cache size by 10x. :-)

The access pattern of my application is sequential access, which doesn't gel very well with an LRU cache. I have no idea what a good caching strategy for data rows might be, even if it were pluggable. The access is sequential, and caching really works best with localized accesses...

Also, I'm wondering why the records aren't locked in memory in the first place. I guess cayenne.DataRowStore.snapshot.size refers to referenced and unreferenced objects combined, and the prefetched objects are not referenced until the query and prefetching are complete, so they are tossed out of the cache during the query + prefetch phase. Once the query is complete, the total memory usage will be identical no matter what the cache size is set to and how many queries were executed, so I don't see the upside of tossing out objects during the query + prefetch phase. For the particular case I was working on, nothing but loading all the objects into memory would do the trick, so it's not really a question of caching strategies. A rough sketch of the scenario is included below the signature.

--
Øyvind Harboe
http://www.zylin.com/zy1000.html
ARM7 ARM9 XScale Cortex JTAG debugger and flash programmer
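
To make the failure mode concrete, here is a rough Java sketch of the access pattern described above. It assumes a Cayenne 2.x-style API (org.apache.cayenne packages); the "Order" entity and its "items" to-many relationship are made-up names for illustration, not my actual schema:

import java.util.Collection;
import java.util.List;
import org.apache.cayenne.DataObject;
import org.apache.cayenne.access.DataContext;
import org.apache.cayenne.query.SelectQuery;

public class PrefetchSketch {
    public static void main(String[] args) {
        DataContext context = DataContext.createDataContext();

        // One SELECT for the root entity plus one SELECT per prefetched
        // to-many relationship, as long as the snapshots stay in the cache.
        SelectQuery query = new SelectQuery("Order");   // hypothetical entity name
        query.addPrefetch("items");                     // hypothetical to-many relationship

        List results = context.performQuery(query);

        // Sequential pass over the results. If the prefetched snapshots were
        // evicted because the result set exceeded
        // cayenne.DataRowStore.snapshot.size, each to-many is faulted again
        // here with one extra SELECT per object -- the >20x slowdown above.
        for (Object o : results) {
            DataObject order = (DataObject) o;
            Collection items = (Collection) order.readProperty("items");
            items.size();
        }
    }
}

The workaround in my case was simply raising the snapshot cache limit (the cayenne.DataRowStore.snapshot.size property on the DataDomain) well above the size of the prefetched result set.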
