On Wed, Jul 10, 2019 at 6:03 PM Thomas Munro <thomas.mu...@gmail.com> wrote:
> Hmm.  I wonder if we should just make ShmContextFree() do nothing!  And
> make ShmContextAlloc() allocate (say) 8KB chunks (or larger if needed
> for larger allocations) and then hand out small pieces from the
> 'current' chunk as needed.  Then the only way to free memory is to
> destroy contexts, but for the use case being discussed, that might
> actually be OK.  I suppose you'd want to call this implementation
> something different, like ShmRegionContext, ShmZoneContext or
> ShmArenaContext[1].
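The quoted zone/region idea could look roughly like the sketch below. To be clear, this is my own minimal illustration, not code from any patch: the function names (ShmZoneContextCreate, ShmZoneAlloc, and so on) are invented, malloc() stands in for carving chunks out of shared memory, and all locking is omitted.

```c
#include <stddef.h>
#include <stdlib.h>

#define ZONE_CHUNK_SIZE 8192	/* default chunk size, per the email */

typedef struct ZoneChunk
{
	struct ZoneChunk *next;		/* previously filled chunk, if any */
	size_t		used;			/* bytes handed out from this chunk */
	size_t		size;			/* usable payload size */
	char		data[];			/* payload */
} ZoneChunk;

typedef struct ShmZoneContext
{
	ZoneChunk  *current;		/* chunk we're currently carving up */
} ShmZoneContext;

static ZoneChunk *
zone_new_chunk(size_t payload)
{
	ZoneChunk  *chunk = malloc(offsetof(ZoneChunk, data) + payload);

	chunk->next = NULL;
	chunk->used = 0;
	chunk->size = payload;
	return chunk;
}

ShmZoneContext *
ShmZoneContextCreate(void)
{
	ShmZoneContext *cxt = malloc(sizeof(ShmZoneContext));

	cxt->current = zone_new_chunk(ZONE_CHUNK_SIZE);
	return cxt;
}

void *
ShmZoneAlloc(ShmZoneContext *cxt, size_t size)
{
	ZoneChunk  *chunk = cxt->current;

	size = (size + 7) & ~(size_t) 7;	/* keep 8-byte alignment */
	if (chunk->used + size > chunk->size)
	{
		/* start a new chunk, oversized if the request demands it */
		size_t		payload = size > ZONE_CHUNK_SIZE ? size : ZONE_CHUNK_SIZE;

		chunk = zone_new_chunk(payload);
		chunk->next = cxt->current;
		cxt->current = chunk;
	}
	chunk->used += size;
	return chunk->data + chunk->used - size;
}

/* free is deliberately a no-op: memory is only reclaimed in bulk */
void
ShmZoneFree(void *ptr)
{
	(void) ptr;
}

void
ShmZoneContextDestroy(ShmZoneContext *cxt)
{
	ZoneChunk  *chunk = cxt->current;

	while (chunk)
	{
		ZoneChunk  *next = chunk->next;

		free(chunk);
		chunk = next;
	}
	free(cxt);
}
```

The point of the design is that individual frees cost nothing and the allocator needs no per-chunk bookkeeping beyond a bump pointer; the trade-off is that nothing is reclaimed until the whole context is destroyed.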
<after sleeping on this>

I guess what I said above is only really appropriate for complex things
like plans that have their own contexts so that we can delete them
easily "in bulk".  I guess it's not true for caches of simpler objects
like catcache, that don't want a context for each cached thing and want
to free objects "retail" (one by one).  So I guess you might want
something more like your current patch for (say) SharedCatCache, and
something like the above-quoted idea for (say) SharedPlanCache or
SharedRelCache.

For an implementation that supports retail free, perhaps you could
store the address of the clean-up list element in some extra bytes
before the returned pointer, so you don't have to find it by linear
search.  Next, I suppose you don't want to leave holes in the middle of
the array, so perhaps instead of writing NULL there, you could transfer
the last item in the array to this location (with associated
concurrency problems).

Since I don't think anyone ever said it explicitly, the above
discussion is all about how we get to this situation, while making sure
that we're mostly solving problems that occur in both multi-process and
multi-threaded designs:

  shared_metadata_cache = '100MB'
  shared_plan_cache = '100MB'

--
Thomas Munro
https://enterprisedb.com
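The retail-free scheme described above (a hidden header before each returned pointer recording its clean-up list slot, with the hole filled by moving the array's last element) could be sketched like this. Again this is only an illustration under assumptions: the names are invented, malloc() stands in for shared memory, and the "associated concurrency problems" (locking around the array shuffle) are left out.

```c
#include <stdlib.h>

/* hidden header stored immediately before each returned pointer */
typedef struct RetailHeader
{
	size_t		slot;			/* index of our entry in the clean-up array */
} RetailHeader;

typedef struct RetailContext
{
	char	  **items;			/* clean-up array of live payload pointers */
	size_t		nitems;
	size_t		capacity;
} RetailContext;

RetailContext *
RetailContextCreate(void)
{
	RetailContext *cxt = malloc(sizeof(RetailContext));

	cxt->capacity = 8;
	cxt->nitems = 0;
	cxt->items = malloc(cxt->capacity * sizeof(char *));
	return cxt;
}

void *
RetailAlloc(RetailContext *cxt, size_t size)
{
	RetailHeader *hdr = malloc(sizeof(RetailHeader) + size);
	char	   *payload = (char *) (hdr + 1);

	if (cxt->nitems == cxt->capacity)
	{
		cxt->capacity *= 2;
		cxt->items = realloc(cxt->items, cxt->capacity * sizeof(char *));
	}
	/* remember our slot so free needs no linear search */
	hdr->slot = cxt->nitems;
	cxt->items[cxt->nitems++] = payload;
	return payload;
}

void
RetailFree(RetailContext *cxt, void *ptr)
{
	RetailHeader *hdr = (RetailHeader *) ptr - 1;
	size_t		slot = hdr->slot;
	char	   *last = cxt->items[--cxt->nitems];

	if (last != (char *) ptr)
	{
		/* transfer the last array entry into the hole, fix its header */
		cxt->items[slot] = last;
		((RetailHeader *) last - 1)->slot = slot;
	}
	free(hdr);
}

void
RetailContextDestroy(RetailContext *cxt)
{
	/* bulk clean-up: release everything still registered */
	for (size_t i = 0; i < cxt->nitems; i++)
		free((RetailHeader *) cxt->items[i] - 1);
	free(cxt->items);
	free(cxt);
}
```

The swap-with-last trick keeps the clean-up array dense, so bulk destruction is a simple loop, at the cost of having to update the moved object's header (which is where the concurrency problems mentioned above come in).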