>> The cache on shmem is managed by a hash table, which also resides on
>> shmem. Since shmem cannot be expanded dynamically, the maximum number
>> of entries in the hash table is fixed as well. For this I have added a
>> new directive called "memqcache_max_num_cache" which specifies the max
>> number of hash table entries. If the number of cache entries exceeds
>> it, an error is simply triggered at this point. Maybe we could evict
>> old cache entries to make room for new hash entries in this case.
> 
> I don't agree that an error should be raised when cached entries
> exceed the limit. Instead, I suggest not caching entries that exceed
> the limit, because such queries generally don't need high performance.

I think you misunderstand. I'm talking about the limit on the number of
hash entries. You are probably talking about memqcache_maxcache, which
is the maximum SELECT result size in bytes. Yes, I agree that SELECTs
returning very large results tend not to need high performance. So the
current implementation simply does not create cache entries for such
queries, as you suggest.
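To make the distinction between the two limits concrete, here is a
minimal sketch of the decision logic described above. All names here
(decide_caching, MAXCACHE_BYTES, MAX_NUM_CACHE, the enum values) are
illustrative placeholders, not pgpool-II's actual identifiers, and the
limit values are arbitrary stand-ins for the configured directives:

```c
/* Hypothetical sketch of the two separate limits discussed above.
 * All identifiers are illustrative, not pgpool-II's real ones. */
#include <stddef.h>

#define MAXCACHE_BYTES 409600   /* stand-in for memqcache_maxcache */
#define MAX_NUM_CACHE  1000000  /* stand-in for memqcache_max_num_cache */

typedef enum {
    CACHE_STORE,      /* result fits; create a cache entry */
    CACHE_SKIP,       /* result too large; silently skip caching */
    CACHE_TABLE_FULL  /* shmem hash table is full; error (or evict) */
} cache_decision;

/* Decide what to do with a SELECT result of result_bytes bytes when
 * num_entries entries already exist in the shmem hash table. */
static cache_decision decide_caching(size_t result_bytes, size_t num_entries)
{
    if (result_bytes > MAXCACHE_BYTES)
        return CACHE_SKIP;        /* oversized results are never cached */
    if (num_entries >= MAX_NUM_CACHE)
        return CACHE_TABLE_FULL;  /* fixed-size table: currently an error */
    return CACHE_STORE;
}
```

The point is that the two directives guard different resources: one
bounds the byte size of any single cached result, the other bounds the
total entry count in the fixed-size shmem hash table.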
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp


_______________________________________________
Pgpool-hackers mailing list
Pgpool-hackers@pgfoundry.org
http://pgfoundry.org/mailman/listinfo/pgpool-hackers