Harvell F wrote:

Getting back to the original posting, as I remember it, the question was about seldom-changed information. In that case, and assuming a repetitive query as above, a simple query-results cache that is keyed on the passed SQL statement string and that simply returns the previously cooked result set would be a really big performance win.

I believe the main point that Mark made was that the extra overhead is in the SQL parsing and query planning - this is the part that Postgres can't get around. Even if you set up simple tables for caching, every request still goes through the parser and planner, and you lose the benefits that memcached has. Alternatively, you fork those requests off before the planner and lose the benefits of Postgres. The main benefit of using memcached is to bypass the parsing and query planning.
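
For concreteness, here is a minimal sketch of that kind of read-through cache on the client side. It assumes Python with the pymemcache and psycopg2 libraries; the connection details, key scheme, and TTL are all hypothetical:

import hashlib
import json

import psycopg2                                # assumed Postgres driver
from pymemcache.client.base import Client     # assumed memcached client

memc = Client(("127.0.0.1", 11211))
conn = psycopg2.connect("dbname=test")        # hypothetical connection

def cached_query(sql, ttl=300):
    # Key the cache on the SQL text itself; a hash keeps keys short.
    key = "sql:" + hashlib.sha1(sql.encode()).hexdigest()
    hit = memc.get(key)
    if hit is not None:
        # Cache hit: Postgres never sees the statement, so no parsing
        # or planning happens at all.
        return json.loads(hit)
    with conn.cursor() as cur:
        cur.execute(sql)                       # full parse/plan/execute path
        rows = cur.fetchall()
    # Rows round-trip through JSON (tuples come back as lists), which is
    # fine for a sketch aimed at seldom-changed data.
    memc.set(key, json.dumps(rows, default=str), expire=ttl)
    return rows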

You will find there is more to SQL parsing than you might first think. The parser has to find the components that make up the SQL statement (tables, column names, functions), check that they exist and can be used in the context of the given SQL, and check that the given data matches the context it is to be used in. It must also verify that the current user has sufficient privileges to perform the requested task. Then the server locates the data, whether it be in the memory cache, on disk, or in an integrated version of memcached; this includes checking whether another user has locked the data in order to change it, and whether more than one version of the data exists (committed and uncommitted). Only then are the results sent back to the requesting client.
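
You can actually watch the planner's share of that work: EXPLAIN ANALYZE reports planning and execution separately. A quick sketch, reusing the assumed psycopg2 setup above (the table and filter are hypothetical):

import psycopg2

conn = psycopg2.connect("dbname=test")        # hypothetical connection
with conn.cursor() as cur:
    cur.execute("EXPLAIN ANALYZE SELECT * FROM items WHERE id = 42")
    for (line,) in cur.fetchall():
        print(line)

# Recent Postgres versions print lines of the form:
#   Planning Time: 0.210 ms
#   Execution Time: 0.045 ms
# For a trivial indexed lookup, planning can rival or exceed execution,
# which is exactly the overhead a memcached hit avoids.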

Registering each cache entry by the tables included in the query and invalidating the cache on a committed update or insert transaction to any of those tables would, transparently, solve the consistency problem.
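
A rough sketch of that registration scheme, with a plain in-process dict standing in for memcached (extracting table names from arbitrary SQL is glossed over here; the caller passes them in by hand):

import hashlib
from collections import defaultdict

cache = {}                           # sql key -> cached result rows
table_to_keys = defaultdict(set)     # table name -> cache keys to drop

def key_for(sql):
    return hashlib.sha1(sql.encode()).hexdigest()

def register(sql, tables, rows):
    # Store the result and remember which tables it depends on.
    k = key_for(sql)
    cache[k] = rows
    for t in tables:
        table_to_keys[t].add(k)

def lookup(sql):
    return cache.get(key_for(sql))

def invalidate(table):
    # Call this when a transaction touching `table` commits.
    for k in table_to_keys.pop(table, set()):
        cache.pop(k, None)

# Usage:
register("SELECT * FROM prices", ["prices"], [("widget", 9.99)])
assert lookup("SELECT * FROM prices") is not None
invalidate("prices")                 # a committed INSERT/UPDATE on prices
assert lookup("SELECT * FROM prices") is None

In a real setup the invalidate() call could be driven from the database side, e.g. a trigger plus LISTEN/NOTIFY, since NOTIFY messages are only delivered on commit.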

That was part of my thinking when I made the suggestion of adding something like memcached into Postgres.

