On Fri, Feb 26, 2010 at 11:49 AM, Jorge Montero
<jorge_mont...@homedecorators.com> wrote:
>
>
>>>> Tory M Blue <tmb...@gmail.com> 02/26/10 12:52 PM >>>
>>>
>>> This is too much. Since you have 300 connections, you will probably swap
>>> because of this setting, since each connection may use this much
>>> work_mem. The rule of the thumb is to set this to a lower general value
>>> (say, 1-2 MB), and set it per-query when needed.

>
> 1-2MB is good enough for many families of queries, but it's hard to say what
> the right default should be for you. The right number can be estimated by
> running explain analyze on your most common queries, with parameters that are
> representative of regular use, and seeing how much memory they actually claim
> to use. In my case, for example, most of my queries do just fine with 10 MB,
> while the reporting queries that accumulate quite a bit of data request up to
> 60 MB.
>
> If your average query needs 100 MB, it'd still mean that 40 connections take 
> 4 gigs worth of work memory, which might be better spent caching the database.
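
For reference, a rough sketch of the "low default, raise it per-query" approach
described above; the 64MB figure, the orders table, and the reporting query are
purely illustrative, not from this thread:

  -- global default stays small, e.g. in postgresql.conf:
  --   work_mem = 2MB

  BEGIN;
  SET LOCAL work_mem = '64MB';      -- applies only until COMMIT/ROLLBACK
  SELECT customer_id, sum(amount)   -- hypothetical reporting query
    FROM orders
   GROUP BY customer_id
   ORDER BY 2 DESC;
  COMMIT;

SET LOCAL keeps the higher setting scoped to the one transaction, so the other
connections keep the small default.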

Ya my boxes are pretty well stacked, but a question: how does one get
the memory usage of a query? You suggest looking at explain analyze, but
that gives timing and costs; is one of those numbers memory, or do I
have to take the values and do some math?

--------------------------------------------------------------------------------------------------------------------------
 Function Scan on listings_search  (cost=0.00..260.00 rows=1000
width=108) (actual time=904.374..904.383 rows=10 loops=1)
 Total runtime: 904.411 ms
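
(Side note, since it may not be obvious from the plan above: EXPLAIN ANALYZE
only reports memory on nodes that actually consume work_mem, such as sorts and
hashes; a Function Scan like this one won't show any. A rough sketch against a
hypothetical "listings" table with a "price" column:

  EXPLAIN ANALYZE SELECT * FROM listings ORDER BY price;
  -- on 8.3 and later the Sort node in the output includes a line like
  --   Sort Method: quicksort  Memory: <n>kB
  -- or, if work_mem was too small and the sort spilled to disk,
  --   Sort Method: external merge  Disk: <n>kB

Whichever number appears there is the one to compare against the work_mem
setting.)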

Thanks
Tory

Also, I don't think this 5 second thing is the DB; it's certainly not checkpoints.
