On Thu, 7 Aug 2003, Yaroslav Mazurak wrote:

> scott.marlowe wrote:
> > On Thu, 7 Aug 2003, Yaroslav Mazurak wrote:
> >>Shridhar Daithankar wrote:
> > That's a nice theory, but it doesn't work out that way.  About every two 
> > months someone shows up wanting postgresql to use all the memory in their 
> > box for caching and we wind up explaining that the kernel is better at 
> > caching than postgresql is, and how it's better not to push the usage of 
> > the memory right up to the limit.
>       I've been reading this mailing list for just a few days. :)))

We all get started somewhere.  Glad to have you on the list.

> > The reason you don't want to use every bit for postgresql is that, if you 
> > use add load after that you may make the machine start to swap out and 
> > slow down considerably.
>       What kind of load? PostgreSQL or something else? For this PC, the
> primary task and critical goal is the DBMS and its performance.

Just PostgreSQL.  Imagine that you set up the machine with a 64 Meg sort_mem
setting, and it has only two or three users right now.  If the number of 
users jumps up to 16 or 32, then it's quite possible that all those 
connections can each spawn a sort or two, and if they are large sorts, 
then poof, all your memory is gone and your box is swapping out like mad.
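The danger here is simple multiplication; as a back-of-the-envelope sketch (the connection and sort counts below are hypothetical, taken from the scenario above):

```python
# Rough worst-case memory demand from sort_mem. This is a sketch, not a
# hard limit: each large sort step in each backend can claim up to
# sort_mem on its own, so the figures multiply.
sort_mem_mb = 64        # the per-sort allowance from the example above
connections = 32        # hypothetical number of concurrent users
sorts_per_conn = 2      # e.g. a query that spawns two large sorts

worst_case_mb = sort_mem_mb * connections * sorts_per_conn
print(worst_case_mb)    # 4096 -- Megs, far beyond a 384 Meg box
```

With numbers like that, a box with 384 Megs of RAM has no choice but to swap.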

> > My guess is that this is exactly what's happening to you, you're using so 
> > much memory that the machine is running out and slowing down.
> > Drop shared_buffers to 1000 to 4000, sort_mem to 8192 and start over from 
> > there.  Then, increase them each one at a time until there's no increase 
> > in speed, or stop if it starts getting slower and back off.
> > bigger is NOT always better.
>       Suppose I want to use all available RAM for PostgreSQL.
>       Without executing query (PostgreSQL is running) top say now:
> Mem: 71M Active, 23M Inact, 72M Wired, 436K Cache, 48M Buf, 208M Free
> Swap: 368M Total, 2852K Used, 366M Free
>       Is it right to figure that I can use 384M (total RAM) - 72M
> (wired) - 48M (buf) = 264M for PostgreSQL?
>       Hence, if I set effective_cache_size to 24M (3072 8K blocks), a
> reasonable value (less than 240M, say 48M) for sort_mem, and some value
> for shared_buffers (i.e. 24M, or 6144 4K blocks (FreeBSD), or 3072 8K
> blocks (PostgreSQL)), will PostgreSQL dynamically allocate the rest of
> RAM by itself: 264M (total free with OS cache) - 24M (reserved for OS
> cache) - 48M (sort) - 24M (shared) = 168M?

It's important to understand that effective_cache_size is simply a number 
that tells the query planner roughly how big the kernel's disk cache is; it 
doesn't cause PostgreSQL to allocate or reserve any memory itself.

Note that in your top output, it shows 48 M buffer, and 208M free, and 
436k cache.  Adding those up comes to about 256 Megs of available cache to 
the OS.

But postgresql itself is going to use some of that memory for sorts and 
buffers, so once that's accounted for, the cache actually available -- and 
hence a sane effective_cache_size -- will really be about 100 to 150 Meg.
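That estimate is plain arithmetic; here's a sketch (the 110 Meg allowance for postgresql's own use is a hypothetical number, not something from the top output):

```python
# Sum the OS-cacheable memory reported by top (values in megabytes).
buf_mb, free_mb, cache_mb = 48, 208, 0.436   # 48M Buf, 208M Free, 436K Cache
os_cache_mb = buf_mb + free_mb + cache_mb
print(round(os_cache_mb))          # 256 -- Megs available to the OS

# PostgreSQL itself will eat into that for sorts and shared buffers.
# The allowance below is a guess, which is why the usable cache lands
# somewhere in the 100 to 150 Meg range rather than at an exact figure.
pg_own_use_mb = 110
print(round(os_cache_mb - pg_own_use_mb))    # 146
```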

Like someone else said, you set effective_cache_size last.  First set 
buffers to a few thousand (1000 to 5000 is usually a good number) and set 
sort_mem to 8 to 32 meg to start, and adjust it as you test the database 
under parallel load.  Then, take the numbers you get for free/buffer/cache 
from top to figure out effective_cache_size.
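As a concrete sketch, that starting point might look like this in postgresql.conf (these numbers are illustrative 7.x-era starting values to tune from, not recommendations):

```ini
# postgresql.conf -- hypothetical starting values for the procedure above
shared_buffers = 2000         # in 8k blocks (~16 Meg); start 1000-5000
sort_mem = 8192               # in kilobytes (8 Meg); adjust under parallel load
effective_cache_size = 12800  # in 8k blocks; set LAST, from top's buf/free/cache
```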

Again, I'll repeat what I said in an earlier post on this, the size of 
buffers and effective_cache_size are set in POSTGRESQL blocks.  i.e. your 
kernel page block size is meaningless here.  If you have 100 Meg left 
over, then you need to do the math as:

  100 Megs = 102400 k
  102400 k / 8 k per block = 12800 blocks

so effective_cache_size should be 12800 (8k blocks.)
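The conversion is just arithmetic, e.g.:

```python
# Convert a memory budget in megabytes into PostgreSQL 8k blocks.
def mb_to_pg_blocks(mb, block_kb=8):
    return mb * 1024 // block_kb

print(mb_to_pg_blocks(100))   # 12800 -- the effective_cache_size for 100 Megs
```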

Reading your other response I got the feeling you may have been under the 
impression that this is set in OS blocks, so I just wanted to make sure it 
was clear it's not.
