Re: [PERFORM] How much memory is PostgreSQL using

2010-04-03 Thread Guillaume Lelarge
On 02/04/2010 22:10, Campbell, Lance wrote:
 Greg,
 Thanks for your help.
 
 1) How does the number of buffers provided by pg_buffercache compare to
 memory (buffers * X = Y meg)?  

1 buffer is 8 KB.

 2) Is there a way to tell how many total buffers I have available/max?

With pg_buffercache, yes.

SELECT count(*)
FROM pg_buffercache
WHERE relfilenode IS NOT NULL;

should give you the number of non-empty buffers.
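
To turn that count into memory, multiply by the block size (8 KB by
default), and count without the WHERE clause to get the total number of
buffers allocated, i.e. your shared_buffers setting. A minimal sketch,
assuming the default 8 KB block size:

SELECT count(*) AS total_buffers
FROM pg_buffercache;

SELECT count(*) * 8 / 1024 AS used_megabytes
FROM pg_buffercache
WHERE relfilenode IS NOT NULL;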


-- 
Guillaume.
 http://www.postgresqlfr.org
 http://dalibo.com



Re: [PERFORM] How much memory is PostgreSQL using

2010-04-02 Thread Campbell, Lance
Greg,
Thanks for your help.

1) How does the number of buffers provided by pg_buffercache compare to
memory (buffers * X = Y meg)?  
2) Is there a way to tell how many total buffers I have available/max?

Thanks,

Lance Campbell
Software Architect/DBA/Project Manager
Web Services at Public Affairs
217-333-0382


-----Original Message-----
From: Greg Smith [mailto:g...@2ndquadrant.com] 
Sent: Monday, March 29, 2010 11:54 AM
To: Campbell, Lance
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] How much memory is PostgreSQL using

Campbell, Lance wrote:

 Or is there some way to ask PostgreSQL how much memory are you using 
 to cache disk blocks currently?


You can install contrib/pg_buffercache into each database and count how 
many blocks are in use there.  Note that running queries using that 
diagnostic tool is really intensive due to the locks it takes, so be 
careful not to do that often on a production system.


 When you do a PG_DUMP does PostgreSQL put the disk blocks into shared 
 buffers as it runs?


To some extent.  Most pg_dump activity involves sequential scans that 
are reading an entire table.  Those are no different from any other 
process that will put disk blocks into shared_buffers.  However, that 
usage pattern makes pg_dump particularly likely to run into an 
optimization in 8.3 and later that limits how much of shared_buffers is 
used when sequentially scanning a large table.  See P10 of 
http://www.westnet.com/~gsmith/content/postgresql/InsideBufferCache.pdf 
for the exact implementation.  Basically, anything bigger than 
shared_buffers / 4 uses a 256K ring to limit its cache use, but it's a 
little more complicated than that.

-- 
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




[PERFORM] How much memory is PostgreSQL using

2010-03-29 Thread Campbell, Lance
PostgreSQL 8.4.3

OS: Linux Red Hat 4.x

I changed my strategy with PostgreSQL recently to use a large segment of
memory for shared buffers with the idea of caching disk blocks.  How can
I see how much memory PostgreSQL is using for this?

I tried:

ps aux | grep post | sort -k4

This lists the processes using memory at the bottom.  Are they sharing
memory, or is each using its own separate blocks of memory?

When I do top I see that VIRT is the value of my shared buffers plus a
tiny bit.  I see %MEM is only 2.4%, 2.6%, 1.0%, 1.5%, and 1.1% for all of
the running processes.  Do I add these percentages up to see what amount
of VIRT I am really using?

Or is there some way to ask PostgreSQL how much memory are you using to
cache disk blocks currently?

When you do a PG_DUMP does PostgreSQL put the disk blocks into shared
buffers as it runs?

Thanks,

Lance Campbell
Software Architect/DBA/Project Manager
Web Services at Public Affairs
217-333-0382



Re: [PERFORM] How much memory is PostgreSQL using

2010-03-29 Thread Greg Smith

Campbell, Lance wrote:


Or is there some way to ask PostgreSQL how much memory are you using 
to cache disk blocks currently?




You can install contrib/pg_buffercache into each database and count how 
many blocks are in use there.  Note that running queries using that 
diagnostic tool is really intensive due to the locks it takes, so be 
careful not to do that often on a production system.
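
On 8.4 the module is installed by running its SQL script into each
database you want to inspect; the path varies by packaging and "yourdb"
is a placeholder, so treat this as a sketch:

psql -d yourdb -f /usr/share/pgsql/contrib/pg_buffercache.sql

SELECT count(*) AS used_buffers
FROM pg_buffercache
WHERE relfilenode IS NOT NULL;

Each buffer is 8 KB by default, so used_buffers * 8 KB is roughly the
memory currently holding cached disk blocks.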



When you do a PG_DUMP does PostgreSQL put the disk blocks into shared 
buffers as it runs?




To some extent.  Most pg_dump activity involves sequential scans that 
are reading an entire table.  Those are no different from any other 
process that will put disk blocks into shared_buffers.  However, that 
usage pattern makes pg_dump particularly likely to run into an 
optimization in 8.3 and later that limits how much of shared_buffers is 
used when sequentially scanning a large table.  See P10 of 
http://www.westnet.com/~gsmith/content/postgresql/InsideBufferCache.pdf 
for the exact implementation.  Basically, anything bigger than 
shared_buffers / 4 uses a 256K ring to limit its cache use, but it's a 
little more complicated than that.
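
One way to observe that effect, sketched here with pg_buffercache (the
join below assumes the scanned table is in the current database): after
sequentially scanning a table larger than shared_buffers / 4, a
per-relation breakdown should show the big table holding only a small,
bounded number of buffers:

SELECT c.relname, count(*) AS buffers
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
WHERE b.reldatabase = (SELECT oid FROM pg_database
                       WHERE datname = current_database())
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;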


--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com   www.2ndQuadrant.us




[PERFORM] How much memory PostgreSQL is going to use?

2007-06-12 Thread Arnau

Hi all,

  I have a server with 4GB of memory and I'm tweaking the PostgreSQL 
configuration. This server will be dedicated to running PostgreSQL, so 
I'd like to dedicate as much RAM as possible to it.

  I have dedicated 1GB to shared_buffers (shared_buffers = 131072), but 
I'm not sure whether this is the maximum memory PostgreSQL will use or 
whether it will take more on top of that. If shared_buffers is the 
maximum, I could raise that value even more.


Cheers!
--
Arnau



Re: [PERFORM] How much memory PostgreSQL is going to use?

2007-06-12 Thread Bill Moran
In response to Arnau [EMAIL PROTECTED]:

 Hi all,
 
   I have a server with 4GB of memory and I'm tweaking the PostgreSQL 
 configuration. This server will be dedicated to running PostgreSQL, so 
 I'd like to dedicate as much RAM as possible to it.
 
   I have dedicated 1GB to shared_buffers (shared_buffers = 131072), but 
 I'm not sure whether this is the maximum memory PostgreSQL will use or 
 whether it will take more on top of that. If shared_buffers is the 
 maximum, I could raise that value even more.

Individual backend processes will allocate more memory above shared_buffers
for processing individual queries.  See work_mem.
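
As a rough sketch of the ceiling (illustrative numbers, not from this
thread): each sort or hash step in a query can use up to work_mem, and
one query may run several such steps at once, so a crude worst case is

    shared_buffers + max_connections * work_mem * (steps per query)

With shared_buffers = 1GB, max_connections = 100, and work_mem = 8MB,
that is already 1 GB + 100 * 8 MB = ~1.8 GB even at one sort per query,
before counting maintenance_work_mem and per-backend overhead.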

-- 
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/

[EMAIL PROTECTED]
Phone: 412-422-3463x4023



[PERFORM] How much memory in 32 bits Architecture to Shared Buffers is Possible

2006-10-02 Thread Marcelo Costa

Hi all,

Recently I tried increasing the shared buffers setting on an IBM
xSeries 255 (quad Xeon 2.8, 8 GB RAM, 2 SCSI 36 GB disks in RAID 1, 1
storage array).

I tried setting shared buffers to 25% of RAM (2048 MB) and
effective_cache_size to 50% (4096 MB), with max_connections set to 220.

When I start the cluster, many configuration error messages about
shared memory and SHMMAX show up. I tried changing shared_buffers,
max_connections, and effective_cache_size, and raised SHMALL and SHMMAX
to 4294967296 (4096 MB), but the cluster still doesn't start.

Only with shared memory at about 15% of RAM can I start the cluster.
In my tests the maximum value I can use is 1.9 GB; anything above that
and the cluster doesn't start.

Can anybody explain whether there is a limit to shared memory on a
32-bit architecture?

Does anybody have experience tuning servers with this configuration?

Thanks to all.



--
Marcelo Costa



Re: [PERFORM] How much memory in 32 bits Architecture to Shared Buffers

2006-10-02 Thread Joshua D. Drake
Marcelo Costa wrote:
 Hi all,
 
 Recently I tried increasing the shared buffers setting on an IBM
 xSeries 255 (quad Xeon 2.8, 8 GB RAM, 2 SCSI 36 GB disks in RAID 1, 1
 storage array).

You haven't specified your OS so I am going to assume Linux.

 When I start the cluster, many configuration error messages about
 shared memory and SHMMAX show up. I tried changing shared_buffers,
 max_connections, and effective_cache_size, and raised SHMALL and SHMMAX
 to 4294967296 (4096 MB), but the cluster still doesn't start.

You have to edit your sysctl.conf; see:

http://www.postgresql.org/docs/8.1/static/runtime.html
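
A minimal /etc/sysctl.conf sketch for roughly 2 GB of shared memory
(illustrative values; check the docs above for your kernel):

kernel.shmmax = 2147483648    # largest single shared memory segment, bytes
kernel.shmall = 524288        # total shared memory, in 4 kB pages

Load it with sysctl -p (or reboot).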

I *think* (I would have to double-check) the limit for shared memory on
32-bit Linux is 2 GB. Possibly 2 GB per CPU, I don't recall. I run all
64-bit now.

Sincerely,

Joshua D. Drake



-- 

   === The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
   Providing the most comprehensive  PostgreSQL solutions since 1997
 http://www.commandprompt.com/





Re: [PERFORM] How much memory in 32 bits Architecture to Shared Buffers is Possible

2006-10-02 Thread Marcelo Costa
Yes, my system is DEBIAN SARGE 3.0.

thanks,

Marcelo

2006/10/2, Joshua D. Drake [EMAIL PROTECTED]:
 You haven't specified your OS so I am going to assume Linux.


[PERFORM] How much memory?

2005-10-27 Thread PostgreSQL
Is there a rule-of-thumb for determining the amount of system memory a 
database requires (other than all you can afford)? 





Re: [PERFORM] How much memory?

2005-10-27 Thread Jim C. Nasby
On Thu, Oct 27, 2005 at 06:39:33PM -0400, Ron Peacetree wrote:
 Databases basically come in 4 sizes:
 
 1= The entire DB fits into memory.
 2= The performance critical table(s) fit(s) into memory
 3= The indexes of the performance critical table(s) fit into memory.
 4= Neither the performance critical tables nor their indexes fit into memory.
 
 Performance decreases (exponentially), and development + maintenance 
 cost/difficulty/pain increases (exponentially), as you go down the list.
 
 While it is often not possible to be in class 1 above, do everything you 
 can to be in at least class 3 and do everything you can to avoid class 4.
 
 At ~$75-$150 per GB as of this post, RAM is the cheapest investment you can 
 make in a high performance, low hassle DBMS.  IWill's and Tyan's 16 DIMM slot 
 mainboards are worth every penny.

And note that your next investment after RAM should be better disk IO.
More CPUs *generally* don't buy you much (if anything). My rule of
thumb: the only time your database should be CPU-bound is if you've got
a bad design*.

*NOTE: before everyone goes off about query parallelism and big
in-memory sorts and what-not, keep in mind I said rule of thumb. :)
-- 
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.com   work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf   cell: 512-569-9461
