Re: [PERFORM] How much memory is PostgreSQL using
On 02/04/2010 22:10, Campbell, Lance wrote:
> Greg,
> Thanks for your help.
> 1) How does the number of buffers provided by pg_buffercache compare to memory (buffers * X = Y meg)?

1 buffer is 8 KB.

> 2) Is there a way to tell how many total buffers I have available/max?

With pg_buffercache, yes:

    SELECT count(*) FROM pg_buffercache WHERE relfilenode IS NOT NULL;

should give you the number of non-empty buffers.

--
Guillaume.
http://www.postgresqlfr.org
http://dalibo.com

--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance
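[Editor's note: putting the two answers together, the cache arithmetic can be sketched as follows. This is an illustrative query against a server with pg_buffercache installed, not part of the original thread; it assumes the default 8 KB block size (verify with SHOW block_size), and the column aliases are invented.]

```sql
-- How many buffers are currently holding data, and roughly how much
-- memory that represents (8 KB per buffer by default):
SELECT count(*)            AS buffers_used,
       count(*) * 8 / 1024 AS approx_mb_used
FROM pg_buffercache
WHERE relfilenode IS NOT NULL;

-- The total number of buffers (used or empty) is simply the row count
-- of the view, which equals the shared_buffers setting:
SELECT count(*) AS buffers_total FROM pg_buffercache;
SHOW shared_buffers;
```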
Re: [PERFORM] How much memory is PostgreSQL using
Greg,
Thanks for your help.
1) How does the number of buffers provided by pg_buffercache compare to memory (buffers * X = Y meg)?
2) Is there a way to tell how many total buffers I have available/max?

Thanks,

Lance Campbell
Software Architect/DBA/Project Manager
Web Services at Public Affairs
217-333-0382

-----Original Message-----
From: Greg Smith [mailto:g...@2ndquadrant.com]
Sent: Monday, March 29, 2010 11:54 AM
To: Campbell, Lance
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] How much memory is PostgreSQL using

Campbell, Lance wrote:
> Or is there some way to ask PostgreSQL how much memory are you using to cache disk blocks currently?

You can install contrib/pg_buffercache into each database and count how many used blocks are there. Note that running queries with that diagnostic tool is fairly intensive due to the locks it takes, so be careful not to do it often on a production system.

> When you do a pg_dump, does PostgreSQL put the disk blocks into shared buffers as it runs?

To some extent. Most pg_dump activity involves sequential scans that read an entire table. Those are no different from any other process that puts disk blocks into shared_buffers. However, that usage pattern makes pg_dump particularly likely to run into an optimization in 8.3 and later that limits how much of shared_buffers is used when sequentially scanning a large table. See P10 of http://www.westnet.com/~gsmith/content/postgresql/InsideBufferCache.pdf for the exact implementation. Basically, anything bigger than shared_buffers / 4 uses a 256K ring to limit its cache use, but it's a little more complicated than that.

--
Greg Smith  2ndQuadrant US  Baltimore, MD
PostgreSQL Training, Services and Support
g...@2ndquadrant.com  www.2ndQuadrant.us
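[Editor's note: "count how many used blocks are there" can be done per relation with a query along the lines of the example in the contrib/pg_buffercache documentation. This is a sketch added for illustration, not part of the thread; it assumes pg_buffercache is installed in the current database and 8 KB blocks.]

```sql
-- Top 10 relations by number of shared_buffers pages they occupy.
-- Note Greg's warning: this takes buffer locks, so avoid running it
-- frequently on a busy production system.
SELECT c.relname,
       count(*)            AS buffers,
       count(*) * 8 / 1024 AS approx_mb
FROM pg_buffercache b
JOIN pg_class c ON b.relfilenode = c.relfilenode
GROUP BY c.relname
ORDER BY buffers DESC
LIMIT 10;
```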
Re: [PERFORM] How much memory PostgreSQL is going to use?
In response to Arnau [EMAIL PROTECTED]:

> Hi all,
> I have a server with 4GB of memory and I'm tweaking the PostgreSQL configuration. This server will be dedicated to running PostgreSQL, so I'd like to give it as much RAM as possible. I have dedicated 1GB to shared_buffers (shared_buffers = 131072), but I'm not sure whether this is the maximum memory PostgreSQL will use or whether it will take more on top of that. If shared_buffers is the maximum, I could raise that value even further.

Individual backend processes will allocate more memory above shared_buffers for processing individual queries. See work_mem.

--
Bill Moran
Collaborative Fusion Inc.
http://people.collaborativefusion.com/~wmoran/
[EMAIL PROTECTED]
Phone: 412-422-3463x4023

---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
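[Editor's note: a rough back-of-the-envelope for Bill's point, added for illustration. The numbers are examples, not recommendations, and this is an upper-bound sketch only: work_mem can be allocated more than once per query (one allocation per sort or hash node), so the real ceiling is workload-dependent.]

```
# Illustrative worst-case RAM sketch for a dedicated 4GB box:
#
#   shared_buffers        1GB     (shared memory, allocated once)
#   max_connections       100
#   work_mem              4MB     (per sort/hash node, per backend)
#   maintenance_work_mem  64MB    (per VACUUM/CREATE INDEX etc.)
#
#   worst case ~ shared_buffers
#             + max_connections * work_mem * (sort/hash nodes per query)
#             + concurrent maintenance ops * maintenance_work_mem
```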
Re: [PERFORM] How much memory in 32 bits Architecture to Shared Buffers
Marcelo Costa wrote:
> Hi, to all!
> Recently I tried increasing the shared buffers memory values on an IBM xSeries 255 (quad Xeon 2.8, 8 GB RAM, 2 SCSI 36 GB disks (RAID 1), 1 storage array).

You haven't specified your OS so I am going to assume Linux.

> When I start up the cluster, many configuration error messages about shared_memory and SHMMAX show up. I tried changing the values of shared_memory, max_connections and effective_cache_size, and enlarging SHMALL and SHMMAX to 4294967296 (4096 MB), but the cluster doesn't start.

You have to edit your sysctl.conf; see: http://www.postgresql.org/docs/8.1/static/runtime.html

I *think* (I would have to double check) the limit for shared memory on 32-bit Linux is 2 gig. Possibly 2 gig per CPU, I don't recall. I run all 64-bit now.

Sincerely,

Joshua D. Drake

--
=== The PostgreSQL Company: Command Prompt, Inc. ===
Sales/Support: +1.503.667.4564 || 24x7/Emergency: +1.800.492.2240
Providing the most comprehensive PostgreSQL solutions since 1997
http://www.commandprompt.com/
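[Editor's note: a minimal sketch of the kernel settings Joshua is referring to, added for illustration. The values below are examples for roughly 2 GB of shared memory on a system with 4 KB pages, staying under the 32-bit limit he mentions; size them to your own machine.]

```
# /etc/sysctl.conf (Linux); apply with `sysctl -p`
kernel.shmmax = 2147483648   # largest single shared memory segment, in bytes
kernel.shmall = 524288       # total shared memory, in 4 KB pages (2 GB / 4096)
```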
Re: [PERFORM] How much memory in 32 bits Architecture to Shared Buffers is Possible
Yes, my system is DEBIAN SARGE 3.0.

Thanks,

Marcelo

2006/10/2, Joshua D. Drake [EMAIL PROTECTED]:
> [full quote of the previous message trimmed]

--
Marcelo Costa
Re: [PERFORM] How much memory?
On Thu, Oct 27, 2005 at 06:39:33PM -0400, Ron Peacetree wrote:
> Databases basically come in 4 sizes:
> 1= The entire DB fits into memory.
> 2= The performance-critical table(s) fit(s) into memory.
> 3= The indexes of the performance-critical table(s) fit into memory.
> 4= Neither the performance-critical tables nor their indexes fit into memory.
>
> Performance decreases (exponentially), and development + maintenance cost/difficulty/pain increases (exponentially), as you go down the list. While it is often not possible to be in class 1 above, do everything you can to be in at least class 3, and do everything you can to avoid class 4.
>
> At ~$75-$150 per GB as of this post, RAM is the cheapest investment you can make in a high-performance, low-hassle DBMS. IWill's and Tyan's 16-DIMM-slot mainboards are worth every penny.

And note that your next investment after RAM should be better disk IO. More CPUs *generally* don't buy you much (if anything). My rule of thumb: the only time your database should be CPU-bound is if you've got a bad design*.

*NOTE: before everyone goes off about query parallelism and big in-memory sorts and what-not, keep in mind I said rule of thumb. :)

--
Jim C. Nasby, Sr. Engineering Consultant  [EMAIL PROTECTED]
Pervasive Software  http://pervasive.com  work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf  cell: 512-569-9461
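[Editor's note: which of Ron's four size classes you fall into can be estimated with PostgreSQL's built-in size functions; compare the results against your RAM and shared_buffers. This sketch is added for illustration and was not in the thread; 'orders' is a hypothetical table name.]

```sql
-- Class 1 check: does the whole database fit in memory?
SELECT pg_size_pretty(pg_database_size(current_database()));

-- Classes 2 and 3: how big are a critical table and everything
-- attached to it (indexes, TOAST)?
SELECT pg_size_pretty(pg_relation_size('orders'))        AS table_size,
       pg_size_pretty(pg_total_relation_size('orders'))  AS table_plus_indexes;
```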