Re: [ADMIN] DELETE FROM pg_description WHERE ...
On Fri, Apr 22, 2011 at 5:26 PM, Erwin Brandstetter wrote:
> Hi!
>
> It is generally not advisable to write to system catalogs directly ...
>
> However, I have a database with dozens of schemas and hundreds of
> tables. There is a bunch of useless comments on columns I want to get
> rid of, scattered all across the db. The fastest & easiest way would
> be:
>
> DELETE FROM pg_description WHERE description = 'My very special
> totally useless comment.';

Why would you *not* use the COMMENT command for that? What is slower or
harder about using it?

-- 
Simon Riggs                      http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-admin
Re: [ADMIN] DELETE FROM pg_description WHERE ...
Simon Riggs writes:
> On Fri, Apr 22, 2011 at 5:26 PM, Erwin Brandstetter wrote:
>> DELETE FROM pg_description WHERE description = 'My very special
>> totally useless comment.';

> Why would you *not* use the COMMENT command for that?

If what he wants is to drop comments containing a particular string,
COMMENT isn't an especially easy way to do it.

			regards, tom lane
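Tom Lane's point can be illustrated: COMMENT acts on one object at a time, so removing every matching comment the supported way means generating one statement per affected object. A hedged sketch of both approaches (assuming the comments sit on table columns, superuser access, and PostgreSQL 9.1 or later for `format()`; the target string is taken from the thread):

```sql
-- The direct, unsupported catalog write from the original post:
DELETE FROM pg_description
WHERE  description = 'My very special totally useless comment.';

-- The supported route: find matching column comments in pg_description
-- (column entries have classoid = 'pg_class'::regclass and objsubid > 0)
-- and issue COMMENT ... IS NULL for each one.
DO $$
DECLARE
    r record;
BEGIN
    FOR r IN
        SELECT d.objoid::regclass AS tbl, a.attname
        FROM   pg_description d
        JOIN   pg_attribute a ON a.attrelid = d.objoid
                             AND a.attnum   = d.objsubid
        WHERE  d.classoid = 'pg_class'::regclass
        AND    d.objsubid > 0
        AND    d.description = 'My very special totally useless comment.'
    LOOP
        EXECUTE format('COMMENT ON COLUMN %s.%I IS NULL', r.tbl, r.attname);
    END LOOP;
END $$;
```

The loop is clearly more work than the one-line DELETE, which is the trade-off being discussed: the DELETE is faster to write but bypasses the supported interface.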
[ADMIN] Scaling
My SMB database is currently running on an HP-UX box in a master-slave
Slony-I cluster that maintains a hot-spare database. Recently the server
has been getting hammered and we are consistently hitting our max db
connections. Our web CMS, our web apps, our desktop client apps, and a
couple of application servers all run off this database server, and it is
crumbling under the stress. So far we have been doing table partitioning
and table archiving to trim the fat off the indexing processes, etc.

My project at hand is to upgrade this configuration to a Linux cluster of
2-3 servers. The ideal solution would be to do load balancing through DNS
requests and run two servers in a multi-master configuration. We have not
had any hardware issues on our current setup (knock on wood), so I am not
immediately looking for a hot-spare setup. In the future, though, I am
looking at setting up a VPN tunnel to an offsite server and shipping WAL
updates to it.

Any and all comments are truly appreciated, as are any nudges in various
directions.

David.

-- 
David Hornsby
Beechglen Development Inc.
E-Mail: da...@beechglen.com
Phone: 513 922 0509 x432
Fax: 513 347 2834
Web: beechglen.com
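The offsite WAL-shipping idea mentioned above maps onto PostgreSQL's built-in continuous archiving. A minimal sketch of the primary-side settings (hostname and paths are hypothetical placeholders, not values from this thread; `wal_level` assumes a 9.0-or-later release):

```ini
# postgresql.conf on the primary -- hypothetical values
wal_level = archive        # 9.0+; pre-9.0 releases use archive_mode/archive_command only
archive_mode = on
# Copy each completed WAL segment over the VPN to the offsite box.
# %p expands to the segment's path, %f to its file name.
archive_command = 'rsync -a %p standby.example.com:/var/lib/pgsql/wal_archive/%f'
```

The offsite server then restores these segments (e.g. via `restore_command` in its recovery configuration) to stay a warm copy of the primary.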
Re: [ADMIN] Scaling
On 04/24/2011 04:21 PM, David Hornsby wrote:
> Recently the server has been getting hammered and we are consistently
> hitting our max db connections.
> [...]
> The ideal solution is to just do load balancing through DNS requests
> and have 2 servers in a multi master configuration.

I'm not aware of any production-ready multi-master solution for
PostgreSQL, so I would not suggest pursuing that at the moment.

If your primary problem is hitting your max db connections (and the
performance degradation that goes along with that), I would suggest
looking into connection pooling. This will allow you to cap the number of
connections without simply rejecting clients, and will let you dedicate
more resources to each active connection. Ultimately, your overall
throughput should increase.

pgbouncer is very easy to set up and is great if you want simple pooling.
pgpool is also a fine pooler and does a lot more (e.g. load balancing,
horizontal scaling, etc.), but is a bit more complex to set up.
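To make the pooling suggestion concrete, here is a minimal pgbouncer configuration sketch. The paths, database name, and pool sizes are illustrative placeholders, not recommendations from this thread:

```ini
; /etc/pgbouncer/pgbouncer.ini -- hypothetical minimal example
[databases]
; Clients connect to "mydb" on pgbouncer, which forwards to the real server.
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = *
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction      ; release server connections between transactions
max_client_conn = 1000       ; many client connections accepted...
default_pool_size = 20       ; ...funneled into a small set of server connections
```

Applications then point at port 6432 instead of 5432; the cap on real server connections is `default_pool_size`, so the database never sees the full client load.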
- Chris