[ADMIN] Migration and snapshot of database
I want to migrate a physical server onto virtual machines. My physical server consists of: 1. Some applications. 2. A PostgreSQL database. The system migrated from the physical machine will be replicated into two VMs: - One will act as the live machine, on which live operations will be performed. - The second will be a test VM, where I can make changes to the application and test new features. Both VMs need their own database, but two copies of the database require twice the storage, and I want to optimize for storage. Once the database is migrated to the new machine, how can I use it for both VMs without two independent copies? The test VM will have its own write operations, which will be discarded later. My approach. Migration: - I am thinking of using pg_basebackup for an online database backup from the physical machine to the new machine. - Is barman or pg_rman faster than pg_basebackup? Storage optimization: I want to use snapshots to optimize for storage. - I will take a snapshot of the original database and export the snapshot with read/write access to the test VM. Since I could not find anything for this within Postgres itself, I plan to do it at the file system or LVM level. With this method I can do my testing and then delete the snapshot when testing is done, without affecting the original database. Has anyone worked on such a scenario? What problems have you faced with writable snapshots?
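The LVM idea described above can be sketched roughly as follows. This is a minimal illustration, assuming PGDATA lives on a logical volume named vg0/pgdata; volume names, sizes and mount points are placeholders, not details from the post. Note that a snapshot taken while Postgres is running is only crash-consistent, so the test instance will perform crash recovery on first start (or you can stop Postgres briefly while creating the snapshot).

```shell
# Create a writable copy-on-write snapshot of the live data volume.
# Only blocks that diverge from the origin consume space, so this is
# far cheaper than a full second copy of the database.
lvcreate --snapshot --name pgdata_test --size 20G /dev/vg0/pgdata

# Mount it and expose it to the test VM (as a block device or over
# NFS, depending on the virtualization setup).
mkdir -p /mnt/pgdata_test
mount /dev/vg0/pgdata_test /mnt/pgdata_test

# When testing is finished, discard the snapshot; the live volume
# is untouched by anything the test VM wrote.
umount /mnt/pgdata_test
lvremove -f /dev/vg0/pgdata_test
```

One classic pitfall with writable LVM snapshots: if the snapshot's copy-on-write area fills up (because either side wrote too much), the snapshot is invalidated, so size it generously and monitor it with `lvs`.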
[ADMIN] Least intrusive way to move primary data
We're looking into options for the least intrusive way of moving our pg_data onto faster storage. The basic setup is as follows: - 6-disk RAID-0 array of EBS volumes used for primary data storage. - 2-disk RAID-0 array of EBS volumes used for transaction logs. - RAID arrays are xfs. It's the primary data volumes we'd like to switch onto a faster RAID array. The options I'm looking at are the following: - Set up a new slave on similar hardware and do a normal master-slave failover when it's ready. Upside: minimal downtime. Downside: quite a lot of work for something that's essentially just a copy of data. - rsync the data storage to the new RAID-0 array, stop Postgres, rsync again to apply the changes, change mount points, and restart Postgres. Longer downtime, but it seems the most straightforward. - Go the full route of stopping Postgres, making consistent snapshots of all the volumes, recreating a new RAID-0 array from them, and restarting. https://github.com/alestic/ec2-consistent-snapshot The rsync route sounds like the most straightforward option; however, I'm a little worried about its consistency on a busy filesystem, even though Postgres will be stopped when the second sync happens. Scripting the snapshotting and reassembly of the RAID array sounds like the most reliable way of doing it, and possibly the quickest as well. Any pitfalls, gotchas or suggestions for making the switch? Kind regards, Armand
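The two-pass rsync option above can be sketched like this. Paths and device names are placeholders. The consistency worry is really only about the first pass: once Postgres has been stopped cleanly, nothing writes to the data directory, so the second pass produces an exact, consistent copy.

```shell
# Pass 1: bulk copy while Postgres is still running. The result is
# inconsistent, but it moves the vast majority of the data across
# without any downtime.
rsync -aHAX --delete /pgdata/ /mnt/new_array/pgdata/

# Stop Postgres cleanly so the filesystem is quiescent.
pg_ctl -D /pgdata stop -m fast

# Pass 2: only files changed since pass 1 are transferred, so the
# downtime window is short. --delete removes files that vanished
# in the meantime (e.g. recycled WAL segments).
rsync -aHAX --delete /pgdata/ /mnt/new_array/pgdata/

# Swap mount points and restart on the new array.
umount /pgdata
mount /dev/md_new /pgdata
pg_ctl -D /pgdata start
```

`-aHAX` preserves hard links, ACLs and extended attributes; whether you need `-H`/`-A`/`-X` depends on your setup, but they are harmless to include.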
[ADMIN] Standby doesn't chose a new timeline when promoted to master
Hi all, I am trying to set up Postgres 9.2 in HA mode, but I have noticed something strange happening with the xlogs generated after a switchover. Problem: when a standby is promoted to master, it is not choosing a new timeline to work with. Can anyone please help me with the situations in which a standby doesn't choose a new timeline when promoted? regards, Prakhar.
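One common cause of the symptom described above: a standby only switches to a new timeline when it is actually *promoted* (via `pg_ctl promote` or the `trigger_file` named in recovery.conf). If recovery.conf is simply removed and the server restarted, it comes up as a primary on the same timeline. A minimal way to promote and then verify the timeline switch (data directory path is a placeholder):

```shell
# Promote the standby properly (Postgres 9.1+):
pg_ctl -D /var/lib/pgsql/9.2/data promote
# ...or touch the trigger_file configured in recovery.conf.

# Verify the switch afterwards:
pg_controldata /var/lib/pgsql/9.2/data | grep TimeLine
# "Latest checkpoint's TimeLineID" should have incremented.
```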
Re: [ADMIN] Least intrusive way to move primary data
On Thu, May 30, 2013 at 12:19 PM, Andrew W. Gibbs awgi...@awgibbs.com wrote: Going with your first option, master-slave replication, has the added benefit that you build the expertise for doing continuous point-in-time recovery, and after you do this storage system migration you can use that knowledge to put in place a permanent standby server. Yes, it is a bit of work, but you'd kill two birds with one stone, and these are worthwhile birds. If you've got a busy server, and you want to do regular backups, and you don't have a much more expensive/sophisticated solution at your behest (or you want to support load balancing of reads in the future), this is probably something you want to do anyway. Thanks Andrew. Actually, I have a streaming slave running for backups at the moment; it's just not as powerful as the primary. It would probably be an option to upgrade it and do the failover as well.
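For completeness, the slave-then-failover route suggested above might look roughly like this on 9.x. This is a hedged sketch: host names, paths and the replication role are placeholders, and the exact recovery.conf contents depend on your authentication setup.

```shell
# Seed a standby on the new, faster storage from the running primary.
pg_basebackup -h old-primary -U replication \
    -D /mnt/fast_array/pgdata -X stream -P

# Point the standby at the primary (recovery.conf, pre-12 style):
cat > /mnt/fast_array/pgdata/recovery.conf <<'EOF'
standby_mode = 'on'
primary_conninfo = 'host=old-primary user=replication'
EOF

# Start the standby and let it catch up via streaming replication.
pg_ctl -D /mnt/fast_array/pgdata start

# At cutover time: stop the old primary, then promote the standby.
pg_ctl -D /old/pgdata stop -m fast
pg_ctl -D /mnt/fast_array/pgdata promote
```

Downtime is limited to the stop-and-promote step, which is why this option wins on intrusiveness despite the extra setup work.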
[ADMIN] How do I know my table is bloated?
Hi, everyone. I saw some people talking about the REINDEX command, and I read in the docs that one reason to use REINDEX is when a table is bloated. But how do I know when a table is bloated?
Re: [ADMIN] How do I know my table is bloated?
From: Rodrigo Barboza [mailto:rodrigombu...@gmail.com] Sent: Thursday, May 30, 2013 2:50 PM To: Igor Neyman Cc: pgsql-admin@postgresql.org Subject: Re: [ADMIN] How do I know my table is bloated? Well, maybe I am. But I am worried because I know that there are some tables that do lots of updates and deletes. As this concept is new for me, I am trying to be prepared to detect a situation like this. -- It all depends on the pattern of your inserts/updates/deletes. If your index accumulates lots of almost (but not completely) empty pages with just a few entries left, then yes, REINDEX is your friend. By the way, this concept is not unique to Postgres; it's just the nature of B-tree indexes. Igor -- Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
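One concrete way to detect the situation Igor describes is the pgstattuple contrib module, which reports dead-tuple and free-space percentages directly. A minimal sketch, assuming the contrib packages are installed; the database, table and index names are placeholders:

```shell
psql -d mydb <<'SQL'
-- pgstattuple ships with contrib; install it once per database.
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- Table-level stats: tuple_percent is live data; a high
-- dead_tuple_percent or free_percent on a heavily updated table
-- indicates bloat.
SELECT * FROM pgstattuple('my_busy_table');

-- Index-level stats: an avg_leaf_density far below ~70% suggests
-- the index has accumulated sparse pages and would benefit from
-- a REINDEX.
SELECT * FROM pgstatindex('my_busy_table_pkey');
SQL
```

Note that pgstattuple reads the whole relation, so run it during quiet periods on large tables.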
Re: [ADMIN] How do I know my table is bloated?
On Thu, May 30, 2013 at 3:55 PM, Igor Neyman iney...@perceptron.com wrote: It all depends on the pattern of your inserts/updates/deletes. If your index accumulates lots of almost (but not completely) empty pages with just a few entries left, then yes, REINDEX is your friend. By the way, this concept is not unique to Postgres; it's just the nature of B-tree indexes. Igor I see... I guess the only way to know is if I suspect that the size of my db is growing unexpectedly. But it is good to know that it's a B-tree concept, because I came from a very old MySQL installation (that is why Postgres is new to me) and I am trying to keep my Postgres as stable as possible.
Re: [ADMIN] Output of pg_controldata
Hi, I want to know the meaning of these fields specifically: Latest checkpoint location: 0/5F20 Prior checkpoint location: 0/5E20 Latest checkpoint's REDO location: 0/5F20 Latest checkpoint's NextXID: 0/1894 Latest checkpoint's NextMultiXactId: 1 Latest checkpoint's NextMultiOffset: 0 Thanks Viru On Fri, May 24, 2013 at 2:53 PM, Alvaro Herrera alvhe...@2ndquadrant.com wrote: Virupaksha Kanjilal escribió: Does anyone know what the fields in the output of pg_controldata mean? A lot of people. What do you want to know, precisely? -- Álvaro Herrera http://www.2ndQuadrant.com/ PostgreSQL Development, 24x7 Support, Training & Services
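In brief, the fields asked about are WAL positions and transaction counters. A sketch of running the tool, with the meanings of those specific fields as comments (the data directory path is a placeholder):

```shell
pg_controldata /var/lib/pgsql/data

# Latest checkpoint location    - WAL address (LSN) of the most recent
#                                 checkpoint record
# Prior checkpoint location     - LSN of the checkpoint before that one
# Latest checkpoint's REDO loc. - LSN where crash recovery must begin
#                                 replaying WAL (the start of the
#                                 checkpoint, not its end)
# Latest checkpoint's NextXID   - next transaction ID to be assigned,
#                                 shown as epoch/XID
# NextMultiXactId / NextMultiOffset - counters for multixacts, which
#                                 track row locks shared by several
#                                 transactions
```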
Re: [ADMIN] How do I know my table is bloated?
On Thu, May 30, 2013 at 6:05 AM, Rodrigo Barboza rodrigombu...@gmail.com wrote: I saw some people talking about the reindex command and I read in the docs the one reason to use reindex is when a table is bloated. But how do I know when a table is bloated? Take a look at the pgcompactor tool (https://code.google.com/p/pgtoolkit/). It will find out which tables and indexes in your cluster are bloated and softly (without heavy locks) remove the bloat. Before using it, set up the pgstattuple extension in all the databases you are going to check for bloat. Some usage examples are below. To get bloat statistics for the whole cluster: pgcompactor --all --reindex --verbose info --dry-run To remove bloat from the whole cluster: pgcompactor --all --reindex --verbose info For a specific table: pgcompactor --table tablename --reindex --verbose info --dry-run You can also specify --dbname, --schema, --exclude-dbname/schema/table, etc. For all the options: pgcompactor --man -- Kind regards, Sergey Konoplev PostgreSQL Consultant and DBA Profile: http://www.linkedin.com/in/grayhemp Phone: USA +1 (415) 867-9984, Russia +7 (901) 903-0499, +7 (988) 888-1979 Skype: gray-hemp Jabber: gray...@gmail.com
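Since the post above says pgstattuple must be set up in every database to be checked, a quick loop over all connectable databases saves doing it by hand. A sketch, assuming superuser access via psql:

```shell
# Install pgstattuple in every non-template database in the cluster.
for db in $(psql -At -c "SELECT datname FROM pg_database WHERE datallowconn"); do
    psql -d "$db" -c "CREATE EXTENSION IF NOT EXISTS pgstattuple"
done
```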
Re: [ADMIN] How do I know my table is bloated?
On Fri, May 31, 2013 at 2:06 AM, Sergey Konoplev gray...@gmail.com wrote: Take a look at the pgcompactor tool (https://code.google.com/p/pgtoolkit/). It will find out which tables and indexes in your cluster are bloated and softly (without heavy locks) remove the bloat. [...] Thanks, guys!