Re: [ADMIN] pg_dump: schema with OID 145167 does not exist
Hi,

Once I had the same problem with a nonexistent schema. In my case removing all the references to the schema worked fine, and I have had no problems with the db ever since. It was PG 8.2.1, and since that operation we migrated all the DBs, including that one, to 9.0.6, and everything works fine. Hope this helps.

Regards,
Luke

2012/3/21 Paul Wouters paul.wout...@resilion.be

Hello,

We have some problems using pg_dump. We get the following error:

pg_dump: schema with OID 145167 does not exist

I found one entry in pg_type and another one in pg_class. I was able to remove the one in pg_class, but when we try to remove the row in pg_type, I get an error:

ERROR: invalid input syntax for type oid: "record_in"

select * from pg_type where typnamespace = 145167;

-[ RECORD 1 ]-+----------------
typname       | trainings0rrtop
typnamespace  | 145167
typowner      | 25488
typlen        | -1
typbyval      | f
typtype       | c
typisdefined  | t
typdelim      | ,
typrelid      | 145208
typelem       | 0
typinput      | record_in
typoutput     | record_out
typreceive    | record_recv
typsend       | record_send
typanalyze    | -
typalign      | d
typstorage    | x
typnotnull    | f
typbasetype   | 0
typtypmod     | -1
typndims      | 0
typdefaultbin |
typdefault    |

In the table pg_depend I also have a reference to 124208. Can I also remove that dependency?

We are using the 8.2.5 version of PG.

Kind regards,
Paul

--
Łukasz Brodziak
Do you bury me when I'm gone
Do you teach me while I'm here
Just as soon I belong
Then it's time I disappear
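For later readers hitting the same error, a hedged sketch of the catalog inspection involved. Direct catalog edits are risky (take a backup first, and superuser is required); the OIDs 145167 and 145208 are taken from the report above, so substitute the ones from your own error message:

```sql
-- Rows still referencing the vanished schema (OID from the error message):
SELECT oid, relname FROM pg_class WHERE relnamespace = 145167;
SELECT oid, typname FROM pg_type  WHERE typnamespace = 145167;

-- Delete the orphaned type row using a numeric condition; the "invalid
-- input syntax for type oid" error above suggests a function name such as
-- record_in was compared against an oid column instead.
DELETE FROM pg_type WHERE typnamespace = 145167;

-- Inspect dependencies before removing them (145208 is the typrelid above):
SELECT classid::regclass, objid, refclassid::regclass, refobjid
FROM pg_depend
WHERE objid = 145208 OR refobjid = 145208;
```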
[ADMIN] Making PostgreSQL portable on WinXP
Hello,

After some fiddling I had success in making PostgreSQL 9.1 somewhat more portable. Some of our customers need to copy and run our application from a USB device. If the VC 2008 Redistributable package is not installed, PostgreSQL refuses to start.

Solution: copy msvcr90.dll plus the manifest file below to the .\bin directory. LibreOffice Portable solved the problem in the same manner.

I suggest adding the files to the standard Win32 distribution. Installing the Redistributable package, which is done by the standard installation, would then be unnecessary.

Microsoft.VC90.CRT.manifest:

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <noInheritable/>
  <assemblyIdentity type="win32" name="Microsoft.VC90.CRT"
                    version="9.0.21022.8" processorArchitecture="x86"
                    publicKeyToken="1fc8b3b9a1e18e3b"/>
  <file name="msvcr90.dll"/>
  <file name="msvcp90.dll"/>
  <file name="msvcm90.dll"/>
</assembly>

--
View this message in context: http://postgresql.1045698.n5.nabble.com/Making-PostgreSQL-portable-on-WinXP-tp5664455p5664455.html
Sent from the PostgreSQL - admin mailing list archive at Nabble.com.

--
Sent via pgsql-admin mailing list (pgsql-admin@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-admin
[ADMIN] Backup/disaster recovery and bandwidth (long)
Hello, everyone.

I want to throw a scenario out there to see what y'all think.

Soon, my cluster backups will be increasing in size inordinately. They're going to immediately go to 3x as large as they currently are, with the potential to be about 20x within a year or so.

My current setup uses a single PG 8.x server doing nightly dumps (not ideal but sufficient for the moment, and one of the main reasons to move to PG 9), which are then downloaded from my hosting center to our offices for DR purposes. Each night I pull down roughly 5GB of compressed pg_dump data. Dumping this takes about 1.5hrs. Downloading this at 15Mbps takes about an hour.

Soon I'll be looking at somewhere around 7hrs for the dumps to complete and downloading a 12GB file (which will take about 3hrs). Oh, and I'll have to pay for significant bandwidth overage since I'm billed on the 95th percentile, and while an hour a day does NOT kick me up to 15Mbps usage at 95%, 3hrs per night certainly will, so there's a real cost associated with this strategy as well.

While the time of the actual dumps is not a huge issue, the time of the download IS a large concern, especially since my support folks use that file daily to extract individual customer databases for restore in assisting customer support issues. So, while now I have my pg_dumps completed around 2AM and downloaded to my local network at about 3AM, with the increase in our database sizes what will happen is that my pg_dump will not be completed until around 7AM, and the download would not be completed until around 10AM, best-case scenario. Add into that support trying to restore a database... more on that in a moment.

My _new_ setup will instead be 2 PG 9.x servers with hot standby enabled (at my hosting center) and a 3rd PG 9.x server at my local office also replicating off of the master. Each one of those servers will perform its own pg_dumps of the individual databases for backup/disaster-recovery purposes, and while each dump might not be consistent with the others, each SERVER will have dumps consistent with itself, which is viable for our situation, and does not require me to download 12GB (or more) each night with all of those associated nightmares, costs and other problems.

Alright, well, I've got that part all thought out, and it seems like a good way to do it to me, but I'm _still_ running into the situation that I've got to take 8hrs-ish to run the pg_dump no matter where it runs, and when my support folks need it (which they do daily), this basically means that if they have to have a customer database up NOW NOW NOW for support reasons, they simply cannot have it within an hour in many cases. Specifically, one database takes between 2 and 7.5hrs to pg_dump depending on which format I use, so if they need a CURRENT copy, they're at least 4 hours out. Additionally, they can't directly use the replicating server at my local office, because they need to test the problems the customers are having, which include pesky things like INSERT, UPDATE and DELETE, so they have to restore this data to another internal PG backend.

Enter my outside-the-box thinking. I rather assume that you cannot do a start/stop backup on a hot-standby server. HOWEVER, what if I set up a 4th database server internally at my office? Each night I stop PG on my 3rd server (the local one replicating off of the master) and rsync my pg_data directory to this new 4th server. I bring up the 4th server NOT as a standby, but as a master. They would then have all customer data on an internal, usable PG system from the time of the rsync, and while it might not reflect the immediate state of the database, that's pretty well always true, and they're used to that, since whenever they clone a site, they're using the dumps done around midnight anyway.

I believe, then, that when I restart server #3 (the standby that is replicating), he'll say "oh, geez, I was down, let me catch up on all that crap that happened while I was out of the loop," he'll replay the WAL files that were written while he was down, and then he'll catch back up.

Does this sound like a viable option? Or does someone have additional suggestions?
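A quick sanity check of the transfer figures in the message above. This is back-of-envelope arithmetic only; the 0.6 effective-throughput factor is my assumption to model protocol overhead and link contention, not a number from the post:

```python
def transfer_hours(size_gb, link_mbps, efficiency=1.0):
    """Hours to move size_gb gigabytes over a link_mbps link.

    Uses decimal units (1 GB = 10**9 bytes, 1 Mbps = 10**6 bits/s).
    `efficiency` scales the nominal rate down for real-world overhead.
    """
    bits = size_gb * 1e9 * 8
    seconds = bits / (link_mbps * 1e6 * efficiency)
    return seconds / 3600

# 12 GB at a full 15 Mbps is under 2 hours...
print(round(transfer_hours(12, 15), 2))        # 1.78
# ...but at ~60% effective throughput it is the ~3 hours reported above.
print(round(transfer_hours(12, 15, 0.6), 2))   # 2.96
```

The reported figures (5GB in about an hour, 12GB in about three) are consistent with an effective throughput somewhat below the nominal 15Mbps line rate.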
Re: [ADMIN] Backup/disaster recovery and bandwidth (long)
Hi Scott,

Why do you not replicate this master to the other location(s) using other methods like Bucardo? You can pick the tables you really want replicated there. For the backup, turn to a hot backup (tar $PGDATA) plus archiving: easier, faster and more efficient than a logical copy with pg_dump.

A.A

On 04/25/2012 09:11 AM, Scott Whitney wrote:
[snip]
Re: [ADMIN] Backup/disaster recovery and bandwidth (long)
Both good points, thanks, although I suspect that a direct network copy of the pg_data directory will be faster than a tar/untar event.

----- Original Message -----
[snip]
Re: [ADMIN] Backup/disaster recovery and bandwidth (long)
On Apr 25, 2012, at 10:11 AM, Scott Whitney wrote:

> I believe, then, that when I restart server #3 (the standby who is replicating), he'll say oh, geez, I was down, let me catch up on all that crap that happened while I was out of the loop, he'll replay the WAL files that were written while he was down, and then he'll catch back up. Does this sound like a viable option? Or does someone have additional suggestions?

Perfectly viable. However, the WAL files must exist for this to happen. So you need to set wal_keep_segments appropriately, or set up WAL archiving.

Note that you could even provide a more up-to-date database for your people to work with. If the testbed is nearly up to date, then an rsync to update it would take very little time. So you could shut down the replica, rsync, and bring the replica back up.

--
Scott Ribe
scott_r...@elevated-dev.com
http://www.elevated-dev.com/
(303) 722-0567 voice
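The two alternatives Scott Ribe mentions map to a couple of lines in postgresql.conf on the master. A minimal sketch; the segment count and archive path are illustrative values, not recommendations:

```
# Keep enough 16MB WAL segments for the standby's planned downtime
# (here: 256 segments = 4GB of WAL history).
wal_keep_segments = 256

# Or archive every completed segment so the standby can fetch ones the
# master has already recycled (pair with restore_command on the standby).
archive_mode = on
archive_command = 'cp %p /path/to/wal_archive/%f'
```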
Re: [ADMIN] Backup/disaster recovery and bandwidth (long)
On 04/25/2012 09:11 AM, Scott Whitney wrote:

> ... My current setup uses a single PG 8.x... My _new_ setup will instead be 2 PG 9.x ...

It is best to specify the actual major version. While 8.0.x or 9.1.x is sufficient to discuss features and capabilities, 9.1 is a different major release than 9.0, not a minor update to version 9.

> I set up a 4th database server internally at my office. Each night I stop PG on my 3rd server (the local one replicating off of the master) and rsync my pg_data directory to this new 4th server. I bring up the 4th server NOT as a standby, but as a master ... Does this sound like a viable option? Or does someone have additional suggestions?

And speaking of major versions, what you really want is PostgreSQL version 9.2 with cascading replication:
http://www.postgresql.org/docs/devel/static/warm-standby.html#CASCADING-REPLICATION

Unfortunately, that version is currently in development/testing and will probably not be released till toward the end of the year (based on my totally uninformed guesstimate method).

With cascading replication you could maintain a constantly up-to-date local copy, which you could cascade-replicate to other clusters as necessary. Whether you maintain one or more constantly updated local cascaded replicas off your primary local replica, or just spin one off as necessary, will be determined by how quickly you need to access the test/dev/debug replica. It's likely that you can come up with a pretty fast method of spinning off a cascaded replica as needed.

Note that there is no requirement for the various replicas to reside on different servers. You can run multiple clusters on a single machine, and one cluster can replicate to one or more others. Not good as a backup strategy, obviously, but it might be just the ticket for your needs, especially since the data copies necessary to bring up a replica are all on your local disks: no network transfer required.

You may want to experiment with a current development copy of 9.2 to see how it works (and provide bug reports to the developers). Perhaps 9.2 final will be released before your database grows too big for current solutions.

Cheers,
Steve
Re: [ADMIN] Backup/disaster recovery and bandwidth (long)
I mean Bucardo (even though there are more tools like this one) just for the replication stuff, and the hot database backup only for the backup stuff. Only one bounce is needed to turn archiving on; you do not need to shut anything down during the backup.

A.A

On 04/25/2012 10:23 AM, Scott Ribe wrote:
[snip]
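For reference, the hot ("base") backup A.A describes looks roughly like this. A sketch only, with illustrative label and paths; it assumes archive_mode is already on (that is the one bounce mentioned above):

```sql
-- Begin the base backup; the second argument requests an immediate
-- checkpoint (available since 8.4).
SELECT pg_start_backup('nightly', true);

-- Now copy $PGDATA at the filesystem level, e.g. from a shell:
--   tar czf /backups/base.tar.gz -C "$PGDATA" .
-- Files may change mid-copy; WAL replay repairs them on restore.

-- End the backup; the final WAL segment is then archived.
SELECT pg_stop_backup();
```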
[ADMIN] Query REST Service
Hi folks,

Does anybody know if it is possible to create a PL function which sends an HTTP GET request and retrieves its response? I would like to use a REST service from within pgsql. Maybe I am just attempting the impossible.

Any hints?

Warm regards,
Ricardo
Re: [ADMIN] Query REST Service
Ricardo Bayley ricardo.bay...@gmail.com wrote:

> Does anybody know if it is possible to create a PL which sends an http GET request and retrieves its response ?

Have you looked at PL/Python?
http://www.postgresql.org/docs/current/interactive/plpython.html

-Kevin
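Kevin's pointer can be turned into a small server-side function. A hedged sketch, not a tested implementation: it assumes the plpythonu language is installed in the database, that the server host is allowed to make outbound HTTP requests, and http_get is a name invented here:

```sql
-- One-time setup, as superuser: CREATE LANGUAGE plpythonu;

CREATE OR REPLACE FUNCTION http_get(url text) RETURNS text AS $$
    # PL/Python on servers of this era runs Python 2, hence urllib2.
    import urllib2
    return urllib2.urlopen(url, timeout=10).read()
$$ LANGUAGE plpythonu;

-- Usage: SELECT http_get('http://example.com/api/resource');
```

Note that the query blocks for the duration of the HTTP call, so a timeout is important.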
[ADMIN] psql: could not connect to server: No route to host
Hi all;

I've set up PostgreSQL to talk across servers thousands of times... not sure what I'm doing wrong; maybe I'm just over-tired.

I have 2 Scientific Linux VMs running in VMware Workstation:

server 1 - 192.168.1.125
server 2 - 192.168.1.127

I've disabled SELinux on both servers. I've installed PostgreSQL 9.1.3 on both servers. I have listen_addresses on server 1 set to '*':

postgres=# show listen_addresses ;
 listen_addresses
------------------
 *
(1 row)

I've added this to the pg_hba.conf file of server 1:

host    all    all    192.168.1.0/24    trust

I can scp files between the servers. Then from server 2 I run this:

$ psql -h 192.168.1.125
psql: could not connect to server: No route to host
        Is the server running on host 192.168.1.125 and accepting
        TCP/IP connections on port 5432?

Am I losing my mind?

Thanks in advance
Re: [ADMIN] psql: could not connect to server: No route to host
Kevin Kempter cs_...@consistentstate.com writes:

> I can scp files between the servers

Really?

> $ psql -h 192.168.1.125
> psql: could not connect to server: No route to host

Because that is not a Postgres problem, that is a network connectivity problem. I'd bet that there's something wrong with the VM configuration, such that the VM host isn't connecting 192.168.1.* in one VM to 192.168.1.* in the other.

regards, tom lane