Re: [GENERAL] VACUUM FULL doesn't reduce table size
Adrian Klaver-4 wrote:
> Also per Kevin Grittner and Tom Lane there is a Nabble issue at work
> where the list here is not seeing all the information.

Example: it's because I edited some posts. The edited version is then
visible only on Nabble, because it isn't sent back to the mailing list.

Adrian Klaver-4 wrote:
> Also per Kevin Grittner we are looking at a moving target, so some sort
> of information about the current state would be helpful.

I'm doing pg_dump from a snapshot - on production that isn't possible.
Tonight my colleague will run VACUUM FULL VERBOSE on this table in the
production environment, so we will see whether this problem occurs only
on the snapshot or was replicated from production.

--
View this message in context: http://postgresql.nabble.com/VACUUM-FULL-doesn-t-reduce-table-size-tp5840782p5841434.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.

--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-general
Re: [GENERAL] VACUUM FULL doesn't reduce table size
On 03/11/2015 08:52 AM, pinker wrote:
> Adrian Klaver-4 wrote:
>> Also per Kevin Grittner and Tom Lane there is a Nabble issue at work
>> where the list here is not seeing all the information.
>
> Example: it's because I edited some posts. The edited version is then
> visible only on Nabble, because it isn't sent back to the mailing list.

Per previous suggestions, I would post to this (pgsql-general) list
directly. Hidden information is not going to be well received here and
will impede an answer to your issue, just saying.

> Adrian Klaver-4 wrote:
>> Also per Kevin Grittner we are looking at a moving target, so some
>> sort of information about the current state would be helpful.
>
> I'm doing pg_dump from a snapshot - on production that isn't possible.
> Tonight my colleague will run VACUUM FULL VERBOSE on this table in the
> production environment, so we will see whether this problem occurs only
> on the snapshot or was replicated from production.

Alright.

--
Adrian Klaver
adrian.kla...@aklaver.com
Re: [GENERAL] Benchmarking partitioning triggers and rules
Hey, I hate to bump my post, but I would really appreciate some input on
this benchmark. I am very alarmed that adding a very simple partitioning
trigger slows the insert speed by an order of magnitude. Is there any
way to speed this up?

On Mon, Mar 9, 2015 at 11:44 AM, Tim Uckun timuc...@gmail.com wrote:
> I wrote a quick benchmarking script to test various partitioning
> strategies. The code is here:
> https://gist.github.com/timuckun/954ab6bdce36fa14bc1c
>
> I was astonished at the variability of the timings between the
> different variations. The test data contained 270K records. I did a
> simple insert without any triggers, with three different trigger
> variations, and with a rule. The results were like this:
>
>   clean   0.00  0.00  0.00  (  3.119498)
>   func_1  0.00  0.00  0.00  (  7.435094)
>   func_2  0.00  0.00  0.00  ( 28.427617)
>   func_3  0.00  0.00  0.00  ( 18.348554)
>   rule    0.00  0.00  0.00  (  2.901931)
>
> A clean insert took 3.1 seconds; using a rule took even less time! A
> simple INSERT INTO table_name VALUES (NEW.*) doubled the time it takes
> to insert the records. Using an EXECUTE with an interpolated table name
> took TEN TIMES longer! Making a slight change to the EXECUTE took a
> third off the execution time. WTF?
>
> This has left me both baffled and curious. If changing little things
> like this makes a huge difference, what else can I do to make this even
> faster? Would using a different language help? Is JavaScript, Python or
> Perl faster? Is there some other syntax I can use? I tried this:
>
>   EXECUTE 'INSERT INTO ' || quote_ident(partition_name)
>       || ' VALUES (' || NEW.* || ')'
>
> but that gave me an error.
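For reference, here is a minimal sketch of the two kinds of trigger
variants being compared. The table and partition names (`measurements`,
`measurements_2015_03`) and the `created_at` column are made-up examples,
not taken from the original gist. Dynamic SQL run via EXECUTE is parsed
and planned on every execution, which is one reason those variants are
slower; note also that the record must be passed with USING rather than
concatenated into the string (`|| NEW.* ||` fails because a row value
cannot be spliced into text):

```sql
-- Variant 1: static INSERT into a hard-coded partition (cheapest trigger form).
CREATE OR REPLACE FUNCTION route_static() RETURNS trigger AS $$
BEGIN
  INSERT INTO measurements_2015_03 VALUES (NEW.*);
  RETURN NULL;  -- suppress the insert into the parent table
END;
$$ LANGUAGE plpgsql;

-- Variant 2: dynamic SQL with the target partition chosen per row.
-- EXECUTE re-plans the statement each time, so this costs more.
CREATE OR REPLACE FUNCTION route_dynamic() RETURNS trigger AS $$
DECLARE
  partition_name text := 'measurements_' || to_char(NEW.created_at, 'YYYY_MM');
BEGIN
  EXECUTE 'INSERT INTO ' || quote_ident(partition_name) || ' SELECT ($1).*'
    USING NEW;  -- pass the whole row as a parameter; do not concatenate it
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER route_ins BEFORE INSERT ON measurements
  FOR EACH ROW EXECUTE PROCEDURE route_dynamic();
```

Either function would be attached with a BEFORE INSERT trigger as shown;
the static form only works when all rows go to one known partition.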
Re: [GENERAL] Benchmarking partitioning triggers and rules
On 11.3.2015 21:43, Tim Uckun wrote:
> Hey, I hate to bump my post, but I would really appreciate some input
> on this benchmark. I am very alarmed that adding a very simple
> partitioning trigger slows the insert speed by an order of magnitude.
> Is there any way to speed this up?

I think to actually give you a meaningful response, we really need more
details about your configuration - what PostgreSQL version you are
using, what configuration you have changed from the defaults, and such.

Anyway, you're right that triggers are not cheap. The numbers I get on
the development version with a bit of tuning look like this:

  INSERT (direct)    1.5 sec
  INSERT             4.0 sec
  INSERT (EXECUTE)  11.5 sec

So it's ~ what you've measured. Rules have the lowest overhead, but
there are also a lot of tricky places. There's not much you can do,
except for inserting the data directly into the right partition
(without any triggers).

--
Tomas Vondra                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training Services
Re: [GENERAL] Postgres and data warehouses
The logical replication features in 9.4 look very promising as an
alternative to the binary replication that requires an identical schema
on both ends. There's no reason that the thing you're replicating to
couldn't be something like RabbitMQ or Storm. I've been playing with a
JSON-based decoder plugin so Postgres can emit changes to non-SQL
systems like this. See https://bitbucket.org/btubbs/decoder_json/src.
It should still be considered experimental and only used by people
unafraid of hacking on C at this point. More eyeballs would be welcome.
I blogged about more options for streaming updates from Postgres at
http://btubbs.com/streaming-updates-from-postgres.html.

On Tue, Mar 10, 2015 at 8:11 AM, Jim Nasby jim.na...@bluetreble.com wrote:
> On 3/8/15 11:02 PM, Rob Sargent wrote:
>> I expected the latter but … some proprietors like to brag :)
>> Standing down.
>>
>> On Mar 8, 2015, at 9:56 PM, John R Pierce pie...@hogranch.com wrote:
>>> On 3/8/2015 8:24 PM, Rob Sargent wrote:
>>>> I strongly suspect many would like further details on your
>>>> implementation. A user conference session, well recorded, perhaps?
>>>
>>> the details are proprietary and quite specific to our workload and
>>> requirements.
>
> You could build something similar with Skytools; it's what Skype
> actually created it for. They ran many real-time datamarts/warehouses
> using it. The big difference between it and Slony is that Skytools is
> modular, so it's designed to allow for this kind of flexibility.
>
> --
> Jim Nasby, Data Architect, Blue Treble Consulting
> Data in Trouble? Get it in Treble! http://BlueTreble.com
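For anyone wanting to experiment before wiring up an external consumer,
9.4's logical decoding can be exercised from plain SQL using the
built-in test_decoding plugin. This is a generic sketch, not the
decoder_json plugin mentioned above; the slot and table names are
arbitrary examples:

```sql
-- Requires wal_level = logical and max_replication_slots > 0 in postgresql.conf.

-- Create a logical replication slot that decodes WAL via test_decoding.
SELECT * FROM pg_create_logical_replication_slot('demo_slot', 'test_decoding');

-- Make a change somewhere...
CREATE TABLE demo (id int PRIMARY KEY);
INSERT INTO demo VALUES (1);

-- ...then consume the decoded change stream from the slot.
SELECT * FROM pg_logical_slot_get_changes('demo_slot', NULL, NULL);

-- Drop the slot when done, or the server will retain WAL for it forever.
SELECT pg_drop_replication_slot('demo_slot');
```

A streaming consumer (e.g. one feeding RabbitMQ or Storm) would use the
replication protocol instead of polling these SQL functions, but the
slot mechanics are the same.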
Re: [GENERAL] PostgreSQL-related legal question
On March 12, 2015 06:43:40 AM Gavin Flower wrote:
> Bill cannot comment, but it might be along the lines of assigning all
> intellectual property rights, or something of that ilk. In that case,
> it might give the company ownership of stuff he may have contributed
> (or intends to contribute) to PostgreSQL in some way – which could lead
> to legal complications affecting PostgreSQL adversely, which would be
> expensive and an unnecessary distraction.

I used to work for a company that did exactly that - you had to sign a
contract that assigned copyright of all your work, even work done
outside of work hours, to the company. They did, however, tell you
beforehand that if you were an established contributor to an open-source
project they could make exceptions for that, but you had to go through
legal. The upshot was that if you wrote an iPhone app in 15 minutes, the
company would technically own it.
Re: [GENERAL] PostgreSQL-related legal question
One of my sons was hired by Google last year after spending the past
several years working on various open-source projects. It took two days
of back-and-forth with Google's legal department before he was satisfied
with the restrictions in their offer.

--
Mike Nolan

On Wed, Mar 11, 2015 at 4:46 PM, Jan de Visser j...@de-visser.net wrote:
> On March 12, 2015 06:43:40 AM Gavin Flower wrote:
>> Bill cannot comment, but it might be along the lines of assigning all
>> intellectual property rights, or something of that ilk. In that case,
>> it might give the company ownership of stuff he may have contributed
>> (or intends to contribute) to PostgreSQL in some way – which could
>> lead to legal complications affecting PostgreSQL adversely, which
>> would be expensive and an unnecessary distraction.
>
> I used to work for a company that did exactly that - you had to sign a
> contract that assigned copyright of all your work, even work done
> outside of work hours, to the company. They did, however, tell you
> beforehand that if you were an established contributor to an
> open-source project they could make exceptions for that, but you had to
> go through legal. The upshot was that if you wrote an iPhone app in 15
> minutes, the company would technically own it.
Re: [GENERAL] FW: Installation
On Mon, 9 Mar 2015 16:35:12 +0000, Martin Caverly
mcave...@koboproductsinc.com wrote:
> I am trying to install PostgreSQL 9.0.12 (stand-alone, fresh
> installation), but cannot get past the password screen. I have tried
> these passwords: root, Root, Superuser, superuser, postgres, Postgres,
> Postgressql, postgressql… none of them was accepted. Please kindly
> advise the default password for a fresh installation.

What are you installing on? (And, as Adrian noted, why the old
distribution on a fresh install?)

On Debian, for instance, the new postgres user has no password. You need
to do (as root):

  # su - postgres
  $ psql

--
Salutations,
Vincent Veyron
https://marica.fr/
Litigation, insurance claim and contract management for legal departments
Re: [GENERAL] FW: Installation
Thanks, Vincent, for your response. I had used the wrong setup program.

Martin

-----Original Message-----
From: Vincent Veyron [mailto:vv.li...@wanadoo.fr]
Sent: Wednesday, March 11, 2015 12:57 PM
To: Martin Caverly
Cc: 'pgsql-general@postgresql.org'
Subject: Re: [GENERAL] FW: Installation

> What are you installing on? (And, as Adrian noted, why the old
> distribution on a fresh install?)
>
> On Debian, for instance, the new postgres user has no password. You
> need to do (as root):
>
>   # su - postgres
>   $ psql
Re: [GENERAL] Basic Question on Point In Time Recovery
Hi Robert...

On Wed, Mar 11, 2015 at 11:54 AM, Robert Inder rob...@interactive.co.uk wrote:
> Is our current frequent pg_dump approach a sensible way to go about
> things? Or are we missing something? Is there some other way to restore
> one database without affecting the others?

As you've been told before, pg_dump is the way to go, and it hits hard
on the I/O load. Also, depending on where you are dumping to, you may be
shooting yourself in the foot (dump to another disk, or to another
machine).

You may try streaming replication + pg_dump; we are currently doing
this, although not in your exact scenario. That is, build a streaming
replication slave and pg_dump from the slave. If needed, restore to the
master. The nice thing is you can use desktop-class machines for the
slave. If you do not have spare machines, I would suggest a
desktop-class machine with plenty of RAM and whatever disks you need for
the DB, plus an extra disk to pg_dump to (so pg_dump does not compete
with the DB for the DB disks; this really kills performance).
Replication slaves do not need that much RAM, as the only queries they
are going to run are the pg_dump ones, and desktop RAM is cheap.

We did this with a not-so-powerful desktop with an extra SATA disk to
store the pg_dumps, and it worked really well. We are presently using
two servers, using one of the spare gigabit interfaces with a crossover
cable for the replication connection, plus an extra SATA disk to make
hourly pg_dumps, and it works quite well.

Francisco Olarte.
Re: [GENERAL] PostgreSQL-related legal question
On 12/03/15 01:55, Bill Moran wrote:
> On Wed, 11 Mar 2015 13:50:02 +0100 Dorian Hoxha dorian.ho...@gmail.com wrote:
>> I don't see how it could have a negative impact on the PostgreSQL
>> project? It's not like your job will be to find vulnerabilities and
>> not disclose them?
>
> I don't think I should discuss the particulars of the situation on the
> list. That's why I'm just looking for a lawyer who understands the
> situation and can advise me.
>
>> On Wed, Mar 11, 2015 at 1:28 PM, Bill Moran wmo...@potentialtech.com wrote:
>>> I've been asked to sign a legal document related to a PostgreSQL-
>>> related job opening. I have concerns about the document and that
>>> signing it could have a negative impact on the PostgreSQL project
>>> (in addition to personal concerns).
>>>
>>> I'm guessing I'm not the first person to go through this. I'm hoping
>>> someone on this list can refer me to a lawyer who is familiar with
>>> the challenges of NDAs and open source projects. I'm not asking for
>>> pro bono; I'm willing to pay for services. I just figured that I
>>> might get better results getting a referral than by contacting
>>> $random_legal_service.
>>>
>>> --
>>> Bill Moran

Bill cannot comment, but it might be along the lines of assigning all
intellectual property rights, or something of that ilk. In that case, it
might give the company ownership of stuff he may have contributed (or
intends to contribute) to PostgreSQL in some way – which could lead to
legal complications affecting PostgreSQL adversely, which would be
expensive and an unnecessary distraction.
Re: [GENERAL] PostgreSQL-related legal question
On Wed, 11 Mar 2015 08:28:11 -0400 Bill Moran wmo...@potentialtech.com wrote:
> I've been asked to sign a legal document related to a PostgreSQL-
> related job opening. I have concerns about the document and that
> signing it could have a negative impact on the PostgreSQL project (in
> addition to personal concerns).

Open source licenses are usually designed to protect the project, so it
should be hard for you to hurt it. I would rather be worried about
signing up for something illegal on your end - a good thing to check
anyway.

> I'm guessing I'm not the first person to go through this. I'm hoping
> someone on this list can refer me to a lawyer who is familiar with the
> challenges of NDAs and open source projects. I'm not asking for
> pro bono; I'm willing to pay for services. I just figured that I might
> get better results getting a referral than by contacting
> $random_legal_service.

I would try posting your question on one of the mailing lists at the OSI
first (probably license-discuss?): http://opensource.org/lists

--
Salutations,
Vincent Veyron
https://libremen.com/
Legal case, contract and insurance claim management software
Re: [GENERAL] PostgreSQL-related legal question
Michael Nolan htf...@gmail.com writes:
> One of my sons was hired by Google last year after spending the past
> several years working on various open-source projects. It took two days
> of back-and-forth with Google's legal department before he was
> satisfied with the restrictions in their offer.

FWIW, I had a pretty similar discussion with Salesforce when I joined
them. If you're looking at an employment agreement with verbiage like
this, get them to modify it. They're probably hiring you in part
*because* you are a contributor to PG, so they should be willing to bend
their standard language for you. If not, maybe you don't want that job.

regards, tom lane
[GENERAL] Basic Question on Point In Time Recovery
We are developing a new software system which is now used by a number of
independent clients for gathering and storing live data as part of their
day-to-day work.

We have a number of clients sharing a single server. It is running one
Postgres service, and each client is a separate user with access to
their own database. Each client's database will contain hundreds of
thousands of records and will be supporting occasional queries by a
small number of users, so the system is currently running on modest
hardware.

To guard against the server failing, we have a standby server being
updated by WAL files, so if the worst comes to the worst we'll only lose
a few minutes' work. No problems there.

But, at least while the system is under rapid development, we also want
a way to roll a particular client's database back to a (recent) known
good state, without affecting any other client. My understanding is that
the WAL file mechanism is installation-wide - it will affect all clients
alike. So to allow us to restore data for an individual client, we're
running pg_dump once an hour on each database in turn. In the event of a
problem with one client's system, we can restore just that one database
without affecting any other client.

The problem is that we're finding that as the number of clients grows,
and with it the amount of data, pg_dump is becoming more intrusive. Our
perception is that when pg_dump is running for any database, performance
on all databases is reduced. I'm guessing this is because the dump is
making heavy use of the disk.

There is obviously scope for improving performance by using more, or
more powerful, hardware. That's going to be necessary at some point, but
it is an expense that our client would like to defer as long as
possible. So before we go down that route, I'd like to check that we're
not doing something dopey.

Is our current frequent pg_dump approach a sensible way to go about
things?
Or are we missing something? Is there some other way to restore one
database without affecting the others?

Thanks in advance.

Robert.

--
Robert Inder, Interactive Information Ltd,
3, Lauriston Gardens, Edinburgh EH3 9HH
Registered in Scotland, Company no. SC 150689
Interactions speak louder than words
[GENERAL] Asynchronous replication in postgresql
Hi,

I am new to PostgreSQL and am evaluating streaming replication for my
use case, which is:

1. Replicate data from the primary database (master) to a secondary
   database (slave) asynchronously.
2. If the master goes down, the slave should automatically be promoted
   to master.
3. Later, when the original primary server (original master) is brought
   up again, it should take back its master role, and the new master
   should become the slave again, as in the original setup.

For #1, the streaming replication of PostgreSQL is good enough. For #2,
we need to create the trigger file. How can we do this automatically?
For #3, this seems to be quite complicated. Is this even possible using
streaming replication? If yes, how can this be achieved?

Your help is greatly appreciated in this regard. Thank you very much.
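For reference (not from the original post), the standby side of #1 and
#2 in a 9.x-era setup is driven by a recovery.conf like the sketch
below; the hostname, user, and trigger-file path are made-up examples.
Note that creating the trigger file promotes the standby, but doing that
automatically requires an external failover manager (e.g. repmgr or
Pacemaker) that watches the master and creates the file; PostgreSQL
itself will not do it:

```
# recovery.conf on the standby (PostgreSQL 9.x)
standby_mode = 'on'

# connection back to the primary (hypothetical host and user)
primary_conninfo = 'host=master.example.com port=5432 user=replicator'

# promotion happens when this file appears; a failover manager,
# not PostgreSQL itself, has to create it when the master dies
trigger_file = '/var/lib/postgresql/9.4/main/failover.trigger'
```

For #3, the usual approach is not to "swap back" automatically but to
rebuild the old master as a standby of the new master and then perform a
controlled switchover.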
Re: [GENERAL] PostgreSQL-related legal question
On Wed, 11 Mar 2015 13:50:02 +0100 Dorian Hoxha dorian.ho...@gmail.com wrote:
> I don't see how it could have a negative impact on the PostgreSQL
> project? It's not like your job will be to find vulnerabilities and not
> disclose them?

I don't think I should discuss the particulars of the situation on the
list. That's why I'm just looking for a lawyer who understands the
situation and can advise me.

> On Wed, Mar 11, 2015 at 1:28 PM, Bill Moran wmo...@potentialtech.com wrote:
>> I've been asked to sign a legal document related to a PostgreSQL-
>> related job opening. I have concerns about the document and that
>> signing it could have a negative impact on the PostgreSQL project (in
>> addition to personal concerns).
>>
>> I'm guessing I'm not the first person to go through this. I'm hoping
>> someone on this list can refer me to a lawyer who is familiar with the
>> challenges of NDAs and open source projects. I'm not asking for
>> pro bono; I'm willing to pay for services. I just figured that I might
>> get better results getting a referral than by contacting
>> $random_legal_service.

--
Bill Moran
Re: [GENERAL] PostgreSQL-related legal question
I don't see how it could have a negative impact on the PostgreSQL
project? It's not like your job will be to find vulnerabilities and not
disclose them?

On Wed, Mar 11, 2015 at 1:28 PM, Bill Moran wmo...@potentialtech.com wrote:
> I've been asked to sign a legal document related to a PostgreSQL-
> related job opening. I have concerns about the document and that
> signing it could have a negative impact on the PostgreSQL project (in
> addition to personal concerns).
>
> I'm guessing I'm not the first person to go through this. I'm hoping
> someone on this list can refer me to a lawyer who is familiar with the
> challenges of NDAs and open source projects. I'm not asking for
> pro bono; I'm willing to pay for services. I just figured that I might
> get better results getting a referral than by contacting
> $random_legal_service.
>
> --
> Bill Moran
Re: [GENERAL] Basic Question on Point In Time Recovery
On Wed, Mar 11, 2015 at 10:54:56AM +0000, Robert Inder wrote:
> But, at least while the system is under rapid development, we also want
> to have a way to roll a particular client's database back to a (recent)
> known good state, but without affecting any other client. My
> understanding is that the WAL files mechanism is installation-wide --
> it will affect all clients alike.

Right. It's the WAL, so everything in the relevant Postgres system is
involved.

> So before we go down that route, I'd like to check that we're not doing
> something dopey.

No, frequent pg_dumps are indeed hard on I/O.

> Is our current frequent pg_dump approach a sensible way to go about
> things? Or are we missing something? Is there some other way to restore
> one database without affecting the others?

Slony-I, which is a PITA to administer, has a mode where you can ship
logs off and restore them in pieces. The logs are not WAL, but Slony
logs (which are produced by triggers and some explicit event writing for
schema changes), so they work per-database. Schema changes are really
quite involved for Slony, there's overhead resulting from the triggers,
and, as I said, it's rather clunky to administer. But it's been around
some time, it is still actively maintained, and it has this
functionality. The PITR tools were, last I checked, pretty primitive,
but the tool might work for your case.

I don't know whether Bucardo or Londiste (two alternative systems that
work on roughly the same principle) have this functionality, but I kind
of doubt it, since both were designed to get rid of several of the
complexities that Slony presented. (Slony had all those complexities
because it was trying to offer all this functionality at once.)

Best regards,

A

--
Andrew Sullivan
a...@crankycanuck.ca
[GENERAL] PostgreSQL-related legal question
I've been asked to sign a legal document related to a PostgreSQL-related
job opening. I have concerns about the document and that signing it
could have a negative impact on the PostgreSQL project (in addition to
personal concerns).

I'm guessing I'm not the first person to go through this. I'm hoping
someone on this list can refer me to a lawyer who is familiar with the
challenges of NDAs and open source projects. I'm not asking for pro
bono; I'm willing to pay for services. I just figured that I might get
better results getting a referral than by contacting
$random_legal_service.

--
Bill Moran
Re: [GENERAL] Benchmarking partitioning triggers and rules
I am using Postgres 9.4, the default install with brew install postgres,
no tuning at all. BTW, if I use the Postgres.app application, the
benchmarks run twice as slow!

Why do you think there is such a dramatic difference between

  EXECUTE 'INSERT INTO ' || quote_ident(partition_name)
      || ' SELECT ($1).*' USING NEW;

and

  EXECUTE 'INSERT INTO ' || quote_ident(partition_name)
      || ' VALUES( ($1).* )' USING NEW;

One is thirty percent faster than the other. Also, is there an even
better way that I don't know about?

On Thu, Mar 12, 2015 at 10:42 AM, Tomas Vondra
tomas.von...@2ndquadrant.com wrote:
> On 11.3.2015 21:43, Tim Uckun wrote:
>> Hey, I hate to bump my post, but I would really appreciate some input
>> on this benchmark. I am very alarmed that adding a very simple
>> partitioning trigger slows the insert speed by an order of magnitude.
>> Is there any way to speed this up?
>
> I think to actually give you a meaningful response, we really need more
> details about your configuration - what PostgreSQL version you are
> using, what configuration you have changed from the defaults and such.
>
> Anyway, you're right that triggers are not cheap. The numbers I get on
> the development version with a bit of tuning look like this:
>
>   INSERT (direct)    1.5 sec
>   INSERT             4.0 sec
>   INSERT (EXECUTE)  11.5 sec
>
> So it's ~ what you've measured. Rules have the lowest overhead, but
> there are also a lot of tricky places. There's not much you can do,
> except for inserting the data directly into the right partition
> (without any triggers).
>
> --
> Tomas Vondra                http://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training Services
Re: [GENERAL] Basic Question on Point In Time Recovery
On Wed, Mar 11, 2015 at 9:55 AM, Andrew Sullivan a...@crankycanuck.ca wrote:
> On Wed, Mar 11, 2015 at 09:40:09AM -0400, Joseph Kregloh wrote:
>> Have you looked into Barman? http://www.pgbarman.org/ It does what you
>> want. You can take a full daily backup, and it keeps track of the WAL
>> files to allow for a PITR.
>
> I just had a look at the documentation (and the rest of your mail), and
> this doesn't actually seem to do what the OP wanted, which is to get
> PITR _per database_ in the same cluster. Upthread someone suggested a
> way around this, which is to PITR a cluster to a known-good point and
> then pg_dump the target database.

This is where Barman comes in. It would take care of handling all of the
stuff required for a PITR. Then he can do the pg_dump for the selected
database. Barman does back up the entire cluster.

> But if Barman can do this automatically, that'd be cool (it's just not
> in the docs). Barman does look like a nice convenience package for
> managing WAL-shipping type backup installations, instead of building
> one's own scripts, so this note isn't intended as a criticism of the
> package. I'm just not sure it does the thing requested in this case.
>
> A
>
> --
> Andrew Sullivan
> a...@crankycanuck.ca
Re: [GENERAL] Basic Question on Point In Time Recovery
On Wed, Mar 11, 2015 at 09:40:09AM -0400, Joseph Kregloh wrote:
> Have you looked into Barman? http://www.pgbarman.org/ It does what you
> want. You can take a full daily backup, and it keeps track of the WAL
> files to allow for a PITR.

I just had a look at the documentation (and the rest of your mail), and
this doesn't actually seem to do what the OP wanted, which is to get
PITR _per database_ in the same cluster. Upthread someone suggested a
way around this, which is to PITR a cluster to a known-good point and
then pg_dump the target database. But if Barman can do this
automatically, that'd be cool (it's just not in the docs).

Barman does look like a nice convenience package for managing
WAL-shipping type backup installations, instead of building one's own
scripts, so this note isn't intended as a criticism of the package. I'm
just not sure it does the thing requested in this case.

A

--
Andrew Sullivan
a...@crankycanuck.ca