Re: [HACKERS] Patch for automating partitions in PostgreSQL 8.4 Beta 2
PFA the required header file.

Regards,
--
Kedar.

On Tue, Jun 9, 2009 at 12:26 AM, Jaime Casanova <jcasa...@systemguards.com.ec> wrote:
> On Mon, Jun 8, 2009 at 1:38 PM, Grzegorz Jaskiewicz wrote:
> >
> > make[3]: *** No rule to make target
> > `../../../src/include/catalog/pg_partition.h', needed by `postgres.bki'.
> > Stop.
>
> there is no pg_partition.h file in the patch, please send it
>
> --
> Atentamente,
> Jaime Casanova
> Soporte y capacitación de PostgreSQL
> Asesoría y desarrollo de sistemas
> Guayaquil - Ecuador
> Cel. +59387171157

/*-------------------------------------------------------------------------
 *
 * pg_partition.h
 *	  definition of the system "partition" relation (pg_partition)
 *	  along with the relation's initial contents.
 *
 *
 * Portions Copyright (c) 1996-2008, PostgreSQL Global Development Group
 *
 * $PostgreSQL: pgsql/src/include/catalog/pg_partition.h,v 1.0 2009/02/03 03:57:34 tgl Exp $
 *
 * NOTES
 *	  the genbki.sh script reads this file and generates .bki
 *	  information from the DATA() statements.
 *
 *-------------------------------------------------------------------------
 */
#ifndef PG_PARTITION_H
#define PG_PARTITION_H

#include "catalog/genbki.h"

/* ----------------
 *		pg_partition definition.  cpp turns this into
 *		typedef struct FormData_pg_partition
 * ----------------
 */
#define PartitionRelationId  2336

CATALOG(pg_partition,2336)
{
	Oid		partrelid;		/* partition table Oid */
	Oid		parentrelid;	/* parent table Oid */
	int2	parttype;		/* type of partition: list, hash, range */
	Oid		partkey;		/* partition key Oid */
	Oid		keytype;		/* type of partition key */
	int2	keyorder;		/* order of the key in multi-key partitions */
	bytea	minval;			/* min for range partition */
	bytea	maxval;			/* max for range partition */
	bytea	listval;		/* list value */
	int2	hashval;		/* hash value */
} FormData_pg_partition;

/* ----------------
 *		Form_pg_partition corresponds to a pointer to a tuple with
 *		the format of the pg_partition relation.
 * ----------------
 */
typedef FormData_pg_partition *Form_pg_partition;

/* ----------------
 *		compiler constants for pg_partition
 * ----------------
 */
#define Natts_pg_partition				10
#define Anum_pg_partition_partrelid		1
#define Anum_pg_partition_parentrelid	2
#define Anum_pg_partition_parttype		3
#define Anum_pg_partition_partkey		4
#define Anum_pg_partition_keytype		5
#define Anum_pg_partition_keyorder		6
#define Anum_pg_partition_minval		7
#define Anum_pg_partition_maxval		8
#define Anum_pg_partition_listval		9
#define Anum_pg_partition_hashval		10

#endif   /* PG_PARTITION_H */

-- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
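[Editorial note on the catalog-header conventions above: the Anum_* constants are 1-based attribute numbers, so backend code subtracts one when indexing a deformed tuple's values array, e.g. values[Anum_pg_partition_parttype - 1]. A minimal self-contained sketch of that convention — plain ints stand in for the Datum array here, and get_attr is a hypothetical helper, not PostgreSQL code:]

```c
/* 1-based attribute numbers, mirroring pg_partition.h */
#define Natts_pg_partition            10
#define Anum_pg_partition_partrelid   1
#define Anum_pg_partition_parttype    3

/* Stand-in accessor: real backend code deforms a heap tuple into a
 * Datum array and then indexes it with (Anum_* - 1).  Here a plain
 * int array stands in for the Datum array. */
static int
get_attr(const int *values, int attnum)
{
    return values[attnum - 1];  /* attribute numbers start at 1 */
}
```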
Re: [HACKERS] postmaster recovery and automatic restart suppression
On Mon, Jun 8, 2009 at 7:34 PM, Tom Lane wrote:
> Robert Haas writes:
>> I see that you've carefully not quoted Greg's remark about "mechanism
>> not policy" with which I completely agree.
>
> Mechanism should exist to support useful policy.  I don't believe that
> the proposed switch has any real-world usefulness.

I guess I agree that it doesn't seem to make much sense to trigger failover on a DB crash, as the OP suggested. The most likely cause of a DB crash is probably a software bug, in which case failover isn't going to help (won't you just trigger the same bug on the standby server?). The case where you'd probably want to fail over is when the whole server has gone down due to a hardware or power failure, in which case your hypothetical home-grown supervisor process won't be able to run anyway.

But I'm still not 100% convinced that the proposed mechanism is useless. There might be other reasons to want to get control in the event of a crash. You might want to page the system administrator, or trigger a filesystem snapshot so you can go back and do a post-mortem. (The former could arguably be done just as well by scanning the log file for the relevant log messages, I suppose, but the latter certainly couldn't be, if your goal is to get a snapshot before recovery is done.) But maybe I'm all wet...

...Robert
[HACKERS] Multicolumn index corruption on 8.4 beta 2
Hi,

I pg_dump'ed an 8.3.7 database and loaded the dump into a different server running PostgreSQL 8.4 beta 2 (compiled from source) under OpenSolaris. One of the tables has about 6 million records, and a B-tree index that spans 3 columns. I am having the problem that some queries are unable to find rows when using the index. When I force a sequential scan, by doing "set enable_indexscan=false; set enable_bitmapscan=false;", the same queries work fine.

In addition, while running "vacuum full analyze" I got the following error a couple of times:

===
ERROR: failed to re-find parent key in index "pgb_idx" for deletion target page 25470
===

Doing "reindex", or dropping and creating the index, makes the error go away for a while. However, it does not solve the problem of the missing rows, making me believe the index PostgreSQL generates is still corrupt. According to memtest the memory of the server is fine, and according to "zpool status" there are no disk or ZFS checksum errors.

Any idea how to solve or debug this issue?

Yours sincerely,
Floris Bos
Re: [HACKERS] postmaster recovery and automatic restart suppression
Robert Haas writes:
> I see that you've carefully not quoted Greg's remark about "mechanism
> not policy" with which I completely agree.

Mechanism should exist to support useful policy.  I don't believe that the proposed switch has any real-world usefulness.

regards, tom lane
Re: [HACKERS] postmaster recovery and automatic restart suppression
On Mon, Jun 8, 2009 at 4:30 PM, Tom Lane wrote:
> Greg Stark writes:
>>> On Mon, 2009-06-08 at 09:47 -0400, Tom Lane wrote:
>>>> I think the proposed don't-restart flag is exceedingly ugly and will not
>>>> solve any real-world problem.
>
>> Hm. I'm not sure I see a solid use case for it -- in my experience you
>> want to be pretty sure you have a persistent problem before you fail
>> over.
>
> Yeah, and when you do fail over you want more guarantee than "none at
> all" that the primary won't start back up again on its own.
>
>> But I don't really see why it's ugly either.
>
> Because it's intentionally blowing a hole in one of the most prized
> properties of the database, ie, that it doesn't go down if it can help
> it.  I want a *WHOLE* lot stronger rationale than "somebody might want
> it someday" before providing a switch that lets somebody thoughtlessly
> break a property we've sweated blood for ten years to ensure.

I see that you've carefully not quoted Greg's remark about "mechanism not policy" with which I completely agree. This seems like a pretty useful switch for people who want more control over how the database gets restarted on those rare occasions when it wipes out (and possibly for debugging crash-type problems as well). The amount of blood-sweating that was required to make a robust automatic restart mechanism doesn't seem relevant to this discussion, though it is certainly a cool feature.

I also don't see any reason to assume that users will do this "thoughtlessly". Perhaps someone will, but if our policy is to not add any features on the theory that someone might use them in a stupid way, we'd better get busy reverting a significant fraction of the work done for 8.4. I'm not going to go so far as to say that we should never reject a feature because the danger of someone shooting themselves in the foot is too high, but this doesn't even seem like a likely candidate.
If we put an option in postgresql.conf called "automatic_restart_after_crash = on", anyone who switches that to "off" should have a pretty good idea what the likely consequences of that decision will be. The people who are too stupid to figure that one out are likely to have a whole lot of other problems too, and they're not the people at whom we should be targeting this product.

...Robert
Re: [HACKERS] pg_migrator issue with contrib
Tom Lane wrote:
> Bruce Momjian writes:
> > Tom Lane wrote:
> >> I've spent some time thinking about possible workarounds for this, and
> >> not really come up with any.  The only feasible thing I can think of
> >> to do is teach pg_migrator to refuse to migrate if (a) the old DB
> >> contains contrib/isn, and (b) the new DB has FLOAT8PASSBYVAL (which
> >> can be checked in pg_control).  One question here is how you decide
> >> if the old DB contains contrib/isn.  I don't think looking for the
> >> type name per se is a hot idea.  The best plan that has come to mind
> >> is to look through pg_proc to see if there are any C-language functions
> >> that reference "$libdir/isn".
>
> > Sure, pg_migrator is good at checking.  Please confirm you want this
> > added to pg_migrator.
>
> Yeah, I'd suggest it.  Even if we later come up with a workaround for
> contrib/isn, you're going to want to have the infrastructure in place
> for this type of check, because there will surely be cases that need it.
>
> Note that I think the FLOAT8PASSBYVAL check is a must.  There is no
> reason to forbid migrating isn on 32-bit machines, for example.

Done, with patch attached, and pg_migrator beta6 released.

--
Bruce Momjian  http://momjian.us
EnterpriseDB   http://enterprisedb.com
+ If your life is a hard drive, Christ can be your backup. +

? tools
? log
? src/pg_migrator
Index: src/controldata.c
===================================================================
RCS file: /cvsroot/pg-migrator/pg_migrator/src/controldata.c,v
retrieving revision 1.14
diff -c -r1.14 controldata.c
*** src/controldata.c	13 May 2009 15:19:16 -0000	1.14
--- src/controldata.c	8 Jun 2009 21:29:31 -0000
***************
*** 23,29 ****
   */
  void
  get_control_data(migratorContext *ctx, const char *bindir,
! 				 const char *datadir, ControlData *ctrl)
  {
  	char		cmd[MAXPGPATH];
  	char		bufin[MAX_STRING];
--- 23,30 ----
   */
  void
  get_control_data(migratorContext *ctx, const char *bindir,
! 				 const char *datadir, ControlData *ctrl,
! 				 const uint32 pg_version)
  {
  	char		cmd[MAXPGPATH];
  	char		bufin[MAX_STRING];
***************
*** 43,48 ****
--- 44,50 ----
  	bool		got_index = false;
  	bool		got_toast = false;
  	bool		got_date_is_int = false;
+ 	bool		got_float8_pass_by_value = false;
  	char	   *lang = NULL;

  	/* Because we test the pg_resetxlog output strings, it has to be in English. */
***************
*** 65,71 ****
  	/* Only pre-8.4 has these so if they are not set below we will check later */
  	ctrl->lc_collate = NULL;
  	ctrl->lc_ctype = NULL;
! 
  	/* we have the result of cmd in "output". so parse it line by line now */
  	while (fgets(bufin, sizeof(bufin), output))
  	{
--- 67,80 ----
  	/* Only pre-8.4 has these so if they are not set below we will check later */
  	ctrl->lc_collate = NULL;
  	ctrl->lc_ctype = NULL;
! 
! 	/* Not in pre-8.4 */
! 	if (pg_version < 80400)
! 	{
! 		ctrl->float8_pass_by_value = false;
! 		got_float8_pass_by_value = true;
! 	}
! 
  	/* we have the result of cmd in "output". so parse it line by line now */
  	while (fgets(bufin, sizeof(bufin), output))
  	{
***************
*** 249,254 ****
--- 258,275 ----
  			ctrl->date_is_int = strstr(p, "64-bit integers") != NULL;
  			got_date_is_int = true;
  		}
+ 		else if ((p = strstr(bufin, "Float8 argument passing:")) != NULL)
+ 		{
+ 			p = strchr(p, ':');
+ 
+ 			if (p == NULL || strlen(p) <= 1)
+ 				pg_log(ctx, PG_FATAL, "%d: pg_resetxlog problem\n", __LINE__);
+ 
+ 			p++;				/* removing ':' char */
+ 			/* used later for /contrib check */
+ 			ctrl->float8_pass_by_value = strstr(p, "by value") != NULL;
+ 			got_float8_pass_by_value = true;
+ 		}
  		/* In pre-8.4 only */
  		else if ((p = strstr(bufin, "LC_COLLATE:")) != NULL)
  		{
***************
*** 305,311 ****
  	if (!got_xid || !got_oid || !got_log_id || !got_log_seg ||
  		!got_tli || !got_align || !got_blocksz || !got_largesz ||
  		!got_walsz || !got_walseg || !got_ident || !got_index || !got_toast ||
! 		!got_date_is_int)
  	{
  		pg_log(ctx, PG_REPORT,
  			   "Some required control information is missing; cannot find:\n");
--- 326,332 ----
  	if (!got_xid || !got_oid || !got_log_id || !got_log_seg ||
  		!got_tli || !got_align || !got_blocksz || !got_largesz ||
  		!got_walsz || !got_walseg || !got_ident || !got_index || !got_toast ||
! 		!got_date_is_int || !got_float8_pass_by_value)
  	{
  		pg_log(ctx, PG_REPORT,
  			   "Some required control information is missing; cannot find:\n");
***************
*** 352,357 ****
--- 373,382 ----
  	if (!got_date_is_int)
  		pg_log(ctx, PG_REPORT, "  dates/times are integers?\n");

+ 	/* value added in Postgres 8.4 */
+ 	if (!got_float8_pass_by_value)
+ 		pg_log(ctx, PG_REPORT, "  float8 argument passing method\n");
+ 
  	pg_log(ctx, PG_FATAL,
  		   "Unable to continue without required control information, terminating\n");
  }
Index: src/pg_migrator.c
==
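[Editorial note: the core of the patch is string-matching one line of pg_resetxlog output. A self-contained sketch of that parsing step — parse_float8_pass_by_value is a hypothetical stand-in for the logic inside get_control_data(), with the pg_log(PG_FATAL) call replaced by a plain error return:]

```c
#include <stdbool.h>
#include <string.h>

/* Sketch of the check the patch adds: scan one line of pg_resetxlog
 * output and report whether float8 is passed by value.  *found is set
 * when the line is the "Float8 argument passing:" line at all. */
static bool
parse_float8_pass_by_value(const char *line, bool *found)
{
    const char *p = strstr(line, "Float8 argument passing:");

    *found = false;
    if (p == NULL)
        return false;           /* not the line we are looking for */
    p = strchr(p, ':');
    if (p == NULL || strlen(p) <= 1)
        return false;           /* the real code calls pg_log(ctx, PG_FATAL, ...) here */
    p++;                        /* skip the ':' */
    *found = true;
    return strstr(p, "by value") != NULL;
}
```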
Re: [HACKERS] postmaster recovery and automatic restart suppression
Greg Stark writes:
>> On Mon, 2009-06-08 at 09:47 -0400, Tom Lane wrote:
>>> I think the proposed don't-restart flag is exceedingly ugly and will not
>>> solve any real-world problem.

> Hm. I'm not sure I see a solid use case for it -- in my experience you
> want to be pretty sure you have a persistent problem before you fail
> over.

Yeah, and when you do fail over you want more guarantee than "none at all" that the primary won't start back up again on its own.

> But I don't really see why it's ugly either.

Because it's intentionally blowing a hole in one of the most prized properties of the database, ie, that it doesn't go down if it can help it.  I want a *WHOLE* lot stronger rationale than "somebody might want it someday" before providing a switch that lets somebody thoughtlessly break a property we've sweated blood for ten years to ensure.

regards, tom lane
Re: [HACKERS] postmaster recovery and automatic restart suppression
On Mon, Jun 8, 2009 at 6:58 PM, Simon Riggs wrote:
>
> On Mon, 2009-06-08 at 09:47 -0400, Tom Lane wrote:
>
>> I think the proposed don't-restart flag is exceedingly ugly and will not
>> solve any real-world problem.
>
> Agreed.

Hm. I'm not sure I see a solid use case for it -- in my experience you want to be pretty sure you have a persistent problem before you fail over. But I don't really see why it's ugly either. I mean our auto-restart behaviour is pretty arbitrary. You could just as easily argue we shouldn't auto-restart and rely on the user to restart the service like he would any service which crashes.

I would file it under "mechanism not policy" and make it optional. The user should be able to select what to do when a backend crash is detected from amongst the various safe options, even if we think some of the options don't have any use cases we can think of. Someone will surely think of one at some point. (idly I wonder if cloud environments where you can have an infinite supply of slaves are such a use case...)

--
greg
http://mit.edu/~gsstark/resume.pdf
Re: [HACKERS] pg_migrator issue with contrib
Alvaro Herrera wrote:
> Bruce Momjian escribió:
>
> > For longterm strategy, let me list the challenges for pg_migrator from
> > any major upgrade (listed in the DEVELOPERS file):
> >
> > Change                               Conversion Method
> > ------------------------------------------------------------------
> > clog                                 none
> > heap page header, including bitmask  convert to new page format on read
> > tuple header, including bitmask      convert to new page format on read
> > data value format                    create old data type in new cluster
> > index page format                    reindex, or recreate old index methods
>
> TOAST changes?

Good point, added.

--
Bruce Momjian  http://momjian.us
EnterpriseDB   http://enterprisedb.com
Re: [HACKERS] pg_migrator issue with contrib
Bruce Momjian writes:
> Tom Lane wrote:
>> I've spent some time thinking about possible workarounds for this, and
>> not really come up with any.  The only feasible thing I can think of
>> to do is teach pg_migrator to refuse to migrate if (a) the old DB
>> contains contrib/isn, and (b) the new DB has FLOAT8PASSBYVAL (which
>> can be checked in pg_control).  One question here is how you decide
>> if the old DB contains contrib/isn.  I don't think looking for the
>> type name per se is a hot idea.  The best plan that has come to mind
>> is to look through pg_proc to see if there are any C-language functions
>> that reference "$libdir/isn".

> Sure, pg_migrator is good at checking.  Please confirm you want this
> added to pg_migrator.

Yeah, I'd suggest it. Even if we later come up with a workaround for contrib/isn, you're going to want to have the infrastructure in place for this type of check, because there will surely be cases that need it.

Note that I think the FLOAT8PASSBYVAL check is a must. There is no reason to forbid migrating isn on 32-bit machines, for example.

regards, tom lane
Re: [HACKERS] pg_migrator issue with contrib
Bruce Momjian escribió:

> For longterm strategy, let me list the challenges for pg_migrator from
> any major upgrade (listed in the DEVELOPERS file):
>
> Change                               Conversion Method
> ------------------------------------------------------------------
> clog                                 none
> heap page header, including bitmask  convert to new page format on read
> tuple header, including bitmask      convert to new page format on read
> data value format                    create old data type in new cluster
> index page format                    reindex, or recreate old index methods

TOAST changes?

--
Alvaro Herrera                       http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
Re: [HACKERS] pg_migrator issue with contrib
Tom Lane wrote:
> I wrote:
> > [ concerning various migration hazards in contrib/ ]
>
> > * isn has got the nastiest problems of the lot: since it piggybacks
> > on type bigint, a migrated database might work, or might crash
> > miserably, depending on whether bigint has become pass-by-value
> > in the new database.  I'm not very sure if we can fix this reasonably.
>
> I've spent some time thinking about possible workarounds for this, and
> not really come up with any.  The only feasible thing I can think of
> to do is teach pg_migrator to refuse to migrate if (a) the old DB
> contains contrib/isn, and (b) the new DB has FLOAT8PASSBYVAL (which
> can be checked in pg_control).  One question here is how you decide
> if the old DB contains contrib/isn.  I don't think looking for the
> type name per se is a hot idea.  The best plan that has come to mind
> is to look through pg_proc to see if there are any C-language functions
> that reference "$libdir/isn".

Sure, pg_migrator is good at checking. Please confirm you want this added to pg_migrator.

--
Bruce Momjian  http://momjian.us
EnterpriseDB   http://enterprisedb.com
Re: [HACKERS] Patch for automating partitions in PostgreSQL 8.4 Beta 2
On Mon, Jun 8, 2009 at 1:38 PM, Grzegorz Jaskiewicz wrote:
>
> make[3]: *** No rule to make target
> `../../../src/include/catalog/pg_partition.h', needed by `postgres.bki'.
> Stop.

there is no pg_partition.h file in the patch, please send it

--
Atentamente,
Jaime Casanova
Soporte y capacitación de PostgreSQL
Asesoría y desarrollo de sistemas
Guayaquil - Ecuador
Cel. +59387171157
Re: [HACKERS] pg_migrator issue with contrib
Robert Haas writes:
> On Mon, Jun 8, 2009 at 1:32 PM, Tom Lane wrote:
>> You mean like PG_MODULE_MAGIC?

> Hey, how about that.  Why doesn't that solve our problem here?
> [ thinks ... ]  I guess it's because there's no guarantee that the
> function is installed on the SQL level with the signature that is
> appropriate on the C level.

Yeah. And it's more than just the function itself. For example, in the contrib/isn mess, the function definitions didn't change. The problem is the passbyval flag (or lack of it) on the type definition.

I think we've speculated in the past about having ways of embedding per-function data into the .so libraries so that these sorts of things could be caught automatically. But it'd be a lot of work for rather limited reward.

regards, tom lane
Re: [HACKERS] pg_migrator issue with contrib
I wrote:
> [ concerning various migration hazards in contrib/ ]

> * isn has got the nastiest problems of the lot: since it piggybacks
> on type bigint, a migrated database might work, or might crash
> miserably, depending on whether bigint has become pass-by-value
> in the new database.  I'm not very sure if we can fix this reasonably.

I've spent some time thinking about possible workarounds for this, and not really come up with any. The only feasible thing I can think of to do is teach pg_migrator to refuse to migrate if (a) the old DB contains contrib/isn, and (b) the new DB has FLOAT8PASSBYVAL (which can be checked in pg_control). One question here is how you decide if the old DB contains contrib/isn. I don't think looking for the type name per se is a hot idea. The best plan that has come to mind is to look through pg_proc to see if there are any C-language functions that reference "$libdir/isn".

> * pg_freespacemap has made *major* changes in its ABI.  There's
> probably no hope of this working either, but we need to be sure
> it's not a crash risk.

This turns out not to be a problem: the set of exposed C function names changed, so the function definitions will fail to migrate. Dropping the module before migrating and reinstalling it afterwards is an easy workaround.

> * pgstattuple has made changes in the output types of its functions.
> This is a serious crash risk, and I'm not immediately sure how to
> fix it.  Note that simply migrating the module will not draw any
> errors.

This doesn't seem to create serious problems either. pgstatindex() has changed some output columns from int4 to int8, but because it creates the result tuple from text strings, it manages to just work anyway. (In principle you could get some overflow problems with very large indexes, but I doubt that's an issue in practice; and it couldn't cause a crash anyway.) pg_relpages() likewise changed its return type, but in this particular case you could only get garbage answers, not a crash.
So I think we can just tell people to reinstall the SQL file after migration.

> * tsearch2 has opclass support function changes, but unlike other
> cases of this, it will fail to migrate to 8.4 because the functions
> are references to core functions instead of being declared in the
> module.  Also, "drop it first" isn't a very useful recommendation
> since the domains it defines might be used somewhere.

It would be nice if this migrated cleanly, but it doesn't and there's not much we can do about it. At least it will fail safely.

So, other than the suggested pg_migrator hack for contrib/isn, the only thing left to do here is fix dblink_current_query(). I'll take care of that, but not till after Joe commits his remaining patch, so as not to risk creating merge hazards for him.

regards, tom lane
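[Editorial note: the refusal logic proposed for pg_migrator boils down to two flags. A hypothetical C sketch of that decision — neither function name is pg_migrator's actual code, and the SQL in the comment is just one plausible way to probe pg_proc for isn functions:]

```c
#include <stdbool.h>
#include <string.h>

/* One plausible probe for contrib/isn in the old cluster:
 *   SELECT 1 FROM pg_proc p JOIN pg_language l ON p.prolang = l.oid
 *    WHERE l.lanname = 'c' AND p.probin = '$libdir/isn' LIMIT 1;
 * Here, probins stands in for the probin column of C-language rows. */
static bool
old_cluster_has_isn(const char *const *probins, int n)
{
    for (int i = 0; i < n; i++)
        if (strcmp(probins[i], "$libdir/isn") == 0)
            return true;
    return false;
}

/* Block migration only when both hazards line up: the old DB uses
 * contrib/isn AND the new server passes float8 (and hence bigint)
 * by value. */
static bool
must_refuse_migration(bool old_has_isn, bool new_float8_pass_by_value)
{
    return old_has_isn && new_float8_pass_by_value;
}
```

Note the asymmetry Tom points out: on a 32-bit build (float8 passed by reference) the isn check passes and migration proceeds.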
Re: [HACKERS] Patch for automating partitions in PostgreSQL 8.4 Beta 2
make -C catalog all
( echo src/backend/catalog/catalog.o src/backend/catalog/dependency.o src/backend/catalog/heap.o src/backend/catalog/index.o src/backend/catalog/indexing.o src/backend/catalog/namespace.o src/backend/catalog/aclchk.o src/backend/catalog/pg_aggregate.o src/backend/catalog/pg_constraint.o src/backend/catalog/pg_conversion.o src/backend/catalog/pg_depend.o src/backend/catalog/pg_enum.o src/backend/catalog/pg_inherits.o src/backend/catalog/pg_largeobject.o src/backend/catalog/pg_namespace.o src/backend/catalog/pg_operator.o src/backend/catalog/pg_proc.o src/backend/catalog/pg_shdepend.o src/backend/catalog/pg_type.o src/backend/catalog/storage.o src/backend/catalog/toasting.o ) >objfiles.txt
make[3]: *** No rule to make target `../../../src/include/catalog/pg_partition.h', needed by `postgres.bki'.  Stop.
make[2]: *** [catalog-recursive] Error 2

(that's on Mac OS X).
Re: [HACKERS] pg_migrator issue with contrib
Robert Haas wrote:
> > Now, we don't want to over-promise, but at the same time we shouldn't
> > downplay the tool either.  For a sufficiently-experienced administrator,
> > pg_migrator is a useful migration tool, and we need to convey that.
> > Even if you have to hire a consultant to manage the migration, if it
> > saves days of downtime, it is worth it.  Consultants don't often use
> > experimental tools, but they do use complex, powerful tools that are
> > often rough around the edges in terms of usability, e.g. read the
> > INSTALL file carefully.
>
> Fair enough.  I'm game to use a different word.  I spent approximately
> 30 seconds coming up with that suggestion.  :-)

I think the text I already posted is appropriate:

	pg_migrator is designed for experienced users with large databases,
	for whom the typical dump/restore required for major version
	upgrades is a hardship.

--
Bruce Momjian  http://momjian.us
EnterpriseDB   http://enterprisedb.com
Re: [HACKERS] pg_migrator issue with contrib
Robert Haas wrote:
> On Mon, Jun 8, 2009 at 1:06 PM, Bruce Momjian wrote:
> > Tom Lane wrote:
> >> Bruce Momjian writes:
> >> > At a minimum it would be great if items could mark themselves as
> >> > non-binary-upgradable.
> >>
> >> It's hardly difficult to make that happen --- just change the C name of
> >> some function, or the name of the whole .so file.
> >
> > Yes, but it needs to happen.  ;-)  PostGIS has done this, which is good.
> > The problem is that if they don't do it it is out of the control of
> > pg_migrator.
>
> I think it might be possible to implement a system that can't be
> broken by accident.  Firefox (at least AIUI) requires plugin authors
> to explicitly flag their modules as compatible with new versions of
> Firefox.  When you upgrade your firefox installation in place (heh,
> heh) it goes off to the web site and checks whether all of your
> extensions have been so flagged.  Any that have not been get disabled
> automatically.

Interesting that it allows the flagging to happen in real time, rather than requiring the system to know at install time whether it is compatible with future versions (almost an impossibility). I am afraid we would need some kind of real-time check, or at least have major versions flag which external stuff cannot be upgraded, which we have discussed here already.

--
Bruce Momjian  http://momjian.us
EnterpriseDB   http://enterprisedb.com
Re: [HACKERS] pg_migrator issue with contrib
On Mon, Jun 8, 2009 at 1:32 PM, Tom Lane wrote:
> Robert Haas writes:
>> Obviously we don't want to get into connecting to a web site, but we
>> could probably come up with some other API for .so files to indicate
>> which versions of PG they're compatible with.
>
> You mean like PG_MODULE_MAGIC?

Hey, how about that. Why doesn't that solve our problem here? [ thinks ... ] I guess it's because there's no guarantee that the function is installed on the SQL level with the signature that is appropriate on the C level. To fix that, I suppose we'd need to version the contents of the .sql file that installs the definitions, which gets back to the problem of building a general-purpose module facility.

...Robert
Re: [HACKERS] postmaster recovery and automatic restart suppression
On Mon, 2009-06-08 at 09:47 -0400, Tom Lane wrote: > I think the proposed don't-restart flag is exceedingly ugly and will not > solve any real-world problem. Agreed. -- Simon Riggs www.2ndQuadrant.com PostgreSQL Training, Services and Support
Re: [HACKERS] pg_migrator issue with contrib
Robert Haas writes: > Obviously we don't want to get into connecting to a web site, but we > could probably come up with some other API for .so files to indicate > which versions of PG they're compatible with. You mean like PG_MODULE_MAGIC? regards, tom lane
Re: [HACKERS] Patch for automating partitions in PostgreSQL 8.4 Beta 2
Kedar, Added to first CommitFest of 8.5. Thanks for the nice test case. Folks who are not busy with 8.4 are urged to test this as soon as you can. -- Josh Berkus PostgreSQL Experts Inc. www.pgexperts.com
Re: [HACKERS] pg_migrator issue with contrib
On Mon, Jun 8, 2009 at 1:20 PM, Bruce Momjian wrote:
> Bruce Momjian wrote:
>> > Oh, to me "experimental" does not imply that usefulness is uncertain;
>> > rather, it implies that usefulness has been established but that the
>> > code is new (item #4 above) and may not be 100% feature-complete
>> > (items #1 and #3 above).
>> >
>> > > I think we can say: "pg_migrator is designed for experienced users with
>> > > large databases, for whom the typical dump/restore required for major
>> > > version upgrades is a hardship".
>> >
>> > Precisely.  In other words, if you are an INEXPERIENCED user (that is
>> > to say, most of them) or you don't have a particularly large database,
>> > dump + reload is probably the safest option.  We're not discouraging
>> > you from using pg_migrator, but please be careful and observe that it is
>> > new and has some limitations.
>>
>> Agreed.  There is no reason for most users to need pg_migrator; it is
>> not worth the risk for them, however small.  There are some people who
>> really need it, and hopefully they are experienced users, while there is
>> a larger group who want to know such an option _exists_, so if they ever
>> need it, it is available.
>
> I think this "larger group" is where my problem with the word
> "experimental" comes in.  I think pg_migrator is far enough along that we
> know it works, and that it will probably work for future major releases.
> By calling it "experimental" we are not conveying confidence in the tool
> for people who are making deployment decisions based on the existence of
> such a tool, even if they aren't going to use it initially.  And by not
> conveying confidence, we will lose the adoption advantage we can get
> from pg_migrator.
>
> Now, we don't want to over-promise, but at the same time we shouldn't
> downplay the tool either.  For a sufficiently-experienced administrator,
> pg_migrator is a useful migration tool, and we need to convey that.
> Even if you have to hire a consultant to manage the migration, if it saves days of downtime, it is worth it. Consultants don't often use experimental tools, but they do use complex, powerful tools that are often rough around the edges in terms of usability, e.g. read the INSTALL file carefully.

Fair enough. I'm game to use a different word. I spent approximately 30 seconds coming up with that suggestion. :-)

> For long-term strategy, let me list the challenges for pg_migrator from any major upgrade (listed in the DEVELOPERS file):
>
>   Change                               Conversion Method
>   -----------------------------------  --------------------------------------
>   clog                                 none
>   heap page header, including bitmask  convert to new page format on read
>   tuple header, including bitmask      convert to new page format on read
>   data value format                    create old data type in new cluster
>   index page format                    reindex, or recreate old index methods
>
> These are the issues we will have to address for 8.5 and beyond if pg_migrator is to remain useful.

No arguments here, sounds like interesting stuff.

...Robert

-- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_migrator issue with contrib
On Mon, Jun 8, 2009 at 1:06 PM, Bruce Momjian wrote:
> Tom Lane wrote:
>> Bruce Momjian writes:
>> > At a minimum it would be great if items could mark themselves as non-binary-upgradable.
>>
>> It's hardly difficult to make that happen --- just change the C name of some function, or the name of the whole .so file.
>
> Yes, but it needs to happen. ;-) PostGIS has done this, which is good. The problem is that if they don't do it, it is out of the control of pg_migrator.

I think it might be possible to implement a system that can't be broken by accident. Firefox (at least AIUI) requires plugin authors to explicitly flag their modules as compatible with new versions of Firefox. When you upgrade your Firefox installation in place (heh, heh) it goes off to the web site and checks whether all of your extensions have been so flagged. Any that have not been flagged get disabled automatically. Obviously we don't want to get into connecting to a web site, but we could probably come up with some other API for .so files to indicate which versions of PG they're compatible with. If they don't implement that API, we assume they predate its introduction and are not upgradeable. I'm fuzzy on the details here, but the point is that if you implement an opt-in system rather than an opt-out system, then people have to deliberately circumvent it to break things, rather than just needing to be lazy.

...Robert

-- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_migrator issue with contrib
Bruce Momjian wrote:
> > Oh, to me "experimental" does not imply that usefulness is uncertain; rather, it implies that usefulness has been established but that the code is new (item #4 above) and may not be 100% feature-complete (items #1 and #3 above).
> >
> > > I think we can say: "pg_migrator is designed for experienced users with large databases, for whom the typical dump/restore required for major version upgrades is a hardship".
> >
> > Precisely. In other words, if you are an INEXPERIENCED user (that is to say, most of them) or you don't have a particularly large database, dump + reload is probably the safest option. We're not discouraging you from using pg_migrator, but please be careful and observe that it is new and has some limitations.
>
> Agreed. There is no reason for most users to need pg_migrator; it is not worth the risk for them, however small. There are some people who really need it, and hopefully they are experienced users, while there is a larger group who want to know such an option _exists_, so if they ever need it, it is available.

I think this "larger group" is where my problem with the word "experimental" comes in. I think pg_migrator is far enough along that we know it works, and that it will probably work for future major releases. By calling it "experimental" we are not conveying confidence in the tool for people who are making deployment decisions based on the existence of such a tool, even if they aren't going to use it initially. And by not conveying confidence, we will lose the adoption advantage we can get from pg_migrator.

Now, we don't want to over-promise, but at the same time we shouldn't downplay the tool either. For a sufficiently-experienced administrator, pg_migrator is a useful migration tool, and we need to convey that. Even if you have to hire a consultant to manage the migration, if it saves days of downtime, it is worth it.
Consultants don't often use experimental tools, but they do use complex, powerful tools that are often rough around the edges in terms of usability, e.g. read the INSTALL file carefully.

For long-term strategy, let me list the challenges for pg_migrator from any major upgrade (listed in the DEVELOPERS file):

  Change                               Conversion Method
  -----------------------------------  --------------------------------------
  clog                                 none
  heap page header, including bitmask  convert to new page format on read
  tuple header, including bitmask      convert to new page format on read
  data value format                    create old data type in new cluster
  index page format                    reindex, or recreate old index methods

These are the issues we will have to address for 8.5 and beyond if pg_migrator is to remain useful.

-- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Partial vacuum versus pg_class.reltuples
On Mon, Jun 8, 2009 at 10:40 AM, Alvaro Herrera wrote: > Tom Lane escribió: >> Alvaro Herrera writes: >> > Robert Haas escribió: >> >> Maybe we should just have a GUC to enable/disable >> >> partial vacuums. >> >> > IIRC you can set vacuum_freeze_table_age to 0. >> >> That has the same effects as issuing VACUUM FREEZE, no? > > As far as I can make from the docs, I think it only forces a full table > scan, but the freeze age remains the same. Yeah, that looks like what I was looking for, thanks. I looked for it in the docs under the vacuum-related sections, but couldn't find it, and I didn't know the name of it so I couldn't find it that way either. I wonder if we should consider moving all of the vacuum and autovacuum parameters into one section. http://developer.postgresql.org/pgdocs/postgres/runtime-config-client.html#GUC-VACUUM-FREEZE-TABLE-AGE In the worst case scenario where the new partial-table-vacuums are causing headaches for someone, they should hopefully be able to use this parameter to basically turn them off without too many nasty side effects. (Another nice thing about this parameter is that if you have a really big table that is append-mostly, you can potentially tune this parameter downward to spread out the freezing activity. With the default settings, you might cruise along merrily until you hit 150M transactions and then generate an I/O storm as you freeze a pretty big chunk of the table all at once. With a lower setting, non-partial vacuums will be more frequent, but each one will generate a smaller amount of write traffic.) ...Robert -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
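[Editor's note: the two tunings Robert describes above can be illustrated as a postgresql.conf fragment. The values are examples only, not recommendations; 150 million is the default vacuum_freeze_table_age mentioned in the message.]

```ini
# postgresql.conf fragment (illustrative values)

# Force every vacuum to scan the whole table, effectively disabling
# partial (visibility-map-driven) vacuums -- equivalent in effect to
# what VACUUM FREEZE-style full scans give you for this purpose:
#vacuum_freeze_table_age = 0

# Or, for a large append-mostly table: lower the threshold from the
# default of 150 million transactions so full-table freezing vacuums
# happen more often, but each one writes a smaller chunk of the table:
vacuum_freeze_table_age = 50000000
```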
Re: [HACKERS] pg_migrator issue with contrib
Tom Lane wrote: > Bruce Momjian writes: > > At a minimum it would be great if items could mark themselves as > > non-binary-upgradable. > > It's hardly difficult to make that happen --- just change the C name of > some function, or the name of the whole .so file. Yes, but it needs to happen. ;-) PostGIS has done this, which is good. The problem is that if they don't do it it is out of the control of pg_migrator. -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_migrator issue with contrib
Robert Haas wrote:
> > Let me list the problems with pg_migrator:
> >
> >   o  /contrib and plugin migration (not unique to pg_migrator)
> >   o  you must read/follow the install instructions
> >   o  might require post-migration table/index rebuilds
> >   o  new, so serious bugs might exist
>
> I pretty much agree with this list. With respect to #2, I don't think that it's asking a lot for people to read/follow the install instructions, so I don't consider that a serious problem.

My point was that I think someday pg_migrator will be point-and-click, but it is not now.

> Oh, to me "experimental" does not imply that usefulness is uncertain; rather, it implies that usefulness has been established but that the code is new (item #4 above) and may not be 100% feature-complete (items #1 and #3 above).
>
> > I think we can say: "pg_migrator is designed for experienced users with large databases, for whom the typical dump/restore required for major version upgrades is a hardship".
>
> Precisely. In other words, if you are an INEXPERIENCED user (that is to say, most of them) or you don't have a particularly large database, dump + reload is probably the safest option. We're not discouraging you from using pg_migrator, but please be careful and observe that it is new and has some limitations.

Agreed. There is no reason for most users to need pg_migrator; it is not worth the risk for them, however small. There are some people who really need it, and hopefully they are experienced users, while there is a larger group who want to know such an option _exists_, so if they ever need it, it is available.

> > I assume this will be the same adoption pattern we had with the Win32 port, where it was a new platform in 8.0 and we dealt with some issues as it was deployed, and that people who want it will find it and hopefully it will be useful for them.
>
> Completely agree.
> And like the Windows port, hopefully after a release or two, we'll figure out what we can improve and do so. I am interested in this problem, but all of my free time lately has been going into the EXPLAIN patch I'm working on, so I haven't had time to dig into it much. The problems of being a hobbyist...

One difference in risk is that the Windows port usually had _new_ data, meaning you were not risking as much as using pg_migrator on an established database installation.

-- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_migrator issue with contrib
Bruce Momjian writes: > At a minimum it would be great if items could mark themselves as > non-binary-upgradable. It's hardly difficult to make that happen --- just change the C name of some function, or the name of the whole .so file. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_migrator issue with contrib
Tom Lane wrote: > Bruce Momjian writes: > > Tom Lane wrote: > >> There is a different problem though: sometimes the recommended fix for > >> contrib-module incompatibilities is to load the new contrib module into > >> the target database before trying to load your old dump file. (We told > >> people to do that for 8.2->8.3 tsearch2, for example.) In the > >> pg_migrator context there is no way to insert the new contrib module > >> first, and also no way to ignore the duplicate-object errors that you > >> typically get while loading the old dump. > > > Ah, OK, interesting. We could have pg_migrator detect this issue and > > fail right away with a message indicating pg_migrator cannot be used > > unless those objects are dumped manually and the /contrib removed. > > How would it detect it? I think the only thing you could do is > hard-wire tests for specific objects, which is klugy and doesn't > extend to third-party modules that you don't know about. Yep, that is the only way. The good news is that we are not modifying any data so it is just a detection issue, i.e. it would not destabilize pg_migrator's operation. > In most cases where there are major incompatibilities, attempting to > load the old dump file would fail anyway, so I don't think pg_migrator > really needs any hard-wired test. It's the minor changes that are > risky. I think ultimately the burden for those has to be on the module > author: he has to either avoid cross-version incompatibilities or make > sure they will fail safely. Yep. -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_migrator issue with contrib
On Mon, Jun 8, 2009 at 11:08 AM, Bruce Momjian wrote:
> Robert Haas wrote:
>> > Right now nothing in the project is referring to pg_migrator except in the press release, and it is marked as beta there. How do you want to deemphasize it more than that? Why did I bother working on this if the community reaction is to try to figure out how to make people avoid using it?
>>
>> Because Rome wasn't built in a day.
>>
>> It seems to me that you yourself placed a far more disparaging label on it than anything that anyone has proposed today; this was a week ago:
>>
>> http://archives.postgresql.org/pgsql-hackers/2009-05/msg01470.php
>>
>> I don't think it's anyone's intention to disparage your work on this tool. It certainly isn't mine. But it seems obvious to me that it has some pretty severe limitations and warts. The fact that those limitations and warts are well-documented doesn't negate their existence. I also don't think calling the software "beta" or "experimental" is a way of deemphasizing it. I think it's a way of being clear that this software is not at the bullet-proof, rock-solid, handles-all-cases-and-keeps-on-trucking level of robustness that people have come to expect from PostgreSQL.
>>
>> FWIW, I have no problem at all with mentioning pg_migrator in the release notes or the documentation; my failure to respond to your last emails on this topic was due to being busy and having already spent too much time responding to other emails, not due to thinking it was a bad idea. I actually think it's a good idea. But I also think those references should describe it as experimental, because I think it is. I really hope it won't remain experimental forever, but I think that's an accurate characterization of where it is now.
>
> pg_migrator should be looked at critically here, and I agree we should avoid letting pg_migrator failures reflect badly on Postgres.
>
> Let me list the problems with pg_migrator:
>
>   o  /contrib and plugin migration (not unique to pg_migrator)
>   o  you must read/follow the install instructions
>   o  might require post-migration table/index rebuilds
>   o  new, so serious bugs might exist

I pretty much agree with this list. With respect to #2, I don't think that it's asking a lot for people to read/follow the install instructions, so I don't consider that a serious problem.

> and let me list its benefits:
>
>   o  first in-place upgrade capability in years
>   o  tested by some users, all successful (since late alpha)
>   o  removes major impediment to adoption
>   o  includes extensive error checking and reporting
>   o  contains detailed installation/usage instructions
>
> So let's look at pg_migrator as an opportunity and a risk. As far as I know, only Hiroshi Saito and I have looked at the code. Why don't others read the pg_migrator source code looking for bugs? Why have more people not tested it?

No reason at all - I very much hope that happens.

> I think "experimental" is the wrong label. Experimental assumes its usefulness is uncertain and that it is still being researched --- neither is true. Once I release pg_migrator 8.4 final at the end of this week (assuming no bugs are reported), I consider it done, or at least advanced as far as I can go until I get more feedback from users.

Oh, to me "experimental" does not imply that usefulness is uncertain; rather, it implies that usefulness has been established but that the code is new (item #4 above) and may not be 100% feature-complete (items #1 and #3 above).

> I think we can say: "pg_migrator is designed for experienced users with large databases, for whom the typical dump/restore required for major version upgrades is a hardship".

Precisely. In other words, if you are an INEXPERIENCED user (that is to say, most of them) or you don't have a particularly large database, dump + reload is probably the safest option.
We're not discouraging you from using pg_migrator, but please be careful and observe that it is new and has some limitations.

> I assume this will be the same adoption pattern we had with the Win32 port, where it was a new platform in 8.0 and we dealt with some issues as it was deployed, and that people who want it will find it and hopefully it will be useful for them.

Completely agree. And like the Windows port, hopefully after a release or two, we'll figure out what we can improve and do so. I am interested in this problem, but all of my free time lately has been going into the EXPLAIN patch I'm working on, so I haven't had time to dig into it much. The problems of being a hobbyist...

...Robert

-- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_migrator issue with contrib
Bruce Momjian writes: > Tom Lane wrote: >> There is a different problem though: sometimes the recommended fix for >> contrib-module incompatibilities is to load the new contrib module into >> the target database before trying to load your old dump file. (We told >> people to do that for 8.2->8.3 tsearch2, for example.) In the >> pg_migrator context there is no way to insert the new contrib module >> first, and also no way to ignore the duplicate-object errors that you >> typically get while loading the old dump. > Ah, OK, interesting. We could have pg_migrator detect this issue and > fail right away with a message indicating pg_migrator cannot be used > unless those objects are dumped manually and the /contrib removed. How would it detect it? I think the only thing you could do is hard-wire tests for specific objects, which is klugy and doesn't extend to third-party modules that you don't know about. In most cases where there are major incompatibilities, attempting to load the old dump file would fail anyway, so I don't think pg_migrator really needs any hard-wired test. It's the minor changes that are risky. I think ultimately the burden for those has to be on the module author: he has to either avoid cross-version incompatibilities or make sure they will fail safely. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_migrator issue with contrib
Dimitri Fontaine wrote: > Tom Lane writes: > > Exactly. And note that this is not pg_migrator's fault: a pg_dump > > dump and reload of the database exposes the user to the same risks, > > if the module author has not been careful about compatibility. > > It seems to me the dump will contain text string representation of data > and pg_restore will run the input function of the type on this, so that > maintaining backward compatibility of the data type doesn't sound > hard. As far as the specific index support goes, pg_restore creates the > index from scratch. > > So, from my point of view, supporting backward compatibility by means of > dump and restore is the easy way. Introducing pg_migrator in the game, > the data type and index internals upgrade are now faced to the same > problem as the -core in-place upgrade project. > > Maybe we'll be able to provide the extension authors (not only contrib) > a specialized API to trigger in case of in-place upgrade of PG version > or the extension itself, ala Erlang code upgrade facility e.g.: > http://erlang.org/doc/reference_manual/code_loading.html#12.3 > > This part of the extension design will need help from C dynamic module > experts around, because it's terra incognita as far as I'm concerned. At a minimum it would be great if items could mark themselves as non-binary-upgradable. Perhaps the existence of a symbol in the *.so file could indicate that, or a function call could be made and you pass in the old and new major version numbers and it would return true/false based on binary upgradeability. -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_migrator issue with contrib
Tom Lane wrote:
> Bruce Momjian writes:
> > Tom Lane wrote:
> >> I think that #1 and #4 could be substantially alleviated if the instructions recommended doing a trial run with a schema-only dump of the database. That is,
> >>
> >> * pg_dumpall -s
> >> * load that into a test installation (of the *old* PG version)
> >> * migrate the test installation to new PG version
> >> * do the same sorts of applications compatibility checks you'd want to do anyway before a major version upgrade
>
> > But you have no data in the database --- can any meaningful testing be done?
>
> Well, you'd have to insert some. But this is something you'd have to do *anyway*, unless you are willing to just pray that your apps don't need any changes. The only new thing I'm suggesting here is incorporating use of pg_migrator into your normal compatibility testing.

Ah, I see. Interesting. I have added the second sentence to the pg_migrator README:

    Installation

    See the INSTALL file for detailed installation instructions.
    For deployment testing, create a schema-only copy of the old
    cluster, insert dummy data, and migrate that.

-- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_migrator issue with contrib
Tom Lane wrote: > Bruce Momjian writes: > > Dimitri Fontaine wrote: > >> So the case where pg_migrator still fails is when the .sql file of the > >> module has changed in a way that restoring what pg_dump gives no longer > >> match what the .so exposes, or if the new .so is non backward > >> compatible? > > > Yes, that is a problem. It is not a pg_migrator-specific problem > > because people traditionally bring the /contrib schema over from the old > > install (I think). The only pg_migrator-specific failure is when the > > data format changed and dump/restore would fix it, but pg_migrator would > > migrate corrupt data. :-( > > There is a different problem though: sometimes the recommended fix for > contrib-module incompatibilities is to load the new contrib module into > the target database before trying to load your old dump file. (We told > people to do that for 8.2->8.3 tsearch2, for example.) In the > pg_migrator context there is no way to insert the new contrib module > first, and also no way to ignore the duplicate-object errors that you > typically get while loading the old dump. > > It would probably be a relatively simple feature addition to teach > pg_migrator to load such-and-such modules into the new databases before > loading the old dump. But I'm still scared to death by the idea of > letting it ignore errors, so there doesn't seem to be any good solution > to this type of migration scenario. Ah, OK, interesting. We could have pg_migrator detect this issue and fail right away with a message indicating pg_migrator cannot be used unless those objects are dumped manually and the /contrib removed. As long as pg_migrator is clear, I don't think people will complain. -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_migrator issue with contrib
I wrote: > * pg_buffercache has changed the view pg_buffercache, which is > definitely going to be a migration issue. Need to test whether > it represents a crash risk if the old definition is migrated. I checked this, and there is not a crash risk: the function successfully creates its result tuplestore, and then the main executor notices it's not compatible with the old view. So you get regression=# select * from pg_buffercache; ERROR: function return row and query-specified return row do not match DETAIL: Returned row contains 8 attributes, but query expects 7. You can fix it by dropping and recreating the view (eg, run the module's uninstall and then install scripts). I suppose that might be a bit annoying if you've built additional views atop this one, but overall it doesn't sound too bad. So I don't plan to do anything about this module. Still working on the others. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_migrator issue with contrib
Tom Lane writes:
> Exactly. And note that this is not pg_migrator's fault: a pg_dump dump and reload of the database exposes the user to the same risks, if the module author has not been careful about compatibility.

It seems to me the dump will contain the text string representation of the data, and pg_restore will run the input function of the type on it, so maintaining backward compatibility of the data type doesn't sound hard. As far as the specific index support goes, pg_restore creates the index from scratch.

So, from my point of view, supporting backward compatibility by means of dump and restore is the easy way. Introducing pg_migrator into the game, the data type and index internals upgrade now face the same problem as the -core in-place upgrade project.

Maybe we'll be able to provide the extension authors (not only contrib) a specialized API to trigger in case of an in-place upgrade of the PG version or the extension itself, à la the Erlang code upgrade facility, e.g.: http://erlang.org/doc/reference_manual/code_loading.html#12.3

This part of the extension design will need help from the C dynamic module experts around, because it's terra incognita as far as I'm concerned.

Regards, -- dim

-- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_migrator issue with contrib
Bruce Momjian writes: > Tom Lane wrote: >> I think that #1 and #4 could be substantially alleviated if the >> instructions recommended doing a trial run with a schema-only dump >> of the database. That is, >> >> * pg_dumpall -s >> * load that into a test installation (of the *old* PG version) >> * migrate the test installation to new PG version >> * do the same sorts of applications compatibility checks you'd want to >> do anyway before a major version upgrade > But you have no data in the database --- can any meaningful testing be > done? Well, you'd have to insert some. But this is something you'd have to do *anyway*, unless you are willing to just pray that your apps don't need any changes. The only new thing I'm suggesting here is incorporating use of pg_migrator into your normal compatibility testing. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
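[Editor's sketch: Tom's four trial-run steps written down as a script skeleton. The port, directories, and pg_migrator options shown are placeholders, not documented defaults; RUN=echo makes this a dry run that only prints each command, so nothing is executed until you set RUN to empty.]

```shell
# Dry-run sketch of the trial-migration procedure described above.
RUN="echo"   # set RUN="" to actually execute the steps

# 1. schema-only dump of the production cluster
$RUN pg_dumpall -s -f schema_only.sql

# 2. load it into a scratch installation of the OLD PG version
#    (port 5433 is an assumed scratch-server port)
$RUN psql -p 5433 -f schema_only.sql postgres

# 3. insert some dummy data, then migrate the scratch installation
#    (pg_migrator flags/paths here are illustrative)
$RUN pg_migrator -d /scratch/old_data -D /scratch/new_data \
     -b /usr/local/pgsql-old/bin -B /usr/local/pgsql-new/bin

# 4. run your normal application compatibility checks against the result
$RUN ./run_compat_tests.sh
```

Because step 2 loads a schema-only dump, remember Tom's caveat: you have to insert some test data yourself before the compatibility checks mean anything.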
Re: [HACKERS] pg_migrator issue with contrib
Bruce Momjian writes: > Dimitri Fontaine wrote: >> So the case where pg_migrator still fails is when the .sql file of the >> module has changed in a way that restoring what pg_dump gives no longer >> match what the .so exposes, or if the new .so is non backward >> compatible? > Yes, that is a problem. It is not a pg_migrator-specific problem > because people traditionally bring the /contrib schema over from the old > install (I think). The only pg_migrator-specific failure is when the > data format changed and dump/restore would fix it, but pg_migrator would > migrate corrupt data. :-( There is a different problem though: sometimes the recommended fix for contrib-module incompatibilities is to load the new contrib module into the target database before trying to load your old dump file. (We told people to do that for 8.2->8.3 tsearch2, for example.) In the pg_migrator context there is no way to insert the new contrib module first, and also no way to ignore the duplicate-object errors that you typically get while loading the old dump. It would probably be a relatively simple feature addition to teach pg_migrator to load such-and-such modules into the new databases before loading the old dump. But I'm still scared to death by the idea of letting it ignore errors, so there doesn't seem to be any good solution to this type of migration scenario. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_migrator issue with contrib
Tom Lane wrote: > Dimitri Fontaine writes: > > So the case where pg_migrator still fails is when the .sql file of the > > module has changed in a way that restoring what pg_dump gives no longer > > match what the .so exposes, or if the new .so is non backward > > compatible? > > Exactly. And note that this is not pg_migrator's fault: a pg_dump > dump and reload of the database exposes the user to the same risks, > if the module author has not been careful about compatibility. Agreed. In many ways pg_migrator failures will be caused by failures of the many tools it relies upon, as I mentioned a few days ago. -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_migrator issue with contrib
Tom Lane wrote: > Bruce Momjian writes: > > Tom Lane wrote: > >> There was just some discussion about that on postgis-devel. I think the > >> conclusion was that you would have to do the PostGIS update as a > >> separate step. They intend to support both 1.3.x and 1.4.x on current > >> versions of Postgres for some time, so in principle you could do it in > >> either order. > > > Oh, yea, you can't go from PostGIS version 1.3 to 1.4 _while_ you do the > > pg_migrator upgrade. It has to be done either before or after > > pg_migrator is run. I wonder how I could prevent someone from trying > > that trick. > > You don't need to, because it will fail automatically. They are using > version-numbered .so files, so C-language functions referencing the 1.3 > .so will fail to load if only the 1.4 .so is in the new installation. Nice. Something to be learned there. ;-) -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_migrator issue with contrib
Tom Lane wrote: > Bruce Momjian writes: > > Let me list the problems with pg_migrator: > > > o /contrib and plugin migration (not unique to pg_migrator) > > o you must read/follow the install instructions > > o might require post-migration table/index rebuilds > > o new so serious bugs might exist > > I think that #1 and #4 could be substantially alleviated if the > instructions recommended doing a trial run with a schema-only dump > of the database. That is, > > * pg_dumpall -s > * load that into a test installation (of the *old* PG version) > * migrate the test installation to new PG version > * do the same sorts of applications compatibility checks you'd want to > do anyway before a major version upgrade But you have no data in the database --- can any meaningful testing be done? FYI, pg_migrator will do the schema load pretty early (even before the file copy) and fail on errors. Retrying pg_migrator is pretty easy and is now well documented in the INSTALL file. > This would certainly catch migration-time failures caused by plugins, > and the followup testing would probably catch any large post-migration > issue. > > Somebody who is not willing to do this type of testing should not be > using pg_migrator (yet), and probably has not got a database large > enough to need it anyway. Agreed. -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_migrator issue with contrib
Dimitri Fontaine writes: > So the case where pg_migrator still fails is when the .sql file of the > module has changed in a way that restoring what pg_dump gives no longer > match what the .so exposes, or if the new .so is non backward > compatible? Exactly. And note that this is not pg_migrator's fault: a pg_dump dump and reload of the database exposes the user to the same risks, if the module author has not been careful about compatibility. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_migrator issue with contrib
Bruce Momjian writes: > Tom Lane wrote: >> There was just some discussion about that on postgis-devel. I think the >> conclusion was that you would have to do the PostGIS update as a >> separate step. They intend to support both 1.3.x and 1.4.x on current >> versions of Postgres for some time, so in principle you could do it in >> either order. > Oh, yea, you can't go from PostGIS version 1.3 to 1.4 _while_ you do the > pg_migrator upgrade. It has to be done either before or after > pg_migrator is run. I wonder how I could prevent someone from trying > that trick. You don't need to, because it will fail automatically. They are using version-numbered .so files, so C-language functions referencing the 1.3 .so will fail to load if only the 1.4 .so is in the new installation. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_migrator issue with contrib
Dimitri Fontaine wrote: > Tom Lane writes: > > > Dimitri Fontaine writes: > >> My vote would go to detect and error out without recovering option. If > >> the tool is not able to handle a situation and knows it, I don't see > >> what would be good about it leting the user lose data on purpose. > > > > No, that's not what's being discussed. The proposal is to have it error > > out when it *does not* know whether there is a real problem; and, in > > fact, when there's only a rather low probability of there being a real > > problem. My view is that that's basically counterproductive. It leads > > directly to having to have a --force switch and then to people using > > that switch carelessly. > > True, it could be that the data type representation has not changed > between 8.3 and 8.4, nor the index content format. In this case > pg_migrator will work fine on the cluster as soon as you installed the > new .so... Yes. > So the case where pg_migrator still fails is when the .sql file of the > module has changed in a way that restoring what pg_dump gives no longer > match what the .so exposes, or if the new .so is non backward > compatible? Yes, that is a problem. It is not a pg_migrator-specific problem because people traditionally bring the /contrib schema over from the old install (I think). The only pg_migrator-specific failure is when the data format changed and dump/restore would fix it, but pg_migrator would migrate corrupt data. :-( -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_migrator issue with contrib
Bruce Momjian writes: > Let me list the problems with pg_migrator: > o /contrib and plugin migration (not unique to pg_migrator) > o you must read/follow the install instructions > o might require post-migration table/index rebuilds > o new so serious bugs might exist I think that #1 and #4 could be substantially alleviated if the instructions recommended doing a trial run with a schema-only dump of the database. That is, * pg_dumpall -s * load that into a test installation (of the *old* PG version) * migrate the test installation to new PG version * do the same sorts of applications compatibility checks you'd want to do anyway before a major version upgrade This would certainly catch migration-time failures caused by plugins, and the followup testing would probably catch any large post-migration issue. Somebody who is not willing to do this type of testing should not be using pg_migrator (yet), and probably has not got a database large enough to need it anyway. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
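Tom's trial-run recipe could look something like the following shell session (a sketch only — the ports, paths, and exact pg_ctl/psql invocations are illustrative assumptions, not from the thread):

```shell
# 1. Schema-only dump of the production cluster (running the old major version)
pg_dumpall -s -p 5432 > schema_only.sql

# 2. Load it into a scratch cluster running the *old* PG version
initdb -D /tmp/trial_olddata
pg_ctl -D /tmp/trial_olddata -o "-p 5433" -w start
psql -p 5433 -d postgres -f schema_only.sql

# 3. Run pg_migrator to migrate the scratch cluster to the new version,
#    then run your usual application compatibility checks against it.
```

Since the dump is schema-only, step 2 is cheap even for very large databases, which is the point: the migration-time failures (plugins, contrib modules) show up without copying any data.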
Re: [HACKERS] pg_migrator issue with contrib
Dimitri Fontaine wrote: > >> I don't think that anything in that line is going to be helpful. > >> What it will lead to is people mindlessly using --force (cf our > >> bad experiences with -i for pg_dump). If you can't give a *useful* > >> ie trustworthy warning/error, issuing a useless one is not a good > >> substitute. > > > > Well, in that case, error out would be a better option than doing it and > > probably fail later. And have a --force option available, but don't > > suggest it. > > My vote would go to detect and error out without recovering option. If > the tool is not able to handle a situation and knows it, I don't see > what would be good about it leting the user lose data on purpose. > > The --force option should be for the user to manually drop his columns > and indexes (etc) and try pg_migrator again, which will stop listing > faulty objects but care about the now compatible cluster. > > Restoring the lost data is not the job of pg_migrator, of course. Agreed. Right now pg_migrator never modifies the old cluster except for renaming pg_control (documented) so the old cluster is not accidentally restarted. I don't want to change that behavior. I would filter the dump --schema file, but again, it is best to let the administrator do it, and if they can't, they should just do dump/restore. -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_migrator issue with contrib
Tom Lane writes: > Dimitri Fontaine writes: >> My vote would go to detect and error out without recovering option. If >> the tool is not able to handle a situation and knows it, I don't see >> what would be good about it letting the user lose data on purpose. > > No, that's not what's being discussed. The proposal is to have it error > out when it *does not* know whether there is a real problem; and, in > fact, when there's only a rather low probability of there being a real > problem. My view is that that's basically counterproductive. It leads > directly to having to have a --force switch and then to people using > that switch carelessly. True, it could be that the data type representation has not changed between 8.3 and 8.4, nor the index content format. In this case pg_migrator will work fine on the cluster as soon as you have installed the new .so... So the case where pg_migrator still fails is when the .sql file of the module has changed in a way that restoring what pg_dump gives no longer matches what the .so exposes, or if the new .so is not backward compatible? Ok, maybe there's a way it'll just work. I withdraw my vote. Thanks for your patience, regards, -- dim
Re: [HACKERS] pg_migrator issue with contrib
Magnus Hagander wrote: > Dimitri Fontaine wrote: > > On 6 Jun 09, at 20:45, Josh Berkus wrote: > >> So, here's what we need for 8.3 --> 8.4 for contrib modules: > > > > That does nothing for external modules whose code isn't in PostgreSQL > > control. I'm thinking of those examples I cited up-thread --- and some > > more. (ip4r, temporal, prefix, hstore-new, orafce, etc). > > For me and I know several others, the big question would be PostGIS. > Unfortunately I haven't had the time to run any tests myself, so I'll > join the line of people being worried, but I have a number of customers > with somewhere between pretty large and very large PostGIS databases that > could really benefit from pg_migrator. > > As long as PostGIS is the same version in both of them, is pg_migrator > likely to work? (one can always run the PostGIS upgrade as a separate > step) Yes, it should work with the same version of PostGIS, but I have not tested it. There is nothing special about PostGIS that would cause it not to work --- we use the same pg_dump as you would for a major upgrade --- we just move the files around instead of dumping the data. > > Could pg_migrator detect usage of "objects" oids (data types in > > relation, index opclass, ...) that are unknown to be in the standard > > -core + contrib distribution, and quit trying to upgrade the cluster in > > this case, telling the user his database is not supported? > > +1 on this. > > Or at least, have it exit and say "if you know that these things are > reasonably safe, run pg_migrator again with --force" or something like that. Right now pg_migrator throws an error if the schema load doesn't work. Assuming you use the same version on the old and new clusters, it should work fine. I am unclear what checking oids would do. -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup.
+
Re: [HACKERS] pg_migrator issue with contrib
Magnus Hagander wrote: > >>> Could pg_migrator detect usage of "objects" oids (data types in > >>> relation, index opclass, ...) that are unknown to be in the standard > >>> -core + contrib distribution, and quit trying to upgrade the cluster in > >>> this case, telling the user his database is not supported? > > > >> +1 on this. > > > >> Or at least, have it exit and say "if you know that these things are > >> reasonably safe, run pg_migrator again with --force" or something like > >> that. > > > > I don't think that anything in that line is going to be helpful. > > What it will lead to is people mindlessly using --force (cf our > > bad experiences with -i for pg_dump). If you can't give a *useful* > > ie trustworthy warning/error, issuing a useless one is not a good > > substitute. > > Well, in that case, error out would be a better option than doing it and > probably fail later. And have a --force option available, but don't > suggest it. Uh, what doesn't error out now in pg_migrator? -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_migrator issue with contrib
Tom Lane wrote: > Magnus Hagander writes: > > As long as PostGIS is the same version in both of them, is pg_migrator > > is likely to work? (one can always run the PostGIS upgrade as a separate > > step) > > There was just some discussion about that on postgis-devel. I think the > conclusion was that you would have to do the PostGIS update as a > separate step. They intend to support both 1.3.x and 1.4.x on current > versions of Postgres for some time, so in principle you could do it in > either order. Oh, yea, you can't go from PostGIS version 1.3 to 1.4 _while_ you do the pg_migrator upgrade. It has to be done either before or after pg_migrator is run. I wonder how I could prevent someone from trying that trick. > >> Could pg_migrator detect usage of "objects" oids (data types in > >> relation, index opclass, ...) that are unknown to be in the standard > >> -core + contrib distribution, and quit trying to upgrade the cluster in > >> this case, telling the user his database is not supported? > > > +1 on this. > > > Or at least, have it exit and say "if you know that these things are > > reasonably safe, run pg_migrator again with --force" or something like that. > > I don't think that anything in that line is going to be helpful. > What it will lead to is people mindlessly using --force (cf our > bad experiences with -i for pg_dump). If you can't give a *useful* > ie trustworthy warning/error, issuing a useless one is not a good > substitute. Yep. The install instructions explain how you have to get around this, and if they don't understand it, they shouldn't be using pg_migrator and should just do the traditional dump/restore. It is too tempting to give them a force flag. -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. + -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_migrator issue with contrib
Robert Haas wrote: > > Right now nothing in the project is referring to pg_migrator except in > > the press release, and it is marked as beta there. How do you want to > > deemphasize it more than that? Why did I bother working on this if the > > community reaction is to try to figure out how to make people avoid > > using it? > > Because Rome wasn't built in a day. > > It seems to me that you yourself placed a far more disparaging label > on it than anything that anyone has proposed today; this was a week > ago: > > http://archives.postgresql.org/pgsql-hackers/2009-05/msg01470.php > > I don't think it's anyone's intention to disparage your work on this > tool. It certainly isn't mine. But it seems obvious to me that it > has some pretty severe limitations and warts. The fact that those > limitations and warts are well-documented doesn't negate their > existence. I also don't think calling the software "beta" or > "experimental" is a way of deemphasizing it. I think it's a way of > being clear that this software is not the bullet-proof, rock-solid, > handles-all-cases-and-keeps-on-trucking level of robustness that > people have come to expect from PostgreSQL. > > FWIW, I have no problem at all with mentioning pg_migrator in the > release notes or the documentation; my failure to respond to your last > emails on this topic was due to being busy and having already spent > too much time responding to other emails, not due to thinking it was a > bad idea. I actually think it's a good idea. But I also think those > references should describe it as experimental, because I think it is. > I really hope it won't remain experimental forever, but I think that's > an accurate characterization of where it is now. pg_migrator should be looked at critically here, and I agree we should avoid letting pg_migrator failures reflect badly on Postgres.
Let me list the problems with pg_migrator:

 o /contrib and plugin migration (not unique to pg_migrator)
 o you must read/follow the install instructions
 o might require post-migration table/index rebuilds
 o new, so serious bugs might exist

and let me list its benefits:

 o first in-place upgrade capability in years
 o tested by some users, all successful (since late alpha)
 o removes major impediment to adoption
 o includes extensive error checking and reporting
 o contains detailed installation/usage instructions

So let's look at pg_migrator as an opportunity and a risk. As far as I know, only Hiroshi Saito and I have looked at the code. Why don't others read the pg_migrator source code looking for bugs? Why have more people not tested it? I think "experimental" is the wrong label. Experimental assumes its usefulness is uncertain and that it is still being researched --- neither is true. Once I release pg_migrator 8.4 final at the end of this week (assuming no bugs are reported), I consider it done, or at least advanced as far as I can take it until I get more feedback from users. I think we can say: "pg_migrator is designed for experienced users with large databases, for whom the typical dump/restore required for major version upgrades is a hardship". I assume this will be the same adoption pattern we had with the Win32 port, where it was a new platform in 8.0 and we dealt with some issues as it was deployed, and that people who want it will find it and hopefully it will be useful for them. -- Bruce Momjian http://momjian.us EnterpriseDB http://enterprisedb.com + If your life is a hard drive, Christ can be your backup. +
Re: [HACKERS] Partial vacuum versus pg_class.reltuples
Tom Lane wrote: > Alvaro Herrera writes: > > Robert Haas wrote: > >> Maybe we should just have a GUC to enable/disable > >> partial vacuums. > > > IIRC you can set vacuum_freeze_table_age to 0. > > That has the same effects as issuing VACUUM FREEZE, no? As far as I can make out from the docs, I think it only forces a full table scan, but the freeze age remains the same. -- Alvaro Herrera    http://www.CommandPrompt.com/ PostgreSQL Replication, Consulting, Custom Development, 24x7 support
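For reference, the two GUCs involved behave differently (a sketch; `mytable` is a hypothetical table, and per the 8.4 docs `VACUUM FREEZE` is equivalent to running `VACUUM` with `vacuum_freeze_min_age = 0`):

```sql
-- Forces VACUUM to scan the whole table (ignoring the visibility map),
-- but tuples are still frozen only once they pass vacuum_freeze_min_age:
SET vacuum_freeze_table_age = 0;
VACUUM mytable;

-- Aggressively freezes all tuples, i.e. what VACUUM FREEZE does:
SET vacuum_freeze_min_age = 0;
VACUUM mytable;
```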
Re: [HACKERS] pg_migrator issue with contrib
Dimitri Fontaine writes: > My vote would go to detect and error out without recovering option. If > the tool is not able to handle a situation and knows it, I don't see > what would be good about it leting the user lose data on purpose. No, that's not what's being discussed. The proposal is to have it error out when it *does not* know whether there is a real problem; and, in fact, when there's only a rather low probability of there being a real problem. My view is that that's basically counterproductive. It leads directly to having to have a --force switch and then to people using that switch carelessly. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_migrator issue with contrib
Magnus Hagander writes: > Tom Lane wrote: >> Magnus Hagander writes: Could pg_migrator detect usage of "objects" oids (data types in relation, index opclass, ...) that are unknown to be in the standard -core + contrib distribution, and quit trying to upgrade the cluster in this case, telling the user his database is not supported? >> >>> +1 on this. >> >>> Or at least, have it exit and say "if you know that these things are >>> reasonably safe, run pg_migrator again with --force" or something like that. >> >> I don't think that anything in that line is going to be helpful. >> What it will lead to is people mindlessly using --force (cf our >> bad experiences with -i for pg_dump). If you can't give a *useful* >> ie trustworthy warning/error, issuing a useless one is not a good >> substitute. > > Well, in that case, error out would be a better option than doing it and > probably fail later. And have a --force option available, but don't > suggest it. My vote would go to detect and error out without a recovery option. If the tool is not able to handle a situation and knows it, I don't see what would be good about it letting the user lose data on purpose. The --force option should be for the user to manually drop his columns and indexes (etc) and try pg_migrator again, which will stop listing faulty objects but care about the now-compatible cluster. Restoring the lost data is not the job of pg_migrator, of course. Regards, -- dim
Re: [HACKERS] Partial vacuum versus pg_class.reltuples
Alvaro Herrera writes: > Robert Haas escribió: >> Maybe we should just have a GUC to enable/disable >> partial vacuums. > IIRC you can set vacuum_freeze_table_age to 0. That has the same effects as issuing VACUUM FREEZE, no? regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] Partial vacuum versus pg_class.reltuples
Robert Haas wrote: > Basically, I'm trying to figure out what we're going to recommend to > someone who gets bitten by whatever remaining corner case still exists > after your recent patch, and I admit I'm not real clear on what that > is. VACUUM FULL doesn't seem like a good solution because it's more > than just "vacuum but don't skip any pages even if the visibility map > says you can". Maybe we should just have a GUC to enable/disable > partial vacuums. IIRC you can set vacuum_freeze_table_age to 0. -- Alvaro Herrera    http://www.CommandPrompt.com/ The PostgreSQL Company - Command Prompt, Inc.
Re: [HACKERS] PostgreSQL Developer meeting minutes up
Hi, Quoting "Nicolas Barbier": ISTM that back-patching a change to a file that wasn't modified on the back-branch leads exactly to merging a change to a (file-wise) ancestor? I take this to mean "back-patching by cherry picking". Regarding the file's contents - and therefore the immediately visible result - that's correct. However, for a merge, the two ancestor revisions are stored, whereas with cherry-picking this information is lost (at least for git). So, trying to merge on top of a cherry-pick, git must merge these changes again (which might or might not work). Merging on top of merging works just fine. Regards Markus Wanner
Re: [HACKERS] pg_migrator issue with contrib
Tom Lane wrote: > Magnus Hagander writes: >> As long as PostGIS is the same version in both of them, is pg_migrator >> likely to work? (one can always run the PostGIS upgrade as a separate >> step) > > There was just some discussion about that on postgis-devel. I think the > conclusion was that you would have to do the PostGIS update as a > separate step. They intend to support both 1.3.x and 1.4.x on current > versions of Postgres for some time, so in principle you could do it in > either order. Doing them as two steps is totally fine with me, because IIRC the PostGIS upgrades generally don't require hours and hours of downtime. >>> Could pg_migrator detect usage of "objects" oids (data types in >>> relation, index opclass, ...) that are unknown to be in the standard >>> -core + contrib distribution, and quit trying to upgrade the cluster in >>> this case, telling the user his database is not supported? > >> +1 on this. > >> Or at least, have it exit and say "if you know that these things are >> reasonably safe, run pg_migrator again with --force" or something like that. > > I don't think that anything in that line is going to be helpful. > What it will lead to is people mindlessly using --force (cf our > bad experiences with -i for pg_dump). If you can't give a *useful* > ie trustworthy warning/error, issuing a useless one is not a good > substitute. Well, in that case, error out would be a better option than doing it and probably fail later. And have a --force option available, but don't suggest it. //Magnus
Re: [HACKERS] pg_migrator issue with contrib
Magnus Hagander writes: > As long as PostGIS is the same version in both of them, is pg_migrator > is likely to work? (one can always run the PostGIS upgrade as a separate > step) There was just some discussion about that on postgis-devel. I think the conclusion was that you would have to do the PostGIS update as a separate step. They intend to support both 1.3.x and 1.4.x on current versions of Postgres for some time, so in principle you could do it in either order. >> Could pg_migrator detect usage of "objects" oids (data types in >> relation, index opclass, ...) that are unknown to be in the standard >> -core + contrib distribution, and quit trying to upgrade the cluster in >> this case, telling the user his database is not supported? > +1 on this. > Or at least, have it exit and say "if you know that these things are > reasonably safe, run pg_migrator again with --force" or something like that. I don't think that anything in that line is going to be helpful. What it will lead to is people mindlessly using --force (cf our bad experiences with -i for pg_dump). If you can't give a *useful* ie trustworthy warning/error, issuing a useless one is not a good substitute. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] postmaster recovery and automatic restart suppression
Gregory Stark writes: > I think the accepted way to handle this kind of situation is called STONITH -- > "Shoot The Other Node In The Head". Yeah, and the reason people go to the trouble of having special hardware for that is that pure-software solutions are unreliable. I think the proposed don't-restart flag is exceedingly ugly and will not solve any real-world problem. regards, tom lane -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
Re: [HACKERS] pg_migrator issue with contrib
On Sun, Jun 7, 2009 at 12:36 PM, Tom Lane wrote: > I wrote: >> * pageinspect has changed the ABI of get_raw_page() in a way that will >> likely make it dump core if the function definition is migrated from >> an old DB. This needs to be fixed. >> [ and similarly for some other contrib modules ] > > After thinking about this some more, I think that there is a fairly > simple coding rule we can adopt to prevent post-migration crashes > of the sort I'm worrying about above. That is: > > * If you change the ABI of a C-language function, change its C name. > > This will ensure that if someone tries to migrate the old function > definition from an old database, they will get a pg_migrator failure, > or at worst a clean runtime failure when they attempt to use the old > definition. They won't get a core dump or some worse form of security > problem. > > As an example, the problem in pageinspect is this diff: > > *** > *** 6,16 > -- > -- get_raw_page() > -- > ! CREATE OR REPLACE FUNCTION get_raw_page(text, int4) > RETURNS bytea > AS 'MODULE_PATHNAME', 'get_raw_page' > LANGUAGE C STRICT; > > -- > -- page_header() > -- > --- 6,21 > -- > -- get_raw_page() > -- > ! CREATE OR REPLACE FUNCTION get_raw_page(text, text, int4) > RETURNS bytea > AS 'MODULE_PATHNAME', 'get_raw_page' > LANGUAGE C STRICT; > > + CREATE OR REPLACE FUNCTION get_raw_page(text, int4) > + RETURNS bytea > + AS $$ SELECT get_raw_page($1, 'main', $2); $$ > + LANGUAGE SQL STRICT; > + > -- > -- page_header() > -- > *** > > The underlying C-level get_raw_page function is still there, but > it now expects three arguments not two, and will crash if it's > passed an int4 where it's expecting a text argument. But the old > function definition will migrate without error --- there's no way > for pg_migrator to realize it's installing a security hazard. 
> > The way we should have done this, which I intend to go change it to, > is something like > > CREATE OR REPLACE FUNCTION get_raw_page(text, int4) > RETURNS bytea > AS 'MODULE_PATHNAME', 'get_raw_page' > LANGUAGE C STRICT; > > CREATE OR REPLACE FUNCTION get_raw_page(text, text, int4) > RETURNS bytea > AS 'MODULE_PATHNAME', 'get_raw_page_3' > LANGUAGE C STRICT; > > so that the old function's ABI is preserved. Migration of the old > contrib module will then lead to the 3-argument function not being > immediately available, but the 2-argument one still works. Had we not > wanted to keep the 2-argument form for some reason, we would have > provided only get_raw_page_3 in the .so file, and attempts to use the > old function definition would fail safely. > > (We have actually seen similar problems before with people trying > to dump and reload database containing contrib modules. pg_migrator > is not creating a problem that wasn't there before, it's just making > it worse.) > > Comments, better ideas? maybe, get_raw_page_v2, etc? I suppose you could run into situation with multiple versions of the function w/same # of arguments? merlin -- Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org) To make changes to your subscription: http://www.postgresql.org/mailpref/pgsql-hackers
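A sketch of what the C side of that rule could look like under the V1 calling convention (function bodies elided; the `get_raw_page_3` name follows Tom's example above and is illustrative, not necessarily what was committed):

```c
#include "postgres.h"
#include "fmgr.h"

/* The old entry point keeps its C name and its two-argument SQL
 * signature, so a function definition migrated from an old database
 * still resolves to code that reads the arguments it was declared with. */
PG_FUNCTION_INFO_V1(get_raw_page);
Datum
get_raw_page(PG_FUNCTION_ARGS)
{
    text   *relname = PG_GETARG_TEXT_P(0);
    uint32  blkno   = PG_GETARG_UINT32(1);

    /* ... read block blkno of the main fork ... */
}

/* The new three-argument SQL signature gets a *new* C name, so an old
 * catalog entry cannot accidentally call it with the wrong arguments. */
PG_FUNCTION_INFO_V1(get_raw_page_3);
Datum
get_raw_page_3(PG_FUNCTION_ARGS)
{
    text   *relname  = PG_GETARG_TEXT_P(0);
    text   *forkname = PG_GETARG_TEXT_P(1);
    uint32  blkno    = PG_GETARG_UINT32(2);

    /* ... read block blkno of the named fork ... */
}
```

An old database whose catalog still points `get_raw_page(text, int4)` at the C symbol `get_raw_page` keeps working; an old entry that somehow referenced a removed symbol fails cleanly at load time instead of crashing.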
Re: [HACKERS] postmaster recovery and automatic restart suppression
Hi, On Mon, Jun 8, 2009 at 6:45 PM, Gregory Stark wrote: > Fujii Masao writes: > >> On the other hand, the primary postgres might *not* restart automatically. >> So, it's difficult for clusterware to choose whether to do failover when it >> detects the death of the primary postgres, I think. > > > I think the accepted way to handle this kind of situation is called STONITH -- > "Shoot The Other Node In The Head". > > You need some way when the cluster software decides to initiate failover to > ensure that the first node *cannot* come back up. That could mean shutting the > power to it at the PDU or disabling its network connection at the switch, or > various other options. Yes, I understand that STONITH is a safe solution for split-brain. But, since special equipment like a PDU would probably have to be prepared, I think that some people (including me) want another reasonable way. The proposed feature is not a perfect solution, but it is a convenient way to prevent one class of split-brain situation. Regards, -- Fujii Masao NIPPON TELEGRAPH AND TELEPHONE CORPORATION NTT Open Source Software Center
Re: [HACKERS] PostgreSQL Developer meeting minutes up
Markus Wanner wrote:
>> I am a theory person - I run things in my head. To me, the concept of
>> having more context to make the right decision, and an algorithm that
>> takes advantage of this context to make the right decision, is simple
>> and compelling on its own. Knowing the algorithms that are in use,
>> including how it selects the most recent common ancestor, gives me
>> confidence.
>
> That makes me wonder why you are speaking against merges, where there
> are common ancestors. I'd argue that in theory (and generally) a merge
> yields better results than cherry-picking (where there is no common
> ancestor, thus less information). Especially for back-branches, where
> there obviously is a common ancestor.

Nope - definitely not speaking against merges. Automatic merges = best. Automatic cherry picking = second best if the work flow doesn't allow for merges. Doing things by hand = bad but sometimes necessary. Automatic merges or automatic cherry picking with some manual tweaking (hopefully possible from kdiff3) = necessary at times but still better than doing things by hand completely. I think you and I are in agreement. (Even Tom and I are in agreement on many things - I just didn't respond to his well-thought-out, great posts, like the one that describes why back patching is often better than forward patching when having multiple parallel releases open at the same time.)

>> No amount of discussions where others say "it works great" and you say
>> "I don't believe you until you provide me with output" is going to get
>> anywhere.
>
> Well, I guess it can be frustrating for both sides. However, I think
> these discussions are worthwhile (and necessary) nonetheless. As not
> even those who highly appreciate merge algorithms (you and me, for
> example) are in agreement on how to use them (cherry-picking vs.
> merging), it doesn't surprise me that others are generally skeptical.

We're in agreement on the merge algorithms, I think.
:-) That said, it is a large domain, and there is room for disagreement even between those with experience, and you are right that it shouldn't be surprising that others are generally sceptical.

Cheers,
mark

--
Mark Mielke
Re: [HACKERS] postmaster recovery and automatic restart suppression
Fujii Masao writes:
> On the other hand, the primary postgres might *not* restart automatically.
> So, it's difficult for clusterware to choose whether to do failover when it
> detects the death of the primary postgres, I think.

I think the accepted way to handle this kind of situation is called STONITH -- "Shoot The Other Node In The Head".

You need some way, when the cluster software decides to initiate failover, to ensure that the first node *cannot* come back up. That could mean shutting off the power to it at the PDU or disabling its network connection at the switch, or various other options.

--
Gregory Stark
http://mit.edu/~gsstark/resume.pdf
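The fence-before-promote ordering described above can be sketched as follows. This is only an illustrative sketch, not real clusterware code; `decide_failover` and `fence_node` are hypothetical names, and `fence_node` stands in for an actual STONITH action (cutting power at the PDU, disabling the switch port, etc.):

```python
# Sketch: clusterware must fence (STONITH) the old primary *before*
# promoting the standby; otherwise a reviving primary causes split-brain.

def decide_failover(primary_alive, fence_node):
    """Decide the next action when monitoring reports on the primary.

    fence_node() is assumed to return True only once the old primary
    is guaranteed unable to come back up.
    """
    if primary_alive:
        return "keep-watching"
    if not fence_node():
        # Fencing failed: the old primary might restart later, so
        # promoting the standby now would risk split-brain.
        return "no-failover"
    return "promote-standby"
```

The point is simply that promotion is gated on successful fencing, never on the mere observation that the primary looks dead.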
Re: [HACKERS] postmaster recovery and automatic restart suppression
Hi,

On Fri, Jun 5, 2009 at 9:24 PM, Kolb, Harald (NSN - DE/Munich) wrote:
>> Good point. I also think that this makes a handling of failover
>> more complicated. In other words, clusterware cannot determine
>> whether to do failover when it detects the death of the primary
>> postgres. A wrong decision might cause split brain syndrome.
>
> Mh, I cannot follow your reflections. Could you explain a little bit
> more ?
>
>> How about a new GUC parameter to determine whether to restart
>> postmaster automatically when it fails abnormally? This would
>> be useful for various failover systems.

The primary postgres might restart automatically after the clusterware has finished failover (i.e. the standby postgres has come up live). In this case, a postgres would be running on each server, and they would be independent of each other. This is known as a form of split-brain syndrome. The problem is that, for example, if they share the archival storage, some archived files might get lost; the original primary postgres might overwrite an archived file which was written by the new primary.

On the other hand, the primary postgres might *not* restart automatically. So, it's difficult for clusterware to choose whether to do failover when it detects the death of the primary postgres, I think.

> A new GUC parameter would be the optimal solution.
> Since I'm new to the community, what's the "usual" way to make this
> happen ?

The following might be good references for you:

http://www.pgcon.org/2009/schedule/events/178.en.html
http://wiki.postgresql.org/wiki/Submitting_a_Patch

Regards,

--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
Re: [HACKERS] PostgreSQL Developer meeting minutes up
Hi,

Quoting "Nicolas Barbier" :
> If I understand correctly, "nearby variable renaming" refers to changes
> to the few lines surrounding the changes-to-be-merged.

Hm.. I took that to mean "changes on the same line". I now realize that was an overly strict interpretation.

> There is certainly supposed to be an advantage relative to diff/patch
> here: as all changes leading to both versions are known (up to some
> common ancestor), git doesn't need "context lines" to recognize the
> position in the file that is supposed to receive the updates.

Yes, that's how I understand it as well.

> Your example seems fine (except that it does not make much sense to
> merge with an ancestor).

I'm not sure if git also works line by line (as monotone does). However, IIRC kdiff3 uses some finer-grained comparison, so it can even merge unrelated changes on the same line, i.e.:

ancestor: aaa bbb
left:     axa bbb  (modified a -> x)
right:    aaa byb  (modified b -> y)
merge:    axa byb  (contains both modifications)

Regards

Markus Wanner
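The finer-grained merging described above can be illustrated with a toy token-wise three-way merge. This is only a sketch of the idea, not kdiff3's actual algorithm, and it assumes all three versions have the same number of tokens; `merge3` is a hypothetical name:

```python
# Toy three-way merge at token granularity: unrelated changes on the
# same line (a -> x on the left, b -> y on the right) can be combined
# because each token is compared against the common ancestor.

def merge3(ancestor, left, right):
    merged = []
    for a, l, r in zip(ancestor.split(), left.split(), right.split()):
        if l == r:
            merged.append(l)      # unchanged, or both sides agree
        elif l == a:
            merged.append(r)      # only the right side changed this token
        elif r == a:
            merged.append(l)      # only the left side changed this token
        else:
            raise ValueError("conflict: %r vs %r" % (l, r))
    return " ".join(merged)

# merge3("aaa bbb", "axa bbb", "aaa byb") combines both modifications.
```

A line-based tool sees a single conflicting line here; comparing at a finer granularity resolves it automatically, at the cost of extra conflict cases when both sides touch the same token.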
Re: [HACKERS] PostgreSQL Developer meeting minutes up
Robert Haas wrote:
> On Fri, Jun 5, 2009 at 12:15 PM, Tom Lane wrote:
>> ... but I'm not at all excited about cluttering the
>> long-term project history with a zillion micro-commits. One of the
>> things I find most annoying about reviewing the current commit history
>> is that Bruce has taken a micro-commit approach to managing the TODO
>> list --- I was seldom so happy as the day that disappeared from CVS,
>> because of the ensuing reduction in noise level.

For better or worse, git also includes a command "git-rebase" that can collapse such micro-commits into a larger one. Quoting the git-rebase man page:

  A range of commits could also be removed with rebase. If we have the
  following situation:

      E---F---G---H---I---J  topicA

  then the command

      git-rebase --onto topicA~5 topicA~3 topicA

  would result in the removal of commits F and G:

      E---H'---I'---J'  topicA

While I wouldn't recommend using this for historical revisionism, I imagine it could be useful during code review, when the micro-commits (from both the patch submitter and the patch reviewer) are interesting. After the review, the commits could be collapsed into meaningful-sized chunks just before they're merged into the official branches.
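The `--onto` semantics quoted above can be modeled in a few lines: everything up to the new base is kept, and only the commits after `<upstream>` are replayed on top, becoming new commits (hence H', I', J' in the man page excerpt). A toy sketch with commits as plain strings; `rebase_onto` is a hypothetical name, and real rebase of course re-applies patches rather than renaming labels:

```python
# Toy model of `git rebase --onto <newbase> <upstream> <branch>`:
# keep history up to <newbase>, drop <newbase>..<upstream>, and replay
# the commits after <upstream>. The trailing "'" stands for the new
# commit ids that a real rebase would create.

def rebase_onto(history, newbase, upstream):
    kept = history[:history.index(newbase) + 1]
    replayed = history[history.index(upstream) + 1:]
    return kept + [c + "'" for c in replayed]

# With topicA = E---F---G---H---I---J, topicA~5 is E and topicA~3 is G,
# so F and G are removed:
# rebase_onto(["E", "F", "G", "H", "I", "J"], "E", "G")
#   -> ["E", "H'", "I'", "J'"]
```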
Re: [HACKERS] PostgreSQL Developer meeting minutes up
Hi,

Quoting "Mark Mielke" :
> I am a theory person - I run things in my head. To me, the concept of
> having more context to make the right decision, and an algorithm that
> takes advantage of this context to make the right decision, is simple
> and compelling on its own. Knowing the algorithms that are in use,
> including how it selects the most recent common ancestor, gives me
> confidence.

That makes me wonder why you are speaking against merges, where there are common ancestors. I'd argue that in theory (and generally) a merge yields better results than cherry-picking (where there is no common ancestor, thus less information). Especially for back-branches, where there obviously is a common ancestor.

> No amount of discussions where others say "it works great" and you say
> "I don't believe you until you provide me with output" is going to get
> anywhere.

Well, I guess it can be frustrating for both sides. However, I think these discussions are worthwhile (and necessary) nonetheless. As not even those who highly appreciate merge algorithms (you and me, for example) are in agreement on how to use them (cherry-picking vs. merging), it doesn't surprise me that others are generally skeptical.

Regards

Markus Wanner
Re: [HACKERS] pg_migrator issue with contrib
Dimitri Fontaine wrote:
> On 6 Jun 09, at 20:45, Josh Berkus wrote:
>> So, here's what we need for 8.3 --> 8.4 for contrib modules:
>
> That does nothing for external modules whose code isn't in PostgreSQL
> control. I'm thinking of those examples I cited up-thread --- and some
> more. (ip4r, temporal, prefix, hstore-new, orafce, etc).

For me, and I know several others, the big question would be PostGIS. Unfortunately I haven't had the time to run any tests myself, so I'll join the line of people being worried, but I have a number of customers with somewhere between pretty and very large PostGIS databases that could really benefit from pg_migrator. As long as PostGIS is the same version in both of them, is pg_migrator likely to work? (One can always run the PostGIS upgrade as a separate step.)

> Could pg_migrator detect usage of "objects" oids (data types in
> relation, index opclass, ...) that are unknown to be in the standard
> -core + contrib distribution, and quit trying to upgrade the cluster in
> this case, telling the user his database is not supported?

+1 on this. Or at least, have it exit and say "if you know that these things are reasonably safe, run pg_migrator again with --force" or something like that.

--
Magnus Hagander
Self: http://www.hagander.net/
Work: http://www.redpill-linpro.com/
Re: [HACKERS] pg_migrator issue with contrib
Robert Haas wrote:
> On Sun, Jun 7, 2009 at 11:50 PM, Bruce Momjian wrote:
>> Stefan Kaltenbrunner wrote:
>>> Josh Berkus wrote:
>>>> On 6/7/09 10:56 AM, Robert Haas wrote:
>>>>> OK, that's more or less what I thought, and what I intended to convey
>>>>> upthread. As far as core Postgres is concerned this is a new feature,
>>>>> and we haven't worked out all the kinks yet.
>>>>
>>>> Yes, I'm calling it "pg_migrator beta" in any advocacy/PR about it.
>>>> AFAIC, until we have these sorts of issues worked out, it's still a
>>>> beta.
>>>
>>> afaiks bruce stated he is going to remove the BETA tag from pg_migrator
>>> soon, so I guess calling it beta in the main project docs will confuse
>>> the hell out of people (or cause them to think that it is not beta any
>>> more). So maybe calling it experimental (from the POV of the main
>>> project) or something similar might still be the better solution.
>>
>> This all sounds very discouraging. It is like, "Oh, my, there is a
>> migration tool and it might have bugs. How do we prevent people from
>> using it?" Right now nothing in the project is referring to pg_migrator
>> except in the press release, and it is marked as beta there. How do you
>> want to deemphasize it more than that? Why did I bother working on this
>> if the community reaction is to try to figure out how to make people
>> avoid using it?
>
> Because Rome wasn't built in a day.

indeed

> It seems to me that you yourself placed a far more disparaging label on
> it than anything that anyone has proposed today; this was a week ago:
> http://archives.postgresql.org/pgsql-hackers/2009-05/msg01470.php

well that is way more discouraging than what I wanted to say :)

> I don't think it's anyone's intention to disparage your work on this
> tool. It certainly isn't mine. But it seems obvious to me that it has
> some pretty severe limitations and warts. The fact that those
> limitations and warts are well-documented doesn't negate their
> existence. I also don't think calling the software "beta" or
> "experimental" is a way of deemphasizing it.
> I think it's a way of being clear that this software is not at the
> bullet-proof, rock-solid, handles-all-cases-and-keeps-on-trucking level
> of robustness that people have come to expect from PostgreSQL.

Exactly my point. pg_migrator gained a lot of momentum in the last weeks and months, but imho it still has a way to go. I do think that binary upgrades are extremely important for us (that's why I did a fair amount of testing on it), but I don't think that we should go too far for this release. A lot of the code that makes postgresql what it is now took years to mature on pgfoundry or in contrib. So some of the questions to ask would be:

* is pg_migrator ready for contrib/? Probably not - it is still too much of a moving target, so pgfoundry is good
* is pg_migrator ready for src/bin? Realistically I think we need to get at least one full cycle to see what happens in the field with something as complex as pg_migrator to really get a grasp on what else comes up.

> FWIW, I have no problem at all with mentioning pg_migrator in the
> release notes or the documentation; my failure to respond to your last
> emails on this topic was due to being busy and having already spent too
> much time responding to other emails, not due to thinking it was a bad
> idea. I actually think it's a good idea. But I also think those
> references should describe it as experimental, because I think it is. I
> really hope it won't remain experimental forever, but I think that's an
> accurate characterization of where it is now.

yep - I was not against mentioning it either. We just should do it in a sane way (i.e. it is not part of the core project yet, but endorsed and might get added in the future, or some such) so we don't confuse people (like calling it beta while the homepage does not) and yet get valuable feedback, which we certainly need to go forward.

Stefan
Re: [HACKERS] Simple, safe hot backup and recovery
Hi Sano-san,

On Fri, Jun 5, 2009 at 7:02 PM, Yoshinori Sano wrote:
>> In v8.4, pg_stop_backup waits until all WAL files used during backup
>> are archived.
>> So, "sleep" is already unnecessary for standalone hot backup.
>
> Oh, it's great news! We don't need to use the unsafe approach (the
> sleep command) anymore if we use v8.4, do we?

Yes, in the upcoming v8.4. Of course, in v8.3 or before, you still need to build a safe mechanism yourself. For that, the XLogArchiveIsBusy() function which was added in v8.4 may be a good reference for you.

Regards,

--
Fujii Masao
NIPPON TELEGRAPH AND TELEPHONE CORPORATION
NTT Open Source Software Center
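For pre-8.4 servers, such a mechanism might take the shape sketched below: after pg_stop_backup(), poll the archive_status directory until no ".ready" marker files remain, instead of sleeping for a fixed time. This is a hedged illustration, not a tested backup script; the directory layout is the standard pg_xlog/archive_status convention, but the function names and timings here are assumptions:

```python
# Sketch: wait for WAL archiving to finish instead of using a blind
# sleep. A segment pending archival has a "<segment>.ready" marker in
# pg_xlog/archive_status; once the archiver ships it, the marker is
# renamed to ".done", so an empty set of *.ready files means nothing
# is left to archive.
import glob
import os
import time

def archiving_done(status_dir):
    """True when no WAL segment is still waiting to be archived."""
    return not glob.glob(os.path.join(status_dir, "*.ready"))

def wait_for_archiving(status_dir, poll_secs=1.0, timeout_secs=600):
    deadline = time.time() + timeout_secs
    while not archiving_done(status_dir):
        if time.time() > deadline:
            raise RuntimeError("WAL archiving still busy after timeout")
        time.sleep(poll_secs)
```

This is roughly the condition that 8.4's pg_stop_backup waits on internally, which is why the explicit loop becomes unnecessary there.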