[HACKERS] [PoC] load balancing in libpq
Hi all,

I have just written the first PoC code to enable load balancing in the libpq library. This libpq enhancement is intended to let PostgreSQL users take advantage of replication more easily. With this patch, the PQconnectdb() function accepts multiple connection info strings and picks one of them on a round-robin basis to connect. The example below shows PQconnectdb() accepting a connection string that contains four different databases (db1~db4) on different servers (dbhost1~dbhost4), and then connecting to them in round-robin fashion.

I know there are several things to be considered, but first I need your feedback or comments on this enhancement. Do you think it would be useful?

Regards,

[snaga@devvm03 libpq_repli]$ cat libpq_lb_test.c
#include <stdio.h>
#include "libpq-fe.h"

void
_connect()
{
    PGconn *conn;
    PGresult *res;

    conn = PQconnectdb("host=dbhost1 dbname=db1 user=snaga; host=dbhost2 dbname=db2 user=snaga; host=dbhost3 dbname=db3 user=snaga; host=dbhost4 dbname=db4 user=snaga");

    res = PQexec(conn, "SELECT current_database()");
    if ( PQresultStatus(res)==PGRES_TUPLES_OK )
    {
        printf("current_database = %s on %s\n",
               PQgetvalue(res, 0, 0), PQhost(conn));
    }
    PQclear(res);

    PQfinish(conn);
}

int
main(void)
{
    int i;

    for (i=0 ; i<8 ; i++)
        _connect();

    return 0;
}
[snaga@devvm03 libpq_repli]$ ./libpq_lb_test
current_database = db1 on dbhost1
current_database = db2 on dbhost2
current_database = db3 on dbhost3
current_database = db4 on dbhost4
current_database = db1 on dbhost1
current_database = db2 on dbhost2
current_database = db3 on dbhost3
current_database = db4 on dbhost4
[snaga@devvm03 libpq_repli]$

--
Satoshi Nagayasu <sn...@uptime.jp>
Uptime Technologies, LLC. http://www.uptime.jp

diff --git a/src/interfaces/libpq/fe-connect.c b/src/interfaces/libpq/fe-connect.c
index 9eaf410..14e31b6 100644
--- a/src/interfaces/libpq/fe-connect.c
+++ b/src/interfaces/libpq/fe-connect.c
@@ -569,6 +569,81 @@ PQconnectStartParams(const char *const * keywords,
 	return conn;
 }
 
+struct ConninfoRepli {
+	int			n_conninfo;
+	char	   *conninfo_array[128];
+};
+
+static bool
+parse_conninfo_repli(char *conninfo, size_t len, const char *conninfo_repli)
+{
+	char		buf[1024];
+	char	   *conninfo_ptr;
+	char	   *end_ptr;
+	struct ConninfoRepli conninfo_r;
+	static int	conninfo_idx = 0;
+	int			i;
+
+	memset(&conninfo_r, 0, sizeof(struct ConninfoRepli));
+
+	conninfo_r.n_conninfo = 0;
+
+#ifdef REPLI_DEBUG
+	printf("DEBUG: %s\n", conninfo_repli);
+#endif
+
+	conninfo_ptr = (char *)conninfo_repli;
+
+	while (1)
+	{
+		int		len;
+
+		end_ptr = strchr(conninfo_ptr, ';');
+		if ( end_ptr )
+			len = end_ptr - conninfo_ptr;
+		else
+			len = strlen(conninfo_ptr);
+
+		memset(buf, 0, sizeof(buf));
+		strncpy(buf, conninfo_ptr, len);
+
+		conninfo_r.conninfo_array[conninfo_r.n_conninfo] = strdup(buf);
+		conninfo_r.n_conninfo++;
+
+		if ( !end_ptr )
+			break;
+
+		conninfo_ptr = end_ptr + 1;
+	}
+
+#ifdef REPLI_DEBUG
+	printf("DEBUG: n_conninfo = %d\n", conninfo_r.n_conninfo);
+
+	for (i=0 ; i<conninfo_r.n_conninfo ; i++)
+	{
+		printf("DEBUG: %s\n", conninfo_r.conninfo_array[i]);
+	}
+#endif
+
+	strncpy(conninfo, conninfo_r.conninfo_array[conninfo_idx], len);
+	conninfo_idx++;
+
+#ifdef REPLI_DEBUG
+	printf("DEBUG: conninfo = %s\n", conninfo);
+#endif
+
+	if ( conninfo_idx >= conninfo_r.n_conninfo )
+		conninfo_idx = 0;
+
+	for (i=0 ; i<conninfo_r.n_conninfo ; i++)
+	{
+		free(conninfo_r.conninfo_array[i]);
+	}
+
+	return true;
+}
+
 /*
  * PQconnectStart
  *
@@ -592,6 +667,7 @@ PGconn *
 PQconnectStart(const char *conninfo)
 {
 	PGconn	   *conn;
+	char		conninfo2[1024];
 
 	/*
 	 * Allocate memory for the conn structure
@@ -601,9 +677,15 @@ PQconnectStart(const char *conninfo)
 		return NULL;
 
 	/*
+	 * Parse a user-specified conninfo string that contains
+	 * multiple conninfo strings, and pick one on a round-robin basis.
+	 */
+	parse_conninfo_repli(conninfo2, sizeof(conninfo2), conninfo);
+
+	/*
 	 * Parse the conninfo string
 	 */
-	if (!connectOptions1(conn, conninfo))
+	if (!connectOptions1(conn, conninfo2))
 		return conn;
 
 	/*

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
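[Editor's sketch] The core of parse_conninfo_repli() above -- split the string on semicolons and select the N-th entry round-robin -- can be shown as a self-contained helper with the mangled operators restored and the bounds checked. pick_conninfo() is a hypothetical name, not part of the patch, and it deliberately inherits the PoC's naive assumption that every semicolon is a separator:

```c
#include <stddef.h>
#include <string.h>

/*
 * Copy the (idx mod n)-th semicolon-separated entry of `list` into `out`.
 * Returns the number of entries found, or 0 if the chosen entry does not
 * fit in `out`.  Hypothetical helper sketching the PoC's selection logic;
 * like the PoC, it wrongly assumes ';' never occurs inside a conninfo value.
 */
static int
pick_conninfo(const char *list, int idx, char *out, size_t outlen)
{
	const char *start = list;
	const char *end;
	const char *p;
	size_t		len;
	int			n = 0;
	int			i;

	/* First pass: count the entries. */
	for (p = list;; p++)
	{
		if (*p == ';' || *p == '\0')
		{
			n++;
			if (*p == '\0')
				break;
		}
	}

	/* Round-robin: wrap the caller-supplied index. */
	idx %= n;

	/* Second pass: locate the start of the idx-th entry. */
	for (i = 0; i < idx; i++)
		start = strchr(start, ';') + 1;

	end = strchr(start, ';');
	len = end ? (size_t) (end - start) : strlen(start);

	if (len >= outlen)
		return 0;
	memcpy(out, start, len);
	out[len] = '\0';
	return n;
}
```

Unlike the patch, this keeps no hidden static state beyond what the caller passes in, and it refuses to overflow the output buffer instead of silently truncating.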
Re: [HACKERS] [ADMIN] pg_upgrade from 9.1.3 to 9.2 failed
On Fri, 2012-09-21 at 17:16 +0800, Rural Hunter wrote:
> If I run initdb with '-E zh_CN.utf8', it will tell me there is no such
> charset in the system.

Because that is the name of a locale, not an encoding.

> I found a workaround to run initdb with '--lc-collate=zh_CN.utf8
> --lc-ctype=zh_CN.utf8 --lc-messages=zh_CN.utf8 --lc-monetary=zh_CN.utf8
> --lc-numeric=zh_CN.utf8 --lc-time=zh_CN.utf8'. But the case problem is
> really confusing.

Try initdb --locale='zh_CN.utf8'.
Re: [HACKERS] [PoC] load balancing in libpq
On 23-09-2012 07:50, Satoshi Nagayasu wrote:
> I have just written the first PoC code to enable load balancing in the
> libpq library.

Your PoC is totally broken. Just to point out two problems: (i) a semicolon (;) is a valid character for any option in the connection string, and (ii) you didn't think about PQsetdb[Login](), PQconnectdbParams() and PQconnectStartParams(). If you want to pursue this idea, you should think of a way to support the same option multiple times (one idea is host1, host2, etc).

Isn't it easier to add support in your application or pooling software?

--
Euler Taveira de Oliveira - Timbira       http://www.timbira.com.br/
PostgreSQL: Consulting, Development, 24x7 Support and Training
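[Editor's sketch] Euler's point (i) can be made concrete: libpq conninfo values may be single-quoted, so a semicolon can legally appear inside a value such as a password. A splitter would at minimum have to track quoting before treating ';' as a separator. The helper below is a hypothetical illustration of that check, not existing libpq code, and it ignores the backslash escapes that real conninfo parsing also allows:

```c
/*
 * Count the semicolons in `s` that appear outside single-quoted
 * conninfo values.  Hypothetical helper for illustration only;
 * real conninfo syntax also permits backslash escapes, which this
 * deliberately does not handle.
 */
static int
count_unquoted_semis(const char *s)
{
	int		in_quote = 0;
	int		n = 0;

	for (; *s; s++)
	{
		if (*s == '\'')
			in_quote = !in_quote;		/* toggle on quote boundaries */
		else if (*s == ';' && !in_quote)
			n++;						/* only unquoted ';' separates */
	}
	return n;
}
```

A naive strchr()-based splitter like the one in the PoC would miscount the second example below as two entries.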
Re: [HACKERS] [ADMIN] pg_upgrade from 9.1.3 to 9.2 failed
On 2012-09-23 20:33:48, Peter Eisentraut wrote:
> On Fri, 2012-09-21 at 17:16 +0800, Rural Hunter wrote:
>> If I run initdb with '-E zh_CN.utf8', it will tell me there is no such
>> charset in the system.
> Because that is the name of a locale, not an encoding.
>> I found a workaround to run initdb with '--lc-collate=zh_CN.utf8
>> --lc-ctype=zh_CN.utf8 --lc-messages=zh_CN.utf8 --lc-monetary=zh_CN.utf8
>> --lc-numeric=zh_CN.utf8 --lc-time=zh_CN.utf8'. But the case problem is
>> really confusing.
> Try initdb --locale='zh_CN.utf8'.

Ah yes, it seems I used a wrong parameter. --locale='zh_CN.utf8' works, and --locale='zh_CN.UTF8' also works. But the question remains: should the encoding name be case sensitive?
Re: [HACKERS] [PoC] load balancing in libpq
We historically have had connection pooling as an external thing; given the degree to which people keep implementing and reimplementing this, I think *something* more than we have now ought to be built in. This, with perhaps a better implementation, might be an apropos start.

A parallel with LDAP: it takes very much this approach, where configuration typically offers a list of LDAP servers. I am not certain whether OpenLDAP does round robin on the list, or whether it tries targets in order, stopping when one succeeds. A decent debate fits in there.

I could see this being implemented instead via something alongside PGSERVICE; that already offers a well-defined way to capture a registry of connection configurations. Specifying a list of service names would allow the command-line configuration to remain short and yet very flexible.

On 2012-09-23 10:01 AM, Euler Taveira <eu...@timbira.com> wrote:
> On 23-09-2012 07:50, Satoshi Nagayasu wrote:
>> I have just written the first PoC code to enable load balancing in the
>> libpq library.
> Your PoC is totally broken. Just to point out two problems: (i) a
> semicolon (;) is a valid character for any option in the connection
> string, and (ii) you didn't think about PQsetdb[Login](),
> PQconnectdbParams() and PQconnectStartParams(). If you want to pursue
> this idea, you should think of a way to support the same option
> multiple times (one idea is host1, host2, etc).
> Isn't it easier to add support in your application or pooling software?
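[Editor's sketch] For reference, the PGSERVICE mechanism already centralizes connection parameters in a service file; a registry like the following (host and database names are the example's, the file layout is standard pg_service.conf syntax) is the kind of thing a list-of-service-names extension could round-robin over:

```ini
# ~/.pg_service.conf -- each section names one connection configuration
[db1]
host=dbhost1
dbname=db1
user=snaga

[db2]
host=dbhost2
dbname=db2
user=snaga
```

Today a client selects exactly one section, e.g. PQconnectdb("service=db1"); the idea floated above would be some speculative syntax naming several services, which libpq does not currently accept.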
Re: [HACKERS] pg_reorg in core?
2012/09/23 12:37, Greg Sabino Mullane wrote:
>>> I think it's time to consider some *umbrella project* for maintaining
>>> several small projects outside the core.
>> Well, that was pgfoundry, and it didn't work out.
> I'm not sure that is quite analogous to what was being proposed. I read
> it as more of "let's package a bunch of these small utilities together
> into a single project", such that installing one installs them all (e.g.
> aptitude install pg_tools), and they all have a single bug tracker, etc.
> That tracker could be github, of course.

Exactly --- I do not care about the SCM system, though. :)

> I'm not convinced of the merit of that plan, but that's an alternative
> interpretation that doesn't involve our beloved pgfoundry. :)

For example, xlogdump had not been maintained for 5 years when I picked it up last year. And the latest pg_filedump that supports 9.2 has not been released yet; pg_reorg as well.

If those tools were in a single project, it would be easier to keep attention on them. Then developers could easily build *all of them* at once, fix them, and post patches to a single mailing list. Actually, it would save developers from wasting their time.

From my viewpoint, it's not just an SCM or distribution issue. It's about how such small projects around the core can survive, even if they cannot come into the core.

Regards,

> Oh, and -1 for putting it in core. Way too early, and not important
> enough.

--
Satoshi Nagayasu <sn...@uptime.jp>
Uptime Technologies, LLC. http://www.uptime.jp
[HACKERS] trivial typo in src/tools/RELEASE_CHANGES
There's a typo in src/tools/RELEASE_CHANGES. It just ticked off my OCD, I guess...

Cheers,
Jan

diff --git a/src/tools/RELEASE_CHANGES b/src/tools/RELEASE_CHANGES
new file mode 100644
index 5f1277a..aba1630
*** a/src/tools/RELEASE_CHANGES
--- b/src/tools/RELEASE_CHANGES
*** Then doing it like this:
*** 164,170 ****
  void print_stuff(int arg1, int arg2)
  {
! 	print_stuff(arg1, arg2, 0);
  }
  
  would maintain binary compatibility. Obviously this would add a fair
--- 164,170 ----
  void print_stuff(int arg1, int arg2)
  {
! 	print_stuff2(arg1, arg2, 0);
  }
  
  would maintain binary compatibility. Obviously this would add a fair
Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)
Excerpts from Amit kapila's message of sáb sep 22 01:14:40 -0300 2012:

> On Friday, September 21, 2012 6:50 PM Alvaro Herrera wrote:
>> Excerpts from Amit Kapila's message of vie sep 21 02:26:49 -0300 2012:
>>> On Thursday, September 20, 2012 7:13 PM Alvaro Herrera wrote:
>>>> Well, there is a difficulty here which is that the number of
>>>> processes connected to databases must be configured during
>>>> postmaster start (because it determines the size of certain shared
>>>> memory structs).  So you cannot just spawn more tasks if all
>>>> max_worker_tasks are busy.  (This is a problem only for those
>>>> workers that want to be connected as backends.  Those that want
>>>> libpq connections do not need this and are easier to handle.)
>>> If not, then where is the need for dynamic worker tasks, as mentioned
>>> by Simon?
>> Well, I think there are many uses for dynamic workers, or short-lived
>> workers (start, do one thing, stop and not be restarted).
>> In my design, a worker is always restarted if it stops; otherwise
>> there is no principled way to know whether it should be running or not
>> (after a crash, should we restart a registered worker?  We don't know
>> whether it stopped before the crash.)  So it seems to me that at least
>> for this first shot we should consider workers as processes that are
>> going to be always running as long as postmaster is alive.  On a
>> crash, if they have a backend connection, they are stopped and then
>> restarted.
>
> a. Is there a chance that a worker could have made shared memory
>    inconsistent before a crash, for example by taking a lock on some
>    structure and crashing before releasing it?  If that is the case, do
>    we need to reinitialize the shared memory as well on worker restart?

Any worker that requires access to shared memory will have to be stopped and restarted on a crash (of any other postmaster child process). Conversely, if a worker requires shmem access, it will have to cause the whole system to be stopped/restarted if it crashes in some ugly way. Same as any current process that's connected to shared memory, I think.

So, to answer your question, yes. We need to take the safe route and consider that a crashed process might have corrupted shmem. (But if it dies cleanly, then there is no need for this.)

> b. Will these worker tasks be able to take on any new jobs, or will
>    they only do the jobs they were started with?

Not sure I understand this question. If a worker connects to a database, it will stay connected to that database until it dies; changing DBs is not allowed. If you want a worker that connects to database A, does stuff there, and then connects to database B, it could connect to A, do its deed, then set up "database=B" in shared memory and stop, which will cause postmaster to restart it; the next time it starts, it reads shmem and knows to connect to the other DB.

My code has the ability to connect to no particular database -- what the autovac launcher does (this lets it read shared catalogs). So you could do useful things like have the first invocation of your worker connect to that, read pg_database to determine what DB to connect to next, and then terminate.

You could also have worker groups commanded by one process: one queen bee, one or more worker bees. The queen determines what to do, sets tasklist info in shmem, and signals the worker bees. While the tasklist is empty, the workers would sleep.

As you can see, there are many things that can be done with this.

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
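[Editor's sketch] The queen-bee/worker-bee pattern Alvaro describes can be sketched in miniature with ordinary threads standing in for postmaster children and a mutex-protected struct standing in for shared memory. All names here are illustrative assumptions, and the threading model is a stand-in: the real proposal uses separate processes, real shmem, and signals rather than condition variables.

```c
#include <pthread.h>

/* Toy "shared memory" tasklist: the queen appends tasks, the worker
 * bee sleeps while the list is empty and wakes when signalled. */
typedef struct
{
	pthread_mutex_t lock;
	pthread_cond_t	cv;
	int				tasks[8];
	int				head;		/* next task to consume */
	int				tail;		/* next free slot */
	int				done;		/* queen requests shutdown */
	int				sum;		/* accumulated "work", for checking */
} TaskList;

static void *
worker_bee(void *arg)
{
	TaskList   *tl = (TaskList *) arg;

	pthread_mutex_lock(&tl->lock);
	for (;;)
	{
		/* While the tasklist is empty, the worker sleeps. */
		while (tl->head == tl->tail && !tl->done)
			pthread_cond_wait(&tl->cv, &tl->lock);
		if (tl->head == tl->tail && tl->done)
			break;
		tl->sum += tl->tasks[tl->head++];	/* "do one thing" */
	}
	pthread_mutex_unlock(&tl->lock);
	return NULL;
}

static int
run_queen_and_worker(void)
{
	TaskList	tl = {PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER};
	pthread_t	bee;
	int			i;

	pthread_create(&bee, NULL, worker_bee, &tl);

	/* Queen: set tasklist info in "shmem", then signal the worker bee. */
	pthread_mutex_lock(&tl.lock);
	for (i = 1; i <= 5; i++)
		tl.tasks[tl.tail++] = i;
	pthread_cond_signal(&tl.cv);
	pthread_mutex_unlock(&tl.lock);

	/* Tell the worker to exit once the list has drained. */
	pthread_mutex_lock(&tl.lock);
	tl.done = 1;
	pthread_cond_signal(&tl.cv);
	pthread_mutex_unlock(&tl.lock);

	pthread_join(bee, NULL);
	return tl.sum;				/* sum of tasks 1..5 */
}
```

The wait-in-a-loop/signal shape is the same whether the handshake is a condition variable, a latch, or a process signal; only the primitives differ.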
Re: [HACKERS] trivial typo in src/tools/RELEASE_CHANGES
Excerpts from Jan Urbański's message of dom sep 23 14:21:53 -0300 2012:
> There's a typo in src/tools/RELEASE_CHANGES. It just ticked off my OCD,
> I guess...

Pushed, thanks.

--
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
Re: [HACKERS] Draft release notes complete
On 09/22/2012 01:57 PM, Stephen Frost wrote:
> Andrew,
> Below is the patch that I mentioned at pgOpen. I'm pretty sure my silly
> github pull request got screwed up anyway, so probably best to ignore
> it. Regardless, please let me know what you think. I'd be happy to
> rework it to operate off of a single hash, though I think that would
> require having 'one true hash' of all possible steps, and it kind of
> looked like you were trying to avoid that.

I'm not sure it's a great advance, but I'll take a look. In any case, please send it as a proper MIME attachment; it did not come through clean. Alternatively, and probably better, put it on a topic branch that I can git-fetch (that's the recommended practice for github pull requests too).

cheers

andrew
[HACKERS] patch review for Array ELEMENT Foreign Keys
The patch I reviewed is listed among the CommitFest 2012-09 patches. I hope this can be of some help, despite the fact that I don't really feel terribly qualified here.

Submission review

The patch applies cleanly against v9.3, as fetched by cloning current HEAD from the postgres git repository:

$ git clone ..
$ ./configure --with-readline
$ make
$ make world
$ make check

Apart from problems with substituted readline linkage, everything related to this patch went smoothly. The patch contains a significant amount of testing code, so the unit tests it provides give confidence that the patch works as intended.

Usability review

Do we want that? Do we already have it? As far as I'm concerned, this patch solves a dataset partitioning problem which is hard to cope with otherwise. The nested storage this patch provides solves the problem for me.

Does the patch actually implement that? The current version of the patch does not look like it implements ON UPDATE CASCADE and ON DELETE CASCADE just yet, but this is documented. Apart from that, the basic FK functionality works fine.

Feature test

Before running make check, I made a few tests off the top of my head, and everything was correct and intuitive. make check also went flawlessly. So the patch looks fine to me.

-R
Re: [HACKERS] [ADMIN] pg_upgrade from 9.1.3 to 9.2 failed
On Sun, 2012-09-23 at 22:20 +0800, Rural Hunter wrote:
> Ah yes, it seems I used a wrong parameter. --locale='zh_CN.utf8' works,
> and --locale='zh_CN.UTF8' also works. But still the question is, should
> the encoding name be case sensitive?

PostgreSQL treats encoding names as case insensitive. But it depends on the operating system whether locale names are case sensitive.
Re: [HACKERS] pg_reorg in core?
On Mon, Sep 24, 2012 at 1:14 AM, Satoshi Nagayasu <sn...@uptime.jp> wrote:
> 2012/09/23 12:37, Greg Sabino Mullane wrote:
>>>> I think it's time to consider some *umbrella project* for maintaining
>>>> several small projects outside the core.
>>> Well, that was pgfoundry, and it didn't work out.
>> I'm not sure that is quite analogous to what was being proposed. I read
>> it as more of "let's package a bunch of these small utilities together
>> into a single project", such that installing one installs them all
>> (e.g. aptitude install pg_tools), and they all have a single bug
>> tracker, etc. That tracker could be github, of course.
> Exactly --- I do not care about the SCM system, though. :)

The bug tracker is going to be a mess if it has to manage 100 subprojects, given that each of them is strictly independent. The maintainers are also different people for each tool.

>> I'm not convinced of the merit of that plan, but that's an alternative
>> interpretation that doesn't involve our beloved pgfoundry. :)
> For example, xlogdump had not been maintained for 5 years when I picked
> it up last year. And the latest pg_filedump that supports 9.2 has not
> been released yet; pg_reorg as well.
> If those tools were in a single project, it would be easier to keep
> attention on them. Then developers could easily build *all of them* at
> once, fix them, and post patches to a single mailing list. Actually, it
> would save developers from wasting their time.
> From my viewpoint, it's not just an SCM or distribution issue. It's
> about how such small projects around the core can survive, even if they
> cannot come into the core.

The package manager system could easily be pgxn; it is already designed for that. For development, what you are looking for here is something that github could perfectly well manage. As proposed by Masahiko, a single organization grouping all the tools (one repository per tool) would be enough. Please note that github can also host documentation. The bug tracker would be tool-dedicated in this case.

--
Michael Paquier
http://michael.otacoo.com
Re: [HACKERS] Suggestion for --truncate-tables to pg_restore
Attached is version 2. The sgml did not build.

On 09/23/2012 12:24:27 AM, Karl O. Pinc wrote:
> Whoops. Do over. Sent the wrong file.
>
> On 09/23/2012 12:19:07 AM, Karl O. Pinc wrote:
>> On 09/21/2012 10:54:05 AM, Karl O. Pinc wrote:
>>> On 09/20/2012 12:24:49 PM, Karl O. Pinc wrote:
>>>> I've had problems using pg_restore --data-only when restoring
>>>> individual schemas (which contain data which has had bad things
>>>> done to it). --clean does not work well because of dependent
>>>> objects in other schemas.
>>> Since there wasn't much more to do, I've gone ahead and written the
>>> patch. Works for me. Against git master.
>>> Passes regression tests, but there are no regression tests for
>>> pg_restore, so this does not say much. Since there are no regression
>>> tests, I've not written one.
>> Since this is a real patch for application, I've given it a new name
>> (it's not a v2). Truncate is done right before COPY, since that's
>> what the parallel restores do.

Karl <k...@meme.com>
Free Software:  "You don't pay back, you pay forward."
                 -- Robert A. Heinlein

diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index b276da6..488d8dc 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -539,6 +539,26 @@
    </varlistentry>
 
    <varlistentry>
+    <term><option>--truncate-tables</option></term>
+    <listitem>
+     <para>
+      This option is only relevant when performing a data-only
+      restore.  It instructs <application>pg_restore</application>
+      to execute commands to truncate the target tables while the
+      data is reloaded.  Use this when restoring tables or schemas
+      and <option>--clean</option> cannot be used because dependent
+      objects would be destroyed.
+     </para>
+
+     <para>
+      The <option>--disable-triggers</option> option will almost
+      always need to be used in conjunction with this option to
+      disable check constraints on foreign keys.
+     </para>
+    </listitem>
+   </varlistentry>
+
+   <varlistentry>
     <term><option>--use-set-session-authorization</option></term>
     <listitem>
      <para>
diff --git a/src/bin/pg_dump/pg_backup.h b/src/bin/pg_dump/pg_backup.h
index 3b49395..0aaf1d3 100644
--- a/src/bin/pg_dump/pg_backup.h
+++ b/src/bin/pg_dump/pg_backup.h
@@ -101,6 +101,8 @@ typedef struct _restoreOptions
 	int			noTablespace;	/* Don't issue tablespace-related commands */
 	int			disable_triggers;	/* disable triggers during data-only
 									 * restore */
+	int			truncate_tables;	/* truncate tables during data-only
+									 * restore */
 	int			use_setsessauth;	/* Use SET SESSION AUTHORIZATION commands
 									 * instead of OWNER TO */
 	int			no_security_labels; /* Skip security label entries */
diff --git a/src/bin/pg_dump/pg_backup_archiver.c b/src/bin/pg_dump/pg_backup_archiver.c
index 722b3e9..43b5806 100644
--- a/src/bin/pg_dump/pg_backup_archiver.c
+++ b/src/bin/pg_dump/pg_backup_archiver.c
@@ -311,6 +311,11 @@ RestoreArchive(Archive *AHX)
 	if (ropt->createDB && ropt->dropSchema)
 		exit_horribly(modulename, "-C and -c are incompatible options\n");
 
+	/* When the schema is dropped and re-created, there is no point in
+	 * truncating tables. */
+	if (ropt->dropSchema && ropt->truncate_tables)
+		exit_horribly(modulename, "-c and --truncate-tables are incompatible options\n");
+
 	/*
 	 * -C is not compatible with -1, because we can't create a database inside
 	 * a transaction block.
@@ -412,6 +417,10 @@ RestoreArchive(Archive *AHX)
 		}
 	}
 
+	/* Truncate tables only when restoring data. */
+	if (!ropt->dataOnly && ropt->truncate_tables)
+		exit_horribly(modulename, "--truncate-tables requires the --data-only option\n");
+
 	/*
 	 * Setup the output file if necessary.
 	 */
@@ -553,6 +562,7 @@ restore_toc_entry(ArchiveHandle *AH, TocEntry *te,
 	int			retval = 0;
 	teReqs		reqs;
 	bool		defnDumped;
+	bool		truncate;
 
 	AH->currentTE = te;
@@ -687,15 +697,22 @@ restore_toc_entry(ArchiveHandle *AH, TocEntry *te,
 			 * server, so no need to see if we should issue BEGIN.
 			 */
 			StartTransaction(AH);
+			truncate = 1;
+		} else
+			/* Truncate the table when asked to. */
+			truncate = ropt->truncate_tables;
+		if (truncate)
 		{
 			/*
 			 * If the server version is >= 8.4, make sure we issue
 			 * TRUNCATE with ONLY so that child tables are not
-			 * wiped.
+			 * wiped.  If we don't know the server version,
+			 * then err on the side of safety.
 			 */
 			ahprintf(AH, "TRUNCATE TABLE %s%s;\n\n",
-					 (PQserverVersion(AH->connection) >= 80400 ?
-					  "ONLY " : ""),
+					 (!AH->connection ||
[HACKERS] Doc patch to note which system catalogs have oids
Hi,

The attached patch documents the oid column of those system catalogs having an oid. It distinguishes the system catalogs with an oid from those without, and makes the primary key clear to the newbie.

Found the catalogs with an oid by querying a 9.2 installation:

select pg_class.relkind, pg_class.relname
  from pg_class, pg_attribute
  where pg_attribute.attrelid = pg_class.oid
        and pg_attribute.attname = 'oid'
        and pg_class.relname like 'pg_%'
        and (pg_class.relkind = 'r'     -- table
             or pg_class.relkind = 'v') -- view
  order by pg_class.relkind, pg_class.relname;

Karl <k...@meme.com>
Free Software:  "You don't pay back, you pay forward."
                 -- Robert A. Heinlein

diff --git a/doc/src/sgml/catalogs.sgml b/doc/src/sgml/catalogs.sgml
index f999190..babb11c 100644
--- a/doc/src/sgml/catalogs.sgml
+++ b/doc/src/sgml/catalogs.sgml
@@ -427,6 +427,13 @@
 
  <tbody>
   <row>
+   <entry><structfield>oid</structfield></entry>
+   <entry><type>oid</type></entry>
+   <entry></entry>
+   <entry>Row identifier</entry>
+  </row>
+
+  <row>
    <entry><structfield>amname</structfield></entry>
    <entry><type>name</type></entry>
    <entry></entry>
@@ -683,6 +690,13 @@
 
  <tbody>
   <row>
+   <entry><structfield>oid</structfield></entry>
+   <entry><type>oid</type></entry>
+   <entry></entry>
+   <entry>Row identifier</entry>
+  </row>
+
+  <row>
    <entry><structfield>amopfamily</structfield></entry>
    <entry><type>oid</type></entry>
    <entry><literal><link linkend="catalog-pg-opfamily"><structname>pg_opfamily</structname></link>.oid</literal></entry>
@@ -819,6 +833,13 @@
 
  <tbody>
   <row>
+   <entry><structfield>oid</structfield></entry>
+   <entry><type>oid</type></entry>
+   <entry></entry>
+   <entry>Row identifier</entry>
+  </row>
+
+  <row>
    <entry><structfield>amprocfamily</structfield></entry>
    <entry><type>oid</type></entry>
    <entry><literal><link linkend="catalog-pg-opfamily"><structname>pg_opfamily</structname></link>.oid</literal></entry>
@@ -902,6 +923,13 @@
 
  <tbody>
   <row>
+   <entry><structfield>oid</structfield></entry>
+   <entry><type>oid</type></entry>
+   <entry></entry>
+   <entry>Row identifier</entry>
+  </row>
+
+  <row>
    <entry><structfield>adrelid</structfield></entry>
    <entry><type>oid</type></entry>
    <entry><literal><link linkend="catalog-pg-class"><structname>pg_class</structname></link>.oid</literal></entry>
@@ -1257,6 +1285,14 @@
  </thead>
 
  <tbody>
+
+  <row>
+   <entry><structfield>oid</structfield></entry>
+   <entry><type>oid</type></entry>
+   <entry></entry>
+   <entry>Row identifier</entry>
+  </row>
+
   <row>
    <entry><structfield>rolname</structfield></entry>
    <entry><type>name</type></entry>
@@ -1462,6 +1498,13 @@
 
  <tbody>
   <row>
+   <entry><structfield>oid</structfield></entry>
+   <entry><type>oid</type></entry>
+   <entry></entry>
+   <entry>Row identifier</entry>
+  </row>
+
+  <row>
    <entry><structfield>castsource</structfield></entry>
    <entry><type>oid</type></entry>
    <entry><literal><link linkend="catalog-pg-type"><structname>pg_type</structname></link>.oid</literal></entry>
@@ -1577,6 +1620,13 @@
 
  <tbody>
   <row>
+   <entry><structfield>oid</structfield></entry>
+   <entry><type>oid</type></entry>
+   <entry></entry>
+   <entry>Row identifier</entry>
+  </row>
+
+  <row>
    <entry><structfield>relname</structfield></entry>
    <entry><type>name</type></entry>
    <entry></entry>
@@ -1984,6 +2034,13 @@
 
  <tbody>
   <row>
+   <entry><structfield>oid</structfield></entry>
+   <entry><type>oid</type></entry>
+   <entry></entry>
+   <entry>Row identifier</entry>
+  </row>
+
+  <row>
    <entry><structfield>conname</structfield></entry>
    <entry><type>name</type></entry>
    <entry></entry>
@@ -2250,6 +2307,13 @@
 
  <tbody>
   <row>
+   <entry><structfield>oid</structfield></entry>
+   <entry><type>oid</type></entry>
+   <entry></entry>
+   <entry>Row identifier</entry>
+  </row>
+
+  <row>
    <entry><structfield>collname</structfield></entry>
    <entry><type>name</type></entry>
    <entry></entry>
@@ -2350,6 +2414,13 @@
 
  <tbody>
   <row>
+   <entry><structfield>oid</structfield></entry>
+   <entry><type>oid</type></entry>
+   <entry></entry>
+   <entry>Row identifier</entry>
+  </row>
+
+  <row>
    <entry><structfield>conname</structfield></entry>
    <entry><type>name</type></entry>
    <entry></entry>
@@ -2443,6 +2514,13 @@
 
  <tbody>
   <row>
+   <entry><structfield>oid</structfield></entry>
+   <entry><type>oid</type></entry>
+   <entry></entry>
+   <entry>Row identifier</entry>
+  </row>
+
+  <row>
    <entry><structfield>datname</structfield></entry>
    <entry><type>name</type></entry>
    <entry></entry>
@@ -2652,6 +2730,13 @@
 
  <tbody>
   <row>
+   <entry><structfield>oid</structfield></entry>
+   <entry><type>oid</type></entry>
+   <entry></entry>
+   <entry>Row identifier</entry>
+  </row>
+
+  <row>
    <entry><structfield>defaclrole</structfield></entry>
    <entry><type>oid</type></entry>
    <entry><literal><link linkend="catalog-pg-authid"><structname>pg_authid</structname></link>.oid</literal></entry>
@@ -3005,6 +3090,13 @@
 
  <tbody>
   <row>
+   <entry><structfield>oid</structfield></entry>
+   <entry><type>oid</type></entry>
+
[HACKERS] Add big fat caution to pg_restore docs regards partial db restores
Hi,

Adds a caution to the pg_restore docs. Against git master.

Karl <k...@meme.com>
Free Software:  "You don't pay back, you pay forward."
                 -- Robert A. Heinlein

diff --git a/doc/src/sgml/ref/pg_restore.sgml b/doc/src/sgml/ref/pg_restore.sgml
index 488d8dc..ad42d38 100644
--- a/doc/src/sgml/ref/pg_restore.sgml
+++ b/doc/src/sgml/ref/pg_restore.sgml
@@ -45,6 +45,13 @@
    portable across architectures.
   </para>
 
+  <caution>
+   <para>Restoring less than a complete database dump, especially
+   when <option>--disable-triggers</option> is used, can result in a loss
+   of referential, trigger, or application-enforced data
+   integrity.</para>
+  </caution>
+
   <para>
    <application>pg_restore</application> can operate in two modes.
    If a database name is specified, <application>pg_restore</application>
[HACKERS] Doc patch to See Also: CREATE TABLE AS in CREATE TABLE docs
Hi,

Patch to add CREATE TABLE AS to the "See Also" section of the CREATE TABLE docs. Against git master. (Builds, as do all the previous doc patches.)

Karl <k...@meme.com>
Free Software:  "You don't pay back, you pay forward."
                 -- Robert A. Heinlein

diff --git a/doc/src/sgml/ref/create_table.sgml b/doc/src/sgml/ref/create_table.sgml
index 445ca40..8872920 100644
--- a/doc/src/sgml/ref/create_table.sgml
+++ b/doc/src/sgml/ref/create_table.sgml
@@ -1453,6 +1453,7 @@ CREATE TABLE employees OF employee_type (
   <simplelist type="inline">
    <member><xref linkend="sql-altertable"></member>
    <member><xref linkend="sql-droptable"></member>
+   <member><xref linkend="sql-createtableas"></member>
    <member><xref linkend="sql-createtablespace"></member>
    <member><xref linkend="sql-createtype"></member>
   </simplelist>
Re: [HACKERS] Doc patch to note which system catalogs have oids
"Karl O. Pinc" <k...@meme.com> writes:
> The attached patch documents the oid column of those system catalogs
> having an oid.

I think this is fundamentally wrong, or at least misleading, because it documents OID as if it were an ordinary column. Somebody who did "select * from pg_class" and didn't see any oid in the result would think the docs were wrong.

It's possible that it's worth expending a boilerplate paragraph in each of those pages to say "this catalog has OIDs" (or that it doesn't). But this isn't the way.

			regards, tom lane
Re: [v9.3] Extra Daemons (Re: [HACKERS] elegant and effective way for running jobs inside a database)
On Monday, September 24, 2012 12:24 AM Alvaro Herrera wrote:
> Excerpts from Amit kapila's message of sáb sep 22 01:14:40 -0300 2012:
> ...
> You could also have worker groups commanded by one process: one queen
> bee, one or more worker bees. The queen determines what to do, sets
> tasklist info in shmem, and signals the worker bees. While the tasklist
> is empty, the workers would sleep. As you can see, there are many
> things that can be done with this.

Yes, this really is a good feature which can be used for many different functionalities.

With Regards,
Amit Kapila.