Re: [HACKERS] 27 second plan times
Gregory Stark [EMAIL PROTECTED] writes:
> I think there's still a problem here with some kind of n^2 behaviour for
> appends of very wide tables but I haven't quite nailed it yet. In any case
> is there any reason not to make the following small change to move the
> constraint exclusion ahead of the size estimates and index checks and save
> ourselves potentially a lot of work?

Applied along with some other hacking to reduce the costs of the lower-level functions that this example shows to be inefficient. They'd still be slow in large queries, whether CE applies or not.

			regards, tom lane

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?
       http://www.postgresql.org/docs/faq
Re: [HACKERS] [COMMITTERS] pgsql: Add --with-libxslt configure option
Andrew Dunstan wrote:
> Log Message:
> ---
> Add --with-libxslt configure option

It should be added to the installation documentation as well.

--
Peter Eisentraut
http://developer.postgresql.org/~petere/
Re: [HACKERS] Improving deadlock error messages
Neil Conway [EMAIL PROTECTED] writes:
> On Fri, 2007-04-20 at 02:55 -0400, Tom Lane wrote:
>> I don't think you've thought of quite all of the failure cases. One
>> that's a bit pressing is that a deadlock isn't necessarily confined to
>> objects in your own database.
>
> I'm not sure I follow. If we conditionally acquire the locks we need and
> always fall back to just printing the numeric OIDs, ISTM we should be able
> to avoid the infinite recursion problem. (If necessary, we can always
> special-case objects outside our own database, although I'm not sure
> that's necessary.)

Maybe so, but you're going to be writing quite a lot of duplicative code, because the existing routines you might have been thinking of using (lsyscache.c etc) don't behave that way.

The basic objection I've got to this is that it'll introduce a lot of complexity and fragility into a seldom-taken error-recovery path, which is almost by definition not well enough tested already.

			regards, tom lane
Re: [HACKERS] modifying the table function
Hi again,

It seems now that I am one step away from the end. So far I have succeeded in returning row by row from the backend to the frontend; I knew this from debugging. Now comes the point of displaying them directly, not waiting till the end of the query. These are the steps I took:

1) redefined 'PrintQueryResults' in common.c to be extern (not static as in the initial definition) to be able to use it elsewhere
2) added a declaration for 'PrintQueryResults' in common.h, to tell other files about it
3) removed the 'PrintQueryResults' invocation from 'SendQuery' in common.c
4) added #include "../bin/psql/common.h" to fe-exec.c
5) called 'PrintQueryResults' from within 'PQexecFinish', as the last statement in the while loop

When I gmake the project I receive the following error:

../../../src/interfaces/libpq/libpq.so: undefined reference to `PrintQueryTuples'
collect2: ld returned 1 exit status
gmake[3]: *** [initdb] Error 1
gmake[3]: Leaving directory `/home/grads/imehegaz/postgresql-8.2.3-b/src/bin/initdb'

I wonder what this error means and how to solve it?

Regards,
Islam Hegazy

- Original Message -
From: Tom Lane [EMAIL PROTECTED]
To: Islam Hegazy [EMAIL PROTECTED]
Cc: pgsql-hackers@postgresql.org
Sent: Wednesday, April 18, 2007 6:38 PM
Subject: Re: [HACKERS] modifying the table function

> Islam Hegazy [EMAIL PROTECTED] writes:
>> I wonder if I am on the right track or not and how to know such kind of
>> message sent from the server?
>
> Seems like you're doing it the hard way. Wouldn't it be easier to fix the
> client to display data before it's received the whole query result?
>
> 			regards, tom lane
Re: [HACKERS] Eliminating unnecessary left joins
But then what about the null values? Perhaps unique + not null is better?

Otto

2007/4/20, Nicolas Barbier [EMAIL PROTECTED]:
> 2007/4/16, Ottó Havasvölgyi [EMAIL PROTECTED]:
>> Eliminate the table T from the query/subquery if the following
>> requirements are satisfied:
>> 1. T is left joined
>> 2. T is referenced only in the join expression where it is left joined
>> 3. the left join's join expression is a simple equality expression like
>>    T1.C1=T2.C2; T1!=T2 and (T==T1 or T==T2)
>> 4. the column of T in the join expression is the primary key of T
>
> Condition 4 should be: the column of T in the join expression is a key of
> T (i.e. it doesn't need to be the PK, a UNIQUE constraint would be
> enough).
>
> This process can be done recursively (implementation doesn't have to be
> recursive, of course), to eliminate whole sub-trees of the join tree.
>
> Nicolas
>
> --
> Nicolas Barbier
> http://www.gnu.org/philosophy/no-word-attachments.html
[HACKERS] functions to obtain query plan
Hello,

For a particular query, are there any functions which can give me the start-up cost, total run cost, number of rows and width?

Thanks,
Sharat.
[HACKERS] [Fwd: PGBuildfarm member narwhal Branch HEAD Status changed from OK to InstallCheck failure]
I've been seeing this failure intermittently on Narwhal HEAD, and once on 8.1. Other branches have been OK, as have other animals running on the same physical box. Narwhal-HEAD is run more often than any other builds however.

Anyone have any idea what might be wrong? It seems unlikely to be a hardware issue given that it's the exact same test failures each time.

Regards, Dave.

 Original Message 
Subject: PGBuildfarm member narwhal Branch HEAD Status changed from OK to InstallCheck failure
Date: Fri, 20 Apr 2007 13:46:22 -0700 (PDT)
From: PG Build Farm [EMAIL PROTECTED]
To: [EMAIL PROTECTED], [EMAIL PROTECTED]

The PGBuildfarm member narwhal had the following event on branch HEAD:

Status changed from OK to InstallCheck failure

The snapshot timestamp for the build that triggered this notification is: 2007-04-20 20:00:01

The specs of this machine are:
OS: Windows Server 2003 R2 / 5.2.3790
Arch: i686
Comp: GCC / 3.4.2 (mingw-special)

For more information, see http://www.pgbuildfarm.org/cgi-bin/show_history.pl?nm=narwhal&br=HEAD
Re: [HACKERS] functions to obtain query plan
On Apr 21, 2007, at 4:46, sharath kumar wrote:
> For a particular query, are there any functions which can give me the
> start-up cost, total run-cost, number of rows and width?

-hackers is a list for discussion of development of PostgreSQL itself. Your question would probably be more appropriate in -general (which I am cc'ing) or perhaps -performance.

I believe EXPLAIN ANALYZE will provide what you're looking for. The PostgreSQL documentation is a good place to get started:

http://www.postgresql.org/docs/8.2/interactive/sql-explain.html
http://www.postgresql.org/docs/8.2/interactive/performance-tips.html

Hope this helps.

Michael Glaesemann
grzm seespotcode net
Re: [HACKERS] [COMMITTERS] pgsql: Add --with-libxslt configure option
Peter Eisentraut wrote:
> Andrew Dunstan wrote:
>> Log Message:
>> ---
>> Add --with-libxslt configure option
>
> It should be added to the installation documentation as well.

done

cheers

andrew
Re: [HACKERS] 27 second plan times
Tom Lane [EMAIL PROTECTED] writes:
> Applied along with some other hacking to reduce the costs of the
> lower-level functions that this example shows to be inefficient. They'd
> still be slow in large queries, whether CE applies or not.

BIG difference. The case that caused swapping and took almost 15m to plan is now down to 2.5s. The profile still looks a bit odd but I can't argue with the results.

[EMAIL PROTECTED]:/var/tmp/db$ gprof /usr/local/pgsql/bin/postgres gmon.out
Flat profile:

Each sample counts as 0.01 seconds.
  %   cumulative   self              self     total
 time   seconds   seconds    calls   s/call   s/call  name
24.36      2.46     2.46   418517     0.00     0.00  SearchCatCache
 7.33      3.20     0.74  2564235     0.00     0.00  hash_any
 6.34      3.84     0.64  4283964     0.00     0.00  hash_search_with_hash_value
 4.36      4.28     0.44   216316     0.00     0.00  list_nth_cell
 3.96      4.68     0.40  6535943     0.00     0.00  AllocSetAlloc
 3.37      5.02     0.34  4165664     0.00     0.00  _bt_compare
 2.67      5.29     0.27  2266696     0.00     0.00  MemoryContextAllocZeroAligned
...
                0.01    0.03    2000/424529      get_namespace_name [164]
                0.01    0.03    2001/424529      pg_class_aclmask [167]
                0.01    0.03    2001/424529      get_rel_name [163]
                0.01    0.03    2002/424529      has_subclass [165]
                1.21    2.69  204102/424529      get_attavgwidth [37]
                1.21    2.69  204308/424529      TupleDescInitEntry [36]
[632]    0.0    0.00    0.00  418517           SearchSysCache <cycle 9> [632]
                              418517               SearchCatCache <cycle 9> [15]

--
Gregory Stark
EnterpriseDB          http://www.enterprisedb.com
Re: [HACKERS] 27 second plan times
Gregory Stark [EMAIL PROTECTED] writes:
> Tom Lane [EMAIL PROTECTED] writes:
>> Applied along with some other hacking to reduce the costs of the
>> lower-level functions that this example shows to be inefficient. They'd
>> still be slow in large queries, whether CE applies or not.
>
> BIG difference. The case that caused swapping and took almost 15m to plan
> is now down to 2.5s. The profile still looks a bit odd but I can't argue
> with the results.

For what it's worth, if I defeat the same-column-position optimization, SearchCatCache and list_nth_cell pop back to the top. I had a version that kept the col_mappings and translated_vars lists in arrays to fix the n^2 list_nth_cell behaviour. But the SearchCatCache is something else.

53.77     87.36    87.36   4813018     0.00     0.00  SearchCatCache
11.29    105.71    18.35   1610810     0.00     0.00  list_nth_cell
 3.09    110.72     5.01  17738640     0.00     0.00  hash_any
 2.50    114.78     4.05  30612499     0.00     0.00  hash_search_with_hash_value
 2.16    118.29     3.51  54588745     0.00     0.00  AllocSetAlloc

--
Gregory Stark
EnterpriseDB          http://www.enterprisedb.com
Re: [HACKERS] 27 second plan times
Gregory Stark [EMAIL PROTECTED] writes:
> Tom Lane [EMAIL PROTECTED] writes:
>> Applied along with some other hacking to reduce the costs of the
>> lower-level functions that this example shows to be inefficient. They'd
>> still be slow in large queries, whether CE applies or not.
>
> BIG difference. The case that caused swapping and took almost 15m to plan
> is now down to 2.5s. The profile still looks a bit odd but I can't argue
> with the results.

I'm still feeling a bit annoyed with the behavior of the stats machinery (pgstat_initstats and related macros). Yesterday I fixed it so that pgstat_initstats doesn't perform redundant searches of the tabstat arrays, but there's still an issue, which is that any rel that's heap_opened or index_opened within a transaction is going to get a tabstat entry, whether any events are subsequently counted or not.

In typical scenarios I don't think this is a big deal, but in examples such as yours we're going to be sending a whole lot of all-zero tabstat messages; there'll be one for every heap or index that the planner even momentarily considered. That means more UDP traffic and more work for the stats collector. gprof won't show the resulting overhead since it doesn't know anything about kernel-level overhead or activity in the stats collector. (Hm, might be able to measure it in oprofile though...)

We could fix this by not doing pgstat_initstats at heap_open time, but postponing it until something more interesting happens. The trouble is that that'd add at least a small amount of overhead at the places where something more interesting is counted, since the pgstat macros would have to check validity of the tabstat pointer. The added overhead should be only about one extra comparison, but maybe that's enough to make someone object?

			regards, tom lane
[HACKERS] Re: [BUGS] BUG #3242: FATAL: could not unlock semaphore: error code 298
Magnus Hagander wrote:
> Tom Lane wrote:
>> Magnus Hagander [EMAIL PROTECTED] writes:
>>> No, it's definitely the right primitive. But we're creating it with a
>>> max count of 1.
>>
>> That's definitely wrong. There are at least three reasons for a PG
>> process's semaphore to be signaled (heavyweight lock release, LWLock
>> release, pin count waiter), and at least two of them can occur
>> concurrently (eg, if deadlock checker fires, it will need to take
>> LWLocks, but there's nothing saying that the original lock won't be
>> released while it waits for an LWLock). The effective max count on Unixen
>> is typically in the thousands, and I'd suggest the same on Windows unless
>> there's some efficiency reason to keep it small (in which case, maybe ten
>> would do).
>
> AFAIK there's no problem with huge numbers (it takes an int32, and the
> documentation says nothing about a limit - I'm sure it's just a 32-bit
> counter in the kernel). I'll give that a shot.

Magnus, Tom, thank you for finding what causes the problem :) I hope that was also the reason why other transactions were hung (because that is a priority, I think).

> Marcin - can you test a source patch? Or should I try to build you a
> binary for testing? It'd be good if you can confirm that it works before
> we commit anything, I think.

Of course I will check the fix :) I will be able to do tests on Monday. I think a source patch should be enough, although I've never built PostgreSQL on Windows (I definitely should try). If I have problems then I will ask you for a binary.

Regards,
Marcin
Re: [HACKERS] Fragmentation project
Well, I'm thinking of defining (maybe via SQL) a set of servers as a cluster and making the fragmentation rules based on select clauses, storing this configuration in a specific catalog in the global schema. For example: when a record is inserted on a server which does not store this fragment (no rule matches), it will be automatically moved to the correct server (the server that matches that record), transparently to the user. This process involves transaction management and a lot more validation.

A proposal is:

    create partition <name> on <site> as <select clause>

This creates a partition on this site with the rules of the select clause. We can allow, for example, users to create a foreign key on one site referencing a table stored on another (or others). In this case, the foreign key would be created in the global schema but the validations of the local schemas would be managed by the DRDBMS.

These are my ideas. Is this supported today?

Gustavo.

P.S.: sorry for the English mistakes...

On 4/19/07, Heikki Linnakangas [EMAIL PROTECTED] wrote:
> Gustavo Tonini wrote:
>> I'm thinking to design and implement data fragmentation based on catalogs
>> inside pgcluster as my university final project. I would like suggestions
>> and would be happy if anyone helped me to define it. Sorry if this is the
>> incorrect list... I had no answer on pgcluster-general...
>
> This is the correct list. I'm not sure what you mean by data
> fragmentation, but we do support horizontal partitioning by table
> constraints. And toasting is like vertical partitioning. What exactly are
> you thinking of implementing?
>
> --
> Heikki Linnakangas
> EnterpriseDB   http://www.enterprisedb.com
Re: [HACKERS] Improving deadlock error messages
On Sat, 2007-04-21 at 02:38 -0400, Tom Lane wrote:
> Maybe so, but you're going to be writing quite a lot of duplicative code,
> because the existing routines you might have been thinking of using
> (lsyscache.c etc) don't behave that way.

Right, I'm envisioning doing a conditional LockAcquire and then heap_open() / heap_getnext() by hand. That will be relatively slow, but code that emits a deadlock error message is almost by definition not performance critical.

BTW, another alternative would be to set a global variable instructing LockAcquire() to not block waiting for a lock; instead, it would longjmp(), a la elog(ERROR). You could even construct something similar to PG_TRY():

    PG_COND_LOCK();
    {
        /* do various things that might acquire lmgr locks */
    }
    PG_ACQUIRE_FAILED()
    {
        /* failed to acquire an lmgr lock */
    }
    PG_END_COND_LOCK();

The risk would be leaving the LockAcquire() call site in an inconsistent state when we longjmp(), but since DeadLockReport() is going to ereport(ERROR) anyway, it might be sufficiently safe. This scheme does seem a bit fragile, though...

-Neil
Re: [HACKERS] Improving deadlock error messages
On Sat, 2007-04-21 at 17:56 -0400, Neil Conway wrote:
> Right, I'm envisioning doing a conditional LockAcquire and then heap_open()
> / heap_getnext() by hand. That will be relatively slow, but code that
> emits a deadlock error message is almost by definition not performance
> critical.

... although it turns out you'd need to conditionally lock a *lot* of system catalogs to guarantee that you're not going to block on a lock at some point. Needless to say, that approach would be pretty ugly and fragile.

> BTW, another alternative would be to set a global variable instructing
> LockAcquire() to not block waiting for a lock; instead, it would
> longjmp(), a la elog(ERROR). You could even construct something similar to
> PG_TRY()

Attached is a very quick hack of a patch to do this.

-Neil

Index: src/backend/storage/lmgr/deadlock.c
===================================================================
RCS file: /home/neilc/postgres/cvs_root/pgsql/src/backend/storage/lmgr/deadlock.c,v
retrieving revision 1.47
diff -c -p -r1.47 deadlock.c
*** src/backend/storage/lmgr/deadlock.c	20 Apr 2007 20:15:52 -	1.47
--- src/backend/storage/lmgr/deadlock.c	21 Apr 2007 23:43:13 -
***************
*** 25,34 ****
--- 25,39 ----
   */
  #include "postgres.h"
+ #include "access/htup.h"
+ #include "commands/dbcommands.h"
  #include "lib/stringinfo.h"
  #include "miscadmin.h"
  #include "storage/proc.h"
+ #include "utils/builtins.h"
+ #include "utils/lsyscache.h"
  #include "utils/memutils.h"
+ #include "utils/syscache.h"
 
  /* One edge in the waits-for graph */
*************** static bool FindLockCycleRecurse(PGPROC
*** 73,78 ****
--- 78,85 ----
  static bool ExpandConstraints(EDGE *constraints, int nConstraints);
  static bool TopoSort(LOCK *lock, EDGE *constraints, int nConstraints,
  					 PGPROC **ordering);
+ static char *format_relation(Oid reloid);
+ static const char *format_database(Oid dboid);
 
  #ifdef DEBUG_DEADLOCK
  static void PrintLockQueue(LOCK *lock, const char *info);
*************** DescribeLockTag(StringInfo buf, const LO
*** 846,875 ****
  {
  		case LOCKTAG_RELATION:
  			appendStringInfo(buf,
! 							 _("relation %u of database %u"),
! 							 lock->locktag_field2,
! 							 lock->locktag_field1);
  			break;
  		case LOCKTAG_RELATION_EXTEND:
  			appendStringInfo(buf,
! 							 _("extension of relation %u of database %u"),
! 							 lock->locktag_field2,
! 							 lock->locktag_field1);
  			break;
  		case LOCKTAG_PAGE:
  			appendStringInfo(buf,
! 							 _("page %u of relation %u of database %u"),
  							 lock->locktag_field3,
! 							 lock->locktag_field2,
! 							 lock->locktag_field1);
  			break;
  		case LOCKTAG_TUPLE:
  			appendStringInfo(buf,
! 							 _("tuple (%u,%u) of relation %u of database %u"),
  							 lock->locktag_field3,
  							 lock->locktag_field4,
! 							 lock->locktag_field2,
! 							 lock->locktag_field1);
  			break;
  		case LOCKTAG_TRANSACTION:
  			appendStringInfo(buf,
--- 853,882 ----
  {
  		case LOCKTAG_RELATION:
  			appendStringInfo(buf,
! 							 _("relation %s of database %s"),
! 							 format_relation(lock->locktag_field2),
! 							 format_database(lock->locktag_field1));
  			break;
  		case LOCKTAG_RELATION_EXTEND:
  			appendStringInfo(buf,
! 							 _("extension of relation %s of database %s"),
! 							 format_relation(lock->locktag_field2),
! 							 format_database(lock->locktag_field1));
  			break;
  		case LOCKTAG_PAGE:
  			appendStringInfo(buf,
! 							 _("page %u of relation %s of database %s"),
  							 lock->locktag_field3,
! 							 format_relation(lock->locktag_field2),
! 							 format_database(lock->locktag_field1));
  			break;
  		case LOCKTAG_TUPLE:
  			appendStringInfo(buf,
! 							 _("tuple (%u,%u) of relation %s of database %s"),
  							 lock->locktag_field3,
  							 lock->locktag_field4,
! 							 format_relation(lock->locktag_field2),
! 							 format_database(lock->locktag_field1));
  			break;
  		case LOCKTAG_TRANSACTION:
  			appendStringInfo(buf,
*************** DescribeLockTag(StringInfo buf, const LO
*** 878,887 ****
  			break;
  		case LOCKTAG_OBJECT:
  			appendStringInfo(buf,
! 							 _("object %u of class %u of database %u"),
  							 lock->locktag_field3,
  							 lock->locktag_field2,
! 							 lock->locktag_field1);
  			break;
  		case LOCKTAG_USERLOCK:
  			/* reserved for old contrib code, now on pgfoundry */
--- 885,894 ----
  			break;
  		case LOCKTAG_OBJECT:
  			appendStringInfo(buf,
! 							 _("object %u of class %u of database %s"),
  							 lock->locktag_field3,
  							 lock->locktag_field2,
! 							 format_database(lock->locktag_field1));
  			break;
  		case LOCKTAG_USERLOCK:
  			/* reserved for old contrib code, now on pgfoundry */
*************** DescribeLockTag(StringInfo buf, const LO
*** 907,912 ****
--- 914,977 ----
  	}
  }
 
+ static char *
+ format_relation(Oid reloid)
+ {
+ 	char *result;
+ 
+ 	PG_LOCK_NOWAIT();
+ 	{
+ 		HeapTuple class_tup;
+ 
+ 		class_tup = SearchSysCache(RELOID,
+ 								   ObjectIdGetDatum(reloid),
+
[HACKERS] PgAdmin pt_BR traduction
Hello,

I translated some things and corrected many others in the Portuguese (Brazil) translation of pgadmin3. It had many things translated incorrectly, and many things with the translation of something else. I would like to know how I can make this small contribution available.

--
Nabucodonosor Coutinho
Database Administrator
Accu Hosting - www.accuhosting.com
Re: [HACKERS] PgAdmin pt_BR traduction
Nabucodonosor Coutinho wrote:
> I translated some things and I corrected many others in the translation of
> pgadmin3 for Portuguese of Brazil. It incorrectly had much translated
> thing, and much thing with translation of one another one. It would like
> to know you as I can disponibilizar this my small contribution.

This is not the right list. Just send your updated version to [EMAIL PROTECTED] so one of the committers can apply it.

--
Euler Taveira de Oliveira
http://www.timbira.com/
Re: [HACKERS] [Fwd: PGBuildfarm member narwhal Branch HEAD Status changed from OK to InstallCheck failure]
Dave Page [EMAIL PROTECTED] writes:
> I've been seeing this failure intermittently on Narwhal HEAD, and once on
> 8.1. Other branches have been OK, as have other animals running on the
> same physical box. Narwhal-HEAD is run more often than any other builds
> however.
> Anyone have any idea what might be wrong? It seems unlikely to be a
> hardware issue given that it's the exact same test failures each time.

Yeah, I'd been wondering about that too, but have no clue what's up. It seems particularly odd that all the failures are in installcheck not check.

If you want to poke at it, I'd suggest changing the ERROR to PANIC (it's in bufmgr.c) to cause a core dump, run installchecks till you get a panic, and then look around in the dump to see what you can find. It'd be particularly interesting to see what the buffer actually contains. Also you could look at the corresponding page of the disk file (which in theory should be the same as the buffer contents, since this error check is only made just after a read() ...)

			regards, tom lane
Re: [HACKERS] PgAdmin pt_BR traduction
Oops, an error when typing the name of the list. Apologies.

2007/4/21, Euler Taveira de Oliveira [EMAIL PROTECTED]:
> Nabucodonosor Coutinho wrote:
>> I translated some things and I corrected many others in the translation
>> of pgadmin3 for Portuguese of Brazil. It incorrectly had much translated
>> thing, and much thing with translation of one another one. It would like
>> to know you as I can disponibilizar this my small contribution.
>
> This is not the right list. Just send your updated version to
> [EMAIL PROTECTED] so one of the committers can apply it.
>
> --
> Euler Taveira de Oliveira
> http://www.timbira.com/

--
Nabucodonosor Coutinho
Database Administrator
Accu Hosting - www.accuhosting.com
Re: [HACKERS] 27 second plan times
I wrote:
> I'm still feeling a bit annoyed with the behavior of the stats machinery
> (pgstat_initstats and related macros). ... That means more UDP traffic and
> more work for the stats collector. gprof won't show the resulting overhead
> since it doesn't know anything about kernel-level overhead or activity in
> the stats collector. (Hm, might be able to measure it in oprofile
> though...)

I spent some time with oprofile and couldn't muster any evidence suggesting that this was accounting for more than 1% or so of total runtime. So for the moment I'll leave it alone. It might eventually become worth worrying about, though.

The thing I saw as being more interesting than the tabstat overhead is that the planner does RelationGetNumberOfBlocks (ie, an lseek kernel call) on every child rel ... and would do it on every index of every child rel, too, if the example had any. It would be nice if we could postpone all of the work of get_relation_info() until after we've checked for constraint exclusion. This looks like it'd require some nontrivial refactoring though --- in particular, I'm not sure how we'd handle the total_table_pages calculation. If you're satisfied with the performance as it now stands, let's leave this for the maybe-do-someday list.

			regards, tom lane