Re: [PATCHES] [pgsql-patches] Recalculating OldestXmin in a long-running vacuum

2007-02-03 Thread Heikki Linnakangas

Tom Lane wrote:

Heikki Linnakangas <[EMAIL PROTECTED]> writes:

I have two runs of DBT-2, one with the patch and one without.



Patched:


autovac public.stock scans:1 pages:1285990(-0) 
tuples:25303056(-2671265) CPU 95.22s/38.02u sec elapsed 10351.17 sec



Unpatched:


autovac public.stock scans:1 pages:1284504(-0) 
tuples:25001369(-1973760) CPU 86.55s/34.70u sec elapsed 9628.13 sec


So that makes this patch a good idea why?  (Maybe what we need to see
is the impact on the total elapsed time for the DBT-2 test, rather
than just the VACUUM runtime.)


The patch is a good idea because the vacuum collected more dead tuples, 
not because it makes vacuum run faster (it doesn't).


I believe the increase in runtime is caused by some random variation in 
the test. As I said, there's something going on on that server that I don't 
fully understand. I'll set up another set of tests.


The impact on the whole DBT-2 test is hard to measure; it would require 
a run long enough to reach a steady state where the tables are no longer 
growing because of dead tuples. We'll need to do that anyway, so 
hopefully I'll get a chance to measure the impact of this patch as well. 
The effect I'm expecting the patch to have is to make the steady-state 
size of the stock table smaller, giving more cache hits and less I/O.
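
One way to watch for that steady state during a long run (just a sketch, 
assuming the built-in size functions available since 8.1) would be to sample 
the table's size periodically and see where it levels off:

   -- heap size alone, and heap plus indexes and TOAST
   SELECT pg_relation_size('public.stock')       AS heap_bytes,
          pg_total_relation_size('public.stock') AS total_bytes;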



BTW I've got serious reservations about whether this bit is safe:


+   /* The table could've grown since vacuum started, and there
+    * might already be dead tuples on the new pages. Catch them
+    * as well. Also, we want to include any live tuples in the
+    * new pages in the statistics.
+    */
+   nblocks = RelationGetNumberOfBlocks(onerel);


I seem to recall some assumptions somewhere in the system that a vacuum
won't visit newly-added pages.


Hmm, I can't think of anything.

Without the patch, there can't be any dead tuples on the newly-added 
pages, according to the snapshot taken at the beginning of the vacuum, 
so it would be a waste of time to visit them.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
  choose an index scan if your joining column's datatypes do not
  match


[PATCHES] Fix database is ready race condition

2007-02-03 Thread Markus Schiltknecht

Hi,

is there a good reason to print the "database system is ready" message 
in StartupXLOG() in xlog.c? It has a) nothing to do with xlog and b) 
opens a small race condition: the message gets printed while it still 
takes some CPU cycles until the postmaster actually gets the SIGCHLD signal 
and sets StartupPID = 0. If you (or rather: an automated test program) 
try to connect within this timespan, you get a "database is starting up" 
error, which clearly contradicts the "is ready" message.


I admit this is not a real issue in the common case and only matters for 
automated testing or some such. But then again, it shouldn't break 
anything... (ereport is already used in the reaper, so I guess it's fine to 
use it in signal handlers). I'm not sure if the message is needed at all 
in BS_XLOG_BOOTSTRAP mode; it should probably say something different 
there.


Patch attached.

Regards

Markus

*** src/backend/access/transam/xlog.c	2191ee8ca338d74f666b4d3509cc4361c44b4353
--- src/backend/access/transam/xlog.c	e77a26a26ec46d4479563ed7ff5885ea9c21135a
*** StartupXLOG(void)
*** 5168,5176 
  	/* Reload shared-memory state for prepared transactions */
  	RecoverPreparedTransactions();
  
- 	ereport(LOG,
- 			(errmsg("database system is ready")));
- 
  	/* Shut down readFile facility, free space */
  	if (readFile >= 0)
  	{
--- 5168,5173 

*** src/backend/bootstrap/bootstrap.c	55fd17241f51b6f23131a0d36d5ce583aa7a3488
--- src/backend/bootstrap/bootstrap.c	8a54e88b06acad46c83320ca8fe13caa75ad77b9
*** BootstrapMain(int argc, char *argv[])
*** 418,423 
--- 418,425 
  			bootstrap_signals();
  			BootStrapXLOG();
  			StartupXLOG();
+ 			ereport(LOG,
+ 					(errmsg("database system is ready")));
  			break;
  
  		case BS_XLOG_STARTUP:

*** src/backend/postmaster/postmaster.c	561d13618e62e95a32b42b2e9305a638edacf24f
--- src/backend/postmaster/postmaster.c	5a567893b0ed78d312a19e7054127dc5b6b69df3
*** reaper(SIGNAL_ARGS)
*** 2040,2045 
--- 2040,2048 
  		if (StartupPID != 0 && pid == StartupPID)
  		{
  			StartupPID = 0;
+ 			ereport(LOG,
+ 					(errmsg("database system is ready")));
+ 
  			/* Note: FATAL exit of startup is treated as catastrophic */
  			if (!EXIT_STATUS_0(exitstatus))
  			{

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PATCHES] [HACKERS] \copy (query) delimiter syntax error

2007-02-03 Thread Andrew Dunstan



Michael Fuhr wrote:

On Sat, Feb 03, 2007 at 10:52:29AM -0600, Andrew Dunstan wrote:
  

I'd say fix psql. Not sure how far back we should backpatch it. It's
interesting that this has been there since 8.0 and is only now discovered.



The problem is new in 8.2 because COPY (query) doesn't support USING
DELIMITERS.  COPY tablename does, so it has worked all along.

  


oh, good point. OK, I have cut this quick patch that will continue to 
accept the legacy syntax in psql in non-inline-query cases, but will 
make psql unreservedly emit new style syntax for COPY to the backend.  
Does that seem reasonable, or is it too much of a change for the stable 
branch?
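
As a rough illustration (the table, file and delimiter below are invented 
for the example), the legacy spelling would still be accepted at the psql 
prompt for a plain table name, while the command actually sent to the 
backend would use the new-style option list:

   -- legacy spelling, still accepted by psql when a table name is given:
   \copy mytable from 'data.txt' using delimiters '|'
   -- roughly what psql would send to the server with this patch applied:
   COPY mytable FROM STDIN DELIMITER '|'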


cheers

andrew
Index: src/bin/psql/copy.c
===
RCS file: /cvsroot/pgsql/src/bin/psql/copy.c,v
retrieving revision 1.72
diff -c -r1.72 copy.c
*** src/bin/psql/copy.c	5 Jan 2007 22:19:49 -	1.72
--- src/bin/psql/copy.c	3 Feb 2007 19:37:34 -
***
*** 118,123 
--- 118,124 
  	char	   *token;
  	const char *whitespace = " \t\n\r";
  	char		nonstd_backslash = standard_strings() ? 0 : '\\';
+ 	bool		have_query = false;
  
  	if (args)
  		line = pg_strdup(args);
***
*** 163,168 
--- 164,170 
  			xstrcat(result->table, " ");
  			xstrcat(result->table, token);
  		}
+ 		have_query = true;
  	}
  
  	token = strtokx(NULL, whitespace, ".,()", "\"",
***
*** 268,291 
  	0, false, false, pset.encoding);
  
  	/*
! 	 * Allows old COPY syntax for backward compatibility 2002-06-19
  	 */
! 	if (token && pg_strcasecmp(token, "using") == 0)
! 	{
! 		token = strtokx(NULL, whitespace, NULL, NULL,
! 		0, false, false, pset.encoding);
! 		if (!(token && pg_strcasecmp(token, "delimiters") == 0))
! 			goto error;
! 	}
! 	if (token && pg_strcasecmp(token, "delimiters") == 0)
  	{
! 		token = strtokx(NULL, whitespace, NULL, "'",
! 		nonstd_backslash, true, false, pset.encoding);
! 		if (!token)
! 			goto error;
! 		result->delim = pg_strdup(token);
! 		token = strtokx(NULL, whitespace, NULL, NULL,
! 		0, false, false, pset.encoding);
  	}
  
  	if (token)
--- 270,297 
  	0, false, false, pset.encoding);
  
  	/*
! 	 * Allows old COPY syntax for backward compatibility.
! 	 * Skip if we have an inline query instead of a table name.
  	 */
! 	if (! have_query)
  	{
! 		if (token && pg_strcasecmp(token, "using") == 0)
! 		{
! 			token = strtokx(NULL, whitespace, NULL, NULL,
! 			0, false, false, pset.encoding);
! 			if (!(token && pg_strcasecmp(token, "delimiters") == 0))
! 				goto error;
! 		}
! 		if (token && pg_strcasecmp(token, "delimiters") == 0)
! 		{
! 			token = strtokx(NULL, whitespace, NULL, "'",
! 			nonstd_backslash, true, false, pset.encoding);
! 			if (!token)
! 				goto error;
! 			result->delim = pg_strdup(token);
! 			token = strtokx(NULL, whitespace, NULL, NULL,
! 			0, false, false, pset.encoding);
! 		}
  	}
  
  	if (token)
***
*** 480,511 
  
  	printfPQExpBuffer(&query, "COPY ");
  
- 	/* Uses old COPY syntax for backward compatibility 2002-06-19 */
- 	if (options->binary)
- 		appendPQExpBuffer(&query, "BINARY ");
- 
  	appendPQExpBuffer(&query, "%s ", options->table);
  
  	if (options->column_list)
  		appendPQExpBuffer(&query, "%s ", options->column_list);
  
- 	/* Uses old COPY syntax for backward compatibility 2002-06-19 */
- 	if (options->oids)
- 		appendPQExpBuffer(&query, "WITH OIDS ");
- 
  	if (options->from)
  		appendPQExpBuffer(&query, "FROM STDIN");
  	else
  		appendPQExpBuffer(&query, "TO STDOUT");
  
  
! 	/* Uses old COPY syntax for backward compatibility 2002-06-19 */
  	if (options->delim)
! 		emit_copy_option(&query, " USING DELIMITERS ", options->delim);
  
- 	/* There is no backward-compatible CSV syntax */
  	if (options->null)
! 		emit_copy_option(&query, " WITH NULL AS ", options->null);
  
  	if (options->csv_mode)
  		appendPQExpBuffer(&query, " CSV");
--- 486,513 
  
  	printfPQExpBuffer(&query, "COPY ");
  
  	appendPQExpBuffer(&query, "%s ", options->table);
  
  	if (options->column_list)
  		appendPQExpBuffer(&query, "%s ", options->column_list);
  
  	if (options->from)
  		appendPQExpBuffer(&query, "FROM STDIN");
  	else
  		appendPQExpBuffer(&query, "TO STDOUT");
  
  
! 	if (options->binary)
! 		appendPQExpBuffer(&query, " BINARY ");
! 
! 	if (options->oids)
! 		appendPQExpBuffer(&query, " OIDS ");
! 
  	if (options->delim)
! 		emit_copy_option(&query, " DELIMITER ", options->delim);
  
  	if (options->null)
! 		emit_copy_option(&query, " NULL AS ", options->null);
  
  	if (options->csv_mode)
  		appendPQExpBuffer(&query, " CSV");

---(end of broadcast)---
TIP 6: explain analyze is your friend


Re: [PATCHES] Index split WAL reduction

2007-02-03 Thread Bruce Momjian

Your patch has been added to the PostgreSQL unapplied patches list at:

http://momjian.postgresql.org/cgi-bin/pgpatches

It will be applied as soon as one of the PostgreSQL committers reviews
and approves it.

---


Heikki Linnakangas wrote:
 Here's an updated patch that fixes a bug with full_page_writes, and an 
 alignment issue in replay code. Also added a new test case to 
 crashtest.sh that exercises the fixed bug.
 
 Has anyone looked at this? I now consider it ready for committing, if 
 no-one sees a problem with it.
 
 Here's the original description of the patch:
 
  Currently, an index split writes all the data on the split page to 
  WAL. That's a lot of WAL traffic. The tuples that are copied to the 
  right page need to be WAL logged, but the tuples that stay on the 
  original page don't.
 
  Here's a patch to do that. It needs further testing, I have used the 
  attached crude crashtest.sh to test the basics, but we need to test 
  the more obscure cases like splitting non-leaf or root page.
 
  On a test case that inserts 1 rows in increasing key order with a 
  100 characters wide text-field as key, the patch reduced the total 
  generated WAL traffic from 45MB to 33MB, or ~ 25%. Your mileage may 
  vary, depending on the tuple and key sizes, and the order of inserts.
 
 -- 
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com

[ application/x-shellscript is not supported, skipping... ]


 
 ---(end of broadcast)---
 TIP 3: Have you checked our extensive FAQ?
 
http://www.postgresql.org/docs/faq

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 7: You can help support the PostgreSQL project by donating at

http://www.postgresql.org/about/donate


Re: [PATCHES] Index split WAL reduction

2007-02-03 Thread Bruce Momjian

Sorry, please disregard.  Newer patch being added.

---

Bruce Momjian wrote:
 
 Your patch has been added to the PostgreSQL unapplied patches list at:
 
   http://momjian.postgresql.org/cgi-bin/pgpatches
 
 It will be applied as soon as one of the PostgreSQL committers reviews
 and approves it.
 
 ---
 
 
 Heikki Linnakangas wrote:
  Here's an updated patch that fixes a bug with full_page_writes, and an 
  alignment issue in replay code. Also added a new test case to 
  crashtest.sh that exercises the fixed bug.
  
  Has anyone looked at this? I now consider it ready for committing, if 
  no-one sees a problem with it.
  
  Here's the original description of the patch:
  
   Currently, an index split writes all the data on the split page to 
   WAL. That's a lot of WAL traffic. The tuples that are copied to the 
   right page need to be WAL logged, but the tuples that stay on the 
   original page don't.
  
   Here's a patch to do that. It needs further testing, I have used the 
   attached crude crashtest.sh to test the basics, but we need to test 
   the more obscure cases like splitting non-leaf or root page.
  
   On a test case that inserts 1 rows in increasing key order with a 
   100 characters wide text-field as key, the patch reduced the total 
   generated WAL traffic from 45MB to 33MB, or ~ 25%. Your mileage may 
   vary, depending on the tuple and key sizes, and the order of inserts.
  
  -- 
 Heikki Linnakangas
 EnterpriseDB   http://www.enterprisedb.com
 
 [ application/x-shellscript is not supported, skipping... ]
 
 
  
  ---(end of broadcast)---
  TIP 3: Have you checked our extensive FAQ?
  
 http://www.postgresql.org/docs/faq
 
 -- 
   Bruce Momjian   [EMAIL PROTECTED]
   EnterpriseDB   http://www.enterprisedb.com
 
   + If your life is a hard drive, Christ can be your backup. +
 
 ---(end of broadcast)---
 TIP 7: You can help support the PostgreSQL project by donating at
 
 http://www.postgresql.org/about/donate

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


Re: [PATCHES] Index split WAL reduction

2007-02-03 Thread Bruce Momjian

Your patch has been added to the PostgreSQL unapplied patches list at:

http://momjian.postgresql.org/cgi-bin/pgpatches

It will be applied as soon as one of the PostgreSQL committers reviews
and approves it.

---


Heikki Linnakangas wrote:
 Here's an updated patch that fixes a bug with full_page_writes, and an 
 alignment issue in replay code. Also added a new test case to 
 crashtest.sh that exercises the fixed bug.
 
 Has anyone looked at this? I now consider it ready for committing, if 
 no-one sees a problem with it.
 
 Here's the original description of the patch:
 
  Currently, an index split writes all the data on the split page to 
  WAL. That's a lot of WAL traffic. The tuples that are copied to the 
  right page need to be WAL logged, but the tuples that stay on the 
  original page don't.
 
  Here's a patch to do that. It needs further testing, I have used the 
  attached crude crashtest.sh to test the basics, but we need to test 
  the more obscure cases like splitting non-leaf or root page.
 
  On a test case that inserts 1 rows in increasing key order with a 
  100 characters wide text-field as key, the patch reduced the total 
  generated WAL traffic from 45MB to 33MB, or ~ 25%. Your mileage may 
  vary, depending on the tuple and key sizes, and the order of inserts.
 
 -- 
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com

[ application/x-shellscript is not supported, skipping... ]


 
 ---(end of broadcast)---
 TIP 3: Have you checked our extensive FAQ?
 
http://www.postgresql.org/docs/faq

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


[PATCHES] Remove log segment and log_id fields from pg_controldata

2007-02-03 Thread Bruce Momjian

The original discussion of this patch was here:

http://archives.postgresql.org/pgsql-hackers/2006-11/msg00876.php

Your patch has been added to the PostgreSQL unapplied patches list at:

http://momjian.postgresql.org/cgi-bin/pgpatches

It will be applied as soon as one of the PostgreSQL committers reviews
and approves it.

---


Simon Riggs wrote:
 On Tue, 2006-12-05 at 17:26 -0500, Tom Lane wrote:
  Simon Riggs <[EMAIL PROTECTED]> writes:
   On Tue, 2006-12-05 at 16:24 -0500, Tom Lane wrote:
   Sure, what would happen is that every backend passing through this code
   would execute the several lines of computation needed to decide whether
   to call RequestCheckpoint.
  
   Right, but the calculation uses RedoRecPtr, which may not be completely
   up to date. So presumably you want to re-read the shared memory value
   again to make sure we are exactly accurate and allow only one person to
   call checkpoint? Either way we have to take a lock. Insert lock causes
   deadlock, so we would need to use infolock. 
  
  Not at all.  It's highly unlikely that RedoRecPtr would be so out of
  date as to result in a false request for a checkpoint, and if it does,
  so what?  Worst case is we perform an extra checkpoint.
 
 On its own, I wouldn't normally agree...
 
  Also, given the current structure of the routine, this is probably not
  the best place for that code at all --- it'd make more sense for it to
  be in the just-finished-a-segment code stretch, which would ensure that
  it's only done by one backend once per segment.
 
 But that's a much better plan, since it requires no locking.
 
 There are a lot more changes there for such a simple fix, though, and lots
 more potential for bugs, but I've coded it as you suggested and removed the
 fields from pg_control.
 
 The patch passes make check and applies cleanly on HEAD.
 pg_resetxlog and pg_controldata have been tested.
 
 
 -- 
   Simon Riggs 
   EnterpriseDB   http://www.enterprisedb.com
 

[ Attachment, skipping... ]

 
 ---(end of broadcast)---
 TIP 1: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [PATCHES] Bundle of patches

2007-02-03 Thread Bruce Momjian

I assume we have taken all the patches we want from here.

---

Teodor Sigaev wrote:
 The 1C company (http://www.1c.ru/) has kindly permitted us to publish a set of
 patches that we (Oleg and I) developed during our work on porting the
 '1C:Enterprise' system to the PostgreSQL database.
 
 We would like to suggest committing them to HEAD.
 
 
 1) Typmod for user-defined types
http://www.sigaev.ru/misc/user_defined_typmod-0.7.gz
Patch is based on ideas from
http://archives.postgresql.org/pgsql-hackers/2004-06/msg00932.php
http://archives.postgresql.org/pgsql-hackers/2005-08/msg01007.php
 
The patch adds two optional functions to a type definition: typmodinput and
typmodoutput. That allows developing user-defined types with type
modifiers. Built-in types use typmod input/output functions too.
Internally, a typmod is represented as a non-negative int4 value,
but the patch allows a list of integers to be given in the type definition. So
the NUMERIC type now works with the help of its typmodin/typmodout functions.
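For illustration only (the 'labels' table and the 'shortcode' type below are
hypothetical, and the exact CREATE TYPE options for attaching
typmodinput/typmodoutput are defined by the patch rather than shown here),
the idea is that a user-defined type could accept a modifier list the same
way built-in types already do:

   -- built-in behaviour today: (12,2) is handled by NUMERIC's typmod functions
   CREATE TABLE invoice (amount numeric(12,2));
   -- with the patch, a hypothetical user-defined type could take its own modifier
   CREATE TABLE labels (code shortcode(32));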
 
 
 2) ORDER BY .. [ NULLS ( FIRST | LAST ) ]
http://www.sigaev.ru/misc/NULLS_82-0.5.gz
Allows sorting NULLs as greater or less than any value. The goal was to
simplify migrations from MySQL/MS SQL, which consider NULL to be less than
any value. The syntax also conforms to SQL:2003. It operates at the gram.y
level and adds a 'field is [not] null' qualification to the sortClause.
Note that, to allow queries like 'select .. union .. order by f nulls first',
pgsql can now rewrite that query to
'select * from (select .. union ..) order by f nulls first'. This solves the
problem with the 'resjunk' column in SelectStmt->sortClause.
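As a usage sketch (table and column names invented for the example), the
SQL:2003 spelling this adds looks like this:

   -- put NULL prices first instead of last (by default PostgreSQL sorts
   -- NULLs as if they were greater than any non-NULL value)
   SELECT id, price FROM items ORDER BY price ASC NULLS FIRST;
   -- the UNION case that the rewrite described above is meant to handle
   SELECT f FROM t1 UNION SELECT f FROM t2 ORDER BY f NULLS LAST;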
 
 3) Allow to use index for IS [NOT] NULL
http://www.sigaev.ru/misc/indexnulls_82-0.6.gz
The patch was initially developed by Martijn van Oosterhout
<kleptog@svana.org>, but it has been reworked, and support for searching
NULLs has been added to GiST as well. The patch adds a new column named
amsearchnull to pg_am. To recognize an IS NULL clause,
ScanKey->sk_flags contains (SK_ISNULL & SK_INDEXFINDNULL) and
ScanKey->sk_strategy == BTEqualStrategyNumber. For IS NOT NULL,
ScanKey->sk_strategy == BTLessStrategyNumber; that's because NULLs are
treated as greater than any value. It might look somewhat odd that we use
btree strategy numbers for IS [NOT] NULL clauses even for GiST, but if
sk_flags contains SK_ISNULL we never call user-defined functions.
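A minimal usage sketch (table, column and index invented for the example);
the point of amsearchnull is that a predicate like the one below can be
answered by an index scan instead of a sequential scan, planner permitting:

   CREATE INDEX orders_shipped_idx ON orders (shipped_at);
   -- with the patch, IS NULL can be recognized as an indexable condition
   SELECT count(*) FROM orders WHERE shipped_at IS NULL;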
 
 4) OR clauses optimizations
http://www.sigaev.ru/misc/OR_82-0.6.gz
The patch can suggest new index paths to the optimizer for ORed clauses. It
uses the generate_bitmapscan and predicate_implied_by/predicate_refuted_by
machinery.
 
 4.1) Allow any useful common restriction clauses to be extracted from
OR-of-AND quals. It also allows several different operators to be combined
into one that can be used in an index scan.
SELECT
  a, b
FROM
  tst
WHERE ( a = 5 ) OR ( a > 5 AND b > 5 )
ORDER BY a, b
LIMIT 20;
Limit  (cost=0.00..2.95 rows=20 width=8) (actual time=0.271..0.677 rows=20 loops=1)
  ->  Index Scan using abidx on tst  (cost=0.00..3671.26 rows=24878 width=8) (actual time=0.265..0.611 rows=20 loops=1)
        Index Cond: (a >= 5)
        Filter: ((a = 5) OR ((a > 5) AND (b > 5)))
 4.2) When the OR clauses don't intersect and use the same index, it's possible
to simply concatenate the results of the index scans. For that, postgres may now
use an Append node; the Append node is modified to carry pathkeys.
 
SELECT
  a
FROM
  tst
WHERE ( a > 60000 AND a < 61000 ) OR ( a > 20000 AND a < 21000 )
ORDER BY a ASC
LIMIT 20;
Limit  (cost=0.00..39.86 rows=20 width=4) (actual time=0.364..0.883 rows=20 loops=1)
  ->  Result  (cost=0.00..4001.55 rows=2008 width=4) (actual time=0.359..0.824 rows=20 loops=1)
        ->  Append  (cost=0.00..4001.55 rows=2008 width=4) (actual time=0.349..0.742 rows=20 loops=1)
              ->  Index Scan using aidx on tst  (cost=0.00..2000.42 rows=990 width=4) (actual time=0.346..0.684 rows=20 loops=1)
                    Index Cond: ((a > 20000) AND (a < 21000))
              ->  Index Scan using aidx on tst  (cost=0.00..2001.12 rows=1018 width=4) (never executed)
                    Index Cond: ((a > 60000) AND (a < 61000))
 
Also, if there is an 'ORDER BY' clause, the child nodes may be ordered by their
ranges (compare this plan with the previous one).
SELECT
  a
FROM
  tst
WHERE ( a > 60000 AND a < 61000 ) OR ( a > 20000 AND a < 21000 )
ORDER BY a DESC
LIMIT 20;
Limit  (cost=0.00..39.86 rows=20 width=4) (actual time=0.162..0.651 rows=20 loops=1)
  ->  Result  (cost=0.00..4001.55 rows=2008 width=4) (actual time=0.157..0.589 rows=20 loops=1)
        ->  Append  (cost=0.00..4001.55 rows=2008 width=4) (actual time=0.149..0.511 rows=20 loops=1)
              ->  

Re: [PATCHES] [HACKERS] elog(FATAL)ing non-existent roles during client

2007-02-03 Thread Bruce Momjian

Your patch has been added to the PostgreSQL unapplied patches list at:

http://momjian.postgresql.org/cgi-bin/pgpatches

It will be applied as soon as one of the PostgreSQL committers reviews
and approves it.

---


Gavin Sherry wrote:
 On Tue, 5 Dec 2006, Gavin Sherry wrote:
 
  On Thu, 30 Nov 2006, Tom Lane wrote:
 
   Gavin Sherry <[EMAIL PROTECTED]> writes:
I wonder if we should check if the role exists for the other
authentication methods too? get_role_line() should be very cheap and it
would prevent unnecessary authentication work if we did it before
contacting, for example, the client ident server. Even with trust, it
would save work because otherwise we do not check if the user exists 
until
InitializeSessionUserId(), at which time we've already set up our proc entry
etc.
  
   This only saves work if the supplied ID is in fact invalid, which one
   would surely think isn't the normal case; otherwise it costs more.
 
  Yes.
 
   I could see doing this in the ident path, because contacting a remote
   ident server is certainly expensive on both sides.  I doubt it's a good
   idea in the trust case.
 
  Agreed. How about Kerberos too, applying the same logic?
 
 Attached is a patch that adds the checks.
 
 Gavin

[ Attachment, skipping... ]

 
 ---(end of broadcast)---
 TIP 9: In versions below 8.0, the planner will ignore your desire to
choose an index scan if your joining column's datatypes do not
match

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB   http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings