Re: [HACKERS] Building Postgresql under Windows question

2009-04-28 Thread Tom Lane
"Dann Corbit"  writes:
>> From: Andrew Dunstan [mailto:and...@dunslane.net]

>> Why won't
>> PQstatus(conn) == CONNECTION_OK
>> be true and thus the code will succeed without requiring a password?

> It returns the value CONNECTION_STARTED

It certainly shouldn't.  You're effectively asserting that PQconnectdb
is broken for everyone on every platform, which is demonstrably not the
case.  Are you fooling with modified libpq code by any chance?

regards, tom lane

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Building Postgresql under Windows question

2009-04-28 Thread Dann Corbit
pg_ctl.exe is exiting with a success code from line 1946 of pg_ctl.c

This is not appropriate behavior for a service unless shutdown has been
requested.

> -Original Message-
> From: pgsql-hackers-ow...@postgresql.org [mailto:pgsql-hackers-
> ow...@postgresql.org] On Behalf Of Dann Corbit
> Sent: Tuesday, April 28, 2009 11:47 AM
> To: pgsql-hackers@postgresql.org
> Subject: [HACKERS] Building Postgresql under Windows question
> 
> We are still having trouble with the service controller pg_ctl.exe
> vanishing after some period of time.
> 
> Hence I am doing debug builds from the source tree according to the
> instructions found at:
> http://developer.postgresql.org/pgdocs/postgres/install-win32-full.html
> 
> Specifically, I am using this method:
> It is also possible to build from inside the Visual Studio GUI. In this
> case, you need to run:
>  perl mkvcbuild.pl
> 
> There are a few projects that will not build (only 7 out of 99).
> 
> I have not installed ossp-uuid yet, so one of the failures is expected.
> I am not concerned at all about the CONTRIB project failures.
> However, on the internationalization failures, I want to know what the
> failures mean.
> Here is the output of the debug build process from the Visual Studio IDE
> (includes all failures):
> 
> 1>-- Build started: Project: utf8_and_euc_kr, Configuration: Debug
> Win32 --
> 1>Generate DEF file
> 1>Not re-generating UTF8_AND_EUC_KR.DEF, file already exists.
> 1>Linking...
> 1>utf8_and_euc_kr.def : error LNK2001: unresolved external symbol
> euc_jp_to_utf8
> 1>utf8_and_euc_kr.def : error LNK2001: unresolved external symbol
> pg_finfo_euc_jp_to_utf8
> 1>utf8_and_euc_kr.def : error LNK2001: unresolved external symbol
> pg_finfo_utf8_to_euc_jp
> 1>utf8_and_euc_kr.def : error LNK2001: unresolved external symbol
> utf8_to_euc_jp
> 1>Debug\utf8_and_euc_kr\utf8_and_euc_kr.lib : fatal error LNK1120: 4
> unresolved externals
> 2>-- Build started: Project: utf8_and_euc_cn, Configuration: Debug
> Win32 --
> 3>-- Build started: Project: euc_kr_and_mic, Configuration: Debug
> Win32 --
> 1>Build log was saved at
> "file://c:\dcorbit64\postgresql\postgresql-8.3.7\Debug\utf8_and_euc_kr\BuildLog.htm"
> 1>utf8_and_euc_kr - 5 error(s), 0 warning(s)
> 2>Generate DEF file
> 3>Generate DEF file
> 3>Not re-generating EUC_KR_AND_MIC.DEF, file already exists.
> 2>Not re-generating UTF8_AND_EUC_CN.DEF, file already exists.
> 3>Linking...
> 2>Linking...
> 3>euc_kr_and_mic.def : error LNK2001: unresolved external symbol
> euc_jp_to_mic
> 2>utf8_and_euc_cn.def : error LNK2001: unresolved external symbol
> koi8r_to_utf8
> 2>utf8_and_euc_cn.def : error LNK2001: unresolved external symbol
> pg_finfo_koi8r_to_utf8
> 3>euc_kr_and_mic.def : error LNK2001: unresolved external symbol
> euc_jp_to_sjis
> 2>utf8_and_euc_cn.def : error LNK2001: unresolved external symbol
> pg_finfo_utf8_to_koi8r
> 3>euc_kr_and_mic.def : error LNK2001: unresolved external symbol
> mic_to_euc_jp
> 2>utf8_and_euc_cn.def : error LNK2001: unresolved external symbol
> utf8_to_koi8r
> 3>euc_kr_and_mic.def : error LNK2001: unresolved external symbol
> mic_to_sjis
> 2>Debug\utf8_and_euc_cn\utf8_and_euc_cn.lib : fatal error LNK1120: 4
> unresolved externals
> 3>euc_kr_and_mic.def : error LNK2001: unresolved external symbol
> pg_finfo_euc_jp_to_mic
> 3>euc_kr_and_mic.def : error LNK2001: unresolved external symbol
> pg_finfo_euc_jp_to_sjis
> 3>euc_kr_and_mic.def : error LNK2001: unresolved external symbol
> pg_finfo_mic_to_euc_jp
> 3>euc_kr_and_mic.def : error LNK2001: unresolved external symbol
> pg_finfo_mic_to_sjis
> 3>euc_kr_and_mic.def : error LNK2001: unresolved external symbol
> pg_finfo_sjis_to_euc_jp
> 3>euc_kr_and_mic.def : error LNK2001: unresolved external symbol
> pg_finfo_sjis_to_mic
> 3>euc_kr_and_mic.def : error LNK2001: unresolved external symbol
> sjis_to_euc_jp
> 3>euc_kr_and_mic.def : error LNK2001: unresolved external symbol
> sjis_to_mic
> 3>Debug\euc_kr_and_mic\euc_kr_and_mic.lib : fatal error LNK1120: 12
> unresolved externals
> 2>Build log was saved at
> "file://c:\dcorbit64\postgresql\postgresql-8.3.7\Debug\utf8_and_euc_cn\BuildLog.htm"
> 2>utf8_and_euc_cn - 5 error(s), 0 warning(s)
> 4>-- Build started: Project: utf8_and_shift_jis_2004,
> Configuration:
> Debug Win32 --
> 4>Generate DEF file
> 3>Build log was saved at
> "file://c:\dcorbit64\postgresql\postgresql-8.3.7\Debug\euc_kr_and_mic\BuildLog.htm"
> 3>euc_kr_and_mic - 13 error(s), 0 warning(s)
> 4>Not re-generating UTF8_AND_SHIFT_JIS_2004.DEF, file already exists.
> 5>-- Build started: Project: uuid-ossp, Configuration: Debug Win32
> --
> 5>Compiling...
> 4>Linking...
> 5>uuid-ossp.c
> 4>utf8_and_shift_jis_2004.def : error LNK2001: unresolved external
> symbol pg_finfo_uhc_to_utf8
> 4>utf8_and_shift_jis_2004.def : error LNK2001: unresolved external
> symbol pg_finfo_utf8_to_uhc
> 4>utf8_and_shift_jis_2004.def : error LNK2001: unresolved external
> symbol uhc_to_u

Re: [HACKERS] docs: mention autovacuum when ANALYZE is recommended

2009-04-28 Thread Tom Lane
Alvaro Herrera  writes:
> Bruce asked me to look for places in the docs that mention that an
> ANALYZE is recommended, to mention the possibility that autovacuum takes
> care of it.  This patch does that.

I think you found the right places to touch, but is "let the autovacuum
daemon do it" sufficient?  It seems like that needs some qualifiers
about whether autovacuum is enabled, how long you should expect to wait
for the stats to get updated, etc.  It's probably not a good idea to
duplicate all that in each place, but maybe a link to the main
documentation about autovacuum is reasonable in each place.

regards, tom lane



Re: [HACKERS] Building Postgresql under Windows question

2009-04-28 Thread Andrew Dunstan



Dann Corbit wrote:

-Original Message-
From: Andrew Dunstan [mailto:and...@dunslane.net]
Sent: Tuesday, April 28, 2009 2:27 PM
To: Dann Corbit
Cc: pgsql-hackers@postgresql.org
Subject: Re: [HACKERS] Building Postgresql under Windows question



Dann Corbit wrote:


In this function:
static bool
test_postmaster_connection(bool do_checkpoint)

This code will never succeed:
snprintf(connstr, sizeof(connstr),
 "dbname=postgres port=%s connect_timeout=5",
portstr);

for (i = 0; i < wait_seconds; i++)
{
if ((conn = PQconnectdb(connstr)) != NULL &&
(PQstatus(conn) == CONNECTION_OK ||
 PQconnectionNeedsPassword(conn)))
{
PQfinish(conn);
success = true;
break;
}


Because pg_hba.conf has this:
# TYPE  DATABASEUSERCIDR-ADDRESS  METHOD

# IPv4 local connections:
hostall all 127.0.0.1/32  trust
# IPv6 local connections:
hostall all ::1/128   trust

to allow connections from the local machine.

(A password is not needed to connect, so the code always fails).


  

Why won't

PQstatus(conn) == CONNECTION_OK

be true and thus the code will succeed without requiring a password?



It returns the value CONNECTION_STARTED

I have found a work-around for now.

If I set the method to password in pg_hba.conf, the service starts and
runs correctly.

It is only when the method is set to trust that we get 100% failures.


  


Then that looks like a libpq bug :-( According to the docs:

"The status can be one of a number of values. However, only two of these 
are seen outside of an asynchronous connection procedure: CONNECTION_OK 
and CONNECTION_BAD."


cheers

andrew



Re: [HACKERS] Building Postgresql under Windows question

2009-04-28 Thread Dann Corbit
> -Original Message-
> From: Andrew Dunstan [mailto:and...@dunslane.net]
> Sent: Tuesday, April 28, 2009 2:27 PM
> To: Dann Corbit
> Cc: pgsql-hackers@postgresql.org
> Subject: Re: [HACKERS] Building Postgresql under Windows question
> 
> 
> 
> Dann Corbit wrote:
> > In this function:
> > static bool
> > test_postmaster_connection(bool do_checkpoint)
> >
> > This code will never succeed:
> > snprintf(connstr, sizeof(connstr),
> >  "dbname=postgres port=%s connect_timeout=5",
> > portstr);
> >
> > for (i = 0; i < wait_seconds; i++)
> > {
> > if ((conn = PQconnectdb(connstr)) != NULL &&
> > (PQstatus(conn) == CONNECTION_OK ||
> >  PQconnectionNeedsPassword(conn)))
> > {
> > PQfinish(conn);
> > success = true;
> > break;
> > }
> >
> >
> > Because pg_hba.conf has this:
> > # TYPE  DATABASEUSERCIDR-ADDRESS  METHOD
> >
> > # IPv4 local connections:
> > hostall all 127.0.0.1/32  trust
> > # IPv6 local connections:
> > hostall all ::1/128   trust
> >
> > to allow connections from the local machine.
> >
> > (A password is not needed to connect, so the code always fails).
> >
> >
> 
> 
> Why won't
> 
>   PQstatus(conn) == CONNECTION_OK
> 
> be true and thus the code will succeed without requiring a password?

It returns the value CONNECTION_STARTED

I have found a work-around for now.

If I set the method to password in pg_hba.conf, the service starts and
runs correctly.

It is only when the method is set to trust that we get 100% failures.




[HACKERS] docs: mention autovacuum when ANALYZE is recommended

2009-04-28 Thread Alvaro Herrera
Hi,

Bruce asked me to look for places in the docs that mention that an
ANALYZE is recommended, to mention the possibility that autovacuum takes
care of it.  This patch does that.

-- 
Alvaro Herrerahttp://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.
Index: doc/src/sgml/backup.sgml
===
RCS file: /home/alvherre/Code/cvs/pgsql/doc/src/sgml/backup.sgml,v
retrieving revision 2.124
diff -c -p -r2.124 backup.sgml
*** doc/src/sgml/backup.sgml	7 Apr 2009 00:31:25 -	2.124
--- doc/src/sgml/backup.sgml	28 Apr 2009 21:38:12 -
*** pg_dump -h host1 vacuumdb -a -z; this is
  equivalent to running VACUUM ANALYZE on each database
! manually.  For more advice on how to load large amounts of data
  into PostgreSQL efficiently, refer to .
 
--- 171,178 
  database so the query optimizer has useful statistics. An easy way
  to do this is to run vacuumdb -a -z; this is
  equivalent to running VACUUM ANALYZE on each database
! manually.  Alternatively, you can let the autovacuum daemon do it.
! For more advice on how to load large amounts of data
  into PostgreSQL efficiently, refer to .
 
Index: doc/src/sgml/indices.sgml
===
RCS file: /home/alvherre/Code/cvs/pgsql/doc/src/sgml/indices.sgml,v
retrieving revision 1.76
diff -c -p -r1.76 indices.sgml
*** doc/src/sgml/indices.sgml	7 Feb 2009 20:05:44 -	1.76
--- doc/src/sgml/indices.sgml	28 Apr 2009 19:48:25 -
*** SELECT am.amname AS index_method,
*** 1012,1018 
 
  
   Always run 
!  first.  This command
   collects statistics about the distribution of the values in the
   table.  This information is required to guess the number of rows
   returned by a query, which is needed by the planner to assign
--- 1012,1018 
 
  
   Always run 
!  first (or ensure that the autovacuum daemon did it recently).  This command
   collects statistics about the distribution of the values in the
   table.  This information is required to guess the number of rows
   returned by a query, which is needed by the planner to assign
Index: doc/src/sgml/perform.sgml
===
RCS file: /home/alvherre/Code/cvs/pgsql/doc/src/sgml/perform.sgml,v
retrieving revision 1.69
diff -c -p -r1.69 perform.sgml
*** doc/src/sgml/perform.sgml	13 Dec 2008 19:13:43 -	1.69
--- doc/src/sgml/perform.sgml	28 Apr 2009 19:47:58 -
*** SELECT * FROM x, y, a, b, c WHERE someth
*** 961,967 
  table.  With no statistics or obsolete statistics, the planner might
  make poor decisions during query planning, leading to poor
  performance on any tables with inaccurate or nonexistent
! statistics.
 

  
--- 961,969 
  table.  With no statistics or obsolete statistics, the planner might
  make poor decisions during query planning, leading to poor
  performance on any tables with inaccurate or nonexistent
! statistics.  Note that it is possible that the autovacuum daemon
! will automatically run ANALYZE for you in some
! cases, if enabled.
 

  
*** SELECT * FROM x, y, a, b, c WHERE someth
*** 1015,1021 
   
   

!Run ANALYZE afterwards.

   
  
--- 1017,1024 
   
   

!Run ANALYZE afterwards.  (Or let the autovacuum daemon
!do it.)

   
  
*** SELECT * FROM x, y, a, b, c WHERE someth
*** 1041,1047 
  while loading the data, but don't bother increasing
  maintenance_work_mem; rather, you'd do that while
  manually recreating indexes and foreign keys afterwards.
! And don't forget to ANALYZE when you're done.
 


--- 1044,1051 
  while loading the data, but don't bother increasing
  maintenance_work_mem; rather, you'd do that while
  manually recreating indexes and foreign keys afterwards.
! And don't forget to ANALYZE when you're done, or let
! the autovacuum daemon do it.
 


Index: doc/src/sgml/ref/pg_dump.sgml
===
RCS file: /home/alvherre/Code/cvs/pgsql/doc/src/sgml/ref/pg_dump.sgml,v
retrieving revision 1.113
diff -c -p -r1.113 pg_dump.sgml
*** doc/src/sgml/ref/pg_dump.sgml	22 Mar 2009 16:44:26 -	1.113
--- doc/src/sgml/ref/pg_dump.sgml	28 Apr 2009 19:51:57 -
*** CREATE DATABASE foo WITH TEMPLATE templa
*** 834,840 
 does not contain the statistics used by the optimizer to make
 query planning decisions.  Therefore, it is wise to run
 ANALYZE after restoring from a dump file
!to ensure good performance.  The dump file also does not
 contain any ALTER DATABASE ... SET commands;
 these

Re: [HACKERS] Building Postgresql under Windows question

2009-04-28 Thread Andrew Dunstan



Dann Corbit wrote:

In this function:
static bool
test_postmaster_connection(bool do_checkpoint)

This code will never succeed:
snprintf(connstr, sizeof(connstr),
 "dbname=postgres port=%s connect_timeout=5",
portstr);

for (i = 0; i < wait_seconds; i++)
{
if ((conn = PQconnectdb(connstr)) != NULL &&
(PQstatus(conn) == CONNECTION_OK ||
 PQconnectionNeedsPassword(conn)))
{
PQfinish(conn);
success = true;
break;
}


Because pg_hba.conf has this:
# TYPE  DATABASEUSERCIDR-ADDRESS  METHOD

# IPv4 local connections:
hostall all 127.0.0.1/32  trust
# IPv6 local connections:
hostall all ::1/128   trust

to allow connections from the local machine.

(A password is not needed to connect, so the code always fails).

  



Why won't

	PQstatus(conn) == CONNECTION_OK 


be true and thus the code will succeed without requiring a password?

cheers

andrew




[HACKERS]

2009-04-28 Thread vadim




Re: [HACKERS] Building Postgresql under Windows question

2009-04-28 Thread Dann Corbit
In this function:
static bool
test_postmaster_connection(bool do_checkpoint)

This code will never succeed:
snprintf(connstr, sizeof(connstr),
 "dbname=postgres port=%s connect_timeout=5",
portstr);

for (i = 0; i < wait_seconds; i++)
{
if ((conn = PQconnectdb(connstr)) != NULL &&
(PQstatus(conn) == CONNECTION_OK ||
 PQconnectionNeedsPassword(conn)))
{
PQfinish(conn);
success = true;
break;
}


Because pg_hba.conf has this:
# TYPE  DATABASEUSERCIDR-ADDRESS  METHOD

# IPv4 local connections:
hostall all 127.0.0.1/32  trust
# IPv6 local connections:
hostall all ::1/128   trust

to allow connections from the local machine.

(A password is not needed to connect, so the code always fails).





Re: [HACKERS] Building Postgresql under Windows question

2009-04-28 Thread Dann Corbit
> -Original Message-
> From: Andrew Dunstan [mailto:and...@dunslane.net]
> Sent: Tuesday, April 28, 2009 12:42 PM
> To: Dann Corbit
> Cc: pgsql-hackers@postgresql.org
> Subject: Re: [HACKERS] Building Postgresql under Windows question
> 
> 
> 
> Dann Corbit wrote:
> > pg_ctl.exe is exiting with a success code from line 1946 of pg_ctl.c
> >
> > This is not appropriate behavior for a service unless shutdown has
> been
> > requested.
> >
> 
> pg_ctl calls *StartServiceCtrlDispatcher*(). It can only get to the
> line
> you mention when called as a service after that call returns. MSDN
> states:
> 
> "If *StartServiceCtrlDispatcher* succeeds, it connects the calling
> thread to the service control manager and does not return until all
> running services in the process have entered the SERVICE_STOPPED
> state."
> 
> So it appears that something is causing your service to enter that
> state.

It appears that SERVICE_STOPPED comes from here:

static void WINAPI
pgwin32_ServiceMain(DWORD argc, LPTSTR * argv)
{
    PROCESS_INFORMATION pi;
    DWORD       ret;
    DWORD       check_point_start;

    /* Initialize variables */
    status.dwWin32ExitCode = S_OK;
    status.dwCheckPoint = 0;
    status.dwWaitHint = 6;
    status.dwServiceType = SERVICE_WIN32_OWN_PROCESS;
    status.dwControlsAccepted = SERVICE_ACCEPT_STOP |
        SERVICE_ACCEPT_SHUTDOWN | SERVICE_ACCEPT_PAUSE_CONTINUE;
    status.dwServiceSpecificExitCode = 0;
    status.dwCurrentState = SERVICE_START_PENDING;

    memset(&pi, 0, sizeof(pi));

    read_post_opts();

    /* Register the control request handler */
    if ((hStatus = RegisterServiceCtrlHandler(register_servicename,
                       pgwin32_ServiceHandler)) == (SERVICE_STATUS_HANDLE) 0)
        return;

    if ((shutdownEvent = CreateEvent(NULL, true, false, NULL)) == NULL)
        return;

    /* Start the postmaster */
    pgwin32_SetServiceStatus(SERVICE_START_PENDING);
    if (!CreateRestrictedProcess(pgwin32_CommandLine(false), &pi, true))
    {
        pgwin32_SetServiceStatus(SERVICE_STOPPED);
        return;
    }
    postmasterPID = pi.dwProcessId;
    postmasterProcess = pi.hProcess;
    CloseHandle(pi.hThread);

    if (do_wait)
    {
        write_eventlog(EVENTLOG_INFORMATION_TYPE, _("Waiting for server startup...\n"));
        if (test_postmaster_connection(true) == false)
        {
            write_eventlog(EVENTLOG_INFORMATION_TYPE,
                           _("Timed out waiting for server startup\n"));
            pgwin32_SetServiceStatus(SERVICE_STOPPED);
            return;
        }
        write_eventlog(EVENTLOG_INFORMATION_TYPE,
                       _("Server started and accepting connections\n"));
    }

    /*
     * Save the checkpoint value as it might have been incremented in
     * test_postmaster_connection
     */
    check_point_start = status.dwCheckPoint;

    pgwin32_SetServiceStatus(SERVICE_RUNNING);

    /* Wait for quit... */
    ret = WaitForMultipleObjects(2, shutdownHandles, FALSE, INFINITE);

    pgwin32_SetServiceStatus(SERVICE_STOP_PENDING);
    switch (ret)
    {
        case WAIT_OBJECT_0:     /* shutdown event */
            kill(postmasterPID, SIGINT);

            /*
             * Increment the checkpoint and try again. Abort after 12
             * checkpoints as the postmaster has probably hung.
             */
            while (WaitForSingleObject(postmasterProcess, 5000) == WAIT_TIMEOUT &&
                   status.dwCheckPoint < 12)
                status.dwCheckPoint++;
            break;

        case (WAIT_OBJECT_0 + 1):   /* postmaster went down */
            break;

        default:
            /* shouldn't get here? */
            break;
    }

    CloseHandle(shutdownEvent);
    CloseHandle(postmasterProcess);

    pgwin32_SetServiceStatus(SERVICE_STOPPED);
}

I will set a breakpoint on every place that the status is set to
SERVICE_STOPPED and report what I have found.




Re: [HACKERS] Building Postgresql under Windows question

2009-04-28 Thread Dann Corbit
> -Original Message-
> From: Andrew Dunstan [mailto:and...@dunslane.net]
> Sent: Tuesday, April 28, 2009 12:42 PM
> To: Dann Corbit
> Cc: pgsql-hackers@postgresql.org
> Subject: Re: [HACKERS] Building Postgresql under Windows question
> 
> 
> 
> Dann Corbit wrote:
> > pg_ctl.exe is exiting with a success code from line 1946 of pg_ctl.c
> >
> > This is not appropriate behavior for a service unless shutdown has
> been
> > requested.
> >
> 
> pg_ctl calls *StartServiceCtrlDispatcher*(). It can only get to the
> line
> you mention when called as a service after that call returns. MSDN
> states:
> 
> "If *StartServiceCtrlDispatcher* succeeds, it connects the calling
> thread to the service control manager and does not return until all
> running services in the process have entered the SERVICE_STOPPED
> state."
> 
> So it appears that something is causing your service to enter that
> state.

It is interesting that it happens even if I run no queries at all.

This is the only reference to the service control dispatcher I can find
in pg_ctl.c:

static void
pgwin32_doRunAsService(void)
{
    SERVICE_TABLE_ENTRY st[] = {{register_servicename, pgwin32_ServiceMain},
    {NULL, NULL}};

    if (StartServiceCtrlDispatcher(st) == 0)
    {
        write_stderr(_("%s: could not start service \"%s\": error code %d\n"),
                     progname, register_servicename, (int) GetLastError());
        exit(1);
    }
}

BTW, the exit(1) calls should be exit(EXIT_FAILURE) though there will be
no difficulty on any POSIX system.




Re: [HACKERS] Building Postgresql under Windows question

2009-04-28 Thread Andrew Dunstan



Dann Corbit wrote:

pg_ctl.exe is exiting with a success code from line 1946 of pg_ctl.c

This is not appropriate behavior for a service unless shutdown has been
requested.
  


pg_ctl calls *StartServiceCtrlDispatcher*(). It can only get to the line 
you mention when called as a service after that call returns. MSDN states:


"If *StartServiceCtrlDispatcher* succeeds, it connects the calling 
thread to the service control manager and does not return until all 
running services in the process have entered the SERVICE_STOPPED state."


So it appears that something is causing your service to enter that state.

cheers

andrew






Re: [HACKERS] Keyword list sanity check

2009-04-28 Thread David Fetter
On Tue, Apr 28, 2009 at 11:57:19AM +0300, Heikki Linnakangas wrote:
> Greg Stark wrote:
>> I had previously considered adding an assertion in the backend to
>> check they're sorted properly. That would be less formatting
>> dependent and  would be only a couple lines of C.
>>
>> I don't think we can do that with the gram.y check though.
>
> Well, the ordering in gram.y is just pro forma anyway. Checking that
> all  keywords are present and correctly labeled in kwlist.h is more
> important. What's still missing is to check that all keywords
> defined  with "%token " in gram.y are present in one of the
> keyword lists.
>
> Hmm, it just occurred to me that this script is only a few steps
> away  from actually generating kwlist.h from gram.y, instead of
> merely  cross-checking them.

Single Point of Truth is good :)

Cheers,
David.
-- 
David Fetter  http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate



[HACKERS] Building Postgresql under Windows question

2009-04-28 Thread Dann Corbit
We are still having trouble with the service controller pg_ctl.exe
vanishing after some period of time.

Hence I am doing debug builds from the source tree according to the
instructions found at:
http://developer.postgresql.org/pgdocs/postgres/install-win32-full.html

Specifically, I am using this method:
It is also possible to build from inside the Visual Studio GUI. In this
case, you need to run: 
 perl mkvcbuild.pl

There are a few projects that will not build (only 7 out of 99).

I have not installed ossp-uuid yet, so one of the failures is expected.
I am not concerned at all about the CONTRIB project failures.
However, on the internationalization failures, I want to know what the
failures mean.
Here is the output of the debug build process from the Visual Studio IDE
(includes all failures):

1>-- Build started: Project: utf8_and_euc_kr, Configuration: Debug
Win32 --
1>Generate DEF file
1>Not re-generating UTF8_AND_EUC_KR.DEF, file already exists.
1>Linking...
1>utf8_and_euc_kr.def : error LNK2001: unresolved external symbol
euc_jp_to_utf8
1>utf8_and_euc_kr.def : error LNK2001: unresolved external symbol
pg_finfo_euc_jp_to_utf8
1>utf8_and_euc_kr.def : error LNK2001: unresolved external symbol
pg_finfo_utf8_to_euc_jp
1>utf8_and_euc_kr.def : error LNK2001: unresolved external symbol
utf8_to_euc_jp
1>Debug\utf8_and_euc_kr\utf8_and_euc_kr.lib : fatal error LNK1120: 4
unresolved externals
2>-- Build started: Project: utf8_and_euc_cn, Configuration: Debug
Win32 --
3>-- Build started: Project: euc_kr_and_mic, Configuration: Debug
Win32 --
1>Build log was saved at
"file://c:\dcorbit64\postgresql\postgresql-8.3.7\Debug\utf8_and_euc_kr\BuildLog.htm"
1>utf8_and_euc_kr - 5 error(s), 0 warning(s)
2>Generate DEF file
3>Generate DEF file
3>Not re-generating EUC_KR_AND_MIC.DEF, file already exists.
2>Not re-generating UTF8_AND_EUC_CN.DEF, file already exists.
3>Linking...
2>Linking...
3>euc_kr_and_mic.def : error LNK2001: unresolved external symbol
euc_jp_to_mic
2>utf8_and_euc_cn.def : error LNK2001: unresolved external symbol
koi8r_to_utf8
2>utf8_and_euc_cn.def : error LNK2001: unresolved external symbol
pg_finfo_koi8r_to_utf8
3>euc_kr_and_mic.def : error LNK2001: unresolved external symbol
euc_jp_to_sjis
2>utf8_and_euc_cn.def : error LNK2001: unresolved external symbol
pg_finfo_utf8_to_koi8r
3>euc_kr_and_mic.def : error LNK2001: unresolved external symbol
mic_to_euc_jp
2>utf8_and_euc_cn.def : error LNK2001: unresolved external symbol
utf8_to_koi8r
3>euc_kr_and_mic.def : error LNK2001: unresolved external symbol
mic_to_sjis
2>Debug\utf8_and_euc_cn\utf8_and_euc_cn.lib : fatal error LNK1120: 4
unresolved externals
3>euc_kr_and_mic.def : error LNK2001: unresolved external symbol
pg_finfo_euc_jp_to_mic
3>euc_kr_and_mic.def : error LNK2001: unresolved external symbol
pg_finfo_euc_jp_to_sjis
3>euc_kr_and_mic.def : error LNK2001: unresolved external symbol
pg_finfo_mic_to_euc_jp
3>euc_kr_and_mic.def : error LNK2001: unresolved external symbol
pg_finfo_mic_to_sjis
3>euc_kr_and_mic.def : error LNK2001: unresolved external symbol
pg_finfo_sjis_to_euc_jp
3>euc_kr_and_mic.def : error LNK2001: unresolved external symbol
pg_finfo_sjis_to_mic
3>euc_kr_and_mic.def : error LNK2001: unresolved external symbol
sjis_to_euc_jp
3>euc_kr_and_mic.def : error LNK2001: unresolved external symbol
sjis_to_mic
3>Debug\euc_kr_and_mic\euc_kr_and_mic.lib : fatal error LNK1120: 12
unresolved externals
2>Build log was saved at
"file://c:\dcorbit64\postgresql\postgresql-8.3.7\Debug\utf8_and_euc_cn\BuildLog.htm"
2>utf8_and_euc_cn - 5 error(s), 0 warning(s)
4>-- Build started: Project: utf8_and_shift_jis_2004, Configuration:
Debug Win32 --
4>Generate DEF file
3>Build log was saved at
"file://c:\dcorbit64\postgresql\postgresql-8.3.7\Debug\euc_kr_and_mic\BuildLog.htm"
3>euc_kr_and_mic - 13 error(s), 0 warning(s)
4>Not re-generating UTF8_AND_SHIFT_JIS_2004.DEF, file already exists.
5>-- Build started: Project: uuid-ossp, Configuration: Debug Win32
--
5>Compiling...
4>Linking...
5>uuid-ossp.c
4>utf8_and_shift_jis_2004.def : error LNK2001: unresolved external
symbol pg_finfo_uhc_to_utf8
4>utf8_and_shift_jis_2004.def : error LNK2001: unresolved external
symbol pg_finfo_utf8_to_uhc
4>utf8_and_shift_jis_2004.def : error LNK2001: unresolved external
symbol uhc_to_utf8
4>utf8_and_shift_jis_2004.def : error LNK2001: unresolved external
symbol utf8_to_uhc
4>Debug\utf8_and_shift_jis_2004\utf8_and_shift_jis_2004.lib : fatal
error LNK1120: 4 unresolved externals
5>.\contrib\uuid-ossp\uuid-ossp.c(27) : fatal error C1083: Cannot open
include file: 'uuid.h': No such file or directory
5>Build log was saved at
"file://c:\dcorbit64\postgresql\postgresql-8.3.7\Debug\uuid-ossp\BuildLog.htm"
5>uuid-ossp - 1 error(s), 0 warning(s)
4>Build log was saved at
"file://c:\dcorbit64\postgresql\postgresql-8.3.7\Debug\utf8_and_shift_jis_2004\BuildLog.htm"
4>utf8_and_shift_jis_2004 - 5 error(s), 0 warning(s)
6>-- Build started: Project: hstor

Re: [HACKERS] Restore deleted rows

2009-04-28 Thread Robert Treat
On Monday 27 April 2009 11:17:42 Tom Lane wrote:
> Alvaro Herrera  writes:
> > Anton Egorov escribió:
> >> I need to recover deleted rows from table. After I delete those rows I
> >> stopped postgres immediately and create tar archive of database. I found
> >> solution
> >> http://archives.postgresql.org/pgsql-hackers/2005-03/msg00965.php, but
> >> is there another (easyer) way to do it?
> >
> > I guess you could figure out the Xid of the transaction that deleted the
> > tuples, and mark it as aborted in pg_clog; you'd also need to reset the
> > hint bits on the tuples themselves.  Not necessarily any easier than the
> > above, but at least you don't have to patch Postgres code.
>
> The solution recommended in that message doesn't work anyway --- it will
> consider *all* tuples visible, even ones you don't want.  Reversing a
> single transaction, or small number of transactions, as Alvaro suggests
> is much less likely to create a huge mess.
>

We had started down the path of making a function to read deleted tuples from
a table for a DR scenario we were involved with once. The idea was that you
could do something like select * from viewdeletedpages('tablename') t (table
type), which would allow you to see the dead rows. It ended up unnecessary,
so we never finished it, but I still think the utility of such a function
would be high... for most people, if you told them that they could do create
table as select * from viewdeletedtuples(...) t(...) after doing a
misplaced delete/update, at the cost of having to sift through extra data,
they would make that trade in a heartbeat.

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com



Re: [HACKERS] Prepared transactions vs novice DBAs, again

2009-04-28 Thread Robert Treat
On Wednesday 22 April 2009 15:49:32 Tom Lane wrote:
> I wrote:
> > Heikki Linnakangas  writes:
> >> Configuration affects what can be tested in installcheck, that's quite
> >> natural. I would be happy with simply adding an alternative expected
> >> output file for min_prepared_xacts=0 case. Like we've done for xml test
> >> cases, for example, though that's a compile-time option.
> >
> > Hmm, that's true; the xml case is a relevant precedent.  This would be
> > a pretty low-effort way of addressing the problem.  Another nice thing
> > about it is that we'd stop having a default max_prepared_transactions
> > value that's completely useless (5 is guaranteed to be either too much
> > or not enough...)
>
> The more I think about this the more I like it.  The current default of
> 5 never had any justification beyond allowing the regression tests to
> run --- it's almost certainly not enough for production usage of the
> feature, but it exposes you to all of the downsides of accidental use.
> If we change it to zero, we could alter the Notes for PREPARE
> TRANSACTION to urge more strongly that the feature not be enabled
> without having set up appropriate external infrastructure.
>
> Warning about very old prepared transactions is something that we
> could think about doing as well; it doesn't have to be either-or.
> I think the need for it would decrease quite a bit if they weren't
> enabled by default, though.
>
> Comments?  Anyone seriously opposed to making the default be zero?
>

I see this has already been committed, and I am not seriously opposed to 
changing it, but I wanted to chime in on a point no one seemed to raise. I 
used to recommend people set this to 0 pretty regularly, since most web shops 
don't even know what prepared transactions are, let alone use them. I got 
less aggressive about this after a few people reported to me that they had run 
out of lock slots on their systems. Now, you'd think that ~300 lock slots 
wouldn't make that much difference, but it did make me a little nervous; so I 
thought I'd mention it. 

-- 
Robert Treat
Conjecture: http://www.xzilla.net
Consulting: http://www.omniti.com



Re: [HACKERS] idea: global temp tables

2009-04-28 Thread Kevin Grittner
Tom Lane  wrote: 
 
> I was thinking in terms of throwing a warning in 8.4 and
> implementing new behavior in 8.5.
 
If we're on that time frame with it, I guess it's not too early to
suggest what we would implement in 8.5.
 
I would suggest that it is clear that what Pavel is requesting is
fully compliant with the spec's definition of global temporary tables.
Since the essence of "global" in this context is that they are not
constrained by modules, the absence of module support doesn't seem to
create any possible compatibility issue when and if modules are added.
 
I would suggest (with somewhat less confidence) that both created and
declared local temporary tables might make sense in the context of
whatever procedural languages are supported.  In PL/pgSQL, for
example, the declaration for a declared local temporary table would be
allowed inside the PL's BEGIN block, in the area where local variables
are allowed.  Such a table would be visible only within the context of
the block (meaning we would probably need to munge the name somehow to
support recursion or other functions with a duplicate table name). 
The temporary table would be materialized at the point where it is
declared, and dropped at the END of the block.
 
I'm not clear on whether a created local temporary table should retain
its contents from one invocation of a function to the next.  I'm
inclined to think it shouldn't -- that the scope for a materialized
instance is the same as for a declared local temporary table; the CREATE
just ensures a consistent definition wherever used.
 
Or perhaps it's just a bad idea to attempt to use the LOCAL syntax
outside of a proper module at all.  The GLOBAL option seems clear;
LOCAL seems a bit muddy to me.
 
-Kevin

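[As a purely hypothetical illustration of Kevin's block-scoped idea above -- none of this syntax exists in PostgreSQL, and every name here is invented -- a declared local temporary table in PL/pgSQL might look like:]

CREATE FUNCTION summarize_orders() RETURNS void AS $$
DECLARE
    -- hypothetical syntax: a declared local temporary table, visible
    -- only inside this block and dropped at its END
    DECLARE LOCAL TEMPORARY TABLE scratch (cust_id int, total numeric);
BEGIN
    INSERT INTO scratch SELECT cust_id, sum(amount) FROM orders GROUP BY 1;
    -- ... work with scratch ...
END;  -- scratch would be dropped here
$$ LANGUAGE plpgsql;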


Re: [HACKERS] idea: global temp tables

2009-04-28 Thread Tom Lane
"Kevin Grittner"  writes:
> Tom Lane  wrote: 
>> However, if we're going to do that then we should start
>> throwing warnings for use of the keywords, preferably before the
>> release in which they actually start doing something different.
 
> We might actually want to have a major release which rejects the
> standard syntax before the release where we implement standard
> behavior for it.  (After, of course, a major release which issues the
> warning.)  When we get to the point of breaking existing code (which
> is inevitable if we move to compliance here), it's better to break in
> a clear and predictable way

I was thinking in terms of throwing a warning in 8.4 and implementing
new behavior in 8.5.  An extra release only helps if you assume everyone
adopts that release at some point.  The number of questions we see about
multi-version jumps should disabuse people of the notion that everyone
does it that way ...

regards, tom lane



Re: [HACKERS] idea: global temp tables

2009-04-28 Thread Kevin Grittner
Tom Lane  wrote: 
 
> One thing I just noticed is that the spec does not consider
> GLOBAL/LOCAL to be optional --- per spec you *must* write one or the
> other in front of TEMPORARY.
 
Agreed.
 
> So we could adopt the view that omitting this keyword
> implies our current non-spec behavior (which is far too useful to
> give up, spec compliance or no) while writing one or the other
> selects the spec behavior.
 
+1 (+1)
 
> However, if we're going to do that then we should start
> throwing warnings for use of the keywords, preferably before the
> release in which they actually start doing something different.
 
We might actually want to have a major release which rejects the
standard syntax before the release where we implement standard
behavior for it.  (After, of course, a major release which issues the
warning.)  When we get to the point of breaking existing code (which
is inevitable if we move to compliance here), it's better to break in
a clear and predictable way
 
Of course, that would mean that implementation would be three releases
away (warn, disable syntax, reenable syntax with standard semantics).
 
-Kevin



Re: [HACKERS] idea: global temp tables

2009-04-28 Thread Tom Lane
"Kevin Grittner"  writes:
> Tom Lane  wrote: 
>> it could be considered either a global or a local temp table per
>> spec (without any module support you can't really say which it is).
 
> That seems bogus -- without modules it is clearly not LOCAL.

You could just as easily say it's not GLOBAL.

> What
> Pavel is requesting exactly matches the spec's definition of a global
> temporary table, but it does make me uneasy that after accepting the
> standard syntax, and behaving differently from it (including making no
> distinction between GLOBAL and LOCAL declarations) we would suddenly
> go to compliance on GLOBAL declarations but leave LOCAL as is.

Right.  What I'm suggesting is that before we mess with this we should
have a road map on whether we are going to try to get to spec
compliance in this area, and if so how.

One thing I just noticed is that the spec does not consider GLOBAL/LOCAL
to be optional --- per spec you *must* write one or the other in front
of TEMPORARY.  So we could adopt the view that omitting this keyword
implies our current non-spec behavior (which is far too useful to give
up, spec compliance or no) while writing one or the other selects the
spec behavior.  However, if we're going to do that then we should start
throwing warnings for use of the keywords, preferably before the release
in which they actually start doing something different.

regards, tom lane



Re: [HACKERS] Small problem with PlaceHolderVar mechanism

2009-04-28 Thread Tom Lane
Alvaro Herrera  writes:
> Greg Stark wrote:
>> On Tue, Apr 28, 2009 at 4:41 PM, Tom Lane  wrote:
>>> Another place where planner regression tests might've helped :-(
>> 
>> I would suggest we start gathering up such tests in an sql file now
>> and worry about how to compare the output later.

> At the very least, we could run them just to make sure that they run with
> no weird errors.

Well, the cases where you get a weird error more often than not do get
memorialized in regular regression tests.  The hard thing to test for
(in our current framework) is where you get the right answer but the
plan is stupider than it's supposed to be.

regards, tom lane



Re: [HACKERS] Small problem with PlaceHolderVar mechanism

2009-04-28 Thread Tom Lane
Greg Stark  writes:
> On Tue, Apr 28, 2009 at 4:41 PM, Tom Lane  wrote:
>> Another place where planner regression tests might've helped :-(

> I would suggest we start gathering up such tests in an sql file now
> and worry about how to compare the output later.

If anyone feels like doing the legwork, there are interesting examples
in the CVS commit logs.  I happened to notice the current problem while
I was re-reading the logs whilst checking the release notes.  For no
particularly good reason I retried the examples mentioned in this item,
and behold it wasn't what I expected ...

2008-08-17 15:40  tgl

* src/backend/optimizer/path/joinrels.c: Add some defenses against
constant-FALSE outer join conditions.  Since eval_const_expressions
will generally throw away anything that's ANDed with constant
FALSE, what we're left with given an example like

select * from tenk1 a where (unique1,0) in (select unique2,1 from
tenk1 b);

is a cartesian product computation, which is really not acceptable.
 This is a regression in CVS HEAD compared to previous releases,
which were able to notice the impossible join condition in this
case --- though not in some related cases that are also improved by
this patch, such as

select * from tenk1 a left join tenk1 b on (a.unique1=b.unique2 and
0=1);

Fix by skipping evaluation of the appropriate side of the outer
join in cases where it's demonstrably unnecessary.

regards, tom lane



Re: [HACKERS] idea: global temp tables

2009-04-28 Thread Tom Lane
"Kevin Grittner"  writes:
> ... Both forms of CREATE TEMP
> TABLE should persist the definition if you go by the standard, so you
> don't want to muddy the waters by complying on one and not the other?
 
Right.  This goes back to our old principle of trying not to use
spec-defined syntax for not-per-spec behavior.  We are already behind
the eight ball as far as temp tables go, but let's not make it worse by
blindly picking some spec-defined syntax without a plan for where we go
from here.  (I'm assuming that it's reasonably likely that we will want
a spec-compatible module feature someday.  We'll really have painted
ourselves into a corner if we don't think about the issue now.)

regards, tom lane



Re: [HACKERS] idea: global temp tables

2009-04-28 Thread Kevin Grittner
Tom Lane  wrote: 
 
> GLOBAL/LOCAL TEMP TABLE distinction is not related to cross-session
> persistence of the table definitions
 
On a re-read, I think I see your point -- it is the DECLARE LOCAL TEMP
TABLE versus CREATE { GLOBAL | LOCAL } TEMP TABLE which determines
whether the table definition is persisted.  Both forms of CREATE TEMP
TABLE should persist the definition if you go by the standard, so you
don't want to muddy the waters by complying on one and not the other?
 
-Kevin



Re: [HACKERS] idea: global temp tables

2009-04-28 Thread Tom Lane
"Kevin Grittner"  writes:
> Tom Lane  wrote: 
>> we have already concluded that the spec's
>> GLOBAL/LOCAL TEMP TABLE distinction is not related
>> to cross-session persistence of the table definitions
 
> How do you reconcile that conclusion with the following,
> from ISO/IEC 9075-2:2003 (E), 4.14 Tables:

The point is that what we call "temp tables" are not either global or
local temp tables by the spec's definition.  If we invent something that
behaves as Pavel suggests, then it could be considered either a global
or a local temp table per spec (without any module support you can't
really say which it is).  We're stuck in a terminological problem
anyway, but it will get a whole lot worse if we fail to acknowledge that
there's more than one property involved here.

regards, tom lane



Re: [HACKERS] idea: global temp tables

2009-04-28 Thread Kevin Grittner
Tom Lane  wrote: 
 
> it could be considered either a global or a local temp table per
> spec (without any module support you can't really say which it is).
 
That seems bogus -- without modules it is clearly not LOCAL.  What
Pavel is requesting exactly matches the spec's definition of a global
temporary table, but it does make me uneasy that after accepting the
standard syntax, and behaving differently from it (including making no
distinction between GLOBAL and LOCAL declarations) we would suddenly
go to compliance on GLOBAL declarations but leave LOCAL as is.
 
Maybe too messy to try to improve.
 
-Kevin



Re: [HACKERS] Small problem with PlaceHolderVar mechanism

2009-04-28 Thread Alvaro Herrera
Greg Stark wrote:
> On Tue, Apr 28, 2009 at 4:41 PM, Tom Lane  wrote:
> > Another place where planner regression tests might've helped :-(
> 
> I would suggest we start gathering up such tests in an sql file now
> and worry about how to compare the output later.

At the very least, we could run them just to make sure that they run with
no weird errors.

-- 
Alvaro Herrera                         http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [HACKERS] Small problem with PlaceHolderVar mechanism

2009-04-28 Thread Greg Stark
On Tue, Apr 28, 2009 at 4:41 PM, Tom Lane  wrote:
> Another place where planner regression tests might've helped :-(

I would suggest we start gathering up such tests in an sql file now
and worry about how to compare the output later. There are a lot of
people who can put together some perl hackery to filter the results
but only a relatively few people who can design good targeted sql
tests.

And later we can always reuse the same sql tests with equivalent xml
hackery to filter the desired features or whatever other interface we
come up with.

-- 
greg



Re: [HACKERS] idea: global temp tables

2009-04-28 Thread Kevin Grittner
Tom Lane  wrote: 
 
> we have already concluded that the spec's
> GLOBAL/LOCAL TEMP TABLE distinction is not related
> to cross-session persistence of the table definitions
 
How do you reconcile that conclusion with the following,
from ISO/IEC 9075-2:2003 (E), 4.14 Tables:
 
"The definition of a global temporary table or a created local
temporary table appears in a schema. In SQL language, the name and the
scope of the name of a global temporary table or a created local
temporary table are indistinguishable from those of a persistent base
table. However, because global temporary table contents are distinct
within SQL-sessions, and created local temporary tables are distinct
within SQL-client modules within SQL-sessions, the effective <schema
name> of the schema in which the global temporary table or the created
local temporary table is instantiated is an implementation-dependent
<schema name> that may be thought of as having been effectively
derived from the <schema name> of the schema in which the global
temporary table or created local temporary table is defined and the
implementation-dependent SQL-session identifier associated with the
SQL-session."
 
There is a distinction between the definition, which "appears in a
schema" and for which "the name and the scope ... are
indistinguishable from those of a persistent base table", versus the
effective schema in which an instance is materialized, which is
session and/or module dependent.
 
-Kevin



[HACKERS] Small problem with PlaceHolderVar mechanism

2009-04-28 Thread Tom Lane
I noticed that queries involving constant-false join conditions are a
lot dumber than they were a couple of months ago.  For instance

regression=# explain select * from tenk1 a where (unique1,0) in (select unique2,1 from tenk1 b);
                                     QUERY PLAN
-------------------------------------------------------------------------------------
 Nested Loop  (cost=483.12..797.68 rows=50 width=244)
   ->  HashAggregate  (cost=483.12..483.62 rows=50 width=4)
         ->  Seq Scan on tenk1 b  (cost=0.00..483.00 rows=50 width=4)
               Filter: (0 = 1)
   ->  Index Scan using tenk1_unique1 on tenk1 a  (cost=0.00..6.27 rows=1 width=244)
         Index Cond: (a.unique1 = b.unique2)
(6 rows)

CVS HEAD from mid-February produces

                QUERY PLAN
------------------------------------------
 Result  (cost=0.00..0.01 rows=1 width=0)
   One-Time Filter: false
(2 rows)

The reason this isn't working so well anymore is that initial pullup of
the IN sub-select produces a join condition that includes not "0 = 1"
but "0 = PlaceHolderVar(1)", which of course fails to simplify to a
constant.  In fact, since the PlaceHolderVar is treated like a Var, it
ends up being a relation scan qualifier on "b" and not a one-time filter
at all.

On reflection I think the error here is that we should not blindly
insert the PlaceHolderVar() wrapper around *every* expression pulled up
from a subselect.  We only need it for references that appear above the
lowest outer join that could null the subselect outputs.  In examples
such as this one, the reference we are interested in is not above but
within the join condition of that outer join, so it doesn't need a
PlaceHolderVar.

I haven't finished working out a patch for this, but it looks like it's
fixable with relatively localized hacking in pull_up_simple_subquery
and resolvenew_in_jointree --- we can track exactly which part of the
query we are doing substitutions in, and insert substitutes with or
without PlaceHolderVar accordingly.

Another place where planner regression tests might've helped :-(

regards, tom lane


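[The failure Tom describes can be illustrated with a toy constant-folder -- a hypothetical Python sketch, not the actual eval_const_expressions code: a comparison of two bare constants folds to a constant, but wrapping one side in an opaque placeholder node blocks the simplification, so the qual survives into the plan.]

```python
# Toy model of constant-folding an equality qual. The node types and
# fold() function are illustrative inventions, not PostgreSQL code.
from dataclasses import dataclass

@dataclass(frozen=True)
class Const:
    value: object

@dataclass(frozen=True)
class PlaceHolderVar:  # opaque wrapper: must be evaluated at runtime
    arg: object

@dataclass(frozen=True)
class OpExpr:  # models "left = right"
    left: object
    right: object

def fold(expr):
    """Simplify 'Const = Const' to a boolean constant; leave anything
    else (including wrapped constants) untouched."""
    if (isinstance(expr, OpExpr)
            and isinstance(expr.left, Const)
            and isinstance(expr.right, Const)):
        return Const(expr.left.value == expr.right.value)
    return expr

# "0 = 1" folds to constant false...
print(fold(OpExpr(Const(0), Const(1))))   # → Const(value=False)
# ...but "0 = PlaceHolderVar(1)" does not, so it ends up as a scan qual.
print(fold(OpExpr(Const(0), PlaceHolderVar(Const(1)))))
```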


Re: [HACKERS] Clean shutdown and warm standby

2009-04-28 Thread Guillaume Smet
On Tue, Apr 28, 2009 at 5:22 PM, Heikki Linnakangas
 wrote:
> At a normal startup, the checkpoint record would be there as usual. And an
> archive recovery starts at the location indicated by the backup label.
>
> AFAICS calling RequestXLogSwitch() before CreateCheckPoint would be
> equivalent to calling "pg_switch_xlog()" just before shutting down.

That's what I had in mind when writing the patch but I didn't know the
implications of this particular checkpoint.

So moving the call before CreateCheckPoint is what I really intended,
now that I have these implications in mind, and I don't see why it would
be a problem to miss this checkpoint in the archived logs.

-- 
Guillaume



Re: [HACKERS] idea: global temp tables

2009-04-28 Thread vacuum
> Perhaps call them "session tables" instead of "temp tables"?
> 
> regards, tom lane

Or "transient table" ... 

Maybe we can define when such a table loses its data.

But really - there is no need for this feature - databases are made to hold
data, not to lose it.

If an application requires a mechanism to store transient session data, it
should create its own session objects. 



Re: [HACKERS] Clean shutdown and warm standby

2009-04-28 Thread Heikki Linnakangas

Tom Lane wrote:
> Heikki Linnakangas writes:
>> Andreas Pflug wrote:
>>> So to get this down to a solution, it appears to be correct to execute
>>> the RequestXLogSwitch right before CreateCheckPoint?
>>
>> Hmm, then the checkpoint record isn't archived. That might be
>> acceptable, though, since all data would be safe in the preceding WAL.
>
> Not at all, because the database would be very unhappy at restart
> if it can't find the checkpoint record pg_control is pointing to.

At a normal startup, the checkpoint record would be there as usual. And 
an archive recovery starts at the location indicated by the backup label.

AFAICS calling RequestXLogSwitch() before CreateCheckPoint would be 
equivalent to calling "pg_switch_xlog()" just before shutting down.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] idea: global temp tables

2009-04-28 Thread Tom Lane
Alvaro Herrera  writes:
> Maybe we could make this work by fiddling with a different smgr -- on
> it, smgr_sync would be a noop, as would smgr_immedsync, and we could
> kludge something up to truncate relations during recovery.

Interesting thought but I think it falls down on pg_statistic.

One comment I've got is that we have already concluded that the spec's
GLOBAL/LOCAL TEMP TABLE distinction is not related to cross-session
persistence of the table definitions, but rather to module visibility
which is a concept we have not got (yet).  Ergo, we should not use the
phrase "global temp table" for these things.  Not sure what to suggest
instead.  Perhaps call them "session tables" instead of "temp tables"?

regards, tom lane



Re: [HACKERS] Clean shutdown and warm standby

2009-04-28 Thread Andreas Pflug
Tom Lane wrote:
> Not at all, because the database would be very unhappy at restart
> if it can't find the checkpoint record pg_control is pointing to.
>   

So for several weeks now all postings just say how it will _not_ work.
Does this boil down to "There's no way to make sure that a graceful
failover won't lose data"?

Regards,
Andreas



Re: [HACKERS] idea: global temp tables

2009-04-28 Thread Alvaro Herrera
Pavel Stehule escribió:
> Hello
> 
> I am thinking about global temp tables. One possible solution is
> creating a global temporary table like a normal table and, at planner
> stage, checking for uses of this table. When some global temporary
> table is detected, then a real temporary table is created and used in
> the execution plan. It's like:
> 
> CREATE GLOBAL TEMP TABLE foo(a varchar); -- create global empty table foo
> SELECT * FROM foo;
>   a) if there is a relevant temp table for foo, use it
>   b) when not, then CREATE TEMP TABLE pg_temp_1.foo(LIKE foo INCLUDING
> DEFAULTS INCLUDING CONSTRAINTS INCLUDING INDEXES);
>   c) transform the original query to SELECT * FROM pg_temp_1.foo;
> 
> Ideas? Notes? Objections?

Maybe we could make this work by fiddling with a different smgr -- on
it, smgr_sync would be a noop, as would smgr_immedsync, and we could
kludge something up to truncate relations during recovery.

-- 
Alvaro Herrera                         http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [HACKERS] Extra cost of "lossy mode" Bitmap Scan plan

2009-04-28 Thread Greg Stark
On Tue, Apr 28, 2009 at 3:02 PM, Robert Haas  wrote:
> You may be right, but on the other hand, I'm not sure there's any
> sense in NOT trying to model the impact of the additional heap
> fetches.

Yeah, the flip side of the argument is that we generally try to do the
best job we can modeling costs and let the arithmetic work out however
it does, because you never know what kind of wacky situations will
arise planning queries, and the better the estimates the better your
chance of coming up with a good plan.

For example the planner may have other join orders which allow it to
avoid accessing those records entirely. So the comparison with a
nested loop might not be the only comparison that matters. It might be
a case of whether to run a bitmap scan against this table or some scan
against another table to drive the join.

I have been running benchmarks comparing bitmap heap scans against
index scans amongst other comparisons. I haven't done CVS head yet but
on an older version I'm seeing with effective_io_concurrency set to 0
scanning 1000 random tuples throughout a 130G table (one searched
tuple per page) on a machine with 64G of ram after repeated executions
index scans settle down to about 245s vs 205s for bitmap scans (for
100 iterations). So they're about 16% faster for this use case.

Incidentally with effective_io_concurrency set to 30 on this 30-drive
raid the bitmap scans go down to 17s :)

-- 
greg



Re: [HACKERS] Extra cost of "lossy mode" Bitmap Scan plan

2009-04-28 Thread Tom Lane
Robert Haas  writes:
> On Tue, Apr 28, 2009 at 3:51 AM, Greg Stark  wrote:
>> If the logic you're suggesting would kick in at all it would be for a
>> narrow range of scan sizes,

> You may be right, but on the other hand, I'm not sure there's any
> sense in NOT trying to model the impact of the additional heap
> fetches.

I think it's probably useless.  In the first place, at reasonable values
of work_mem the effect is going to be negligible (in the sense that a
plain indexscan would never win).  In the second place, there isn't any
way to guess the extent of lossiness at plan time --- it depends on how
much the target rows are "clumped" on particular pages.  The planner
hasn't got any stats that would let it guess that, and even if we tried
to collect such stats they'd probably be too unstable to be useful.

There are boatloads of effects that the planner doesn't model.  This
one seems very far down the list of what we should worry about.

regards, tom lane



Re: [HACKERS] Clean shutdown and warm standby

2009-04-28 Thread Tom Lane
Heikki Linnakangas  writes:
> Andreas Pflug wrote:
>> So to get this down to a solution, it appears to be correct to execute 
>> the RequestXLogSwitch right before CreateCheckPoint?

> Hmm, then the checkpoint record isn't archived. That might be 
> acceptable, though, since all data would be safe in the preceding WAL.

Not at all, because the database would be very unhappy at restart
if it can't find the checkpoint record pg_control is pointing to.

regards, tom lane



Re: [HACKERS] Extra cost of "lossy mode" Bitmap Scan plan

2009-04-28 Thread Robert Haas
On Tue, Apr 28, 2009 at 3:51 AM, Greg Stark  wrote:
> On Tue, Apr 28, 2009 at 7:45 AM, higepon  wrote:
>> "How much extra cost should we add for lossy mode?".
>
> There is something odd in this concern. Normally people aren't raising
> and lowering their work_mem so the comparison would be between a plan
> where the planner expects to see n records and a plan where the
> planner expects to see n+1 records where n would fit and n+1 wouldn't.
>
> It seems like an awfully narrow corner case where n records would be
> faster as a bitmap index scan but n+1 records would be faster as an
> index scan because the bitmap becomes lossy. The whole point of bitmap
> scans is that they're faster for large scans than index scans.
>
> If the logic you're suggesting would kick in at all it would be for a
> narrow range of scan sizes, so the net effect would be to use an index
> scan for small scans, then switch to a bitmap scan, then switch back
> to an index scan when the bitmap scan becomes lossy, then switch back
> to a lossy bitmap scan for large scans. I'm thinking that even if it's
> slightly faster when the planner has perfect inputs the downsides of
> switching back and forth might not be worth it. Especially when you
> consider than the planner is often going on approximate estimates and
> it is probably not switching in precisely the right spot.

You may be right, but on the other hand, I'm not sure there's any
sense in NOT trying to model the impact of the additional heap
fetches.  Has anyone done a detailed analysis of the actual
performance characteristics of bitmap scans versus index scans, as
opposed to what the optimizer thinks?  We've had long discussions on
this topic in relation to both the posix_fadvise patch and the gin
fast update patch, but I haven't seen a lot of hard numbers.

...Robert

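[To make the "extra heap fetches" in this thread concrete: once the bitmap exceeds its memory budget, whole pages are remembered instead of individual tuple IDs, and every tuple on a lossy page must be fetched and rechecked. A back-of-the-envelope sketch -- all names and numbers below are illustrative assumptions, not the planner's actual cost model or the real tidbitmap code:]

```python
# Rough sketch of the extra recheck work a lossy bitmap implies.
# Hypothetical model: the bitmap can hold at most max_exact_pages
# exact page entries; any further matching pages degrade to lossy
# entries whose every tuple must be fetched and rechecked.

def lossy_recheck_tuples(matching_pages, tuples_per_page, max_exact_pages):
    """Return the number of extra tuples visited only for rechecking."""
    lossy_pages = max(0, matching_pages - max_exact_pages)
    return lossy_pages * tuples_per_page

# 10,000 matching pages, ~100 tuples/page, room for 8,000 exact entries:
extra = lossy_recheck_tuples(10_000, 100, 8_000)
print(extra)  # → 200000 tuples visited purely for recheck

# With enough memory for every page, no extra work at all:
print(lossy_recheck_tuples(10_000, 100, 20_000))  # → 0
```

The hard part, as Tom notes, is that `matching_pages` depends on how the target rows are clumped, which the planner has no statistics for.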


[HACKERS] view columns and pg_depend

2009-04-28 Thread Gevik Babakhani

Hi,

I was wondering whether there is a way to see which column in a view 
depends on which column(s) in a table?
If I am not mistaken, pg_depend only provides information about which 
view depends on which table's columns. (same as view_column_usage)


Regards,
Gevik.






Re: [HACKERS] Keyword list sanity check

2009-04-28 Thread Heikki Linnakangas

Greg Stark wrote:
> I had previously considered adding an assertion in the backend to check
> they're sorted properly. That would be less formatting dependent and
> would be only a couple lines of C.
>
> I don't think we can do that with the gram.y check though.

Well, the ordering in gram.y is just pro forma anyway. Checking that all 
keywords are present and correctly labeled in kwlist.h is more 
important. What's still missing is to check that all keywords defined 
with "%token <keyword>" in gram.y are present in one of the keyword lists.

Hmm, it just occurred to me that this script is only a few steps away 
from actually generating kwlist.h from gram.y, instead of merely 
cross-checking them.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] Keyword list sanity check

2009-04-28 Thread Greg Stark
I had previously considered adding an assertion in the backend to  
check they're sorted properly. That would be less formatting dependent  
and would be only a couple lines of C.


I don't think we can do that with the gram.y check though.

--
Greg


On 28 Apr 2009, at 09:33, Heikki Linnakangas wrote:


I wrote a little perl script to perform a basic sanity check to  
keywords in gram.y and kwlist.h. It checks that all lists are in  
alphabetical order, all keywords present in gram.y are listed in  
kwlist.h in the right category, and conversely that all keywords  
listed in kwlist.h are listed in gram.y.


It found one minor issue already:

$ perl src/tools/check_keywords.pl
'SCHEMA' after 'SERVER' in unreserved_keyword list is misplaced

SERVER is not in the right place in gram.y, it should go between  
SERIALIZABLE and SESSION. I'll fix that.


I'll put this into src/tools. It's heavily dependent on the format  
of the lists in gram.y and kwlist.h but if it bitrots due to changes  
in those files, we can either fix it or just remove it if it's not  
deemed useful anymore.


--
 Heikki Linnakangas
 EnterpriseDB   http://www.enterprisedb.com






[HACKERS] Keyword list sanity check

2009-04-28 Thread Heikki Linnakangas
I wrote a little perl script to perform a basic sanity check to keywords 
in gram.y and kwlist.h. It checks that all lists are in alphabetical 
order, all keywords present in gram.y are listed in kwlist.h in the 
right category, and conversely that all keywords listed in kwlist.h are 
listed in gram.y.


It found one minor issue already:

$ perl src/tools/check_keywords.pl
'SCHEMA' after 'SERVER' in unreserved_keyword list is misplaced

SERVER is not in the right place in gram.y, it should go between 
SERIALIZABLE and SESSION. I'll fix that.


I'll put this into src/tools. It's heavily dependent on the format of 
the lists in gram.y and kwlist.h but if it bitrots due to changes in 
those files, we can either fix it or just remove it if it's not deemed 
useful anymore.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com


check_keywords.pl
Description: Perl program



Re: [HACKERS] Extra cost of "lossy mode" Bitmap Scan plan

2009-04-28 Thread higepon
Hi.

> There is something odd in this concern. Normally people aren't raising
> and lowering their work_mem so the comparison would be between a plan
> where the planner expects to see n records and a plan where the
> planner expects to see n+1 records where n would fit and n+1 wouldn't.

I see.

> It seems like an awfully narrow corner case where n records would be
> faster as a bitmap index scan but n+1 records would be faster as an
> index scan because the bitmap becomes lossy. The whole point of bitmap
> scans is that they're faster for large scans than index scans.

Hmm. Is this really such a narrow corner case?
At least ITAGAKI-san and I have both run into it.
I think the default work_mem value (1MB) can trigger these issues.

Do you think there is no need for the planner to know
whether the plan is lossy or not?
If the planner knew the costs of lossy mode, it might choose
better plans than it does now.


Alternatively, a second option is to give some hints to people who are
doing performance tuning:

(a) Write a trace log entry when a bitmap scan falls into "lossy" mode.

(b) Show whether the scan was "lossy" in EXPLAIN output.


Best regards,

-
MINOWA Taro (Higepon)

Cybozu Labs, Inc.

http://www.monaos.org/
http://code.google.com/p/mosh-scheme/

On Tue, Apr 28, 2009 at 4:51 PM, Greg Stark  wrote:
> On Tue, Apr 28, 2009 at 7:45 AM, higepon  wrote:
>> "How much extra cost should we add for lossy mode?".
>
> There is something odd in this concern. Normally people aren't raising
> and lowering their work_mem so the comparison would be between a plan
> where the planner expects to see n records and a plan where the
> planner expects to see n+1 records where n would fit and n+1 wouldn't.
>
> It seems like an awfully narrow corner case where n records would be
> faster as a bitmap index scan but n+1 records would be faster as an
> index scan because the bitmap becomes lossy. The whole point of bitmap
> scans is that they're faster for large scans than index scans.
>
> If the logic you're suggesting would kick in at all it would be for a
> narrow range of scan sizes, so the net effect would be to use an index
> scan for small scans, then switch to a bitmap scan, then switch back
> to an index scan when the bitmap scan becomes lossy, then switch back
> to a lossy bitmap scan for large scans. I'm thinking that even if it's
> slightly faster when the planner has perfect inputs the downsides of
> switching back and forth might not be worth it. Especially when you
> consider that the planner is often going on approximate estimates and
> it is probably not switching in precisely the right spot.
>
> --
> greg
>



Re: [HACKERS] RFE: Transparent encryption on all fields

2009-04-28 Thread Sam Halliday

If it works without any change to client SQL queries and is compatible with JPA,
then I'm all ears. Otherwise, I really think Sam Mason's idea was spot on...
it works around the inadequacies of encrypted drives and provides the same
level of on-server security.


Tomas Zerolo wrote:
> 
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
> 
> On Mon, Apr 27, 2009 at 01:28:45AM -0700, Sam Halliday wrote:
>> 
>> 
>> Tomas Zerolo wrote:
>> > 
>> >> If there were a way to prompt the user for the password to an
>> encrypted 
>> >> drive on startup for all OS, with an equivalent for headless
>> machines... 
> 
> [...]
> 
>> There is a difference between "it's possible" and "there is". I know of
>> no
>> such standard support of either of the standard OSes.
> 
> Sorry. Denial doesn't help. It's not only "possible", it's being done
> all the time. Cf. ,
> for example. But you are attacking a strawman anyway.
> 
> Client-side decryption is a much better match for what you had in mind -- and
> I think it's provably no less secure (and more convenient).
> 
> The only hypothetical advantage of server-side encryption (there might
> be an opportunity of indexing) seems to be so mired in technical
> difficulties (if you want to avoid information leaks anyway) that I
> can't even imagine whether it's a real advantage.
> 
> Regards
> - -- tomás
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v1.4.6 (GNU/Linux)
> 
> iD8DBQFJ9oriBcgs9XrR2kYRAj/CAJ9c1UERONoqYtjEj0N/aSp5IELFAgCffeTR
> nomoWcaFoE9fiYPD0EOr9To=
> =KevK
> -END PGP SIGNATURE-
> 
> 
> 





Re: [HACKERS] Extra cost of "lossy mode" Bitmap Scan plan

2009-04-28 Thread Greg Stark
On Tue, Apr 28, 2009 at 7:45 AM, higepon  wrote:
> "How much extra cost should we add for lossy mode?".

There is something odd in this concern. Normally people aren't raising
and lowering their work_mem so the comparison would be between a plan
where the planner expects to see n records and a plan where the
planner expects to see n+1 records where n would fit and n+1 wouldn't.

It seems like an awfully narrow corner case where n records would be
faster as a bitmap index scan but n+1 records would be faster as an
index scan because the bitmap becomes lossy. The whole point of bitmap
scans is that they're faster for large scans than index scans.

If the logic you're suggesting would kick in at all it would be for a
narrow range of scan sizes, so the net effect would be to use an index
scan for small scans, then switch to a bitmap scan, then switch back
to an index scan when the bitmap scan becomes lossy, then switch back
to a lossy bitmap scan for large scans. I'm thinking that even if it's
slightly faster when the planner has perfect inputs the downsides of
switching back and forth might not be worth it. Especially when you
consider that the planner is often going on approximate estimates and
it is probably not switching in precisely the right spot.

-- 
greg
