Re: [PATCHES] Time zone definitions to config files

2006-07-25 Thread Tom Lane
David Fetter <[EMAIL PROTECTED]> writes:
> On Mon, Jul 24, 2006 at 11:59:34PM -0400, Tom Lane wrote:
>> The documentation is still in need of help ... in particular, Table
>> B-4 (timezone names) is now out of sync with reality.

> I'll take a whack at that patch this evening PDT or tomorrow evening
> at the latest.  We're too late in the cycle to go over this, but maybe
> we can figure out a way to have this data read from the same data
> source as the pg_timezones VIEW does at compile time.  Keeping two
> such tables in synch seems error-prone.

Well, the problem is exactly that there is no "same data source"
anymore: the local DBA can customize the timezone list all he wants.

We could document what the out-of-the-box settings are, but is it
really useful to duplicate that info in the SGML docs?  We don't
for example provide a copy of psql \df+ output in the SGML docs,
and I'm wondering if this isn't kind of the same animal.

regards, tom lane



Re: [HACKERS] [PATCHES] Resurrecting per-page cleaner for btree

2006-07-25 Thread Bruce Momjian

Tom, I ran your tests with fsync off (as you did), and saw numbers
bouncing between 400-700 tps without my patch, and sticking at 700 tps
with my patch.

---

Bruce Momjian wrote:
> 
> The attached patch requires the new row to fit, and 10% to be free on
> the page.  Would someone test that?
> 
> ---
> 
> Tom Lane wrote:
> > ITAGAKI Takahiro <[EMAIL PROTECTED]> writes:
> > > This is a revised patch originated by Junji TERAMOTO for HEAD.
> > >   [BTree vacuum before page splitting]
> > >   http://archives.postgresql.org/pgsql-patches/2006-01/msg00301.php
> > > I think we can resurrect his idea because we will scan btree pages
> > >   page-at-a-time now; the missing-restarting-point problem went away.
> > 
> > I've applied this but I'm now having some second thoughts about it,
> > because I'm seeing an actual *decrease* in pgbench numbers from the
> > immediately prior CVS HEAD code.  Using
> > pgbench -i -s 10 bench
> > pgbench -c 10 -t 1000 bench (repeat this half a dozen times)
> > with fsync off but all other settings factory-stock, what I'm seeing
> > is that the first run looks really good but subsequent runs tail off in
> > spectacular fashion :-(  Pre-patch there was only minor degradation in
> > successive runs.
> > 
> > What I think is happening is that because pgbench depends so heavily on
> > updating existing records, we get into a state where an index page is
> > about full and there's one dead tuple on it, and then for each insertion
> > we have
> > 
> > * check for uniqueness marks one more tuple dead (the
> >   next-to-last version of the tuple)
> > * newly added code removes one tuple and does a write
> > * now there's enough room to insert one tuple
> > * lather, rinse, repeat, never splitting the page.
> > 
> > The problem is that we've traded splitting a page every few hundred
> > inserts for doing a PageIndexMultiDelete, and emitting an extra WAL
> > record, on *every* insert.  This is not good.
> > 
> > Had you done any performance testing on this patch, and if so what
> > tests did you use?  I'm a bit hesitant to try to fix it on the basis
> > of pgbench results alone.
> > 
> > One possible fix that comes to mind is to only perform the cleanup
> > if we are able to remove more than one dead tuple (perhaps about 10
> > would be good).  Or do the deletion anyway, but then go ahead and
> > split the page unless X amount of space has been freed (where X is
> > more than just barely enough for the incoming tuple).
> > 
> > After all the thought we've put into this, it seems a shame to
> > just abandon it :-(.  But it definitely needs more tweaking.
> > 
> > regards, tom lane
> > 
> 
> -- 
>   Bruce Momjian   [EMAIL PROTECTED]
>   EnterpriseDB    http://www.enterprisedb.com
> 
>   + If your life is a hard drive, Christ can be your backup. +



-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] [PATCHES] Resurrecting per-page cleaner for btree

2006-07-25 Thread Bruce Momjian

The attached patch requires the new row to fit, and 10% to be free on
the page.  Would someone test that?

---

Tom Lane wrote:
> ITAGAKI Takahiro <[EMAIL PROTECTED]> writes:
> > This is a revised patch originated by Junji TERAMOTO for HEAD.
> >   [BTree vacuum before page splitting]
> >   http://archives.postgresql.org/pgsql-patches/2006-01/msg00301.php
> > I think we can resurrect his idea because we will scan btree pages
> >   page-at-a-time now; the missing-restarting-point problem went away.
> 
> I've applied this but I'm now having some second thoughts about it,
> because I'm seeing an actual *decrease* in pgbench numbers from the
> immediately prior CVS HEAD code.  Using
>   pgbench -i -s 10 bench
>   pgbench -c 10 -t 1000 bench (repeat this half a dozen times)
> with fsync off but all other settings factory-stock, what I'm seeing
> is that the first run looks really good but subsequent runs tail off in
> spectacular fashion :-(  Pre-patch there was only minor degradation in
> successive runs.
> 
> What I think is happening is that because pgbench depends so heavily on
> updating existing records, we get into a state where an index page is
> about full and there's one dead tuple on it, and then for each insertion
> we have
> 
>   * check for uniqueness marks one more tuple dead (the
> next-to-last version of the tuple)
>   * newly added code removes one tuple and does a write
>   * now there's enough room to insert one tuple
>   * lather, rinse, repeat, never splitting the page.
> 
> The problem is that we've traded splitting a page every few hundred
> inserts for doing a PageIndexMultiDelete, and emitting an extra WAL
> record, on *every* insert.  This is not good.
> 
> Had you done any performance testing on this patch, and if so what
> tests did you use?  I'm a bit hesitant to try to fix it on the basis
> of pgbench results alone.
> 
> One possible fix that comes to mind is to only perform the cleanup
> if we are able to remove more than one dead tuple (perhaps about 10
> would be good).  Or do the deletion anyway, but then go ahead and
> split the page unless X amount of space has been freed (where X is
> more than just barely enough for the incoming tuple).
> 
> After all the thought we've put into this, it seems a shame to
> just abandon it :-(.  But it definitely needs more tweaking.
> 
>   regards, tom lane
> 

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +
Index: src/backend/access/nbtree/nbtinsert.c
===
RCS file: /cvsroot/pgsql/src/backend/access/nbtree/nbtinsert.c,v
retrieving revision 1.142
diff -c -c -r1.142 nbtinsert.c
*** src/backend/access/nbtree/nbtinsert.c	25 Jul 2006 19:13:00 -0000	1.142
--- src/backend/access/nbtree/nbtinsert.c	26 Jul 2006 01:35:52 -0000
***************
*** 438,445 ****
  			if (P_ISLEAF(lpageop) && P_HAS_GARBAGE(lpageop))
  			{
  _bt_vacuum_one_page(rel, buf);
! if (PageGetFreeSpace(page) >= itemsz)
! 	break;		/* OK, now we have enough space */
  			}
  
  			/*
--- 438,451 ----
  			if (P_ISLEAF(lpageop) && P_HAS_GARBAGE(lpageop))
  			{
  _bt_vacuum_one_page(rel, buf);
! /*
!  *	Free space should be large enough for the new tuple and
!  *	should be >= 10% because scanning the page over and
!  *	over again to get just a little free space is inefficient.
!  */
! if (PageGetFreeSpace(page) >= itemsz &&
! 	PageGetFreeSpace(page) >= BLCKSZ / 10)
! 	break;
  			}
  
  			/*



Re: [PATCHES] Patch for VS.Net 2005's strxfrm() bug

2006-07-25 Thread William ZHANG
"Tom Lane" <[EMAIL PROTECTED]>
> "William ZHANG" <[EMAIL PROTECTED]> writes:
>> When I tried to compile pgsql-8.2devel with VS.Net 2005 and do regression
>> tests,
>> I found the problem. It's a bug in VS.Net 2005:
>> http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=99694
>
> So why don't you use the fixed version of VS?

No service pack is available for VS.Net 2005 yet.

> regards, tom lane
>





Re: [Fwd: Re: [PATCHES] Patch for - Change LIMIT/OFFSET to use int8]

2006-07-25 Thread Bruce Momjian

Patch applied.  Thanks.

It had quite a number of tab/space alignment problems that I fixed.

---


Dhanaraj M wrote:
> I sent this patch already.
> Can somebody verify this patch?
> 
> Thanks
> Dhanaraj

-- Start of included mail From: Dhanaraj M <[EMAIL PROTECTED]>

> Date: Wed, 12 Jul 2006 01:06:13 +0530
> Subject: Re: [PATCHES] Patch for - Change LIMIT/OFFSET to use int8
> To: pgsql-patches@postgresql.org

> I have made the changes appropriately. The regression tests passed.
> Since I do not have enough resources, I could not test with very large
> values; it works for a small table. If anybody can test with int8 values,
> it would be appreciated.
> Also, it gives the following error message when the input exceeds the
> int8 limit.
> 
> ERROR:  bigint out of range
> 
> I attach the patch. Please check it.
> Thanks
> Dhanaraj
> 
> Tom Lane wrote:
> 
> >Dhanaraj M <[EMAIL PROTECTED]> writes:
> >  
> >
> >>I attach the patch for the following TODO item.
> >>  SQL COMMAND
> >>* Change LIMIT/OFFSET to use int8
> >>
> >>
> >
> >This can't possibly be correct.  It doesn't even change the field types
> >in struct LimitState, for example.  You've missed half a dozen places
> >in the planner that would need work, too.
> >
> > regards, tom lane
> >
> >
> 

> *** ./src/backend/executor/nodeLimit.c.orig   Tue Jul 11 22:31:51 2006
> --- ./src/backend/executor/nodeLimit.c	Wed Jul 12 00:46:11 2006
> ***************
> *** 23,28 ****
> --- 23,29 ----
>   
>   #include "executor/executor.h"
>   #include "executor/nodeLimit.h"
> + #include "catalog/pg_type.h"
>   
>   static void recompute_limits(LimitState *node);
>   
> ***************
> *** 226,239 ****
>   {
>   ExprContext *econtext = node->ps.ps_ExprContext;
>   bool        isNull;
>   
> - if (node->limitOffset)
> - {
> - node->offset =
> -     DatumGetInt32(ExecEvalExprSwitchContext(node->limitOffset,
> -                                             econtext,
> -                                             &isNull,
> -                                             NULL));
>   /* Interpret NULL offset as no offset */
>   if (isNull)
>   node->offset = 0;
> --- 227,251 ----
>   {
>   ExprContext *econtext = node->ps.ps_ExprContext;
>   bool        isNull;
> + Oid type;
> + 
> + if (node->limitOffset)
> + {
> + type = ((Const *) node->limitOffset->expr)->consttype;
> + 
> + if (type == INT8OID)
> + node->offset =
> +     DatumGetInt64(ExecEvalExprSwitchContext(node->limitOffset,
> +                                             econtext,
> +                                             &isNull,
> +                                             NULL));
> + else
> + node->offset =
> +     DatumGetInt32(ExecEvalExprSwitchContext(node->limitOffset,
> +                                             econtext,
> +                                             &isNull,
> +                                             NULL));
>   
>   /* Interpret NULL offset as no offset */
>   if (isNull)
>   node->offset = 0;
> ***************
> *** 249,259 ****
>   if (node->limitCount)
>   {
>   node->noCount = false;
> ! node->count =
> !     DatumGetInt32(ExecEvalExprSwitchContext(node->limitCount,
> !                                             econtext,
> !                                             &isNull,
> !                                             NULL));
>   /* Interpret NULL count as no count (LIMIT ALL) */
>   if (isNull)
>   node->noCount = true;
> --- 261,282 ----
>   if (node->limitCount)
>   {
>   node->
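
A minimal sketch of the struct-level half of the change Tom asks for --
widening the LimitState counters to 64 bits instead of branching on the
Const type -- might look like this (the field names follow nodeLimit.c;
everything else is hypothetical, not the submitted patch):

    typedef struct LimitState
    {
        PlanState   ps;             /* node's execution state */
        ExprState  *limitOffset;    /* OFFSET expression, or NULL */
        ExprState  *limitCount;     /* COUNT expression, or NULL */
        int64       offset;         /* widened from the 32-bit value */
        int64       count;          /* widened from the 32-bit value */
        bool        noCount;        /* if true, ignore count */
        /* ... remaining fields unchanged ... */
    } LimitState;

    /*
     * With the parser/planner coercing LIMIT/OFFSET expressions to int8,
     * recompute_limits() could then evaluate them as int64 unconditionally:
     */
    node->offset = DatumGetInt64(ExecEvalExprSwitchContext(node->limitOffset,
                                                           econtext,
                                                           &isNull,
                                                           NULL));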

Re: [PATCHES] Patch for updatable views

2006-07-25 Thread Jaime Casanova

On 7/25/06, Bernd Helmle <[EMAIL PROTECTED]> wrote:

Hi folks,

please find attached an implementation for updatable views. Included is
support for pg_dump and information_schema; regression tests and
documentation are missing. Also, there's currently no upgrade path for older
PostgreSQL versions and user-defined rules on views.


i'm testing the functionality... seems good to me... i will work on
docs and regress if no one objects and bernd is not doing it...


AFAICS, the view will not be updateable if there are casts in the
select list (seems fair to leave that to future revisions), but i think
we must document it.


One thing to think of:

create table testing_serial (id serial primary key, name text);
CREATE TABLE

create view vtest_serial as select * from testing_serial;
CREATE VIEW

insert into vtest_serial values (default, 'name1');
psql:../view_test.sql:81: ERROR:  null value in column "id" violates
not-null constraint

insert into vtest_serial(name) values ('name2');
psql:../view_test.sql:82: ERROR:  null value in column "id" violates
not-null constraint

i still think that in updateable views we need to inherit the default
value of the base table; i still see this code commented out in
rewriteHandler.c


psql:../view_test.sql:73: ERROR:  cannot insert into a view
HINT:  You need an unconditional ON INSERT DO INSTEAD rule.

BTW, we must change this message to something more like 'cannot
insert into a non-updateable view'

-
+   /*
+* I will do this only in case of relkind == RELKIND_VIEW.
+* This is the last attempt to get a value for expr before we
+* consider that expr must be NULL.
+*/
+ /*if (expr == NULL && rel->rd_rel->relkind == RELKIND_VIEW) */
+ /*{ */
+ /*expr = (Node *)makeNode(SetToDefault); */
+ /*return expr; */
+ /*}*/
+

if this functionality is accepted, this is the time to discuss it;
otherwise drop this comment.

With this code we still can create a different default for the view
with ALTER TABLE ADD DEFAULT



I have some code which drops the implicitly created rules silently if someone
wants to have their own rule, but this needs some discussion, i think.



+ #if 0
+   /*
+* Implicit rules should be dropped automatically when someone
+* wants to have its *own* rules on the view. is_implicit is set
+* to NO_OPTION_EXPLICIT in this case so we drop all implicit
+* rules on the specified event type immediately.
+*
+* ???FIXME: do we want this behavior???
+   */
+
+   if ( ev_kind == NO_OPTION_EXPLICIT )
+deleteImplicitRulesOnEvent(event_relation, event_type);
+ #endif

This is a must for compatibility with older versions. Otherwise we
will have views with user-defined rules and implicit rules that will
have unexpected behaviour.



The patch covers the whole SQL92 functionality and doesn't create any
rules if a given view is considered not to be compatible with SQL92
definitions.


I think it is necessary to send some NOTICE when we can't create rules at
all or when we can't create one of them (insert rules are not always
created because they need all not-null columns without defaults to be
in the select list)



The supported
syntax is

CREATE VIEW foo AS  [ WITH [ CASCADED | LOCAL ] CHECK OPTION ]

The check option is implemented as a conditional rule with a simple system
function, which checks the given expression tree to be true or false and raises
an error in the latter case.


the check option is working for all cases i'm trying...


Also, i have dropped support for updatable views which contain indexed array
fields of tables (like SELECT foo[3], foo[2] FROM bar). These are treated as
non-updatable and someone needs his own rules here.



--
regards,
Jaime Casanova

"Programming today is a race between software engineers striving to
build bigger and better idiot-proof programs and the universe trying
to produce bigger and better idiots.
So far, the universe is winning."
  Richard Cook



Re: [PATCHES] Final version of my cube patch - fixed test cases,

2006-07-25 Thread Bruce Momjian

Patch applied.  Thanks.

---


Joshua Reich wrote:
> Any committers want to take ownership of this?
> 
> Thanks,
> 
> Joshua Reich
> 
>  
> 
>   Changes that were made in July 2006 by Joshua Reich I.
>  
> 
> 
>   Code Cleanup:
> 
>   Update the calling convention for all external facing functions. By
>   external facing, I mean all functions that are directly referenced in
>   cube.sql. Prior to my update, all functions used the older V0 calling
>   convention. They now use V1.
> 
>   New Functions:
> 
>   cube(float[]), which makes a zero volume cube from a float array
> 
>   cube(float[], float[]), which allows the user to create a cube from
>   two float arrays; one for the upper right and one for the lower left
>   coordinate.
> 
>   cube_subset(cube, int4[]), to allow you to reorder or choose a subset of
>   dimensions from a cube, using index values specified in the array.
> 

> ? cube.diff
> ? cubeparse.tab.c
> ? logfile
> Index: CHANGES
> ===
> RCS file: /projects/cvsroot/pgsql/contrib/cube/CHANGES,v
> retrieving revision 1.2
> diff -c -r1.2 CHANGES
> *** CHANGES	12 Sep 2002 00:26:00 -0000	1.2
> --- CHANGES	22 Jul 2006 05:15:44 -0000
> ***************
> *** 1,4 ****
> --- 1,28 ----
> + 
> 
> + Changes that were made in July 2006 by Joshua Reich I.
> + 
> 
> + 
> + Code Cleanup:
> + 
> + Update the calling convention for all external facing functions. By external
> + facing, I mean all functions that are directly referenced in cube.sql. Prior
> + to my update, all functions used the older V0 calling convention. They now 
> + use V1.
> + 
> + New Functions: 
> + 
> + cube(float[]), which makes a zero volume cube from a float array
> + 
> + cube(float[], float[]), which allows the user to create a cube from
> + two float arrays; one for the upper right and one for the lower left
> + coordinate.
> + 
> + cube_subset(cube, int4[]), to allow you to reorder or choose a subset of
> + dimensions from a cube, using index values specified in the array.
> + 
> + 
> 
>   Changes that were made in August/September 2002 by Bruno Wolff III.
> + 
> 
>   
>   Note that this was based on a 7.3 development version and changes may not
>   directly work with earlier versions.
> Index: README.cube
> ===
> RCS file: /projects/cvsroot/pgsql/contrib/cube/README.cube,v
> retrieving revision 1.7
> diff -c -r1.7 README.cube
> *** README.cube	27 Jun 2005 01:19:43 -0000	1.7
> --- README.cube	22 Jul 2006 05:15:45 -0000
> ***************
> *** 244,249 ****
> --- 244,259 ----
> This makes a one dimensional cube.
> cube(1,2) == '(1),(2)'
>   
> + cube(float8[]) returns cube
> +   This makes a zero-volume cube using the coordinates defined by the
> +   array.
> +   cube(ARRAY[1,2]) == '(1,2)'
> + 
> + cube(float8[], float8[]) returns cube
> +   This makes a cube, with upper right and lower left coordinates as
> +   defined by the 2 float arrays. Arrays must be of the same length.
> +   cube('{1,2}'::float[], '{3,4}'::float[]) == '(1,2),(3,4)'
> + 
>   cube(cube, float8) returns cube
> This builds a new cube by adding a dimension on to an existing cube with
> the same values for both parts of the new coordinate. This is useful for
> ***
> *** 267,272 
> --- 277,289 
> cube_ur_coord returns the nth coordinate value for the upper right corner
> of a cube. This is useful for doing coordinate transformations.
>   
> + cube_subset(cube, int[]) returns cube
> +   Builds a new cube from an existing cube, using a list of dimension indexes
> +   from an array. Can be used to find both the ll and ur coordinate of a single
> +   dimension, e.g.: cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[2]) = '(3),(7)'
> +   Or can be used to drop dimensions, or reorder them as desired, e.g.:
> +   cube_subset(cube('(1,3,5),(6,7,8)'), ARRAY[3,2,1,1]) = '(5, 3, 1, 1),(8, 7, 6, 6)'
> + 
>   cube_is_point(cube) returns bool
> cube_is_point returns true if a cube is also a point. This is true when 
> the
> two defining corners are the same.
> ***
> *** 327,329 
> --- 344,353 
>   
>   These include changing the precision from single precision to double
>   precision and adding some new functions.
> + 
> + 
> + 
> + Addition

Re: [PATCHES] Time zone definitions to config files

2006-07-25 Thread David Fetter
On Mon, Jul 24, 2006 at 11:59:34PM -0400, Tom Lane wrote:
> "Joachim Wieland" <[EMAIL PROTECTED]> writes:
> > Here's the patch that generalizes the australian_timezones hack by
> > moving the compiled-in time zone definitions into a text file. The
> > text file to use is chosen via a guc.
> 
> Applied with some revisions --- mostly, that I didn't like
> restricting timezone_abbreviations to be changed only via
> postgresql.conf.  The old australian_timezones setting was always
> USERSET, and I think people would have had a legitimate gripe about
> loss of flexibility if its replacement wasn't.  Fortunately this
> wasn't too hard to change.  I also editorialized a bit on the file
> placement and the parsing code.
> 
> The documentation is still in need of help ... in particular, Table
> B-4 (timezone names) is now out of sync with reality.  I am not sure
> whether to try to fix it, or just remove it and tell people to look
> at the pg_timezonenames view.  Thoughts?  If you want to fix it,
> please send a patch.

I'll take a whack at that patch this evening PDT or tomorrow evening
at the latest.  We're too late in the cycle to go over this, but maybe
we can figure out a way to have this data read from the same data
source as the pg_timezones VIEW does at compile time.  Keeping two
such tables in synch seems error-prone.

Cheers,
D
-- 
David Fetter <[EMAIL PROTECTED]> http://fetter.org/
phone: +1 415 235 3778    AIM: dfetter666
  Skype: davidfetter

Remember to vote!



Re: [PATCHES] LDAP lookup of connection parameters

2006-07-25 Thread Bruce Momjian
Albe Laurenz wrote:
> This patch for libpq allows you to enter an LDAP URL in pg_service.conf.
> The URL will be queried and the resulting string(s) parsed for
> keyword = value connection options.
> 
> The idea is to have connection information stored centrally on an LDAP
> server rather than on the client machine.
> 
> On Windows the native library wldap32.dll is used, else OpenLDAP.
> If --enable-thread-safety has been given, -lldap_r is appended to
> PTHREAD_LIBS so that libpq will be linked against the thread-safe
> library.
> 
> There should probably also be a documentation patch for the --with-ldap
> option of ./configure, but I didn't write it because it also belongs to
> the "LDAP Auth" patch.
> 
> I have added German translations for the new messages - how can I get
> translations into other languages?

Translations are done later in the release process.

I have heavily modified your patch to be clearer.  Please review the
attached version and test it to make sure it still works properly. 
Thanks.
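
For reference, the platform split described above (wldap32 on Windows,
OpenLDAP elsewhere) typically comes down to a conditional include plus the
matching link library; a sketch under that assumption, not necessarily the
patch's exact code:

    #ifdef WIN32
    #include <winldap.h>            /* native wldap32.dll */
    #else
    #include <ldap.h>               /* OpenLDAP */
    #endif

    /* both libraries expose the classic API used for the lookup, e.g.: */
    LDAP       *ld = ldap_init(hostname, port);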

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +
Index: configure.in
===
RCS file: /cvsroot/pgsql/configure.in,v
retrieving revision 1.469
diff -c -c -r1.469 configure.in
*** configure.in	24 Jul 2006 16:32:44 -0000	1.469
--- configure.in	25 Jul 2006 21:44:20 -0000
***************
*** 1106,1111 ****
--- 1106,1119 ----
  PGAC_FUNC_GETPWUID_R_5ARG
  PGAC_FUNC_STRERROR_R_INT
  
+ # this will link libpq against libldap_r
+ if test "$with_ldap" = yes ; then
+   if test "$PORTNAME" != "win32"; then
+ AC_CHECK_LIB(ldap_r,ldap_simple_bind, [], [AC_MSG_ERROR([library 'ldap_r' is required for LDAP])])
+ PTHREAD_LIBS="$PTHREAD_LIBS -lldap_r"
+   fi
+ fi
+ 
  CFLAGS="$_CFLAGS"
  LIBS="$_LIBS"
  
Index: doc/src/sgml/libpq.sgml
===
RCS file: /cvsroot/pgsql/doc/src/sgml/libpq.sgml,v
retrieving revision 1.213
diff -c -c -r1.213 libpq.sgml
*** doc/src/sgml/libpq.sgml	4 Jul 2006 13:22:15 -0000	1.213
--- doc/src/sgml/libpq.sgml	25 Jul 2006 21:44:23 -0000
***************
*** 4126,4131 ****
--- 4126,4197 ----
  
  
  
+ 
+  LDAP Lookup of Connection Parameters
+ 
+ 
+  LDAP connection parameter lookup
+ 
+ 
+ 
+ If libpq has been compiled with LDAP support (option
+ --with-ldap for configure)
+ it is possible to retrieve connection options like host
+ or dbname via LDAP from a central server.
+ The advantage is that if the connection parameters for a database change,
+ the connection information doesn't have to be updated on all client machines.
+ 
+ 
+ 
+ LDAP connection parameter lookup uses the connection service file
+ pg_service.conf (see ).
+ A line in a pg_service.conf stanza that starts with
+ ldap:// will be recognized as an LDAP URL and an LDAP
+ query will be performed. The result must be a list of keyword =
+ value pairs which will be used to set connection options.
+ The URL must conform to RFC 1959 and be of the form
+ 
+ ldap://[hostname[:port]]/search_base?attribute?search_scope?filter
+ 
+ where hostname
+ defaults to localhost and
+ port defaults to 389.
+ 
+ 
+ 
+ Processing of pg_service.conf is terminated after
+ a successful LDAP lookup, but is continued if the LDAP server cannot be
+ contacted.  This is to provide a fallback with
+ further LDAP URL lines that point to different LDAP
+ servers, classical keyword = value pairs, or
+ default connection options.
+ If you would rather get an error message in this case, add a
+ syntactically incorrect line after the LDAP URL.
+ 
+ 
+ 
+ A sample LDAP entry that has been created with the LDIF file
+ 
+ version:1
+ dn:cn=mydatabase,dc=mycompany,dc=com
+ changetype:add
+ objectclass:top
+ objectclass:groupOfUniqueNames
+ cn:mydatabase
+ uniqueMember:host=dbserver.mycompany.com
+ uniqueMember:port=5439
+ uniqueMember:dbname=mydb
+ uniqueMember:user=mydb_user
+ uniqueMember:sslmode=require
+ 
+ might be queried with the following LDAP URL:
+ 
+ ldap://ldap.mycompany.com/dc=mycompany,dc=com?uniqueMember?one?(cn=mydatabase)
+ 
+ 
+ 
+ 
+ 
  
  SSL Support
  
Index: src/interfaces/libpq/Makefile
===
RCS file: /cvsroot/pgsql/src/interfaces/libpq/Makefile,v
retrieving revision 1.146
diff -c -c -r1.146 Makefile
*** src/interfaces/libpq/Makefile	18 Jul 2006 22:18:08 -0000	1.146
--- src/interfaces/libpq/Makefile	25 Jul 2006 21:44:27 -0000
***************
*** 62,68 ****
  SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lssl -lsocket -lnsl -lresolv -lintl $(PTHREAD_LIBS), $(LIBS))
  endif
  ifeq ($(PORTNAME), win32)
! SHLIB_LINK += -lshfolder -lwsock32 -lws2_32 $(filter -leay32 -lssleay32 -lcomerr32 -lkrb5_32, $(LIBS))
  endif
  
  
--- 62,68 ----
  SHLIB_LINK += $(filter -lcrypt -ldes -lcom_err -lcrypto -lk5crypto -lkrb5 -lssl -lsocket -lnsl -lre

Re: [PATCHES] Resurrecting per-page cleaner for btree

2006-07-25 Thread Tom Lane
ITAGAKI Takahiro <[EMAIL PROTECTED]> writes:
> This is a revised patch originated by Junji TERAMOTO for HEAD.
>   [BTree vacuum before page splitting]
>   http://archives.postgresql.org/pgsql-patches/2006-01/msg00301.php
> I think we can resurrect his idea because we will scan btree pages
>   page-at-a-time now; the missing-restarting-point problem went away.

I've applied this but I'm now having some second thoughts about it,
because I'm seeing an actual *decrease* in pgbench numbers from the
immediately prior CVS HEAD code.  Using
pgbench -i -s 10 bench
pgbench -c 10 -t 1000 bench (repeat this half a dozen times)
with fsync off but all other settings factory-stock, what I'm seeing
is that the first run looks really good but subsequent runs tail off in
spectacular fashion :-(  Pre-patch there was only minor degradation in
successive runs.

What I think is happening is that because pgbench depends so heavily on
updating existing records, we get into a state where an index page is
about full and there's one dead tuple on it, and then for each insertion
we have

* check for uniqueness marks one more tuple dead (the
  next-to-last version of the tuple)
* newly added code removes one tuple and does a write
* now there's enough room to insert one tuple
* lather, rinse, repeat, never splitting the page.

The problem is that we've traded splitting a page every few hundred
inserts for doing a PageIndexMultiDelete, and emitting an extra WAL
record, on *every* insert.  This is not good.

Had you done any performance testing on this patch, and if so what
tests did you use?  I'm a bit hesitant to try to fix it on the basis
of pgbench results alone.

One possible fix that comes to mind is to only perform the cleanup
if we are able to remove more than one dead tuple (perhaps about 10
would be good).  Or do the deletion anyway, but then go ahead and
split the page unless X amount of space has been freed (where X is
more than just barely enough for the incoming tuple).
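
For concreteness, a sketch of those two heuristics at the call site quoted
earlier in this thread (count_dead_items() and extra_slack are hypothetical;
this is not an actual patch):

    if (P_ISLEAF(lpageop) && P_HAS_GARBAGE(lpageop))
    {
        /* Idea 1: only clean when several items are actually dead */
        if (count_dead_items(page) >= 10)   /* hypothetical helper */
            _bt_vacuum_one_page(rel, buf);

        /*
         * Idea 2: even after cleaning, fall through to a split unless we
         * freed substantially more than the incoming tuple needs.
         * extra_slack is a tuning knob, e.g. some fraction of BLCKSZ.
         */
        if (PageGetFreeSpace(page) >= itemsz + extra_slack)
            break;              /* enough room; no split required */
    }

Bruce's follow-up patch in this thread implements the second idea, requiring
PageGetFreeSpace(page) to be at least BLCKSZ / 10 in addition to fitting the
new tuple.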

After all the thought we've put into this, it seems a shame to
just abandon it :-(.  But it definitely needs more tweaking.

regards, tom lane



Re: [PATCHES] Resurrecting per-page cleaner for btree

2006-07-25 Thread Tom Lane
ITAGAKI Takahiro <[EMAIL PROTECTED]> writes:
> I think we can resurrect his idea because we will scan btree pages
> page-at-a-time now; the missing-restarting-point problem went away.

> Have I missed something? Comments welcome.

I was thinking for awhile just now that this would break the interlock
that guarantees VACUUM can't delete a heap tuple that an indexscanning
process is about to visit.  After further thought, it doesn't, but it's
non-obvious.  I've added the attached commentary to nbtree/README:


On-the-fly deletion of index tuples
---

If a process visits a heap tuple and finds that it's dead and removable
(ie, dead to all open transactions, not only that process), then we can
return to the index and mark the corresponding index entry "known dead",
allowing subsequent index scans to skip visiting the heap tuple.  The
"known dead" marking uses the LP_DELETE bit in ItemIds.  This is currently
only done in plain indexscans, not bitmap scans, because only plain scans
visit the heap and index "in sync" and so there's not a convenient way
to do it for bitmap scans.

Once an index tuple has been marked LP_DELETE it can actually be removed
from the index immediately; since index scans only stop "between" pages,
no scan can lose its place from such a deletion.  We separate the steps
because we allow LP_DELETE to be set with only a share lock (it's exactly
like a hint bit for a heap tuple), but physically removing tuples requires
exclusive lock.  In the current code we try to remove LP_DELETE tuples when
we are otherwise faced with having to split a page to do an insertion (and
hence have exclusive lock on it already).

This leaves the index in a state where it has no entry for a dead tuple
that still exists in the heap.  This is not a problem for the current
implementation of VACUUM, but it could be a problem for anything that
explicitly tries to find index entries for dead tuples.  (However, the
same situation is created by REINDEX, since it doesn't enter dead
tuples into the index.)

It's sufficient to have an exclusive lock on the index page, not a
super-exclusive lock, to do deletion of LP_DELETE items.  It might seem
that this breaks the interlock between VACUUM and indexscans, but that is
not so: as long as an indexscanning process has a pin on the page where
the index item used to be, VACUUM cannot complete its btbulkdelete scan
and so cannot remove the heap tuple.  This is another reason why
btbulkdelete has to get super-exclusive lock on every leaf page, not only
the ones where it actually sees items to delete.
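
For illustration, the marking step described above comes down to something
like this sketch (patterned on _bt_killitems(); simplified, not verbatim
source):

    /*
     * Mark an index item "known dead".  Only a pin and shared lock are
     * held; that is legal because this acts like a heap hint bit, and
     * losing the update is harmless.
     */
    ItemId      iid = PageGetItemId(page, offnum);

    iid->lp_flags |= LP_DELETE;         /* hint only */
    SetBufferCommitInfoNeedsSave(buf);  /* dirty the buffer, no WAL */

Physical removal of marked items happens later, under exclusive lock, when
an insertion would otherwise have to split the page.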


regards, tom lane



Re: [HACKERS] [PATCHES] Patch for VS.Net 2005's strxfrm() bug

2006-07-25 Thread Hiroshi Saito
From: "Andrew Dunstan"

> Hiroshi Saito wrote:
> > Hmm, it seems to be a very unpleasant Microsoft bug. :D
> > I think that adding the following guard is desirable as a workaround.
> >
> > #if defined(_MSC_VER) && _MSC_VER == 1400
> >
> > To be sure, this affects only VS2005.
> >   
> 
> 
> Why is this better than:
> 
>   #if _MSC_VER == 1400
> 
> 
> Surely this will not be true if _MSC_VER is undefined?

I ran into trouble with the unguarded form on OS X; that is the reason for it.
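
To illustrate (assuming the OS X trouble was a warning about evaluating an
undefined macro, such as gcc's -Wundef produces):

    /*
     * An undefined identifier in #if evaluates as 0 (C99 6.10.1), so this
     * is false on non-MSVC compilers -- but gcc with -Wundef warns about
     * the bare use of _MSC_VER:
     */
    #if _MSC_VER == 1400
    /* VS2005 workaround */
    #endif

    /*
     * The guarded form short-circuits, so _MSC_VER is never evaluated when
     * it is undefined and no warning is produced:
     */
    #if defined(_MSC_VER) && _MSC_VER == 1400
    /* VS2005 workaround */
    #endif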

Regards,
Hiroshi Saito




Re: [HACKERS] [PATCHES] Patch for VS.Net 2005's strxfrm() bug

2006-07-25 Thread Andrew Dunstan
Hiroshi Saito wrote:
> Hmm, it seems to be a very unpleasant Microsoft bug. :D
> I think that adding the following guard is desirable as a workaround.
>
> #if defined(_MSC_VER) && _MSC_VER == 1400
>
> To be sure, this affects only VS2005.
>   


Why is this better than:

  #if _MSC_VER == 1400


Surely this will not be true if _MSC_VER is undefined?

cheers

andrew






[PATCHES] Patch for updatable views

2006-07-25 Thread Bernd Helmle

Hi folks,

please find attached an implementation for updatable views. Included is
support for pg_dump and information_schema; regression tests and
documentation are missing. Also, there's currently no upgrade path for
older PostgreSQL versions and user-defined rules on views. I have some
code which drops the implicitly created rules silently if someone wants
to have their own rule, but this needs some discussion, i think.

The patch covers the whole SQL92 functionality and doesn't create any
rules if a given view is considered not to be compatible with SQL92
definitions. The supported syntax is

CREATE VIEW foo AS  [ WITH [ CASCADED | LOCAL ] CHECK OPTION ]

The check option is implemented as a conditional rule with a simple
system function, which checks the given expression tree to be true or
false and raises an error in the latter case. There's also a little
change in the rewriter semantics, as i treat implicit rules (view
update rules created automatically) and explicit rules (rules created
by any user) differently.
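
A sketch of what such a system function could look like (the name,
signature and error text here are hypothetical, not necessarily what the
patch uses):

    /* requires postgres.h and fmgr.h */
    Datum
    view_check_option(PG_FUNCTION_ARGS)
    {
        bool        cond = PG_GETARG_BOOL(0);

        /* the conditional rule passes in the view's qualification */
        if (!cond)
            ereport(ERROR,
                    (errcode(ERRCODE_CHECK_VIOLATION),
                     errmsg("new row violates the view's CHECK OPTION")));

        PG_RETURN_BOOL(true);
    }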

This involves some changes to the system catalog (especially
pg_rewrite and pg_proc), so be prepared to do an initdb. There are new
files in src/backend/rewrite/view_update.c and
src/include/rewrite/view_update.h, too.

Please note that the patch currently breaks some regression tests, but
these are mostly due to duplicated rules on views and additional notice
messages. Also, i have dropped support for updatable views which contain
indexed array fields of tables (like SELECT foo[3], foo[2] FROM bar).
These are treated as non-updatable and someone needs his own rules here.

I hope there aren't too many open points here, so this patch could be
considered for inclusion in 8.2.

Looking forward to your opinions...

--
 Thanks

   Bernd


pgsql-view_update_8.2dev.tar.bz2
Description: Binary data



Re: [HACKERS] [PATCHES] Patch for VS.Net 2005's strxfrm() bug

2006-07-25 Thread Hiroshi Saito
Hi.

"William ZHANG" <[EMAIL PROTECTED]> wrote in message news:[EMAIL PROTECTED]
> When I tried to compile pgsql-8.2devel with VS.Net 2005 and do regression 
> tests,
> I found the problem. It's a bug in VS.Net 2005:
> http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=99694
> 

+   /* http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=99694 */
+ #if _MSC_VER == 1400
+   {
+      char x[1];
+ 
+      xfrmlen = strxfrm(x, val, 0);
+   }
+ #else
    xfrmlen = strxfrm(NULL, val, 0);
+ #endif


Hmm, it seems to be a very unpleasant Microsoft bug. :D
I think that adding the following guard is desirable as a workaround.

#if defined(_MSC_VER) && _MSC_VER == 1400

To be sure, this affects only VS2005.

Regards,
Hiroshi Saito




Re: [PATCHES] Patch for VS.Net 2005's strxfrm() bug

2006-07-25 Thread Tom Lane
"William ZHANG" <[EMAIL PROTECTED]> writes:
> When I tried to compile pgsql-8.2devel with VS.Net 2005 and do regression 
> tests,
> I found the problem. It's a bug in VS.Net 2005:
> http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=99694

So why don't you use the fixed version of VS?

regards, tom lane



[PATCHES] Patch for VS.Net 2005's strxfrm() bug

2006-07-25 Thread William ZHANG
When I tried to compile pgsql-8.2devel with VS.Net 2005 and do regression 
tests,
I found the problem. It's a bug in VS.Net 2005:
http://connect.microsoft.com/VisualStudio/feedback/ViewFeedback.aspx?FeedbackID=99694

-- 
With regards,
William ZHANG 


[uuencoded attachment: strxfrm.patch]

Re: [PATCHES] Patch for units in postgresql.conf

2006-07-25 Thread Alvaro Herrera
Peter Eisentraut wrote:

> + if ((flags & (GUC_UNIT_KB|GUC_UNIT_BLOCKS)) && endptr != value)
> + {
> + bool used = false;
> +
> + while (*endptr == ' ')
> + endptr++;
> +
> + if (strcmp(endptr, "kB") == 0)
> + {
> + val *= KILOBYTE;
> + used = true;
> + endptr += 2;
> + }

Does this mean that one must match the "kB" exactly, with the specified
upper and lower case?

-- 
Alvaro Herrera    http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



Re: [PATCHES] Patch for units in postgresql.conf

2006-07-25 Thread Peter Eisentraut
On Tuesday, 25 July 2006 15:12, Alvaro Herrera wrote:
> Does this mean that one must match the "kB" exactly, with the specified
> upper and lower case?

I think it's better to require exact spelling of the units, or else it'll 
quickly get inconsistent and error-prone.  (Say, if you want to allow "KB", 
why not "mB"?  And "kb" is clearly a kilobit, so we don't want to allow that 
in any case.)

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/



[PATCHES] Patch for units in postgresql.conf

2006-07-25 Thread Peter Eisentraut
Here is a preliminary patch for units in postgresql.conf (and SET and so on, 
of course).  It currently supports memory units only.  Time units would be 
similar.  Let me know if you have comments.

(FWIW, work_mem is a good parameter to play with for trying this out.)

-- 
Peter Eisentraut
http://developer.postgresql.org/~petere/
diff -ur ../cvs-pgsql/src/backend/utils/misc/guc.c ./src/backend/utils/misc/guc.c
--- ../cvs-pgsql/src/backend/utils/misc/guc.c	2006-07-25 10:45:29.000000000 +0200
+++ ./src/backend/utils/misc/guc.c	2006-07-25 13:47:27.000000000 +0200
@@ -83,6 +83,10 @@
 #define MAX_KILOBYTES	(INT_MAX / 1024)
 #endif
 
+#define KILOBYTE 1024
+#define MEGABYTE (1024*1024)
+#define GIGABYTE (1024*1024*1024)
+
 /* XXX these should appear in other modules' header files */
 extern bool Log_disconnections;
 extern bool check_function_bodies;
@@ -1125,7 +1129,8 @@
 	{
 		{"shared_buffers", PGC_POSTMASTER, RESOURCES_MEM,
 			gettext_noop("Sets the number of shared memory buffers used by the server."),
-			NULL
+			NULL,
+			GUC_UNIT_BLOCKS
 		},
 		&NBuffers,
 		1000, 16, INT_MAX / 2, NULL, NULL
@@ -1134,7 +1139,8 @@
 	{
 		{"temp_buffers", PGC_USERSET, RESOURCES_MEM,
 			gettext_noop("Sets the maximum number of temporary buffers used by each session."),
-			NULL
+			NULL,
+			GUC_UNIT_BLOCKS
 		},
 		&num_temp_buffers,
 		1000, 100, INT_MAX / 2, NULL, show_num_temp_buffers
@@ -1167,7 +1173,8 @@
 			gettext_noop("Sets the maximum memory to be used for query workspaces."),
 			gettext_noop("This much memory may be used by each internal "
 		 "sort operation and hash table before switching to "
-		 "temporary disk files.")
+		 "temporary disk files."),
+			GUC_UNIT_KB
 		},
 		&work_mem,
 		1024, 8 * BLCKSZ / 1024, MAX_KILOBYTES, NULL, NULL
@@ -1185,7 +1192,8 @@
 	{
 		{"max_stack_depth", PGC_SUSET, RESOURCES_MEM,
 			gettext_noop("Sets the maximum stack depth, in kilobytes."),
-			NULL
+			NULL,
+			GUC_UNIT_KB
 		},
 		&max_stack_depth,
 		2048, 100, MAX_KILOBYTES, assign_max_stack_depth, NULL
@@ -1469,7 +1477,8 @@
 	{
 		{"log_rotation_size", PGC_SIGHUP, LOGGING_WHERE,
 			gettext_noop("Automatic log file rotation will occur after N kilobytes"),
-			NULL
+			NULL,
+			GUC_UNIT_KB
 		},
 		&Log_RotationSize,
 		10 * 1024, 0, INT_MAX / 1024, NULL, NULL
@@ -3513,13 +3522,46 @@
  * value there.
  */
 static bool
-parse_int(const char *value, int *result)
+parse_int(const char *value, int *result, int flags)
 {
 	long		val;
 	char	   *endptr;
 
 	errno = 0;
 	val = strtol(value, &endptr, 0);
+
+	if ((flags & (GUC_UNIT_KB|GUC_UNIT_BLOCKS)) && endptr != value)
+	{
+		bool used = false;
+
+		while (*endptr == ' ')
+			endptr++;
+
+		if (strcmp(endptr, "kB") == 0)
+		{
+			val *= KILOBYTE;
+			used = true;
+			endptr += 2;
+		}
+		else if (strcmp(endptr, "MB") == 0)
+		{
+			val *= MEGABYTE;
+			used = true;
+			endptr += 2;
+		}
+		else if (strcmp(endptr, "GB") == 0)
+		{
+			val *= GIGABYTE;
+			used = true;
+			endptr += 2;
+		}
+
+		if (used && (flags & GUC_UNIT_KB))
+			val /= 1024;
+		else if (used && (flags & GUC_UNIT_BLOCKS))
+			val /= BLCKSZ;
+	}
+
 	if (endptr == value || *endptr != '\0' || errno == ERANGE
 #ifdef HAVE_LONG_INT_64
 	/* if long > 32 bits, check for overflow of int4 */
@@ -3850,7 +3892,7 @@
 
 if (value)
 {
-	if (!parse_int(value, &newval))
+	if (!parse_int(value, &newval, conf->gen.flags))
 	{
 		ereport(elevel,
 (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
@@ -5082,8 +5124,34 @@
 	val = (*conf->show_hook) ();
 else
 {
-	snprintf(buffer, sizeof(buffer), "%d",
-			 *conf->variable);
+	char unit[3];
+	int result = *conf->variable;
+
+	if (record->flags & (GUC_UNIT_KB|GUC_UNIT_BLOCKS))
+	{
+		if (record->flags & GUC_UNIT_BLOCKS)
+			result *= BLCKSZ/1024;
+
+		if (result % (GIGABYTE/1024) == 0)
+		{
+			result /= (GIGABYTE/1024);
+			strcpy(unit, "GB");
+		}
+		else if (result % (MEGABYTE/1024) == 0)
+		{
+			result /= (MEGABYTE/1024);
+			strcpy(unit, "MB");
+		}
+		else
+		{
+			strcpy(unit, "kB");
+		}
+	}
+	else
+		strcpy(unit, "");
+
+	snprintf(buffer, sizeof(buffer), "%d%s",
+			 result, unit);
 	val = buffer;
 }
 			}
@@ -5144,7 +5212,7 @@
 			struct config_int *conf = (struct config_int *) record;
 			int newval;
 
-			return parse_int(newvalue, &newval) && *conf->variable == newval;
+			return parse_int(newvalue, &newval, record->flags) && *conf->variable == newval;
 		}
 		case PGC_REAL:
 		{
diff -ur ../cvs-pgsql/src/include/utils/guc_tables.h ./src/include/utils/guc_tables.h
--- ../cvs-pgsql/src/include/utils/guc_tables.h	2006-07-20 10:42:11.000000000 +0200
+++ ./src/include/utils/guc_tables.h	2006-07-25 12:37:19.000000000 +0200
@@ -129,6 +129,8 @@
 #define GUC_CUSTOM_PLACEHOLDER	0x0080	/* placeholder for custom variable */
 #define GUC_SUPERUSER_ONLY		0x0100	/* show only to superusers */
 #define GUC_IS_NAME