[PATCHES] More doc patches

2006-09-19 Thread Gregory Stark

Some more doc patches for partitioned tables. In particular, remove the caveat
that INCLUDING CONSTRAINTS doesn't exist and replace it with documentation of,
well, INCLUDING CONSTRAINTS.

Also, there was an instance of "LIKE WITH DEFAULTS" which is actually spelled
"LIKE INCLUDING DEFAULTS".


*** ddl.sgml	05 Sep 2006 22:08:33 +0100  1.61
--- ddl.sgml	19 Sep 2006 10:41:18 +0100
***
*** 2083,2093 

 One convenient way to create a compatible table to be a new child is using
 the LIKE option of CREATE TABLE. This
!creates a table with the same columns with the same type (however note the
!caveat below regarding constraints). Alternatively a compatible table can
!be created by first creating a new child using CREATE
!TABLE then removing the inheritance link with ALTER
!TABLE.

  

--- 2083,2096 

 One convenient way to create a compatible table to be a new child is using
 the LIKE option of CREATE TABLE. This
!creates a table with the same columns with the same type. If there are any
!CHECK constraints defined on the parent table
!the INCLUDING CONSTRAINTS option
!to LIKE may be useful as the new child must have
!constraints matching the parent to be considered compatible. Alternatively a
!compatible table can be created by first creating a new child
!using CREATE TABLE then removing the inheritance link
!with ALTER TABLE.

  

***
*** 2161,2179 
  
  
   
-   There is no convenient way to define a table compatible with a specific
-   parent including columns and constraints. The LIKE
-   option for CREATE TABLE does not copy constraints
-   which makes the tables it creates ineligible for being added using
-   ALTER TABLE. Matching check constraints must be added
-   manually or the table must be created as a child immediately, then if
-   needed removed from the inheritance structure temporarily to be added
-   again later.
-  
- 
- 
- 
-  
If a table is ever removed from the inheritance structure using
ALTER TABLE then all its columns will be marked as
being locally defined. This means DROP COLUMN on the
--- 2164,2169 
***
*** 2577,2632 
  constraint for its partition.
 

  
!   
!
! When the time comes to archive and remove the old data we first remove
! it from the production table using:
  
  
  ALTER TABLE measurement_y2003mm02 NO INHERIT measurement
  
  
! Then we can perform any sort of data modification necessary prior to
! archiving without impacting the data viewed by the production system.
! This could include, for example, deleting or compressing out redundant
! data.
!
!   
!   
!   
  
!Similarly we can add a new partition to handle new data. We can either
!create an empty partition as the original partitions were created
!above, or for some applications it's necessary to bulk load and clean
!data for the new partition. If that operation involves multiple steps
!by different processes it can be helpful to work with it in a fresh
!table outside of the master partitioned table until it's ready to be
!loaded:
  
  
! CREATE TABLE measurement_y2006m02 (LIKE measurement WITH DEFAULTS);
  \COPY measurement_y2006m02 FROM 'measurement_y2006m02'
  UPDATE ...
  ALTER TABLE measurement_y2006m02 ADD CONSTRAINT y2006m02 CHECK ( logdate >= DATE '2006-02-01' AND logdate < DATE '2006-03-01' );
  ALTER TABLE measurement_y2006m02 INHERIT MEASUREMENT;
  
  
-   
-   
- 
-  
  
  
- 
-  As we can see, a complex partitioning scheme could require a
-  substantial amount of DDL. In the above example we would be
-  creating a new partition each month, so it may be wise to write a
-  script that generates the required DDL automatically.
- 
  
 
! The following caveats apply:
 
  
   
--- 2567,2654 
  constraint for its partition.
 

+  
+ 
  
! 
!  As we can see, a complex partitioning scheme could require a
!  substantial amount of DDL. In the above example we would be
!  creating a new partition each month, so it may be wise to write a
!  script that generates the required DDL automatically.
! 
! 
!
!Managing Partitions
! 
!
!  Normally the set of partitions established when initially defining the
!  table is not intended to remain static. It's common to want to remove
!  old partitions of data from the partitioned tables and add new partitions
!  for new data periodically. One of the most important advantages of
!  partitioning is precisely that it allows this otherwise painful task to
!  be executed nearly instantaneously by manipulating the partition
!  structu
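
For reference, the rotation workflow that these doc sections describe can be
collected into a single sketch (table, constraint, and file names are the ones
used in the example above; LIKE spelling per the correction in this patch):

```sql
-- Retire the oldest partition: detach it, then archive or prune at leisure.
ALTER TABLE measurement_y2003m02 NO INHERIT measurement;

-- Stage next month's partition outside the inheritance tree:
CREATE TABLE measurement_y2006m02 (LIKE measurement INCLUDING DEFAULTS);
\copy measurement_y2006m02 FROM 'measurement_y2006m02'
-- ... clean or transform the loaded rows ...
ALTER TABLE measurement_y2006m02 ADD CONSTRAINT y2006m02
    CHECK ( logdate >= DATE '2006-02-01' AND logdate < DATE '2006-03-01' );
ALTER TABLE measurement_y2006m02 INHERIT measurement;
```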

Re: [PATCHES] [HACKERS] Timezone List

2006-09-19 Thread Joachim Wieland
On Sat, Sep 16, 2006 at 04:19:48PM -0400, Tom Lane wrote:
> "Magnus Hagander" <[EMAIL PROTECTED]> writes:
> > Here goes. Tested only on win32 so far, but works there. No docs yet
> > either - need to know if it goes in first ;)

> I've applied this along with some extra work to get it to show GMT
> offsets and DST status, which should be useful for helping people
> to choose which setting they want.  This effectively obsoletes
> Table B-5 as well as B-4 in the SGML docs ... we should probably
> remove both of those in favor of recommending people look at the
> views.

http://momjian.us/main/writings/pgsql/sgml/view-pg-timezone-names.html says
that the names in the view are "recognized" as argument to "SET TIMEZONE".
However, some of them still cannot be used if they contain leap seconds; try
for example:

set timezone to 'Mideast/Riyadh87';

Should we just document that some can't be set, remove those from the view
completely, or add another boolean column, has_leapsecs or similar?

Removing them seems not to be the right idea because you can say:

select now() at time zone 'Mideast/Riyadh87';
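
That is, roughly (a sketch of the behaviour described above; exact error
wording omitted):

```sql
SET timezone TO 'Mideast/Riyadh87';            -- rejected: zone uses leap seconds
SELECT now() AT TIME ZONE 'Mideast/Riyadh87';  -- accepted
```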


Joachim

-- 
Joachim Wieland  [EMAIL PROTECTED]
   GPG key available

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


[PATCHES] Small additions and typos on backup

2006-09-19 Thread Simon Riggs

In SGML, diff -c format

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com
Index: doc/src/sgml/backup.sgml
===
RCS file: /projects/cvsroot/pgsql/doc/src/sgml/backup.sgml,v
retrieving revision 2.86
diff -c -r2.86 backup.sgml
*** doc/src/sgml/backup.sgml	16 Sep 2006 00:30:11 -	2.86
--- doc/src/sgml/backup.sgml	19 Sep 2006 09:57:43 -
***
*** 666,680 
  
  SELECT pg_stop_backup();
  
!  This should return successfully.
  
 
 
  
!  Once the WAL segment files used during the backup are archived as part
!  of normal database activity, you are done.  The file identified by
!  pg_stop_backup's result is the last segment that needs
!  to be archived to complete the backup.
  
 

--- 666,687 
  
  SELECT pg_stop_backup();
  
!  This should return successfully; however, the backup is not yet fully
!  valid.  An automatic switch to the next WAL segment occurs, so all
!  WAL segment files that relate to the backup will now be marked ready for
!  archiving.
  
 
 
  
!  Once the WAL segment files used during the backup are archived, you are
!  done.  The file identified by pg_stop_backup's result is
!  the last segment that needs to be archived to complete the backup.  
!  Archival of these files will happen automatically, since you have
!  already configured archive_command. In many cases, this
!  happens fairly quickly, but you are advised to monitor your archival
!  system to ensure this has taken place so that you can be certain you
!  have a valid backup.  
  
 

***
*** 701,709 
  It is not necessary to be very concerned about the amount of time elapsed
  between pg_start_backup and the start of the actual backup,
  nor between the end of the backup and pg_stop_backup; a
! few minutes' delay won't hurt anything.  You
! must however be quite sure that these operations are carried out in
! sequence and do not overlap.
 
  
 
--- 708,722 
  It is not necessary to be very concerned about the amount of time elapsed
  between pg_start_backup and the start of the actual backup,
  nor between the end of the backup and pg_stop_backup; a
! few minutes' delay won't hurt anything.  However, if you normally run the
! server with full_page_writes disabled, you may notice a drop
! in performance between pg_start_backup and 
! pg_stop_backup.  You must ensure that these backup operations
! are carried out in sequence without any possible overlap, or you will
! invalidate the backup.
!
! 
!
 
  
 
***
*** 1437,1456 
 Failover
  
 
! If the Primary Server fails then the Standby Server should take begin
  failover procedures.
 
  
 
  If the Standby Server fails then no failover need take place. If the
! Standby Server can be restarted, then the recovery process can also be
! immediately restarted, taking advantage of Restartable Recovery.
 
  
 
  If the Primary Server fails and then immediately restarts, you must have
  a mechanism for informing it that it is no longer the Primary. This is
! sometimes known as STONITH (Should the Other Node In The Head), which is
  necessary to avoid situations where both systems think they are the
  Primary, which can lead to confusion and ultimately data loss.
 
--- 1450,1471 
 Failover
  
 
! If the Primary Server fails then the Standby Server should begin
  failover procedures.
 
  
 
  If the Standby Server fails then no failover need take place. If the
! Standby Server can be restarted, even some time later, then the recovery
! process can also be immediately restarted, taking advantage of 
! Restartable Recovery. If the Standby Server cannot be restarted, then a
! full new Standby Server should be created.
 
  
 
  If the Primary Server fails and then immediately restarts, you must have
  a mechanism for informing it that it is no longer the Primary. This is
! sometimes known as STONITH (Shoot the Other Node In The Head), which is
  necessary to avoid situations where both systems think they are the
  Primary, which can lead to confusion and ultimately data loss.
 
***
*** 1479,1490 
 
  
 
! So, switching from Primary to Standby Server can be fast, but requires
  some time to re-prepare the failover cluster. Regular switching from
  Primary to Standby is encouraged, since it allows the regular downtime
! one each system required to maintain HA. This also acts as a test of the
! failover so that it definitely works when you really need it. Written
! administration procedures are advised.
 

  
--- 1494,1505 
 
  
 
! S

[PATCHES] Notes on restoring a backup with --single-transaction

2006-09-19 Thread Simon Riggs

Additional notes for pg_dump/restore

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com
Index: doc/src/sgml/backup.sgml
===
RCS file: /projects/cvsroot/pgsql/doc/src/sgml/backup.sgml,v
retrieving revision 2.86
diff -c -r2.86 backup.sgml
*** doc/src/sgml/backup.sgml	16 Sep 2006 00:30:11 -	2.86
--- doc/src/sgml/backup.sgml	19 Sep 2006 11:55:32 -
***
*** 123,128 
--- 123,146 
 
  
 
+ By default, the psql script will continue to execute
+ after an SQL error is encountered and returns 0 in that case. You may
+ wish to use the following command at the top of the script to alter
+ that behaviour, making psql stop at the first error and exit with
+ return code 3.
+ 
+ \set ON_ERROR_STOP
+ 
+ Either way, you will be left with a partially restored database. Alternatively,
+ you can specify that the whole dump is run as a single transaction, so
+ the restore is fully completed, or fully rolled back. This mode can be
+ specified using the command line options psql -1 or
+ psql --single-transaction. When using that option, be warned 
+ that even the smallest of errors can roll back a restore that has already
+ run for many hours. However, that may still be preferable to clearing up
+ a complex database after a partially restored dump.
+   
+ 
+
  Once restored, it is wise to run  on each database so the optimizer has
  useful statistics. An easy way to do this is to run

---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings


[PATCHES] new patch for uuid datatype

2006-09-19 Thread Gevik Babakhani
Folks,

I would like to submit an updated patch for the uuid datatype.
I have removed the new_guid() function assuming we want a generator
function in the contrib.
I also have included a regression test and added the default copyright
header for the new files.

If this patch gets accepted then I can start working on the
documentation.

Regards,
Gevik.
*** ./backend/utils/adt/Makefile.orig	2006-09-19 12:05:41.0 +0200
--- ./backend/utils/adt/Makefile	2006-09-19 12:06:47.0 +0200
***
*** 15,21 
  endif
  endif
  
! OBJS = acl.o arrayfuncs.o array_userfuncs.o arrayutils.o bool.o \
  	cash.o char.o date.o datetime.o datum.o domains.o \
  	float.o format_type.o \
  	geo_ops.o geo_selfuncs.o int.o int8.o like.o lockfuncs.o \
--- 15,21 
  endif
  endif
  
! OBJS = uuid.o acl.o arrayfuncs.o array_userfuncs.o arrayutils.o bool.o \
  	cash.o char.o date.o datetime.o datum.o domains.o \
  	float.o format_type.o \
  	geo_ops.o geo_selfuncs.o int.o int8.o like.o lockfuncs.o \
*** ./include/catalog/pg_amop.h.orig	2006-09-19 12:05:39.0 +0200
--- ./include/catalog/pg_amop.h	2006-09-19 12:06:47.0 +0200
***
*** 594,599 
--- 594,601 
  DATA(insert (	2232	0 1 f 2334 ));
  /* aclitem_ops */
  DATA(insert (	2235	0 1 f  974 ));
+ /* uuid */
+ DATA(insert (	2868	0 1 f  2866 ));
  
  /*
   *	gist box_ops
***
*** 886,889 
--- 888,898 
  DATA(insert (	2780	0 3  t  2752 ));
  DATA(insert (	2780	0 4  t	1070 ));
  
+ /* btree uuid */
+ DATA(insert (	 2873	0 1 f	2869 ));
+ DATA(insert (	 2873	0 2 f	2871 ));
+ DATA(insert (	 2873	0 3 f	2866 ));
+ DATA(insert (	 2873	0 4 f	2872));
+ DATA(insert (	 2873	0 5 f	2870 ));
+ 
  #endif   /* PG_AMOP_H */
*** ./include/catalog/pg_amproc.h.orig	2006-09-19 12:05:39.0 +0200
--- ./include/catalog/pg_amproc.h	2006-09-19 12:06:47.0 +0200
***
*** 125,130 
--- 125,132 
  DATA(insert (	2233	0 1  380 ));
  DATA(insert (	2234	0 1  381 ));
  DATA(insert (	2789	0 1 2794 ));
+ /*uuid*/
+ DATA(insert (	2873	0 1 2863 ));
  
  
  /* hash */
***
*** 161,166 
--- 163,171 
  DATA(insert (	2231	0 1 456 ));
  DATA(insert (	2232	0 1 455 ));
  DATA(insert (	2235	0 1 329 ));
+ /* uuid */
+ DATA(insert (	2868	0 1 2874 ));
+ 
  
  
  /* gist */
*** ./include/catalog/pg_cast.h.orig	2006-09-19 12:05:39.0 +0200
--- ./include/catalog/pg_cast.h	2006-09-19 12:06:47.0 +0200
***
*** 196,201 
--- 196,206 
  /* Cross-category casts between int4 and "char" */
  DATA(insert (	18	 23   77 e ));
  DATA(insert (	23	 18   78 e ));
+ /* uuid */
+ DATA(insert ( 25   2854 2876 a ));
+ DATA(insert ( 2854 25   2877 a ));
+ DATA(insert ( 1043 2854 2878 a ));
+ DATA(insert ( 2854 1043 2879 a ));
  
  /*
   * Datetime category
*** ./include/catalog/pg_opclass.h.orig	2006-09-19 12:05:39.0 +0200
--- ./include/catalog/pg_opclass.h	2006-09-19 12:06:47.0 +0200
***
*** 208,211 
--- 208,216 
  DATA(insert OID = 2779 (	2742	_reltime_ops	PGNSP PGUID  1024 t 703 ));
  DATA(insert OID = 2780 (	2742	_tinterval_ops	PGNSP PGUID  1025 t 704 ));
  
+ /* uuid */
+ DATA(insert OID = 2873 (	403	uuid_ops	PGNSP PGUID  2854 t 0 ));
+ DATA(insert OID = 2868 (	405	uuid_ops	PGNSP PGUID  2854 t 0 ));
+ 
+ 
  #endif   /* PG_OPCLASS_H */
*** ./include/catalog/pg_operator.h.orig	2006-09-19 12:05:39.0 +0200
--- ./include/catalog/pg_operator.h	2006-09-19 12:06:47.0 +0200
***
*** 883,888 
--- 883,898 
  DATA(insert OID = 2751 (  "@"	   PGNSP PGUID b f 2277 2277	16 2752	 0	 0	 0	 0	 0 arraycontains contsel contjoinsel ));
  DATA(insert OID = 2752 (  "~"	   PGNSP PGUID b f 2277 2277	16 2751	 0	 0	 0	 0	 0 arraycontained contsel contjoinsel ));
  
+ /* uuid operators */
+ 
+ DATA(insert OID =  2866 ( "="	   PGNSP PGUID b t	2854 2854 16 2866 2867	2869 2869 2869 2870 uuid_eq eqsel eqjoinsel ));
+ DATA(insert OID =  2867 ( "<>"   PGNSP PGUID b f  2854 2854 16 2867 2866  0 0 0 0 uuid_ne neqsel neqjoinsel ));
+ DATA(insert OID =  2869 ( "<"    PGNSP PGUID b f  2854 2854 16 2870 2872  0 0 0 0 uuid_lt scalarltsel scalarltjoinsel ));
+ DATA(insert OID =  2870 ( ">"    PGNSP PGUID b f  2854 2854 16 2869 2871  0 0 0 0 uuid_gt scalargtsel scalargtjoinsel ));
+ DATA(insert OID =  2871 ( "<="   PGNSP PGUID b f  2854 2854 16 2872 2870  0 0 0 0 uuid_le scalarltsel scalarltjoinsel ));
+ DATA(insert OID =  2872 ( ">="   PGNSP PGUID b f  2854 2854 16 2871 2869  0 0 0 0 uuid_ge scalargtsel scalargtjoinsel ));
+ 
+ 
  
  /*
   * function prototypes
*** ./include/catalog/pg_proc.h.orig	2006-09-19 12:05:39.0 +0200
--- ./include/catalog/pg_proc.h	2006-09-19 12:37:38.0 +0200
***
*** 3940,3945 
--- 3940,3980 
  DATA(insert OID = 2749 (  arraycontained	   PGNSP PGUID 12 f f t f i 2 16 "2277 2277" _null_ _null_ _null_ arraycontained - _null_ ));
  DESCR("anyarray contained")

Re: [HACKERS] [PATCHES] Patch for UUID datatype (beta)

2006-09-19 Thread Jim C. Nasby
On Mon, Sep 18, 2006 at 07:45:07PM -0400, [EMAIL PROTECTED] wrote:
> I would not use a 100% random number generator for a UUID value as was
> suggested. I prefer inserting the MAC address and the time, to at
> least allow me to control if a collision is possible. This is not easy
> to do using a few lines of C code. I'd rather have a UUID type in core
> with no generation routine, than no UUID type in core because the code
> is too complicated to maintain, or not portable enough.

As others have mentioned, using MAC address doesn't remove the
possibility of a collision.

Maybe a good compromise that would allow a generator function to go into
the backend would be to combine the current time with a random number.
That will ensure that you won't get a dupe, so long as your clock never
runs backwards.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)

---(end of broadcast)---
TIP 3: Have you checked our extensive FAQ?

   http://www.postgresql.org/docs/faq


Re: [PATCHES] Dynamic linking on AIX

2006-09-19 Thread Albe Laurenz
This is a second try; this patch replaces
http://archives.postgresql.org/pgsql-patches/2006-09/msg00185.php

Incorporated are
- Tom Lane's suggestions for a more sane approach at
  fixing Makefile.shlib
- Rocco Altier's additions to make the regression test run
  by setting LIBPATH appropriately.
- Two changes in /src/interfaces/ecpg/test/Makefile.regress
  and src/interfaces/ecpg/test/compat_informix/Makefile
  without which 'make' fails if configure was called
  with --disable-shared (this is not AIX-specific).

The line in src/makefiles/Makefile.aix
where I set 'libpath' also seems pretty ugly to me.
Do you have a better idea?

Basically I need to convert LDFLAGS like:
-L../../dir -L /extra/lib -lalib -Wl,yxz -L/more/libs
to
:/extra/lib:/more/libs
and couldn't think of a better way to do it.

It will fail if there is a -L path that contains
a blank :^(
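
The transformation can be imitated in plain shell for illustration (a sketch
only; like the makefile version, it assumes no spaces inside or after -L):

```shell
LDFLAGS='-L../../dir -L/extra/lib -lalib -Wl,yxz -L/more/libs'
# Keep only the absolute -L paths, strip the -L prefix, join with ':' ...
libpath=$(printf '%s\n' $LDFLAGS | sed -n 's|^-L\(/.*\)|\1|p' | paste -sd: -)
# ... then prepend ':' and append the standard library directories:
libpath=":$libpath:/usr/lib:/lib"
echo "$libpath"
```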

Yours,
Laurenz Albe


aixlink2.patch
Description: aixlink2.patch



Re: [PATCHES] Dynamic linking on AIX

2006-09-19 Thread Bruce Momjian

I still would like to see a paragraph describing how AIX is different
from other platforms and what we are doing here.

---

Albe Laurenz wrote:
> This is a second try; this patch replaces
> http://archives.postgresql.org/pgsql-patches/2006-09/msg00185.php
> 
> Incorporated are
> - Tom Lane's suggestions for a more sane approach at
>   fixing Makefile.shlib
> - Rocco Altier's additions to make the regression test run
>   by setting LIBPATH appropriately.
> - Two changes in /src/interfaces/ecpg/test/Makefile.regress
>   and src/interfaces/ecpg/test/compat_informix/Makefile
>   without which 'make' fails if configure was called
>   with --disable-shared (this is not AIX-specific).
> 
> The line in src/makefiles/Makefile.aix
> where I set 'libpath' also seems pretty ugly to me.
> Do you have a better idea?
> 
> Basically I need to convert LDFLAGS like:
> -L../../dir -L /extra/lib -lalib -Wl,yxz -L/more/libs
> to
> :/extra/lib:/more/libs
> and couldn't think of a better way to do it.
> 
> It will fail if there is a -L path that contains
> a blank :^(
> 
> Yours,
> Laurenz Albe

Content-Description: aixlink2.patch

[ Attachment, skipping... ]

> 
> ---(end of broadcast)---
> TIP 9: In versions below 8.0, the planner will ignore your desire to
>choose an index scan if your joining column's datatypes do not
>match

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDBhttp://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +

---(end of broadcast)---
TIP 2: Don't 'kill -9' the postmaster


Re: [HACKERS] [PATCHES] Patch for UUID datatype (beta)

2006-09-19 Thread mark
On Tue, Sep 19, 2006 at 08:20:13AM -0500, Jim C. Nasby wrote:
> On Mon, Sep 18, 2006 at 07:45:07PM -0400, [EMAIL PROTECTED] wrote:
> > I would not use a 100% random number generator for a UUID value as was
> > suggested. I prefer inserting the MAC address and the time, to at
> > least allow me to control if a collision is possible. This is not easy
> > to do using a few lines of C code. I'd rather have a UUID type in core
> > with no generation routine, than no UUID type in core because the code
> > is too complicated to maintain, or not portable enough.
> As others have mentioned, using MAC address doesn't remove the
> possibility of a collision.

It does, as I control the MAC address. I can choose not to overwrite it.
I can choose to ensure that any cases where it is overwritten, it is
overwritten with a unique value. Random number does not provide this
level of control.

> Maybe a good compromise that would allow a generator function to go into
> the backend would be to combine the current time with a random number.
> That will ensure that you won't get a dupe, so long as your clock never
> runs backwards.

Which standard UUID generation function would you be thinking of? 
Inventing a new one doesn't seem sensible. I'll have to read over the
versions again...

Cheers,
mark

-- 
[EMAIL PROTECTED] / [EMAIL PROTECTED] / [EMAIL PROTECTED] 
__
.  .  _  ._  . .   .__.  . ._. .__ .   . . .__  | Neighbourhood Coder
|\/| |_| |_| |/|_ |\/|  |  |_  |   |/  |_   | 
|  | | | | \ | \   |__ .  |  | .|. |__ |__ | \ |__  | Ottawa, Ontario, Canada

  One ring to rule them all, one ring to find them, one ring to bring them all
   and in the darkness bind them...

   http://mark.mielke.cc/




[PATCHES] Incrementally Updated Backup

2006-09-19 Thread Simon Riggs

Way past feature freeze, but this small change allows a powerful new
feature utilising the Restartable Recovery capability. Very useful for
very large database backups... 

Includes full documentation.

Perhaps a bit rushed, but inclusion in 8.2 would be great. (Ouch, don't
shout back, read the patch first)

-
Docs copied here as better explanation:

   Incrementally Updated Backups

   
Restartable Recovery can also be utilised to avoid the need to take
regular complete base backups, thus improving backup performance in
situations where the server is heavily loaded or the database is
very large.  This concept is known as incrementally updated backups.
   

   
If we take a backup of the server files after a recovery is partially
completed, we will be able to restart the recovery from the last
restartpoint. This backup is now further forward along the timeline
than the original base backup, so we can refer to it as an
incrementally updated backup. If we need to recover, it will be faster
to recover from the incrementally updated backup than from the base
backup.
   

   
The startup_after_recovery option in the recovery.conf
file is provided to allow the recovery to complete up to the current
last WAL segment, yet without starting the database. This option
allows us to stop the server and take a backup of the partially
recovered server files: this is the incrementally updated backup.
   

   
We can use the incrementally updated backup concept to come up with a
streamlined backup schedule. For example:
  
   

 Set up continuous archiving

   
   

 Take weekly base backup

   
   

 After 24 hours, restore base backup to another server, then run a
 partial recovery and take a backup of the latest database state to
 produce an incrementally updated backup.

   
   

 After next 24 hours, restore the incrementally updated backup to the
 second server, then run a partial recovery and, at the end, take a
 backup of the partially recovered files.

   
   

 Repeat previous step each day, until the end of the week.

   
  
   

   
A weekly backup need only be taken once per week, yet the same level
of protection is offered as if base backups were taken nightly.
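
Put in recovery.conf terms, the partial-recovery step would look roughly like
this (a sketch; the restore_command path is illustrative, and the parameter is
the one introduced by this patch):

```
restore_command = 'cp /mnt/server/archivedir/%f "%p"'
# Recover to the end of the available WAL, but do not start the
# database, so the recovered files can be copied off as the
# incrementally updated backup:
startup_after_recovery = false
```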
   

  

-- 
  Simon Riggs 
  EnterpriseDB   http://www.enterprisedb.com
Index: doc/src/sgml/backup.sgml
===
RCS file: /projects/cvsroot/pgsql/doc/src/sgml/backup.sgml,v
retrieving revision 2.86
diff -c -r2.86 backup.sgml
*** doc/src/sgml/backup.sgml	16 Sep 2006 00:30:11 -	2.86
--- doc/src/sgml/backup.sgml	19 Sep 2006 14:03:32 -
***
*** 1063,1068 
--- 1063,1081 

   
  
+  
+   startup_after_recovery (boolean)
+   
+   
+
+ Allows an incrementally updated backup to be taken.
+ See  for discussion.
+
+   
+  
+ 
 
  
 
***
*** 1137,1142 
--- 1150,1229 
 

  
+   
+Incrementally Updated Backups
+ 
+   
+incrementally updated backups
+   
+ 
+
+ Restartable Recovery can also be utilised to avoid the need to take
+ regular complete base backups, thus improving backup performance in
+ situations where the server is heavily loaded or the database is
+ very large.  This concept is known as incrementally updated backups.
+
+ 
+
+ If we take a backup of the server files after a recovery is partially
+ completed, we will be able to restart the recovery from the last
+ restartpoint. This backup is now further forward along the timeline
+ than the original base backup, so we can refer to it as an incrementally
+ updated backup. If we need to recover, it will be faster to recover from 
+ the incrementally updated backup than from the base backup.
+
+ 
+
+ The startup_after_recovery option in the recovery.conf
+ file is provided to allow the recovery to complete up to the current last
+ WAL segment, yet without starting the database. This option allows us
+ to stop the server and take a backup of the partially recovered server
+ files: this is the incrementally updated backup.
+
+ 
+
+ We can use the incrementally updated backup concept to come up with a
+ streamlined backup schedule. For example:
+   
+
+ 
+  Set up continuous archiving
+ 
+
+
+ 
+  Take weekly base backup
+ 
+
+
+ 
+  After 24 hours, restore base backup to another server, then run a
+  partial recovery and take a backup of the latest database state to
+  produce an incrementally updated backup.
+ 
+
+
+ 
+  After next 24 hours, restore the incrementally updated backup to the
+  second server, then run a partial recovery, at the end, take a backup
+  of the part

Re: [HACKERS] [PATCHES] Patch for UUID datatype (beta)

2006-09-19 Thread Jim C. Nasby
On Tue, Sep 19, 2006 at 09:51:23AM -0400, [EMAIL PROTECTED] wrote:
> On Tue, Sep 19, 2006 at 08:20:13AM -0500, Jim C. Nasby wrote:
> > On Mon, Sep 18, 2006 at 07:45:07PM -0400, [EMAIL PROTECTED] wrote:
> > > I would not use a 100% random number generator for a UUID value as was
> > > suggested. I prefer inserting the MAC address and the time, to at
> > > least allow me to control if a collision is possible. This is not easy
> > > to do using a few lines of C code. I'd rather have a UUID type in core
> > > with no generation routine, than no UUID type in core because the code
> > > is too complicated to maintain, or not portable enough.
> > As others have mentioned, using MAC address doesn't remove the
> > possibility of a collision.
> 
> It does, as I control the MAC address. I can choose not to overwrite it.
> I can choose to ensure that any cases where it is overwritten, it is
> overwritten with a unique value. Random number does not provide this
> level of control.
> 
> > Maybe a good compromise that would allow a generator function to go into
> > the backend would be to combine the current time with a random number.
> > That will ensure that you won't get a dupe, so long as your clock never
> > runs backwards.
> 
> Which standard UUID generation function would you be thinking of? 
> Inventing a new one doesn't seem sensible. I'll have to read over the
> versions again...

I don't think it exists, but I don't see how that's an issue. Let's look
at an extreme case: take the amount of random entropy used for the
random-only generation method. Append that to the current time in UTC,
and hash it. Thanks to the time component, you've now greatly reduced
the odds of a duplicate, probably by many orders of magnitude.

Ultimately, I'm OK with a generator that's only in contrib, provided
that there's at least one that will work on all OSes.
-- 
Jim Nasby[EMAIL PROTECTED]
EnterpriseDB  http://enterprisedb.com  512.569.9461 (cell)
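
The compromise Jim describes, mixing the current time with random entropy, can
be sketched in a few lines; note this is illustrative only, not one of the
standard RFC 4122 generation algorithms:

```python
import hashlib
import os
import time

def time_random_uuid() -> str:
    """Hash the current time plus random entropy, and format the first
    16 bytes of the digest as a UUID-shaped string.  A duplicate then
    requires both a clock anomaly and a random collision."""
    material = time.time_ns().to_bytes(8, "big") + os.urandom(16)
    h = hashlib.sha1(material).hexdigest()
    return "-".join((h[0:8], h[8:12], h[12:16], h[16:20], h[20:32]))

print(time_random_uuid())
```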



Re: [PATCHES] Dynamic linking on AIX

2006-09-19 Thread Albe Laurenz
> I still would like to see a paragraph describing how AIX is different
> from other platforms and what we are doing here.

Ok, I'll try to sum it up:

Shared libraries in AIX are different from shared libraries
in Linux.

A shared library on AIX is an 'ar' archive containing
shared objects.
A shared object is produced by the linker when invoked
appropriately (e.g. with -G); it is what we call a
shared library on Linux.

-> On AIX, you can do a static as well as a dynamic
-> link against a shared library, it depends on how you
-> invoke the linker.

When you link statically, the shared objects from
the library are added to your executable as required;
when you link dynamically, only references
to the shared objects are included in the executable.

Consequently you do not need a separate static library
on AIX if you have a dynamic library.

However, you CAN have static libraries (ar archives
containing *.o files), and the linker will link
against them. This will of course always be a
static link.

When the AIX linker searches for libraries to link,
it will look for a library libxy.a as well as for a
single shared object libxy.so when you tell it
to link with -lxy. When it finds both in the same directory,
it will prefer libxy.a unless invoked with -brtl.

This is where the problem occurs:

By default, PostgreSQL will (in the Linux way) create
a shared object libpq.so and a static library libpq.a
in the same directory.

Up to now, since the linker was invoked without the
-brtl flag, linking on AIX was always static, as the
linker preferred libpq.a over libpq.so.

We could have solved the problem by linking with
-brtl on AIX, but we chose to go a more AIX-conforming
way so that third party programs linking against
PostgreSQL libraries will not be fooled into
linking statically by default.

The 'new way' on AIX is:
- Create libxy.so.n as before from the static library
  libxy.a with the linker.
- Remove libxy.a
- Recreate libxy.a as a dynamic library with
  ar -cr libxy.a libxy.so.n
- Only install libxy.a, do not install libxy.so
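
Spelled out as commands, those steps look roughly like this (an AIX-specific
sketch; the exact linker invocation and version suffix are illustrative):

```
ld -G -o libxy.so.1 <object files, libraries>   # build the shared object
rm -f libxy.a                                   # drop the static archive
ar -cr libxy.a libxy.so.1                       # recreate libxy.a as a dynamic library
# install only libxy.a
```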

Since linking is dynamic on AIX now, we have a new
problem:

We must make sure that the executable finds
its library even if the library is not installed in
one of the standard library paths (/usr/lib or /lib).

On Linux this is done with an RPATH, on AIX the
equivalent is LIBPATH that can be specified at link
time with -blibpath: .
If you do not specify the LIBPATH, it is automatically
computed from the -L arguments given to the linker.
The LIBPATH, when set, must contain ALL directories where
shared libraries should be searched, including
the standard library directories.

Makefile.aix has been changed to link executables
with a LIBPATH that contains --libdir when PostgreSQL
is configured with --enable-rpath (the default).

The AIX equivalent for the Linux environment variable
LD_LIBRARY_PATH is LIBPATH.

The regression tests rely on LD_LIBRARY_PATH and have
to be changed to set LIBPATH as well.


I hope that's good enough,
Laurenz Albe

---(end of broadcast)---
TIP 9: In versions below 8.0, the planner will ignore your desire to
   choose an index scan if your joining column's datatypes do not
   match


Re: [PATCHES] [HACKERS] Incrementally Updated Backup

2006-09-19 Thread Heikki Linnakangas

Simon Riggs wrote:

+
+ if (startupAfterRecovery)
+ ereport(ERROR,
+ (errmsg("recovery ends normally with startup_after_recovery=false")));
+


I find this part of the patch a bit ugly. Isn't there a better way to 
exit than throwing an error that's not really an error?


--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com



Re: [PATCHES] Small additions and typos on backup

2006-09-19 Thread Neil Conway
On Tue, 2006-09-19 at 11:09 +0100, Simon Riggs wrote:
> In SGML, diff -c format

Applied, thanks.

(I'll also apply the single-transaction doc patch later today if no one
beats me to it.)

-Neil





Re: [PATCHES] Dynamic linking on AIX

2006-09-19 Thread Tom Lane
"Albe Laurenz" <[EMAIL PROTECTED]> writes:
> This is a second try; this patch replaces
> http://archives.postgresql.org/pgsql-patches/2006-09/msg00185.php

Looks good, applied.

> The line in src/makefiles/Makefile.aix
> where I set 'libpath' also seems pretty ugly to me.
> It will fail if there is a -L path that contains
> a blank :^(

We were already assuming no spaces in -L switches, see the $filter
manipulations in Makefile.shlib.  So I simplified it to

libpath := $(shell echo $(subst -L,:,$(filter -L/%,$(LDFLAGS))) | sed -e's/ //g'):/usr/lib:/lib

It's annoying to have to shell out to sed to get rid of the spaces, but
this is gmake's fault for having such a brain-dead function call syntax.
After looking at the gmake manual, it is possible to use $subst to get
rid of spaces, but it's even uglier (and much harder to follow) than
the above ...
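The effect of that make expression can be sketched in plain Python (the LDFLAGS value here is an illustrative assumption, not the actual build flags):

```python
# Mimic what Makefile.aix computes: collect the -L/ search paths
# out of LDFLAGS, join them with ':', and append the standard
# library directories.
ldflags = "-L/usr/local/pgsql/lib -L/opt/lib -O2"   # hypothetical flags
paths = [flag[2:] for flag in ldflags.split() if flag.startswith("-L/")]
libpath = ":".join(paths) + ":/usr/lib:/lib"
print(libpath)  # /usr/local/pgsql/lib:/opt/lib:/usr/lib:/lib
```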

regards, tom lane



Re: [PATCHES] [HACKERS] Incrementally Updated Backup

2006-09-19 Thread Tom Lane
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> Simon Riggs wrote:
>> +
>> + if (startupAfterRecovery)
>> + ereport(ERROR,
>> + (errmsg("recovery ends normally with startup_after_recovery=false")));
>> +

> I find this part of the patch a bit ugly. Isn't there a better way to 
> exit than throwing an error that's not really an error?

This patch has obviously been thrown together with no thought and even
less testing.  It breaks the normal case (I think the above if-test is
backwards), and I don't believe that it works for the advertised purpose
either (because nothing gets done to force a checkpoint before aborting,
thus the files on disk are not up to date with the end of WAL).

Also, I'm not sold that the concept is even useful.  Apparently the idea
is to offload the expense of taking periodic base backups from a master
server, by instead backing up a PITR slave's fileset --- which is fine.
But why in the world would you want to stop the slave to do it?  ISTM
we would want to arrange things so that you can copy the slave's files
while it continues replicating, just as with a standard base backup.

regards, tom lane



Re: [HACKERS] [PATCHES] DOC: catalog.sgml

2006-09-19 Thread Zdenek Kotala

Tom Lane napsal(a):

Zdenek Kotala <[EMAIL PROTECTED]> writes:
I enhanced the overview of catalog tables a little bit, adding two new
columns: the first one is the OID of the catalog table, and the second one
contains attributes which indicate whether the table is bootstrap, has OIDs,
and is global.


Why is this a good idea?  It seems like mere clutter.



I'm working on pg_upgrade and this information is important for me, and I
think it should interest others as well. You can easily determine the file a
table is related to if you know the OID, which is especially useful when the
database is shut down. Whether a catalog table is global/shared or local is
very important, and it is not mentioned anywhere. Whether it is created with
OIDs or at bootstrap is not important for standard purposes; it is there
only for completeness.


I know that people who have been hacking Postgres for ten years know this,
but this is the internals chapter, and it should be useful for newbies.
Besides, it is documentation, and this completes the information. You could
just as well ask why we describe the page layout there, since it is covered
in the source code, and so on...


Zdenek



[PATCHES] docs for advisory locks

2006-09-19 Thread Merlin Moncure

ok, here are the promised docs for the advisory locks.

some quick notes: this is my first non-trivial patch (albeit only
documentation) and i am a complete docbook novice.  i also don't have
the capability to build docbook at the moment, so consider this a
very rough draft.  any comments are welcome, particularly from someone
willing to build the sgml and check for errors... sorry about that, but
i wanted to get this up as soon as possible.

thanks
merlin


advisory_docs.patch
Description: Binary data



Re: [PATCHES] Incrementally Updated Backup

2006-09-19 Thread Bruce Momjian

No, too late.

---

Simon Riggs wrote:
> 
> Way past feature freeze, but this small change allows a powerful new
> feature utilising the Restartable Recovery capability. Very useful for
> very large database backups... 
> 
> Includes full documentation.
> 
> Perhaps a bit rushed, but inclusion in 8.2 would be great. (Ouch, don't
> shout back, read the patch first)
> 
> -
> Docs copied here as better explanation:
> 
>Incrementally Updated Backups
> 
>
> Restartable Recovery can also be utilised to avoid the need to take
> regular complete base backups, thus improving backup performance in
> situations where the server is heavily loaded or the database is
> very large.  This concept is known as incrementally updated backups.
>
> 
>
> If we take a backup of the server files after a recovery is
> partially
> completed, we will be able to restart the recovery from the last
> restartpoint. This backup is now further forward along the timeline
> than the original base backup, so we can refer to it as an
> incrementally
> updated backup. If we need to recover, it will be faster to recover
> from 
> the incrementally updated backup than from the base backup.
>
> 
>
> The  option in the
> recovery.conf
> file is provided to allow the recovery to complete up to the current
> last
> WAL segment, yet without starting the database. This option allows
> us
> to stop the server and take a backup of the partially recovered
> server
> files: this is the incrementally updated backup.
>
> 
>
> We can use the incrementally updated backup concept to come up with
> a
> streamlined backup schedule. For example:
>   
>
> 
>  Set up continuous archiving
> 
>
>
> 
>  Take weekly base backup
> 
>
>
> 
>  After 24 hours, restore base backup to another server, then run a
>  partial recovery and take a backup of the latest database state to
>  produce an incrementally updated backup.
> 
>
>
> 
>  After next 24 hours, restore the incrementally updated backup to
> the
>  second server, then run a partial recovery, at the end, take a
> backup
>  of the partially recovered files.
> 
>
>
> 
>  Repeat previous step each day, until the end of the week.
> 
>
>   
>
> 
>
> A weekly backup need only be taken once per week, yet the same level
> of
> protection is offered as if base backups were taken nightly. 
>
> 
>   
> 
> -- 
>   Simon Riggs 
>   EnterpriseDB   http://www.enterprisedb.com

[ Attachment, skipping... ]

> 

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] [PATCHES] DOC: catalog.sgml

2006-09-19 Thread Zdenek Kotala

Alvaro Herrera wrote:

Tom Lane wrote:

Zdenek Kotala <[EMAIL PROTECTED]> writes:
I enhanced the overview of catalog tables a little bit, adding two new
columns: the first one is the OID of the catalog table, and the second one
contains attributes which indicate whether the table is bootstrap, has OIDs,
and is global.

Why is this a good idea?  It seems like mere clutter.


What's "global"?  A maybe-useful flag would be telling that a table is
shared.  Is that it?  Mind you, it's not useful to me because I know
which tables are shared, but I guess for someone not so familiar with
the catalogs it could have some use.


Global means a shared table stored in the global directory :-). Ok, I'll change it.

Thanks for the comment, Zdenek




Re: [PATCHES] Notes on restoring a backup with --single-transaction

2006-09-19 Thread Neil Conway
On Tue, 2006-09-19 at 13:00 +0100, Simon Riggs wrote:
> Additional notes for pg_dump/restore

Applied with additional fixes; revised patch is attached.

Thanks for the patch.

-Neil

Index: doc/src/sgml/backup.sgml
===
RCS file: /home/neilc/postgres/cvs_root/pgsql/doc/src/sgml/backup.sgml,v
retrieving revision 2.87
diff -c -r2.87 backup.sgml
*** doc/src/sgml/backup.sgml	19 Sep 2006 15:18:41 -	2.87
--- doc/src/sgml/backup.sgml	19 Sep 2006 18:59:52 -
***
*** 84,90 
  

 
! When your database schema relies on OIDs (for instance as foreign
  keys) you must instruct pg_dump to dump the OIDs
  as well. To do this, use the -o command line
  option.
--- 84,90 
  

 
! If your database schema relies on OIDs (for instance as foreign
  keys) you must instruct pg_dump to dump the OIDs
  as well. To do this, use the -o command line
  option.
***
*** 105,134 
  you used as outfile
  for the pg_dump command. The database dbname will not be created by this
! command, you must create it yourself from template0 before executing
! psql (e.g., with createdb -T template0
! dbname).
! psql supports options similar to pg_dump 
! for controlling the database server location and the user name. See
! 's reference page for more information.
 
  
 
! Not only must the target database already exist before starting to
! run the restore, but so must all the users who own objects in the
! dumped database or were granted permissions on the objects.  If they
! do not, then the restore will fail to recreate the objects with the
! original ownership and/or permissions.  (Sometimes this is what you want,
! but usually it is not.)
 
  
 
! Once restored, it is wise to run  on each database so the optimizer has
! useful statistics. An easy way to do this is to run
! vacuumdb -a -z to
! VACUUM ANALYZE all databases; this is equivalent to
! running VACUUM ANALYZE manually.
 
  
 
--- 105,146 
  you used as outfile
  for the pg_dump command. The database dbname will not be created by this
! command, so you must create it yourself from template0
! before executing psql (e.g., with
! createdb -T template0 dbname).  psql
! supports similar options to pg_dump for specifying
! the database server to connect to and the user name to use. See
! the  reference page for more information.
 
  
 
! Before restoring a SQL dump, all the users who own objects or were
! granted permissions on objects in the dumped database must already
! exist. If they do not, then the restore will fail to recreate the
! objects with the original ownership and/or permissions.
! (Sometimes this is what you want, but usually it is not.)
 
  
 
! By default, the psql script will continue to
! execute after an SQL error is encountered. You may wish to use the
! following command at the top of the script to alter that
! behaviour and have psql exit with an
! exit status of 3 if an SQL error occurs:
! 
! \set ON_ERROR_STOP
! 
! Either way, you will only have a partially restored
! dump. Alternatively, you can specify that the whole dump should be
! restored as a single transaction, so the restore is either fully
! completed or fully rolled back. This mode can be specified by
! passing the -1 or --single-transaction
! command-line options to psql. When using this
! mode, be aware that even the smallest of errors can rollback a
! restore that has already run for many hours. However, that may
! still be preferable to manually cleaning up a complex database
! after a partially restored dump.
 
  
 
***
*** 153,160 
 
  
 
! For advice on how to load large amounts of data into
! PostgreSQL efficiently, refer to .
 

--- 165,177 
 
  
 
! After restoring a backup, it is wise to run  on each
! database so the query optimizer has useful statistics. An easy way
! to do this is to run vacuumdb -a -z; this is
! equivalent to running VACUUM ANALYZE on each database
! manually.  For more advice on how to load large amounts of data
! into PostgreSQL efficiently, refer to .
 




Re: [PATCHES] More doc patches

2006-09-19 Thread Neil Conway
On Tue, 2006-09-19 at 10:44 +0100, Gregory Stark wrote: 
> Some more doc patches for partitioned tables. In particular replace the caveat
> that INCLUDING CONSTRAINTS doesn't exist and replace it with documentation of,
> well, INCLUDING CONSTRAINTS.
> 
> Also, there was an instance of "LIKE WITH DEFAULTS" which is actually spelled
> "LIKE INCLUDING DEFAULTS".

Applied with additional fixes. Thanks for the patch.

BTW, I personally prefer patches to be proper MIME attachments, created
as a patch against the root of the Postgres source tree. However, I'm
not sure what the other committers prefer.

-Neil





Re: [PATCHES] More doc patches

2006-09-19 Thread Bruce Momjian
Neil Conway wrote:
> On Tue, 2006-09-19 at 10:44 +0100, Gregory Stark wrote: 
> > Some more doc patches for partitioned tables. In particular replace the 
> > caveat
> > that INCLUDING CONSTRAINTS doesn't exist and replace it with documentation 
> > of,
> > well, INCLUDING CONSTRAINTS.
> > 
> > Also, there was an instance of "LIKE WITH DEFAULTS" which is actually 
> > spelled
> > "LIKE INCLUDING DEFAULTS".
> 
> Applied with additional fixes. Thanks for the patch.
> 
> BTW, I personally prefer patches to be proper MIME attachments, created
> as a patch against the root of the Postgres source tree. However, I'm
> not sure what the other committers prefer.

Agreed.

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [PATCHES] More doc patches

2006-09-19 Thread Gregory Stark

Neil Conway <[EMAIL PROTECTED]> writes:

> BTW, I personally prefer patches to be proper MIME attachments, created
> as a patch against the root of the Postgres source tree. However, I'm
> not sure what the other committers prefer.

Good to know. I usually generate diffs with cvs diff -c.
I don't know why I did it differently here, weird.
Sorry about that.

-- 
  Gregory Stark
  EnterpriseDB  http://www.enterprisedb.com



Re: [PATCHES] Dynamic linking on AIX

2006-09-19 Thread Bruce Momjian

Great, I have added this to the bottom of the AIX FAQ.  Thanks.

---

Albe Laurenz wrote:
> > I still would like to see a paragraph describing how AIX is different
> > from other platforms and what we are doing here.
> 
> Ok, I'll try to sum it up:
> 
> Shared libraries in AIX are different from shared libraries
> in Linux.
> 
> A shared library on AIX is an 'ar' archive containing
> shared objects.
> A shared object is produced by the linker when invoked
> appropriately (e.g. with -G), it is what we call a
> shared library on Linux.
> 
> -> On AIX, you can do a static as well as a dynamic
> -> link against a shared library, it depends on how you
> -> invoke the linker.
> 
> When you link statically, the shared objects from
> the library are added to your executable as required;
> when you link dynamically, only references
> to the shared objects are included in the executable.
> 
> Consequently you do not need a separate static library
> on AIX if you have a dynamic library.
> 
> However, you CAN have static libraries (ar archives
> containing *.o files), and the linker will link
> against them. This will of course always be a
> static link.
> 
> When the AIX linker searches for libraries to link,
> it will look for a library libxy.a as well as for a
> single shared object libxy.so when you tell it
> to -lxy. When it finds both in the same directory,
> it will prefer libxy.a unless invoked with -brtl.
> 
> This is where the problem occurs:
> 
> By default, PostgreSQL will (in the Linux way) create
> a shared object libpq.so and a static library libpq.a
> in the same directory.
> 
> Up to now, since the linker was invoked without the
> -brtl flag, linking on AIX was always static, as the
> linker preferred libpq.a over libpq.so.
> 
> We could have solved the problem by linking with
> -brtl on AIX, but we chose to go a more AIX-conforming
> way so that third party programs linking against
> PostgreSQL libraries will not be fooled into
> linking statically by default.
> 
> The 'new way' on AIX is:
> - Create libxy.so.n as before from the static library
>   libxy.a with the linker.
> - Remove libxy.a
> - Recreate libxy.a as a dynamic library with
>   ar -cr libxy.a libxy.so.n
> - Only install libxy.a, do not install libxy.so
> 
> Since linking is dynamic on AIX now, we have a new
> problem:
> 
> We must make sure that the executable finds
> its library even if the library is not installed in
> one of the standard library paths (/usr/lib or /lib).
> 
> On Linux this is done with an RPATH, on AIX the
> equivalent is LIBPATH that can be specified at link
> time with -blibpath: .
> If you do not specify the LIBPATH, it is automatically
> computed from the -L arguments given to the linker.
> The LIBPATH, when set, must contain ALL directories where
> shared libraries should be searched, including
> the standard library directories.
> 
> Makefile.aix has been changed to link executables
> with a LIBPATH that contains --libdir when PostgreSQL
> is configured with --enable-rpath (the default).
> 
> The AIX equivalent for the Linux environment variable
> LD_LIBRARY_PATH is LIBPATH.
> 
> The regression tests rely on LD_LIBRARY_PATH and have
> to be changed to set LIBPATH as well.
> 
> 
> I hope that's good enough,
> Laurenz Albe
> 

-- 
  Bruce Momjian   [EMAIL PROTECTED]
  EnterpriseDB    http://www.enterprisedb.com

  + If your life is a hard drive, Christ can be your backup. +



Re: [HACKERS] [PATCHES] Patch for UUID datatype (beta)

2006-09-19 Thread Alvaro Herrera
[EMAIL PROTECTED] wrote:
> On Tue, Sep 19, 2006 at 08:20:13AM -0500, Jim C. Nasby wrote:
> > On Mon, Sep 18, 2006 at 07:45:07PM -0400, [EMAIL PROTECTED] wrote:
> > > I would not use a 100% random number generator for a UUID value as was
> > > suggested. I prefer inserting the MAC address and the time, to at
> > > least allow me to control if a collision is possible. This is not easy
> > > to do using a few lines of C code. I'd rather have a UUID type in core
> > > with no generation routine, than no UUID type in core because the code
> > > is too complicated to maintain, or not portable enough.
> > As others have mentioned, using MAC address doesn't remove the
> > possibility of a collision.
> 
> It does, as I control the MAC address.

What happens if you have two postmasters running on the same machine?

-- 
Alvaro Herrera    http://www.CommandPrompt.com/
The PostgreSQL Company - Command Prompt, Inc.



[PATCHES] Fwd: docs for advisory locks

2006-09-19 Thread Merlin Moncure

[resending this to the list, was silently dropped this afternoon?]

-- Forwarded message --
From: Merlin Moncure <[EMAIL PROTECTED]>
Date: Sep 19, 2006 11:57 PM
Subject: docs for advisory locks
To: pgsql-patches@postgresql.org


ok, here are the promised docs for the advisory locks.

some quick notes: this is my first non-trivial patch (albeit only
documentation) and i am a complete docbook novice.  i also don't have
the capability to build docbook at the moment, so consider this a
very rough draft.  any comments are welcome, particularly from someone
willing to build the sgml and check for errors... sorry about that, but
i wanted to get this up as soon as possible.

thanks
merlin


advisory_docs.patch
Description: Binary data



Re: [HACKERS] [PATCHES] Patch for UUID datatype (beta)

2006-09-19 Thread mark
On Tue, Sep 19, 2006 at 11:21:51PM -0400, Alvaro Herrera wrote:
> [EMAIL PROTECTED] wrote:
> > On Tue, Sep 19, 2006 at 08:20:13AM -0500, Jim C. Nasby wrote:
> > > On Mon, Sep 18, 2006 at 07:45:07PM -0400, [EMAIL PROTECTED] wrote:
> > > > I would not use a 100% random number generator for a UUID value as was
> > > > suggested. I prefer inserting the MAC address and the time, to at
> > > > least allow me to control if a collision is possible. This is not easy
> > > > to do using a few lines of C code. I'd rather have a UUID type in core
> > > > with no generation routine, than no UUID type in core because the code
> > > > is too complicated to maintain, or not portable enough.
> > > As others have mentioned, using MAC address doesn't remove the
> > > possibility of a collision.
> > It does, as I control the MAC address.
> What happens if you have two postmasters running on the same machine?

Could be bad things. :-)

For the case of two postmaster processes, I assume you mean two
different databases? If you never intend to merge the data between the
two databases, the problem is irrelevant. There is a much greater
chance that any UUID form is more unique, or can be guaranteed to be
unique, within a single application instance, than across all
application instances in existence. If you do intend to merge the
data, you may have a problem.

If I have two connections to PostgreSQL - would the plpgsql procedures
be executed from two different processes? With an in-core generation
routine, I think it is possible for it to collide unless inter-process
synchronization is used (unlikely) to ensure generation of unique
time/sequence combinations each time. I use this right now (mostly),
but as I've mentioned, it isn't my favourite. It's convenient. I don't
believe it provides the sort of guarantees that a SERIAL provides.

A model that intended to try and guarantee uniqueness would provide a
UUID generation service for the entire host, that was not specific to
any application, or database, possibly accessible via the loopback
address. It would ensure that at any given time, either the time is
new, or the sequence is new for the time. If computer time ever went
backwards, it could keep the last time issued persistent, and
increment from this point forward through the clock sequence values
until real time catches up. An alternative would be along the lines of
a /dev/uuid device, that like /dev/random, would be responsible for
outputting unique uuid values for the system. Who does this? Probably
nobody. I'm tempted to implement it, though, for my uses. :-)

Cheers,
mark

-- 
[EMAIL PROTECTED] / [EMAIL PROTECTED] / [EMAIL PROTECTED] 
__
.  .  _  ._  . .   .__.  . ._. .__ .   . . .__  | Neighbourhood Coder
|\/| |_| |_| |/|_ |\/|  |  |_  |   |/  |_   | 
|  | | | | \ | \   |__ .  |  | .|. |__ |__ | \ |__  | Ottawa, Ontario, Canada

  One ring to rule them all, one ring to find them, one ring to bring them all
   and in the darkness bind them...

   http://mark.mielke.cc/

