RE: [HACKERS] Status of JDBC Interface

2000-12-22 Thread Peter Mount



-- 
Peter Mount
Enterprise Support Officer, Maidstone Borough Council
Email: [EMAIL PROTECTED]
WWW: http://www.maidstone.gov.uk
All views expressed within this email are not the views of Maidstone Borough
Council


 -Original Message-
 From: Peter Eisentraut [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, December 21, 2000 6:30 PM
 To: Peter Mount
 Cc: PostgreSQL Interfaces (E-mail); PostgreSQL Developers 
 List (E-mail)
 Subject: Re: [HACKERS] Status of JDBC Interface
 
 
 Peter Mount writes:
 
  1) ANT vs Make
 
  I suggest we keep supporting both methods for now to see 
 how people get on.
 
 If you're confident about ANT I suggest that you dump the 
 make interface because otherwise you increase the possible
 failure scenarios at the install level alone in combinatorial ways.

My plan is to keep both for this release and then (assuming there's feedback
about how ANT is working) to remove the current Makefile for the next one.

 What's a bit sad about the JDBC subtree is that it doesn't follow the
 build system conventions in the rest of the tree.  For example, I would
 really like to be able to do this:
 
 ./configure ... --with-java[=/usr/local/j2sdk1.3.0]
 make
 make install

I did think about that but ran out of time. What would be nice would be for
configure to detect the presence of the JDK & ANT (via a --with-java and
--with-ant pair of options) and then automatically call a cut-down Makefile, ie:

-- Begin Makefile --
all:
	ant

clean:
	ant clean
-- End Makefile --
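
The detection side could start out as fairly dumb shell along these lines
(just a sketch -- the default JDK location and the error handling are made up,
and real configure support would use proper autoconf tests instead):

# sketch only -- not real autoconf output
if test "$with_java" = yes ; then
    JAVA_HOME=${JAVA_HOME:-/usr/local/jdk}      # assumed default location
    test -x "$JAVA_HOME/bin/javac" || { echo "configure: javac not found"; exit 1; }
    ANT=`which ant 2>/dev/null`
    test -n "$ANT" || { echo "configure: ant not found in PATH"; exit 1; }
fi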

The one big win with ANT is that it makes detecting the JDK/JVM version
simple, so that detection isn't needed anywhere in the Makefiles.

 This wouldn't only make it easier on users, but many more people would
 perhaps be exposed to the fact that there's a JDBC driver in 
 the tree at all.

Agreed. I still get emails from people asking for the source even though it's
included in the main source tree.

 I have on and off had some ideas about autoconfiscating the 
 JDBC build but I'm glad I didn't do it since this Ant thing seems to be
much better.
 But it would still be desirable to have a make wrapper since 
 that is what people are accustomed to.

Yes. I could replace the Makefile now, but I wanted to see what everyone's
opinion on ANT was first.

 Btw., how does Ant choose the JDK it uses if I have three or four
 installed at random places?  (This is perhaps yet another source of
 problems, if make and ant use different JDKs by default.)

There's the JAVA_HOME environment variable used by the JDK. Normally the JDK
can work it out without the user setting it. I use it to switch between
1.1.8 & 1.2.2 (you also have to change PATH).

Anyhow, if you have set JAVA_HOME then ANT will use that for the compiler.
There's also ANT_HOME, but I'm running OK without that one set.
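
For example, the switch looks roughly like this (the install paths are just
examples and will differ from machine to machine):

# example only -- adjust the JDK paths for your installation
export JAVA_HOME=/usr/local/jdk1.1.8        # or /usr/local/jdk1.2.2
export PATH=$JAVA_HOME/bin:$PATH
ant                                         # picks up the compiler via JAVA_HOME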

PS: ANT works with Sun's javac, jikes (IBM's compiler) and jvc
(Micro$oft's Visual J++ compiler)

The more I look into ANT's capabilities the more I like it. It's extensible
(you can write your own class to implement new tasks) and it even has the
ability to use CVS and apply patches on the fly, so if someone has CVS
installed they only need to run:

ant update

and the "update" target (a "target" being ANT's name for a rule in Make) then
checks out the current source before building.
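
So for someone tracking CVS the whole cycle could end up as simple as this
(a sketch of where I'd like to get to -- the update target isn't actually in
the build file yet):

cd pgsql/src/interfaces/jdbc
ant update      # use ANT's CVS support to refresh the source
ant             # then build the driver as usual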

  2) Versioning
 
  one location. Also as suggested on the Hackers list Make 
 now extracts the
  version from Makefile.global. This works fine for Make, but 
 there are two
  problems.
 
  First, ANT can't read this easily. This isn't that major, 
 but the second one
  is. I've had reports that some people checkout just the 
 interfaces, and not
  the entire source, so Makefile.global is not available.
 
 Just checking out interfaces is not advertised AFAIK and it definitely
 doesn't work for anything but the JDBC driver.

I had an email from someone who said they did this (I didn't know you could
until then) because of space reasons. Before 7.0, yes, JDBC was compilable
on its own, but it has a link to Makefile.global now.

 OTOH, nothing is stopping you from inventing your own 
 versioning scheme for the driver only.  Several other things in the tree
do this as well
 (PyGreSql, PgAccess).

True, but historically the JDBC versioning has matched that of the backend,
and I think it makes it easier to say JDBC Driver v7.1 is for 7.1.x
backends. This is especially so as in this version DatabaseMetaData will not
work with earlier-version backends (it uses INNER & OUTER joins).

If I get a chance today I'll see if I can get it to pull the version out of
Makefile.global.
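
The obvious first cut is a bit of shell in the build (an untested sketch; it
assumes the variable in Makefile.global is still called VERSION and that the
driver stays in src/interfaces/jdbc):

# untested sketch: pull the version number out of src/Makefile.global
VERSION=`grep '^VERSION' ../../Makefile.global | sed 's/^VERSION[^=]*= *//'`
echo "Building the JDBC driver for PostgreSQL $VERSION"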

Peter




[HACKERS] Merry X-Mass

2000-12-22 Thread Partyka Robert

Hello,

Merry Christmas and Happy New Year 2001 ;)

R. "BoBsoN" Partyka





Re: [HACKERS] Merry X-Mass

2000-12-22 Thread Rod Taylor

Little early aren't you?

select now()::date gives me 2000-12-22

Hmm.. only one digit is odd.
--
Rod Taylor

There are always four sides to every story: your side, their side, the
truth, and what really happened.
- Original Message -
From: "Partyka Robert" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Cc: [EMAIL PROTECTED]
Sent: Friday, December 22, 2000 8:41 AM
Subject: [HACKERS] Merry X-Mass


 Hello,

 Merry Christmas and Happy New Year 2001 ;)

 R. "BoBsoN" Partyka







Re: [HACKERS] Merry X-Mass

2000-12-22 Thread Partyka Robert


On Fri, 22 Dec 2000, Rod Taylor wrote:

 Little early aren't you?
 

I am leaving town (which means no internet access) today,
and when I get back it will be the XXI century, so it's my last
chance to wish you all a merry Christmas and a happy new year ;)

So have a good party on the night of 31.12.2000, and don't drink too much ;))) if you
want to remember how the XX century ends ;)


BoBsoN





[HACKERS] RI problem with inherited table

2000-12-22 Thread Oliver Elphick

It no longer seems to be possible for a referential integrity constraint to
refer to a table that is an ancestor of any other table.

In this example, "person" is the ancestor of several other tables:


bray=# create table junk (id char(10) constraint junk_id_person references person*(id));
ERROR:  parser: parse error at or near "*"
bray=# create table junk (id char(10) constraint junk_id_person references only person(id));
ERROR:  parser: parse error at or near "only"
bray=# create table junk (id char(10) constraint junk_id_person references person(id));
NOTICE:  CREATE TABLE will create implicit trigger(s) for FOREIGN KEY check(s)
CREATE
bray=# insert into junk values ('aa');
ERROR:  SELECT FOR UPDATE is not supported for inherit queries

-- 
Oliver Elphick[EMAIL PROTECTED]
Isle of Wight  http://www.lfix.co.uk/oliver
PGP: 1024R/32B8FAA1: 97 EA 1D 47 72 3F 28 47  6B 7E 39 CC 56 E4 C1 47
GPG: 1024D/3E1D0C1C: CA12 09E0 E8D5 8870 5839  932A 614D 4C34 3E1D 0C1C
 
 "And there were in the same country shepherds abiding 
  in the field, keeping watch over their flock by night.
  And, lo, the angel of the Lord came upon them, and the
  glory of the Lord shone around them; and they were 
  sore afraid. And the angel said unto them, " Fear not;
  for behold I bring you good tidings of great joy which
  shall be to all people. For unto you is born this day 
  in the city of David a Saviour, which is Christ the 
  Lord."Luke 2:8-11 





[HACKERS] Inheritance is a security loophole!

2000-12-22 Thread Tom Lane

The lack of a permissions check for creating a child table means that
in current sources, any user can inject data of his choosing into
another user's tables.  Example:

User A:

regression=> create table foo (f1 text);
CREATE
regression=> insert into foo values ('good data');
INSERT 271570 1

User B:

regression=> create table foohack () inherits (foo);
CREATE
regression=> insert into foohack values ('you have been hacked!');
INSERT 271598 1

Now User A sees:

regression=> select * from foo;
  f1
---
 good data
 you have been hacked!
(2 rows)

User A can only avoid this trap by being very careful to specify ONLY
in every query.  If he *intends* to use foo as an inheritance tree
master, then that cure doesn't work either.

Just to add insult to injury, user A is now unable to drop table foo.
He'll also get permission failures from commands like "UPDATE foo ..."

I suppose a proper fix would involve adding a new permission type "can
make child tables", but I don't want to mess with that at the moment.
For 7.1, I propose that we only allow creation of child tables to the
owner of the parent table.

Comments?

regards, tom lane

PS: another interesting problem: create a temp table, then create a
non-temp table that inherits from it.  Unhappiness ensues when you
end your session.  Need to prohibit this combination, I think.



Re: [HACKERS] GEQO status?

2000-12-22 Thread Tom Lane

Ed Loehr [EMAIL PROTECTED] writes:
 What is the status of the genetic algorithm query optimizer?  Is this
 supposed to work well on many-table joins, or has it fallen out of favor
 or in disrepair?

It's supposed to work ;-).  I'm not sure that the default parameters are
optimal, however.  If you experiment with other settings, please post your
results.

regards, tom lane



Re: [HACKERS] GEQO status?

2000-12-22 Thread Ed Loehr

Tom Lane wrote:
 
 Ed Loehr [EMAIL PROTECTED] writes:
  What is the status of the genetic algorithm query optimizer?  Is this
  supposed to work well on many-table joins, or has it fallen out of favor
  or in disrepair?
 
 It's supposed to work ;-).  I'm not sure that the default parameters are
 optimal, however.  If you experiment with other settings, please post your
 results.

Query time dropped from many minutes to 13 seconds on a 12-table join
with a little tweaking from the default params:

My $PGDATA/pg_geqo:
---
Pool_Size1024
# Effort   high
Generations  100
Random_Seed  330418761
Selection_Bias   2.00

Similar performance with Generations setting of 800 derived from Effort.

Regards,
Ed Loehr



Re: [HACKERS] Unable to check out REL7_1 via cvs

2000-12-22 Thread Alfred Perlstein

* Yusuf Goolamabbas [EMAIL PROTECTED] [001222 15:34] wrote:
 Hi, I am using the following command to check out the 7.1 branch of
 PostgreSQL.
 cvs -d :pserver:[EMAIL PROTECTED]:/home/projects/pgsql/cvsroot co -r REL7_1 
pgsql
 
 This is the error I am getting.
 cvs [server aborted]: cannot write /home/projects/pgsql/cvsroot/CVSROOT/val-tags: 
Permission denied
 
 I can check out HEAD perfectly alright
 
 Anybody else seeing similar results ?

Try using "cvs -Rq ..." or just use CVSup it's (cvsup) a lot quicker.

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] GEQO status?

2000-12-22 Thread Ed Loehr

Thomas Lockhart wrote:
 
  What is the status of the genetic algorithm query optimizer?  Is this
  supposed to work well on many-table joins, or has it fallen out of favor
  or in disrepair?  [I'm needing to optimize some large, many-table-join
  queries and wondering whether time spent configuring/understanding geqo would be
  fruitful...]
 
 It is the only technique we have to achieve adequate performance on
 many-table joins. It has received little work recently, but that may be
 due to having received no complaints or discussions that I can recall.

At risk of being off-topic here, is there a reason why GEQO is off by
default in the ODBC driver (postdrv.exe)?  I vaguely recall something
about this from a year ago, but can't find it.

Regards,
Ed Loehr



Re: [HACKERS] Unable to check out REL7_1 via cvs

2000-12-22 Thread Yusuf Goolamabbas

Nope, no luck with cvs -Rq either. Methinks it's some repository
permission issue. Don't know if CVSup would help. I don't have
cvsup installed on this machine. 

 * Yusuf Goolamabbas [EMAIL PROTECTED] [001222 15:34] wrote:
  Hi, I am using the following command to check out the 7.1 branch of
  PostgreSQL.
  cvs -d :pserver:[EMAIL PROTECTED]:/home/projects/pgsql/cvsroot co -r REL7_1 
pgsql
  
  This is the error I am getting.
  cvs [server aborted]: cannot write /home/projects/pgsql/cvsroot/CVSROOT/val-tags: 
Permission denied
  
  I can check out HEAD perfectly alright
  
  Anybody else seeing similar results ?
 
 Try using "cvs -Rq ..." or just use CVSup it's (cvsup) a lot quicker.
 
 -- 
 -Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
 "I have the heart of a child; I keep it in a jar on my desk."
 

-- 
Yusuf Goolamabbas
[EMAIL PROTECTED]



Re: [HACKERS] Unable to check out REL7_1 via cvs

2000-12-22 Thread Tom Lane

Use HEAD.  REL7_1 is a tag, not a branch (and a misplaced tag at that,
IMHO, since it's not the formal release or even close...)
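
In other words, just drop the -r option; and if you're ever unsure which tags
actually exist, "cvs status -v" on any checked-out file will list them.  For
example (re-using the pserver path from your message):

# check out the development tip (HEAD):
cvs -d :pserver:[EMAIL PROTECTED]:/home/projects/pgsql/cvsroot co pgsql

# list the tags/branches known for a file:
cd pgsql && cvs status -v README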

regards, tom lane



Re: [HACKERS] Unable to check out REL7_1 via cvs

2000-12-22 Thread Alfred Perlstein

* Yusuf Goolamabbas [EMAIL PROTECTED] [001222 15:47] wrote:
 Nope, no luck with cvs -Rq also. Me thinks its some repository
 permission issue. Don't know if CVSup would help either. I don't have
 cvsup installed on this machine. 

CVSup would work, that's what I use.

-- 
-Alfred Perlstein - [[EMAIL PROTECTED]|[EMAIL PROTECTED]]
"I have the heart of a child; I keep it in a jar on my desk."



Re: [HACKERS] GEQO status?

2000-12-22 Thread Tom Lane

Ed Loehr [EMAIL PROTECTED] writes:
 is there a reason why GEQO is off by
 default in the ODBC driver (postdrv.exe)?

There may once have been a good reason for that, but it sounds like a
mighty bad idea nowadays.

AFAICT ODBC's default setting has been that way for as long as ODBC has
been in our CVS tree, so no way to know who chose to do that, when, or
why.

regards, tom lane



Re: [HACKERS] Unable to check out REL7_1 via cvs

2000-12-22 Thread Peter Eisentraut

Yusuf Goolamabbas writes:

 Hi, I am using the following command to check out the 7.1 branch of
 PostgreSQL.
 cvs -d :pserver:[EMAIL PROTECTED]:/home/projects/pgsql/cvsroot co -r REL7_1 
pgsql

I don't think there is a 7.1 branch yet, there being no 7.1 release yet
either.

-- 
Peter Eisentraut  [EMAIL PROTECTED]   http://yi.org/peter-e/




Re: [HACKERS] Isn't init_irels() dangerous ?

2000-12-22 Thread Tom Lane

Hiroshi Inoue [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 "Hiroshi Inoue" [EMAIL PROTECTED] writes:
 It seems that init_irels() should be called after
 InitializeTransactionSystem() was called.
 
 Can we just swap the order of the RelationCacheInitialize() and
 InitializeTransactionSystem() calls in InitPostgres?  If that
 works, I'd have no objection.

 It doesn't work. InitializeTransactionSystem() requires
 pg_log/pg_variable relations which are already built in 
 RelationCacheInitialize().

OK.  Second proposal: do the init_irels() call in
RelationCacheInitializePhase2().  I've just looked through the
other stuff that's done in between, and I don't think any of it
needs valid relcache entries.

 In the meantime, I have another anxiety. init_irels()
 (RelationCacheInitialize()) seems to be called while
 locking is disabled.

This should fix that problem, too.

regards, tom lane



Re: [HACKERS] GEQO status?

2000-12-22 Thread Tom Lane

Peter Eisentraut [EMAIL PROTECTED] writes:
 I've seen a number of bug reports that would indicate to me the GEQO works
 less than perfectly.  I vividly recall how, while working on my own code,
 mere additions of dummy clauses like '... AND 5=5' altered query results
 in seemingly random ways.

The choices made by GEQO are intentionally random, so I would expect
variation in tuple output order even for repetitions of the identical
query.  If you got a semantically different result, that would indeed
be a bug.  But it would most likely be a bug in the core planner, since
GEQO has essentially no influence over whether the produced plan is
correct or not.  GEQO merely forces specific choices of join order.
All else is in the core planner.

 That was admittedly quite a while ago, but the
 GEQO code hasn't changed since.

The planner has changed quite markedly over the past couple releases,
so I don't put a lot of stock in old anecdotes.  Let's see a test case.

regards, tom lane



Re: [HACKERS] GEQO status?

2000-12-22 Thread Ed Loehr

Tom Lane wrote:
 
 The choices made by GEQO are intentionally random, so I would expect
 variation in tuple output order even for repetitions of the identical
 query.  If you got a semantically different result, that would indeed
 be a bug.  But it would most likely be a bug in the core planner, since
 GEQO has essentially no influence over whether the produced plan is
 correct or not.  GEQO merely forces specific choices of join order.
 All else is in the core planner.

You can remove the randomness by setting the Seed configuration value, if
the docs are correct.

Regards,
Ed Loehr



[HACKERS] 7.1 on DEC/Alpha

2000-12-22 Thread Brent Verner

Hi,
  I saw the thread from a few days ago about Linux/Alpha and 7.1. I
believe I'm seeing the same problems with DEC/Alpha (Tru64Unix 4.0D).

I noticed the following in the postmaster.log, which occurs, as the
Linux/Alpha bug report states, during the misc regression test.

  DEBUG:  copy: line 293, XLogWrite: had to create new log file - you probably should 
do checkpoints more often
  Server process (pid 24954) exited with status 139 at Fri Dec 22 17:15:48 2000
  Terminating any active server processes...
  Server processes were terminated at Fri Dec 22 17:15:48 2000
  Reinitializing shared memory and semaphores
  DEBUG:  starting up
  DEBUG:  database system was interrupted at 2000-12-22 17:15:47
  DEBUG:  CheckPoint record at (0, 316624)
  DEBUG:  Redo record at (0, 316624); Undo record at (0, 0); Shutdown TRUE

the full src/test/regress/log/postmaster.log can be snagged from
http://www.rcfile.org/postmaster.log

In addition to this, compiling on DEC/Alpha with gcc does not work
without some shameful hackery :) as __INTERLOCKED_TESTBITSS_QUAD() is 
a builtin that gcc does not know about. The DEC cc builds pg properly.
Either way pg is built, the test results are much the same, esp. the
FAILURE of the misc regression test.

If there is anything else I can do to help get this working, please
let me know.

  Brent Verner



[HACKERS] Considerations for running current cvs pgsql and pgsql release on same machine?

2000-12-22 Thread Robert B. Easter

I'm wanting to run pgsql 7.0.3 release and pgsql current cvs on the same 
machine without them conflicting (if possible).  Can someone explain what I 
should look out for when trying to do this?

I assume I'll have to configure --with-pgport=5433 (something other than 
5432).

How will the use of the environment variables PGDATA and PGLIB be affected if 
I have them still pointing at the release version?

I'm wanting to begin keeping an updated pgsql-cvs installation running and 
maybe get active in working on the sources.  Maybe I can start out with 
documentation stuff.  I've been following this list long enough and still 
enjoy it, so I want to have a go at contributing something.

-- 
 Robert B. Easter  [EMAIL PROTECTED] -
- CompTechNews Message Board   http://www.comptechnews.com/ -
- CompTechServ Tech Services   http://www.comptechserv.com/ -
-- http://www.comptechnews.com/~reaster/ 



[HACKERS] Re: 7.1 on DEC/Alpha

2000-12-22 Thread Brent Verner

On 22 Dec 2000 at 20:27 (-0500), Brent Verner wrote:

observation:

  commenting out the queries with 'FROM person* p' causes the misc
  regression test to pass.

SELECT p.name, p.hobbies.name FROM person* p;

  Brent

| Hi,
|   I saw the thread from a few days ago about Linux/Alpha and 7.1. I
| believe I'm seeing the same problems with DEC/Alpha (Tru64Unix 4.0D).
| 
| I noticed the following in the postmaster.log, which occurs, as the
| Linux/Alpha bug report states, during the misc regression test.
| 
|   DEBUG:  copy: line 293, XLogWrite: had to create new log file - you probably 
|should do checkpoints more often
|   Server process (pid 24954) exited with status 139 at Fri Dec 22 17:15:48 2000
|   Terminating any active server processes...
|   Server processes were terminated at Fri Dec 22 17:15:48 2000
|   Reinitializing shared memory and semaphores
|   DEBUG:  starting up
|   DEBUG:  database system was interrupted at 2000-12-22 17:15:47
|   DEBUG:  CheckPoint record at (0, 316624)
|   DEBUG:  Redo record at (0, 316624); Undo record at (0, 0); Shutdown TRUE
| 
| the full src/test/regress/log/postmaster.log can be snagged from
| http://www.rcfile.org/postmaster.log
| 
| in addition to this, compiling on DEC/Alpha with gcc does not work,
| without some shameful hackery :) as __INTERLOCKED_TESTBITSS_QUAD() is 
| a builtin that gcc does not know about. The DEC cc builds pg properly.
| either way pg is built the test results are much the same, esp the
| FAILURE of misc regression test.
| 
| If there is anything else I can do to help get this working, please
| let me know.
| 
|   Brent Verner



[HACKERS] Re: 7.1 on DEC/Alpha

2000-12-22 Thread Brent Verner

On 22 Dec 2000 at 21:58 (-0500), Brent Verner wrote:
| On 22 Dec 2000 at 20:27 (-0500), Brent Verner wrote:
| 
| observation:
| 
|   commenting out the queries with 'FROM person* p' causes the misc
|   regression test to pass.

That's not what I meant to say. The misc test still FAILS, but it 
no longer causes pg to die.

  b



Re: [HACKERS] Considerations for running current cvs pgsql and pgsql release on same machine?

2000-12-22 Thread Tom Lane

"Robert B. Easter" [EMAIL PROTECTED] writes:
 I'm wanting to run pgsql 7.0.3 release and pgsql current cvs on the same 
 machine without them conflicting (if possible).  Can someone explain what I 
 should look out for when trying to do this?

I routinely run multiple versions at the same time.  You need a separate
port, install directory, and data directory for each.  Easiest way to do
this is to configure the non-default versions with

./configure --with-pgport=nnn --prefix=/path/to/someplace

to establish their ports and install dirs, and then initdb each version
with the appropriate data directory specified.  A user then doesn't have
to do much except make his PATH point at the PREFIX/bin dir for the
version he wants to use at the moment.
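
For instance (the install prefix, data directory and port below are only
examples):

# current sources, installed alongside a stock 7.0.3:
./configure --with-pgport=5433 --prefix=/usr/local/pgsql-current
make && make install

# separate data directory, separate port:
/usr/local/pgsql-current/bin/initdb -D /usr/local/pgsql-current/data
/usr/local/pgsql-current/bin/postmaster -D /usr/local/pgsql-current/data -p 5433 &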

You may also find that you have to pump up your kernel's IPC resource
parameters in order to start up multiple postmasters.

 How will the use of the environment variables PGDATA and PGLIB be
 affected if I have them still pointing at the release version?

They had better have the right values while initdb'ing or starting
each individual version.  There's no good reason to have either one
set in the general environment, or actually to set them at all --- you
can get the same results with command line switches to initdb and
postmaster, with much less chance of bad side-effects.

A tip I find handy is to make sure that the databases available under
each live postmaster have distinct names.  This helps to avoid the
mistake of testing against 7.0.2 when you thought you were testing
against 7.0.3 or current or whatever...

 I've been following this list long enough and still 
 enjoy it so I want to try towards contributing stuff.

Welcome aboard!

regards, tom lane



Re: [HACKERS] Considerations for running current cvs pgsql and pgsql release on same machine?

2000-12-22 Thread Robert B. Easter

On Friday 22 December 2000 22:05, Tom Lane wrote:
 I routinely run multiple versions at the same time.  You need a separate
 port, install directory, and data directory for each.  Easiest way to do
 this is to configure the non-default versions with

 ./configure --with-pgport=nnn --prefix=/path/to/someplace

 to establish their ports and install dirs, and then initdb each version
 with the appropriate data directory specified.  A user then doesn't have
 to do much except make his PATH point at the PREFIX/bin dir for the
 version he wants to use at the moment.

 You may also find that you have to pump up your kernel's IPC resource
 parameters in order to start up multiple postmasters.

  How will the use of the environment variables PGDATA and PGLIB be
  affected if I have them still pointing at the release version?

 They had better have the right values while initdb'ing or starting
 each individual version.  There's no good reason to have either one
 set in the general environment, or actually to set them at all --- you
 can get the same results with command line switches to initdb and
 postmaster, with much less chance of bad side-effects.

 A tip I find handy is to make sure that the databases available under
 each live postmaster have distinct names.  This helps to avoid the
 mistake of testing against 7.0.2 when you thought you were testing
 against 7.0.3 or current or whatever...

Thanks for these tips.  I took out all global PG environment variables from 
/etc/profile.  The way I did it was to make a directory /etc/pgsql.d and put 
files pgsql.sh and pgcvs.sh in there for the production and cvs test 
environments.  Each sets up these variables:

# Change these two
PGHOME=/usr/local/pgsql or /home/pgcvs/pgsql
PGPORT=5432 or 5433

PGLIB=$PGHOME/lib
PGDATA=$PGHOME/data
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$PGLIB
MANPATH=$MANPATH:$PGHOME/man
PATH=$PGHOME/bin
export PGHOME PGPORT PGLIB PGDATA LD_LIBRARY_PATH MANPATH PATH

These two files are then included into .profile in user directories like:

# Pgsql prod
. /etc/pgsql.d/pgsql.sh

or

# Pgsql cvs dev
. /etc/pgsql.d/pgcvs.sh

The cvs development version runs under user pgcvs while the production runs 
under user postgres as normal.  The only little problem with this setup is 
that I can almost just source pgsql.sh or pgcvs.sh to switch between the two 
environments while logged in as a user.  But if I do, PATH, MANPATH and 
LD_LIBRARY_PATH, because they append, don't replace one installation's entries 
with the other's.  I'm sure that can be fixed with some simple bash/awk/sed 
trick but I haven't tried it yet (see the sketch below).  The next thing is to 
set up another local-only httpd on port 8080 or whatever and get another php 
to compile with the cvs libs so I can test web stuff.
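
Something like this at the top of each pgsql.d script would probably do it
(untested; OTHER_PGHOME is a made-up name for the installation being switched
away from, and it assumes the scripts prepend to PATH rather than overwrite it):

# untested sketch: strip the other installation's bin dir out of PATH first
OTHER_PGHOME=/home/pgcvs/pgsql          # in pgcvs.sh this would be /usr/local/pgsql
PATH=`echo $PATH | sed -e "s|$OTHER_PGHOME/bin:||g" -e "s|:$OTHER_PGHOME/bin||g"`
PATH=$PGHOME/bin:$PATH
export PATH

MANPATH and LD_LIBRARY_PATH could be cleaned up the same way.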


-- 
 Robert B. Easter  [EMAIL PROTECTED] -
- CompTechNews Message Board   http://www.comptechnews.com/ -
- CompTechServ Tech Services   http://www.comptechserv.com/ -
-- http://www.comptechnews.com/~reaster/