[HACKERS] How to REINDEX in high volume environments?

2002-09-28 Thread Justin Clift

Hi all,

Am experimenting to find out what kind of performance gains are achieved
from moving indexes to different SCSI drives than the WAL files, than the
data itself, etc.

Have come across an interesting problem.

Have moved the indexes to another drive, then created symlinks to them.

Ran a benchmark against the database, REINDEX'd the tables, VACUUM FULL
ANALYZE'd, prepared to re-run the benchmark again and guess what?

The indexes were back on the original drive.

The process of REINDEX-ing obviously creates another file and then drops
the original.

Is there a way to allow REINDEX to work without having this side effect?

Pre-creating a bunch of dangling symlinks doesn't work (tried that, it
gives an "ERROR:  cannot create accounts_pkey: File exists" on FreeBSD
4.6.2 when using REINDEX).

Any suggestions?

:-)

Regards and best wishes,

Justin Clift

-- 
My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there.
   - Indira Gandhi




Re: [HACKERS] How to REINDEX in high volume environments?

2002-09-28 Thread Shridhar Daithankar

On 28 Sep 2002 at 17:08, Justin Clift wrote:

 Have moved the indexes to another drive, then created symlinks to them.
 Ran a benchmark against the database, REINDEX'd the tables, VACUUM FULL
 ANALYZE'd, prepared to re-run the benchmark again and guess what?
 
 The indexes were back on the original drive.
 Is there a way to allow REINDEX to work without having this side affect?
 
 Pre-creating a bunch of dangling symlinks doesn't work (tried that, it
 gives a ERROR:  cannot create accounts_pkey: File exists on FreeBSD
 4.6.2 when using the REINDEX).

Looks like we should have a subdirectory in the database directory which
stores indexes.

Maybe transaction logs and indexes could go into separate directories which
can be symlinked. Linking a directory is a much simpler solution than
linking a file.

I suggest we have the per-database transaction log and indexes created in
separate subdirectories for each database. Further, given that large tables
are segmented after 1 GB in size, a table could optionally have its own
subdirectory as well.

At the cost of a few inodes, PostgreSQL would gain much more flexibility
and hence tunability.

Maybe a TODO for 7.4? Anyone?

Bye
 Shridhar

--
Software, n.:   Formal evening attire for female computer analysts.





Re: [HACKERS] How to REINDEX in high volume environments?

2002-09-28 Thread Justin Clift

Shridhar Daithankar wrote:
snip
 Looks like we should have a subdirectory in database directory which stores
 index.

That was my first thought also, but an alternative/additional approach
would be this (not sure if it's workable):

 - As each index already has a bunch of information stored for it, would
it be possible to have an additional column added called 'idxpath' or
something?

 - This would mean that the index location would be stable per index,
and would allow for *really* high volume environments to keep different
indexes on different drives.

Not sure what the default value would be, maybe the PGDATA directory,
maybe something as a GUC variable, etc, but that's the concept.

:-)

Regards and best wishes,

Justin Clift

 
 May be transaction logs, indexes goes in separte directory which can be
 symlinked. Linking a directory is much simpler solution than linking a file.
 
 I suggest we have per database transaction log and indexes created in separate
 subdirectories for each database. Furhter given that large tables are segmented
 after one GB size, a table should have it's own subdirectory optionally..
 
 At the cost of few inodes, postgresql would gain much more flexibility and
 hence tunability..
 
 May be TODO for 7.4? Anyone?
 
 Bye
  Shridhar
 
 --
 Software, n.:   Formal evening attire for female computer analysts.
 

-- 
My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there.
   - Indira Gandhi




Re: [HACKERS] How to REINDEX in high volume environments?

2002-09-28 Thread Shridhar Daithankar

On 28 Sep 2002 at 17:51, Justin Clift wrote:

 Shridhar Daithankar wrote:
 snip
  Looks like we should have a subdirectory in database directory which stores
  index.
 
 That was my first thought also, but an alternative/additional approach
 would be this (not sure if it's workable):
 
  - As each index already has a bunch of information stored stored for
 it, would it be possible to have an additional column added called
 'idxpath' or something?
 
  - This would mean that the index location would be stable per index,
 and would allow for *really* high volume environments to keep different
 indexes on different drives.

I have to disagree... completely. This is like turning the PG metadata into
a registry...

And what happens when an index starts splitting as it grows beyond 1 GB in
size?

Putting indexes into a separate subdirectory and mounting/linking that
directory on a device that is on a separate SCSI channel is what I can
think of to squeeze the last drop of performance out of it.

Just a thought, as usual..

I don't know how much effort it would take, but if we have pg_xlog in a
separately configurable directory now, putting indexes there as well and
having a per-database pg_xlog should be along the same lines. The latter
aspect is also important IMO.

Bye
 Shridhar

--
VMS, n.:The world's foremost multi-user adventure game.





Re: [HACKERS] hacker help: PHP-4.2.3 patch to allow restriction of

2002-09-28 Thread Peter Eisentraut

Jim Mercer writes:

 ideally, i'd like to have users-per-database, as opposed to the global
 model we have now.

That's in the works.  Some form of this will be in 7.3.

 if we are willing to modify libpq to support a white-list, then what you
 are suggesting is quite possible.

How would you store such a list and prevent users from simply unsetting
it?

 i suspect the php-dev people are unhappy with my patch because it is including
 logic (ie. parsing the white-list) which they don't think php should be
 responsible for.

From my reading of the discussion, I think they have not understood that
the PostgreSQL server has no way to distinguish different virtual host
identities.  I think your feature is quite reasonable, if you list users
instead of databases.

-- 
Peter Eisentraut   [EMAIL PROTECTED]





Re: AIX compilation problems (was Re: [HACKERS] Proposal ...)

2002-09-28 Thread Peter Eisentraut

Zeugswetter Andreas SB SD writes:

 The problem is, that scan.c includes unistd.h before postgres.h
 and thus unistd.h defines _LARGE_FILE_API which is not allowed
 together with _LARGE_FILES. Do you know an answer ?

Actually, a better idea I just had is to include scan.c at the end of
gram.c and compile them both into one object file.

-- 
Peter Eisentraut   [EMAIL PROTECTED]





Re: AIX compilation problems (was Re: [HACKERS] Proposal ...)

2002-09-28 Thread Peter Eisentraut

Zeugswetter Andreas SB SD writes:

 -brtl , but that does a lot more that we don't want and does not work :-(

I think -bnogc is the switch you want.

-- 
Peter Eisentraut   [EMAIL PROTECTED]





Re: [HACKERS] Bug in PL/pgSQL GET DIAGNOSTICS?

2002-09-28 Thread Peter Eisentraut

Bruce Momjian writes:

 Well, let's look at the common case.  For proper view rules, these would
 all return the right values because the UPDATE in the rule would be
 returned.  Is that what you mean?

I guess that really depends on whether the rules are written to properly
constrain the writes to the view to the set of rows visible by the view.
For example, if a view v1 selects from a single table t1 constrained by a
search condition, and I do UPDATE v1 SET ...; without a condition, does
that affect all rows in t1?  If not, then both our proposals are
equivalent, if yes, then the it's the user's fault, I suppose.

-- 
Peter Eisentraut   [EMAIL PROTECTED]





Re: [HACKERS] hacker help: PHP-4.2.3 patch to allow restriction of database access

2002-09-28 Thread Jim Mercer

On Sat, Sep 28, 2002 at 01:08:36PM +0200, Peter Eisentraut wrote:
 Jim Mercer writes:
  ideally, i'd like to have users-per-database, as opposed to the global
  model we have now.
 
 That's in the works.  Some form of this will be in 7.3.

cool!

  if we are willing to modify libpq to support a white-list, then what you
  are suggesting is quite possible.
 
 How would you store such a list and prevent users from simply unsetting
 it?

the list is something determined by the client, effectively, in this scenario.

basically, i'm just looking at a libpq function that will take a white-list,
and only allow connections through PQconnect() based on that list.

the reasoning for this is that postmaster has no ability to differentiate
between incoming sessions, and as such, storing the list in the server makes
no sense; the server won't know how to apply the list.

in the scenario i'm working with, apache/php/libpq are safe from change by
the users.  apache has the ability to pass values through php to libpq which
the user cannot change.

so apache would tell libpq what tables _this_ instance of apache/php/libpq
can access.

simply using environment variables is not good enough, as the user can
change their values in their php scripts.
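
just to make the idea concrete, something along the lines of the sketch
below is what i'm after (pg_connect_whitelisted() and the colon-separated
list format are purely illustrative; it is not a real libpq function):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libpq-fe.h>

/*
 * hypothetical sketch: refuse to connect to any database that is not on an
 * apache-supplied whitelist.  "whitelist" is a colon-separated list of
 * database names, e.g. "db1:db2".
 */
static PGconn *
pg_connect_whitelisted(const char *whitelist, const char *dbname,
                       const char *conninfo)
{
    char   *copy = strdup(whitelist);
    char   *tok;
    int     allowed = 0;

    for (tok = strtok(copy, ":"); tok != NULL; tok = strtok(NULL, ":"))
    {
        if (strcmp(tok, dbname) == 0)
        {
            allowed = 1;
            break;
        }
    }
    free(copy);

    if (!allowed)
    {
        fprintf(stderr, "database \"%s\" is not on the whitelist\n", dbname);
        return NULL;
    }
    return PQconnectdb(conninfo);   /* the normal libpq connection path */
}

php would hand the apache-provided list straight down to something like this
and never expose a way for user scripts to change it.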

  i suspect the php-dev people are unhappy with my patch because it is including
  logic (ie. parsing the white-list) which they don't think php should be
  responsible for.
 
 From my reading of the discussion, I think they have not understood that
 the PostgreSQL server has no way to distinguish different virtual host
 identities.  I think your feature is quite reasonable, if you list users
 instead of databases.

well, for my purposes, it is _databases_ i'm more concerned about.  each
virtual host should be restricted to specific databases. this way each user
is entitled to mess up their own world, but not mess with other people's.

as it currently stands, virtual hosts can trample all over other databases,
and with the nature of a single uid for all apache/php/libpq processes,
they are generally doing it with the same pgsql user.

vigilance over the user-level permissions is not something i trust the users
to do.  8^(

-- 
[ Jim Mercer[EMAIL PROTECTED] +1 416 410-5633 ]
[  I want to live forever, or die trying.]




Re: [HACKERS] hacker help: PHP-4.2.3 patch to allow restriction of database access

2002-09-28 Thread Michael Paesold

Jim Mercer [EMAIL PROTECTED] wrote:

 as it currently stands, virtual hosts can trample all over other
databases,
 and with the nature of a single uid for all apache/php/libpq proceses,
 they are generally doing it with the same pgsql user.

I haven't followed the whole thread, so perhaps I missed something. But why
not just use password authentication to the database with a different user
for each database? Ok, one has to store the plain-text passwords in the php
files. You have to protect your users from reading each other's files anyway;
this can be done.

At least you can set up different users per database, so that it doesn't
matter if the proposed restriction setting is by database or by user.

Regards,
Michael Paesold





Re: [HACKERS] hacker help: PHP-4.2.3 patch to allow restriction of database access

2002-09-28 Thread Jim Mercer

On Sat, Sep 28, 2002 at 03:57:27PM +0200, Michael Paesold wrote:
 Jim Mercer [EMAIL PROTECTED] wrote:
  as it currently stands, virtual hosts can trample all over other
 databases,
  and with the nature of a single uid for all apache/php/libpq proceses,
  they are generally doing it with the same pgsql user.
 
 I haven't followed the whole thread, so perhaps I missed something. But why
 not just use password authentication to the database with a different user
 for each database? Ok, one has to store the plain-text passwords in the php
 files. You have to protect your users from reading each others files anyway;
 this can be done.

that can be done, but plain-text passwords are not good.  also, it doesn't
stop users from cruising other databases for unprotected data.
my patch will control that, at least in the context of apache/php/libpq.

 At least you can set up different users per database, so that it doesn't
 matter if the proposed restriction setting is by database or by user.

most of the databases have one user, that of the httpd process.
from what i've seen, this is fairly standard with virtual hosting.

until we have per-database users, generally what you end up doing is creating
a per-database user/password table, and then writing applications that control
things based on that table, as opposed to the system table.  this means that
all of the tables in a database need to be read/write by one central user.

i've always found this hokey, but necessary.

the upcoming per-table userlist will help this a lot.

-- 
[ Jim Mercer[EMAIL PROTECTED] +1 416 410-5633 ]
[  I want to live forever, or die trying.]




Re: [HACKERS] How to REINDEX in high volume environments?

2002-09-28 Thread Justin Clift

Shridhar Daithankar wrote:
snip
 And what happens when index starts splitting when it grows beyond 1GB in size?

Having an index directory:

i.e. $PGDATA/data/<oid>/indexes/

(that's the kind of thing you mean, isn't it?)

Sounds workable, and sounds better than the present approach.

The reason that I was thinking of having a different path per index
would be for high volume situations like this:

/dev/dsk1 : /pgdata - data here
/dev/dsk2 : /pgindexes1 - some indexes here
/dev/dsk3 : /pgindexes2 - some ultra-high volume activity here

Let's say that there's a bunch of data on /dev/dsk1, and for performance
reasons it's been decided to move the indexes to another drive
/dev/dsk2.

Now, if just one of those indexes is getting *a lot* of the drive
activity, it would make sense to move it to its own dedicated drive.
Having an um... PGINDEX (that's just an identifier for this example, not
an environment variable suggestion) directory location defined would
mean that each time a REINDEX operation occurs, all new indexes
would be created in the same spot.  That sounds better than the present
approach thus far, but wouldn't work for situations where indexes are
spread across multiple disk drives.

The suggestion of having some kind of path info for each index is merely
a thought on how to meet that potential future need, not necessarily the
best method anyone has ever thought of.  Someone might pipe up and say
"Nah, it could be done better XYZ way", etc.

:-)

Regards and best wishes,

Justin Clift

 
 Putting indexes into a separate subdirectoy and mount/link that directory on a
 device that is on a separate SCSI channel is what I can think of as last drop
 of performance out of it..
 
 Just a thought, as usual..
 
 I don't know how much efforts it would take but if we have pg_xlog in separte
 configurable dir. now, putting indexes as well and having per database pg_xlog
 should be on the same line. The later aspect is also important IMO..
 
 Bye
  Shridhar
 
 --
 VMS, n.:The world's foremost multi-user adventure game.
 

-- 
My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there.
   - Indira Gandhi




[HACKERS] Vacuum from within a function crashes backend

2002-09-28 Thread Magnus Naeslund(f)

Hello, 

I did a vacuum from within a function, and it went sig11 on me.
Is it illegal to do that?

The function:

drop function xorder1_cleanup();
create function xorder1_cleanup() RETURNS integer AS '
declare
 x record;
 c integer;
begin
   c:=0;
   FOR x IN SELECT order_id,count(*) as cnt FROM xorder1_updates group by order_id LOOP
 if x.cnt > 1 then
   c:=c+x.cnt;   
   delete from xorder1_updates where order_id = x.order_id;
   insert into xorder1_updates(order_id) values (x.order_id);
 end if;
   END LOOP;
   execute ''vacuum full analyse xorder1_updates;'';
   return c;
end;
' LANGUAGE 'plpgsql';


Magnus

-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 Programmer/Networker [|] Magnus Naeslund
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-




Re: [HACKERS] How to REINDEX in high volume environments?

2002-09-28 Thread Alvaro Herrera

Justin Clift dijo: 

Hi,

 Ran a benchmark against the database, REINDEX'd the tables, VACUUM FULL
 ANALYZE'd, prepared to re-run the benchmark again and guess what?
 
 The indexes were back on the original drive.

Yes, this is expected.  Same for CLUSTER.  They create a different
filenode and point the relation (table or index) at it.

I think the separate space for indexes is a good idea.  However, and
this is orthogonal, I feel the way REINDEX works now is not the best,
because it precludes you from using the index while you are doing it.

I'm trying to implement a way to concurrently compact the indexes.
I hope to have it for 7.4.

-- 
Alvaro Herrera (alvherre[a]atentus.com)
Y una voz del caos me hablo y me dijo
Sonrie y se feliz, podria ser peor.
Y sonrei. Y fui feliz.
Y fue peor.





Re: [HACKERS] making use of large TLB pages

2002-09-28 Thread Tom Lane

Neil Conway [EMAIL PROTECTED] writes:
 If we used a key that would remain the same between runs of the
 postmaster, this should ensure that there isn't a possibility of two
 independant sets of backends operating on the same data dir. The most
 logical way to do this IMHO would be to just hash the data dir, but I
 suppose the current method of using the port number should work as
 well.

You should stick as closely as possible to the key logic currently used
for SysV shmem keys.  That logic is intended to cope with the case where
someone else is already using the key# that we initially generate, as
well as the case where we discover a collision with a pre-existing
backend set.  (We tell the difference by looking for a magic number at
the start of the shmem segment.)

Note that we do not assume the key is the same on each run; that's why
we store it in postmaster.pid.
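
To be concrete, the collision handling amounts to roughly this (a
much-simplified sketch; the magic number, header layout, and error handling
are stand-ins, not the real port/sysv_shmem.c code):

#include <sys/ipc.h>
#include <sys/shm.h>
#include <errno.h>
#include <stdio.h>

#define SKETCH_MAGIC 0x50474254         /* stand-in for the real magic number */

typedef struct
{
    long    magic;                      /* marks a segment as ours */
    int     creator_pid;                /* postmaster that created it */
} SketchShmemHeader;

/* Return 1 if the segment reachable via "key" carries our magic number. */
static int
segment_is_postgres(key_t key)
{
    int     shmid = shmget(key, 0, 0);
    void   *addr;
    int     ours = 0;

    if (shmid < 0)
        return 0;                       /* can't even look: treat as foreign */
    addr = shmat(shmid, NULL, SHM_RDONLY);
    if (addr == (void *) -1)
        return 0;                       /* attach failed: treat as foreign */
    ours = (((SketchShmemHeader *) addr)->magic == SKETCH_MAGIC);
    shmdt(addr);
    return ours;
}

/* Walk candidate keys until a brand-new segment can be created. */
static int
create_segment(key_t first_key, size_t size, key_t *key_used)
{
    key_t   key;

    for (key = first_key;; key++)
    {
        int     shmid = shmget(key, size, IPC_CREAT | IPC_EXCL | 0600);

        if (shmid >= 0)
        {
            *key_used = key;            /* this is what goes into postmaster.pid */
            return shmid;
        }
        if (errno == EEXIST && segment_is_postgres(key))
            fprintf(stderr, "key %d: possible orphaned segment\n", (int) key);
        /* foreign, orphaned, or otherwise unusable: try the next key */
    }
}

The real code additionally checks whether anyone is still attached to a
segment that carries our magic number before deciding what to do with it;
the sketch just reports it and moves on.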

 (1) call sys_alloc_hugepages() without IPC_EXCL. If it returns
 an error, we're in the clear: there's no page matching
 that key. If it returns a pointer to a previously existing
 segment, panic: it is very likely that there are some
 orphaned backends still active.

s/panic/and the PG magic number appears in the segment header, panic/

 - if we're compiling on a Linux system but the kernel headers
   don't define the syscalls we need, use some reasonable
   defaults (e.g. the syscall numbers for the current hugepage
   syscalls in Linux 2.5)

I think this is overkill, and quite possibly dangerous.  If we don't see
the symbols then don't try to compile the code.

On the whole it seems that this allows a very nearly one-to-one mapping
to the existing SysV functionality.  We don't have the number of
connected processes syscall, perhaps, but we don't need it: if a
hugepages segment exists we can assume the number of connected processes
is greater than 0, and that's all we really need to know.

I think it's okay to stuff this support into the existing
port/sysv_shmem.c file, rather than make a separate file (particularly
given your point that we have to be able to fall back to SysV calls at
runtime).  I'd suggest reorganizing the code in that file slightly to
separate the actual syscalls from the controlling logic in
PGSharedMemoryCreate().  Probably also will have to extend the API for
PGSharedMemoryIsInUse() and RecordSharedMemoryInLockFile() to allow
three fields to be recorded in postmaster.pid, not two --- you'll want
a boolean indicating whether the stored key is for a SysV or hugepage
segment.

regards, tom lane




Re: [HACKERS] Vacuum from within a function crashes backend

2002-09-28 Thread Alvaro Herrera

Magnus Naeslund(f) dijo: 

 Hello, 
 
 I did a vacuum from within a function, and it went sig11 on me.
 Is it illegal to do that?

Huh...  what version is this?  In current sources, VACUUM cannot be run
inside a function (it will throw an ERROR).  In 7.2[.1] I see there is
no protection against this.

Maybe the fix for this should be backported to 7.2 also.

-- 
Alvaro Herrera (alvherre[a]atentus.com)
Hay quien adquiere la mala costumbre de ser infeliz (M. A. Evans)






Re: [HACKERS] How to REINDEX in high volume environments?

2002-09-28 Thread Greg Copeland

On Sat, 2002-09-28 at 02:16, Shridhar Daithankar wrote:
 On 28 Sep 2002 at 17:08, Justin Clift wrote:
 
  Have moved the indexes to another drive, then created symlinks to them.
  Ran a benchmark against the database, REINDEX'd the tables, VACUUM FULL
  ANALYZE'd, prepared to re-run the benchmark again and guess what?
  
  The indexes were back on the original drive.
  Is there a way to allow REINDEX to work without having this side affect?
  
  Pre-creating a bunch of dangling symlinks doesn't work (tried that, it
  gives a ERROR:  cannot create accounts_pkey: File exists on FreeBSD
  4.6.2 when using the REINDEX).
 
 Looks like we should have a subdirectory in database directory which stores 
 index.
 
 May be transaction logs, indexes goes in separte directory which can be 
 symlinked. Linking a directory is much simpler solution than linking a file.
 
 I suggest we have per database transaction log and indexes created in separate 
 subdirectories for each database. Furhter given that large tables are segmented 
 after one GB size, a table should have it's own subdirectory optionally..
 
 At the cost of few inodes, postgresql would gain much more flexibility and 
 hence tunability..
 
 May be TODO for 7.4? Anyone?


Very neat idea!  Sounds like an excellent way of gaining lots of
granularity!

I can't even think of a reason not to use the directory per table scheme
all the time.  Perhaps simply allowing for a script/tool that will
automatically perform such a physical table migration to a distinct 
directory would be in order too.

Either way, sounds like a good idea.

Greg







[HACKERS] Improving backend startup interlock

2002-09-28 Thread Tom Lane

I have the beginnings of an idea about improving our interlock logic
for postmaster startup.  The existing method is pretty good, but we
have had multiple reports that it can fail during system boot if the
old postmaster wasn't given a chance to shut down cleanly: there's
a fair-sized chance that the old postmaster PID will have been assigned
to some other process, and that fools the interlock check.

I think we can improve matters by combining the existing checks for
old-postmaster-PID and old-shared-memory-segment into one cohesive
entity.  To do this, we must abandon the existing special case for
private memory when running a bootstrap or standalone backend.
Even a standalone backend will be required to get a shmem segment
just like a postmaster would.  This ensures that we can use both
parts of the safety check, even when the old holder of the data
directory interlock was a standalone backend.

Here's a sketch of the improved startup procedure:

1. Try to open and read the $PGDATA/postmaster.pid file.  If we fail
because it's not there, okay to continue, because old postmaster must
have shut down cleanly; skip to step 8.  If we fail for any other reason
(eg, permissions failure), complain and abort startup.  (Because we
write the postmaster.pid file mode 600, getting past this step
guarantees we are either the same UID as the old postmaster or root;
else we'd have failed to read the old file.  This fact justifies some
assumptions below.)

2. Extract old postmaster PID and old shared memory key from file.
(Both will now always be there, per above; abort if file contents are
not as expected.)  We do not bother with trying kill(PID, 0) anymore,
because it doesn't prove anything.

3. Try to attach to the old shared memory segment using the old key.
There are three possible outcomes:
A: fail because it's not there.  Then we know the old postmaster
   (or standalone backend) is gone, and so are all its children.
   Okay to skip to step 7.
B: fail for some other reason, eg permissions violation.  Because
   we know we are the same UID (or root) as before, this must indicate
   that the old shmem segment actually belongs to someone else;
   so we have a chance collision with someone else's shmem key.
   Ignore the shmem segment, skip to step 7.  (In short,
   we can treat all failures alike, which is a Good Thing.)
C: attach succeeds. Continue to step 4.

4. Examine header of old shmem segment to see if it contains the right
   magic number *and* old postmaster PID.  If not, it isn't really
   a Postgres shmem segment, so ignore it; detach and skip to step 7.

5. If old shmem segment still has other processes attached to it,
   abort: these must be an old postmaster and/or old backends still
   alive.  (We can check nattach > 1 in the SysV case, or just assume
   they are there in the hugepages-segment case that Neil wants to add.)

6. Detach from and delete the old shmem segment.  (Deletion isn't
   strictly necessary, but we should do it to avoid sucking resources.)

7. Delete the old postmaster.pid file.  If this fails for any reason,
   abort.  (Either we've got permissions problems or a race condition
   with someone else trying to start up.)

8. Create a shared memory segment.

9. Create a new postmaster.pid file and record my PID and segment key.
   If we fail to do this (with O_EXCL create), abort; someone else
   must be trying to start up at the same time.  Be careful to create
   the lockfile mode 600, per notes above.
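
In code form, the sequence would look roughly like this (a simplified
sketch only; the pid-file format, header struct, and error handling are
placeholders, not a finished implementation):

#include <sys/ipc.h>
#include <sys/shm.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define SKETCH_MAGIC 0x50474254

typedef struct
{
    long    magic;
    long    creator_pid;
} SketchShmemHeader;

static void
startup_interlock(const char *pidfile, key_t new_key, size_t size)
{
    FILE   *fp = fopen(pidfile, "r");               /* step 1 */
    long    old_pid, old_key;
    int     fd, shmid;
    SketchShmemHeader *hdr;
    FILE   *out;

    if (fp != NULL)
    {
        if (fscanf(fp, "%ld %ld", &old_pid, &old_key) != 2)     /* step 2 */
            exit(1);                    /* unexpected contents: abort */
        fclose(fp);

        shmid = shmget((key_t) old_key, 0, 0);      /* step 3 */
        if (shmid >= 0)
        {
            void   *addr = shmat(shmid, NULL, 0);

            if (addr != (void *) -1)
            {
                struct shmid_ds ds;

                hdr = (SketchShmemHeader *) addr;
                if (hdr->magic == SKETCH_MAGIC && hdr->creator_pid == old_pid)
                {                                   /* step 4: really ours */
                    if (shmctl(shmid, IPC_STAT, &ds) == 0 && ds.shm_nattch > 1)
                        exit(1);        /* step 5: old backends still alive */
                    shmdt(addr);                    /* step 6 */
                    shmctl(shmid, IPC_RMID, NULL);
                }
                else
                    shmdt(addr);        /* someone else's segment: ignore it */
            }
            /* attach failures (outcome B) are treated like "not there" */
        }

        if (unlink(pidfile) != 0)                   /* step 7 */
            exit(1);
    }
    else if (errno != ENOENT)
        exit(1);                        /* e.g. permissions trouble: abort */

    shmid = shmget(new_key, size, IPC_CREAT | IPC_EXCL | 0600);    /* step 8 */
    if (shmid < 0)
        exit(1);
    hdr = (SketchShmemHeader *) shmat(shmid, NULL, 0);
    hdr->magic = SKETCH_MAGIC;
    hdr->creator_pid = getpid();

    fd = open(pidfile, O_WRONLY | O_CREAT | O_EXCL, 0600);         /* step 9 */
    if (fd < 0)
        exit(1);                        /* a concurrent starter beat us to it */
    out = fdopen(fd, "w");
    fprintf(out, "%ld %ld\n", (long) getpid(), (long) new_key);
    fclose(out);
}

The race between two concurrent starters shows up between steps 7 and 9,
which is exactly the window discussed below.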


This is not quite ready for prime time yet, because it's not very
bulletproof against the scenario where two would-be postmasters are
starting concurrently.  The first one might get all the way through the
sequence before the second one arrives at step 7 --- in which case the
second one will be deleting the first one's lockfile.  Oops.  A possible
answer is to create a second lockfile that only exists for the duration
of the startup sequence, and use that to ensure that only one process is
trying this sequence at a time.  This reintroduces the same problem
we're trying to get away from (must rely on kill(PID, 0) to determine
validity of the lock file), but at least the window of vulnerability is
much smaller than before.  Does anyone see a better way?
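
(For what it's worth, the second lockfile would be nothing more than the
usual O_EXCL dance, held only for the duration of steps 1-9; a sketch, with
made-up names:)

#include <fcntl.h>
#include <unistd.h>

/* Illustrative only: serialize would-be postmasters during startup by
 * holding a short-lived lock file in the data directory. */
static int
acquire_startup_lock(const char *startup_lockfile)
{
    return open(startup_lockfile, O_WRONLY | O_CREAT | O_EXCL, 0600);
}

static void
release_startup_lock(const char *startup_lockfile, int lockfd)
{
    close(lockfd);
    unlink(startup_lockfile);
}

If acquire_startup_lock() fails, we are back to guessing (via kill(PID, 0))
whether the holder is still alive, but only for the brief startup window.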

A more general objection is that this approach will hardwire, even more
solidly than before, the assumption that we are using a shared-memory
API that provides identifiable shmem segments (ie, something we can
record a key for and later try to attach to).  I think some people
wanted to get away from that.  But so far I've not seen any proposal
for an alternative startup interlock that doesn't require attachable
shared memory.

regards, tom lane




Re: [HACKERS] Vacuum from within a function crashes backend

2002-09-28 Thread Magnus Naeslund(f)

Alvaro Herrera [EMAIL PROTECTED] wrote:
 Magnus Naeslund(f) dijo:
 
 Hello,
 
 I did a vacuum from within a function, and it went sig11 on me.
 Is it illegal to do that?
 
 Huh...  what version is this?  In current sources, VACUUM cannot be
 run inside a function (it will throw an ERROR).  In 7.2[.1] I see
 there is no protection against this.
 
 Maybe the fix for this should be backported to 7.2 also.

Argh!
Sorry i forgot the version, it's as you say 7.2.1..
Then i'll just not do that :)

Magnus

-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 Programmer/Networker [|] Magnus Naeslund
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-




Re: [HACKERS] version mismatch detection doesn't work

2002-09-28 Thread Tom Lane

Alvaro Herrera [EMAIL PROTECTED] writes:
 Seems the functionality to detect old versions of the postmaster with
 newer psql doesn't work.

What functionality?  psql has never had such a test.  I think you
are thinking of pg_dump.

regards, tom lane




Re: [HACKERS] How to REINDEX in high volume environments?

2002-09-28 Thread Tom Lane

Justin Clift [EMAIL PROTECTED] writes:
 Shridhar Daithankar wrote:
 Looks like we should have a subdirectory in database directory which stores
 index.

 That was my first thought also, but an alternative/additional approach
 would be this (not sure if it's workable):

See the tablespaces TODO item.  I'm not excited about building
half-baked versions of tablespaces before we get around to doing the
real thing ...

regards, tom lane




Re: [HACKERS] version mismatch detection doesn't work

2002-09-28 Thread Alvaro Herrera

Tom Lane dijo: 

 Alvaro Herrera [EMAIL PROTECTED] writes:
  Seems the functionality to detect old versions of the postmaster with
  newer psql doesn't work.
 
 What functionality?  psql has never had such a test.  I think you
 are thinking of pg_dump.

No, I was thinking of psql.  There was a discussion some time ago about
mismatching versions; I don't know where I got the idea that the
conclusion had been that if versions mismatched, psql would barf.  (The
conclusion was to add the version number to psql.)

-- 
Alvaro Herrera (alvherre[a]atentus.com)
No hay ausente sin culpa ni presente sin disculpa (Prov. frances)





[HACKERS] v7.3 Branched ...

2002-09-28 Thread Marc G. Fournier


As was previously discussed (and now that I'm mostly back from the dead
... damn colds) I've just branched off REL7_3_STABLE ... all future betas
will be made based off of that branch, so that development may resume on
the main branch ...

So, for those doing commits or anoncvs, remember that the 'stable' branch
requires you to use:

-rREL7_3_STABLE

while the development branch is 'as per normal' ...






Re: [HACKERS] v7.3 Branched ...

2002-09-28 Thread Justin Clift

Marc G. Fournier wrote:
 
 As was previously discussed (and now that I'm mostly back from the dead
 ... damn colds) I've just branched off REL7_3_STABLE ... all future beta's
 will be made based off of that branch, so that development may resume on
 the main branch ...

What is the attitude towards getting stuff from Gborg to the main
PostgreSQL distribution (contrib or otherwise)?

For example, the pg_autotune utility recently started on GBorg.  It's an
ongoing project, useful to many installations, and the additional size
would be barely noticeable.

Not saying it's ready right now, but am hoping that maybe 7.4 would be
able to include it.

:-)

Regards and best wishes,

Justin Clift

 
 So, for those doing commits or anoncvs, remember that the 'stable' branch
 requires you to use:
 
 -rREL7_3_STABLE
 
 while the development branch is 'as per normal' ...



-- 
My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there.
   - Indira Gandhi




[HACKERS] Will Pay for Help

2002-09-28 Thread fostered

I need to be able to download info from my public library website, a
program called Reference USA; it will only allow you to download 10 at
a time... I would think there is a way to open this up. Any help would
be appreciated.




Re: [HACKERS] Web site

2002-09-28 Thread CoL

Hi,

So, why not just redirect people to one of the mirrors listed? This
could be done based on IP (yes it is inaccurate but it is close enough
and has the same net effect: pushing people off the main web server) or
it could be done by simply redirecting to a random mirror.
I think it would be stupid; I am the one who wants to decide where to go.
If I feel that .co.uk is better than others I'll choose that, and bookmark
it if I want.
(random??? brbrbrbrbr) :)

C.





Re: [HACKERS] PGXLOG variable worthwhile?

2002-09-28 Thread jra

Hi everyone,

In order to clarify things, how about we do a formal vote with specific
details like this:

***

Are you for...

- pg_xlog directory changeable at all, not using symlinks?

Yes/No

- a PGXLOG environment variable to do this?

Yes/No

- a -X command line option to do this?

Yes/No

- a GUC (postgresql.conf) option to do this?

Yes/No

- altering the format of the pg_xlog directory so that it
can't be used with the wrong database instance?

Yes/No

***

Does this seem reasonable?

:-)

Regards and best wishes,

Justin Clift


-- 
My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there.
   - Indira Gandhi




[HACKERS] query speed depends on lifetime of frozen db?

2002-09-28 Thread Andriy Tkachuk

Hi hackers.

There is interesting behavior of a certain select query, as mentioned in $subj.

In working db this query takes:
real3m10.219s
user0m0.074s
sys 0m0.074s

it's interesting that vacuum or analyze or reindex is not helpful, BUT

if I dump this db and create it again (with another name maybe, no matter,
just for testing) and run the same query on this db, it takes:
real0m6.225s
user0m0.072s
sys 0m0.074s
(other databases continue running)

There is no end to this story!
After some time (a couple of days, for example) this same query overloads
the machine on this new test db also! No one was working with this db during
this time; work continued only on the real working databases. Vacuuming was
as usual (every 2 hours without -f, and with it once at night :); as I said,
this behavior does not depend on any vacuuming.

Have anyone any ideas about this?

db=# SELECT version();
version
---
 PostgreSQL 7.2.2 on i386-portbld-freebsd4.6.1, compiled by GCC 2.95.3


Thanks,
  Andriy.

--
  Because strait is the gate, and narrow is the way, which leadeth unto
   life, and few there be that find it. (MAT 7:14)
 Ask, and it shall be given you; seek, and ye shall find;
   knock, and it shall be opened unto you... (MAT 7:7)

ANT17-RIPE





[HACKERS] initdb failed due to syntax error

2002-09-28 Thread jeff

I recently downloaded and built PostgreSQL 7.2.2.  In trying to run
"gmake check" after the build I get an error indicating that
initdb failed.  I turned on the debug option, so the text below
contains some of the debug statements output from gmake check.  Note
the error at the bottom: "ERROR:  syntax error at line 2658: unexpected
token parse error".  It seems like this is complaining about the
syntax of line 2658 in the postgres.bki file, but I don't see any
problem with the syntax here.

Does anyone have any ideas about the problem here?

Thanks,
Jeff Stevens





DEBUG:  start transaction
DEBUG:  relation created with oid 16406
DEBUG:  commit transaction
DEBUG:  start transaction
DEBUG:  open relation pg_aggregate, attrsize 66
DEBUG:  create attribute 0 name aggname len 32 num 1 type 19
DEBUG:  create attribute 1 name aggowner len 4 num 2 type 23
DEBUG:  create attribute 2 name aggtransfn len 4 num 3 type 24
DEBUG:  create attribute 3 name aggfinalfn len 4 num 4 type 24
DEBUG:  create attribute 4 name aggbasetype len 4 num 5 type 26
DEBUG:  create attribute 5 name aggtranstype len 4 num 6 type 26
DEBUG:  create attribute 6 name aggfinaltype len 4 num 7 type 26
DEBUG:  create attribute 7 name agginitval len -1 num 8 type 25
DEBUG:  commit transaction
ERROR:  syntax error at line 2658: unexpected token parse error

initdb failed.
Data directory /array/web/src/postgresql-7.2.2/src/test/regress/./tmp_check/data
 will not be removed at user's request.




[HACKERS] How to convert

2002-09-28 Thread Tomasz Zdybicki

Hi,

I have a set of points defined in two columns x,y and...
how can I convert it to the PATH data type using plpgsql?

Thanks for help
Thomas






Re: [HACKERS] Will Pay for Help

2002-09-28 Thread Jonah H. Harris

Is it just me, or is this not very clear?

Could you be more specific on what you need?

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of fostered
Sent: Sunday, September 22, 2002 7:04 AM
To: [EMAIL PROTECTED]
Subject: [HACKERS] Will Pay for Help


I need to be able to download info from my public library website a
program called Reference USA it will only allow you to download 10 at
a time...I would think there is a way to open this up...any help would
be appreciated.






Re: [HACKERS] DROP COLUMN misbehaviour with multiple inheritance

2002-09-28 Thread Alvaro Herrera

En Thu, 19 Sep 2002 14:06:05 -0400
Tom Lane [EMAIL PROTECTED] escribió:

 Alvaro Herrera [EMAIL PROTECTED] writes:
  Tom Lane dijo: 
  One corner case is that I think we currently allow
  
  create table p (f1 int);
  create table c (f1 int) inherits(p);
 
  In this case, c.f1.attisinherited count is 2; thus when I drop f1 from
  p, it is not dropped from c.
 
 That seems right, but the problem I have with it is that the resulting
 state of c.f1 is attisinherited = 1.  This means that you cannot drop
 c.f1.  It seems to me that we should have this behavior:

New patch attached.  This one should answer your concerns.  This is the
idea implemented:

 We might need two fields :-( ... a locally defined boolean and a
 number of times inherited counter.


Some discussion:

 create table p (f1 int);
 create table c (f1 int not null) inherits(p);
 
 drop column p.f1;
 -- c.f1 GOES AWAY, because its inherit count went to zero

In this case, the attached code preserves f1.  It's not clear whether
the user wants the column to stay or not, but if he is defining it
twice, let him drop it twice if he wants it to go away.

 Another interesting case is multiple inheritance.
 
 create table p1 (f1 int);
 create table p2 (f1 int);
 create table c () inherits(p1, p2);
 
 drop ONLY column p1.f1;
 drop column p2.f1;
 
 After this sequence, what is the state of c.f1?  Is it still there?
 Should it be?  If it is still there, will it be possible to get rid of
 it with drop column c.f1?  What if we did DROP ONLY on *both*
 ancestors?

Well, in this case the column is dropped.  If the last drop is ONLY, the
column will stay (regardless of what the first drop did).  This one
seems very tricky and I don't see a way to do otherwise.

Other cases (such as the set of four you posted) are handled in the
natural way you described.  Regression tests for all those four are
included, along with another case that was the start of all this.

Please review the patch.  It should be current as of your commit of
20:30 today, but I'm not sure (anoncvs delays and all -- there are
changes to the same files).

-- 
Alvaro Herrera (alvherre[a]atentus.com)
Hay dos momentos en la vida de un hombre en los que no debería
especular: cuando puede permitírselo y cuando no puede (Mark Twain)



attinhcount-2.patch
Description: Binary data





Re: [HACKERS] [GENERAL] Performance while loading data and indexing

2002-09-28 Thread James Maes

Has there been any thought of providing RAW disk support to bypass the fs?

-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED]]On Behalf Of Bruce Momjian
Sent: Thursday, September 26, 2002 3:57 PM
To: Neil Conway
Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED];
[EMAIL PROTECTED]
Subject: Re: [HACKERS] [GENERAL] Performance while loading data and
indexing


Neil Conway wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  The paper does recommend ext3, but the differences between file systems
  are very small.

 Well, I only did a very rough benchmark (a few runs of pgbench), but
 the results I found were drastically different: ext2 was significantly
 faster (~50%) than ext3-writeback, which was in turn significantly
 faster (~25%) than ext3-ordered.

  Also, though ext3 is slower, turning fsync off should make ext3 function
  similar to ext2.

 Why would that be?

OK, I changed the text to:

File system choice is particularly difficult on Linux because there are
so many file system choices, and none of them are optimal: ext2 is not
entirely crash-safe, ext3, xfs, and jfs are journal-based, and Reiser is
optimized for small files and does journalling. The journalling file
systems can be significantly slower than ext2 but when crash recovery is
required, ext2 isn't an option.

--
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073






Re: [HACKERS] Bug in PL/pgSQL GET DIAGNOSTICS?

2002-09-28 Thread Bruce Momjian

Peter Eisentraut wrote:
 Bruce Momjian writes:
 
  Well, let's look at the common case.  For proper view rules, these would
  all return the right values because the UPDATE in the rule would be
  returned.  Is that what you mean?
 
 I guess that really depends on whether the rules are written to properly
 constrain the writes to the view to the set of rows visible by the view.
 For example, if a view v1 selects from a single table t1 constrained by a
 search condition, and I do UPDATE v1 SET ...; without a condition, does
 that affect all rows in t1?  If not, then both our proposals are
 equivalent, if yes, then the it's the user's fault, I suppose.

Well, since we found that we can't get a perfect solution, I started to
think of the common cases.  First, there is the "log changes" type of
rule, but that isn't INSTEAD, so it doesn't even apply here.  We already
know we want to return the result of the main query.

CREATE RULE service_request_update AS -- UPDATE rule
ON UPDATE TO service_request 
DO 
INSERT INTO service_request_log (customer_id, description, mod_type)
VALUES (old.customer_id, old.description, 'U');

CREATE RULE service_request_delete AS -- DELETE rule
ON DELETE TO service_request 
DO
INSERT INTO service_request_log (customer_id, description, mod_type)
VALUES (old.customer_id, old.description, 'D');

Second, there is the updatable view rule, that is INSTEAD, and relies on
the primary key of the table:

CREATE RULE view_realtable_insert AS -- INSERT rule
ON INSERT TO view_realtable 
DO INSTEAD 
INSERT INTO realtable 
VALUES (new.col);

CREATE RULE view_realtable_update AS -- UPDATE rule
ON UPDATE TO view_realtable 
DO INSTEAD 
UPDATE realtable 
SET col = new.col 
WHERE col = old.col;

CREATE RULE view_realtable_delete AS -- DELETE rule
ON DELETE TO view_realtable 
DO INSTEAD 
DELETE FROM realtable 
WHERE col = old.col;

It is my understanding that the proposed rule result improvements will
return the proper values in these cases.  That is why I like the current
proposal.  It also makes any extra non-tag matching queries in the rule
not affect the result, which seems best.

Does anyone else have a common rule that would return incorrect results
using the proposed rules?

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073




[HACKERS] The rh7.3 time errors

2002-09-28 Thread Magnus Naeslund(f)

Was there a workaround for the errors in time handling on the rh7.3 dist?

I get these regression failures:
 abstime  ... FAILED
 tinterval... FAILED
test horology ... FAILED

I remember the discussion about old dates, but not if there was any fix for it...

Magnus

-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 Programmer/Networker [|] Magnus Naeslund
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-




Re: [HACKERS] The rh7.3 time errors

2002-09-28 Thread Joe Conway

Magnus Naeslund(f) wrote:
 Was there a workaround for the errors in time handling for rh7.3 dist?
 
 I get there regression failures:
  abstime  ... FAILED
  tinterval... FAILED
 test horology ... FAILED
 
 I remember the discussion about old dates, but not if there was any fix for it...
 

Tom fixed this just before we went into beta. Are you using a recent snapshot?

Joe







Re: [HACKERS] The rh7.3 time errors

2002-09-28 Thread Magnus Naeslund(f)

Joe Conway [EMAIL PROTECTED] wrote:
 Magnus Naeslund(f) wrote:
 Was there a workaround for the errors in time handling for rh7.3
 dist? 
 
 I get there regression failures:
  abstime  ... FAILED
  tinterval... FAILED
 test horology ... FAILED
 
 I remember the discussion about old dates, but not if there was any
 fix for it... 
 
 
 Tom fixed this just before we went into beta. Are you using a recent
 snapshot? 
 
 Joe

As usual, i never remember to supply version information.
I'm using the latest stable, 7.2.2.
Is there a quick workaround for this version, or must there be a code change?

Magnus

-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 Programmer/Networker [|] Magnus Naeslund
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-





[HACKERS] 7.2.3?

2002-09-28 Thread Bruce Momjian

I have seen no discussion on whether to go ahead with a 7.2.3 to add
several serious fixes Tom has made to the code in the past few days. 

Are we too close to 7.3 for this to be worthwhile?  Certainly there will
be people distributing 7.2.X for some time as 7.3 stabilizes.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073




Re: [HACKERS] The rh7.3 time errors

2002-09-28 Thread Bruce Momjian

Magnus Naeslund(f) wrote:
 Joe Conway [EMAIL PROTECTED] wrote:
  Magnus Naeslund(f) wrote:
  Was there a workaround for the errors in time handling for rh7.3
  dist? 
  
  I get there regression failures:
   abstime  ... FAILED
   tinterval... FAILED
  test horology ... FAILED
  
  I remember the discussion about old dates, but not if there was any
  fix for it... 
  
  
  Tom fixed this just before we went into beta. Are you using a recent
  snapshot? 
  
  Joe
 
 As usual, i never remember to supply version information.
 I'm using latest stable, 7.2.2.
 Is there a quick workaround for this version, or must there be code ?

The change was to use localtime() rather than mktime() in the code. 
There is no workaround available for 7.2.X, and I don't see that anyone
backpatched it to 7.2 CVS.  However, we are considering a 7.2.3 and a
backpatch of that fix may be worthwhile.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073




Re: [HACKERS] 7.2.3?

2002-09-28 Thread Justin Clift

Bruce Momjian wrote:
 
 I have seen no discussion on whether to go ahead with a 7.2.3 to add
 several serious fixes Tom has made to the code in the past few days.

This will allow production sites to run the 7.2 series and also do
VACUUM FULL won't it?

If so, then the idea is already pretty good.  :-)

Which other fixes would be included?

Regards and best wishes,

Justin Clift

 
 Are we too close to 7.3 for this to be worthwhile?  Certainly there will
 be people distributing 7.2.X for some time as 7.3 stabilizes.
 
 --
   Bruce Momjian|  http://candle.pha.pa.us
   [EMAIL PROTECTED]   |  (610) 359-1001
   +  If your life is a hard drive, |  13 Roberts Road
   +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073
 

-- 
My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there.
   - Indira Gandhi




Re: [HACKERS] version mismatch detection doesn't work

2002-09-28 Thread Greg Copeland

It was I that originally brought the topic up.  I don't really remember
the exact details but I do seem to recall that the author thought it was
a horrid idea.  Basically, and poorly paraphrased, the response was that
everyone should use select version() after they connect and if they
don't know to do that or simply forget, that's tough.  I also seem to
recall that even the prospect of having some slash command that showed
psql and back end version was considered a waste and a bad/redundant
idea.  So, as it stands, only the psql version is displayed.

I still think it makes so much more sense to simply state something
like, Welcome to psql 7.3b1, the PostgreSQL interactive terminal.  You
are currently connected with a 7.1.1 server named 'foobar'.  It's
simple and makes the information very obvious.  It also helps reinforce
the name of the server that you've connected with.  I should clarify,
the host name part is not something I originally asked about but does
seem to make sense.  I honestly couldn't care less about the exact text,
as making the information obviously available is all that I really care
about.

Personally, I never understood how making even marginally redundant
information readily and obviously available, especially when it can
prevent some potential peril, is a bad idea.  But, to each his own.  ;)

Greg



On Sat, 2002-09-28 at 11:28, Alvaro Herrera wrote:
 Tom Lane dijo: 
 
  Alvaro Herrera [EMAIL PROTECTED] writes:
   Seems the functionality to detect old versions of the postmaster with
   newer psql doesn't work.
  
  What functionality?  psql has never had such a test.  I think you
  are thinking of pg_dump.
 
 No, I was thinking of psql.  There was a discussion some time ago about
 mismatching versions; I don't know where I got the idea that the
 conclusion had been that if versions mismatched, psql would barf.  (The
 conclusion was to add the version number to psql.)
 
 -- 
 Alvaro Herrera (alvherre[a]atentus.com)
 No hay ausente sin culpa ni presente sin disculpa (Prov. frances)
 
 






Re: [HACKERS] 7.2.3?

2002-09-28 Thread Lamar Owen

On Saturday 28 September 2002 02:36 pm, Bruce Momjian wrote:
 I have seen no discussion on whether to go ahead with a 7.2.3 to add
 several serious fixes Tom has made to the code in the past few days.

 Are we too close to 7.3 for this to be worthwhile?  Certainly there will
 be people distributing 7.2.X for some time as 7.3 stabilizes.

IMHO, I believe a 7.2.3 is worthwhile.  It isn't _that_ much effort, is it?  I 
am most certainly of the school of thought that backporting fixes for serious issues 
into the last stable release is a Good Thing.  I don't think a released 7.3 
should prevent us from a 7.2.4 down the road, either -- or even a 7.1.4 if a 
serious security issue were to be found there.  Probably not a 7.0.4, though.  
And definitely not a 6.5.4.  Some people can have great difficulty migrating 
-- if we're not going to make it easy for people to migrate, we should 
support older versions with fixes.  IMHO, of course.

If it hasn't already, a fix for the Red Hat 7.3/glibc mktime(3) issue 
(workaround really) would be nice, as I understand the 7.3 branch has one.

RPM's will take me all of an hour if I'm at work when it's released.  That is 
if my wife doesn't go into labor first (she's at 37 weeks and having 
Braxton-Hicks already). #4.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11




Re: [HACKERS] The rh7.3 time errors

2002-09-28 Thread Magnus Naeslund(f)

Bruce Momjian [EMAIL PROTECTED] wrote:
 
 The change was to use localtime() rather than mktime() in the code.
 There is no workaround available for 7.2.X, and I don't see that
 anyone backpatched it to 7.2 CVS.  However, we are considering a
 7.2.3 and a backpatch of that fix may be worthwhile.
 

That would be excellent, because it feels awkward installing stuff that doesn't pass 
the regression tests, as all our new Linux boxes will be RH 7.3.
But right now our apps aren't relying on the time being right (isn't that the 
issue?), only the years...

If it's a simple fix, I think we should include it in the next 7.2.X.

Magnus

-- 
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-
 Programmer/Networker [|] Magnus Naeslund
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] 7.2.3?

2002-09-28 Thread Alvaro Herrera

Justin Clift wrote: 

 Bruce Momjian wrote:

  I have seen no discussion on whether to go ahead with a 7.2.3 to add
  several serious fixes Tom has made to the code in the past few days.
 
 This will allow production sites to run the 7.2 series and also do
 VACUUM FULL, won't it?
 
 If so, then the idea is already pretty good.  :-)
 
 Which other fixes would be included?

At a minimum, the VACUUM code should be fixed to prevent VACUUM from running
inside a function.  At least one user has been bitten by it.

Memory leaks and such in the PL modules should be backported also.

-- 
Alvaro Herrera (alvherre[a]atentus.com)
El sentido de las cosas no viene de las cosas, sino de
las inteligencias que las aplican a sus problemas diarios
en busca del progreso. (Ernesto Hernández-Novich)


---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] 7.2.3?

2002-09-28 Thread Tom Lane

Alvaro Herrera [EMAIL PROTECTED] writes:
 Memory leaks and such in the PL modules should be backported also.

This is getting out of hand :-(

7.2 is in maintenance status at this point.  I'm willing to do backports
for bugs that cause data loss, like this VACUUM/CLOG issue.
Performance problems are not on the radar screen at all (especially
not when the putative fixes for them haven't received much of any
testing, and are barely worthy to be called beta status).

We do not have either the developer manpower or the testing resources
to do more than the most minimal maintenance on back versions.  Major
back-port efforts just aren't going to happen.  If they did, they would
significantly impact our ability to work on 7.3 and up; does that seem
like a good tradeoff to you?

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] v7.3 Branched ...

2002-09-28 Thread Marc G. Fournier


Not going to happen ... there are oodles of not-big-but-useful pieces
of software out there that we could include ... but the point of GBorg is
you download the main repository, and then you go to GBorg to look for the
add-ons you might like to have ...




On Sun, 29 Sep 2002, Justin Clift wrote:

 Marc G. Fournier wrote:
 
  As was previously discussed (and now that I'm mostly back from the dead
  ... damn colds) I've just branched off REL7_3_STABLE ... all future betas
  will be made based off of that branch, so that development may resume on
  the main branch ...

 What is the attitude towards getting stuff from Gborg to the main
 PostgreSQL distribution (contrib or otherwise)?

 For example, the pg_autotune utility recently started on GBorg.  It's an
 ongoing project, useful to many installations, and the additional size
 would be barely noticeable.

 Not saying it's ready right now, but am hoping that maybe 7.4 would be
 able to include it.

 :-)

 Regards and best wishes,

 Justin Clift


  So, for those doing commits or anoncvs, remember that the 'stable' branch
  requires you to use:
 
  -rREL7_3_STABLE
 
  while the development branch is 'as per normal' ...



 --
 My grandfather once told me that there are two kinds of people: those
 who work and those who take the credit. He told me to try to be in the
 first group; there was less competition there.
- Indira Gandhi



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] Bug in PL/pgSQL GET DIAGNOSTICS?

2002-09-28 Thread Manfred Koizar

On Sat, 28 Sep 2002 13:41:04 -0400 (EDT), Bruce Momjian
[EMAIL PROTECTED] wrote:
Does anyone else have a common rule that would return incorrect results
using the proposed rules?

CREATE VIEW twotables AS
SELECT ... FROM table1 INNER JOIN table2 ON ... ;

CREATE RULE twotables_insert AS -- INSERT rule
ON INSERT TO twotables 
DO INSTEAD (
INSERT INTO table1 VALUES (new.pk, new.col1);
INSERT INTO table2 VALUES (new.pk, new.col2)
); 

CREATE RULE twotables_update AS -- UPDATE rule
ON UPDATE TO twotables 
DO INSTEAD (
UPDATE table1 SET col1 = new.col1 WHERE pk = old.pk;
UPDATE table2 SET col2 = new.col2 WHERE pk = old.pk
); 

CREATE RULE twotables_delete AS -- DELETE rule
ON DELETE TO twotables 
DO INSTEAD (
DELETE FROM table1 WHERE pk = old.pk;
DELETE FROM table2 WHERE pk = old.pk
);

CREATE VIEW visible AS
SELECT ... FROM table3
WHERE deleted = 0;

CREATE RULE visible_delete AS -- DELETE rule
ON DELETE TO visible 
DO INSTEAD 
UPDATE table3
SET deleted = 1
WHERE pk = old.pk;

Servus
 Manfred

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] [PATCHES] Cascaded Column Drop

2002-09-28 Thread Tom Lane

Rod Taylor [EMAIL PROTECTED] writes:
 Leaving a zero-width table would be best, even if it's not so useful.  I
 don't like rejecting a CASCADE as it kinda defeats the purpose of having
 CASCADE.

I did something about this --- as of CVS tip, you can do

regression=# create table foo (f1 int);
CREATE TABLE
regression=# alter table foo drop column f1;
ALTER TABLE
regression=# select * from foo;

--
(0 rows)

I fixed the places that were exposed by the regression tests as not
coping with zero-column tables, but it is likely that there are some
more places that will give odd errors with such a table.  Feel free
to beat on it.

psql seems to have some minor issues with a zero-column select.
You can do this:

regression=# insert into foo default values;
INSERT 720976 1
regression=# select * from foo;

--
(1 row)

regression=# insert into foo default values;
INSERT 720977 1
regression=# select * from foo;

--
(2 rows)

regression=# 

Seems like nothing's being printed for an empty row.

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] 7.2.3?

2002-09-28 Thread Alvaro Herrera

Tom Lane wrote: 

 Alvaro Herrera [EMAIL PROTECTED] writes:
  Memory leaks and such in the PL modules should be backported also.
 
 This is getting out of hand :-(

Yes, I agree with you.

 Major back-port efforts just aren't going to happen.  If they did,
 they would significantly impact our ability to work on 7.3 and up;
 does that seem like a good tradeoff to you?

I understand the issue.  I also understand that it is very nice for
PostgreSQL to advance very quickly, and that requiring backports (and the
subsequent slowdown) is not nice at all.  However, for users it's very
important to have access to the fixes that are present in newer versions...
_without_ the burden of having to upgrade!

I agree with Lamar that upgrading is a very difficult process right now.
Requiring huge amounts of disk space and database downtime to do
dump/restore is in some cases too high a price to pay.  So maybe the
upgrading process itself should be addressed, instead of wasting time on people
trying to stay behind because of the price of that process.

Maybe there is some way of making the life easier for the upgrader.
Let's see, when you upgrade there are basically two things that change:

a) system catalogs
   Going from one version to another requires a number of changes: new
   tuples, deleted tuples, new attributes, deleted attributes.  On-line
   transforming syscatalogs for the three first types seems easy.  The
   last one may be difficult, but it also may not be, I'm not sure.  It
   will require a standalone backend for shared relations and such, but
   hey, it's much cheaper than the process that's required now.

b) on-disk representation of user data
   This is not easy.  Upgrading means changing each filenode from one
   version to another; it requires a tool that understands both (and
   possibly more than two) versions.  It also requires a backend that is able
   to detect that a page is not the version it should be, and either abort or
   convert it on the fly (this last possibility seems very nice).

   Note that only tables should be converted: other objects (indexes)
   should just be rebuilt.

There are other things that change.  For example, dependencies are new
in 7.3; building them without the explicit schema construction seems
difficult, but it's certainly possible.  The implicit/explicit cast
system is also new, but it doesn't depend on user data (except for user
defined datatypes, and that should be done manually by the user), so it
should just be created from scratch.

Is this at least remotely possible to do?

-- 
Alvaro Herrera (alvherre[a]atentus.com)
La fuerza no está en los medios físicos
sino que reside en una voluntad indomable (Gandhi)


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] 7.2.3?

2002-09-28 Thread Stephan Szabo

On Sat, 28 Sep 2002, Bruce Momjian wrote:

 I have seen no discussion on whether to go ahead with a 7.2.3 to add
 several serious fixes Tom has made to the code in the past few days.

 Are we too close to 7.3 for this to be worthwhile?  Certainly there will
 be people distributing 7.2.X for some time as 7.3 stabilizes.

The vacuum thing is big enough that there should be one, since, as always, people
aren't going to move immediately forward with a major version change.



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Upgrade process (was Re: [HACKERS] 7.2.3?)

2002-09-28 Thread Tom Lane

Alvaro Herrera [EMAIL PROTECTED] writes:
 Maybe there is some way of making the life easier for the upgrader.
 Let's see, when you upgrade there are basically two things that change:
 a) system catalogs
 b) on-disk representation of user data
 [much snipped]

Yup.  I see nothing wrong with the pg_upgrade process that we've
previously used for updating the system catalogs, however.  Trying to
do it internally in some way will be harder and more dangerous (ie,
much less reliable) than relying on schema-only dump and restore
followed by moving the physical data.

Updates that change the on-disk representation of user data are much
harder, as you say.  But I think they can be made pretty infrequent.
We've only had two such updates that I know of in Postgres' history:
adding WAL in 7.1 forced some additions to page headers, and now in
7.3 we've changed tuple headers for space-saving reasons, and fixed
some problems with alignment in array data.

pg_upgrade could have worked for the 7.2 cycle, but it wasn't done,
mostly for lack of effort.

Going forward I think we should try to maintain compatibility of on-disk
user data and ensure that pg_upgrade works.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] 7.2.3?

2002-09-28 Thread Lamar Owen

On Saturday 28 September 2002 04:14 pm, Tom Lane wrote:
 7.2 is in maintenance status at this point.  I'm willing to do backports
 for bugs that cause data loss, like this VACUUM/CLOG issue.
 Performance problems are not on the radar screen at all (especially
 not when the putative fixes for them haven't received much of any
 testing, and are barely worthy to be called beta status).

A fix that is beta-quality for a non-serious issue (serious issues being of 
the level of the VACUUM/CLOG issue) is, in my mind at least, not for 
inclusion into a _stable_ release.  Simple fixes (the localtime versus mktime 
fix) might be doable, but might not, depending upon the particular fix, how 
difficult the backport is, etc.  But 7.2 is considered _stable_ -- and I agree 
that this means maintenance mode only.  Only the most trivial or the most 
serious problems should be tackled here.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: Upgrade process (was Re: [HACKERS] 7.2.3?)

2002-09-28 Thread Lamar Owen

On Saturday 28 September 2002 04:57 pm, Tom Lane wrote:
 7.3 we've changed tuple headers for space-saving reasons, and fixed
 some problems with alignment in array data.

 Going forward I think we should try to maintain compatibility of on-disk
 user data and ensure that pg_upgrade works.

This is of course a two-edged sword.

1.) Keeping pg_upgrade working, which depends upon pg_dump working;
2.) Maintaining security fixes for 7.2 for a good period of time to come, 
since migration from 7.2 to 7.3 isn't easy.

If pg_upgrade is going to be the cookie, then let's all try to test the 
cookie.  I'll certainly try to do my part.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: Upgrade process (was Re: [HACKERS] 7.2.3?)

2002-09-28 Thread Tom Lane

Lamar Owen [EMAIL PROTECTED] writes:
 This is of course a two-edged sword.

 1.)   Keeping pg_upgrade working, which depends upon pg_dump working;

... which we have to have anyway, of course ...

 2.)   Maintaining security fixes for 7.2 for a good period of time to come, 
 since migration from 7.2 to 7.3 isn't easy.

True, but I think we'll have to deal with that anyway.  Even if the
physical database upgrade were trivial, people are going to find
application compatibility problems due to schemas and other 7.3 changes.
So we're going to have to expend at least some work on fixing critical
7.2.* problems.  (I just want to keep a tight rein on how much.)

regards, tom lane

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] 7.2.3?

2002-09-28 Thread Justin Clift

Alvaro Herrera wrote:
snip
 I agree with Lamar that upgrading is a very difficult process right now.
 Requiring huge amounts of disk space and database downtime to do
 dump/restore is in some cases too high a price to pay.  So maybe the
 upgrading process itself should be addressed, instead of wasting time on people
 trying to stay behind because of the price of that process.

As a simple for the user approach, would it be
too-difficult-to-bother-with to add to the postmaster an ability to
start up with the data files from the previous version, for it to
recognise an old data format automatically, then for it to do the
conversion process of the old data format to the new one before going
any further?

Sounds like a pain to create initially, but nifty in the end.

:-)

Regards and best wishes,

Justin Clift


snip
 --
 Alvaro Herrera (alvherre[a]atentus.com)
 La fuerza no está en los medios físicos
 sino que reside en una voluntad indomable (Gandhi)

-- 
My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there.
   - Indira Gandhi

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Improving backend startup interlock

2002-09-28 Thread Giles Lean


Tom Lane wrote:

[ discussion of new startup interlock ]

 This is not quite ready for prime time yet, because it's not very
 bulletproof against the scenario where two would-be postmasters are
 starting concurrently.

A solution to this is to require would-be postmasters to obtain an
exclusive lock on a lock file before touching the pid file.  (The lock
file perhaps could be the pid file, but it doesn't have to be.)

Is there some reason that file locking is not acceptable?  Is there
any platform or filesystem supported for use with PostgreSQL which
doesn't have working exclusive file locking?

 A possible answer is to create a second lockfile that only exists
 for the duration of the startup sequence, and use that to ensure
 that only one process is trying this sequence at a time.
 ...
 This reintroduces the same problem
 we're trying to get away from (must rely on kill(PID, 0) to determine
 validity of the lock file), but at least the window of vulnerability is
 much smaller than before.

A lock file locked for the whole time the postmaster is running can be
responsible for preventing multiple postmasters running without
relying on pids.  All that is needed is that the OS drop exclusive
file locks on process exit and that locks not survive across reboots.

The checks of the shared memory segment (number of attachments etc.)
look after orphaned backend processes, per the proposal.
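
To make that concrete, a minimal sketch of the exclusive-lock-file idea using
fcntl() locking (the function name and error handling here are illustrative
only, not actual PostgreSQL code):

#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/*
 * Try to take an exclusive lock on the given lock file.  Returns an open
 * file descriptor (kept open for the postmaster's whole lifetime) or -1
 * if another process already holds the lock.
 */
int
acquire_startup_lock(const char *lockfile)
{
	struct flock lk;
	int			fd = open(lockfile, O_RDWR | O_CREAT, 0600);

	if (fd < 0)
		return -1;

	memset(&lk, 0, sizeof(lk));
	lk.l_type = F_WRLCK;		/* exclusive (write) lock */
	lk.l_whence = SEEK_SET;
	lk.l_start = 0;
	lk.l_len = 0;				/* 0 means "the whole file" */

	/* Non-blocking: fail immediately if some other postmaster holds it. */
	if (fcntl(fd, F_SETLK, &lk) < 0)
	{
		close(fd);
		return -1;
	}

	/*
	 * Do NOT close fd here.  The kernel releases the lock when the holding
	 * process exits, and the lock does not survive a reboot -- which is
	 * exactly the behaviour described above.
	 */
	return fd;
}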

Regards,

Giles

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] Upgrade process (was Re: 7.2.3?)

2002-09-28 Thread Giles Lean


Tom lane wrote:

 True, but I think we'll have to deal with that anyway.  Even if the
 physical database upgrade were trivial, people are going to find
 application compatibility problems due to schemas and other 7.3 changes.

More reasons:

a) learning curve -- I want to use 7.3 and gain some experience with
   7.2.x -> 7.3 migration before rolling out 7.3 to my users.

b) change control and configuration freezes sometimes dictate when
   upgrades may be done.  A 7.2.2 -> 7.2.3 upgrade for bug fixes is
   much less intrusive than an upgrade to 7.3.

 So we're going to have to expend at least some work on fixing critical
 7.2.* problems.  (I just want to keep a tight rein on how much.)

No argument here.  Supporting multiple versions eats resources and
eventually destabilises the earlier releases, so critical fixes only,
please.  New features and non-critical fixes, however minor, are
actually unhelpful.

Since PostgreSQL is open source, anyone who just has to have some
minor new feature backported can do it, or pay for it to be done.
But this doesn't have to affect all users.

Regards,

Giles

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] Bug in PL/pgSQL GET DIAGNOSTICS?

2002-09-28 Thread Bruce Momjian


OK, that is a good example.  It would return the sum of the matching
tags.  You are suggesting here that it would be better to take the
result of the last matching tag command, right?

---

Manfred Koizar wrote:
 On Sat, 28 Sep 2002 13:41:04 -0400 (EDT), Bruce Momjian
 [EMAIL PROTECTED] wrote:
 Does anyone else have a common rule that would return incorrect results
 using the proposed rules?
 
   CREATE VIEW twotables AS
   SELECT ... FROM table1 INNER JOIN table2 ON ... ;
 
   CREATE RULE twotables_insert AS -- INSERT rule
   ON INSERT TO twotables 
   DO INSTEAD (
   INSERT INTO table1 VALUES (new.pk, new.col1);
   INSERT INTO table2 VALUES (new.pk, new.col2)
   ); 
   
   CREATE RULE twotables_update AS -- UPDATE rule
   ON UPDATE TO twotables 
   DO INSTEAD (
   UPDATE table1 SET col1 = new.col1 WHERE pk = old.pk;
   UPDATE table2 SET col2 = new.col2 WHERE pk = old.pk
   ); 
   
   CREATE RULE twotables_delete AS -- DELETE rule
   ON DELETE TO twotables 
   DO INSTEAD (
   DELETE FROM table1 WHERE pk = old.pk;
   DELETE FROM table2 WHERE pk = old.pk
   );
 
   CREATE VIEW visible AS
   SELECT ... FROM table3
   WHERE deleted = 0;
 
   CREATE RULE visible_delete AS -- DELETE rule
   ON DELETE TO visible 
   DO INSTEAD 
   UPDATE table3
   SET deleted = 1
   WHERE pk = old.pk;
 
 Servus
  Manfred
 
 ---(end of broadcast)---
 TIP 5: Have you checked our extensive FAQ?
 
 http://www.postgresql.org/users-lounge/docs/faq.html
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] DROP COLUMN misbehaviour with multiple inheritance

2002-09-28 Thread Alvaro Herrera

On Mon, 23 Sep 2002 09:53:08 -0400,
Tom Lane [EMAIL PROTECTED] wrote:

  You cannot add a column to a table that is inherited by another table
  that has a column with the same name:
 
 Yeah, this is an implementation shortcoming in ALTER ADD COLUMN: if it
 finds an existing column of the same name in a child table, it should
 test whether it's okay to merge the columns (same types, no conflict
 in constraints/defaults, cf CREATE's behavior); if so, it should
 increment the child column's attinhcount instead of failing.

I have this almost ready.  The thing I don't have quite clear yet is
what to do with attislocal.  IMHO it should not be touched in any case,
but Hannu thinks that for symmetry it should be reset in some cases.

Also, what do you mean by conflicts on defaults?  I don't think the
parent should take into consideration what the defaults are for its
children.  Same for constraints.

-- 
Alvaro Herrera (alvherre[a]atentus.com)
Si no sabes adonde vas, es muy probable que acabes en otra parte.

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] 7.2.3?

2002-09-28 Thread Bruce Momjian

Justin Clift wrote:
 Alvaro Herrera wrote:
 snip
  I agree with Lamar that upgrading is a very difficult process right now.
  Requiring huge amounts of disk space and database downtime to do
  dump/restore is in some cases too high a price to pay.  So maybe the
  upgrading process itself should be addressed, instead of wasting time on people
  trying to stay behind because of the price of that process.
 
 As a simple for the user approach, would it be
 too-difficult-to-bother-with to add to the postmaster an ability to
 start up with the data files from the previous version, for it to
 recognise an old data format automatically, then for it to do the
 conversion process of the old data format to the new one before going
 any further?
 
 Sounds like a pain to create initially, but nifty in the end.

Yes, we could, but if we are going to do that, we may as well just
automate the dump/reload.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] Improving backend startup interlock

2002-09-28 Thread Tom Lane

Giles Lean [EMAIL PROTECTED] writes:
 Is there some reason that file locking is not acceptable?  Is there
 any platform or filesystem supported for use with PostgreSQL which
 doesn't have working exclusive file locking?

How would we know?  We have never tried to use such a feature.

For sure I would not trust it on an NFS filesystem.  (Although we
disparage running an NFS-mounted database, people do it anyway.)

regards, tom lane

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] DROP COLUMN misbehaviour with multiple inheritance

2002-09-28 Thread Tom Lane

Alvaro Herrera [EMAIL PROTECTED] writes:
 I have this almost ready.  The thing I don't have quite clear yet is
 what to do with attislocal.  IMHO it should not be touched in any case,
 but Hannu thinks that for symmetry it should be reset in some cases.

My feeling would be to leave it alone in all cases.  If I have

create table p (f1 int);
create table c (f2 text) inherits (p);

I would find it quite surprising if I could destroy c.f2 by adding
and then dropping p.f2.

 Also, what do you mean by conflicts on defaults?  I don't think the
 parent should take into consideration what the defaults are for its
 children.  Same for constraints.

Well, the rules will probably have to be different for this case than
they are when creating a child below an existing parent.  In particular,
if the ADD COLUMN operation is trying to create constraints (including
a simple NOT NULL), I'm inclined to fail rather than merge if the
existing child column does not already have matching constraints.
It would seem surprising to me that creating a parent column in this
way could allow the formerly free-standing child column to suddenly
have constraints it didn't have before.  Also, you'd have to scan the
child rows to see whether they all meet the constraint, which would
be slow.  For example, if you wanted to do

alter table p add column f2 text not null;

in the above example, I think it is reasonable to insist that you first
do

alter table c alter column f2 set not null;

to make it perfectly clear all 'round that you are accepting an
alteration in the existing column.

regards, tom lane

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: Default privileges for new databases (was Re: [HACKERS] Can't import

2002-09-28 Thread Bruce Momjian


Can someone tell me where we are on this;  exactly what writability do
we have in 7.3?

---

Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  Have we addressed this?  I don't think so.
 
 No, it's not done yet.  My inclination is
 
 * Template1 has temp table creation and schema creation disabled
 (disallowed to world) by default.
 
 * CREATE DATABASE sets up new databases with temp table creation allowed
 to world and schema creation allowed to DB owner only (regardless of
 what the template database had).  The owner can adjust this default
 afterwards if he doesn't like it.
 
 It would be nice to lock down the public schema in template1 too, but I
 see no good way to do that, because CREATE DATABASE can't readily fiddle
 with protections *inside* the database --- the only games we can play
 are with the protections stored in the pg_database row itself.  So
 public's permissions are going to be inherited from the template
 database, and that means template1's public has to be writable.
 
 Objections anyone?
 
   regards, tom lane
 
 ---(end of broadcast)---
 TIP 2: you can get off all lists at once with the unregister command
 (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] C vs. C++ contributions

2002-09-28 Thread Bruce Momjian

Marc Lavergne wrote:
 That's quite a bite to chew given my level of experience with 
 PostgreSQL internals! However, I will keep it in mind, and whatever I do 
 will be fully abstracted (already is, actually) so that it should just be a 
 matter of snapping it into place when 7.4 forks. Realistically, I can't 
 comment from an informed position on this yet. When I get a chance to 
 look into what is happening in 7.3 and the 7.4 roadmap, I will post back 
 if I feel I can provide something of substance.

FYI, we just split off 7.4 so we are ready to accept 7.4 patches.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] [PATCHES] Cascaded Column Drop

2002-09-28 Thread Rod Taylor

On Sat, 2002-09-28 at 16:38, Tom Lane wrote:
 Rod Taylor [EMAIL PROTECTED] writes:
  Leaving a zero-width table would be best, even if it's not so useful.  I
  don't like rejecting a CASCADE as it kinda defeats the purpose of having
  CASCADE.
 
 I did something about this --- as of CVS tip, you can do
 
 regression=# create table foo (f1 int);
 CREATE TABLE
 regression=# alter table foo drop column f1;
 ALTER TABLE
 regression=# select * from foo;

Which of course would dump as 'create table foo ();'.

I don't think relcache would like a table without any columns, which is
why the above is rejected.


Anyway, should pg_dump ignore the table entirely?  Or do we try to allow
create table () without any attributes?

-- 
  Rod Taylor


---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] 7.2.3?

2002-09-28 Thread Lamar Owen

On Saturday 28 September 2002 09:23 pm, Bruce Momjian wrote:
 Justin Clift wrote:
  Alvaro Herrera wrote:
   I agree with Lamar that upgrading is a very difficult process right

  As a simple for the user approach, would it be
  too-difficult-to-bother-with to add to the postmaster an ability to
  start up with the data files from the previous version, for it to
  recognise an old data format automatically, then for it to do the
  conversion process of the old data format to the new one before going
  any further?

  Sounds like a pain to create initially, but nifty in the end.

 Yes, we could, but if we are going to do that, we may as well just
 automate the dump/reload.

Automating the dump/reload is fraught with pitfalls.  Been there; done that; 
got the t-shirt.  The dump from the old version many times requires 
hand-editing for cases where the complexity is above a certain threshold.  
The 7.2-7.3 threshold is just a little lower than normal.  

Our whole approach to the system catalog is wrong for what Justin (and many 
others) would like to see.

With MySQL, for instance, one can migrate on a table-by-table basis from one 
table type to another.  As older table types are continuously supported, one 
can upgrade each table in turn as one needs the feature set supported by that 
table type.

Yes, I know that doesn't fit our existing model of 'all in one' system 
catalogs.  And the solution doesn't present itself readily -- but one day 
someone will see the way to do this, and it will be good.  It _will_ involve 
refactoring the system catalog schema so that user 'system catalog' metadata 
and system 'system catalog' data aren't codependent.  A more modular data 
storage approach at a level above the existing broken storage manager 
modularity will result, and things will be different.

However, the number of messages on this subject has increased; one day it will 
become an important feature worthy of core developer attention.  That will be 
a happy day for me, as well as many others.  I have not the time to do it 
myself; but I can be a gadfly, at least.  In the meantime we have pg_upgrade 
for the future 7.3 -> 7.4 upgrade.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: Default privileges for new databases (was Re: [HACKERS] Can't import large objects in most recent cvs)

2002-09-28 Thread Tom Lane

Bruce Momjian [EMAIL PROTECTED] writes:
 Can someone tell me where we are on this;  exactly what writability do
 we have in 7.3?

The current code implements what I suggested in that note, viz:
default permissions for new databases are
owner = all rights (ie, create schema and create temp)
public = create temp right only
but template1 and template0 are set to
owner (postgres user) = all rights
public = no rights
by initdb.

Also, the public schema within template1 is empty but writable by
public.  This is annoying, but at least it's easy to fix if you
mess up --- you can DROP SCHEMA public CASCADE and then recreate
the schema.  (Or not, if you don't want to.)

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] [PATCHES] Cascaded Column Drop

2002-09-28 Thread Tom Lane

Rod Taylor [EMAIL PROTECTED] writes:
 I did something about this --- as of CVS tip, you can do
 
 regression=# create table foo (f1 int);
 CREATE TABLE
 regression=# alter table foo drop column f1;
 ALTER TABLE
 regression=# select * from foo;

 Which of course would dump as 'create table foo ();'.

True.  I didn't say that everything would be happy with it ;-).  I think
that a zero-column table is only useful as a transient state, and so I'm
happy as long as the backend doesn't core dump.

 I don't think relcache would like a table without any columns, which is
 why the above is rejected.

Relcache doesn't seem to have a problem with it.

 Anyway, should pg_dump ignore the table entirely?  Or do we try to allow
 create table () without any attributes?

I feel no strong need to do either.  But it likely would only take
removal of this error check:

regression=# create table foo ();
ERROR:  DefineRelation: please inherit from a relation or define an attribute

at least as far as the backend goes.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: Default privileges for new databases (was Re: [HACKERS] Can't import

2002-09-28 Thread Bruce Momjian

Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  Can someone tell me where we are on this;  exactly what writability do
  we have in 7.3?
 
 The current code implements what I suggested in that note, viz:
 default permissions for new databases are
   owner = all rights (ie, create schema and create temp)
   public = create temp right only
 but template1 and template0 are set to
   owner (postgres user) = all rights
   public = no rights
 by initdb.
 
 Also, the public schema within template1 is empty but writable by
 public.  This is annoying, but at least it's easy to fix if you
 mess up --- you can DROP SCHEMA public CASCADE and then recreate
 the schema.  (Or not, if you don't want to.)

OK, yes, this is what I thought, that public in all databases is
world-writable, but you can control that by dropping and recreating the
public schema, or altering the schema, right?

How did you get temp schemas non-world writable in template1 but not in
the databases, or am I confused?

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



Re: [HACKERS] [PATCHES] Cascaded Column Drop

2002-09-28 Thread Rod Taylor


 regression=# create table foo ();
 ERROR:  DefineRelation: please inherit from a relation or define an attribute
 
 at least as far as the backend goes.

Found in relcache.c earlier:
AssertArg(natts > 0);

Didn't look too hard to see what it protects, because it's more effort
than it's worth.

-- 
  Rod Taylor


---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



Re: [HACKERS] [PATCHES] Cascaded Column Drop

2002-09-28 Thread Tom Lane

Rod Taylor [EMAIL PROTECTED] writes:
 Found in relcache.c earlier:
   AssertArg(natts > 0);

Okay, one other place to change ... there are probably more such ...

regards, tom lane

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] Improving backend startup interlock

2002-09-28 Thread Giles Lean


Tom Lane wrote:

 Giles Lean [EMAIL PROTECTED] writes:
  Is there some reason that file locking is not acceptable?  Is there
  any platform or filesystem supported for use with PostgreSQL which
  doesn't have working exclusive file locking?
 
 How would we know?  We have never tried to use such a feature.

I asked because I've not been following this project long enough to
know if it had been tried and rejected previously.  Newcomers being
prone to making silly suggestions and all that. :-)

 For sure I would not trust it on an NFS filesystem.  (Although we
 disparage running an NFS-mounted database, people do it anyway.)

<scratches head>

I can't work out if that's an objection or not.

I'm certainly no fan of NFS locking, but if someone trusts their NFS
client and server implementations enough to put their data on them, they
might as well trust it to get a single lock file for startup right
too.  IMHO.  Your mileage may vary.

Regards,

Giles

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] NUMERIC's transcendental functions

2002-09-28 Thread Bruce Momjian


Is this an open item?

---

Tom Lane wrote:
 Jan Wieck [EMAIL PROTECTED] writes:
  One problem is, that division already has an inherently inexact
  result. Do you intend to rip that out too while at it? (Just
  kidding)
 
 No, but that too is now delivering less precision than it used to:
 
 regression=# select 10.1/7.0;
?column?
 --
  1.4428571429
 (1 row)
 
 versus 1.44285714285714 in prior releases.
 
  Proposal #2.667 would be to have a GUC variable for the default
  precision.
 
 Perhaps, but I'd be satisfied if the default precision were at least
 16 digits.  Again, the point is not to have any apparent regression
 from 7.2.
 
   regards, tom lane
 
 ---(end of broadcast)---
 TIP 5: Have you checked our extensive FAQ?
 
 http://www.postgresql.org/users-lounge/docs/faq.html
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] NUMERIC's transcendental functions

2002-09-28 Thread Tom Lane

Bruce Momjian [EMAIL PROTECTED] writes:
 Is this an open item?

Yes.  (Fooling with it right now, in fact, in a desultory way ...)

regards, tom lane

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] NUMERIC's transcendental functions

2002-09-28 Thread Bruce Momjian

Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  Is this an open item?
 
 Yes.  (Fooling with it right now, in fact, in a desultory way ...)

OK, added.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])



[HACKERS] CVS split problems

2002-09-28 Thread Bruce Momjian


Marc, I am still seeing these errors.  Would you please fix it?

---

Bruce Momjian wrote:
 I am getting errors when doing a checkout, related to Marc's splitting
 up the CVS tree into modules:
   
   C pgsql/contrib/earthdistance/Makefile
   cvs checkout: move away
   pgsql/contrib/earthdistance/README.earthdistance; it is in the way
   C pgsql/contrib/earthdistance/README.earthdistance
   cvs checkout: move away pgsql/contrib/earthdistance/earthdistance.c; it
   is in the way
 
 I get this from a CVS checkout every time.  Can someone fix it?
 
 -- 
   Bruce Momjian|  http://candle.pha.pa.us
   [EMAIL PROTECTED]   |  (610) 359-1001
   +  If your life is a hard drive, |  13 Roberts Road
   +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073
 
 ---(end of broadcast)---
 TIP 2: you can get off all lists at once with the unregister command
 (send unregister YourEmailAddressHere to [EMAIL PROTECTED])
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] 7.2.3?

2002-09-28 Thread Alvaro Herrera

Bruce Momjian wrote: 

 Justin Clift wrote:
  Alvaro Herrera wrote:

  As a simple for the user approach, would it be
  too-difficult-to-bother-with to add to the postmaster an ability to
  start up with the data files from the previous version, for it to
  recognise an old data format automatically, then for it to do the
  conversion process of the old data format to the new one before going
  any further?
 
 Yes, we could, but if we are going to do that, we may as well just
 automate the dump/reload.

I don't think that's an acceptable solution.  It requires too much free
disk space and too much time.  On-line upgrading, meaning altering the
databases on a table-by-table basis (or even page-by-page) solves both
problems (binary conversion sure takes less than converting to text
representation and parsing it to binary again).

I think a converting postmaster would be a waste, because it's unneeded
functionality 99.999% of the time.  I'm leaning towards an external
program doing the conversion, and the backend just aborting if it finds
old or in-conversion data.  The converter should be able to detect that
it has aborted and resume conversion.

What would that converter need:
- the old system catalog  (including user defined data)
- the new system catalog  (ditto, including the schema)
- the storage manager subsystem

I think that should be enough for converting table files.  I'd like to
experiment with something like this when I have some free time.  Maybe
next year...

-- 
Alvaro Herrera (alvherre[a]atentus.com)
I think my standards have lowered enough that now I think 'good design'
is when the page doesn't irritate the living fuck out of me. (JWZ)


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] 7.2.3?

2002-09-28 Thread Tom Lane

Alvaro Herrera [EMAIL PROTECTED] writes:
 What would that converter need:
 [snip]
 I think that should be enough for converting table files.  I'd like to
 experiment with something like this when I have some free time.  Maybe
 next year...

It's difficult to say anything convincing on this topic without a
specific conversion requirement in mind.

Localized conversions like 7.3's tuple header change could be done on a
page-by-page basis as you suggest.  (In fact, one reason I insisted on
putting in a page header version number was to leave the door open for
such a converter, if someone wants to do one.)
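
As a rough illustration of that page-by-page idea (the field and constant
names below are hypothetical sketches, not the actual 7.3 definitions):

#include <stdint.h>

#define CURRENT_PAGE_LAYOUT_VERSION 1	/* hypothetical value */

typedef struct SketchPageHeader
{
	/* ... LSN, lower/upper/special pointers, etc. ... */
	uint16_t	pd_pagesize_version;	/* page size and layout version packed together */
} SketchPageHeader;

/* In this sketch the low byte carries the layout version. */
static int
page_needs_conversion(const SketchPageHeader *page)
{
	return (page->pd_pagesize_version & 0x00FF) != CURRENT_PAGE_LAYOUT_VERSION;
}

/*
 * A page-at-a-time converter would walk each table file, rewrite any page
 * whose version is old (for example, expanding 7.2-style tuple headers into
 * the 7.3 layout), stamp the new version, and write the page back.
 */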

But one likely future format change for user data is combining parent
and child tables into a single physical table, per recent inheritance
thread.  (I'm not yet convinced that that's feasible or desirable,
I'm just using it as an example of a possible conversion requirement.)
You can't very well do that page-by-page; it'd require a completely
different approach.

regards, tom lane

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



Re: [HACKERS] v7.3 Branched ...

2002-09-28 Thread Justin Clift

Marc G. Fournier wrote:
 
  Not going to happen ... there are oodles of not-big-but-useful pieces
  of software out there that we could include ... but the point of GBorg is
  you download the main repository, and then you go to GBorg to look for the
  add-ons you might like to have ...

Ok.  Wonder if it's worth someone creating a "PostgreSQL Powertools"
type of package, one that includes in one download all of these nifty tools
(pg_autotune, oid2name, etc) that would be beneficial to have compiled
and already available.  Kind of like contrib is (oid2name is already
there I know), but so people don't have to go hunting all over GBorg to
find the bits that they'd want.

:-)

Regards and best wishes,

Justin Clift

-- 
My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there.
   - Indira Gandhi

---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly



[HACKERS] pg_config : postgresql.conf adjustments?

2002-09-28 Thread Justin Clift

Hi all,

Would it be beneficial for us to extend pg_config to update the
postgresql.conf file?

i.e.

pg_config --sort_mem 16384 --shared_buffers 800

pg_config -d /some/datadir --sort_mem 16384 --shared_buffers 800

etc?

Not sure if it should trigger a restart of the postmaster, etc., but the
concept sounds useful.
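
A minimal sketch of what such a tool might do internally -- rewrite a single
setting in postgresql.conf and leave the rest untouched (file handling and
option names here are purely illustrative):

#include <stdio.h>
#include <string.h>

/* usage: confset <option> <value>   e.g.  confset sort_mem 16384 */
int
main(int argc, char **argv)
{
	const char *conf = "postgresql.conf";		/* assumed location */
	const char *newconf = "postgresql.conf.new";
	char		line[1024];
	int			replaced = 0;
	FILE	   *in,
			   *out;

	if (argc != 3)
	{
		fprintf(stderr, "usage: confset <option> <value>\n");
		return 1;
	}
	in = fopen(conf, "r");
	out = fopen(newconf, "w");
	if (in == NULL || out == NULL)
	{
		perror("confset");
		return 1;
	}
	while (fgets(line, sizeof(line), in))
	{
		/* naive match: a line that starts with the option name */
		if (strncmp(line, argv[1], strlen(argv[1])) == 0)
		{
			fprintf(out, "%s = %s\n", argv[1], argv[2]);
			replaced = 1;
		}
		else
			fputs(line, out);
	}
	if (!replaced)
		fprintf(out, "%s = %s\n", argv[1], argv[2]);	/* append if absent */
	fclose(in);
	fclose(out);
	/* a real tool would rename() atomically and perhaps SIGHUP the postmaster */
	return rename(newconf, conf) != 0;
}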

:-)

Regards and best wishes,

Justin Clift

-- 
My grandfather once told me that there are two kinds of people: those
who work and those who take the credit. He told me to try to be in the
first group; there was less competition there.
   - Indira Gandhi

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] fix for client utils compilation under win32

2002-09-28 Thread Bruce Momjian


Your patch has been added to the PostgreSQL unapplied patches list at:

http://candle.pha.pa.us/cgi-bin/pgpatches

I will try to apply it within the next 48 hours.

---


Joe Conway wrote:
 Tom Lane wrote:
  It might work to measure time since the start of the whole process, or
  until the timeout target, rather than accumulating adjustments to the
  remains count each time through.  In other words something like
  
  at start: targettime = time() + specified-timeout
  
  each time we are about to wait: set select timeout to
  targettime - time().
  
  This bounds the error at 1 second which is probably good enough (you
  might want to add 1 to targettime to ensure the error is in the
  conservative direction of not timing out too soon).
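
In code form, that bounded-timeout idea looks roughly like this (an
illustrative sketch only; the real client code is structured differently):

#include <sys/select.h>
#include <time.h>

/* Wait for fd to become readable, giving up once 'timeout_secs' seconds
 * have elapsed from the time this function is entered. */
static int
wait_until_readable(int fd, int timeout_secs)
{
	/* +1 keeps the error on the conservative side (never time out early) */
	time_t		targettime = time(NULL) + timeout_secs + 1;

	for (;;)
	{
		fd_set		rfds;
		struct timeval tv;
		time_t		now = time(NULL);
		int			rc;

		if (now >= targettime)
			return 0;			/* timed out */

		FD_ZERO(&rfds);
		FD_SET(fd, &rfds);
		tv.tv_sec = targettime - now;	/* recomputed remaining time */
		tv.tv_usec = 0;

		rc = select(fd + 1, &rfds, NULL, NULL, &tv);
		if (rc > 0)
			return 1;			/* readable */
		if (rc < 0)
			return -1;			/* error; real code would retry on EINTR */
		/* rc == 0: select() expired; loop re-checks targettime and exits */
	}
}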
  
 
 The attached patch fixes a number of issues related to compiling the client 
 utilities (libpq.dll and psql.exe) for win32 (missing defines, adjustments to 
 includes, pedantic casting, non-existent functions) per:
http://developer.postgresql.org/docs/postgres/install-win32.html.
 
 It compiles cleanly under Windows 2000 using Visual Studio .net. Also compiles 
 clean and passes all regression tests (regular and contrib) under Linux.
 
 In addition to a review by the usual suspects, it would be very desirable for 
 someone well versed in the peculiarities of win32 to take a look.
 
 If there are no objections, please commit.
 
 Thanks,
 
 Joe

 Index: src/backend/libpq/md5.c
 ===
 RCS file: /opt/src/cvs/pgsql-server/src/backend/libpq/md5.c,v
 retrieving revision 1.18
 diff -c -r1.18 md5.c
 *** src/backend/libpq/md5.c   4 Sep 2002 20:31:19 -   1.18
 --- src/backend/libpq/md5.c   26 Sep 2002 17:56:11 -
 ***
 *** 26,35 
*  can be compiled stand-alone.
*/
   
  ! #ifndef MD5_ODBC
    #include "postgres.h"
    #include "libpq/crypt.h"
  ! #else
    #include "md5.h"
    #endif
   
 --- 26,44 
*  can be compiled stand-alone.
*/
   
  ! #if ! defined(MD5_ODBC) && ! defined(FRONTEND)
    #include "postgres.h"
    #include "libpq/crypt.h"
  ! #endif
  ! 
  ! #ifdef FRONTEND
  ! #include "postgres_fe.h"
  ! #ifndef WIN32
  ! #include "libpq/crypt.h"
  ! #endif /* WIN32 */
  ! #endif /* FRONTEND */
  ! 
  ! #ifdef MD5_ODBC
    #include "md5.h"
    #endif
   
 Index: src/bin/psql/command.c
 ===
 RCS file: /opt/src/cvs/pgsql-server/src/bin/psql/command.c,v
 retrieving revision 1.81
 diff -c -r1.81 command.c
 *** src/bin/psql/command.c22 Sep 2002 20:57:21 -  1.81
 --- src/bin/psql/command.c26 Sep 2002 18:18:17 -
 ***
 *** 23,28 
 --- 23,29 
    #include "win32.h"
    #include <io.h>
    #include <fcntl.h>
  + #include <direct.h>
   #endif
   
    #include "libpq-fe.h"
 ***
 *** 1163,1169 
   return NULL;
   }
   
  ! if (i < token_len - 1)
   return_val[i + 1] = '\0';
   }
   
 --- 1164,1170 
   return NULL;
   }
   
  ! if (i < (int) token_len - 1)
   return_val[i + 1] = '\0';
   }
   
 ***
 *** 1240,1246 
   exit(EXIT_FAILURE);
   }
   
  ! for (p = source; p - source < len && *p; p += PQmblen(p, pset.encoding))
   {
   if (esc)
   {
 --- 1241,1247 
   exit(EXIT_FAILURE);
   }
   
  ! for (p = source; p - source < (int) len && *p; p += PQmblen(p, pset.encoding))
   {
   if (esc)
   {
 ***
 *** 1278,1284 
   char   *end;
   
    l = strtol(p, &end, 0);
 ! c = l;
   p = end - 1;
   break;
   }
 --- 1279,1285 
   char   *end;
   
    l = strtol(p, &end, 0);
 ! c = (char) l;
   p = end - 1;
   break;
   }
 Index: src/bin/psql/common.c
 ===
 RCS file: /opt/src/cvs/pgsql-server/src/bin/psql/common.c,v
 retrieving revision 1.45
 diff -c -r1.45 common.c
 *** src/bin/psql/common.c 14 Sep 2002 19:46:01 -  1.45
 --- src/bin/psql/common.c 26 

Re: [HACKERS] making use of large TLB pages

2002-09-28 Thread Bruce Momjian


I haven't been following this thread.  Can someone answer:

Is TLB Linux-only?
Why use it and non SysV memory?
Is it a lot of code?

---

Tom Lane wrote:
 Neil Conway [EMAIL PROTECTED] writes:
  If we used a key that would remain the same between runs of the
  postmaster, this should ensure that there isn't a possibility of two
  independant sets of backends operating on the same data dir. The most
  logical way to do this IMHO would be to just hash the data dir, but I
  suppose the current method of using the port number should work as
  well.
 
 You should stick as closely as possible to the key logic currently used
 for SysV shmem keys.  That logic is intended to cope with the case where
 someone else is already using the key# that we initially generate, as
 well as the case where we discover a collision with a pre-existing
 backend set.  (We tell the difference by looking for a magic number at
 the start of the shmem segment.)
 
 Note that we do not assume the key is the same on each run; that's why
 we store it in postmaster.pid.
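
For readers not familiar with that code, a simplified sketch of the
key-collision logic (names and the magic value are illustrative; the real
logic lives in the port shared-memory code mentioned below and differs in
detail):

#include <sys/types.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define SKETCH_SHMEM_MAGIC 679834894	/* illustrative magic number */

typedef struct SketchShmemHeader
{
	long		magic;			/* must match SKETCH_SHMEM_MAGIC */
	/* ... creator pid, segment size, etc. ... */
} SketchShmemHeader;

/* Try successive keys until a segment can be created, or until we find a
 * segment carrying our magic number (a possibly still-live postmaster). */
static int
sketch_create_segment(key_t key, size_t size)
{
	for (;;)
	{
		int			shmid = shmget(key, size, IPC_CREAT | IPC_EXCL | 0600);

		if (shmid >= 0)
			return shmid;		/* fresh segment under this key */

		/* Key in use: attach read-only and look for the magic number. */
		shmid = shmget(key, 0, 0);
		if (shmid >= 0)
		{
			SketchShmemHeader *hdr =
			(SketchShmemHeader *) shmat(shmid, NULL, SHM_RDONLY);

			if (hdr != (SketchShmemHeader *) -1)
			{
				int			ours = (hdr->magic == SKETCH_SHMEM_MAGIC);

				shmdt(hdr);
				if (ours)
					return -1;	/* existing postmaster segment: caller must
								 * decide whether it is safe to recycle */
			}
		}
		key++;					/* someone else owns this key: try the next */
	}
}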
 
  (1) call sys_alloc_hugepages() without IPC_EXCL. If it returns
  an error, we're in the clear: there's no page matching
  that key. If it returns a pointer to a previously existing
  segment, panic: it is very likely that there are some
  orphaned backends still active.
 
 s/panic/and the PG magic number appears in the segment header, panic/
 
  - if we're compiling on a Linux system but the kernel headers
don't define the syscalls we need, use some reasonable
defaults (e.g. the syscall numbers for the current hugepage
syscalls in Linux 2.5)
 
 I think this is overkill, and quite possibly dangerous.  If we don't see
 the symbols then don't try to compile the code.
 
 On the whole it seems that this allows a very nearly one-to-one mapping
 to the existing SysV functionality.  We don't have the number of
 connected processes syscall, perhaps, but we don't need it: if a
 hugepages segment exists we can assume the number of connected processes
 is greater than 0, and that's all we really need to know.
 
 I think it's okay to stuff this support into the existing
 port/sysv_shmem.c file, rather than make a separate file (particularly
 given your point that we have to be able to fall back to SysV calls at
 runtime).  I'd suggest reorganizing the code in that file slightly to
 separate the actual syscalls from the controlling logic in
 PGSharedMemoryCreate().  Probably also will have to extend the API for
 PGSharedMemoryIsInUse() and RecordSharedMemoryInLockFile() to allow
 three fields to be recorded in postmaster.pid, not two --- you'll want
 a boolean indicating whether the stored key is for a SysV or hugepage
 segment.
 
   regards, tom lane
 
 ---(end of broadcast)---
 TIP 4: Don't 'kill -9' the postmaster
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]



[HACKERS] Do we want a CVS branch now?

2002-09-28 Thread Bruce Momjian


[ I am starting to change subject headings to make things easier for
people.]

I don't think we want a branch for 7.4 yet.  We still have lots of open
issues and the branch will require double-patching.

Marc, I know we said branch after beta2 but I think we need another week
or two before we can start using that branch effectively.  Even if we
started using it -- for example, to add PITR -- the code would drift so much that
the double-patching would start to fail when applied.

Can the branch be undone, or can we not use it and just apply a
mega-patch later to make it match HEAD?

---

Marc G. Fournier wrote:
 
 As was previously discussed (and now that I'm mostly back from the dead
 ... damn colds) I've just branched off REL7_3_STABLE ... all future beta's
 will be made based off of that branch, so that development may resume on
 the main branch ...
 
 So, for those doing commits or anoncvs, remember that the 'stable' branch
 requires you to use:
 
   -rREL7_3_STABLE
 
 while the development branch is 'as per normal' ...
 
 
 
 ---(end of broadcast)---
 TIP 5: Have you checked our extensive FAQ?
 
 http://www.postgresql.org/users-lounge/docs/faq.html
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 359-1001
  +  If your life is a hard drive, |  13 Roberts Road
  +  Christ can be your backup.|  Newtown Square, Pennsylvania 19073

---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org



Re: [HACKERS] making use of large TLB pages

2002-09-28 Thread Neil Conway

Bruce Momjian [EMAIL PROTECTED] writes:
   Is TLB Linux-only?

Well, the TLB is a feature of the CPU, so no. Many modern processors
support large TLB pages in some fashion.

However, the specific API for using large TLB pages differs between
operating systems. The API I'm planning to implement is the one
provided by recent versions of Linux (2.5.38+).

I've only looked briefly at enabling the usage of large pages on other
operating systems. On Solaris, we already use large pages (due to
using Intimate Shared Memory). On HPUX, you apparently need to call
chatr on the executable for it to use large pages. AFAIK the BSDs
don't support large pages for user-land apps -- if I'm incorrect, let
me know.

   Why use it and non SysV memory?

It's faster, at least in theory. I posted these links at the start of
the thread:

http://lwn.net/Articles/6535/
http://lwn.net/Articles/10293/

   Is it a lot of code?

I haven't implemented it yet, so I'm not sure. However, I don't think
it will be a lot of code.

Cheers,

Neil

-- 
Neil Conway [EMAIL PROTECTED] || PGP Key ID: DB3C29FC


---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org