Re: [HACKERS] (A) native Windows port

2002-07-03 Thread Lamar Owen

On Tuesday 02 July 2002 03:14 pm, Jan Wieck wrote:
 Lamar Owen wrote:
  [...]
  Martin O has come up with a 'pg_fsck' utility that, IMHO, holds a great
  deal of promise for seamless binary 'in place' upgrading.  He has been
  able to write code to read multiple versions' database structures --
  proving that it CAN be done.

 Unfortunately it's not the on-disk binary format of files that causes
 the big problems. Our dump/initdb/restore sequence is also the solution
 for system catalog changes.

Hmmm.  They get in there via the bki interface, right?  Is there an OID issue 
with these?  Could differential BKI files be possible, with known system 
catalog changes that can be applied via a 'patchdb' utility?  I know pretty 
much how pg_upgrade is doing things now -- and, frankly, it's a little bit of 
a kludge.

Yes, I do understand the things a dump/restore does at a somewhat detailed 
level.  I know the restore repopulates the entries in the system catalogs for 
the restored data, etc, etc.

Currently dump/restore handles the catalog changes.  But by what other means 
could we upgrade the system catalog in place?

Our very extensibility is our weakness for upgrades.  Can it be worked around?  
Anyone have any ideas?

Improving pg_upgrade may be the ticket -- but if the on-disk binary format 
changes (like it has before), then something will have to do the binary 
format translation -- something like pg_fsck. 

Incidentally, pg_fsck, or a program like it, should be in the core 
distribution.  Maybe not named pg_fsck, as our database isn't a filesystem, 
but pg_dbck, or pg_dbcheck, or pg_dbfix, or similar.  Although pg_fsck is 
more of a pg_dbdump.

I've seen too many people bitten by upgrades gone awry.  The more we can do in 
that regard, the better.

And the Windows user will likely demand it.  I never thought I'd be grateful 
for a Win32 native PostgreSQL port... :-)
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11



---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org





Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Bruce Momjian

Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  I don't see a huge value to using shared memory.   Once we get
  auto-vacuum, pg_listener will be fine,
 
 No it won't.  The performance of notify is *always* going to suck
 as long as it depends on going through a table.  This is particularly
 true given the lack of any effective way to index pg_listener; the
 more notifications you feed through, the more dead rows there are
 with the same key...

Why can't we do efficient indexing, or clear out the table?  I don't
remember.

  and shared memory like SI is just
 too hard to get working reliably because of all the backends
  reading/writing in there.
 
 A curious statement considering that PG depends critically on SI
 working.  This is a solved problem.

My point is that SI was buggy for years until we found all the bugs, so
yea, it is a solved problem, but solved with difficulty.

Do we want to add another SI-type capability that could be as difficult
to get working properly, or will the notify piggyback on the existing SI
code?  If the latter, that would be fine with me, but we still have the
overflow queue problem.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026








Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Jeff Davis

On Tuesday 02 July 2002 06:03 pm, Bruce Momjian wrote:
 Let me tell you what would be really interesting.  If we didn't report
 the pid of the notifying process and we didn't allow arbitrary strings
 for notify (just pg_class relation names), we could just add a counter
 to pg_class that is updated for every notify.  If a backend is
 listening, it remembers the counter at listen time, and on every commit
 checks the pg_class counter to see if it has incremented.  That way,
 there is no queue, no shared memory, and there is no scanning. You just
 pull up the cache entry for pg_class and look at the counter.

 One problem is that pg_class would be updated more frequently.  Anyway,
 just an idea.

I think that currently a lot of people use select() (after all, it's mentioned 
in the docs) in the frontend to determine when a notify comes into a 
listening backend. If the backend only checks on commit, and the backend is 
largely idle except for notify processing, might it be a while before the 
frontend realizes that a notify was sent?
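The select()-based wait Jeff refers to looks roughly like this; a pipe stands in for the connection socket here so the sketch is self-contained, but with libpq one would select() on PQsocket(conn) and then call PQconsumeInput()/PQnotifies():

```c
/* Sketch of the frontend wait-for-notify pattern.  With a real libpq
 * connection the fd would be PQsocket(conn). */
#include <sys/select.h>
#include <unistd.h>

/* Block until fd becomes readable or timeout_secs elapses.
 * Returns 1 if readable (data/notify waiting), 0 on timeout. */
int wait_for_notify(int fd, int timeout_secs)
{
    fd_set readfds;
    struct timeval tv = { timeout_secs, 0 };

    FD_ZERO(&readfds);
    FD_SET(fd, &readfds);
    return select(fd + 1, &readfds, NULL, NULL, &tv) > 0 ? 1 : 0;
}
```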

Regards,
Jeff




 ---

 Tom Lane wrote:
  Bruce Momjian [EMAIL PROTECTED] writes:
   Is disk i/o a real performance
   penalty for notify, and is performance a huge issue for notify anyway,
 
  Yes, and yes.  I have used NOTIFY in production applications, and I know
  that performance is an issue.
 
   The queue limit problem is a valid argument, but it's the only valid
   complaint IMHO; and it seems a reasonable tradeoff to make for the
   other advantages.
 
  BTW, it occurs to me that as long as we make this an independent message
  buffer used only for NOTIFY (and *not* try to merge it with SI), we
  don't have to put up with overrun-reset behavior.  The overrun reset
  approach is useful for SI because there are only limited times when
  we are prepared to handle SI notification in the backend work cycle.
  However, I think a self-contained NOTIFY mechanism could be much more
  flexible about when it will remove messages from the shared buffer.
  Consider this:
 
  1. To send NOTIFY: grab write lock on shared-memory circular buffer.
  If enough space, insert message, release lock, send signal, done.
  If not enough space, release lock, send signal, sleep some small
  amount of time, and then try again.  (Hard failure would occur only
  if the proposed message size exceeds the buffer size; as long as we
  make the buffer size a parameter, this is the DBA's fault not ours.)
 
  2. On receipt of signal: grab read lock on shared-memory circular
  buffer, copy all data up to write pointer into private memory,
  advance my (per-process) read pointer, release lock.  This would be
  safe to do pretty much anywhere we're allowed to malloc more space,
  so it could be done say at the same points where we check for cancel
  interrupts.  Therefore, the expected time before the shared buffer
  is emptied after a signal is pretty small.
 
  In this design, if someone sits in a transaction for a long time,
  there is no risk of shared memory overflow; that backend's private
  memory for not-yet-reported NOTIFYs could grow large, but that's
  his problem.  (We could avoid unnecessary growth by not storing
  messages that don't correspond to active LISTENs for that backend.
  Indeed, a backend with no active LISTENs could be left out of the
  circular buffer participation list altogether.)
 
  We'd need to separate this processing from the processing that's used to
  force SI queue reading (dz's old patch), so we'd need one more signal
  code than we use now.  But we do have SIGUSR1 available.
 
  regards, tom lane









Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Tom Lane

Bruce Momjian [EMAIL PROTECTED] writes:
 I don't see a huge value to using shared memory.   Once we get
 auto-vacuum, pg_listener will be fine,

No it won't.  The performance of notify is *always* going to suck
as long as it depends on going through a table.  This is particularly
true given the lack of any effective way to index pg_listener; the
more notifications you feed through, the more dead rows there are
with the same key...

 and shared memory like SI is just
 too hard to get working reliably because of all the backends
 reading/writing in there.

A curious statement considering that PG depends critically on SI
working.  This is a solved problem.

regards, tom lane








Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Bruce Momjian

Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  Of course, a shared memory system probably is going to either do it
  sequentially or have its own index issues, so I don't see a huge
  advantage to going to shared memory, and I do see extra code and a queue
  limit.
 
 Disk I/O vs. no disk I/O isn't a huge advantage?  Come now.

My assumption is that it throws to disk as backing store, which seems
better to me than dropping the notifies.  Is disk i/o a real performance
penalty for notify, and is performance a huge issue for notify anyway,
assuming autovacuum?

 A shared memory system would use sequential (well, actually
 circular-buffer) access, which is *exactly* what you want given
 the inherently sequential nature of the messages.  The reason that
 table storage hurts is that we are forced to do searches, which we
 could eliminate if we had control of the storage ordering.  Again,
 it comes down to the fact that tables don't provide the right
 abstraction for this purpose.

To me, it just seems like going to shared memory is taking our existing
table structure and moving it to memory.  Yea, there is no tuple header,
and yea we can make a circular list, but we can't index the thing, so is
spinning around a circular list any better than a sequential scan of a
table?  Yea, we can delete stuff better, but autovacuum would help with
that.  It just seems like we are reinventing the wheel.

Are there other uses for this? Can we make use of RAM-only tables?

 The extra code argument doesn't impress me either; async.c is
 currently 900 lines, about 2.5 times the size of sinvaladt.c which is
 the guts of SI message passing.  I think it's a good bet that a SI-like
 notify module would be much smaller than async.c is now; it's certainly
 unlikely to be significantly larger.
 
 The queue limit problem is a valid argument, but it's the only valid
 complaint IMHO; and it seems a reasonable tradeoff to make for the
 other advantages.

I am just not excited about it.  What do others think?

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026








Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Tom Lane

Bruce Momjian [EMAIL PROTECTED] writes:
 Is disk i/o a real performance
 penalty for notify, and is performance a huge issue for notify anyway,

Yes, and yes.  I have used NOTIFY in production applications, and I know
that performance is an issue.

 The queue limit problem is a valid argument, but it's the only valid
 complaint IMHO; and it seems a reasonable tradeoff to make for the
 other advantages.

BTW, it occurs to me that as long as we make this an independent message
buffer used only for NOTIFY (and *not* try to merge it with SI), we
don't have to put up with overrun-reset behavior.  The overrun reset
approach is useful for SI because there are only limited times when
we are prepared to handle SI notification in the backend work cycle.
However, I think a self-contained NOTIFY mechanism could be much more
flexible about when it will remove messages from the shared buffer.
Consider this:

1. To send NOTIFY: grab write lock on shared-memory circular buffer.
If enough space, insert message, release lock, send signal, done.
If not enough space, release lock, send signal, sleep some small
amount of time, and then try again.  (Hard failure would occur only
if the proposed message size exceeds the buffer size; as long as we
make the buffer size a parameter, this is the DBA's fault not ours.)

2. On receipt of signal: grab read lock on shared-memory circular
buffer, copy all data up to write pointer into private memory,
advance my (per-process) read pointer, release lock.  This would be
safe to do pretty much anywhere we're allowed to malloc more space,
so it could be done say at the same points where we check for cancel
interrupts.  Therefore, the expected time before the shared buffer
is emptied after a signal is pretty small.

In this design, if someone sits in a transaction for a long time,
there is no risk of shared memory overflow; that backend's private
memory for not-yet-reported NOTIFYs could grow large, but that's
his problem.  (We could avoid unnecessary growth by not storing
messages that don't correspond to active LISTENs for that backend.
Indeed, a backend with no active LISTENs could be left out of the
circular buffer participation list altogether.)

We'd need to separate this processing from the processing that's used to
force SI queue reading (dz's old patch), so we'd need one more signal
code than we use now.  But we do have SIGUSR1 available.
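Tom's two steps might be sketched in C roughly as follows (a single-reader simplification with hypothetical names; the real thing would put the struct in shared memory, take the read/write locks, and signal listeners as described above):

```c
/* Sketch of the proposed NOTIFY circular buffer.  Positions are
 * monotonically increasing byte counts; the data array is indexed
 * modulo BUF_SIZE.  Locking, signals, and multiple readers omitted. */
#include <stddef.h>

#define BUF_SIZE 64

typedef struct {
    char   data[BUF_SIZE];
    size_t write_pos;            /* total bytes ever written */
} NotifyBuffer;

typedef struct {
    size_t read_pos;             /* this backend's read pointer */
} NotifyReader;

/* Step 1: insert a message if there is room.  Returns 0 on success,
 * -1 if the caller should release the lock, signal, sleep, and retry
 * (or fail hard when len exceeds the whole buffer). */
int notify_send(NotifyBuffer *buf, const NotifyReader *slowest,
                const char *msg, size_t len)
{
    if (len > BUF_SIZE)
        return -1;               /* hard failure: DBA's buffer too small */
    if (buf->write_pos + len - slowest->read_pos > BUF_SIZE)
        return -1;               /* not enough space yet: retry later */
    for (size_t i = 0; i < len; i++)
        buf->data[(buf->write_pos + i) % BUF_SIZE] = msg[i];
    buf->write_pos += len;
    return 0;
}

/* Step 2: on signal, copy everything up to the write pointer into
 * private memory and advance this backend's read pointer. */
size_t notify_receive(NotifyBuffer *buf, NotifyReader *r,
                      char *out, size_t outcap)
{
    size_t n = buf->write_pos - r->read_pos;
    if (n > outcap)
        n = outcap;
    for (size_t i = 0; i < n; i++)
        out[i] = buf->data[(r->read_pos + i) % BUF_SIZE];
    r->read_pos += n;
    return n;
}
```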

regards, tom lane








Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Bruce Momjian


Let me tell you what would be really interesting.  If we didn't report
the pid of the notifying process and we didn't allow arbitrary strings
for notify (just pg_class relation names), we could just add a counter
to pg_class that is updated for every notify.  If a backend is
listening, it remembers the counter at listen time, and on every commit
checks the pg_class counter to see if it has incremented.  That way,
there is no queue, no shared memory, and there is no scanning. You just
pull up the cache entry for pg_class and look at the counter.

One problem is that pg_class would be updated more frequently.  Anyway,
just an idea.
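Bruce's counter idea, sketched in C (the structs and field names below are hypothetical; pg_class has no such column today):

```c
/* Hypothetical per-relation notify counter.  NOTIFY bumps the counter;
 * a listening backend remembers the value at LISTEN time and compares
 * at commit.  No queue, no shared memory, no scanning. */
typedef struct {
    unsigned int relnotifycount;   /* bumped on every NOTIFY */
} PgClassEntry;

typedef struct {
    unsigned int seen_count;       /* counter value at LISTEN time */
} ListenState;

void do_notify(PgClassEntry *rel)
{
    rel->relnotifycount++;
}

void do_listen(ListenState *ls, const PgClassEntry *rel)
{
    ls->seen_count = rel->relnotifycount;
}

/* At commit: report a pending notify iff the counter has moved. */
int check_notify(ListenState *ls, const PgClassEntry *rel)
{
    if (rel->relnotifycount != ls->seen_count) {
        ls->seen_count = rel->relnotifycount;
        return 1;
    }
    return 0;
}
```

Note the tradeoff Bruce mentions: every NOTIFY becomes an update of the relation's pg_class row.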

---

Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  Is disk i/o a real performance
  penalty for notify, and is performance a huge issue for notify anyway,
 
 Yes, and yes.  I have used NOTIFY in production applications, and I know
 that performance is an issue.
 
  The queue limit problem is a valid argument, but it's the only valid
  complaint IMHO; and it seems a reasonable tradeoff to make for the
  other advantages.
 
 BTW, it occurs to me that as long as we make this an independent message
 buffer used only for NOTIFY (and *not* try to merge it with SI), we
 don't have to put up with overrun-reset behavior.  The overrun reset
 approach is useful for SI because there are only limited times when
 we are prepared to handle SI notification in the backend work cycle.
 However, I think a self-contained NOTIFY mechanism could be much more
 flexible about when it will remove messages from the shared buffer.
 Consider this:
 
 1. To send NOTIFY: grab write lock on shared-memory circular buffer.
 If enough space, insert message, release lock, send signal, done.
 If not enough space, release lock, send signal, sleep some small
 amount of time, and then try again.  (Hard failure would occur only
 if the proposed message size exceeds the buffer size; as long as we
 make the buffer size a parameter, this is the DBA's fault not ours.)
 
 2. On receipt of signal: grab read lock on shared-memory circular
 buffer, copy all data up to write pointer into private memory,
 advance my (per-process) read pointer, release lock.  This would be
 safe to do pretty much anywhere we're allowed to malloc more space,
 so it could be done say at the same points where we check for cancel
 interrupts.  Therefore, the expected time before the shared buffer
 is emptied after a signal is pretty small.
 
 In this design, if someone sits in a transaction for a long time,
 there is no risk of shared memory overflow; that backend's private
 memory for not-yet-reported NOTIFYs could grow large, but that's
 his problem.  (We could avoid unnecessary growth by not storing
 messages that don't correspond to active LISTENs for that backend.
 Indeed, a backend with no active LISTENs could be left out of the
 circular buffer participation list altogether.)
 
 We'd need to separate this processing from the processing that's used to
 force SI queue reading (dz's old patch), so we'd need one more signal
 code than we use now.  But we do have SIGUSR1 available.
 
   regards, tom lane
 

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026








[HACKERS] Scope of constraint names

2002-07-03 Thread Tom Lane

SQL92 requires named constraints to have names that are unique within
their schema.  Our past implementation did not require constraint names
to be unique at all; as a compromise I suggested requiring constraint
names to be unique for any given relation.  Rod Taylor's pending
pg_constraint patch implements that approach, but I'm beginning to have
second thoughts about it.

One problem I see is that pg_constraint entries can *only* be associated
with relations; so the table has no way to represent constraints
associated with domains --- not to mention assertions, which aren't
associated with any table at all.  I'm in no hurry to try to implement
assertions, but domain constraints are definitely interesting.  We'd
probably have to put domain constraints into a separate table, which
is possible but not very attractive.

At the SQL level, constraint names seem to be used in only two
contexts: DROP CONSTRAINT subcommands of ALTER TABLE and ALTER DOMAIN
commands, and SET CONSTRAINTS ... IMMEDIATE/DEFERRED.  In the DROP
context there's no real need to identify constraints globally, since
the associated table or domain name is available, but in SET CONSTRAINTS
the syntax doesn't include a table name.

Our current implementation of SET CONSTRAINTS changes the behavior of
all constraints matching the specified name, which is pretty bogus
given the lack of uniqueness.  If we don't go over to the SQL92 approach
then I think we need some other way of handling SET CONSTRAINTS that
allows a more exact specification of the target constraint.

A considerable advantage of per-relation constraint names is that a new
unique name can be assigned for a nameless constraint while holding only
a lock on the target relation.  We'd need a global lock to create unique
constraint names in the SQL92 semantics.  The only way I can see around
that would be to use newoid(), or perhaps a dedicated sequence
generator, to construct constraint names.  The resulting unpredictable
constraint names would be horribly messy to deal with in the regression
tests, so I'm not eager to do this.

Even per-relation uniqueness has some unhappiness: if you have a domain
with a named constraint, and you try to use this domain for two columns
of a relation, you'll get a constraint name conflict.  Inheriting
similar constraint names from two different parent relations is also
troublesome.  We could get around these either by going back to the
old no-uniqueness approach, or by being willing to alter constraint
names to make them unique (eg, by tacking on _nnn when needed).
But this doesn't help SET CONSTRAINTS.
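The "tacking on _nnn" renaming might look like this (a hypothetical helper, not PostgreSQL code; truncation to NAMEDATALEN is ignored for brevity):

```c
/* Pick a constraint name not already taken, appending _2, _3, ...
 * to the base name on collision. */
#include <stdio.h>
#include <string.h>

void unique_constraint_name(const char *base, const char *taken[],
                            int ntaken, char *out, size_t outcap)
{
    snprintf(out, outcap, "%s", base);
    for (int suffix = 2;; suffix++) {
        int clash = 0;
        for (int i = 0; i < ntaken; i++)
            if (strcmp(out, taken[i]) == 0) { clash = 1; break; }
        if (!clash)
            return;
        snprintf(out, outcap, "%s_%d", base, suffix);
    }
}
```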

At the moment I don't much like any of the alternatives.  Ideas anyone?

regards, tom lane








Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Bruce Momjian

Jeff Davis wrote:
 On Tuesday 02 July 2002 06:03 pm, Bruce Momjian wrote:
  Let me tell you what would be really interesting.  If we didn't report
  the pid of the notifying process and we didn't allow arbitrary strings
  for notify (just pg_class relation names), we could just add a counter
  to pg_class that is updated for every notify.  If a backend is
  listening, it remembers the counter at listen time, and on every commit
  checks the pg_class counter to see if it has incremented.  That way,
  there is no queue, no shared memory, and there is no scanning. You just
  pull up the cache entry for pg_class and look at the counter.
 
  One problem is that pg_class would be updated more frequently.  Anyway,
  just an idea.
 
 I think that currently a lot of people use select() (after all, it's mentioned 
 in the docs) in the frontend to determine when a notify comes into a 
 listening backend. If the backend only checks on commit, and the backend is 
 largely idle except for notify processing, might it be a while before the 
 frontend realizes that a notify was sent?

I meant it to check exactly when it does now:  when a query completes.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026








Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Tom Lane

Bruce Momjian [EMAIL PROTECTED] writes:
 Of course, a shared memory system probably is going to either do it
 sequentially or have its own index issues, so I don't see a huge
 advantage to going to shared memory, and I do see extra code and a queue
 limit.

Disk I/O vs. no disk I/O isn't a huge advantage?  Come now.

A shared memory system would use sequential (well, actually
circular-buffer) access, which is *exactly* what you want given
the inherently sequential nature of the messages.  The reason that
table storage hurts is that we are forced to do searches, which we
could eliminate if we had control of the storage ordering.  Again,
it comes down to the fact that tables don't provide the right
abstraction for this purpose.

The extra code argument doesn't impress me either; async.c is
currently 900 lines, about 2.5 times the size of sinvaladt.c which is
the guts of SI message passing.  I think it's a good bet that a SI-like
notify module would be much smaller than async.c is now; it's certainly
unlikely to be significantly larger.

The queue limit problem is a valid argument, but it's the only valid
complaint IMHO; and it seems a reasonable tradeoff to make for the
other advantages.

regards, tom lane








Re: [HACKERS] Integrating libpqxx

2002-07-03 Thread Jeroen T. Vermeulen

On Tue, Jul 02, 2002 at 02:05:57PM -0400, Bruce Momjian wrote:
 
 Jeroen, do you have PostgreSQL CVS access yet?  If not, we need to get
 you that.

Don't have it yet, so please do!


Jeroen









Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Tom Lane

Bruce Momjian [EMAIL PROTECTED] writes:
 Why can't we do efficient indexing, or clear out the table?  I don't
 remember.

I don't recall either, but I do recall that we tried to index it and
backed out the changes.  In any case, a table on disk is just plain
not the right medium for transitory-by-design notification messages.

 A curious statement considering that PG depends critically on SI
 working.  This is a solved problem.

 My point is that SI was buggy for years until we found all the bugs, so
 yea, it is a solved problem, but solved with difficulty.

The SI message mechanism itself was not the source of bugs, as I recall
it (although certainly the code was incomprehensible in the extreme;
the original programmer had absolutely no grasp of readable coding style
IMHO).  The problem was failure to properly design the interactions with
relcache and catcache, which are pretty complex in their own right.
An SI-like NOTIFY mechanism wouldn't have those issues.

regards, tom lane








Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Christopher Kings-Lynne

 Of course, a shared memory system probably is going to either do it
 sequentially or have its own index issues, so I don't see a huge
 advantage to going to shared memory, and I do see extra code and a queue
 limit.

Is a shared memory implementation going to play silly buggers with the Win32
port?

Chris









Re: [HACKERS] Integrating libpqxx

2002-07-03 Thread Christopher Kings-Lynne

Is it included now in the main build process?  If so, I'll test it on
FreeBSD/Alpha.

 Libpqxx still needs to be integrated:

   The 'configure' tests need to be merged into our main configure
   The documentation needs to be merged into our SGML docs.
   The makefile structure needs to be merged into /interfaces.

Chris









[HACKERS] libpq++ build problems

2002-07-03 Thread Christopher Kings-Lynne

OK, this is what I'm seeing on FreeBSD/Alpha for libpq++.  I haven't figured
out how to build libpqxx yet:

gmake[3]: Entering directory
`/home/chriskl/pgsql-head/src/interfaces/libpq++'
g++ -O2 -g -Wall -fpic -DPIC -I../../../src/interfaces/libpq -I../../../src/
include   -c -o pgconnection.o pgconnection.cc -MMD
cc1plus: warning:
***
*** The -O2 flag TRIGGERS KNOWN OPTIMIZER BUGS ON THIS PLATFORM
***

g++ -O2 -g -Wall -fpic -DPIC -I../../../src/interfaces/libpq -I../../../src/
include   -c -o pgdatabase.o pgdatabase.cc -MMD
cc1plus: warning:
***
*** The -O2 flag TRIGGERS KNOWN OPTIMIZER BUGS ON THIS PLATFORM
***

g++ -O2 -g -Wall -fpic -DPIC -I../../../src/interfaces/libpq -I../../../src/
include   -c -o pgtransdb.o pgtransdb.cc -MMD
cc1plus: warning:
***
*** The -O2 flag TRIGGERS KNOWN OPTIMIZER BUGS ON THIS PLATFORM
***

g++ -O2 -g -Wall -fpic -DPIC -I../../../src/interfaces/libpq -I../../../src/
include   -c -o pgcursordb.o pgcursordb.cc -MMD
cc1plus: warning:
***
*** The -O2 flag TRIGGERS KNOWN OPTIMIZER BUGS ON THIS PLATFORM
***

g++ -O2 -g -Wall -fpic -DPIC -I../../../src/interfaces/libpq -I../../../src/
include   -c -o pglobject.o pglobject.cc -MMD
cc1plus: warning:
***
*** The -O2 flag TRIGGERS KNOWN OPTIMIZER BUGS ON THIS PLATFORM
***

ar cr libpq++.a `lorder pgconnection.o pgdatabase.o pgtransdb.o pgcursordb.o
pglobject.o | tsort`
ranlib libpq++.a
g++ -O2 -g -Wall -fpic -DPIC -shared -Wl,-x,-soname,libpq++.so.4
pgconnection.o pgdatabase.o pgtransdb.o pgcursordb.o
pglobject.o   -L../../../src/interfaces/libpq -lpq -R/home/chriskl/local/lib -
o libpq++.so.4
rm -f libpq++.so
ln -s libpq++.so.4 libpq++.so
gmake[3]: Leaving directory
`/home/chriskl/pgsql-head/src/interfaces/libpq++'










Re: [HACKERS] (A) native Windows port

2002-07-03 Thread Jean-Michel POURE

Le Jeudi 27 Juin 2002 05:48, Christopher Kings-Lynne a écrit :
 I am willing to supply a complete, friendly, powerful and pretty installer
 program, based on NSIS.

Maybe you should contact Dave Page, who wrote pgAdmin2 and the ODBC 
installers. Maybe you can both work on the installer.

By the way, when will Dave be added to the main developer list? He wrote 99% 
of pgAdmin on his own.

Cheers, Jean-Michel POURE








Re: [HACKERS] Scope of constraint names

2002-07-03 Thread Christopher Kings-Lynne

 One problem I see is that pg_constraint entries can *only* be associated
 with relations; so the table has no way to represent constraints
 associated with domains --- not to mention assertions, which aren't
 associated with any table at all.  I'm in no hurry to try to implement
 assertions, but domain constraints are definitely interesting.  We'd
 probably have to put domain constraints into a separate table, which
 is possible but not very attractive.

Hmmm... there must be some sort of schema that can do both in one table?
Even something nasty like:

refid Oid of relation or domain
type 'r' for relation and 'd' for domain
...

 Our current implementation of SET CONSTRAINTS changes the behavior of
 all constraints matching the specified name, which is pretty bogus
 given the lack of uniqueness.  If we don't go over to the SQL92 approach
 then I think we need some other way of handling SET CONSTRAINTS that
 allows a more exact specification of the target constraint.

If we do go over to SQL92, what kind of problems will people have reloading
their old schemas?  Should unnamed constraints be excluded from the uniqueness
check...?

 A considerable advantage of per-relation constraint names is that a new
 unique name can be assigned for a nameless constraint while holding only
 a lock on the target relation.  We'd need a global lock to create unique
 constraint names in the SQL92 semantics.

Surely adding a foreign key is what you'd call a 'rare' event in a database,
occurring perhaps once for millions of queries?  Hence, we shouldn't worry
about it too much?

 The only way I can see around
 that would be to use newoid(), or perhaps a dedicated sequence
 generator, to construct constraint names.  The resulting unpredictable
 constraint names would be horribly messy to deal with in the regression
 tests, so I'm not eager to do this.

Surely you do the ol' loop and test sort of thing...?

 Even per-relation uniqueness has some unhappiness: if you have a domain
 with a named constraint, and you try to use this domain for two columns
 of a relation, you'll get a constraint name conflict.  Inheriting
 similar constraint names from two different parent relations is also
 troublesome.  We could get around these either by going back to the
 old no-uniqueness approach, or by being willing to alter constraint
 names to make them unique (eg, by tacking on _nnn when needed).
 But this doesn't help SET CONSTRAINTS.

 At the moment I don't much like any of the alternatives.  Ideas anyone?

If they're both equally evil, then maybe we should consider going the SQL92
way, for compatibilities sake?

Chris









[HACKERS] BETWEEN Node DROP COLUMN

2002-07-03 Thread Christopher Kings-Lynne

Hi All,

I have given up working on the BETWEEN node.  It got to the stage where I
realised I was really out of my depth!  Rod Taylor has indicated an interest
in the problem and I have sent him my latest patch, so hopefully he'll be
able to crack it.

So instead, I've taken up with the DROP COLUMN crusade.  It seems that the
following are the jobs that need to be done:

* Add attisdropped to pg_attribute
  - Looking for takers for this one, otherwise I'll look into it.
* Fill out AlterTableDropColumn
  - I've done this, with the assumption that attisdropped exists.  It sets
attisdropped to true, drops the column default and renames the column.
(Plus does all other normal ALTER TABLE checks)
* Modify parser and other places to ignore dropped columns
  - This is also up for grabs.
* Modify psql and pg_dump to handle dropped columns
  - I've done this.

Once the above is done, we have a working drop column implementation.

* Modify all other interfaces, JDBC, etc. to handle dropped cols.
  - I think this can be suggested to the relevant developers once the above
is committed!

* Modify VACUUM to add a RECLAIM option to reduce on disk table size.
  - This is out of my league, so it's up for grabs

I have approached a couple of people off-list to see if they're interested
in helping, so please post to the list if you intend to work on something.

It has also occurred to me that once drop column exists, users will be able
to change the type of their columns manually (ie. create a new col, update
all values, drop the old col).  So, there is no reason why this new
attisdropped field shouldn't allow us to implement a full ALTER TABLE/SET
TYPE sort of feature - cool huh?

Chris









Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Hannu Krosing

On Wed, 2002-07-03 at 08:20, Christopher Kings-Lynne wrote:
  Of course, a shared memory system probably is going to either do it
  sequentially or have its own index issues, so I don't see a huge
  advantage to going to shared memory, and I do see extra code and a queue
  limit.
 
 Is a shared memory implementation going to play silly buggers with the Win32
 port?

Perhaps this is a good place to introduce anonymous mmap?

Is there a way to grow anonymous mmap on demand?


Hannu









Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Hannu Krosing

On Tue, 2002-07-02 at 23:35, Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  Is disk i/o a real performance
  penalty for notify, and is performance a huge issue for notify anyway,
 
 Yes, and yes.  I have used NOTIFY in production applications, and I know
 that performance is an issue.
 
  The queue limit problem is a valid argument, but it's the only valid
  complaint IMHO; and it seems a reasonable tradeoff to make for the
  other advantages.
 
 BTW, it occurs to me that as long as we make this an independent message
 buffer used only for NOTIFY (and *not* try to merge it with SI), we
 don't have to put up with overrun-reset behavior.  The overrun reset
 approach is useful for SI because there are only limited times when
 we are prepared to handle SI notification in the backend work cycle.
 However, I think a self-contained NOTIFY mechanism could be much more
 flexible about when it will remove messages from the shared buffer.
 Consider this:
 
 1. To send NOTIFY: grab write lock on shared-memory circular buffer.

Are you planning to have one circular buffer per listening backend?

Would that not be a waste of space for a large number of backends with long
notify arguments?

--
Hannu









Re: [HACKERS] [PATCHES] Reduce heap tuple header size

2002-07-03 Thread Manfred Koizar

On Tue, 2 Jul 2002 02:16:29 -0400 (EDT), Bruce Momjian
[EMAIL PROTECTED] wrote:
I committed the version with no #ifdef's.  If we need them, we can add
them later, but it is likely we will never need them.

My point was, if there is a need to fallback to v7.2 format, it can be
done by changing a single line from #undef to #define.  IMO the next
patch I'm going to submit is a bit more risky.  But if everyone else
is confident we can make it stable for v7.3, it's fine by me too.

Yes.  Manfred, keep going.  ;-)

Can't guarantee to keep the rate.  You know, the kids need a bit more
attention when they don't go to school :-)

Servus
 Manfred








Re: [HACKERS] (A) native Windows port

2002-07-03 Thread Hannu Krosing

On Tue, 2002-07-02 at 21:50, Lamar Owen wrote:
 On Tuesday 02 July 2002 03:14 pm, Jan Wieck wrote:
  Lamar Owen wrote:
   [...]
   Martin O has come up with a 'pg_fsck' utility that, IMHO, holds a great
   deal of promise for seamless binary 'in place' upgrading.  He has been
   able to write code to read multiple versions' database structures --
   proving that it CAN be done.
 
  Unfortunately it's not the on-disk binary format of files that causes
  the big problems. Our dump/initdb/restore sequence is also the solution
  for system catalog changes.
 
 Hmmm.  They get in there via the bki interface, right?  Is there an OID issue 
 with these?  Could differential BKI files be possible, with known system 
 catalog changes that can be applied via a 'patchdb' utility?  I know pretty 
 much how pg_upgrade is doing things now -- and, frankly, it's a little bit of 
 a kludge.
 
 Yes, I do understand the things a dump restore does on somewhat of a detailed 
 level.  I know the restore repopulates the entries in the system catalogs for 
 the restored data, etc, etc.
 
 Currently dump/restore handles the catalog changes.  But by what other means 
 could we upgrade the system catalog in place?
 
 Our very extensibility is our weakness for upgrades.  Can it be worked around?  
 Anyone have any ideas?

Perhaps we can keep an old postgres binary + old backend around and then
use it in single-user mode to do a pg_dump into our running backend.

IIRC Access does its database upgrade by copying the old database to the new.

Our approach could be like

$OLD/postgres -D $OLD_DATA pg_dump_cmds | $NEW/postgres -D NEW_BACKEND

or perhaps, while old backend is still running:

pg_dumpall | path_to_new_backend/bin/postgres


I don't think we should assume that we will be able to do an upgrade
while we have less free space than currently used by the databases (or at
least by the data - indexes can be added later)

Trying to do an in-place upgrade is an interesting CS project, but any
serious DBA will have backups, so they can do
$ psql  dumpfile

Speeding up COPY FROM could be a good thing (perhaps enabling it to run
without any checks and outside transactions when used in loading dumps)

And home users will have databases small enough that they should have
enough free space to have both old and new version for some time.

What we do need is a more-or-less solid upgrade path using pg_dump

BTW, how hard would it be to move pg_dump inside the backend (perhaps
using a dynamically loaded function to save space when not used) so that
it could be used like COPY ?

pg DUMP  table [ WITH 'other cmdline options' ] TO stdout ;

pg DUMP * [ WITH 'other cmdline options' ] TO stdout ;

 

Hannu









Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Rod Taylor

On Tue, 2002-07-02 at 17:12, Bruce Momjian wrote:
 Tom Lane wrote:
  Bruce Momjian [EMAIL PROTECTED] writes:
   Of course, a shared memory system probably is going to either do it
   sequentially or have its own index issues, so I don't see a huge
   advantage to going to shared memory, and I do see extra code and a queue
   limit.
  
  Disk I/O vs. no disk I/O isn't a huge advantage?  Come now.
 
 My assumption is that it throws to disk as backing store, which seems
 better to me than dropping the notifies.  Is disk i/o a real performance
 penalty for notify, and is performance a huge issue for notify anyway,
 assuming autovacuum?

For me, performance would be one of the only concerns. Currently I use
two methods of finding changes, one is NOTIFY which directs frontends to
reload various sections of data, the second is a table which holds a
QUEUE of actions to be completed (which must be tracked, logged and
completed).

If performance wasn't a concern, I'd simply use more RULES which insert
requests into my queue table.









Re: [HACKERS] BETWEEN Node DROP COLUMN

2002-07-03 Thread Christopher Kings-Lynne

 I've not looked in a while, but the column rename code did not account
 for issues in foreign keys, etc.  Those should be easier to ferret out
 soon, but may not be so nice to change yet.

Which is probably a good reason for us to offer it as an all-in-one command,
rather than expecting them to do it manually...

 It should also be noted that an ALTER TABLE / SET TYPE implemented with
 the above idea will run into the 2x diskspace issue as well as take
 quite a while to process.

I think that if the 'SET TYPE' operation is ever to be rollback-able, it
will need to use 2x diskspace.  If it's overwritten in place, there's no
chance of fallback...  I think that a DBA would choose to use the command
knowing full well what it requires?  Better than not offering them the
choice at all!

Chris










Re: [HACKERS] BETWEEN Node DROP COLUMN

2002-07-03 Thread Rod Taylor

  It should also be noted that an ALTER TABLE / SET TYPE implemented with
  the above idea will run into the 2x diskspace issue as well as take
  quite a while to process.
 
 I think that if the 'SET TYPE' operation is ever to be rollback-able, it
 will need to use 2x diskspace.  If it's overwritten in place, there's no
 chance of fallback...  I think that a DBA would choose to use the command
 knowing full well what it requires?  Better than not offering them the
 choice at all!

True, but if we did the multi-version thing in pg_attribute we may be
able to coerce to the right type on the way out making it a high speed
change.









Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Tom Lane

Christopher Kings-Lynne [EMAIL PROTECTED] writes:
 Is a shared memory implementation going to play silly buggers with the Win32
 port?

No.  Certainly no more so than shared disk buffers or the SI message
facility, both of which are *not* optional.

regards, tom lane








Re: [HACKERS] Scope of constraint names

2002-07-03 Thread Tom Lane

Christopher Kings-Lynne [EMAIL PROTECTED] writes:
 A considerable advantage of per-relation constraint names is that a new
 unique name can be assigned for a nameless constraint while holding only
 a lock on the target relation.  We'd need a global lock to create unique
 constraint names in the SQL92 semantics.

 Surely adding a foreign key is what you'd call a 'rare' event in a database,
 occurring once for millions of queries?  Hence, we shouldn't worry
 about it too much?

I don't buy that argument even for foreign keys --- and remember that
pg_constraint will also hold entries for CHECK, UNIQUE, and PRIMARY KEY
constraints.  I don't want to have to take a global lock whenever we
create an index.

 The only way I can see around
 that would be to use newoid(), or perhaps a dedicated sequence
 generator, to construct constraint names.  The resulting unpredictable
 constraint names would be horribly messy to deal with in the regression
 tests, so I'm not eager to do this.

 Surely you do the ol' loop and test sort of thing...?

How is a static 'expected' file going to do loop-and-test?

One possible answer to that is to report all unnamed constraints as
unnamed in error messages, even though they'd have distinct names
internally.  I don't much care for that approach though, since it might
make it hard for users to figure out which internal name to mention in
DROP CONSTRAINT.  But it'd keep the expected regression output stable.

 If they're both equally evil, then maybe we should consider going the SQL92
 way, for compatibility's sake?

If the spec didn't seem so brain-damaged on this point, I'd be more
eager to follow it.  I can't see any advantage in the way they chose
to do it.  But yeah, I'd lean to following the spec, if we can think
of a way around the locking and regression testing issues it creates.

regards, tom lane








Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Tom Lane

Hannu Krosing [EMAIL PROTECTED] writes:
 Perhaps this is a good place to introduce anonymous mmap?

I don't think so; it just adds a portability variable without buying
us anything.

 Is there a way to grow anonymous mmap on demand?

Nope.  Not portably, anyway.  For instance, the HPUX man page for mmap
sayeth:

 If the size of the mapped file changes after the call to mmap(), the
 effect of references to portions of the mapped region that correspond
 to added or removed portions of the file is unspecified.

Dynamically re-mmapping after enlarging the file might work, but there
are all sorts of interesting constraints on that too; it looks like
you'd have to somehow synchronize things so that all the backends do it
at the exact same time.

On the whole I see no advantage to be gained here, compared to the
implementation I sketched earlier with a fixed-size shared buffer and
enlargeable internal buffers in backends.
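For illustration only: creating a fixed-size anonymous mapping is easy and portable; it is growing one afterwards that is the problem, as Tom notes. A minimal sketch using Python's mmap module (an fd of -1 requests anonymous memory; the size here is an arbitrary demo value):

```python
# Minimal sketch of a fixed-size anonymous mapping (Python's mmap
# module; fd of -1 means anonymous, not backed by a file).  The size
# is fixed at creation time -- growing it portably afterwards is the
# unsolved part discussed above.
import mmap

SIZE = 4096                      # arbitrary demo size
buf = mmap.mmap(-1, SIZE)        # anonymous shared memory region
buf.write(b"NOTIFY payload")     # write into the region
buf.seek(0)
data = buf.read(14)              # read it back
buf.close()
```

The same call shape works on both Unix and Windows in Python, which is exactly the kind of portability question raised for the Win32 port.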

regards, tom lane








Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Tom Lane

Hannu Krosing [EMAIL PROTECTED] writes:
 Are you planning to have one circular buffer per listening backend?

No; one circular buffer, period.

Each backend would also internally buffer notifies that it hadn't yet
delivered to its client --- but since the time until delivery could vary
drastically across clients, I think that's reasonable.  I'd expect
clients that are using LISTEN to avoid doing long-running transactions,
so under normal circumstances the internal buffer should not grow very
large.

regards, tom lane








Re: [HACKERS] BETWEEN Node DROP COLUMN

2002-07-03 Thread Hannu Krosing

On Wed, 2002-07-03 at 14:32, Rod Taylor wrote:
   It should also be noted that an ALTER TABLE / SET TYPE implemented with
   the above idea will run into the 2x diskspace issue as well as take
   quite a while to process.
  
  I think that if the 'SET TYPE' operation is ever to be rollback-able, it
  will need to use 2x diskspace.  If it's overwritten in place, there's no
  chance of fallback...  I think that a DBA would choose to use the command
  knowing full well what it requires?  Better than not offering them the
  choice at all!
 
 True, but if we did the multi-version thing in pg_attribute we may be
 able to coerce to the right type on the way out making it a high speed
 change.

If I understand you right, i.e. you want to do the conversion at each
select(), then the change is high speed but all subsequent queries using
it will pay a speed penalty, not to mention the added complexity of the
whole thing.

I don't think that making changes quick outweighs the added slowness and
complexity - changes are meant to be slow ;)

The real-life analogue to the proposed scenario would be adding one
extra wheel next to each existing one in a car in order to make it
possible to change tyres while driving - while certainly possible, nobody
actually does it.

---
Hannu











Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Hannu Krosing

On Wed, 2002-07-03 at 15:51, Tom Lane wrote:
 Hannu Krosing [EMAIL PROTECTED] writes:
  Are you planning to have one circular buffer per listening backend?
 
 No; one circular buffer, period.
 
 Each backend would also internally buffer notifies that it hadn't yet
 delivered to its client --- but since the time until delivery could vary
 drastically across clients, I think that's reasonable.  I'd expect
 clients that are using LISTEN to avoid doing long-running transactions,
 so under normal circumstances the internal buffer should not grow very
 large.
 
   regards, tom lane

 2. On receipt of signal: grab read lock on shared-memory circular
 buffer, copy all data up to write pointer into private memory,
 advance my (per-process) read pointer, release lock.  This would be
 safe to do pretty much anywhere we're allowed to malloc more space,
 so it could be done say at the same points where we check for cancel
 interrupts.  Therefore, the expected time before the shared buffer
 is emptied after a signal is pretty small.

 In this design, if someone sits in a transaction for a long time,
 there is no risk of shared memory overflow; that backend's private
 memory for not-yet-reported NOTIFYs could grow large, but that's
 his problem.  (We could avoid unnecessary growth by not storing
 messages that don't correspond to active LISTENs for that backend.
 Indeed, a backend with no active LISTENs could be left out of the
 circular buffer participation list altogether.)

There could be a little more smartness here to avoid unnecessary copying
(not just storing) of not-listened-to data. Perhaps each notify message
could be stored as

(ptr_to_next_blk,name,data)

so that the receiving backend could skip uninteresting (not-listened-to)
messages.

I guess that depending on the circumstances this can be either faster or
slower than copying them all in one memmove.

This will be slower if all messages are interesting, but an overall
win if there is one backend listening to messages with a big
dataload and lots of other backends listening to relatively small
messages.

There are scenarios where some more complex structure will be faster (a
sparse communication structure, say 1000 backends each listening to 1
name and notifying ten others - each backend has to (manually ;) check
1000 messages to find the one that is for it), but your proposed
structure seems good enough for most common uses (and definitely better
than the current one).
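The scheme under discussion can be modeled, purely for illustration, as a single fixed-size ring of (name, payload) records with one read pointer per backend: writers fail (and retry) rather than overrun the slowest reader, and readers drain everything up to the write pointer, keeping only names they listen for. All names and sizes below are invented; the real backend would use shared memory and proper locking.

```python
# Toy model of the proposed single shared NOTIFY ring buffer.
# Invented names/sizes; locking is omitted for clarity.

class NotifyRing:
    def __init__(self, size):
        self.size = size
        self.buf = [None] * size   # each slot holds one (name, payload)
        self.write = 0             # global write pointer
        self.readers = {}          # backend id -> read pointer

    def register(self, backend_id):
        self.readers[backend_id] = self.write

    def send(self, name, payload):
        # Refuse (caller retries later) if the slowest reader
        # would be overrun -- no overrun-reset behavior needed.
        for rd in self.readers.values():
            if self.write - rd >= self.size:
                return False
        self.buf[self.write % self.size] = (name, payload)
        self.write += 1
        return True

    def drain(self, backend_id, listening):
        # Copy everything up to the write pointer, advance our read
        # pointer, and keep only names we are listening for.
        rd = self.readers[backend_id]
        got = []
        while rd < self.write:
            name, payload = self.buf[rd % self.size]
            if name in listening:      # skip uninteresting messages
                got.append((name, payload))
            rd += 1
        self.readers[backend_id] = rd
        return got
```

A backend in a long transaction only delays its own drain; the shared ring stays small as long as every reader drains promptly after the signal, which is the property argued for above.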

-
Hannu









Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Tom Lane

Hannu Krosing [EMAIL PROTECTED] writes:
 There could be a little more smartness here to avoid unnecessary copying
 (not just storing) of not-listened-to data.

Yeah, I was wondering about that too.

 I guess that depending on the circumstances this can be either faster or
 slower than copying them all in one memmove.

The more interesting question is whether it's better to hold the read
lock on the shared buffer for the minimum possible amount of time; if
so, we'd be better off to pull the data from the buffer as quickly as
possible and then sort it later.  Determining whether we are interested
in a particular notify name will probably take a probe into a (local)
hashtable, so it won't be super-quick.  However, I think we could
arrange for readers to use a sharable lock on the buffer, so having them
expend that processing while holding the read lock might be acceptable.

My guess is that the actual volume of data going through the notify
mechanism isn't going to be all that large, and so avoiding one memcpy
step for it isn't going to be all that exciting.  I think I'd lean
towards minimizing the time spent holding the shared lock, instead.
But it's a judgment call.

regards, tom lane








Re: [HACKERS] (A) native Windows port

2002-07-03 Thread Bruce Momjian

Lamar Owen wrote:
 On Tuesday 02 July 2002 03:14 pm, Jan Wieck wrote:
  Lamar Owen wrote:
   [...]
   Martin O has come up with a 'pg_fsck' utility that, IMHO, holds a great
   deal of promise for seamless binary 'in place' upgrading.  He has been
   able to write code to read multiple versions' database structures --
   proving that it CAN be done.
 
  Unfortunately it's not the on-disk binary format of files that causes
  the big problems. Our dump/initdb/restore sequence is also the solution
  for system catalog changes.
 
 Hmmm.  They get in there via the bki interface, right?  Is there an OID issue 
 with these?  Could differential BKI files be possible, with known system 
 catalog changes that can be applied via a 'patchdb' utility?  I know pretty 
 much how pg_upgrade is doing things now -- and, frankly, it's a little bit of 
 a kludge.

Sure, if it wasn't a kludge, I wouldn't have written it.  ;-)

Does everyone remember my LIKE indexing kludge in gram.y?  Until people
found a way to get it into the optimizer, it did its job.  I guess
that's where pg_upgrade is at this point.

Actually, how can pg_upgrade be improved?  

Also, we have committed to making file format changes for 7.3, so it
seems pg_upgrade will not be useful for that release unless we get some
binary conversion tool working.


 Yes, I do understand the things a dump restore does on somewhat of a detailed 
 level.  I know the restore repopulates the entries in the system catalogs for 
 the restored data, etc, etc.
 
 Currently dump/restore handles the catalog changes.  But by what other means 
 could we upgrade the system catalog in place?
 
 Our very extensibility is our weakness for upgrades.  Can it be worked around?  
 Anyone have any ideas?
 
 Improving pg_upgrade may be the ticket -- but if the on-disk binary format 
 changes (like it has before), then something will have to do the binary 
 format translation -- something like pg_fsck. 

Yep.

 Incidentally, pg_fsck, or a program like it, should be in the core 
 distribution.  Maybe not named pg_fsck, as our database isn't a filesystem, 
 but pg_dbck, or pg_dbcheck, or pg_dbfix, or similar.  Although pg_fsck is 
 more of a pg_dbdump.
 
 I've seen too many people bitten by upgrades gone awry.  The more we can do in 
 that regard, the better.

I should mention that 7.3 will have pg_depend, which should make our
post-7.3 reload process much cleaner because we will not have dangling
objects as often.

 And the Windows user will likely demand it.  I never thought I'd be grateful 
 for a Win32 native PostgreSQL port... :-)

Yea, the trick is to get something working that will require minimal
change from release to release.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026








Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Hannu Krosing

On Wed, 2002-07-03 at 16:30, Tom Lane wrote:
 Hannu Krosing [EMAIL PROTECTED] writes:
  There could be a little more smartness here to avoid unnecessary copying
  (not just storing) of not-listened-to data.
 
 Yeah, I was wondering about that too.
 
  I guess that depending on the circumstances this can be either faster or
  slower than copying them all in one memmove.
 
 The more interesting question is whether it's better to hold the read
 lock on the shared buffer for the minimum possible amount of time;

OTOH, we may decide that getting a notify ASAP is not a priority and
just go on doing what we did before if we can't get the lock and try
again the next time around.

This may have some pathological behaviours (starving some backends who
always come late ;), but we are already attracting a thundering herd by
sending a signal to all _possibly_ interested backends at the same
time.

Keeping a list of who listens to what can solve this problem (but only
in case of sparse listening habits).

-
Hannu









Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Neil Conway

On Tue, Jul 02, 2002 at 05:35:42PM -0400, Tom Lane wrote:
 1. To send NOTIFY: grab write lock on shared-memory circular buffer.
 If enough space, insert message, release lock, send signal, done.
 If not enough space, release lock, send signal, sleep some small
 amount of time, and then try again.  (Hard failure would occur only
 if the proposed message size exceeds the buffer size; as long as we
 make the buffer size a parameter, this is the DBA's fault not ours.)

How would this interact with the current transactional behavior of
NOTIFY? At the moment, executing a NOTIFY command only stores the
pending notification in a List in the backend you're connected to;
when the current transaction commits, the NOTIFY is actually
processed (stored in pg_listener, SIGUSR2 sent, etc) -- if the
transaction is rolled back, the NOTIFY isn't sent. If we do the
actual insertion when the NOTIFY is executed, I don't see a simple
way to get this behavior...

Cheers,

Neil

-- 
Neil Conway [EMAIL PROTECTED]
PGP Key ID: DB3C29FC








Re: [HACKERS] libpq++ build problems

2002-07-03 Thread jtv

On Wed, Jul 03, 2002 at 02:25:46PM +0800, Christopher Kings-Lynne wrote:
 OK, this is what I'm seeing on FreeBSD/Alpha for libpq++.  

[cut]
[paste]

 cc1plus: warning:
 ***
 *** The -O2 flag TRIGGERS KNOWN OPTIMIZER BUGS ON THIS PLATFORM
 ***

Doesn't say it doesn't work though...  Have you tried running the
resulting code?


 I haven't figured out how to build libpqxx yet:

Basically, ./configure; make; make check; make install.  You may have to
use configure options --with-postgres=/your/postgres/dir or its cousins.
Plus, you'll also run into the same gcc warning so you may have to set
the environment variable CXXFLAGS to something like -O before running 
configure.  The same will probably help with libpq++ as well BTW.


Jeroen










Re: [HACKERS] (A) native Windows port

2002-07-03 Thread Hannu Krosing

On Wed, 2002-07-03 at 17:28, Bruce Momjian wrote:
 Hannu Krosing wrote:
   Our very extensibility is our weakness for upgrades.  Can it be worked around?  
   Anyone have any ideas?
  
  Perhaps we can keep an old postgres binary + old backend around and then
  use it in single-user mode to do a pg_dump into our running backend.
 
 That brings up an interesting idea.  Right now we dump the entire
 database out to a file, delete the old database, and load in the file.
 
 What if we could move over one table at a time?  Copy out the table,
 load it into the new database, then delete the old table and move on to
 the next.  That would allow us to upgrade having free space for just
 the largest table.  Another idea would be to record and remove all
 indexes in the old database.  That certainly would save disk space
 during the upgrade.
 
 However, the limiting factor is that we don't have a mechanism to have
 both databases running at the same time currently. 

How so ?

AFAIK I can run as many backends as I like (up to some practical limit)
on the same computer at the same time, as long as they use different
ports and different data directories.

 Seems this may be
 the direction to head in.
 
  BTW, how hard would it be to move pg_dump inside the backend (perhaps
  using a dynamically loaded function to save space when not used) so that
  it could be used like COPY ?
  
  pg DUMP  table [ WITH 'other cmdline options' ] TO stdout ;
  
  pg DUMP * [ WITH 'other cmdline options' ] TO stdout ;
 
 Interesting idea, but I am not sure what that buys us.  Having pg_dump
 separate makes maintenance easier.

can pg_dump connect to a single-user-mode backend?


Hannu









Re: [HACKERS] Integrating libpqxx

2002-07-03 Thread Bruce Momjian

Christopher Kings-Lynne wrote:
 Is it included now in the main build process?  If so, I'll test it on
 FreeBSD/Alpha.
 
  Libpqxx still needs to be integrated:
 
  The 'configure' tests need to be merged into our main configure
  The documentation needs to be merged into our SGML docs.
  The makefile structure needs to be merged into /interfaces.
 

No, currently disabled in the build.  You can go into libpqxx and run
configure and make and that should work.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026








Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Tom Lane

Hannu Krosing [EMAIL PROTECTED] writes:
 but we are already attracting a thundering herd by
 sending a signal to all _possibly_ interested backends at the same time

That's why it's so important that the readers use a sharable lock.  The
only thing they'd be locking out is some new writer trying to send (yet
another) notify.

Also, it's a pretty important optimization to avoid signaling backends
that are not listening for any notifies at all.

We could improve on it further by keeping info in shared memory about
which backends are listening for which notify names, but I don't see
any good way to do that in a fixed amount of space.
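One fixed-space approximation (the editor's illustration of the tradeoff, not a proposal from this thread) is to hash each notify name into one of N slots, each holding a bitmask of listening backends: constant space, at the price of occasional false positives (a backend may be signaled for a colliding name it never listened to), but never false negatives.

```python
# Illustration of a fixed-space "who listens to what" table: constant
# memory, with false positives on hash collisions but no misses.
# Editor's sketch of the space/accuracy tradeoff, not backend code.

N_SLOTS = 64

class ListenTable:
    def __init__(self):
        self.slots = [0] * N_SLOTS     # bitmask of backend ids per slot

    def listen(self, backend_id, name):
        self.slots[hash(name) % N_SLOTS] |= (1 << backend_id)

    def candidates(self, name):
        # Backends to signal for `name`: a superset of true listeners.
        mask = self.slots[hash(name) % N_SLOTS]
        return {b for b in range(mask.bit_length()) if (mask >> b) & 1}
```

Over-signaling a few backends is harmless (they simply find nothing interesting when draining), so a lossy table like this preserves correctness while bounding shared memory.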

regards, tom lane








Re: [HACKERS] BETWEEN Node DROP COLUMN

2002-07-03 Thread Bruce Momjian

Christopher Kings-Lynne wrote:
 Hi All,

 I have given up working on the BETWEEN node.  It got to the stage where I
 realised I was really out of my depth!  Rod Taylor has indicated an interest
 in the problem and I have sent him my latest patch, so hopefully he'll be
 able to crack it.

 So instead, I've taken up with the DROP COLUMN crusade.  It seems that the
 following are the jobs that need to be done:

Great crusade!

 * Add attisdropped to pg_attribute
   - Looking for takers for this one, otherwise I'll look into it.

I can do this for you.  Just let me know when.

 * Fill out AlterTableDropColumn
   - I've done this, with the assumption that attisdropped exists.  It sets
 attisdropped to true, drops the column default and renames the column.
 (Plus does all other normal ALTER TABLE checks)
 * Modify parser and other places to ignore dropped columns
   - This is also up for grabs.

As I remember, Hiroshi's drop column changed the attribute number to a
special negative value, which required lots of changes to track.
Keeping the same number and just marking the column as dropped is a big
win.  This does push the coding out to the client, though.
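The "keep the number, mark it dropped" idea can be pictured with a toy model (field names echo pg_attribute, but this is illustrative, not backend code): every consumer applies the same filter, and the surviving columns keep their attnums, so stored tuples still line up.

```python
# Toy model of the attisdropped approach: the attribute keeps its
# number and each consumer (parser, psql, pg_dump, ...) filters on
# the flag.  Names follow pg_attribute for flavor only.
from dataclasses import dataclass

@dataclass
class Attribute:
    attname: str
    attnum: int
    attisdropped: bool = False

def visible_columns(attrs):
    # Dropped columns are skipped; attnums of survivors are untouched,
    # so on-disk tuples need no rewriting at DROP COLUMN time.
    return [a for a in attrs if not a.attisdropped]

cols = [Attribute("id", 1),
        Attribute("junk", 2, attisdropped=True),
        Attribute("name", 3)]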

 * Modify psql and pg_dump to handle dropped columns
   - I've done this.
 
 Once the above is done, we have a working drop column implementation.
 
 * Modify all other interfaces, JDBC, etc. to handle dropped cols.
   - I think this can be suggested to the relevant developers once the above
 is committed!
 
 * Modify VACUUM to add a RECLAIM option to reduce on disk table size.
   - This is out of my league, so it's up for grabs

Will UPDATE on a row set the deleted column to NULL?  If so, the
disk space used by the column would go away over time.  In fact, a
simple:

UPDATE tab SET col = col;
VACUUM;

would remove the data stored in the deleted column;  no change to VACUUM
needed.

 I have approached a couple of people off-list to see if they're interested
 in helping, so please post to the list if you intend to work on something.
 
 It has also occurred to me that once drop column exists, users will be able
 to change the type of their columns manually (ie. create a new col, update
 all values, drop the old col).  So, there is no reason why this new
 attisdropped field shouldn't allow us to implement a full ALTER TABLE/SET
 TYPE sort of feature - cool huh?

Yep.
 
-- 
  Bruce Momjian                     |  http://candle.pha.pa.us
  [EMAIL PROTECTED]                 |  (610) 853-3000
  +  If your life is a hard drive,  |  830 Blythe Avenue
  +  Christ can be your backup.     |  Drexel Hill, Pennsylvania 19026



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly





Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Tom Lane

[EMAIL PROTECTED] (Neil Conway) writes:
 How would this interact with the current transactional behavior of
 NOTIFY?

No change.  Senders would only insert notify messages into the shared
buffer when they commit (uncommitted notifies would live in a list in
the sender, same as now).  Readers would be expected to remove messages
from the shared buffer ASAP after receiving the signal, but they'd
store those messages internally and not forward them to the client until
such time as they're not inside a transaction block.

regards, tom lane








Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Hannu Krosing

On Wed, 2002-07-03 at 17:48, Tom Lane wrote:
 Hannu Krosing [EMAIL PROTECTED] writes:
  but we are already attracting a thundering herd by
  sending a signal to all _possibly_ interested backends at the same time
 
 That's why it's so important that the readers use a sharable lock.  The
 only thing they'd be locking out is some new writer trying to send (yet
 another) notify.

But there must be some way to communicate the positions of all
backends' read pointers for managing the free space; otherwise we
cannot know when the buffer is full.

I imagined that at least this info was kept in shared memory.

 Also, it's a pretty important optimization to avoid signaling backends
 that are not listening for any notifies at all.

But of little help when they are all listening to something ;)

 We could improve on it further by keeping info in shared memory about
 which backends are listening for which notify names, but I don't see
 any good way to do that in a fixed amount of space.

A compromise would be to do it for some fixed amount of memory (say 10
names per backend) and assume a backend listens to all names once it
runs out of that memory.

Notifying everybody has fewer bad effects when backends listen to more
names, and keeping lists is pure overhead when all listeners listen to
all names.

--
Hannu








Re: [HACKERS] (A) native Windows port

2002-07-03 Thread Bruce Momjian

Hannu Krosing wrote:
  
  However, the limiting factor is that we don't have a mechanism to have
  both databases running at the same time currently. 
 
 How so ?
 
 AFAIK I can run as many backends as I like (up to some practical limit)
 on the same computer at the same time, as long as they use different
 ports and different data directories.

We don't have an automated system for doing this, though certainly it
is done by hand all the time.

  Interesting idea, but I am not sure what that buys us.  Having pg_dump
  separate makes maintenance easier.
 
 can pg_dump connect to single-user-mode backend ?

Uh, no, I don't think so.









Re: [HACKERS] [PATCHES] Reduce heap tuple header size

2002-07-03 Thread Bruce Momjian

Manfred Koizar wrote:
 On Tue, 2 Jul 2002 02:16:29 -0400 (EDT), Bruce Momjian
 [EMAIL PROTECTED] wrote:
 I committed the version with no #ifdef's.  If we need them, we can add
 them later, but it is likely we will never need them.
 
 My point was, if there is a need to fallback to v7.2 format, it can be
 done by changing a single line from #undef to #define.  IMO the next
 patch I'm going to submit is a bit more risky.  But if everyone else
 is confident we can make it stable for v7.3, it's fine by me too.

Yes, with your recent patches, I think we are committed to changing the
format for 7.3.

 Yes.  Manfred, keep going.  ;-)
 
 Can't guarantee to keep the rate.  You know, the kids need a bit more
 attention when they don't go to school :-)

Let me send over my kids.  Where are you located?  Austria?  Hmmm...









Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Tom Lane

Hannu Krosing [EMAIL PROTECTED] writes:
 On Wed, 2002-07-03 at 17:48, Tom Lane wrote:
 That's why it's so important that the readers use a sharable lock.  The
 only thing they'd be locking out is some new writer trying to send (yet
 another) notify.

 But there must be some way to communicate the positions of all
 backends' read pointers for managing the free space; otherwise we
 cannot know when the buffer is full.

Right.  But we play similar games already with the existing SI buffer,
to wit:

Writers grab the controlling lock LW_EXCLUSIVE, thereby having sole
access; in this state it's safe for them to examine all the read
pointers as well as examine/update the write pointer (and of course
write data into the buffer itself).  The furthest-back read pointer
limits what they can write.

Readers grab the controlling lock LW_SHARED, thereby ensuring there
is no writer (but there may be other readers).  In this state they
may examine the write pointer (to see how much data there is) and
may examine and update their own read pointer.  This is safe and
useful because no reader cares about any other's read pointer.

 We could improve on it further by keeping info in shared memory about
 which backends are listening for which notify names, but I don't see
 any good way to do that in a fixed amount of space.

 A compromise would be to do it for some fixed amount of memory (say 10
 names per backend) and assume a backend listens to all names once it
 runs out of that memory.

I thought of that too, but it's not clear how much it'd help.  The
writer would have to scan through all the per-reader data while holding
the write lock, which is not good for concurrency.  On SMP hardware it
could actually be a net loss.  Might be worth experimenting with though.

You could make a good reduction in the shared-memory space needed by
storing just a hash code for the interesting names, and not the names
themselves.  (I'd also be inclined to include the hash code in the
transmitted message, so that readers could more quickly ignore
uninteresting messages.)

regards, tom lane








Re: [HACKERS] regress/results directory problem

2002-07-03 Thread Bruce Momjian


Marc has removed the regress/results directory from CVS.

---

Thomas Lockhart wrote:
 ...
  I am backing out my GNUmakefile change.  I am still unclear why this has
  started happening all of a sudden.
 
 ?
 
 The results/ directory should not be a part of CVS (since it is assumed
 to not exist by the regression tests). But it has been in CVS since 1997
 during a period of time when a Makefile in that directory was
 responsible for cleaning the directory. 
 
 We are relying on the pruning capabilities of CVS and so never really
 notice that this was the case (I use -Pd almost always too).
 
 I doubt anything has changed recently in this regard.
 
 - Thomas
 
 
 
 
 
 









Re: [HACKERS] libpq++ build problems

2002-07-03 Thread Bruce Momjian


Actually, I am confused. In src/template/freebsd I see:

CFLAGS='-pipe'

case $host_cpu in
  alpha*)   CFLAGS="$CFLAGS -O";;
  i386*)    CFLAGS="$CFLAGS -O2";;
esac

so why is he seeing the -O2 flag on FreeBSD/alpha?

---

jtv wrote:
 On Wed, Jul 03, 2002 at 02:25:46PM +0800, Christopher Kings-Lynne wrote:
  OK, this is what I'm seeing on FreeBSD/Alpha for libpq++.  
 
 [cut]
 [paste]
 
  cc1plus: warning:
  ***
  *** The -O2 flag TRIGGERS KNOWN OPTIMIZER BUGS ON THIS PLATFORM
  ***
 
 Doesn't say it doesn't work though...  Have you tried running the
 resulting code?
 
 
  I haven't figured out how to build libpqxx yet.
 
 Basically, ./configure; make; make check; make install.  You may have to
 use configure options --with-postgres=/your/postgres/dir or its cousins.
 Plus, you'll also run into the same gcc warning so you may have to set
 the environment variable CXXFLAGS to something like -O before running 
 configure.  The same will probably help with libpq++ as well BTW.
 
 
 Jeroen
 
 
 
 
 
 
 
 









Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Bruce Momjian

Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  Tom Lane wrote:
  themselves.  (I'd also be inclined to include the hash code in the
  transmitted message, so that readers could more quickly ignore
  uninteresting messages.)
 
  Doesn't seem worth it, and how would the user know their hash;
 
 This is not the user's problem; it is the writing backend's
 responsibility to compute and add the hash.  Basically we trade off some
 space to compute the hash code once at the writer not N times at all the
 readers.

Oh, OK.  When you said transmitted, I thought you meant transmitted to
the client.









Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Bruce Momjian

Tom Lane wrote:
 themselves.  (I'd also be inclined to include the hash code in the
 transmitted message, so that readers could more quickly ignore
 uninteresting messages.)

Doesn't seem worth it, and how would the user know their hash?  They
already have a C string for comparison.  Do we have to handle possible
hash collisions?









Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Hannu Krosing

On Wed, 2002-07-03 at 22:43, Tom Lane wrote:
 Hannu Krosing [EMAIL PROTECTED] writes:
  On Wed, 2002-07-03 at 17:48, Tom Lane wrote:
  That's why it's so important that the readers use a sharable lock.  The
  only thing they'd be locking out is some new writer trying to send (yet
  another) notify.
 
  But there must be some way to communicate the positions of all
  backends' read pointers for managing the free space; otherwise we
  cannot know when the buffer is full.
 
 Right.  But we play similar games already with the existing SI buffer,
 to wit:
 
 Writers grab the controlling lock LW_EXCLUSIVE, thereby having sole
 access; in this state it's safe for them to examine all the read
 pointers as well as examine/update the write pointer (and of course
 write data into the buffer itself).  The furthest-back read pointer
 limits what they can write.

It means a full seq scan over pointers ;)

 Readers grab the controlling lock LW_SHARED, thereby ensuring there
 is no writer (but there may be other readers).  In this state they
 may examine the write pointer (to see how much data there is) and
 may examine and update their own read pointer.  This is safe and
 useful because no reader cares about any other's read pointer.

OK. Now, how will we introduce transactional behaviour to this scheme?

It is easy to save transaction id with each notify message, but is there
a quick way for backends to learn when these transactions commit/abort
or if they have done either in the past ?

Is there already a good common facility for that, or do I just need to
examine some random tuples in hope of finding out ;)

--
Hannu










Re: [HACKERS] libpq++ build problems

2002-07-03 Thread jtv

On Wed, Jul 03, 2002 at 01:45:56PM -0400, Bruce Momjian wrote:
 
 Actually, I am confused. In src/template/freebsd I see:
   
   CFLAGS='-pipe'
   
   case $host_cpu in
  alpha*)   CFLAGS="$CFLAGS -O";;
  i386*)    CFLAGS="$CFLAGS -O2";;
   esac
 
 so why is he seeing the -O2 flag on FreeBSD/alpha?

Probably because CXXFLAGS still has -O2 set.


Jeroen









Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Tom Lane

Bruce Momjian [EMAIL PROTECTED] writes:
 Tom Lane wrote:
 themselves.  (I'd also be inclined to include the hash code in the
 transmitted message, so that readers could more quickly ignore
 uninteresting messages.)

 Doesn't seem worth it, and how would the user know their hash;

This is not the user's problem; it is the writing backend's
responsibility to compute and add the hash.  Basically we trade off some
space to compute the hash code once at the writer not N times at all the
readers.

regards, tom lane








Re: [HACKERS] libpq++ build problems

2002-07-03 Thread Bruce Momjian

jtv wrote:
 On Wed, Jul 03, 2002 at 01:45:56PM -0400, Bruce Momjian wrote:
  
  Actually, I am confused. In src/template/freebsd I see:
  
  CFLAGS='-pipe'
  
  case $host_cpu in
    alpha*)   CFLAGS="$CFLAGS -O";;
    i386*)    CFLAGS="$CFLAGS -O2";;
  esac
  
  so why is he seeing the -O2 flag on FreeBSD/alpha?
 
 Probably because CXXFLAGS still has -O2 set.

Interesting.  I thought -O2 was only set in /template files, but I now
see it is set in configure too.  The following patch fixes the libpqxx
compile problem on FreeBSD/alpha.  The old code set -O2 for
FreeBSD/i386, but that is already set earlier.  The new patch just
updates the FreeBSD/alpha compile.



Index: src/template/freebsd
===
RCS file: /cvsroot/pgsql/src/template/freebsd,v
retrieving revision 1.10
diff -c -r1.10 freebsd
*** src/template/freebsd	16 Nov 2000 05:51:07 -0000	1.10
--- src/template/freebsd	3 Jul 2002 19:45:14 -0000
***
*** 1,7 
  CFLAGS='-pipe'
  
! case $host_cpu in
!   alpha*)   CFLAGS="$CFLAGS -O";;
!   i386*)    CFLAGS="$CFLAGS -O2";;
! esac
! 
--- 1,6 
  CFLAGS='-pipe'
  
! if [ `expr "$host_cpu" : "alpha"` -ge 5 ]
! then  CFLAGS="$CFLAGS -O"
!       CXXFLAGS="$CFLAGS -O"
! fi






Re: [HACKERS] Scope of constraint names

2002-07-03 Thread Rod Taylor

 I don't buy that argument even for foreign keys --- and remember that
 pg_constraint will also hold entries for CHECK, UNIQUE, and PRIMARY KEY
 constraints.  I don't want to have to take a global lock whenever we
 create an index.

I don't understand why a global lock is necessary -- and not simply a
lock on the pg_constraint table and the relations the constraint is
applied to (foreign key locks two, all others one).









[HACKERS] Compiling PostgreSQL with Intel C Compiler 6.0

2002-07-03 Thread Hans-Jürgen Schönig

I have tried to compile PostgreSQL with the Intel C Compiler 6.0 for 
Linux. During this process some errors occurred which I have attached to 
this email. I have compiled the sources using:

[hs@duron postgresql-7.2.1]$ cat compile.sh
#!/bin/sh

CC=/usr/local/intel_compiler/compiler60/ia32/bin/icc CFLAGS=' -O3 ' 
./configure
make

If anybody is interested in testing the compiler feel free to contact me.

Hans



mkdir man7
27329-fmgrtmp.c
heaptuple.c
indextuple.c
indexvalid.c
printtup.c
scankey.c
tupdesc.c
gist.c
gistget.c
gistscan.c
giststrat.c
hash.c
hashfunc.c
hashinsert.c
hashovfl.c
hashpage.c
hashscan.c
hashsearch.c
hashstrat.c
hashutil.c
heapam.c
hio.c
tuptoaster.c
genam.c
indexam.c
istrat.c
nbtcompare.c
nbtinsert.c
nbtpage.c
nbtree.c
nbtsearch.c
nbtstrat.c
nbtutils.c
nbtsort.c
rtget.c
rtproc.c
rtree.c
rtscan.c
rtstrat.c
clog.c
transam.c
varsup.c
xact.c
xid.c
xlog.c
xlogutils.c
rmgr.c
bootparse.c
bootscanner.c
bootstrap.c
catalog.c
heap.c
index.c
indexing.c
aclchk.c
pg_aggregate.c
pg_largeobject.c
pg_operator.c
pg_proc.c
pg_type.c
/tmp/genbkitmp.c
analyze.c
gram.c
keywords.c
parser.c
parse_agg.c
parse_clause.c
parse_expr.c
parse_func.c
parse_node.c
parse_oper.c
parse_relation.c
parse_type.c
parse_coerce.c
parse_target.c
scan.c
scansup.c
async.c
creatinh.c
command.c
comment.c
copy.c
indexcmds.c
define.c
remove.c
rename.c
vacuum.c
vacuumlazy.c
analyze.c
view.c
cluster.c
explain.c
sequence.c
trigger.c
user.c
proclang.c
dbcommands.c
variable.c
execAmi.c
execFlatten.c
execJunk.c
execMain.c
execProcnode.c
execQual.c
execScan.c
execTuples.c
execUtils.c
functions.c
instrument.c
nodeAppend.c
nodeAgg.c
nodeHash.c
nodeHashjoin.c
nodeIndexscan.c
nodeMaterial.c
nodeMergejoin.c
nodeNestloop.c
nodeResult.c
nodeSeqscan.c
nodeSetOp.c
nodeSort.c
nodeUnique.c
nodeLimit.c
nodeGroup.c
nodeSubplan.c
nodeSubqueryscan.c
nodeTidscan.c
spi.c
bit.c
dllist.c
lispsort.c
stringinfo.c
be-fsstubs.c
auth.c
crypt.c
hba.c
md5.c
password.c
pqcomm.c
pqformat.c
pqsignal.c
util.c
main.c
nodeFuncs.c
nodes.c
list.c
copyfuncs.c
equalfuncs.c
makefuncs.c
outfuncs.c
readfuncs.c
print.c
read.c
geqo_copy.c
geqo_eval.c
geqo_main.c
geqo_misc.c
geqo_pool.c
geqo_recombination.c
geqo_selection.c
geqo_erx.c
geqo_pmx.c
geqo_cx.c
geqo_px.c
geqo_ox1.c
geqo_ox2.c
allpaths.c
clausesel.c
costsize.c
indxpath.c
joinpath.c
joinrels.c
orindxpath.c
pathkeys.c
tidpath.c
createplan.c
initsplan.c
planmain.c
planner.c
setrefs.c
subselect.c
prepqual.c
preptlist.c
prepunion.c
prepkeyset.c
restrictinfo.c
clauses.c
plancat.c
joininfo.c
pathnode.c
relnode.c
tlist.c
var.c
dynloader.c
memcmp.c
postmaster.c
postmaster.c(1139): warning #186: pointless comparison of unsigned integer with zero
	if (PG_PROTOCOL_MAJOR(port->proto) < PG_PROTOCOL_MAJOR(PG_PROTOCOL_EARLIEST) ||
   ^

pgstat.c
pgstat.c(195): warning #167: argument of type int * is incompatible with parameter 
of type socklen_t={__socklen_t={unsigned int}} *restrict
	if (getsockname(pgStatSock, (struct sockaddr *) &pgStatAddr, &alen) < 0)
  ^

pgstat.c(1581): warning #167: argument of type int * is incompatible with parameter 
of type socklen_t={__socklen_t={unsigned int}} *restrict
					 (struct sockaddr *) &fromaddr, &fromlen);
  ^

regcomp.c
regerror.c
regexec.c
regfree.c
rewriteRemove.c
rewriteDefine.c
rewriteHandler.c
rewriteManip.c
rewriteSupport.c
buf_table.c
buf_init.c
bufmgr.c
freelist.c
localbuf.c
fd.c
buffile.c
freespace.c
ipc.c
ipci.c
pmsignal.c
shmem.c
shmqueue.c
sinval.c
sinvaladt.c
inv_api.c
lmgr.c
lock.c
proc.c
deadlock.c
lwlock.c
spin.c
s_lock.c
bufpage.c
itemptr.c
md.c
mm.c
smgr.c
smgrtype.c
dest.c
fastpath.c
postgres.c
pquery.c
utility.c
fmgrtab.c
acl.c
arrayfuncs.c
arrayutils.c
bool.c
cash.c
char.c
date.c
datetime.c
datum.c
float.c
float.c(202): warning #39: division by zero
val = NAN;
  ^

float.c(263): warning #39: division by zero
val = NAN;
  ^

format_type.c
geo_ops.c
geo_selfuncs.c
int.c
int8.c
like.c
like_match.c(256): warning #556: a value of type char * cannot be assigned to an 
entity of type unsigned char *
p = VARDATA(pat);
  ^

like_match.c(258): warning #556: a value of type char * cannot be assigned to an 
entity of type unsigned char *
e = VARDATA(esc);
  ^

like_match.c(266): warning #556: a value of type char * cannot be assigned to an 
entity of type unsigned char *
r = VARDATA(result);
  ^

like_match.c(289): warning #556: a value of type char * cannot be assigned to an 
entity of type unsigned char *
e = VARDATA(esc);
  ^

like.c(162): warning #556: a value of type char * cannot be assigned to an entity of 
type unsigned char *
s = 

Re: [HACKERS] BETWEEN Node DROP COLUMN

2002-07-03 Thread Hiroshi Inoue


Bruce Momjian wrote:
 
 Christopher Kings-Lynne wrote:
  Hi All,
 
  I have given up working on the BETWEEN node.  It got to the stage where I
  realised I was really out of my depth!  Rod Taylor has indicated an interest
  in the problem and I have sent him my latest patch, so hopefully he'll be
  able to crack it.
 
  So instead, I've taken up with the DROP COLUMN crusade.  It seems that the
  following are the jobs that need to be done:
 
 Great crusade!
 
  * Add attisdropped to pg_attribute
- Looking for takers for this one, otherwise I'll look into it.
 
 I can do this for you.  Just let me know when.
 
  * Fill out AlterTableDropColumn
- I've done this, with the assumption that attisdropped exists.  It sets
  attisdropped to true, drops the column default and renames the column.
  (Plus does all other normal ALTER TABLE checks)
  * Modify parser and other places to ignore dropped columns
- This is also up for grabs.
 
 As I remember, Hiroshi's drop column changed the attribute number to a
 special negative value, which required lots of changes to track.

??? What do you mean by *lots of* ?

regards,
Hiroshi Inoue
http://w2422.nsk.ne.jp/~inoue/





Re: [HACKERS] BETWEEN Node DROP COLUMN

2002-07-03 Thread Bruce Momjian

Hiroshi Inoue wrote:
  As I remember, Hiroshi's drop column changed the attribute number to a
  special negative value, which required lots of changes to track.
 
 ??? What do you mean by *lots of* ?

Yes, please remind me.  Was your solution renumbering the attno values? 
I think there are fewer cases to fix if we keep the existing attribute
numbering and just mark the column as deleted.  Is this accurate?









Re: [HACKERS] BETWEEN Node DROP COLUMN

2002-07-03 Thread Bruce Momjian

Hiroshi Inoue wrote:
 Bruce Momjian wrote:
  
  Hiroshi Inoue wrote:
As I remember, Hiroshi's drop column changed the attribute number to a
special negative value, which required lots of changes to track.
  
   ??? What do you mean by *lots of* ?
  
  Yes, please remind me.  Was your solution renumbering the attno values?
 
 Yes though I don't intend to object to Christopher's proposal.
 
  I think there are fewer cases to fix if we keep the existing attribute
  numbering and just mark the column as deleted.  Is this accurate?
 
 No. I don't understand why you think so. 

With the isdropped column, you really only need to deal with '*'
expansion in a few places, and prevent the column from being accessed. 
With renumbering, the backend loops that go through the attnos have to
be dealt with.

Is this correct?  I certainly prefer attno renumbering to isdropped
because it allows us to get DROP COLUMN without any client changes, or
at least with fewer because the dropped column has a negative attno.  Is
this accurate?









Re: [HACKERS] BETWEEN Node DROP COLUMN

2002-07-03 Thread Hiroshi Inoue
Bruce Momjian wrote:
 
 Hiroshi Inoue wrote:
   As I remember, Hiroshi's drop column changed the attribute number to a
   special negative value, which required lots of changes to track.
 
  ??? What do you mean by *lots of* ?
 
 Yes, please remind me.  Was your solution renumbering the attno values?

Yes though I don't intend to object to Christopher's proposal.

 I think there are fewer cases to fix if we keep the existing attribute
 numbering and just mark the column as deleted.  Is this accurate?

No. I don't understand why you think so. 

regards,
Hiroshi Inoue
http://w2422.nsk.ne.jp/~inoue/





Re: [HACKERS] BETWEEN Node DROP COLUMN

2002-07-03 Thread Hiroshi Inoue


Bruce Momjian wrote:
 
 Hiroshi Inoue wrote:
  Bruce Momjian wrote:
  
   Hiroshi Inoue wrote:
 As I remember, Hiroshi's drop column changed the attribute number to a
 special negative value, which required lots of changes to track.
   
??? What do you mean by *lots of* ?
  
   Yes, please remind me.  Was your solution renumbering the attno values?
 
  Yes though I don't intend to object to Christopher's proposal.
 
   I think there are fewer cases to fix if we keep the existing attribute
   numbering and just mark the column as deleted.  Is this accurate?
 
  No. I don't understand why you think so.
 
 With the isdropped column, you really only need to deal with '*'
 expansion in a few places, and prevent the column from being accessed.
 With renumbering, the backend loops that go through the attnos have to
 be dealt with.

I used the following macro in my trial implementation.
 #define COLUMN_IS_DROPPED(attribute) ((attribute)->attnum <= DROP_COLUMN_OFFSET)
The places where the macro was put are exactly the places
where attisdropped must be checked.

The difference is essentially small.  Please don't propagate
wrong information.
 
 Is this correct?  I certainly prefer attno renumbering to isdropped
 because it allows us to get DROP COLUMN without any client changes,

Unfortunately many apps rely on the fact that the attnos are
consecutive starting from 1. It was the main reason why Tom
rejected my trial. Nothing has changed about it.

regards,
Hiroshi Inoue
http://w2422.nsk.ne.jp/~inoue/





Re: [HACKERS] BETWEEN Node DROP COLUMN

2002-07-03 Thread Bruce Momjian

Christopher Kings-Lynne wrote:
 Yes, please remind me.  Was your solution renumbering the
  attno values?
   
Yes though I don't intend to object to Christopher's proposal.
 
 Hiroshi,
 
 I am thinking of rolling back my CVS to see if there's code from your
 previous test implementation that we can use.  Apart from the DropColumn
 function itself, what other changes did you make?  Did you have
 modifications for '*' expansion in the parser, etc.?

Yes, please review Hiroshi's work.  It is good work.  Can we have an
analysis of Hiroshi's approach vs. the isdropped approach?

Is it better to renumber the attno or to set a column to isdropped?  The
former may be easier on the clients.









Re: [HACKERS] BETWEEN Node DROP COLUMN

2002-07-03 Thread Bruce Momjian

Christopher Kings-Lynne wrote:
   I am thinking of rolling back my CVS to see if there's code from your
   previous test implementation that we can use.  Apart from the DropColumn
   function itself, what other changes did you make?  Did you have
   modifications for '*' expansion in the parser, etc.?
 
  Yes, please review Hiroshi's work.  It is good work.  Can we have an
  analysis of Hiroshi's approach vs the isdropped case.
 
 Yes, it is.  I've rolled it back and I'm already incorporating his changes
 to the parser into my patch.  I just have to grep all the source code for
 'HACK' to find all the changes.  It's all very handy.

Yes.  It should have been accepted long ago, but we were waiting for a
perfect solution which we all now know will never come.

 
  Is it better to renumber the attno or set a column to isdropped.  The
  former may be easier on the clients.
 
 Well, obviously I prefer the attisdropped approach.  I think it's clearer
 and there's less confusion.  As a lead developer of phpPgAdmin, that's what
 I'd prefer...  Hiroshi obviously prefers his solution, but doesn't object to

OK, can you explain the issues from a server and client perspective,
i.e. renumbering vs isdropped?









Re: [HACKERS] BETWEEN Node DROP COLUMN

2002-07-03 Thread Christopher Kings-Lynne
Yes, please remind me.  Was your solution renumbering the
 attno values?
  
   Yes though I don't intend to object to Christopher's proposal.

Hiroshi,

I am thinking of rolling back my CVS to see if there's code from your
previous test implementation that we can use.  Apart from the DropColumn
function itself, what other changes did you make?  Did you have
modifications for '*' expansion in the parser, etc.?

Chris






Re: [HACKERS] BETWEEN Node DROP COLUMN

2002-07-03 Thread Hiroshi Inoue
Christopher Kings-Lynne wrote:
 
 Yes, please remind me.  Was your solution renumbering the
  attno values?
   
Yes though I don't intend to object to Christopher's proposal.
 
 Hiroshi,
 
 I am thinking of rolling back my CVS to see if there's code from your
 previous test implementation that we can use.  Apart from the DropColumn
 function itself, what other changes did you make?  Did you have
 modifications for '*' expansion in the parser, etc.?

Don't mind my posting.
I'm only correcting a misunderstanding about my work.

regards,
Hiroshi Inoue
http://w2422.nsk.ne.jp/~inoue/



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly


[HACKERS] Adding attisdropped

2002-07-03 Thread Christopher Kings-Lynne

Hi,

I've attached the changes I've made to pg_attribute.h -- I can't see what's
wrong, but whenever I do an initdb it fails:

initdb -D /home/chriskl/local/data
The files belonging to this database system will be owned by user chriskl.
This user must also own the server process.

The database cluster will be initialized with locale C.

creating directory /home/chriskl/local/data... ok
creating directory /home/chriskl/local/data/base... ok
creating directory /home/chriskl/local/data/global... ok
creating directory /home/chriskl/local/data/pg_xlog... ok
creating directory /home/chriskl/local/data/pg_clog... ok
creating template1 database in /home/chriskl/local/data/base/1...
initdb failed.
Removing /home/chriskl/local/data.

Chris


Index: pg_attribute.h
===
RCS file: /projects/cvsroot/pgsql/src/include/catalog/pg_attribute.h,v
retrieving revision 1.93
diff -c -r1.93 pg_attribute.h
*** pg_attribute.h  2002/06/20 20:29:44 1.93
--- pg_attribute.h  2002/07/04 02:08:29
***
*** 142,147 
--- 142,150 
  
	/* Has DEFAULT value or not */
	bool		atthasdef;
+ 
+ 	/* Is dropped or not */
+ 	bool		attisdropped;
  } FormData_pg_attribute;
  
  /*
***
*** 150,156 
   * because of alignment padding at the end of the struct.)
   */
  #define ATTRIBUTE_TUPLE_SIZE \
!   (offsetof(FormData_pg_attribute,atthasdef) + sizeof(bool))
  
  /* 
   *Form_pg_attribute corresponds to a pointer to a tuple with
--- 153,159 
   * because of alignment padding at the end of the struct.)
   */
  #define ATTRIBUTE_TUPLE_SIZE \
!   (offsetof(FormData_pg_attribute,attisdropped) + sizeof(bool))
  
  /* 
   *Form_pg_attribute corresponds to a pointer to a tuple with
***
*** 164,170 
   * 
   */
  
! #define Natts_pg_attribute			15
  #define Anum_pg_attribute_attrelid		1
  #define Anum_pg_attribute_attname		2
  #define Anum_pg_attribute_atttypid		3
--- 167,173 
   * 
   */
  
! #define Natts_pg_attribute			16
  #define Anum_pg_attribute_attrelid		1
  #define Anum_pg_attribute_attname		2
  #define Anum_pg_attribute_atttypid		3
***
*** 180,185 
--- 183,189 
  #define Anum_pg_attribute_attalign		13
  #define Anum_pg_attribute_attnotnull		14
  #define Anum_pg_attribute_atthasdef		15
+ #define Anum_pg_attribute_attisdropped	16
  
  
  
***
*** 398,405 
  { 1249, {attstorage},   18, 0, 1, 11, 0, -1, -1, true, 'p', false, 'c', false, false }, \
  { 1249, {attisset},     16, 0, 1, 12, 0, -1, -1, true, 'p', false, 'c', false, false }, \
  { 1249, {attalign},     18, 0, 1, 13, 0, -1, -1, true, 'p', false, 'c', false, false }, \
! { 1249, {attnotnull},   16, 0, 1, 14, 0, -1, -1, true, 'p', false, 'c', false, false }, \
! { 1249, {atthasdef},    16, 0, 1, 15, 0, -1, -1, true, 'p', false, 'c', false, false }
  
  DATA(insert ( 1249 attrelid  26 DEFAULT_ATTSTATTARGET 4 1 0 -1 -1 t p f i f f));
  DATA(insert ( 1249 attname   19 DEFAULT_ATTSTATTARGET NAMEDATALEN 2 0 -1 -1 f p f i f f));
--- 402,410 ----
  { 1249, {attstorage},   18, 0, 1, 11, 0, -1, -1, true, 'p', false, 'c', false, false }, \
  { 1249, {attisset},     16, 0, 1, 12, 0, -1, -1, true, 'p', false, 'c', false, false }, \
  { 1249, {attalign},     18, 0, 1, 13, 0, -1, -1, true, 'p', false, 'c', false, false }, \
! { 1249, {attnotnull},   16, 0, 1, 14, 0, -1, -1, true, 'p', false, 'c', false, false }, \
! { 1249, {atthasdef},    16, 0, 1, 15, 0, -1, -1, true, 'p', false, 'c', false, false }, \
! { 1249, {attisdropped}, 16, 0, 1, 16, 0, -1, -1, true, 'p', false, 'c', false, false }
  
  DATA(insert ( 1249 attrelid  26 DEFAULT_ATTSTATTARGET 4 1 0 -1 -1 t p f i f f));
  DATA(insert ( 1249 attname   19 DEFAULT_ATTSTATTARGET NAMEDATALEN 2 0 -1 -1 f p f i f f));
***
*** 416,421 
--- 421,427 
  DATA(insert ( 1249 attalign   18 0  1  13 0 -1 -1 t p f c f f));
  DATA(insert ( 1249 attnotnull 16 0  1  14 0 -1 -1 t p f c f f));
  DATA(insert ( 1249 atthasdef  16 0  1  15 0 -1 -1 t p f c f f));
+ DATA(insert ( 1249 attisdropped   16 0  1  16 0 -1 -1 t p f c f f));
  DATA(insert ( 1249 ctid  27 0 6 -1 0 -1 -1 f p f i f f));
  /* no OIDs in pg_attribute */
  DATA(insert ( 1249 xmin  28 0 4 -3 0 -1 -1 t p f i f f));



---(end of broadcast)---
TIP 4: Don't 'kill -9' the postmaster



Re: [HACKERS] BETWEEN Node DROP COLUMN

2002-07-03 Thread Christopher Kings-Lynne

  I am thinking of rolling back my CVS to see if there's code from your
  previous test implementation that we can use.  Apart from the DropColumn
  function itself, what other changes did you make?  Did you have
  modifications for '*' expansion in the parser, etc.?

 Yes, please review Hiroshi's work.  It is good work.  Can we have an
 analysis of Hiroshi's approach vs. the isdropped case?

Yes, it is.  I've rolled it back and I'm already incorporating his changes
to the parser into my patch.  I just have to grep all the source code for
'HACK' to find all the changes.  It's all very handy.

 Is it better to renumber the attno or set a column to isdropped.  The
 former may be easier on the clients.

Well, obviously I prefer the attisdropped approach.  I think it's clearer
and there's less confusion.  As the head developer of phpPgAdmin, that's what
I'd prefer...  Hiroshi obviously prefers his solution, but doesn't object to
mine/Tom's.  I think that with all the schema-related changes that clients
will have to handle in 7.3, we may as well hit them with the dropped-column
stuff in the same go; that way there are fewer rounds of clients scrambling to
keep up with the server.

I intend to email every single postgres client I can find and tell them
about the new changes, well before we release 7.3...

Chris




---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])





Re: [HACKERS] BETWEEN Node DROP COLUMN

2002-07-03 Thread Christopher Kings-Lynne

  Well, obviously I prefer the attisdropped approach.  I think it's clearer
  and there's less confusion.  As a head developer for phpPgAdmin that's what
  I'd prefer...  Hiroshi obviously prefers his solution, but doesn't object to

 OK, can you explain the issues from a server and client perspective,
 i.e. renumbering vs isdropped?

Well, in the renumbering case, the client needs to know about missing attnos
and it has to know to ignore negative attnos (which it probably does
already), i.e. psql and pg_dump wouldn't have to be modified in that case.

In the isdropped case, the client needs to know to exclude any column with
'attisdropped' set to true.

So in both cases, the client needs to be updated.  I personally prefer the
explicit 'is dropped' as opposed to the implicit 'negative number', but hey.

*sigh* Now I've gone and made an argument for the renumbering case.  I'm
going to have a good look at Hiroshi's old code and see which one is less
complicated, etc.  So far all I've really needed to do is redefine Hiroshi's
COLUMN_DROPPED macro.

I'm sure that both methods could be made to handle an 'ALTER TABLE/SET TYPE'
syntax.

Chris




---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly





Re: [HACKERS] BETWEEN Node DROP COLUMN

2002-07-03 Thread Bruce Momjian

Christopher Kings-Lynne wrote:
   Well, obviously I prefer the attisdropped approach.  I think it's clearer
   and there's less confusion.  As a head developer for phpPgAdmin that's what
   I'd prefer...  Hiroshi obviously prefers his solution, but doesn't object to
 
  OK, can you explain the issues from a server and client perspective,
  i.e. renumbering vs isdropped?
 
 Well in the renumbering case, the client needs to know about missing attnos
 and it has to know to ignore negative attnos (which it probably does
 already).  ie. psql and pg_dump wouldn't have to be modified in that case.
 
 In the isdropped case, the client needs to know to exclude any column with
 'attisdropped' set to true.
 
 So in both cases, the client needs to be updated.  I personally prefer the
 explicit 'is dropped' as opposed to the implicit 'negative number', but hey.
 
 *sigh* Now I've gone and made an argument for the renumbering case.  I'm
 going to have a good look at Hiroshi's old code and see which one is less
 complicated, etc.  So far all I've really need to do is redefine Hiroshi's
 COLUMN_DROPPED macro.
 
 I'm sure that both methods could be made to handle a 'ALTER TABLE/SET TYPE'
 syntax.

Yes!  This is exactly what I would like investigated.  I am embarrassed
to see that we had Hiroshi's patch all this time and never implemented
it.

I think it underscores that we have drifted too far into the code-purity
camp and need a little reality check: users have needs, and we should
try to meet them if we want to be successful.  How many DROP COLUMN
gripes have we heard over the years!  Now I am upset.

OK, I've calmed down now.  What I would like to know is which DROP COLUMN
method is easier on the server end, and which is easier on the client
end.  If one is easier in both places, let's use that.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  [EMAIL PROTECTED]                    |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html





Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-03 Thread Tom Lane

Hannu Krosing [EMAIL PROTECTED] writes:
 Right.  But we play similar games already with the existing SI buffer,
 to wit:

 It means a full seq scan over pointers ;)

I have not seen any indication that the corresponding scan in the SI
code is a bottleneck --- and that has to scan over *all* backends,
without even the opportunity to skip those that aren't LISTENing.

 OK. Now, how will we introduce transactional behaviour to this scheme ?

It's no different from before --- notify messages don't get into the
buffer at all, until they're committed.  See my earlier response to Neil.

regards, tom lane



---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html





Re: [HACKERS] Scope of constraint names

2002-07-03 Thread Tom Lane

Rod Taylor [EMAIL PROTECTED] writes:
 I don't want to have to take a global lock whenever we
 create an index.

 I don't understand why a global lock is necessary --

To be sure we are creating a unique constraint name.

 and not simply a lock on the pg_constraint table

In this context, a lock on pg_constraint *is* global, because it will
mean that no one else can be creating an index on some other table.
They'd need to hold that same lock to ensure that *their* chosen
constraint name is unique.

regards, tom lane



---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org





Re: [HACKERS] regress/results directory problem

2002-07-03 Thread Tom Lane

Bruce Momjian [EMAIL PROTECTED] writes:
 Marc has removed the regress/results directory from CVS.

Uh ... say it ain't so, Joe!

regress/results/Makefile was part of several releases.  If you
really did that, then it is no longer possible to extract the state
of some past releases from CVS.

This cure is way worse than the disease.

regards, tom lane



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly





Re: [HACKERS] Compiling PostgreSQL with Intel C Compiler 6.0

2002-07-03 Thread Tom Lane

Hans-Jürgen Schönig [EMAIL PROTECTED] writes:
 I have tried to compile PostgreSQL with the Intel C Compiler 6.0 for 
 Linux. During this process some errors occurred which I have attached to 
 this email. I have compiled the sources using:

These are not errors, only overly-pedantic warnings.

regards, tom lane



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]





Re: [HACKERS] libpq++ build problems

2002-07-03 Thread Tom Lane

Bruce Momjian [EMAIL PROTECTED] writes:
 ... The following patch fixes the libpqxx
 compile problem on FreeBSD/alpha.  The old code set -O2 for
 FreeBSD/i386, but that is already set earlier.  The new patch just
 updates the FreeBSD/alpha compile.

As a general rule, anything that affects one *BSD affects them all.
I am always very suspicious of any patch that changes only one of
the *BSD templates or makefiles.  I'm not even convinced we should
have separate makefiles/templates for 'em ...

regards, tom lane



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]





Re: [HACKERS] BETWEEN Node DROP COLUMN

2002-07-03 Thread Christopher Kings-Lynne
 Unfortunately many apps rely on the fact that the attnos are
 consecutive starting from 1. It was the main reason why Tom
 rejected my trial. Nothing has changed about it.

OK, I've been looking at Hiroshi's implementation.  It's basically
semantically equivalent to mine from what I can see so far.  The only
difference really is in how the dropped columns are marked.

I've been ruminating on Hiroshi's statement at the top there.  What was the
reasoning for assuming that 'many apps rely on the fact that the attnos are
consecutive'?  Is that true?  phpPgAdmin doesn't.  In fact, phpPgAdmin won't
require any changes with Hiroshi's implementation, but will require changes
with mine.

Anyway, an app that relies on consecutive attnos is going to have pain
skipping over attisdropped columns anyway...

In fact, I'm now beginning to think that I should just resurrect Hiroshi's
implementation.  I'm prepared to do that if people like...

Chris




---(end of broadcast)---
TIP 6: Have you searched our list archives?

http://archives.postgresql.org


Re: [HACKERS] Adding attisdropped

2002-07-03 Thread Tom Lane

Christopher Kings-Lynne [EMAIL PROTECTED] writes:
 I've attached the changes I've made to pg_attribute.h - I can't see what's
 wrong but whenever I do an initdb it fails:

Did you change the relnatts entry in pg_class.h for pg_attribute?

More generally, run initdb with -d or -v or whatever its debug-output
switch is, and look at the last few lines to see the actual error.
(Caution: this may produce megabytes of output.)

regards, tom lane



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]





Re: [HACKERS] BETWEEN Node DROP COLUMN

2002-07-03 Thread Tom Lane

Hiroshi Inoue [EMAIL PROTECTED] writes:
 I used the following macro in my trial implementation.
  #define COLUMN_IS_DROPPED(attribute) ((attribute)->attnum <= DROP_COLUMN_OFFSET)
 The places where the macro was put are exactly the places
 where attisdropped must be checked.

Actually, your trial required column dropped-ness to be checked in
many more places than the proposed approach does.  Since you renumbered
the dropped column, nominal column numbers didn't correspond to physical
order of values in tuples anymore; that meant checking for dropped
columns in many low-level tuple manipulations.

 Is this correct?  I certainly prefer attno renumbering to isdropped
 because it allows us to get DROP COLUMN without any client changes,

 Unfortunately many apps rely on the fact that the attnos are
 consecutive starting from 1. It was the main reason why Tom
 rejected my trial. Nothing has changed about it.

I'm still not thrilled about it ... but I don't see a reasonable way
around it, either.  I don't see any good way to do DROP COLUMN
without breaking applications that make such assumptions.  Unless
you have one, we may as well go for the approach that adds the least
complication to the backend.

regards, tom lane



---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly





Re: [HACKERS] BETWEEN Node DROP COLUMN

2002-07-03 Thread Bruce Momjian

Christopher Kings-Lynne wrote:
  Unfortunately many apps rely on the fact that the attnos are
  consecutive starting from 1. It was the main reason why Tom
  rejected my trial. Nothing has changed about it.
 
 OK, I've been looking at Hiroshi's implementation.  It's basically
 semantically equivalent to mine from what I can see so far.  The only
 difference really is in how the dropped columns are marked.
 
 I've been ruminating on Hiroshi's statement at the top there.  What was the
 reasoning for assuming that 'many apps rely on the fact that the attnos are
 consecutive'?  Is that true?  phpPgAdmin doesn't.  In fact, phpPgAdmin won't
 require any changes with Hiroshi's implementation and will require changes
 with mine.
 
 Anyway, an app that relies on consecutive attnos is going to have pain
 skipping over attisdropped columns anyway???
 
 In fact, I'm now beginning to think that I should just resurrect Hiroshi's
 implementation.  I'm prepared to do that if people like...

Well, you have clearly identified that Hiroshi's approach is cleaner for
clients, because most clients don't need any changes.  If the server end
looks equivalent for both approaches, I suggest you get started with
Hiroshi's idea.

When Hiroshi's idea was originally proposed, some didn't like the
uncleanliness of it, particularly that relations relying on attno
would all have to be adjusted or removed.  We didn't have pg_depend, of
course, so there was this kind of gap in knowing how to remove all
references to the dropped column.

There was also this idea that somehow the fairy software goddess was
going to come down some day and give us a cleaner way to implement DROP
COLUMN.  She still hasn't shown up.  :-)

I just read over TODO.detail/drop and my memory was correct.  It was a
mixture of having no pg_depend coupled with other ideas.  Now that
pg_depend is coming, DROP COLUMN is ripe for a solution.

-- 
  Bruce Momjian                        |  http://candle.pha.pa.us
  [EMAIL PROTECTED]                    |  (610) 853-3000
  +  If your life is a hard drive,     |  830 Blythe Avenue
  +  Christ can be your backup.        |  Drexel Hill, Pennsylvania 19026



---(end of broadcast)---
TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]