Re: [HACKERS] [PATCHES] Reduce heap tuple header size

2002-07-02 Thread Bruce Momjian

Tom Lane wrote:
 Manfred Koizar [EMAIL PROTECTED] writes:
  ... I wonder
  whether we shouldn't apply this second version (without the configure
  parts) and put all forthcoming format changes under #ifndef
  PG72FORMAT.
 
 Seems reasonable.  I generally dislike #ifdef clutter, but the #ifs
 would only be around a couple of macro definitions AFAICT, so the
 readability hit would be minimal.  And someone who wanted
 back-compatibility would be able to have it, whichever way we jump
 on the decision for 7.3.

I committed the version with no #ifdef's.  If we need them, we can add
them later, but it is likely we will never need them.

 At the rate Manfred is going, he'll have patches for all the tuple and
 page header related issues before August anyway ... so my original gripe
 about wanting to group those changes into a single release will become
 moot ;-).  I certainly have no objection to doing them all in 7.3
 instead of 7.4 if we can get it done.

Yes.  Manfred, keep going.  ;-)

-- 
  Bruce Momjian                   |  http://candle.pha.pa.us
  [EMAIL PROTECTED]               |  (610) 853-3000
  +  If your life is a hard drive,|  830 Blythe Avenue
  +  Christ can be your backup.   |  Drexel Hill, Pennsylvania 19026



---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send unregister YourEmailAddressHere to [EMAIL PROTECTED])





Re: [HACKERS] Page OpaqueData

2002-07-02 Thread Bruce Momjian


Patch applied.  Thanks.

---


 Manfred Koizar wrote:
 On Mon, 24 Jun 2002 12:53:42 -0400, Tom Lane [EMAIL PROTECTED] wrote:
 the extra level of struct naming for pd_opaque has no obvious
 usefulness.
 
  [...] should I post a patch that puts pagesize directly into
  PageHeaderData?
 
 If you're so inclined.  Given that pd_opaque is hidden in those macros,
 there wouldn't be much of any gain in readability either, so I haven't
 worried about changing the declaration.
 
 Thanks for the clarification.  Here is the patch.  Not much gain, but at
 least it saves the next junior hacker from scratching his head ...
 
 Cordialmente
  Manfredinho :-)
 
 PS: Please do not apply before Page access patch from 2002-06-20.
 
 diff -ru ../base/src/backend/access/hash/hashutil.c src/backend/access/hash/hashutil.c
 --- ../base/src/backend/access/hash/hashutil.c	2002-05-21 11:54:11.000000000 +0200
 +++ src/backend/access/hash/hashutil.c	2002-06-21 16:43:24.000000000 +0200
 @@ -131,13 +131,13 @@
  	HashPageOpaque opaque;
  
  	Assert(page);
 -	Assert(((PageHeader) (page))->pd_lower >= (sizeof(PageHeaderData) - sizeof(ItemIdData)));
 +	Assert(((PageHeader) (page))->pd_lower >= SizeOfPageHeaderData);
  #if 1
  	Assert(((PageHeader) (page))->pd_upper >=
  		   (BLCKSZ - MAXALIGN(sizeof(HashPageOpaqueData))));
  	Assert(((PageHeader) (page))->pd_special ==
  		   (BLCKSZ - MAXALIGN(sizeof(HashPageOpaqueData))));
 -	Assert(((PageHeader) (page))->pd_opaque.od_pagesize == BLCKSZ);
 +	Assert(PageGetPageSize(page) == BLCKSZ);
  #endif
  	if (flags)
  	{
 diff -ru ../base/src/include/storage/bufpage.h src/include/storage/bufpage.h
 --- ../base/src/include/storage/bufpage.h	2002-06-20 12:22:21.000000000 +0200
 +++ src/include/storage/bufpage.h	2002-06-21 16:38:17.000000000 +0200
 @@ -65,8 +65,7 @@
   * byte-offset position, tuples can be physically shuffled on a page
   * whenever the need arises.
   *
 - * AM-generic per-page information is kept in the pd_opaque field of
 - * the PageHeaderData.   (Currently, only the page size is kept here.)
 + * AM-generic per-page information is kept in PageHeaderData.
   *
   * AM-specific per-page data (if any) is kept in the area marked special
   * space; each AM has an opaque structure defined somewhere that is
 @@ -92,25 +91,18 @@
  
  
  /*
 + * disk page organization
   * space management information generic to any page
   *
 - *		od_pagesize - size in bytes.
 - *				Minimum possible page size is perhaps 64B to fit
 - *				page header, opaque space and a minimal tuple;
 - *				of course, in reality you want it much bigger.
 - *				On the high end, we can only support pages up
 - *				to 32KB because lp_off/lp_len are 15 bits.
 - */
 -typedef struct OpaqueData
 -{
 -	uint16		od_pagesize;
 -} OpaqueData;
 -
 -typedef OpaqueData *Opaque;
 -
 -
 -/*
 - * disk page organization
 + *		pd_lower	- offset to start of free space.
 + *		pd_upper	- offset to end of free space.
 + *		pd_special	- offset to start of special space.
 + *		pd_pagesize - size in bytes.
 + *				Minimum possible page size is perhaps 64B to fit
 + *				page header, opaque space and a minimal tuple;
 + *				of course, in reality you want it much bigger.
 + *				On the high end, we can only support pages up
 + *				to 32KB because lp_off/lp_len are 15 bits.
   */
  typedef struct PageHeaderData
  {
 @@ -124,7 +116,7 @@
  	LocationIndex pd_lower;		/* offset to start of free space */
  	LocationIndex pd_upper;		/* offset to end of free space */
  	LocationIndex pd_special;	/* offset to start of special space */
 -	OpaqueData	pd_opaque;		/* AM-generic information */
 +	uint16		pd_pagesize;
  	ItemIdData	pd_linp[1];		/* beginning of line pointer array */
  } PageHeaderData;
  
 @@ -216,14 +208,14 @@
   * however, it can be called on a page for which there is no buffer.
   */
  #define PageGetPageSize(page) \
 -	((Size) ((PageHeader) (page))->pd_opaque.od_pagesize)
 +	((Size) ((PageHeader) (page))->pd_pagesize)
  
  /*
   * PageSetPageSize
   *		Sets the page size of a page.
   */
  #define PageSetPageSize(page, size) \
 -	(((PageHeader) (page))->pd_opaque.od_pagesize = (size))
 +	(((PageHeader) (page))->pd_pagesize = (size))
  
  /* ----------------------------------------------------------------
   *		page special data macros
 
 Servus
  Manfred
 
 
 

Re: [HACKERS] aggregate that returns array

2002-07-02 Thread Hannu Krosing

On Tue, 2002-07-02 at 03:25, David M. Kaplan wrote:
 Hi,
 
 Has anyone considered creating an aggregate function that returns an 
 array of all matching rows?

check contrib/intagg for a function that does it for integers.

---
Hannu









Re: [HACKERS] DROP COLUMN Proposal

2002-07-02 Thread Christopher Kings-Lynne

 The nice thing about this implementation approach is that most of the
 backend need not be aware of deleted columns.  There are a few places in
 the parser (probably few enough to count on one hand) that will have to
 explicitly check for and reject references to dropped columns, and
 you're done.  The rewriter, planner and executor are blissfully ignorant
 of the whole deal.

If you can enumerate these places without much effort, it'd be appreciated!

I found:

expandRTE() in parser/parse_relation.c

What else?

Chris









[HACKERS] listen/notify argument (old topic revisited)

2002-07-02 Thread Jeff Davis

A while ago, I started a small discussion about passing arguments to a NOTIFY 
so that the listening backend could get more information about the event.

There wasn't exactly a consensus from what I understand, but the last thing I 
remember is that someone intended to speed up the notification process by 
storing the events in shared memory segments (IIRC this was Tom's idea). That 
would create a remote possibility of a spurious notification, but the idea is 
that the listening application can check the status and determine what 
happened.

I looked at the TODO, but I couldn't find anything, nor could I find anything 
in the docs. 

Is someone still interested in implementing this feature? Are there still 
people who disagree with the above implementation strategy?

Regards,
Jeff








Re: [HACKERS] regress/results directory problem

2002-07-02 Thread Tom Lane

Bruce Momjian [EMAIL PROTECTED] writes:
 Does anyone know when this problem was added?  I don't see any
 significant changes in GNUmakefile.

AFAICT, the results directory hasn't been touched in 21 months.
Are you sure you haven't changed your own CVS setup or arguments?

 The -d brings in any directories that may have been created since the
 last checkout.  That is causing problems when the regress/results
 directory has been deleted and recreated.

I use cvs update -d -P myself, and I do *not* see it creating the
results directory.

regards, tom lane








Re: [HACKERS] (A) native Windows port

2002-07-02 Thread Jan Wieck

Christopher Kings-Lynne wrote:
 
   It would all work out of the box and would do wonderful things for the
   Postgres community.
 
  I like this idea, but let me just bring one little issue to note: are you
  going to handle upgrades, and if so, how?  How are you going to
  do a major
  version upgrade?
 
 Well, the easiest way would be to get them to uninstall the old version
 first, but I'm sure it can be worked out.  Perhaps even we shouldn't
 overwrite the old version anyway?

The question is not how to replace some .EXE and .DLL files or modify
something in the registry. The question is what to do with the existing
databases in the case of a catalog version change. You have to dump and
restore. 


Jan

-- 

#=======================================================================#
# It's easier to get forgiveness for being wrong than for being right.  #
# Let's break this rule - forgive me.                                   #
#==================================================== [EMAIL PROTECTED] #








Re: [HACKERS] regress/results directory problem

2002-07-02 Thread Thomas Lockhart

...
 I am backing out my GNUmakefile change.  I am still unclear why this has
 started happening all of a sudden.

?

The results/ directory should not be a part of CVS (since it is assumed
to not exist by the regression tests). But it has been in CVS since 1997
during a period of time when a Makefile in that directory was
responsible for cleaning the directory. 

We have been relying on the pruning capabilities of CVS and so never
really noticed that this was the case (I use -Pd almost always too).

I doubt anything has changed recently in this regard.

- Thomas








Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-02 Thread Neil Conway

On Tue, Jul 02, 2002 at 02:37:19AM -0700, Jeff Davis wrote:
 A while ago, I started a small discussion about passing arguments to a NOTIFY 
 so that the listening backend could get more information about the event.

Funny, I was just about to post to -hackers about this.

 There wasn't exactly a consensus from what I understand, but the last thing I 
 remember is that someone intended to speed up the notification process by 
 storing the events in shared memory segments (IIRC this was Tom's idea). That 
 would create a remote possibility of a spurious notification, but the idea is 
 that the listening application can check the status and determine what 
 happened.

Yes, that was Tom Lane. IMHO, we need to replace the existing
pg_listener scheme with an improved model if we want to make any
significant improvements to asynchronous notifications. In summary,
the two designs that have been suggested are:

pg_notify: a new system catalog, stores notifications only --
pg_listener stores only listening backends.

shmem: all notifications are done via shared memory and not stored
in system catalogs at all, in a manner similar to the cache
invalidation code that already exists. This avoids the MVCC-induced
performance problem with storing notifications in system catalogs,
but can lead to spurious notifications -- the statically sized
buffer in which notifications are stored can overflow. Applications
will be able to differentiate between overflow-induced and regular
messages.

 Is someone still interested in implementing this feature? Are there still 
 people who disagree with the above implementation strategy?

Some people objected to shmem at the time; personally, I'm not really
sure which design is best. Any comments from -hackers?

If there's a consensus on which route to take, I'll probably implement
the preferred design for 7.3. However, I think that a proper
implementation of notify messages will need an FE/BE protocol change,
so that will need to wait for 7.4.

Cheers,

Neil

-- 
Neil Conway [EMAIL PROTECTED]
PGP Key ID: DB3C29FC








Re: [HACKERS] regress/results directory problem

2002-07-02 Thread Bruce Momjian

Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  Does anyone know when this problem was added?  I don't see any
  significant changes in GNUmakefile.
 
 AFAICT, the results directory hasn't been touched in 21 months.
 Are you sure you haven't changed your own CVS setup or arguments?

I looked at that.  My cvs hasn't changed, my OS hasn't changed.  I
deleted my entire tree and did another checkout, but that didn't help.

  The -d brings in any directories that may have been created since the
  last checkout.  That is causing problems when the regress/results
  directory has been deleted and recreated.
 
 I use cvs update -d -P myself, and I do *not* see it creating the
 results directory.

The problem is that if the results directory exists, the update fails
because there is no /CVS directory in there.

Here is a sample session.  At the start regress/results does not exist:

---

$ cvs update -d -P pgsql
? pgsql/config.log
...
$ cd /pg/test/regress/
$ mkdir results
$ cd -
/pgcvs
$ cvs update -d -P pgsql
? pgsql/config.log
...
? pgsql/src/test/regress/results
cvs update: in directory pgsql/src/test/regress/results:
cvs update: cannot open CVS/Entries for reading: No such file or directory

---

It seems to be failing because there is no CVS directory in results.

This is CVS:

Concurrent Versions System (CVS) 1.10.3 (client/server)









Re: [HACKERS] regress/results directory problem

2002-07-02 Thread Tom Lane

Bruce Momjian [EMAIL PROTECTED] writes:
 I use cvs update -d -P myself, and I do *not* see it creating the
 results directory.

 The problem is that if the results directory exists, the update fails
 because there is no /CVS directory in there.

Ah.  I always do a 'make distclean' before I risk a whole-tree update...

regards, tom lane








Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-02 Thread Bruce Momjian

Jeff Davis wrote:
 A while ago, I started a small discussion about passing arguments to a NOTIFY 
 so that the listening backend could get more information about the event.
 
 There wasn't exactly a consensus from what I understand, but the last thing I 
 remember is that someone intended to speed up the notification process by 
 storing the events in shared memory segments (IIRC this was Tom's idea). That 
 would create a remote possibility of a spurious notification, but the idea is 
 that the listening application can check the status and determine what 
 happened.

I don't see a huge value to using shared memory.   Once we get
auto-vacuum, pg_listener will be fine, and shared memory like SI is just
too hard to get working reliably because of all the backends
reading/writing in there.  We have tables that have the proper sharing
semantics;  I think we should use those and hope we get autovacuum soon.

As far as the message, perhaps passing the oid of the pg_listener row to
the backend would help, and then the backend can look up any message for
that oid in pg_listener.









Re: [HACKERS] (A) native Windows port

2002-07-02 Thread Lamar Owen

On Tuesday 02 July 2002 09:52 am, Jan Wieck wrote:
 Christopher Kings-Lynne wrote:
It would all work out of the box and would do wonderful things for
the Postgres community.

   I like this idea, but let me just bring one little issue to note: are
   you going to handle upgrades, and if so, how?  How are you going to do
   a major
   version upgrade?

  Well, the easiest way would be to get them to uninstall the old version
  first, but I'm sure it can be worked out.  Perhaps even we shouldn't
  overwrite the old version anyway?

 The question is not how to replace some .EXE and .DLL files or modify
 something in the registry. The question is what to do with the existing
 databases in the case of a catalog version change. You have to dump and
 restore.

Now, riddle me this: we're going to explain the vagaries of
dump/initdb/restore to a typical Windows user, and further explain why the
dump won't necessarily restore because of a bug in the older version's
dump?

The typical Windows user is going to barf when confronted with our extant 
'upgrade' process.  While I really could not care less if PostgreSQL goes to 
Windows or not, I am of a mind to support the Win32 effort if it gets an 
upgrade path done so that everyone can upgrade sanely.  At least the Windows 
installer can check for existing database structures and ask what to do -- 
the RPM install cannot do this.  In fact, the Windows installer *must* check 
for an existing database installation, or we're going to get fried by typical 
Windows users.

And if having a working, usable, Win32 native port gets the subject of good 
upgrading higher up the priority list, BY ALL MEANS LET'S SUPPORT WIN32 
NATIVELY! :-) (and I despise Win32)

But it shouldn't be an installer issue -- this is an issue which cause pain 
for all of our users, not just Windows or RPM (or Debian) users.  Upgrading 
(pg_upgrade is a start -- but it's not going to work as written on Windows) 
needs to be core functionality.  If I can't easily upgrade my database, what 
good are new features going to do for me?

Martin O has come up with a 'pg_fsck' utility that, IMHO, holds a great deal 
of promise for seamless binary 'in place' upgrading.  He has been able to 
write code to read multiple versions' database structures -- proving that it 
CAN be done.

Windows programs such as Lotus Organizer, Microsoft Access, Lotus Approach, 
and others, allow you to convert the old to the new as part of initial 
startup.  This will be a prerequisite for wide acceptance in the Windows 
world, methinks.
-- 
Lamar Owen
WGCR Internet Radio
1 Peter 4:11








[HACKERS] elog() proposals

2002-07-02 Thread Bruce Momjian

I just submitted a patch to fix various elog() issues.  I have two
additional proposals.

First, I think ERROR/DEBUG/NOTICE/FATAL, etc. are too generic and cause
compile problems/warnings, especially with Perl.  I suggest renaming all
elog levels to PG*, so it would be PGERROR and PGINFO.  We could also do
E_* or E*.  I am interested in other opinions.

Second, I propose adding two GUC variables that control how much elog()
info is sent to the server and client logs.  I suggest
'server_message_min' with possible values DEBUG, LOG, NOTICE, ERROR,
FATAL, CRASH; and 'client_message_min' with possible values INFO,
NOTICE, ERROR, FATAL, CRASH.

We currently have 'debug_level' which controls how much debug
information is sent to the server logs.  I believe this would have to
remain because it controls how _much_ DEBUG output is printed.  We could
go with some kind of hybrid where DEBUG 5 sets DEBUG as the minimum
reporting mode with level 5.

This functionality mimics the log filter functionality of syslog(3).

Comments?
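A hypothetical postgresql.conf fragment under this proposal (the variable names and values are taken from the proposal itself and do not exist in any released version):

```
# send everything from NOTICE up to the server log,
# but only ERROR and above to clients
server_message_min = notice   # debug | log | notice | error | fatal | crash
client_message_min = error    # info | notice | error | fatal | crash
```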







Re: [HACKERS] Scope of constraint names

2002-07-02 Thread Rod Taylor

 One problem I see is that pg_constraint entries can *only* be associated
 with relations; so the table has no way to represent constraints
 associated with domains --- not to mention assertions, which aren't

It's ugly, but one could make the relid 0, and add a typeid which is
non-zero to represent a constraint against a domain.  Relation
constraints have typeid 0 and relid as a normal number.

Obviously I prefer unique constraint names mostly for my users.  For
some reason they tend to try to make assumptions about a constraint
given the name and have been fooled about what the constraint actually
is more than once due to 'having seen it before elsewhere'.

Is applying a lock on the pg_constraint table really that bad during
creation?  Sure, you could only make one constraint at a time -- but
that's the same with relations, types, and a fair number of other things
that are usually created at the same time (or same transaction) as most
constraints will be.











[HACKERS] ignore this ...

2002-07-02 Thread Marc G. Fournier










Re: [HACKERS] (A) native Windows port

2002-07-02 Thread Jan Wieck

Lamar Owen wrote:
 [...]
 
 And if having a working, usable, Win32 native port gets the subject of good
 upgrading higher up the priority list, BY ALL MEANS LET'S SUPPORT WIN32
 NATIVELY! :-) (and I despise Win32)

Hehehe :-)

 [...]
 Martin O has come up with a 'pg_fsck' utility that, IMHO, holds a great deal
 of promise for seamless binary 'in place' upgrading.  He has been able to
 write code to read multiple versions' database structures -- proving that it
 CAN be done.

Unfortunately it's not the on-disk binary format of files that causes
the big problems. Our dump/initdb/restore sequence is also the solution
for system catalog changes. If we add/remove internal functions, there
will be changes to pg_proc. When the representation of parsetrees
changes, there will be changes to pg_rewrite (dunno how to convert
that). Consider adding another attribute to pg_class. You'd have to add
a row in pg_attribute, possibly (because it likely isn't added at the
end) increment the attno for 50% of all pg_attribute entries, and of
course insert an attribute in the middle of all existing pg_class rows
... ewe.


Jan









[HACKERS] Does postgreSQL have distributed database management?

2002-07-02 Thread liping guo

 Hi,
 I know ORACLE has distributed database management features, such as
 replication. Can PostgreSQL do this? Is any software available for this
purpose for PostgreSQL?

 Thanks,

Liping









Re: [HACKERS] Integrating libpqxx

2002-07-02 Thread Bruce Momjian


OK, I have added this to our CVS under interfaces/libpqxx.  I have not
migrated over the CVS history.  If we have questions about the code, we
know who to ask.  ;-)

Libpqxx still needs to be integrated:

The 'configure' tests need to be merged into our main configure
The documentation needs to be merged into our SGML docs.
The makefile structure needs to be merged into /interfaces.

Jeroen, do you have PostgreSQL CVS access yet?  If not, we need to get
you that.

---

Jeroen T. Vermeulen wrote:
 On Wed, Jun 12, 2002 at 04:04:36PM -0400, Neil Conway wrote:
 
  Otherwise, if you put the code into src/interfaces/libpqxx and modify
  the PostgreSQL build system to be aware of it (as well as removing
  libpqxx's autoconf stuff), it shouldn't be too difficult.
 
 One concern I have on this point is that not all platforms are going to
 be able to build libpqxx.  Also, there'd have to be a lot of C++ stuff
 in the existing config.h which I guess was meant to be C.  
 
 Anyway, I found I'm not much good with automake and so on.  I'm trying
 to merge the two configure.ins, but I feel I must be missing a lot of
 details.
 
 
 Jeroen
 
 
 









Re: [HACKERS] listen/notify argument (old topic revisited)

2002-07-02 Thread Bruce Momjian

Tom Lane wrote:
 Bruce Momjian [EMAIL PROTECTED] writes:
  Why can't we do efficient indexing, or clear out the table?  I don't
  remember.
 
 I don't recall either, but I do recall that we tried to index it and
 backed out the changes.  In any case, a table on disk is just plain
 not the right medium for transitory-by-design notification messages.

OK, I can help here.  I added an index on pg_listener so lookups would
go faster in the backend, but inserts/updates into the table also
require index additions, and your feeling was that the table was small
and we would be better without the index and just sequentially scanning
the table.  I can easily add the index and make sure it is used properly
if you are now concerned about table access time.

I think your issue was that it is only looked up once, and only updated
once, so there wasn't much sense in having that index maintenance
overhead, i.e. you only used the index once per row.

(I remember the item being on TODO for quite a while when we discussed
this.)

Of course, a shared memory system probably is going to either do it
sequentially or have its own index issues, so I don't see a huge
advantage to going to shared memory, and I do see extra code and a queue
limit.

  A curious statement considering that PG depends critically on SI
  working.  This is a solved problem.
 
  My point is that SI was buggy for years until we found all the bugs, so
  yea, it is a solved problem, but solved with difficulty.
 
 The SI message mechanism itself was not the source of bugs, as I recall
 it (although certainly the code was incomprehensible in the extreme;
 the original programmer had absolutely no grasp of readable coding style
 IMHO).  The problem was failure to properly design the interactions with
 relcache and catcache, which are pretty complex in their own right.
 An SI-like NOTIFY mechanism wouldn't have those issues.

Oh, OK, interesting.  So _that_ was the issue there.



