AW: [HACKERS] embedded sql with indicators in other DBs

2000-10-23 Thread Zeugswetter Andreas SB


 What do other DBs do with their output variables if there is an
 embedded SQL query resulting in a NULL return value? What I mean is:
 
 exec sql select text into :txt:ind from ...
 
 If text is NULL, ind will be set, but does txt change?
 
 I was just told Informix blanks txt.

No, it gives a null string. 
In general Informix has a value that represents null that is 
'distinct from all legal values in any given datatype'.
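As a sketch of the indicator convention being discussed (not Informix's or PostgreSQL's actual runtime code - the function name and signature are invented for illustration), a fetch into a host variable with an indicator might behave like this:

```c
#include <string.h>

/* Hypothetical sketch of what an embedded-SQL runtime could do when
 * fetching a nullable text column into a host variable :txt with an
 * indicator :ind.  On NULL, the indicator is set to -1 and the host
 * variable becomes a null (empty) string rather than blanks, per the
 * Informix behavior described above. */
static void
fetch_text(const char *col_value, char *txt, size_t txtlen, short *ind)
{
    if (col_value == NULL)
    {
        *ind = -1;              /* tell the application the value was NULL */
        txt[0] = '\0';          /* null string, not blank padding */
    }
    else
    {
        *ind = 0;
        strncpy(txt, col_value, txtlen - 1);
        txt[txtlen - 1] = '\0';
    }
}
```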

Andreas



[HACKERS] testing my connection to list.

2000-10-23 Thread Larry Rosenman


I posted some regression failures twice, and never saw them on the
list or in the newsgroup.  This is a test.
-- 
Larry Rosenman  http://www.lerctr.org/~ler
Phone: +1 972-414-9812 (voice) Internet: [EMAIL PROTECTED]
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749



AW: AW: [HACKERS] relation ### modified while in use

2000-10-23 Thread Zeugswetter Andreas SB

   What I'm proposing is that once an xact has touched a
   table, other xacts should not be able to apply schema updates to that
   table until the first xact commits.
 
  No, this would mean too many locks, and would leave the dba with hardly a
  chance to alter a table.
 
 
 Are there many applications which have many SELECT statements(without
 FOR UPDATE) in one tx ?

Why not ?

 As for locks,weak locks doesn't pass intensive locks.  Dba 
 seems to be able to alter a table at any time.

Sorry, I don't understand this sentence. Tom suggested placing a shared lock on
any table that is accessed until end of tx. No one can alter a table until all users
have closed their txns and not accessed the tables again. Remember that this would
include creating an index ...

Andreas


Re: AW: [HACKERS] relation ### modified while in use

2000-10-23 Thread Hiroshi Inoue


Zeugswetter Andreas SB wrote:

  What I'm proposing is that once an xact has touched a
  table, other xacts should not be able to apply schema updates to that
  table until the first xact commits.

 No, this would mean too many locks, and would leave the dba with hardly a
 chance to alter a table.


Are there many applications which have many SELECT statements(without
FOR UPDATE) in one tx ?
As for locks,weak locks doesn't pass intensive locks.  Dba seems to be able
to alter a table at any time.

Regards.
Hiroshi Inoue


Re: [HACKERS] failed runcheck

2000-10-23 Thread Patrick Welche

On Sat, Oct 21, 2000 at 01:48:39PM -0700, Vadim Mikheev wrote:
 Did you run make distclean? I've run regtests before committing changes.

Just made sure - different computer - fresh cvs update/distclean/configure/make
cd src/test/regress
gmake clean
gmake all
gmake runcheck

same coredump

#1  0x807f4be in IsSharedSystemRelationName (relname=0x0) at catalog.c:197
 ^^^
ie. relname[0] at catalog.c:176 is dereferencing a null pointer.

Cheers,

Patrick



Re: [HACKERS] failed runcheck

2000-10-23 Thread Tom Lane

Patrick Welche [EMAIL PROTECTED] writes:
 On Sat, Oct 21, 2000 at 01:48:39PM -0700, Vadim Mikheev wrote:
 Did you run make distclean? I've run regtests before committing changes.

 Just made sure - different computer - fresh cvs update/distclean/configure/make
 same coredump

 #1  0x807f4be in IsSharedSystemRelationName (relname=0x0) at catalog.c:197
  ^^^
 ie. relname[0] at catalog.c:176 is dereferencing a null pointer.

Interesting.  Current sources pass regress tests on my machine, same as
for Vadim.  I think you have found a platform-specific bug.

Could you dig into it a little further and try to determine where the
NULL is coming from?

regards, tom lane



[HACKERS] Re: [COMMITTERS] pgsql/src/test/regress/expected (plpgsql.out inet.out foreign_key.out errors.out)

2000-10-23 Thread Tom Lane

Bruce Momjian [EMAIL PROTECTED] writes:
 Speaking of error messages, one idea for 7.2 might be to prepend
 numbers to the error messages.

Isn't that long since on the TODO list?  I know we've had long
discussions about a thoroughgoing revision of error reporting.

regards, tom lane



Re: [HACKERS] relation ### modified while in use

2000-10-23 Thread Philip Warner

At 15:29 23/10/00 +0900, Hiroshi Inoue wrote:

If we have a mechanism to acquire a share lock on a tuple,we
could use it for managing system info generally. However the
only allowed lock on a tuple is exclusive.  Access(Share/Exclusive)
Lock on tables would give us a restricted solution about pg_class
tuples.


Don't we have this ability? What about taking a RowShare lock on the
pg_class tuple whenever you read from the table; then requiring schema
updates take a RowExclusive lock on the pg_class tuple?

As you say, it won't prevent deadlocks, but it seems like a reasonable
thing to do.



Philip Warner| __---_
Albatross Consulting Pty. Ltd.   |/   -  \
(A.B.N. 75 008 659 498)  |  /(@)   __---_
Tel: (+61) 0500 83 82 81 | _  \
Fax: (+61) 0500 83 82 82 | ___ |
Http://www.rhyme.com.au  |/   \|
 |----
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/



Re: AW: AW: [HACKERS] relation ### modified while in use

2000-10-23 Thread Tom Lane

Zeugswetter Andreas SB [EMAIL PROTECTED] writes:
 As for locks,weak locks doesn't pass intensive locks.  Dba 
 seems to be able to alter a table at any time.

 Sorry, I don't understand this sentence. Tom suggested placing a
 shared lock on any table that is accessed until end of tx. No one can
 alter a table until all users have closed their txns and not accessed
 the tables again.

Until existing xacts using that table have closed, yes.  But I believe
the lock manager has some precedence rules that will allow the pending
request for AccessExclusiveLock to take precedence over new requests
for lesser locks.  So you're only held off for a long time if you have
long-running xacts that use the target table.

I consider that behavior *far* safer than allowing schema changes to
be seen mid-transaction.  Consider the following example:

Session 1                           Session 2

begin;

INSERT INTO foo ...;

                                    ALTER foo ADD constraint;

INSERT INTO foo ...;

end;

Which, if any, of session 1's insertions will be subject to the
constraint?  What are the odds that the dba will like the result?

With my proposal, session 2's ALTER would wait for session 1 to commit,
and then the ALTER's own scan to verify the constraint will check all
the rows added by session 1.

Under your proposal, I think the rows inserted at the beginning of
session 1's xact would be committed without having been checked.

regards, tom lane



Re: AW: AW: AW: [HACKERS] relation ### modified while in use

2000-10-23 Thread Tom Lane

Zeugswetter Andreas SB [EMAIL PROTECTED] writes:
 No, the above is not a valid example, because Session 2 won't
 get the exclusive lock until Session 1 commits, since Session 1 already 
 holds a lock on foo (for the inserted row). 

 You were talking about the "select only" case (and no FOR UPDATE either). 
 I think that select statements need a shared lock for the duration of their 
 execution only.

You seem to think that locks on individual tuples conflict with
table-wide locks.  AFAIK that's not true.  The only way to prevent
another xact from gaining AccessExclusiveLock on a table is to be
holding some lock *on the table*.
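The rule Tom describes can be sketched as a tiny conflict check, using PostgreSQL's lock-mode numbering (a deliberate simplification of the real conflict table in src/backend/storage/lmgr/lock.c; the intermediate share modes are omitted here):

```c
#include <stdbool.h>

/* AccessExclusiveLock conflicts with every other table lock, while
 * AccessShareLock conflicts only with AccessExclusiveLock.  So a
 * reader holding AccessShareLock until commit is exactly what blocks
 * a concurrent ALTER TABLE - and row-level tuple locks never enter
 * into it, because they are a separate mechanism entirely. */
enum lockmode
{
    NoLock = 0,
    AccessShareLock = 1,        /* plain SELECT */
    RowShareLock = 2,           /* SELECT FOR UPDATE */
    RowExclusiveLock = 3,       /* INSERT/UPDATE/DELETE */
    /* ... intermediate share modes omitted from this sketch ... */
    AccessExclusiveLock = 8     /* ALTER TABLE, DROP TABLE, etc. */
};

static bool
table_locks_conflict(enum lockmode held, enum lockmode requested)
{
    /* AccessExclusiveLock conflicts with every real lock mode. */
    if (held == AccessExclusiveLock || requested == AccessExclusiveLock)
        return held != NoLock && requested != NoLock;

    /* In this simplified matrix the weaker modes shown above all
     * coexist with one another (the full matrix has more cases). */
    return false;
}
```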

As for your claim that read-only xacts don't need to worry about
preventing schema updates, what of adding/deleting ON SELECT rules?

regards, tom lane



Re: [HACKERS] INHERITS doesn't offer enough functionality

2000-10-23 Thread Marten Feldtmann



Hiroshi Inoue schrieb:
 
 Chris wrote:
 
  It's pretty clear to me that an inherited index should be only one
  index. There may be a case for optional non-inherited indexes (CREATE
  INDEX ON ONLY foobar), but if the index is inherited, it is just one
  index.
 
  At the end of the day though, the reason is only performance. The
  semantics should be the same no matter whether implemented as multiple
  indexes or not. Performance is much better with one index though.(*)
 
 
 Is it true ?
 How to guarantee the uniqueness using multiple indexes ?
 

 Sorry to say, but you all should really think about what inheritance
should mean.

 In the classic mapping strategy (OO-RDBMS mapping) it's said that
each class is mapped to ONE table! This is the classic mapping
strategy, which is mentioned in all the literature.

 The point is: this is classic, but no one does it like this if
you really have a larger hierarchy of classes. You won't get
any good performance when resolving an association in your OO
program, because the framework has to query against each
table: 6 tables - 6 queries !!! :-(

 With the PostgreSQL approach one can send ONE query against
the tables and one would get one result ... which will be
much faster (I hope so ... that has to be proved ..).

--

 I'm not sure that inherited indices should really be ONE
index. There are very good reasons NOT to build ONE larger
index.

 Actually one should think about: why do I really want to
have inheritance in the oo-rdbms? Actually I could put
all columns of all classes in this hierarchy into one table
and that's it. I would like to have inheritance in this
database system because the tables are getting smaller
and queries against specific classes (eh, tables) are becoming
faster.

 Actually the inserts will be much faster also because you
have several smaller indices.

 I've run tests here with ONE large table (5 columns and 
5 indices) holding data for about 17 classes, and the result 
is: the insert/update path is the problem, not the 
select path. Insert performance decreases in a 
linear fashion ... very, very bad.


 Marten



Re: [HACKERS] relation ### modified while in use

2000-10-23 Thread Philip Warner

At 10:45 23/10/00 -0400, Tom Lane wrote:
Philip Warner [EMAIL PROTECTED] writes:
 Don't we have this ability? What about taking a RowShare lock on the
 pg_class tuple whenever you read from the table; then requiring schema
 updates take a RowExclusive lock on the pg_class tuple?

How is that different from taking locks on the table itself?

Only slightly; one interpretation of a table lock is that it locks all of
the data in the table; and a lock on the pg_class row locks the metadata. I
must admit that I am having a little difficulty thinking of a case where
the distinction would be useful...


In any case, we don't have the ability to hold multiple classes of locks
on individual tuples, AFAIK.  UPDATE and SELECT FOR UPDATE use a
different mechanism that involves setting fields in the header of the
affected tuple.  There's no room there for more than one kind of lock;
what's worse, checking and waiting for that lock is far slower than
normal lock-manager operations. 

So where do

SELECT FOR UPDATE IN ROW SHARE MODE
and
LOCK TABLE IN ROW EXCLUSIVE MODE

statements fit in?

They *seem* to provide differing levels of row locking.




Philip Warner| __---_
Albatross Consulting Pty. Ltd.   |/   -  \
(A.B.N. 75 008 659 498)  |  /(@)   __---_
Tel: (+61) 0500 83 82 81 | _  \
Fax: (+61) 0500 83 82 82 | ___ |
Http://www.rhyme.com.au  |/   \|
 |----
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/



AW: AW: AW: AW: [HACKERS] relation ### modified while in use

2000-10-23 Thread Zeugswetter Andreas SB


  You were talking about the "select only" case (and no FOR UPDATE either). 
  I think that select statements need a shared lock for the duration of their 
  execution only.
 
 You seem to think that locks on individual tuples conflict with
 table-wide locks.

Yes, very much so. Any other way would be subject to the same quirks 
you would like to avoid, no ?

  AFAIK that's not true.

well, imho room for improvement.

 The only way to prevent
 another xact from gaining AccessExclusiveLock on a table is to be
 holding some lock *on the table*.

Yes, and holding a row exclusive lock must imho at least grab a shared
table lock (to avoid several problems, like missing an index update,
inserting a null into a newly added not null column ...).
Alternatively, the table exclusive lock could honour row locks 
(probably not possible, since we don't track those, do we?).

 As for your claim that read-only xacts don't need to worry about
 preventing schema updates, what of adding/deleting ON SELECT rules?

Well, that depends on what the rule does - you mean a new rule?
Offhand I don't see a problem, based on the idea that all modifications get 
appropriate locks.

Andreas



[HACKERS] Re: [COMMITTERS] pgsql/src/test/regress/expected (plpgsql.out inet.outforeign_key.out errors.out)

2000-10-23 Thread Bruce Momjian

 Bruce Momjian [EMAIL PROTECTED] writes:
  Speaking of error messages, one idea for 7.2 might be to prepend
  numbers to the error messages.
 
 Isn't that long since on the TODO list?  I know we've had long
 discussions about a thoroughgoing revision of error reporting.

Yes. We have:

* Allow elog() to return error codes, not just messages
* Allow international error message support and add error codes

I just thought I would mention it is on my radar screen now.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



Re: [HACKERS] testing my connection to list.

2000-10-23 Thread Tom Lane

Larry Rosenman [EMAIL PROTECTED] writes:
 Ok, so why didn't my regression outputs post? 
 Marc? 

How big were they?  I think the default configuration for majordomo
is that posts over 50K or so don't go through until hand-approved by
moderator.  Marc tends to clean out that inbox every few days...

regards, tom lane



Re: [HACKERS] relation ### modified while in use

2000-10-23 Thread Vadim Mikheev
  in general.  What I'm proposing is that once an xact has touched a
  table, other xacts should not be able to apply schema updates to that
  table until the first xact commits.
 
 
 I agree with you.

I don't know. We discussed this issue just after 6.5 and decided to
allow concurrent schema modifications.
Oracle has dictionary locks but runs each DDL statement in a separate
xaction, so - no deadlock condition here. OTOH, I wouldn't worry
about deadlock - one just has to follow common anti-deadlock rules.

 I've wondered why AccessShareLock is a short term lock.

MUST BE. AccessShare-/Exclusive-Locks are *data* locks.
If one wants to protect the schema then new schema share/excl locks
must be introduced. There is no conflict between data and
schema locks - they are orthogonal.

We use AccessShare-/Exclusive-Locks for schema because
we allow concurrent schema modifications, so no true schema
locks were required.

 If we have a mechanism to acquire a share lock on a tuple,we
 could use it for managing system info generally. However the
 only allowed lock on a tuple is exclusive.  Access(Share/Exclusive)

Actually, just look at lock.h:LTAG structure - lock manager supports
locking of "some objects" inside tables:

typedef struct LTAG
{
    Oid           relId;
    Oid           dbId;
    union
    {
        BlockNumber   blkno;
        TransactionId xid;
    } objId;
    ...
 - we could add an oid to the union above and lock tables by acquiring a lock
on pg_class with objId.oid = table's oid. Same way we could lock indices
and whatever we want... if we want -:)

 Lock on tables would give us a restricted solution about pg_class
 tuples.
 
 Thers'a possibility of deadlock in any case but there are few
 cases when AccessExclusiveLock is really needed and we could
 acquire an AccessExclusiveLock manually from the first if
 necessary.
 
 I'm not sure about the use of AccessShareLock in parse-analyze-
 optimize phase however.

There is a notion of breakable (parser) locks in the Oracle documentation -:)

Vadim


Re: AW: AW: AW: AW: [HACKERS] relation ### modified while in use

2000-10-23 Thread Tom Lane

Zeugswetter Andreas SB [EMAIL PROTECTED] writes:
 Yes, and holding a row exclusive lock must imho at least grab a shared
 table lock

As indeed it does.  Our disagreement seems to be just on the point of
whether it's safe to allow a read-only transaction to release its 
AccessShareLock locks partway through.

My opinion about that is colored by the known bugs that we have because
the parser/rewriter/planner currently do just that.  You can cause the
system to become mighty confused if the report of a table schema change
arrives partway through the parse/plan process, because decisions
already made are no longer valid.  While we can probably patch the holes
in this area by holding a lock throughout processing of one statement,
I think that will just push the problem up to the application level.
How many apps are likely to be coded in a way that will be robust
against intra-transaction schema changes?

regards, tom lane



Re: [HACKERS] relation ### modified while in use

2000-10-23 Thread Tom Lane

Philip Warner [EMAIL PROTECTED] writes:
 Only slightly; one interpretation of a table lock is that it locks all of
 the data in the table; and a lock on the pg_class row locks the metadata. I
 must admit that I am having a little difficulty thinking of a case where
 the distinction would be useful...

I can't see any value in locking the data without locking the metadata.
Given that, the other way round is sort of moot...

 So where do
 SELECT FOR UPDATE IN ROW SHARE MODE 

We don't support that (never heard of it before, in fact).

 and 
 LOCK TABLE IN ROW EXCLUSIVE MODE statements. 
 fit in? 

That one is just a table lock (RowExclusiveLock).  All the variants
of LOCK TABLE are table-level locks.

regards, tom lane



Re: AW: AW: [HACKERS] relation ### modified while in use

2000-10-23 Thread Tom Lane

Philip Warner [EMAIL PROTECTED] writes:
 In this case, wouldn't the answer depend on the isolation level of session
 1? For a serializable TX, the constraint would not apply; 'read committed'
 would mean the constraint was visible on the second insert and at the commit.

The important issue here is that all schema changes have to be read
on a read-committed basis, even if your transaction is otherwise
serializable.  Consider for example the possibility that the schema
change you're ignoring consists of a DROP INDEX or some such --- you'll
fail if you proceed as though the index is still there.  This is the
point Vadim was making a few days ago (but I didn't understand at the
time).

I believe we can work out a consistent set of behavior such that user
data accesses (SELECT/UPDATE/etc) follow MVCC rules but system accesses
to schema data always follow read-committed semantics.  One of the
components of this has to be an agreement on how to handle locking.
AFAICS, we have to adopt hold-some-kind-of-lock-till-end-of-xact,
or we will have consistency problems between the user and system
views of the world.

regards, tom lane



Re: AW: [HACKERS] relation ### modified while in use

2000-10-23 Thread Vadim Mikheev
  As for locks,weak locks doesn't pass intensive locks.  Dba
  seems to be able to alter a table at any time.

 Sorry, I don't understand this sentence. Tom suggested placing a shared
 lock on any table that is accessed until end of tx. No one can alter a
 table until all users have closed their txns and not accessed the tables
 again.

More than that - while one xaction waits to alter a table, no new xaction
will be allowed to access this table either.

 Remember that this would include creating an index ...

I don't think so. Index creation requires
1. share lock on schema
2. share lock on data

Vadim


Re: AW: AW: [HACKERS] relation ### modified while in use

2000-10-23 Thread Philip Warner

At 10:10 23/10/00 -0400, Tom Lane wrote:

I consider that behavior *far* safer than allowing schema changes to
be seen mid-transaction.  Consider the following example:

   Session 1                        Session 2

   begin;

   INSERT INTO foo ...;

                                    ALTER foo ADD constraint;

   INSERT INTO foo ...;

   end;

Which, if any, of session 1's insertions will be subject to the
constraint?  What are the odds that the dba will like the result?


In this case, wouldn't the answer depend on the isolation level of session
1? For a serializable TX, the constraint would not apply; 'read committed'
would mean the constraint was visible on the second insert and at the commit.

I would err on the side of insisting all metadata changes occur in
serializable transactions to make life a little easier.



Philip Warner| __---_
Albatross Consulting Pty. Ltd.   |/   -  \
(A.B.N. 75 008 659 498)  |  /(@)   __---_
Tel: (+61) 0500 83 82 81 | _  \
Fax: (+61) 0500 83 82 82 | ___ |
Http://www.rhyme.com.au  |/   \|
 |----
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/



Re: [HACKERS] testing my connection to list.

2000-10-23 Thread Larry Rosenman

* Tom Lane [EMAIL PROTECTED] [001023 09:15]:
 Larry Rosenman [EMAIL PROTECTED] writes:
  Ok, so why didn't my regression outputs post? 
  Marc? 
 
 How big were they?  I think the default configuration for majordomo
 is that posts over 50K or so don't go through until hand-approved by
 moderator.  Marc tends to clean out that inbox every few days...
63K.  But I thought some of my others were larger. 

Maybe that limit needs to be bigger?


 
   regards, tom lane
-- 
Larry Rosenman  http://www.lerctr.org/~ler
Phone: +1 972-414-9812 (voice) Internet: [EMAIL PROTECTED]
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749



[HACKERS] UDK....

2000-10-23 Thread Larry Rosenman


One thing my testing gave SCO was the fact that cc needs to know about
the -R option to ld.  It will change before release to know that -R
takes an argument. 

Just keep that in mind.
-- 
Larry Rosenman  http://www.lerctr.org/~ler
Phone: +1 972-414-9812 (voice) Internet: [EMAIL PROTECTED]
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749



Re: AW: [HACKERS] to_char() dumps core

2000-10-23 Thread Tom Lane

Zeugswetter Andreas SB [EMAIL PROTECTED] writes:
 For pass by reference datatypes setting the reference to a null pointer
 for a NULL value imho would be a fine thing in addition to the indicator, 
 no ?

At the moment it generally will be, but that's not certain to be true
forever.  I believe we've had discussions in the past about supporting
multiple kinds of NULL.  (I had the idea this was actually required by
SQL99, in fact, though I can't find anything about it at the moment.)

The obvious way to do that is to commandeer the otherwise unused
contents of a Datum when the associated null-flag is true.  At that
point checking the null-flag will be the only valid way to check for
NULL.

Assuming that the null-kind values are small integers, attempts to
dereference them will still SEGV on reasonable systems, so I don't
think any error checking is lost.  Just don't do "if (datum == NULL)".
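A minimal sketch of why the flag, not the pointer, becomes the only valid NULL test under such a scheme (the NullableDatum struct and null-kind names here are invented for illustration, not PostgreSQL API):

```c
#include <stdbool.h>

/* Under the proposed scheme, when the null flag is set, the otherwise
 * unused Datum payload carries a small-integer "null kind" instead of
 * a valid pointer.  A pointer comparison against NULL then lies. */
typedef unsigned long Datum;

enum NullKind { NULL_UNKNOWN = 1, NULL_MISSING = 2 };   /* hypothetical */

struct NullableDatum
{
    Datum value;    /* pointer for pass-by-ref types, or a null kind */
    bool  isnull;   /* the only authoritative NULL test */
};

static bool
datum_is_null(const struct NullableDatum *d)
{
    /* Never test d->value against NULL: a null-kind payload such as
     * (Datum) 1 is non-zero, so "if (datum == NULL)" would wrongly
     * report a non-null value.  Dereferencing it would still SEGV,
     * as noted above, so error checking is not lost. */
    return d->isnull;
}
```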

regards, tom lane



Re: [HACKERS] question about new fmgr in 7.1 snapshots

2000-10-23 Thread Jan Wieck

Tom Lane wrote:
 Jeff Hoffmann [EMAIL PROTECTED] writes:
  my question is whether i should change the function to use the new fmgr
  type of definition or if it's only for internal functions.

 Up to you.  If you need any of the new features (like clean handling
 of NULLs) then convert.  If you were happy with the old way, no need.

 A new-style dynamically loaded function must be defined as using
 language "newC" not "C"; this cues fmgr which way to call it.


Are you sure about that? Doesn't TOAST mean that any user
defined function receiving variable size attributes must
now expect them to be compressed or stored externally, and
change its access to them to go through detoasting? Or
do you do that for old style 'C' functions all the time in
the fmgr now?
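The detoast-before-access pattern in question can be sketched with a self-contained stand-in (in real PostgreSQL the operation is the PG_DETOAST_DATUM() macro from fmgr.h; the types and names below are invented for illustration):

```c
#include <stdlib.h>
#include <string.h>

/* A variable-size attribute handed to a C function may be compressed
 * or stored externally, so the bytes must be materialized into a
 * plain in-memory copy before use.  This simulation never actually
 * compresses anything; it only models the calling convention. */
struct vardata
{
    int         is_toasted;    /* nonzero: needs expansion before use */
    const char *bytes;         /* payload */
};

/* Return a plain copy the caller may safely read (and must free). */
static char *
detoast(const struct vardata *v)
{
    char *plain = malloc(strlen(v->bytes) + 1);

    /* A real implementation would decompress, or fetch the external
     * copy, when is_toasted is set; this sketch just copies. */
    strcpy(plain, v->bytes);
    return plain;
}
```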

 Gotta start updating the documentation soon ;-)


Jan

--

#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #





[HACKERS] Re: [COMMITTERS] pgsql/src/test/regress/expected (plpgsql.out inet.outforeign_key.out errors.out)

2000-10-23 Thread Bruce Momjian

 Bruce Momjian writes:
 
   Bruce Momjian [EMAIL PROTECTED] writes:
Speaking of error messages, one idea for 7.2 might be to prepend
numbers to the error messages.
   
   Isn't that long since on the TODO list?  I know we've had long
   discussions about a thoroughgoing revision of error reporting.
  
  Yes. We have:
  
  * Allow elog() to return error codes, not just messages
  * Allow international error message support and add error codes
  
  I just thought I would mention it is on my radar screen now.
 
 Yeah, it's on mine too.  The only thing I'm still unsure about is the
 "international" part.  Does anyone know of a gettext-ish thing that has an
 acceptable license?

Yes, they must exist.  I want a solution that doesn't make it difficult
for people to add error messages.  Having codes in the C files and error
messages in another file is quite a pain.  My idea would enable us to
number the error messages, keep the English text for the message in the
same file next to the code, and then allow international support by
creating a utility that can dump out all the codes with the English text
to allow translators to make non-English versions.  The translated file
could then be used by the backend to generate messages in other languages
via a SET command.
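The catalog scheme described above might be sketched like this - all codes, message texts, and names here are invented for illustration, and this is not the actual elog() interface:

```c
#include <stddef.h>
#include <string.h>

struct errmsg
{
    int         code;
    const char *text;
};

/* English text lives next to its code, in the same source file. */
static const struct errmsg english_msgs[] = {
    { 1001, "relation does not exist" },
    { 1002, "permission denied" },
};

/* A translated catalog, e.g. loaded after a hypothetical
 * SET LANGUAGE 'de'.  It may be incomplete. */
static const struct errmsg german_msgs[] = {
    { 1001, "Relation existiert nicht" },
};

/* Look up a message: translated text if present, English fallback. */
static const char *
lookup_message(int code, const struct errmsg *catalog, size_t n)
{
    size_t i;

    for (i = 0; i < n; i++)
        if (catalog[i].code == code)
            return catalog[i].text;
    for (i = 0; i < sizeof(english_msgs) / sizeof(english_msgs[0]); i++)
        if (english_msgs[i].code == code)
            return english_msgs[i].text;
    return "unknown error";
}
```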

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



Re: [HACKERS] UnixWare 7.1.1b FS

2000-10-23 Thread Larry Rosenman

OK, removing the second --with-CXX got us past configure, and gmake
ran a long while, but MAXBUFSIZE didn't get defined such that
fe-connect.c died:

gmake -C doc all
gmake[1]: Entering directory `/home/ler/pg-dev/pgsql-snap/doc'
gmake[1]: Nothing to be done for `all'.
gmake[1]: Leaving directory `/home/ler/pg-dev/pgsql-snap/doc'
gmake -C src all
gmake[1]: Entering directory `/home/ler/pg-dev/pgsql-snap/src'
gmake -C backend all
gmake[2]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend'
prereqdir=`cd parser/ && pwd` && \
  cd ../../src/include/parser/ && rm -f parse.h && \
  ln -s $prereqdir/parse.h .
gmake -C utils fmgroids.h
gmake[3]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils'
CPP='cc -E' AWK='gawk' /bin/sh Gen_fmgrtab.sh ../../../src/include/catalog/pg_proc.h
gmake[3]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/utils'
cd ../../src/include/utils/ && rm -f fmgroids.h && \
ln -s ../../../src/backend/utils/fmgroids.h .
gmake -C access all
gmake[3]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/access'
gmake -C common SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/access/common'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K 
host,inline,loop_unroll,alloca -Dsvr4 -o heaptuple.o heaptuple.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K 
host,inline,loop_unroll,alloca -Dsvr4 -o indextuple.o indextuple.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K 
host,inline,loop_unroll,alloca -Dsvr4 -o indexvalid.o indexvalid.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K 
host,inline,loop_unroll,alloca -Dsvr4 -o printtup.o printtup.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K 
host,inline,loop_unroll,alloca -Dsvr4 -o scankey.o scankey.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K 
host,inline,loop_unroll,alloca -Dsvr4 -o tupdesc.o tupdesc.c
/usr/bin/ld -r -o SUBSYS.o heaptuple.o indextuple.o indexvalid.o printtup.o scankey.o 
tupdesc.o  
gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/access/common'
gmake -C gist SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/access/gist'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K 
host,inline,loop_unroll,alloca -Dsvr4 -o gist.o gist.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K 
host,inline,loop_unroll,alloca -Dsvr4 -o gistget.o gistget.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K 
host,inline,loop_unroll,alloca -Dsvr4 -o gistscan.o gistscan.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K 
host,inline,loop_unroll,alloca -Dsvr4 -o giststrat.o giststrat.c
/usr/bin/ld -r -o SUBSYS.o gist.o gistget.o gistscan.o giststrat.o
gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/access/gist'
gmake -C hash SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/access/hash'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K 
host,inline,loop_unroll,alloca -Dsvr4 -o hash.o hash.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K 
host,inline,loop_unroll,alloca -Dsvr4 -o hashfunc.o hashfunc.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K 
host,inline,loop_unroll,alloca -Dsvr4 -o hashinsert.o hashinsert.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K 
host,inline,loop_unroll,alloca -Dsvr4 -o hashovfl.o hashovfl.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K 
host,inline,loop_unroll,alloca -Dsvr4 -o hashpage.o hashpage.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K 
host,inline,loop_unroll,alloca -Dsvr4 -o hashscan.o hashscan.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K 
host,inline,loop_unroll,alloca -Dsvr4 -o hashsearch.o hashsearch.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K 
host,inline,loop_unroll,alloca -Dsvr4 -o hashstrat.o hashstrat.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K 
host,inline,loop_unroll,alloca -Dsvr4 -o hashutil.o hashutil.c
/usr/bin/ld -r -o SUBSYS.o hash.o hashfunc.o hashinsert.o hashovfl.o hashpage.o 
hashscan.o hashsearch.o hashstrat.o hashutil.o
gmake[4]: Leaving directory `/home/ler/pg-dev/pgsql-snap/src/backend/access/hash'
gmake -C heap SUBSYS.o
gmake[4]: Entering directory `/home/ler/pg-dev/pgsql-snap/src/backend/access/heap'
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K 
host,inline,loop_unroll,alloca -Dsvr4 -o heapam.o heapam.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include -O -K 
host,inline,loop_unroll,alloca -Dsvr4 -o hio.o hio.c
cc -c -I/usr/local/include -I/opt/include -I../../../../src/include 

Re: [HACKERS] INHERITS doesn't offer enough functionality

2000-10-23 Thread Marten Feldtmann



Chris schrieb:
 
   The point is: this is classic, but noone does it
  like this if your really have a larger hierarchy of
  classes. You'll not get any good performance, when
  solving an association in your oo
  program, because the framework has to query against
  each table: 6 tables - 6 queries !!! :-(
 
   With the PostgreSQL approach one can send ONE query
  against the tables and one would get one result ...
  which will be much faster (I hope so ... that has to
  be proved ..).
 
 You'll still have to do 6 queries in postgres because it does not return
 fields in sub-classes. 

 Practically this is not such a big problem as one might think.

 When you have a persistence framework you tell your framework
that every attribute is located (mapped or stored, however you
may see it) in the superclass, and then your top class (table)
holds all attributes your "lowest" subclass has.

 But that raises another question: are the defined constraints
also inherited??? Actually I would say no, and therefore we
have the same handling as with indices.

 Most of the attributes may be NULL, but who cares? The 
framework has to interpret the data coming from the database
anyway and will throw the NULLs away.

 Therefore I can get around the limitations of PostgreSQL
in this case. If PostgreSQL could handle this in addition,
that would be very nice ... but first the basic stuff has
to be fixed, and it has to be very solid.
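
 To illustrate the single-query approach discussed above, here is a
minimal sketch of an inheritance hierarchy queried in one statement
(table and column names are hypothetical; the trailing "*" is the
pre-7.1 syntax for including subclass rows, later releases include
them by default and use ONLY to exclude them):

```sql
-- Hypothetical hierarchy: each subclass adds its own columns
CREATE TABLE person  (name text, birthdate date);
CREATE TABLE student (matriculation int) INHERITS (person);
CREATE TABLE tutor   (salary numeric)    INHERITS (person);

-- One query instead of one per table: rows from person and
-- all of its subclasses come back in a single result set
SELECT name, birthdate FROM person*;
```

 Note that only the parent's columns come back, which is exactly the
limitation Chris points out: fields added in the subclasses are not
returned by a query against the superclass.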

 But I have to admit: my point is the viewpoint of a programmer
using an object-oriented language who only wants to store
his objects into a database. People using PHP, Perl or
other "low-level" languages may have a different view or
need, because they do not have a framework doing the work
for them.

 I can only tell you what will be an improvement for me as
a persistence framework programmer, and what will not help me.

 What will not help me:

 * that the database generates OIDs

 * that the database generates "class" OIDs (one may want to
   have that, in order to recognize which table the data
   comes from..)

 * special features to solve very special problems

 What will help me:

 * all the stuff that reduces the number (!) of queries sent 
   to the database to get my data

 * a way to insert VERY quickly a larger amount of data 
   into a table.

 * a good, speedy database
 
Marten



[HACKERS] Re: Navigating time-warps in the CVS tree (was re the rule system)

2000-10-23 Thread Tom Lane

"Kevin O'Gorman" [EMAIL PROTECTED] writes:
 pg_backup_custom.c: In function `_DoDeflate':
 pg_backup_custom.c:846: `z_streamp' undeclared (first use in this function)
 pg_backup_custom.c:846: parse error before `zp'
 pg_backup_custom.c:849: `ctx' undeclared (first use in this function)
 pg_backup_custom.c:852: `AH' undeclared (first use in this function)
 pg_backup_custom.c:854: `zp' undeclared (first use in this function)
 pg_backup_custom.c:854: `flush' undeclared (first use in this function)
 pg_backup_custom.c: In function `_EndDataCompressor':
 pg_backup_custom.c:912: `ctx' undeclared (first use in this function)
 pg_backup_custom.c:912: parse error before `)'
 pg_backup_custom.c:913: `z_streamp' undeclared (first use in this function)
 pg_backup_custom.c:918: `zp' undeclared (first use in this function)

Hmm.  Looks like Philip neglected to see to it that pg_dump will compile
when libz is not present --- the "#include <zlib.h>" is properly ifdef'd
out, but the code isn't.  Over to you, Philip ...

regards, tom lane



Re: [HACKERS] Re: [COMMITTERS] pgsql/src/test/regress/expected (plpgsql.out inet.out foreign_key.out errors.out)

2000-10-23 Thread Vadim Mikheev

 Bruce Momjian [EMAIL PROTECTED] writes:
  Speaking of error messages, one idea for 7.2 might be to prepended
  numbers to the error messages.
 
 Isn't that long since on the TODO list?  I know we've had long
 discussions about a thoroughgoing revision of error reporting.

Yes, yes, yes! We need the numbers, especially because we will
hopefully have savepoints in 7.2, and then we would get powerful
error handling by *applications*, not by *humans* only.

Vadim





Re: [HACKERS] [ANNC][RFC] crypto hashes for PostgreSQL 7.0, 7.1

2000-10-23 Thread Horst Herb

 Btw, the concept of checksumming rows is kinda new to me.
 I needed this to store passwords in a table, so sorry if I
 can't be more help.  But I am a little bit curious: why is it
 needed?  Simple checksumming (crc32/md5) does not guard against
 malicious changes to data, only hardware failures, and today's
 hardware has checksumming built in itself...  It would probably
 make more sense to do some pgp/gpg-style signing so you would
 get some authenticity too, but this is hard to implement right.

1.) Checksumming a row will alert you when glitches have changed data. This happened twice 
in 3 years to me with my previous system (with top-end hardware!), probably 
due to file system or hardware failures. There is no way to find out whether 
such a glitch has happened other than regularly checking the checksums. Despite all 
progress in hardware, these errors still happen, and I have these occurrences well 
documented. Most people will never notice, as they do not do such checking.

2.) We had problems before with tunneled IP connections and corrupted data. These 
errors are very rare, but again, they can happen - the more complex your network setup 
is, the more likely you might get a glitch or two per year. I never found out what to 
blame: the protocol implementation, the server, the client ...
With large packet sizes, the checksumming the network protocols use is not as 
collision-proof as one might wish. The same CRC works more reliably with small amounts 
of data than with larger amounts.

3.) This checksumming helps to check whether a complex database setup with lots of 
triggers and interdependencies really stores the data the way it is supposed to, as you 
can do the same calculation on the client and compare after commit. Helps a lot 
while testing such a setup.
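
A minimal sketch of the scheme (table name is hypothetical, and current
PostgreSQL releases have no built-in md5() function, so a contrib module
or user-defined C function would have to supply the digest):

```sql
-- Store a per-row digest alongside the data
CREATE TABLE patient_record (
    id      serial,
    record  text,
    digest  text   -- set to md5(record) by the client at INSERT time
);

-- Verification after commit: recompute the digest server-side
-- and flag any row where it no longer matches the stored one
SELECT id FROM patient_record WHERE digest <> md5(record);
```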

Horst




Re: [HACKERS] relation ### modified while in use

2000-10-23 Thread Hiroshi Inoue


Philip Warner wrote:

 At 15:29 23/10/00 +0900, Hiroshi Inoue wrote:
 
 If we have a mechanism to acquire a share lock on a tuple,we
 could use it for managing system info generally. However the
 only allowed lock on a tuple is exclusive.  Access(Share/Exclusive)
 Lock on tables would give us a restricted solution about pg_class
 tuples.
 

 Don't we have this ability? What about taking a RowShare lock on the
 pg_class tuple whenever you read from the table; then requiring schema
 updates take a RowExclusive lock on the pg_class tuple?


Both RowShare and RowExclusive locks are table-level
locks. The implementation of tuple-level locking is
quite different from that of table-level locking.
The information for table-level locking is held in shared
memory. OTOH the information for tuple-level locking
is held in the tuple itself, i.e. the transaction (t_xmax)
that is updating/deleting/selecting-for-update the tuple.
If other backends are about to update/delete/select
for update a tuple, they check the information in the
tuple, and if the tuple is being updated/... they wait until
the end of that transaction (t_xmax).
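
The one tuple-level lock that does exist is the exclusive one taken by
SELECT ... FOR UPDATE; a minimal sketch (table name hypothetical):

```sql
BEGIN;
-- Takes the exclusive tuple-level lock: t_xmax in the tuple
-- header records this transaction as the locker
SELECT * FROM t1 WHERE id = 1 FOR UPDATE;
-- Another backend's UPDATE/DELETE/SELECT FOR UPDATE of this row
-- now waits until this transaction commits or aborts
COMMIT;
```

There is no shared counterpart at tuple level, which is why pg_class
tuples cannot simply be share-locked this way.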

Regards.
Hiroshi Inoue


Re: AW: AW: [HACKERS] relation ### modified while in use

2000-10-23 Thread Hiroshi Inoue


Zeugswetter Andreas SB wrote:

What I'm proposing is that once an xact has touched a
table, other xacts should not be able to apply schema updates to that
table until the first xact commits.
  
   No, this would mean too many locks, and would leave the dba with hardly a
   chance to alter a table.
  
 
  Are there many applications which have many SELECT statements(without
  FOR UPDATE) in one tx ?

 Why not ?


It seems to me that multiple SELECT statements in a tx have little
meaning unless the tx is executed in SERIALIZABLE isolation level.


  As for locks,weak locks doesn't pass intensive locks.  Dba
  seems to be able to alter a table at any time.

 Sorry, I don't understand this sentence. Tom suggested placing a shared lock on
 any table that is accessed until end of tx. No one can alter a table until all users have
 closed their txns and not accessed the tables again. Remember that this would include
 creating an index ...


What I meant is the following though I may be misunderstanding your point.

Session-1.
 # begin;
 # declare myc cursor for select * from t1;

Session-2.
  # begin;
  # lock table t1; [blocked]

Session-3.
  # begin;
  # select * from t1; [blocked]

Session-1.
  # abort;

Then
Session-2.
  LOCK TABLE
  #

but
Session-3.
  [still blocked]

Regards.
Hiroshi Inoue


[HACKERS] Mailing list archives available?

2000-10-23 Thread Krzysztof Kowalczyk

Are the archives of the various postgresql mailing lists available
anywhere?

I know they were some time ago but I couldn't find any link on
www.postgresql.org now. I subscribed to a list mainly because I want to
monitor the progress but the amount of messages kills my inbox. It would
be really convenient for me if I could just browse the archives on web
once in a while.

Krzysztof Kowalczyk



Re: [HACKERS] Mailing list archives available?

2000-10-23 Thread The Hermit Hacker


http://www.postgresql.org/mhonarc has them all listed .. not sure how to
get there from the Web site ... Vince?

On Mon, 23 Oct 2000, Krzysztof Kowalczyk wrote:

 Are mailing list archives of various postgresql mailing list available
 anywhere?
 
 I know they were some time ago but I couldn't find any link on
 www.postgresql.org now. I subscribed to a list mainly because I want to
 monitor the progress but the amount of messages kills my inbox. It would
 be really convenient for me if I could just browse the archives on web
 once in a while.
 
 Krzysztof Kowalczyk
 
 

Marc G. Fournier   ICQ#7615664   IRC Nick: Scrappy
Systems Administrator @ hub.org 
primary: [EMAIL PROTECTED]   secondary: scrappy@{freebsd|postgresql}.org 




Re: [GENERAL] Re: [HACKERS] My new job

2000-10-23 Thread Jan Wieck

Peter Eisentraut wrote:
 Bruce Momjian writes:

  After careful consideration, I have decided to accept a job with Great
  Bridge.

 Whatever happened to this:

 Date: Tue, 09 May 2000 15:19:48 -0400
 From: Tom Lane [EMAIL PROTECTED]
 To: Ross J. Reedstrom [EMAIL PROTECTED]
 Cc: PostgreSQL-general [EMAIL PROTECTED]
 Subject: Re: [GENERAL] Steering committee responce to Great Bridge LLC

 : One thing we have agreed to is that there must not be an unseemly fraction
 : of core members working for the same company.  With six people on core,
 : probably about two working at the same company would be a reasonable
 : limit.

Been in Poland for a week, so pardon for the delay.

Initially  it  was (if I recall correctly) Vadim's and my idea.
The main reason behind it  wasn't  to  avoid  influence  from
commercial   entities  on  core.  We've  all  been  working
together for years as a group with great honour and trust,
so  the  aims of all core members were never questioned.  We
just decided on this "fraction" limit to avoid any hiring
looking like a takeover.

This  world  is  spinning a little fast at the moment. Let me
repeat what I said to a person I met  last  week  in  Poland,
during  a  PostgreSQL  conference  in  Wierzba. We have a BSD
license and now I know a good reason why we kept it  all  the
time.  With  that  license  in  place,  there's absolutely no
reason to panic right now. Nothing can be taken away, and  if
things go wrong in the future, those left in the "OpenSource"
corner can start from our last official release  again  -  be
sure  I'll be somewhere in that corner, even if it might take
some time before I can surface again. And I'm sure I'll  meet
all those I loved to work with together in that corner again.

Never underestimate the power of Open Source.


Jan

--

#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #





[HACKERS] Re: Add support for xti.h

2000-10-23 Thread Tom Lane

 Actually, given your description of the problem, I'm half inclined
 to revert the whole patch and instead make configure's test for
 availability of <netinet/tcp.h> first include <netinet/in.h>, so
 that that configure test will succeed on IRIX etc.

Pete,
  After looking at this I'm confused again.  The configure test
consists of seeing whether cpp will process

#include <netinet/tcp.h>

without complaint.  I can well believe that the full C compilation
process will generate errors if <netinet/tcp.h> is included without
also including <netinet/in.h>, but it's a little harder to believe
that cpp alone will complain.  Could you double-check this?

It would be useful to look at the config.log file generated by the
configure run that's reporting tcp.h isn't found.  It should contain
the error messages generated by failed tests.

regards, tom lane



[HACKERS] Re: Add support for xti.h

2000-10-23 Thread Tom Lane

Pete Forman [EMAIL PROTECTED] writes:
 I've done a bit more research.  xti.h was the correct place to find
 TCP_NODELAY in UNIX98/SUSv2.  However, in the Austin Group draft of the
 next version of POSIX and UNIX0x/SUSv3, XTI has been dropped and
 <netinet/tcp.h> officially included.

OK, thanks for following up on that.

 to revert the whole patch and instead make configure's test for
 availability of <netinet/tcp.h> first include <netinet/in.h>, so
 that that configure test will succeed on IRIX etc.

 I agree with this.  Back out the patch and update configure.in.

Will do.

regards, tom lane



[HACKERS] Great Bridge is hiring!

2000-10-23 Thread Ned Lilly

All,

Apologies for the nature of this post - I think/hope this will be of 
general interest to the PostgreSQL community:

Great Bridge is hiring.  Big time.

We're particularly interested in what we're calling "knowledge 
engineers" - the people who will work with paying Great Bridge customers 
to troubleshoot technical issues with PostgreSQL and other open source 
technologies, work on the fix themselves, and get paid to hack on 
PostgreSQL and other projects in their "down" time.  Here's the position 
description from our website:

--

This highly specialized engineer will be the front line of Great 
Bridge's professional support services, working to troubleshoot and 
resolve customers' technical issues. You will communicate with customers 
(email, live chat, phone) to troubleshoot and resolve problems; work 
closely with engineering staff on identifying bugs and scheduling fixes; 
and hack at Great Bridge-supported open source projects as available. 
Experience desired is 7-10 years of database programming or 
administration and deep expertise in at least one major RDBMS 
(PostgreSQL, Oracle, DB2, Microsoft SQL Server, Sybase, Informix).  Unix 
orientation is critical, as well as experience in 2 - 3 of the following 
programming languages: C, C++, Perl, PHP, Python and Tcl/Tk.  A 
demonstrated ability to work in small teams in a cooperative environment 
with strong customer service skills is required.

--

Great Bridge is headquartered in Norfolk, Virginia - which boasts a mild 
climate, moderate cost of living, and easy access to lots of water, 
including the Atlantic Ocean, the Chesapeake Bay, and lots of rivers and 
inlets.  For more info on the region, see www.hamptonroads.com.

Great Bridge currently has 32 full-time employees and is growing fast.  
The knowledge engineer positions are located in Norfolk, and offer very 
competitive salaries, stock options, and comprehensive benefits.  If you 
love PostgreSQL and open source, and want to get in on the ground floor 
of a leading open source software company, please contact me off-list 
and let's talk.

Thanks,
Ned

-- 

Ned Lilly                      e: [EMAIL PROTECTED]
Vice President                 w: www.greatbridge.com
Evangelism / Hacker Relations  v: 757.233.5523
Great Bridge, LLC              f: 757.233.




[HACKERS] Re: [GENERAL] Great Bridge is hiring!

2000-10-23 Thread Dominic J. Eidson

On Mon, 23 Oct 2000, Ned Lilly wrote:

 Date: Mon, 23 Oct 2000 10:53:48 -0400
 From: Ned Lilly [EMAIL PROTECTED]
 To: [EMAIL PROTECTED], [EMAIL PROTECTED]
 Subject: [GENERAL] Great Bridge is hiring!

But for what? :)


-- 
Dominic J. Eidson
"Baruk Khazad! Khazad ai-menu!" - Gimli
---
http://www.the-infinite.org/  http://www.the-infinite.org/~dominic/