Re: [HACKERS] pg_dump / Unique constraints

2000-11-22 Thread Philip Warner

At 15:50 22/11/00 +0800, Christopher Kings-Lynne wrote:
>I've been examining the pg_dump source and output, and I've come to the
>conclusion that I can modify it so that UNIQUE constraints appear as part of
>the CREATE TABLE statement, rather than as a separate CREATE INDEX.
...
>Is there any problem with me working on this?

I actually don't think it's a good idea to force things to work that way. 

Perhaps as an *option*, but even then I'd be inclined to append them as a
series of 'ALTER TABLE ADD CONSTRAINT...' statements.



Philip Warner| __---_
Albatross Consulting Pty. Ltd.   |/   -  \
(A.B.N. 75 008 659 498)  |  /(@)   __---_
Tel: (+61) 0500 83 82 81 | _  \
Fax: (+61) 0500 83 82 82 | ___ |
Http://www.rhyme.com.au  |/   \|
 |----
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/



RE: [HACKERS] pg_dump / Unique constraints

2000-11-22 Thread Christopher Kings-Lynne

At 15:50 22/11/00 +0800, Christopher Kings-Lynne wrote:
> >I've been examining the pg_dump source and output, and I've come to the
> >conclusion that I can modify it so that UNIQUE constraints
> appear as part of
> >the CREATE TABLE statement, rather than as a separate CREATE INDEX.
> ...
> >Is there any problem with me working on this?
>
> I actually don't think it's a good idea to force things to work that way.

Why, exactly?

What's the difference between this:

--
create table test (
 a int4,
 constraint "name" unique (a)
)
--

and this:

--
create table test (
 a int4
)
create unique index "name" on "test" using btree ( "a" "int4_ops" );
--

I note that when a table is dropped, any unique constraints (in fact all
indices) associated with it are also dropped...

> Perhaps as an *option*, but even then I'd be inclined to append them as a
> series of 'ALTER TABLE ADD CONSTRAINT...' statements.

As far as I can tell, Postgres 7.0.3 only supports adding fk constraints.
The CVS version seems to support adding CHECK constraints, but other than
that, it has to be added as an index.  If you're a database user, it's
conceptually better to see right in your table that you've added a named (or
not) unique constraint, rather than noticing at the bottom of the file that
there's some unique index on one of your columns (IMHO).

Chris




RE: [HACKERS] pg_dump / Unique constraints

2000-11-22 Thread Philip Warner

At 16:33 22/11/00 +0800, Christopher Kings-Lynne wrote:
>At 15:50 22/11/00 +0800, Christopher Kings-Lynne wrote:
>> >I've been examining the pg_dump source and output, and I've come to the
>> >conclusion that I can modify it so that UNIQUE constraints
>> appear as part of
>> >the CREATE TABLE statement, rather than as a separate CREATE INDEX.
>> ...
>> >Is there any problem with me working on this?
>>
>> I actually don't think it's a good idea to force things to work that way.
>
>Why, exactly?

Having now looked at the code and seen that PK constraints are already
dumped in the table definition, I guess doing unique constraints in the
same way is no worse.

My main concern is that I'd like pg_dump to be able to separate out the
various parts of the schema, and this includes constraints. The ability to
add/drop constraints at any part of the restoration process would be very
nice. The latest incarnations of pg_dump/pg_restore allow people (and
pg_dump/restore) to choose what to restore, and even to define an ordering
for them - and having the constraints as separate items would be a great
benefit. One example of the problems that I'd like to avoid is in loading
data via INSERT statements - doing:

   Create Table...
   Insert many rows...
   Add Uniqueness Constraint

is *substantially* faster than INSERTs on a table with constraints already
defined.
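Sketching that ordering with the unique-index form pg_dump actually emits today (since ALTER TABLE ... ADD CONSTRAINT UNIQUE isn't supported, the index form stands in for the constraint; the table and index names are just the ones from earlier in this thread):

```sql
-- Illustrative dump ordering, not actual pg_dump output:
-- create the bare table, bulk-load, then constrain.
create table test (
 a int4
);

insert into test values (1);
insert into test values (2);
-- ... many more rows, each skipping any per-row index update ...

-- uniqueness enforced only once, after the load
create unique index "name" on "test" using btree ( "a" "int4_ops" );
```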

At the current time we don't even have an 'ALTER TABLE...' that works
with all constraint types, so my hopes are probably in vain. I don't
suppose you feel like working on 'ALTER TABLE...ADD/DROP CONSTRAINT...' do
you?


>What's the difference between this:
>
>--
>create table test (
> a int4,
> constraint "name" unique (a)
>)
>--
>
>and this:
>
>--
>create table test (
> a int4
>)
>create unique index "name" on "test" using btree ( "a" "int4_ops" );

The fact that pg_dump/restore will be able to create the index at the end
of the data load.


>
>As far as I can tell, Postgres 7.0.3 only supports adding fk constraints.
>The CVS version seems to support adding CHECK constraints, but other than
>that, it has to be added as an index.

Sounds like a good thing to work on ;-}


>If you're a database user, it's
>conceptually better to see right in your table that you've added a named (or
>not) unique constraint, rather than noticing at the bottom of the file that
>there's some unique index on one of your columns (IMHO).

This is a good argument for modifying the output of '\d' in psql. It is
also probably a valid argument for a new option on pg_dump to specify if
constraints should be kept separate from table definitions. Then we could
also move FK constraints to the end.




Philip Warner| __---_
Albatross Consulting Pty. Ltd.   |/   -  \
(A.B.N. 75 008 659 498)  |  /(@)   __---_
Tel: (+61) 0500 83 82 81 | _  \
Fax: (+61) 0500 83 82 82 | ___ |
Http://www.rhyme.com.au  |/   \|
 |----
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/



Re: [HACKERS] Crash during WAL recovery?

2000-11-22 Thread Robert B. Easter

On Wednesday 22 November 2000 00:54, Tom Lane wrote:
> > I don't know from your post if you thought I was adding
> > to the criticism or not, but I can say with certainty I wasn't.
>
> No, I saw that you understood perfectly, I just wanted to add another
> two cents...
>
> > I'm not denigrating the current efforts, because PG documention's pretty
> > good all things considered.  But some volunteers devoted to improving
> > the docs could accomplish a lot.
>
> Yup.  Anyone out there with the time and interest?
>
>   regards, tom lane

I might be interested in helping with it.  What's involved (DocBook, SGML)?

-- 
 Robert B. Easter  [EMAIL PROTECTED] -
- CompTechNews Message Board   http://www.comptechnews.com/ -
- CompTechServ Tech Services   http://www.comptechserv.com/ -
-- http://www.comptechnews.com/~reaster/ 



Re: [HACKERS] Crash during WAL recovery?

2000-11-22 Thread Thomas Lockhart

> I might be interested in helping with it.  What's involved (DocBook, SGML)?

Yup. The PostgreSQL source tree has a docs directory with all of the
sources for the docs. I use emacs for editing, and several other options
are discussed in the appendix on documentation in the doc set.

  - Thomas



Re: [HACKERS] pg_dump / Unique constraints

2000-11-22 Thread Bruce Momjian

> At 16:33 22/11/00 +0800, Christopher Kings-Lynne wrote:
> >At 15:50 22/11/00 +0800, Christopher Kings-Lynne wrote:
> >> >I've been examining the pg_dump source and output, and I've come to the
> >> >conclusion that I can modify it so that UNIQUE constraints
> >> appear as part of
> >> >the CREATE TABLE statement, rather than as a separate CREATE INDEX.
> >> ...
> >> >Is there any problem with me working on this?
> >>
> >> I actually don't think it's a good idea to force things to work that way.
> >
> >Why, exactly?
> 
> Having now looked at the code and seen that PK constraints are already
> dumped in the table definition, I guess doing unique constraints in the
> same way is no worse.

I have a good reason not to use UNIQUE.  As I remember, pg_dump creates
the tables, copies in the data, then creates the indexes.  This is much
faster than doing the copy with the indexes already created.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



Re: [HACKERS] Crash during WAL recovery?

2000-11-22 Thread Hannu Krosing

Tom Lane wrote:
> 
> More generally, a lot of the PG documentation could use the attention
> of a professional copy editor --- and I'm sad to say that the parts
> contributed by native English speakers aren't necessarily any cleaner
> than the parts contributed by those who are not. 

The difference between a native English speaker and an English writer is that 
the writer usually does not mix up dye and die ;)

But afaik there is no such language as English, so first we would need to 
agree on which of the many Englishes the docs will be in.

I guess they are currently in "International" English, which is quite free
about grammar, spelling and punctuation.

I would hate it if we all started to write in some more rigid dialect. 

I've heard that some of these even make you put the full stop at the end
of a sentence before the closing parenthesis (like this.)

They claim it is for "typographical aesthetics" ;)

--
Hannu



Re: [HACKERS] pg_dump / Unique constraints

2000-11-22 Thread Tom Lane

Bruce Momjian <[EMAIL PROTECTED]> writes:
> I have a good reason not to use UNIQUE.  As I remember, pg_dump creates
> the tables, copies in the data, then creates the indexes.  This is much
> faster than doing the copy with the indexes already created.

Right, that's the real implementation reason for doing it in two steps.

There's also a more abstract concern: ideally, pg_dump's schema output
should be the same as what the user originally entered.  Converting a
table and separate index declaration into one statement is not any more
correct than doing the reverse.  Thus the real problem here is to know
which way the index got created to begin with.  Currently we do not
know that, because (you guessed it) we have not got a declarative
representation for the UNIQUE constraint, only the execution-oriented
fact that the unique index exists.

My feeling is that there should be a stored indication someplace
allowing us to deduce exactly what caused the index to be created.
An ad-hoc way is to add another field to pg_index, but it might be
cleaner to create a new system catalog that covers all types of
constraint.

The next question is what pg_dump should emit, considering that it has
two conflicting goals: it wants to restore the original state of the
constraint catalog *but* also be efficient about loading data.  ALTER
TABLE ADD CONSTRAINT seems to be an essential requirement there.
But it seems to me that it'd be really whizzy if there were two
different styles of output, one for a full dump (CREATE, load data,
add constraints) and one for schema-only dumps that tries to reproduce
the original table declaration with embedded constraint specs.  That
would be nicer for documentation and editing purposes.

regards, tom lane



Re: [HACKERS] pg_dump / Unique constraints

2000-11-22 Thread Tom Lane

I said:
> But it seems to me that it'd be really whizzy if there were two
> different styles of output, one for a full dump (CREATE, load data,
> add constraints) and one for schema-only dumps that tries to reproduce
> the original table declaration with embedded constraint specs.  That
> would be nicer for documentation and editing purposes.

I just had an idea about this, based on the hackery that pg_dump
currently does with triggers: what if there were an ALTER command that
allows disabling and re-enabling constraint checking and index building?
Then the dump script could look like

full CREATE TABLE with all constraints shown

ALTER TABLE DISABLE CONSTRAINTS

COPY data in

ALTER TABLE ENABLE CONSTRAINTS

and there wouldn't have to be any difference between schema and full
dump output for CREATE TABLE.  If we were really brave (foolish?)
the last step could be something like

ALTER TABLE ENABLE CONSTRAINTS NOCHECK

which'd suppress the scan for constraint violations that a normal
ALTER ADD CONSTRAINT would want to do.
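Written out against the test table from earlier in the thread, such a dump might read as follows (the DISABLE/ENABLE CONSTRAINTS syntax is purely hypothetical - no such command exists in any release):

```sql
-- Hypothetical dump script; DISABLE/ENABLE CONSTRAINTS is a
-- proposed command, not an implemented one.
create table test (
 a int4,
 constraint "name" unique (a)
);

alter table test disable constraints;

copy test from stdin;
-- ... data rows ...

-- NOCHECK would suppress the scan for violations that a normal
-- ALTER ... ADD CONSTRAINT would perform
alter table test enable constraints nocheck;
```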

It also occurs to me that we should not consider pg_dump as the only
area that needs work to fix this.  Why shouldn't pg_dump simply do

full CREATE TABLE with all constraints shown
CREATE all indexes too

-- if not schema dump then:
COPY data in

The answer to that of course is that cross-table constraints (like
REFERENCES clauses) must be disabled while loading the data, or the
intermediate states where only some tables have been loaded are likely
to fail.  So we do need some kind of DISABLE CONSTRAINT mode to make
this work.  But it's silly that pg_dump has to go out of its way to
create the indexes last --- if COPY has a performance problem there,
we should be fixing COPY, not requiring pg_dump to contort itself.
Why can't COPY recognize for itself that rebuilding the indexes after
loading data is a better strategy than incremental index update?
(The simplest implementation would restrict this to happen only if the
table is empty when COPY starts, which'd be sufficient for pg_dump.)

regards, tom lane



Re: [HACKERS] pg_dump / Unique constraints

2000-11-22 Thread Bruce Momjian

> The answer to that of course is that cross-table constraints (like
> REFERENCES clauses) must be disabled while loading the data, or the
> intermediate states where only some tables have been loaded are likely
> to fail.  So we do need some kind of DISABLE CONSTRAINT mode to make
> this work.  But it's silly that pg_dump has to go out of its way to
> create the indexes last --- if COPY has a performance problem there,
> we should be fixing COPY, not requiring pg_dump to contort itself.
> Why can't COPY recognize for itself that rebuilding the indexes after
> loading data is a better strategy than incremental index update?
> (The simplest implementation would restrict this to happen only if the
> table is empty when COPY starts, which'd be sufficient for pg_dump.)

COPY would have to check to see if the table is already empty.  You can
COPY into a table that already has data.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



[HACKERS] regressplans failures

2000-11-22 Thread Peter Eisentraut

I ran the src/test/regressplans.sh script, which runs the regression tests
under exclusion of various join and scan types.  Without merge joins (-fm)
I get an assertion failure in opr_sanity.

The query is:

  SELECT p1.oid, p1.aggname
  FROM pg_aggregate as p1
  WHERE p1.aggfinalfn = 0 AND p1.aggfinaltype != p1.aggtranstype;

(The plan for this query is a seq scan on pg_aggregate.)

The backtrace is:

#0  0x4012b131 in __kill () from /lib/libc.so.6
#1  0x4012aead in raise (sig=6) at ../sysdeps/posix/raise.c:27
#2  0x4012c534 in abort () at ../sysdeps/generic/abort.c:88
#3  0x8149b98 in ExceptionalCondition (
conditionName=0x81988a0 "!(((file) > 0 && (file) < (int) SizeVfdCache
&& VfdCache[file].fileName != ((void *)0)))", exceptionP=0x81b93c8,
detail=0x0,
fileName=0x8198787 "fd.c", lineNumber=851) at assert.c:70
#4  0x8105e6e in FileSeek (file=33, offset=0, whence=2) at fd.c:851
#5  0x810e692 in _mdnblocks (file=33, blcksz=8192) at md.c:1095
#6  0x810de9b in mdnblocks (reln=0x403a35f4) at md.c:667
#7  0x810ec80 in smgrnblocks (which=0, reln=0x403a35f4) at smgr.c:441
#8  0x8103303 in RelationGetNumberOfBlocks (relation=0x403a35f4)
at xlog_bufmgr.c:1161
#9  0x8072b04 in initscan (scan=0x822af94, relation=0x403a35f4, atend=0,
nkeys=0, key=0x0) at heapam.c:128
#10 0x8073fa0 in heap_beginscan (relation=0x403a35f4, atend=0,
snapshot=0x822b438, nkeys=0, key=0x0) at heapam.c:811
#11 0x80c69e4 in ExecBeginScan (relation=0x403a35f4, nkeys=0, skeys=0x0,
isindex=0, dir=ForwardScanDirection, snapshot=0x822b438) at
execAmi.c:156
#12 0x80c6986 in ExecOpenScanR (relOid=16960, nkeys=0, skeys=0x0,
isindex=0 '\000', dir=ForwardScanDirection, snapshot=0x822b438,
returnRelation=0xb074, returnScanDesc=0xb078) at execAmi.c:104
#13 0x80d098c in InitScanRelation (node=0x822ae60, estate=0x822aeec,
scanstate=0x822b084) at nodeSeqscan.c:172
#14 0x80d0a62 in ExecInitSeqScan (node=0x822ae60, estate=0x822aeec,
parent=0x0)
at nodeSeqscan.c:242
#15 0x80c917f in ExecInitNode (node=0x822ae60, estate=0x822aeec,
parent=0x0)
at execProcnode.c:152
#16 0x80c7be9 in InitPlan (operation=CMD_SELECT, parseTree=0x823b108,
plan=0x822ae60, estate=0x822aeec) at execMain.c:621
#17 0x80c765b in ExecutorStart (queryDesc=0x822b41c, estate=0x822aeec)
at execMain.c:135
#18 0x8111439 in ProcessQuery (parsetree=0x823b108, plan=0x822ae60,
dest=Remote) at pquery.c:263
#19 0x810ffea in pg_exec_query_string (
query_string=0x823a548 "SELECT p1.oid, p1.aggname\nFROM pg_aggregate
as p1\nWHERE p1.aggfinalfn = 0 AND p1.aggfinaltype != p1.aggtranstype;",
dest=Remote,
parse_context=0x81f13b0) at postgres.c:818


This failure is completely reproducible by running

src/test/regress$ PGOPTIONS=-fm ./pg_regress opr_sanity

The problem also happens with the setting '-fn -fm', but *not* with the
setting '-fm -fh'.  (Adding or removing -fs or -fi doesn't affect the
outcome.)


The only other two failures are the join test when both merge and hash
joins are disabled and alter_table without index scans.  Both seem
harmless; see attached diffs.

The former is related to outer joins apparently not working with nest
loops.  The latter is a missing ORDER BY, which I'm inclined to fix.

-- 
Peter Eisentraut  [EMAIL PROTECTED]   http://yi.org/peter-e/


*** ./expected/alter_table.out  Tue Aug 29 13:22:31 2000
--- ./results/alter_table.out   Wed Nov 22 15:49:21 2000
***
*** 103,128 
  SELECT unique1 FROM ten_k WHERE unique1 < 20;
   unique1 
  -
!0
!1
!2
!3
 4
!5
 6
!7
!8
 9
!   10
!   11
!   12
13
!   14
!   15
!   16
!   17
!   18
19
  (20 rows)
  
  -- 20 values, sorted 
--- 103,128 
  SELECT unique1 FROM ten_k WHERE unique1 < 20;
   unique1 
  -
!   18
!   15
 4
!2
!1
 6
!   14
 9
!8
!5
!3
13
!   12
19
+   17
+   11
+7
+   10
+   16
+0
  (20 rows)
  
  -- 20 values, sorted 
***
*** 262,272 
  SELECT unique1 FROM tenk1 WHERE unique1 < 5;
   unique1 
  -
!0
!1
 2
 3
!4
  (5 rows)
  
  -- FOREIGN KEY CONSTRAINT adding TEST
--- 262,272 
  SELECT unique1 FROM tenk1 WHERE unique1 < 5;
   unique1 
  -
!4
 2
+1
 3
!0
  (5 rows)
  
  -- FOREIGN KEY CONSTRAINT adding TEST

==



*** ./expected/join.out Mon Nov  6 20:23:47 2000
--- ./results/join.out  Wed Nov 22 15:37:01 2000
***
*** 214,226 
WHERE t1.a = t2.d;
   xxx | a | e  
  -+---+
-  | 0 |   
   | 1 | -1
   | 2 |  2
   | 2 |  4
   | 3 | -3
   |

Re: [HACKERS] pg_dump / Unique constraints

2000-11-22 Thread Tom Lane

Bruce Momjian <[EMAIL PROTECTED]> writes:
>> Why can't COPY recognize for itself that rebuilding the indexes after
>> loading data is a better strategy than incremental index update?
>> (The simplest implementation would restrict this to happen only if the
>> table is empty when COPY starts, which'd be sufficient for pg_dump.)

> COPY would have to check to see if the table is already empty.

That's what I said ... or intended to say, anyway.  If there's already
data then the tradeoff between incremental update and index rebuild is
not so obvious, and the easiest first implementation would just be to
always do incremental update in that case.  Or we could add an option
to the COPY command to tell it which to do, and let the user do the
guessing ;-)

There'd also be a locking issue, now that I think about it: to do an
index rebuild, we'd have to be sure that no other transaction is adding
data to the table at the same time.  So we'd need to get a stronger lock
than a plain write lock to do it that way.  A COPY option is sounding
better and better...
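A user-visible knob along those lines might look like this (entirely hypothetical syntax - no such COPY option exists):

```sql
-- Hypothetical COPY options letting the user pick the strategy:
copy test from stdin with rebuild indexes;     -- bulk rebuild after load
copy test from stdin with incremental indexes; -- per-row update (default)
```

The rebuild form would presumably also have to take the stronger table lock discussed above.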

regards, tom lane



Re: [HACKERS] regressplans failures

2000-11-22 Thread Tom Lane

Peter Eisentraut <[EMAIL PROTECTED]> writes:
> #3  0x8149b98 in ExceptionalCondition (
> conditionName=0x81988a0 "!(((file) > 0 && (file) < (int) SizeVfdCache
> && VfdCache[file].fileName != ((void *)0)))", exceptionP=0x81b93c8,
> detail=0x0,
> fileName=0x8198787 "fd.c", lineNumber=851) at assert.c:70
> #4  0x8105e6e in FileSeek (file=33, offset=0, whence=2) at fd.c:851

I'm guessing this is a variant of the problem Philip Warner reported
yesterday.  Probably WAL-related.  Vadim?

> The only other two failures are the join test when both merge and hash
> joins are disabled and alter_table without index scans.  Both seem
> harmless; see attached diffs.
> The former is related to outer joins apparently not working with nest
> loops.  The latter is a missing ORDER BY, which I'm inclined to fix.

FULL JOIN currently is only implementable by mergejoin (if you can
figure out how to do it with a nest or hash join, I'm all ears...).
I guess it's a bug that the planner honors enable_mergejoin = OFF
even when given a FULL JOIN query.  (At least the failure detection
code works, though ;-).)  I'll see what I can do about that.

I'd be inclined *not* to add ORDER BYs just to make regressplans produce
zero diffs in all cases.  The presence of an ORDER BY may cause the
planner to prefer presorted-output plans, thus defeating the purpose
of testing all plan types...

regards, tom lane



[HACKERS] Changes to libpgtcl

2000-11-22 Thread Jan Wieck

Hi,

I'd like make some changes on the 7.1 (to be) libpgtcl.

1.  Make the large object access null-byte safe when
    libpgtcl is compiled against an 8.0 or higher version of
    Tcl.

    This would mean that a libpgtcl.so built on a system
    with Tcl 8.0 or higher will no longer load into a
    pre-8.0 Tcl interpreter. Since Tcl's current version is
    8.3, I think that's long enough for backward compatibility.

2.  Add a "pg_execute" command that behaves almost like the
    "spi_exec" of PL/Tcl.

Any objections?


Jan

--

#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #





[HACKERS] PL/Perl on Solaris

2000-11-22 Thread Wade D. Oberpriller

Hello,

I posted this message on pgsql-general, but didn't get a lot of feedback. I am
running into problems using PL/Perl on Solaris 2.5.1.

I have PostgreSQL v7.0.2, Perl v 5.005_03 (built as a shared library), and am
using gcc v2.7.2.2 to compile all of the source.

I include the paths to libperl.so and plperl.so in my LD_LIBRARY_PATH
environment variable. 

I can create the plperl language and create functions that are defined as using
plperl. When I attempt to execute the function the postgres server crashes with 
a status of 11.

If anybody has any clues, hints, previous experiences about this scenario, your
help would be greatly appreciated.

Wade Oberpriller
StorageTek
[EMAIL PROTECTED]




[HACKERS] Question about performance of planner

2000-11-22 Thread Brian Hirt

Hi,

I have a question about the performance of the planner in 7.1.  I've been
testing the 11/21 snapshot of the database just to get an idea of how it
will work for me when I upgrade from 7.0.2.  I've noticed that some queries 
are taking much longer and I've narrowed it down (i think) to the planner.

I've run an identical query against 7.02 and 7.1.  Both databases have the exact
same data, and both databases have been vacuum'd.  As you can see from below,
the 7.1 snapshot is spending 97% of the total time planning the query, where
the 7.0.2 version is spending only 27% of the total time planning the query.

If anyone is interested in this, I'll be happy to supply you with information
that would help track this down.


Thanks.

7.1-snapshot
PLANNER STATISTICS
! system usage stats:
!   7.748602 elapsed 5.02 user 0.20 system sec
!   [5.09 user 0.21 sys total]
!   0/0 [0/0] filesystem blocks in/out
!   47/1246 [349/1515] page faults/reclaims, 0 [0] swaps
!   0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent
!   0/0 [0/0] voluntary/involuntary context switches
! postgres usage stats:
!   Shared blocks: 20 read,  0 written, buffer hit rate = 99.94%
!   Local  blocks:  0 read,  0 written, buffer hit rate = 0.00%
!   Direct blocks:  0 read,  0 written
EXECUTOR STATISTICS
! system usage stats:
!   0.317000 elapsed 0.16 user 0.01 system sec
!   [5.25 user 0.22 sys total]
!   0/0 [0/0] filesystem blocks in/out
!   328/364 [677/1879] page faults/reclaims, 0 [0] swaps
!   0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent
!   0/0 [0/0] voluntary/involuntary context switches
! postgres usage stats:
!   Shared blocks:160 read,  0 written, buffer hit rate = 97.73%
!   Local  blocks:  0 read,  0 written, buffer hit rate = 0.00%
!   Direct blocks:  0 read,  0 written


7.0.2
! Planner Stats:
! system usage stats:
!   0.051438 elapsed 0.05 user 0.00 system sec
!   [0.33 user 0.05 sys total]
!   0/0 [0/0] filesystem blocks in/out
!   0/51 [680/837] page faults/reclaims, 0 [0] swaps
!   0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent
!   0/0 [0/0] voluntary/involuntary context switches
! postgres usage stats:
!   Shared blocks:  0 read,  0 written, buffer hit rate = 100.00%
!   Local  blocks:  0 read,  0 written, buffer hit rate = 0.00%
!   Direct blocks:  0 read,  0 written
! Executor Stats:
! system usage stats:
!   0.136506 elapsed 0.13 user 0.00 system sec
!   [0.46 user 0.05 sys total]
!   0/0 [0/0] filesystem blocks in/out
!   0/6 [680/843] page faults/reclaims, 0 [0] swaps
!   0 [0] signals rcvd, 0/0 [0/0] messages rcvd/sent
!   0/0 [0/0] voluntary/involuntary context switches
! postgres usage stats:
!   Shared blocks: 98 read,  0 written, buffer hit rate = 98.98%
!   Local  blocks:  0 read,  0 written, buffer hit rate = 0.00%
!   Direct blocks:  0 read,  0 written


-- 
The world's most ambitious and comprehensive PC game database project.

  http://www.mobygames.com



Re: [HACKERS] Question about performance of planner

2000-11-22 Thread Tom Lane

Brian Hirt <[EMAIL PROTECTED]> writes:
> I have a question about the performance of the planner in 7.1.  I've been
> testing the 11/21 snapshot of the database just to get an idea of how it
> will work for me when I upgrade from 7.0.2.  I've noticed that some queries 
> are taking much longer and I've narrowed it down (i think) to the planner.

Does EXPLAIN show the same query plan in both cases?

> If anyone is interested in this, I'll be happy to supply you with information
> that would help track this down.

Sure.  Let's see the query and the database schema ...

regards, tom lane



Re: [HACKERS] Crash during WAL recovery?

2000-11-22 Thread Martin A. Marques

On Wednesday 22 November 2000 02:36, Don Baccus wrote:
>
> >More generally, a lot of the PG documentation could use the attention
> >of a professional copy editor --- and I'm sad to say that the parts
> >contributed by native English speakers aren't necessarily any cleaner
> >than the parts contributed by those who are not.  If you have the
> >time and energy to submit corrections, please fall to!
>
> This is very much true.  PG needs some good documentation volunteers.
> I'm not denigrating the current efforts, because PG documentation's pretty
> good all things considered.  But some volunteers devoted to improving
> the docs could accomplish a lot.

It would be a pleasure to help with the Spanish docs, if any help is needed.

Saludos... :-)


-- 
"And I'm happy, because you make me feel good, about me." - Melvin Udall
-
Martín Marqués  email:  [EMAIL PROTECTED]
Santa Fe - Argentinahttp://math.unl.edu.ar/~martin/
Administrador de sistemas en math.unl.edu.ar
-



Re: [HACKERS] Questions on RI spec (poss. bugs)

2000-11-22 Thread Stephan Szabo


On Tue, 21 Nov 2000, Jan Wieck wrote:

> Stephan Szabo wrote:
> >
> >There's a message on -general about a possible
> > problem in the deferred RI constraints.  He was doing a
> > sequence like:
> > begin
> >  delete
> >  insert
> > end
> > and having it fail even though the deleted key was back in
> > place at the end.
> 
> Isn't that (delete and reinsert the same PK) what the
> standard means by "triggered data change violation"?
> 
> It is a second touching of a unique matching PK. And in this
> case the standard doesn't define a behaviour; instead it says
> you cannot do so.

As Peter said, it really looks like the '99 draft means twice in a
single statement, not twice in a transaction, which is probably there to
prevent infinite loops.

> In the case of reinserting a deleted PK, does the new PK  row
> inherit the references to the old PK row? If so, an ON DELETE
> CASCADE must be suppressed - no?

I'm not sure because it's unclear to me whether RI actions are actually
deferred.  Restrict for example sounds like it occurs immediately on the
statement and it's not worded differently from others in the draft I have.
So, it's possible that the actions are supposed to occur immediately on
the statement, even if the constraint check is deferred.  I really don't
know, but it would explain a behavioral difference between restrict and
noaction that makes having both make sense (restrict prevents you from 
moving away - no action lets you move away as long as the constraint is
okay at check time).




RE: [HACKERS] Table/Column Constraints

2000-11-22 Thread Stephan Szabo


On Tue, 21 Nov 2000, Christopher Kings-Lynne wrote:

> > > Problem is that there are 5 difference types of constraints,
> > implemented in
> > > 5 different ways.  Do you want a unifed, central catalog of
> > constraints, or
> > > just for some of them, or what?
> >
> > Dunno.  Maybe a unified representation would make more sense, or maybe
> > it's OK to treat them separately.  The existing implementations of the
> > different types of constraints were done at different times, and perhaps
> > are different "just because" rather than for any good reason.  We need
> > investigation before we can come up with a reasonable proposal.
> 
> It strikes me that having a catalog (so to speak) of all contraints, with
> flags in the tables where the contraints are implemented would allow a
> separation of presentation and implementation.

Yeah, the hard part is storing enough information to recover the
constraint in an easy way without going to the implementation details;
strings aren't sufficient by themselves because they get really difficult
to maintain as tables/columns change or are dropped. Maybe a central
catalog like the above and a backend function that takes care of
formatting to text would work.  Or keeping track of the dependent objects
and re-figuring the text form (or drop constraint, or whatever) when those
objects are changed/dropped.

I think that combining different constraints is good to some extent
because there are alot of problems with many constraints (the RI ones have
problems, check constraints are currently not deferrable AFAIK,
the unique constraint doesn't actually have the correct semantics) and
maybe thinking about the whole set of them at the same time would be a
good idea.

> > > I assume that column contraints implicitly become table
> > constraints.  This
> > > will also make it easy to have global unique contraint names.
> > Actually -
> > > are the constraint  names currently unique for an entire database?
> >
> > No, and they shouldn't be --- only per-table, I think.
> 
> Oops - correct.  Wasn't paying attention.  I forgot that the table name is
> specified as part of the ALTER statement.

I'm not sure, actually; the syntax rules for the constraint name
definition seem to say that the qualified identifier of a constraint
needs to be different from the qualified identifier of any other
constraint in the same schema, so Christopher may have been right the
first time (given we don't have schemas).




Re: [HACKERS] pg_dump / Unique constraints

2000-11-22 Thread Bruce Momjian

My feeling is "Let's walk before we run."  We need psql \dt to show
primary/foreign keys and SERIAL first.


> Bruce Momjian <[EMAIL PROTECTED]> writes:
> >> Why can't COPY recognize for itself that rebuilding the indexes after
> >> loading data is a better strategy than incremental index update?
> >> (The simplest implementation would restrict this to happen only if the
> >> table is empty when COPY starts, which'd be sufficient for pg_dump.)
> 
> > COPY would have to check to see if the table is already empty.
> 
> That's what I said ... or intended to say, anyway.  If there's already
> data then the tradeoff between incremental update and index rebuild is
> not so obvious, and the easiest first implementation would just be to
> always do incremental update in that case.  Or we could add an option
> to the COPY command to tell it which to do, and let the user do the
> guessing ;-)
> 
> There'd also be a locking issue, now that I think about it: to do an
> index rebuild, we'd have to be sure that no other transaction is adding
> data to the table at the same time.  So we'd need to get a stronger lock
> than a plain write lock to do it that way.  A COPY option is sounding
> better and better...
> 
>   regards, tom lane
> 


-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026



[HACKERS] syslog output from explain looks weird...

2000-11-22 Thread Larry Rosenman

Just playing with the syslog functionality on 7.1devel, and the
explain output looks weird to me:

Nov 22 14:58:44 lerami pg-test[4005]: [2] DEBUG:  MoveOfflineLogs:
skip 0006
Nov 22 14:58:44 lerami pg-test[4005]: [3] DEBUG:  MoveOfflineLogs:
skip 0005
Nov 22 14:59:09 lerami pg-test[4005]: [4] NOTICE:  QUERY PLAN:
Nov 22 14:59:0 lerami Nov 22 14:59:09Index Scan using upslog_index on
upslog  (cost=0.00..88.65 rows=165 width=28)

seems like it should be better. 

The output at the client looks fine:
ler=# explain select * from upslog where upslogdate >='2000-11-01';
NOTICE:  QUERY PLAN:

Index Scan using upslog_index on upslog  (cost=0.00..88.65 rows=165
width=28)

EXPLAIN
ler=# 


-- 
Larry Rosenman  http://www.lerctr.org/~ler
Phone: +1 972-414-9812 (voice) Internet: [EMAIL PROTECTED]
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749



Re: [HACKERS] syslog output from explain looks weird...

2000-11-22 Thread Larry Rosenman

* Larry Rosenman <[EMAIL PROTECTED]> [001122 15:03]:
> Just playing with the syslog functionality on 7.1devel, and the
> explain output looks weird to me:
> 
> Nov 22 14:58:44 lerami pg-test[4005]: [2] DEBUG:  MoveOfflineLogs:
> skip 0006
> Nov 22 14:58:44 lerami pg-test[4005]: [3] DEBUG:  MoveOfflineLogs:
> skip 0005
> Nov 22 14:59:09 lerami pg-test[4005]: [4] NOTICE:  QUERY PLAN:
> Nov 22 14:59:0 lerami Nov 22 14:59:09Index Scan using upslog_index on
> upslog  (cost=0.00..88.65 rows=165 width=28)
> 
> seems like it should be better. 
> 
> The output at the client looks fine:
> ler=# explain select * from upslog where upslogdate >='2000-11-01';
> NOTICE:  QUERY PLAN:
> 
> Index Scan using upslog_index on upslog  (cost=0.00..88.65 rows=165
> width=28)
> 
> EXPLAIN
> ler=# 
And here is a fix.  What appears to piss off my syslogd is the
zero-character (empty) lines.  So, I added spaces to the output.  The new
client output looks like:
ler=# explain select * from upslog where upslogdate>='2000-11-01';
NOTICE:  QUERY PLAN:
 
 Index Scan using upslog_index on upslog  (cost=0.00..88.65 rows=165
width=28)

EXPLAIN
ler=# \q
$ 

and the syslog looks like:
Nov 22 15:22:56 lerami pg-test[8299]: [2] NOTICE:  QUERY PLAN:
Nov 22 15:22:56 lerami 
Nov 22 15:22:56 lerami Index Scan using upslog_index on upslog
(cost=0.00..88.65 rows=165 width=28)

And the patch is:

Index: src/backend/commands/explain.c
===
RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/commands/explain.c,v
retrieving revision 1.62
diff -c -r1.62 explain.c
*** src/backend/commands/explain.c  2000/11/12 00:36:56 1.62
--- src/backend/commands/explain.c  2000/11/22 21:16:47
***
*** 120,126 
s = Explain_PlanToString(plan, es);
if (s)
{
!   elog(NOTICE, "QUERY PLAN:\n\n%s", s);
pfree(s);
}
}
--- 120,126 
s = Explain_PlanToString(plan, es);
if (s)
{
!   elog(NOTICE, "QUERY PLAN:\n \n %s", s);
pfree(s);
}
}
-- 
Larry Rosenman  http://www.lerctr.org/~ler
Phone: +1 972-414-9812 (voice) Internet: [EMAIL PROTECTED]
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749



Re: [HACKERS] Crash during WAL recovery?

2000-11-22 Thread Thomas Lockhart

> It would be a pleasure to help with the spanish docs, if any help is needed.

There is a documentation translation effort hosted in Spain, and I'm
sure that they would welcome help to stay current (I believe that a
substantial portion of docs are already done for a recent, but perhaps
not current, set of docs). There should be a link to this from the
postgresql.org site.

- Thomas



Re: [HACKERS] syslog output from explain looks weird...

2000-11-22 Thread Larry Rosenman

* Larry Rosenman <[EMAIL PROTECTED]> [001122 15:25]:
> * Larry Rosenman <[EMAIL PROTECTED]> [001122 15:03]:
> > Just playing with the syslog functionality on 7.1devel, and the
> > explain output looks weird to me:
> > 
> > Nov 22 14:58:44 lerami pg-test[4005]: [2] DEBUG:  MoveOfflineLogs:
> > skip 0006
> > Nov 22 14:58:44 lerami pg-test[4005]: [3] DEBUG:  MoveOfflineLogs:
> > skip 0005
> > Nov 22 14:59:09 lerami pg-test[4005]: [4] NOTICE:  QUERY PLAN:
> > Nov 22 14:59:0 lerami Nov 22 14:59:09Index Scan using upslog_index on
> > upslog  (cost=0.00..88.65 rows=165 width=28)
> > 
> > seems like it should be better. 
> > 
> > The output at the client looks fine:
> > ler=# explain select * from upslog where upslogdate >='2000-11-01';
> > NOTICE:  QUERY PLAN:
> > 
> > Index Scan using upslog_index on upslog  (cost=0.00..88.65 rows=165
> > width=28)
> > 
> > EXPLAIN
> > ler=# 
> And here is a fix.  What appears to piss off my syslogd is the no
> character lines.  So, I added spaces to the output.  The new client
> output looks like:
> ler=# explain select * from upslog where upslogdate>='2000-11-01';
> NOTICE:  QUERY PLAN:
>  
>  Index Scan using upslog_index on upslog  (cost=0.00..88.65 rows=165
> width=28)
> 
> EXPLAIN
> ler=# \q
> $ 
> 
> and the syslog looks like:
> Nov 22 15:22:56 lerami pg-test[8299]: [2] NOTICE:  QUERY PLAN:
> Nov 22 15:22:56 lerami 
> Nov 22 15:22:56 lerami Index Scan using upslog_index on upslog
> (cost=0.00..88.65 rows=165 width=28)
> 
Looking some more, I found some other places that need a space (I
suspect...), so here is an updated patch.

Index: src/backend/commands/explain.c
===
RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/commands/explain.c,v
retrieving revision 1.62
diff -c -r1.62 explain.c
*** src/backend/commands/explain.c  2000/11/12 00:36:56 1.62
--- src/backend/commands/explain.c  2000/11/22 22:52:39
***
*** 110,116 
s = nodeToString(plan);
if (s)
{
!   elog(NOTICE, "QUERY DUMP:\n\n%s", s);
pfree(s);
}
}
--- 110,116 
s = nodeToString(plan);
if (s)
{
!   elog(NOTICE, "QUERY DUMP:\n \n %s", s);
pfree(s);
}
}
***
*** 120,126 
s = Explain_PlanToString(plan, es);
if (s)
{
!   elog(NOTICE, "QUERY PLAN:\n\n%s", s);
pfree(s);
}
}
--- 120,126 
s = Explain_PlanToString(plan, es);
if (s)
{
!   elog(NOTICE, "QUERY PLAN:\n \n %s", s);
pfree(s);
}
}
***
*** 149,155 
  
if (plan == NULL)
{
!   appendStringInfo(str, "\n");
return;
}
  
--- 149,155 
  
if (plan == NULL)
{
!   appendStringInfo(str, "\n ");
return;
}
  
***
*** 283,289 
 plan->startup_cost, plan->total_cost,
 plan->plan_rows, plan->plan_width);
}
!   appendStringInfo(str, "\n");
  
/* initPlan-s */
if (plan->initPlan)
--- 283,289 
 plan->startup_cost, plan->total_cost,
 plan->plan_rows, plan->plan_width);
}
!   appendStringInfo(str, "\n ");
  
/* initPlan-s */
if (plan->initPlan)
***
*** 293,299 
  
for (i = 0; i < indent; i++)
appendStringInfo(str, "  ");
!   appendStringInfo(str, "  InitPlan\n");
foreach(lst, plan->initPlan)
{
es->rtable = ((SubPlan *) lfirst(lst))->rtable;
--- 293,299 
  
for (i = 0; i < indent; i++)
appendStringInfo(str, "  ");
!   appendStringInfo(str, "  InitPlan\n ");
foreach(lst, plan->initPlan)
{
es->rtable = ((SubPlan *) lfirst(lst))->rtable;
***
*** 369,375 
  
for (i = 0; i < indent; i++)
appendStringInfo(str, "  ");
!   appendStringInfo(str, "  SubPlan\n");
foreach(lst, plan->subPlan)
{
es->rtable = ((SubPlan *) lfirst(lst))->rtable;
--- 369,375 
  
for (i = 0; i < indent; i++)
appendStringInfo(str, "  ");
!   appendStringInfo(str, "  SubPlan\n ");
foreach(lst, plan->subPlan)

Re: [HACKERS] Table/Column Constraints

2000-11-22 Thread jmscott

> 
> On Tue, 21 Nov 2000, Christopher Kings-Lynne wrote:
> 
> > > > Problem is that there are 5 difference types of constraints,
> > > implemented in
> > > > 5 different ways.  Do you want a unifed, central catalog of
> > > constraints, or
> > > > just for some of them, or what?
> > >
> > > Dunno.  Maybe a unified representation would make more sense, or maybe
> > > it's OK to treat them separately.  The existing implementations of the
> > > different types of constraints were done at different times, and perhaps
> > > are different "just because" rather than for any good reason.  We need
> > > investigation before we can come up with a reasonable proposal.
> > 
> > It strikes me that having a catalog (so to speak) of all contraints, with
> > flags in the tables where the contraints are implemented would allow a
> > separation of presentation and implementation.
> 
> Yeah, the hard part is storing enough information to recover the
> constraint in an easy way without going to the implementation details,
> strings aren't sufficient by themselves because that gets really difficult
> to maintain as table/columns change or are dropped. Maybe a central
> catalog like the above and a backend function that takes care of
> formatting to text would work.  Or keeping track of the dependent objects
> and re-figuring the text form (or drop constraint, or whatever) when those
> objects are changed/dropped.
> 
> I think that combining different constraints is good to some extent
> because there are a lot of problems with many constraints (the RI ones have
> problems, check constraints are currently not deferrable AFAIK,
> the unique constraint doesn't actually have the correct semantics) and
> maybe thinking about the whole set of them at the same time would be a
> good idea.
> 
> > > > I assume that column constraints implicitly become table
> > > constraints.  This
> > > > will also make it easy to have global unique constraint names.
> > > Actually -
> > > > are the constraint names currently unique for an entire database?
> > >
> > > No, and they shouldn't be --- only per-table, I think.
> > 
> > Oops - correct.  Wasn't paying attention.  I forgot that the table name is
> > specified as part of the ALTER statement.
> 
> I'm not sure actually, it seems to say in the syntax rules for the
> constraint name definition that the qualified identifier of a constraint
> needs to be different from any other qualified identifier for any other
> constraint in the same schema, so Christopher may have been right the
> first time (given we don't have schema).

Tom and I spoke of this problem at the Open Source Database
Summit a while back.

In a nutshell, Postgres doesn't maintain explicit
relationships between tables.  My experience says
that foreign/primary keys fall under the
category of extended domains, not rules, and, hence,
Postgres is a bit out of the loop.

My vote is for storing the relationships in
the system tables, as most commercial DBs do.
Otherwise, an entire class of DDL applications
won't be possible under Postgres.

john

-

John Scott
Senior Partner
August Associates

 web: http://www.august.com/~john






Re: [HACKERS] pg_dump / Unique constraints

2000-11-22 Thread Philip Warner

At 11:34 22/11/00 -0500, Tom Lane wrote:
>
>   full CREATE TABLE with all constraints shown
>
>   ALTER TABLE DISABLE CONSTRAINTS

I think you need something more like:

SET ALL CONSTRAINTS DISABLED/OFF

since disabling one table's constraints won't work when we have
subselect-in-check (or if it does, then ALTER TABLE <table> DISABLE
CONSTRAINTS will be a misleading name). Also, I think FK constraints on
another table that is already loaded will fail until the primary table is
loaded.


>
>and there wouldn't have to be any difference between schema and full
>dump output for CREATE TABLE.

I still see a great deal of value in being able to get a list of 'ALTER
TABLE ADD CONSTRAINT...' statements from pg_dump/restore. 


>If we were really brave (foolish?)
>the last step could be something like
>
>   ALTER TABLE ENABLE CONSTRAINTS NOCHECK

Eek. Won't work for index-based constraints, since they are created anyway.
It *might* be a good idea for huge DBs.


>But it's silly that pg_dump has to go out of its way to
>create the indexes last --- if COPY has a performance problem there,
>we should be fixing COPY, not requiring pg_dump to contort itself.

This is fine for COPY, but doesn't work for data-as-INSERTS.


>Why can't COPY recognize for itself that rebuilding the indexes after
>loading data is a better strategy than incremental index update?

The other aspect of COPY that needs fixing is the ability to specify column
order (I think); from memory that's the reason the regression DB can't be
dumped & loaded. It'd also be nice to be able to specify a subset of columns.


>(The simplest implementation would restrict this to happen only if the
>table is empty when COPY starts, which'd be sufficient for pg_dump.)

Does this approach have any implications for recovery/reliability; adding a
row but not updating indexes seems a little dangerous. Or is the plan to
drop the indexes, add the data, and create the indexes?
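For concreteness, the "drop the indexes, add the data, create the indexes" plan would amount to something like this (table, index, and file names are invented for illustration):

```sql
-- Load with no indexes present, then build them once at the end:
DROP INDEX upslog_index;
COPY upslog FROM '/tmp/upslog.copy';
CREATE UNIQUE INDEX upslog_index ON upslog (upslogdate);
```

The recovery question is whether the middle step can be made safe if the backend crashes before the index is recreated.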


Stepping back from the discussion for a moment, I am beginning to have
doubts about the approach: having pg_dump put the indexes (and constraints)
at the end of the dump is simple and works in all cases. The only issue,
AFAICT, is generating a single complete table defn for easy reading. The
suggested solution seems a little extreme (a pg_dump specific hack to COPY,
when there are other more general problems with COPY that more urgently
require attention).


Philip Warner| __---_
Albatross Consulting Pty. Ltd.   |/   -  \
(A.B.N. 75 008 659 498)  |  /(@)   __---_
Tel: (+61) 0500 83 82 81 | _  \
Fax: (+61) 0500 83 82 82 | ___ |
Http://www.rhyme.com.au  |/   \|
 |----
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/



RE: [HACKERS] pg_dump / Unique constraints

2000-11-22 Thread Christopher Kings-Lynne

Just a quick question regarding the pg_dump program:

I notice that PRIMARY KEY constraints are currently dumped as:

PRIMARY KEY ("field")

Whereas (to be in line with all the other constraints), it should be dumped
as:

CONSTRAINT "name" PRIMARY KEY ("field")

Otherwise, some poor bugger who went to the trouble of giving his primary
keys custom names will lose them with a dump/restore???

Also, if they have defined a function or trigger that refers to that primary
key by name, won't it fail after a dump/restore? (If the name has changed?)
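To make the concern concrete (table and constraint names invented for illustration):

```sql
CREATE TABLE t (a int4, CONSTRAINT t_custom_pkey PRIMARY KEY (a));
-- pg_dump currently emits:   PRIMARY KEY ("a")           -- "t_custom_pkey" is lost
-- a name-preserving dump:    CONSTRAINT "t_custom_pkey" PRIMARY KEY ("a")
```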

I'm just asking, because I'm still trying to find something small and
self-contained I can work on!

Chris




[HACKERS] OpenACS datamodel vs. current PG 7.1 sources

2000-11-22 Thread Don Baccus

After Tom's bug fix, I can now load the data model with no
problem.

Very cool, I'm pumped!



- Don Baccus, Portland OR <[EMAIL PROTECTED]>
  Nature photos, on-line guides, Pacific Northwest
  Rare Bird Alert Service and other goodies at
  http://donb.photo.net.



[HACKERS] ALTER TABLE...ADD CONSTRAINT?

2000-11-22 Thread Philip Warner

At 10:21 23/11/00 +0800, Christopher Kings-Lynne wrote:
>
>I'm just asking, because I'm still trying to find something small and
>self-contained I can work on!
>

Is anybody working on:

alter table <table> add constraint <name> primary key(column,...);

or

alter table <table> add constraint <name> unique(column,...);

or

alter table <table> drop constraint <name>

I guess this is not really a small task as it relates to unifying
constraint handling, but for the PK & unique constraints at least, we must
already have code that does the work - all(?) that has to happen is to make
sure the ALTER command calls it...is that right?



Philip Warner| __---_
Albatross Consulting Pty. Ltd.   |/   -  \
(A.B.N. 75 008 659 498)  |  /(@)   __---_
Tel: (+61) 0500 83 82 81 | _  \
Fax: (+61) 0500 83 82 82 | ___ |
Http://www.rhyme.com.au  |/   \|
 |----
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/



[HACKERS] RE: ALTER TABLE...ADD CONSTRAINT?

2000-11-22 Thread Christopher Kings-Lynne

> Is anybody working on:
>
> alter table <table> add constraint <name> primary key(column,...);
>
> or
>
> alter table <table> add constraint <name> unique(column,...);
>
> or
>
> alter table <table> drop constraint <name>

I'd be more than happy to work on either of the above in the current
implementation, however - I'm not sure it'd be worth it, given that the
constraints system might be up for a reimplementation.

> I guess this is not really a small task as it relates to unifying
> constraint handling, but for the PK & unique constraints at least, we must
> already have code that does the work - all(?) that has to happen
> is to make
> sure the ALTER command calls it...is that right?

That is a thought - can someone point me to the C file that handles CREATE
TABLE so I can see how it's done?  I can't for the life of me find that bit
of code!

Chris




Re: [HACKERS] RE: ALTER TABLE...ADD CONSTRAINT?

2000-11-22 Thread Tom Lane

"Christopher Kings-Lynne" <[EMAIL PROTECTED]> writes:
> can someone point me to the C file that handles CREATE
> TABLE so I can see how it's done?

backend/parser/analyze.c has the preprocessing (see
transformCreateStmt).  Actual execution starts in
backend/commands/creatinh.c, and there's also important code in
backend/catalog/heap.c.

Plus subroutines scattered here, there, and everywhere :-(.

You really won't get far in reading the PG sources until you have
a tool that will quickly find the definition (and optionally all uses)
of any particular symbol you are interested in.  I'm partial to glimpse,
but you could also use ctags/etags or some other indexing program.

regards, tom lane



[HACKERS] Breaking of existing apps with CVS version

2000-11-22 Thread Philip Warner


There is a minor breakage of existing apps that occurs with current CVS.

In 7.0 doing the following:

create table tsc(f1 int4 , f2 int4);
insert into tsc values(1,4);
select sum(f1)/sum(f2) from tsc;

would actually result in zero, since it worked with integers throughout. As
a result, I adopted the following strategy:

select cast(sum(f1) as float8)/sum(f2) from tsc;

which produced the expected results.

Now in 7.1 this breaks with:

ERROR:  Unable to identify an operator '/' for types 'float8' and 'numeric'
You will have to retype this query using an explicit cast

Is there a reason why it doesn't promote float8 to numeric?

 




Philip Warner| __---_
Albatross Consulting Pty. Ltd.   |/   -  \
(A.B.N. 75 008 659 498)  |  /(@)   __---_
Tel: (+61) 0500 83 82 81 | _  \
Fax: (+61) 0500 83 82 82 | ___ |
Http://www.rhyme.com.au  |/   \|
 |----
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/



Re: [HACKERS] Breaking of existing apps with CVS version

2000-11-22 Thread Tom Lane

Philip Warner <[EMAIL PROTECTED]> writes:
> select cast(sum(f1) as float8)/sum(f2) from tsc;

> Now in 7.1 this breaks with:

> ERROR:  Unable to identify an operator '/' for types 'float8' and 'numeric'
> You will have to retype this query using an explicit cast

> Is there a reason why it doesn't promote float8 to numeric?

Actually, if we were to do any automatic coercion in this case,
I think that the SQL spec requires casting in the other direction,
numeric to float8.  Mixing exact and inexact numerics (to use the
spec's terminology) can hardly be expected to produce an exact result.

The reason for the change in behavior is that sum(int4) now produces
numeric, not int4, to avoid overflow problems.  I believe this change
is for the better both in practical terms and in terms of closer
adherence to the intent of the SQL spec.  However, it may indeed cause
people to run into the numeric-vs-float8 ambiguity.
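Concretely, given the 7.1 behaviour described above, the ambiguity can be resolved by casting both aggregates explicitly (shown as a sketch, not the officially recommended fix):

```sql
SELECT CAST(SUM(f1) AS float8) / SUM(f2) FROM tsc;                 -- fails: float8 / numeric
SELECT CAST(SUM(f1) AS float8) / CAST(SUM(f2) AS float8) FROM tsc; -- unambiguous
```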

I'd prefer that we not try to solve this issue for 7.1.  We've gone
around on the question of changing the numeric-type promotion hierarchy
a couple of times, without reaching any clear resolution of what to do
--- so I doubt that a quick hack in the waning days of the 7.1 cycle
will prove satisfactory.  Let's leave it be until we have a real
solution.

regards, tom lane



Re: [HACKERS] syslog output from explain looks weird...

2000-11-22 Thread Tom Lane

Larry Rosenman <[EMAIL PROTECTED]> writes:
> Looking some more, I found some other places that need a space (I
> suspect...), so here is an updated patch.

This seems like the wrong way to go about it, because anytime anyone
changes any elog output anywhere, we'll risk another failure.  If
syslog can't cope with empty lines, I think the right fix is for the
output-to-syslog routine to change the data just before sending ---
then there is only one place to fix.  See the syslog output routine in
src/backend/utils/error/elog.c.

regards, tom lane



Re: [HACKERS] Breaking of existing apps with CVS version

2000-11-22 Thread Philip Warner

At 23:27 22/11/00 -0500, Tom Lane wrote:
>Philip Warner <[EMAIL PROTECTED]> writes:
>>
>> Is there a reason why it doesn't promote float8 to numeric?
>
>Mixing exact and inexact numerics (to use the
>spec's terminology) can hardly be expected to produce an exact result.

I suppose it's a question of working in the most accurate representation
for each number to minimize inaccuracy, then representing the result as
accurately as possible. Since numeric is more accurate for calculation, I
assume we'd use it if we had to choose. How we represent the result may be
up to the SQL standard.

All that aside, I was more worried that when people start upgrading to 7.1
we might see a flood of "my application doesn't work any more" bug reports.


>However, it may indeed cause
>people to run into the numeric-vs-float8 ambiguity.

It's a little more than an ambiguity; anyone who mixes floats with sums
will get a crash in their application.




Philip Warner| __---_
Albatross Consulting Pty. Ltd.   |/   -  \
(A.B.N. 75 008 659 498)  |  /(@)   __---_
Tel: (+61) 0500 83 82 81 | _  \
Fax: (+61) 0500 83 82 82 | ___ |
Http://www.rhyme.com.au  |/   \|
 |----
PGP key available upon request,  |  /
and from pgp5.ai.mit.edu:11371   |/



[HACKERS] Please advise features in 7.1

2000-11-22 Thread John Huttley

Hello,
I've looked at the resources available through the web page, CVS, and other
places; however, I can't find a statement of what's likely to be in 7.1 and
what is planned for later.

Reason: I want to know if any of these features are scheduled.

1. Calculated fields in table definitions, e.g.:

   Create table test (
A Integer,
B integer,
   the_sum   As  (A+B)
);

This is like MSSQL

2. Any parameterised triggers

3. Any parameterised stored procedures that return a result set.


These are _extraordinarily_ useful for application development.

If anyone has a way of bolting on any of these to 7.0, I'd be keen to hear from
you.

Regards

John





Re: [HACKERS] Please advise features in 7.1

2000-11-22 Thread Tom Lane

"John Huttley" <[EMAIL PROTECTED]> writes:
> Reason: I want to know if any of these features are scheduled.

> 1. Calculated fields in table definitions . eg.

>Create table test (
> A Integer,
> B integer,
>the_sum   As  (A+B),
> );

You can do that now (and for many versions past) with a trigger.
It's not quite as convenient as it ought to be, but it's possible.
AFAIK there's no change in that situation for 7.1.
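A sketch of that trigger workaround in 7.0-era PL/pgSQL (table and function names invented; `RETURNS opaque` is the trigger-function convention of this vintage):

```sql
CREATE TABLE test (a int4, b int4, the_sum int4);

CREATE FUNCTION test_sum_trig() RETURNS opaque AS '
BEGIN
    NEW.the_sum := NEW.a + NEW.b;   -- recompute the derived column on every write
    RETURN NEW;
END;' LANGUAGE 'plpgsql';

CREATE TRIGGER test_sum BEFORE INSERT OR UPDATE ON test
    FOR EACH ROW EXECUTE PROCEDURE test_sum_trig();
```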

> 2. Any parameterised triggers

We've had parameterized triggers for years.  Maybe you attach some
meaning to that term beyond what I do?

> 3. Any parameterised stored procedures that return a result set.

There is some support (dating back to Berkeley Postquel) for functions
returning sets, but it's pretty ugly and limited.  Proper support might
happen in 7.2 ...

regards, tom lane



Re: [HACKERS] syslog output from explain looks weird...

2000-11-22 Thread Larry Rosenman

* Tom Lane <[EMAIL PROTECTED]> [001122 22:44]:
> Larry Rosenman <[EMAIL PROTECTED]> writes:
> > Looking some more, I found some other places that need a space (I
> > suspect...), so here is an updated patch.
> 
> This seems like the wrong way to go about it, because anytime anyone
> changes any elog output anywhere, we'll risk another failure.  If
> syslog can't cope with empty lines, I think the right fix is for the
> output-to-syslog routine to change the data just before sending ---
> then there is only one place to fix.  See the syslog output routine in
> src/backend/utils/error/elog.c.
Makes sense.  Here's a new patch, now the output even looks better:
Nov 23 00:58:04 lerami pg-test[9914]: [2-1] NOTICE:  QUERY PLAN:
Nov 23 00:58:04 lerami pg-test[9914]: [2-2] 
Nov 23 00:58:04 lerami pg-test[9914]: [2-3] Seq Scan on upsdata
(cost=0.00..2766.62 rows=2308 width=48)
Nov 23 00:58:04 lerami pg-test[9914]: [2-4] 


Index: src/backend/utils/error/elog.c
===
RCS file: /home/projects/pgsql/cvsroot/pgsql/src/backend/utils/error/elog.c,v
retrieving revision 1.67
diff -c -r1.67 elog.c
*** src/backend/utils/error/elog.c  2000/11/14 19:13:27 1.67
--- src/backend/utils/error/elog.c  2000/11/23 06:58:23
***
*** 657,663 
seq++;
  
/* divide into multiple syslog() calls if message is too long */
!   if (len > PG_SYSLOG_LIMIT)
{
static char buf[PG_SYSLOG_LIMIT+1];
int chunk_nr = 0;
--- 657,664 
seq++;
  
/* divide into multiple syslog() calls if message is too long */
!   /* or if the message contains embedded NewLine(s) '\n' */
!   if (len > PG_SYSLOG_LIMIT || strchr(line,'\n') != NULL )
{
static char buf[PG_SYSLOG_LIMIT+1];
int chunk_nr = 0;
***
*** 667,675 
--- 668,684 
{
int l;
int i;
+   /* if we start at a newline, move ahead one char */
+   if (line[0] == '\n')
+   {
+   line++;
+   len--;
+   }
  
strncpy(buf, line, PG_SYSLOG_LIMIT);
buf[PG_SYSLOG_LIMIT] = '\0';
+   if (strchr(buf,'\n') != NULL) 
+   *strchr(buf,'\n') = '\0';
  
l = strlen(buf);
  #ifdef MULTIBYTE
-- 
Larry Rosenman  http://www.lerctr.org/~ler
Phone: +1 972-414-9812 (voice) Internet: [EMAIL PROTECTED]
US Mail: 1905 Steamboat Springs Drive, Garland, TX 75044-6749



Re: [HACKERS] Please advise features in 7.1

2000-11-22 Thread john huttley


- Original Message -
From: "Tom Lane" <[EMAIL PROTECTED]>
To: "John Huttley" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Thursday, 23 November 2000 19:05
Subject: Re: [HACKERS] Please advise features in 7.1


> "John Huttley" <[EMAIL PROTECTED]> writes:
> > Reason: I want to know if any of these features are scheduled.
>
> > 1. Calculated fields in table definitions . eg.
>
> >Create table test (
> > A Integer,
> > B integer,
> >the_sum   As  (A+B),
> > );
>
> You can do that now (and for many versions past) with a trigger.
> It's not quite as convenient as it ought to be, but it's possible.
> AFAIK there's no change in that situation for 7.1.
>


Yes, perhaps defining the table with a dummy field and setting up a
'before' trigger which replaces that field with a calculated value?

Messy but feasible.


> > 2. Any parameterised triggers
>
> We've had parameterized triggers for years.  Maybe you attach some
> meaning to that term beyond what I do?

I'm referring to the manual, which says functions used for triggers must
have no parameters and return type Opaque. And indeed it is impossible to
create a trigger from a PL/pgSQL function that takes any parameters.

Thus if we have a lot of triggers which are very similar, we cannot just
use one function and pass an identifying parameter or two to it. We must
create an individual function for each trigger.

It's irritating more than fatal.

> > 3. Any parameterised stored procedures that return a result set.
>
> There is some support (dating back to Berkeley Postquel) for functions
> returning sets, but it's pretty ugly and limited.  Proper support might
> happen in 7.2 ...

Something to look forward to! Meanwhile I'll have a play and see if it's
possible to use a read trigger to populate a temporary table. Hmm, that
might require a statement-level trigger. Another thing for 7.2, I guess.

The application programming we are doing now utilises stored procedures
returning record sets (MSSQL), and the lack is a showstopper in our
migration plans. Sigh.


Thanks Tom

Regards


John






Re: [HACKERS] Questions on RI spec (poss. bugs)

2000-11-22 Thread Hannu Krosing

Peter Eisentraut wrote:
> 
> Jan Wieck writes:
> 
> > Stephan Szabo wrote:
> > >
> > >There's a message on -general about a possible
> > > problem in the deferred RI constraints.  He was doing a
> > > sequence like:
> > > begin
> > >  delete
> > >  insert
> > > end
> > > and having it fail even though the deleted key was back in
> > > place at the end.
> >
> > Isn't that (delete and reinsert the same PK) what the
> > standard means with "triggered data change violation"?
> 
> Triggered data change violations can only occur if the same attribute is
> changed twice during the same *statement*, not transaction.
>
Do we also get "Triggered data change violations" when we delete and then
insert on the FK side in a single transaction?

I just had to remove a FK constraint because I could not figure out where
the violation was coming from ;(

-
Hannu



Re: [HACKERS] regressplans failures

2000-11-22 Thread Tom Lane

"Mikheev, Vadim" <[EMAIL PROTECTED]> writes:
>> I'm guessing this is a variant of the problem Philip Warner reported
>> yesterday.  Probably WAL-related.  Vadim?

> Probably, though I don't understand how WAL is related to execution plans.
> Ok, it's easy to reproduce - I'll take a look.

Could just be a question of a different pattern of table accesses?
Let me know if you want help looking...

regards, tom lane



RE: [HACKERS] regressplans failures

2000-11-22 Thread Mikheev, Vadim

> Peter Eisentraut <[EMAIL PROTECTED]> writes:
> > #3  0x8149b98 in ExceptionalCondition (
> > conditionName=0x81988a0 "!(((file) > 0 && (file) < 
> (int) SizeVfdCache
> > && VfdCache[file].fileName != ((void *)0)))", exceptionP=0x81b93c8,
> > detail=0x0,
> > fileName=0x8198787 "fd.c", lineNumber=851) at assert.c:70
> > #4  0x8105e6e in FileSeek (file=33, offset=0, whence=2) at fd.c:851
> 
> I'm guessing this is a variant of the problem Philip Warner reported
> yesterday.  Probably WAL-related.  Vadim?

Probably, though I don't understand how WAL is related to execution plans.
Ok, it's easy to reproduce - I'll take a look.

Vadim



[HACKERS] Talkative initdb, elog message levels

2000-11-22 Thread Tom Lane

Peter Eisentraut <[EMAIL PROTECTED]> writes:
> I'm also somewhat annoyed that these messages show up during initdb
> now.  Anyone know why exactly?  I couldn't trace it down.

I assume you're talking about this DEBUG stuff:

...
Creating directory /home/postgres/testversion/data/pg_xlog
Creating template1 database in /home/postgres/testversion/data/base/1
DEBUG:  starting up
DEBUG:  database system was shut down at 2000-11-22 14:38:01
DEBUG:  CheckPoint record at (0, 8)
DEBUG:  Redo record at (0, 8); Undo record at (0, 8); Shutdown TRUE
DEBUG:  NextTransactionId: 514; NextOid: 16384
DEBUG:  database system is in production state
Creating global relations in /home/postgres/testversion/data/global
DEBUG:  starting up
DEBUG:  database system was shut down at 2000-11-22 14:38:09
DEBUG:  CheckPoint record at (0, 96)
DEBUG:  Redo record at (0, 96); Undo record at (0, 0); Shutdown TRUE
DEBUG:  NextTransactionId: 514; NextOid: 17199
DEBUG:  database system is in production state
Initializing pg_shadow.
Enabling unlimited row width for system tables.
...

AFAICT, it's always been true that elog(DEBUG) will write to stderr,
and initdb does not redirect the backend's stderr.  The change is that
with XLOG enabled, there is now code that will do elog(DEBUG) in the
default path of control during initdb's bootstrap processing.
Specifically, all this chatter is coming out of StartupXLOG() in xlog.c.
Evidently, up to now there were no elog(DEBUG) calls encountered during
a normal bootstrap run.

Not sure whether we should change any code or not.  I don't much like
the idea of having initdb send stderr to /dev/null, for example.
Perhaps StartupXLOG could be made a little less chatty, however?


BTW, Vadim, what is the reasoning for your having invented aliases
STOP and LOG for elog levels REALLYFATAL and DEBUG?  I think it's
confusing to have more than one name for the same severity level.
If we're going to open up the issue of renaming the elog levels to
something saner, there are a whole bunch of changes to be undertaken,
and these aren't the names I'd choose anyway ...

regards, tom lane



RE: [HACKERS] Talkative initdb, elog message levels

2000-11-22 Thread Mikheev, Vadim

> I assume you're talking about this DEBUG stuff:
> 
> ...
> Creating directory /home/postgres/testversion/data/pg_xlog
> Creating template1 database in /home/postgres/testversion/data/base/1
> DEBUG:  starting up
> DEBUG:  database system was shut down at 2000-11-22 14:38:01

I had to add a StartupXLOG call when bootstrapping to handle OIDs
correctly.

> Not sure whether we should change any code or not.  I don't much like
> the idea of having initdb send stderr to /dev/null, for example.
> Perhaps StartupXLOG could be made a little less chatty, however?

I considered messages during database system startup/shutdown as having
higher interest/priority than regular debug messages. Some if() guards
wouldn't be bad, probably.

> BTW, Vadim, what is the reasoning for your having invented aliases
> STOP and LOG for elog levels REALLYFATAL and DEBUG?  I think it's
> confusing to have more than one name for the same severity level.
> If we're going to open up the issue of renaming the elog levels to
> something saner, there are a whole bunch of changes to be undertaken,
> and these aren't the names I'd choose anyway ...

Well, as stated above, I think of XLOG messages (and maybe some others?)
as something different from debug ones. Look at syslog - there are NOTICE
& INFO logging levels, not just DEBUG.

As for STOP - there was no REALLYFATAL at the time I started XLOG coding
(over a year ago)... Anyway, I like STOP more than REALLYFATAL -:) But I
wouldn't insist on that name.

Vadim



RE: [HACKERS] regressplans failures

2000-11-22 Thread Mikheev, Vadim

> >> I'm guessing this is a variant of the problem Philip 
> >> Warner reported yesterday.  Probably WAL-related.  Vadim?
> 
> > Probably, though I don't understand how WAL is related to 
> > execution plans. Ok, it's easy to reproduce - I'll take a look.
> 
> Could just be a question of a different pattern of table accesses?
> Let me know if you want help looking...

Fixed - fdstate was not properly set in fd.c:fileNameOpenFile
with WAL enabled, sorry.

Philip, please try to reproduce the crash.

Vadim



RE: [HACKERS] regressplans failures

2000-11-22 Thread Philip Warner

>
>Fixed - fdstate was not properly set in fd.c:fileNameOpenFile
>with WAL enabled, sorry.
>
>Philip, please try to reproduce the crash.
>

Seems to have fixed the crash for me as well. Thanks.


