Re: [HACKERS] pg_dump 'die_on_errors'
At 02:32 PM 12/08/2004, Philip Warner wrote:
>At 01:27 PM 12/08/2004, Bruce Momjian wrote:
>>Set client_min_messages to WARNING?
>
>Sounds like a plan.

Attached patch sets client_min_messages as above and gives some context to error messages, eg:

pg_restore: [archiver (db)] Error from TOC Entry 19; 1255 16438403 FUNCTION foo() pjw
pg_restore: [archiver (db)] could not execute query: ERROR: no schema has been selected to create in
pg_restore: [archiver (db)] could not execute query: ERROR: schema "public" does not exist
pg_restore: [archiver (db)] Error from TOC Entry 1475; 0 0 ACL public postgres
pg_restore: [archiver (db)] could not execute query: ERROR: schema "public" does not exist
pg_restore: [archiver (db)] could not execute query: ERROR: schema "public" does not exist
pg_restore: [archiver (db)] could not execute query: ERROR: schema "public" does not exist
pg_restore: [archiver (db)] could not execute query: ERROR: schema "public" does not exist

Philip Warner
Albatross Consulting Pty. Ltd.
http://www.rhyme.com.au

pg_dump_040815.patch.gz
Description: Binary data

---(end of broadcast)---
TIP 2: you can get off all lists at once with the unregister command
(send "unregister YourEmailAddressHere" to [EMAIL PROTECTED])
[HACKERS] Savepoint weirdness
Hi all,

Jason Godden pointed out some weird savepoint behaviour on IRC and I've narrowed this down to a simpler case. We see the following behaviour against HEAD:

template1=# create table foo(i int, j text);
CREATE TABLE
template1=# create unique index foo_idx on foo(i); -- note, creation of idx
CREATE INDEX
template1=# begin;
BEGIN
template1=# insert into foo values(1, 'one');
INSERT 584714 1
template1=# select * from foo;
 i |  j
---+-----
 1 | one
(1 row)

template1=# savepoint sp1;
SAVEPOINT
template1=# insert into foo values(2, 'two');
INSERT 584715 1
template1=# insert into foo values(3, 'three');
INSERT 584716 1
template1=# select * from foo;
 i |   j
---+-------
 1 | one
 2 | two
 3 | three
(3 rows)

template1=# savepoint sp2;
SAVEPOINT
template1=# update foo set j = upper(j);
UPDATE 3
template1=# select * from foo;
 i |   j
---+-------
 1 | ONE
 2 | TWO
 3 | THREE
(3 rows)

template1=# rollback to sp2;
ROLLBACK
template1=# select * from foo;
 i | j
---+---
(0 rows)

The rollback to sp2 should have taken us to a point where foo looked like:

 i |   j
---+-------
 1 | one
 2 | two
 3 | three
(3 rows)

And, indeed, without an index on i, that's what we get. I've attached output from when the index is around and not around. I've also confirmed that DELETE and INSERT (instead of UPDATE) after savepoint sp2 do not cause this weirdness.
Gavin

template1=# create table foo(i int, j text);
CREATE TABLE
template1=# create unique index foo_idx on foo(i);
CREATE INDEX
template1=# begin;
BEGIN
template1=# insert into foo values(1, 'one');
INSERT 584714 1
template1=# select * from foo;
 i |  j
---+-----
 1 | one
(1 row)

template1=# savepoint sp1;
SAVEPOINT
template1=# insert into foo values(2, 'two');
INSERT 584715 1
template1=# insert into foo values(3, 'three');
INSERT 584716 1
template1=# select * from foo;
 i |   j
---+-------
 1 | one
 2 | two
 3 | three
(3 rows)

template1=# savepoint sp2;
SAVEPOINT
template1=# update foo set j = upper(j);
UPDATE 3
template1=# select * from foo;
 i |   j
---+-------
 1 | ONE
 2 | TWO
 3 | THREE
(3 rows)

template1=# rollback to sp2;
ROLLBACK
template1=# select * from foo;
 i | j
---+---
(0 rows)

template1=# abort;
ROLLBACK
template1=# select * from foo;
 i | j
---+---
(0 rows)

template1=# drop table foo;
DROP TABLE
template1=# create table foo(i int, j text);
CREATE TABLE
template1=# insert into foo values(1, 'one');
INSERT 584722 1
template1=# drop table foo;
DROP TABLE
template1=# create table foo(i int, j text);
CREATE TABLE
template1=# begin;
BEGIN
template1=# insert into foo values(1, 'one');
INSERT 584728 1
template1=# select * from foo;
 i |  j
---+-----
 1 | one
(1 row)

template1=# savepoint sp1;
SAVEPOINT
template1=# insert into foo values(2, 'two');
INSERT 584729 1
template1=# insert into foo values(3, 'three');
INSERT 584730 1
template1=# select * from foo;
 i |   j
---+-------
 1 | one
 2 | two
 3 | three
(3 rows)

template1=# savepoint sp2;
SAVEPOINT
template1=# update foo set j = upper(j);
UPDATE 3
template1=# select * from foo;
 i |   j
---+-------
 1 | ONE
 2 | TWO
 3 | THREE
(3 rows)

template1=# rollback to sp2;
ROLLBACK
template1=# select * from foo;
 i |   j
---+-------
 1 | one
 2 | two
 3 | three
(3 rows)
Re: [HACKERS] [Fwd: Re: [pgsql-hackers-win32] Import from Linux to
Andrew Dunstan said:
>
> Bruce Momjian wrote:
>
>> One issue is that pre-8.0, psql files were opened in Win32 text mode,
>> so we wouldn't have seen this bug on Win32, but we would on Linux.
>> Because we open them on Win32 now in binary mode so we see control-Z it
>> will show up on Win32 too.
>
> true, *BUT*
>
> The patch is not platform-specific. It simply makes psql accept the
> same line endings on COPY FROM that the backend will accept - in
> effect it makes it line-end agnostic - this is a Good Thing (tm).
>
> We are still months away from releasing 8.0, so I think doing this for
> tomorrow's 7.4 bundle makes sense.

There's another question this bug raises, though. Why doesn't the server protest when it sees more copy data passed in after it sees the end marker?

cheers

andrew
Re: [HACKERS] 8.0 beta status
Tom Lane wrote:
> Bruce Momjian <[EMAIL PROTECTED]> writes:
>> However, I don't see any CVS commit that fixed this? What am I missing?
>
> The failure case is where the template database has a conflicting
> table. You didn't show us where you created that table, but it
> evidently was not in template1.

OK, reproduced by using template1:

create tablespace blah location '/bjm/tmp';
CREATE TABLESPACE
create table foo (i int) tablespace blah;
CREATE TABLE
create database foo tablespace blah;
ERROR: could not initialize database directory
DETAIL: Directory "/u/pgsql/data/pg_tblspc/17229/17232" already exists.

Gavin has posted a patch for this so I will put it in the patch queue.

-- Bruce Momjian | http://candle.pha.pa.us
Re: [HACKERS] pg_dump 'die_on_errors'
Philip Warner <[EMAIL PROTECTED]> writes:
> Attached patch sets client_min_messages as above and gives some
> context to errors messages, eg:
> pg_restore: [archiver (db)] Error from TOC Entry 19; 1255 16438403 FUNCTION foo() pjw
> pg_restore: [archiver (db)] could not execute query: ERROR: no schema has been
> selected to create in

It'd be substantially *more* helpful if it reported the failing command. (The TOC entry label is not of interest to anyone except pg_dump hackers such as yourself.) I went around a release or so back and made sure that pg_dump would always give the text of a failed query, but I forgot to consider pg_restore :-(

regards, tom lane
Re: [HACKERS] [Fwd: Re: [pgsql-hackers-win32] Import from Linux to
"Andrew Dunstan" <[EMAIL PROTECTED]> writes:
> There's another question this bug raises, though. Why doesn't the server
> protest when it sees more copy data passed in after it sees the end marker?

Whether it did or not would make not the slightest bit of difference, since (without the patch) psql isn't going to be looking for an error response.

regards, tom lane
Re: [HACKERS] Savepoint weirdness
Gavin Sherry <[EMAIL PROTECTED]> writes:
> Jason Godden pointed out some weird savepoint behaviour on IRC and i've
> narrowed this down to a simpler case.

Can't reproduce it here --- I get the expected output, on two different machines (HPUX and RHL8). What are you testing on? Do you see the same problem after make distclean and rebuild?

regards, tom lane
Re: [HACKERS] Savepoint weirdness
I wrote:
> Can't reproduce it here --- I get the expected output,

Disregard that --- I had managed to omit the create index command while copying and pasting. Man, that is bizarre ... the index shouldn't make any difference at all ...

regards, tom lane
Re: [HACKERS] [Fwd: Re: [pgsql-hackers-win32] Import from Linux to
Tom Lane wrote:
> "Andrew Dunstan" <[EMAIL PROTECTED]> writes:
>> There's another question this bug raises, though. Why doesn't the server
>> protest when it sees more copy data passed in after it sees the end marker?
>
> Whether it did or not would make not the slightest bit of difference,
> since (without the patch) psql isn't going to be looking for an error
> response.

True, but an indication of an error in the server log might have been nice. We have now made psql behave correctly, but there is no guarantee that other clients will behave correctly. Surely sending copy data after the end marker is sent should be an error.

cheers

andrew
Re: [HACKERS] [Fwd: Re: [pgsql-hackers-win32] Import from Linux to
Andrew Dunstan <[EMAIL PROTECTED]> writes:
> Surely sending copy data after the end marker is sent should be an error.

I'm unconvinced. For example, this would force a client to parse the contents of a file it's shipping over, rather than just pushing the file verbatim and then unconditionally adding \.

The whole \. termination business is unnecessary in protocol v3, and eventually I'd like to deprecate and then remove it. But that's a different discussion ...

regards, tom lane
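[Editorial note: the trade-off Tom describes can be sketched in a few lines of shell. File and table contents here are hypothetical, not from the thread; the point is that a client which may unconditionally append the \. terminator can stream its data file untouched, whereas a client that must reject data after the terminator would have to scan every line.]

```shell
# Hypothetical sketch: build a COPY FROM STDIN payload by shipping a data
# file verbatim and then unconditionally appending the \. end marker.
# The client never needs to parse the file's contents.
printf '1\tone\n2\ttwo\n' > data.txt            # sample tab-separated COPY data
( cat data.txt; printf '\\.\n' ) > payload.txt  # file verbatim, then terminator
cat payload.txt
```
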
Re: [HACKERS] Savepoint weirdness
Gavin Sherry <[EMAIL PROTECTED]> writes:
> Jason Godden pointed out some weird savepoint behaviour on IRC and i've
> narrowed this down to a simpler case.

The answer turns out to be that GetSnapshotData is miscomputing snapshot xmin and RecentGlobalXmin when inside a subtransaction: it omits our own top transaction ID from the set of open transactions. The presence of the unique index makes a difference because in the unique-index-check code, we check the existing rows using the bogus data, and actually end up concluding that the original rows being updated are globally dead, and marking them so.

I'm surprised that we didn't find this one much earlier :-(

regards, tom lane
Re: [HACKERS] will PITR in 8.0 be usable for "hot spare"/"log
Eric Kerin wrote:
> On Sat, 2004-08-14 at 01:11, Tom Lane wrote:
>> Eric Kerin <[EMAIL PROTECTED]> writes:
>>> The issues I've seen are:
>>> 1. Knowing when the master has finished the file transfer to
>>> the backup.
>>
>> The "standard" solution to this is you write to a temporary file name
>> (generated off your process PID, or some other convenient reasonably-
>> unique random name) and rename() into place only after you've finished
>> the transfer.
>
> Yup, much easier this way. Done.
>
>>> 2. Handling the meta-files, (.history, .backup) (eg: not sleeping if
>>> they don't exist)
>>
>> Yeah, this is an area that needs more thought. At the moment I believe
>> both of these will only be asked for during the initial microseconds of
>> slave-postmaster start. If they are not there I don't think you need to
>> wait for them. It's only plain ol' WAL segments that you want to wait
>> for. (Anyone see a hole in that analysis?)
>
> Seems to be working fine this way, I'm now just returning ENOENT if they
> don't exist.
>
>>> 3. Keeping the backup from coming online before the replay has fully
>>> finished in the event of a failure to copy a file, or other strange
>>> errors (out of memory, etc).
>>
>> Right, also an area that needs thought. Some other people opined that
>> they want the switchover to occur only on manual command. I'd go with
>> that too if you have anything close to 24x7 availability of admins.
>> If you *must* have automatic switchover, what's the safest criterion?
>> Dunno, but let's think ...
>
> I'm not even really talking about automatic startup on fail over. Right
> now, if the recovery_command returns anything but 0, the database will
> finish recovery and come online. This would cause you to have to
> re-build your backup system from a copy of the master unnecessarily.
> Sounds kinda messy to me, especially if it's a false trigger (temporary
> io error, out of memory)

Well, this is the way most HA cluster solutions work; in my experience the RH cluster solution relies on a common partition between the two nodes and on a serial connection between them. For sure, for a 24x7 service it is a compulsory requirement to have an automatic procedure that handles failures without human intervention.

Regards
Gaetano Mendola
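[Editorial note: the conventions agreed above can be sketched as a pair of shell helpers. All names, paths, and the bounded retry policy are assumptions for illustration, not code from the actual patch or scripts: the master ships a segment under a temporary name and rename()s it into place, and the slave's restore_command wrapper fails fast for .history/.backup metadata files but waits for ordinary WAL segments.]

```shell
# Sketch only; paths and names are hypothetical.
ARCHIVE=$(mktemp -d)   # stand-in for the shipped-WAL directory

# Master side: copy under a temporary name, then rename() into place so the
# slave can never pick up a partially transferred segment.
ship_wal() {           # ship_wal <local-file> <wal-name>
  cp "$1" "$ARCHIVE/.in-transit.$$" && mv "$ARCHIVE/.in-transit.$$" "$ARCHIVE/$2"
}

# Slave side (restore_command wrapper): .history/.backup files are only asked
# for at slave-postmaster start, so fail immediately if absent; plain WAL
# segments are waited for (bounded here so the demo terminates).
restore_wal() {        # restore_wal <wal-name> <destination>
  src="$ARCHIVE/$1"
  case "$1" in
    *.history|*.backup)
      [ -f "$src" ] || return 1 ;;          # don't sleep on metadata files
    *)
      tries=0
      while [ ! -f "$src" ]; do             # wait for the master's rename()
        tries=$((tries + 1)); [ "$tries" -gt 3 ] && return 1
        sleep 1
      done ;;
  esac
  cp "$src" "$2"
}

# demo against the scratch "archive"
printf 'fake-wal' > "$ARCHIVE/local-seg"
ship_wal "$ARCHIVE/local-seg" 000000010000000000000001
restore_wal 000000010000000000000001 "$ARCHIVE/restored" && echo "segment restored"
restore_wal 00000001.history "$ARCHIVE/hist" || echo "no history file yet"
```

A real wrapper would of course loop indefinitely on missing segments rather than give up after a few tries, and would distinguish "not yet shipped" from hard errors so a transient failure doesn't end recovery prematurely, which is exactly Eric's point 3.
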
Re: [HACKERS] Savepoint weirdness
On Sun, 15 Aug 2004, Tom Lane wrote:
> Gavin Sherry <[EMAIL PROTECTED]> writes:
>> Jason Godden pointed out some weird savepoint behaviour on IRC and i've
>> narrowed this down to a simpler case.
>
> The answer turns out to be that GetSnapshotData is miscomputing snapshot
> xmin and RecentGlobalXmin when inside a subtransaction: it omits our own
> top transaction ID from the set of open transactions. The presence of
> the unique index makes a difference because in the unique-index-check
> code, we check the existing rows using the bogus data, and actually end
> up concluding that the original rows being updated are globally dead,
> and marking them so.

Yeah. I was scratching my head for a while wondering why a unique index would make a difference. I was on the lookout for something which screwed up xmin, but assumed it must have been within the unique check since that is what triggered the problem for me (I'd tested delete and insert).

> I'm surprised that we didn't find this one much earlier :-(

Yeah. It came from Jason writing a proper application which used savepoints.

Gavin
Re: [HACKERS] will PITR in 8.0 be usable for "hot spare"/"log
On Sun, 2004-08-15 at 16:22, Gaetano Mendola wrote:
> Eric Kerin wrote:
> > [...]
> > I'm not even really talking about automatic startup on fail over. Right
> > now, if the recovery_command returns anything but 0, the database will
> > finish recovery and come online. This would cause you to have to
> > re-build your backup system from a copy of the master unnecessarily.
> > Sounds kinda messy to me, especially if it's a false trigger (temporary
> > io error, out of memory)
>
> Well, this is the way most HA cluster solutions work; in my experience
> the RH cluster solution relies on a common partition between the two
> nodes and on a serial connection between them.
> For sure, for a 24x7 service it is a compulsory requirement to have an
> automatic procedure that handles failures without human intervention.

Already sent this to Gaetano, didn't realize the mail was on list too:

Redhat's HA stuff is a fail over cluster, not a log shipping cluster. For a fail over cluster, log shipping isn't involved; just the normal WAL replay, same as if the database came back online on the same node. It also has many methods of communication to check if the master is online (serial, network, hard disk quorum device). Once the backup detects a failure of the master, it powers the master off, and takes over all devices and network names/IP addresses.

In log shipping, you can't even be sure that both nodes will be close enough together to have multiple communication methods. At work, we have an Oracle log shipping setup where the backup cluster is a thousand or so miles away from the master cluster, separated by a T3 link.

For a 24x7 zero-downtime type of system, you would have 2 fail over clusters, separated by a few miles (or a few thousand). Then set up log shipping from the master to the backup. That keeps the system online in case of a single node hardware failure, without having to transfer to the backup log shipping system. The backup is there in case the master is completely destroyed (by fire, hardware corruption, etc), hence the reason for the remote location.

Thanks, Eric
Re: [HACKERS] will PITR in 8.0 be usable for "hot spare"/"log
Eric Kerin wrote:
> On Sun, 2004-08-15 at 16:22, Gaetano Mendola wrote:
> > [...]
>
> Already sent this to Gaetano, didn't realize the mail was on list too:
>
> Redhat's HA stuff is a fail over cluster, not a log shipping cluster.
> Once the backup detects a failure of the master, it powers the master
> off, and takes over all devices and network names/IP addresses.

We have been using the RH HA stuff for a long time, and it is not necessary to have the master powered off (our setup doesn't).

> In log shipping, you can't even be sure that both nodes will be close
> enough together to have multiple communication methods. At work, we
> have an Oracle log shipping setup where the backup cluster is a
> thousand or so miles away from the master cluster, separated by a T3
> link.
>
> For a 24x7 zero-downtime type of system, you would have 2 fail over
> clusters, separated by a few miles (or a few thousand). Then set up
> log shipping from the master to the backup. That keeps the system
> online in case of a single node hardware failure, without having to
> transfer to the backup log shipping system. The backup is there in
> case the master is completely destroyed (by fire, hardware corruption,
> etc), hence the reason for the remote location.

I totally agree with you, but not all people can set up a RH HA cluster or equivalent solution (a very expensive SAN with double ports is needed), and this software version could help in a low cost setup. The scripts that I posted do the failover between master and slave in an automatic way, also delivering the partial WAL (I could increase the robustness by also checking a serial connection), without needing expensive HW.

For sure this way of proceeding (the log shipping activity) will increase availability in case of total disaster (actually I transfer a plain dump to another location every 3 hours :-( ).

Regards
Gaetano Mendola
Re: [HACKERS] pg_dump 'die_on_errors'
At 01:32 AM 16/08/2004, Tom Lane wrote:
> It'd be substantially *more* helpful if it reported the failing command.

They are two different problems; the TOC entry is important for any multiline command, or to rerun the command easily later, whereas displaying the failed SQL command is a matter of fixing the error messages.

The latter is complicated by failed COPY commands which, with die-on-errors off, result in the data being processed as a command, so dumping the command will dump all of the data.

In the case of long commands, should the whole command be dumped (e.g. several pages of function definition)?

In the case of the COPY command, I'm not sure what to do. Obviously, it would be best to avoid sending the data, but the data and command are combined (from memory). Also, the 'data' may be in the form of INSERT statements.

Attached patch produces the first 125 chars of the command:

pg_restore: [archiver (db)] Error while PROCESSING TOC:
pg_restore: [archiver (db)] Error from TOC Entry 26; 1255 16449270 FUNCTION plpgsql_call_handler() pjw
pg_restore: [archiver (db)] could not execute query: ERROR: function "plpgsql_call_handler" already exists with same argument types
 Command was: CREATE FUNCTION plpgsql_call_handler() RETURNS language_handler AS '/var/lib/pgsql-8.0b1/lib/plpgsql', 'plpgsql_call_han...
pg_restore: [archiver (db)] Error from TOC Entry 27; 1255 16449271 FUNCTION plpgsql_validator(oid) pjw
pg_restore: [archiver (db)] could not execute query: ERROR: function "plpgsql_validator" already exists with same argument types
 Command was: CREATE FUNCTION plpgsql_validator(oid) RETURNS void AS '/var/lib/pgsql-8.0b1/lib/plpgsql', 'plpgsql_validator' LANGU...

Philip Warner
Albatross Consulting Pty. Ltd.
http://www.rhyme.com.au

pg_dump_040815-1.patch.gz
Description: Binary data
[HACKERS] plpgsql NULL statement (was Re: [GENERAL] Postgres 8.0 -> BEGIN EXCEPTION END Syntax????)
Daniel Schuchardt <[EMAIL PROTECTED]> writes:
> BEGIN
>   exception ...
> EXCEPTION
>   WHEN OTHERS THEN ?what to write for do nothing?
> END;

> in oracle it's
>   WHEN OTHERS THEN null;
> but this syntax doesn't work in postgres.

In Postgres you just write nothing at all:

BEGIN
  ...
EXCEPTION
  WHEN OTHERS THEN
END;

However, it does appear that Oracle's PL/SQL has such a statement, and that they don't like empty exception sections (or empty if/then/else arms, etc), but *require* you to write "NULL;" in these places. It seems to me that it would ease porting of Oracle functions if we allowed a NULL statement in plpgsql.

It looks like about five minutes' work to add such a thing ... anyone have any objections?

regards, tom lane
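[Editorial note: for concreteness, a ported Oracle-style function using the proposed statement might look like this sketch. The function name and body are hypothetical, and the NULL; statement shown only works once such a patch is applied.]

CREATE FUNCTION safe_drop_temp() RETURNS void AS '
BEGIN
    DROP TABLE temp_results;
EXCEPTION
    WHEN OTHERS THEN
        NULL;   -- proposed explicit no-op, as in Oracle PL/SQL
END;
' LANGUAGE plpgsql;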
Re: [HACKERS] plpgsql NULL statement (was Re: [GENERAL] Postgres 8.0 -> BEGIN EXCEPTION END Syntax????)
Tom Lane said:
> Daniel Schuchardt <[EMAIL PROTECTED]> writes:
>> BEGIN
>> exception ...
>> EXCEPTION
>> WHEN OTHERS THEN ?what to write for do nothing?
>> END;
>
>> in oracle it's
>> WHEN OTHERS THEN null;
>> but this syntax doesn't work in postgres.
>
> In Postgres you just write nothing at all:
>
> BEGIN
> ...
> EXCEPTION
> WHEN OTHERS THEN
> END;
>
> However, it does appear that Oracle's PL/SQL has such a statement, and
> that they don't like empty exception sections (or empty if/then/else
> arms, etc), but *require* you to write "NULL;" in these places.
> It seems to me that it would ease porting of Oracle functions if
> we allowed a NULL statement in plpgsql.
>
> It looks like about five minutes' work to add such a thing ... anyone
> have any objections?

It's got my vote :-) PL/SQL is based on Ada and this is an Ada rule. I like the PL/SQL/Ada way a lot more than the PostgreSQL example you gave above, which just looks ... odd.

cheers

andrew
Re: [HACKERS] plpgsql NULL statement (was Re: [GENERAL] Postgres
On Mon, 16 Aug 2004, Tom Lane wrote:
>> in oracle it's
>> WHEN OTHERS THEN null;
>> but this syntax doesn't work in postgres.
>
> It looks like about five minutes' work to add such a thing ... anyone
> have any objections?

Is NULL above an empty statement in oracle, or is it a normal expression (the normal NULL value), so that the above is shorthand for

  WHEN OTHERS THEN SELECT NULL;

?

-- /Dennis Björklund