Re: [PATCHES] RESET SESSION v3
On Sun, 2007-04-08 at 11:08 +0300, Marko Kreen wrote:
> I think implicit ABORT would annoy various tools that
> partially parse user sql and expect to know what transaction
> state currently is. For them a new transaction control statement
> would be a nuisance.

That's not the only alternative: we could also either disallow all of
the "ALL" variants in a transaction block, or allow RESET SESSION
inside a transaction block.

I've committed the patch basically as-is: thanks for the patch. I
don't feel strongly about the above, but if there's a consensus, we
can change the behavior later.

-Neil

---(end of broadcast)---
TIP 4: Have you searched our list archives?
   http://archives.postgresql.org
Re: [PATCHES] autovacuum multiworkers, patch 5
Alvaro Herrera <[EMAIL PROTECTED]> wrote:
> I manually merged your patch on top of my own. This is the result.
> Please have a look at whether the new code is correct and behaves sanely
> (I haven't tested it).

The patch seems to be broken -- the latter half is lost.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center
Re: [PATCHES] [HACKERS] Full page writes improvement, code update
I don't fully understand what "transaction log" means. If it means
"archived WAL", the current (8.2) code handles WAL as follows:

1) If full_page_writes=off, no full page writes are written to WAL,
except for those during online backup (between pg_start_backup and
pg_stop_backup). The WAL size will be considerably smaller, but we
cannot recover from a partial/inconsistent write to the database
files; we have to go back to the online backup and apply all the
archived log.

2) If full_page_writes=on, a full page write is written at the first
update of a page after each checkpoint, plus the full page writes
from 1).

Because we have no means (in 8.2) to optimize the WAL, all we can do
is copy the WAL or gzip it at archive time. If we'd like to keep a
good chance of recovery after a crash, 8.2 provides only method 2),
leaving the archived log considerably large. My proposal maintains the
chance of crash recovery the same as with full_page_writes=on, while
reducing the size of the archived log as with full_page_writes=off.

Regards,

Hannu Krosing wrote:

On Tue, 2007-04-10 at 18:17, Joshua D. Drake wrote:

In terms of idle time for gzip and other commands to archive WAL
offline, no difference in the environment was given other than the
archive command itself. My guess is that because the user time is very
large in gzip, the scheduler has more chances to give resources to
other processes. In the case of cp, idle time is more than 30 times
longer than user time. Pg_compresslog uses seven times more idle time
than user time. On the other hand, gzip uses less idle time than user
time. Considering the total amount of user time, I think it's a
reasonable measure.

Again, in my proposal, the issue is not to increase run-time
performance. The issue is to decrease the size of the archived log to
save storage.

Considering the relatively little amount of storage a transaction log
takes, it would seem to me that the performance angle is more
appropriate.
As I understand it, it's not about the transaction log but about the
write-ahead log, and the amount of data in WAL can become very
important once you have to keep standby servers in different physical
locations (cities, countries or continents), where channel throughput
and cost come into play.

With simple cp (scp/rsync), the amount of WAL data needing to be
copied is about 10x more than the data collected by trigger-based
solutions (Slony/pgQ). With pg_compresslog, WAL shipping seems to
carry roughly the same amount of data and thus becomes a viable
alternative again.

Is it more efficient in other ways besides negligible tps? Possibly
more efficient memory usage? Better restore times for a crashed
system?

I think that TPS is more affected by the number of writes than the
size of each block written, so there is probably not that much to gain
in TPS, except perhaps from better disk cache usage. For me,
pg_compresslog seems to be a winner even if it just does not degrade
performance.

--
Koichi Suzuki
Re: [PATCHES] [HACKERS] Full page writes improvement, code update
The numbers below were taken on 8.2 code, not 8.3, so I don't think
they are affected by a bug introduced only in the 8.3 code.

Tom Lane wrote:
> Koichi Suzuki <[EMAIL PROTECTED]> writes:
>> For more information, when checkpoint interval is one hour, the amount
>> of the archived log size was as follows:
>> cp: 3.1GB
>> gzip: 1.5GB
>> pg_compresslog: 0.3GB
>
> The notion that 90% of the WAL could be backup blocks even at very long
> checkpoint intervals struck me as excessive, so I went looking for a
> reason, and I may have found one. There has been a bug in CVS HEAD
> since Feb 8 causing every btree page split record to include a backup
> block whether needed or not. If these numbers were taken with recent
> 8.3 code, please retest with current HEAD.
>
> regards, tom lane

--
Koichi Suzuki
[PATCHES] High resolution psql \timing on Windows
This patch replaces _ftime() with QueryPerformanceCounter() to measure
durations in psql \timing on Windows; _ftime() has only ~15 ms of time
resolution. I borrowed the code from src/include/executor/instrument.h.

Regards,
---
ITAGAKI Takahiro
NTT Open Source Software Center

psql_timing_on_windows.patch
Description: Binary data
Re: [PATCHES] autovacuum multiworkers, patch 5
ITAGAKI Takahiro wrote:
> > > > Yes, that's correct. Per previous discussion, what I actually wanted to
> > > > do was to create a GUC setting to simplify the whole thing, something
> > > > like "autovacuum_max_mb_per_second" or "autovacuum_max_io_per_second".
> > > > Then, have each worker use up to (max_per_second/active workers) as much
> > > > IO resources.
> >
> > One thing I forgot to mention is that this is unlikely to be implemented
> > in 8.3.
>
> This is a WIP cost balancing patch built on autovacuum-multiworkers-5.patch.
> The total cost of workers are adjusted to autovacuum_vacuum_cost_delay.

I manually merged your patch on top of my own. This is the result.
Please have a look at whether the new code is correct and behaves sanely
(I haven't tested it).

--
Alvaro Herrera                         http://www.CommandPrompt.com/
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

Index: src/backend/commands/vacuum.c
===================================================================
RCS file: /home/alvherre/Code/cvs/pgsql/src/backend/commands/vacuum.c,v
retrieving revision 1.349
diff -c -p -r1.349 vacuum.c
*** src/backend/commands/vacuum.c	14 Mar 2007 18:48:55 -	1.349
--- src/backend/commands/vacuum.c	11 Apr 2007 23:43:23 -
*************** vacuum_delay_point(void)
*** 3504,3509 ****
--- 3504,3512 ----
  		VacuumCostBalance = 0;

+ 		/* update balance values for workers */
+ 		AutoVacuumUpdateDelay();
+
  		/* Might have gotten an interrupt while sleeping */
  		CHECK_FOR_INTERRUPTS();
  	}

Index: src/backend/postmaster/autovacuum.c
===================================================================
RCS file: /home/alvherre/Code/cvs/pgsql/src/backend/postmaster/autovacuum.c,v
retrieving revision 1.40
diff -c -p -r1.40 autovacuum.c
*** src/backend/postmaster/autovacuum.c	28 Mar 2007 22:17:12 -	1.40
--- src/backend/postmaster/autovacuum.c	11 Apr 2007 23:43:31 -
***************
*** 43,48 ****
--- 43,49 ----
  #include "storage/proc.h"
  #include "storage/procarray.h"
  #include "storage/sinval.h"
+ #include "storage/spin.h"
  #include "tcop/tcopprot.h"
  #include "utils/flatfiles.h"
  #include "utils/fmgroids.h"
***************
*** 52,57 ****
--- 53,59 ----
  #include "utils/syscache.h"

+ static volatile sig_atomic_t got_SIGUSR1 = false;
  static volatile sig_atomic_t got_SIGHUP = false;
  static volatile sig_atomic_t avlauncher_shutdown_request = false;
*************** static volatile sig_atomic_t avlauncher_
*** 59,64 ****
--- 61,67 ----
   * GUC parameters
   */
  bool	autovacuum_start_daemon = false;
+ int		autovacuum_max_workers;
  int		autovacuum_naptime;
  int		autovacuum_vac_thresh;
  double	autovacuum_vac_scale;
*************** int autovacuum_freeze_max_age;
*** 69,75 ****
  int		autovacuum_vac_cost_delay;
  int		autovacuum_vac_cost_limit;

! /* Flag to tell if we are in the autovacuum daemon process */
  static bool am_autovacuum_launcher = false;
  static bool am_autovacuum_worker = false;
--- 72,78 ----
  int		autovacuum_vac_cost_delay;
  int		autovacuum_vac_cost_limit;

! /* Flags to tell if we are in an autovacuum process */
  static bool am_autovacuum_launcher = false;
  static bool am_autovacuum_worker = false;
*************** static int default_freeze_min_age;
*** 82,95 ****
  /* Memory context for long-lived data */
  static MemoryContext AutovacMemCxt;

! /* struct to keep list of candidate databases for vacuum */
! typedef struct autovac_dbase
  {
! 	Oid			ad_datid;
! 	char	   *ad_name;
! 	TransactionId ad_frozenxid;
! 	PgStat_StatDBEntry *ad_entry;
! } autovac_dbase;

  /* struct to keep track of tables to vacuum and/or analyze, in 1st pass */
  typedef struct av_relation
--- 85,106 ----
  /* Memory context for long-lived data */
  static MemoryContext AutovacMemCxt;

! /* struct to keep track of databases in launcher */
! typedef struct avl_dbase
! {
! 	Oid			adl_datid;		/* hash key -- must be first */
! 	TimestampTz adl_next_worker;
! 	int			adl_score;
! } avl_dbase;
!
! /* struct to keep track of databases in worker */
! typedef struct avw_dbase
! {
! 	Oid			adw_datid;
! 	char	   *adw_name;
! 	TransactionId adw_frozenxid;
! 	PgStat_StatDBEntry *adw_entry;
! } avw_dbase;

  /* struct to keep track of tables to vacuum and/or analyze, in 1st pass */
  typedef struct av_relation
*************** typedef struct autovac_table
*** 110,123 ****
  	int			at_vacuum_cost_limit;
  } autovac_table;

  typedef struct
  {
! 	Oid			process_db;		/* OID of database to process */
! 	int			worker_pid;		/* PID of the worker process, if any */
  } AutoVacuumShmemStruct;

  static AutoVacuumShmemStruct *AutoVacuumShmem;

  #ifdef EXEC_BACKEND
  static pid_t avlauncher_forkexec(void);
  static pid_t avworker_forkexec(void);
--- 121,195 ----
  	int			at_vacuum_cost_limit;
  } autovac_table;

+ /*-------------
+  * This struct holds information about a single worker's whereabouts. We keep
+  * an array of these
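The cost-balancing idea discussed in this thread — each worker gets up to (total limit / active workers) of the I/O budget — can be sketched as follows. This is a hedged sketch, not the actual autovacuum.c code; the function and parameter names are illustrative.

```c
/*
 * Hedged sketch of autovacuum cost balancing, not the real patch code:
 * divide the global vacuum cost limit equally among the workers that
 * are currently active, so total I/O stays roughly constant no matter
 * how many workers are running.  Names here are illustrative.
 */
static int
per_worker_cost_limit(int global_cost_limit, int active_workers)
{
	int		share;

	if (active_workers <= 0)
		return global_cost_limit;	/* nothing to divide among */

	share = global_cost_limit / active_workers;

	/* never hand out a zero limit, or the worker would stall forever */
	return (share > 0) ? share : 1;
}
```

With a global limit of 200 (the 8.2 default for vacuum_cost_limit) and four active workers, each worker would run under a budget of 50; as workers finish, the survivors' shares grow back toward the global limit.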
Re: [PATCHES] [HACKERS] Full page writes improvement, code update
Koichi Suzuki <[EMAIL PROTECTED]> writes:
> For more information, when checkpoint interval is one hour, the amount
> of the archived log size was as follows:
> cp: 3.1GB
> gzip: 1.5GB
> pg_compresslog: 0.3GB

The notion that 90% of the WAL could be backup blocks even at very long
checkpoint intervals struck me as excessive, so I went looking for a
reason, and I may have found one. There has been a bug in CVS HEAD
since Feb 8 causing every btree page split record to include a backup
block whether needed or not. If these numbers were taken with recent
8.3 code, please retest with current HEAD.

			regards, tom lane
Re: [PATCHES] CREATE TABLE LIKE INCLUDING INDEXES support
NikhilS wrote:
> Hi,
>
> On 4/10/07, Bruce Momjian <[EMAIL PROTECTED]> wrote:
> >
> > Added to TODO:
> >
> > o Have WITH CONSTRAINTS also create constraint indexes
> >
> >   http://archives.postgresql.org/pgsql-patches/2007-04/msg00149.php
>
> Trevor's patch does add unique/primary indexes. This would mean that we
> have to remove the syntax support for "INCLUDING INDEXES" and just add
> code to the existing WITH CONSTRAINTS code path from his patch. That is
> all that is required.
>
> Is there something else, and hence we have the above TODO?

If someone wants to work on this item and submit it, we can review it
for 8.3, but if not, it waits until 8.4.

--
Bruce Momjian <[EMAIL PROTECTED]>  http://momjian.us
EnterpriseDB   http://www.enterprisedb.com

+ If your life is a hard drive, Christ can be your backup. +
Re: [PATCHES] Packed Varlena Update (v21)
Thanks, that was a distinction I didn't know. TODO updated:

	o Allow single-byte header storage for array elements

---

Gregory Stark wrote:
> "Bruce Momjian" <[EMAIL PROTECTED]> writes:
>
> > Added to TODO:
> >
> > o Allow single-byte header storage for arrays
>
> Fwiw this is "single-byte header storage for varlena array *elements*".
> The arrays themselves already get the packed varlena treatment.
>
> --
> Gregory Stark
> EnterpriseDB   http://www.enterprisedb.com

--
Bruce Momjian <[EMAIL PROTECTED]>  http://momjian.us
EnterpriseDB   http://www.enterprisedb.com
Re: [PATCHES] [HACKERS] [Fwd: Index Advisor]
Gurjeet Singh wrote:
> The interface etc. may not be beautiful, but it isn't ugly either! It is
> a lot better than manually creating pg_index records and inserting them
> into cache; we use the index_create() API to create the index (the build
> is deferred), and then 'rollback to savepoint' to undo those changes
> when the advisor is done. index_create() creates pg_depend entries too,
> so a 'rollback to savepoint' is far safer than going and deleting cache
> records manually.

My complaint was not that the API used in the code was non-optimal
(which I think was Tom's issue), but that the _user_ API was not very
clean. Not sure what to recommend, but I will think about it later.

--
Bruce Momjian <[EMAIL PROTECTED]>  http://momjian.us
EnterpriseDB   http://www.enterprisedb.com
[PATCHES] patch to suppress psql timing output in quiet mode
I noticed that when psql accepts input from stdin or -f (but not -c),
and timing is set to on in .psqlrc, timing results are printed to
stdout even when -q (quiet) is passed. This may not be the perfect
solution, but it fixes the problem (I'm having trouble with bash
scripts that are borking on the timing output).

Current behavior:

[EMAIL PROTECTED] psql]# echo "select 0" | psql -tAq
0
Time: 1.155 ms
[EMAIL PROTECTED] psql]# psql -tAqc"select 0"
0

merlin

Index: common.c
===================================================================
RCS file: /projects/cvsroot/pgsql/src/bin/psql/common.c,v
retrieving revision 1.133
diff -c -r1.133 common.c
*** common.c	8 Feb 2007 11:10:27 -	1.133
--- common.c	11 Apr 2007 17:20:21 -
***************
*** 918,924 ****
  	PQclear(results);

  	/* Possible microtiming output */
! 	if (OK && pset.timing)
  		printf(_("Time: %.3f ms\n"), elapsed_msec);

  	/* check for events that may occur during query execution */
--- 918,924 ----
  	PQclear(results);

  	/* Possible microtiming output */
! 	if (OK && pset.timing && !pset.quiet)
  		printf(_("Time: %.3f ms\n"), elapsed_msec);

  	/* check for events that may occur during query execution */
Re: [HACKERS] [PATCHES] Fix mdsync never-ending loop problem
Heikki Linnakangas <[EMAIL PROTECTED]> writes:
> My first thought is that the cycle_ctr just adds extra complexity. The
> canceled-flag really is the key in Takahiro-san's patch, so we don't
> need the cycle_ctr anymore.

We don't have to have it in the sense of the code not working without
it, but it probably pays for itself by eliminating useless fsyncs. The
overhead for it in my proposed implementation is darn near zero in the
non-error case.

Also, Takahiro-san mentioned at one point that he was concerned to
avoid useless fsyncs because of some property of the LDC patch --- I
wasn't too clear on what, but maybe he can explain.

			regards, tom lane
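The canceled-flag idea under discussion can be sketched as follows. This is a hedged sketch, not the real md.c code: the struct and function names are illustrative, and the point is only the pattern — mark an entry canceled instead of deleting it out from under a concurrent mdsync() scan, so the scan never needs an outer retry loop.

```c
#include <stdbool.h>

/*
 * Hedged sketch of the "canceled flag" approach from Takahiro-san's
 * patch (illustrative names, not the real pending-fsync hashtable):
 * rather than deleting a pending-fsync entry while mdsync() may be
 * scanning the table -- which forced the never-ending retry loop --
 * the entry is merely flagged, and the scan skips it.
 */
typedef struct PendingFsyncEntry
{
	int		segno;			/* which segment still needs an fsync */
	bool	canceled;		/* set instead of deleting the entry */
} PendingFsyncEntry;

/* Called when the underlying file goes away (e.g. relation dropped). */
static void
cancel_fsync_request(PendingFsyncEntry *entry)
{
	entry->canceled = true;
}

/* One pass over the pending entries; returns how many were synced. */
static int
sync_pending_entries(PendingFsyncEntry *entries, int n)
{
	int		synced = 0;
	int		i;

	for (i = 0; i < n; i++)
	{
		if (entries[i].canceled)
			continue;		/* dropped while we were scanning: skip it */
		/* ... fsync of entries[i].segno would happen here ... */
		synced++;
	}
	return synced;
}
```

Because cancellation never removes entries mid-scan, a single pass suffices; the cycle_ctr debated above is then an optimization for skipping requests queued after the scan started, not a correctness requirement.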
[PATCHES] UPDATE using sub selects
Hi,

As per discussion on -hackers, a patch which allows UPDATEs to use
subselects is attached with this mail. As per discussion with Tom, I
have adopted the following approach:

* Introduce a ROWEXPR_SUBLINK type for subqueries that allows multiple
column outputs.
* Populate the targetList with PARAM_SUBLINK entries dependent on the
subselects.
* Modify the targets in place into PARAM_EXEC entries in the
make_subplan phase.

The above does not require any kludges in the targetList processing
code path at all. UPDATEs seem to work fine using subselects with this
patch. I have modified the update.sql regression test to include
possible variations. No documentation changes are present in this
patch.

Feedback and comments appreciated.

Regards,
Nikhils

--
EnterpriseDB http://www.enterprisedb.com
Re: [PATCHES] [HACKERS] CIC and deadlocks
On 4/11/07, Tom Lane <[EMAIL PROTECTED]> wrote:
> [ itch... ] The problem is with time-extended execution of
> GetSnapshotData; what happens if the other guy lost the CPU for a good
> long time while in the middle of GetSnapshotData? He might set his xmin
> based on info you saw as long gone. You might be correct that it's
> safe, but the argument would have to hinge on the OldestXmin process
> being unable to commit because of someone holding shared ProcArrayLock;
> a point you are definitely not making above. (Study the comments in
> GetSnapshotData for awhile, also those in xact.c's commit-related
> code.)

My argument was based on what you said above, but I obviously did not
state it well :) Anyway, I think it's better to be safe, and we agree
that it's not such a bad thing to take an exclusive lock on the proc
array, because CIC is not something that happens very often.

Attached is a revised patch which takes an exclusive lock on the proc
array, the rest remaining the same.

Thanks,
Pavan

--
EnterpriseDB http://www.enterprisedb.com

CIC_deadlock_v2.patch
Description: Binary data
Re: [HACKERS] [PATCHES] Fix mdsync never-ending loop problem
Tom Lane wrote:
> I wrote:
>> Actually, on second look I think the key idea here is Takahiro-san's
>> introduction of a cancellation flag in the hashtable entries, to
>> replace the cases where AbsorbFsyncRequests can try to delete entries.
>> What that means is mdsync() doesn't need an outer retry loop at all.
>
> I fooled around with this idea and came up with the attached patch. It
> seems to do what's intended but could do with more eyeballs and testing
> before committing. Comments please?

I'm traveling today, but I'll take a closer look at it tomorrow
morning. My first thought is that the cycle_ctr just adds extra
complexity. The canceled-flag really is the key in Takahiro-san's
patch, so we don't need the cycle_ctr anymore.

--
Heikki Linnakangas
EnterpriseDB   http://www.enterprisedb.com
Re: [PATCHES] Table function support
> I've been looking at this, and my feeling is that we should drop the
> PROARGMODE_TABLE business and just define RETURNS TABLE(x int, y int)
> as exactly equivalent to RETURNS SETOF RECORD with x and y treated as
> OUT parameters. There isn't any advantage to distinguishing the cases
> that outweighs breaking client code that looks at pg_proc.proargmodes.
> I don't believe that the SQL spec prevents us from exposing those
> parameter names to PL functions, especially since none of our PLs are
> in the standard at all.

The reason for PROARGMODE_TABLE was protection against name collisions:
x and y are table attributes (not variables), so we are protected from
collisions. It's a shortcut for

	create function foo() returns setof record as ...
	select * from foo() as (x int, y int);

Regards,
Pavel Stehule