Re: [HACKERS] pg_proc.h
-Original Message- From: Tom Lane [mailto:[EMAIL PROTECTED] Sent: 10 November 2005 15:38 To: Dave Page Cc: Andrew Dunstan; PostgreSQL-development Subject: Re: [HACKERS] pg_proc.h Dave Page dpage@vale-housing.co.uk writes: I vote for fixing the file (but then I'm not doing the work). Unused_oids or whatever it's called is fine, but it's still handy to be able to read the file easily. Our convention is that hand-assigned OIDs are *globally* unique, not just within the particular catalog. This means you *must* use unused_oids to find a free OID; eyeballing the catalog listing isn't enough, even if it were in strict order. Yes, I realise that; my point was that unused_oids doesn't make the file more readable. Given that, I think readability really consists in keeping related functions together. If we were going to do any wholesale reordering, I'd want to see it done with an eye to sorting the functions into logical groups, not a blind numeric sort. That makes sense for groups of functions, but one-offs, or ones that are not easily categorised, will just end up being dumped anywhere in there. You hack that file *far* more than I do though, so I can't really argue against what you think would be most convenient. Regards, Dave. ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] Obtaining a source tree from CVS
Hello, how can I make a checkout from the CVS server? What is the address? You can find all the information you need to know about how to check pgsql out of cvs at: http://www.postgresql.org/developer/sourcecode/ Also if you are looking for a pretty interface to the source code and all the revision notes you may want to take a look at: http://projects.commandprompt.com/projects/public/pgsql Does anyone have an opinion on linking to this from the page Darcy gave above? Like they are against it unless it's hosted on a pg server? Or they are against it until it can be more real time? Otherwise I think I will add it. Seems reasonable to me - just make sure you state the update frequency around the link so people know about it. //Magnus
Re: [HACKERS] Module incompatibility detection between 8.0 and 8.1
On Thu, Nov 10, 2005 at 11:29:15PM +0100, Olivier Thauvin wrote: I just made the postgresql 8.1 rpm for Mandriva and was doing basic tests. I made in pgfoundry the module pgrpm (rpm functions in postgres). snip Well the problem here is the module was built with postgresql 8.0. I will rebuild the module ASAP, this is not a problem. The problem is rpm has no way to detect this kind of issue: snip Has anyone already thought of adding a marker into modules, and maybe into postgres, giving a way for rpm or other packaging tools to detect broken linkage? To fix the issue, I will add a provide into the postgresql-server package, something like postgresql-ABI = 8.1, and add equivalent requires into modules. There is some discussion about this on -hackers. It'll probably happen at some stage but the actual method is not clear yet. In the meantime, modules can only work with a single release. I'm actually not sure about the policy for minor releases. Have a nice day, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/ Patent. n. Genius is 5% inspiration and 95% perspiration. A patent is a tool for doing 5% of the work and then sitting around waiting for someone else to do the other 95% so you can sue them.
[HACKERS] Does EXPLAIN ANALYZE show a wrong plan for MIN/MAX?
Hi, I've noticed that sometimes EXPLAIN ANALYZE is much slower than the plain query. After investigating I found that it happens when using MIN or MAX aggregates. It seems that the plan outputted is not the optimized one (available since 8.1) that is really used when running the plain query. I.e. this is about 14 times slower:

db=# SELECT min(t_stamp) FROM stats;
          min
------------------------
 2005-01-14 17:43:59+01
(1 row)

Time: 2206.841 ms

db=# EXPLAIN ANALYZE SELECT min(t_stamp) FROM stats;
                                                         QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------
 Aggregate  (cost=65461.73..65461.74 rows=1 width=8) (actual time=30692.485..30692.488 rows=1 loops=1)
   ->  Append  (cost=0.00..59648.38 rows=2325338 width=8) (actual time=0.043..22841.814 rows=2325018 loops=1)
         ->  Seq Scan on stats  (cost=0.00..13.20 rows=320 width=8) (actual time=0.004..0.004 rows=0 loops=1)
         ->  Seq Scan on stats_200501 stats  (cost=0.00..1.30 rows=30 width=8) (actual time=0.030..0.132 rows=30 loops=1)
         ->  Seq Scan on stats_200502 stats  (cost=0.00..117.81 rows=4581 width=8) (actual time=0.055..16.635 rows=4581 loops=1)
         ->  Seq Scan on stats_200503 stats  (cost=0.00..333.05 rows=12905 width=8) (actual time=0.108..46.866 rows=12905 loops=1)
         ->  Seq Scan on stats_200504 stats  (cost=0.00..805.40 rows=31140 width=8) (actual time=0.212..113.868 rows=31140 loops=1)
         ->  Seq Scan on stats_200505 stats  (cost=0.00..5432.80 rows=211580 width=8) (actual time=1.394..767.939 rows=211580 loops=1)
         ->  Seq Scan on stats_200506 stats  (cost=0.00..9533.68 rows=371768 width=8) (actual time=2.870..1352.216 rows=371768 loops=1)
         ->  Seq Scan on stats_200507 stats  (cost=0.00..9467.76 rows=369176 width=8) (actual time=2.761..1348.064 rows=369176 loops=1)
         ->  Seq Scan on stats_200508 stats  (cost=0.00..6023.04 rows=234804 width=8) (actual time=1.537..853.712 rows=234804 loops=1)
         ->  Seq Scan on stats_200509 stats  (cost=0.00..11600.68 rows=452568 width=8) (actual time=3.608..1644.433 rows=452568 loops=1)
         ->  Seq Scan on stats_200510 stats  (cost=0.00..16318.62 rows=636462 width=8) (actual time=5.367..2329.015 rows=636462 loops=1)
         ->  Seq Scan on stats_200511 stats  (cost=0.00..1.04 rows=4 width=8) (actual time=0.028..0.041 rows=4 loops=1)
 Total runtime: 30692.627 ms
(15 rows)

Time: 30694.357 ms

Best regards -- Matteo Beccati http://phpadsnew.com http://phppgads.com
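A quick way to see whether the 8.1 min/max optimization is in play is to compare the aggregate against its explicit ORDER BY/LIMIT equivalent (a sketch only, using the `stats`/`t_stamp` names from the example above; it assumes an index on `t_stamp`, and on an inheritance hierarchy like this one the planner may still fall back to scanning every child):

```sql
-- the aggregate form the 8.1 planner can rewrite into an index scan:
SELECT min(t_stamp) FROM stats;

-- its explicit equivalent, which always uses an index on t_stamp if present:
SELECT t_stamp FROM stats
WHERE t_stamp IS NOT NULL
ORDER BY t_stamp ASC
LIMIT 1;
```

If the two run in similar time but EXPLAIN ANALYZE is far slower than either, the gap is instrumentation overhead rather than a different plan.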
[HACKERS] Postgresql 8.1 XML2
Hi all, I just installed PostgreSQL 8.1 win32 and didn't find an option to install contrib/xml2. Is it available on win32? Shall I build it on my own? Thanks in advance Paolo
Re: [HACKERS] Does EXPLAIN ANALYZE show a wrong plan for MIN/MAX?
On Fri, Nov 11, 2005 at 11:05:45AM +0100, Matteo Beccati wrote: Hi, I've noticed that sometimes EXPLAIN ANALYZE is much slower than the plain query. After investigating I found that it happens when using MIN or MAX aggregates. It seems that the plan outputted is not the optimized one (available since 8.1) that is really used when running the plain query. It may also be that the overhead of calling gettimeofday() several times per tuple is blowing the time out. What platform is this? Hope this helps, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/
Re: [HACKERS] Does EXPLAIN ANALYZE show a wrong plan for MIN/MAX?
Martijn van Oosterhout wrote: I've noticed that sometimes EXPLAIN ANALYZE is much slower than the plain query. After investigating I found that it happens when using MIN or MAX aggregates. It seems that the plan outputted is not the optimized one (available since 8.1) that is really used when running the plain query. It may also be that the overhead of calling gettimeofday() several times per tuple is blowing the time out. What platform is this? FreeBSD 5.4-RELEASE on an HP DL380 G4. I've also tried to do the same on another machine which has 8.0.3 and FreeBSD 4.9-RELEASE-p3: times for the same query are 15s vs 63s with EXPLAIN ANALYZE. Of course I know 8.0 doesn't optimize min/max the same way 8.1 does. Hope this helps. Best regards -- Matteo Beccati http://phpadsnew.com http://phppgads.com
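The timer-overhead effect Martijn describes can be separated from plan differences with psql alone (a sketch; `\timing` reports client-side wall-clock time):

```sql
\timing
-- plain query: execution time only
SELECT min(t_stamp) FROM stats;

-- instrumented run: EXPLAIN ANALYZE takes a timestamp on entry and
-- exit of every plan node for every tuple, so a multi-million-row
-- Append pays millions of extra gettimeofday() calls
EXPLAIN ANALYZE SELECT min(t_stamp) FROM stats;
```

If the "Total runtime" reported by EXPLAIN ANALYZE greatly exceeds the plain `\timing` figure, the gap is instrumentation overhead, not a different plan being chosen.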
Re: [HACKERS] lc_numeric and decimal delimiter
Gavin Sherry wrote: I am at OpenDBCon in Germany. People are complaining about floats/numerics not accepting German/European conventions for the delimiter point -- a comma. Supporting localized input formats everywhere (numbers, geometric objects, date/time, arrays, etc.) will create dozens of incompatible PostgreSQL variants. I think a preferable solution would be to use an updatable view that has, say, to_numeric tied in somewhere. -- Peter Eisentraut http://developer.postgresql.org/~petere/
Re: [HACKERS] Getting table name/tuple from OID
Hi all, I am interested in the answer as well -- how to get a table name (or an operator name) from an OID. The parser must know how to do this, but the segment of code is hard to locate. thanks a lot, Huaxin On 11/7/05, Paresh Bafna [EMAIL PROTECTED] wrote: Actually I want to do this from inside the postgres code, i.e. I want to get the table name and tuple values from the OIDs of the corresponding table and tuple. Is there any built-in function in the postgres code to do this? Paresh Christopher Kings-Lynne wrote: Try SELECT 12341234::regclass; Where 12341234 is the OID of a table. Otherwise try: SELECT tableoid, * FROM table; To get the tableoid on each row. Chris Paresh Bafna wrote: Is there any way to retrieve table name and/or tuple values from OID of table/tuple?
[HACKERS] 8.1 substring bug?
Consider the following: CREATE TEMP TABLE tbl ( id SERIAL NOT NULL, PRIMARY KEY (id) ); COPY tbl (id) FROM stdin; 1 2 3 4 \. SELECT substring ('1234567890' FOR (SELECT count (*) FROM tbl)::int); This returns '1234', as expected. But SELECT substring ('1234567890' FOR (SELECT count (*) FROM tbl)); returns NULL. I think the problem is that SELECT count(*) returns a BIGINT whereas substring expects an INT. Shouldn't there be a warning?
Re: [HACKERS] 8.0 - 8.1 dump duplicate key problem?
Merlin Moncure [EMAIL PROTECTED] writes: When the dump gets to the point where the indexes/keys are built, the primary key fails to build due to duplicate key constraint failure. However, after dump is complete, I can create the p-key without any modification to the table and everything is fine. That's pretty bizarre. What's the datatype of the key column(s)? Can you reduce it to a smaller test case, or perhaps send me the full dump off-list? (270m is a bit much for email, but web or ftp would work ... also, presumably only the pkey column is needed to generate the error ...) I am working on paring down a test case. As you can imagine this is a nasty business. My best shot is to go to the source server and try to dump just that file, but that may mask the problem. Come to think of it, I did make one configuration change: I bumped work_mem after the dump was loaded after noticing a lot of activity in pg_dump. Table schema follows. If the schema seems a bit odd, it is because it was converted from an ISAM file.
[Dennis: I am not the guy from IRC]

esp=# \d data1.parts_order_line_file
        Table "data1.parts_order_line_file"
          Column          |          Type           | Modifiers
--------------------------+-------------------------+-----------
 id                       | cuid                    |
 lastmod                  | ts                      |
 prl_combined_key         | character(9)            | not null
 prl_seq_no               | pic_9_3                 | not null
 prl_combined_key_2       | character(9)            |
 prl_item_no              | character varying(15)   |
 prl_comment_desc         | character varying(2500) |
 prl_location             | character(4)            |
 prl_workstation          | character(4)            |
 prl_stock_loc            | character(4)            |
 prl_qty                  | numeric(7,3)            |
 prl_adj_price            | numeric(8,2)            |
 prl_cost                 | numeric(11,5)           |
 prl_weight               | numeric(7,2)            |
 prl_uom                  | character(2)            |
 prl_vendor_no            | character(6)            |
 prl_vendor_part_no       | character varying(15)   |
 prl_track_this_part      | character(1)            |
 prl_warranty_period      | character varying(10)   |
 prl_comments_1           | character varying(30)   |
 prl_comments_2           | character varying(30)   |
 prl_qty_shipped          | numeric(6,2)[]          |
 prl_qty_still_on_bo      | numeric(6,2)[]          |
 prl_qty_credited         | numeric(6,2)[]          |
 prl_credit_reason        | character(2)[]          |
 prl_credit_reason_type   | character(1)[]          |
 prl_cancel_ship          | character(1)[]          |
 prl_exchange_part        | character(1)            |
 prl_authorization_code   | character varying(10)   |
 prl_item_status          | character(1)            |
 prl_item_status_alpha    | character(1)            |
 prl_cancel_flag          | character(1)            |
 prl_charge_type_flag     | character(1)            |
 prl_ct_taxable_flag      | character(1)            |
 prl_account_cat_code     | character(6)            |
 prl_retail_price         | numeric(8,2)            |
 prl_line_needs_serials   | character(1)            |
 prl_chrg_type_ship_indx  | pic_9_1                 |
 prl_claim_type_flag      | character(1)            |
 prl_attached_wc_seq_no   | pic_9_3                 |
 prl_attached_claim_type  | character varying(10)   |
 prl_already_issued       | character(1)            |
 prl_returned_part_flag   | character(1)            |
 prl_prev_qty_shipped     | numeric(6,2)[]          |
 prl_prev_qty_still_on_bo | numeric(6,2)[]          |
 prl_prev_qty_credited    | numeric(6,2)[]          |
Indexes:
    "parts_order_line_file_pkey" PRIMARY KEY, btree (prl_combined_key, prl_seq_no)
    "parts_order_line_file_prl_exchange_part_key" UNIQUE, btree (prl_exchange_part, id)
    "parts_order_line_file_prl_item_no_key" UNIQUE, btree (prl_item_no, id)
    "parts_order_line_file_prl_trx_type_2_key" UNIQUE, btree (prl_combined_key_2, prl_item_no, id)

CREATE DOMAIN public.pic_9_3 AS int2 DEFAULT 0
    CONSTRAINT valid_range CHECK (VALUE >= 0 AND VALUE <= 999);
[HACKERS] someone working to add merge?
Hi, is someone working on adding the MERGE statement? I can't find much about what a good implementation of MERGE must have... I think what it needs to do is something like:

- try to lock the rows for update
- if the lock cannot be immediately acquired, find out why
- if the rows are already locked, wait and try again?
- if no rows were found, try the insert part
- if there was any other error, abort
- else update

so I suppose we can reuse much of the code, breaking the MERGE into 3 pieces... for now these are just thoughts, I will think more on this and try to implement it... comments? ideas? suggestions? -- regards, Jaime Casanova (DBA: DataBase Aniquilator ;)
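For comparison, the update-then-insert loop described above is roughly what people hand-roll today in PL/pgSQL. A minimal sketch (the table `merge_target(k, v)` and function name are hypothetical, not from the thread):

```sql
-- hypothetical target: CREATE TABLE merge_target (k integer PRIMARY KEY, v text);
CREATE OR REPLACE FUNCTION merge_row(key integer, val text) RETURNS void AS $$
BEGIN
    LOOP
        -- try the update part first
        UPDATE merge_target SET v = val WHERE k = key;
        IF FOUND THEN
            RETURN;
        END IF;
        -- no row found: try the insert part; a concurrent session
        -- may insert the same key between our UPDATE and INSERT
        BEGIN
            INSERT INTO merge_target (k, v) VALUES (key, val);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- someone else got there first; loop and retry the update
        END;
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```

This loop is safe without predicate locking only because the unique index arbitrates conflicts; a general MERGE, whose join condition need not match a unique constraint, is exactly where the predicate-locking problem discussed later in the thread comes in.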
Re: [HACKERS] Getting table name/tuple from OID
On Fri, Nov 11, 2005 at 08:37:07AM -0500, huaxin zhang wrote: Hi all, I am interested in the answer as well -- how to get a table name (or an operator name) from an OID. the parser must know how to do this, but the segment of code is hard to locate. For the purposes of error messages, RelationGetRelationName() does what you want. Otherwise maybe RelationIdGetRelation(Oid relationId). Check out utils/cache/relcache.c for a variety of functions to extract basic data like this. Hope this helps, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/
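From SQL rather than C, the OID alias types cover both lookups asked about in this thread (a sketch; the OIDs shown are placeholders — check pg_class/pg_operator on your own system):

```sql
-- table name from a table OID (prints the relation name, schema-qualified if needed)
SELECT 16384::regclass;

-- operator name from an operator OID; in a stock catalog OID 96
-- happens to be the integer "=" operator, but verify locally:
SELECT 96::regoper;

-- the underlying catalog lookup, equivalent to the casts above:
SELECT relname FROM pg_class    WHERE oid = 16384;
SELECT oprname FROM pg_operator WHERE oid = 96;
```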
Re: [HACKERS] Transitive closure of a directed graph
On Wed, Nov 02, 2005 at 06:31:56PM +0100, Steinar H. Gunderson wrote: I was asked to post this here for any interested parties -- please Cc me on any comments/followups as I'm not subscribed to -hackers. ...and here's a version with another algorithm, in PL/Perl (in PL/PgSQL, the same algorithm is too slow, but PL/Perl does it rather nicely). It's not as polished code-wise, but on my data set, it's about ten times as fast (!), and it needs no temporary table:

CREATE FUNCTION transitive_closure() RETURNS SETOF edges AS $$
sub dfs {
    my ($i, $g, $done, $working) = @_;
    die "Loop found!" if (defined($working->{$i}));
    return if (defined($done->{$i}));
    $working->{$i} = 1;
    my @nodes = @{$g->{$i}};
    my %outgoing = map { $_ => 1 } @nodes;
    for my $j (@nodes) {
        dfs($j, $g, $done, $working);
        for my $k (@{$g->{$j}}) {
            $outgoing{$k} = 1;
        }
    }
    $g->{$i} = [ keys %outgoing ];
    delete $working->{$i};
    $done->{$i} = 1;
}

# fetch all connections belonging to active groups
my %g = ();
my $q = spi_exec_query('SELECT upper, lower FROM edges');
my $numrows = $q->{'processed'};
for my $i (0..$numrows-1) {
    my $row = $q->{rows}[$i];
    if (!defined($g{$row->{'upper'}})) {
        $g{$row->{'upper'}} = [];
    }
    push @{$g{$row->{'upper'}}}, $row->{'lower'};
}

my %done = ();
my %working = ();

# depth-first search from all elements
for my $i (keys %g) {
    dfs($i, \%g, \%done, \%working);
    for my $j (@{$g{$i}}) {
        return_next({ upper => $i, lower => $j });
    }
}
return;
$$ LANGUAGE plperl;

As with the previous post, I'm not on the list, so please Cc me on any comments. /* Steinar */ -- Homepage: http://www.sesse.net/
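Assuming the `edges(upper, lower)` table the function queries (and `edges` as the row type it returns), exercising it is a one-liner; the sample rows here are purely illustrative:

```sql
-- hypothetical setup matching what the function expects
CREATE TABLE edges (upper integer, lower integer);
INSERT INTO edges VALUES (1, 2), (2, 3);

-- returns every reachable pair, e.g. (1,2), (1,3), (2,3)
SELECT * FROM transitive_closure();
```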
[HACKERS] server closed connection on a select query
Hi, I've installed an 8.1.0 PostgreSQL server on SCO OpenServer 5.0.6. It seemed to work well with psql and such tools. I tried to connect to this server with pgAdmin3 and a query failed. I tried to find which part of the query was wrong and I got a strange result: SELECT 1 FROM pg_language WHERE lanispl IS TRUE; this one crashes the server. SELECT 1 FROM pg_language WHERE lanispl = true; works. It seemed to me that IS TRUE was the culprit so I tried something else: SELECT lanispl IS TRUE FROM pg_language; and it works. If I create a table for testing purposes, I can add a WHERE clause with IS TRUE. The last thing I tried was to launch postgres standalone. With the first query, the server crashed with a «Memory fault(coredump)». I can send you the whole log if you want. This behavior happens on another server (SCO too) but not on any Linux that I tried. I've attached the patch I apply to be able to build PostgreSQL on SCO OpenServer. I'm not 100% sure it isn't faulty. Has something like this already happened to someone? Do you know of any tests I can do? Regards. -- Guillaume. <!-- http://abs.traduc.org/ http://lfs.traduc.org/ http://traduc.postgresqlfr.org/ -->

diff -uNr configure configure
--- configure	2005-11-05 05:01:38.0 +0100
+++ configure	2005-11-09 15:25:28.0 +0100
@@ -255,7 +255,7 @@
 #
 # Initializations.
 #
-ac_default_prefix=/usr/local
+ac_default_prefix=/usr/labo/+pg
 ac_config_libobj_dir=.
 cross_compiling=no
 subdirs=
@@ -276,7 +276,7 @@
 PACKAGE_BUGREPORT='pgsql-bugs@postgresql.org'
 ac_unique_file=src/backend/access/common/heaptuple.c
-ac_default_prefix=/usr/local/pgsql
+ac_default_prefix=/usr/labo/+pg
 # Factoring default headers for most tests.
ac_includes_default="\
#include <stdio.h>
@@ -1970,6 +1970,7 @@
 case $template in
   aix) pgac_cc_list="gcc xlc";;
   irix) pgac_cc_list="cc";; # no gcc
+  sco) pgac_cc_list="cc";; # no gcc
   *) pgac_cc_list="gcc cc";;
 esac
@@ -3837,7 +3838,7 @@
 esac
 else
-  with_zlib=yes
+  with_zlib=no
 fi;
diff -uNr src/Makefile.shlib src/Makefile.shlib
--- src/Makefile.shlib	2005-10-28 19:32:22.0 +0200
+++ src/Makefile.shlib	2005-11-09 15:18:36.0 +0100
@@ -213,7 +213,7 @@
 else
   LINK.shared = $(CC) -G
 endif
-  LINK.shared += -Wl,-z,text -Wl,-h,$(soname)
+  LINK.shared += -Wl,-h,$(soname)
 endif
 ifeq ($(PORTNAME), svr4)
diff -uNr src/backend/parser/keywords.c src/backend/parser/keywords.c
--- src/backend/parser/keywords.c	2005-10-15 04:49:22.0 +0200
+++ src/backend/parser/keywords.c	2005-11-09 15:18:36.0 +0100
@@ -41,7 +41,6 @@
 	{"all", ALL},
 	{"also", ALSO},
 	{"alter", ALTER},
-	{"analyse", ANALYSE},		/* British spelling */
 	{"analyze", ANALYZE},
 	{"and", AND},
 	{"any", ANY},
diff -uNr src/interfaces/ecpg/preproc/keywords.c src/interfaces/ecpg/preproc/keywords.c
--- src/interfaces/ecpg/preproc/keywords.c	2005-10-15 04:49:47.0 +0200
+++ src/interfaces/ecpg/preproc/keywords.c	2005-11-09 15:18:36.0 +0100
@@ -39,7 +39,6 @@
 	{"all", ALL},
 	{"also", ALSO},
 	{"alter", ALTER},
-	{"analyse", ANALYSE},		/* British spelling */
 	{"analyze", ANALYZE},
 	{"and", AND},
 	{"any", ANY},
diff -uNr src/interfaces/libpq/fe-secure.c src/interfaces/libpq/fe-secure.c
--- src/interfaces/libpq/fe-secure.c	2005-10-24 17:38:37.0 +0200
+++ src/interfaces/libpq/fe-secure.c	2005-11-09 15:26:51.0 +0100
@@ -566,7 +566,9 @@
 {
 	printfPQExpBuffer(&conn->errorMessage,
 		libpq_gettext("could not get information about host \"%s\": %s\n"),
-		conn->peer_cn, hstrerror(h_errno));
+		conn->peer_cn, hstrerror(errno));
+
+	return -1;
 }
diff -uNr src/port/getaddrinfo.c src/port/getaddrinfo.c
--- src/port/getaddrinfo.c	2005-10-15 04:49:51.0 +0200
+++ src/port/getaddrinfo.c	2005-11-09 15:18:37.0 +0100
@@ -206,7 +206,7 @@
 #endif
 	if (hp == NULL)
 	{
-		switch (h_errno)
+		switch (errno)
 		{
 			case HOST_NOT_FOUND:
 			case
NO_DATA:
diff -uNr src/port/thread.c src/port/thread.c
--- src/port/thread.c	2005-10-15 04:49:51.0 +0200
+++ src/port/thread.c	2005-11-09 15:18:37.0 +0100
@@ -146,7 +146,7 @@
 	*result = gethostbyname(name);
 	if (*result != NULL)
-		*herrno = h_errno;
+		*herrno = errno;
 	if (*result != NULL)
 		return 0;
diff -uNr src/backend/libpq/pqcomm.c src/backend/libpq/pqcomm.c
--- src/backend/libpq/pqcomm.c	2005-10-17 18:24:19.0 +0200
+++ src/backend/libpq/pqcomm.c	2005-11-09 14:46:03.0 +0100
@@ -89,6 +89,8 @@
 #include "storage/ipc.h"
 #include "utils/guc.h"
+typedef unsigned int socklen_t;
+
 /*
  * Configuration options
  */
diff -uNr contrib/userlock/user_locks.c contrib/userlock/user_locks.c
--- contrib/userlock/user_locks.c	2005-11-10 12:49:07.0 +0100
+++ contrib/userlock/user_locks.c	2005-11-10 12:49:47.0 +0100
@@ -79,3 +79,11 @@
 	return true;
 }
+
+int
+user_unlock_all_pid(int pid)
+{
+	elog(ERROR, "utilisation de la fonction user_unlock_all_pid");
+
Re: [HACKERS] 8.1 substring bug?
It's even sillier than that:

test=# SELECT substring ('1234567890' FOR 4::bigint);
 substring
-----------

(1 row)

test=# SELECT substring ('1234567890' FOR 4::int);
 substring
-----------
 1234
(1 row)

Looking at the EXPLAIN VERBOSE output makes it look like it's using the wrong version of substring. It's using the oid 2074 one:

test=# select oid, oid::regprocedure from pg_proc where proname = 'substring';
  oid  |                oid
-------+-----------------------------------
   936 | substring(text,integer,integer)
   937 | substring(text,integer)
  1680 | substring(bit,integer,integer)
  1699 | substring(bit,integer)
  2012 | substring(bytea,integer,integer)
  2013 | substring(bytea,integer)
  2073 | substring(text,text)
  2074 | substring(text,text,text)
 16579 | substring(citext,integer,integer)
 16580 | substring(citext,integer)
(10 rows)

That substring is for regular expressions. Nasty, not sure how to deal with that one... Have a nice day, On Fri, Nov 11, 2005 at 02:43:23PM +0100, Harald Fuchs wrote: Consider the following: CREATE TEMP TABLE tbl ( id SERIAL NOT NULL, PRIMARY KEY (id) ); COPY tbl (id) FROM stdin; 1 2 3 4 \. SELECT substring ('1234567890' FOR (SELECT count (*) FROM tbl)::int); This returns '1234', as expected. But SELECT substring ('1234567890' FOR (SELECT count (*) FROM tbl)); returns NULL. I think the problem is that SELECT count(*) returns a BIGINT whereas substring expects an INT. Shouldn't there be a warning? -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/
Re: [HACKERS] server closed connection on a select query
On Thu, Nov 10, 2005 at 11:53:04PM +0100, Guillaume LELARGE wrote: Hi, I've installed a 8.1.0 PostgreSQL server on a SCO OpenServer 5.0.6. It seemed to work well with psql and such tools. I tried to connect to this server with pgAdmin3 and a query failed. I tried to find which part of the query was wrong and I have a strange result: SELECT 1 FROM pg_language WHERE lanispl IS TRUE; this one crashes the server. SELECT 1 FROM pg_language WHERE lanispl = true; works. Does this pass regression testing (ie make check)? It looks like it's dying all over the place. Posting a backtrace (bt in gdb) would be more helpful. Have a nice day, -- Martijn van Oosterhout kleptog@svana.org http://svana.org/kleptog/
Re: [HACKERS] Module incompatibility detection between 8.0 and 8.1
Martijn van Oosterhout kleptog@svana.org writes: On Thu, Nov 10, 2005 at 11:29:15PM +0100, Olivier Thauvin wrote: To fix the issue, I will add a provide into postgresql-server package, something like postgresql-ABI = 8.1, add equivalent requires into modules. There is some discussion about this on -hackers. It'll probably happen at some stage but the actual method is not clear yet. I haven't noticed any discussion, but: an extra provides-symbol seems completely useless to me. Just make the external modules depend on postgresql-server-8.1. If you do want to discuss this, [EMAIL PROTECTED] would be the place; the general hackers list does not concern itself with RPM packaging details. regards, tom lane
Re: [HACKERS] Does EXPLAIN ANALYZE show a wrong plan for MIN/MAX?
Martijn van Oosterhout kleptog@svana.org writes: On Fri, Nov 11, 2005 at 11:05:45AM +0100, Matteo Beccati wrote: It seems that the plan outputted is not the optimized one (available since 8.1) that is really used when running the plain query. It may also be that the overhead of calling gettimeofday() several times per tuple is blowing the time out. What platform is this? Martijn's explanation is by far the more probable. The high overhead of EXPLAIN ANALYZE has been documented before. regards, tom lane
Re: [HACKERS] Does EXPLAIN ANALYZE show a wrong plan for MIN/MAX?
Tom Lane wrote: Martijn van Oosterhout kleptog@svana.org writes: On Fri, Nov 11, 2005 at 11:05:45AM +0100, Matteo Beccati wrote: It seems that the plan outputted is not the optimized one (available since 8.1) that is really used when running the plain query. It may also be that the overhead of calling gettimeofday() several times per tuple is blowing the time out. What platform is this? Martijn's explanation is by far the more probable. The high overhead of EXPLAIN ANALYZE has been documented before. OK, I've had the same explanation on IRC from dennisb, but I thought it was strange to have a 15x slowdown. So, does benchmarking queries using EXPLAIN ANALYZE lead to unreliable results? Shouldn't a min/max query use an index scan when possible? Best regards -- Matteo Beccati http://phpadsnew.com http://phppgads.com
Re: [HACKERS] 8.1 substring bug?
Martijn van Oosterhout kleptog@svana.org writes: It's even sillier than that: test=# SELECT substring ('1234567890' FOR 4::bigint); substring --- (1 row) test=# SELECT substring ('1234567890' FOR 4::int); substring --- 1234 (1 row) This has been complained of before. The problem is that there is no implicit cast from bigint to int, but there is one from bigint to text, so the only acceptable mapping the parser can find is to convert bigint to text and apply the pattern-match version of substring(). (There are some other things happening here because of the weird SQL99 syntax, but that's the bottom line.) I have opined before that implicit cross-category casts to text are evil. Unfortunately, we'll probably break a lot of people's applications if we remove them... regards, tom lane
Re: [HACKERS] generic builtin functions
Tom Lane wrote: Andrew Dunstan [EMAIL PROTECTED] writes: What about having the calling code fill in the io type oid in an extra field in the flinfo? I don't think that's workable; for one thing there's the problem of manual invocation of the I/O functions, which is not going to provide any such special hack. It also turns the enum proposal into a seriously invasive patch (hitting all PLs both inside and outside the core, for instance), at which point you'll start encountering some significant push-back. Darn. I see that. Stuff like: tmp = DatumGetCString(FunctionCall1(&(desc->arg_out_func[i]), fcinfo->arg[i])); At this stage I am probably going to go with your 64bit proposal, on the ground that it will permit some progress, and in the possibly vain hope that someone will have a flash of insight that will let us do it less redundantly in future. BTW, you might want to think about what'd be involved in supporting arrays and domains over enums ... Yeah. on my list. cheers andrew
Re: [HACKERS] 8.1 substring bug?
On Fri, 11 Nov 2005, Tom Lane wrote: Martijn van Oosterhout kleptog@svana.org writes: It's even sillier than that: test=# SELECT substring ('1234567890' FOR 4::bigint); substring --- (1 row) test=# SELECT substring ('1234567890' FOR 4::int); substring --- 1234 (1 row) This has been complained of before. The problem is that there is no implicit cast from bigint to int, but there is one from bigint to text, so the only acceptable mapping the parser can find is to convert bigint to text and apply the pattern-match version of substring(). (There are some other things happening here because of the weird SQL99 syntax, but that's the bottom line.) It looks to me like we should be supporting any exact numeric with scale 0 there (at least AFAICS from SQL92 and SQL03), so I don't think the current behavior is compliant. It doesn't look like adding a numeric overload of the function works, and the function also becomes ambiguous for int2 inputs. :(
Re: [HACKERS] 8.1 substring bug?
Stephan Szabo [EMAIL PROTECTED] writes: It looks to me like we should be supporting any exact numeric with scale 0 there (at least AFAICS from SQL92 and SQL03), so I don't think the current behavior is compliant. It doesn't look like adding a numeric overload of the function works, and the function also becomes ambiguous for int2 inputs. :( Currently (see gram.y, about line 7600) the grammar converts SUBSTRING(foo FOR bar) into pg_catalog.substring(foo, 1, bar) and then leaves the normal function-call-analysis code to make the best of it with that. If bar isn't implicitly castable to integer then you've got trouble. I was toying with the idea of making it translate instead to pg_catalog.substring(foo, 1, (bar)::int4) since AFAICS there isn't any other reasonable mapping once you have committed to having the 1 in there. This does not solve the general problem, but it'd address the particular case anyway ... regards, tom lane
Re: [HACKERS] 8.1 substring bug?
Martijn van Oosterhout kleptog@svana.org writes: In this particular case the syntax makes it unclear that the substring is the problem. Perhaps here the solution would be to put a cast in the grammar, like so:

substr_for: FOR a_expr { $$ = makeTypeCast($2, int4); } ;

Not there, because it would break the variants where FOR introduces a character expression, eg

<regular expression substring function> ::=
    SUBSTRING <left paren> <character value expression>
    FROM <character value expression>
    FOR <escape character> <right paren>

But I think we could do this in substr_list in the case where we have just a_expr substr_for, because there are no variants of that where the FOR expression is supposed to be a string. See my other message just now. regards, tom lane
Re: [HACKERS] someone working to add merge?
Jaime, so i suppose we can reuse much of the code breaking the merge in 3 pieces... for now they are just thoughts, i will think more on this and try to implement it... comments? ideas? suggestions? Funny, we were just discussing this at OpenDBCon. Seems that you can't do a full implementation of MERGE without Predicate Locking (the ability to say lock this table against inserts or updates of any row with key=5). However, Peter suggested that we could do a proof-of-concept implementation, working out syntax and trigger issues, based on a full table lock and do the hard work once it was proved to be feasible. Peter? -- Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] someone working to add merge?
On 11/11/05, Josh Berkus josh@agliodbs.com wrote: Jaime, so i suppose we can reuse much of the code breaking the merge in 3 pieces... for now they are just thoughts, i will think more on this and try to implement it... comments? ideas? suggestions? Funny, we were just discussing this at OpenDBCon. Seems that you can't do a full implementation of MERGE without Predicate Locking (the ability to say lock this table against inserts or updates of any row with key=5). isn't that what select for update does? However, Peter suggested that we could do a proof-of-concept implementation, working out syntax and trigger issues, based on a full table lock and do the hard work once it was proved to be feasible. Peter? -- Josh Berkus Aglio Database Solutions San Francisco -- Atentamente, Jaime Casanova (DBA: DataBase Aniquilator ;) ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] someone working to add merge?
Jaime Casanova wrote: Funny, we were just discussing this at OpenDBCon. Seems that you can't do a full implementation of MERGE without Predicate Locking (the ability to say lock this table against inserts or updates of any row with key=5). isn't that what select for update does? It won't prevent the insertion of a row with the given predicate. cheers andrew ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] someone working to add merge?
On Fri, 2005-11-11 at 18:15, Jaime Casanova wrote: On 11/11/05, Josh Berkus josh@agliodbs.com wrote: Jaime, so i suppose we can reuse much of the code breaking the merge in 3 pieces... for now they are just thoughts, i will think more on this and try to implement it... comments? ideas? suggestions? Funny, we were just discussing this at OpenDBCon. Seems that you can't do a full implementation of MERGE without Predicate Locking (the ability to say lock this table against inserts or updates of any row with key=5). isn't that what select for update does? Select for update only works if the row is already there. If there's no row, you can't lock it. So you want then to insert it, but then it is possible that somebody inserted it before you, immediately after your update... so the solution would be more like: - try insert; - if insert fails, do update; You can already do that, but you have to place a save-point before the insert, so you can continue your transaction even if the insert fails. Without knowledge of postgres internals, the simplest would be to make continuing the transaction after a failed insert as cheap as possible. This would mean wrapping up existing code, except for the continue-transaction-after-failed-insert part. All this might be completely bull*it of course, I don't know too much about postgres internals. [snip] Cheers, Csaba. ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] 8.1 substring bug?
On Fri, 11 Nov 2005, Tom Lane wrote: Stephan Szabo [EMAIL PROTECTED] writes: It looks to me like we should be supporting any exact numeric with scale 0 there (at least AFAICS from SQL92 and SQL03), so I don't think the current behavior is compliant. It doesn't look like adding a numeric overload of the function works, and the function also becomes ambiguous for int2 inputs. :( Currently (see gram.y, about line 7600) the grammar converts SUBSTRING(foo FOR bar) into pg_catalog.substring(foo, 1, bar) and then leaves the normal function-call-analysis code to make the best of it with that. If bar isn't implicitly castable to integer then you've got trouble. Right, I was thinking we could get around it with another substring that took two numerics, but then I think FROM int2 FOR int2 would be ambiguous. I was toying with the idea of making it translate instead to pg_catalog.substring(foo, 1, (bar)::int4) since AFAICS there isn't any other reasonable mapping once you have committed to having the 1 in there. This does not solve the general problem, but it'd address the particular case anyway ... And, it's fairly reasonable to assume at least right now that any reasonable string offset or length fits in an int4. ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] someone working to add merge?
Josh Berkus wrote: Funny, we were just discussing this at OpenDBCon. Seems that you can't do a full implementation of MERGE without Predicate Locking (the ability to say lock this table against inserts or updates of any row with key=5). However, Peter suggested that we could do a proof-of-concept implementation, working out syntax and trigger issues, based on a full table lock and do the hard work once it was proved to be feasable. Yes, I've started to work on this. Realizing that the current way to manually do an UPDATE-else-INSERT or DELETE-then-INSERT involves a table lock anyway, a MERGE implementation using a table lock would at least give some convenience benefit to users. (And possibly some performance, too, if the decision logic is currently run in the client.) A predicate locking implementation for MERGE might actually not be all that complicated, because you only need to look on pk = constant, not on arbitrary expressions. Nevertheless, I think it's best to write the MERGE command first and then optimize the locking. -- Peter Eisentraut http://developer.postgresql.org/~petere/ ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
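[Editor's note: the manual UPDATE-else-INSERT with a table lock that Peter describes looks roughly like this. A sketch only, using a hypothetical table t(k, v); the lock mode is one plausible choice, not taken from the thread:]

```sql
BEGIN;
-- SHARE ROW EXCLUSIVE conflicts with itself, so no other session can
-- run this same sequence (or any write) concurrently on t:
LOCK TABLE t IN SHARE ROW EXCLUSIVE MODE;
UPDATE t SET v = 'new' WHERE k = 5;
-- With the lock held, the key cannot appear between these statements,
-- so the guarded INSERT cannot race with another writer:
INSERT INTO t (k, v)
SELECT 5, 'new' WHERE NOT EXISTS (SELECT 1 FROM t WHERE k = 5);
COMMIT;
```

The convenience benefit of a server-side MERGE is exactly that this lock-update-insert dance moves out of client code.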
Re: [HACKERS] someone working to add merge?
Josh Berkus josh@agliodbs.com writes: Funny, we were just discussing this at OpenDBCon. Seems that you can't do a full implementation of MERGE without Predicate Locking (the ability to say lock this table against inserts or updates of any row with key=5). However, Peter suggested that we could do a proof-of-concept implementation, working out syntax and trigger issues, based on a full table lock and do the hard work once it was proved to be feasable. If you don't have any better idea how to do it than a full table lock, you might as well not do it at all. A proof of concept that doesn't solve the hard part of the problem is no proof :-( My first guess about a real implementation would involve extending the index AM API to offer a function insert this key, or return the existing match if there already is one. This might tie into refactoring the existing implementation of unique indexes, in which all the problem is put on the AM's head (which is why only btree copes at the moment). regards, tom lane ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
Re: [HACKERS] 8.1 substring bug?
Stephan Szabo [EMAIL PROTECTED] writes: On Fri, 11 Nov 2005, Tom Lane wrote: I was toying with the idea of making it translate instead to pg_catalog.substring(foo, 1, (bar)::int4) since AFAICS there isn't any other reasonable mapping once you have committed to having the 1 in there. This does not solve the general problem, but it'd address the particular case anyway ... And, it's fairly reasonable to assume at least right now that any reasonable string offset or length fits in an int4. If we thought differently we'd be changing the substring function, and we could surely change the translation at the same time. regards, tom lane ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [HACKERS] someone working to add merge?
OK, I'm relatively new on this list, and I might have missed a few discussions on this topic. I wonder if doing it this way would not be better than using a table lock: - set a save point; - insert the row; - on error: - roll back to the save point; - update the row; - on success release the save point; This would provide less contention while paying the price for the save point. In low contention scenarios the table lock would be better, and I wonder for high contention scenarios which is better, the table lock, or the save point version... Of course the table lock version is the future if predicate locking is going to be implemented later. Cheers, Csaba. On Fri, 2005-11-11 at 18:37, Peter Eisentraut wrote: Josh Berkus wrote: Funny, we were just discussing this at OpenDBCon. Seems that you can't do a full implementation of MERGE without Predicate Locking (the ability to say lock this table against inserts or updates of any row with key=5). However, Peter suggested that we could do a proof-of-concept implementation, working out syntax and trigger issues, based on a full table lock and do the hard work once it was proved to be feasible. Yes, I've started to work on this. Realizing that the current way to manually do an UPDATE-else-INSERT or DELETE-then-INSERT involves a table lock anyway, a MERGE implementation using a table lock would at least give some convenience benefit to users. (And possibly some performance, too, if the decision logic is currently run in the client.) A predicate locking implementation for MERGE might actually not be all that complicated, because you only need to look on pk = constant, not on arbitrary expressions. Nevertheless, I think it's best to write the MERGE command first and then optimize the locking. ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
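[Editor's note: the savepoint approach above can be packaged server-side, since a PL/pgSQL BEGIN...EXCEPTION block establishes an implicit savepoint. A sketch with a hypothetical table t(k, v); note the visibility caveat raised elsewhere in this thread about rows inserted by uncommitted concurrent transactions:]

```sql
CREATE FUNCTION upsert_t(in_k int, in_v text) RETURNS void AS $$
BEGIN
    -- try the insert first; the exception block's implicit savepoint
    -- lets the transaction continue if it fails
    INSERT INTO t (k, v) VALUES (in_k, in_v);
EXCEPTION WHEN unique_violation THEN
    -- key already present: fall back to an update
    UPDATE t SET v = in_v WHERE k = in_k;
END;
$$ LANGUAGE plpgsql;
```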
[HACKERS] Multi-table-unique-constraint
On Thursday 10 November 2005 15:58, you wrote: The multi-table-unique-constraint problem has to be solved before we can even think much about multi-table FKs :-( Do you have ideas about how this should be solved? There's some discussions in the pghackers archives --- look for multi-table index and similar phrases. But if anyone had had a really decent plan, it would have got done by now :-( Are multi-table indexes really required? After reading the code some more, I came across this comment in nbtinsert.c:_bt_doinsert

 * NOTE: obviously, _bt_check_unique can only detect keys that are already in
 * the index; so it cannot defend against concurrent insertions of the
 * same key. We protect against that by means of holding a write lock on
 * the target page. Any other would-be inserter of the same key must
 * acquire a write lock on the same target page, so only one would-be
 * inserter can be making the check at one time. Furthermore, once we are
 * past the check we hold write locks continuously until we have performed
 * our insertion, so no later inserter can fail to see our insertion.
 * (This requires some care in _bt_insertonpg.)

Would it be possible to make another routine that locates and acquires a write lock on the page where the key would be inserted in each index (for each table in the inheritance hierarchy), and holds all these locks until the key is inserted into the correct index. It seems this would solve the unique problem without changing much else. Matt ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] someone working to add merge?
Tom Lane wrote: If you don't have any better idea how to do it than a full table lock, you might as well not do it at all. A proof of concept that doesn't solve the hard part of the problem is no proof :-( But the problem here is not to break any kind of performance barrier, but to give people migrating from MySQL an alternative to the REPLACE command. My first guess about a real implementation would involve extending the index AM API to offer a function insert this key, or return the existing match if there already is one. This assumes that there are indexes defined for the columns involved in the merge condition, which is not required anywhere. -- Peter Eisentraut http://developer.postgresql.org/~petere/ ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] Getting table name/tuple from OID
huaxin zhang wrote: Hi all, I am interested in the answer as well -- how to get a table name (or an operator name) from an OID. The parser must know how to do this, but the segment of code is hard to locate.

CREATE OR REPLACE FUNCTION gettablename(__oid oid) RETURNS varchar AS
$BODY$
  SELECT (_sna.nspname || '.' || _tna.relname) AS tableQualifiedName
  FROM pg_class _tna, pg_namespace _sna
  WHERE _tna.oid = $1 AND _sna.oid = _tna.relnamespace;
$BODY$
LANGUAGE 'sql' IMMUTABLE;

ALTER FUNCTION gettablename(__oid oid) OWNER TO root;

---(end of broadcast)--- TIP 6: explain analyze is your friend
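[Editor's note: for ad-hoc lookups, the built-in reg* casts already do this, without a custom function. A sketch; the OIDs shown are the usual built-in values in a stock install, and are assumptions for any given database:]

```sql
SELECT 1259::regclass;       -- table name for OID 1259 ('pg_class')
SELECT 96::regoper;          -- operator name (the int4 '=' operator)
SELECT 2074::regprocedure;   -- function signature: substring(text,text,text)
```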
Re: [HACKERS] someone working to add merge?
Peter Eisentraut [EMAIL PROTECTED] writes: This assumes that there are indexes defined for the columns involved in the merge condition, which is not required anywhere. Surely they require a unique constraint --- else the behavior isn't even well defined, is it? regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] someone working to add merge?
On 11/11/05, Peter Eisentraut [EMAIL PROTECTED] wrote: Tom Lane wrote: If you don't have any better idea how to do it than a full table lock, you might as well not do it at all. A proof of concept that doesn't solve the hard part of the problem is no proof :-( But the problem here is not to break any kind of performance barrier, but to give people migrating from MySQL an alternative to the REPLACE command. But MERGE isn't REPLACE... REPLACE will delete old records to insert new ones; MERGE tries to insert, and if the record exists it can UPDATE just a few values, maybe incrementing them with a value (all the calculations are done by the MERGE) My first guess about a real implementation would involve extending the index AM API to offer a function insert this key, or return the existing match if there already is one. This assumes that there are indexes defined for the columns involved in the merge condition, which is not required anywhere. -- regards, Jaime Casanova (DBA: DataBase Aniquilator ;) ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
Re: [HACKERS] someone working to add merge?
Tom Lane wrote: Surely they require a unique constraint --- else the behavior isn't even well defined, is it? They require that the merge condition does not match for more than one row, but since the merge condition can do just about anything, there is no guarantee that a unique constraint encompasses it. -- Peter Eisentraut http://developer.postgresql.org/~petere/ ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] someone working to add merge?
Jaime Casanova wrote: But MERGE isn't REPLACE... REPLACE will delete old records to insert new ones; MERGE tries to insert, and if the record exists it can UPDATE just a few values, maybe incrementing them with a value (all the calculations are done by the MERGE) That sounds like MySQL's 'INSERT INTO ... ON DUPLICATE KEY UPDATE', which they recommend over REPLACE anyway. ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
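[Editor's note: for reference, the MySQL syntax in question looks like this. This is MySQL, not PostgreSQL, and the table t(k, v) is hypothetical:]

```sql
-- MySQL: insert a row, or on a duplicate key add the would-be-inserted
-- value to the existing one; VALUES(v) refers to the proposed value:
INSERT INTO t (k, v) VALUES (5, 10)
  ON DUPLICATE KEY UPDATE v = v + VALUES(v);
```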
Re: [HACKERS] Multi-table-unique-constraint
Matt Newell [EMAIL PROTECTED] writes: Would it be possible to make another routine that locates and aquires a write lock on the page where the key would be inserted in each index(for each table in the inheritance), and holds all these locks until the key is inserted into the correct index. It seems this would solve the unique problem without changing much else. It's an idea, but you are now staring directly into the hornet's nest: 1. How do you avoid deadlock among multiple processes all doing the above for similar (same page anyway) keys? It's difficult if not impossible to ensure that they'll try to take the page locks in the same order. 2. What happens when another process is adding/dropping indexes that should be in the index set? In the normal scenario you don't have any sort of lock on any of the other tables, only the one you are trying to insert into; and so you have no defense against somebody changing their schemas, up to and including dropping the index you are fooling with. Adding such locks would increase the deadlock hazard. Also, for many scenarios (including FKs) it's important to be able to *look up* a particular key, not only to prevent insertion of duplicates. The above approach would require searching multiple indexes. Most of the people who have thought about this have figured that the right solution involves a single index spanning multiple tables (hence, adding a table ID to the index entry headers in such indexes). This fixes the lookup and entry problems, but it's not any help for the lock-against-schema-mods problem, and it leaves you with a real headache if you want to drop just one of the tables. 'Tis a hard problem :-( regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] someone working to add merge?
Jaime Casanova wrote: REPLACE will delete old records to insert new ones; MERGE tries to insert, and if the record exists it can UPDATE just a few values, maybe incrementing them with a value (all the calculations are done by the MERGE) I'm not the expert on REPLACE, but it would seem that REPLACE is a special case of MERGE. -- Peter Eisentraut http://developer.postgresql.org/~petere/ ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] someone working to add merge?
On Fri, Nov 11, 2005 at 18:48:33 +0100, Csaba Nagy [EMAIL PROTECTED] wrote: OK, I'm relatively new on this list, and I might have missed a few discussions on this topic. I wonder if doing it this way would not be better than using a table lock: - set a save point; - insert the row; - on error: - roll back to the save point; - update the row; - on success release the save point; This would provide less contention while paying the price for the save point. In low contention scenarios the table lock would be better, and I wonder for high contention scenarios which is better, the table lock, or the save point version... You may not be able to update the row after the insert fails. If there is an insert occurring in another transaction, the row may not be visible to the current transaction. In which case you can neither insert nor update the row. You need to wait for the other transaction to commit or rollback. ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] someone working to add merge?
Peter Eisentraut [EMAIL PROTECTED] writes: Tom Lane wrote: Surely they require a unique constraint --- else the behavior isn't even well defined, is it? They require that the merge condition does not match for more than one row, but since the merge condition can do just about anything, there is no guarantee that a unique constraint encompasses it. ISTM to be a reasonable implementation restriction that there be a constraint by which the system can prove that there is at most one matching row. Per other comments in this thread, we'd not be the only implementation making such a restriction. (Certainly, if I were a DBA and were told that the performance of MERGE would go to hell in a handbasket if I had no such constraint, I'd make sure there was one. I don't think there is very much of a use-case for the general scenario.) regards, tom lane ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[HACKERS] MERGE vs REPLACE
It seems to me that it has always been implicitly assumed around here that the MERGE command would be a substitute for a MySQL-like REPLACE functionality. After rereading the spec it seems that this is not the case. MERGE always operates on two different tables, which REPLACE doesn't do. That said, what kind of support for insert-or-update-this-row do we want to provide, if any? Should it be a REPLACE command, an extension of the INSERT command, a modification of the MERGE syntax, or something else? -- Peter Eisentraut http://developer.postgresql.org/~petere/ ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
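[Editor's note: to make the distinction concrete, here are syntax sketches of the two commands being contrasted. SQL:2003 MERGE joins a source table against a target; MySQL REPLACE works on a single table by deleting any conflicting old row first. Neither is implemented in PostgreSQL at this point, and the table names are hypothetical:]

```sql
-- SQL:2003 MERGE: two tables, arbitrary match condition
MERGE INTO target t
USING source s
   ON t.id = s.id
 WHEN MATCHED THEN UPDATE SET val = s.val
 WHEN NOT MATCHED THEN INSERT (id, val) VALUES (s.id, s.val);

-- MySQL REPLACE: one table; a row conflicting on a unique key is
-- deleted and the new row inserted
REPLACE INTO target (id, val) VALUES (1, 'x');
```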
Re: [HACKERS] 8.1 substring bug?
In article [EMAIL PROTECTED], Martijn van Oosterhout kleptog@svana.org writes: It's even sillier than that:

test=# SELECT substring ('1234567890' FOR 4::bigint);
 substring
-----------
 
(1 row)

test=# SELECT substring ('1234567890' FOR 4::int);
 substring
-----------
 1234
(1 row)

Looking at the explain verbose makes it look like it's using the wrong version of substring. It's using the oid 2074 one:

test=# select oid, oid::regprocedure from pg_proc where proname = 'substring';
  oid  |                oid
-------+-----------------------------------
   936 | substring(text,integer,integer)
   937 | substring(text,integer)
  1680 | substring(bit,integer,integer)
  1699 | substring(bit,integer)
  2012 | substring(bytea,integer,integer)
  2013 | substring(bytea,integer)
  2073 | substring(text,text)
  2074 | substring(text,text,text)
 16579 | substring(citext,integer,integer)
 16580 | substring(citext,integer)
(10 rows)

That substring is for regular expressions. Nasty, not sure how to deal with that one... Ah, so it's using substring (STRING from PATTERN for ESCAPE)? Yes, that explains the NULL. Looks like we're in the INT/BIGINT confusion again... Have a nice day, It's a nice day since I have a nice workaround for this misfeature :-) ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] Multi-table-unique-constraint
On Friday 11 November 2005 11:07, you wrote: It's an idea, but you are now staring directly into the hornet's nest: 1. How do you avoid deadlock among multiple processes all doing the above for similar (same page anyway) keys? It's difficult if not impossible to ensure that they'll try to take the page locks in the same order. Isn't all that's required that they iterate through the indexes in the same order? This shouldn't be hard to do, because the set of indexes is the same no matter what table you are inserting into, because the unique constraint will apply to all tables both up and down the inheritance tree. That order just needs to be stored somewhere. What if there was a new system relation (pg_indexset) that stores an array of index oids. Each index that is part of an index set has an fkey into this table. When acquiring the locks on the index pages, you must a) have a ROW SHARE lock on the pg_indexset row for this set, which ensures that the schema won't change from under us, and b) do so in the order that the index oids are in. This solves the problem below also, because you would hold a row exclusive lock on the row in this table whenever adding or removing indexes from the set. Now that i think about it some more, i realize that you only need to hold read locks on the index pages that you don't plan on actually inserting a new key into, which shouldn't cause near as much lock contention as holding write locks on multiple indexes' pages. 2. What happens when another process is adding/dropping indexes that should be in the index set? In the normal scenario you don't have any sort of lock on any of the other tables, only the one you are trying to insert into; and so you have no defense against somebody changing their schemas, up to and including dropping the index you are fooling with. Adding such locks would increase the deadlock hazard. 
Also, for many scenarios (including FKs) it's important to be able to *look up* a particular key, not only to prevent insertion of duplicates. The above approach would require searching multiple indexes. Why would this be required, if it currently isn't? I mean you can already do Select from parent where key=X; and the planner takes care of scanning multiple indexes(or sequence scans). If it is required though, it should be no more difficult that doing what i described above, right? Most of the people who have thought about this have figured that the right solution involves a single index spanning multiple tables (hence, adding a table ID to the index entry headers in such indexes). This fixes the lookup and entry problems, but it's not any help for the lock-against-schema-mods problem, and it leaves you with a real headache if you want to drop just one of the tables. It seems that the above solution would be less work, and would keep the data separate, which seems to be one of the biggest advantages of the current inheritance design. 'Tis a hard problem :-( I think that's why i'm interested:) I hope that I can succeed so as to not have wasted your valuable time. BTW, i'm on the list now, so no need to cc me. Matt ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] MERGE vs REPLACE
Peter Eisentraut wrote: It seems to me that it has always been implicitly assumed around here that the MERGE command would be a substitute for a MySQL-like REPLACE functionality. After rereading the spec it seems that this is not the case. MERGE always operates on two different tables, which REPLACE doesn't do. That said, what kind of support for insert-or-update-this-row do we want to provide, if any? Should it be a REPLACE command, an extension of the INSERT command, a modification of the MERGE syntax, or something else? MERGE, of course; it's standard, while REPLACE is a MySQL extension -- Regards Petr Jelinek (PJMODOS) www.parba.cz ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
[HACKERS] How to find a number of connections
Is there a way to find a number of current connections on Postgres 7.3.x ? Thank you, Mike ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] MERGE vs REPLACE
On 11/11/05, Peter Eisentraut [EMAIL PROTECTED] wrote: It seems to me that it has always been implicitly assumed around here that the MERGE command would be a substitute for a MySQL-like REPLACE functionality. After rereading the spec it seems that this is not the case. MERGE always operates on two different tables, which REPLACE doesn't do. That said, what kind of support for insert-or-update-this-row do we want to provide, if any? Should it be a REPLACE command, an extension of the INSERT command, a modification of the MERGE syntax, or something else? -- Peter Eisentraut http://developer.postgresql.org/~petere/ MERGE seems to me the better option... not just because it is standard, but also because i can see some use cases for it... -- regards, Jaime Casanova (DBA: DataBase Aniquilator ;) ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [HACKERS] How to find a number of connections
On Fri, Nov 11, 2005 at 15:09:41 -0500, Brusser, Michael [EMAIL PROTECTED] wrote: Is there a way to find a number of current connections on Postgres 7.3.x ? This might help you: http://www.postgresql.org/docs/7.3/static/monitoring.html ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] MERGE vs REPLACE
Jaime Casanova wrote: MERGE seems to me the better option... not just because is standard but at least i can see some use cases for it... I don't think you understand my message: MERGE does not do what REPLACE does. -- Peter Eisentraut http://developer.postgresql.org/~petere/ ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
[HACKERS] How to find a number of connections
Is there a way to find a number of current connections on Postgres 7.3.x ? I looked at some system tables and views, but did not see anything obvious. Thank you, Mike
Re: [HACKERS] How to find a number of connections
Please disregard this duplicate submission, my mistake. Mike From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Brusser, Michael Sent: Friday, November 11, 2005 12:35 PM To: pgsql-hackers@postgresql.org Subject: [HACKERS] How to find a number of connections Is there a way to find a number of current connections on Postgres 7.3.x ? I looked at some system tables and views, but did not see anything obvious. Thank you, Mike
Re: [HACKERS] MERGE vs REPLACE
Peter Eisentraut [EMAIL PROTECTED] writes: It seems to me that it has always been implicitly assumed around here that the MERGE command would be a substitute for a MySQL-like REPLACE functionality. After rereading the spec it seems that this is not the case. MERGE always operates on two different tables, which REPLACE doesn't do. Normally I'd plump for following the standard ... but AFAIR, we have had bucketloads of requests for REPLACE functionality, and not one request for spec-compatible MERGE. If, as it appears, full-spec MERGE is also a whole lot harder and slower than REPLACE, it seems that we could do worse than to concentrate on doing REPLACE for now. (We can always come back to MERGE some other day.) regards, tom lane ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] MERGE vs REPLACE
On 11/11/05, Peter Eisentraut [EMAIL PROTECTED] wrote: Jaime Casanova wrote: MERGE seems to me the better option... not just because it is standard but at least i can see some use cases for it... I don't think you understand my message: MERGE does not do what REPLACE does. -- Peter Eisentraut http://developer.postgresql.org/~petere/ I understand you well... what i was trying to say is that i prefer MERGE (standard SQL command) to be done because the functionality it has (basically a merge of two tables) seems to me to be more useful than REPLACE (MySQL command)... -- regards, Jaime Casanova (DBA: DataBase Aniquilator ;) ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [HACKERS] Multi-table-unique-constraint
Matt Newell [EMAIL PROTECTED] writes: On Friday 11 November 2005 11:07, you wrote: 1. How do you avoid deadlock among multiple processes all doing the above for similar (same page anyway) keys? Isn't all that is required is that they iterate through the indexes in the same order. Yeah, I was thinking along the same lines. As long as any one index is a member of at most one index set, this'd probably work. (Maybe you wouldn't even need that restriction if you used a globally defined ordering, such as always processing the indexes in order by their pg_class OIDs.) Some concept of shared and exclusive locks on index sets (extending only to the membership of the set, not to operations on the individual member indexes) might fix the schema-change problem, too, although you still need to think about whether there's a risk of deadlocks for that. In the past we've figured that exclusively locking a table is necessary and sufficient for schema alterations on that table, but I'm not sure what to do for cross-table index sets. What if there was a new system relation(pg_indexset) that stores an array of index oids. Each index that is part of an index set has an fkey into this table. I'd be inclined to think about using pg_inherits instead, ie, pretend that the child table indexes are inheritance children of the parent table index. If this is too inefficient, it suggests that we need to fix pg_inherits anyway. Also, for many scenarios (including FKs) it's important to be able to *look up* a particular key, not only to prevent insertion of duplicates. The above approach would require searching multiple indexes. Why would this be required, if it currently isn't? Well, because we're trying to do something that currently isn't possible? It might not matter that we don't have a single instant at which we can swear that the key is not present anywhere in the hierarchy, but I'm not convinced that this is obviously true. 
Your thought about leaving read locks on index pages while searching other indexes might fix that, though, if it needs fixed at all. Most of the people who have thought about this have figured that the right solution involves a single index spanning multiple tables (hence, adding a table ID to the index entry headers in such indexes). It seems that the above solution would be less work, and would keep the data separate, which seems to be one of the biggest advantages of the current inheritance design. Yeah, I'm getting more attracted to the idea as I think about it. Not so much because it keeps the data separate, as that it avoids needing to store a table OID in index headers, which has been a principal objection to cross-table indexes all along because of the space cost. Probably the next thing to think about is how this would impact the index AM API. I'm disinclined to want to put all of this logic inside the index AMs, so somehow the find and leave page write locked behavior would need to be exposed in the AM API. That ties into a larger goal of not wanting the unique-index behavior to be totally the AM's responsibility as it is right now --- I dislike the fact that nbtree is responsible for reaching into the heap to test rows for liveness, for instance. If we could separate that out a bit, it might make it easier to support unique-index behavior in the other AMs. BTW, i'm on the list now, so no need to cc me. Common practice around here is to cc people anyway --- this has grown out of a history of occasionally-slow list mail delivery. If you don't want it, best to fix it in your mail filters rather than expecting people to change habits for you. regards, tom lane ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] MERGE vs REPLACE
Guys, I understand you well... what I was trying to say is that I prefer MERGE (the standard SQL command) to be done, because the functionality it has (basically a merge of two tables) seems to me to be more useful than REPLACE (the MySQL command)... But even REPLACE requires predicate locking. There's no real way to get around it. --Josh -- --Josh Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 9: In versions below 8.0, the planner will ignore your desire to choose an index scan if your joining column's datatypes do not match
Re: [HACKERS] someone working to add merge?
I Wrote: From the mysql manual: 'REPLACE works exactly like INSERT, except that if an old record in the table has the same value as a new record for a PRIMARY KEY or a UNIQUE index, the old record is deleted before the new record is inserted. See Section 13.2.4, INSERT Syntax.' It also says: Note that unless the table has a PRIMARY KEY or UNIQUE index, using a REPLACE statement makes no sense. It becomes equivalent to INSERT, because there is no index to be used to determine whether a new row duplicates another. ... John ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] someone working to add merge?
Tom Lane Wrote: Surely they require a unique constraint --- else the behavior isn't even well defined, is it? From the mysql manual: 'REPLACE works exactly like INSERT, except that if an old record in the table has the same value as a new record for a PRIMARY KEY or a UNIQUE index, the old record is deleted before the new record is inserted. See Section 13.2.4, INSERT Syntax.' ... John ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
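For reference, the MySQL semantics quoted above amount to delete-then-insert on the unique key. A naive PostgreSQL transcription (a sketch against a hypothetical table t(k PRIMARY KEY, v), not anyone's actual proposal) shows why the thread keeps returning to locking:

```sql
-- Sketch: approximating MySQL's REPLACE for a hypothetical table t(k int PRIMARY KEY, v text).
-- NOT race-safe: two concurrent sessions can both pass the DELETE and then
-- collide on the INSERT -- exactly the predicate-locking problem under discussion.
BEGIN;
DELETE FROM t WHERE k = 1;                -- remove any old row with the same key
INSERT INTO t (k, v) VALUES (1, 'new');   -- insert the replacement row
COMMIT;
```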
Re: [HACKERS] MERGE vs REPLACE
On 11/11/05, Josh Berkus josh@agliodbs.com wrote: Guys, I understand you well... what I was trying to say is that I prefer MERGE (the standard SQL command) to be done, because the functionality it has (basically a merge of two tables) seems to me to be more useful than REPLACE (the MySQL command)... But even REPLACE requires predicate locking. There's no real way to get around it. --Josh why? It seems that REPLACE only works if there is at least one row matching... -- Atentamente, Jaime Casanova (DBA: DataBase Aniquilator ;) ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
Re: [HACKERS] How to find a number of connections
Brusser, Michael wrote: Is there a way to find a number of current connections on Postgres 7.3.x ? select count(*) from pg_stat_activity -- Bryan White, ArcaMax Publishing Inc. I never look back, darling. it distracts from the now. - Edna Mode ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] MERGE vs REPLACE
Jaime, why? It seems that REPLACE only works if there is at least one row matching... Scenario:

session1: REPLACE ... 1
session2: REPLACE ... 1
session1: check to see that 1 exists -- no
session2: check to see that 1 exists -- no
session1: INSERT 1
session2: INSERT 1 -- ERROR

Get the picture? The only way to avoid a race condition is to be able to do predicate locking, that is, to lock the table against any data write matching that predicate. -- --Josh Josh Berkus Aglio Database Solutions San Francisco ---(end of broadcast)--- TIP 3: Have you checked our extensive FAQ? http://www.postgresql.org/docs/faq
Re: [HACKERS] MERGE vs REPLACE
On Fri, 2005-11-11 at 14:40 -0800, Josh Berkus wrote: Jaime, why? It seems that REPLACE only works if there is at least one row matching... Get the picture? The only way to avoid a race condition is to be able to do predicate locking, that is, to lock the table against any data write matching that predicate. So? That is what savepoints are for. You can even skip the SELECT FOR UPDATE if you don't mind dead tuples from the attempted insert.

SELECT ... FOR UPDATE;
IF not exists THEN
    SAVEPOINT;
    INSERT ...;
    IF UNIQUE VIOLATION THEN
        /* Someone else inserted between the SELECT and our INSERT */
        ROLLBACK TO SAVEPOINT;
        UPDATE;
    ELSE
        RELEASE SAVEPOINT;
    FI
ELSE
    UPDATE;
FI

-- ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
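Rod's outline above can be written concretely in PL/pgSQL (8.0 or later), where a BEGIN ... EXCEPTION block provides the implicit savepoint and rollback. This is just a sketch; the table t(k int PRIMARY KEY, v text) is hypothetical:

```sql
-- Sketch of the savepoint pattern as a PL/pgSQL function, assuming a
-- hypothetical table t(k int PRIMARY KEY, v text).
CREATE OR REPLACE FUNCTION upsert_t(p_k int, p_v text) RETURNS void AS '
BEGIN
    PERFORM 1 FROM t WHERE k = p_k FOR UPDATE;
    IF NOT FOUND THEN
        BEGIN
            INSERT INTO t (k, v) VALUES (p_k, p_v);
        EXCEPTION WHEN unique_violation THEN
            -- someone else inserted between our SELECT and our INSERT
            UPDATE t SET v = p_v WHERE k = p_k;
        END;
    ELSE
        UPDATE t SET v = p_v WHERE k = p_k;
    END IF;
END;
' LANGUAGE plpgsql;
```

As written this still has the narrow window discussed later in the thread: the row can be deleted between the unique_violation and the fallback UPDATE.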
Re: [HACKERS] MERGE vs REPLACE
Josh Berkus josh@agliodbs.com writes: But even REPLACE requires predicate locking. There's no real way to get around it. The point though is that REPLACE is restricted to a type of predicate narrow enough to be enforced through a unique-index mechanism, and so it's implementable without solving the general case of predicate locking. Predicate locking for narrow cases isn't very hard; it's the general case of arbitrary predicates that's hard. regards, tom lane ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [HACKERS] MERGE vs REPLACE
On Fri, 11 Nov 2005, Josh Berkus wrote: Jaime, why? seems that REPLACE only work if there are at least one row matching... Scenario: session1: REPLACE 1 session2: REPLACE . 1 session1: check to see that 1 exists no session2: check to see that 1 exists no session1: INSERT 1 session2: INSERT 1 ERROR Get the picture? The only way to avoid a race condition is to be able to do predicate locking, that is to lock the table against any data write matching that predicate. When it comes to predicate locking, I think we should defer to Peter's comment at Open DB Con: http://www.treehou.se/~swm/peter_merge.jpg Gavin ---(end of broadcast)--- TIP 6: explain analyze is your friend
Re: [HACKERS] MERGE vs REPLACE
On Fri, Nov 11, 2005 at 06:00:32PM -0500, Rod Taylor wrote: So? That is what savepoints are for. You can even skip the SELECT FOR UPDATE if you don't mind dead tuples from the attempted insert.

SELECT ... FOR UPDATE;
IF not exists THEN
    SAVEPOINT;
    INSERT ...;
    IF UNIQUE VIOLATION THEN
        /* Someone else inserted between the SELECT and our INSERT */
        ROLLBACK TO SAVEPOINT;
        UPDATE;
    ELSE
        RELEASE SAVEPOINT;
    FI
ELSE
    UPDATE;
FI

Isn't there still a race between INSERT and UPDATE? Low probability, for sure, as the row would have had to not exist, then exist, then not exist again, but it is still possible. I'd like a REPLACE that could be safe, or at least cause a COMMIT to fail, for this reason. Cheers, mark -- [EMAIL PROTECTED] / [EMAIL PROTECTED] / [EMAIL PROTECTED] Neighbourhood Coder, Ottawa, Ontario, Canada http://mark.mielke.cc/ ---(end of broadcast)--- TIP 2: Don't 'kill -9' the postmaster
[HACKERS] (View and SQL) VS plpgsql
This has been posted to the performance list, but I didn't get any answer I could work with, so if anyone has some time for explanations or examples... Abstract: The function that can be found at the end of the e-mail does two things. First it fills a record set with the needed columns from a table, plus two empty result columns, a min and a max. Those two columns are then filled by a second query on the same table that does a MIN and a MAX using the index idx_utctime. The function loops over the first recordset and returns a setof record that is cast by the caller of the function. The goal of this is to enable the application that receives the result set to minimise its work, by not having to group two matching rowsets internally. We used to handle two resultsets, but I am looking at improving performance, and at first glance this seems to speed up the process. Questions: 1. How could this be done in a single combination of SQL and a view? 2. In a case like this, does plpgsql really give significant overhead? 3. 
Performance difference? [I would need a working pure-SQL version to compare planner and EXPLAIN results.] STUFF:

--TABLE + INDEXES
CREATE TABLE archive_event (
    inst    int4 NOT NULL,
    cid     int8 NOT NULL,
    src     int8 NOT NULL,
    dst     int8 NOT NULL,
    bid     int8 NOT NULL,
    tid     int4 NOT NULL,
    utctime int4 NOT NULL,
    CONSTRAINT ids_archives_event_pkey PRIMARY KEY (inst, cid),
    CONSTRAINT ids_archives_event_cid_index UNIQUE (cid)
);

--index
CREATE INDEX idx_archive_utctime ON archive_event USING btree (utctime);
CREATE INDEX idx_archive_src ON archive_event USING btree (src);
CREATE INDEX idx_archive_bid_tid ON archive_event USING btree (tid, bid);

--FUNCTION
CREATE OR REPLACE FUNCTION console_get_source_rule_level_1() RETURNS SETOF RECORD AS '
DECLARE
    one_record record;
    r_record   record;
BEGIN
    FOR r_record IN
        SELECT count(cid) AS hits, src, bid, tid,
               NULL::int8 AS min_time, NULL::int8 AS max_time
        FROM archive_event
        WHERE inst = \'3\'
          AND (utctime BETWEEN \'111492\' AND \'1131512399\')
        GROUP BY src, bid, tid
    LOOP
        SELECT INTO one_record
               MIN(utctime) AS timestart, MAX(utctime) AS timestop
        FROM archive_event
        WHERE src = r_record.src
          AND bid = r_record.bid
          AND tid = r_record.tid
          AND inst = \'3\'
          AND (utctime BETWEEN \'111492\' AND \'1131512399\');
        r_record.min_time := one_record.timestart;
        r_record.max_time := one_record.timestop;
        RETURN NEXT r_record;
    END LOOP;
    RETURN;
END;
' LANGUAGE 'plpgsql' VOLATILE;

GRANT EXECUTE ON FUNCTION console_get_source_rule_level_1() TO console WITH GRANT OPTION;

--FUNCTION CALLER
SELECT * FROM console_get_source_rule_level_1() AS
    (hits int8, src int8, bid int8, tid int4, min_time int8, max_time int8);

-Eric Lauzon ---(end of broadcast)--- TIP 5: don't forget to increase your free space map settings
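For question 1, a pure-SQL candidate (a sketch, untested against this data): since the function's second query computes MIN/MAX over exactly the same WHERE clause and the same (src, bid, tid) groups as the outer query, the extra per-group lookup is redundant and the aggregates can simply be folded into the GROUP BY:

```sql
-- Sketch: one-pass equivalent of console_get_source_rule_level_1()
SELECT count(cid)  AS hits,
       src, bid, tid,
       min(utctime) AS min_time,
       max(utctime) AS max_time
FROM archive_event
WHERE inst = '3'
  AND utctime BETWEEN '111492' AND '1131512399'
GROUP BY src, bid, tid;
```

If this produces the same rows, it also answers question 2 indirectly: the overhead was the N extra per-group queries, not plpgsql as such.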
Re: [HACKERS] (View and SQL) VS plpgsql
try this, i had no data to check the plan and didn't have time to invent any. Jim

CREATE INDEX idx_archive_jb_idx ON archive_event (inst, utctime, src, bid, tid);

EXPLAIN
SELECT count(cid) AS hits, src, bid, tid,
       (SELECT MIN(utctime) FROM archive_event
        WHERE src = ae.src AND bid = ae.bid AND tid = ae.tid
          AND inst = '3'
          AND utctime BETWEEN '111492' AND '1131512399') AS min_time,
       (SELECT MAX(utctime) FROM archive_event
        WHERE src = ae.src AND bid = ae.bid AND tid = ae.tid
          AND inst = '3'
          AND utctime BETWEEN '111492' AND '1131512399') AS max_time
FROM archive_event ae
WHERE inst = '3'
  AND (utctime BETWEEN '111492' AND '1131512399')
GROUP BY src, bid, tid;

-- Original Message --- From: Eric Lauzon [EMAIL PROTECTED] To: pgsql-hackers@postgresql.org Sent: Fri, 11 Nov 2005 19:12:00 -0500 Subject: [HACKERS] (View and SQL) VS plpgsql --- End of Original Message --- ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
Re: [HACKERS] MERGE vs REPLACE
On Fri, 2005-11-11 at 18:36 -0500, [EMAIL PROTECTED] wrote: On Fri, Nov 11, 2005 at 06:00:32PM -0500, Rod Taylor wrote: So? That is what savepoints are for. You can even skip the SELECT FOR UPDATE if you don't mind dead tuples from the attempted insert.

SELECT ... FOR UPDATE;
IF not exists THEN
    SAVEPOINT;
    INSERT ...;
    IF UNIQUE VIOLATION THEN
        /* Someone else inserted between the SELECT and our INSERT */
        ROLLBACK TO SAVEPOINT;
        UPDATE;
    ELSE
        RELEASE SAVEPOINT;
    FI
ELSE
    UPDATE;
FI

Isn't there still a race between INSERT and UPDATE? I suppose there is, although I hadn't noticed before. I've never run into it, and I always check to ensure the expected number of tuples were touched by the update or delete. Within the PostgreSQL backend you might get away with having your insert hold a lock on the index page and follow it up with a FOR UPDATE lock on the offending tuple, thus ensuring that your update will succeed. If you hack the index mechanisms for this support you don't need the SAVEPOINT either -- just don't throw an error when you run across the existing entry. For client-side code, one possibility is to repeat until successful.

WHILE
    SELECT ... FOR UPDATE;
    IF NOT EXISTS THEN
        SAVEPOINT;
        INSERT;
        IF UNIQUE VIOLATION THEN
            ROLLBACK TO SAVEPOINT;
        ELSE
            RELEASE SAVEPOINT;
            EXIT;
        FI
    ELSE
        UPDATE;
        EXIT;
    FI
    -- Check for infinite loop
END

-- ---(end of broadcast)--- TIP 1: if posting/reading through Usenet, please send an appropriate subscribe-nomail command to [EMAIL PROTECTED] so that your message can get through to the mailing list cleanly
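Rod's retry loop, sketched as a PL/pgSQL function against a hypothetical table t(k int PRIMARY KEY, v text). The LOOP covers the window where a concurrent DELETE removes the row between the failed INSERT and the fallback UPDATE:

```sql
-- Sketch: retry-until-done upsert for a hypothetical table t(k int PRIMARY KEY, v text).
CREATE OR REPLACE FUNCTION replace_t(p_k int, p_v text) RETURNS void AS '
BEGIN
    LOOP
        UPDATE t SET v = p_v WHERE k = p_k;
        IF FOUND THEN
            RETURN;                       -- row existed; done
        END IF;
        BEGIN
            INSERT INTO t (k, v) VALUES (p_k, p_v);
            RETURN;                       -- row did not exist; done
        EXCEPTION WHEN unique_violation THEN
            -- a concurrent insert won the race; loop and try the UPDATE again
        END;
    END LOOP;
END;
' LANGUAGE plpgsql;
```

A production version would likely add a retry cap, matching the "check for infinite loop" note in the pseudocode above.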
Re: [HACKERS] Getting table name/tuple from OID
Thanks for the quick reply. I made a mistake last time in asking the question: actually, I would like to know how to get the OID from a table name or operator name. For example, is 512 while = is 96, and some table has the magic relid of 20078, say. How could I find out the OID given the (fully qualified) name of a table or an operator? I need this since I want to build a query plan externally, and I need to know the OIDs in order to manually create the query plan. Thanks. I am interested in the answer as well -- how to get a table name (or an operator name) from an OID. The parser must know how to do this, but the segment of code is hard to locate.

CREATE OR REPLACE FUNCTION gettablename(__oid oid) RETURNS varchar AS
$BODY$
    SELECT (_sna.nspname || '.' || _tna.relname) AS tableQualifiedName
    FROM pg_class _tna, pg_namespace _sna
    WHERE _tna.oid = $1
      AND _sna.oid = _tna.relnamespace;
$BODY$
LANGUAGE 'sql' IMMUTABLE;

ALTER FUNCTION gettablename(__oid oid) OWNER TO root;
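For the name-to-OID direction asked about above, the reg* pseudotypes (available since 7.3) already do the schema-path-aware lookup, so no hand-written catalog join is needed. A sketch -- mytable and the OID 20078 are hypothetical and assume such objects exist:

```sql
SELECT 'pg_class'::regclass::oid;          -- table name to OID
SELECT 'public.mytable'::regclass::oid;    -- schema-qualified form (mytable is hypothetical)
SELECT '=(int4,int4)'::regoperator::oid;   -- operator name plus argument types to OID
SELECT 20078::regclass;                    -- and back again: OID to table name, if that OID exists
```

The reverse cast in the last line is effectively what the gettablename() function above reimplements by hand.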