Re: [HACKERS] ALTER TYPE DROP + composite-typed col vs. pg_upgrade

2011-05-21 Thread Heikki Linnakangas

On 28.04.2011 15:41, Noah Misch wrote:

Now that we have ALTER TYPE DROP ATTRIBUTE, pg_dump --binary-upgrade must, for
the sake of composite-typed columns, preserve the dropped-column configuration
of stand-alone composite types.  Here's a test case:

create type t as (x int, y int);
create table has_a (tcol t);
insert into has_a values ('(1,2)');
table has_a; -- (1,2)
alter type t drop attribute y cascade, add attribute z int cascade;
table has_a; -- (1,)
table has_a; -- after pg_upgrade: (1,2)

Apparently I did not fully test the last version after merging it with upstream
changes, because it did not work.  Sorry for that.  This version updates the
queries correctly and adds a test case.  A regular make check passes the new
test case with or without the rest of this patch.  However, a comparison of
regression database dumps before and after a pg_upgrade will reveal the problem
given this new test case.  See, for example, Peter's recent patch to have the
contrib/pg_upgrade make check do this.


Ok, committed.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ts_rank

2011-05-21 Thread Mark
There's some potentially useful information here:
http://www.postgresql.org/docs/9.0/interactive/textsearch-controls.html#TEXTSEARCH-RANKING

Thanks for the reply. I was reading the PostgreSQL documentation, but it
does not give the name of the method used. It only says that ts_rank uses a
standard ranking function, and it is difficult to say which function is
"the standard" one.
Somewhere I found that it may be based on the vector space model, and that
seems to be true, because the code in tsrank.c counts the frequency of
words, but I am not sure of that :-(
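For what it's worth, the frequency-weighted behaviour is easy to observe directly from SQL; this is only an illustrative sketch (the sample strings are made up, not from tsrank.c):

```sql
-- A document that repeats the query term should rank higher than one that
-- mentions it only once, consistent with a frequency-weighted (vector
-- space style) ranking as implemented in tsrank.c.
SELECT ts_rank(to_tsvector('english', 'cat cat cat dog'),
               to_tsquery('english', 'cat')) AS many_hits,
       ts_rank(to_tsvector('english', 'cat dog bird fish'),
               to_tsquery('english', 'cat')) AS one_hit;
```

Comparing the two returned values for strings like these is a quick way to probe what the ranking function actually weighs.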






[HACKERS] minor patch submission: CREATE CAST ... AS EXPLICIT

2011-05-21 Thread Fabien COELHO


Hello,

Please find attached a minor stylistic patch. It compiles and the updated 
test cases work for me.


Description:

Add AS EXPLICIT to CREATE CAST

This gives a name to the default case of CREATE CAST, which creates a 
cast that must be explicitly invoked.


From a language-definition perspective, it is helpful to have a name for 
every case, instead of an implicit fallback without any word to describe 
it. See for instance CREATE USER CREATEDB/NOCREATEDB or CREATE RULE ... 
DO ALSO/INSTEAD for similar occurrences of naming the default case.
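As an illustrative sketch (the text2int function and the cast below are hypothetical, not part of the patch), all three cast contexts would then have names:

```sql
-- Hypothetical example: with the patch, the default (explicit-only)
-- context can be spelled out, mirroring AS ASSIGNMENT and AS IMPLICIT.
CREATE FUNCTION text2int(text) RETURNS integer
    AS 'SELECT $1::integer' LANGUAGE sql STRICT;

CREATE CAST (text AS integer) WITH FUNCTION text2int(text) AS EXPLICIT;
-- equivalent to the current spelling with no AS clause:
-- CREATE CAST (text AS integer) WITH FUNCTION text2int(text);
```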


--
Fabien.

diff --git a/doc/src/sgml/ref/create_cast.sgml b/doc/src/sgml/ref/create_cast.sgml
index c0039ed..35893b7 100644
--- a/doc/src/sgml/ref/create_cast.sgml
+++ b/doc/src/sgml/ref/create_cast.sgml
@@ -20,15 +20,15 @@
 <synopsis>
 CREATE CAST (<replaceable>source_type</replaceable> AS <replaceable>target_type</replaceable>)
     WITH FUNCTION <replaceable>function_name</replaceable> (<replaceable>argument_type</replaceable> [, ...])
-    [ AS ASSIGNMENT | AS IMPLICIT ]
+    [ AS ASSIGNMENT | AS EXPLICIT | AS IMPLICIT ]
 
 CREATE CAST (<replaceable>source_type</replaceable> AS <replaceable>target_type</replaceable>)
     WITHOUT FUNCTION
-    [ AS ASSIGNMENT | AS IMPLICIT ]
+    [ AS ASSIGNMENT | AS EXPLICIT | AS IMPLICIT ]
 
 CREATE CAST (<replaceable>source_type</replaceable> AS <replaceable>target_type</replaceable>)
     WITH INOUT
-    [ AS ASSIGNMENT | AS IMPLICIT ]
+    [ AS ASSIGNMENT | AS EXPLICIT | AS IMPLICIT ]
 </synopsis>
  </refsynopsisdiv>
 
@@ -74,7 +74,8 @@ SELECT CAST(42 AS float8);
   </para>
 
   <para>
-   By default, a cast can be invoked only by an explicit cast request,
+   By default, or if the cast is declared <literal>AS EXPLICIT</>,
+   a cast can be invoked only by an explicit cast request,
    that is an explicit <literal>CAST(<replaceable>x</> AS
    <replaceable>typename</>)</literal> or
    <replaceable>x</><literal>::</><replaceable>typename</>
@@ -238,6 +239,21 @@ SELECT CAST ( 2 AS numeric ) + 4.0;
     </varlistentry>
 
     <varlistentry>
+     <term><literal>AS EXPLICIT</literal></term>
+
+     <listitem>
+      <para>
+       Indicates that the cast can be invoked only with an explicit
+       cast request, that is an explicit <literal>CAST(<replaceable>x</> AS
+       <replaceable>typename</>)</literal> or
+       <replaceable>x</><literal>::</><replaceable>typename</>
+       construct.
+       This is the default.
+      </para>
+     </listitem>
+    </varlistentry>
+
+    <varlistentry>
      <term><literal>AS IMPLICIT</literal></term>
 
      <listitem>
@@ -405,8 +421,8 @@ CREATE CAST (bigint AS int4) WITH FUNCTION int4(bigint) AS ASSIGNMENT;
    <acronym>SQL</acronym> standard,
    except that SQL does not make provisions for binary-coercible
    types or extra arguments to implementation functions.
-   <literal>AS IMPLICIT</> is a <productname>PostgreSQL</productname>
-   extension, too.
+   <literal>AS IMPLICIT</> and <literal>AS EXPLICIT</> are
+   a <productname>PostgreSQL</productname> extension, too.
   </para>
  </refsect1>
 
diff --git a/src/backend/parser/gram.y b/src/backend/parser/gram.y
index 1d39674..de339db 100644
--- a/src/backend/parser/gram.y
+++ b/src/backend/parser/gram.y
@@ -499,7 +499,7 @@ static void SplitColQualList(List *qualList,
 	DICTIONARY DISABLE_P DISCARD DISTINCT DO DOCUMENT_P DOMAIN_P DOUBLE_P DROP
 
 	EACH ELSE ENABLE_P ENCODING ENCRYPTED END_P ENUM_P ESCAPE EXCEPT
-	EXCLUDE EXCLUDING EXCLUSIVE EXECUTE EXISTS EXPLAIN
+	EXCLUDE EXCLUDING EXCLUSIVE EXECUTE EXISTS EXPLAIN EXPLICIT
 	EXTENSION EXTERNAL EXTRACT
 
 	FALSE_P FAMILY FETCH FIRST_P FLOAT_P FOLLOWING FOR FORCE FOREIGN FORWARD
@@ -6313,6 +6313,7 @@ CreateCastStmt: CREATE CAST '(' Typename AS Typename ')'
 
 cast_context:  AS IMPLICIT_P	{ $$ = COERCION_IMPLICIT; }
 		| AS ASSIGNMENT			{ $$ = COERCION_ASSIGNMENT; }
+		| AS EXPLICIT			{ $$ = COERCION_EXPLICIT; }
 		| /*EMPTY*/{ $$ = COERCION_EXPLICIT; }
 		;
 
diff --git a/src/include/parser/kwlist.h b/src/include/parser/kwlist.h
index 12c2faf..f5b2f16 100644
--- a/src/include/parser/kwlist.h
+++ b/src/include/parser/kwlist.h
@@ -148,6 +148,7 @@ PG_KEYWORD(exclusive, EXCLUSIVE, UNRESERVED_KEYWORD)
 PG_KEYWORD(execute, EXECUTE, UNRESERVED_KEYWORD)
 PG_KEYWORD(exists, EXISTS, COL_NAME_KEYWORD)
 PG_KEYWORD(explain, EXPLAIN, UNRESERVED_KEYWORD)
+PG_KEYWORD(explicit, EXPLICIT, UNRESERVED_KEYWORD)
 PG_KEYWORD(extension, EXTENSION, UNRESERVED_KEYWORD)
 PG_KEYWORD(external, EXTERNAL, UNRESERVED_KEYWORD)
 PG_KEYWORD(extract, EXTRACT, COL_NAME_KEYWORD)
diff --git a/src/test/regress/expected/create_cast.out b/src/test/regress/expected/create_cast.out
index 56cd86e..a8858fa 100644
--- a/src/test/regress/expected/create_cast.out
+++ b/src/test/regress/expected/create_cast.out
@@ -27,8 +27,8 @@ ERROR:  function casttestfunc(text) does not exist
 LINE 1: SELECT casttestfunc('foo'::text);
               ^
 HINT:  No function matches the given name and argument types. You might need to add explicit type casts.
--- Try binary coercion cast
-CREATE CAST (text AS casttesttype) WITHOUT FUNCTION;
+-- Try binary coercion cast and 

Re: [HACKERS] minor patch submission: CREATE CAST ... AS EXPLICIT

2011-05-21 Thread Tom Lane
Fabien COELHO <coe...@cri.ensmp.fr> writes:
 Description:
 Add AS EXPLICIT to CREATE CAST
 This gives a name to the default case of CREATE CAST, which creates a 
 cast which must be explicitly invoked.

I'm not sure this is a good idea.  The CREATE CAST syntax is in the SQL
standard, and this isn't it.  Now I realize that we've extended that
statement already to cover some functionality that's not in the
standard, but that doesn't mean we should create unnecessarily
nonstandard syntax for cases that are in the standard.  If a commercial
vendor did that, wouldn't you castigate them for trying to create vendor
lock-in?

 From a language definition perspective, it is helpful to have a name for 
 every case instead of an implicit fallback, without any word to describe 
 it. See for instance CREATE USER CREATEDB/NOCREATEDB or CREATE RULE ... 
 DO ALSO/INSTEAD for similar occurences of naming default cases.

If we were working in a green field, I couldn't fault this logic ... but
we are not.

regards, tom lane



Re: [HACKERS] Review: psql include file using relative path

2011-05-21 Thread Josh Kupershmidt
On Fri, May 20, 2011 at 2:35 PM, Gurjeet Singh <singh.gurj...@gmail.com> wrote:
 On Sat, May 14, 2011 at 5:03 PM, Josh Kupershmidt <schmi...@gmail.com>
 wrote:
 Thanks a lot for the review. My responses are inline below.

Thanks for the fixes. Your updated patch is sent as a
patch-upon-a-patch, it'll probably be easier for everyone
(particularly the final committer) if you send inclusive patches
instead.

 == Documentation ==
 The patch includes the standard psql help output description for the
 new \ir command. I think ./doc/src/sgml/ref/psql-ref.sgml needs to be
 patched as well, though.

 Done.

This is a decent description from a technical standpoint:

<para>
When used within a script, if the <replaceable
class="parameter">filename</replaceable>
uses relative path notation, then the file will be looked up
relative to currently
executing file's location.

If the <replaceable class="parameter">filename</replaceable>
uses an absolute path
notation, or if this command is being used in interactive
mode, then it behaves the
same as the <literal>\i</> command.
</para>

but I think these paragraphs could be made a little more clear, by
making a suggestion about why someone would be interested in \ir. How
about this:

<para>
The <literal>\ir</> command is similar to <literal>\i</>, but
is useful for files which
load in other files.

When used within a file loaded via <literal>\i</literal>,
<literal>\ir</literal>, or
<option>-f</option>, if the <replaceable
class="parameter">filename</replaceable>
is specified with a relative path, then the location of the
file will be determined
relative to the currently executing file's location.
</para>

<para>
If <replaceable class="parameter">filename</replaceable> is given as an
absolute path, or if this command is used in interactive mode, then
<literal>\ir</> behaves the same as the <literal>\i</> command.
</para>

The sentence "When used within a file ..." is now a little
clunky/verbose, but I was trying to avoid potential confusion from
someone trying \ir via 'cat ../filename.sql | psql', which would be
"used within a script", but \ir wouldn't know that.
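To make the intended behaviour concrete, here is a hypothetical layout (the file names are invented) illustrating what \ir is supposed to resolve:

```shell
# Hypothetical layout: main.sql includes a file relative to itself.
mkdir -p demo/sub
printf '\\ir sub/child.sql\n' > demo/main.sql
printf 'SELECT 1;\n' > demo/sub/child.sql
# With the patch, running "psql -f demo/main.sql" from any directory
# resolves sub/child.sql relative to demo/main.sql, not to $PWD;
# plain \i would resolve it relative to the current directory.
cat demo/main.sql
```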


 == Code ==
 1.) I have some doubts about whether the memory allocated here:
char *relative_file = pg_malloc(dir_len + 1 + file_len + 1);
 is always free()'d, particularly if this condition is hit:

if (!fd)
{
        psql_error("%s: %s\n", filename, strerror(errno));
return EXIT_FAILURE;
}

 Fixed.

Well, this fix:

if (!fd)
{
+   if (relative_path != NULL)
+   free(relative_path);
+
        psql_error("%s: %s\n", filename, strerror(errno));

uses the wrong variable name (relative_path instead of relative_file),
and the subsequent psql_error() call will then reference freed memory,
since relative_file was assigned to filename.

But even after fixing this snippet to get it to compile, like so:

if (!fd)
{
        psql_error("%s: %s\n", filename, strerror(errno));
if (relative_file != NULL)
free(relative_file);

return EXIT_FAILURE;
}

I was still seeing memory leaks in valgrind growing with the number of
\ir calls between files (see valgrind_bad_report.txt attached). I
think that relative_file needs to be freed even in the non-error case,
like so:

error:
if (fd != stdin)
fclose(fd);

if (relative_file != NULL)
free(relative_file);

pset.inputfile = oldfilename;
return result;

At least, that fix seemed to get rid of the ballooning valgrind
reports for me. I then see a constant-sized  500 byte leak complaint
from valgrind, the same as with unpatched psql.

 4.) I think the changes to process_file() merit another comment or
 two, e.g. describing what relative_file is supposed to be.

 Added.

Some cleanup for this comment:

+   /*
+* If the currently processing file uses \ir command, and the 
parameter
+* to the command is a relative file path, then we resolve this 
path
+* relative to currently processing file.

suggested tweak:

If the currently processing file uses the \ir command, and the filename
parameter is given as a relative path, then we resolve this path relative
to the currently processing file (pset.inputfile).

+*
+* If the \ir command was executed in interactive mode (i.e. 
not in a
+* script) the we treat it the same as \i command.
+*/

suggested tweak:

If the \ir command was executed in interactive mode (i.e. not in a
script, and pset.inputfile will be NULL) then we treat the filename
the same as the \i command does.

[snip]
The rest looks good.

Josh



Re: [HACKERS] eviscerating the parser

2011-05-21 Thread Alvaro Herrera
Excerpts from Robert Haas's message of Fri May 20 18:41:37 -0400 2011:

 This means that, in a situation where we aren't using DML, and are
 running very simple queries without prepared statements, the parser
 bloat resulting from supporting all the other kinds of queries which
 aren't being exercised by the tests results in a slowdown of
 approximately 0.7%.

So the point here is, we do not need to worry about adding new keywords,
because the performance impact is really minimal.  Right?

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] Pull up aggregate subquery

2011-05-21 Thread Hitoshi Harada
2011/5/5 Hitoshi Harada <umi.tan...@gmail.com>:
 https://commitfest.postgresql.org/action/patch_view?id=548

 I'll work further if I find time.

After more thought, pulling up an aggregate subquery in the narrow
conditional cases is quite a hard path, especially when the join involves
more than two relations. It will be hard to check that the pull-up is
safe for relations other than the target relation.

It was a big shame that I missed Tom Lane's session at PGCon, but on
finding "Parameterized Scan" in his slides, it occurred to me that it
might help with my problem, too. Before hitting on the pull-up idea, I had
considered whether it would be possible to push an outer Var of the join
down into the Agg's HAVING clause, which would be transferred to the
underlying SeqScan's filter, resulting in something like:

  NestLoop
   -> SeqScan M (filter: M.val = '1')
   -> GroupAggregate
        -> SeqScan L (filter: L.m_id = M.id)

However, we currently have no mechanism to push a Var down as a qual into
a non-NestLoop. Even if we did, we should avoid executing the Agg N times
in the loop: we want to scan the Agg once, with Param $1 = M.id taking
multiple values. Since I didn't attend his session I'm afraid I don't
understand Parameterized Scan correctly, but once we have such a
mechanism, it might also help the example introduced in Robert Haas's
blog[1] (originally shown by Andrew Gierth[2]), and maybe LATERAL.

Do I understand correctly? If so, could someone explain more detail of
how to get Parameterized Scan in the planner?

Regards,

[1]: http://rhaas.blogspot.com/2010/04/finding-our-way-to-lateral.html
[2]: http://archives.postgresql.org/pgsql-hackers/2009-09/msg00525.php

-- 
Hitoshi Harada



Re: [HACKERS] Memory leak in FDW

2011-05-21 Thread Heikki Linnakangas

On 27.04.2011 04:19, Heikki Linnakangas wrote:

On 26.04.2011 21:30, Tom Lane wrote:

Heikki Linnakangas <heikki.linnakan...@enterprisedb.com> writes:

The trivial fix is to reset the per-tuple memory context between
iterations.


Have you tested this with SRFs?

ForeignNext seems like quite the wrong place for resetting
exprcontext in any case ...


ExecScan would be more appropriate I guess (attached).

This means the context will be reset between each tuple even for nodes
like seqscan that don't use the per-tuple context for anything.
AllocSetReset exits quickly if there's nothing to do, but it takes a
couple of function calls to get there. I wouldn't normally worry about
that, but this is a very hot path for simple queries.

I tested this with:

CREATE TABLE foo AS SELECT generate_series(1,1000);

I ran SELECT COUNT(*) FROM foo many times with \timing on, and took
the smallest time with and without the patch. I got:

1230 ms with the patch
1186 ms without the patch

This was quite repeatable, it's consistently faster without the patch.
That's a bigger difference than I expected. Any random code change can
swing results on micro-benchmarks like this by 1-2%, but that's over 3%.
Do we care?

I might be getting a bit carried away with this, but we could buy that
back by moving the isReset flag from AllocSetContext to
MemoryContextData. That way MemoryContextReset can exit more quickly if
there's nothing to do, patch attached.


I hear no objections, so committed.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



[HACKERS] proposal: enhanced get diagnostics statement

2011-05-21 Thread Pavel Stehule
Hello

This proposal is related to exception processing. Inside an exception
handler we can get some basic info about the exception - the message text
and the message code. There are other fields, but they are not available
in PL/pgSQL now. Cheap access to the fields of the ErrorData structure can
be implemented inside the GET DIAGNOSTICS statement - this statement was
created for this purpose. I propose three new identifiers that can be
used there: ERROR_DETAIL, ERROR_HINT and ERROR_CONTEXT. Usage is simple:

CREATE OR REPLACE FUNCTION foo()
RETURNS void AS $$
DECLARE
  _detail text;
  _hint text;
  _context text;
BEGIN
  RAISE EXCEPTION 'some message'
USING DETAIL = 'some message specific description',
  HINT = 'some hint related to message';

EXCEPTION WHEN OTHERS THEN
  GET DIAGNOSTICS _detail = ERROR_DETAIL,
  _hint = ERROR_HINT,
  _context = ERROR_CONTEXT;

  RAISE WARNING 'caught message: %', SQLERRM
    USING DETAIL = e'\ncaught detail: ' || _detail ||
                   e'\ncaught hint: ' || _hint ||
                   e'\ncaught context: ' || _context;

END;
$$ LANGUAGE plpgsql;
SELECT foo();


An implementation of ERROR_DETAIL and ERROR_HINT is simple and has zero
impact on performance.

An implementation of ERROR_CONTEXT is not without impact on performance,
because the context has to be collected when the exception is caught. One
solution is to remove ERROR_CONTEXT from the proposal. A second solution
is an enhanced syntax for the exception trap, meaning: collect CONTEXT
only when the handler asks for it, like

BEGIN
  EXCEPTION (ERROR_CONTEXT=true) WHEN OTHERS THEN
...
END

Getting the context can be costly, but it is very important
information that can significantly help explain an exception.

ideas, notes?

Regards

Pavel Stehule
*** ./src/pl/plpgsql/src/gram.y.orig	2011-05-18 19:41:56.755678378 +0200
--- ./src/pl/plpgsql/src/gram.y	2011-05-21 21:03:28.799168296 +0200
***************
*** 250,255 ****
--- 250,256 ----
  %token <keyword>	K_CLOSE
  %token <keyword>	K_COLLATE
  %token <keyword>	K_CONSTANT
+ %token <keyword>	K_CONTEXT
  %token <keyword>	K_CONTINUE
  %token <keyword>	K_CURSOR
  %token <keyword>	K_DEBUG
***************
*** 263,268 ****
--- 264,272 ----
  %token <keyword>	K_END
  %token <keyword>	K_ERRCODE
  %token <keyword>	K_ERROR
+ %token <keyword>	K_ERROR_CONTEXT
+ %token <keyword>	K_ERROR_DETAIL
+ %token <keyword>	K_ERROR_HINT
  %token <keyword>	K_EXCEPTION
  %token <keyword>	K_EXECUTE
  %token <keyword>	K_EXIT
***************
*** 877,882 ****
--- 881,895 ----
  		else if (tok_is_keyword(tok, &yylval,
  								K_RESULT_OID, "result_oid"))
  			$$ = PLPGSQL_GETDIAG_RESULT_OID;
+ 		else if (tok_is_keyword(tok, &yylval,
+ 								K_ERROR_DETAIL, "error_detail"))
+ 			$$ = PLPGSQL_GETDIAG_ERROR_DETAIL;
+ 		else if (tok_is_keyword(tok, &yylval,
+ 								K_ERROR_HINT, "error_hint"))
+ 			$$ = PLPGSQL_GETDIAG_ERROR_HINT;
+ 		else if (tok_is_keyword(tok, &yylval,
+ 								K_ERROR_CONTEXT, "error_context"))
+ 			$$ = PLPGSQL_GETDIAG_ERROR_CONTEXT;
  		else
  			yyerror("unrecognized GET DIAGNOSTICS item");
  	}
***
*** 2141,2146 
--- 2154,2162 
  | K_DUMP
  | K_ERRCODE
  | K_ERROR
+ | K_ERROR_CONTEXT
+ | K_ERROR_DETAIL
+ | K_ERROR_HINT
  | K_FIRST
  | K_FORWARD
  | K_HINT
*** ./src/pl/plpgsql/src/pl_exec.c.orig	2011-05-18 19:42:15.458152167 +0200
--- ./src/pl/plpgsql/src/pl_exec.c	2011-05-21 21:49:02.593299349 +0200
***************
*** 1081,1089 ****
--- 1081,1097 ----
  		{
  			ErrorData  *edata;
  			ListCell   *e;
+ 			ErrorContextCallback *econtext;
+ 
  
  			estate->err_text = gettext_noop("during exception cleanup");
  
+ 			/* Collect the context info */
+ 			for (econtext = error_context_stack;
+ 				 econtext != NULL;
+ 				 econtext = econtext->previous)
+ 				(*econtext->callback) (econtext->arg);
+ 
  			/* Save error info */
  			MemoryContextSwitchTo(oldcontext);
  			edata = CopyErrorData();
***************
*** 1449,1454 ****
--- 1457,1504 ----
  								  ObjectIdGetDatum(estate->eval_lastoid),
  								  OIDOID, &isnull);
  				break;
+ 			case PLPGSQL_GETDIAG_ERROR_DETAIL:
+ 			case PLPGSQL_GETDIAG_ERROR_HINT:
+ 			case PLPGSQL_GETDIAG_ERROR_CONTEXT:
+ 				{
+ 					char *strval = NULL;
+ 					Datum value;
+ 
+ 					/*
+ 					 * Now the fields based on processing of Error Data
+ 					 * are handled.
+ 					 */
+ 					if (estate->cur_error == NULL)
+ 						ereport(ERROR,
+ 								(errcode(ERRCODE_SYNTAX_ERROR),
+ 								 errmsg("access to exception's info outside exception handler")));
+ 					switch (diag_item->kind)
+ 					{
+ 						case PLPGSQL_GETDIAG_ERROR_DETAIL:
+ 							strval = estate->cur_error->detail;
+ 							break;
+ 						case PLPGSQL_GETDIAG_ERROR_HINT:
+ 							strval = estate->cur_error->hint;
+ 							break;
+ 						case PLPGSQL_GETDIAG_ERROR_CONTEXT:
+ 							strval = estate->cur_error->context;
+ 							break;
+ 	

[HACKERS] SSI predicate locking on heap -- tuple or row?

2011-05-21 Thread Kevin Grittner
As background, for most of SSI development I had a TODO comment in the
source which was basically around the question of whether a predicate
lock on a visible heap tuple should propagate to later versions of the
row as long as the original lock was material.  In early February Heikki
(quite reasonably) insisted that I resolve that and either add code to
do so or a comment about why it wasn't necessary.  After looking at it
for many hours without getting to a logical proof, I thought I had a
proof by example:
  
http://archives.postgresql.org/pgsql-hackers/2011-02/msg00325.php
  
We added code for this, but it has been problematic; probably over half
the SSI bugs which have been found since SSI went into the code tree
have been related to this, and Robert Haas has just found a couple more.
 In discussing how to address this with Jeff Davis (over beers at
PGCon), he provided good advice about how to properly fix these issues,
but posed some very good questions about whether it was really
necessary.  Rather than fix this aspect of the code, we might be able to
eliminate it, which would reduce the code size and some overhead.  Since
this code is more fragile than the rest of SSI, this is particularly
appealing -- my favorite way to deal with fragile code is to remove it.

I went back to the example which persuaded me and took another look.  On
review I see that this didn't prove the point because there was a
dangerous structure with T1 as a pivot which should have caused SSI to
break the cycle.  Since it didn't, either I got careless in my testing
methodology (which I will recheck when I get back to Wisconsin) or there
was a bug -- but either way, this example can't prove that the predicate
locks need to follow the row to new versions.
   
I haven't been able to come up with an example where it actually *is*
needed, but failure to come up with an example where it is needed
doesn't prove that it isn't needed.  Here's my attempt at a logical
proof of whether it is needed.  It would be great if anyone with a grasp
of SSI concepts could confirm its validity or shoot it down.  (Hopefully
Friday's presentation expanded the number of those who can do so.)
  
The issue can be rephrased to this: If transaction T1 reads a row (thus
acquiring a predicate lock on it) and a second transaction T2 updates
that row, must a third transaction T3 which updates the new version of
the row have a rw-conflict in from T1?
 
Now, the first thing which jumps out is the timing requirements -- T3
must overlap T1 or there is no conflict (and therefore no need to
replicate the predicate locks), but T3 must *not* overlap T2 or there
would be ww-conflict and one of them would roll back.  If T2 committed
before T1 acquired its snapshot, T1 would have gotten its predicate lock
on the second version of the row, so for a situation where this could
possibly matter, the following actual timings must exist (read starts
as meaning acquires a snapshot):
 
T1 and T2 start in either order.
T1 reads the row and T2 updates the row in either order.
T2 commits.
T3 starts.
T3 updates the row.
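The required timings can be written out as a concrete three-session schedule (the table, column and values below are hypothetical, just to make the ordering tangible):

```sql
-- Session T1                            -- Sessions T2, T3 (interleaved)
BEGIN ISOLATION LEVEL SERIALIZABLE;      -- T1 starts
                                         -- T2: BEGIN ISOLATION LEVEL SERIALIZABLE;
SELECT val FROM r WHERE id = 1;          -- T1 reads the row (predicate lock)
                                         -- T2: UPDATE r SET val = val + 1 WHERE id = 1;
                                         -- T2: COMMIT;
                                         -- T3: BEGIN ISOLATION LEVEL SERIALIZABLE;
                                         -- T3: UPDATE r SET val = val + 1 WHERE id = 1;
                                         --     (T3 updates the *new* version)
```

Note how T3 necessarily takes its snapshot after T2's commit, which is the constraint the argument below relies on.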
 
So far, using broken lines for rw-conflicts and solid for
wr-dependencies, we have this for apparent order of execution:
 
T1 - - -> T2 ---> T3
 
Not on the slides for the presentation, but briefly mentioned, is that
Alan Fekete and others have proven that serialization anomalies can only
occur if the transaction which appears on the rw-conflict *out* side of
the pivot (the transaction which appears to have executed third)
actually commits first.  T2 committed before T1, so there can't be a
cycle involving a rw-conflict *in* to T1 because that would complete the
dangerous structure -- unless it commits before T2 or is read only and
gets its snapshot before the commit of T2 (another special case which we
left out of the presentation in the interest of time).
 
Since T3 must get its snapshot after the commit of T2, if it develops a
rw-conflict with T1 there will be an SSI serialization failure without
needing the lock propagation.  So now the question breaks down to
whether we can have other transactions in the mix which, given the
above, cause T3 to appear to have executed before T1.
 
Since T3 cannot have committed before T1 (given what we know about the
actual timings required), that has to involve a rw-conflict somewhere in
the graph.  Since a wr-dependency is caused by the writer actually
committing before its work is read by another transaction, causing
apparent order of execution at that point to match actual serial order
of execution, such transactions would be benign in the transaction graph
-- they might exist in a problem graph but would not help to cause a
later-starting transaction to appear to have executed first.  We can
look at just rw-conflicts to cause the apparent order of execution to
loop back in time.
 
Since the time-arrow for rw-conflict points in the direction of the
write, we can ignore read only transactions.  So to complete the cycle
back to 

Re: [HACKERS] SSI predicate locking on heap -- tuple or row?

2011-05-21 Thread Pavan Deolasee
On Sat, May 21, 2011 at 4:09 PM, Kevin Grittner <kevin.gritt...@wicourts.gov> wrote:


 It would be great if anyone with a grasp
 of SSI concepts could confirm its validity or shoot it down.  (Hopefully
 Friday's presentation expanded the number of those who can do so.)


As a first step, it would be great if you could upload the slides to the
conference website. To expect that the attendees understood the
nitty-gritty of SSI just from listening to the presentation is inhuman :-)

Thanks,
Pavan

-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com


Re: [HACKERS] SSI predicate locking on heap -- tuple or row?

2011-05-21 Thread Kevin Grittner
Pavan Deolasee  wrote:
 
 As a first step, it would be great if you could upload the slides to
 the conference website. To expect that the attendees understood the
 nitty-gritty of SSI just from listening to the presentation is
 inhuman :-)
 
I'm sure Dan will be doing that soon; meanwhile, maybe this page
will help:
 
http://wiki.postgresql.org/wiki/Serializable
 
-Kevin





Re: [HACKERS] minor patch submission: CREATE CAST ... AS EXPLICIT

2011-05-21 Thread Fabien COELHO


Hello Tom,

Add AS EXPLICIT to CREATE CAST. This gives a name to the default 
case of CREATE CAST, which creates a cast that must be explicitly 
invoked.


I'm not sure this is a good idea.  The CREATE CAST syntax is in the SQL
standard, and this isn't it.  Now I realize that we've extended that
statement already to cover some functionality that's not in the
standard, but that doesn't mean we should create unnecessarily
nonstandard syntax for cases that are in the standard.


The standard provides only one case, so CAST is good enough a name.

Once you start creating alternatives with distinct semantics, it helps 
to give the initial case a name as well, so that the alternatives can be 
discussed as something other than "the remaining case" or "the one with 
no option", especially as there is something to discuss.


Note that the standard is still supported just the same, and the 
documentation already underlines that the AS * stuff is a pg extension, 
so nothing really changes. Maybe the documentation could be clearer about 
where the standard stops and where the extensions start, even now, without 
an AS EXPLICIT clause.


If a commercial vendor did that, wouldn't you castigate them for trying 
to create vendor lock-in?


I'm more concerned with explaining things to students, and it's good to 
have words and logic for that.


With respect to the standard, it seems good enough to me if (1) the 
standard is well supported and (2) the documentation clearly says which 
parts are extensions. If you really wanted to keep strictly to the 
standard, you would not offer any extensions.


Moreover, this stuff is really minor compared to RULEs or many other 
things specific to pg, and the lock-in is light: you just have to 
remove AS EXPLICIT to get away, no big deal.


Well, you decide anyway:-)

Have a nice day,

--
Fabien.



[HACKERS] about EDITOR_LINENUMBER_SWITCH

2011-05-21 Thread Peter Eisentraut
I noticed the 9.1 release notes claim that the new
EDITOR_LINENUMBER_SWITCH thing is an environment variable, whereas it is
actually a psql variable.

This is perhaps sort of a Freudian slip.  Since the editor itself is
configured using an environment variable, shouldn't any configuration
about the editor also be an environment variable, so people can
configure them together?

Another thought is that this whole thing could be done away with if we
just allowed people to pass through arbitrary options to the editor,
like

\edit file.sql +50 -a -b -c

For powerusers, this could have interesting possibilities.





Re: [HACKERS] proposal: enhanced get diagnostics statement

2011-05-21 Thread Alvaro Herrera
Excerpts from Pavel Stehule's message of Sat May 21 16:05:01 -0400 2011:

 An implementation of ERROR_CONTEXT is not without impact on
 performance, because the context has to be collected when an exception
 is caught.  One solution is to remove ERROR_CONTEXT from the proposal.
 A second solution would be an enhanced syntax for the exception trap
 (meaning: collect the CONTEXT only when the exception is handled)

I don't understand why we should worry about this.  I mean, if you don't
catch the error, you have to form errcontext anyway.  Why is it a
problem to generate it when the exception is caught?

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] eviscerating the parser

2011-05-21 Thread Robert Haas
On Sat, May 21, 2011 at 12:13 PM, Alvaro Herrera
alvhe...@commandprompt.com wrote:
 Excerpts from Robert Haas's message of vie may 20 18:41:37 -0400 2011:
 This means that, in a situation where aren't using DML, and are
 running very simple queries without prepared statements, the parser
 bloat resulting from supporting all the other kinds of queries which
 aren't being exercised by the tests results in a slowdown of
 approximately 0.7%.

 So the point here is, we do not need to worry about adding new keywords,
 because the performance impact is really minimal.  Right?

I think there are several possible points to be made here.  I agree
that it's somewhat reassuring in that it certainly means that the
likely impact of any single keyword is probably minimal.  On the other
hand, I wouldn't go so far as to say that we can add infinite numbers
of keywords with wild abandon: that's certainly not true, and spending
two or three minutes trying to use the existing ones rather than
adding new ones is probably time well spent.  But on the flip side
there seems to be no reason for alarm about adding ~10
keywords/release or so, which I think is approximately what we've been
doing.

Another point is that parsing overhead is quite obviously not the
reason for the massive performance gap between one core running simple
selects on PostgreSQL and one core running simple selects on MySQL.
Even if I had (further) eviscerated the parser to cover only the
syntax those queries actually use, it wasn't going to buy more than a
couple points.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] SSI predicate locking on heap -- tuple or row?

2011-05-21 Thread Dan Ports
On Sat, May 21, 2011 at 04:45:15PM -0400, Pavan Deolasee wrote:
 As a first step, it would be great if you can upload the slides on the
 conference website.  To expect that the attendees would have understood the
 nitty-gritty of SSI just by listening to the presentation is inhuman :-)

I just posted them at
http://drkp.net/drkp/papers/ssi-pgcon11-slides.pdf

...and they'll be linked from the Serializable wiki page as soon as I
remember how to edit it. :-)

Dan

-- 
Dan R. K. Ports  MIT CSAIL  http://drkp.net/



Re: [HACKERS] ALTER TYPE DROP + composite-typed col vs. pg_upgrade

2011-05-21 Thread Noah Misch
On Sat, May 21, 2011 at 08:25:30AM -0400, Heikki Linnakangas wrote:
 On 28.04.2011 15:41, Noah Misch wrote:
 Now that we have ALTER TYPE DROP ATTRIBUTE, pg_dump --binary-upgrade must, 
 for
 the sake of composite-typed columns, preserve the dropped-column 
 configuration
 of stand-alone composite types.  Here's a test case:

 create type t as (x int, y int);
 create table has_a (tcol t);
 insert into has_a values ('(1,2)');
 table has_a; -- (1,2)
 alter type t drop attribute y cascade, add attribute z int cascade;
 table has_a; -- (1,)
 table has_a; -- after pg_upgrade: (1,2)

 Apparently I did not fully test the last version after merging it with 
 upstream
 changes, because it did not work.  Sorry for that.  This version updates the
 queries correctly and adds a test case.  A regular make check passes the 
 new
 test case with or without the rest of this patch.  However, a comparison of
 regression database dumps before and after a pg_upgrade will reveal the 
 problem
 given this new test case.  See, for example, Peter's recent patch to have the
 contrib/pg_upgrade make check do this.

 Ok, committed.

Thank you.



Re: [HACKERS] eviscerating the parser

2011-05-21 Thread Robert Haas
On Sat, May 21, 2011 at 7:51 PM, Robert Haas robertmh...@gmail.com wrote:
 Another point is that parsing overhead is quite obviously not the
 reason for the massive performance gap between one core running simple
 selects on PostgreSQL and one core running simple selects on MySQL.
 Even if I had (further) eviscerated the parser to cover only the
 syntax those queries actually use, it wasn't going to buy more than a
 couple points.

Incidentally, prepared transactions help a lot.  On unpatched master,
with pgbench -T 300 -S -n:

tps = 10106.900801 (including connections establishing)
tps = 10107.015951 (excluding connections establishing)

vs.

tps = 18212.053457 (including connections establishing)
tps = 18212.246077 (excluding connections establishing)

The reasons for the magnitude of that difference are not entirely
apparent to me.
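For anyone wanting to reproduce the comparison, the two runs above presumably correspond to invocations along these lines (database name assumed; -S selects the read-only SELECT workload, -n skips the pre-run vacuum):

```
pgbench -T 300 -S -n bench                # simple query protocol: ~10.1K tps
pgbench -T 300 -S -n -M prepared bench    # prepared statements:   ~18.2K tps
```

With -M prepared each client parses and plans the SELECT once and then only binds/executes, so the gap bounds the combined parse-plus-plan cost per query.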

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] eviscerating the parser

2011-05-21 Thread Kevin Grittner
Robert Haas  wrote:
 
 Incidentally, prepared transactions help a lot.
 
Prepared transactions or prepared statements?
 
-Kevin




Re: [HACKERS] SSI predicate locking on heap -- tuple or row?

2011-05-21 Thread Robert Haas
On Sat, May 21, 2011 at 4:09 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
 [Anyone who has followed along this far has earned my undying
 gratitude.]

 Since the chain of transactions has to overlap T0 and T3, I don't see
 how that can happen.  We established that T0 has to commit before T3 can
 start, so the chain will ultimately have to get to that T0 commit.

 I don't want to start ripping out the apparently useless code without
someone checking my logic here.  One big gaffe in this area is quite
 enough for me.  :-/  Anyone?

How is an UPDATE different from a DELETE and an INSERT?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] eviscerating the parser

2011-05-21 Thread Robert Haas
On Sat, May 21, 2011 at 8:36 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
 Robert Haas  wrote:
 Incidentally, prepared transactions help a lot.

 Prepared transactions or prepared statements?

Uh, statements.  -M prepared.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] about EDITOR_LINENUMBER_SWITCH

2011-05-21 Thread Robert Haas
On Sat, May 21, 2011 at 5:47 PM, Peter Eisentraut pete...@gmx.net wrote:
 I noticed the 9.1 release notes claim that the new
 EDITOR_LINENUMBER_SWITCH thing is an environment variable, whereas it is
 actually a psql variable.

 This is perhaps sort of a Freudian slip.  Since the editor itself is
 configured using an environment variable, shouldn't any configuration
 about the editor also be an environment variable, so people can
 configure them together?

It's probably the result of drift between the original patch and what
was eventually committed.  IIRC, Pavel had it as an environment
variable originally, but Tom and I didn't feel the feature was
important enough to merit that treatment.

 Another thought is that this whole thing could be done away with if we
 just allowed people to pass through arbitrary options to the editor,
 like

 \edit file.sql +50 -a -b -c

 For powerusers, this could have interesting possibilities.

That's an intriguing possibility.  But part of the point of the
original feature was to be able to say:

\ef somefunc 10

...and end up on line 10 of somefunc, perhaps in response to an error
message complaining about that line.  I don't think your proposal
would address that.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] eviscerating the parser

2011-05-21 Thread Jeff Janes
On Sat, May 21, 2011 at 5:31 PM, Robert Haas robertmh...@gmail.com wrote:
 On Sat, May 21, 2011 at 7:51 PM, Robert Haas robertmh...@gmail.com wrote:
 Another point is that parsing overhead is quite obviously not the
 reason for the massive performance gap between one core running simple
 selects on PostgreSQL and one core running simple selects on MySQL.
 Even if I had (further) eviscerated the parser to cover only the
 syntax those queries actually use, it wasn't going to buy more than a
 couple points.

 Incidentally, prepared transactions help a lot.  On unpatched master,
 with pgbench -T 300 -S -n:

 tps = 10106.900801 (including connections establishing)
 tps = 10107.015951 (excluding connections establishing)

Are you sure that you actually ran that with -M prepared?  The numbers
look suspiciously similar to the ones reported in your original email.

For what it is worth, on my ancient hardware, the patched code is
slower than the unpatched just as often as it is faster, using -n -S
-T 300 on alternations between servers.

Cheers,

Jeff



Re: [HACKERS] SSI predicate locking on heap -- tuple or row?

2011-05-21 Thread Kevin Grittner
Robert Haas  wrote:
 
 How is an UPDATE different from a DELETE and an INSERT?
 
That's the question Jeff asked which caused us to revisit this.
 
There are two answers -- conceptual and implementation.
 
Conceptually, an updated row is the same row, and a row inserted after a
delete is a new row.  Note that READ COMMITTED doesn't treat them the
same on a write conflict.  To give a practical example, police
departments in Wisconsin typically reassign a badge number to a new
officer after an existing officer leaves.  Updating an officer record
keyed by badge number (say, with a new address or a name change) would
be qualitatively different from deleting an old officer record and
inserting a new one for a different person now getting the badge number.
 (OK, so this is somewhat artificial, because they track who had the
number in what temporal ranges, but you get the idea.)
 
In the implementation the only difference between an UPDATE in a table
and a DELETE and INSERT in the same table in the same transaction
(besides concurrency handling) is the ctid linkage, at least as far as I
can see.
 
So I think that you can't just treat the two things the same in SSI just
because the PostgreSQL implementation details make them similar; but I
think that you can treat the two things the same for the reasons I
worked out at the start of the thread.
 
-Kevin




Re: [HACKERS] eviscerating the parser

2011-05-21 Thread Robert Haas
On Sat, May 21, 2011 at 8:41 PM, Jeff Janes jeff.ja...@gmail.com wrote:
 On Sat, May 21, 2011 at 5:31 PM, Robert Haas robertmh...@gmail.com wrote:
 On Sat, May 21, 2011 at 7:51 PM, Robert Haas robertmh...@gmail.com wrote:
 Another point is that parsing overhead is quite obviously not the
 reason for the massive performance gap between one core running simple
 selects on PostgreSQL and one core running simple selects on MySQL.
 Even if I had (further) eviscerated the parser to cover only the
 syntax those queries actually use, it wasn't going to buy more than a
 couple points.

 Incidentally, prepared transactions help a lot.  On unpatched master,
 with pgbench -T 300 -S -n:

 tps = 10106.900801 (including connections establishing)
 tps = 10107.015951 (excluding connections establishing)

 Are you sure that you actually ran that with -M prepared?  The numbers
 look suspiciously similar to the ones reported in your original email.

That's without -M prepared; the subsequent number (~18K) is the one
with -M prepared.  So prepared statements increased throughput by
about 80%, in this test.

 For what it is worth, on my ancient hardware, the patched code is
 slower than the unpatched just as often as it is faster, using -n -S
 -T 300 on alternations between servers.

Well, that's pretty interesting.  The effect *appeared* to be small
but consistent in my testing, but it could be I just got lucky; or the
choice of architecture and/or OS might matter.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] SSI predicate locking on heap -- tuple or row?

2011-05-21 Thread Robert Haas
On Sat, May 21, 2011 at 8:50 PM, Kevin Grittner
kevin.gritt...@wicourts.gov wrote:
 Robert Haas  wrote:
 How is an UPDATE different from a DELETE and an INSERT?

 That's the question Jeff asked which caused us to revisit this.

 There are two answers -- conceptual and implementation.

 Conceptually, an updated row is the same row, and a row inserted after a
 delete is a new row.  Note that READ COMMITTED doesn't treat them the
 same on a write conflict.  To give a practical example, police
 departments in Wisconsin typically reassign a badge number to a new
 officer after an existing officer leaves.  Updating an officer record
 keyed by badge number (say, with a new address or a name change) would
 be qualitatively different from deleting an old officer record and
 inserting a new one for a different person now getting the badge number.
  (OK, so this is somewhat artificial, because they track who had the
 number in what temporal ranges, but you get the idea.)

 In the implementation the only difference between an UPDATE in a table
 and a DELETE and INSERT in the same table in the same transaction
 (besides concurrency handling) is the ctid linkage, at least as far as I
 can see.

I think the implementation is what matters here.  I understand that
there are certain situations in which the user might choose to UPDATE
a row and other situations in which they might choose to DELETE and
then INSERT: but the user's intent is basically irrelevant.  If the
system treats those operations in basically the same way, then it
shouldn't be necessary to follow the CTID chain in one case given that
there is no CTID chain in the other case.  Or to put that another way,
if it is necessary to follow the CTID chain, then we should be able to
articulate a reason for that necessity -- something that is materially
different in the UPDATE case.  Otherwise, we're just following the
chain because it's there.

It seems to me that we can actually state with some degree of
precision what that material difference would need to be.  The goal
of SSI is to prevent serialization anomalies that would not be
prevented by snapshot isolation.  Let's suppose that it successfully
does that in the DELETE/INSERT case.  Suppose further that we change
SSI so that it handles the UPDATE case in the same way that it handles
the DELETE/INSERT case.  This change will be incorrect only if there
is a serialization anomaly that snapshot isolation *would have
prevented* in the DELETE/INSERT case that *it does not prevent* in the
update case.  In other words, if SSI needs to be more rigorous in the
UPDATE case, it can only be because snapshot isolation is less
rigorous in that case, and the additional rigor that SSI must apply
there must be exactly equal to whatever snapshot isolation isn't
picking up (as compared with the DELETE/INSERT case).

Does that make any sense?  It seems logical to me, but IJWH.

 So I think that you can't just treat the two things the same in SSI just
 because the PostgreSQL implementation details make them similar; but I
 think that you can treat the two things the same for the reasons I
 worked out at the start of the thread.

Your argument seems reasonable to me; but it would be nice if we could
find a simpler one, because simpler arguments are less likely to be
incorrect.  :-)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] about EDITOR_LINENUMBER_SWITCH

2011-05-21 Thread Pavel Stehule
2011/5/21 Peter Eisentraut pete...@gmx.net:
 I noticed the 9.1 release notes claim that the new
 EDITOR_LINENUMBER_SWITCH thing is an environment variable, whereas it is
 actually a psql variable.

 This is perhaps sort of a Freudian slip.  Since the editor itself is
 configured using an environment variable, shouldn't any configuration
 about the editor also be an environment variable, so people can
 configure them together?

 Another thought is that this whole thing could be done away with if we
 just allowed people to pass through arbitrary options to the editor,
 like

 \edit file.sql +50 -a -b -c

In the original patch I had to do some magic with the line number, so I
had to know which argument was the line number.  The idea of passing
other options is interesting.  It might be more usable to store these
options in a psql variable (consistent with the current state), maybe
EDITOR_OPTIONS, with the possibility to override them from the
metacommand:

\edit file.sql 10 -x -y -z

Regards

Pavel Stehule



 For powerusers, this could have interesting possibilities.







Re: [HACKERS] proposal: enhanced get diagnostics statement

2011-05-21 Thread Pavel Stehule
2011/5/21 Alvaro Herrera alvhe...@commandprompt.com:
 Excerpts from Pavel Stehule's message of sáb may 21 16:05:01 -0400 2011:

 An implementation of ERROR_CONTEXT is not without impact on
 performance, because the context has to be collected when an exception
 is caught.  One solution is to remove ERROR_CONTEXT from the proposal.
 A second solution would be an enhanced syntax for the exception trap
 (meaning: collect the CONTEXT only when the exception is handled)

 I don't understand why we should worry about this.  I mean, if you don't
 catch the error, you have to form errcontext anyway.  Why is it a
 problem to generate it when the exception is caught?

Generating the context means calling a few functions that do some
string operations.  At the moment the functionality is limited, so
there aren't too many operations, but that can change; PL/PSM, for
example, dumps all local variables, ...

Somebody may use exception trapping as a mechanism for ignoring errors:

FOR r IN SELECT ..
LOOP
  BEGIN
INSERT INTO ...
  EXCEPTION WHEN OTHERS THEN
/* do nothing */
  END;
END LOOP;

or somebody can do:

BEGIN
  ...
EXCEPTION WHEN OTHERS THEN
  RAISE WARNING ' .';
  RAISE;
END;

In the last case the context may be wanted, so that is not a hard problem.
But the first case is a problem, and we have no other way to do it.

Maybe we can use a simple optimization: when a function doesn't contain
a GET DIAGNOSTICS statement with the ERROR_CONTEXT field, we don't
collect the context at all.  Only when the function has GET DIAGNOSTICS
with ERROR_CONTEXT do we gather the context info.  This strategy
ensures there will be no negative performance effect on current
applications.
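A sketch of how that optimization might look from the user's side. Note that ERROR_CONTEXT is only the field name proposed in this thread, not a released feature, and the function body is purely illustrative:

```sql
CREATE FUNCTION show_ctx() RETURNS void AS $$
DECLARE
  ctx text;
BEGIN
  PERFORM 1/0;                        -- force a division_by_zero error
EXCEPTION WHEN OTHERS THEN
  -- Proposed syntax: because this function mentions ERROR_CONTEXT,
  -- plpgsql would collect the context when the exception is caught.
  -- Functions whose handlers never reference it would pay nothing.
  GET DIAGNOSTICS ctx = ERROR_CONTEXT;
  RAISE WARNING 'context: %', ctx;
END;
$$ LANGUAGE plpgsql;
```

Under this scheme the error-ignoring loop from the first example, which never asks for the context, keeps its current performance.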

Pavel


 --
 Álvaro Herrera alvhe...@commandprompt.com
 The PostgreSQL Company - Command Prompt, Inc.
 PostgreSQL Replication, Consulting, Custom Development, 24x7 support

