Re: [HACKERS] [v9.2] Fix Leaky View Problem

2011-12-03 Thread Kohei KaiGai
I rebased my patch set. New functions in pg_proc.h prevented the previous
revision from applying cleanly. There are no functional changes.

2011/11/3 Kohei KaiGai kai...@kaigai.gr.jp:
 2011/11/2 Tom Lane t...@sss.pgh.pa.us:
 Kohei KaiGai kai...@kaigai.gr.jp writes:
 The reason why I redefined the relid of RangeTblEntry is to avoid
 the problem when the security_barrier attribute gets changed by concurrent
 transactions between rewriter and planner stage.

 This is complete nonsense.  If the information is being injected into
 the querytree by the rewriter, it's sufficient to assume that it's up to
 date.  Were it not so, we'd have problems with CREATE OR REPLACE RULE,
 too.

 I revised the patches to revert the redefinition of relid in RangeTblEntry,
 and to add a security_barrier flag instead.
 It seems to work fine, even if the view's property was changed between
 rewriter and planner stage.

 postgres=# CREATE VIEW v1 WITH (security_barrier) AS SELECT * FROM t1
 WHERE a % 2 = 0;
 CREATE VIEW
 postgres=# PREPARE p1 AS SELECT * FROM v1 WHERE f_leak(b);
 PREPARE
 postgres=# EXECUTE p1;
 NOTICE:  f_leak = bbb
 NOTICE:  f_leak = ddd
  a |  b
 ---+-
  2 | bbb
  4 | ddd
 (2 rows)

 postgres=# ALTER VIEW v1 SET (security_barrier=false);
 ALTER VIEW
 postgres=# EXECUTE p1;
 NOTICE:  f_leak = aaa
 NOTICE:  f_leak = bbb
 NOTICE:  f_leak = ccc
 NOTICE:  f_leak = ddd
 NOTICE:  f_leak = eee
  a |  b
 ---+-
  2 | bbb
  4 | ddd
 (2 rows)

 postgres=# ALTER VIEW v1 SET (security_barrier=true);
 ALTER VIEW
 postgres=# EXECUTE p1;
 NOTICE:  f_leak = bbb
 NOTICE:  f_leak = ddd
  a |  b
 ---+-
  2 | bbb
  4 | ddd
 (2 rows)

 Thanks,
 --
 KaiGai Kohei kai...@kaigai.gr.jp



-- 
KaiGai Kohei kai...@kaigai.gr.jp

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] PostgreSQL fails to build with 32bit MinGW-w64

2011-12-03 Thread NISHIYAMA Tomoaki
Hi,

A new patch:
check for the presence of crtdefs.h in configure

-#if _MSC_VER >= 1400 || defined(WIN64)
+#if _MSC_VER >= 1400 || HAVE_CRTDEFS_H
 #define errcode __msvc_errcode
 #include <crtdefs.h>
 #undef errcode

Perhaps there is no guarantee that mingw (not -w64) will not have crtdefs.h
in future versions.
the 3 lines
 #define errcode __msvc_errcode
 #include <crtdefs.h>
 #undef errcode
should be valid as long as crtdefs.h exists and errcode is not defined
previously.
Because this is the first system include file, we can be sure errcode is not
defined by this point.
So, I believe it's better to just test for the presence of crtdefs.h rather
than try to figure out the name of the compiler.

check for fseeko and ftello macro definition before defining to avoid
warning in mingw-w64 4.7.0 20110827

This patch was tested to build successfully on
mingw gcc version 4.6.1
mingw-w64 i686-w64-mingw32 gcc version 4.5.4 20110812
mingw-w64 x86_64-w64-mingw32 gcc version 4.7.0 20110827



mingw64-32bitpatch.diff
Description: Binary data




On 2011/12/02, at 1:29, Andrew Dunstan wrote:

 
 
 On 11/27/2011 09:18 AM, NISHIYAMA Tomoaki wrote:
 Hi,
 
 +/* __MINGW64_VERSION_MAJOR is related to both 32/64 bit gcc compiles by
 + * mingw-w64, however it gets defined only after
 Why not use __MINGW32__, which is defined without including any headers?
 Because it's defined by compilers other than mingw-w64.
 I see. That's because of mingw (not -w64).
 Should it be ok if mingw is ok with that condition?
 
 This really breaks mingw gcc 4.6.1 :( (it does not have crtdefs.h)
 If moving it downward does not break MSVC, perhaps that's the good way.
 Otherwise, we might check for the presence of crtdefs.h with configure?
 
 
 
 I have looked at this a bit. It's fairly ugly, and the only moderately clean 
 way I see forward is a configure test to check for a mingw-w64 compiler, e.g. 
 by running gcc -E -dM over a file which just includes stdio.h and checking 
 for the definedness of __MINGW64_VERSION_MAJOR, or something similar.
 
 
 cheers
 
 andrew
 
 
 
 
 
 




Re: [HACKERS] Prep object creation hooks, and related sepgsql updates

2011-12-03 Thread Kohei KaiGai
2011/12/3 Robert Haas robertmh...@gmail.com:
 On Fri, Dec 2, 2011 at 6:52 AM, Kohei KaiGai kai...@kaigai.gr.jp wrote:
 At least, it is working. However, it is not a perfect solution with respect
 to future updates of code paths in the core.

 Hmm.  So, do you want this committed?  If so, I think the major thing
 it lacks is documentation.

 I can't help noticing that this amounts, altogether, to less than 600
 lines of code.  I am not sure what your hesitation in taking this
 approach is.  Certainly, there are things not to like in here, but
 I've seen a lot worse, and you can always refine it later.  For a
 first cut, why not?    Even if you had the absolutely perfect hooks in
 core, how much would it save compared to what's here now?  How
 different would your ideal implementation be from what you've done
 here?

You are likely right. Even if the hook provides sepgsql with enough
contextual information, it might mean the maintenance burden is moved
from sepgsql to the core, as we discussed before.
OK, I'd like to go with this approach. I'll try to update the documentation
and regression test cases, so please wait a few days.

Thanks,

 As regards future updates of code paths in core, nothing in here looks
 terribly likely to get broken; or at least if it does then I think
 quite a lot of other things will get broken, too.  Anything we do has
 some maintenance burden, and this doesn't look particularly bad to me.

 --
 Robert Haas
 EnterpriseDB: http://www.enterprisedb.com
 The Enterprise PostgreSQL Company



-- 
KaiGai Kohei kai...@kaigai.gr.jp



Re: [HACKERS] review: CHECK FUNCTION statement

2011-12-03 Thread Albe Laurenz
Pavel Stehule wrote:
 My attempt at a syntax that could also cover Peter's wish for multiple
 checker functions:

 CHECK FUNCTION { func(args) | ALL [IN SCHEMA schema] [FOR ROLE user] }
  [ USING check_function ] OPTIONS (optname optarg [, ...])

 check_function should be related to one language, so you have to
 specify a language if you want to specify check_function (if we want
 to have more check functions for one language).

Right, I forgot LANGUAGE:

CHECK FUNCTION { func(args) | ALL IN LANGUAGE pl [IN SCHEMA schema] [FOR ROLE 
user] }
 [ USING check_function ] OPTIONS (optname optarg [, ...])

If func(args) is given, the language can be inferred.

Yours,
Laurenz Albe



Re: [HACKERS] Review of VS 2010 support patches

2011-12-03 Thread Brar Piening

Magnus Hagander wrote:
  I'd vote for whatever matches the general perl pest practices at 
this time. 


I didn't know the perl pest practices until now but as the PostgreSQL 
community is more into C I think I know what you mean ;-)




Re: [HACKERS] Command Triggers

2011-12-03 Thread Dimitri Fontaine
Simon Riggs si...@2ndquadrant.com writes:
 Those are especially important because in 9.2 DDL commands will cause
 additional locking overheads, so preventing DDL will be essential to
 keeping performance stable in high txn rate databases.

The patch now implements "any command" triggers, and you have the
command tag to branch on in the function if you need to.

  CREATE TRIGGER noddl
  INSTEAD OF ANY COMMAND
 EXECUTE PROCEDURE cancel_any_ddl();
  
 So I'd like to see a few more triggers that work out of the box for
 those cases (perhaps wrapped by a function?). It would also allow a
 more useful man page example of how to use this.

We could solve that by providing an extension implementing command
triggers ready for use.  One that allows easily switching on and off the
capability of running commands seems like a good candidate.

That said, with the current coding, you can't have both an "instead of"
trigger on any command and a "before" or "after" trigger on any given
command, so installing that extension would prevent you from any other
usage of command triggers.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] [PATCH] PostgreSQL fails to build with 32bit MinGW-w64

2011-12-03 Thread Magnus Hagander
On Sat, Dec 3, 2011 at 09:24, NISHIYAMA Tomoaki
tomoa...@staff.kanazawa-u.ac.jp wrote:
 Hi,

 A new patch:
 check for the presence of crtdefs.h in configure

 -#if _MSC_VER >= 1400 || defined(WIN64)
 +#if _MSC_VER >= 1400 || HAVE_CRTDEFS_H
  #define errcode __msvc_errcode
  #include <crtdefs.h>
  #undef errcode

Have you verified if this affects _MSC_VER < 1400? Suddenly that
branch would care about HAVE_CRTDEFS_H, and I'm not sure if that's
something we need to worry about.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] [PATCH] PostgreSQL fails to build with 32bit MinGW-w64

2011-12-03 Thread NISHIYAMA Tomoaki
Hi,

 Have you verified if this affects _MSC_VER < 1400? Suddenly that
 branch would care about HAVE_CRTDEFS_H, and I'm not sure if that's
 something we need to worry about.


I have no MSVC. In that sense it is not verified in fact, and I hope
those who know it well would kindly comment on it.

However, it appears that pg_config.h is not created through
configure, but just copied from pg_config.h.win32 with those
compilers, and thus HAVE_CRTDEFS_H will not be defined.
So, I think this code fragment will not be enabled when
_MSC_VER < 1400.

In addition, the code fragment should do no harm as long as they
have crtdefs.h.
(Again, this is how I think it should behave, and it is not tested with real
tools.)

On 2011/12/03, at 23:20, Magnus Hagander wrote:

 On Sat, Dec 3, 2011 at 09:24, NISHIYAMA Tomoaki
 tomoa...@staff.kanazawa-u.ac.jp wrote:
 Hi,
 
 A new patch:
 check for the presence of crtdefs.h in configure
 
 -#if _MSC_VER >= 1400 || defined(WIN64)
 +#if _MSC_VER >= 1400 || HAVE_CRTDEFS_H
  #define errcode __msvc_errcode
  #include <crtdefs.h>
  #undef errcode
 
 Have you verified if this affects _MSC_VER < 1400? Suddenly that
 branch would care about HAVE_CRTDEFS_H, and I'm not sure if that's
 something we need to worry about.
 
 -- 
  Magnus Hagander
  Me: http://www.hagander.net/
  Work: http://www.redpill-linpro.com/
 
 




[HACKERS] why local_preload_libraries does require a separate directory ?

2011-12-03 Thread Tomas Vondra
Hi,

Why do libraries loaded using local_preload_libraries need to be placed
in a different subdirectory than libraries loaded using
shared_preload_libraries?

And why does it not use dynamic_library_path but a hardcoded path
'$libdir/plugins'?

I do understand that letting users load whatever libraries they
want is a bad idea, but when the library is loaded from postgresql.conf
it should be safe.

Therefore I'd expect / propose this behaviour:

1) libs loaded from shared_preload_libraries/local_preload_libraries

   - any paths are allowed (relative and absolute)
   - relative paths are resolved using dynamic_library_path if
 specified, $libdir otherwise
   - absolute paths are allowed, may load libraries from other locations

2) libs loaded using LOAD

   - check that the library is loaded from dynamic_library_path (if
 specified), $libdir otherwise


AFAIK this prevents '..' type attacks and makes it easier to install
extensions (shared libs are installed to $libdir, so if you need to load
a library using local_preload_libraries, you have to copy it manually).

Tomas



Re: [HACKERS] [PATCH] PostgreSQL fails to build with 32bit MinGW-w64

2011-12-03 Thread Magnus Hagander
On Sat, Dec 3, 2011 at 15:49, NISHIYAMA Tomoaki
tomoa...@staff.kanazawa-u.ac.jp wrote:
 Hi,

 Have you verified if this affects _MSC_VER < 1400? Suddenly that
 branch would care about HAVE_CRTDEFS_H, and I'm not sure if that's
 something we need to worry about.


 I have no MSVC. In that sense it is not verified in fact, and I hope
 those who knows well would kindly comment on it.

 However, it appears that pg_config.h is not created through
 configure, but just copied from pg_config.h.win32 with those
 compilers, and thus HAVE_CRTDEFS_H will not be defined.
 So, I think this code fragment will not be enabled when
 _MSC_VER < 1400.

Hmm, true. Unless HAVE_CRTDEFS_H is defined by the system, which it
likely isn't - I was confusing it with the kind of defines that MSVC
tends to stick in their own header files, and thought that's what you
were testing for.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/



Re: [HACKERS] backup_label during crash recovery: do we know how to solve it?

2011-12-03 Thread Heikki Linnakangas

On 03.12.2011 01:25, Daniel Farina wrote:

Here's a protocol: have pg_start_backup() write a file that just means
"backing up".  Restarts are OK, because that's all it means; it has no
meaning to a recovery/restoration process.

When one wishes to restore, one must touch a file -- not unlike the
trigger file in recovery.conf (some have argued in the past this
*should* be recovery.conf, except perhaps for its tendency to be moved
to recovery.done) to have that behavior occur.


At the moment, if the situation is ambiguous, the system assumes that 
you're restoring from a backup. What your suggestion amounts to is to 
reverse that assumption, and assume instead that you're doing crash 
recovery on a system where a backup was being taken. In that case, if 
you take a backup with pg_basebackup, and fail to archive the WAL 
files correctly, or forget to create a recovery.conf file, the database 
will happily start up from the backup, but is in fact corrupt. That is 
not good either.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com



Re: [HACKERS] Bug in walsender when calling out to do_pg_stop_backup (and others?)

2011-12-03 Thread Heikki Linnakangas

On 19.10.2011 19:41, Greg Jaskiewicz wrote:

On 19 Oct 2011, at 18:28, Florian Pflug wrote:

All the other flags which indicate cancellation reasons are set from signal 
handlers, I believe. We could of course mark ClientConnectionLostPending as 
volatile just to be consistent. Not sure whether that's a good idea, or not. It 
might prevent a mistake should we ever add code to detect lost connections 
asynchronously (i.e., from somewhere else than pq_flush). And the cost is 
probably negligible, because CHECK_FOR_INTERRUPTS tests for InterruptPending 
before calling ProcessInterrupts(), so we only pay the cost of volatile if 
there's actually an interrupt pending. But I still think it's better to add 
qualifiers such as volatile only when really necessary. A comment about why it 
*isn't* volatile is probably in order, though, so I'll add that in the next 
version of the patch.


Makes sense.

I had to ask, because it sticks out. And indeed there is a possibility that 
someone will one day try to use it from a signal handler, etc.


Let's just mark it as volatile. It's not clear to me that we'll never 
set or read the flag while in a signal handler. We probably don't, but 
what if ImmediateInterruptOK is 'true', for example, just when we fail 
to send a response and try to set the flags. In that case, we have to be 
careful that ClientConnectionLost is set before InterruptPending (which 
we can only guarantee if both are volatile, I believe), otherwise a 
signal might arrive when we've just set InterruptPending, but not yet 
ClientConnectionLost. ProcessInterrupts() would clear InterruptPending, 
but not see ClientConnectionLost, so we would lose the interrupt.


That's not serious, and the window for that happening would be extremely 
small, and I don't think it can actually happen as the code stands, 
because we never try to send anything while ImmediateInterruptOK == 
true. But better safe than sorry.



One small change I'd like to make is to treat the loss of connection 
more as a new top-level event, rather than as a new reason for query 
cancellation. A lost connection is more like receiving SIGTERM, really. 
Please take a look at the attached patch. I've been testing this by 
doing "SELECT pg_sleep(1), a FROM generate_series(1,1000) a;", and 
killing the connection with "killall -9 psql".


One remaining issue I have with this is the wording of the error 
message. The closest existing message we have is in old-mode COPY:


ereport(FATAL,
(errcode(ERRCODE_CONNECTION_FAILURE),
 errmsg("connection lost during COPY to stdout")));

In the patch, I made the new message just "connection lost", but does 
anyone else have a better opinion on that? Perhaps it should be 
"connection lost while sending response to client". Or "connection lost 
during execution of statement", but that might not be accurate if we're 
not executing a statement at the moment, but just sending a sync 
message or similar.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
index b83a2ef..c36cf31 100644
--- a/src/backend/libpq/pqcomm.c
+++ b/src/backend/libpq/pqcomm.c
@@ -1247,9 +1247,13 @@ internal_flush(void)
 
 			/*
 			 * We drop the buffered data anyway so that processing can
-			 * continue, even though we'll probably quit soon.
+			 * continue, even though we'll probably quit soon. We also
+			 * set a flag that'll cause the next CHECK_FOR_INTERRUPTS
+			 * to terminate the connection.
 			 */
 			PqSendStart = PqSendPointer = 0;
+			ClientConnectionLost = 1;
+			InterruptPending = 1;
 			return EOF;
 		}
 
diff --git a/src/backend/tcop/postgres.c b/src/backend/tcop/postgres.c
index 976a832..13ea594 100644
--- a/src/backend/tcop/postgres.c
+++ b/src/backend/tcop/postgres.c
@@ -2823,6 +2823,19 @@ ProcessInterrupts(void)
 	(errcode(ERRCODE_ADMIN_SHUTDOWN),
 			 errmsg("terminating connection due to administrator command")));
 	}
+	if (ClientConnectionLost)
+	{
+		QueryCancelPending = false;		/* lost connection trumps QueryCancel */
+		ImmediateInterruptOK = false;	/* not idle anymore */
+		DisableNotifyInterrupt();
+		DisableCatchupInterrupt();
+		/* As in quickdie, don't risk sending to client during auth */
+		if (ClientAuthInProgress && whereToSendOutput == DestRemote)
+			whereToSendOutput = DestNone;
+		ereport(FATAL,
+				(errcode(ERRCODE_CONNECTION_FAILURE),
+				 errmsg("connection lost")));
+	}
 	if (QueryCancelPending)
 	{
 		QueryCancelPending = false;
diff --git a/src/backend/utils/init/globals.c b/src/backend/utils/init/globals.c
index 9ce64e6..9417c7a 100644
--- a/src/backend/utils/init/globals.c
+++ b/src/backend/utils/init/globals.c
@@ -29,6 +29,7 @@ ProtocolVersion FrontendProtocol;
 volatile bool InterruptPending = false;
 volatile bool QueryCancelPending = false;
 volatile bool ProcDiePending = false;
+volatile bool ClientConnectionLost = false;
 volatile bool ImmediateInterruptOK = false;
 volatile uint32 

Re: [HACKERS] psql line number reporting from stdin

2011-12-03 Thread Peter Eisentraut
On lör, 2011-11-26 at 22:36 +0200, Peter Eisentraut wrote:
 There is a long-standing oddity in psql that running
 
 psql -f foo.sql
 
 returns error messages with file name and line number, like
 
 psql:foo.sql:1: ERROR:  syntax error at or near foo
 
 but running
 
 psql < foo.sql does not.  I suggest we change the latter to print
 
 psql:stdin:1: ERROR:  syntax error at or near foo

It turns out that running

psql -f -

already used to print

psql:stdin:1: ERROR:  blah

except that it got broken between 8.4 and 9.0 (commit b291c0fb), and now
prints

psql:-:1: ERROR:  blah


I'll try to find a way to fix that and integrate it with the change I'm
proposing.





Re: [HACKERS] why local_preload_libraries does require a separate directory ?

2011-12-03 Thread Tom Lane
Tomas Vondra t...@fuzzy.cz writes:
 why the libraries loaded using local_preload_libraries need to be placed
 in a different subdirectory than libraries loaded using
 shared_preload_libraries?

Security: it lets the DBA constrain which libraries are loadable this way.

 I do understand that letting users load whatever libraries they
 want is a bad idea, but when the library is loaded from postgresql.conf
 it should be safe.

We don't have a mechanism that would allow different limitations to be
placed on a GUC variable depending on where the value came from.
To do what you're proposing would require restricting
local_preload_libraries to be superuser-only, which would be a net
decrease in functionality.

regards, tom lane



Re: [HACKERS] Command Triggers

2011-12-03 Thread Andres Freund
On Saturday, December 03, 2011 12:04:57 AM Dimitri Fontaine wrote:
 Hi,
 
 First thing first: thank you Andres for a great review, I do appreciate
 it.  Please find attached a newer version of the patch.  The github
 repository is also updated.
 
 Andres Freund and...@anarazel.de writes:
  I think at least a
  * command_tagtext
 
 Added.
 
  Why is there explicit documentation of not checking the arguments of said
  trigger function? That seems to be a bit strange to me.
 The code is searching for a function with the given name and 5 text
 arguments, whatever you say in the command.  That could be changed later
 on if there's a need.
Forget it, I was too dumb to remember the code when I wrote the above ;)

  schemaname currently is mightily unusable because whether its sent or not
  depends wether the table was created with a schemaname specified or not.
  That doesn't seem to be a good approach.
 I'm not sure about that, we're spitting out what the user entered.
No, we're not. We're writing out something that is semantically the same as
what the user entered. At several places NULL/NOT NULL and such get added.


  As an alternative it would be possible to pass the current search path
  but that doesn't seem to the best approach to me;
 The trigger runs with the same search_path settings as the command, right?
For many things it makes sense to specify the search path of a function upon 
creation of the function. Even more so if you have security definer 
functions...
Otherwise you can get into trouble via operators and such...

  Then there is nice stuff like CREATE TABLE  (LIKE ...);
  What shall that return in future? I vote for showing it as the plain
  CREATE TABLE where all columns are specified.
 I don't think so.  Think about the main use cases for this patch,
 auditing and replication.  Both cases you want to deal with what the
 user said, not what the system understood.
Is that so? In the replication case the origin table may well look 
different on the master compared to the standby, which is nearly impossible 
to diagnose with the above behaviour.

  I think it would sensible to allow multiple actions on which to trigger
  to be specified just as you can for normal triggers. I also would like
  an ALL option
 Both are now implemented.
Cool, will check.

 When dealing with an ANY command trigger, your procedure can get fired
 on a command for which we don't have internal support for back parsing
 etc, so only the command tag is non-null. I think that's the way to
 go here, as I don't want to think we need to implement full support for
 command triggers on ALTER OPERATOR FAMILY from the get go.
Thats ok with me.

  Currently the grammar allows options to be passed to command triggers. Do
  we want to keep that? If yes, with the same arcane set of datatypes as
  in data triggers?
  If yes it needs to be wired up.
 In fact it's not the case, you always get called with the same 5
 arguments, all nullable now that we have ANY COMMAND support.
I think you misunderstood me. Check the following excerpt from gram.y:
CreateCmdTrigStmt:
CREATE TRIGGER name TriggerActionTime COMMAND 
trigger_command_list
EXECUTE PROCEDURE func_name '(' TriggerFuncArgs ')'

You have TriggerFuncArgs there but you ignore them.

  * schema qualification:
  An option to add triggers only to a specific schema would be rather neat
  for many scenarios.
 I am not totally sure if it's worth the hassle of defining what happens in
 the edge cases of e.g. moving from one into another schema though.
 I had written down support for a WHEN clause in command triggers, but
 the exact design has yet to be worked out. I'd like to have this
 facility but I'd like it even more if that could happen in a later
 patch.
Hm. I am not sure a generic WHEN clause is needed. In my opinion schema 
qualification would be enough...
A later patch is fine imo.

 In contrast to data triggers, command triggers cannot change what is done.
 They can either prevent it from running or not. Which imo is fine.
 The question I have is whether it would be a good idea to make it easy to
 prevent recursion. Especially if the triggers won't be per schema.
 
 You already have
 
   ALTER TRIGGER foo ON COMMAND create table DISABLE;
 
 that you can use from the trigger procedure itself.  Of course, locking
 is an issue if you want to go parallel on running commands with
 recursive triggers attached.  I think we might be able to skip solving
 that problem in the first run.
hm. Not yet convinced. I need to think about it a bit.


Have fun ;)

Andres



Re: [HACKERS] why local_preload_libraries does require a separate directory ?

2011-12-03 Thread Tomas Vondra
On 3.12.2011 18:53, Tom Lane wrote:
 Tomas Vondra t...@fuzzy.cz writes:
 why the libraries loaded using local_preload_libraries need to be placed
 in a different subdirectory than libraries loaded using
 shared_preload_libraries?

 Security: it lets the DBA constrain which libraries are loadable this way.

But local_preload_libraries can be set only in postgresql.conf, right?
This file is maintained by the DBA, so how does this improve
security?

The problem I'm trying to solve right now is that I do have an extension
that needs to install two .so libraries - one loaded using
shared_preload_libraries, one loaded using local_preload_libraries.

The make install puts both files into $libdir, so I have to copy one of
them to the right directory manually.

 I do understand that leaving the users to load whatever libraries they
 want is a bad idea, but when the library is loaded from postgresql.conf
 it should be safe.

 We don't have a mechanism that would allow different limitations to be
 placed on a GUC variable depending on where the value came from.
 To do what you're proposing would require restricting
 local_preload_libraries to be superuser-only, which would be a net
 decrease in functionality.

Now I'm really confused. AFAIK local_preload_libraries can be set only
from postgresql.conf, not by a user. So the check could be quite simple
- is the backend already started? No: don't check the path. Yes: check
that the path starts with '$libdir/plugins'.

Tomas



Re: [HACKERS] Inlining comparators as a performance optimisation

2011-12-03 Thread Tom Lane
Robert Haas robertmh...@gmail.com writes:
 OK, so I tried to code this up.  Adding the new amproc wasn't too
 difficult (see attached).  It wasn't obvious to me how to tie it into
 the tuplesort infrastructure, though, so instead of wasting time
 guessing what a sensible approach might be I'm going to use one of my
 lifelines and poll the audience (or is that ask an expert?).

OK, I'll take a whack at improving this over the weekend.

regards, tom lane



Re: [HACKERS] [REVIEW] Patch for cursor calling with named parameters

2011-12-03 Thread Kevin Grittner
The patch is in context format, includes docs and additional
regression tests, applies cleanly, passes world regression tests.
The parser changes don't cause any backing up.
 
Discussion on the list seems to have reached a consensus that this
patch implements a useful feature.  A previous version was reviewed.
This version attempts to respond to points previously raised.
 
Overall, it looked good, but there were a few things I would like Yeb
to address before it is marked ready for committer.  I don't
think any of these will take long.
 
(1)  Doc changes:
 
  (a) These contain some unnecessary whitespace changes which make it
  harder to see exactly what changed.
 
  (b) There's one point where "cursors" should be the possessive
  "cursor's".
 
  (c) I think it would be less confusing to be consistent about
  mentioning "positional" and "named" in the same order each
  time. Mixing it up like that is harder to read, at least for
  me.
 
(2)  The error for mixing positional and named parameters should be
moved up.  That seems more important than "too many arguments" or
"provided multiple times" if both are true.
 
(3)  The "-- END ADDITIONAL TESTS" comment shouldn't be added to the
regression tests.
 
The way this parses out the named parameters and then builds the
positional list, with some code from read_sql_construct() repeated in
read_cursor_args() seems a little bit clumsy to me, but I couldn't
think of a better way to do it.
 
-Kevin




[HACKERS] missing rename support

2011-12-03 Thread Peter Eisentraut
I noticed the following object types don't have support for an ALTER ...
RENAME command:

DOMAIN (but ALTER TYPE works)
FOREIGN DATA WRAPPER
OPERATOR
RULE
SERVER

Are there any restrictions why these couldn't be added?

(I stumbled upon this while trying to rename a foreign server, but we
might as well make everything consistent.)





Re: [HACKERS] Why so few built-in range types?

2011-12-03 Thread Dimitri Fontaine
Hi,

I wanted to craft an answer here and Peter nailed it before I could.  I
use ip4r in a bunch of different projects and environments, it's doing a
perfect job, it's simple to use and damn efficient.

The ipv6 support is on the way; parts of it are already in the CVS at
http://cvs.pgfoundry.org/cgi-bin/cvsweb.cgi/ip4r/ip4r/. It's mainly
missing tests, IIRC from a chat with its author, the well-known
PostgreSQL contributor Andrew Gierth.

Really, I wouldn't even consider adding GiST support for inet and cidr.
Their real future was sketched by Tom at the last developer meeting; at
least what I remember him saying is that they should eventually get
shipped as extensions, now that it's easy to do so, and moved out of
core along with some more types in the same bucket.

I could be misremembering which types Tom was talking about, though.

Peter Eisentraut pete...@gmx.net writes:
 - ip4 is fixed-length, so it's much faster.  (Obviously, this is living
 on borrowed time.  Who knows.)

 - Conversely, it might be considered a feature that ip4 only stores IPv4
 addresses.

 - ip4 really only stores a single address, not a netmask, not sometimes
 a netmask, or sometimes a range, or sometimes a network and an address,
 or whatever.  That really seems like the most common use case, and no
 matter what you do with the other types, some stupid netmask will appear
 in your output when you least expect it.

 - Integrates with ip4r, which has GiST support.

 - Some old-school internet gurus worked out why inet and cidr have to
 behave the way they do, which no one else understands, and no one dares
 to discuss, whereas ip4/ip4r are simple and appear to be built for
 practical use.

 Really, it's all about "worse is better".

-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] Why so few built-in range types?

2011-12-03 Thread Dimitri Fontaine
Tom Lane t...@sss.pgh.pa.us writes:
 IIRC, a lot of the basic behavior of the inet/cidr types was designed by
 Paul Vixie (though he's not to blame for their I/O presentation).
 So I'm inclined to doubt that they're as broken as Stephen claims.

The ip4r extension's main use case is range lookups.  You get an IP and
want to know what range it's in:  GiST indexing makes that operation
damn fast, and the ip4r datatype is quite flexible about what a range
is.  Apparently the core types solve other problems that I never had
to solve myself, so I never used them.

Installing ip4r in a database is a routine operation; I could accept
having it by default without blinking now.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support



Re: [HACKERS] Command Triggers

2011-12-03 Thread Andres Freund
On Friday, December 02, 2011 03:09:55 AM Tom Lane wrote:
 Andres Freund and...@anarazel.de writes:
  On Thursday, December 01, 2011 07:21:25 PM Tom Lane wrote:
  Making this work cleanly would be a bigger deal than I think you're
  thinking.
  
  Obviously that depends on the definition of clean...
  
  Changing the grammar to make that explicit seems to involve a bit too
  many changes on a first glance. The cheap way out seems to be to make
  the decision in analyze.c:transformQuery.
  Would that be an acceptable way forward?
 
 ITYM transformStmt, but yeah, somewhere around there is probably
 reasonable. The way I'm imagining this would work is that IntoClause
 disappears from Query altogether: analyze.c would build a utility
 statement
 CreateTableAs, pull the IntoClause out of the SelectStmt structure and
 put it into the utility statement, and then proceed much as we do for
 ExplainStmt (and for the same reasons).
Ok. Because my book turned out to be boring I started looking at this. I 
wonder if it wouldn't be better to do the check directly in raw_parser(). The 
advantage would be that it would magically fix the issue of not logging 
CTAS/SELECT INTO when using log_statement = ddl, and we don't need a snapshot 
or such there, so that shouldn't be a problem.

Any arguments against doing so?

Andres



[HACKERS] cannot read pg_class without having selected a database / is this a bug?

2011-12-03 Thread Tomas Vondra
Hi,

I've written a simple extension that limits the number of connections by
IP/db/user, and I receive this error:

psql: FATAL:  cannot read pg_class without having selected a database

I've found this happens because the extension defines a client auth hook
that reads pg_stat_activity. The really interesting thing is that this
happens only when I start several backends 'at the same time' right
after the cluster is started. From that time, everything works just fine.

So it seems like a race condition or something like that.

I've prepared a simple testcase to demonstrate this issue - see the
files attached. I've put several sleep() calls in there to demonstrate
the timing error.

All you need to do is this:

1) compile the extension (make install)
2) add the extension to shared_preload_libraries
3) restart the cluster
4) start two backends at the same time (within a second or so)


Tomas
#include <stdio.h>
#include <unistd.h>

#include <sys/time.h>
#include <sys/types.h>
#include <sys/ipc.h>

#include "postgres.h"
#include "miscadmin.h"
#include "storage/ipc.h"

#include "libpq/auth.h"
#include "pgstat.h"

#include "executor/executor.h"
#include "commands/dbcommands.h"

#ifdef PG_MODULE_MAGIC
PG_MODULE_MAGIC;
#endif

/* allocates space for the rules */
static void pg_limits_shmem_startup(void);

/* check the rules (using pg_stat_activity) */
static void rules_check(Port *port, int status);

/* Saved hook values in case of unload */
static shmem_startup_hook_type prev_shmem_startup_hook = NULL;

/* Original Hook */
static ClientAuthentication_hook_type prev_client_auth_hook = NULL;

static LWLockId lock;

void		_PG_init(void);
void		_PG_fini(void);

/*
 * Module load callback
 */
void
_PG_init(void)
{
	
	/* can be preloaded only from postgresql.conf */
	if (! process_shared_preload_libraries_in_progress)
		elog(ERROR, "connection_limits_shared has to be loaded using "
					"shared_preload_libraries");
	
	/*
	 * Request additional shared resources.  (These are no-ops if we're not in
	 * the postmaster process.)  We'll allocate or attach to the shared
	 * resources in pg_limits_shmem_startup().
	 */
	RequestAddinLWLocks(1);

	/* Install hooks. */
	prev_shmem_startup_hook = shmem_startup_hook;
	shmem_startup_hook = pg_limits_shmem_startup;
	
	/* Install Hooks */
	prev_client_auth_hook = ClientAuthentication_hook;
	ClientAuthentication_hook = rules_check;

}

/*
 * Module unload callback
 */
void
_PG_fini(void)
{
	/* Uninstall hooks. */
	shmem_startup_hook = prev_shmem_startup_hook;
}

/* This is probably the most important part - allocates the shared 
 * segment, initializes it etc. */
static
void pg_limits_shmem_startup() {
	
	if (prev_shmem_startup_hook)
		prev_shmem_startup_hook();
	
	/*
	 * Create or attach to the shared memory state, including hash table
	 */
	LWLockAcquire(AddinShmemInitLock, LW_EXCLUSIVE);
	
	/* First time through ... */
	lock = LWLockAssign();

	LWLockRelease(AddinShmemInitLock);
	
}

static
void rules_check(Port *port, int status)
{

	int b, nbackends;
	PgBackendStatus *beentry;

	/*
	 * Any other plugins which use ClientAuthentication_hook.
	 */
	if (prev_client_auth_hook)
		prev_client_auth_hook(port, status);

	/*
	 * Inject a short delay if authentication failed.
	 */
	if (status == STATUS_OK)
	{

		/* lock the segment (serializes the backend creation) */
		LWLockAcquire(lock, LW_EXCLUSIVE);
	
		sleep(1);
		
		/* how many backends are already there ? */
		nbackends = pgstat_fetch_stat_numbackends();
		
		/* loop through the backends */
		for (b = 1; b <= nbackends; b++) {
			
			char * usr, * db;
			
			beentry = pgstat_fetch_stat_beentry(b);

			/* pgstatfuncs.c : 630 */
			if (beentry != NULL) {

				db = get_database_name(beentry->st_databaseid);
				usr = GetUserNameFromId(beentry->st_userid);

			} /* (beentry != NULL) */
			
		} /* for (b = 1; b <= nbackends; b++) */

		sleep(4);

		LWLockRelease(lock);
	}

}
# issue.control
comment = '...'
default_version = '1.0.0'
relocatable = true
module_pathname = '$libdir/issue'

# Makefile
MODULE_big = issue
OBJS = issue.o

EXTENSION = issue
MODULES = issue

CFLAGS=`pg_config --includedir-server`

PG_CONFIG = pg_config
PGXS := $(shell $(PG_CONFIG) --pgxs)
include $(PGXS)

all: issue.so

issue.so: issue.o

issue.o : issue.c



Re: [HACKERS] cannot read pg_class without having selected a database / is this a bug?

2011-12-03 Thread Robert Haas
2011/12/3 Tomas Vondra t...@fuzzy.cz:
 psql: FATAL:  cannot read pg_class without having selected a database

 I've found this happens because the extension defines a client auth hook
 that reads pg_stat_activity. The really interesting thing is that this
 happens only when I start several backends 'at the same time' right
 after the cluster is started. From that time, everything works just fine.

I'm surprised this ever works.  To read pg_stat_activity, you need a
relcache entry for it.  And how will you build one without selecting a
database?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] cannot read pg_class without having selected a database / is this a bug?

2011-12-03 Thread Tomas Vondra
On 3.12.2011 23:37, Robert Haas wrote:
 2011/12/3 Tomas Vondra t...@fuzzy.cz:
 psql: FATAL:  cannot read pg_class without having selected a database

 I've found this happens because the extension defines a client auth hook
 that reads pg_stat_activity. The really interesting thing is that this
 happens only when I start several backends 'at the same time' right
 after the cluster is started. From that time, everything works just fine.
 
 I'm surprised this ever works.  To read pg_stat_activity, you need a
 relcache entry for it.  And how will you build one without selecting a
 database?

What do you mean by "selecting a database"?

I do select a database when executing a psql, but I guess you mean
something that initializes the relcache entry and that's probably
executed after the auth hook.

That might explain why it fails at first and then works just fine,
although it's a bit strange. Wouldn't that mean you can't access any
catalogs from the auth hook?

Tomas



Re: [HACKERS] [PATCH] PostgreSQL fails to build with 32bit MinGW-w64

2011-12-03 Thread Andrew Dunstan



On 12/03/2011 09:59 AM, Magnus Hagander wrote:

On Sat, Dec 3, 2011 at 15:49, NISHIYAMA Tomoaki
tomoa...@staff.kanazawa-u.ac.jp  wrote:

Hi,


Have you verified if this affects _MSC_VER < 1400? Suddenly that
branch would care about HAVE_CRTDEFS_H, and I'm not sure if that's
something we need to worry about.


I have no MSVC. In that sense it is not verified in fact, and I hope
those who know it well would kindly comment on it.

However, it appears that pg_config.h is not created through
configure, but just copied from pg_config.h.win32 in those
compilers and thus HAVE_CRTDEFS_H will not be defined.
So, I think this code fragment will not be enabled with
_MSC_VER < 1400.

Hmm, true. Unless HAVE_CRTDEFS_H is defined by the system, which it
likely isn't - I was confusing it with the kind of defines that MSVC
tends to stick in their own header files, and thought that's what you
were testing for.




Yes, but there's a deal more work to do here. This whole thing is 
falling over in my build environment (64 bit Windows 7, MinGW/MSys, the 
machine that runs pitta on the buildfarm.)


This is a long way from a done deal.

cheers

andrew



[HACKERS] WARNING: pgstat wait timeout

2011-12-03 Thread Anders Steinlein
While doing some pgbench testing on a new server with 9.1.1, I noticed 
quite a lot of $subject in the logs. I did some Googling and found this 
previous thread: 
http://archives.postgresql.org/pgsql-hackers/2010-01/msg02897.php It 
doesn't seem like the issue was resolved?


I did:
# pgbench  -i -s 1000 bench1
# pgbench -c 8 -j 4 -T 1000 bench1

The server is:
PostgreSQL 9.1.1
Ubuntu 10.04 2.6.32-35-generic #78-Ubuntu SMP x86_64
Quad Core Xeon, 22GB RAM, 2x SATA RAID1

The warning seems to come on average 3-4 times a minute:
2011-12-03 23:07:31 CET::@:[13728]: WARNING:  pgstat wait timeout
2011-12-03 23:07:37 CET::@:[13728]: WARNING:  pgstat wait timeout
2011-12-03 23:07:47 CET::@:[13556]: WARNING:  pgstat wait timeout
2011-12-03 23:07:52 CET::@:[13729]: WARNING:  pgstat wait timeout

Some non-default postgresql.conf settings:
shared_buffers = 6GB
effective_cache_size = 12GB
work_mem = 16MB
synchronous_commit = off
checkpoint_segments = 16
checkpoint_completion_target = 0.9
autovacuum_max_workers = 6

Is this a bug, something wrong with my setup, or simply harmless noise 
(although annoying in that case)?


Regards,
-- a.



Re: [HACKERS] [PATCH] PostgreSQL fails to build with 32bit MinGW-w64

2011-12-03 Thread Andrew Dunstan



On 12/03/2011 06:12 PM, Andrew Dunstan wrote:



On 12/03/2011 09:59 AM, Magnus Hagander wrote:

On Sat, Dec 3, 2011 at 15:49, NISHIYAMA Tomoaki
tomoa...@staff.kanazawa-u.ac.jp  wrote:

Hi,


Have you verified if this affects _MSC_VER < 1400? Suddenly that
branch would care about HAVE_CRTDEFS_H, and I'm not sure if that's
something we need to worry about.


I have no MSVC. In that sense it is not verified in fact, and I hope
those who know it well would kindly comment on it.

However, it appears that pg_config.h is not created through
configure, but just copied from pg_config.h.win32 in those
compilers and thus HAVE_CRTDEFS_H will not be defined.
So, I think this code fragment will not be enabled with
_MSC_VER < 1400

Hmm, true. Unless HAVE_CRTDEFS_H is defined by the system, which it
likely isn't - I was confusing it with the kind of defines that MSVC
tends to stick in their own header files, and thought that's what you
were testing for.




Yes, but there's a deal more work to do here. This whole thing is 
falling over in my build environment (64 bit Windows 7, MinGW/MSys, 
the machine that runs pitta on the buildfarm.)


This is a long way from a done deal.



In particular, it's a major mess because it does this (or at least the 
version I'm using does):


   #define stat _stat64


which plays merry hell with pgwin32_safestat(). Working around that 
looks very unpleasant indeed.



cheers

andrew



Re: [HACKERS] Command Triggers

2011-12-03 Thread Andres Freund
On Friday, December 02, 2011 03:09:55 AM Tom Lane wrote:
 Andres Freund and...@anarazel.de writes:
  On Thursday, December 01, 2011 07:21:25 PM Tom Lane wrote:
  Making this work cleanly would be a bigger deal than I think you're
  thinking.
  
  Obviously that depends on the definition of clean...
  
  Changing the grammar to make that explicit seems to involve a bit too
  many changes on a first glance. The cheap way out seems to be to make
  the decision in analyze.c:transformQuery.
  Would that be an acceptable way forward?
 
 ITYM transformStmt, but yeah, somewhere around there is probably
 reasonable. The way I'm imagining this would work is that IntoClause
 disappears from Query altogether: analyze.c would build a utility
 statement
 CreateTableAs, pull the IntoClause out of the SelectStmt structure and
 put it into the utility statement, and then proceed much as we do for
 ExplainStmt (and for the same reasons).
I have a patch which basically does all this minus some cleanup. I currently 
do the differentiation for SELECT INTO (not CREATE TABLE AS, that's done in 
gram.y) in raw_parser, but that's quick to move if others don't agree on that.

I have two questions now:

First, does anybody think it would be worth getting rid of the duplication 
from OpenIntoRel (formerly from execMain.c) with regard to DefineRelation()?
I noticed that there is already some divergence between the two. E.g. CREATE TABLE 
frak TABLESPACE pg_global AS SELECT 1; is possible while it would be forbidden 
via a plain CREATE TABLE. (I will send a fix for this separately.)


Secondly, I am currently wondering whether it would be a good idea to use the 
ModifyTable infrastructure for doing the insertion instead of a separate 
DestReceiver that's only used for CTAS.
It looks a bit too complicated to do this without losing the bulk insert and 
HEAP_INSERT_SKIP_WAL optimizations.

Andres



Re: [HACKERS] SQLDA fix for ECPG

2011-12-03 Thread Michael Meskes
On Sat, Nov 19, 2011 at 10:56:03PM +0100, Boszormenyi Zoltan wrote:
 Hopefully last turn in this topic. For NUMERIC types, the safe minimum
 alignment is a pointer because there are 5 int members followed by
 two pointer members in this struct. I got a crash from this with a lucky
 query and dataset. The DECIMAL struct is safe because the digits[] array
 is embedded there.
 ...

Applied, thanks.

Michael
-- 
Michael Meskes
Michael at Fam-Meskes dot De, Michael at Meskes dot (De|Com|Net|Org)
Michael at BorussiaFan dot De, Meskes at (Debian|Postgresql) dot Org
Jabber: michael.meskes at googlemail dot com
VfL Borussia! Força Barça! Go SF 49ers! Use Debian GNU/Linux, PostgreSQL



Re: [HACKERS] Review: Non-inheritable check constraints

2011-12-03 Thread Alvaro Herrera

Excerpts from Alex Hunsaker's message of dom oct 09 03:40:36 -0300 2011:
 On Fri, Oct 7, 2011 at 21:30, Nikhil Sontakke nikkh...@gmail.com wrote:
  Hi Alex,
 
  I guess we both are in agreement with each other :)
 
  After sleeping over it, I think that check is indeed dead code with this new
  non-inheritable check constraints functionality in place. So unless you have
 some other comments, we can mark this as 'Ready for Committer'.
 
  Again, thanks for the thorough review and subsequent changes!
 
 PFA an updated patch with the check removed and a comment or two added.
 
 I've also marked it ready.

I had a look at this patch today.  The pg_dump bits conflict with
another patch I committed a few days ago, so I'm about to merge them.
I have one question which is about this hunk:

@@ -2312,6 +2317,11 @@ MergeWithExistingConstraint(Relation rel, char *ccname, 
Node *expr,
 				con->conislocal = true;
 			else
 				con->coninhcount++;
+			if (is_only)
+			{
+				Assert(is_local);
+				con->conisonly = true;
+			}
 			simple_heap_update(conDesc, tup->t_self, tup);
 			CatalogUpdateIndexes(conDesc, tup);
 			break;


Is it okay to modify an existing constraint to mark it as "only", even
if it was originally inheritable?  This is not clear to me.  Maybe the
safest course of action is to raise an error.  Or maybe I'm misreading
what it does (because I haven't compiled it yet).

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] [PATCH] PostgreSQL fails to build with 32bit MinGW-w64

2011-12-03 Thread NISHIYAMA Tomoaki
Hi,

On 2011/12/04, at 9:45, Andrew Dunstan wrote:
 Yes, but there's a deal more work to do here. This whole thing is falling 
 over in my build environment (64 bit Windows 7, MinGW/MSys, the machine that 
 runs pitta on the buildfarm.)
 
 This is a long way from a done deal.
 
 In particular, it's a major mess because it does this (or at least the 
 version I'm using does):
 
   #define stat _stat64
 
 which plays merry hell with pgwin32_safestat(). Working around that looks 
 very unpleasant indeed.


Thank you for testing and reporting.
Would you mind giving me a bit more information on the environment,
especially the MinGW/MSYS versions and any other component that is
required?

I tested on a virtual machine running 64 bit Windows 7 SP1,
installing MinGW/MSYS in a clean state and then compiling.
The versions coming with the pre-packaged repository catalogues of 20110802
(gcc 4.5.2) and the latest catalogue (gcc 4.6.1-2) compile successfully.





Re: [HACKERS] cannot read pg_class without having selected a database / is this a bug?

2011-12-03 Thread Tom Lane
Tomas Vondra t...@fuzzy.cz writes:
 That might explain why it fails at first and then works just fine,
 although it's a bit strange. Wouldn't that mean you can't access any
 catalogs from the auth hook?

It should be possible to access shared catalogs from an auth hook.
pg_stat_activity is neither shared nor a catalog.  Like Robert,
I find it astonishing that this works ever, because the info needed
simply isn't available until you've connected to a particular database.
The fact that the view is actually defined the same in every database
doesn't enter into that ...

regards, tom lane



Re: [HACKERS] Review: Non-inheritable check constraints

2011-12-03 Thread Nikhil Sontakke
 I had a look at this patch today.  The pg_dump bits conflict with
 another patch I committed a few days ago, so I'm about to merge them.
 I have one question which is about this hunk:


Thanks for taking a look Alvaro.


 @@ -2312,6 +2317,11 @@ MergeWithExistingConstraint(Relation rel, char
 *ccname, Node *expr,
  				con->conislocal = true;
  			else
  				con->coninhcount++;
 +			if (is_only)
 +			{
 +				Assert(is_local);
 +				con->conisonly = true;
 +			}
  			simple_heap_update(conDesc, tup->t_self, tup);
  			CatalogUpdateIndexes(conDesc, tup);
  			break;


 Is it okay to modify an existing constraint to mark it as "only", even
 if it was originally inheritable?  This is not clear to me.  Maybe the
 safest course of action is to raise an error.  Or maybe I'm misreading
 what it does (because I haven't compiled it yet).


Hmmm, good question. IIRC, the patch will pass is_only as true only if it is
going to be a locally defined, non-inheritable constraint. So I went by the
logic that since it is ok to merge and mark a constraint as locally
defined, it should be ok to mark it non-inheritable from this moment on
with that new local definition.

Regards,
Nikhils



[HACKERS] planner fails on HEAD

2011-12-03 Thread Pavel Stehule
Hello

I have a relative simple query

SELECT q.object_id
  FROM queue q
   JOIN
   outgoing.cps_forms f
   ON f.id = q.object_id AND q.object_type = 'cp'
   JOIN
   flat_file_ex fe
   ON fe.id = q.rejected_flat_file_id
 WHERE q.rejected_result = 'ACTV';

The planner fails on this query


#0  0x00cf7424 in __kernel_vsyscall ()
#1  0x004752f1 in raise () from /lib/libc.so.6
#2  0x00476d5e in abort () from /lib/libc.so.6
#3  0x083a1dfe in ExceptionalCondition (conditionName=0x8505474
"!(innerstartsel <= innerendsel)", errorType=0x83db178
"FailedAssertion", fileName=0x8505140 "costsize.c", lineNumber=1937)
at assert.c:57
#4  0x08244cea in cost_mergejoin (path=0x93acdd4, root=0x93935d4,
sjinfo=0xbfbc9504) at costsize.c:1937
#5  0x0826f859 in create_mergejoin_path (root=0x93935d4,
joinrel=0x93aad80, jointype=JOIN_INNER, sjinfo=0xbfbc9504,
outer_path=0x93ac0f8, inner_path=0x93ac080,
restrict_clauses=0x93acce0, pathkeys=0x0,
mergeclauses=0x93adcb4, outersortkeys=0x93adc98,
innersortkeys=0x93adcd0) at pathnode.c:1576
#6  0x0824cee4 in sort_inner_and_outer (root=0x93935d4,
joinrel=0x93aad80, outerrel=0x93a9a20, innerrel=0x9393e04,
jointype=JOIN_INNER, sjinfo=0xbfbc9504, restrictlist=0x93acce0) at
joinpath.c:306
#7  add_paths_to_joinrel (root=0x93935d4, joinrel=0x93aad80,
outerrel=0x93a9a20, innerrel=0x9393e04, jointype=JOIN_INNER,
sjinfo=0xbfbc9504, restrictlist=0x93acce0) at joinpath.c:103
#8  0x0824ea12 in make_join_rel (root=0x93935d4, rel1=0x9393e04,
rel2=0x93a9a20) at joinrels.c:733
#9  0x0824ee48 in make_rels_by_clause_joins (root=0x93935d4, level=2)
at joinrels.c:268
#10 join_search_one_level (root=0x93935d4, level=2) at joinrels.c:99
#11 0x082410bf in standard_join_search (root=0x93935d4,
levels_needed=3, initial_rels=0x93ac998) at allpaths.c:1127
#12 0x082412cf in make_rel_from_joinlist (root=0x93935d4,
joinlist=value optimized out) at allpaths.c:1058
#13 0x08241390 in make_one_rel (root=0x93935d4, joinlist=0x93aad64) at
allpaths.c:103
#14 0x082593d0 in query_planner (root=0x93935d4, tlist=0x939f740,
tuple_fraction=0, limit_tuples=-1, cheapest_path=0xbfbc98dc,
sorted_path=0xbfbc98d8, num_groups=0xbfbc98d0) at planmain.c:259
#15 0x0825b24d in grouping_planner (root=0x93935d4, tuple_fraction=0)
at planner.c:1240
#16 0x0825cfbd in subquery_planner (glob=0x939f37c, parse=0x9370b08,
parent_root=0x0, hasRecursion=0 '\000', tuple_fraction=0,
subroot=0xbfbc9a7c) at planner.c:524
#17 0x0825d8dd in standard_planner (parse=0x9370b08, cursorOptions=0,
boundParams=0x0) at planner.c:196
#18 0x082d1b2e in pg_plan_query (querytree=0x9370b08, cursorOptions=0,
boundParams=0x0) at postgres.c:720
#19 0x082d1c33 in pg_plan_queries (querytrees=0x939f360,
cursorOptions=0, boundParams=0x0) at postgres.c:779
#20 0x082d26fc in exec_simple_query (argc=2, argv=0x92f95a4,
username=0x92f94a0 "pavel") at postgres.c:944
#21 PostgresMain (argc=2, argv=0x92f95a4, username=0x92f94a0 "pavel")
at postgres.c:3859
#22 0x082844ae in BackendRun (port=0x9316600) at postmaster.c:3587
#23 BackendStartup (port=0x9316600) at postmaster.c:3272
#24 0x08284b58 in ServerLoop () at postmaster.c:1350
#25 0x082856f3 in PostmasterMain (argc=3, argv=0x92f8308) at postmaster.c:1110
#26 0x0821cd00 in main (argc=3, argv=0x92f8308) at main.c:199

With a slightly modified query, the planner does:

ohs=# explain   SELECT q.object_id
  FROM queue q
   JOIN
   outgoing.cps_forms f
   ON f.id = q.object_id AND q.object_type = 'cp'
   JOIN
   flat_file_ex fe
   ON fe.id = q.rejected_flat_file_id
 WHERE q.rejected_result = 'ACTVa';
 QUERY PLAN

 Nested Loop  (cost=0.00..154.05 rows=1 width=4)
   ->  Nested Loop  (cost=0.00..145.77 rows=1 width=8)
         ->  Seq Scan on queue q  (cost=0.00..137.49 rows=1 width=8)
               Filter: ((object_type = 'cp'::bpchar) AND
(rejected_result = 'ACTVa'::bpchar))
         ->  Index Only Scan using cps_forms_pkey on cps_forms f
(cost=0.00..8.27 rows=1 width=4)
               Index Cond: (id = q.object_id)
   ->  Index Only Scan using flat_file_ex_pkey on flat_file_ex fe
(cost=0.00..8.27 rows=1 width=4)
         Index Cond: (id = q.rejected_flat_file_id)
(8 rows)

Data and necessary indexes should be correct

ohs=# \dt+
 List of relations
 Schema │ Name  │ Type  │  Owner   │  Size   │ Description
┼───┼───┼──┼─┼─
 public │ cps_form  │ table │ postgres │ 48 kB   │
 public │ flat_file_ex  │ table │ pavel│ 2632 kB │
 public │ np_form   │ table │ pavel│ 432 kB  │
 public │ np_return_number_form │ table │ pavel│ 48 kB   │
 public │ queue │ table │ pavel│ 568 kB  │
(5 rows)

ohs=# \dt+ outgoing.*
 List of 

Re: [HACKERS] planner fails on HEAD

2011-12-03 Thread Pavel Stehule
A plan for the modified query is:

ohs=# explain analyze SELECT object_id,
   inserted,
   'ASSIGN_RSLT',
   order_id,
   2,
   seqnum,
   rejected_flat_file_id,
   true
FROM (
   SELECT q.object_id,
  fe.inserted,
  q.order_id,
  q.seqnum,
  q.rejected_flat_file_id,
  q.rejected_result
  FROM queue q
   JOIN
   outgoing.cps_forms f
   ON f.id = q.object_id AND q.object_type = 'cp'
   JOIN
   flat_file_ex fe
   ON fe.id = q.rejected_flat_file_id
offset 0) x
 WHERE rejected_result = 'ACTV';

QUERY PLAN

 Subquery Scan on x  (cost=11.68..192.72 rows=1 width=24) (actual
time=1.748..12.398 rows=139 loops=1)
   Filter: (x.rejected_result = 'ACTV'::bpchar)
   Rows Removed by Filter: 17
   ->  Limit  (cost=11.68..192.65 rows=6 width=29) (actual
time=1.739..11.655 rows=156 loops=1)
         ->  Nested Loop  (cost=11.68..192.65 rows=6 width=29) (actual
time=1.732..11.036 rows=156 loops=1)
               ->  Hash Join  (cost=11.68..138.77 rows=15 width=21)
(actual time=1.459..6.987 rows=186 loops=1)
                     Hash Cond: (q.object_id = f.id)
                     ->  Seq Scan on queue q  (cost=0.00..126.24
rows=186 width=21) (actual time=0.032..4.658 rows=186 loops=1)
                           Filter: (object_type = 'cp'::bpchar)
                           Rows Removed by Filter: 4313
                     ->  Hash  (cost=9.08..9.08 rows=208 width=4)
(actual time=1.402..1.402 rows=208 loops=1)
                           Buckets: 1024  Batches: 1  Memory Usage: 5kB
                           ->  Seq Scan on cps_forms f
(cost=0.00..9.08 rows=208 width=4) (actual time=0.008..0.576 rows=208
loops=1)
               ->  Index Scan using flat_file_ex_pkey on flat_file_ex
fe  (cost=0.00..3.58 rows=1 width=12) (actual time=0.008..0.010 rows=1
loops=186)
                     Index Cond: (id = q.rejected_flat_file_id)
 Total runtime: 12.846 ms
(16 rows)
