Re: [HACKERS] LibreOffice driver 1: Building libpq with Mozilla LDAP instead of OpenLDAP

2011-12-16 Thread Lionel Elie Mamane
On Tue, Dec 13, 2011 at 11:45:28AM +0100, Magnus Hagander wrote:
 On Tue, Dec 13, 2011 at 11:15, Lionel Elie Mamane lio...@mamane.lu wrote:

 LibreOffice can be configured at build-time to use Mozilla LDAP or
 OpenLDAP. We patched postgresql's configure.in to build libpq with
 Mozilla LDAP when requested to do so.

 I'd also be happy to extract from our patch the parts that are
 relevant for integration in postgresql proper, and massage them into
 the right modularity / form. Are you interested?

 Please do.

The patch is attached. I got everybody that touched the patch in the
LibreOffice git repo to agree to the PostgreSQL license for the patch.

The changes to tools/msvc are untested since I don't personally have a
Microsoft Windows build environment and LibreOffice uses
src/interfaces/libpq/win32.mak to build libpq.

The patch introduces a preprocessor macro USE_MICROSOFT_LDAP to enable
the Microsoft LDAP part to replace the previously used #ifdef WIN32,
since one can also use Mozilla LDAP on MS Windows. I found that
cleaner than putting #if defined(WIN32) && !defined(USE_MOZLDAP) everywhere.

Except for that, the only change is to the config/build system proper,
to recognise and use the Mozilla LDAP library.

-- 
Lionel
diff --recursive -u misc/build/postgresql-9.1.1/configure.in misc/build/postgresql-9.1.1.patched/configure.in
--- misc/build/postgresql-9.1.1/configure.in	2011-09-22 23:57:57.0 +0200
+++ misc/build/postgresql-9.1.1.patched/configure.in	2011-12-14 13:10:11.0 +0100
@@ -662,6 +662,13 @@
 AC_MSG_RESULT([$with_ldap])
 AC_SUBST(with_ldap)
 
+AC_MSG_CHECKING([whether to use Mozilla C SDK for LDAP support])
+PGAC_ARG_BOOL(with, mozldap, no,
+  [build with Mozilla LDAP support],
+  [AC_DEFINE([USE_MOZLDAP], 1, [Define to 1 to use the Mozilla LDAP C SDK instead of platform default (OpenLDAP or Microsoft LDAP). (--with-mozldap)])])
+AC_MSG_RESULT([$with_mozldap])
+AC_SUBST(with_mozldap)
+
 
 #
 # Bonjour
@@ -1077,7 +1084,7 @@
 fi
 
 if test $with_ldap = yes ; then
-  if test $PORTNAME != win32; then
+  if test $PORTNAME != win32 || test $with_mozldap = yes; then
  AC_CHECK_HEADERS(ldap.h, [],
   [AC_MSG_ERROR([header file <ldap.h> is required for LDAP])])
   else
@@ -1086,6 +1093,7 @@
   [AC_INCLUDES_DEFAULT
 #include <windows.h>
   ])
+ AC_DEFINE([USE_MICROSOFT_LDAP], 1, [Defined when using Microsoft LDAP])
   fi
 fi
 
@@ -1498,7 +1506,18 @@
 # We can test for libldap_r only after we know PTHREAD_LIBS
 if test $with_ldap = yes ; then
   _LIBS=$LIBS
-  if test $PORTNAME != win32; then
+  if test $with_mozldap = yes; then
+if test $PORTNAME != win32; then
+  mozlibname=ldap50
+else
+  mozlibname=nsldap32v50
+fi
+AC_CHECK_LIB($mozlibname, ldap_bind, [],
+		 [AC_MSG_ERROR([library $mozlibname is required for Mozilla LDAP])],
+		 [$PTHREAD_CFLAGS $PTHREAD_LIBS $EXTRA_LDAP_LIBS])
+LDAP_LIBS_FE="-l$mozlibname $EXTRA_LDAP_LIBS"
+LDAP_LIBS_BE="-l$mozlibname $EXTRA_LDAP_LIBS"
+  elif test $PORTNAME != win32; then
 AC_CHECK_LIB(ldap, ldap_bind, [],
 		 [AC_MSG_ERROR([library 'ldap' is required for LDAP])],
 		 [$EXTRA_LDAP_LIBS])
diff --recursive -u misc/build/postgresql-9.1.1/src/backend/libpq/auth.c misc/build/postgresql-9.1.1.patched/src/backend/libpq/auth.c
--- misc/build/postgresql-9.1.1/src/backend/libpq/auth.c	2011-09-22 23:57:57.0 +0200
+++ misc/build/postgresql-9.1.1.patched/src/backend/libpq/auth.c	2011-12-14 13:10:11.0 +0100
@@ -93,11 +93,7 @@
  *
  */
 #ifdef USE_LDAP
-#ifndef WIN32
-/* We use a deprecated function to keep the codepath the same as win32. */
-#define LDAP_DEPRECATED 1
-#include <ldap.h>
-#else
+#ifdef USE_MICROSOFT_LDAP
 #include <winldap.h>
 
 /* Correct header from the Platform SDK */
@@ -109,6 +105,10 @@
 		   IN PLDAPControlA * ServerControls,
 			IN PLDAPControlA * ClientControls
 );
+#else
+/* We use a deprecated function to keep the codepath the same as win32. */
+#define LDAP_DEPRECATED 1
+#include <ldap.h>
 #endif
 
 static int	CheckLDAPAuth(Port *port);
@@ -2043,7 +2043,7 @@
 	*ldap = ldap_init(port->hba->ldapserver, port->hba->ldapport);
 	if (!*ldap)
 	{
-#ifndef WIN32
+#ifndef USE_MICROSOFT_LDAP
 		ereport(LOG,
 (errmsg("could not initialize LDAP: error code %d",
 		errno)));
@@ -2065,7 +2065,7 @@
 
 	if (port->hba->ldaptls)
 	{
-#ifndef WIN32
+#ifndef USE_MICROSOFT_LDAP
 		if ((r = ldap_start_tls_s(*ldap, NULL, NULL)) != LDAP_SUCCESS)
 #else
 		static __ldap_start_tls_sA _ldap_start_tls_sA = NULL;
diff --recursive -u misc/build/postgresql-9.1.1/src/interfaces/libpq/fe-connect.c misc/build/postgresql-9.1.1.patched/src/interfaces/libpq/fe-connect.c
--- misc/build/postgresql-9.1.1/src/interfaces/libpq/fe-connect.c	2011-09-22 23:57:57.0 +0200
+++ misc/build/postgresql-9.1.1.patched/src/interfaces/libpq/fe-connect.c	2011-12-14 

Re: [HACKERS] LibreOffice driver 1: Building libpq with Mozilla LDAP instead of OpenLDAP

2011-12-16 Thread Lionel Elie Mamane
On Wed, Dec 14, 2011 at 10:05:26AM -0500, Tom Lane wrote:
 Pavel Golub pa...@microolap.com writes:
 You wrote:

 TL about OSX though.  (You're aware that Apple ships a perfectly fine
 TL libpq.so in Lion, no?)

 I've not made an attempt to use it directly myself, but it sure looks
 like it should do what the OP wants.

Yes, it looks like that, except that it is not available on older MacOS X
releases; LibreOffice wished to stay compatible with 10.4.

-- 
Lionel

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] LibreOffice driver 2: MIT Kerberos vs Microsoft Kerberos

2011-12-16 Thread Lionel Elie Mamane
On Tue, Dec 13, 2011 at 05:09:01PM -0500, Stephen Frost wrote:
 * Greg Smith (g...@2ndquadrant.com) wrote:

 Given that pgAdmin III has given up on MIT KRB5, would you feel
 doing the same is appropriate for LibreOffice too?

 Yes, I'd encourage LibreOffice to drop MIT Kerberos for Windows
 from their configure/install of libpq on Windows.

Thanks for the advice, we won't enable it.

-- 
Lionel

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Caching for stable expressions with constant arguments v3

2011-12-16 Thread Marti Raudsepp
On Fri, Dec 16, 2011 at 09:13, Greg Smith g...@2ndquadrant.com wrote:
 I think what would be helpful here next is a self-contained benchmarking
 script.

Alright, a simple script is attached.

 One of the first questions I have is whether
 the performance changed between there and your v5.

Not those testcases anyway, since the executor side changed very
little since my original patch. I'm more worried about the planner, as
that's where the bulk of the code is. There is a small but measurable
regression there (hence the latter 2 tests).

I'm running with shared_buffers=256MB

Unpatched Postgres (git commit 6d09b210):

select * from ts where ts between to_timestamp('2005-01-01',
'YYYY-MM-DD') and to_timestamp('2005-01-01', 'YYYY-MM-DD');
tps = 1.194965 (excluding connections establishing)
tps = 1.187269 (excluding connections establishing)
tps = 1.192899 (excluding connections establishing)
select * from ts where ts < now();
tps = 8.986270 (excluding connections establishing)
tps = 8.974889 (excluding connections establishing)
tps = 8.976249 (excluding connections establishing)
/*uncachable*/ select * from one where ts >=
to_date(clock_timestamp()::date::text, 'YYYY-MM-DD') and ts <
(to_date(clock_timestamp()::date::text, 'YYYY-MM-DD') + interval '1
year')
tps = 11647.167712 (excluding connections establishing)
tps = 11836.858624 (excluding connections establishing)
tps = 11658.372658 (excluding connections establishing)
/*cachable*/ select * from one where ts >= to_date(now()::date::text,
'YYYY-MM-DD') and ts < (to_date(now()::date::text, 'YYYY-MM-DD') +
interval '1 year')
tps = 9762.035996 (excluding connections establishing)
tps = 9695.627270 (excluding connections establishing)
tps = 9791.141908 (excluding connections establishing)

Patched Postgres (v5 applied on top of above commit):

select * from ts where ts between to_timestamp('2005-01-01',
'YYYY-MM-DD') and to_timestamp('2005-01-01', 'YYYY-MM-DD');
tps = 8.580669 (excluding connections establishing)
tps = 8.583070 (excluding connections establishing)
tps = 8.544887 (excluding connections establishing)
select * from ts where ts < now();
tps = 10.467226 (excluding connections establishing)
tps = 10.429396 (excluding connections establishing)
tps = 10.441230 (excluding connections establishing)
/*uncachable*/ select * from one where ts >=
to_date(clock_timestamp()::date::text, 'YYYY-MM-DD') and ts <
(to_date(clock_timestamp()::date::text, 'YYYY-MM-DD') + interval '1
year')
tps = 11578.768193 (excluding connections establishing)
tps = 11594.920258 (excluding connections establishing)
tps = 11667.866443 (excluding connections establishing)
/*cachable*/ select * from one where ts >= to_date(now()::date::text,
'YYYY-MM-DD') and ts < (to_date(now()::date::text, 'YYYY-MM-DD') +
interval '1 year')
tps = 9505.943115 (excluding connections establishing)
tps = 9502.316023 (excluding connections establishing)
tps = 9546.047208 (excluding connections establishing)

Regards,
Marti


bench_cache.sh
Description: Bourne shell script

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Moving more work outside WALInsertLock

2011-12-16 Thread Heikki Linnakangas

On 16.12.2011 05:27, Tom Lane wrote:

* We write a WAL record that starts 8 bytes before a sector boundary,
so that the prev_link is in one sector and the rest of the record in
the next one(s).


prev-link is not the first field in the header. The CRC is.


* Time passes, and we recycle that WAL file.

* We write another WAL record that starts 8 bytes before the same sector
boundary, so that the prev_link is in one sector and the rest of the
record in the next one(s).

* System crashes, after having written out the earlier sector but not
the later one(s).

On restart, the replay code will see a prev_link that matches what it
expects.  If the CRC for the remainder of the record is not dependent
on the prev_link, then the remainder of the old record will look good
too, and we'll attempt to replay it, n*16MB too late.


The CRC would be in the previous sector with the prev-link, so the CRC 
of the old record would have to match the CRC of the new record. I guess 
that's not totally impossible, though - there could be some WAL-logged 
operations where the payload of the WAL record is often exactly the 
same. Like a heap clean record, when the same page is repeatedly pruned.
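
To make that concrete, here is a small standalone sketch (illustration only, 
not the actual xlog.c code; a generic bitwise CRC-32 stands in for pg_crc32): 
with identical payloads in a recycled segment, a payload-only CRC cannot tell 
the stale record from the expected one, while a CRC that also covers the 
prev-link can.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Generic bitwise CRC-32; only a stand-in for PostgreSQL's pg_crc32. */
static uint32_t
crc32_buf(const void *data, size_t len)
{
	const uint8_t *p = data;
	uint32_t	crc = 0xFFFFFFFFu;

	while (len--)
	{
		crc ^= *p++;
		for (int i = 0; i < 8; i++)
			crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1u));
	}
	return ~crc;
}

int
main(void)
{
	const char	payload[] = "identical heap-clean payload";	/* same bytes both times */
	uint64_t	old_prev = 0x0A12345678;	/* prev-link of the stale record */
	uint64_t	new_prev = 0x0B12345678;	/* prev-link replay expects now */
	uint8_t		buf[sizeof(uint64_t) + sizeof(payload)];

	/* Payload-only CRC: identical for the stale and the expected record. */
	printf("payload-only CRC (same for both records): %08X\n",
		   crc32_buf(payload, sizeof(payload)));

	/* CRCs that also cover the prev-link: the stale record no longer validates. */
	memcpy(buf, &old_prev, sizeof(old_prev));
	memcpy(buf + sizeof(old_prev), payload, sizeof(payload));
	uint32_t	crc_old = crc32_buf(buf, sizeof(buf));

	memcpy(buf, &new_prev, sizeof(new_prev));
	uint32_t	crc_new = crc32_buf(buf, sizeof(buf));

	printf("prev-link-covering CRCs differ: %s\n",
		   crc_old != crc_new ? "yes" : "no");
	return 0;
}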



Including the prev_link in the CRC adds a significant amount of
protection against such problems.  We should not remove this protection
in the name of shaving a few cycles.


Yeah. I did some quick testing with a patch to leave prev-link out of 
the calculation, and move the record CRC calculation outside the lock, 
too. I don't remember the numbers, but while it did make some 
difference, it didn't seem worthwhile.


Anyway, I'm looking at ways to make the memcpy() of the payload happen 
without the lock, in parallel, and once you do that the record header 
CRC calculation can be done in parallel, too. That makes it irrelevant 
from a performance point of view whether the prev-link is included in 
the CRC or not.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER TABLE lock strength reduction patch is unsafe

2011-12-16 Thread Simon Riggs
On Wed, Nov 30, 2011 at 4:42 AM, Robert Haas robertmh...@gmail.com wrote:
 It strikes me that there are really two separate problems here.

 1. If you are scanning a system catalog using SnapshotNow, and a
 commit happens at just the right moment, you might see two versions of
 the row or zero rather than one.  You'll see two if you scan the old
 row version, then the concurrent transaction commits, and then you
 scan the new one.  You'll see zero if you scan the new row (ignoring
 it, since it isn't committed yet) and then the concurrent transaction
 commits, and then you scan the old row.

That is a bug and one we should fix. I supplied a patch for that,
written to Tom's idea for how to solve it.

I will apply that, unless there are objections.

 2. Other backends may have data in the relcache or catcaches which
 won't get invalidated until they do AcceptInvalidationMessages().
 That will always happen no later than the next time they lock the
 relation, so if you are holding AccessExclusiveLock then you should be
 OK: no one else holds any lock, and they'll have to go get one before
 doing anything interesting.  But if you are holding any weaker lock,
 there's nothing to force AcceptInvalidationMessages to happen before
 you reuse those cache entries.

Yes, inconsistent cache entries will occur if we allow catalog updates
while the table is already locked by others.

The question is whether that is a problem in all cases.

With these ALTER TABLE subcommands, I don't see any problem with
different backends seeing different values.

/*
 * These subcommands affect general strategies for performance
 * and maintenance, though don't change the semantic results
 * from normal data reads and writes. Delaying an ALTER TABLE
 * behind currently active writes only delays the point where
 * the new strategy begins to take effect, so there is no
 * benefit in waiting. In this case the minimum restriction
 * applies: we don't currently allow concurrent catalog
 * updates.
 */
case AT_SetStatistics:
// only used by ANALYZE, which is shut out by the ShareUpdateExclusiveLock

case AT_ClusterOn:
case AT_DropCluster:
// only used by CLUSTER, which is shut out because it needs AccessExclusiveLock

case AT_SetRelOptions:
case AT_ResetRelOptions:
case AT_SetOptions:
case AT_ResetOptions:
case AT_SetStorage:
// not critical

case AT_ValidateConstraint:
// not a problem if some people think it's invalid when it is valid

So ISTM that we can allow reduced lock levels for the above commands
if we fix SnapshotNow.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch to allow users to kill their own queries

2011-12-16 Thread Greg Smith

On 12/14/2011 05:24 AM, Magnus Hagander wrote:

How about passing a parameter to pg_signal_backend? Making
pg_signal_backend(int pid, int sig, bool allow_samerole)?
   


That works, got rid of the parts I didn't like and allowed some useful 
minor restructuring.  I also made the HINT better and made it match the 
style guidelines:


gsmith=> select pg_terminate_backend(21205);
ERROR:  must be superuser to terminate other server processes
HINT:  You can cancel your own processes with pg_cancel_backend().
gsmith=> select pg_cancel_backend(21205);
 pg_cancel_backend
---
 t

New rev attached and pushed to 
https://github.com/greg2ndQuadrant/postgres/tree/cancel-backend (which 
is *not* the same branch as I used last time; don't ask why, long story)


I considered some additional ways to restructure the checks that could 
remove a further line or two from the logic here, but they all made the 
result seem less readable to me.  And this is not a high performance 
code path.  I may have gone a bit too far with the comment additions 
though, so feel free to trim that back.  It kept feeling weird to me 
that none of the individual signaling functions had their own intro 
comments.  I added all those.


I also wrote up a commentary on the PID wraparound race condition 
possibility Josh brought up.  Some research shows that pid assignment on 
some systems is made more secure by assigning new ones randomly.  That 
seems like it would make it possible to have a pid get reused much 
faster than on the usual sort of system that does sequential assignment 
and wraparound.  A reuse collision still seems extremely unlikely 
though.  With the new comments, at least a future someone who speculates 
on this will know how much thinking went into the current 
implementation:  enough to notice, not enough to see anything worth 
doing about it.  Maybe that's just wasted lines of text?


With so little grief on the last round, I'm going to guess this one will 
just get picked up by Magnus to commit next.  Marking accordingly and 
moved to the current CommitFest.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us

diff --git a/doc/src/sgml/func.sgml b/doc/src/sgml/func.sgml
index e7f7fe0..cf77586 100644
*** a/doc/src/sgml/func.sgml
--- b/doc/src/sgml/func.sgml
*** SELECT set_config('log_statement_stats',
*** 14244,14251 
 para
  The functions shown in xref
  linkend=functions-admin-signal-table send control signals to
! other server processes.  Use of these functions is restricted
! to superusers.
 /para
  
 table id=functions-admin-signal-table
--- 14244,14251 
 para
  The functions shown in xref
  linkend=functions-admin-signal-table send control signals to
! other server processes.  Use of these functions is usually restricted
! to superusers, with noted exceptions.
 /para
  
 table id=functions-admin-signal-table
*** SELECT set_config('log_statement_stats',
*** 14262,14268 
  literalfunctionpg_cancel_backend(parameterpid/parameter typeint/)/function/literal
  /entry
 entrytypeboolean/type/entry
!entryCancel a backend's current query/entry
/row
row
 entry
--- 14262,14271 
  literalfunctionpg_cancel_backend(parameterpid/parameter typeint/)/function/literal
  /entry
 entrytypeboolean/type/entry
!entryCancel a backend's current query.  You can execute this against
! another backend that has exactly the same role as the user calling the
! function.  In all other cases, you must be a superuser.
! /entry
/row
row
 entry
*** SELECT set_config('log_statement_stats',
*** 14304,14309 
--- 14307,14316 
  commandpostgres/command processes on the server (using
  applicationps/ on Unix or the applicationTask
  Manager/ on productnameWindows/).
+ For the less restrictive functionpg_cancel_backend/, the role of an
+ active backend can be found from
+ the structfieldusename/structfield column of the
+ structnamepg_stat_activity/structname view.
 /para
  
 para
diff --git a/src/backend/utils/adt/misc.c b/src/backend/utils/adt/misc.c
index 7a2e0c8..1b7b75b 100644
*** a/src/backend/utils/adt/misc.c
--- b/src/backend/utils/adt/misc.c
***
*** 30,35 
--- 30,36 
  #include postmaster/syslogger.h
  #include storage/fd.h
  #include storage/pmsignal.h
+ #include storage/proc.h
  #include storage/procarray.h
  #include tcop/tcopprot.h
  #include utils/builtins.h
*** current_query(PG_FUNCTION_ARGS)
*** 70,84 
  }
  
  /*
!  * Functions to send signals to other backends.
   */
  static bool
! pg_signal_backend(int pid, int sig)
  {
! 	if (!superuser())
  		ereport(ERROR,
  (errcode(ERRCODE_INSUFFICIENT_PRIVILEGE),
! 			(errmsg(must be superuser 

Re: [HACKERS] Moving more work outside WALInsertLock

2011-12-16 Thread Simon Riggs
On Fri, Dec 16, 2011 at 12:07 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:

 Anyway, I'm looking at ways to make the memcpy() of the payload happen
 without the lock, in parallel, and once you do that the record header CRC
 calculation can be done in parallel, too. That makes it irrelevant from a
 performance point of view whether the prev-link is included in the CRC or
 not.

Better plan. So we keep the prev link in the CRC.

I already proposed a design for that using page-level share locks any
reason not to go with that?

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Moving more work outside WALInsertLock

2011-12-16 Thread Heikki Linnakangas

On 16.12.2011 14:37, Simon Riggs wrote:

On Fri, Dec 16, 2011 at 12:07 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com  wrote:


Anyway, I'm looking at ways to make the memcpy() of the payload happen
without the lock, in parallel, and once you do that the record header CRC
calculation can be done in parallel, too. That makes it irrelevant from a
performance point of view whether the prev-link is included in the CRC or
not.


Better plan. So we keep the prev link in the CRC.

I already proposed a design for that using page-level share locks any
reason not to go with that?


Sorry, I must've missed that. Got a link?

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Moving more work outside WALInsertLock

2011-12-16 Thread Simon Riggs
On Fri, Dec 16, 2011 at 12:50 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com wrote:
 On 16.12.2011 14:37, Simon Riggs wrote:

 On Fri, Dec 16, 2011 at 12:07 PM, Heikki Linnakangas
 heikki.linnakan...@enterprisedb.com  wrote:

 Anyway, I'm looking at ways to make the memcpy() of the payload happen
 without the lock, in parallel, and once you do that the record header CRC
 calculation can be done in parallel, too. That makes it irrelevant from a
 performance point of view whether the prev-link is included in the CRC or
 not.


 Better plan. So we keep the prev link in the CRC.

 I already proposed a design for that using page-level share locks any
 reason not to go with that?


 Sorry, I must've missed that. Got a link?

From nearly 4 years ago.

http://grokbase.com/t/postgresql.org/pgsql-hackers/2008/02/reworking-wal-locking/145qrhllcqeqlfzntvn7kjefijey

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Storing hot members of PGPROC out of the band

2011-12-16 Thread Robert Haas
On Thu, Dec 15, 2011 at 11:35 PM, Robert Haas robertmh...@gmail.com wrote:
 One small detail I'm noticing on further review is that I've slightly
 changed the semantics here:

            if (!TransactionIdIsNormal(xid)
                || NormalTransactionIdPrecedes(xmax, xid))
                    continue;

 That really ought to be testing <=, not just <.  That doesn't seem
 like it should affect correctness, though: at worst, we unnecessarily
 include one or more XIDs in the snapshot that will be ignored anyway.
 I'll fix that and rerun the tests but I don't think it'll make any
 difference.

New results with the attached, updated patch.  As predicted, these are
quite similar to the last batch...

m01 tps = 618.460567 (including connections establishing)
s01 tps = 628.394270 (including connections establishing)
m08 tps = 4558.930585 (including connections establishing)
s08 tps = 4490.285074 (including connections establishing)
m16 tps = 7577.677079 (including connections establishing)
s16 tps = 7582.611380 (including connections establishing)
m24 tps = 11556.680518 (including connections establishing)
s24 tps = 11527.982307 (including connections establishing)
m32 tps = 10807.216084 (including connections establishing)
s32 tps = 10871.625992 (including connections establishing)
m80 tps = 10818.092314 (including connections establishing)
s80 tps = 10866.780660 (including connections establishing)

Unlogged Tables:

m01 tps = 670.328782 (including connections establishing)
s01 tps = 818.666971 (including connections establishing)
m08 tps = 4793.337235 (including connections establishing)
s08 tps = 4888.600945 (including connections establishing)
m16 tps = 8016.092785 (including connections establishing)
s16 tps = 8123.217451 (including connections establishing)
m24 tps = 14082.694567 (including connections establishing)
s24 tps = 14114.466246 (including connections establishing)
m32 tps = 17881.340576 (including connections establishing)
s32 tps = 18291.739818 (including connections establishing)
m80 tps = 12767.535657 (including connections establishing)
s80 tps = 17380.055381 (including connections establishing)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


getsnapshotdata-tweaks-v2.patch
Description: Binary data

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] foreign key locks, 2nd attempt

2011-12-16 Thread Greg Smith
Sounds like there's still a few things left to research out on Alvaro's 
side, and I'm thinking there's a performance/reliability under load 
testing side of this that will take some work to validate too.  Since I 
can't see all that happening fast enough to commit for a bit, I'm going 
to mark it returned with feedback for now.  I'm trying to remove 
everything that isn't likely to end up in the next alpha from the open list.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER TABLE lock strength reduction patch is unsafe

2011-12-16 Thread Robert Haas
On Fri, Dec 16, 2011 at 7:07 AM, Simon Riggs si...@2ndquadrant.com wrote:
 That is a bug and one we should fix. I supplied a patch for that,
 written to Tom's idea for how to solve it.

 I will apply that, unless there are objections.

I remember several attempts at that, but I don't remember any that
didn't meet with objections.  Do you have a link?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Moving more work outside WALInsertLock

2011-12-16 Thread Heikki Linnakangas

On 16.12.2011 15:03, Simon Riggs wrote:

On Fri, Dec 16, 2011 at 12:50 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com  wrote:

On 16.12.2011 14:37, Simon Riggs wrote:


I already proposed a design for that using page-level share locks any
reason not to go with that?


Sorry, I must've missed that. Got a link?


 From nearly 4 years ago.

http://grokbase.com/t/postgresql.org/pgsql-hackers/2008/02/reworking-wal-locking/145qrhllcqeqlfzntvn7kjefijey


Ah, thanks. That is similar to what I'm experimenting with, but a second 
lwlock is still fairly heavy-weight. I think with many backends, you 
will be beaten badly by contention on the spinlocks alone.


I'll polish up and post what I've been experimenting with, so we can 
discuss that.


--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch to allow users to kill their own queries

2011-12-16 Thread Robert Haas
On Fri, Dec 16, 2011 at 1:21 AM, Greg Smith g...@2ndquadrant.com wrote:
 This is a problem with the existing code though, and the proposed changes
 don't materially alter that; there's just another quick check in one path
 through.  Right now we check if someone is superuser, then if it's a backend
 PID, then we send the signal.  If you assume someone can run through all the
 PIDs between those checks and the kill, the system is already broken that
 way.

From a theoretical point of view, I believe it to be slightly
different.  If a superuser sends a kill, they will certainly be
authorized to kill whatever they end up killing, because they are
authorized to kill anything.  On the other hand, the proposed patch
would potentially result - in the extremely unlikely event of a
super-fast PID wraparound - in someone cancelling a query they
otherwise wouldn't have been able to cancel.

In practice, the chances of this seem fairly remote.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] review: CHECK FUNCTION statement

2011-12-16 Thread Albe Laurenz
Pavel Stehule wrote:
 one small update - better emulation of environment for security
 definer functions

Patch applies and compiles fine, core functionality works fine.

I found a little bug:

In backend/commands/functioncmds.c,
function CheckFunction(CheckFunctionStmt *stmt),
while you perform the table scan for CHECK FUNCTION ALL,
you use the variable funcOid to hold the OID of the current
function in the loop.

If no appropriate function is found in the loop, the
check immediately after the table scan will not succeed
because funcOid holds the OID of the last function scanned
in the loop.
As a consequence, CheckFunctionById is called for this
function.

Here is a demonstration:
test= CHECK FUNCTION ALL IN SCHEMA pg_catalog;
[...]
NOTICE:  skip check function plpgsql_checker(oid,regclass,text[]), uses C 
language
NOTICE:  skip check function plperl_call_handler(), uses C language
NOTICE:  skip check function plperl_inline_handler(internal), uses C language
NOTICE:  skip check function plperl_validator(oid), uses C language
NOTICE:  skip check function plperl_validator(oid), language c hasn't 
checker function
CHECK FUNCTION

when it should be:
test= CHECK FUNCTION ALL IN SCHEMA pg_catalog;
[...]
NOTICE:  skip check function plpgsql_checker(oid,regclass,text[]), uses C 
language
NOTICE:  skip check function plperl_call_handler(), uses C language
NOTICE:  skip check function plperl_inline_handler(internal), uses C language
NOTICE:  skip check function plperl_validator(oid), uses C language
NOTICE:  nothing to check
CHECK FUNCTION
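
A small standalone illustration of that pattern, with made-up names (this is 
not the patch's code): testing the leftover loop variable cannot distinguish 
"everything was skipped" from "something was found", while an explicit flag can.

#include <stdbool.h>
#include <stdio.h>

int
main(void)
{
	int		candidates[] = {11, 22, 33};	/* all skipped, like the C-language functions */
	int		last_oid = 0;					/* plays the role of funcOid */
	bool	found = false;					/* the flag the post-scan check could use instead */

	for (int i = 0; i < 3; i++)
	{
		last_oid = candidates[i];
		/* every entry is skipped here, so found is never set to true */
	}

	if (last_oid != 0)
		printf("leftover-variable test: would now check function %d\n", last_oid);
	if (!found)
		printf("flag-based test: nothing to check\n");
	return 0;
}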


Another thing struck me as odd:

You have the option fatal_errors for the checker function, but you
special case it in CheckFunction(CheckFunctionStmt *stmt) and turn
errors to warnings if it is not set.

Wouldn't it be better to have the checker function ereport a WARNING
or an ERROR depending on the setting? Options should be handled by the
checker function.

Yours,
Laurenz Albe

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JSON for PG 9.2

2011-12-16 Thread Robert Haas
On Thu, Dec 15, 2011 at 4:47 PM, Andrew Dunstan and...@dunslane.net wrote:
 Make JSON datatypes only selectable if client encoding is UTF-8.

 Yuck. Do we have this sort of restriction for any other data type?

No, and I don't think it's necessary to do it here, either.  Nor would
it be a good idea, because then the return value of EXPLAIN (FORMAT
JSON) couldn't unconditionally be json.  But I think the important
point is that this is an obscure corner case.  Let me say that one
more time: obscure corner case!

The only reason JSON needs to care about this at all is that it allows
\u1234 to mean Unicode code point 0x1234.  But for that detail, JSON
would be encoding-agnostic.  So I think it's sufficient for us to
simply decide that that particular feature may not work (or even, will
not work) for non-ASCII characters if you use a non-UTF8 encoding.
There's still plenty of useful things that can be done with JSON even
if that particular feature is not available; and that way we don't
have to completely disable the data type just because someone wants to
use EUC-JP or something.
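
To spell out the detail in question, here is a small standalone sketch (not 
the patch's code): a \uXXXX escape names a Unicode code point, and expanding 
it to bytes has an obvious answer only when the target encoding is UTF-8.

#include <stdio.h>

/* Encode one code point (below 0x10000 here) as UTF-8; returns the byte count. */
static int
unicode_to_utf8(unsigned int cp, unsigned char *out)
{
	if (cp < 0x80)
	{
		out[0] = (unsigned char) cp;
		return 1;
	}
	if (cp < 0x800)
	{
		out[0] = 0xC0 | (cp >> 6);
		out[1] = 0x80 | (cp & 0x3F);
		return 2;
	}
	out[0] = 0xE0 | (cp >> 12);
	out[1] = 0x80 | ((cp >> 6) & 0x3F);
	out[2] = 0x80 | (cp & 0x3F);
	return 3;
}

int
main(void)
{
	unsigned char buf[4];
	int		n = unicode_to_utf8(0x1234, buf);	/* the \u1234 from the example above */

	printf("U+1234 as UTF-8:");
	for (int i = 0; i < n; i++)
		printf(" %02X", buf[i]);				/* prints: E1 88 B4 */
	printf("\n");
	return 0;
}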

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] ALTER TABLE lock strength reduction patch is unsafe

2011-12-16 Thread Simon Riggs
On Fri, Dec 16, 2011 at 1:38 PM, Robert Haas robertmh...@gmail.com wrote:
 On Fri, Dec 16, 2011 at 7:07 AM, Simon Riggs si...@2ndquadrant.com wrote:
 That is a bug and one we should fix. I supplied a patch for that,
 written to Tom's idea for how to solve it.

 I will apply that, unless there are objections.

 I remember several attempts at that, but I don't remember any that
 didn't meet with objections.  Do you have a link?

My patch to implement SnapshotNow correctly, from Jun 27 on this
thread was never reviewed or commented upon by anybody. That was
probably because it only fixes one of the problems, not all of them.

But it does fix a current bug and that's why I'm asking now if there
are objections to committing it.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JSON for PG 9.2

2011-12-16 Thread Simon Riggs
On Fri, Dec 16, 2011 at 1:52 PM, Robert Haas robertmh...@gmail.com wrote:
 On Thu, Dec 15, 2011 at 4:47 PM, Andrew Dunstan and...@dunslane.net wrote:
 Make JSON datatypes only selectable if client encoding is UTF-8.

 Yuck. Do we have this sort of restriction for any other data type?

 No, and I don't think it's necessary to do it here, either.  Nor would
 it be a good idea, because then the return value of EXPLAIN (FORMAT
 JSON) couldn't unconditionally be json.  But I think the important
 point is that this is an obscure corner case.  Let me say that one
 more time: obscure corner case!

 The only reason JSON needs to care about this at all is that it allows
 \u1234 to mean Unicode code point 0x1234.  But for that detail, JSON
 would be encoding-agnostic.  So I think it's sufficient for us to
 simply decide that that particular feature may not work (or even, will
 not work) for non-ASCII characters if you use a non-UTF8 encoding.
 There's still plenty of useful things that can be done with JSON even
 if that particular feature is not available; and that way we don't
 have to completely disable the data type just because someone wants to
 use EUC-JP or something.

Completely agree. I was going to write almost exactly this in reply.

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch to allow users to kill their own queries

2011-12-16 Thread Greg Smith

On 12/16/2011 08:42 AM, Robert Haas wrote:
the proposed patch would potentially result - in the extremely
unlikely event of a super-fast PID wraparound - in someone cancelling
a query they otherwise wouldn't have been able to cancel.
   


So how might this get exploited?

-Attach a debugger and put a breakpoint between the check and the kill
-Fork processes to get close to your target
-Wait for a process you want to mess with to appear at the PID you're 
waiting for.  If you miss it, repeat fork bomb and try again.

-Resume the debugger to kill the other user's process

If I had enough access to launch this sort of attack, I think I'd find 
mayhem elsewhere a more profitable effort.  Crazy queries, work_mem 
abuse, massive temp file generation, trying to get the OOM killer 
involved; seems like there's bigger holes than this already.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Patch to allow users to kill their own queries

2011-12-16 Thread Magnus Hagander
On Friday, December 16, 2011, Robert Haas wrote:

 On Fri, Dec 16, 2011 at 1:21 AM, Greg Smith g...@2ndquadrant.com
 wrote:
  This is a problem with the existing code though, and the proposed changes
  don't materially alter that; there's just another quick check in one path
  through.  Right now we check if someone is superuser, then if it's a backend
  PID, then we send the signal.  If you assume someone can run through all the
  PIDs between those checks and the kill, the system is already broken that
  way.

 From a theoretical point of view, I believe it to be slightly
 different.  If a superuser sends a kill, they will certainly be
 authorized to kill whatever they end up killing, because they are
 authorized to kill anything.  On the other hand, the proposed patch


Not necessarily. What if it's recycled as a backend in a different postgres
installation. Or just a cronjob or shell running as the same user?

Sure, you can argue that the superuser can destroy anything he wants - but
in that case, why do we have a check for this at all in the first place?

I think we can safely say that any OS that actually manages to recycle the
PID in the short time it takes to get between those instructions is so
broken we don't need to care about that.


Re: [HACKERS] Patch to allow users to kill their own queries

2011-12-16 Thread Magnus Hagander
On Friday, December 16, 2011, Greg Smith wrote:

 On 12/16/2011 08:42 AM, Robert Haas wrote:

 the proposed patch would potentially result - in the extremely unlikely
 event of a
 super-fast PID wraparound - in someone cancelling a query they
 otherwise wouldn't have been able to cancel.



 So how might this get exploited?

 -Attach a debugger and put a breakpoint between the check and the kill


Once you've attached a debugger, you've already won. You can inject
arbitrary instructions at this point, no?


 -Fork processes to get close to your target
 -Wait for a process you want to mess with to appear at the PID you're
 waiting for.  If you miss it, repeat fork bomb and try again.
 -Resume the debugger to kill the other user's process

 If I had enough access to launch this sort of attack, I think I'd find
  mayhem elsewhere a more profitable effort.  Crazy queries, work_mem
 abuse, massive temp file generation, trying to get the OOM killer involved;
 seems like there's bigger holes than this already.


killall -9 postgres is even easier.


//Magnus


Re: [HACKERS] Patch to allow users to kill their own queries

2011-12-16 Thread Alvaro Herrera

Excerpts from Greg Smith's message of Fri Dec 16 11:19:52 -0300 2011:
 On 12/16/2011 08:42 AM, Robert Haas wrote:
  the proposed patch would potentially result - in the extremely 
  unlikely event of a
  super-fast PID wraparound - in someone cancelling a query they
  otherwise wouldn't have been able to cancel.
 
 
 So how might this get exploited?
 
 -Attach a debugger and put a breakpoint between the check and the kill

If you can attach a debugger to a backend, you already have the
credentials to the user running postmaster, which we assume to be a
game over condition.

I guess a more interesting exploitation would be to do this until the
killing backend randomly wins the race condition against a dying backend
and a new one starting up; this would require very rapid reuse of the
PID, plus very rapid startup of the new process.  I think we already
assume that this is not the case in other code paths, even on systems
that randomly (instead of sequentially) assign PIDs.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] Escaping : in .pgpass - code or docs bug?

2011-12-16 Thread Richard Huxton
According to the docs [1], you should escape embedded colons in .pgpass 
(fair enough). Below is PG 9.1.1


user = te:st, db = te:st, password = te:st

$ cat ~/.pgpass
*:*:te:st:te:st:te:st
$ psql91 -U te:st -d te:st
te:st=>

$ cat ~/.pgpass
*:*:te\:st:te\:st:te:st
$ psql91 -U te:st -d te:st
te:st=>

$ cat ~/.pgpass
*:*:te\:st:te\:st:te\:st
$ psql91 -U te:st -d te:st
psql: FATAL:  password authentication failed for user "te:st"
password retrieved from file /home/richardh/.pgpass

I'm a bit puzzled how it manages without the escaping in the first case. 
There's a lack of consistency though that either needs documenting or 
fixing.
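
For comparison, here is a small standalone sketch of the escaping rule as 
documented (this is not libpq's actual parser): a backslash makes the 
following ':' or '\' a literal character instead of a field separator.

#include <stdio.h>

/* Copies the next field of *line into buf, advancing *line past the ':'. */
static void
next_field(const char **line, char *buf, size_t buflen)
{
	size_t		n = 0;
	const char *p = *line;

	while (*p && *p != ':' && n + 1 < buflen)
	{
		if (*p == '\\' && p[1] != '\0')
			p++;				/* drop the backslash, keep the escaped char */
		buf[n++] = *p++;
	}
	buf[n] = '\0';
	if (*p == ':')
		p++;					/* step over the field separator */
	*line = p;
}

int
main(void)
{
	const char *line = "*:*:te\\:st:te\\:st:te\\:st";	/* the fully escaped variant */
	const char *names[] = {"host", "port", "database", "user", "password"};
	char		field[64];

	for (int i = 0; i < 5; i++)
	{
		next_field(&line, field, sizeof(field));
		printf("%s = %s\n", names[i], field);	/* database/user/password come out as te:st */
	}
	return 0;
}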



[1] http://www.postgresql.org/docs/9.1/static/libpq-pgpass.html

--
  Richard Huxton
  Archonet Ltd

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] archive_keepalive_command

2011-12-16 Thread Simon Riggs
archive_command and restore_command describe how to ship WAL files
to/from an archive.

When there is nothing to ship, we delay sending WAL files. When there
are no WAL files, the standby has no information at all.

To provide some form of keepalive on quiet systems the
archive_keepalive_command provides a generic hook to implement
keepalives. This is implemented as a separate command to avoid storing
keepalive messages in the archive, or at least allow overwrites using
a single filename like keepalive.

Examples
archive_keepalive_command = 'arch_cmd keepalive'   # sends a file
called keepalive to archive, overwrites allowed
archive_keepalive_command = 'arch_cmd %f.%t.keepalive'  # sends a file
like 0001000ABFE.20111216143517.keepalive

If there is no WAL file to send, then we send a keepalive file
instead. Keepalive is a small file that contains the same contents as a
streaming keepalive message (re: other patch on that).

If no WAL file is available and we are attempting to restore in
standby_mode, then we execute restore_keepalive_command to see if a
keepalive file is available. Checks for a file in the specific
keepalive format and then uses that to update last received info from
master.

e.g.
restore_keepalive_command = 'restore_cmd keepalive'   # gets a file
called keepalive from the archive, overwrites allowed

Additions/changes?

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] RangeVarGetRelid()

2011-12-16 Thread Noah Misch
On Thu, Dec 15, 2011 at 07:04:20PM -0500, Robert Haas wrote:
 On Fri, Dec 9, 2011 at 5:41 PM, Noah Misch n...@leadboat.com wrote:
  It also seems my last explanation didn't convey the point.  Yes, nearly every
  command has a different set of permissions checks.  However, we don't benefit
  equally from performing each of those checks before acquiring a lock.
  Consider renameatt(), which checks three things: you must own the relation,
  the relation must be of a supported relkind, and the relation must not be a
  typed table.  To limit opportunities for denial of service, let's definitely
  perform the ownership check before taking a lock.  The other two checks can
  wait until we hold that lock.  The benefit of checking them early is to avoid
  making a careless relation owner wait for a lock before discovering the
  invalidity of his command.  That's nice as far as it goes, but let's not
  proliferate callbacks for such a third-order benefit.
 
 I agree, but my point is that so far we have no callbacks that differ
 only in that detail.  I accept that we'd probably want to avoid that.

To illustrate what I had in mind, here's a version of your patch that has five
callers sharing a callback.  The patch is against d039fd51f79e, just prior to
your recent commits.
*** a/src/backend/catalog/namespace.c
--- b/src/backend/catalog/namespace.c
***
*** 211,217  Datum  pg_is_other_temp_schema(PG_FUNCTION_ARGS);
  
  
  /*
!  * RangeVarGetRelid
   *Given a RangeVar describing an existing relation,
   *select the proper namespace and look up the relation OID.
   *
--- 211,217 
  
  
  /*
!  * RangeVarGetRelidExtended
   *Given a RangeVar describing an existing relation,
   *select the proper namespace and look up the relation OID.
   *
***
*** 409,414  RangeVarGetRelidExtended(const RangeVar *relation, LOCKMODE 
lockmode,
--- 409,427 
  }
  
  /*
+  * RangeVarCallbackCheckOwner
+  *RangeVarGetRelidExtended() callback to check ownership alone.
+  */
+ void
+ RangeVarCallbackCheckOwner(const RangeVar *relation,
+  Oid relId, Oid oldRelId, void *arg)
+ {
+   /* Nothing to do if the relation was not found. */
+   if (OidIsValid(relId) && !pg_class_ownercheck(relId, GetUserId()))
+   aclcheck_error(ACLCHECK_NOT_OWNER, ACL_KIND_CLASS, relation->relname);
+ }
+ 
+ /*
   * RangeVarGetCreationNamespace
   *Given a RangeVar describing a to-be-created relation,
   *choose which namespace to create it in.
*** a/src/backend/commands/alter.c
--- b/src/backend/commands/alter.c
***
*** 105,166  ExecRenameStmt(RenameStmt *stmt)
case OBJECT_SEQUENCE:
case OBJECT_VIEW:
case OBJECT_INDEX:
case OBJECT_COLUMN:
case OBJECT_ATTRIBUTE:
case OBJECT_TRIGGER:
!   case OBJECT_FOREIGN_TABLE:
!   {
!   Oid relid;
! 
!   CheckRelationOwnership(stmt->relation, true);
! 
!   /*
!* Lock level used here should match what will be taken later,
!* in RenameRelation, renameatt, or renametrig.
!*/
!   relid = RangeVarGetRelid(stmt->relation, AccessExclusiveLock,
!    false);
! 
!   switch (stmt->renameType)
!   {
!   case OBJECT_TABLE:
!   case OBJECT_SEQUENCE:
!   case OBJECT_VIEW:
!   case OBJECT_INDEX:
!   case OBJECT_FOREIGN_TABLE:
!   {
!   /*
!* RENAME TABLE requires that we (still) hold
!* CREATE rights on the containing namespace, as
!* well as ownership of the table.
!*/
!   Oid namespaceId = get_rel_namespace(relid);
!   AclResult   aclresult;
! 
!   aclresult = pg_namespace_aclcheck(namespaceId,
!     GetUserId(),
!  

Re: [HACKERS] [PATCH] Caching for stable expressions with constant arguments v3

2011-12-16 Thread Greg Smith

That looks easy enough to try out now, thanks for the script.

Jaime was interested in trying this out, and I hope he still finds time 
to do that soon.  Now that things have settled down and it's obvious how 
to start testing, that should be a project he can take on.  I don't 
think this is going to reach ready to commit in the next few days 
though, so I'm going to mark it as returned for this CommitFest.  The 
results you're showing are quite interesting; perhaps someone can help 
sort out the parsing area that's accidentally being decelerated.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] why do we need two snapshots per query?

2011-12-16 Thread Dimitri Fontaine
Robert Haas robertmh...@gmail.com writes:
 I thought about adjusting it, but I didn't see what it made sense to
 adjust it to.  It still is the parameter used for parameter I/O and
 parsing/planning, so the existing text isn't wrong.  It will possibly
 also get reused for execution, but the previous statement has a
 lengthy comment on that, so it didn't seem worth recapitulating here.

Ah yes, the previous comment is not far away, so it's easy to read it
that way.  Agreed.

 Actually, I did, but the change was in the second patch file attached
 to the same email, which maybe you missed?  Combined patch attached.

Oops, I missed it, yes.  Looks good to me.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [PATCH] Caching for stable expressions with constant arguments v3

2011-12-16 Thread Marti Raudsepp
On Fri, Dec 16, 2011 at 18:08, Greg Smith g...@2ndquadrant.com wrote:
 I don't think this
 is going to reach ready to commit in the next few days though, so I'm going
 to mark it as returned for this CommitFest.

Fair enough, I just hope this doesn't get dragged into the next
commitfest without feedback.

 perhaps someone can help sorting out the parsing area
 that's accidentally being decelerated.

Well the slowdown isn't accidental, I think it's expected since I'm
adding a fair bit of code to expression processing (which isn't all
pretty).

It could be reduced by doing the caching decisions in a 2nd pass,
inside ExecInitExpr, but it would mean adding an extra field to
'struct Expr' and require a significant rewrite of the patch. I'm not
sure if it's worthwhile to attempt that approach:
http://archives.postgresql.org/pgsql-hackers/2011-12/msg00483.php

Regards,
Marti

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Prep object creation hooks, and related sepgsql updates

2011-12-16 Thread Dimitri Fontaine
Hi,

Kohei KaiGai kai...@kaigai.gr.jp writes:
 The attached patches are revised ones.
 I added explanations of DDL permissions on creation time added by these 
 patches,
 and added a few regression test cases.

The patches are now entirely against contrib/sepgsql, which seems to me
to be good news, but means I'm not skilled enough to help review further.
I'm unsure about marking that as “ready for committer” but I'm definitely
done myself.

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support

As seen here, contrib only patch now:

[2. application/octet-stream; 
pgsql-v9.2-sepgsql-create-permissions-part-4.v2.proc.patch]...

 contrib/sepgsql/expected/create.out |   13 +++-
 contrib/sepgsql/proc.c  |   55 +++
 contrib/sepgsql/sql/create.sql  |7 
 3 files changed, 68 insertions(+), 7 deletions(-)

[3. application/octet-stream; 
pgsql-v9.2-sepgsql-create-permissions-part-2.v2.schema.patch]...

 contrib/sepgsql/expected/create.out |4 +++
 contrib/sepgsql/schema.c|   50 ---
 contrib/sepgsql/sql/create.sql  |6 
 3 files changed, 56 insertions(+), 4 deletions(-)

[4. application/octet-stream; 
pgsql-v9.2-sepgsql-create-permissions-part-3.v2.relation.patch]...

 contrib/sepgsql/expected/create.out |   46 +++
 contrib/sepgsql/hooks.c |   70 +-
 contrib/sepgsql/relation.c  |  144 +--
 contrib/sepgsql/sql/create.sql  |   18 +
 4 files changed, 254 insertions(+), 24 deletions(-)

[5. application/octet-stream; 
pgsql-v9.2-sepgsql-create-permissions-part-1.v2.database.patch]...

 contrib/sepgsql/database.c  |   91 ++
 contrib/sepgsql/expected/create.out |   19 ++
 contrib/sepgsql/hooks.c |  122 --
 contrib/sepgsql/sepgsql.h   |3 +-
 contrib/sepgsql/sql/create.sql  |   15 
 contrib/sepgsql/test_sepgsql|2 +-
 doc/src/sgml/sepgsql.sgml   |   30 -
 7 files changed, 231 insertions(+), 51 deletions(-)

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] JSON for PG 9.2

2011-12-16 Thread Joey Adams
On Fri, Dec 16, 2011 at 8:52 AM, Robert Haas robertmh...@gmail.com wrote:
 But I think the important point is that this is an obscure corner case.
 Let me say that one more time: obscure corner case!

+1

 The only reason JSON needs to care about this at all is that it allows
 \u1234 to mean Unicode code point 0x1234.  But for that detail, JSON
 would be encoding-agnostic.  So I think it's sufficient for us to
 simply decide that that particular feature may not work (or even, will
 not work) for non-ASCII characters if you use a non-UTF8 encoding.
 There's still plenty of useful things that can be done with JSON even
 if that particular feature is not available; and that way we don't
 have to completely disable the data type just because someone wants to
 use EUC-JP or something.

So, if the server encoding is not UTF-8, should we ban Unicode escapes:

\u00FCber

or non-ASCII characters?

über

Also:

 * What if the server encoding is SQL_ASCII?

 * What if the server encoding is UTF-8, but the client encoding is
something else (e.g. SQL_ASCII)?

- Joey

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] CommitFest 2011-11 Update

2011-12-16 Thread Peter Geoghegan
On 15 December 2011 19:07, Robert Haas robertmh...@gmail.com wrote:
 I think that the tuplesort comparison overhead
 reduction is in Peter G's hands to rebase at the moment; I will look
 at that again when it reemerges.

Yes, sorry about that - I was a little bit sidetracked by something
else. I'll work on it tomorrow.

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] WIP: SP-GiST, Space-Partitioned GiST

2011-12-16 Thread Greg Smith
Since this was marked WIP and Tom has now kicked off two side 
discussions, what I've done is tagged references to each of them as 
comments to the main patch, then marked this as returned with feedback.  
Surely what I do in the CF app isn't going to influence what Tom wants 
to work on, so I'll leave this one to his watch now.  If it does turn 
out to be committed soon instead of re-submitted as I'm presuming right 
now, I can always go back and fix that.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Prep object creation hooks, and related sepgsql updates

2011-12-16 Thread Greg Smith

On 12/16/2011 11:58 AM, Dimitri Fontaine wrote:

The patches are now entirely against contrib/sepgsql, which seems to me
to be good news, but means I'm not skilled enough to help review further.
I'm unsure about marking that as “ready for committer” but I'm definitely
done myself.
   


Robert already took a brief look at this upthread and suggested it 
seemed in the right direction if issues like the docs were sorted out.  
That sounds like ready for committer going in his direction to me, 
particularly since there's not a lot of other people who are familiar 
with the sepgsql side; relabeling it in the CF app accordingly.  Note 
that these were labeled as proof-of-concept in the original 
submission, so a commit might not be the right next step even if they 
look good so far.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us


--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] Review: Non-inheritable check constraints

2011-12-16 Thread Greg Smith

On 12/04/2011 02:22 AM, Nikhil Sontakke wrote:


Is it okay to modify an existing constraint to mark it as only, even
if it was originally inheritable?  This is not clear to me.  Maybe the
safest course of action is to raise an error.  Or maybe I'm misreading
what it does (because I haven't compiled it yet).


Hmmm, good question. IIRC, the patch will pass is_only as true only if 
it going to be a locally defined, non-inheritable constraint. So I 
went by the logic that since it was ok to merge and mark a constraint 
as locally defined, it should be ok to mark it non-inheritable from 
this moment on with that new local definition?


With this open question, this looks like it's back in Alvaro's hands 
again to me.  This one started the CF as Ready for Committer and seems 
stalled there for now.  I'm not going to touch its status, just pointing 
this fact out.


--
Greg Smith   2ndQuadrant US   g...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us



Re: [HACKERS] review: CHECK FUNCTION statement

2011-12-16 Thread Greg Smith
I just poked at this a bit myself to see how the patch looked.  There's
just over 4000 lines in the diff.  Even though 1/4 of that is tests,
which is itself encouraging, that's still a good sized feature.  The
rate at which the code here has been changing regularly has me
nervous about considering this a commit candidate right now though.  It
seems like it still needs a bit more time to have the problems squeezed
out.


Two ideas I was thinking about here:

-If you take a step back and look at where the problem parts of the code 
have been recently, are there any new tests or assertions you might add 
to try and detect problems like that in the future?  I haven't been 
following this closely enough to have any suggestions where, and there 
is a lot of error checking aimed at logging already; maybe there's 
nothing new to chase there.


-Can we find some larger functions you haven't tested this against yet 
to throw at it?  It seems able to consume all the cases you've 
constructed for it; it would be nice to find some brand new ones it's 
never seen before to check.
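
For context, exercising the checker on a single function looks roughly like
this; the statement name comes from the thread's subject, while the schema,
function name, and signature are made-up examples:

CHECK FUNCTION public.process_order(integer);
-- reports the problems the plpgsql checker finds, without executing the function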


This has made a lot of progress and seems like it will be a good commit
candidate for the next CF.  I think it just needs a bit more time than we
have left in this CommitFest right now, particularly given the
size of the patch.  I'm turning this one into returned with feedback,
but as a mediocre pl/pgsql author I'm hoping to see more updates still.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us




Re: [HACKERS] Review: Non-inheritable check constraints

2011-12-16 Thread Alvaro Herrera
Excerpts from Greg Smith's message of vie dic 16 15:02:20 -0300 2011:
 On 12/04/2011 02:22 AM, Nikhil Sontakke wrote:
 
  Is it okay to modify an existing constraint to mark it as only, even
  if it was originally inheritable?  This is not clear to me.  Maybe the
  safest course of action is to raise an error.  Or maybe I'm misreading
  what it does (because I haven't compiled it yet).
 
 
  Hmmm, good question. IIRC, the patch will pass is_only as true only if 
  it is going to be a locally defined, non-inheritable constraint. So I 
  went by the logic that since it was ok to merge and mark a constraint 
  as locally defined, it should be ok to mark it non-inheritable from 
  this moment on with that new local definition?

I think I misread what that was trying to do.  I thought it would turn
on the is only bit on a constraint that a child had inherited from a
previous parent, but that was obviously wrong now that I think about it
again.

 With this open question, this looks like it's back in Alvaro's hands 
 again to me.  This one started the CF as Ready for Committer and seems 
 stalled there for now.  I'm not going to touch its status, just pointing 
 this fact out.

Yeah.  Nikhil, Alex, this is the merged patch.  Have a look that it
still works for you (particularly the pg_dump bits) and I'll commit it.
I adjusted the regression test a bit too.

Thanks.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support


non_inh_check_v4.patch
Description: Binary data



Re: [HACKERS] patch for type privileges

2011-12-16 Thread Greg Smith

On 12/13/2011 01:13 PM, Yeb Havinga wrote:

On 2011-12-12 20:53, Peter Eisentraut wrote:

postgres=> create table a (a int2[]);
ERROR:  permission denied for type smallint[]

OK, that error message should be improved.


Fixing this is easy, but I'd like to look into refactoring this a bit.
Let's ignore that for now; it's easy to do later.


My experience with ignoring things for now is not positive.


This is my favorite comment from the current CommitFest.  I'll probably 
use that line at some point.


Yeb's list is now down to this and some documentation tweaking, so this
may very well be ready for commit (plus an Open Items entry pointing at the
"do later" part?)  I'm going to mark this one returned with feedback on the
assumption there is still some refactoring going on here though; I can
always change it if Peter sprints back with a commit candidate.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us




Re: [HACKERS] Allow substitute allocators for PGresult.

2011-12-16 Thread Greg Smith

On 12/01/2011 05:48 AM, Kyotaro HORIGUCHI wrote:

                                  xfer time   Peak RSS
Original                        : 6.02s       850MB
libpq patch + Original dblink   : 6.11s       850MB
full patch                      : 4.44s       643MB


These look like interesting results.  Currently Tom is listed as the 
reviewer on this patch, based on comments made before the CF really 
started.  And the patch has incorrectly been sitting in "Waiting for
author" for the last week; oops.  I'm not sure what to do with this
one now except raise a general call to see if anyone wants to take a 
look at it, now that it seems to be in good enough shape to deliver 
measurable results.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us




Re: [HACKERS] [PATCH] Caching for stable expressions with constant arguments v3

2011-12-16 Thread Jaime Casanova
On Fri, Dec 16, 2011 at 11:08 AM, Greg Smith g...@2ndquadrant.com wrote:
 That looks easy enough to try out now, thanks for the script.

 Jaime was interested in trying this out, and I hope he still finds time to
 do that soon.

Actually I tried some benchmarks with the original version of the
patch and saw some regression with normal pgbench runs, but it wasn't
much... so I was trying to find some queries that show a benefit.
Now that we have the script, that will be a lot easier.

-- 
Jaime Casanova         www.2ndQuadrant.com
Professional PostgreSQL: Soporte 24x7 y capacitación



Re: GiST for range types (was Re: [HACKERS] Range Types - typo + NULL string constructor)

2011-12-16 Thread Greg Smith

On 12/13/2011 04:04 PM, Alexander Korotkov wrote:
On Mon, Dec 12, 2011 at 10:41 PM, Jeff Davis pg...@j-davis.com 
mailto:pg...@j-davis.com wrote:


* There's a lot of code for range_gist_penalty. Rather than having
special cases for all combinations of properties in the new and
original,
is it possible to use something a little simpler? Maybe just start the
penalty at zero, and add something for each property of the predicate
range that must be changed. The penalties added might vary, e.g.,
if the
original range has an infinite lower bound, changing it to have an
infinite upper bound might be a higher penalty.

I believe it's possible to make it simpler. I've coded it quite 
intuitively. Probably we should select some representative datasets in 
order to determine by testing which logic is reasonable.


That seems to be a sticking point; you mentioned before that finding 
larger data sets useful for your purposes was hard.


I'm not sure where you'll find data fitting your needs here, but it 
seems difficult to validate all of what you've done so far without it.  
I'm going to mark this one returned and hope you can dig up something 
useful to nail this down.  You might also describe better what it is 
you're looking for and see if anyone else has a suggestion.
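
As a concrete illustration of the simpler scheme Jeff sketched above, the
logic could look something like this; the struct, the weights, and the
function name are assumptions chosen for readability, not code from the patch:

#include <stdbool.h>

typedef struct
{
	bool	empty;			/* range is empty */
	bool	lower_inf;		/* lower bound is -infinity */
	bool	upper_inf;		/* upper bound is +infinity */
} RangeProps;

/* Start at zero and add a weight for each property of the existing key
 * that admitting the new entry would force to change. */
static float
simple_range_penalty(RangeProps orig, RangeProps merged)
{
	float	penalty = 0.0f;

	if (merged.empty != orig.empty)
		penalty += 1.0f;
	if (merged.lower_inf && !orig.lower_inf)
		penalty += 1.0f;
	if (merged.upper_inf && !orig.upper_inf)
	{
		/* opening the upper bound when the lower bound is already infinite
		 * gets a higher weight, as suggested above */
		penalty += orig.lower_inf ? 2.0f : 1.0f;
	}
	return penalty;
}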


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us



Re: [HACKERS] pg_restore --no-post-data and --post-data-only

2011-12-16 Thread Greg Smith

On 12/12/2011 04:35 PM, Andrew Dunstan wrote:

Should all be fixed. Revised patch attached.


There were two successful test results here and only minor things noted 
to fix, which are all cleaned up now.  This seems ready for a committer 
now; I'm just not sure if you want to do it yourself or have someone 
else take a last look over it instead.
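
For anyone who hasn't followed the thread, the intended usage looks roughly
like this; the option names come from the subject line, and "post-data" has
its usual pg_dump meaning (indexes, constraints, triggers):

# restore everything except the post-data section
pg_restore --no-post-data -d target_db backup.dump
# then restore only the post-data section
pg_restore --post-data-only -d target_db backup.dump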


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us




Re: [HACKERS] pg_restore --no-post-data and --post-data-only

2011-12-16 Thread Andrew Dunstan



On 12/16/2011 02:43 PM, Greg Smith wrote:

On 12/12/2011 04:35 PM, Andrew Dunstan wrote:

Should all be fixed. Revised patch attached.


There were two successful test results here and only minor things 
noted to fix, which are all cleaned up now.  This seems ready for a 
committer now; I'm just not sure if you want to do it yourself or have 
someone else take a last look over it instead.



I'll do it myself if nobody else wants to comment.

cheers

andrew



Re: [HACKERS] Inlining comparators as a performance optimisation

2011-12-16 Thread Greg Smith
I think we can call a new sorting infrastructure popping out, plus what 
looks to be over 90 messages on this topic, successful progress on 
this front.  Peter's going to rev a new patch, but with more performance 
results to review and followup discussion I can't see this one as 
wrapping for the current CommitFest.  Marking it returned and we'll 
return to this topic during or before the next CF.  With the majority of 
the comments here coming from committers, I think continuing progress in 
that direction won't be a problem.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us




Re: [HACKERS] CommitFest 2011-11 Update

2011-12-16 Thread Greg Smith
It's time for another one of these again.  We're now just past the 
nominal 1 month goal of each CommitFest, and as you can obviously see 
from my list traffic I'm trying to close things up.


One of the CF goals is to give everyone who submits something a fair 
review.  There are a few patches that haven't gotten any full review yet.


Collect frequency statistics and selectivity estimation for arrays:  
this area is tough to find help on.  I've been in touch with Nathan 
Boley recently and he does intend to get to this one soon.  Not chasing 
this down earlier is my bad.  If anyone else is interested in this area, 
please let me know.  I hope we get something soon from Nathan, but would 
like to have a backup plan in case he continues to be too busy to work 
on it.


Neither of the two pg_stat_statements enhancement patches has gotten a 
review yet.  The one Peter G has been working on has gotten regular 
updates during the CF but hasn't been reviewed yet.  Daniel Farina has 
only had a few days where it was in a state he could review against the 
last update, and there have been system updates at Heroku keeping him away 
during most of that.  This I intend to keep nudging forward myself.


A fourth patch is in better shape but hasn't been treated well yet:

"Allow substitute allocator for PGresult":  Due to some mishandling on 
my side this didn't make it through a second pass of review yet, so its 
state is a bit unknown still.


Then there are also another 5 patches that have been updated recently 
enough they might be ready to commit, but we're still looking for a 
final review on them first.  They are:


pgsql_fdw contrib module and the related Join push-down for foreign 
tables.  I don't have a good feel for the order these two need to be 
applied in, or exactly where they stand right now.


Fix Leaky Views Problem, again:  Robert is planning to look at this more

avoid taking two snapshots per query:  Dimitri is done with this one 
now.  I think that moves this toward Ready for Committer, which in 
theory Robert could do himself now.  Playing around with this area seems 
complicated enough that it would be nice to have another committer look 
at this though.


splitting plpython into smaller parts:  This was still being worked on 
by Peter E as of yesterday, so I'm assuming it's still viable to commit 
soon.


On a positive note, I seem to have unstuck two of the long running 
arguments this week.  The "unite recovery.conf and postgresql.conf" discussion 
has a potential path forward again.  I don't think we can just forget about 
executing on that until the last minute before the final CommitFest.  
Nailing down who is working on each of those moving parts is something 
we should do, but that seems unlikely to move until the other CF work is 
closed up.


The pg_last_xact_insert_timestamp thread has taken an odd turn from ready to 
commit back to receiving a set of alternate proposals.  The "primary 
keepalive" patch and the "archive_keepalive_command" design proposal from 
Simon seem, as a pair, to add up to something that addresses the concerns I 
raised about computations from a single system and differences between 
transaction and WAL timing.  I'm finding myself on the hook for this 
area, so I'll try to help keep it from stalling again.


--
Greg Smith   2ndQuadrant USg...@2ndquadrant.com   Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support  www.2ndQuadrant.us




Re: [HACKERS] WIP: cross column stats revisited ...

2011-12-16 Thread Jim Nasby
On Nov 30, 2011, at 2:38 PM, Robert Haas wrote:
 2011/9/13 PostgreSQL - Hans-Jürgen Schönig postg...@cybertec.at:
 hello everybody,
 
 here is the next version of the cross column patch.
 in the meantime zoli and i managed to make the cross column sampling work.
 some prototype syntax is already working and we are able to store cross 
 column data.
 next on the list is further planner integration and finally some support for 
 joins (which is the main point of our work).
 
 feedback is highly welcome ;).
 
 Did this get added to a CommitFest at any point?  If not, and you
 still want feedback, please add it here:
 
 https://commitfest.postgresql.org/action/commitfest_view/open

Sorry if this is just noise, but I don't see this on the current commit fest. 
Did this get pulled in somewhere? I'd hate to see this ball get dropped...
--
Jim C. Nasby, Database Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net





Re: [HACKERS] Review: Non-inheritable check constraints

2011-12-16 Thread Alex Hunsaker
On Fri, Dec 16, 2011 at 12:06, Alvaro Herrera
alvhe...@commandprompt.com wrote:

 Yeah.  Nikhil, Alex, this is the merged patch.  Have a look that it
 still works for you (particularly the pg_dump bits) and I'll commit it.
 I adjusted the regression test a bit too.

Other than the version checks seeming to be off by one, this looks fine. I
assume I/we missed that in the original patch. I also adjusted the
version check in describe.c to be consistent with the other version
checks in that file (>= 90200 instead of > 90100).

(Also, nice catch on false AS as r.conisonly in describe.c)

--

*** a/src/bin/pg_dump/pg_dump.c
--- b/src/bin/pg_dump/pg_dump.c
*************** getTableAttrs(TableInfo *tblinfo, int numTables)
*** 5996,6003 ****
  						  tbinfo->dobj.name);
  
  		resetPQExpBuffer(q);
! 		if (g_fout->remoteVersion >= 90100)
  		{
  			appendPQExpBuffer(q, "SELECT tableoid, oid, conname, "
  							  "pg_catalog.pg_get_constraintdef(oid) AS consrc, "
  							  "conislocal, convalidated, conisonly "
--- 5996,6004 ----
  						  tbinfo->dobj.name);
  
  		resetPQExpBuffer(q);
! 		if (g_fout->remoteVersion >= 90200)
  		{
+ 			/* conisonly is new in 9.2 */
  			appendPQExpBuffer(q, "SELECT tableoid, oid, conname, "
  							  "pg_catalog.pg_get_constraintdef(oid) AS consrc, "
  							  "conislocal, convalidated, conisonly "
*************** getTableAttrs(TableInfo *tblinfo, int numTables)
*** 6007,6012 ****
--- 6008,6026 ----
  							  "ORDER BY conname",
  							  tbinfo->dobj.catId.oid);
  		}
+ 		else if (g_fout->remoteVersion >= 90100)
+ 		{
+ 			/* conisvalidated is new in 9.1 */
+ 			appendPQExpBuffer(q, "SELECT tableoid, oid, conname, "
+ 							  "pg_catalog.pg_get_constraintdef(oid) AS consrc, "
+ 							  "conislocal, convalidated, "
+ 							  "false as conisonly "
+ 							  "FROM pg_catalog.pg_constraint "
+ 							  "WHERE conrelid = '%u'::pg_catalog.oid "
+ 							  "   AND contype = 'c' "
+ 							  "ORDER BY conname",
+ 							  tbinfo->dobj.catId.oid);
+ 		}
  		else if (g_fout->remoteVersion >= 80400)
  		{
  			appendPQExpBuffer(q, "SELECT tableoid, oid, conname, "
*** a/src/bin/psql/describe.c
--- b/src/bin/psql/describe.c
*************** describeOneTableDetails(const char *schemaname,
*** 1783,1789 ****
  		{
  			char	   *is_only;
  
! 			if (pset.sversion > 90100)
  				is_only = "r.conisonly";
  			else
  				is_only = "false AS conisonly";
--- 1783,1789 ----
  		{
  			char	   *is_only;
  
! 			if (pset.sversion >= 90200)
  				is_only = "r.conisonly";
  			else
  				is_only = "false AS conisonly";


Re: [HACKERS] Review: Non-inheritable check constraints

2011-12-16 Thread Alvaro Herrera

Excerpts from Alex Hunsaker's message of vie dic 16 17:50:12 -0300 2011:
 
 On Fri, Dec 16, 2011 at 12:06, Alvaro Herrera
 alvhe...@commandprompt.com wrote:
 
  Yeah.  Nikhil, Alex, this is the merged patch.  Have a look that it
  still works for you (particularly the pg_dump bits) and I'll commit it.
  I adjusted the regression test a bit too.
 
 Other than the version checks seem to be off by one looks fine. I
 assume I/we missed that in the original patch. I also adjusted the
 version check in describe.c to be consistent with the other version
 checks in that file (>= 90200 instead of > 90100).

Uhm ... you're right that convalidated is present in 9.1 but AFAIR it's
only used for FKs, not CHECKs which is what this code path is about (for
CHECKs I only introduced it in 9.2, which is the patch that caused the
merge conflict in the first place).  FKs use a completely separate path
in pg_dump which doesn't need the separate convalidated check.  So I
don't think we really need to add a separate branch for 9.1 here, but it
certainly needs a comment improvement.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] Review: Non-inheritable check constraints

2011-12-16 Thread Alex Hunsaker
On Fri, Dec 16, 2011 at 14:01, Alvaro Herrera
alvhe...@commandprompt.com wrote:

 Excerpts from Alex Hunsaker's message of vie dic 16 17:50:12 -0300 2011:

 On Fri, Dec 16, 2011 at 12:06, Alvaro Herrera
 alvhe...@commandprompt.com wrote:

  Yeah.  Nikhil, Alex, this is the merged patch.  Have a look that it
  still works for you (particularly the pg_dump bits) and I'll commit it.
  I adjusted the regression test a bit too.

 Other than the version checks seem to be off by one looks fine.

 Uhm ... you're right that convalidated is present in 9.1 [...] So I
 don't think we really need to add a separate branch for 9.1 here, but it
 certainly needs a comment improvement.

Hrm... What am I missing?

$ inh_v4/bin/psql -c 'select version();' -d test
                                    version
---------------------------------------------------------------------
 PostgreSQL 9.1.0 on x86_64-unknown-linux-gnu, compiled by gcc (GCC)
4.6.1 20110819 (prerelease), 64-bit
(1 row)

$ inh_v4/bin/pg_dump test
pg_dump: SQL command failed
pg_dump: Error message from server: ERROR:  column "conisonly" does not exist
LINE 1: ...aintdef(oid) AS consrc, conislocal, convalidated, conisonly ...
                                                             ^
pg_dump: The command was: SELECT tableoid, oid, conname,
pg_catalog.pg_get_constraintdef(oid) AS consrc, conislocal,
convalidated, conisonly FROM pg_catalog.pg_constraint WHERE conrelid =
'237964'::pg_catalog.oid   AND contype = 'c' ORDER BY conname



Re: [HACKERS] WIP: cross column stats revisited ...

2011-12-16 Thread Boszormenyi Zoltan
Hi,

2011-12-16 21:42 keltezéssel, Jim Nasby írta:
 On Nov 30, 2011, at 2:38 PM, Robert Haas wrote:
 2011/9/13 PostgreSQL - Hans-Jürgen Schönig postg...@cybertec.at:
 hello everybody,

 here is the next version of the cross column patch.
 in the meantime zoli and i managed to make the cross column sampling work.
 some prototype syntax is already working and we are able to store cross 
 column data.
 next on the list is further planner integration and finally some support 
 for joins (which is the main point of our work).

 feedback is highly welcome ;).
 Did this get added to a CommitFest at any point?  If not, and you
 still want feedback, please add it here:

 https://commitfest.postgresql.org/action/commitfest_view/open
 Sorry if this is just noise, but I don't see this on the current commit fest. 
 Did this get pulled in somewhere? I'd hate to see this ball get dropped...

the last published patch doesn't apply to current git without rejects;
we are working on finishing it.

Best regards,
Zoltán Böszörményi

-- 
--
Zoltán Böszörményi
Cybertec Schönig  Schönig GmbH
Gröhrmühlgasse 26
A-2700 Wiener Neustadt, Austria
Web: http://www.postgresql-support.de
 http://www.postgresql.at/




Re: [HACKERS] JSON for PG 9.2

2011-12-16 Thread Daniel Farina
On Tue, Dec 13, 2011 at 1:13 PM, Merlin Moncure mmonc...@gmail.com wrote:
 On Tue, Dec 13, 2011 at 2:41 PM, Peter Eisentraut pete...@gmx.net wrote:
 On tis, 2011-12-13 at 08:44 -0500, Robert Haas wrote:
 Just because all our languages are Turing-complete doesn't mean they
 are all equally well-suited to every task.  Of course, that doesn't
 mean we'd add a whole new language just to get a JSON parser, but I
 don't think that's really what Peter was saying.

 That was in fact what I was saying.

 Rather, I think the
 point is that embedded Javascript is *extremely* popular, lots and
 lots of people are supporting it, and we ought to seriously consider
 doing the same.  It's hard to think of another PL that we could add
 that would give us anywhere near the bang for the buck that Javascript
 would.

 If JavaScript (trademark of Oracle, btw.; be careful about calling
 anything PL/JavaScript) had a near-canonical implementation with a
 stable shared library and a C API, then this might be a no-brainer.  But
 instead we have lots of implementations, and the one being favored here
 is written in C++ and changes the soname every 3 months.  I don't think
 that's the sort of thing we want to carry around.

 Mozilla SpiderMonkey seems like a good fit: it compiles to a
 dependency free .so, has excellent platform support, has a stable C
 API, and while it's C++ internally makes no use of exceptions (in
 fact, it turns them off in the c++ compiler).  ISTM to be a suitable
 foundation for an external module, 'in core' parser, pl, or anything
 really.

To the best of my knowledge:

libv8 is also exception-free, and compiled with exceptions off.  plv8
does make use of exceptions, though, something that gave me pause when
reading it.  At first I thought it was to integrate with libv8, but
that did not seem to be the case, so it probably could learn to use
return codes instead.  libv8 also has a light dependency list:

ldd /usr/lib/libv8.so (/lib/ entries and linux omitted)

libicuuc.so.44 => /usr/lib/libicuuc.so.44 (0x7fc838459000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 
(0x7fc838151000)
libicudata.so.44 => /usr/lib/libicudata.so.44 (0x7fc836aed000)

So ICU and C++.

In addition, more projects have been successful in embedding libv8;
right now it has the entrenchment advantage over libmozjs in
applications that are not closely tied to XUL/Mozilla, although that
could change in a few years.  Institutionally Mozilla has not
historically been quick to prioritize anything not essential to
shipping Firefox, and I would imagine V8 is in a similar situation,
even though they occasionally make concessions for non-browsing use
cases (ex: multi-gigabyte heap sizes).

I would regard either choice as at least equally risky in this way,
given what I know (refinements welcome).

Both libv8 and libmozjs are maintained in Debian, and are parts of at
least one stable release.

In spite of the hazard posed by the aggressive releases and
non-general-purpose focus of the maintainers of both of these runtimes
at this time, I am still in favor of having a binding to at least one
of them into mainline, with the ability to get new or alternative
versions via extensions.  If extensions were already pervasive and
everyone was installing them everywhere I'd think otherwise (just
leave it as an extension), but the cost of not being able to index and
manipulate JSON efficiently and with a trusted language is just too
huge to let slide.

Right now the perception of Postgres...actually, databases in general,
including virtually all of the newcomers -- is that they are
monolithic systems, and for most people either 9.3 will have
javascript and indexing of JSON documents, or it won't.  In most cases
I would say meh, let them eat cake until extensions become so
apparently dominant that we can wave someone aside to extension-land,
but in this case I think that would be a strategic mistake.

-- 
fdr



Re: [HACKERS] JSON for PG 9.2

2011-12-16 Thread Andrew Dunstan



On 12/16/2011 05:39 PM, Daniel Farina wrote:

To the best of my knowledge:

libv8 is also exception-free, and compiled with exceptions off.  plv8
does make use of exceptions, though, something that gave me pause when
reading it.  At first I thought it was to integrate with libv8, but
that did not seem to be the case, so it probably could learn to use
return codes instead.


Yeah. We should look at that.

cheers

andrew




Re: [HACKERS] Storing hot members of PGPROC out of the band

2011-12-16 Thread Bruce Momjian
Robert Haas wrote:
 On that theory, I'm inclined to think that's not really a problem.
 We'll go nuts if we refuse to commit anything until it shows a
 meaningful win on every imaginable workload, and it seems like this
 can't really be worse than the status quo; any case where it is must
 be some kind of artifact.  We're better off getting rid of as much
 ProcArrayLock contention as possible, rather than keeping it around
 because there are corner cases where it decreases contention on some
 other lock.

Interesting conclusion, and it makes sense.  Seems once this is applied
we will have more places to look for contention improvements.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] JSON for PG 9.2

2011-12-16 Thread Robert Haas
On Fri, Dec 16, 2011 at 12:13 PM, Joey Adams joeyadams3.14...@gmail.com wrote:
 So, if the server encoding is not UTF-8, should we ban Unicode escapes:

    \u00FCber

 or non-ASCII characters?

    über

The former.  Refusing the escapes makes sense, because it's totally
unclear how we ought to interpret them.  Refusing the characters would
be just breaking something for no particular reason.  Right now, for
example, EXPLAIN (FORMAT JSON) could easily end up returning non-ASCII
characters in whatever the database encoding happens to be.  That
command would be unusable if we arbitrarily chucked an error every
time a non-ASCII character showed up and the database encoding wasn't
UTF-8.
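
To make that concrete, a sketch of the behaviour being proposed, assuming a
non-UTF-8 server encoding such as LATIN1 (the type name is from this thread;
the exact errors are illustrative):

SELECT '"\u00fcber"'::json;   -- refused under the proposal: Unicode escapes banned outside UTF-8
SELECT '"über"'::json;        -- accepted: non-ASCII text passes through in the database encoding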

 Also:

  * What if the server encoding is SQL_ASCII?

  * What if the server encoding is UTF-8, but the client encoding is
 something else (e.g. SQL_ASCII)?

It's not clear to me why these cases would require any special handling.

In the spirit of Simon's suggestion that we JFDI, I cooked up a patch
today that JFDI.  See attached.  This lacks any form of
canonicalization and therefore doesn't support comparison operators.
It also lacks documentation, regression testing, and probably an
almost uncountable number of other bells and whistles that people
would like to have.  This is more or less a deliberate decision on my
part: I feel that the biggest problem with this project is that we've
spent far too much time dithering over what exactly the perfect set of 
functionality would be, and not enough time getting good basic 
functionality committed.  So this is as basic as it gets.  It does
exactly one thing: validation.  If people are happy with it, we can
extend from here incrementally.
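
A usage sketch of what that validation-only type gives you (the behaviour
described above; the table and values are made up):

CREATE TABLE events (payload json);
INSERT INTO events VALUES ('{"user": "alice", "visits": 3}');  -- accepted: valid JSON
INSERT INTO events VALUES ('{"user": "alice", visits: 3}');    -- rejected: fails validation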

Thoughts?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company


json-rmh-v1.patch
Description: Binary data



Re: [HACKERS] Storing hot members of PGPROC out of the band

2011-12-16 Thread Robert Haas
On Fri, Dec 16, 2011 at 8:25 PM, Bruce Momjian br...@momjian.us wrote:
 Robert Haas wrote:
 On that theory, I'm inclined to think that's not really a problem.
 We'll go nuts if we refuse to commit anything until it shows a
 meaningful win on every imaginable workload, and it seems like this
 can't really be worse than the status quo; any case where it is must
 be some kind of artifact.  We're better off getting rid of as much
 ProcArrayLock contention as possible, rather than keeping it around
 because there are corner cases where it decreases contention on some
 other lock.

 Interesting conclusion, and it makes sense.  Seems once this is applied
 we will have more places to look for contention improvements.

Yeah.  The performance results I posted the other day seem to show
that on some of these tests we're thrashing our CLOG buffers, and the
difference between unlogged tables and permanent tables seems to
indicate pretty clearly that WALInsertLock is a huge problem.  I'm
going to look more at the CLOG stuff next week, and also keep poking
at ProcArrayLock, where I think there's still room for further
improvement.  I am leaving WALInsertLock to Heikki for now, since (1)
I don't want to collide with what he's working on, (2) he knows more
about it than I do, anyway, and (3) it's a really hard problem and I
don't have any particularly good ideas about how to fix it.  :-(

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] ALTER TABLE lock strength reduction patch is unsafe

2011-12-16 Thread Robert Haas
On Fri, Dec 16, 2011 at 8:54 AM, Simon Riggs si...@2ndquadrant.com wrote:
 On Fri, Dec 16, 2011 at 1:38 PM, Robert Haas robertmh...@gmail.com wrote:
 On Fri, Dec 16, 2011 at 7:07 AM, Simon Riggs si...@2ndquadrant.com wrote:
 That is a bug and one we should fix. I supplied a patch for that,
 written to Tom's idea for how to solve it.

 I will apply that, unless there are objections.

 I remember several attempts at that, but I don't remember any that
 didn't meet with objections.  Do you have a link?

 My patch to implement SnapshotNow correctly, from Jun 27 on this
 thread was never reviewed or commented upon by anybody. That was
 probably because it only fixes one of the problems, not all of them.

Well, I think it was mostly because you didn't sound terribly
optimistic about the approach: "So maybe this idea is worth doing, but
I don't think it helps us much reduce lock levels for DDL."  And also
because you described the patch as "a rough hack", and not something
you thought ready to commit.

I am also not entirely sure I believe that this is plugging all the
failure cases.  I think that it may just be making the failure cases
more obscure, rather than really getting rid of them.  Consider
something like the following:

T1: Update row version A, creating new row version B.
T2: Begin scanning the catalog in question.  We happen to encounter
row version B first.  We remember T1's XID as in progress, but don't
see the row since T1 hasn't committed.
T1: Rollback.
T3: Update row version A, creating new row version C.
T3: Commit.
T2: Scan now comes to row version A; we don't see that version either,
since T3 is committed.

I don't think there's any guarantee that T2's scan will see tuples
inserted after the start of the scan.  If I'm correct about that, and
I'm pretty sure it's true for sequential scans anyway, then T2's scan
might end without seeing C either.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Storing hot members of PGPROC out of the band

2011-12-16 Thread Jim Nasby
On Dec 16, 2011, at 7:25 PM, Bruce Momjian wrote:
 Robert Haas wrote:
 On that theory, I'm inclined to think that's not really a problem.
 We'll go nuts if we refuse to commit anything until it shows a
 meaningful win on every imaginable workload, and it seems like this
 can't really be worse than the status quo; any case where it is must
 be some kind of artifact.  We're better off getting rid of as much
 ProcArrayLock contention as possible, rather than keeping it around
 because there are corner cases where it decreases contention on some
 other lock.
 
 Interesting conclusion, and it makes sense.  Seems once this is applied
 we will have more places to look for contention improvements.

I also wonder how much this throws some previous performance tests into 
suspicion. If it's not uncommon for performance improvement attempts to shift a 
bottleneck to a different part of the system and marginally hurt performance 
then we might be throwing away good performance improvement ideas before we 
should...
--
Jim C. Nasby, Database Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net


