Re: [HACKERS] pg_upgrade's exec_prog() coding improvement

2012-08-24 Thread Heikki Linnakangas

On 23.08.2012 23:07, Alvaro Herrera wrote:

One problem with this is that I get this warning:

/pgsql/source/HEAD/contrib/pg_upgrade/exec.c: In function ‘s_exec_prog’:
/pgsql/source/HEAD/contrib/pg_upgrade/exec.c:96:2: warning: function might be 
possible candidate for ‘gnu_printf’ format attribute 
[-Wmissing-format-attribute]
/pgsql/source/HEAD/contrib/pg_upgrade/exec.c:96:2: warning: function might be 
possible candidate for ‘gnu_printf’ format attribute 
[-Wmissing-format-attribute]

I have no idea how to silence that.  Ideas?


You can do what the warning suggests, and tell the compiler that 
exec_prog takes printf-like arguments. See elog.h for an example of that:


extern int
errmsg(const char *fmt,...)
/* This extension allows gcc to check the format string for consistency with
   the supplied arguments. */
__attribute__((format(PG_PRINTF_ATTRIBUTE, 1, 2)));

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com




Re: [HACKERS] 9.2RC1 wraps this Thursday ...

2012-08-24 Thread Magnus Hagander
On Fri, Aug 24, 2012 at 1:06 AM, Andrew Dunstan and...@dunslane.net wrote:

 On 08/23/2012 02:44 PM, Andrew Dunstan wrote:


 On 08/23/2012 01:58 PM, Tom Lane wrote:

 Andrew Dunstan and...@dunslane.net writes:

 On 08/23/2012 12:40 AM, Tom Lane wrote:

 Anybody who wants to fix it is surely welcome to, but I'm not going
 to consider this item as a reason to postpone RC1.

 I'm not sure what you want done. I can test Amit's patch in a couple of
 Windows environments (say XP+mingw and W7+MSVC) if that's what you need.

 Well, the patch as-is just adds another copy of code that really needs
 to be refactored into some new file in src/port/ or some such. That's
 not work I care to do while being unable to test the result ...




 OK, I'll see if I can carve out a bit of time.



 I have spent a couple of hours on this, and I'm sufficiently nervous about
 it that I'm not going to do anything in a hurry. I will see what can be done
 over the weekend, possibly, but no promises.

 TBH I'd rather stick with the less invasive approach of the original patch
 at this stage, and see about refactoring for 9.3.

+1.

While I haven't looked at the code specifically, these areas can be
quite fragile and very environment-dependent. I'd rather not upset them
this close to release - especially not after the RC wrap.

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/




Re: [HACKERS] Why does analyze_new_cluster.sh use sleep?

2012-08-24 Thread Bruce Momjian
On Thu, Aug 23, 2012 at 11:15:06PM -0400, Peter Eisentraut wrote:
 On Thu, 2012-08-23 at 17:05 -0400, Bruce Momjian wrote:
  On Thu, Aug 23, 2012 at 02:17:44AM -0400, Peter Eisentraut wrote:
   The script analyze_new_cluster.sh output by pg_upgrade contains several
   sleep calls (see contrib/pg_upgrade/check.c).  What is the point of
   this?  If the purpose of this script is to get the database operational
   again as soon as possible, waiting a few seconds doing nothing surely
   isn't helping.
   
   I could maybe see the point of waiting a bit between the different
   vacuumdb calls, to catch some breath, but the one before the first call
   to vacuumdb is highly dubious to me.
  
  The sleep is there so the user can read the status message, in case it
  scrolls off the screen once the next stage starts.
 
 That seems completely arbitrary and contrary to the point of the script.
 The pg_upgrade output already explains what the script is for.  If we
 really wanted the user to confirm what is going to happen, we should
 wait for a key press or something.  I also don't think that 2 seconds is
 enough to read and react to the written text.  Also, by that logic, we
 need to put a delay between each database processed by vacuumdb as well.

Well, the idea is that the script runs in stages, and your system is
mostly usable after the first stage is done.  I don't see a keypress as
helping there.  I think this is different from the vacuumdb case.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] Recently noticed documentation issues

2012-08-24 Thread Craig Ringer

On 08/24/2012 12:42 PM, Amit Kapila wrote:


Isn't what you are telling explained in Returning Cursors section on below
link:
http://www.postgresql.org/docs/9.1/static/plpgsql-cursors.html


Yes, but nowhere in:

  http://www.postgresql.org/docs/current/static/sql-fetch.html

and IMO not clearly enough in the PL/PgSQL docs; it only appears in 
sample code.


--
Craig Ringer




Re: [HACKERS] Recently noticed documentation issues

2012-08-24 Thread Amit Kapila
From: Craig Ringer [mailto:ring...@ringerc.id.au] 
Sent: Friday, August 24, 2012 6:59 PM
On 08/24/2012 12:42 PM, Amit Kapila wrote:

 Isn't what you are telling explained in Returning Cursors section on
below
 link:
 http://www.postgresql.org/docs/9.1/static/plpgsql-cursors.html

Yes, but nowhere in:

   http://www.postgresql.org/docs/current/static/sql-fetch.html

 and IMO not clearly enough in the PL/PgSQL docs; it only appears in 
 sample code.

  Ref-cursor usage goes together with functions, so mentioning this in the
  PL/PgSQL docs looks more logical to me.
  However, in the context of plain SQL statements FETCH can also use
  internally generated cursor names passed as quoted strings, so some
  description of that might be useful. Then again, I don't know how common
  that usage is, or whether FETCH is really meant to be used in such
  contexts.

With Regards,
Amit Kapila.







Re: [HACKERS] pg_upgrade's exec_prog() coding improvement

2012-08-24 Thread Tom Lane
Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
 On 23.08.2012 23:07, Alvaro Herrera wrote:
 One problem with this is that I get this warning:
 /pgsql/source/HEAD/contrib/pg_upgrade/exec.c: In function ‘s_exec_prog’:
 /pgsql/source/HEAD/contrib/pg_upgrade/exec.c:96:2: warning: function might 
 be possible candidate for ‘gnu_printf’ format attribute 
 [-Wmissing-format-attribute]
 /pgsql/source/HEAD/contrib/pg_upgrade/exec.c:96:2: warning: function might 
 be possible candidate for ‘gnu_printf’ format attribute 
 [-Wmissing-format-attribute]
 
 I have no idea how to silence that.  Ideas?

 You can do what the warning suggests, and tell the compiler that 
 exec_prog takes printf-like arguments.

exec_prog already has such decoration, and Alvaro's patch doesn't remove
it.  So the question is, exactly what the heck does that warning mean?

regards, tom lane




Re: [HACKERS] 9.2RC1 wraps this Thursday ...

2012-08-24 Thread Tom Lane
Magnus Hagander mag...@hagander.net writes:
 On Fri, Aug 24, 2012 at 1:06 AM, Andrew Dunstan and...@dunslane.net wrote:
 TBH I'd rather stick with the less invasive approach of the original patch
 at this stage, and see about refactoring for 9.3.

 +1.

 While I haven't looked at the code specifically, these areas can be
 quite fragile and very environment-dependent. I'd rather not upset it
 this close to release - especially not after RC wrap.

Fair enough.  Will one of you deal with the patch as-is, then?

regards, tom lane




Re: [HACKERS] pg_upgrade's exec_prog() coding improvement

2012-08-24 Thread Bruce Momjian
On Fri, Aug 24, 2012 at 10:08:58AM -0400, Tom Lane wrote:
 Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
  On 23.08.2012 23:07, Alvaro Herrera wrote:
  One problem with this is that I get this warning:
  /pgsql/source/HEAD/contrib/pg_upgrade/exec.c: In function ‘s_exec_prog’:
  /pgsql/source/HEAD/contrib/pg_upgrade/exec.c:96:2: warning: function might 
  be possible candidate for ‘gnu_printf’ format attribute 
  [-Wmissing-format-attribute]
  /pgsql/source/HEAD/contrib/pg_upgrade/exec.c:96:2: warning: function might 
  be possible candidate for ‘gnu_printf’ format attribute 
  [-Wmissing-format-attribute]
  
  I have no idea how to silence that.  Ideas?
 
  You can do what the warning suggests, and tell the compiler that 
  exec_prog takes printf-like arguments.
 
 exec_prog already has such decoration, and Alvaro's patch doesn't remove
 it.  So the question is, exactly what the heck does that warning mean?

It sounds like it is suggesting the use of a more specific attribute
decoration.  This might be relevant:

http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html

-Wmissing-format-attribute
Warn about function pointers that might be candidates for format
attributes. Note these are only possible candidates, not absolute ones.
GCC guesses that function pointers with format attributes that are used
in assignment, initialization, parameter passing or return statements
should have a corresponding format attribute in the resulting type. I.e.
the left-hand side of the assignment or initialization, the type of the
parameter variable, or the return type of the containing function
respectively should also have a format attribute to avoid the warning.

GCC also warns about function definitions that might be candidates
for format attributes. Again, these are only possible candidates. GCC
guesses that format attributes might be appropriate for any function
that calls a function like vprintf or vscanf, but this might not always
be the case, and some functions for which format attributes are
appropriate may not be detected. 

and I see this option enabled in configure.in.

-- 
  Bruce Momjian  br...@momjian.us        http://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +




Re: [HACKERS] plperl sigfpe reset can crash the server

2012-08-24 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 ./pod/perl581delta.pod:
 At startup Perl blocks the SIGFPE signal away since there isn't much
 Perl can do about it.  Previously this blocking was in effect also for
 programs executed from within Perl.  Now Perl restores the original
 SIGFPE handling routine, whatever it was, before running external
 programs.

So there's a gap in the restore logic someplace.

 perl.h also has some tidbits: ...
 That doesn't sound very well reasoned and especially not very well tested to 
 me.

Time to file a Perl bug?

regards, tom lane




Re: [HACKERS] plperl sigfpe reset can crash the server

2012-08-24 Thread Andres Freund
On Friday, August 24, 2012 04:53:36 PM Tom Lane wrote:
 Andres Freund and...@2ndquadrant.com writes:
  ./pod/perl581delta.pod:
  At startup Perl blocks the SIGFPE signal away since there isn't much
  Perl can do about it.  Previously this blocking was in effect also for
  programs executed from within Perl.  Now Perl restores the original
  SIGFPE handling routine, whatever it was, before running external
  programs.
 
 So there's a gap in the restore logic someplace.

Well, the logic is not triggering at all in pg's case. It's just used if perl 
is exec()ing something...


  perl.h also has some tidbits: ...
  That doesn't sound very well reasoned and especially not very well tested
  to me.
 
 Time to file a Perl bug?
Anybody more involved in the perl community volunteering?

Greetings,

Andres
-- 
 Andres Freund http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] pg_upgrade's exec_prog() coding improvement

2012-08-24 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 It sounds like it is suggestioning to use more specific attribute
 decoration.  This might be relevant:

   http://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html

   -Wmissing-format-attribute
   Warn about function pointers that might be candidates for format
   attributes. Note these are only possible candidates, not absolute ones.
   GCC guesses that function pointers with format attributes that are used
   in assignment, initialization, parameter passing or return statements
   should have a corresponding format attribute in the resulting type. I.e.
   the left-hand side of the assignment or initialization, the type of the
   parameter variable, or the return type of the containing function
   respectively should also have a format attribute to avoid the warning.

   GCC also warns about function definitions that might be candidates
   for format attributes. Again, these are only possible candidates. GCC
   guesses that format attributes might be appropriate for any function
   that calls a function like vprintf or vscanf, but this might not always
   be the case, and some functions for which format attributes are
   appropriate may not be detected. 

 and I see this option enabled in configure.in.

Hm.  I'm a bit dubious about enabling warnings that are so admittedly
heuristic as this.  They admit that the warnings might be wrong, and
yet document no way to shut them up.  I think we might be better advised
to not enable this warning.

regards, tom lane




Re: [HACKERS] pg_upgrade's exec_prog() coding improvement

2012-08-24 Thread Alvaro Herrera
Actually it seems better to just get rid of the extra varargs function
and just have a single exec_prog.  The occasional extra NULL argument is
not troublesome enough to justify keeping two functions, it seems.
Updated version attached.

Again, win32 testing would be welcome.  Sadly, the buildfarm does not run
pg_upgrade's make check.

-- 
Álvaro Herrera                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services


upgrade_execprog-2.patch
Description: Binary data



Re: [HACKERS] Statistics and selectivity estimation for ranges

2012-08-24 Thread Heikki Linnakangas

On 20.08.2012 00:31, Alexander Korotkov wrote:

On Thu, Aug 16, 2012 at 4:40 PM, Heikki Linnakangas
heikki.linnakan...@enterprisedb.com  wrote:


On 15.08.2012 11:34, Alexander Korotkov wrote:


Ok, we've to decide if we need standard histogram. In some cases it can
be used for more accurate estimation of < and > operators.
But I think it is not so important. So, we can replace standard
histogram
with histograms of lower and upper bounds?



Yeah, I think that makes more sense. The lower bound histogram is still
useful for < and > operators, just not as accurate if there are lots of
values with the same lower bound but different upper bound.



New version of patch.
* Collect new stakind STATISTIC_KIND_BOUNDS_HISTOGRAM, which is lower and
upper bounds histograms combined into single ranges array, instead
of STATISTIC_KIND_HISTOGRAM.


One worry I have about that format for the histogram is that you 
deserialize all the values in the histogram, before you do the binary 
searches. That seems expensive if stats target is very high. I guess you 
could deserialize them lazily to alleviate that, though.
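
For readers unfamiliar with how such bound histograms get used: conceptually, a 
comparison selectivity falls out of a binary search over the sorted bounds plus 
linear interpolation inside the matching bin, much as scalarineqsel does for 
ordinary scalar columns. A minimal standalone sketch, simplified to plain 
doubles (all names here are hypothetical, not taken from the patch):

#include <stdio.h>

/*
 * Illustrative sketch only, not the patch's code: how a sorted histogram of
 * nhist bound values yields a selectivity fraction for "bound < constval"
 * via binary search plus linear interpolation inside the matching bin.  The
 * real code works with RangeBound values and range_cmp_bounds(), not plain
 * doubles, and has to cope with equal and infinite bounds.
 */
static double
hist_fraction_below(const double *hist, int nhist, double constval)
{
	int			lo = 0,
				hi = nhist - 1;

	if (constval <= hist[0])
		return 0.0;
	if (constval >= hist[nhist - 1])
		return 1.0;

	/* find the bin [hist[lo], hist[lo + 1]) containing constval */
	while (lo < hi - 1)
	{
		int			mid = (lo + hi) / 2;

		if (hist[mid] <= constval)
			lo = mid;
		else
			hi = mid;
	}

	/* whole bins below the bin, plus linear interpolation inside it */
	return (lo + (constval - hist[lo]) / (hist[lo + 1] - hist[lo]))
		/ (double) (nhist - 1);
}

int
main(void)
{
	double		lower_hist[] = {0, 10, 20, 40, 80};	/* pretend lower bounds */

	printf("%.3f\n", hist_fraction_below(lower_hist, 5, 30.0));	/* 0.625 */
	return 0;
}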



* Selectivity estimations for <, <=, >, >= using this
histogram.


Thanks!

I'm going to do the same for this that I did for the sp-gist patch, and 
punt on the more complicated parts for now, and review them separately. 
Attached is a heavily edited version that doesn't include the length 
histogram, and consequently doesn't do anything smart for the @> and <@ 
operators. && is estimated using the bounds histograms. There's now a 
separate stakind for the empty range fraction, since it's not included 
in the length histogram.


I tested this on a dataset containing birth and death dates of persons 
that have a wikipedia page, obtained from the dbpedia.org project. I can 
send a copy if someone wants it. The estimates seem pretty accurate.


Please take a look, to see if I messed up something.

--
  Heikki Linnakangas
  EnterpriseDB   http://www.enterprisedb.com
diff --git a/src/backend/utils/adt/Makefile b/src/backend/utils/adt/Makefile
index a692086..a929f4a 100644
--- a/src/backend/utils/adt/Makefile
+++ b/src/backend/utils/adt/Makefile
@@ -30,7 +30,8 @@ OBJS = acl.o arrayfuncs.o array_selfuncs.o array_typanalyze.o \
 	tsginidx.o tsgistidx.o tsquery.o tsquery_cleanup.o tsquery_gist.o \
 	tsquery_op.o tsquery_rewrite.o tsquery_util.o tsrank.o \
 	tsvector.o tsvector_op.o tsvector_parser.o \
-	txid.o uuid.o windowfuncs.o xml.o rangetypes_spgist.o
+	txid.o uuid.o windowfuncs.o xml.o rangetypes_spgist.o \
+	rangetypes_typanalyze.o rangetypes_selfuncs.o
 
 like.o: like.c like_match.c
 
diff --git a/src/backend/utils/adt/rangetypes.c b/src/backend/utils/adt/rangetypes.c
index fe9e0c4..f229a9d 100644
--- a/src/backend/utils/adt/rangetypes.c
+++ b/src/backend/utils/adt/rangetypes.c
@@ -1228,23 +1228,6 @@ hash_range(PG_FUNCTION_ARGS)
 	PG_RETURN_INT32(result);
 }
 
-/* ANALYZE support */
-
-/* typanalyze function for range datatypes */
-Datum
-range_typanalyze(PG_FUNCTION_ARGS)
-{
-	/*
-	 * For the moment, just punt and don't analyze range columns.  If we get
-	 * close to release without having a better answer, we could consider
-	 * letting std_typanalyze do what it can ... but those stats are probably
-	 * next door to useless for most activity with range columns, so it's not
-	 * clear it's worth gathering them.
-	 */
-	PG_RETURN_BOOL(false);
-}
-
-
 /*
  *--
  * CANONICAL FUNCTIONS
diff --git a/src/backend/utils/adt/rangetypes_selfuncs.c b/src/backend/utils/adt/rangetypes_selfuncs.c
new file mode 100644
index 0000000..ebfc427
--- /dev/null
+++ b/src/backend/utils/adt/rangetypes_selfuncs.c
@@ -0,0 +1,560 @@
+/*-
+ *
+ * rangetypes_selfuncs.c
+ *	  Functions for selectivity estimation of range operators
+ *
+ * Estimates are based on histograms of lower and upper bounds.
+ *
+ * Portions Copyright (c) 1996-2012, PostgreSQL Global Development Group
+ * Portions Copyright (c) 1994, Regents of the University of California
+ *
+ *
+ * IDENTIFICATION
+ *	  src/backend/utils/adt/rangetypes_selfuncs.c
+ *
+ *-
+ */
+#include "postgres.h"
+
+#include <math.h>
+
+#include "catalog/pg_operator.h"
+#include "catalog/pg_statistic.h"
+#include "utils/lsyscache.h"
+#include "utils/rangetypes.h"
+#include "utils/selfuncs.h"
+#include "utils/typcache.h"
+
+static double calc_hist_selectivity(TypeCacheEntry *typcache,
+	VariableStatData *vardata, RangeType *constval, Oid operator);
+static double calc_hist_selectivity_scalar(TypeCacheEntry *typcache,
+			 RangeBound *constbound,
+			 RangeBound *hist, int hist_nvalues,
+			 bool equal);
+
+static int range_bsearch(TypeCacheEntry *typcache, RangeBound *value,
+			  RangeBound *hist, int hist_length, bool equal);
+static double calc_rangesel(TypeCacheEntry *typcache, VariableStatData *vardata,

Re: [HACKERS] plperl sigfpe reset can crash the server

2012-08-24 Thread Andrew Dunstan


On 08/24/2012 10:58 AM, Andres Freund wrote:

On Friday, August 24, 2012 04:53:36 PM Tom Lane wrote:

Andres Freund and...@2ndquadrant.com writes:

./pod/perl581delta.pod:
At startup Perl blocks the SIGFPE signal away since there isn't much
Perl can do about it.  Previously this blocking was in effect also for
programs executed from within Perl.  Now Perl restores the original
SIGFPE handling routine, whatever it was, before running external
programs.

So there's a gap in the restore logic someplace.

Well, the logic is not triggering at all in pg's case. Its just used if perl
is exec()ing something...



perl.h also has some tidbits: ...
That doesn't sound very well reasoned and especially not very well tested
to me.

Time to file a Perl bug?

Anybody more involved in the perl community volunteering?



Just run perlbug and let us know the bug number.

cheers

andrew






Re: [HACKERS] default_isolation_level='serializable' crashes on Windows

2012-08-24 Thread Tom Lane
Kevin Grittner kevin.gritt...@wicourts.gov writes:
 Tom Lane t...@sss.pgh.pa.us wrote:
 Lastly, I simplified the added code in InitPostgres down to just a
 bare assignment to XactIsoLevel.  It doesn't seem worthwhile to
 add all the cycles involved in SetConfigOption(), when we have no
 desire to change the GUC permanently.  (I think Kevin's code was
 wrong anyway in that it was using PGC_S_OVERRIDE, which would
 impact the reset state for the GUC.)
 
 Point taken on PGC_S_OVERRIDE.  And that probably fixes the issue
 that caused me to hold up when I was about ready to pull the trigger
 this past weekend.  A final round of testing showed a SET line on
 psql start, which is clearly wrong.  I suspected that I needed to go
 to a lower level in setting that, but hadn't had a chance to sort
 out just what the right path was.  In retrospect, just directly
 assigning the value seems pretty obvious.

Hm, I'm not sure why using SetConfigOption() would result in anything
happening client-side.  (If this GUC were GUC_REPORT, that would result
in a parameter-change report to the client; but it isn't, and anyway
that shouldn't cause psql to print SET.)  Might be worth looking
closer at what was happening there.
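
For context, the difference under discussion is roughly the following; this is
a sketch, not the committed code, and the particular isolation value and
SetConfigOption() arguments shown are only illustrative:

/*
 * Sketch only -- not the committed change; the value assigned and the
 * SetConfigOption() arguments are illustrative.  The point: a bare
 * assignment changes only the current value, while SetConfigOption()
 * with PGC_S_OVERRIDE also moves the GUC's reset source/value, which is
 * what caused trouble here.
 */

	/* bare assignment, per Tom's description of the committed fix */
	XactIsoLevel = XACT_READ_COMMITTED;

	/* versus going through the GUC machinery */
	SetConfigOption("default_transaction_isolation", "read committed",
					PGC_POSTMASTER, PGC_S_OVERRIDE);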

 Kevin, do you want to apply it?  You had mentioned wanting some
 practice with back-patches.
 
 I'm getting on a plane to Istanbul in less than 48 hours for the
 VLDB conference, and scrambling to tie up loose ends.

OK, I've committed it.

regards, tom lane




[HACKERS] PATCH: pgbench - random sampling of transaction written into log

2012-08-24 Thread Tomas Vondra
Hi,

attached is a patch that adds support for random sampling in pgbench when
it's executed with the -l flag. You can do for example this:

  $ pgbench -l -T 120 -R 1 db

and then only 1% of transactions will be written into the log file. If you
omit the -R option, all the transactions are written (i.e. it's backward
compatible).

Recently I've been using pgbench on hardware that can handle a lot of
transactions (various flavors of SSDs, lots of RAM, ...), and in such
cases the log files may get pretty big (even several gigabytes for a
single 1-hour run). A reasonable random sample (along with the stats
provided by pgbench itself) is often more than enough in such cases.

kind regards
Tomas

diff --git a/contrib/pgbench/pgbench.c b/contrib/pgbench/pgbench.c
index 00cab73..e849b2b 100644
--- a/contrib/pgbench/pgbench.c
+++ b/contrib/pgbench/pgbench.c
@@ -145,6 +145,9 @@ char	   *index_tablespace = NULL;
 #define naccounts	100000
 
 bool		use_log;			/* log transaction latencies to a file */
+bool		use_log_sampling;	/* sample the log randomly */
+int			nsample_rate = 100;	/* default log sampling rate */
+
 bool		is_connect;			/* establish connection for each transaction */
 bool		is_latencies;		/* report per-command latencies */
 int			main_pid;			/* main process id used in log filename */
@@ -364,6 +367,7 @@ usage(void)
 		   "  -f FILENAME  read transaction script from FILENAME\n"
 		   "  -j NUM       number of threads (default: 1)\n"
 		   "  -l           write transaction times to log file\n"
+		   "  -R NUM       log sampling rate in pct (default: 100)\n"
 		   "  -M simple|extended|prepared\n"
 		   "               protocol for submitting queries to server (default: simple)\n"
 		   "  -n           do not run VACUUM before tests\n"
@@ -877,21 +881,25 @@ top:
 			instr_time	diff;
 			double		usec;
 
-			INSTR_TIME_SET_CURRENT(now);
-			diff = now;
-			INSTR_TIME_SUBTRACT(diff, st->txn_begin);
-			usec = (double) INSTR_TIME_GET_MICROSEC(diff);
+			/* either no sampling or is within the sample */
+			if ((! use_log_sampling) || (rand() % 100 < nsample_rate)) {
+
+				INSTR_TIME_SET_CURRENT(now);
+				diff = now;
+				INSTR_TIME_SUBTRACT(diff, st->txn_begin);
+				usec = (double) INSTR_TIME_GET_MICROSEC(diff);
 
 #ifndef WIN32
-			/* This is more than we really ought to know about instr_time */
-			fprintf(logfile, "%d %d %.0f %d %ld %ld\n",
-					st->id, st->cnt, usec, st->use_file,
-					(long) now.tv_sec, (long) now.tv_usec);
+				/* This is more than we really ought to know about instr_time */
+				fprintf(logfile, "%d %d %.0f %d %ld %ld\n",
+						st->id, st->cnt, usec, st->use_file,
+						(long) now.tv_sec, (long) now.tv_usec);
 #else
-			/* On Windows, instr_time doesn't provide a timestamp anyway */
-			fprintf(logfile, "%d %d %.0f %d 0 0\n",
-					st->id, st->cnt, usec, st->use_file);
+				/* On Windows, instr_time doesn't provide a timestamp anyway */
+				fprintf(logfile, "%d %d %.0f %d 0 0\n",
+						st->id, st->cnt, usec, st->use_file);
 #endif
+			}
 		}
 
 		if (commands[st->state]->type == SQL_COMMAND)
@@ -1962,7 +1970,7 @@ main(int argc, char **argv)
 	state = (CState *) xmalloc(sizeof(CState));
 	memset(state, 0, sizeof(CState));
 
-	while ((c = getopt_long(argc, argv, "ih:nvp:dSNc:j:Crs:t:T:U:lf:D:F:M:", long_options, &optindex)) != -1)
+	while ((c = getopt_long(argc, argv, "ih:nvp:dSNc:j:Crs:t:T:U:lf:R:D:F:M:", long_options, &optindex)) != -1)
 	{
 		switch (c)
 		{
@@ -2070,6 +2078,15 @@ main(int argc, char **argv)
 			case 'l':
 				use_log = true;
 				break;
+			case 'R':
+				use_log_sampling = true;
+				nsample_rate = atoi(optarg);
+				if (nsample_rate <= 0 || nsample_rate > 100)
+				{
+					fprintf(stderr, "invalid sampling rate: %d\n", nsample_rate);
+

[HACKERS] PATCH: pgbench - aggregation of info written into log

2012-08-24 Thread Tomas Vondra
Hi,

this patch adds support for aggregating the info written into the log.
Instead of one line per transaction, a summary for time intervals (of
configurable length) is written into the log. All you need to do is add -A
seconds, e.g.

  $ pgbench -T 3600 -A 10 -l db

which will produce a log with 10-second summaries, each containing the
interval start timestamp, number of transactions, sum of latencies, sum of
squared latencies, and min and max latency (it's done this way to allow
combining the multiple logs produced by the -j option).
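
The reason for logging plain sums rather than a mean and variance directly is
that such partial aggregates from the per-thread log files can simply be added
together before deriving the statistics. A sketch of the post-processing (a
hypothetical helper, not part of the patch):

#include <math.h>
#include <stdio.h>

/*
 * Hypothetical post-processing helper, not part of the patch: given the
 * per-interval counters the patch logs -- transaction count, sum of
 * latencies and sum of squared latencies -- recover the mean and standard
 * deviation.  Because these are plain sums, counters from several -j log
 * files can be added together before calling this, which is exactly why
 * the patch stores them in this form.
 */
static void
report_interval(long start, int cnt, double sum, double sum2,
				double min, double max)
{
	double		mean = sum / cnt;
	double		stddev = sqrt(sum2 / cnt - mean * mean);

	printf("%ld: %d xacts, avg %.1f us, stddev %.1f us, min %.1f, max %.1f\n",
		   start, cnt, mean, stddev, min, max);
}

int
main(void)
{
	/* made-up numbers: latencies of 700, 800 and 1500 us in one interval */
	report_interval(1345766400L, 3, 3000.0, 3380000.0, 700.0, 1500.0);
	return 0;
}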

This patch is a bit less polished (and more complex) than the other
pgbench patch I sent a while back, and I'm not sure how to handle the
Windows branch. That needs to be fixed during the commitfest.

kind regards
Tomas

diff --git a/contrib/pgbench/pgbench.c b/contrib/pgbench/pgbench.c
index 00cab73..9fede3c 100644
--- a/contrib/pgbench/pgbench.c
+++ b/contrib/pgbench/pgbench.c
@@ -145,6 +145,8 @@ char	   *index_tablespace = NULL;
 #define naccounts	100000
 
 bool		use_log;			/* log transaction latencies to a file */
+bool		use_log_agg;		/* log aggregates instead of individual transactions */
+int			naggseconds = 1;	/* number of seconds for aggregation */
 bool		is_connect;			/* establish connection for each transaction */
 bool		is_latencies;		/* report per-command latencies */
 int			main_pid;			/* main process id used in log filename */
@@ -240,6 +242,18 @@ typedef struct
 	char	   *argv[MAX_ARGS]; /* command word list */
 } Command;
 
+typedef struct
+{
+
+	long		start_time;		/* when does the interval start */
+	int			cnt;			/* number of transactions */
+	double		min_duration;	/* min/max durations */
+	double		max_duration;
+	double		sum;			/* sum(duration), sum(duration^2) - for estimates */
+	double		sum2;
+
+} AggVals;
+
 static Command **sql_files[MAX_FILES]; /* SQL script files */
 static int	num_files;			/* number of script files */
 static int	num_commands = 0;	/* total number of Command structs */
@@ -364,6 +378,7 @@ usage(void)
 		   "  -f FILENAME  read transaction script from FILENAME\n"
 		   "  -j NUM       number of threads (default: 1)\n"
 		   "  -l           write transaction times to log file\n"
+		   "  -A NUM       write aggregates (over NUM seconds) instead\n"
 		   "  -M simple|extended|prepared\n"
 		   "               protocol for submitting queries to server (default: simple)\n"
 		   "  -n           do not run VACUUM before tests\n"
@@ -817,9 +832,22 @@ clientDone(CState *st, bool ok)
 	return false;				/* always false */
 }
 
+static
+void agg_vals_init(AggVals * aggs, instr_time start)
+{
+	aggs->cnt = 0;
+	aggs->sum = 0;
+	aggs->sum2 = 0;
+
+	aggs->min_duration = 3600 * 1000000.0; /* one hour */
+	aggs->max_duration = 0;
+
+	aggs->start_time = INSTR_TIME_GET_DOUBLE(start);
+}
+
 /* return false iff client should be disconnected */
 static bool
-doCustom(TState *thread, CState *st, instr_time *conn_time, FILE *logfile)
+doCustom(TState *thread, CState *st, instr_time *conn_time, FILE *logfile, AggVals * agg)
 {
 	PGresult   *res;
 	Command   **commands;
@@ -881,17 +909,65 @@ top:
 			diff = now;
 			INSTR_TIME_SUBTRACT(diff, st->txn_begin);
 			usec = (double) INSTR_TIME_GET_MICROSEC(diff);
-
+
+			/* should we aggregate the results or not? */
+			if (use_log_agg) {
+
+				/* are we still in the same interval? if yes, accumulate the
+				 * values (print them otherwise) */
+				if (agg->start_time + naggseconds >= INSTR_TIME_GET_DOUBLE(now)) {
+
+					/* accumulate */
+					agg->cnt += 1;
+
+					agg->min_duration = (usec < agg->min_duration) ? usec : agg->min_duration;
+					agg->max_duration = (usec > agg->max_duration) ? usec : agg->max_duration;
+
+					agg->sum  += usec;
+					agg->sum2 += usec * usec;
+
+				} else {
+
+					/* print */
 #ifndef WIN32
-			/* This is 

Re: [HACKERS] plperl sigfpe reset can crash the server

2012-08-24 Thread Andres Freund
On Friday, August 24, 2012 05:09:18 PM Andrew Dunstan wrote:
 On 08/24/2012 10:58 AM, Andres Freund wrote:
  On Friday, August 24, 2012 04:53:36 PM Tom Lane wrote:
  Andres Freund and...@2ndquadrant.com writes:
  ./pod/perl581delta.pod:
  At startup Perl blocks the SIGFPE signal away since there isn't much
  Perl can do about it.  Previously this blocking was in effect also for
  programs executed from within Perl.  Now Perl restores the original
  SIGFPE handling routine, whatever it was, before running external
  programs.
  
  So there's a gap in the restore logic someplace.
  
  Well, the logic is not triggering at all in pg's case. Its just used if
  perl is exec()ing something...
  
  perl.h also has some tidbits: ...
  That doesn't sound very well reasoned and especially not very well
  tested to me.
  
  Time to file a Perl bug?
  
  Anybody more involved in the perl community volunteering?
 
 Just run perlbug and let us know the bug number.
https://rt.perl.org/rt3/Public/Bug/Display.html?id=114574

Andres
-- 
Andres Freund   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] PATCH: pgbench - aggregation of info written into log

2012-08-24 Thread Tomas Vondra
On 24 Srpen 2012, 23:25, Tomas Vondra wrote:
 Hi,

 this patch adds support for aggregation of info written into the log.
 Instead of info about each transaction, a summary for time intervals (with
 custom length) is written into the log. All you need to do is add -A
 seconds, e.g.

   $ pgbench -T 3600 -A 10 -l db

 which will produce log with 10-second summaries, containing interval start
 timestamp, number of transactions, sum of latencies, sum of 2nd power of
 latencies, min and max latency (it's done this way to allow handling of
 multiple logs produced by -j option).

I forgot to mention that, just like the other patch, this is meant for
cases when you don't really need the raw per-transaction info and a
per-interval summary is enough. This is especially helpful when the
test produces a lot of transactions, and thus huge log files.

So you can either log all the transactions and then post-process the huge
files, or completely skip that and just log the summaries.

Tomas





Re: [HACKERS] plperl sigfpe reset can crash the server

2012-08-24 Thread Andres Freund
On Friday, August 24, 2012 04:53:36 PM Tom Lane wrote:
 Andres Freund and...@2ndquadrant.com writes:
  ./pod/perl581delta.pod:
  At startup Perl blocks the SIGFPE signal away since there isn't much
  Perl can do about it.  Previously this blocking was in effect also for
  programs executed from within Perl.  Now Perl restores the original
  SIGFPE handling routine, whatever it was, before running external
  programs.
 
 So there's a gap in the restore logic someplace.
 
  perl.h also has some tidbits: ...
  That doesn't sound very well reasoned and especially not very well tested
  to me.
 
 Time to file a Perl bug?
We probably should work around that bug anyway, given that it's a pretty trivial 
DOS using only a trusted language, and it will take quite some time to push out 
newer perl versions even if that bug gets fixed.

Doing a pqsignal(SIGFPE, FloatExceptionHandler) after PERL_SYS_INIT3 seems to 
work. Is that acceptable?
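
For concreteness, the workaround would presumably amount to a fragment like the
one below in plperl's interpreter setup, immediately after Perl's system-level
initialization (a sketch only, not a tested patch; the surrounding call site
and any needed headers are assumed to match plperl.c):

/* Sketch of the proposed workaround -- not a tested patch.  Assumes this
 * sits in plperl's interpreter initialization, immediately after the
 * existing PERL_SYS_INIT3(...) call, and that tcop/tcopprot.h (for
 * FloatExceptionHandler) is included. */

	/* ... existing PERL_SYS_INIT3(...) call ... */

	/*
	 * Perl's startup code changes the SIGFPE disposition (see the
	 * perl581delta excerpt above); restore the backend's handler so a
	 * floating-point trap raises an error instead of misbehaving.
	 */
	pqsignal(SIGFPE, FloatExceptionHandler);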

Andres
-- 
Andres Freund   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




Re: [HACKERS] plperl sigfpe reset can crash the server

2012-08-24 Thread Alex Hunsaker
On Fri, Aug 24, 2012 at 4:10 PM, Andres Freund and...@2ndquadrant.com wrote:

 We probably should workaround that bug anyway given that its a pretty trivial
 DOS using only a trusted language and it will take quite some time to push out
 newer perl versions even if that bug gets fixed.

 Doing a pqsignal(SIGFPE, FloatExceptionHandler) after PERL_SYS_INIT3 seems to
 work. Is that acceptable?

Makes sense to me. (I have not looked to see if there is some perl
knob we can flip for this)




Re: [HACKERS] plperl sigfpe reset can crash the server

2012-08-24 Thread Andres Freund
On Saturday, August 25, 2012 12:15:00 AM Alex Hunsaker wrote:
 On Fri, Aug 24, 2012 at 4:10 PM, Andres Freund and...@2ndquadrant.com 
wrote:
  We probably should workaround that bug anyway given that its a pretty
  trivial DOS using only a trusted language and it will take quite some
  time to push out newer perl versions even if that bug gets fixed.
  
  Doing a pqsignal(SIGFPE, FloatExceptionHandler) after PERL_SYS_INIT3
  seems to work. Is that acceptable?
 
 Makes sense to me. (I have not looked to see if there is some perl
 knob we can flip for this)
I couldn't find any.  After some macro indirection the signal() call ends up 
being done unconditionally inside a compiled function (Perl_sys_init3), so I 
don't think there is much that can be done without changing perl's source 
code...

Andres
-- 
Andres Freund   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services




[HACKERS] PATCH: optimized DROP of multiple tables within a transaction

2012-08-24 Thread Tomas Vondra
Hi,

attached is a patch that improves performance when dropping multiple
tables within a transaction. Instead of scanning the shared buffers once
for each dropped table, the patch evicts all the dropped tables in a
single pass through shared buffers.

Our system creates a lot of working tables (even 100,000) and we need
to perform garbage collection (dropping obsolete tables) regularly. This
often took ~1 hour, because we're using big AWS instances with lots of
RAM (which tends to be slower than RAM on bare hardware). After applying
this patch and dropping tables in groups of 100, the gc runs in less than
4 minutes (i.e. a 15x speed-up).

This is not likely to improve usual performance, but for systems like
ours, this patch is a significant improvement.

kind regards
Tomas
diff --git a/src/backend/catalog/storage.c b/src/backend/catalog/storage.c
index 993bc49..593c360 100644
--- a/src/backend/catalog/storage.c
+++ b/src/backend/catalog/storage.c
@@ -335,6 +335,9 @@ smgrDoPendingDeletes(bool isCommit)
 	PendingRelDelete *pending;
 	PendingRelDelete *prev;
 	PendingRelDelete *next;
+
+	SMgrRelation *srels = palloc(sizeof(SMgrRelation));
+	int			nrels = 0, i = 0;
 
 	prev = NULL;
 	for (pending = pendingDeletes; pending != NULL; pending = next)
@@ -358,14 +361,25 @@ smgrDoPendingDeletes(bool isCommit)
 				SMgrRelation srel;
 
 				srel = smgropen(pending->relnode, pending->backend);
-				smgrdounlink(srel, false);
-				smgrclose(srel);
+
+				srels = repalloc(srels, sizeof(SMgrRelation) * (nrels + 1));
+				srels[nrels++] = srel;
 			}
 			/* must explicitly free the list entry */
 			pfree(pending);
 			/* prev does not change */
 		}
 	}
+
+	if (nrels > 0) {
+		smgrdounlinkall(srels, nrels, false);
+
+		for (i = 0; i < nrels; i++)
+			smgrclose(srels[i]);
+
+		pfree(srels);
+	}
+
 }
 
 /*
diff --git a/src/backend/storage/buffer/bufmgr.c b/src/backend/storage/buffer/bufmgr.c
index a3bf9a4..8238f49 100644
--- a/src/backend/storage/buffer/bufmgr.c
+++ b/src/backend/storage/buffer/bufmgr.c
@@ -108,6 +108,7 @@ static volatile BufferDesc *BufferAlloc(SMgrRelation smgr,
 static void FlushBuffer(volatile BufferDesc *buf, SMgrRelation reln);
 static void AtProcExit_Buffers(int code, Datum arg);
 
+static int rnode_comparator(const void *p1, const void *p2);
 
 /*
  * PrefetchBuffer -- initiate asynchronous read of a block of a relation
@@ -2130,6 +2131,53 @@ DropRelFileNodeAllBuffers(RelFileNodeBackend rnode)
 }
 
 /* ---------------------------------------------------------------------
+ *		DropRelFileNodeAllBuffersList
+ *
+ *		This function removes from the buffer pool all the pages of all
+ *		forks of the specified relations.  It's equivalent to calling
+ *		DropRelFileNodeBuffers once per fork with firstDelBlock = 0 for
+ *		each of the relations.
+ * ---------------------------------------------------------------------
+ */
+void
+DropRelFileNodeAllBuffersList(RelFileNodeBackend *rnodes, int nnodes)
+{
+	int			i;
+	int			j;
+
+	/* sort the list of rnodes */
+	pg_qsort(rnodes, nnodes, sizeof(RelFileNodeBackend), rnode_comparator);
+
+	/* If it's a local relation, it's localbuf.c's problem. */
+	for (j = 0; j < nnodes; j++) {
+		if (rnodes[j].backend != InvalidBackendId)
+		{
+			if (rnodes[j].backend == MyBackendId)
+				DropRelFileNodeAllLocalBuffers(rnodes[j].node);
+		}
+	}
+
+	for (i = 0; i < NBuffers; i++)
+	{
+		volatile BufferDesc *bufHdr = &BufferDescriptors[i];
+
+		RelFileNodeBackend *rnode = bsearch((const void *) &(bufHdr->tag.rnode),
+											rnodes, nnodes, sizeof(RelFileNodeBackend),
+											rnode_comparator);
+
+		/* buffer does not belong to any of the relations */
+		if (rnode == NULL)
+			continue;
+
+		LockBufHdr(bufHdr);
+		if (RelFileNodeEquals(bufHdr->tag.rnode, rnode->node))
+			InvalidateBuffer(bufHdr);	/* releases spinlock */
+		else
+			UnlockBufHdr(bufHdr);
+	}
+}
+
+/* ---------------------------------------------------------------------
  *		DropDatabaseBuffers
  *
  *		This function removes all the buffers in the buffer cache for a
@@ 
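
The attachment is cut off here, so the comparator that the pg_qsort()/bsearch()
calls above rely on is not shown. A comparator consistent with those calls
might look like the sketch below (illustrative only, not the author's actual
code); it compares just the RelFileNode part, which is the leading member of
RelFileNodeBackend, so the same function serves both the sort over the array
and the lookup keyed by a buffer tag's rnode:

/* Illustrative sketch only -- the patch's real rnode_comparator is not shown
 * above.  Any total order works as long as pg_qsort() and bsearch() agree;
 * comparing the RelFileNode fields one by one is the obvious choice. */
static int
rnode_comparator(const void *p1, const void *p2)
{
	RelFileNode n1 = *(const RelFileNode *) p1;
	RelFileNode n2 = *(const RelFileNode *) p2;

	if (n1.relNode < n2.relNode)
		return -1;
	if (n1.relNode > n2.relNode)
		return 1;

	if (n1.dbNode < n2.dbNode)
		return -1;
	if (n1.dbNode > n2.dbNode)
		return 1;

	if (n1.spcNode < n2.spcNode)
		return -1;
	if (n1.spcNode > n2.spcNode)
		return 1;

	return 0;
}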

Re: [HACKERS] NOT NULL constraints in foreign tables

2012-08-24 Thread Robert Haas
On Wed, Aug 22, 2012 at 12:59 PM, Jeff Davis davis.jeff...@gmail.com wrote:
 On Tue, 2012-08-21 at 10:41 -0400, Robert Haas wrote:
 The thing to keep in mind here is that EVERY property of a foreign
 table is subject to change at any arbitrary point in time, without our
 knowledge.  ... Why should CHECK constraints be any different than,
 say, column types?

 So, let's say someone changes column types from int to bigint on the
 remote side, and you still have int on the local side. It continues to
 work and everything is fine until all of a sudden you get 2^33 back, and
 that generates an error.

 That sounds closer to the semantics of constraint enforcement mechanism
 #2 than #3 to me. That is, everything is fine until you get something
 that you know is wrong, and you throw an error.

Sure, but in that case you're not paying anything extra for it.

 Why should that be any worse with foreign tables than anything else?
 I mean, lots of people, as things stand today, manage to set up
 partitioned tables using CHECK constraints.  There are undoubtedly
 people who don't understand the planner benefit of having an
 appropriate CHECK constraint on each partition, but it's not exactly a
 common cause of confusion.

 But there are no consequences there other than performance. With
 unenforced constraints, they may get correct results during development
 and testing, and wrong results occasionally when in production. That's
 hard to explain to a user.

Sure.  Of course, your example of a column that is bigserial on one
side and an integer on the other side is a perfect example of how that
could happen *anyway*.  I'm all in favor of building things in a way
that minimizes the possibility of user confusion.  But since foreign
tables inevitably carry large amounts of risk in that area anyway, I
can't get very excited about fixing 10% of the problem.  That seems
likely to create the perception of safety without the reality.

 And if you don't issue a query at all, the constraint might not still be
 true; but I don't think that implies that checking it when you do run a
 query is useless.

Well, it does to me, but your mileage may vary (and obviously does).

 I think if we go down this road of trying to validate
 remote-side CHECK constraints, we're going to end up with a mishmash
 of cases where constraints are checked and other cases where
 constraints are not checked, and then that really is going to be
 confusing.

 If we use keywords to differentiate constraints that are different
 semantically, then we can just say that some types of constraints are
 allowed on foreign tables and some are not.

 I guess what I'd like to avoid is saying that a check constraint on a
 regular table means one thing, and the same check constraint on a
 foreign table means something else. If we differentiate them by
 requiring special keywords like NOT ENFORCED, then it would be more
 user-visible what's going on, and it would allow room for new semantics
 later if we want. Normal constraints would be disallowed on foreign
 tables, but NOT ENFORCED ones would be allowed.

This, I could get behind.

 That brings up another point: what if someone really, really, doesn't
 want to pay the overhead of enforcing their constraint on a local table,
 but wants the planner benefit? Would they have to make it a remote table
 to bypass the constraint check?

This is also a good point.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] foreign key locks

2012-08-24 Thread Robert Haas
On Wed, Aug 22, 2012 at 5:31 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Patch v19 contains some tweaks.  Most notably,

 1. if an Xid requests a lock A, and then a lock B which is stronger than
 A, then record only lock B and forget lock A.  This is important for
 performance, because EvalPlanQual obtains ForUpdate locks on the tuples
 that it chases on an update chain.  If EPQ was called by an update or
 delete, previously a MultiXact was being created.

 In a stock pgbench run this was causing lots of multis to be created,
 even when there are no FKs.

 This was most likely involved in the 9% performance decrease that was
 previously reported.

Ah-ha!  Neat.  I'll try to find some time to re-benchmark this during
the next CF, unless you did that already.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company




Re: [HACKERS] plperl sigfpe reset can crash the server

2012-08-24 Thread Tom Lane
Andres Freund and...@2ndquadrant.com writes:
 Doing a pqsignal(SIGFPE, FloatExceptionHandler) after PERL_SYS_INIT3 seems to
 work. Is that acceptable?

Surely that's breaking perl's expectations, to more or less the same
degree they're breaking ours?

regards, tom lane




Re: [HACKERS] pg_upgrade's exec_prog() coding improvement

2012-08-24 Thread Peter Eisentraut
On Fri, 2012-08-24 at 10:08 -0400, Tom Lane wrote:
 Heikki Linnakangas heikki.linnakan...@enterprisedb.com writes:
  On 23.08.2012 23:07, Alvaro Herrera wrote:
  One problem with this is that I get this warning:
  /pgsql/source/HEAD/contrib/pg_upgrade/exec.c: In function ‘s_exec_prog’:
  /pgsql/source/HEAD/contrib/pg_upgrade/exec.c:96:2: warning: function might 
  be possible candidate for ‘gnu_printf’ format attribute 
  [-Wmissing-format-attribute]
  /pgsql/source/HEAD/contrib/pg_upgrade/exec.c:96:2: warning: function might 
  be possible candidate for ‘gnu_printf’ format attribute 
  [-Wmissing-format-attribute]
  
  I have no idea how to silence that.  Ideas?
 
  You can do what the warning suggests, and tell the compiler that 
  exec_prog takes printf-like arguments.
 
 exec_prog already has such decoration, and Alvaro's patch doesn't remove
 it.  So the question is, exactly what the heck does that warning mean?

The warning is about s_exec_prog, not exec_prog.
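
In other words, the static helper that actually consumes the format string
(and presumably hands it to vsnprintf) needs its own decoration; exec_prog's
attribute does not carry over to it. Something along these lines should
silence the warning (a sketch only -- the real signature of s_exec_prog in the
patch may differ):

/* Sketch, not the actual patch: a helper that forwards fmt plus a va_list to
 * vsnprintf() wants its own attribute.  The trailing 0 tells gcc there is no
 * variadic argument list to type-check; the parameter positions here are
 * hypothetical and would need to match s_exec_prog's real signature. */
static void s_exec_prog(const char *log_file, const char *cmd_fmt, va_list ap)
	__attribute__((format(PG_PRINTF_ATTRIBUTE, 2, 0)));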





Re: [HACKERS] plperl sigfpe reset can crash the server

2012-08-24 Thread Andres Freund
On Saturday, August 25, 2012 06:38:09 AM Tom Lane wrote:
 Andres Freund and...@2ndquadrant.com writes:
  Doing a pqsignal(SIGFPE, FloatExceptionHandler) after PERL_SYS_INIT3
  seems to work. Is that acceptable?
 
 Surely that's breaking perl's expectations, to more or less the same
 degree they're breaking ours?
Well. Their expectation simply does not work *at all*, because they do 
something (setting SIGFPE to SIG_IGN) which has been completely ignored on at 
least one major platform (x86 linux) for longer than perl has had git history.

Their math code seems to work around generating such errors, but I find it 
rather hard to read (or rather, to read & understand).

Doing what I proposed admittedly has the issue that we would jump out of perl 
code without much ado. I have no idea what the proper perly way to do that is.
It just seems we should do something...

if (in_perl)
   return;

Would be the equivalent of what they want?

Greetings,

Andres
-- 
Andres Freund   http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services

