Re: [HACKERS] psql: bogus descriptions displayed by \d+

2011-07-17 Thread Josh Kupershmidt
On Sat, Jul 16, 2011 at 12:49 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 After a bit of review of the archives, the somebody was me:
 http://git.postgresql.org/gitweb/?p=postgresql.git&a=commitdiff&h=b7d67954456f15762c04e5269b64adc88dcd0860

 and this thread was the discussion about it:
 http://archives.postgresql.org/pgsql-hackers/2009-12/msg01982.php

 It looks like we thought about pg_dump, but did not think about psql.

Ah, interesting. I didn't even know this functionality existed. And I
think some documentation is lacking; on the 8.4 doc page:
  http://www.postgresql.org/docs/8.4/static/sql-comment.html

I don't see any mention of comments on an index's columns. And the
docs also neglect to mention comments on a view's columns as well,
which is why I thought \d+ view_name was producing bogus output as
well (it's really looking for those column comments).

 I think it might be reasonable to remove the Description column from
 \d+ output for indexes and sequences, on the grounds that (1) it's
 useless against 9.x servers, and (2) for those relkinds we add other
 columns and so the horizontal space is precious.

AFAICT the extra Description column for \d+ sequence_name is bogus in
both 8.4 and 9.0, so there should be no objections to ripping that
out.

 We could also consider showing Description only when talking to a
 pre-9.0 server; but that's going to render the code even more
 spaghetti-ish, and the value seems pretty limited.

And as for \d+ index_name, I agree with Robert's sentiments here; it
doesn't seem worth the bother.

Josh

-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [HACKERS] [COMMITTERS] pgsql: Enable CHECK constraints to be declared NOT VALID

2011-07-17 Thread Jim Nasby
On Jul 12, 2011, at 11:30 AM, Joshua D. Drake wrote:
 This all becomes much easier if we keep the ads out of the commit
 messages, and stick to the technical side there. And find another
 venue for the other credit.
 
 I'm open to ideas.
 
 I think the commit log isn't actually useful for the advertising portion of 
 this. Users don't read commit logs for the most part. However, it is an easy 
 way for people who are writing release notes, press releases, etc... to find 
 the information.
 
 Is it a good place for the information? No.
 
 Is it the easiest place to store it until somebody steps up and creates a 
 proper way to track it so that it can be disseminated properly throughout the 
 community? Probably.
 
 We do need a way to track this information.
+1 on everything Josh said.

Does git allow for additional commit fields? That would allow for easy tracking 
without much additional burden on committers.
--
Jim C. Nasby, Database Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net





Re: [HACKERS] Re: patch review : Add ability to constrain backend temporary file space

2011-07-17 Thread Tom Lane
Mark Kirkwood mark.kirkw...@catalyst.net.nz writes:
 [ temp-files-v6.patch.gz ]

I've applied this patch with some editorialization, notably:

I changed the datatype of temporary_files_size to uint64 so as to avoid
worries about accumulating roundoff error over a long transaction.
This is probably just paranoia, since on most machines double can store
integers exactly up to 2^52 which is more than the largest possible
temp_file_limit, but it seemed safer.
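To make the roundoff worry concrete: a double's 52-bit mantissa represents
integers exactly only up to 2^53, so a sufficiently large counter kept in a
double can silently swallow small increments. An illustrative sketch (not
PostgreSQL source):

```c
/* Illustration only: why a uint64 byte counter is safer than a double
 * for long-running accumulation.  Past 2^53, double arithmetic can no
 * longer represent every integer, so small additions may be lost. */

/* returns 1 if adding 'inc' to 'base' in double arithmetic loses inc */
static int
double_add_is_lossy(double base, double inc)
{
    return inc != 0.0 && (base + inc) == base;
}
```

With base = 2^53 and inc = 1 the addition is already lossy; a uint64
counter has no such cliff short of 2^64.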

fileSize of a temp file, and temporary_files_size, must be updated
correctly whether or not temp_file_limit is currently being enforced.
Otherwise things do not behave sanely if temp_file_limit is turned on
mid-transaction.

Fixed bogus calculation in enforcement check, per my note yesterday.

Added a new SQLSTATE code, ERRCODE_CONFIGURATION_LIMIT_EXCEEDED, because
I felt that ERRCODE_QUERY_CANCELED wasn't at all appropriate for this.
(Maybe we should think about changing the statement_timeout code to use
that too, although I'm not entirely sure that we can easily tell
statement_timeout apart from a query cancel.)

Rephrased the error message to "temporary file size exceeds
temp_file_limit (%dkB)", as per Tatsuo's first suggestion but not his
second.  There's still room for bikeshedding here, of course.

Removed the reset of temporary_files_size in CleanupTempFiles.  It was
inappropriate because some temp files can survive CleanupTempFiles, and
unnecessary anyway because the counter is correctly decremented when
unlinking a temp file.  (This was another reason for wanting
temporary_files_size to be exact, because we're now doing dead reckoning
over the life of the session.)
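The dead-reckoning scheme described above -- add to the session counter when
a temp file grows, subtract when it is unlinked, regardless of whether the
limit is being enforced -- can be sketched as follows (simplified, with
hypothetical helper names; not the committed fd.c code):

```c
#include <stdint.h>

/* Session-wide total of temp file bytes, kept exact by dead reckoning. */
static uint64_t temporary_files_size = 0;

/* Called whenever a write grows a temp file, enforced or not. */
static void
temp_file_extend(uint64_t *file_size, uint64_t new_size)
{
    if (new_size > *file_size)
    {
        temporary_files_size += new_size - *file_size;
        *file_size = new_size;
    }
}

/* Called when a temp file is unlinked; the counter must come back down
 * by exactly what that file contributed, or drift accumulates over the
 * life of the session. */
static void
temp_file_unlink(uint64_t *file_size)
{
    temporary_files_size -= *file_size;
    *file_size = 0;
}
```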

Set the maximum value of temp_file_limit to INT_MAX not MAX_KILOBYTES.
There's no reason for the limit to be less on 32-bit machines than
64-bit, since we are doing arithmetic in uint64 not size_t.

Minor rewrite of documentation.

Also, I didn't add the proposed regression test, because it seems shaky
to me.  In an installation with larger than default work_mem, the sort
might not spill to disk.  I don't think this feature is large enough to
deserve its very own, rather expensive, regression test anyway.

regards, tom lane



[HACKERS] PostgreSQL Buildfarm client release 4.6

2011-07-17 Thread Andrew Dunstan


I have just released version 4.6 of the PostgreSQL Buildfarm client. It 
can be downloaded from: 
https://github.com/downloads/PGBuildFarm/client-code/build-farm-4_6.tgz Note 
that this is NOT on PgFoundry, as it seems to have a rather uncertain 
future.


Changes:

   * Ability to specify 'ALL' as branches to build in config file,
 cueing run_branches.pl to get the actual list of branches of
 interest from the server and build all of them. This means
  branches can be automatically dropped when they are past end of
  life, and new ones picked up when they are created, without operator
 intervention.
   * Ability to throttle builds for some or all branches, either by
 specifying that they will run only at certain times of the day, or
 by specifying a minimum time that must elapse between builds
   * Extension modules facility, allowing for performing tasks such as
  building and testing non-core units like drivers, or running
  extra non-standard tests, without having to touch the standard
  buildfarm code. This is still experimental.
   * bug fix for certain failure names in SCM module
   * bug fixes for isolation checks
   * bug fix for stack trace locations
   * bug fix for objdump name when using --host config


Enjoy.

cheers

andrew



Re: [HACKERS] storing TZ along timestamps

2011-07-17 Thread Jim Nasby
On Jul 6, 2011, at 9:24 AM, Alexey Klyukin wrote:
 So, I'd think there are 2 reasonable approaches to storing the
 timezone part:
 
 1. Store the timezone abbreviation (i.e. 'EST' along w/ the timestamp
 data). 
 2. Assign OID to each of the timezones and store it w/ the timestamp.
 
 The first option seems to avoid the necessity of creating a new system
 catalog for timezone information and the burden of updating it,
 because the current implementation is already capable of translating
 abbreviations to useful timezone information. The question is whether
 just a TZ abbreviation is sufficient to uniquely identify the timezone
 and get the offset and DST rules. If it's not sufficient, how are
 conflicting TZ short names handled in the current code (i.e. 'AT
 TIME ZONE ...')?

It's not enough. See the timezone_abbreviations setting.
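For illustration, 'IST' alone is ambiguous: common tzdata usage assigns it to
Indian, Irish, and Israel Standard Time, each with a different offset and
different DST rules. A sketch with an abridged, hard-coded table (not a
PostgreSQL structure):

```c
#include <string.h>

/* Abridged, illustrative data only: the same abbreviation can map to
 * several distinct UTC offsets, so the abbreviation alone cannot
 * recover the zone or its DST behavior. */
struct abbrev { const char *name; int utc_offset_secs; };

static const struct abbrev known[] = {
    {"IST", 19800},   /* Indian Standard Time,  UTC+5:30 */
    {"IST",  3600},   /* Irish Standard Time,   UTC+1:00 */
    {"IST",  7200},   /* Israel Standard Time,  UTC+2:00 */
    {"EST", -18000},  /* US Eastern Standard Time, UTC-5:00 */
};

/* how many entries does an abbreviation resolve to in this table? */
static int
matches(const char *abbr)
{
    int n = 0;
    for (size_t i = 0; i < sizeof(known) / sizeof(known[0]); i++)
        if (strcmp(known[i].name, abbr) == 0)
            n++;
    return n;
}
```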
--
Jim C. Nasby, Database Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net





Re: [HACKERS] proposal: new contrib module plpgsql's embeded sql validator

2011-07-17 Thread Jim Nasby
On Jul 7, 2011, at 11:31 PM, Pavel Stehule wrote:
 Lazy deep SQL validation inside plpgsql functions is an interesting
 attribute. It allows working with temporary tables, but it makes testing
 and debugging harder, because a lot of errors in embedded queries are
 detected too late. I wrote a simple module that can help a little
 bit. It is based on the plpgsql plugin API and it ensures a deep
 validation of embedded SQL early - at the start of execution. I
 think this plugin is really useful, and it is an example of a plpgsql
 plugin - something that is missing in contrib.
 
 Example:
 
 buggy function - raises an error when par < 10
 
 
 CREATE OR REPLACE FUNCTION public.kuku(a integer)
 RETURNS integer
 LANGUAGE plpgsql
 AS $function$
 begin
  if (a < 10) then
return b + 1;
  else
return a + 1;
  end if;
 end;
 $function$
 
 but it works for par >= 10
 
 postgres=# select kuku(1);
 kuku
 --
2
 (1 row)
 
 postgres=# load 'plpgsql';
 LOAD
 postgres=# load 'plpgsql_esql_checker';
 LOAD
 postgres=# select kuku(1);
 ERROR:  column b does not exist
 LINE 1: SELECT b + 1
   ^
 QUERY:  SELECT b + 1
 CONTEXT:  PL/pgSQL function kuku line 3 at RETURN
 
 with the esql checker this bug is identified without depending on the
 parameter's value
 
 What do you think about this idea?
 
 The code contains a plpgsql_statement_tree walker - it should be moved
 to core and used generally - statistics, coverage tests, ...

I think this should at least be a contrib module; it seems very useful.

On a somewhat related note, I'd also really like to have the ability to parse 
things like .sql files externally, to do things like LINT checking.
--
Jim C. Nasby, Database Architect   j...@nasby.net
512.569.9461 (cell) http://jim.nasby.net





Re: [HACKERS] Re: patch review : Add ability to constrain backend temporary file space

2011-07-17 Thread Tatsuo Ishii
 Could you please elaborate why "Current usage 8000kB" can be bigger than
 "temp file limit 8kB"? I understand the point that temp files are
 allocated 8kB at a time, but I don't understand why those numbers you
 suggested could happen. Actually I tried with the modified patches and
 got:
 
 test=# CREATE TEMPORARY TABLE resourcetemp1 AS SELECT 
 generate_series(1,10);
 SELECT 10
 test=# SET temp_file_limit = 578;
 SET
 test=# SELECT count(*) FROM (select * FROM resourcetemp1  ORDER BY 1) AS a;
 ERROR:  aborting due to exceeding temp file limit, current usage 576kB, 
 requested size 8192kB, thus it will exceed temp file limit 578kB
 
 You didn't show us how you computed those numbers, but I'm really
 dubious that FileWrite() has got any ability to produce numbers that
 are helpful.  Like Cedric, I think the write amount in any one call
 is usually going to be one block.

Here it is (fd.c).

	/*
	 * If performing this write will increase the file size, then abort
	 * if it will exceed temp_file_limit
	 */
	if (temp_file_limit >= 0 && (VfdCache[file].fdstate & FD_TEMPORARY))
	{
		if (VfdCache[file].seekPos + amount >= VfdCache[file].fileSize)
		{
			increaseSize = true;
			if ((temporary_files_size + (double) amount) / 1024.0 > (double) temp_file_limit)
				ereport(ERROR,
						(errcode(ERRCODE_QUERY_CANCELED),
						 errmsg("aborting due to exceeding temp file limit, current usage %dkB, requested size %dkB, thus it will exceed temp file limit %dkB",
								(int) (temporary_files_size / 1024),
								amount,
								temp_file_limit)));
		}
	}

I also attached the whole patch against fd.c at the point when Mark
proposed the changes.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp


fd.c.patch.gz
Description: Binary data



[HACKERS] Re: [COMMITTERS] pgsql: Add temp_file_limit GUC parameter to constrain temporary file sp

2011-07-17 Thread Bruce Momjian
Tom Lane wrote:
 Add temp_file_limit GUC parameter to constrain temporary file space usage.
 
 The limit is enforced against the total amount of temp file space used by
 each session.
 
 Mark Kirkwood, reviewed by Cédric Villemain and Tatsuo Ishii

Should we document that sessions that exceed this limit generate an
error?  I don't see any mention of this in the docs.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] [COMMITTERS] pgsql: Add temp_file_limit GUC parameter to constrain temporary file sp

2011-07-17 Thread Tom Lane
Bruce Momjian br...@momjian.us writes:
 Tom Lane wrote:
 Add temp_file_limit GUC parameter to constrain temporary file space usage.
 
 The limit is enforced against the total amount of temp file space used by
 each session.
 
 Mark Kirkwood, reviewed by Cédric Villemain and Tatsuo Ishii

 Should we document that sessions that exceed this limit generate an
 error?  I don't see any mention of this in the docs.

Huh?  Seems like a waste of text to me.  What else would happen?

regards, tom lane



Re: [HACKERS] Reduced power consumption in WAL Writer process

2011-07-17 Thread Simon Riggs
On Fri, Jul 15, 2011 at 6:33 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Jul 15, 2011, at 8:55 AM, Simon Riggs si...@2ndquadrant.com wrote:
 The only difference is how bulk write operations are handled. As long
 as we wake WALWriter before wal_buffers fills then we'll be good.
 Wakeup once per wal buffer is too much. I agree we should measure this
 to check how frequently wakeups are required for bulk ops.

 Yeah. The trick is to get the wake-ups to be frequent enough without
 adding too much latency to the backends that have to perform them. Off-hand,
 I don't  have a good feeling for how hard that will be.

 I'd say send the signal when wal buffers are more than X% full (maybe
 half).  The suggestion to send it when wrapping around at the end of the
 array is not quite right, because that's an arbitrary condition that's
 not related to how much work there is for the walwriter to do.  It
 should be cheap to check for this while advancing to a new wal buffer.

Yes, I was trying to go too simple.

I think we need to put the calculation and SetLatch() after we release
WALInsertLock, so as to avoid adding contention.
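A rough sketch of the wake-up condition under discussion (hypothetical
names; not the committed design):

```c
/* Hypothetical sketch: after advancing to a new WAL buffer page, wake
 * the WAL writer only once more than half the pages hold unwritten
 * WAL, rather than on every page or only on wraparound. */
static int
should_wake_walwriter(int dirty_pages, int total_pages)
{
    return dirty_pages * 2 > total_pages;
}
```

The SetLatch() itself would then happen only after WALInsertLock is
released, per the point above.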

-- 
 Simon Riggs   http://www.2ndQuadrant.com/
 PostgreSQL Development, 24x7 Support, Training & Services



Re: [HACKERS] Re: patch review : Add ability to constrain backend temporary file space

2011-07-17 Thread Tom Lane
Tatsuo Ishii is...@postgresql.org writes:
 You didn't show us how you computed those numbers, but I'm really
 dubious that FileWrite() has got any ability to produce numbers that
 are helpful.  Like Cedric, I think the write amount in any one call
 is usually going to be one block.

 Here it is(fd.c).

 	ereport(ERROR,
 			(errcode(ERRCODE_QUERY_CANCELED),
 			 errmsg("aborting due to exceeding temp file limit, current usage %dkB, requested size %dkB, thus it will exceed temp file limit %dkB",
 					(int) (temporary_files_size / 1024),
 					amount,
 					temp_file_limit)));

The thing is that unless amount is really large, you're just going to
have two numbers that are very close to each other.  I think this is
just useless complication, because amount is almost always going to
be 8kB or less.
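The mixup is easier to see with units written out: amount is a byte count
(typically one 8192-byte block), but the message printed it under a kB
label. Illustrative only:

```c
/* Illustration: printing a byte count under a kB label inflates the
 * "requested size" 1024-fold for a typical one-block write. */
#define BLCKSZ 8192  /* default PostgreSQL block size, in bytes */

static int buggy_kb(int amount_bytes) { return amount_bytes; }        /* labeled kB, actually bytes */
static int fixed_kb(int amount_bytes) { return amount_bytes / 1024; } /* a true kB figure */
```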

regards, tom lane



Re: [HACKERS] Reduced power consumption in WAL Writer process

2011-07-17 Thread Tom Lane
Simon Riggs si...@2ndquadrant.com writes:
 On Fri, Jul 15, 2011 at 6:33 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 I'd say send the signal when wal buffers are more than X% full (maybe
 half).  The suggestion to send it when wrapping around at the end of the
 array is not quite right, because that's an arbitrary condition that's
 not related to how much work there is for the walwriter to do.  It
 should be cheap to check for this while advancing to a new wal buffer.

 I think we need to put the calculation and SetLatch() after we release
 WALInsertLock, so as to avoid adding contention.

Yeah, I agree with putting the actual SetLatch call after we release the
lock ... but doesn't the calculation need to be done while we're still
holding it?  Otherwise it'd be using potentially-inconsistent values.

regards, tom lane



Re: [HACKERS] Re: patch review : Add ability to constrain backend temporary file space

2011-07-17 Thread Tatsuo Ishii
  	ereport(ERROR,
  			(errcode(ERRCODE_QUERY_CANCELED),
  			 errmsg("aborting due to exceeding temp file limit, current usage %dkB, requested size %dkB, thus it will exceed temp file limit %dkB",
  					(int) (temporary_files_size / 1024),
  					amount,
  					temp_file_limit)));
 
 The thing is that unless amount is really large, you're just going to
 have two numbers that are very close to each other.  I think this is
 just useless complication, because amount is almost always going to
 be 8kB or less.

Oops. My bad. I should have divided amount by 1024. Now I understand
your point.
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Japanese: http://www.sraoss.co.jp



Re: [HACKERS] Re: patch review : Add ability to constrain backend temporary file space

2011-07-17 Thread Mark Kirkwood

On 18/07/11 06:25, Tom Lane wrote:

Mark Kirkwood mark.kirkw...@catalyst.net.nz writes:

[ temp-files-v6.patch.gz ]

I've applied this patch with some editorialization, notably:


Awesome, the changes look great!

Cheers

Mark



[HACKERS] Re: [COMMITTERS] pgsql: Add temp_file_limit GUC parameter to constrain temporary file sp

2011-07-17 Thread Bruce Momjian
Tom Lane wrote:
 Bruce Momjian br...@momjian.us writes:
  Tom Lane wrote:
  Add temp_file_limit GUC parameter to constrain temporary file space usage.
  
  The limit is enforced against the total amount of temp file space used by
  each session.
  
  Mark Kirkwood, reviewed by Cédric Villemain and Tatsuo Ishii
 
  Should we document that sessions that exceed this limit generate an
  error?  I don't see any mention of this in the docs.
 
 Huh?  Seems like a waste of text to me.  What else would happen?

Well, if we exceed work_mem, we spill to temp disk.  If we exceed temp
disk, we error out.  Those different behaviors don't seem obvious to me.
The work_mem docs do mention it spills to disk though, so maybe it is
OK.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + It's impossible for everything to be true. +



Re: [HACKERS] proposal: a validator for configuration files

2011-07-17 Thread Robert Haas
On Sun, Jul 17, 2011 at 12:59 AM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 On Sat, Jul 16, 2011 at 10:04 PM, Tom Lane t...@sss.pgh.pa.us wrote:
 Robert Haas robertmh...@gmail.com writes:
 Is there any way that we could get *rid* of custom_variable_classes?

 Well, we could just drop it and say you can set any dotted-name GUC
 you feel like.

 ...and the fact that we've made them set an extra GUC to shoot
 themselves in the foot hardly seems like an improvement.  I was more
 thinking along the lines of having loadable modules register custom
 variable classes at load time, through some sort of C API provided for
 that purpose, rather than having the user declare a list that may or
 may not match what the .so files really care about.

 Well, we *do* have a C API for that, of a sort.  The problem is, what do
 you do in processes that have not loaded the relevant extension?  (And
 no, I don't like the answer of let's force the postmaster to load every
 extension we want to set any parameters for.)

 I agree custom_variable_classes is conceptually messy, but it's a
 reasonably lightweight compromise that gives some error checking without
 requiring a lot of possibly-irrelevant extensions to be loaded into
 every postgres process.

Hmm.  Maybe what we need is a mechanism that allows the configuration
to be associated with a loadable module, and whenever that module is
loaded, we also load the associated configuration settings.  This is
probably terrible syntax, but something like:

ALTER LOAD 'plpgsql' SET plpgsql.variable_conflict = 'whatever';

AFAICS, that would remove the need to set variables in postgresql.conf
that can't be validated, and so we could just disallow it.

OTOH, maybe that's more trouble than can be justified by the size of
the problem.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] proposal: a validator for configuration files

2011-07-17 Thread Florian Pflug
On Jul18, 2011, at 01:29 , Robert Haas wrote:
 Hmm.  Maybe what we need is a mechanism that allows the configuration
 to be associated with a loadable module, and whenever that module is
 loaded, we also load the associated configuration settings.  This is
 probably terrible syntax, but something like:
 
 ALTER LOAD 'plpgsql' SET plpgsql.variable_conflict = 'whatever';

A variant of this would be to allow extensions (in the CREATE EXTENSION
sense) to declare custom GUCs in their control file. Then we'd only
need to load those files, which seems better than loading a shared
library.
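For concreteness, such a declaration might look something like this in an
extension's control file (purely hypothetical syntax -- control files do not
support anything of the sort today, and the key names below are invented):

```
# myext.control -- hypothetical, illustrative only
comment = 'example extension'
default_version = '1.0'
# invented syntax: declare a custom GUC so it can be validated
# without loading the shared library:
# guc myext.work_factor (type = integer, default = 10)
```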

We do expect people to wrap their loadable modules in extensions
nowadays anyway, don't we?

best regards,
Florian Pflug




Re: [HACKERS] Reduced power consumption in WAL Writer process

2011-07-17 Thread Peter Geoghegan
This is a bit of a detour, but probably a useful one. Attached is a
patch that replaces a tight PostmasterIsAlive() polling loop in the AV
launcher with a latch, making use of the new WL_POSTMASTER_DEATH
functionality. It's similar to what we've already done for the
archiver. It is relatively straightforward as these auxiliary process
polling loop elimination patches go (certainly compared to what we're
contemplating with the WAL writer), but it raises some questions that
we were lucky to have been able to avoid when I worked on the
archiver. Obviously, this patch isn't finished.

We register various generic signal handlers for the AVLauncher,
including StatementCancelHandler(). Of course, signals that are
handled generically have the same potential to invalidate WaitLatch()
timeout as any other type of signal. We should be mindful of this.

ISTM that these generic handlers ought to be handling this
generically, and that there should be a Latch for just this purpose
for each process within PGPROC. We already have this Latch in PGPROC:

Latch   waitLatch;  /* allow us to wait for sync rep */

Maybe its purpose should be expanded to current process Latch?

Another concern is, what happens when we receive a signal, generically
handled or otherwise, and have to SetLatch() to avoid time-out
invalidation? Should we just live with a spurious
AutoVacLauncherMain() iteration, or should we do something like check
if the return value of WaitLatch indicates that we woke up due to a
SetLatch() call, which must have been within a signal handler, and
that we should therefore goto just before WaitLatch() and elide the
spurious iteration? Given that we can expect some signals to occur
relatively frequently, spurious iterations could be a real concern.

Incidentally, should I worry about the timeout long for WaitLatch()
overflowing?
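That worry is easy to quantify (illustrative arithmetic; whether it matters
depends on the width of long on the platform): with a microsecond timeout
held in a 32-bit signed type, naps of about 2147 seconds or more overflow.

```c
/* Illustration: a microsecond timeout held in a 32-bit signed integer
 * overflows once the nap reaches ~2147.48 seconds. */
static int
usec_timeout_overflows_int32(long long sec, long long usec)
{
    long long timeout = sec * 1000000LL + usec;
    return timeout > 2147483647LL;  /* INT32_MAX */
}
```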

-- 
Peter Geoghegan       http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training and Services
diff --git a/src/backend/postmaster/autovacuum.c b/src/backend/postmaster/autovacuum.c
index 2f3fcbf..cf5eb59 100644
--- a/src/backend/postmaster/autovacuum.c
+++ b/src/backend/postmaster/autovacuum.c
@@ -84,6 +84,7 @@
 #include postmaster/postmaster.h
 #include storage/bufmgr.h
 #include storage/ipc.h
+#include storage/latch.h
 #include storage/pmsignal.h
 #include storage/proc.h
 #include storage/procsignal.h
@@ -267,6 +268,10 @@ int			AutovacuumLauncherPid = 0;
 static pid_t avlauncher_forkexec(void);
 static pid_t avworker_forkexec(void);
 #endif
+/*
+ * Latch used by signal handlers to wake up the sleep in the main loop.
+ */
+static Latch mainloop_latch;
 NON_EXEC_STATIC void AutoVacWorkerMain(int argc, char *argv[]);
 NON_EXEC_STATIC void AutoVacLauncherMain(int argc, char *argv[]);
 
@@ -545,6 +550,8 @@ AutoVacLauncherMain(int argc, char *argv[])
 	 */
 	rebuild_database_list(InvalidOid);
 
+	InitLatch(&mainloop_latch);
+
 	for (;;)
 	{
 		struct timeval nap;
@@ -567,38 +574,20 @@ AutoVacLauncherMain(int argc, char *argv[])
 
 		/*
 		 * Sleep for a while according to schedule.
+		 * Wait on Latch.
 		 *
-		 * On some platforms, signals won't interrupt the sleep.  To ensure we
-		 * respond reasonably promptly when someone signals us, break down the
-		 * sleep into 1-second increments, and check for interrupts after each
-		 * nap.
+		 * We handle wait time invalidation by calling
+		 * SetLatch() in signal handlers.
 		 */
-		while (nap.tv_sec > 0 || nap.tv_usec > 0)
-		{
-			uint32		sleeptime;
-
-			if (nap.tv_sec > 0)
-			{
-				sleeptime = 1000000;
-				nap.tv_sec--;
-			}
-			else
-			{
-				sleeptime = nap.tv_usec;
-				nap.tv_usec = 0;
-			}
-			pg_usleep(sleeptime);
-
-			/*
-			 * Emergency bailout if postmaster has died.  This is to avoid the
-			 * necessity for manual cleanup of all postmaster children.
-			 */
-			if (!PostmasterIsAlive())
-				proc_exit(1);
-
-			if (got_SIGTERM || got_SIGHUP || got_SIGUSR2)
-				break;
-		}
+		WaitLatch(&mainloop_latch, WL_LATCH_SET | WL_TIMEOUT | WL_POSTMASTER_DEATH, (nap.tv_sec * 1000000L) + nap.tv_usec);
+		/*
+		 * Emergency bailout if postmaster has died.  This is to avoid the
+		 * necessity for manual cleanup of all postmaster children.
+		 *
+		 * This happens again here because we may have slept for quite a while.
+		 */
+		if (!PostmasterIsAlive())
+			proc_exit(1);
 
 		DisableCatchupInterrupt();
 
@@ -1322,6 +1311,8 @@ static void
 avl_sighup_handler(SIGNAL_ARGS)
 {
 	got_SIGHUP = true;
+	/* let the waiting loop iterate */
+	SetLatch(&mainloop_latch);
 }
 
 /* SIGUSR2: a worker is up and running, or just finished, or failed to fork */
@@ -1329,6 +1320,8 @@ static void
 avl_sigusr2_handler(SIGNAL_ARGS)
 {
 	got_SIGUSR2 = true;
+	/* let the waiting loop iterate */
+	SetLatch(&mainloop_latch);
 }
 
 /* SIGTERM: time to die */
@@ -1336,6 +1329,8 @@ static void
 avl_sigterm_handler(SIGNAL_ARGS)
 {
 	got_SIGTERM = true;
+	/* let the waiting loop iterate */
+	SetLatch(&mainloop_latch);
 }
 
 


Re: [HACKERS] [COMMITTERS] pgsql: Enable CHECK constraints to be declared NOT VALID

2011-07-17 Thread Robert Haas
On Sun, Jul 17, 2011 at 1:20 PM, Jim Nasby j...@nasby.net wrote:
 On Jul 12, 2011, at 11:30 AM, Joshua D. Drake wrote:
 This all becomes much easier if we keep the ads out of the commit
 messages, and stick to the technical side there. And find another
 venue for the other credit.

 I'm open to ideas.

 I think the commit log isn't actually useful for the advertising portion 
 of this. Users don't read commit logs for the most part. However, it is an 
 easy way for people who are writing release notes, press releases, etc... to 
 find the information.

 Is it a good place for the information? No.

 Is it the easiest place to store it until somebody steps up and creates a 
 proper way to track it so that it can be disseminated properly throughout the 
 community? Probably.

 We do need a way to track this information.
 +1 on everything Josh said.

 Does git allow for additional commit fields? That would allow for easy 
 tracking without much additional burden on committers.

I mean, there's git notes, but that's not exactly what we're looking
for here, and I don't see how it would ease the burden on committers
anyway, and it doesn't solve the problem of not being able to change
things after the fact.  I think this is a clear-cut case of needing
some sort of web application to manage this.  I'd even be willing to
help fill in the relevant info.  But I'm not going to write it
myself...

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [COMMITTERS] pgsql: Enable CHECK constraints to be declared NOT VALID

2011-07-17 Thread Andrew Dunstan



On 07/17/2011 08:36 PM, Robert Haas wrote:



We do need a way to track this information.

+1 on everything Josh said.

Does git allow for additional commit fields? That would allow for easy tracking 
without much additional burden on committers.

I mean, there's git notes, but that's not exactly what we're looking
for here, and I don't see how it would ease the burden on committers
anyway, and it doesn't solve the problem of not being able to change
things after the fact.  I think this is a clear-cut case of needing
some sort of web application to manage this.  I'd even be willing to
help fill in the relevant info.  But I'm not going to write it
myself...



My understanding of git notes is that they can be added after a commit 
without changing the commit - indeed that's apparently a large part of 
their raison d'être:


   A typical use of notes is to supplement a commit message without
   changing the commit itself. Notes can be shown by git log along with
   the original commit message.

It is a pity that you can't define extra fields as is suggested above.


cheers

andrew



Re: [HACKERS] Reduced power consumption in WAL Writer process

2011-07-17 Thread Robert Haas
On Sun, Jul 17, 2011 at 8:20 PM, Peter Geoghegan pe...@2ndquadrant.com wrote:
 This is a bit of a detour, but probably a useful one. Attached is a
 patch that replaces a tight PostmasterIsAlive() polling loop in the AV
 launcher with a latch, making use of the new WL_POSTMASTER_DEATH
 functionality. It's similar to what we've already done for the
 archiver. It is relatively straightforward as these auxiliary process
 polling loop elimination patches go (certainly compared to what we're
 contemplating with the WAL writer), but it raises some questions that
 we were lucky to have been able to avoid when I worked on the
 archiver. Obviously, this patch isn't finished.

Might be good to start a new thread for each auxiliary process, or we
may get confused.

 We register various generic signal handlers for the AVLauncher,
 including StatementCancelHandler(). Of course, signals that are
 handled generically have the same potential to invalidate WaitLatch()
 timeout as any other type of signal. We should be mindful of this.

Right.

 ISTM that these generic handlers ought to be handling this
 generically, and that there should be a Latch for just this purpose
 for each process within PGPROC. We already have this Latch in PGPROC:

 Latch           waitLatch;              /* allow us to wait for sync rep */

 Maybe its purpose should be expanded to current process Latch?

I think that could be a really good idea, though I haven't looked at
it closely enough to be sure.

 Another concern is, what happens when we receive a signal, generically
 handled or otherwise, and have to SetLatch() to avoid time-out
 invalidation? Should we just live with a spurious
 AutoVacLauncherMain() iteration, or should we do something like check
 if the return value of WaitLatch indicates that we woke up due to a
 SetLatch() call, which must have been within a signal handler, and
 that we should therefore goto just before WaitLatch() and elide the
 spurious iteration? Given that we can expect some signals to occur
 relatively frequently, spurious iterations could be a real concern.

Really?  I suspect that it doesn't much matter exactly how many
machine language instructions we execute on each wake-up, within
reasonable bounds, of course.  Maybe some testing is in order?

On another note, I might be inclined to write something like:

if ((return_value_of_waitlatch & WL_POSTMASTER_DEATH) && !PostmasterIsAlive())
    proc_exit(1);

...so as to avoid calling that function unnecessarily on every iteration.

 Incidentally, should I worry about the timeout long for WaitLatch()
 overflowing?

How would that happen?

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [COMMITTERS] pgsql: Enable CHECK constraints to be declared NOT VALID

2011-07-17 Thread Robert Haas
On Sun, Jul 17, 2011 at 8:44 PM, Andrew Dunstan and...@dunslane.net wrote:
 My understanding of git notes is that they can be added after a commit
 without changing the commit - indeed that's apparently a large part of their
 raison d'être:

   A typical use of notes is to supplement a commit message without
   changing the commit itself. Notes can be shown by git log along with
   the original commit message.

Right... but it's still append-only, and I think that there is little
reason to suppose that append-only is what we want or need here.

 It is a pity that you can't define extra fields as is suggested above.

Agreed.  The 'git way' is apparently to add things like:

Reviewed-by: So And So s...@so.com

at the end of the commit message.
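
As a hedged sketch of that trailer convention (git interpret-trailers ships
with git releases newer than this thread; the names and message text below
are placeholders, not from any real commit):

```shell
# Append a Reviewed-by trailer to a commit message without disturbing the
# existing text. The subject/body and reviewer name are hypothetical.
printf 'Enable CHECK constraints to be declared NOT VALID\n\nLong description here.\n' |
  git interpret-trailers --trailer 'Reviewed-by: So And So <so@so.com>'
```

The trailer lands after a blank line at the end of the message, so it stays
greppable in git log output.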

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] Single pass vacuum - take 1

2011-07-17 Thread Robert Haas
On Fri, Jul 15, 2011 at 7:23 AM, Simon Riggs si...@2ndquadrant.com wrote:
 My additional requests would be that we can easily tell which blocks
 have been modified like this, that we have a way to turn this off if
 we get bugs for next few releases, that we check it all works with Hot
 Standby just fine and that all the block inspection tools support it.

To me, this seems like a bit too much of an internal change to justify
adding a GUC.  But it probably is a good idea to ping the authors of
the various block inspection tools -- does contrib/pageinspect care
about this sort of thing, or just the out-of-core stuff?

I'd also think that it would be a good idea to consider (most likely
as a separate patch) the idea you suggested upthread: skip phase 2 if
the number of dead tuples is insufficient to justify the cost of
scanning the indexes.  I've been wondering if it might make sense to
couple that change with a decrease in vacuum_scale_factor - in effect,
vacuum more frequently, but don't scan the indexes every time.

One possibly useful metric for benchmarking these sorts of changes is
to run a write workload for a while until the size of the tables
stabilize and then measure how big they are.  If a given change causes
the table size to stabilize at a lower value, that suggests that the
change makes vacuum more effective at controlling bloat, which is
good.  Conversely, if the change causes the table size to stabilize at
a higher value, that suggests that we've made vacuum less effective at
controlling bloat, which is bad.  In fact, I'd venture to say that
anything that falls into the latter category is dead on arrival...

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] [COMMITTERS] pgsql: Enable CHECK constraints to be declared NOT VALID

2011-07-17 Thread Alvaro Herrera
Excerpts from Robert Haas's message of dom jul 17 20:36:49 -0400 2011:

  Does git allow for additional commit fields? That would allow for easy 
  tracking without much additional burden on committers.
 
 I mean, there's git notes, but that's not exactly what we're looking
 for here, and I don't see how it would ease the burden on committers
 anyway, and it doesn't solve the problem of not being able to change
 things after the fact.

Eh, git notes *can* be changed after the fact, and are *not* append
only.  And as the committer who started this discussion in the first
place, I don't have any problem with having to edit them separately from
the commit message, which is a tiny portion of the work involved in
figuring out the patch, anyway.

What's not clear to me is whether they are sent to the remote when you
invoke git push.  I'm not sure whether this needs a separate command or
extra arguments to push, or whether it's just not possible.
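
For what it's worth, the behavior can be sketched in a throwaway repository
(assuming git 1.7 or later; the paths, identities, and note text below are
placeholders):

```shell
# Minimal sketch: notes are rewritable after the fact, and a plain
# "git push" does not send them -- they live under refs/notes/ and need
# an explicit refspec, e.g. "git push origin refs/notes/commits".
set -e
cd "$(mktemp -d)" && git init -q repo && cd repo
git config user.email you@example.com && git config user.name You
echo hello > f && git add f && git commit -qm 'initial commit'

git notes add -m 'Reviewed-by: So And So'       # attach a note
git notes add -f -m 'Reviewed-by: Someone Else' # -f rewrites it in place
git log -1 --notes                              # shows only the current note
git for-each-ref refs/notes/                    # the ref a push must name
```

The commit SHA is unchanged by either notes command, which is what lets the
note be edited after the commit is published.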

 I think this is a clear-cut case of needing some sort of web
 application to manage this.  I'd even be willing to help fill in the
 relevant info.  But I'm not going to write it myself...

Having a web app would work for me, but it's a larger job than just using
git notes.  So if the notes really work, +1 to them from me.

-- 
Álvaro Herrera alvhe...@commandprompt.com
The PostgreSQL Company - Command Prompt, Inc.
PostgreSQL Replication, Consulting, Custom Development, 24x7 support



Re: [HACKERS] spinlock contention

2011-07-17 Thread Robert Haas
On Fri, Jul 8, 2011 at 6:02 AM, Florian Pflug f...@phlo.org wrote:
 I don't want to fiddle with your git repo, but if you attach a patch
 that applies to the master branch I'll give it a spin if I have time.

 Patch attached.

 Beware that it needs at least GCC 4.1, otherwise it'll use a per-partition
 spin lock instead of locked xadd to increment the shared counters.

[ Back from vacation, catching up on email. ]

gcc version 4.4.5 (Ubuntu/Linaro 4.4.4-14ubuntu5)

pgbench -n -S -T 180 -c 32 -j 32

with lwlock-part patch:
tps = 36974.644091 (including connections establishing)

unpatched cd34647c666be867f95ef8fc0492c30356043f10:
tps = 39432.064293 (including connections establishing)

And with -c 8 -j 8:

tps = 26946.202428 (including connections establishing)
tps = 27206.507424 (including connections establishing)

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] patch: Allow \dd to show constraint comments

2011-07-17 Thread Robert Haas
On Sat, Jul 2, 2011 at 8:37 PM, Josh Kupershmidt schmi...@gmail.com wrote:
 On Sun, Jun 19, 2011 at 11:59 AM, Josh Kupershmidt schmi...@gmail.com wrote:
 I think we still need to handle my Still TODO concerns noted
 upthread. I don't have a lot of time this weekend due to a family
 event, but I was mulling over putting in a is_system boolean column
 into pg_comments and then fixing psql's \dd[S] to use pg_comments. But
 I am of course open to other suggestions.

 I went ahead and did it this way, though I wasn't 100% sure about some
 of the guesses for the is_system column I made. I assumed the
 following types should have is_system = true: casts, roles, databases,
 access methods, tablespaces. I assumed is_system = false for large
 objects. For the rest, I just used whether the schema name of the
 object was 'pg_catalog' or 'information_schema'.

It seems funny to have is_system = true unconditionally for any object
type.  Why'd you do it that way?  Or maybe I should ask, why true
rather than false?

 With the is_system column in place, I was able to make psql's \dd
 actually use pg_comments.

 I had to tweak the pg_proc.h entry for pg_opfamily_is_visible to play
 nice with the recent transform function change to that file.

It seems like we ought to have this function for symmetry regardless
of what happens to the rest of this, so I extracted and committed this
part.

 Issues still to be resolved:

 1.) For now, I'm just ignoring the issue of visibility checks; I
 didn't see a simple way to support these checks \dd was doing:

    processSQLNamePattern(pset.db, buf, pattern, true, false,
                          n.nspname, p.proname, NULL,
                          pg_catalog.pg_function_is_visible(p.oid));

 I'm a bit leery of adding an is_visible column into pg_comments, but
 I'm not sure I have a feasible workaround if we do want to keep this
 visibility check. Maybe a big CASE statement for the last argument of
 processSQLNamePattern() would work...

Yeah... or we could add a function pg_object_is_visible(classoid,
objectoid) that would dispatch to the relevant visibility testing
function based on object type.  Not sure if that's too much of a
kludge, but it wouldn't be useful only here: you could probably use it
in combination with pg_locks, for example.

The real problem with the is_system column is that it seems to be
entirely targeted around what psql happens to feel like doing.  I'm
pretty sure we'll regret it if we choose to go that route.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] proposal: a validator for configuration files

2011-07-17 Thread Pavel Stehule
2011/7/18 Florian Pflug f...@phlo.org:
 On Jul18, 2011, at 01:29 , Robert Haas wrote:
 Hmm.  Maybe what we need is a mechanism that allows the configuration
 to be associated a loadable module, and whenever that module is
 loaded, we also load the associated configuration settings.  This is
 probably terribly syntax, but something like:

 ALTER LOAD 'plpgsql' SET plpgsql.variable_conflict = 'whatever';

 A variant of this would be to allow extensions (in the CREATE EXTENSION
 sense) to declare custom GUCs in their control file. Then we'd only
 need to load those files, which seems better than loading a shared
 library.

+1

Pavel


 We do expect people to wrap their loadable modules in extensions
 anyway nowadays, don't we?

 best regards,
 Florian Pflug






Re: [HACKERS] reducing the overhead of frequent table locks, v4

2011-07-17 Thread Robert Haas
On Tue, Jul 12, 2011 at 3:24 PM, Jeff Davis pg...@j-davis.com wrote:
 On Tue, 2011-07-12 at 13:32 -0500, Robert Haas wrote:
 On Jul 12, 2011, at 12:02 PM, Jeff Davis pg...@j-davis.com wrote:
  Yeah, I think you're right here. It's probably not much of a practical
  concern.
 
  I was slightly bothered because it seemed a little unpredictable. But it
  seems very minor, and if we wanted to fix it later I think we could.

 Yes, I agree. I think there are a number of things we could possibly 
 fine-tune, but it's not clear to me just yet which ones are really problems 
 or what the right solutions are.  I think once the basic patch is in and 
 people start beating on it we'll get a better feeling for which parts can 
 benefit from further engineering.

 OK, marking ready for committer assuming that you will take care of my
 previous complaints (the biggest one is that holdsStrongLockCount should
 be boolean).

 Disclaimer: I have done no performance review at all, even though this
 is a performance patch!

 I like the patch and I like the approach. It seems like the potential
 benefits are worth the extra complexity, which seems manageable and
 mostly isolated to lock.c.

Thanks.  Committed, with minor changes based on your comments.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] lazy vxid locks, v2

2011-07-17 Thread Robert Haas
On Fri, Jul 15, 2011 at 4:41 PM, Josh Berkus j...@agliodbs.com wrote:
 (1) can you re-link me to the pgbench and sysbench setup you used to
 test this originally?  I'd like to implement those.

I didn't use sysbench.  The pgbench command I used was something like:

pgbench -n -S -T $TIME -c $CLIENTS -j $CLIENTS

...with varying times and client counts.  Three minutes or so seems to
be enough to get a reasonably good measurement.  We have a report from
another tester that -M prepared exacerbates the effect of the main
fastlock patch (which I just committed) so that might be worth trying
here, too.

 (2) the max machine I can test these on is 16 cores.  Is that adequate,
 or do we need more cores for real testing?

I think you should be able to see a small benefit on a machine of that
size.  My memory is a bit fuzzy because it's 1 AM and I haven't looked
at this in over a week, but I believe that this patch produces only a
small improvement in performance by itself, apparently because most of
the gain that we get from reducing lock manager contention gets eaten
up by additional spinlock contention.  However, I think the change is
worthwhile anyway, because the benefit of any changes we might make to
reduce spinlock contention is going to be limited by the rate at which
we can shove traffic through the lock manager.  In other words, aside
from whatever benefit this patch may have on its own, I believe it's
an enabler for future performance work.

In mulling over this patch, it's occurred to me that there are some
stylistic things about it that can stand to be improved.  So I'm going
to do that for the next version.  However, if you want to test on this
version, feel free, because I think those changes are going to have
little if any effect on performance.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company



Re: [HACKERS] storing TZ along timestamps

2011-07-17 Thread Robert Haas
On Fri, Jul 8, 2011 at 11:13 AM, Stuart Bishop stu...@stuartbishop.net wrote:
 On Mon, Jun 6, 2011 at 7:50 AM, Jim Nasby j...@nasby.net wrote:
 On Jun 4, 2011, at 3:56 AM, Greg Stark wrote:
 On Thu, Jun 2, 2011 at 8:58 PM, Jim Nasby j...@nasby.net wrote:

 I'm torn between whether the type should store the original time or the 
 original time converted to GMT.

 This is the wrong way to think about it. We *never* store time
 converted to GMT.  When we want to represent a point in time we
 represent it as seconds since the epoch.
 Right. Sorry, my bad.

 The question here is how to represent more complex concepts than
 simply points in time. I think the two concepts under discussion are
 a) a composite type representing a point in time and a timezone it
 should be interpreted in for operations and display and b) the
 original input provided which is a text string with the constraint
 that it's a valid input which can be interpreted as a point in time.

 My fear with A is that something could change that would make it impossible 
 to actually get back to the time that was originally entered. For example, a 
 new version of the timezone database could change something. Though, that 
 problem also exists for timestamptz today, so presumably if it was much of 
 an issue we'd have gotten complaints by now.

 The common problem is daylight savings time being declared or
 cancelled. This happens numerous times throughout the year, often with
 short notice.

 If you want to store '6pm July 3rd 2014 Pacific/Fiji', and want that
 to keep meaning 6pm Fiji time no matter what decisions the Fijian
 government makes over the next two years, you need to store the
 wallclock (local) time and the timezone. The wallclock time remains
 fixed, but the conversion to UTC may float.

 If you are storing an point in time that remains stable no matter
 future political decisions, you store UTC time and an offset. The
 conversion to wallclock time may float, and your 6pm Fiji time meeting
 might change to 5pm or 7pm depending on the political edicts.

This is a pretty good point.  You would want the first of these
(probably) for the time a meeting is scheduled to start, and the
second for the time of a meteor shower (or some other natural event
that doesn't care what the politicians decide).

I feel like the second of these is pretty well handled by our existing
timestamptz data type.  Granted, you can't store the intended display
time zone, but putting it in a separate column is not really a
problem: at least, it has the right semantics.  So maybe the first is
the one we should be aiming at.  If so, storing a counter and a time
zone is the wrong approach: you need to record something like year,
month, day, hour, minute, second, tzname.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
