Re: ALTER SYSTEM SET command to change postgresql.conf parameters (RE: [HACKERS] Proposal for Allow postgresql.conf values to be changed via SQL [review])

2013-08-17 Thread Amit Kapila
On Fri, Aug 16, 2013 at 11:12 PM, Alvaro Herrera
alvhe...@2ndquadrant.com wrote:
 Robert Haas escribió:
 On Fri, Aug 9, 2013 at 8:44 AM, Stephen Frost sfr...@snowman.net wrote:
  A shared catalog which defined which *database* to run the trigger in,
  with a way to fire off a new backend worker in that database and tell it
  to run the trigger, might be interesting and would deal with the issue
  that the trigger would behave differently depending on the database
  connected to.  That would bring along other issues, of course, but it
  seemed an interesting enough idea to mention.

 Eh, maybe.  I'm not sure there's enough use case for that to justify
 the amount of infrastructure it would require.  I'm happy with the
 recent enhancements to background workers, but there's an awful lot of
 ground to cover between that and what you're proposing.

 Yeah, agreed.  There's a lot of infrastructure required for this; it
 seems hard to justify it only to support disabling ALTER SYSTEM.

I think disabling ALTER SYSTEM could be supported more easily if we
chose one of the options below:
a. Put the UI into a contrib module, so people who don't want it
simply don't install it.
b. Use an include-file mechanism, so that a user can comment out the
include line in postgresql.conf to disable it.

If I remember correctly, you mentioned earlier that by default the
auto file should be parsed after postgresql.conf. How about reverting
to the previous include mechanism, where the file is parsed only if it
is mentioned in an include directive and ignored otherwise? I think
that would be a reasonable way to disable the feature.
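To illustrate option (b), something along these lines in postgresql.conf
(the file name is only an example for whatever the auto file ends up being
called):

# If ALTER SYSTEM writes its settings to a separate auto file, that file is
# only read because of this directive; commenting it out disables the
# feature for the whole cluster.
include_if_exists 'postgresql.auto.conf'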

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




New ECPG idea, was: Re: [HACKERS] ECPG FETCH readahead

2013-08-17 Thread Boszormenyi Zoltan

On 2013-08-17 12:08, Boszormenyi Zoltan wrote:

Hi,

I am restarting this old thread... :-)

On 2012-04-24 10:17, Michael Meskes wrote:

OK, I will implement #2. Another question popped up: what to do
with FETCH ALL? Either the current readahead window size could be used,
or it could be temporarily bumped to, say, some tens of thousands of rows.
We may not know in advance how many rows "all" is. Using the current
window size lowers performance but saves memory.

I would say doing a large fetch in two or three batches won't cost too much in
terms of performance.
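Just to illustrate the batching idea in plain libpq terms (the table name
and the batch size of 10000 are made up; this is only a sketch of the
concept, not the ECPG readahead code):

#include <stdio.h>
#include <libpq-fe.h>

static void
fetch_all_in_batches(PGconn *conn)
{
    PQclear(PQexec(conn, "BEGIN"));
    PQclear(PQexec(conn, "DECLARE cur CURSOR FOR SELECT * FROM mytable"));

    for (;;)
    {
        /* Fetch one window of rows instead of a single FETCH ALL. */
        PGresult   *res = PQexec(conn, "FETCH FORWARD 10000 FROM cur");
        int         ntup;

        if (PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));
            PQclear(res);
            break;
        }
        ntup = PQntuples(res);
        /* ... process the ntup rows here ... */
        PQclear(res);

        if (ntup < 10000)   /* short batch: cursor exhausted */
            break;
    }

    PQclear(PQexec(conn, "CLOSE cur"));
    PQclear(PQexec(conn, "COMMIT"));
}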


Please, don't apply this patch yet. I discovered a rather big hole
that can confuse the cursor position tracking if you do this:
...
That will also need a new round of review. Sorry for that.

No problem, better to find it now instead of after release.

Anyway, I moved the patch to 2012-next (I hope I did it correctly) so 2012-1
can be closed. Let's try to get this patch done in the next commit fest.

Michael


I had time to look into this patch of mine again after about 1.5 years.
Frankly, that was too long to remember every detail of the patch, and
looking at it as one big entity was confusing.

So I started fresh and to make review easier, I broke the patch up
into small pieces that all build on each other. I have also fixed quite
a few bugs, mostly in my code, but some in the ECPG parser and
the regression tests as well.

I have put the broken-up patchset into a Git tree of mine at GitHub:
https://github.com/zboszor/ecpg-readahead/
but the huge compressed patch is also attached for reference.
It was generated with

$ git diff 221e92f64c6e136e550ec2592aac3ae0d4623209..870922676e6ae0faa4ebbf94b92e0b97ec418e16


ECPG regression tests are now Valgrind-clean, except for two of them,
and both failures are pre-existing bugs.

1. ecpg/test/compat_informix/rfmtlong.pgc points out a problem in
   ecpg/compatlib/informix.c

==5036== 1 errors in context 1 of 4:
==5036== Invalid read of size 4
==5036==    at 0x4E3453C: rfmtlong (informix.c:941)
==5036==    by 0x4007DA: fmtlong.constprop.0 (rfmtlong.pgc:22)
==5036==    by 0x4006BE: main (rfmtlong.pgc:45)
==5036==  Address 0x60677d8 is 24 bytes inside a block of size 25 alloc'd
==5036==    at 0x4C28409: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==5036==    by 0x4E34268: rfmtlong (informix.c:783)
==5036==    by 0x4007DA: fmtlong.constprop.0 (rfmtlong.pgc:22)
==5036==    by 0x4006BE: main (rfmtlong.pgc:45)

The same error is reported 4 times.

2. ecpg_add_mem() seems to leak memory:

==5463== 256 bytes in 16 blocks are definitely lost in loss record 1 of 1
==5463==    at 0x4C2A121: calloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==5463==    by 0x4E3E153: ecpg_alloc (memory.c:21)
==5463==    by 0x4E3E212: ecpg_add_mem (memory.c:110)
==5463==    by 0x4E3542B: ecpg_store_result (execute.c:409)
==5463==    by 0x4E37E5A: ecpg_process_output (execute.c:1777)
==5463==    by 0x4E38CCA: ecpg_do (execute.c:2137)
==5463==    by 0x4E38D8A: ECPGdo (execute.c:2159)
==5463==    by 0x400A82: fn (alloc.pgc:51)
==5463==    by 0x5152C52: start_thread (pthread_create.c:308)
==5463==    by 0x545C13C: clone (clone.S:113)

The last two issues we talked about in this thread are also implemented:
- permanently raise the readahead window if the application sends a
  bigger FETCH command, and
- temporarily raise the readahead window for FETCH ALL commands.

The cursor position tracking was completely rewritten, so the client side
properly follows the cursor position known by the backend and doesn't
skip MOVE statements where it shouldn't. The previously known
bug is completely eliminated this way.

Please, review that patch.


I have another idea for ECPG, building on this huge patch.

Currently, UPDATE/DELETE WHERE CURRENT OF has to issue a MOVE
before the command in case the cursor positions known by the application
and the backend are different.

My idea builds on the fact that UPDATE/DELETE RETURNING is present
in all supported back branches.

A mini-parser only understanding SELECT, UPDATE and DELETE should
be added to ecpglib, so

DECLARE cursor CURSOR FOR SELECT ...

and

PREPARE prepared_stmt FROM :query;
DECLARE cursor CURSOR FOR prepared_stmt;

can be analyzed and tweaked behind the application's back.

This is needed to detect whether a query is a simple updatable
scan of a table, and to return errors early to the application if it's not,
without actually sending the UPDATE/DELETE WHERE CURRENT OF
query to the backend.

For the purpose of WHERE CURRENT OF, I would add a ctid
column at the end of the targetlist that is treated like resjunk
in the backend when returning data to the application.

So, SELECTs would return the ctid information of the tuples.
The cursor query was a FETCH N with abs(N) > 1 because of
the readahead. For this reason, the cursor positions known
by the application and the backend are different.
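To sketch the effect in plain libpq terms (only an illustration of the ctid
idea, not the ECPG implementation; it assumes a cursor "cur" whose SELECT
list also returns ctid, invents the table and column names, and ignores the
concurrency guarantees that WHERE CURRENT OF gives you):

#include <libpq-fe.h>

static void
update_current_row(PGconn *conn)
{
    PGresult   *row;
    const char *params[1];

    row = PQexec(conn, "FETCH FORWARD 1 FROM cur");
    if (PQresultStatus(row) != PGRES_TUPLES_OK || PQntuples(row) != 1)
    {
        PQclear(row);
        return;
    }

    /* Update the row we just fetched by its ctid; no MOVE is needed because
     * we never rely on the backend's idea of the current cursor position. */
    params[0] = PQgetvalue(row, 0, PQfnumber(row, "ctid"));
    PQclear(PQexecParams(conn,
                         "UPDATE mytable SET val = val + 1 WHERE ctid = $1",
                         1, NULL, params, NULL, NULL, 0));
    PQclear(row);
}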

The extra MOVE can be eliminated by replacing

UPDATE table SET ... WHERE CURRENT 

Re: [HACKERS] [pgsql-zh-general] Chinese in Postgres

2013-08-17 Thread Huang Bambo
Hi, Francesco,
Since you said "If I insert the data using a C++ program I have empty
squares", I guess you forgot to convert your string to UTF-8 before
inserting.

Bambo


On Fri, Aug 16, 2013 at 7:25 PM, ciifrance...@tiscali.it 
ciifrance...@tiscali.it wrote:

 Hello all,
 before writing this message, I wrote about this in other mailing lists
 without solving my problem.
 Maybe some of you can help me.

 I have problems with a DB in Postgres when I try to insert Chinese
 strings in UTF-8 format.
 If I insert the data using a C++ program I get empty squares, in this
 format: ��� (3 empty squares for each Chinese ideogram, as that is its
 length in UTF-8).
 If the string contains Chinese mixed with ASCII, the ASCII is OK but the
 Chinese is broken:
 漢語1-3漢語  -- ��1-3��

 All the data is read from a binary file. It seems to be read correctly, but
 something happens when the query is executed.
 (If the text is in a different language that uses only 2 bytes per
 letter, e.g. Hebrew, I see only 2 empty squares per character, but this
 is not good anyway...)

 Strange things:
 1. If I insert the record by running the query from the command line (PuTTY),
 the Chinese text is OK. The problem occurs only when I insert via the C++
 program.
 2. I checked the C++ functions involved by creating unit tests; if I
 run some other tests (on another virtual machine) the text is not damaged.
 These strange things are confusing me, but maybe they will be useful
 information for somebody who has had the same problem.

 The DB is set for UTF-8
    Name    | Owner | Encoding |   Collate   |    Ctype    | Access privileges
 -----------+-------+----------+-------------+-------------+-------------------
  postgres  | pgsql | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
  MyDB      | pgsql | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
  template0 | pgsql | UTF8     | en_US.UTF-8 | en_US.UTF-8 |
  template1 | pgsql | UTF8     | en_US.UTF-8 | en_US.UTF-8 |

 Previously I also tried with:

    Name    | Owner | Encoding | Collate | Ctype | Access privileges
 -----------+-------+----------+---------+-------+-------------------
  postgres  | pgsql | UTF8     | C       | C     |
  MyDB      | pgsql | UTF8     | C       | C     |
 ...

 But the problem was the same.
 I know that you would like to see the code, but it's too long (if you
 want, I can try to post a few lines, like the connection to the DB and
 so on). I don't know if Postgres creates some log when inserting
 damaged data; that would be useful.

 For now, in order to save your time, my question is: did any of you
 have the same problem?
 (And how did you solve it?)

 Thanks,
 Francesco





Re: [HACKERS] libpq thread locking during SSL connection start

2013-08-17 Thread Peter Eisentraut
On Mon, 2013-08-12 at 10:49 -0400, Stephen Frost wrote:
  Alternatively, if we want to just print an error message and proceed, we
  should put the strerror based on the return value into the message.

 That could certainly be added.

Here is a patch for that.  I also adjusted the message wording to be
more in line with project style.

 Should we also be adding an
 error message+strerror in cases where pthread_mutex_unlock() fails for
 some reason?
 
Ideally yes, I think.  But it's probably less urgent, because if it
fails the next lock request will probably error too?
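For context, the point about strerror() can be shown with a tiny standalone
sketch: the pthread locking functions report failure through their return
value, not errno, so the message has to be built from that return value (the
mutex and function names here are illustrative, not libpq's):

#include <pthread.h>
#include <stdio.h>
#include <string.h>

static pthread_mutex_t config_mutex = PTHREAD_MUTEX_INITIALIZER;

static int
lock_config(void)
{
    int     rc = pthread_mutex_lock(&config_mutex);

    if (rc != 0)
    {
        /* pthread functions return the error code directly; errno is unset */
        fprintf(stderr, "could not acquire mutex: %s\n", strerror(rc));
        return -1;
    }
    return 0;
}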

diff --git a/src/interfaces/libpq/fe-secure.c b/src/interfaces/libpq/fe-secure.c
index b16968b..3bd0113 100644
--- a/src/interfaces/libpq/fe-secure.c
+++ b/src/interfaces/libpq/fe-secure.c
@@ -256,14 +256,18 @@ struct sigpipe_info
 	/* First time through? */
 	if (conn->ssl == NULL)
 	{
+#ifdef ENABLE_THREAD_SAFETY
+		int rc;
+#endif
+
 		/* We cannot use MSG_NOSIGNAL to block SIGPIPE when using SSL */
 		conn->sigpipe_flag = false;
 
 #ifdef ENABLE_THREAD_SAFETY
-		if (pthread_mutex_lock(&ssl_config_mutex))
+		if ((rc = pthread_mutex_lock(&ssl_config_mutex)))
 		{
 			printfPQExpBuffer(&conn->errorMessage,
-							  libpq_gettext("unable to acquire mutex\n"));
+							  libpq_gettext("could not acquire mutex: %s\n"), strerror(rc));
 			return PGRES_POLLING_FAILED;
 		}
 #endif
@@ -1115,10 +1119,12 @@ struct sigpipe_info
 		 * SSL_context struct.
 		 */
 #ifdef ENABLE_THREAD_SAFETY
-		if (pthread_mutex_lock(&ssl_config_mutex))
+		int rc;
+
+		if ((rc = pthread_mutex_lock(&ssl_config_mutex)))
 		{
 			printfPQExpBuffer(&conn->errorMessage,
-							  libpq_gettext("unable to acquire mutex\n"));
+							  libpq_gettext("could not acquire mutex: %s\n"), strerror(rc));
 			return -1;
 		}
 #endif
@@ -1333,10 +1339,12 @@ struct sigpipe_info
 		X509_STORE *cvstore;
 
 #ifdef ENABLE_THREAD_SAFETY
-		if (pthread_mutex_lock(&ssl_config_mutex))
+		int rc;
+
+		if ((rc = pthread_mutex_lock(&ssl_config_mutex)))
 		{
 			printfPQExpBuffer(&conn->errorMessage,
-							  libpq_gettext("unable to acquire mutex\n"));
+							  libpq_gettext("could not acquire mutex: %s\n"), strerror(rc));
 			return -1;
 		}
 #endif



Re: [HACKERS] StrategyGetBuffer optimization, take 2

2013-08-17 Thread Robert Haas
On Mon, Aug 5, 2013 at 11:49 AM, Merlin Moncure mmonc...@gmail.com wrote:
 *) What I think is happening:
 I think we are again getting burned by getting de-scheduled while
 holding the free list lock. I've been chasing this problem for a long
 time now (for example, see:
 http://postgresql.1045698.n5.nabble.com/High-SYS-CPU-need-advise-td5732045.html)
 but now I've got a reproducible case.  What is happening is this:

 1. in RelationGetBufferForTuple (hio.c): fire LockRelationForExtension
 2. call ReadBufferBI.  this goes down the chain until StrategyGetBuffer()
 3. Lock free list, go into clock sweep loop
 4. while holding clock sweep, hit 'hot' buffer, spin on it
 5. get de-scheduled
 6. now enter the 'hot buffer spin lock lottery'
 7. more and more backends pile on, the Linux scheduler goes berserk,
 reducing the chances of winning #6
 8. finally win the lottery. lock released. everything back to normal.

This is an interesting theory, but where's the evidence?  I've seen
spinlock contention come from enough different places to be wary of
arguments that start with "it must be happening because ...".

IMHO, the thing to do here is run perf record -g during one of the
trouble periods.  The performance impact is quite low.  You could
probably even set up a script that runs perf for five minute intervals
at a time and saves all of the perf.data files.  When one of these
spikes happens, grab the one that's relevant.

If you see that s_lock is where all the time is going, then you've
proved it's a PostgreSQL spinlock rather than something in the kernel
or a shared library.  If you can further see what's calling s_lock
(which should hopefully be possible with perf -g), then you've got it
nailed dead to rights.
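Concretely, something like this would do (the PID and the five-minute window
are placeholders):

$ perf record -g -p <backend PID> -- sleep 300
$ perf report -g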

...Robert




Re: [HACKERS] dynamic background workers, round two

2013-08-17 Thread Robert Haas
On Sun, Aug 11, 2013 at 1:31 AM, Andres Freund and...@anarazel.de wrote:
 So, I'd suggest something like:

 typedef enum BgwHandleStatus {
BGWH_SUCCESS, /* successfully got status */
BGWH_NOT_YET, /* worker hasn't started yet */
BGWH_GONE, /* worker had been started, but shut down already */
BGWH_POSTMASTER_DIED /* well, there you go */
 } BgwHandleStatus;


 BgwHandleStatus GetBackgroundWorkerPid(BackgroundWorkerHandle *handle, pid_t *pid);
 BgwHandleStatus WaitForBackgroundWorkerStartup(BackgroundWorkerHandle *handle, pid_t *pid);

OK, here's a patch with that API.  I renamed the constants a bit, because a
process that has stopped is not necessarily gone; it could be
configured for restart.  But we can say that it is stopped, at the
moment.

I'm not sure that this API is an improvement.  But I think it's OK, if
you prefer it.
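For reference, a rough usage sketch of the interface as proposed above (the
constant names are the ones from Andres' mail; the attached patch renames
some of them, so treat this purely as an illustration):

#include "postgres.h"
#include "postmaster/bgworker.h"

static void
start_worker_and_report(BackgroundWorker *worker)
{
    BackgroundWorkerHandle *handle;
    pid_t       pid;
    BgwHandleStatus status;

    if (!RegisterDynamicBackgroundWorker(worker, &handle))
        elog(ERROR, "could not register background worker");

    status = WaitForBackgroundWorkerStartup(handle, &pid);
    if (status == BGWH_SUCCESS)
        elog(LOG, "worker started with PID %d", (int) pid);
    else if (status == BGWH_POSTMASTER_DIED)
        elog(ERROR, "postmaster died while waiting for worker startup");
    else
        elog(WARNING, "worker did not start");
}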

...Robert


bgworker-feedback-v2.patch
Description: Binary data



Re: [HACKERS] Fix Windows socket error checking for MinGW

2013-08-17 Thread Andrew Dunstan


On 08/17/2013 01:16 AM, Noah Misch wrote:

On Fri, Aug 16, 2013 at 06:56:45PM -0500, Michael Cronenworth wrote:

I started a thread on the general list so read that for more info.

http://www.postgresql.org/message-id/520a6e55.40...@cchtml.com

I'm also going to submit the patch to CommitFest.
+#ifndef WIN32
if (SOCK_ERRNO == EWOULDBLOCK)
+#else
+   if (SOCK_ERRNO == WSAEWOULDBLOCK)
+#endif

Thanks for looking into this.  I suspect this patch is achieving the right
runtime behavior, but some cleanup is in order.  src/include/port/win32.h
makes some effort to preempt the need for a patch like this, but the relevant
code isn't used for MinGW:

/*
  * For Microsoft Visual Studio 2010 and above we intentionally redefine
  * the regular Berkeley error constants and set them to the WSA constants.
  * Note that this will break if those constants are used for anything else
  * than Windows Sockets errors.
  */
#if _MSC_VER >= 1600
#pragma warning(disable:4005)
#define EMSGSIZE WSAEMSGSIZE
#define EAFNOSUPPORT WSAEAFNOSUPPORT
#define EWOULDBLOCK WSAEWOULDBLOCK
#define EPROTONOSUPPORT WSAEPROTONOSUPPORT
#define ECONNRESET WSAECONNRESET
#define EINPROGRESS WSAEINPROGRESS
#define ENOBUFS WSAENOBUFS
#define ECONNREFUSED WSAECONNREFUSED
#define EOPNOTSUPP WSAEOPNOTSUPP
#pragma warning(default:4005)
#endif

I suspect we should do one of the following:

1. Redefine those constants for more (all?) compilers.
2. Remove that block and put #ifdef around all usage of such constants in
frontend code, as you have done.
3. Remove that block and make src/backend/port/win32/socket.c frontend-usable,
so frontend code can treat errno like backend code treats errno.

What do you recommend?
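For what it's worth, option 1 would amount to extending that mapping to
non-MSVC Windows compilers as well, roughly like this (an untested sketch,
not a proposed patch):

#ifdef WIN32
/* Map the Berkeley errno names to the WSA error codes for every Windows
 * compiler, not just MSVC >= 2010; #undef first in case the compiler's own
 * headers define them with different values. */
#undef  EWOULDBLOCK
#define EWOULDBLOCK WSAEWOULDBLOCK
#undef  ECONNRESET
#define ECONNRESET WSAECONNRESET
#undef  EINPROGRESS
#define EINPROGRESS WSAEINPROGRESS
/* ... and so on for EMSGSIZE, EAFNOSUPPORT, ENOBUFS, ECONNREFUSED, ... */
#endif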




We don't seem to have a problem with this on native builds, only on 
cross-compiles AFAIK (see buildfarm for proof). The native mingw-w64 
build works just fine. So my first question is going to be: why is the 
cross-compile different?



cheers

andrew




Re: [HACKERS] Chinese in Postgres

2013-08-17 Thread Peter Geoghegan
On Fri, Aug 16, 2013 at 4:25 AM, ciifrance...@tiscali.it
ciifrance...@tiscali.it wrote:
 If I insert the data using a C++ program I have empty squares, in this
 format: ��� (3 empty squares for each chinese ideogram as that is the length
 in UTF-8)
 If the string contains chinese mixed with ASCII, the ASCII is OK but the
 Chinese is broken:
 漢語1-3漢語  -- ��1-3��

You mentioned nothing about what platform this is or how you've built
the program, and nothing about operating system locale.

If this is a Windows program (you mention PuTTY), I'd read up on the
differences between what are known as "Unicode" and "Multibyte"
encodings on MSDN:

http://msdn.microsoft.com/en-us/library/2dax2h36.aspx

Of course, this is a total stab in the dark, but then people with the
problem that you describe don't tend to be on *nix systems as a rule.
As someone said upthread, if Postgres does that then it's because the
bytes you sent aren't what you think they are when rendered as UTF-8.
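If that is the issue, the usual fix on the client side is to convert
explicitly before handing the string to libpq; a minimal sketch for a
Windows program (the table and column names are invented, error handling
is trimmed):

#include <windows.h>
#include <libpq-fe.h>

static int
insert_utf8(PGconn *conn, const wchar_t *wtext)
{
    char        utf8[4096];
    const char *params[1];
    PGresult   *res;
    int         ok;

    /* Convert UTF-16 (the native Windows wide-char encoding) to UTF-8. */
    if (WideCharToMultiByte(CP_UTF8, 0, wtext, -1,
                            utf8, (int) sizeof(utf8), NULL, NULL) == 0)
        return -1;

    params[0] = utf8;
    res = PQexecParams(conn, "INSERT INTO docs (body) VALUES ($1)",
                       1, NULL, params, NULL, NULL, 0);
    ok = (PQresultStatus(res) == PGRES_COMMAND_OK) ? 0 : -1;
    PQclear(res);
    return ok;
}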

-- 
Peter Geoghegan




Re: [HACKERS] Fix Windows socket error checking for MinGW

2013-08-17 Thread Noah Misch
On Sat, Aug 17, 2013 at 02:04:24PM -0400, Andrew Dunstan wrote:
 On Fri, Aug 16, 2013 at 06:56:45PM -0500, Michael Cronenworth wrote:
 +#ifndef WIN32
 if (SOCK_ERRNO == EWOULDBLOCK)
 +#else
 +   if (SOCK_ERRNO == WSAEWOULDBLOCK)
 +#endif

 We don't seem to have a problem with this on native builds, only on  
 cross-compiles AFAIK (see buildfarm for proof). The native mingw-w64  
 build works just fine. So my first question is going to be why is the  
 cross-compile different?

One of the reports Michael cited was apparently native:
http://www.postgresql.org/message-id/e1uclpd-l4...@wrigleys.postgresql.org

Perhaps only some versions of w32api trigger the problem.  I agree we ought to
understand the necessary conditions before proceeding.

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com




Re: [HACKERS] psql missing tab completion for extensions

2013-08-17 Thread Dimitri Fontaine
Magnus Hagander mag...@hagander.net writes:
 Magnus Hagander mag...@hagander.net writes:
  psql is missing tab completion for \dx (or more usefully, for \dx+).
  Attached patch fixes this.
 Done.

Thanks!

Regards,
-- 
Dimitri Fontaine
http://2ndQuadrant.fr PostgreSQL : Expertise, Formation et Support




[HACKERS] Feature Request on Extensions

2013-08-17 Thread Steven Citron-Pousty
Greetings all: 
I spoke to Josh B and company at OSCON about a feature we really need for 
PostgreSQL extensions on OpenShift (Red Hat's Platform as a Service). 

What we need is the ability for PostgreSQL to load extensions from a user's file 
space. For example, if, as a developer on a machine, I wanted to load PostGIS 
into PostgreSQL, I should be able to (either at DB start, cluster 
initialization, or from within the psql command line) specify an additional 
path where PostgreSQL can look for the binaries for extensions. Compile time is 
not an acceptable solution unless that compile-time parameter can be an 
environment variable that is present for all users but points to different 
locations. This feature would be turned on or off at the whole-database level. 

I know that under a typical server install scenario this would be bad from a 
security perspective, but on OpenShift we run things differently. Let me 
explain what happens when a developer creates an application on OpenShift: 

1) Each developer application on OpenShift is actually a unique user on the 
Linux machine. 
2) That user is locked down very tightly using SELinux and cgroups to create a 
container-like environment. 
3) We then start up an instance of the PostgreSQL server running as that unique 
user. If there are 30 applications using PostgreSQL on that node in our 
OpenShift cluster, then there will be 30 instances of PostgreSQL running on the 
machine, each as a different userid. But they all start from the single binary 
that is laid down in the OS, so everyone starts from the same binary and they 
share all the binaries. 
4) The developer does not have permission to add binaries (or any files) to 
locations other than /home/uniqueuserid/app-root/data and a PAM-namespaced /tmp. 
So even though a user could probably download all the pieces necessary to 
compile an extension, they have no way to actually write the binaries for the 
extension to the correct PostgreSQL file location. 

What this means for OpenShift is that we have to approve, package, and 
maintain any extension that a user wants to use. But we don't want to have that 
relationship between the developer and their PostgreSQL extensions. Given our 
security model, we don't care what extension they want to run (as long as they 
don't violate the TOS). 

So what we really need is the ability to tell PostgreSQL: "yes, it is fine for 
a user to install an extension - especially if the server is running as the 
same user trying to install the extension - so go ahead and look in this 
directory for any possible extensions." 
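As a very rough illustration of the kind of knob being asked for (this setting 
does not exist; the parameter name and path are invented): 

# Hypothetical postgresql.conf parameter (not an existing GUC). It would add
# a per-cluster directory, writable by the user the server runs as, that is
# searched for extension control files and shared libraries:
extension_directory = '/home/uniqueuserid/app-root/data/pg-extensions'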

I hope this all makes sense. I also realize this will take some time to make it 
into PostgreSQL, but I think multi-tenant systems are going to become more and 
more common in the future. 

Thanks and I look forward to the discussion... 
Steve 





Re: [HACKERS] Feature Request on Extensions

2013-08-17 Thread Peter Eisentraut
On Sat, 2013-08-17 at 17:53 -0400, Steven Citron-Pousty wrote:
 What we need is the ability for Postgresql to load extensions from a
 users file space.

Sure, that would be useful for a number of reasons.  I think there was a
proposed patch for that a while ago that some people objected to, so
that might need to be revised a little.  But it sounds doable.





Re: [HACKERS] [PATCH] pg_sleep(interval)

2013-08-17 Thread Robert Haas
On Fri, Aug 16, 2013 at 4:16 PM, Peter Eisentraut pete...@gmx.net wrote:
 That example can be used as an argument against almost any kind of
 overloading.

Yep.

And that may be justified.  We don't handle overloading particularly well.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

