Re: [HACKERS] Patch for reserved connections for replication users

2013-10-19 Thread Amit Kapila
On Thu, Oct 17, 2013 at 8:57 AM, Amit Kapila amit.kapil...@gmail.com wrote:
 On Wed, Oct 16, 2013 at 4:30 AM, Gibheer gibh...@zero-knowledge.org wrote:
 On Mon, 14 Oct 2013 11:52:57 +0530
 Amit Kapila amit.kapil...@gmail.com wrote:

 On Sun, Oct 13, 2013 at 2:08 PM, Gibheer gibh...@zero-knowledge.org
 wrote:
  On Sun, 13 Oct 2013 11:38:17 +0530
  Amit Kapila amit.kapil...@gmail.com wrote:
 
  On Thu, Oct 10, 2013 at 3:17 AM, Gibheer
  gibh...@zero-knowledge.org wrote:
   On Mon, 7 Oct 2013 11:39:55 +0530
   Amit Kapila amit.kapil...@gmail.com wrote:
   Robert Haas wrote:
   On Mon, Aug 5, 2013 at 2:04 AM, Andres Freund
   andres(at)2ndquadrant(dot)com wrote:
 I would be glad if you could also test the patch again, as I'm nearly
 code blind after testing it for 4 hours.
 I had the problem that I could not attach as many replication
 connections as I wanted; they were denied as normal connections. I
 think I got it fixed, but I'm not 100% sure at the moment.
 After some sleep, I will read the code again and retest it, to make
 sure it really does what it is supposed to do.

 You have forgotten to attach the patch. However, it is now important
 to first reach consensus on the approach for this feature; currently
 there are 3 approaches:
 1. Have replication_reserved_connections as a separate parameter to
 reserve connections for replication
 2. Use max_wal_senders to reserve connections for replication
 3. Treat replication connections as a pool outside max_connections

 Apart from the above approaches, we need to think about how the user
 can view the usage of connections: pg_stat_activity doesn't show
 replication connections, so it's difficult for the user to see how the
 connections are used.

 I am really not sure what the best way to go ahead from here is, but I
 think it might be helpful if we can study some use cases or how other
 databases solve this problem.

Today I spent some time looking at how other databases (in particular
MySQL) handle this. There seems to be no separate way to configure
replication connections; rather, if a user runs into
too_many_connections
(https://dev.mysql.com/doc/refman/5.5/en/too-many-connections.html),
MySQL reserves one spare connection (a superuser connection) to check
what all the connections are doing, and all connections can be viewed
through one common command, SHOW PROCESSLIST
(https://dev.mysql.com/doc/refman/5.5/en/show-processlist.html).

By the above, I don't mean that we should only do what other databases
have done; rather, it is an attempt to find a way that can be useful
for users of PostgreSQL.

Your views/thoughts?

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com


-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] RelFileNode to Relation

2013-10-19 Thread Soroosh Sardari
Hi

I need to get a Relation instance, but I have only a RelFileNode!
Looking at relcache.h, only the following function seems helpful:

extern Relation RelationIdGetRelation(Oid relationId);

However, there is another problem: a RelFileNode contains only relNode,
which, as the comment says, is equivalent to pg_class.relfilenode, and
in some situations the relation OID (pg_class.oid) and the relfilenode
are not the same.

So I can't use the above function!

Any ideas?

Regards,
Soroosh Sardari
Sharif University of Tech.


Re: [HACKERS] RelFileNode to Relation

2013-10-19 Thread Tom Lane
Soroosh Sardari soroosh.sard...@gmail.com writes:
 I need to get a Relation instance but I have only a RelFileNode!

Why do you think you need to do that?  Such a lookup is inherently the
wrong thing, because relations' relfilenode values are not fixed (unless
you have a lock on the relation, which presumably you don't).

regards, tom lane




Re: [HACKERS] RelFileNode to Relation

2013-10-19 Thread Soroosh Sardari
On Sat, Oct 19, 2013 at 11:53 AM, Tom Lane t...@sss.pgh.pa.us wrote:

 Soroosh Sardari soroosh.sard...@gmail.com writes:
  I need to get a Relation instance but I have only a RelFileNode!

 Why do you think you need to do that?  Such a lookup is inherently the
 wrong thing, because relations' relfilenode values are not fixed (unless
 you have a lock on the relation, which presumably you don't).

 regards, tom lane



I know it's against the PG abstraction: at the smgr level we shouldn't
need to know anything about the relation. But in my project I want to
encrypt all pages of a particular relation and decrypt them when those
pages are fetched back into the buffer cache.

This behavior depends on some relation properties, so in smgropen() I
need some knowledge about the relation, and all I have is a RelFileNode.

I cannot look up pg_class by the relation OID, because smgropen() may be
called before the row is inserted into pg_class.

Soroosh


Re: [HACKERS] Updatable view columns

2013-10-19 Thread Dean Rasheed
On 18 October 2013 15:43, Robert Haas robertmh...@gmail.com wrote:

 Committed.


Excellent. Thank you!
And thank you Marko for your thorough review.

Regards,
Dean




Re: [HACKERS] psql tab completion for updatable foreign tables

2013-10-19 Thread Dean Rasheed
On 18 October 2013 16:41, Robert Haas robertmh...@gmail.com wrote:
 On Fri, Oct 18, 2013 at 1:34 AM, Dean Rasheed dean.a.rash...@gmail.com wrote:
 Personally, I think this is too fancy anyway.  I'd just complete all
 views and foreign tables and be done with it.  We don't inspect
 permissions either, for example.  This might be too confusing for users.

 Yeah, I think you're probably right.

 I tend to agree.  When the rules were simple (i.e. pretty much nothing
 was updateable) it might have made sense to make tab completion hew to
 them, but they're complex enough now that I think it no longer does.
 There are now three different ways that a view can be updateable
 (auto, trigger, rule) and the rules are complex.

 Based on that it sounds like we need a new version of this patch.  If
 that's not going to happen RSN, we should mark this returned with
 feedback and it can be resubmitted if and when someone finds the time
 to update it.


OK, here's a new version that just completes with all tables, views
and foreign tables.

Doing this makes the insert, update and delete queries all the same,
which means there's not much point in keeping all three, so I've just
kept Query_for_list_of_updatables for use with INSERT, UPDATE and
DELETE.

Regards,
Dean


tab_complete.v3.patch
Description: Binary data



[HACKERS] Commitfest II Closed

2013-10-19 Thread David Fetter
Thanks very much to Mike Blackwell and Craig Kerstiens for their
persistence through what most people would consider a tedious and
thankless task.  Thanks also to the patch submitters, reviewers and
other participants.

That the formal commitfest is over does not mean that your patch won't
get reviewed or committed until November.  What it does mean is that
people will be setting patch review as a lower priority, frequently so
they can live their lives, work on new stuff, do their day jobs...

We got 20 patches, many quite significant, committed this time.
Kudos!

Cheers,
David.
-- 
David Fetter da...@fetter.org http://fetter.org/
Phone: +1 415 235 3778  AIM: dfetter666  Yahoo!: dfetter
Skype: davidfetter  XMPP: david.fet...@gmail.com
iCal: webcal://www.tripit.com/feed/ical/people/david74/tripit.ics

Remember to vote!
Consider donating to Postgres: http://www.postgresql.org/about/donate




Re: [HACKERS] FDW API / flow charts for the docs?

2013-10-19 Thread Peter Eisentraut
On Fri, 2013-10-18 at 18:46 -0400, Stephen Frost wrote:
 As to your point about not wanting to do it for a single image- it seems
 we could potentially say that for every individual image proposed, but
 if we don't keep track of those images anywhere then we may not realize
 that 5 or 10 have actually been done and proposed but never integrated.
 If they're kept on the wiki then perhaps we would both keep track of the
 ones proposed and realize when it's worthwhile to add support for them
 to the doc build system.

Right.

 As to Tom's point on the previous thread, that we would need to actually
 maintain these images, that is helped by using dia, imv, since it's a
 pretty simple tool to use and understand and is available on many
 platforms.

I'd also take a look at ditaa, which might make this even simpler.





Re: [HACKERS] FDW API / flow charts for the docs?

2013-10-19 Thread Peter Eisentraut
On Sat, 2013-10-19 at 00:52 +0200, Tomas Vondra wrote:
 AFAIK graphviz can give you at least .ps .svg .fig .png .gif .dia
 formats (and some other). I believe that covers most (if not all) of
 the cases you've mentioned.

I'm not concerned about that.  It's the code that would need to be
written to implement those cases.






Re: [HACKERS] Patch for reserved connections for replication users

2013-10-19 Thread Gibheer
On Sat, 19 Oct 2013 12:09:57 +0530
Amit Kapila amit.kapil...@gmail.com wrote:

 Today I spent some time seeing how other databases (in particular
 MySQL) achieve it. There seems to be no separate way to configure
 replication connections, rather if user faced with
 too_many_connections
 (https://dev.mysql.com/doc/refman/5.5/en/too-many-connections.html),
 they allow one spare connection (super user connection) to check what
 all connections are doing, it seems all connections can be viewed
 through one common command Show ProcessList
 (https://dev.mysql.com/doc/refman/5.5/en/show-processlist.html).
 
 By above, I don't mean that we should only do what other databases
 have done, rather it is towards trying to find a way which can be
 useful for users of Postgresql.
 
 Your views/thoughts?
 
 With Regards,
 Amit Kapila.
 EnterpriseDB: http://www.enterprisedb.com
 

Hi,

I have access to MySQL and PostgreSQL at work, and it is true that MySQL
does not have separate pools. It also happened to us that we lost the
connection from a slave and it was unable to get back into replication,
on both MySQL and Postgres, because of some stupid applications.
One difference is, like you said, that replication connections are
listed in `show processlist`, whereas replication connections in
Postgres are listed in a view separate from the rest of the connections.
I think the Postgres way is better in this case, as the replication
state of the complete cluster can be viewed with one select on the
master. In MySQL it takes one query on each slave.

On the other hand, work on logical replication is being done, which will
have predefined slots for it
(http://wiki.postgresql.org/wiki/BDR_User_Guide#max_logical_slots).
This will also consume slots from max_wal_senders
(http://wiki.postgresql.org/wiki/BDR_User_Guide#max_wal_senders). With
that, I think the best approach is to build a pool for replication-only
connections, whatever their kind. Information about them will only be
available through pg_stat_replication. When a connection limit is hit,
it is then clear whether it is a normal connection or a replication
connection, and where the user should look for further information
about it.
Also, nobody would have to calculate how many connections have to be
reserved for which service.

I have yet to take a look at how background worker connections are
handled (separate pool or unified with normal connections), because the
same limitations apply to them: we want them running regardless of the
load from applications or users.

I will report back when I have had more time to look into it.

regards,

Stefan Radomski




Re: [HACKERS] GIN improvements part 1: additional information

2013-10-19 Thread Bruce Momjian
On Thu, Oct  3, 2013 at 05:24:49PM -0400, Bruce Momjian wrote:
 On Thu, Oct  3, 2013 at 02:48:20PM -0400, Robert Haas wrote:
  On Thu, Oct 3, 2013 at 2:43 PM, Heikki Linnakangas
  hlinnakan...@vmware.com wrote:
    It seems we've all but decided that we'll require reindexing GIN
    indexes in 9.4.
  
  I thought the consensus in Ottawa was strongly against that.  I'm not
  aware that anyone has subsequently changed their position on the
  topic.  Bruce is right to point out that we've done such things before
  and can therefore do it again, but just because we have the technical
  means to do it doesn't make it good policy.
  
  That having been said, if we do decide to break it...
 
 Agreed.  I was stating only that this is easy for pg_upgrade.  One cool
 thing is that the upgrade completes and the indexes are there, but
 just marked as invalid until the REINDEX.
 
 One other point Alexander made is that the new GIN indexes will be
 smaller so most people would want the new format in the new cluster
 anyway.

I am in Moscow with Alexander and we were discussing GIN pg_upgrade
issues.  One option we have already discussed was to take the old GIN
index code and put it in a separate directory, and call the new GIN
index something different, but that got hung up on how users were going
to create indexes of the new type.

One nice trick would be for the index creation code to check the global
variable IsBinaryUpgrade that pg_upgrade sets.  If CREATE INDEX ... GIN
finds IsBinaryUpgrade set, it should create an (empty) index of type
gin-v1/old.  If it is not set, it should create a new gin index.  This
will allow pg_upgrade to work, and allow REINDEX to create a new-type
GIN index from an old one.

We would need to append -v1 to the old index directory, system
catalog, and function names.  We could then remove the old GIN index
code in some later major release, and pg_upgrade will then mark the
indexes as invalid and create a REINDEX script.

This allows users to reindex their GIN indexes over time, but it doesn't
lock us into keeping the gin-v1 code around forever.

-- 
  Bruce Momjian  br...@momjian.ushttp://momjian.us
  EnterpriseDB http://enterprisedb.com

  + Everyone has their own god. +




Re: [HACKERS] FDW API / flow charts for the docs?

2013-10-19 Thread Vik Fearing
On 10/19/2013 02:22 AM, Christopher Browne wrote:
 I would be more inclined to let GraphViz into the process than Dia;
 the former fits *much* better into a Make-based process.

I also cast my vote for Graphviz.

-- 
Vik



-- 
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


[HACKERS] autovacuum_work_mem

2013-10-19 Thread Peter Geoghegan
There has recently been considerable discussion around auto-tuning.
Throughout the course of this discussion, I raised the idea of
creating a new GUC to separately control autovacuum's usage of
maintenance_work_mem [1], explaining the rationale in some detail [2].
At the time Magnus seemed to think this a good idea [3].

The attached simple patch adds a new GUC, autovacuum_work_mem, which,
when set to a value other than -1 (the default), is preferred by
autovacuum over maintenance_work_mem. All other usage of VACUUM is
unaffected.

I won't repeat the rationale for the patch here. Appropriate
documentation changes are included. I don't think it's a problem that
autovacuum_work_mem is kind of similar to vacuum_mem in name.
maintenance_work_mem was last spelt vacuum_mem about 10 years ago.
Enough time has passed that I think it very unlikely that someone
might conflate the two.

I have decided to have the default value of -1 carry, and not have it
automatically take the same value as maintenance_work_mem, because
doing so could create the impression that it needs to be set
explicitly, which of course it does not - this is not the same
situation as exists for wal_buffers. We just check whether it's set
when going to VACUUM, if VACUUM is requested from a worker process.

It seemed neater to me to create a new flag, so that in principle any
vacuum() code path can request autovacuum_work_mem, rather than having
vacuumlazy.c code call IsAutoVacuumWorkerProcess() for the same
purpose. To date, that's only been done within vacuumlazy.c for things
like logging.

[1] 
http://www.postgresql.org/message-id/cam3swztr1uu+7kr1zougwcjriw9nvbqdjqydmrwypevdfi4...@mail.gmail.com

[2] 
http://www.postgresql.org/message-id/cam3swztyod0ycla-4nrb4s8-ugjyr514aey+8o6vjqwvbzs...@mail.gmail.com

[3] 
http://www.postgresql.org/message-id/CABUevEzVrd36yeFzYBzad0=r09eqRqNoMwX8r=urikg9drf...@mail.gmail.com

-- 
Peter Geoghegan


autovacuum_work_mem.v1.2013_10_19.patch
Description: Binary data



Re: [HACKERS] Patch for reserved connections for replication users

2013-10-19 Thread Amit Kapila
On Sun, Oct 20, 2013 at 1:26 AM, Gibheer gibh...@zero-knowledge.org wrote:


 Hi,

 I have access to MySQL and PostgreSQL at work, and it is true that MySQL
 does not have separate pools. It also happened to us that we lost the
 connection from a slave and it was unable to get back into replication,
 on both MySQL and Postgres, because of some stupid applications.
 One difference is, like you said, that replication connections are
 listed in `show processlist`, whereas replication connections in
 Postgres are listed in a view separate from the rest of the connections.
 I think the Postgres way is better in this case, as the replication
 state of the complete cluster can be viewed with one select on the
 master. In MySQL it takes one query on each slave.

Going either way (separate management of replication connections or
unified max_connections), the user has to understand how to configure
the system so that it serves his purpose.
Here I think the important thing is to decide which way would be
easier for users to understand and configure.
As a user, I would be happy with one parameter (max_connections)
rather than having multiple parameters for connection management and
having to understand each one separately to configure the system.
However, many users would be more comfortable if there were multiple
parameters for configuring the system. I was not sure which way users
would like to configure connection management, and neither did we have
consensus on how to proceed; that's why I checked how users of other
databases configure connection management, and it seems to me that
many users are using a single parameter.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com




Re: [HACKERS] [PATCH] Statistics collection for CLUSTER command

2013-10-19 Thread Noah Misch
  (2013/08/08 20:52), Vik Fearing wrote:
  As part of routine maintenance monitoring, it is interesting for us to
  have statistics on the CLUSTER command (timestamp of last run, and
  number of runs since stat reset) like we have for (auto)ANALYZE and
  (auto)VACUUM.  Patch against today's HEAD attached.

Adding new fields to PgStat_StatTabEntry imposes a substantial distributed
cost, because every database stats file write-out grows by the width of those
fields times the number of tables in the database.  Associated costs have been
and continue to be a pain point with large table counts:

http://www.postgresql.org/message-id/flat/1718942738eb65c8407fcd864883f...@fuzzy.cz
http://www.postgresql.org/message-id/flat/52268887.9010...@uptime.jp

In that light, I can't justify widening PgStat_StatTabEntry by 9.5% for this.
I recommend satisfying this monitoring need in your application by creating a
cluster_table wrapper function that issues CLUSTER and then updates statistics
you store in an ordinary table.  Issue all routine CLUSTERs by way of that
wrapper function.  A backend change that would help here is to extend event
triggers to cover the CLUSTER command, permitting you to inject monitoring
after plain CLUSTER and dispense with the wrapper.

Thanks,
nm

-- 
Noah Misch
EnterpriseDB http://www.enterprisedb.com

