Re: [GENERAL] BRIN indexes and ORDER BY

2016-10-05 Thread Darren Lafreniere
Stephen Frost  wrote:

> For at least some of the common BRIN use-cases, where the rows are
> inserted in-order and never/very-rarely modified or deleted, this
> approach would work very well.
>

Thanks Stephen, this is exactly our use case.


Re: [GENERAL] Installing pgAdmin 4 in Oracle Enterprise Linux 7

2016-10-05 Thread Edson Richter
Devrim Gündüz wrote:

> Hi again,
>
> On Mon, 2016-10-03 at 00:18 +, Edson Richter wrote:
> > It is not working, I get the following error:
> >
> > [root@backup1 yum.repos.d]# LANG=C yum install pgadmin4 pgadmin4-web
> > Loaded plugins: ulninfo
> > epel/x86_64/metalink | 2.6 kB  00:00:00
> > ol7_UEKR3 | 1.2 kB  00:00:00
> > ol7_latest | 1.4 kB  00:00:00
> > pgdg96 | 4.1 kB  00:00:00
> > https://download.postgresql.org/pub/repos/yum/testing/9.6/redhat/rhel-7Server-x86_64/repodata/repomd.xml:
> > [Errno 14] HTTPS Error 404 - Not Found
> > Trying other mirror.
>
> Can you please try again with this?
>
> yum --enablerepo pgdg96-updates-testing install pgadmin4 pgadmin4-web
>
> Just tested on:
> $ cat /etc/oracle-release
> Oracle Linux Server release 7.2
>
> If it also works for you, I'll push the updated packages to updates repo.

The command runs without errors! Thanks.

After the install, I'll let you know if there are any additional issues.

Thank you!

Regards,

Edson Richter

>
> Regards,
>
> --
> Devrim GÜNDÜZ
> EnterpriseDB: http://www.enterprisedb.com
> PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
> Twitter: @DevrimGunduz , @DevrimGunduzTR


Re: [GENERAL] BRIN indexes and ORDER BY

2016-10-05 Thread Stephen Frost
Darren,

* Darren Lafreniere (dlafreni...@onezero.com) wrote:
> Tom Lane  wrote:
> > > Gavin Wahl wrote:
> > >> It seems trivial to accelerate a MAX or MIN query with a BRIN index. You
> > >> just find the page range with the largest/smallest value, and then only
> > >> scan that one. Would that be hard to implement? I'm interested in
> > working
> > >> on it if someone can give me some pointers.
> >
> > I think this proposal is fairly broken anyway.  The page range with the
> > largest max-value may once have contained the largest live row, but
> > there's no guarantee that it still does.  It might even be completely
> > empty.  You could imagine an algorithm like this:
> >
> > 1. Find page-range with largest max.  Scan it to identify live row with
> > largest value.  If *no* live values, find page-range with next largest
> > max, repeat until no page ranges remain (whereupon return NULL).
> >
> > 2. For each remaining page-range whose indexed max exceeds the value
> > currently in hand, scan that page-range to see if any value exceeds
> > the one in hand, replacing the value if so.
> >
> > This'd probably allow you to omit scanning some of the page-ranges
> > in the table, but in a lot of cases you'd end up scanning many of them;
> > and you'd need a lot of working state to remember which ranges you'd
> > already looked at.  It'd certainly always be a lot more expensive than
> > answering the same question with a btree index, because in no case do
> > you get to avoid scanning the entire contents of the index.
[...]
> A b-tree index would certainly be faster for ordering. But in scenarios
> where you have huge datasets that can't afford the space or update time
> required for b-tree, could such a BRIN-accelerated ordering algorithm at
> least be faster than ordering with no index?

For at least some of the common BRIN use-cases, where the rows are
inserted in-order and never/very-rarely modified or deleted, this
approach would work very well.

Certainly, in those cases, using this would be much cheaper than a
seqscan plus top-N sort, at least for values of 'N' that are small
relative to the number of rows in the table.
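
For instance (a sketch only; the table here is hypothetical, and the
ordering optimization itself doesn't exist yet):

CREATE TABLE logs (id bigserial, logged_at timestamptz, msg text);
CREATE INDEX logs_logged_at_brin ON logs USING brin (logged_at);

-- Rows are only ever appended in logged_at order, so each BRIN
-- page-range summarizes a narrow, non-overlapping slice of timestamps,
-- and a top-N scan could look at just the last range or two:
SELECT * FROM logs ORDER BY logged_at DESC LIMIT 20;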

In general, I like the idea of supporting this: BRIN indexes strike me
as very good for very large tables with highly clumped data, and being
able to run a top-N query against such tables can be very useful at
times.

Thanks!

Stephen




Re: [GENERAL] BRIN indexes and ORDER BY

2016-10-05 Thread Darren Lafreniere
Tom Lane  wrote:

> > Gavin Wahl wrote:
> >> It seems trivial to accelerate a MAX or MIN query with a BRIN index. You
> >> just find the page range with the largest/smallest value, and then only
> >> scan that one. Would that be hard to implement? I'm interested in
> working
> >> on it if someone can give me some pointers.
>
> I think this proposal is fairly broken anyway.  The page range with the
> largest max-value may once have contained the largest live row, but
> there's no guarantee that it still does.  It might even be completely
> empty.  You could imagine an algorithm like this:
>
> 1. Find page-range with largest max.  Scan it to identify live row with
> largest value.  If *no* live values, find page-range with next largest
> max, repeat until no page ranges remain (whereupon return NULL).
>
> 2. For each remaining page-range whose indexed max exceeds the value
> currently in hand, scan that page-range to see if any value exceeds
> the one in hand, replacing the value if so.
>
> This'd probably allow you to omit scanning some of the page-ranges
> in the table, but in a lot of cases you'd end up scanning many of them;
> and you'd need a lot of working state to remember which ranges you'd
> already looked at.  It'd certainly always be a lot more expensive than
> answering the same question with a btree index, because in no case do
> you get to avoid scanning the entire contents of the index.
>
> regards, tom lane
>


Thanks Tom,

A b-tree index would certainly be faster for ordering. But in scenarios
where you have huge datasets that can't afford the space or update time
required for b-tree, could such a BRIN-accelerated ordering algorithm at
least be faster than ordering with no index?

Darren Lafreniere


Re: [GENERAL] BRIN indexes and ORDER BY

2016-10-05 Thread Tom Lane
Alvaro Herrera  writes:
> Darren Lafreniere wrote:
>> We found a pgsql-hackers thread from about a year ago about optimizing
>> ORDER BY for BRIN indexes. Tom Lane suggested that he was working on it:
>> https://www.postgresql.org/message-id/11881.1443393360%40sss.pgh.pa.us

> Tom said he was working on some infrastructure planner changes
> ("upper-planner path-ification"), not that he was working on improving
> usage of BRIN indexes.  As far as I know, nobody has worked on that.

Alvaro's reading is correct; I wasn't planning to work on any such thing,
and still am not.

Looking again at the original thread:

> Gavin Wahl wrote:
>> It seems trivial to accelerate a MAX or MIN query with a BRIN index. You
>> just find the page range with the largest/smallest value, and then only
>> scan that one. Would that be hard to implement? I'm interested in working
>> on it if someone can give me some pointers.

I think this proposal is fairly broken anyway.  The page range with the
largest max-value may once have contained the largest live row, but
there's no guarantee that it still does.  It might even be completely
empty.  You could imagine an algorithm like this:

1. Find page-range with largest max.  Scan it to identify live row with
largest value.  If *no* live values, find page-range with next largest
max, repeat until no page ranges remain (whereupon return NULL).

2. For each remaining page-range whose indexed max exceeds the value
currently in hand, scan that page-range to see if any value exceeds
the one in hand, replacing the value if so.

This'd probably allow you to omit scanning some of the page-ranges
in the table, but in a lot of cases you'd end up scanning many of them;
and you'd need a lot of working state to remember which ranges you'd
already looked at.  It'd certainly always be a lot more expensive than
answering the same question with a btree index, because in no case do
you get to avoid scanning the entire contents of the index.

regards, tom lane




Re: [GENERAL] Installing pgAdmin 4 in Oracle Enterprise Linux 7

2016-10-05 Thread Devrim Gündüz

Hi again,

On Mon, 2016-10-03 at 00:18 +, Edson Richter wrote:
> It is not working, I get the following error:
> 
> [root@backup1 yum.repos.d]# LANG=C yum install pgadmin4 pgadmin4-web
> Loaded plugins: ulninfo
> epel/x86_64/metalink | 2.6 kB  00:00:00
> ol7_UEKR3 | 1.2 kB  00:00:00
> ol7_latest | 1.4 kB  00:00:00
> pgdg96 | 4.1 kB  00:00:00
> https://download.postgresql.org/pub/repos/yum/testing/9.6/redhat/rhel-7Server-x86_64/repodata/repomd.xml:
> [Errno 14] HTTPS Error 404 - Not Found
> Trying other mirror.

Can you please try again with this?

yum --enablerepo pgdg96-updates-testing install pgadmin4 pgadmin4-web 

Just tested on:
$ cat /etc/oracle-release 
Oracle Linux Server release 7.2

If it also works for you, I'll push the updated packages to updates repo.

Regards,

-- 
Devrim GÜNDÜZ
EnterpriseDB: http://www.enterprisedb.com
PostgreSQL Danışmanı/Consultant, Red Hat Certified Engineer
Twitter: @DevrimGunduz , @DevrimGunduzTR




Re: [GENERAL] Unexpected trouble from pg_basebackup

2016-10-05 Thread Magnus Hagander
On Wed, Oct 5, 2016 at 3:55 PM, otheus uibk  wrote:

>
>
> On Tue, Oct 4, 2016 at 10:49 PM, Magnus Hagander wrote:
>
>>
>>
>> On Tue, Oct 4, 2016 at 10:42 PM, otheus uibk wrote:
>>
>>> After a 3 to 4 minute delay, pg_basebackup started doing its thing and
>>> finished within a few minutes. So now the question is: why the startup
>>> delay?
>>>
>>
>>
>> Sounds to me like it's doing a CHECKPOINT with spreading, which make it
>> take time. Try with "-c fast" and see if the problem goes away.
>>
>
> Maybe not too far off.
>
>
> 2016-10-04 17:43:40.620 GMT 57eb90a0.6e07 402 0 LOG:  checkpoint complete: wrote 12799 buffers (1.0%); 0 transaction log file(s) added, 0 removed, 5 recycled; write=1389.348 s, sync=0.033 s, total=1389.400 s; sync files=240, longest=0.003 s, average=0.000 s; distance=92915 kB, estimate=129373 kB
> 2016-10-04 18:20:31.714 GMT 57eb90a0.6e07 403 0 LOG:  checkpoint starting: time
> 2016-10-04 18:39:19.870 GMT 57eb90a0.6e07 404 0 LOG:  checkpoint complete: wrote 10265 buffers (0.8%); 0 transaction log file(s) added, 0 removed, 4 recycled; write=1128.118 s, sync=0.023 s, total=1128.155 s; sync files=190, longest=0.002 s, average=0.000 s; distance=73419 kB, estimate=123778 kB
> 2016-10-04 19:07:12.647 GMT [unknown] pgsync 57f3fde0.52e3 2 0 LOG:  replication connection authorized: user=pgsync
> 2016-10-04 19:07:12.703 GMT 57eb90a0.6e07 405 0 LOG:  checkpoint starting: force wait
> 2016-10-04 19:20:32.879 GMT [unknown] pgsync 57f40100.5891 2 0 LOG:  replication connection authorized: user=pgsync
> 2016-10-04 19:23:05.249 GMT 57eb90a0.6e07 406 0 LOG:  checkpoint complete: wrote 8638 buffers (0.7%); 0 transaction log file(s) added, 0 removed, 5 recycled; write=952.514 s, sync=0.016 s, total=952.546 s; sync files=238, longest=0.002 s, average=0.000 s; distance=68257 kB, estimate=118226 kB
> 2016-10-04 19:23:05.249 GMT 57eb90a0.6e07 407 0 LOG:  checkpoint starting: force wait
> 2016-10-04 19:28:52.232 GMT 57eb90a0.6e07 408 0 LOG:  checkpoint complete: wrote 3102 buffers (0.2%); 0 transaction log file(s) added, 0 removed, 0 recycled; write=346.957 s, sync=0.018 s, total=346.982 s; sync files=149, longest=0.002 s, average=0.000 s; distance=36016 kB, estimate=110005 kB
> 2016-10-04 19:30:31.922 GMT [unknown] pgsync 57f40357.5c70 2 0 LOG:  replication connection authorized: user=pgsync
> 2016-10-04 19:30:31.932 GMT 57eb90a0.6e07 409 0 LOG:  checkpoint starting: force wait
> 2016-10-04 19:33:40.857 GMT 57eb90a0.6e07 410 0 LOG:  checkpoint complete: wrote 1763 buffers (0.1%); 0 transaction log file(s) added, 0 removed, 1 recycled; write=188.886 s, sync=0.030 s, total=188.924 s; sync files=125, longest=0.004 s, average=0.000 s; distance=13135 kB, estimate=100318 kB
>
>
>
> OK, so what was happening is that the checkpoints were taking 5 to 15
> minutes, and aborted basebackups were triggering new checkpoints which
> waited on the previous ones.
>
> Is it possible the new environment is not high-performance enough?
>
> possibly relevant configuration change:
>checkpoint_timeout=1h
>
>
>
>
pg_basebackup will always request a checkpoint. PostgreSQL will time that
checkpoint to take approximately checkpoint_timeout *
checkpoint_completion_target. So with checkpoint_timeout set to a very
high value, it will by default target something like 30 minutes before
the backup even gets started. The only reason it takes as *little* time
as it does is that your system is lightly loaded.

You can do a fast checkpoint with the parameter I suggested before, or you
can tune your checkpoint_timeout to be something that gives you a more
reasonable time.
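
For example (hostname and target directory are placeholders):

pg_basebackup -h primary.example.com -U pgsync -D /var/backups/base -c fast

With -c fast (long form --checkpoint=fast) the server performs an
immediate checkpoint instead of a spread one, so the backup starts right
away at the cost of a short burst of I/O.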

-- 
 Magnus Hagander
 Me: http://www.hagander.net/
 Work: http://www.redpill-linpro.com/


Re: [GENERAL] BRIN indexes and ORDER BY

2016-10-05 Thread Darren Lafreniere
Ahh, yes. I misread that. Thank you for the clarification.

On Wed, Oct 5, 2016 at 2:27 PM, Alvaro Herrera wrote:

> Darren Lafreniere wrote:
>
> > "In addition to simply finding the rows to be returned by a query, an
> index
> > may be able to deliver them in a specific sorted order. This allows a
> > query's ORDER BY specification to be honored without a separate sorting
> > step. Of the index types currently supported by PostgreSQL, only B-tree
> can
> > produce sorted output — the other index types return matching rows in an
> > unspecified, implementation-dependent order."
> >
> > We found a pgsql-hackers thread from about a year ago about optimizing
> > ORDER BY for BRIN indexes. Tom Lane suggested that he was working on it:
> > https://www.postgresql.org/message-id/11881.1443393360%40sss.pgh.pa.us
>
> Tom said he was working on some infrastructure planner changes
> ("upper-planner path-ification"), not that he was working on improving
> usage of BRIN indexes.  As far as I know, nobody has worked on that.
>
> --
> Álvaro Herrera    https://www.2ndQuadrant.com/
> PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
>


Re: [GENERAL] BRIN indexes and ORDER BY

2016-10-05 Thread Alvaro Herrera
Darren Lafreniere wrote:

> "In addition to simply finding the rows to be returned by a query, an index
> may be able to deliver them in a specific sorted order. This allows a
> query's ORDER BY specification to be honored without a separate sorting
> step. Of the index types currently supported by PostgreSQL, only B-tree can
> produce sorted output — the other index types return matching rows in an
> unspecified, implementation-dependent order."
> 
> We found a pgsql-hackers thread from about a year ago about optimizing
> ORDER BY for BRIN indexes. Tom Lane suggested that he was working on it:
> https://www.postgresql.org/message-id/11881.1443393360%40sss.pgh.pa.us

Tom said he was working on some infrastructure planner changes
("upper-planner path-ification"), not that he was working on improving
usage of BRIN indexes.  As far as I know, nobody has worked on that.

-- 
Álvaro Herrera    https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




[GENERAL] BRIN indexes and ORDER BY

2016-10-05 Thread Darren Lafreniere
Hello,

We're curious about the current behavior in 9.5.4, and possible future
enhancements, of BRIN indexes with respect to ordering.

In the docs, section 11.4. "Indexes and ORDER BY" (
https://www.postgresql.org/docs/9.5/static/indexes-ordering.html) is clear
that anything other than B-tree indexes have unspecified ordering:

"In addition to simply finding the rows to be returned by a query, an index
may be able to deliver them in a specific sorted order. This allows a
query's ORDER BY specification to be honored without a separate sorting
step. Of the index types currently supported by PostgreSQL, only B-tree can
produce sorted output — the other index types return matching rows in an
unspecified, implementation-dependent order."

We found a pgsql-hackers thread from about a year ago about optimizing
ORDER BY for BRIN indexes. Tom Lane suggested that he was working on it:
https://www.postgresql.org/message-id/11881.1443393360%40sss.pgh.pa.us


Our current test shows that ordering by a BRIN-indexed column still
performs an unoptimized sort:

SELECT generate_series(1, 10000000) AS id INTO test;
CREATE INDEX idx_test_id ON test USING BRIN (id);
EXPLAIN SELECT id FROM test ORDER BY id DESC LIMIT 20;

Limit  (cost=410344.40..410344.45 rows=20 width=4)
  ->  Sort  (cost=410344.40..435344.40 rows=10000000 width=4)
        Sort Key: id DESC
        ->  Seq Scan on test  (cost=0.00..144248.00 rows=10000000 width=4)
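
For comparison, a b-tree index on the same column does let the planner
skip the sort (a sketch; plan abbreviated, costs omitted):

CREATE INDEX idx_test_id_btree ON test USING btree (id);
EXPLAIN SELECT id FROM test ORDER BY id DESC LIMIT 20;

Limit
  ->  Index Only Scan Backward using idx_test_id_btree on test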

Is there anything we're missing to speed this up? Or is it still a future
feature?

Thank you,
Darren Lafreniere


Re: [GENERAL] ZSON, PostgreSQL extension for compressing JSONB

2016-10-05 Thread Aleksander Alekseev
> > I could align ZSON to PostgreSQL code style. I only need to run pgindent
> > and write a few comments. Do you think the community would be interested
> > in adding it to /contrib/? I mean, doesn't ZSON solve a bit too specific
> > a problem for this?
> 
> CREATE COMPRESSION DICTIONARY public.simple_dict (
> WORDS = my_json_schema
> );
> 
> Then use the dictionary by referencing the DictionaryId within the
> datatype modifier, e.g. ZSON(357)
> 
> That way we can use this for TEXT, XML, JSON etc.. as ZTEXT, ZXML, ZJSON
> 
> So it seems like a generally useful thing to me.

Good idea!

What about an evolving JSON/JSONB/XML schema? For instance, adding or
removing keys in new versions of the application. UPDATE COMPRESSION
DICTIONARY, perhaps?
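
Something along these lines, say (purely hypothetical syntax, nothing of
the sort exists today):

UPDATE COMPRESSION DICTIONARY public.simple_dict
SET (WORDS = my_json_schema_v2);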

-- 
Best regards,
Aleksander Alekseev




Re: [GENERAL] ZSON, PostgreSQL extension for compressing JSONB

2016-10-05 Thread Simon Riggs
On 4 October 2016 at 16:34, Aleksander Alekseev wrote:
> Hello, Simon.
>
> Thanks for your interest in this project!
>
>> Will you be submitting this to core?
>
> I could align ZSON to PostgreSQL code style. I only need to run pgindent
> and write a few comments. Do you think the community would be interested
> in adding it to /contrib/? I mean, doesn't ZSON solve a bit too specific
> a problem for this?

CREATE COMPRESSION DICTIONARY public.simple_dict (
WORDS = my_json_schema
);

Then use the dictionary by referencing the DictionaryId within the
datatype modifier, e.g. ZSON(357)

That way we can use this for TEXT, XML, JSON, etc., as ZTEXT, ZXML, ZJSON.

So it seems like a generally useful thing to me.
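
A table would then pick up the dictionary through the type modifier, e.g.
(hypothetical, like everything above; none of this is implemented):

CREATE TABLE documents (
    id   serial,
    body ZSON(357)  -- 357 = DictionaryId of public.simple_dict
);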

-- 
Simon Riggs    http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services




Re: [GENERAL] ZSON, PostgreSQL extension for compressing JSONB

2016-10-05 Thread Aleksander Alekseev
> > I could align ZSON to PostgreSQL code style. I only need to run pgindent
> > and write a few comments. Do you think the community would be interested
> > in adding it to /contrib/? I mean, doesn't ZSON solve a bit too specific
> > a problem for this?
> 
> I find the references to pglz quite troubling, particularly by reading
> that this data type visibly uses its own compression logic.

ZSON just replaces frequently used strings with 16-bit codes. It also
prepends PGLZ_HINT_SIZE zero bytes (32 by default; it can also be 0) to
make it more likely that the data will be compressed using PGLZ. After
all, who would use ZSON for small documents? That's all.

I hope that explains the references to PGLZ.
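
Roughly, the intended workflow is (simplified; see the README for the
exact calls):

CREATE EXTENSION zson;
-- build a dictionary from the data already stored in mytable.mycol
SELECT zson_learn('{{"mytable", "mycol"}}');
-- store the column compressed from now on
ALTER TABLE mytable ALTER COLUMN mycol TYPE zson USING mycol::zson;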

-- 
Best regards,
Aleksander Alekseev




Re: [GENERAL] ZSON, PostgreSQL extension for compressing JSONB

2016-10-05 Thread Aleksander Alekseev
> ~everyone wants lower data storage and wants some kind of compression.
> Can this be made to automatically retrain when analyzing (makes sense?)?
> And create a new dictionary only if it changes compared to the last one.

It's an interesting idea. However, I doubt it could be automated in a
one-size-fits-all manner. Some users would want to do re-learning during
ANALYZE, others during VACUUM, once a month, or perhaps via triggers and
some sort of heuristics.

That said, I see no reason not to accept pull requests implementing
different re-learning automation strategies. It's just not a priority for
me personally.

-- 
Best regards,
Aleksander Alekseev




Re: [GENERAL] Lock contention in TransactionIdIsInProgress()

2016-10-05 Thread Tomáš Uko
Hi Jeff,



We have encountered the same problem as you (in 9.5.4). It seems that so
far the fix hasn't been back-ported from 9.6, but if you take this commit
and apply it to the 9.5 sources, it seems to work.



But anyway, does anybody know of any plans to back-port this to 9.5
officially?



Regards

Tomas


Re: [GENERAL] postgres failed to start from services manager on windows 2008 r2

2016-10-05 Thread PHANIKUMAR G
Hi,

Thanks for your reply. We use the PostgreSQL zip distribution; at install
time of our product, this zip is extracted on the target host. We do not
create the data directory ourselves; it is created by default when the
service is first started.

We created/registered the service with the commands below:

pg_ctl.exe register -N "prostgresService" -D "fullpath_to_data_directory" -W

sc description prostgresService "prostgresService Application Server Database Service"
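
(As an aside, the account change that eventually fixed this for us,
described in the quoted message below, could presumably also be scripted
instead of using the services manager; an untested sketch with a
placeholder account name and password — note that sc requires the spaces
after "obj=" and "password=":)

sc config prostgresService obj= ".\pgserviceuser" password= "changeme"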


On Sat, Oct 1, 2016 at 9:36 PM, Thomas Kellerer  wrote:

> PHANIKUMAR G wrote on 01.10.2016 at 17:30:
>
>> We then tried to start the service with the following changes.
>>
>>
>>  a. we changed the "Log on as" type to "This account"
>>  b. changed the account name to ".\<user with admin privileges>"
>>  c. And entered the password for this account and saved.
>>
>> After that we are able to start the postgresql service successfully.
>>
>> Found that the pg_log directory is created under data directory with
>> postgres-Sep.log
>>
>>
>> Why is it failing with the local system account? The local system
>> account is part of the administrators group. If we specifically provide
>> credentials to the service as explained above, the service starts.
>> Please help me understand what is causing this.
>>
>
>
> Where is the data directory? And how did you create the data directory?
>
> Check the privileges on the data directory.
> This sounds as if the local system account does not have the privilege to
> write to the data directory.
>
> The local system account (or "Administrator") doesn't automatically have
> the privilege to read and write all files.
> Those accounts only have the ability to grant themselves these privileges.
>
> Also: if I'm not mistaken, Postgres 9.3 creates the service with the
> "Local Network Service", not with "Local System Account"
>
> So how did you create the initial service?


Re: [GENERAL] ZSON, PostgreSQL extension for compressing JSONB

2016-10-05 Thread Vitaly Burovoy
On 10/4/16, Dorian Hoxha  wrote:
> On Tue, Oct 4, 2016 at 5:34 PM, Aleksander Alekseev wrote:
>> Hello, Simon.
>>
>> Thanks for your interest in this project!
>>
>> > Will you be submitting this to core?
>>
>> I could align ZSON to PostgreSQL code style. I only need to run pgindent
>> and write a few comments. Do you think the community would be interested
>> in adding it to /contrib/? I mean, doesn't ZSON solve a bit too specific
>> a problem for this?
>
> ~everyone wants lower data storage and wants some kind of compression.
> Can this be made to automatically retrain when analyzing (makes sense?)?
> And create a new dictionary only if it changes compared to the last one.

ANALYZE does not walk through the whole table, and it does not modify its
records. Creating a new dictionary for compression purposes implies
updating the rows of the original table, replacing entries with references
to the dictionary.

-- 
Best regards,
Vitaly Burovoy

