Re: [BUGS] [HACKERS] Segmentation fault in libpq

2017-07-03 Thread Michal Novotny



On 07/03/2017 04:58 AM, Craig Ringer wrote:

On 3 July 2017 at 03:12, Andres Freund <and...@anarazel.de> wrote:

Hi,

On 2017-07-02 20:58:52 +0200, Michal Novotný wrote:

thank you all for your advice. I've been investigating this a little more,
and it finally turned out it's not a bug in libpq, although I was confused
after digging deep into several libpq functions. The bug was really on our
side: we were using the connection pointer after calling PQfinish(). The code
is pretty complex, so it took some time to investigate, but I would like
to apologize for "blaming" libpq instead of our code.

Usually using a tool like valgrind is quite helpful to find issues like
that, because it'll show you the call-stack accessing the memory and
*also* the call-stack that led to the memory being freed.

Yep, huge help.

BTW, on Windows, the free tool DrMemory (now 64-bit too, yay) or
commercial Purify work great.


Well, good to know about the Windows tools; however, we use Linux, so
that's not a big deal for us. Unfortunately, it's easy to miss something in
valgrind once you have a multi-threaded library linked against libpq and
that library is used in conjunction with other libraries sharing some of
the data among them.


Thanks once again,
Michal

--
Michal Novotny
System Development Lead
michal.novo...@greycortex.com

GREYCORTEX s.r.o.
Purkynova 127, 61200 Brno
Czech Republic
www.greycortex.com



--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers


Re: [BUGS] [HACKERS] Segmentation fault in libpq

2017-07-03 Thread Michal Novotny



On 07/02/2017 09:12 PM, Andres Freund wrote:

Hi,

On 2017-07-02 20:58:52 +0200, Michal Novotný wrote:

thank you all for your advice. I've been investigating this a little more,
and it finally turned out it's not a bug in libpq, although I was confused
after digging deep into several libpq functions. The bug was really on our
side: we were using the connection pointer after calling PQfinish(). The code
is pretty complex, so it took some time to investigate, but I would like
to apologize for "blaming" libpq instead of our code.

Usually using a tool like valgrind is quite helpful to find issues like
that, because it'll show you the call-stack accessing the memory and
*also* the call-stack that led to the memory being freed.

- Andres


Well, I tried valgrind but I was unable to locate the issue with it, so I
had to investigate our code a little further, and I finally found the
problem.


Thanks again,
Michal







Re: [HACKERS] Segmentation fault in libpq

2017-06-29 Thread Michal Novotny

Hi,

comments inline ...


On 06/29/2017 03:08 PM, Merlin Moncure wrote:

On Thu, Jun 29, 2017 at 4:01 AM, Michal Novotny
<michal.novo...@greycortex.com> wrote:

Hi all,

we've developed an application using libpq to access a table in the PgSQL
database, but we're sometimes experiencing a segmentation fault in the
resetPQExpBuffer() function of libpq, called from PQexecParams() with a
prepared query.

PostgreSQL version is 9.6.3 and the backtrace is:

Core was generated by `/usr/ti/bin/status-monitor2 -m
/usr/lib64/status-monitor2/modules'.
Program terminated with signal 11, Segmentation fault.
#0  resetPQExpBuffer (str=str@entry=0x9f4a28) at pqexpbuffer.c:152
152 str->data[0] = '\0';

Thread 1 (Thread 0x7fdf68de3840 (LWP 3525)):
#0  resetPQExpBuffer (str=str@entry=0x9f4a28) at pqexpbuffer.c:152
No locals.
#1  0x7fdf66e0333d in PQsendQueryStart (conn=conn@entry=0x9f46d0) at
fe-exec.c:1371
No locals.
#2  0x7fdf66e044b9 in PQsendQueryParams (conn=conn@entry=0x9f46d0,
command=command@entry=0x409a98 "SELECT min, hour, day, month, dow, sensor,
module, params, priority, rt_due FROM sm.cron WHERE sensor = $1 ORDER BY
priority DESC", nParams=nParams@entry=1, paramTypes=paramTypes@entry=0x0,
paramValues=paramValues@entry=0xa2b7b0, paramLengths=paramLengths@entry=0x0,
paramFormats=paramFormats@entry=0x0, resultFormat=resultFormat@entry=0) at
fe-exec.c:1192
No locals.
#3  0x7fdf66e0552b in PQexecParams (conn=0x9f46d0, command=0x409a98
"SELECT min, hour, day, month, dow, sensor, module, params, priority, rt_due
FROM sm.cron WHERE sensor = $1 ORDER BY priority DESC", nParams=1,
paramTypes=0x0, paramValues=0xa2b7b0, paramLengths=0x0, paramFormats=0x0,
resultFormat=0) at fe-exec.c:1871
No locals.

Unfortunately we didn't have more information from the crash, at least for
now.

Is this a known issue and can you help me with this one?

Is your application written in C?  We would need to completely rule
out your code (say, by double-freeing a result or something else nasty)
before assuming the problem was within libpq itself, particularly in this
area of the code.  How reproducible is the problem?

merlin


The application is written in plain C. The issue is intermittent:
sometimes it happens and sometimes it doesn't. When it happens it crashes
the application, but as it runs as a systemd unit with the
Restart=on-failure flag, it is automatically restarted.


What's being done is:
1) Ensure the connection already exists, and create a new one if it doesn't
exist yet

2) Run PQexecParams() with the specified $params array that has $params_cnt
elements:

res = PQexecParams(conn, prepared_query, params_cnt, NULL, (const char 
**)params, NULL, NULL, 0);


3) Check the result, and report an error and exit if "PQresultStatus(res) != 
PGRES_TUPLES_OK"

4) Do some processing with the result
5) Clear the result using PQclear()

It usually works fine, but sometimes it crashes and I don't know how
to investigate it further.
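For reference, the flow described in steps 1) to 5) can be sketched roughly as follows. This is only a hedged sketch: the connection string and the query text are placeholders (the real code is more complex), and error handling is reduced to the minimum:

```c
#include <stdio.h>
#include <libpq-fe.h>

static PGconn *conn = NULL;

static int run_cron_query(const char *sensor)
{
    const char *params[1] = { sensor };
    PGresult *res;

    /* 1) ensure a connection exists; reconnect if it was lost */
    if (conn == NULL || PQstatus(conn) != CONNECTION_OK) {
        if (conn)
            PQfinish(conn);   /* after this, conn must never be reused */
        conn = PQconnectdb("dbname=sm");   /* placeholder conninfo */
        if (PQstatus(conn) != CONNECTION_OK)
            return -1;
    }

    /* 2) run the prepared query with one text parameter */
    res = PQexecParams(conn,
                       "SELECT ... FROM sm.cron WHERE sensor = $1",
                       1, NULL, (const char **) params, NULL, NULL, 0);

    /* 3) check the result */
    if (PQresultStatus(res) != PGRES_TUPLES_OK) {
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        PQclear(res);
        return -1;
    }

    /* 4) process the rows ... then 5) clear the result */
    PQclear(res);
    return 0;
}
```

The key point for the crash discussed in this thread is the PQfinish() call: once it runs, the old pointer is dead, so the reconnect branch must assign a fresh connection before any further use.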


Could you please help me based on the information provided above?

Thanks,
Michal






[HACKERS] Segmentation fault in libpq

2017-06-29 Thread Michal Novotny

Hi all,

we've developed an application using libpq to access a table in the 
PgSQL database, but we're sometimes experiencing a segmentation fault in 
the resetPQExpBuffer() function of libpq, called from PQexecParams() with 
a prepared query.


PostgreSQL version is 9.6.3 and the backtrace is:

Core was generated by `/usr/ti/bin/status-monitor2 -m 
/usr/lib64/status-monitor2/modules'.
Program terminated with signal 11, Segmentation fault.
#0  resetPQExpBuffer (str=str@entry=0x9f4a28) at pqexpbuffer.c:152
152 str->data[0] = '\0';

Thread 1 (Thread 0x7fdf68de3840 (LWP 3525)):
#0  resetPQExpBuffer (str=str@entry=0x9f4a28) at pqexpbuffer.c:152
No locals.
#1  0x7fdf66e0333d in PQsendQueryStart (conn=conn@entry=0x9f46d0) at
fe-exec.c:1371
No locals.
#2  0x7fdf66e044b9 in PQsendQueryParams (conn=conn@entry=0x9f46d0,
command=command@entry=0x409a98 "SELECT min, hour, day, month, dow, sensor,
module, params, priority, rt_due FROM sm.cron WHERE sensor = $1 ORDER BY
priority DESC", nParams=nParams@entry=1, paramTypes=paramTypes@entry=0x0,
paramValues=paramValues@entry=0xa2b7b0, paramLengths=paramLengths@entry=0x0,
paramFormats=paramFormats@entry=0x0, resultFormat=resultFormat@entry=0) at
fe-exec.c:1192
No locals.
#3  0x7fdf66e0552b in PQexecParams (conn=0x9f46d0, command=0x409a98
"SELECT min, hour, day, month, dow, sensor, module, params, priority, rt_due
FROM sm.cron WHERE sensor = $1 ORDER BY priority DESC", nParams=1,
paramTypes=0x0, paramValues=0xa2b7b0, paramLengths=0x0, paramFormats=0x0,
resultFormat=0) at fe-exec.c:1871
No locals.

Unfortunately we didn't have more information from the crash, at least 
for now.


Is this a known issue and can you help me with this one?

Thanks,
Michal




Re: [HACKERS] Question about DROP TABLE

2016-01-12 Thread Michal Novotny
Hi Pavel,
thanks for the information. I've been doing more investigation of this
issue, and there is an autovacuum running on the table; it starts
automatically even though "autovacuum = off" is set in the postgresql.conf
configuration file.

The test of rm'ing a 5T file was fast, nowhere near 24 hours, so I guess
the autovacuum is the issue. Is there any way to disable it? When I killed
the process using 'kill -9' yesterday, it started again.

Is there any way to cancel this process, prevent PgSQL from running
autovacuum again, and do the drop instead?

Thanks,
Michal

On 01/12/2016 12:01 PM, Pavel Stehule wrote:
> Hi
> 
> 2016-01-12 11:57 GMT+01:00 Michal Novotny <michal.novo...@trustport.com>:
> 
> Dear PostgreSQL Hackers,
> I've discovered an issue with dropping a large table (~5T). I was
> thinking drop table is fast operation however I found out my assumption
> was wrong.
> 
> Is there any way how to tune it to drop a large table in the matter of
> seconds or minutes? Any configuration variable in the postgresql.conf or
> any tune up options available?
> 
> 
> drop table should be fast.
> 
> There can be two reasons why not:
> 
> 1. locks - are you sure, so this statement didn't wait on some lock?
> 
> 2. filesystem issue  - can you check the speed of rm 5TB file on your IO?
> 
> Regards
> 
> Pavel
> 
> 
> 
>  
> 
> 
> PostgreSQL version used is PgSQL 9.4.
> 
> Thanks a lot!
> Michal
> 
> 
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org
> <mailto:pgsql-hackers@postgresql.org>)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
> 
> 




Re: [HACKERS] Question about DROP TABLE

2016-01-12 Thread Michal Novotny
Hi Andres,

thanks a lot. I managed to run DROP TABLE and then cancel the autovacuum
process using pg_cancel_backend(autovacuum_pid); it went through and
dropped the 5T table.
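For the record, the sequence that worked can be sketched as follows. The table name and the pid are placeholders, not values from the actual system:

```sql
-- Session 1: this blocks while the anti-wraparound autovacuum holds its lock
DROP TABLE big_table;

-- Session 2: identify the autovacuum worker and cancel it
SELECT pid, query
FROM pg_stat_activity
WHERE query LIKE 'autovacuum:%';

SELECT pg_cancel_backend(12345);  -- use the pid returned by the query above
```

The ordering matters: the DROP TABLE must already be queued for the lock, otherwise the anti-wraparound vacuum simply restarts before the drop can begin.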

Thanks a lot!
Michal

On 01/12/2016 12:37 PM, Andres Freund wrote:
> Hi,
> 
> On 2016-01-12 12:17:01 +0100, Michal Novotny wrote:
>> thanks a lot for your reply. Unfortunately I've found out it didn't
>> really start the DROP TABLE yet; it's blocked on an autovacuum running on
>> the table, and even if I kill the process it starts again and again.
> 
> Start the DROP TABLE and *then* cancel the autovacuum session. That
> should work.
> 
>> Is there any way to do the DROP TABLE and bypass/disable autovacuum
>> entirely? Please note that "autovacuum = off" is set in the config file
>> (postgresql.conf).
> 
> That actually is likely to have caused the problem. Every
> autovacuum_freeze_max_age transactions, tables need to be vacuumed -
> otherwise the data can't be interpreted correctly anymore at some point.
> That's called an 'anti-wraparound vacuum'. It's started even if you
> disabled autovacuum, to prevent database corruption.
> 
> If you disable autovacuum, you really should start vacuums in some other
> way.
> 
> Greetings,
> 
> Andres Freund
> 




Re: [HACKERS] Question about DROP TABLE

2016-01-12 Thread Michal Novotny
Hi Andres,
thanks a lot for your reply. Unfortunately I've found out it didn't
really start the DROP TABLE yet; it's blocked on an autovacuum running on
the table, and even if I kill the process it starts again and again.

Is there any way to do the DROP TABLE and bypass/disable autovacuum
entirely? Please note that "autovacuum = off" is set in the config file
(postgresql.conf).

Thanks a lot,
Michal

On 01/12/2016 12:05 PM, Andres Freund wrote:
> Hi Michal,
> 
> This isn't really a question for -hackers, the list for postgres
> development. -general or -performance would have been more appropriate.
> 
> On 2016-01-12 11:57:05 +0100, Michal Novotny wrote:
>> I've discovered an issue with dropping a large table (~5T). I was
>> thinking DROP TABLE is a fast operation; however, I found out my assumption
>> was wrong.
> 
> What exactly did you do, and how long did it take? Is there any chance
> you were actually waiting for the lock on that large table, instead of
> waiting for the actual execution?
> 
>> Is there any way to tune it so that a large table can be dropped in a matter
>> of seconds or minutes? Is there any configuration variable in postgresql.conf
>> or any other tuning option available?
> 
> The time for dropping a table primarily is spent on three things:
> 1) acquiring the exclusive lock. How long this takes entirely depends on
>the concurrent activity. If there's a longrunning session using that
>table it'll take till that session is finished.
> 2) The cached portion of that table needs to be evicted from cache. How
>long that takes depends on the size of shared_buffers - but usually
>this is a relatively short amount of time, and only matters if you
>drop many, many relations.
> 3) The time the filesystem takes to actually remove the, in your case
>5000 1GB, files. This will take a while, but shouldn't be minutes.
> 
> 
> Greetings,
> 
> Andres Freund
> 




Re: [HACKERS] Question about DROP TABLE

2016-01-12 Thread Michal Novotny
Hi Andres,

On 01/12/2016 12:37 PM, Andres Freund wrote:
> Hi,
> 
> On 2016-01-12 12:17:01 +0100, Michal Novotny wrote:
>> thanks a lot for your reply. Unfortunately I've found out it didn't
>> really start the DROP TABLE yet; it's blocked on an autovacuum running on
>> the table, and even if I kill the process it starts again and again.
> 
> Start the DROP TABLE and *then* cancel the autovacuum session. That
> should work.


By cancelling the autovacuum session, do you mean running
pg_cancel_backend(pid int) *after* issuing the DROP TABLE?


> 
>> Is there any way to do the DROP TABLE and bypass/disable autovacuum
>> entirely? Please note that "autovacuum = off" is set in the config file
>> (postgresql.conf).


So should I set autovacuum to on and restart PgSQL before doing the
DROP TABLE (and the pg_cancel_backend() as mentioned above)?


> 
> That actually is likely to have caused the problem. Every
> autovacuum_freeze_max_age transactions, tables need to be vacuumed -
> otherwise the data can't be interpreted correctly anymore at some point.
> That's called an 'anti-wraparound vacuum'. It's started even if you
> disabled autovacuum, to prevent database corruption.

OK, any recommendation on how to set autovacuum_freeze_max_age?
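Whatever value autovacuum_freeze_max_age ends up at, the remaining headroom before a forced anti-wraparound vacuum can be watched with a query along these lines (a sketch only; the LIMIT is arbitrary):

```sql
-- Tables closest to triggering an anti-wraparound vacuum
SELECT relname, age(relfrozenxid) AS xid_age
FROM pg_class
WHERE relkind = 'r'
ORDER BY age(relfrozenxid) DESC
LIMIT 10;
```

When xid_age approaches autovacuum_freeze_max_age, a manual VACUUM of that table avoids the forced background one.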

Thanks,
Michal


> 
> If you disable autovacuum, you really should start vacuums in some other
> way.
> 
> Greetings,
> 
> Andres Freund
> 




Re: [HACKERS] Question about DROP TABLE

2016-01-12 Thread Michal Novotny

On 01/12/2016 12:20 PM, Andres Freund wrote:
> On 2016-01-12 12:17:09 +0100, Pavel Stehule wrote:
>> 2016-01-12 12:14 GMT+01:00 Michal Novotny <michal.novo...@trustport.com>:
>>
>>> Hi Pavel,
>>> thanks for the information. I've been doing more investigation of this
>>> issue and there's autovacuum running on the table however it's
>>> automatically starting even if there is "autovacuum = off" in the
>>> postgresql.conf configuration file.
>>>
>>
>> A real autovacuum is automatically cancelled. It looks like a VACUUM
>> started by cron, maybe?
> 
> Unless it's an anti-wraparound autovacuum...
> 
> Andres
> 

The vacuum is not started by cron. How should I understand the term
"anti-wraparound autovacuum"?

Thanks,
Michal




[HACKERS] Question about DROP TABLE

2016-01-12 Thread Michal Novotny
Dear PostgreSQL Hackers,
I've discovered an issue with dropping a large table (~5T). I was
thinking DROP TABLE is a fast operation; however, I found out my assumption
was wrong.

Is there any way to tune it so that a large table can be dropped in a matter
of seconds or minutes? Is there any configuration variable in postgresql.conf
or any other tuning option available?

PostgreSQL version used is PgSQL 9.4.

Thanks a lot!
Michal




Re: [HACKERS] Database schema diff

2015-10-14 Thread Michal Novotny
Hi,

thanks a lot for your reply. Unfortunately it's not working at all; I
ran it as:

# java -jar apgdiff-2.4.jar <old_dump.sql> <new_dump.sql>

But it got stuck in a futex wait, so unfortunately it didn't work for me.

Thanks for the reply anyway,
Michal


On 10/14/2015 01:53 PM, Иван Фролков wrote:
>> I would like to ask you whether there is any tool able to compare
>> database schemas, ideally regardless of the column order, or to dump a
>> database table with all columns in ascending order.
> 
> Take a look at a tool called apgdiff: http://apgdiff.com/
> Its development seems suspended, but it is still a useful tool, except for 
> cases involving new features etc.
> Anyway, you can find a bunch of forks on GitHub - I added support for 
> INSTEAD OF triggers, other people added other options, and so on.
> 
> 




Re: [HACKERS] Database schema diff

2015-10-14 Thread Michal Novotny
I have to admit I had the same idea a few years ago; however, I never
got around to implementing it. Nevertheless, I would have to mount two
trees for the diff comparison, isn't that correct?

I mean mounting <old-dump> as /mnt/dumps/old and <new-dump> as
/mnt/dumps/new, then running a diff tool from /mnt/dumps on old and new to
get the difference. This, however, requires mounting directly onto a file
system (the main advantage of using FUSE), which is what I would like to
avoid.

Nevertheless, if I overlook my unwillingness to mount it and we say that's
fine for me, does it accept a dump file to be mounted, or does it work
directly on the live PgSQL database system?

Thanks,
Michal

On 10/14/2015 10:59 AM, Torello Querci wrote:
> A few years ago I developed a tool called fsgateway
> (https://github.com/mk8/fsgateway) that shows metadata (tables, indexes,
> sequences, views) as normal files using FUSE.
> In this way you can get differences between running DB instances
> using diff, meld, or whatever you prefer.
> 
> Unfortunately, at the moment not everything you need is supported yet.
> 
> Best regards
> 
> P.S. I think that this is the wrong list for questions like this one.
> 
> On Wed, Oct 14, 2015 at 10:26 AM, Shulgin, Oleksandr
> <oleksandr.shul...@zalando.de> wrote:
> 
> On Tue, Oct 13, 2015 at 5:48 PM, Michal Novotny
> <michal.novo...@trustport.com>
> wrote:
> 
> Hi guys,
> 
> I would like to ask you whether there is any tool able to compare
> database schemas, ideally regardless of the column order, or to dump a
> database table with all columns in ascending order.
> 
> For example, if I have a table (called table) in schema A and in schema B
> (the time difference between them is one week) and I would like to verify
> that the column names/types match but the order is different, i.e.:
> 
> Schema A (2015-10-01) |  Schema B (2015-10-07)
>   |
> id int|  id int
> name varchar(64)  |  name varchar(64)
> text text |  description text
> description text  |  text text
> 
> Is there any tool to compare them and (even in the case above) report that
> both tables match? Something like pgdiff?
> 
> This should work for all schemas, tables, functions, triggers and all
> the other schema components.
> 
> 
> I've used pg_dump --split for this purpose a number of times (it
> requires patching pg_dump[1]).
> 
> The idea is to produce the two database's schema dumps split into
> individual files per database object, then run diff -r against the
> schema folders.  This worked really well for my purposes.
> 
> This will, however, report differences in column order, but I'm not
> really sure why you would like to ignore that.
> 
> --
> Alex
> 
> [1] 
> http://www.postgresql.org/message-id/AANLkTikLHA2x6U=q-t0j0ys78txhfmdtyxjfsrsrc...@mail.gmail.com
> 
> 




Re: [HACKERS] Database schema diff

2015-10-14 Thread Michal Novotny
Hi Christopher,

thanks a lot for your suggestion; however, I need to run the comparison
against dump files, so it won't work for me.

Thanks anyway,
Michal


On 10/13/2015 07:23 PM, Christopher Browne wrote:
> On 13 October 2015 at 11:48, Michal Novotny
> <michal.novo...@trustport.com> wrote:
> 
> Hi guys,
> 
> I would like to ask you whether there is any tool able to compare
> database schemas, ideally regardless of the column order, or to dump a
> database table with all columns in ascending order.
> 
> For example, if I have a table (called table) in schema A and in schema B
> (the time difference between them is one week) and I would like to verify
> that the column names/types match but the order is different, i.e.:
> 
> Schema A (2015-10-01) |  Schema B (2015-10-07)
>   |
> id int|  id int
> name varchar(64)  |  name varchar(64)
> text text |  description text
> description text  |  text text
> 
> Is there any tool to compare them and (even in the case above) report that
> both tables match? Something like pgdiff?
> 
> This should work for all schemas, tables, functions, triggers and all
> the other schema components.
> 
> Also, is there any tool that accepts two PgSQL dump files (sources for
> pg_restore) and compares their schemas in the way described above?
> 
> Thanks a lot!
> Michal
> 
> 
> I built a tool I call "pgcmp", which is out on GitHub
> <https://github.com/cbbrowne/pgcmp>
> 
> The one thing that you mention that it *doesn't* consider is the
> ordering of columns.
> 
> It would not be difficult at all to add that comparison; as simple as adding
> an extra capture of table columns and column #'s.  I'd be happy to consider
> adding that in.
> 
> Note that pgcmp expects the schemas to be captured as databases; it
> pulls data from information_schema and such. In order to run it against
> a pair of dumps, you'd need to load those dumps into databases first.
> -- 
> When confronted by a difficult problem, solve it by reducing it to the
> question, "How would the Lone Ranger handle this?"




[HACKERS] Database schema diff

2015-10-13 Thread Michal Novotny
Hi guys,

I would like to ask you whether there is any tool able to compare
database schemas, ideally regardless of the column order, or to dump a
database table with all columns in ascending order.

For example, if I have a table (called table) in schema A and in schema B
(the time difference between them is one week) and I would like to verify
that the column names/types match but the order is different, i.e.:

Schema A (2015-10-01) |  Schema B (2015-10-07)
  |
id int|  id int
name varchar(64)  |  name varchar(64)
text text |  description text
description text  |  text text

Is there any tool to compare them and (even in the case above) report that
both tables match? Something like pgdiff?

This should work for all schemas, tables, functions, triggers and all
the other schema components.

Also, is there any tool that accepts two PgSQL dump files (sources for
pg_restore) and compares their schemas in the way described above?

Thanks a lot!
Michal

