Re: [HACKERS] pg_autovacuum w/ dbt2

2005-01-12 Thread Mark Wong
On Tue, Dec 21, 2004 at 05:56:47PM -0500, Tom Lane wrote:
 If you want to track it yourself, please change those elog(ERROR)s to
 elog(PANIC) so that they'll generate core dumps, then build with
 --enable-debug if you didn't already (--enable-cassert would be good too)
 and get a debugger stack trace from the core dump.

Ok, well I got a core dump with 8.0rc4, but I'm not sure if it's
exactly the same problem.  I have the postgres binary and the core
here:
http://developer.osdl.org/markw/pgsql/core/2files.tar.bz2

But it's for ia64, if you got one.  Otherwise, this is what gdb is
telling me with a bt:

(gdb) bt
#0  FunctionCall2 (flinfo=0x60187850, arg1=16, arg2=1043)
at fmgr.c:1141
#1  0x4007a320 in _bt_checkkeys (scan=0x600c2d80, 
tuple=0x2101f660, dir=ForwardScanDirection, 
continuescan=0x6fff8ae0 "\001") at nbtutils.c:542
#2  0x40078eb0 in _bt_endpoint (scan=0x60187690, 
dir=ForwardScanDirection) at nbtsearch.c:1309
#3  0x400771e0 in _bt_first (scan=0x60187690, 
dir=ForwardScanDirection) at nbtsearch.c:482
#4  0x40074350 in btgettuple (fcinfo=0x1) at nbtree.c:265
#5  0x403bd430 in FunctionCall2 (flinfo=0x60187700, 
arg1=6917529027642685072, arg2=1) at fmgr.c:1141
#6  0x4006b3a0 in index_getnext (scan=0x60187690, 
direction=ForwardScanDirection) at indexam.c:429
#7  0x4006a1e0 in systable_getnext (sysscan=0x60187668)
at genam.c:253
#8  0x4039c970 in SearchCatCache (cache=0x20001f1e0140, v1=0, 
v2=6917529027641871376, v3=4294966252, v4=6917546619827097184)
at catcache.c:1217
#9  0x403a9ee0 in SearchSysCache (cacheId=33, key1=1043, key2=0, 
key3=0, key4=0) at syscache.c:524
#10 0x40049110 in TupleDescInitEntry (desc=0x601872c8, 
attributeNumber=4, attributeName=0x60187614 "\023\004", 
oidtypeid=1043, typmod=28, attdim=0) at tupdesc.c:444
#11 0x401b5fc0 in ExecTypeFromTLInternal (
targetList=0x60135d40, hasoid=-64 'À', skipjunk=1 '\001')
at execTuples.c:570
#12 0x401a4a20 in ExecInitJunkFilter (targetList=0x60135b38, 
hasoid=-64 'À', slot=0x601258a0) at execJunk.c:76
#13 0x401a6890 in InitPlan (queryDesc=0x60177ed0, 
explainOnly=0 '\0') at execMain.c:456
#14 0x401a5800 in ExecutorStart (queryDesc=0x60177ed0, 
explainOnly=0 '\0') at execMain.c:160
#15 0x401d6ab0 in _SPI_pquery (queryDesc=0x60177ed0, tcount=0)
at spi.c:1521
#16 0x401d6390 in _SPI_execute_plan (plan=0x6fff9380, 
Values=0x0, Nulls=0x0, snapshot=0x0, crosscheck_snapshot=0x0, 
read_only=0 '\0', tcount=0) at spi.c:1452


---(end of broadcast)---
TIP 3: if posting/reading through Usenet, please send an appropriate
  subscribe-nomail command to [EMAIL PROTECTED] so that your
  message can get through to the mailing list cleanly


Re: [HACKERS] pg_autovacuum w/ dbt2

2005-01-12 Thread Tom Lane
Mark Wong [EMAIL PROTECTED] writes:
 Ok, well I got a core dump with 8.0rc4, but I'm not sure if it's
 exactly the same problem.  I have the postgres binary and the core
 here:
   http://developer.osdl.org/markw/pgsql/core/2files.tar.bz2
 But it's for ia64, if you got one.

Poking around with gdb, it seems that the scankey structure being used
by SearchCatCache got clobbered; which is a bit surprising because
that's just a local variable in that function, and hence isn't really
very exposed.  The contents of cache->cc_skey are okay, but cur_skey[0]
and cur_skey[1] don't match, which implies the clobber happened
somewhere between lines 1110 and 1217 of catcache.c.

(gdb) f 8
#8  0x4039c970 in SearchCatCache (cache=0x20001f1e0140, v1=0,
v2=6917529027641871376, v3=4294966252, v4=6917546619827097184)
at catcache.c:1217
1217	in catcache.c
(gdb) p cache->cc_skey
$7 = {{sk_flags = 0, sk_attno = -2, sk_strategy = 3, sk_subtype = 0,
sk_func = {fn_addr = 0x2003a9c8, fn_oid = 184, fn_nargs = 2,
  fn_strict = 1 '\001', fn_retset = 0 '\0', fn_extra = 0x0,
  fn_mcxt = 0x6009e550, fn_expr = 0x0}, sk_argument = 0}, {
sk_flags = 0, sk_attno = 0, sk_strategy = 0, sk_subtype = 0, sk_func = {
  fn_addr = 0, fn_oid = 0, fn_nargs = 0, fn_strict = 0 '\0',
  fn_retset = 0 '\0', fn_extra = 0x0, fn_mcxt = 0x0, fn_expr = 0x0},
sk_argument = 0}, {sk_flags = 0, sk_attno = 0, sk_strategy = 0,
sk_subtype = 0, sk_func = {fn_addr = 0, fn_oid = 0, fn_nargs = 0,
  fn_strict = 0 '\0', fn_retset = 0 '\0', fn_extra = 0x0, fn_mcxt = 0x0,
  fn_expr = 0x0}, sk_argument = 0}, {sk_flags = 0, sk_attno = 0,
sk_strategy = 0, sk_subtype = 0, sk_func = {fn_addr = 0, fn_oid = 0,
  fn_nargs = 0, fn_strict = 0 '\0', fn_retset = 0 '\0', fn_extra = 0x0,
  fn_mcxt = 0x0, fn_expr = 0x0}, sk_argument = 0}}
(gdb) p cur_skey
$8 = {{sk_flags = 0, sk_attno = 1, sk_strategy = 24932, sk_subtype = 24948,
sk_func = {fn_addr = 0, fn_oid = 0, fn_nargs = 0, fn_strict = 0 '\0',
  fn_retset = 0 '\0', fn_extra = 0x0, fn_mcxt = 0x0, fn_expr = 0x0},
sk_argument = 1043}, {sk_flags = 0, sk_attno = 1043, sk_strategy = 0,
sk_subtype = 4294967295, sk_func = {fn_addr = 0, fn_oid = 0,
  fn_nargs = 0, fn_strict = 0 '\0', fn_retset = 0 '\0', fn_extra = 0x0,
  fn_mcxt = 0x0, fn_expr = 0x0}, sk_argument = 0}, {sk_flags = 0,
sk_attno = 0, sk_strategy = 0, sk_subtype = 0, sk_func = {fn_addr = 0,
  fn_oid = 0, fn_nargs = 0, fn_strict = 0 '\0', fn_retset = 0 '\0',
  fn_extra = 0x0, fn_mcxt = 0x0, fn_expr = 0x0}, sk_argument = 0}, {
sk_flags = 0, sk_attno = 0, sk_strategy = 0, sk_subtype = 0, sk_func = {
  fn_addr = 0, fn_oid = 0, fn_nargs = 0, fn_strict = 0 '\0',
  fn_retset = 0 '\0', fn_extra = 0x0, fn_mcxt = 0x0, fn_expr = 0x0},
sk_argument = 0}}

The core dump happens because we eventually try to jump through the
zeroed-out fn_addr function pointer.

Not sure what to make of this.  That's extremely heavily used,
well-debugged code; it's hard to believe that there are any intermittent
bugs in it.

I notice that the backend seems to have been using some nonstandard C
code:

Error while reading shared library symbols:
/home/markw/dbt2/storedproc/pgsql/c/../../../storedproc/pgsql/c/funcs.so: No 
such file or directory.

What is that, and how much confidence have you got in it?

regards, tom lane



Re: [HACKERS] pg_autovacuum w/ dbt2

2005-01-12 Thread Mark Wong
On Wed, Jan 12, 2005 at 09:17:33PM -0500, Tom Lane wrote:
 I notice that the backend seems to have been using some nonstandard C
 code:
 
 Error while reading shared library symbols:
 /home/markw/dbt2/storedproc/pgsql/c/../../../storedproc/pgsql/c/funcs.so: No 
 such file or directory.
 
 What is that, and how much confidence have you got in it?

That's my C stored function library.  I'll attach it if anyone wants
to take a perusal.  Well, it was my first attempt with C stored
functions and SPI calls, so it wouldn't surprise me if it was flawed.
Would supplying the .so help the debugging?

Mark
/*
 * This file is released under the terms of the Artistic License.  Please see
 * the file LICENSE, included in this package, for details.
 *
 * Copyright (C) 2003 Mark Wong  Open Source Development Lab, Inc.
 *
 * Based on TPC-C Standard Specification Revision 5.0 Clause 2.8.2.
 */

#include <sys/types.h>
#include <unistd.h>
#include "postgres.h"
#include "fmgr.h"
#include "executor/spi.h"

/*
#define DEBUG
*/

#define DELIVERY_1 \
	"SELECT no_o_id\n" \
	"FROM new_order\n" \
	"WHERE no_w_id = %d\n" \
	"  AND no_d_id = %d"

#define DELIVERY_2 \
	"DELETE FROM new_order\n" \
	"WHERE no_o_id = %s\n" \
	"  AND no_w_id = %d\n" \
	"  AND no_d_id = %d"

#define DELIVERY_3 \
	"SELECT o_c_id\n" \
	"FROM orders\n" \
	"WHERE o_id = %s\n" \
	"  AND o_w_id = %d\n" \
	"  AND o_d_id = %d"

#define DELIVERY_4 \
	"UPDATE orders\n" \
	"SET o_carrier_id = %d\n" \
	"WHERE o_id = %s\n" \
	"  AND o_w_id = %d\n" \
	"  AND o_d_id = %d"

#define DELIVERY_5 \
	"UPDATE order_line\n" \
	"SET ol_delivery_d = current_timestamp\n" \
	"WHERE ol_o_id = %s\n" \
	"  AND ol_w_id = %d\n" \
	"  AND ol_d_id = %d"

#define DELIVERY_6 \
	"SELECT SUM(ol_amount * ol_quantity)\n" \
	"FROM order_line\n" \
	"WHERE ol_o_id = %s\n" \
	"  AND ol_w_id = %d\n" \
	"  AND ol_d_id = %d"

#define DELIVERY_7 \
	"UPDATE customer\n" \
	"SET c_delivery_cnt = c_delivery_cnt + 1,\n" \
	"    c_balance = c_balance + %s\n" \
	"WHERE c_id = %s\n" \
	"  AND c_w_id = %d\n" \
	"  AND c_d_id = %d"

#define NEW_ORDER_1 \
	"SELECT w_tax\n" \
	"FROM warehouse\n" \
	"WHERE w_id = %d"

#define NEW_ORDER_2 \
	"SELECT d_tax, d_next_o_id\n" \
	"FROM district \n" \
	"WHERE d_w_id = %d\n" \
	"  AND d_id = %d\n" \
	"FOR UPDATE"

#define NEW_ORDER_3 \
	"UPDATE district\n" \
	"SET d_next_o_id = d_next_o_id + 1\n" \
	"WHERE d_w_id = %d\n" \
	"  AND d_id = %d"

#define NEW_ORDER_4 \
	"SELECT c_discount, c_last, c_credit\n" \
	"FROM customer\n" \
	"WHERE c_w_id = %d\n" \
	"  AND c_d_id = %d\n" \
	"  AND c_id = %d"

#define NEW_ORDER_5 \
	"INSERT INTO new_order (no_o_id, no_w_id, no_d_id)\n" \
	"VALUES (%s, %d, %d)"

#define NEW_ORDER_6 \
	"INSERT INTO orders (o_id, o_d_id, o_w_id, o_c_id, o_entry_d,\n" \
	"                    o_carrier_id, o_ol_cnt, o_all_local)\n" \
	"VALUES (%s, %d, %d, %d, current_timestamp, NULL, %d, %d)"

#define NEW_ORDER_7 \
	"SELECT i_price, i_name, i_data\n" \
	"FROM item\n" \
	"WHERE i_id = %d"

#define NEW_ORDER_8 \
	"SELECT s_quantity, %s, s_data\n" \
	"FROM stock\n" \
	"WHERE s_i_id = %d\n" \
	"  AND s_w_id = %d"

#define NEW_ORDER_9 \
	"UPDATE stock\n" \
	"SET s_quantity = s_quantity - %d\n" \
	"WHERE s_i_id = %d\n" \
	"  AND s_w_id = %d"

#define NEW_ORDER_10 \
	"INSERT INTO order_line (ol_o_id, ol_d_id, ol_w_id, ol_number,\n" \
	"                        ol_i_id, ol_supply_w_id, ol_delivery_d,\n" \
	"                        ol_quantity, ol_amount, ol_dist_info)\n" \
	"VALUES (%s, %d, %d, %d, %d, %d, NULL, %d, %f, '%s')"

#define ORDER_STATUS_1 \
	"SELECT c_id\n" \
	"FROM customer\n" \
	"WHERE c_w_id = %d\n" \
	"  AND c_d_id = %d\n" \
	"  AND c_last = '%s'\n" \
	"ORDER BY c_first ASC"

#define ORDER_STATUS_2 \
	"SELECT c_first, c_middle, c_last, c_balance\n" \
	"FROM customer\n" \
	"WHERE c_w_id = %d\n" \
	"  AND c_d_id = %d\n" \
	"  AND c_id = %d"

#define ORDER_STATUS_3 \
	"SELECT o_id, o_carrier_id, o_entry_d, o_ol_cnt\n" \
	"FROM orders\n" \
	"WHERE o_w_id = %d\n" \
	"  AND o_d_id = %d \n" \
	"  AND o_c_id = %d\n" \
	"ORDER BY o_id DESC"

#define ORDER_STATUS_4 \
	"SELECT ol_i_id, ol_supply_w_id, ol_quantity, ol_amount,\n" \
	"       ol_delivery_d\n" \
	"FROM order_line\n" \
	"WHERE ol_w_id = %d\n" \
	"  AND ol_d_id = %d\n" \
	"  AND ol_o_id = %s"

#define PAYMENT_1 \
	"SELECT w_name, w_street_1, w_street_2, w_city, w_state, w_zip\n" \
	"FROM warehouse\n" \
	"WHERE w_id = %d"

#define PAYMENT_2 \
	"UPDATE warehouse\n" \
	"SET w_ytd = w_ytd + %f\n" \
	"WHERE w_id = %d"

#define 

[HACKERS] pg_autovacuum w/ dbt2

2004-12-21 Thread Mark Wong
After all this time I finally got around to vacuuming the database
with dbt2 with pg_autovacuum. :)
http://www.osdl.org/projects/dbt2dev/results/dev4-010/215/

Doesn't look so good though, probably because I'm not using optimal
settings with pg_autovacuum.  So far I have only tried the default
settings (running without any arguments, except -D).

The only thing that's peculiar is a number of unexpected rollbacks
across all of the transactions.  I suspect it was something to do with
these messages coming from pg_autovacuum:

[2004-12-20 15:48:18 PST] ERROR:   Can not refresh statistics information from 
the database dbt2.
[2004-12-20 15:48:18 PST]  The error is [ERROR:  failed to re-find 
parent key in pk_district
]

This is with 8.0rc1.  I can get rc2 installed since it just came out.
So let me know what I can try and what not.

Mark



Re: [HACKERS] pg_autovacuum w/ dbt2

2004-12-21 Thread Tom Lane
Mark Wong [EMAIL PROTECTED] writes:
 [2004-12-20 15:48:18 PST]  The error is [ERROR:  failed to re-find 
 parent key in pk_district
 ]

Yikes.  Is this reproducible?

regards, tom lane



Re: [HACKERS] pg_autovacuum w/ dbt2

2004-12-21 Thread Matthew T. O'Connor
Mark Wong wrote:
After all this time I finally got around to vacuuming the database
with dbt2 with pg_autovacuum. :)
	http://www.osdl.org/projects/dbt2dev/results/dev4-010/215/
 

Thanks!
Doesn't look so good though, probably because I'm not using optimal
settings with pg_autovacuum.  So far I have only tried the default
settings (running without any arguments, except -D).
 

I don't know what you mean by "Not Good" since I don't have graphs from 
a similar test without pg_autovacuum handy.  Do you have a link to such 
a test?

As for better pg_autovacuum settings, it appears that the little 
performance dips are happening about once every 5 minutes, which if I 
remember correctly is the default sleep time.  You might try playing 
with the lazy vacuum settings to see if that smooths out the curve.  
Beyond that all you can do is play with the thresholds to see if there 
is a better sweet spot than the defaults (which, by the way, I have no 
confidence in; they were just conservative guesses).

The only thing that's peculiar is a number of unexpected rollbacks
across all of the transactions.  I suspect it was something to do with
these messages coming from pg_autovacuum:
[2004-12-20 15:48:18 PST] ERROR:   Can not refresh statistics information from the database dbt2.
[2004-12-20 15:48:18 PST]  The error is [ERROR:  failed to re-find parent key in pk_district
]
 

Not sure what this is all about, but if you turn up the debug level to 4 
or greater (pg_autovacuum -d4), pg_autovacuum will log the query that is 
causing the problems, that would be helpful output to have.

This is with 8.0rc1.  I can get rc2 installed since it just came out.
So let me know what I can try and what not.
 

I don't think anything has changed for pg_autovacuum between rc1 and rc2.

Thanks again for the good work!!!


Re: [HACKERS] pg_autovacuum w/ dbt2

2004-12-21 Thread Mark Wong
On Tue, Dec 21, 2004 at 02:23:41PM -0500, Tom Lane wrote:
 Mark Wong [EMAIL PROTECTED] writes:
  [2004-12-20 15:48:18 PST]  The error is [ERROR:  failed to re-find 
  parent key in pk_district
  ]
 
 Yikes.  Is this reproducible?
 
   regards, tom lane

Yes, and I think there is one for each of the rollbacks that are
occurring in the workload.  Except for the 1% that's supposed to happen
for the new-order transaction.

Mark



Re: [HACKERS] pg_autovacuum w/ dbt2

2004-12-21 Thread Mark Wong
The overall throughput is better for a run like this:
http://www.osdl.org/projects/dbt2dev/results/dev4-010/207/

A drop from 3865 to 2679 (31%) by just adding pg_autovacuum.  That's
what I meant by not good. :)

I'll start with the additional debug messages, with 8.0rc2, before
I start changing the other settings, if that sounds good.

Mark

On Tue, Dec 21, 2004 at 02:33:57PM -0500, Matthew T. O'Connor wrote:
 Mark Wong wrote:
 
 After all this time I finally got around to vacuuming the database
 with dbt2 with pg_autovacuum. :)
  http://www.osdl.org/projects/dbt2dev/results/dev4-010/215/
   
 
 Thanks!
 
 Doesn't look so good though, probably because I'm not using optimal
 settings with pg_autovacuum.  So far I have only tried the default
 settings (running without any arguments, except -D).
   
 
 I don't know what you mean by Not Good since I don't have graphs from 
 a similar test without pg_autovacuum handy.  Do you have a link to such 
 a test?

 As for better pg_autovacuum settings, It appears that the little 
 performance dips are happening about once every 5 minutes, which if I 
 remember correctly is the default sleep time.  You might try playing 
 with the lazy vacuum settings to see if that smooths out the curve.  
 Beyond that all you can do is play with the thresholds to see if there 
 is a better sweet spot than the defaults (which by the way I have no 
 confidence in, they were just conservative guesses)
 
 The only thing that's peculiar is a number of unexpected rollbacks
 across all of the transactions.  I suspect it was something to do with
 these messages coming from pg_autovacuum:
 
 [2004-12-20 15:48:18 PST] ERROR:   Can not refresh statistics information 
 from the database dbt2.
 [2004-12-20 15:48:18 PST]  The error is [ERROR:  failed to re-find 
 parent key in pk_district
 ]
   
 
 Not sure what this is all about, but if you turn up the debug level to 4 
 or greater (pg_autovacuum -d4), pg_autovacuum will log the query that is 
 causing the problems, that would be helpful output to have.
 
 This is with 8.0rc1.  I can get rc2 installed since it just came out.
 So let me know what I can try and what not.
   
 
 I don't think anything has changed for pg_autovacuum between rc1 and rc2.
 
 
 thanks again for the good work!!!





Re: [HACKERS] pg_autovacuum w/ dbt2

2004-12-21 Thread Tom Lane
Mark Wong [EMAIL PROTECTED] writes:
 On Tue, Dec 21, 2004 at 02:23:41PM -0500, Tom Lane wrote:
 Mark Wong [EMAIL PROTECTED] writes:
 [2004-12-20 15:48:18 PST]  The error is [ERROR:  failed to re-find 
 parent key in pk_district
 
 Yikes.  Is this reproducible?

 Yes, and I think there is one for each of the rollbacks that are
 occurring in the workload.  Except for the 1% that's supposed to happen
 for the new-order transaction.

Well, we need to find out what's causing that.  There are two possible
sources of that error (one elog in src/backend/access/nbtree/nbtinsert.c,
and one in src/backend/access/nbtree/nbtpage.c) and neither of them
should ever fire.

If you want to track it yourself, please change those elog(ERROR)s to
elog(PANIC) so that they'll generate core dumps, then build with
--enable-debug if you didn't already (--enable-cassert would be good too)
and get a debugger stack trace from the core dump.

Otherwise, can you extract a test case that causes this without needing
vast resources to run?

regards, tom lane
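For anyone wanting to follow Tom's recipe, the mechanical steps might look roughly like the following. The install and data paths are illustrative, and the elog(ERROR) to elog(PANIC) edits are done by hand in the two nbtree files he names:

```shell
# Rebuild PostgreSQL with debug symbols and assertions enabled
cd postgresql-8.0rc2
./configure --prefix=/usr/local/pgsql --enable-debug --enable-cassert
make && make install

# Allow the backend to write core files before restarting the postmaster
ulimit -c unlimited
pg_ctl -D /usr/local/pgsql/data restart

# After the elog(PANIC) fires, pull a stack trace from the core dump
gdb --batch -ex bt /usr/local/pgsql/bin/postgres /usr/local/pgsql/data/core
```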



Re: [HACKERS] pg_autovacuum w/ dbt2

2004-12-21 Thread Mark Wong
On Tue, Dec 21, 2004 at 05:56:47PM -0500, Tom Lane wrote:
 Mark Wong [EMAIL PROTECTED] writes:
  On Tue, Dec 21, 2004 at 02:23:41PM -0500, Tom Lane wrote:
  Mark Wong [EMAIL PROTECTED] writes:
  [2004-12-20 15:48:18 PST]  The error is [ERROR:  failed to 
  re-find parent key in pk_district
  
  Yikes.  Is this reproducible?
 
  Yes, and I think there is one for each of the rollbacks that are
  occurring in the workload.  Except for the 1% that's supposed to happen
  for the new-order transaction.
 
 Well, we need to find out what's causing that.  There are two possible
 sources of that error (one elog in src/backend/access/nbtree/nbtinsert.c,
 and one in src/backend/access/nbtree/nbtpage.c) and neither of them
 should ever fire.
 
 If you want to track it yourself, please change those elog(ERROR)s to
 elog(PANIC) so that they'll generate core dumps, then build with
 --enable-debug if you didn't already (--enable-cassert would be good too)
 and get a debugger stack trace from the core dump.
 
 Otherwise, can you extract a test case that causes this without needing
 vast resources to run?
 
   regards, tom lane

I was going to try Matthew's suggestion of turning up the debug on
pg_autovacuum, unless you don't think that'll help find the cause.  I'm
not sure if I can more easily reproduce the problem but I can try.

I'll go ahead and make the elog() changes you recommended and do a run
overnight either way.

Mark



Re: [HACKERS] pg_autovacuum w/ dbt2

2004-12-21 Thread Tom Lane
Mark Wong [EMAIL PROTECTED] writes:
 I was going to try Matthew's suggestion of turning up the debug on
 pg_autovacuum, unless you don't think that'll help find the cause.

It won't help --- this is a backend-internal bug of some kind.

regards, tom lane



Re: [HACKERS] pg_autovacuum w/ dbt2

2004-12-21 Thread Matthew T. O'Connor
Mark Wong wrote:
The overall throughput is better for a run like this:
http://www.osdl.org/projects/dbt2dev/results/dev4-010/207/
A drop from 3865 to 2679 (31%) by just adding pg_autovacuum.  That's
what I meant by not good. :)
 

I would agree that is not good :-)  It sounds like pg_autovacuum is 
being too aggressive for this type of load, that is, vacuuming more often 
than needed.  However, the lazy vacuum options were added to reduce 
the performance impact of running a vacuum while doing other things, so 
I would recommend both higher autovacuum thresholds and trying out some 
of the lazy vacuum settings. 
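As a concrete starting point, those suggestions might translate into an invocation along these lines. The flag names are from my reading of the 8.0-era contrib/pg_autovacuum README (so double-check against `pg_autovacuum -h` on your build), and the values are guesses to experiment with, not tuned recommendations:

```shell
# -d 4  : debug level 4+, so the query being run when the error occurs is logged
# -v/-V : base vacuum threshold and scaling factor (raise to vacuum less often)
# -s    : base sleep time in seconds between loops
pg_autovacuum -D -d 4 -v 2000 -V 4 -s 600 -L /var/log/pg_autovacuum.log
```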

I'll start with the additional debug messages, with 8.0rc2, before
I start changing the other settings, if that sounds good.

Sounds fine.  From Tom Lane's response, we have a backend bug that needs 
to be resolved, and I think that is the priority.

