[HACKERS] deadlock problems with foreign keys

2002-03-28 Thread Mario Weilguni

I have a severe problem with deadlocks in Postgres: when using referential integrity
it's quite easy to trigger them. I think there may be a bug in ri_trigger.c
(discussed below). Here's a short example:

create table languages (
  id        integer not null,
  name      text    not null,
  primary key(id)
);

create table entry (
  id        integer not null,
  lang_id   integer,
  sometext  text,
  primary key (id),
  foreign key ( lang_id ) references languages (id)
);

insert into languages values (1, 'english');
insert into languages values (2, 'german');

insert into entry values (1, 1, 'text 1');
insert into entry values (2, 1, 'text 2');


transaction A: begin;
transaction A: update entry set sometext='text 1.1' where id=1;
transaction A:  do more time-consuming processing here...
meanwhile, B: begin; 
   B: update entry set sometext='text 2.1' where id=2;

-- both processes hang now

I think this is too much locking here, because the logfile shows something like this:
'select 1 from languages where id=$1 for update' (2 times).

Now I have a lot of tables (around 30) and use referential integrity a lot on ~10 
columns (language, country), and with more fields it's very easy to deadlock the 
whole system (it happens a lot in my web application with ~20 concurrent users).

IMHO the select ... for update on languages is not necessary, since I do not want to 
update lang_id, but I might be wrong. The other problem is that this will make 
Postgres very slow in benchmarks (with many concurrent connections), at least if the 
application is not trivial.
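For what it's worth, with the two transactions exactly as shown both sessions lock the same languages row, which gives blocking rather than deadlock; a true deadlock needs the RI row locks taken in opposite order. A hypothetical interleaving (the two extra entry rows are mine, not part of the report above):

```sql
-- extra rows so each session ends up locking both languages rows
insert into entry values (3, 2, 'text 3');
insert into entry values (4, 2, 'text 4');

begin;  -- session A
update entry set sometext = 'a1' where id = 1;
-- RI trigger: select 1 from languages where id = 1 for update

begin;  -- session B
update entry set sometext = 'b1' where id = 4;
-- RI trigger: select 1 from languages where id = 2 for update

update entry set sometext = 'a2' where id = 3;  -- session A
-- blocks: languages id = 2 is locked by B

update entry set sometext = 'b2' where id = 2;  -- session B
-- blocks on languages id = 1 held by A => deadlock detected
```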

IMO the problem is in ri_trigger.c around line 390:
/* --
 * The query string built is
 *  SELECT 1 FROM ONLY pktable WHERE pkatt1 = $1 [AND ...]
 * The type id's for the $ parameters are those of the
 * corresponding FK attributes. Thus, SPI_prepare could
 * eventually fail if the parser cannot identify some way
 * how to compare these two types by '='.
 * --
 */

Any idea whether this is a bug or simply strict adherence to the SQL standard?

Best regards,
Mario Weilguni


---(end of broadcast)---
TIP 5: Have you checked our extensive FAQ?

http://www.postgresql.org/users-lounge/docs/faq.html



Re: [HACKERS] Rough sketch for locale by default

2002-03-28 Thread Hannu Krosing

On Wed, 2002-03-27 at 19:26, Tom Lane wrote:
 Peter Eisentraut [EMAIL PROTECTED] writes:
  [ good stuff snipped ]
 
  ... Also, it prevents accidentally changing the locale when you
  (or someone else) fiddle with your environment variables.
 
 If I follow this correctly, the behavior would be that PG would not pay
 attention to *any* LC_xxx environment variables?  Although I agree with
 that principle in the abstract, it bothers me that PG will be out of
 step with every single other locale-using program in the Unix world.

IIRC Oracle uses NLS_LANG and not any LC_* variables (even on Unix ;)

it is set to something like NLS_LANG=ESTONIAN_ESTONIA.WE8ISO8859P15


 We ought to think twice about whether that's really a good idea.
 
  Note that you get the same kind of command line options as in initdb:
  --lc-numeric, --locale, etc.  You can also run SHOW lc_numeric to see
  what's going on.
 
 Probably you thought of this already: please also support SHOW for the
 initdb-time variables (lc_collate, etc), so that one can find out the
 active locale settings without having to resort to
 contrib/pg_controldata.


Hannu






Re: [HACKERS] deadlock problems with foreign keys

2002-03-28 Thread Jan Wieck

Mario Weilguni wrote:
 I have a severe problem with deadlocks in Postgres: when using referential integrity 
it's quite easy to trigger them. I think there may be a bug in ri_trigger.c 
(discussed below). Here's a short example:

 create table languages (
   id        integer not null,
   name      text    not null,
   primary key(id)
 );

 create table entry (
   id        integer not null,
   lang_id   integer,
   sometext  text,
   primary key (id),
   foreign key ( lang_id ) references languages (id)
 );

 insert into languages values (1, 'english');
 insert into languages values (2, 'german');

 insert into entry values (1, 1, 'text 1');
 insert into entry values (2, 1, 'text 2');


 transaction A: begin;
 transaction A: update entry set sometext='text 1.1' where id=1;
 transaction A:  do more time-consuming processing here...
 meanwhile, B: begin;
B: update entry set sometext='text 2.1' where id=2;

 -- both processes hang now

Cannot reproduce that problem in v7.2. Only B blocks until A
either commits or rolls back. So what exactly is your "more
time-consuming processing"?


 I think this is too much locking here, because the logfile shows something like 
this:
 'select 1 from languages where id=$1 for update' (2 times).

 Now I have a lot of tables (around 30) and use referential integrity a lot on ~10 
columns (language, country), and with more fields it's very easy to deadlock the 
whole system (it happens a lot in my web application with ~20 concurrent users).

 IMHO the select ... for update on languages is not necessary, since I do not want 
to update lang_id, but I might be wrong. The other problem is that this will make 
Postgres very slow in benchmarks (with many concurrent connections), at least if the 
application is not trivial.

 IMO the problem is in ri_trigger.c around line 390:
 /* --
  * The query string built is
  *   SELECT 1 FROM ONLY pktable WHERE pkatt1 = $1 [AND ...]
  * The type id's for the $ parameters are those of the
  * corresponding FK attributes. Thus, SPI_prepare could
  * eventually fail if the parser cannot identify some way
  * how to compare these two types by '='.
  * --
  */

 Any idea whether this is a bug or simply strict adherence to the SQL standard?

It does a SELECT ... FOR UPDATE because we don't have a
SELECT ... AND PLEASE DO NOT REMOVE.

If we only checked that the PK is there now, another
concurrent transaction could delete the PK; its own check
cannot see our uncommitted row yet, and we would end up with
a violation. And if you look at the comment a few lines up,
it explains why we cannot skip the check even if the key
value doesn't change.
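A sketch of the race the FOR UPDATE closes (a hypothetical interleaving against the quoted tables; with a plain SELECT both checks pass and an orphan survives):

```sql
begin;  -- session A
insert into entry values (5, 2, 'text 5');
-- RI check as a plain SELECT would find the PK and succeed:
-- select 1 from only languages where id = 2;

begin;  -- session B, meanwhile
delete from languages where id = 2;
-- B's RI check cannot see A's uncommitted entry row,
-- so it finds no referencing rows and succeeds too
commit;

commit;  -- session A
-- entry(5) now references languages(2), which is gone: violation.
-- With SELECT ... FOR UPDATE, A's check locks the PK row and
-- B's DELETE has to wait until A commits.
```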


Jan

--

#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #








[HACKERS] Alter function?

2002-03-28 Thread Rod Taylor

Now that CREATE OR REPLACE FUNCTION exists, what is ALTER FUNCTION
supposed to do?  MSSQL's ALTER FUNCTION does the same as REPLACE.  Is
it simply an alias for the REPLACE case?
--
Rod Taylor

Your eyes are weary from staring at the CRT. You feel sleepy. Notice
how restful it is to watch the cursor blink. Close your eyes. The
opinions stated above are yours. You cannot imagine why you ever felt
otherwise.






Re: [HACKERS] deadlock problems with foreign keys

2002-03-28 Thread Jan Wieck

Rod Taylor wrote:
 There was no deadlock in 7.2 with what was provided -- but the second
 transaction was blocked from doing its thing by the lock from the
 first.  Perhaps a deadlock is caused by 'do other stuff'?

 I will agree that a FOR UPDATE is heavy.  There is no intention to
 update the record; we just want to ensure it's NOT updated or deleted.
 A FOR PREVENT UPDATE lock may be preferable, and it should block any
 other locks while allowing the lock to be 'upgraded' in the case where
 you hold the only PREVENT UPDATE lock.  It wouldn't be exclusive
 against itself, only against other types of locks.


 All that said, SET CONSTRAINTS ALL DEFERRED at the beginning of the
 transaction also caused a block on the update with the second
 transaction.  That interests me.  Why doesn't the second transaction
 go through and block the first from using COMMIT?

SET CONSTRAINTS ALL DEFERRED only sets DEFERRABLE
constraints to DEFERRED.  Constraints default to NOT
DEFERRABLE, so unless you explicitly allowed it at table
creation, you did a no-op.
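A sketch of what the deferrable variant would look like (repeating the quoted entry table with the one extra clause; illustrative, not tested against 7.2):

```sql
create table entry (
  id        integer not null,
  lang_id   integer,
  sometext  text,
  primary key (id),
  foreign key (lang_id) references languages (id)
    deferrable initially immediate  -- without this, SET CONSTRAINTS is a no-op
);

begin;
set constraints all deferred;  -- now actually defers the RI check
update entry set sometext = 'text 2.1' where id = 2;
commit;  -- the foreign key is verified here, at commit time
```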


Jan



 --
 Rod Taylor

 This message represents the official view of the voices in my head

 - Original Message -
 From: Mario Weilguni [EMAIL PROTECTED]
 To: Postgresql Mailinglist (E-Mail) [EMAIL PROTECTED]
 Sent: Thursday, March 28, 2002 9:44 AM
 Subject: [HACKERS] deadlock problems with foreign keys


 I have a severe problem with deadlocks in Postgres: when using
 referential integrity it's quite easy to trigger them. I think
 there may be a bug in ri_trigger.c (discussed below). Here's a short
 example:

 create table languages (
   id        integer not null,
   name      text    not null,
   primary key(id)
 );

 create table entry (
   id        integer not null,
   lang_id   integer,
   sometext  text,
   primary key (id),
   foreign key ( lang_id ) references languages (id)
 );

 insert into languages values (1, 'english');
 insert into languages values (2, 'german');

 insert into entry values (1, 1, 'text 1');
 insert into entry values (2, 1, 'text 2');


 transaction A: begin;
 transaction A: update entry set sometext='text 1.1' where id=1;
 transaction A:  do more time-consuming processing here...
 meanwhile, B: begin;
B: update entry set sometext='text 2.1' where id=2;

 -- both processes hang now

 I think this is too much locking here, because the logfile shows
 something like this:
 'select 1 from languages where id=$1 for update' (2 times).

 Now I have a lot of tables (around 30) and use referential integrity a
 lot on ~10 columns (language, country), and with more fields it's
 very easy to deadlock the whole system (it happens a lot in my web
 application with ~20 concurrent users).

 IMHO the select ... for update on languages is not necessary, since
 I do not want to update lang_id, but I might be wrong. The other
 problem is that this will make Postgres very slow in benchmarks (with
 many concurrent connections), at least if the application is not
 trivial.

 IMO the problem is in ri_trigger.c around line 390:
 /* --
 * The query string built is
 * SELECT 1 FROM ONLY pktable WHERE pkatt1 = $1 [AND ...]
 * The type id's for the $ parameters are those of the
 * corresponding FK attributes. Thus, SPI_prepare could
 * eventually fail if the parser cannot identify some way
 * how to compare these two types by '='.
 * --
 */

 Any idea whether this is a bug or simply strict adherence to the SQL standard?

 Best regards,
 Mario Weilguni








--

#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.  #
#== [EMAIL PROTECTED] #








Re: [HACKERS] Rough sketch for locale by default

2002-03-28 Thread Peter Eisentraut

Tom Lane writes:

 If I follow this correctly, the behavior would be that PG would not pay
 attention to *any* LC_xxx environment variables?  Although I agree with
 that principle in the abstract, it bothers me that PG will be out of
 step with every single other locale-using program in the Unix world.

During earlier discussions people had objected to enabling locale support
by default on the grounds that it is very hard to follow which locale is
getting activated when.  Especially from Japan I heard that a lot of
people have some locale settings in their environment, but that most
locales are unsuitable (broken) for use in the PostgreSQL server.  So
this approach would keep the behavior backward compatible with the
--disable-locale case.

Here's a possible compromise for the postmaster:

We let initdb figure out what locales the user wants and then not only
initialize pg_control appropriately, but also write the run-time
changeable categories into the postgresql.conf file.  That way, the
postmaster executable could still consult the LC_* variables, but in the
common case it would just be overridden when the postgresql.conf file is
read.

This way we also hide the details of what locale category gets what
treatment from users who only want one locale for all categories and
don't want to change it.  Furthermore, it all but eliminates the problem
I'm concerned about: that the locale may accidentally be changed when the
postmaster is restarted.

How does initdb figure out what locale is wanted?  I agree it makes sense
to use the setting in the environment, because in many cases the database
will want to use the same locale as everything else on the system.  We
could provide a flag --no-locale, which sets all locale categories to C,
as a clear and simple way to turn this off.
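A sketch of what initdb might then write into postgresql.conf under this proposal (the variable names follow the run-time-changeable categories mentioned above; the values are hypothetical):

```
# set by initdb from the environment; edit to override
lc_messages = 'en_US'   # language of server messages
lc_monetary = 'en_US'   # currency formatting
lc_numeric  = 'en_US'   # number formatting
lc_time     = 'en_US'   # date and time formatting
```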

-- 
Peter Eisentraut   [EMAIL PROTECTED]





[HACKERS] v7.2.1 re-rolled ...

2002-03-28 Thread Marc G. Fournier


And available in /pub/source/v7.2.1 ... this one has both man.tar.gz and
postgres.tar.gz in it ... someone want to make a quick confirm while the
mirrors pick it up?





Re: [HACKERS] v7.2.1 re-rolled ...

2002-03-28 Thread Tatsuo Ishii

 And available in /pub/source/v7.2.1 ... this one has both man.tar.gz and
 postgres.tar.gz in it ... someone want to make a quick confirm while the
 mirrors pick it up?

At a quick glance, it seems OK to me. All regression tests
passed. The docs version is OK. This is on Linux kernel 2.2.
--
Tatsuo Ishii




[HACKERS] pg_dump 2GB limit?

2002-03-28 Thread Laurette Cisneros


The archive search is not working on postgresql.org, so I need to ask this
question...

We are using postgresql 7.2 and when dumping one of our larger databases,
we get the following error:

File size limit exceeded (core dumped)

We suspect pg_dump.  Is this true?  Why would there be this limit in
pg_dump?  Is it scheduled to be fixed?

Thanks,

-- 
Laurette Cisneros
Database Roadie
(510) 420-3137
NextBus Information Systems, Inc.
www.nextbus.com
Where's my bus?






Re: [HACKERS] v7.2.1 re-rolled ...

2002-03-28 Thread Trond Eivind Glomsrød

Marc G. Fournier [EMAIL PROTECTED] writes:

 And available in /pub/source/v7.2.1 ... this one has both man.tar.gz and
 postgres.tar.gz in it ... someone want to make a quick confirm while the
 mirrors pick it up?

When rerolling something which has been on a public ftp server, upping
the number to avoid confusion is always a good idea.
-- 
Trond Eivind Glomsrød
Red Hat, Inc.




Re: [HACKERS] pg_dump 2GB limit?

2002-03-28 Thread Doug McNaught

Laurette Cisneros [EMAIL PROTECTED] writes:

 The archives search is not working on postgresql.org so I need to ask this
 question...
 
 We are using postgresql 7.2 and when dumping one of our larger databases,
 we get the following error:
 
 File size limit exceeded (core dumped)
 
 We suspect pg_dump.  Is this true?  Why would there be this limit in
 pg_dump?  Is it scheduled to be fixed?

This means one of two things:

1) Your ulimits are set too low, or
2) Your pg_dump wasn't compiled against a C library with large file
   support (greater than 2GB).
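A quick way to check the first cause from the shell (a sketch; `getconf` output varies by platform and may be empty where large files are the default):

```shell
# cause 1: the per-process file-size limit.
# "unlimited" (or a value well above 2 GB) rules ulimits out.
ulimit -f

# cause 2 is a compile-time matter; on Linux, getconf shows the flags
# a large-file-aware build would have been compiled with.
getconf LFS_CFLAGS
```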

Is this on Linux?

-Doug
-- 
Doug McNaught   Wireboard Industries  http://www.wireboard.com/

  Custom software development, systems and network consulting.
  Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...




Re: [HACKERS] pg_dump 2GB limit?

2002-03-28 Thread Peter Eisentraut

Laurette Cisneros writes:

 We are using postgresql 7.2 and when dumping one of our larger databases,
 we get the following error:

 File size limit exceeded (core dumped)

 We suspect pg_dump.  Is this true?

No, it's your operating system.

http://www.us.postgresql.org/users-lounge/docs/7.2/postgres/backup.html#BACKUP-DUMP-LARGE

-- 
Peter Eisentraut   [EMAIL PROTECTED]





Re: [HACKERS] pg_dump 2GB limit?

2002-03-28 Thread Laurette Cisneros

Hi,

I'm on Red Hat.  Here's the uname info:
Linux visor 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686 unknown

What do I need to do to turn on large file support in the compile?

Thanks,

L.
On 28 Mar 2002, Doug McNaught wrote:

 Laurette Cisneros [EMAIL PROTECTED] writes:
 
  The archives search is not working on postgresql.org so I need to ask this
  question...
  
  We are using postgresql 7.2 and when dumping one of our larger databases,
  we get the following error:
  
  File size limit exceeded (core dumped)
  
  We suspect pg_dump.  Is this true?  Why would there be this limit in
  pg_dump?  Is it scheduled to be fixed?
 
 This means one of two things:
 
 1) Your ulimits are set too low, or
 2) Your pg_dump wasn't compiled against a C library with large file
support (greater than 2GB).
 
 Is this on Linux?
 
 -Doug
 

-- 
Laurette Cisneros
Database Roadie
(510) 420-3137
NextBus Information Systems, Inc.
www.nextbus.com
Where's my bus?





Re: [HACKERS] pg_dump 2GB limit?

2002-03-28 Thread Doug McNaught

Laurette Cisneros [EMAIL PROTECTED] writes:

 Hi,
 
 I'm on Red Hat.  Here's the uname info:
 Linux visor 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686 unknown

That's an old and buggy kernel, BTW--you should install the errata
upgrades.

 What do I need to do to turn on large file support in the compile?

Make sure you are running the latest kernel and libs, and AFAIK
'configure' should set it up for you automatically.

-Doug
-- 
Doug McNaught   Wireboard Industries  http://www.wireboard.com/

  Custom software development, systems and network consulting.
  Java PostgreSQL Enhydra Python Zope Perl Apache Linux BSD...




Re: [HACKERS] pg_dump 2GB limit?

2002-03-28 Thread Laurette Cisneros


Oops, sent the wrong uname; here's the one from the machine we compiled on:
Linux lept 2.4.16 #6 SMP Fri Feb 8 13:31:46 PST 2002 i686 unknown

and has: libc-2.2.2.so 

We use ./configure 

Still a problem?

We do compress (-Fc) right now, but are working on a backup scheme that
requires an uncompressed dump.

Thanks for the help!

L.

On 28 Mar 2002, Doug McNaught wrote:

 Laurette Cisneros [EMAIL PROTECTED] writes:
 
  Hi,
  
  I'm on Red Hat.  Here's the uname info:
  Linux visor 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686 unknown
 
 That's an old and buggy kernel, BTW--you should install the errata
 upgrades, 
 
  What do I need to do to turn on large file support in the compile?
 
 Make sure you are running the latest kernel and libs, and AFAIK
 'configure' should set it up for you automatically.
 
 -Doug
 

-- 
Laurette Cisneros
Database Roadie
(510) 420-3137
NextBus Information Systems, Inc.
www.nextbus.com
Where's my bus?





Re: [HACKERS] pg_dump 2GB limit?

2002-03-28 Thread Trond Eivind Glomsrød

Laurette Cisneros [EMAIL PROTECTED] writes:

 Hi,
 
 I'm on Red Hat.  Here's the uname info:
 Linux visor 2.4.2-2 #1 Sun Apr 8 20:41:30 EDT 2001 i686 unknown

You should really upgrade (kernel and the rest), but this kernel
supports large files.

-- 
Trond Eivind Glomsrød
Red Hat, Inc.




Re: [HACKERS] pg_dump 2GB limit?

2002-03-28 Thread Trond Eivind Glomsrød

Peter Eisentraut [EMAIL PROTECTED] writes:

 Laurette Cisneros writes:
 
  We are using postgresql 7.2 and when dumping one of our larger databases,
  we get the following error:
 
  File size limit exceeded (core dumped)
 
  We suspect pg_dump.  Is this true?
 
 No, it's your operating sytem.

Red Hat Linux 7.x, which he seems to be using, supports this.
-- 
Trond Eivind Glomsrød
Red Hat, Inc.




Re: [HACKERS] pg_dump 2GB limit?

2002-03-28 Thread Christopher Kings-Lynne

  File size limit exceeded (core dumped)
 
  We suspect pg_dump.  Is this true?  Why would there be this limit in
  pg_dump?  Is it scheduled to be fixed?

Try piping the output of pg_dump through bzip2 before writing it to disk.
Otherwise, I think pg_dump has a -z parameter or something for turning
on compression.
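A sketch of that workaround: keep any single output file under the 2 GB limit by compressing or splitting the dump stream. Here `seq` stands in for `pg_dump mydb` so the pipeline can be tried without a live database; the database name and chunk sizes are placeholders.

```shell
# stand-in for: pg_dump mydb > dump.sql
seq 1 100000 > dump.sql

# compress on the fly, as suggested above
seq 1 100000 | bzip2 -c > dump.sql.bz2    # i.e.: pg_dump mydb | bzip2 -c > dump.sql.bz2

# or split the plain dump into fixed-size pieces
split -b 64k dump.sql dump.part.          # i.e.: pg_dump mydb | split -b 1000m - dump.part.
cat dump.part.* > restored.sql            # reassemble before feeding to psql
cmp -s dump.sql restored.sql && echo "round trip ok"
```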

Chris






Re: [HACKERS] Mailing List Question

2002-03-28 Thread Marc G. Fournier

On Wed, 27 Mar 2002, Thomas Lockhart wrote:

 Vince Vielhaber wrote:
 
  On Wed, 27 Mar 2002, Bruce Momjian wrote:
 
   Marc G. Fournier wrote:
   
checking the moderator-to-approve listing for you, here are the reason(s):
   
Reason: GLOBAL ADMIN HEADER: /^subject:\s*set\b/i matched Subject: SET
   OK, but should posters get email stating it is in the approval queue?
   He clearly didn't, and I don't either, but others say they do get such
   messages.
  Not necessarily if it's an admin command.

 istm that we should disable all administrative functions from the main
 mailing lists (this is settable in the configuration). the -request
 addresses handle administration, and it is just plain confusing to find
 that there are some special words that should never be mentioned in the
 subject or body of a message. That isn't appropriate behavior for those
 mailing lists!

I can do this ... it would just mean ppl erroneously sending
subscribe/unsubscribe messages to the list(s) will actually get through
...

Anyone disagree with this change?





Re: [HACKERS] Mailing List Question

2002-03-28 Thread Marc G. Fournier

On Thu, 28 Mar 2002, Tom Lane wrote:

 Thomas Lockhart [EMAIL PROTECTED] writes:
  imho we should disable *any* special handling of posts to the mailing
  lists.

 It would be interesting to try that for awhile and see if the cure is
 worse than the disease or not.  How many clueless uns*bscr*be requests
 will hit the lists if there are no filters?

To be honest, not many ... 50% of what I have to moderate is plain and
simple spam (and that isn't an exaggeration; I wiped out something like
150 out of 350 messages the other day) ... maybe about 25% are duplicate
postings ... I'd say 1% are subscribe/unsubscribe ... and the rest are
mostly from ppl not subscribed to the lists at all ...

Let me disable the administrative stuff being blocked and we'll see if it
makes much of a difference in the way of 'false traffic' ...





Re: [HACKERS] Mailing List Question

2002-03-28 Thread Bruce Momjian

Marc G. Fournier wrote:
 On Thu, 28 Mar 2002, Tom Lane wrote:
 
  Thomas Lockhart [EMAIL PROTECTED] writes:
   imho we should disable *any* special handling of posts to the mailing
   lists.
 
  It would be interesting to try that for awhile and see if the cure is
  worse than the disease or not.  How many clueless uns*bscr*be requests
  will hit the lists if there are no filters?
 
 To be honest, not many ... 50% of what I have to moderate is plain and
 simple spam (and that isn't an exaggeration; I wiped out something like
 150 out of 350 messages the other day) ... maybe about 25% are duplicate
 postings ... I'd say 1% are subscribe/unsubscribe ... and the rest are
 mostly from ppl not subscribed to the lists at all ...

Let me add that I have looked at some non-pg lists and it looks terrible
to see spam in there, right in the archives.  Marc's manual review is
clearly keeping our lists at a high quality.  Removing the admin keyword
blocks should fix most of our problems.

-- 
  Bruce Momjian|  http://candle.pha.pa.us
  [EMAIL PROTECTED]   |  (610) 853-3000
  +  If your life is a hard drive, |  830 Blythe Avenue
  +  Christ can be your backup.|  Drexel Hill, Pennsylvania 19026
