Hi,

Tom C. and I exchanged a few emails about the issue off-list; after a while, Tom spotted 
that the PostgreSQL data had been clobbered by the output of my cron job (not possible 
under normal conditions, I guess)... the end result being that he suspects it's a hardware 
glitch or similar ('wc' also dumped core that same night during the daily periodic run; 
I'm choosing cosmic radiation ;).

I'm going to upgrade to the latest release and also run up another, shinier box and see 
if things work out; I'm sure they will (and I'm digging out a Tripwire floppy to make 
ultra sure nothing's changed). As a side issue, could/should the vacuumlo functionality 
be merged into vacuum?
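(For what it's worth, until something like that exists, one way to approximate it might 
be to serialize vacuumlo and vacuumdb from a single cron entry instead of relying on the 
hourly offsets not overlapping. A sketch only — the wrapper script name and path are 
hypothetical; the binaries and database names are the ones from my original crontab:)

```shell
#!/bin/sh
# Hypothetical wrapper (e.g. /usr/local/pgsql/scripts/vacuum-all.sh), run
# from one cron entry, so vacuumlo is guaranteed to finish before any
# vacuumdb starts. Exits on the first failure.
set -e
PGBIN=/usr/local/pgsql/bin

# Clean up orphaned large objects first, across all three databases.
"$PGBIN/vacuumlo" -v db1 db2 db3

# Then vacuum/analyze each database in sequence.
for db in db1 db2 db3; do
    "$PGBIN/vacuumdb" -z "$db"
done
```

(The single crontab line would then be something like 
`59 2 * * * postgres /usr/local/pgsql/scripts/vacuum-all.sh`.)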

Cheers,
Joe

> -----Original Message-----
> From: [EMAIL PROTECTED]
> [mailto:[EMAIL PROTECTED]]On Behalf Of Peter Mount
> Sent: Thursday, March 29, 2001 6:52 PM
> To: Joe Shevland; [EMAIL PROTECTED]
> Cc: [EMAIL PROTECTED]
> Subject: Re: [JDBC] Possible large object bug?
> 
> 
> At 10:37 27/03/01 +1000, Joe Shevland wrote:
> >Hi,
> >
> >Semi-topical I hope ;)
> 
> Yes semi ;-)
> 
> >  I've started using Postgres 7.1 (FreeBSD 4.2-S) and large objects via 
> > JDBC. (postmaster (PostgreSQL) 7.1beta5)
> 
> I'm forwarding this to the bugs list as it looks like something nasty in 
> the back end.
> 
> 
> >Everything has been working nicely with storing/retrieving blobs, until 
> >last night during a vacuum of the database the backend process crashed 
> >with the messages added to the end of this email. I'm also using the 
> >'vacuumlo' contributed code. The order of the cron jobs is:
> >
> >59 2  *  *  *  postgres /usr/local/pgsql/bin/vacuumlo -v db1 db2 db3
> >59 3  *  *  *  postgres /usr/local/pgsql/bin/vacuumdb -z db1
> >59 4  *  *  *  postgres /usr/local/pgsql/bin/vacuumdb -z db2
> >59 5  *  *  *  postgres /usr/local/pgsql/bin/vacuumdb -z db3
> >
> >so I was wondering if there might be a bug in the vacuumlo code (though 
> >it's vacuumdb dying)? Or I was thinking, because they're development DBs, 
> >that frequent dropping/recreating of tables is maybe causing the prob? The 
> >same vacuum commands have run fine before, both from cron and the command 
> >line; the only difference was slightly heavier dropping/recreating yesterday.
> >
> >I've yet to see whether that particular database is stuffed, as I can 
> >recreate and retest easily enough. Let me know if I can give any further info,
> >
> >Regards,
> >Joe
> >
> >---
> >NOTICE:  Rel pg_attribute: TID 1/115: OID IS INVALID. TUPGONE 1.
> >...
> >NOTICE:  Rel pg_attribute: TID 1/6087: OID IS INVALID. TUPGONE 1.
> >NOTICE:  Rel pg_attribute: TID 1/6111: OID IS INVALID. TUPGONE 1.
> >NOTICE:  Rel pg_attribute: TID 1/6112: OID IS INVALID. TUPGONE 1.
> >NOTICE:  Rel pg_attribute: TID 1/6136: OID IS INVALID. TUPGONE 1.
> >NOTICE:  Rel pg_attribute: TID 1/6137: OID IS INVALID. TUPGONE 1.
> >pqReadData() -- backend closed the channel unexpectedly.
> >         This probably means the backend terminated abnormally
> >         before or while processing the request.
> >connection to server was lost
> >vacuumdb: vacuum  db2 failed
> >---
> >
> >with ~500 of the NOTICE lines then the crash. About 1% give a TUPGONE 0 
> >ending instead.
> >
> >
> >---------------------------(end of broadcast)---------------------------
> >TIP 1: subscribe and unsubscribe commands go to [EMAIL PROTECTED]
> 
> 

