Dirk Lutzebaeck ([EMAIL PROTECTED]) wrote:
> I have observed the same problems and posted this a while ago. I'm
> *not* using large objects. On my side this seems to happen when making
> excessive update/inserts in conjunction with unique indexes.
And later:
> Only with 6.5beta so far, not with 6.4.2 but this may be because of
> other reasons.
I recreated the tables without large objects on Friday. (I created two
tables to hold 4000-byte chunks and keep them straight, and I wrote
frontend Perl code that chops the files into pieces and reconstructs
them. I'm not quite done with it -- I still need to get the sequencing
straight when reconstructing -- but that shouldn't take long.)
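Roughly, the scheme looks like this (a minimal sketch assuming DBI with
DBD::Pg; the table and column names here are illustrative, not my real
schema, and the ORDER BY in fetch_file is one way to handle the
sequencing I still have to finish):

    #!/usr/bin/perl -w
    # Store files as numbered 4000-byte chunks and reassemble them.
    use strict;
    use DBI;

    my $dbh = DBI->connect('dbi:Pg:dbname=reports', '', '',
                           { RaiseError => 1, AutoCommit => 1 });

    # Chop a file's contents into 4000-byte pieces, numbering each one.
    sub store_file {
        my ($name, $data) = @_;
        my $sth = $dbh->prepare(
            'INSERT INTO file_chunks (name, seq, chunk) VALUES (?, ?, ?)');
        my $seq = 0;
        for (my $off = 0; $off < length($data); $off += 4000) {
            $sth->execute($name, $seq++, substr($data, $off, 4000));
        }
    }

    # Reconstruct a file by fetching its chunks back in sequence order.
    sub fetch_file {
        my ($name) = @_;
        my $rows = $dbh->selectall_arrayref(
            'SELECT chunk FROM file_chunks WHERE name = ? ORDER BY seq',
            undef, $name);
        return join '', map { $_->[0] } @$rows;
    }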
I left all the cron jobs in place, and left the target e-mail alias in
place so that data would continue to come into the database over the
(holiday) weekend.
When I came back this morning, there were 62 jobs stuck in the mail
queue, and the database wasn't answering queries, etc.
So I've deleted the database again, and changed "UNIQUE INDEX" to
"INDEX" everywhere it appeared in the schema (about 4 places, I
think). Fortunately, I don't think I have any code that requires the
uniqueness....
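(For the record, the change in each of those places was just of this
form -- illustrative names, not my actual schema:

    -- before:
    CREATE UNIQUE INDEX file_chunks_idx ON file_chunks (name, seq);
    -- after:
    CREATE INDEX file_chunks_idx ON file_chunks (name, seq);

The index is still there for lookup speed; only the uniqueness check,
which seems to be implicated in the hangs, is gone.)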
I recreated the database and manually ran 5 submissions at the same
time. (In fact, given the varying lengths of time the report generator
takes on the different systems, there were probably no more than 2
jobs actually hitting the database at any one time.) So far, so
good.
We'll see whether it holds up over the next few days with the unique
indices gone. If not, I'll have to use something other than PostgreSQL.
(This is all with an unmodified 6.4.2 on AIX 4.3.2, for those just
joining this discussion.)