I have a Java application that writes large objects to the database through the JDBC interface. The application reads binary input from an input stream, writes the large object to the database in 8K chunks, and keeps a running total of the data length. At the end of the input stream it closes the large object and commits it, then updates the associated tables with the large object's OID and length and commits that information to the database. The application has multiple threads (at most 8) writing these large objects simultaneously, each on its own connection. Whenever the system has a problem, a monitor application detects that a shutdown is needed and stops Postgres with a smart shutdown.
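
In outline, each thread's write path looks roughly like the sketch below. This is a simplified version written against a recent pgjdbc driver; the "blobs" table, its columns, and the method name are placeholders standing in for our real schema:

import java.io.IOException;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public class LoWriter {
    /** Streams 'in' into a new large object, then records its OID and length. */
    static void storeLargeObject(Connection conn, InputStream in, long rowId)
            throws SQLException, IOException {
        conn.setAutoCommit(false); // large objects only work inside a transaction

        LargeObjectManager lom =
                conn.unwrap(PGConnection.class).getLargeObjectAPI();
        long oid = lom.createLO(LargeObjectManager.READWRITE);
        LargeObject lo = lom.open(oid, LargeObjectManager.WRITE);

        byte[] buf = new byte[8192]; // write in 8K chunks
        long length = 0;
        int n;
        while ((n = in.read(buf)) > 0) {
            lo.write(buf, 0, n);
            length += n;
        }
        lo.close();
        conn.commit(); // first commit: the large object itself

        // "blobs" and its columns are hypothetical; our real schema differs.
        try (PreparedStatement ps = conn.prepareStatement(
                "UPDATE blobs SET lo_oid = ?, lo_length = ? WHERE id = ?")) {
            ps.setLong(1, oid);
            ps.setLong(2, length);
            ps.setLong(3, rowId);
            ps.executeUpdate();
        }
        conn.commit(); // second commit: the metadata row
    }
}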
What I am seeing is that when all 8 threads are running and the system is shut down, large objects committed in transactions shortly before the shutdown are corrupt after the database restarts. I know the large objects were committed, because the associated table entries that point to them are present after the restart, with valid information about the large object's length and OID. However, when I read a large object back I am only returned a 2K chunk, even though its table entry says it should be 320K.
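
The check that fails looks roughly like this (again a simplified sketch; expectedLength is the value from our metadata table):

import java.sql.Connection;
import java.sql.SQLException;

import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public class LoChecker {
    /** Reads a large object back and compares its server-side size to the table's. */
    static byte[] readLargeObject(Connection conn, long oid, long expectedLength)
            throws SQLException {
        conn.setAutoCommit(false);
        LargeObjectManager lom =
                conn.unwrap(PGConnection.class).getLargeObjectAPI();
        LargeObject lo = lom.open(oid, LargeObjectManager.READ);
        try {
            long actual = lo.size64(); // lo.size() on older drivers
            if (actual != expectedLength) {
                // After the restart this reports ~2K while the table says 320K.
                System.err.println("size mismatch: table says " + expectedLength
                        + " bytes, server says " + actual);
            }
            return lo.read((int) actual);
        } finally {
            lo.close();
            conn.commit();
        }
    }
}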
Does anybody have any ideas what the problem is? Are there any known issues with the recovery of large objects?
Chris White
