Thanks for the help, I will probably try this, but I would like to know how
all of this originated:

I am posting here the evolution of my db, hoping that somebody can help:

- I upgraded from 7.x to 8.1.5 about 20 days ago
- the db is about 2-3 GB, and the application that works on it has not
been changed across pg versions (apart from the JDBC driver)
- the db is in production, with an average of 2000 web sessions per day
launching several queries, and everything was working perfectly

Last night I noticed that the usual LARGE OBJECT "cleaning" had not been
running under the new version, due to a password problem.
The Java code that does the LO cleaning is this:

                       // connection 1: list every large object together with
                       // any matching row in the "images" table (NULL if none)
                       db1 = DriverManager.getConnection(
                               "jdbc:postgresql://mydatabase",
                               "zzzzzzzzz", "xxxxxxxxx");
                       Statement stat = db1.createStatement();
                       ResultSet rs = stat.executeQuery(
                               "select loid, imgoid from pg_largeobject"
                               + " LEFT JOIN images on pg_largeobject.loid = images.imgoid"
                               + " group by loid, imgoid");
                       // simple form of the query: select * from pg_largeobject
                       //   LEFT JOIN images on pg_largeobject.loid = images.imgoid

                       // connection 2: used for the large-object API
                       db2 = DriverManager.getConnection(
                               "jdbc:postgresql://mydatabase",
                               "zzzzzzzzz", "xxxxxxxxx");
                       // the LO API must be used inside a transaction
                       db2.setAutoCommit(false);

                       // fetch the large object manager
                       LargeObjectManager lom =
                               ((org.postgresql.PGConnection) db2).getLargeObjectAPI();

                       while (rs.next()) {
                           count++;
                           imgoid = rs.getInt("imgoid");
                           loid = rs.getInt("loid");

                           // getInt() returns 0 for SQL NULL, i.e. no images
                           // row references this large object: it is orphaned
                           if (imgoid == 0) {
                               ccount++;
                               lom.delete(loid);
                           }
                       }
                       rs.close();
                       stat.close();


After I activated this code, I started noticing these problems (the database
won't reindex, and the dump doesn't work).
Does anybody know whether this way of dealing with LOs changed between 7.x
and 8.1.5?
Or maybe this is not related to the clog file problems at all?

Any help appreciated,
Edoardo


On 12/18/06, Shoaib Mir <[EMAIL PROTECTED]> wrote:

You can try creating the missing clog file as 256K of zeros; once that is
done, try going through the pg_dump process again.

This suggestion might help you as well -->
http://archives.postgresql.org/pgsql-admin/2003-02/msg00263.php
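The "256K of zeros" step above can be sketched with dd. The data-directory
location below is a placeholder (use your real $PGDATA), and the segment name
0612 comes from the error message in this thread; stop the server and take a
filesystem backup before touching pg_clog:

```shell
# assumption: stand-in path for the real PostgreSQL data directory
PGDATA=/tmp/demo_pgdata
mkdir -p "$PGDATA/pg_clog"

# write one 256 KB block of zero bytes as the missing clog segment
dd if=/dev/zero of="$PGDATA/pg_clog/0612" bs=256k count=1

# the file should be exactly 262144 bytes
ls -l "$PGDATA/pg_clog/0612"
```

Note this only pads the gap so pg_dump can read past it; the affected rows'
commit status is still lost, so verify the dumped data afterwards.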

---------------------
Shoaib Mir
EnterpriseDB (www.enterprisedb.com)

On 12/18/06, Edoardo Ceccarelli <[EMAIL PROTECTED]> wrote:
>
> Hi,
>
> just encountered this error trying to dump my db:
> any ideas?
>
> Thank you
>
>
> pg_dump: ERROR:  could not access status of transaction 1629514106
>
> DETAIL:  could not open file "pg_clog/0612": No such file or directory
>
> pg_dump: SQL command to dump the contents of table "annuncio400" failed:
> PQendcopy() failed.
>
> pg_dump: Error message from server: ERROR:  could not access status of
> transaction 1629514106
>
> DETAIL:  could not open file "pg_clog/0612": No such file or directory
>
> pg_dump: The command was: COPY public.annuncio400 (testo, telef1, pref1,
> rubric, tiprec, uscita, telef2, pref2, giornouscita, meseuscita, annouscita,
> prezzo, annofabbr, locali, codedi, codcliente, email, tipoannuncio, ka) TO
> stdout;
>
>
