> Yup. We wrote the client that is accessing the database. It's using
> PHP, and we don't even *use* transactions currently. But that isn't the
> problem. From what I gather so far, the server is under fairly high
> load (6 right now) so vacuuming the database (520MB in files, 5MB dump)
> takes a *long* time. While it's vacuuming, anything using that database
> just has to wait, and that's our problem.
Well, every query runs in its own transaction unless you explicitly say BEGIN
and END -- so technically you *are* using transactions...
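A quick illustration in psql (the table name here is hypothetical):

```sql
-- Without BEGIN, each statement is its own transaction and commits immediately:
UPDATE accounts SET balance = balance - 10 WHERE id = 1;

-- With an explicit transaction, the statements commit (or roll back) together:
BEGIN;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
UPDATE accounts SET balance = balance + 10 WHERE id = 2;
END;    -- same as COMMIT; use ROLLBACK instead to undo both updates
```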
I have a 14x larger database (70 MB dumped) running on a dual PII 400 that
only takes two minutes or so to vacuum analyze (lots of indexes too). I guess
that's a long time in some settings, but we only do it once a day...
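For what it's worth, that nightly job is nothing more than a couple of
statements run through psql from cron; a minimal sketch:

```sql
-- Reclaims space from dead tuples and refreshes the planner's statistics:
VACUUM ANALYZE;

-- VERBOSE reports per-table tuple counts, handy for spotting which table
-- (or index) is the one ballooning:
VACUUM VERBOSE ANALYZE;
```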
> Actually, on a whim, I dumped that 520MB database to its 5MB file, and
> reimported it into an entirely new DB. It was 14MB. We vacuum at least
> once an hour (we have a loader that runs every hour, it may run multiple
> concurrent insert scripts). We also use vacuum analyze. So, I really
> can't see a reason for it to balloon to that horridly expanded size.
> Maybe stale indexes? Aborted vacuums? What on earth would cause that?
I've read that you can take the size of the raw data outside the database,
multiply it by 6, and get the approximate size of the same data stored in the
database... Obviously even that rule of thumb isn't holding in your case..
Every UPDATE and DELETE leaves the old version of the tuple on disk until
vacuum is run, but I don't see how that alone explains it, since a fresh
import into a newly created DB starts with no dead tuples at all...
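If you want to see where the space is actually going, pg_class keeps a rough
page count per relation (it's refreshed by VACUUM/ANALYZE), so something like
this should show the worst offenders:

```sql
-- relpages is measured in 8KB blocks; largest relations (tables AND their
-- indexes) sort first, filtering out the system catalogs:
SELECT relname, relpages
FROM pg_class
WHERE relname NOT LIKE 'pg_%'
ORDER BY relpages DESC;
```

If an index shows up huge relative to its table, that would lend weight to
your stale-index theory.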
-Mitch
---------------------------(end of broadcast)---------------------------
TIP 3: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly