Hans-Juergen Schoenig writes:
I forgot to mention - I am on 8.1 here. So, VACUUM is not so smart yet.

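For context: 8.1 tracks freeze state per database rather than per table, so the closest thing to a wraparound check there is a query against pg_database - a minimal sketch, using only the stock datfrozenxid column:

    SELECT datname, age(datfrozenxid) AS xids_since_freeze
    FROM pg_database
    ORDER BY 2 DESC;

An age approaching two billion means a database-wide VACUUM is overdue.
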
So even if we added 64-bit XIDs it wouldn't be useful to you. You would have to upgrade (at which point you get all the other improvements, which make it less useful). Or at […]

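The improvements in question are presumably 8.2's per-table freeze tracking (pg_class.relfrozenxid) and 8.3's always-on, multi-worker autovacuum and HOT updates, which together cut the anti-wraparound workload sharply. With per-table tracking, freeze work can be pointed at just the relations that need it - a sketch, assuming 8.2 or later:

    SELECT relname, age(relfrozenxid) AS xids_since_freeze
    FROM pg_class
    WHERE relkind = 'r'
    ORDER BY 2 DESC
    LIMIT 10;
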
Hans-Juergen Schoenig wrote:
I suggest introducing a --with-long-xids flag which would give me 62/64-bit XIDs per vacuum on the entire database. This should be fairly easy to implement. I am not too concerned about the size of the tuple header here - if we waste 500 GB of storage here I am […]

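To put a rough number on that header cost: the pre-8.3 HeapTupleHeaderData is nominally 27 bytes, of which three fields (xmin, xmax, and the cmax/xvac union) are TransactionId-sized, so widening those from 4 to 8 bytes adds 12 bytes per tuple before alignment padding. Under that assumption, 500 GB of extra headers corresponds to on the order of 40 billion rows - the waste only reaches that scale on a genuinely huge cluster.
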
Hello everybody,
I know that we have discussed this issue already. My view of the problem has changed in the past couple of weeks, however; maybe other people have had similar experiences. I have been working on a special-purpose application which basically looks like this:
- 150,000 tables […]

Hans-Juergen Schoenig writes:
My DB is facing around 600 mio transactions a month. 85% of those contain at least some small modification, so I cannot save on XIDs.
What's a mio? Assuming it's short for million, I don't see the problem. The transaction horizon is 2 *billion*. So […]

Gregory Stark writes:
... Keep in mind you're proposing to make everything run 3% slower instead of using that 3% I/O bandwidth headroom to run vacuum outside the critical path.
I think that's actually understating the problem. Assuming this is a 64-bit machine (which it had […]

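The 64-bit-machine point matters because of alignment: with MAXALIGN = 8, the nominal 27-byte pre-8.3 header already pads out to 32 bytes, and the 12 extra XID bytes push it to 39, which pads to 40 - a 25% larger header on every tuple in every table, paid on every read and write whether or not the table ever needs an anti-wraparound vacuum (figures assume the pre-8.3 layout sketched above).
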
Hans-Juergen Schoenig writes:
Overhead is not an issue here - if I lose 10 or 15% I am totally fine as long as I can reduce vacuum overhead to an absolute minimum.
I cannot see the sanity of taking a ~10% hit on all I/O activity (especially foreground queries) to avoid having […]

Joshua D. Drake wrote:
Hans-Juergen Schoenig wrote:
Overhead is not an issue here - if I lose 10 or 15% I am totally fine as long as I can reduce vacuum overhead to an absolute minimum. Overhead will vary with row sizes anyway - this is not the point.
I am not buying this […]