On Fri, 28 May 2004 14:47:01 -0400, Tom Lane [EMAIL PROTECTED] wrote:
If putting back xmax is the price we must pay for nested transactions,
then we *will* pay that price. Maybe not in this release, but it will
inevitably happen.
we = every Postgres user, even those that do not use
On Fri, 28 May 2004 21:59:53 -0400, Alvaro Herrera
[EMAIL PROTECTED] wrote:
So the assumption was that when we see that this has
happened, the Cmin is no longer important (== every future command can
already see the tuple)
If it was that simple, we wouldn't have had to invent the CMIN_IS_CMAX
On Tue, Jun 01, 2004 at 02:37:37PM +0200, Manfred Koizar wrote:
On Fri, 28 May 2004 21:59:53 -0400, Alvaro Herrera
[EMAIL PROTECTED] wrote:
So the assumption was that when we see that this has
happened, the Cmin is no longer important (== every future command can
already see the tuple)
If
Manfred Koizar [EMAIL PROTECTED] writes:
On Thu, 27 May 2004 16:50:24 -0400, Alvaro Herrera
At first I thought I'd have to add back Xmax as a field on its own
Veto! This would increase heap tuple header size == fewer tuples per
page == more pages per table == more I/O == performance loss.
If
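The cost Manfred objects to is easy to quantify. A back-of-the-envelope sketch in Python; all sizes here (page header, line pointer, tuple header, row width) are assumed round numbers for illustration, not the exact 7.5 on-disk layout:

```python
# How much does one extra 4-byte header field cost in tuples per page?
# All sizes below are illustrative assumptions, not the real layout.
PAGE_SIZE = 8192
PAGE_HEADER = 24        # assumed page header size
LINE_POINTER = 4        # assumed per-tuple item pointer
TUPLE_HEADER = 28       # assumed heap tuple header (with alignment)
DATA = 100              # assumed user data per row

def tuples_per_page(header):
    per_tuple = LINE_POINTER + header + DATA
    return (PAGE_SIZE - PAGE_HEADER) // per_tuple

before = tuples_per_page(TUPLE_HEADER)      # without the extra field
after = tuples_per_page(TUPLE_HEADER + 4)   # with Xmax added back
```

With these numbers the extra field drops 61 rows per page to 60, i.e. roughly 1.6% more pages and proportionally more I/O on sequential scans.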
Alvaro Herrera [EMAIL PROTECTED] writes:
On Fri, May 28, 2004 at 03:48:11PM -0400, Tom Lane wrote:
Nope. Think about sub-subtransactions.
They are all aborted if the parent is, so
BEGIN;
BEGIN; -- cid = 1
BEGIN; -- cid = 2
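The "they are all aborted if the parent is" rule is just an abort that propagates down the subtransaction tree. A toy Python sketch (illustrative only, nothing like the actual C implementation):

```python
# Toy model of nested subtransactions: aborting a node marks its
# entire subtree aborted, matching "all aborted if the parent is".
class Subxact:
    def __init__(self, parent=None):
        self.parent = parent
        self.children = []
        self.aborted = False
        if parent is not None:
            parent.children.append(self)

    def abort(self):
        # Mark this subtransaction and, recursively, all descendants.
        self.aborted = True
        for child in self.children:
            child.abort()

top = Subxact()        # BEGIN;
s1 = Subxact(top)      # BEGIN;  -- cid = 1
s2 = Subxact(s1)       # BEGIN;  -- cid = 2
s1.abort()             # aborting s1 also aborts its child s2
```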
On Thu, 27 May 2004 16:50:24 -0400, Alvaro Herrera
[EMAIL PROTECTED] wrote:
Now you are on the subject, can I ask you to take a peek at what I did
regarding tuple headers?
I did read your patch, but I didn't understand it. :-(
At first I thought I'd have to add back Xmax as a field on its own
On Fri, May 28, 2004 at 02:47:01PM -0400, Tom Lane wrote:
We could possibly avoid this particular issue with sufficiently complex
visibility rules. (I am thinking that we might be able to say that the
inner xact can't see the tuple in question unless the creating command
was done in the
On Fri, May 28, 2004 at 03:48:11PM -0400, Tom Lane wrote:
Alvaro Herrera [EMAIL PROTECTED] writes:
On Fri, May 28, 2004 at 03:19:29PM -0400, Tom Lane wrote:
We'd still need a plain CommandCounterIncrement facility, which means
that actually a subtransaction would have to be a group of CIDs
On Fri, May 28, 2004 at 03:19:29PM -0400, Tom Lane wrote:
We'd still need a plain CommandCounterIncrement facility, which means
that actually a subtransaction would have to be a group of CIDs not just
one.
Right, this is why I suggested runlength (the group is contiguous).
So there'd also
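The runlength idea quoted above can be sketched as a range lookup: each subtransaction owns a contiguous run of command ids, so mapping a CID back to its subtransaction is a scan over (start, end) pairs. The layout below is invented for the example:

```python
# Sketch of "a subtransaction is a contiguous group of CIDs": each run
# is (start, end, owner); end is exclusive. Layout is invented here.
runs = [(0, 3, 'top'), (3, 5, 'sub1'), (5, 9, 'top')]

def subxact_for(cid):
    # Find which subtransaction issued this command id.
    for start, end, owner in runs:
        if start <= cid < end:
            return owner
    return None
```

Note that 'top' owns two runs: plain CommandCounterIncrement keeps handing out CIDs after a subtransaction ends, so the top level's CIDs are not contiguous even though each subtransaction's are.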
On Fri, May 28, 2004 at 04:45:28PM -0400, Tom Lane wrote:
However, I just remembered why we rejected this idea to start with :-(.
If we do it this way then when the overall xact commits, we no longer
have state that tells which particular tuples are good or not. We would
have to trawl for
I wrote a while ago
So pg_subtrans remains the same, and we assign a new Xid to each
subtransaction. Each tuple gets Xmin/Xmax according to the Xid of the
current subtransaction. Within the transaction tree we don't use the
Xid to check for visibility, but Cmin/Cmax and the abort bitmap.
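The rule quoted above (within the transaction tree, decide from Cmin/Cmax and the abort bitmap rather than Xids) can be sketched in a few lines of Python. This is an illustration of the proposal, not PostgreSQL source; the abort bitmap is modeled as a set of aborted command ids:

```python
# Toy visibility check for tuples created inside the current transaction
# tree: only Cmin/Cmax and the set of aborted CIDs matter, not Xids.
from collections import namedtuple

Tup = namedtuple('Tup', 'cmin cmax')  # cmax is None if never deleted

def visible(tup, cur_cid, aborted_cids):
    if tup.cmin in aborted_cids:
        return False                 # inserting command aborted
    if tup.cmin >= cur_cid:
        return False                 # inserted by a later/current command
    if (tup.cmax is not None and tup.cmax < cur_cid
            and tup.cmax not in aborted_cids):
        return False                 # deleted by an earlier committed command
    return True
```

An aborted deleting command leaves the tuple visible again, which is exactly the case the abort bitmap has to catch.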
On Wed, 26 May 2004, Tom Lane wrote:
if you have to do an UPDATE that affects every row of a large table
UPDATE tab SET col = col + 1
which leaves you with N live rows, N dead rows, and lots of pain to get
back down to a less-than-twice-normal-size table. (Traditional way is
What would be useful for that (as of 7.5) is a dummy ALTER TABLE TYPE,
viz
ALTER TABLE tab ALTER COLUMN col TYPE same-type-it-already-has
I think we should add special syntax for this purpose, since I (or someone
else later on) would like to see all possible cases of alter column
Dennis Bjorklund [EMAIL PROTECTED] writes:
On Wed, 26 May 2004, Tom Lane wrote:
I'm not real sure where to document this
trick but it seems like we ought to mention it someplace.
Isn't it better to detect an UPDATE without a WHERE clause and do that
update in the same way as the alter table above?
I did a CLUSTER - it took just under 12 hours. The original load of the
table (including parsing all the geometries) took about 4 hours (+4 to
build the indexes and vacuum analyse). I don't know why the cluster is
so much slower...
My SELECT * FROM table LIMIT 1; is now fast.
Thanks for the
On Wed, 26 May 2004 18:17:55 -0400, Tom Lane [EMAIL PROTECTED] wrote:
(Some days I think we should dump
VACUUM FULL, because it's optimized for a case that's no longer very
interesting...)
So we still have to stick with VACUUM FULL for some time, right?
The next set of compatibility breakers I'm
Manfred Koizar [EMAIL PROTECTED] writes:
XX.. ..y.
and when y is updated the new version will be stored in a lower block
Oh? What makes you think that? I see no guarantee of it.
AFAICS this would make vacuum.c much simpler (no more chain moves).
How will
On Thu, 27 May 2004 14:23:07 -0400, Tom Lane [EMAIL PROTECTED] wrote:
and when y is updated the new version will be stored in a lower block
Oh? What makes you think that? I see no guarantee of it.
You're right, I see only a tendency, because the majority of free space
is before the last block
On Thu, May 27, 2004 at 09:52:30PM +0200, Manfred Koizar wrote:
I have no clear explanation at the moment, just some fuzzy ideas that
are beginning to crystallise. I'm messing around with heap tuple
headers again, and the Xvac field seems to get in the way, unless I can
cut down the number
I have a table with about 16,000,000 rows in it.
When I do a:
SELECT * FROM table LIMIT 1;
it takes about 10 minutes (that's about how long it takes to do a full
sequential scan).
I had originally thought that there might be a large number of
wasted/retired tuples in the table so I vacuum
David Blasby wrote:
I have a table with about 16,000,000 rows in it.
When I do a:
SELECT * FROM table LIMIT 1;
it takes about 10 minutes (that's about how long it takes to do a full
sequential scan).
I had originally thought that there might be a large number of
wasted/retired tuples in the
Gaetano Mendola wrote:
David Blasby wrote:
I have a table with about 16,000,000 rows in it.
When I do a:
SELECT * FROM table LIMIT 1;
it takes about 10 minutes (that's about how long it takes to do a full
sequential scan).
I had originally thought that there might be a large number of
David Blasby [EMAIL PROTECTED] writes:
I have a table with about 16,000,000 rows in it.
When I do a:
SELECT * FROM table LIMIT 1;
it takes about 10 minutes (that's about how long it takes to do a full
sequential scan).
The only explanation that comes to mind is huge amounts of dead space
David Blasby [EMAIL PROTECTED] writes:
I just did another vacuum analyse on the table:
Ah, here we go:
INFO: csn_edges: found 0 removable, 16289929 nonremovable row
versions in 2783986 pages
That works out to just under 6 rows per 8K page, which wouldn't be too
bad if the rows are 1K wide
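Tom's arithmetic from the VACUUM output checks out; assuming the default 8K page size:

```python
# Check the quoted VACUUM numbers: rows per page and total heap size.
rows = 16_289_929       # nonremovable row versions from the INFO line
pages = 2_783_986       # pages from the INFO line
page_size = 8192        # default BLCKSZ assumed

rows_per_page = rows / pages            # just under 6 rows per page
heap_gib = pages * page_size / 1024**3  # on-disk size of the heap
```

That is roughly 5.85 rows per page and about a 21 GiB heap, so unless the rows really average ~1K wide, most of each page is dead space.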
Tom Lane wrote:
INFO: csn_edges: found 0 removable, 16289929 nonremovable row
versions in 2783986 pages
That works out to just under 6 rows per 8K page, which wouldn't be too
bad if the rows are 1K wide on average, but are they? (You might want
to run contrib/pgstattuple to get some exact
David Blasby [EMAIL PROTECTED] writes:
I just did another vacuum analyse on the table:
Ah, here we go:
INFO: csn_edges: found 0 removable, 16289929 nonremovable row
versions in 2783986 pages
That works out to just under 6 rows per 8K page, which wouldn't be too
bad if the rows are 1K
It wouldn't; you'd need vacuum full to collapse out the dead space.
You could also try CLUSTER which is faster than VACUUM FULL when most
of the data has to be moved anyway. (Some days I think we should dump
VACUUM FULL, because it's optimized for a case that's no longer very
interesting...)
Out
Christopher Kings-Lynne [EMAIL PROTECTED] writes:
It wouldn't; you'd need vacuum full to collapse out the dead space.
You could also try CLUSTER which is faster than VACUUM FULL when most
of the data has to be moved anyway. (Some days I think we should dump
VACUUM FULL, because it's optimized
David Blasby [EMAIL PROTECTED] writes:
I have another copy of this table in another database - vacuum analyse
verbose says it's only 1,500,000 pages (vs 2,800,000).
Hmm ... this is consistent with the idea that you did an UPDATE
affecting every row of the table. That would generate N new rows
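The two quoted page counts fit that theory: rewriting every row adds roughly N dead versions, which should about double the heap. A quick check in Python:

```python
# Does "UPDATE affecting every row" explain the bloat? Rewriting all
# N rows adds ~N dead versions, so pages should roughly double.
clean_pages = 1_500_000    # page count of the untouched copy
bloated_pages = 2_783_986  # page count of the suspect table

ratio = bloated_pages / clean_pages  # ~1.86, close to the expected 2x
```

The ratio comes out a bit under 2 because some new versions fit into existing free space instead of extending the table.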
Tom Lane wrote:
of dead tuples followed by a lot of pages worth of live tuples. Plain
VACUUM cannot do much to fix this since it doesn't move rows around.
VACUUM FULL will fix it, but its index-update overhead is high enough
that CLUSTER is a better deal.
Tom: I was interested in performance