On Mon, Jan 14, 2008 at 10:10:54PM -0500, Tom Lane wrote:
Gregory Stark [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] writes:
I went through all of the heap_beginscan calls in the code last night.
pgstattuple was broken but AFAICS none of the other callers care about
the visitation order. I wonder though about third-party add-ons :-(
On Mon, 2008-01-14 at 22:10 -0500, Tom Lane wrote:
It's a tossup from here. Anybody have a strong opinion one way or the
other?
To me a heapscan means read all the tuples (without implying order)
and an ordered heap scan is a special case that should be made explicit.
But this is not a
Jeff Davis [EMAIL PROTECTED] writes:
On Mon, 2008-01-14 at 22:10 -0500, Tom Lane wrote:
It's a tossup from here. Anybody have a strong opinion one way or the
other?
To me a heapscan means read all the tuples (without implying order)
and an ordered heap scan is a special case that should be made explicit.
Gregory Stark [EMAIL PROTECTED] writes:
I had another thought. Perhaps in use_assert_checking mode we should have it
start from a random position every time. Or perhaps just a random position
from amongst the first n pages.
Interesting idea, but I fear it'd make a lot of the regression tests
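Stark's debugging idea can be sketched with a toy model (a Python sketch under stated assumptions, not backend code; `choose_start_page` and the `assert_enabled` flag are illustrative names, not real PostgreSQL symbols):

```python
import random

# Toy model of the idea: in assert-enabled builds, start each heapscan at
# a random page so any caller that silently assumes page-zero ordering
# fails fast under testing; normal builds keep the deterministic start.
def choose_start_page(nblocks, assert_enabled, rng=random):
    if assert_enabled and nblocks > 0:
        return rng.randrange(nblocks)  # random start flushes out order bugs
    return 0                           # production behavior: page zero

rng = random.Random(42)
starts = [choose_start_page(8, True, rng) for _ in range(100)]
assert all(0 <= s < 8 for s in starts)   # always a valid block number
assert choose_start_page(8, False) == 0  # deterministic outside testing
```

Tom's objection applies directly: any regression test whose output depends on physical row order would become nondeterministic under such a scheme.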
Tom Lane wrote:
I've committed a patch to do that. Please test CVS HEAD and see if you
still see problems.
I'm happy to hear you found something and I will try CVS HEAD in a minute.
In the meantime let me report that the cluster issue happens with GIST
as well. I have loaded 5 million rows in
Tom Lane [EMAIL PROTECTED] writes:
Oooh ... I can't be sure that this is what's biting you, but I
definitely see a bug that seems to match the symptoms. As the comments
in index.c point out, CREATE INDEX CONCURRENTLY works like this:
* validate_index() works by first gathering all the
Hannes Dorbath [EMAIL PROTECTED] writes:
In the meantime let me report that the cluster issue happens with GIST
as well. ...
But as far as I understand, this is already covered by your theory.
Right, the bug is independent of which index AM you use (though the
symptoms may vary).
Tom Lane wrote:
I wrote:
I think it's okay for CREATE INDEX CONCURRENTLY to use bulk-read access
strategy (that is, seqscan using a limited number of buffers), but it
has to be able to force the scan to start at page zero.
I've committed a patch to do that. Please test CVS HEAD and see if you still see problems.
On Sun, 2008-01-13 at 18:52 -0500, Tom Lane wrote:
The scan is done using the regular heapscan code, which in 8.3 has been
modified to enable synchronized scanning, which means it might start
from the middle of the table and wrap around. If that happens, the
merge join will get totally
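The failure mode Tom describes can be illustrated with a toy model (hypothetical Python, not the real heapam code): a synchronized scan that joins an in-progress "pack" of scanners starts at an arbitrary page and wraps around, so its tuples no longer arrive in ascending block order.

```python
# Toy model: a synchronized seqscan may begin at whatever page the current
# pack of scanners has reached, then wrap around past the last block.
def seqscan_pages(nblocks, start_page=0):
    """Yield block numbers in scan order, wrapping at the end of the table."""
    for i in range(nblocks):
        yield (start_page + i) % nblocks

# Joining mid-table yields a rotated, non-ascending ordering:
assert list(seqscan_pages(5, start_page=3)) == [3, 4, 0, 1, 2]

# The merge step in validate_index() assumes ascending order, which only
# holds when the scan is forced to start at block zero:
assert list(seqscan_pages(5, start_page=0)) == [0, 1, 2, 3, 4]
```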
Jeff Davis [EMAIL PROTECTED] writes:
On Sun, 2008-01-13 at 18:52 -0500, Tom Lane wrote:
I wonder whether there are any other places that are silently assuming
that heapscans start from page zero ...
I considered that question when implementing sync scans, but I could not
think of any
Tom Lane [EMAIL PROTECTED] writes:
Jeff Davis [EMAIL PROTECTED] writes:
On Sun, 2008-01-13 at 18:52 -0500, Tom Lane wrote:
I wonder whether there are any other places that are silently assuming
that heapscans start from page zero ...
I considered that question when implementing sync scans,
Gregory Stark [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] writes:
I went through all of the heap_beginscan calls in the code last night.
pgstattuple was broken but AFAICS none of the other callers care about
the visitation order. I wonder though about third-party add-ons :-(
Gregory Stark wrote:
Gregory Stark [EMAIL PROTECTED] writes:
On the other hand we can't just ignore all vacuums because someone could issue
a manual vacuum inside a transaction (I think?).
Doh, ignore this. sigh.
I started from scratch to put up a test case. I cannot trigger "ERROR:
item pointer (0,1) already exists" again as the deadlock issue reported
by Gregory Stark hit me every time. Is this fixed in RC1? Can I retry
with that?
On Jan 13, 2008 7:50 PM, Hannes Dorbath [EMAIL PROTECTED] wrote:
I started from scratch to put up a test case. I cannot trigger "ERROR:
item pointer (0,1) already exists" again as the deadlock issue reported
by Gregory Stark hit me every time. Is this fixed in RC1? Can I retry
with that?
No,
Hannes Dorbath wrote:
Guillaume Smet wrote:
On Jan 13, 2008 7:50 PM, Hannes Dorbath [EMAIL PROTECTED]
wrote:
I started from scratch to put up a test case. I cannot trigger "ERROR:
item pointer (0,1) already exists" again as the deadlock issue reported
by Gregory Stark hit me every time. Is this fixed in RC1? Can I retry
with that?
Guillaume Smet wrote:
On Jan 13, 2008 7:50 PM, Hannes Dorbath [EMAIL PROTECTED] wrote:
I started from scratch to put up a test case. I cannot trigger "ERROR:
item pointer (0,1) already exists" again as the deadlock issue reported
by Gregory Stark hit me every time. Is this fixed in RC1? Can I retry
with that?
Hannes Dorbath wrote:
ERROR: relation "ts_test_tsv" already exists
test=# drop INDEX ts_test_tsv ;
DROP INDEX
This is a general thing I'd like to ask. If the creation of an index
fails, why is it nevertheless there? No matter if deadlock or my GIN
error, why isn't the whole operation rolled back?
Hannes Dorbath [EMAIL PROTECTED] writes:
This is a general thing I'd like to ask. If the creation of an index
fails, why is it nevertheless there?
It's a rather ugly consequence of the fact that CREATE INDEX
CONCURRENTLY requires more than one transaction. If the later ones
fail, the invalid index is left behind.
Tom Lane wrote:
Hannes Dorbath [EMAIL PROTECTED] writes:
This is a general thing I'd like to ask. If the creation of an index
fails, why is it nevertheless there?
It's a rather ugly consequence of the fact that CREATE INDEX
CONCURRENTLY requires more than one transaction. If the later ones
Hannes Dorbath wrote:
Currently all I can say is: "It happens the first time after I bulk load
data into that table."
I have the bad feeling that I need to correct this into "It happens when
autovacuum is active on the table."
Is it by any chance possible that CREATE INDEX CONCURRENTLY might
Oooh ... I can't be sure that this is what's biting you, but I
definitely see a bug that seems to match the symptoms. As the comments
in index.c point out, CREATE INDEX CONCURRENTLY works like this:
* validate_index() works by first gathering all the TIDs currently in the
* index, using a
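The merge step described above can be sketched in miniature (hypothetical Python, not the real index.c code; `missing_from_index` is an illustrative name). The index TIDs are gathered and sorted, the heap is then scanned, and any heap TID absent from the sorted snapshot gets inserted into the index. A single merge pass is only correct if the heap TIDs arrive in ascending order:

```python
# Toy sketch of the validate_index() merge: find heap TIDs missing from a
# sorted snapshot of the index's TIDs in one pass over the heap scan.
def missing_from_index(heap_tids, index_tids_sorted):
    """Return heap TIDs absent from the index; assumes heap_tids ascend."""
    missing = []
    i = 0
    for tid in heap_tids:              # must arrive in ascending order!
        while i < len(index_tids_sorted) and index_tids_sorted[i] < tid:
            i += 1
        if i >= len(index_tids_sorted) or index_tids_sorted[i] != tid:
            missing.append(tid)
    return missing

index_tids = [(0, 1), (0, 3), (1, 2)]

# Ascending heap scan: the one tuple the index lacks is found correctly.
assert missing_from_index([(0, 1), (0, 2), (0, 3), (1, 2)],
                          index_tids) == [(0, 2)]

# A scan that wrapped around (started at block 1) violates the ordering
# assumption; the merge now wrongly reports (0,1) and (0,3) as missing,
# even though the index already contains them.
assert missing_from_index([(1, 2), (0, 1), (0, 2), (0, 3)],
                          index_tids) == [(0, 1), (0, 2), (0, 3)]
```

Attempting to insert those "missing" TIDs would collide with entries already present, matching the reported "ERROR: item pointer (0,1) already exists".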
Hannes Dorbath [EMAIL PROTECTED] writes:
I have the bad feeling that I need to correct this into "It happens when
autovacuum is active on the table."
Ah-hah, I realize how to explain that too, now. If you start the CREATE
INDEX CONCURRENTLY while an autovacuum is in progress on that table, the
I wrote:
Hannes Dorbath [EMAIL PROTECTED] writes:
I have the bad feeling that I need to correct this into "It happens when
autovacuum is active on the table."
Ah-hah, I realize how to explain that too, now.
Hmm, no, scratch that: neither VACUUM nor ANALYZE use a standard
heapscan, so they
I wrote:
I think it's okay for CREATE INDEX CONCURRENTLY to use bulk-read access
strategy (that is, seqscan using a limited number of buffers), but it
has to be able to force the scan to start at page zero.
I've committed a patch to do that. Please test CVS HEAD and see if you
still see problems.
Gregory Stark [EMAIL PROTECTED] writes:
Further poking around shows that the unrecognized locktag is because
lmgr.c:DescribeLockTag was never taught about virtual xids.
That's fixed, thanks for the patch.
The pid it's waiting on is long since gone but looks like it was probably an
autovacuum
Gregory Stark [EMAIL PROTECTED] writes:
The pid it's waiting on is long since gone but looks like it was probably an
autovacuum process. I have a vague recollection that you had rigged CREATE
INDEX CONCURRENTLY to ignore vacuum processes when checking for conflicting
processes. Since any such
Tom Lane [EMAIL PROTECTED] writes:
It cannot be one of the first two, because those only block
for xacts that *already have* a conflicting lock. The problem must be
at the third wait step, which waits out all xacts that might conceivably
be interested in recently-dead tuples that are not in the index.
Gregory Stark [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] writes:
Now an unindexed dead tuple is not a problem from vacuum's point of
view, nor does ANALYZE care, so AFAICS there is no need for this step
to wait for autovacuum processes --- nor indeed for manual vacuums.
Also,
Gregory Stark wrote:
I am hoping our other things which ignore VACUUM such as the globalxmin
calculation are careful not to ignore VACUUM ANALYZE processes?
It doesn't matter -- the ANALYZE is done in a separate transaction (so
the VACUUM part is ignored, the ANALYZE part is not).
--
Alvaro
Gregory Stark [EMAIL PROTECTED] writes:
Tom Lane [EMAIL PROTECTED] writes:
I didn't have any luck reproducing either of these behaviors --- maybe
it's data-dependent. Can you extract a test case?
I haven't been able to reproduce this either but I produced an entirely
different problem:
Gregory Stark [EMAIL PROTECTED] writes:
On the other hand we can't just ignore all vacuums because someone could issue
a manual vacuum inside a transaction (I think?).
Doh, ignore this. sigh.
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Get trained by Bruce