Re: [PATCHES] PL/TCL Patch to prevent postgres from becoming multithreaded

2007-09-16 Thread Stefan Kaltenbrunner
Tom Lane wrote:
 Stefan Kaltenbrunner [EMAIL PROTECTED] writes:
 yeah testing that patch now (seems to apply just fine on -HEAD) but it
 seems that there is something strange going on because I just got:
 
 ! ERROR:  could not read block 2 of relation 1663/16384/2606: read only 0 of 
 8192 bytes
 
 Is that repeatable?  What sort of filesystem are you testing on?
 (soft-mounted NFS by any chance?)

doesn't seem to be repeatable :-(

on the positive side - the pltcl patch does seem to fix the
regression failures quagga triggered earlier this year. I did about a
dozen full buildfarm runs with that patch and about ten manual
executions of the pltcl regression tests without any sign of
misbehaviour, so this seems like a clear candidate for at least -HEAD.


Stefan

---(end of broadcast)---
TIP 4: Have you searched our list archives?

   http://archives.postgresql.org


Re: [PATCHES] HOT synced with HEAD

2007-09-16 Thread Pavan Deolasee
On 9/16/07, Tom Lane [EMAIL PROTECTED] wrote:

 Attached is as far as I've gotten with reviewing the HOT patch; I see
 that Pavan is still fooling with it so we'd better re-sync.



I am done with most of the items on my plate. I was not sure how
far you had gotten with the patch, so I was trying to finish as many
items as possible myself. Now that I know you have started refactoring
the code, I will try not to change it unless it's urgent. And when it's
required, I will send an add-on patch just as I did earlier.

Let me know if you want me to focus on something at this point.



 * Many API cleanups, in particular I didn't like what had been done
   to heap_fetch and thought it better to split out the chain-following
   logic



I liked those changes. I am wondering if we could have avoided
duplicating the chain-following code in heap_hot_search_buffer and
index_getnext. But I trust your judgment.

I also liked the way you reverted the API changes to various index build
methods.

I will test the patch in detail tomorrow.

Thanks,
Pavan


-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com


Re: [PATCHES] Latest README.HOT

2007-09-16 Thread Pavan Deolasee
On 9/16/07, Tom Lane [EMAIL PROTECTED] wrote:

 Forgot to include this in the patch ...

 I'm still unhappy about too much complexity for the VACUUM FULL case,
 but some other issues have gone away.



The entire complexity in the VACUUM FULL code is for bookkeeping
purposes, so that at the end we can recheck the index and heap tuple
counts. VACUUM FULL converts HOT tuples into non-HOT tuples when it
moves them around, and that introduces the extra complexity. Otherwise
there is no significant change to the VACUUM FULL logic itself.

Good to hear that we are now comfortable with most of the other items.

Thanks,
Pavan

-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com


Re: [PATCHES] HOT synced with HEAD

2007-09-16 Thread Tom Lane
Pavan Deolasee [EMAIL PROTECTED] writes:
 I liked those changes. I am wondering if we could have avoided
 duplicating the chain following code in heap_hot_search_buffer and
 index_getnext.

I wasn't totally thrilled with duplicating that code, but the callers of
heap_hot_search are only interested in finding a single visible tuple,
whereas index_getnext has to find them all on successive visits.
Exposing a capability to do that in heap_hot_search seemed to make its
API ugly enough that duplicated code was better.  (I really didn't like
the original setup where it was supposed to find the next tuple after
the one passed in, thereby forcing the caller to handle the first
chain member...)
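For context, the chain-following loop under discussion can be sketched with a self-contained toy model; the struct, field names, and helper below are simplified stand-ins for the real line pointers, tuple headers, and visibility checks in heapam.c, not the patch's actual code:

```c
#include <assert.h>
#include <stdbool.h>

#define InvalidOffset 0
#define MAXTUPLES 16

/* Toy stand-in for a heap tuple plus its line pointer. */
typedef struct
{
    bool    visible;      /* stands in for a real visibility check */
    bool    hot_updated;  /* HEAP_HOT_UPDATED: a newer version follows on this page */
    int     next;         /* offset of the next chain member, or InvalidOffset */
} ToyTuple;

/*
 * Follow a HOT chain from root_off and return the offset of the first
 * visible member, or InvalidOffset if the whole chain is dead.  This is
 * the shape of the loop duplicated between heap_hot_search_buffer
 * (wants one visible tuple) and index_getnext (wants them all, across
 * successive calls).
 */
static int
hot_search(const ToyTuple *page, int root_off)
{
    int off = root_off;

    while (off != InvalidOffset)
    {
        const ToyTuple *tup = &page[off];

        if (tup->visible)
            return off;
        if (!tup->hot_updated)
            break;          /* chain ends here: no successor to try */
        off = tup->next;
    }
    return InvalidOffset;
}
```

A caller would hand in the root line pointer's offset; in the real code the loop additionally checks xmin/xmax linkage to detect broken chains.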

BTW, I'm in the process of taking out the separate HEAPTUPLE_DEAD_CHAIN
return value from HeapTupleSatisfiesVacuum.  After looking through
all the callers I concluded that this complicated life more than it
helped.  The places that actually want to make the distinction can
check HeapTupleIsHotUpdated easily enough for themselves, and in
several of the places that don't want to make the distinction it's
notationally cumbersome to deal with it.  For instance, in the patch
as I posted it last night, both heap_hot_search and index_getnext
probably fail to detect all-dead HOT chains, because some of the
chain members are likely to be HEAPTUPLE_DEAD_CHAIN rather than plain
DEAD, and they're not coping.

regards, tom lane



Re: [PATCHES] HOT synced with HEAD

2007-09-16 Thread Pavan Deolasee
On 9/16/07, Tom Lane [EMAIL PROTECTED] wrote:



 BTW, I'm in process of taking out the separate HEAPTUPLE_DEAD_CHAIN
 return value from HeapTupleSatisfiesVacuum.


I agree. I myself suggested doing so earlier in the discussion. (I actually
even removed this before I sent out the add-on patch last night, but then
reverted because I realized it is required in at least one place.)
The place where I thought it's required is to avoid marking an index tuple
dead even though the corresponding root tuple is dead and the root tuple was
HOT-updated. But it seems you have already put in a different mechanism
to handle that, so we should be able to get rid of HEAPTUPLE_DEAD_CHAIN.

Thanks,
Pavan




-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com


Re: [PATCHES] HOT synced with HEAD

2007-09-16 Thread Tom Lane
BTW, I am still looking for a reason for the hard-prune logic to live.
It seems to complicate matters far more than it's worth --- in
particular the way that the WAL replay representation is set up seems
confusing and fragile.  (If prune_hard is set, the redirect entries
mean something completely different.)  There was some suggestion that
VACUUM FULL has to have it, but unless I see proof of that I'm thinking
of taking it out.

regards, tom lane



Re: [PATCHES] HOT synced with HEAD

2007-09-16 Thread Pavan Deolasee
On 9/16/07, Tom Lane [EMAIL PROTECTED] wrote:


 Something else I was just looking at: in the pruning logic, SetRedirect
 and SetDead operations are done at the same time that we record them for
 the eventual WAL record creation, but SetUnused operations are
 postponed and only done at the end.  This seems a bit odd/nonorthogonal.
 Furthermore it's costing extra code complexity: if you were to SetUnused
 immediately then you wouldn't need that bitmap thingy to prevent you
 from doing it twice.  I think that the reason it's like that is probably
 because of the problem of potential multiple visits to a DEAD heap-only
 tuple, but it looks to me like that's not really an issue given the
 current form of the testing for aborted tuples, which I have as

 if (HeapTupleSatisfiesVacuum(tuple->t_data, global_xmin, buffer)
 == HEAPTUPLE_DEAD && !HeapTupleIsHotUpdated(tuple))
 heap_prune_record_unused(nowunused, nunused, unusedbitmap,
  rootoffnum);

 ISTM that if HeapTupleIsHotUpdated is false, we could simply SetUnused
 immediately, because any potential chain members after this one must
 be dead too, and they'll get reaped by this same bit of code --- there
 is no need to preserve the chain.  (The only case we're really covering
 here is XMIN_INVALID, and any later chain members must certainly be
 XMIN_INVALID as well.)  When the HOT-chain is later followed, we'll
 detect chain break anyway, so I see no need to postpone clearing the
 item pointer.



So are you suggesting we go back to the earlier way of handling
aborted tuples separately? But then we cannot do that by simply
checking for !HeapTupleIsHotUpdated: there could be several aborted
tuples at the end of the chain, of which all but one are marked HotUpdated.
Or are you suggesting we also check for XMIN_INVALID to detect
aborted tuples?

If we handle aborted tuples differently, then we don't need that extra
bitmap and can in fact set the line pointer unused immediately. Otherwise,
as you rightly pointed out, we might break a chain before we prune
it.

Thanks,
Pavan

-- 
Pavan Deolasee
EnterpriseDB http://www.enterprisedb.com


Re: [PATCHES] HOT synced with HEAD

2007-09-16 Thread Tom Lane
Pavan Deolasee [EMAIL PROTECTED] writes:
 So are you suggesting we go back to the earlier way of handling
 aborted tuples separately? But then we cannot do that by simply
 checking for !HeapTupleIsHotUpdated. There could be several aborted
 tuples at the end of the chain of which all but one are marked HotUpdated.
 Or are you suggesting we also check for XMIN_INVALID to detect
 aborted tuples?

Yeah.  As the code stands, anything that's XMIN_INVALID will be
considered not-HotUpdated (look at the macro...).  So far I've seen no
place where there is any value in following a HOT chain past such a
tuple --- do you see any?  Every descendant tuple must be XMIN_INVALID
as well ...
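The macro behavior being referred to can be sketched like this; the flag bit values and names below are illustrative stand-ins for the real htup.h definitions:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative flag bits, not the real htup.h values. */
#define HEAP_XMIN_INVALID   0x0200  /* t_infomask: inserting xact aborted */
#define HEAP_HOT_UPDATED    0x4000  /* t_infomask2: tuple was HOT-updated */

typedef struct
{
    uint16_t t_infomask;
    uint16_t t_infomask2;
} ToyHeader;

/*
 * Sketch of the point being made: even when HEAP_HOT_UPDATED is set,
 * an XMIN_INVALID (aborted-inserter) tuple is reported as not
 * HOT-updated, so both chain following and pruning stop treating it
 * as a live chain link.
 */
static bool
is_hot_updated(const ToyHeader *tup)
{
    return (tup->t_infomask2 & HEAP_HOT_UPDATED) != 0 &&
           (tup->t_infomask & HEAP_XMIN_INVALID) == 0;
}
```

Since every descendant of an aborted tuple must itself have an invalid xmin, stopping at the first such tuple loses nothing.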

regards, tom lane
