[Sorry for the duplicate -- I accidentally sent the previous before I was
finished editing it]

Tom Lane <[EMAIL PROTECTED]> writes:

> I think we can solve this by having IndexBuildHeapScan not index
> DELETE_IN_PROGRESS tuples if it's doing a concurrent build.  The problem
> of old transactions trying to use the index does not exist, because
> we'll wait 'em out before marking the index valid, so we need not
> worry about preserving validity for old snapshots.  And if the deletion
> does in fact roll back, we'll insert the tuple during the second pass,
> and catch any uniqueness violation at that point.

Actually that's a bit of a pain. This function is called from the AM functions,
so it has no access to whether the build is concurrent or not. I would have to
stuff a flag into the IndexInfo or change the AM API.

The API is pretty bizarre around this. Core calls the AM to build the index,
the AM calls this function back in core to do the scan, and then this function
calls back into the AM again to handle the inserts.

It seems like it would be simpler to leave core in charge for the whole build.
It would call an AM method to initialize state, then call an AM method for
each tuple that should be indexed, and finally call a finalize method.

Also, I think there may be another problem here with INSERT_IN_PROGRESS. I'm
currently testing unique index builds while pgbench is running, and I'm
consistently getting unique index violations from phase 1. I think what's
happening is that inserts that haven't committed yet (and hence ought to be
invisible to us) are hitting unique-constraint violations against older
versions that are still live to us.

  Gregory Stark
  EnterpriseDB          http://www.enterprisedb.com
