Greg Stark <[EMAIL PROTECTED]> writes:
> It seems like it would be simpler to leave the core in charge the whole
> time.  It would call an AM method to initialize state, then call an AM
> method for each tuple that should be indexed, and lastly call a finalize
> method.
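[The core-driven build loop Greg proposes can be sketched as below. This is a toy model, not PostgreSQL code; the class and method names (`IndexBuildAM`, `build_init`, `build_insert`, `build_finalize`, `core_build_index`) are invented for illustration.]

```python
class IndexBuildAM:
    """Toy access method exposing the three callbacks Greg describes:
    initialize state, accept one tuple at a time, then finalize."""

    def build_init(self):
        self.entries = []

    def build_insert(self, key, tid):
        # Collect (key, tuple-id) pairs; a real AM would build its
        # own structure here.
        self.entries.append((key, tid))

    def build_finalize(self):
        self.entries.sort()
        return len(self.entries)


def core_build_index(am, heap_tuples):
    # The core stays in charge the whole time: it scans the heap and
    # calls the AM once per tuple that should be indexed.
    am.build_init()
    for key, tid in heap_tuples:
        am.build_insert(key, tid)
    return am.build_finalize()
```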
[ shrug... ]  I'm uninterested in refactoring the AM API right now.  We've
got enough stuff to deal with before beta, not to mention an uncommitted
bitmap AM patch that it would certainly break.

> Also, I think there may be another problem here with INSERT_IN_PROGRESS.
> I'm currently testing unique index builds while pgbench is running and
> I'm consistently getting unique index violations from phase 1.  I think
> what's happening is that inserts that haven't committed yet (and hence
> ought to be invisible to us) are hitting unique-constraint violations
> against older versions that are still alive to us.

Hmm ... it's certainly highly likely that we could pick up multiple
versions of a row during pass 1, but the uniqueness checker should notice
that some versions are dead?

Oooh, no, there's a problem: the tuple we are inserting could become dead
in the interval between when we pick it up and when we put it into the
index.  So we could try to put multiple versions of the same row into the
uniqueness check.

Right at the moment, unique index builds with this mechanism are looking
unfixably broken :-(.  Anyone see any chance at all of making them work?

Maybe we should just cut our losses and go with the nonunique case only.
That one is pretty easy: just stuff everything in the index ...

			regards, tom lane
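[The race Tom describes can be modeled with a toy MVCC simulation: pass 1 picks up a tuple version that is live at scan time, a concurrent UPDATE then kills it and writes a replacement, and both versions of the same logical row reach the uniqueness check. All names and structures here are invented for illustration and are not PostgreSQL internals.]

```python
class Version:
    """Toy row version with xmin/xmax transaction markers."""
    def __init__(self, key, xmin, xmax=None):
        self.key, self.xmin, self.xmax = key, xmin, xmax


def unique_check(index_keys, key):
    # Simplified uniqueness check: reject any key already indexed.
    if key in index_keys:
        raise ValueError("duplicate key value violates unique constraint")
    index_keys.add(key)


heap = [Version(key=42, xmin=100)]   # one live row version
scanned = list(heap)                 # pass 1 picks up the tuple

# Concurrent UPDATE between pick-up and index insertion: the old
# version becomes dead and a new version of the same row appears.
heap[0].xmax = 200
heap.append(Version(key=42, xmin=200))
scanned.append(heap[1])              # pass 1 also sees the new version

index_keys = set()
errors = 0
for v in scanned:
    try:
        # Both versions carry key 42, so the second one trips the
        # uniqueness check even though only one is actually live.
        unique_check(index_keys, v.key)
    except ValueError:
        errors += 1
```

With these assumptions, `errors` ends up as 1: the second version of the same logical row is reported as a spurious unique violation, which is the failure mode Tom is pointing at.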