On Thu, Dec 20, 2018 at 9:29 AM Alvaro Herrera <alvhe...@2ndquadrant.com> wrote:
> I can patch that one too, but it's separate -- it goes back to pg10 I
> think (the other to pg11) -- and let's think about the lock mode for a
> bit: as far as I can see, ShareUpdateExclusive is enough; the difference
> with AccessExclusive is that it lets through modes AccessShare, RowShare
> and RowExclusive. According to mvcc.sgml, that means we're letting
> SELECT, SELECT FOR SHARE/UPDATE and INS/UPD/DEL in the partition being
> detached, respectively. All those seem acceptable to me. All DDL is
> blocked, of course.
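
To make that concrete, here's a quick sketch -- made-up table names, and an
explicit LOCK TABLE standing in for a detach running at the reduced lock
level, since the conflict matrix is what matters:

    CREATE TABLE parent (k int) PARTITION BY RANGE (k);
    CREATE TABLE part_2018 PARTITION OF parent
        FOR VALUES FROM (20180101) TO (20190101);

    -- session 1: hold the lock level under discussion on the partition
    BEGIN;
    LOCK TABLE part_2018 IN SHARE UPDATE EXCLUSIVE MODE;

    -- session 2: AccessShare / RowShare / RowExclusive, none of these block
    SELECT count(*) FROM part_2018;
    SELECT * FROM part_2018 FOR UPDATE;
    INSERT INTO part_2018 VALUES (20180615);

    -- session 2: DDL takes AccessExclusiveLock and waits for session 1
    ALTER TABLE part_2018 ADD CHECK (k > 0);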

One problem I thought about is the partition check constraint. When we
attach, we need to validate that the check constraint holds for every row
in the partition, which means that we need to keep new rows from being
added while we're attaching. But when we detach, there's no corresponding
problem. If we detach a leaf partition, the check constraint just
disappears; if we detach an intermediate partitioned table, the check
constraint loses some clauses. Either way, there's no way for a row that
was previously acceptable to become unacceptable after the detach
operation.

There is, however, the possibility that the invalidation messages generated
by the detach operation might not get processed immediately by other
backends, so those backends might continue to enforce the partition
constraint for some period of time after it changes. That seems OK. There
would be trouble, though, if we freed the old copy of the partition
constraint during a relcache rebuild while somebody still had a pointer to
it. I'm not sure that can happen, though.

> So right now if you insert via the parent table, detaching the partition
> would be blocked because we also acquire lock on the parent (and detach
> acquires AccessExclusive on the parent). But after DETACH CONCURRENTLY,
> inserting via the parent should let rows to the partition through,
> because there's no conflicting lock to stop them. Inserting rows to the
> partition directly would be let through both before and after DETACH
> CONCURRENTLY.

Yeah, that's worrisome. Initially, I thought it was flat-out insane,
because if somebody modifies that table in some way that makes it
unsuitable as an insertion target for the existing stream of tuples --
adding constraints, changing column definitions, attaching it to a new
parent, dropping it -- then we'd be in big trouble. Then I thought that
maybe it's OK, because all of those operations take AccessExclusiveLock,
and therefore even if the detach goes through without a conflicting lock,
the subsequent change that is needed to create a problem can't. I can't
say that I'm 100% convinced that's bulletproof, though. Suppose, for
example, that a single session opens a cursor, then does the detach, then
does something else, then tries to read more from the cursor. Now the
locks don't conflict, but we're still OK (I think) because those
operations should all be caught by CheckTableNotInUse(). But if we try to
lower the lock level for other things in the future, we might open up new
problems, and it's unclear to me what guiding principles we can rely on to
keep us out of trouble.

> One note about DETACH CONCURRENTLY: detaching the index from parent now
> uses AccessExclusiveLock (the block I just patched). For concurrent
> detach that won't work, so we'll need to reduce that lock level too. Not
> sure offhand if there are any problems with that.

Me neither. I'm pretty sure that you need at least ShareUpdateExclusiveLock,
but I don't know whether you need AccessExclusiveLock. Here again, the
biggest risk I can see is the sinval race: nothing forces the invalidation
messages to be read before opening the relation for read or write. I think
it would be nice if the lock level ended up the same for the table and the
indexes, because otherwise the legal sequences of operations vary depending
on whether you've got a table with inherited indexes or one without, and
that multiplies the difficulty of testing.

-- 
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company
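
PS: Spelling out the single-session cursor sequence described above, with
made-up names -- just the shape of the hazard, not a tested case:

    -- assume parent is partitioned and part_2018 is one of its partitions
    BEGIN;
    DECLARE c CURSOR FOR SELECT * FROM part_2018;
    FETCH 1 FROM c;
    -- same session, so the locks never conflict with each other; the
    -- expectation above is that CheckTableNotInUse() rejects the detach
    -- while the cursor still has the partition open
    ALTER TABLE parent DETACH PARTITION part_2018;
    FETCH ALL FROM c;
    COMMIT;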