Gurjeet Singh <gurj...@singh.im> writes:
> On Wed, Apr 10, 2013 at 11:10 PM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>> The point you're missing is that the synchronization is self-enforcing:

> Let's consider a pathological case where a scan is performed by a user
> controlled cursor, whose scan speed depends on how fast the user presses
> the "Next" button, then this scan is quickly going to fall out of sync with
> other scans. Moreover, if a new scan happens to pick up the block reported
> by this slow scan, then that new scan may have to read blocks off the disk
> afresh.

Sure --- if a backend stalls completely, it will fall out of the
synchronized group.  And that's a good thing; we'd surely not want to
block the other queries while waiting for a user who just went to lunch.

> So, again, it is not guaranteed that all the scans on a relation will
> synchronize with each other. Hence my proposal to include the term
> 'probability' in the definition.

Yeah, it's definitely not "guaranteed" in any sense.  But I don't really
think your proposed wording is an improvement.  The existing wording
isn't promising guaranteed sync either, to my eyes.  Perhaps we could
compromise on, say, changing "so that concurrent scans read the same
block at about the same time" to "so that concurrent scans tend to read
the same block at about the same time", or something like that.  I don't
mind making it sound a bit more uncertain, but I don't think that we
need to emphasize the probability of failure.

			regards, tom lane

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
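The behavior under discussion corresponds to PostgreSQL's `synchronize_seqscans` configuration parameter, which is on by default. As a sketch of the scenario described above (the table name `some_large_table` is hypothetical), a session can reproduce the slow user-driven scan with a cursor, or opt out of the synchronized-scan machinery entirely:

```sql
-- A session whose scan speed depends on user input ("Next" button):
-- each FETCH advances the scan by one row, so this backend can stall
-- indefinitely and will simply fall out of the synchronized group.
BEGIN;
DECLARE slow_cur CURSOR FOR SELECT * FROM some_large_table;  -- hypothetical table
FETCH 1 FROM slow_cur;   -- repeated at the user's pace
CLOSE slow_cur;
COMMIT;

-- To make this session's sequential scans always start at block 0
-- rather than joining an in-progress scan's current position:
SET synchronize_seqscans = off;
```

This is only an illustration of the mechanism being debated, not anything proposed in the thread; the thread is about how the documentation should describe the default-on behavior.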