On Tue, 2007-03-13 at 13:39 -0700, Jeff Davis wrote:
> > > Do you have an opinion about sync_scan_threshold versus a simple
> > > sync_scan_enable?
> >
> > enable_sync_scan?
> >
>
> After looking at other GUC names, I suggest that it's either
> "sync_scan" (for on/off) or "sync_scan_threshold" (if we do want to
> allow a numerical threshold). All the GUCs beginning with "enable_"
> are planner settings.
How about: sync_seqscans, so the phrase matches the equivalent enable_
parameter?

> If we only allow on/off, we could probably just sync scan every table
> because of your recycle_buffers patch.

The buffer recycling only makes sense for large scans, so there's an
exact match for when both techniques need to kick in. I think I'd just
lose this parameter and have it kick in at either NBuffers or
NBuffers/2. We don't need another parameter... I'm not planning to have
scan_recycle_buffers continue into the production version.

> > > > I'd still like to be able to trace each scan to see how far
> > > > ahead/behind it is from the other scans on the same table,
> > > > however we do that.
> > > >
> > > > Any backend can read the position of other backends' scans, so
> > > > it should
> > >
> > > Where is that information stored? Right now my patch will
> > > overwrite the hints of other backends, because I'm using a static
> > > data structure (rather than one that grows). I do this to avoid
> > > the need for locking.
> >
> > OK, well, we can still read it before we overwrite it to calculate
> > the difference. That will at least allow us to get a difference
> > between points as we go along. That seems like it's worth having,
> > even if it isn't accurate for 3+ concurrent scans.
>
> Let me know if the things I list below don't cover the information
> you're looking for here. It would be easy for me to emit a log
> message at the time it's overwriting the hint, but that would be a
> lot of noise: every time ss_report_loc() is called, which we
> discussed would be once per 100 pages read per scan.
>
> > > > be easy enough to put in a regular LOG entry that shows how far
> > > > ahead/behind they are from other scans. We can trace just one
> > > > backend and have it report on where it is with respect to other
> > > > backends, or you could have them all calculate their position
> > > > and have just the lead scan report the position of all other
> > > > scans.
> > >
> > > I already have each backend log its progression through the
> > > tablescan every 100k blocks to DEBUG (higher DEBUG gives every
> > > 10k blocks). I currently use this information to see whether
> > > scans are staying together or not. I think this gives us the
> > > information we need without backends needing to communicate the
> > > information during execution.
> >
> > Well, that is good, thank you for adding that after initial
> > discussions.
> >
> > Does it have the time at which a particular numbered block is
> > reached? (i.e. Block #117 is not the same thing as the 117th block
> > scanned). We can use that to compare the time difference of each
> > scan.
>
> Right now it logs when a scan starts, what block number of the table
> it starts on, and also prints out the current block it's scanning
> every N blocks (100k or 10k depending on debug level). The time and
> the pid are, of course, available from log_prefix.

Can you make it log every block whose id is divisible by 100k or 10k?
Otherwise one scan will log blocks 100,000... 200,000... etc. and the
next scan will log 17,357... 117,357... etc., which will be much harder
to work out. That will give us "lap times" for every 100,000 blocks.

I'm particularly interested in the turning point where the scan starts
again at the beginning of the file. It would be good to know which
block id it turned at and when that was. We may get out of step at that
point. Maybe. We'll find out.

-- 
 Simon Riggs
 EnterpriseDB   http://www.enterprisedb.com

---------------------------(end of broadcast)---------------------------
TIP 5: don't forget to increase your free space map settings
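[Editor's note: to make the "lap times" point above concrete, here is a
toy Python sketch. It is illustrative only, not patch code; the function
names and table sizes are invented. It contrasts the existing behaviour
(log every Nth block *scanned*) with the proposed one (log blocks whose
id is divisible by N), for a synchronized scan that wraps around at the
end of the table.]

```python
def scan_order(start, nblocks):
    # A synchronized scan begins at block `start`, runs to the end of
    # the table, then wraps around to block 0 (the "turning point").
    return [(start + i) % nblocks for i in range(nblocks)]

def logged_every_nth_scanned(start, nblocks, interval=100_000):
    # Existing behaviour: log every `interval`-th block *read*, so two
    # scans with different start points emit different block ids.
    return scan_order(start, nblocks)[::interval]

def logged_divisible_ids(start, nblocks, interval=100_000):
    # Proposed behaviour: log block ids divisible by `interval`, so all
    # scans of the same table emit the same ids and laps line up.
    return [b for b in scan_order(start, nblocks) if b % interval == 0]

# A 300,000-block table; one scan starts at block 0, a later scan joins
# at block 17,357 (the example numbers from the mail above).
print(logged_every_nth_scanned(0, 300_000))       # [0, 100000, 200000]
print(logged_every_nth_scanned(17_357, 300_000))  # [17357, 117357, 217357]
print(logged_divisible_ids(17_357, 300_000))      # [100000, 200000, 0]
```

With the divisible-id policy both scans report the same block ids (just
in their own scan order), so the timestamps attached to each id give
directly comparable lap times across backends, and the id 0 entry marks
the wrap-around point.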