On Mon, Feb 16, 2015 at 10:49 AM, Kevin Grittner <kgri...@ymail.com> wrote:
>
> What this discussion has made me reconsider is the metric for
> considering a transaction "too old".  The number of transaction IDs
> consumed seems inferior as the unit of measure for that to LSN or
> time.
>
> It looks to me to be pretty trivial (on the order of maybe 30 lines
> of code) to specify this GUC in minutes rather than transaction
> IDs.
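A time-based cutoff like the one proposed above might be sketched roughly as follows (an illustrative sketch only, not PostgreSQL source; the names and the 5-minute value are assumptions standing in for the GUC):

```python
# Hedged sketch: judging whether a snapshot is "too old" by elapsed
# time rather than by the number of transaction IDs consumed.
from datetime import datetime, timedelta

# Hypothetical stand-in for a GUC specified in minutes.
OLD_SNAPSHOT_THRESHOLD = timedelta(minutes=5)

def snapshot_too_old(snapshot_taken_at: datetime, now: datetime) -> bool:
    """A snapshot is 'too old' once its age exceeds the time threshold,
    regardless of how many XIDs were consumed in the meantime."""
    return now - snapshot_taken_at > OLD_SNAPSHOT_THRESHOLD

start = datetime(2015, 2, 16, 10, 0)
assert not snapshot_too_old(start, start + timedelta(minutes=3))
assert snapshot_too_old(start, start + timedelta(minutes=6))
```

The point of the time-based metric is that an idle system consuming few XIDs and a busy system consuming many are treated the same: only wall-clock age matters.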
It seems to me that SQL Server also uses a similar mechanism to avoid bloat in the version store (the place where previous versions of records are kept).  The mechanism they use is that during the shrink process, the longest-running transactions that have not yet generated row versions are marked as victims.  If a transaction is marked as a victim, it can no longer read the row versions in the version store; when it attempts to read row versions, an error message is generated and the transaction is rolled back.

I am not sure how much weight this adds to the usefulness of the patch, but I think that if other leading databases provide a way to control the bloat, it indicates that most customers with write-intensive workloads would like to see such an option.

With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com
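The victim-marking behavior described above could be sketched like this (purely illustrative; these names and structures are assumptions, not SQL Server internals):

```python
# Hedged sketch of the described behavior: during a version-store
# shrink, the longest-running transactions that have produced no row
# versions are marked as victims; a victim that later tries to read
# row versions gets an error and is rolled back.
class SnapshotTooOldError(Exception):
    pass

class Transaction:
    def __init__(self, txn_id, start_time, generated_versions=False):
        self.txn_id = txn_id
        self.start_time = start_time
        self.generated_versions = generated_versions
        self.victim = False
        self.active = True

def shrink_version_store(transactions, needed_victims):
    # Oldest transactions first; only those that have not generated
    # any row versions are eligible to be marked as victims.
    eligible = sorted(
        (t for t in transactions if t.active and not t.generated_versions),
        key=lambda t: t.start_time,
    )
    for t in eligible[:needed_victims]:
        t.victim = True

def read_row_versions(txn):
    # A victim can no longer read the version store: raise an error
    # and roll the transaction back.
    if txn.victim:
        txn.active = False
        raise SnapshotTooOldError(f"txn {txn.txn_id} was marked a victim")
    return "row versions"
```

The interesting property, which the proposed patch shares in spirit, is that the cost of an over-long snapshot falls on the old transaction (it gets an error) rather than on the whole system (unbounded bloat).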