Hi Tom,

On Fri, Jun 10, 2005 at 01:37:54PM -0400, Tom Lane wrote:
> Josh Berkus <josh@agliodbs.com> writes:
> > Yeah.  I'd prefer per-database quotas, rather than per-user quotas, which
> > seem kind of useless.  The hard part is making any transaction which
> > would exceed the per-database quota roll back cleanly with a
> > comprehensible error message rather than just having the database shut
> > down.
>
> That part doesn't seem hard to me: we already recover reasonably well
> from smgrextend failures.  The real difficulty is in monitoring the
> total database size to know when it's time to complain.  We don't
> currently make any effort at all to measure that, let alone keep track
> of it in real time.
>
> Given that there might be lots of processes concurrently adding pages
> in different places, I don't think you could hope for an exact
> stop-on-a-dime limit, but maybe if you're willing to accept some fuzz
> it is doable ...
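
To make the fuzz concrete, here is roughly the kind of accounting I could
imagine.  This is only a standalone sketch to try out the arithmetic, not
backend code; nothing like it exists in the tree, every name and number in
it (quota_count_page, FOLD_EVERY, the limits) is invented, and in a real
implementation the counter would have to live in shared memory behind a
spinlock, with the limits coming from a catalog entry or a GUC.

/*
 * Standalone sketch of fuzzy per-database quota accounting -- NOT
 * backend code.  All names and numbers are invented.
 */
#include <stdio.h>

#define BLCKSZ      8192    /* page size, as in PostgreSQL */
#define FOLD_EVERY  64      /* pages a backend adds before folding them in */

static long shared_db_pages = 0;    /* approximate database size, in pages */
static long soft_limit_pages;       /* alert threshold */
static long hard_limit_pages;       /* the quota itself */

static long local_unfolded = 0;     /* per-backend pages not yet folded in */
static int  warned = 0;

/*
 * Called for every page a backend adds to the database.  Returns 0 on
 * success, -1 when the quota is exceeded (the caller would then abort
 * the transaction with a proper error message, much as after an
 * smgrextend failure).
 *
 * Because each backend only folds its count in every FOLD_EVERY pages,
 * the shared counter can lag behind the true size by at most
 * N_backends * FOLD_EVERY pages -- that lag is the "fuzz".
 */
static int
quota_count_page(void)
{
    local_unfolded++;
    if (local_unfolded < FOLD_EVERY)
        return 0;               /* cheap path: no shared update */

    shared_db_pages += local_unfolded;      /* spinlock in real life */
    local_unfolded = 0;

    if (shared_db_pages > hard_limit_pages)
        return -1;
    if (!warned && shared_db_pages > soft_limit_pages)
    {
        fprintf(stderr, "WARNING: database has reached %ld MB\n",
                shared_db_pages * BLCKSZ / (1024 * 1024));
        warned = 1;
    }
    return 0;
}

int
main(void)
{
    long pages;

    soft_limit_pages = 90L * 1024 * 1024 / BLCKSZ;      /*  90 MB */
    hard_limit_pages = 100L * 1024 * 1024 / BLCKSZ;     /* 100 MB */

    /* Simulate a single backend extending pages until the quota trips. */
    for (pages = 1;; pages++)
    {
        if (quota_count_page() < 0)
        {
            printf("quota exceeded after %ld pages (~%ld MB)\n",
                   pages, pages * BLCKSZ / (1024 * 1024));
            break;
        }
    }
    return 0;
}

FOLD_EVERY just trades accuracy against contention on the shared counter:
a larger value means fewer lock acquisitions but a larger possible
overshoot past the limit.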

Well, I think a fuzzy test like that is better than none.  But one should
be able to calculate an upper bound on how much later the quota is
detected as exceeded than it is planned to be.  Therefore a separate,
lower threshold is useful as well, for alerting before the hard limit is
reached.

Regards,
Yann