Gavin Sherry <[EMAIL PROTECTED]> writes:
> I've been thinking about resource management and postgres. I want to
> develop a user profile system (a-la oracle) which allows a DBA to
> restrict/configure access to system resources. This would allow a DBA to
> configure how much CPU time can be used per query/session for any user,
> the number of blocks that can be read/written by a user per query, and
> perhaps some other things (see below).
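
As I read it, that boils down to a per-user record of limits that every
backend would have to check as a query accumulates usage.  A rough
sketch, with purely hypothetical names (nothing like this exists in the
backend today):

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint32_t Oid;

    /* Hypothetical per-user profile, roughly what an Oracle-style
     * CREATE PROFILE stores. */
    typedef struct UserProfile
    {
        Oid     userid;
        double  max_cpu_secs_per_query;
        int64_t max_blocks_read_per_query;
        int64_t max_blocks_written_per_query;
    } UserProfile;

    /* Hypothetical running totals for the current query. */
    typedef struct QueryUsage
    {
        double  cpu_secs;
        int64_t blocks_read;
        int64_t blocks_written;
    } QueryUsage;

    /* The kind of check that would have to run each time the query
     * consumes another resource increment; false means kill the query. */
    static bool
    within_profile_limits(const UserProfile *prof, const QueryUsage *used)
    {
        return used->cpu_secs       <= prof->max_cpu_secs_per_query &&
               used->blocks_read    <= prof->max_blocks_read_per_query &&
               used->blocks_written <= prof->max_blocks_written_per_query;
    }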

I've got really serious reservations about this whole idea.  I don't
like expending even one CPU cycle on it, and I don't like introducing a
potential cause of unnecessary query failure, and I don't believe that
the average DBA would be capable of configuring it intelligently.

To point out just one problem: in the current system design, the backend
that actually issues a write request is not necessarily, or even
probably, the one that dirtied the page.  And you can NOT refuse to
write a dirtied page because of some misbegotten notion about resource
limits; system reliability will go to zero if you do.
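
To sketch the mechanics (hypothetical names and a stubbed-out write, not
the real bufmgr code): whichever backend needs a free buffer has to
flush whatever dirty page it happens to evict, no matter whose query
dirtied it.

    #include <stdbool.h>
    #include <stdint.h>

    #define BLCKSZ 8192              /* matches the usual Postgres block size */
    typedef uint32_t BlockNumber;
    typedef uint32_t Oid;

    typedef struct BufferDesc
    {
        BlockNumber blocknum;
        bool        is_dirty;
        Oid         dirtied_by;      /* user whose query dirtied the page */
        char        data[BLCKSZ];
    } BufferDesc;

    /* Stub standing in for the actual storage-manager write. */
    static void
    write_block(BlockNumber blocknum, const char *data)
    {
        (void) blocknum;
        (void) data;
    }

    /* Called by whichever backend happens to need a free buffer slot;
     * current_user is the user running *that* backend's query. */
    static void
    evict_buffer(BufferDesc *buf, Oid current_user)
    {
        (void) current_user;
        if (buf->is_dirty)
        {
            /*
             * The write must happen even if current_user is over some I/O
             * quota, and even though dirtied_by is very possibly somebody
             * else.  Refusing it would lose the dirtied data; charging it
             * to current_user bills the wrong session.
             */
            write_block(buf->blocknum, buf->data);
            buf->is_dirty = false;
        }
        /* the slot can now be recycled for the evicting backend's page */
    }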

Another example is that the cost of verifying transaction completion is
actually paid by the first transaction to visit a tuple after the
tuple's authoring transaction completes.  Should a transaction be
penalized if it's foolish enough to do a seqscan shortly after someone
else does a mass insert or update?
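
That cost comes from the commit-status check the first visitor performs
and then caches on the tuple (the "hint bit" trick).  A much-simplified
sketch, with hypothetical names and a stubbed-out status lookup:

    #include <stdbool.h>
    #include <stdint.h>

    typedef uint32_t TransactionId;

    /* Hypothetical, much-simplified tuple header; the real one carries
     * similar "known committed" hint flags. */
    typedef struct TupleHeader
    {
        TransactionId xmin;                 /* inserting transaction */
        bool          xmin_known_committed; /* hint, set lazily */
    } TupleHeader;

    /* Stub: the real check means consulting the transaction-status log,
     * which can mean extra I/O. */
    static bool
    transaction_committed(TransactionId xid)
    {
        (void) xid;
        return true;
    }

    static bool
    tuple_inserter_committed(TupleHeader *tup)
    {
        if (!tup->xmin_known_committed)
        {
            /*
             * Whoever scans the tuple first after the inserter commits
             * pays for this lookup, then caches the answer so nobody pays
             * twice.  Charging that against the scanner's quota punishes
             * it for the other session's mass insert or update.
             */
            if (!transaction_committed(tup->xmin))
                return false;
            tup->xmin_known_committed = true;
        }
        return true;
    }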

In general, I think that per-user resource management policies would
force us to adopt inefficient algorithms that don't share overhead costs
across the whole community.  I'm not eager for that...

                        regards, tom lane
