(2010/10/19 21:31), Robert Haas wrote:
On Oct 19, 2010, at 4:34 AM, KaiGai Kohei<kai...@ak.jp.nec.com>  wrote:
(2010/10/14 1:52), Tom Lane wrote:
Robert Haas<robertmh...@gmail.com>   writes:
On Wed, Oct 13, 2010 at 11:45 AM, Tom Lane<t...@sss.pgh.pa.us>   wrote:
That's all true, but you have to consider how much the obstacle actually
gets in their way versus how painful it is on your end to create and
maintain the obstacle. I don't think this proposed patch measures up
very well on either end of that tradeoff.

I think it would behoove us to try to separate concerns about this
particular patch from concerns about the viability of the whole
approach.  Whether or not it's useful to do X is a different question
than whether it can be done with few enough lines of code and/or
whether this patch actually does it well.

I think I left the wrong impression: I'm concerned about the whole
approach.  I haven't even read this particular patch lately.  I think
that trying to guarantee that the planner applies independent
constraints in a particular order will be expensive, fragile, and prone
to recurring security bugs no matter how it's implemented --- unless you
do it by lobotomizing query pullup/pushdown, which seems unacceptable
from a performance standpoint.

Just to give one example of what this patch misses (probably; as I said
I haven't read it lately), what about selectivity estimation?  One of
the things we like to do when we have an otherwise-unknown function is
to try it on all the histogram members and see what percentage yield
true.  That might already represent enough of an information leak to be
objectionable ... and yet, if we don't do it, the consequences for
performance could be pretty horrid because we'll have to work without
any reasonable selectivity estimate at all.  There are cases where this
technique isn't applied at the moment but probably should be, which is
why I characterize the leak-prevention idea as creating future security
issues: doing that is a constraint that will have to be accounted for in
every future planner change, and we are certain to miss the implications
sometimes.
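To make the estimation-time leak concrete, here is a hedged sketch (all object names here are hypothetical, and whether the planner actually probes a given function against histogram entries depends on the operator's selectivity estimator and the server version):

```sql
-- Hypothetical "leaky" function: it reports every value it is asked
-- to evaluate. If the planner were to probe it against the sampled
-- histogram values stored in pg_statistic while estimating
-- selectivity, the NOTICEs would expose row data from a table the
-- current user might not be allowed to read directly.
CREATE FUNCTION leaky_filter(val text) RETURNS boolean AS $$
BEGIN
    RAISE NOTICE 'saw value: %', val;   -- the information leak
    RETURN val > 'm';
END;
$$ LANGUAGE plpgsql;

-- Hypothetical usage against a row-security-protected view:
-- SELECT * FROM protected_view WHERE leaky_filter(secret_col);
```

The point is that the leak happens during planning, before any row-level filtering is applied at execution time, which is why refusing to probe such functions hurts selectivity estimates.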

Sorry, I might have misread what you pointed out.

I think you're still misreading it.

Hmm. My understanding was that he is concerned about possibly leaky
estimator functions, and that he pointed out the horrible performance
degradation we would see if we refused to execute them.
So I suggested an idea that enforces all estimator functions to be
installed by superusers; that would let us assume they are safe enough.

Want to try a third time?

However, it is actually still unclear to me... :-(

Do you see the internal invocation of oprrest/oprjoin as a problem?
Well, I also think it is a problem, as long as they can be installed
by non-superusers. But it is a separate problem from the row-level
security issue.

In my opinion, the origin of the matter is insufficient checks at
operator creation time. A non-superuser is allowed to define a new
operator with restriction and join estimators as long as he has
EXECUTE privilege on those functions (not the EXECUTE privilege of
the actual user who triggers them during the estimation phase!).
The optimizer may then internally invoke these functions without
any privilege checks on either the function or the table being
estimated. If a malicious user wants to get other users to run a
trojan horse, he can define a boobytrapped operator with malicious
estimator functions, can't he?
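The scenario being described could be sketched roughly as follows (the operator, function names, and shared library here are all hypothetical; note that in practice restriction estimators must be C functions taking `internal` arguments, which limits who can create the estimator itself, so this is an illustration of the privilege-check gap rather than a complete working attack):

```sql
-- Ordinary operator implementation the definer has EXECUTE on.
CREATE FUNCTION my_eq(int, int) RETURNS boolean
    AS 'SELECT $1 = $2' LANGUAGE sql;

-- Hypothetical malicious restriction estimator; the planner expects
-- the signature (internal, oid, internal, integer) -> float8.
-- CREATE FUNCTION evil_restsel(internal, oid, internal, integer)
--     RETURNS float8 AS 'evil.so', 'evil_restsel' LANGUAGE c;

CREATE OPERATOR === (
    LEFTARG   = int,
    RIGHTARG  = int,
    PROCEDURE = my_eq,
    RESTRICT  = evil_restsel   -- invoked by the planner, not the user
);

-- Any user writing "WHERE a === b" would cause the planner to call
-- evil_restsel() during planning, under that user's identity, with
-- no EXECUTE check against that user.
```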

This is a pretty poor excuse for a Trojan horse attack.



...Robert



--
KaiGai Kohei <kai...@kaigai.gr.jp>

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
