On Tue, May 31, 2011 at 1:07 PM, Alvaro Herrera
<alvhe...@commandprompt.com> wrote:
> Excerpts from Ross J. Reedstrom's message of mar may 31 14:02:04 -0400 2011:
>
>> Follows from one of the practical maxims of databases: "The data is
>> always dirty."  Being able to have the constraints enforced at least
>> for new data lets you fence off the bad data and have a shot at
>> fixing it all.
>
> Interesting point of view.  I have to admit that I didn't realize I was
> allowing that, even though I have wished for it in the past myself.
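
For anyone following along at home, the workflow Ross is describing is
roughly this (table and constraint names are made up for illustration):

    -- Add the constraint unvalidated: only new and updated rows are checked.
    ALTER TABLE orders
        ADD CONSTRAINT orders_total_nonneg CHECK (total >= 0) NOT VALID;

    -- ...fix the pre-existing bad rows...

    -- Then scan the old rows and mark the constraint valid.
    ALTER TABLE orders VALIDATE CONSTRAINT orders_total_nonneg;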

What happens when a new transaction touches bad existing data in some
minor way? For example, updating some other column of the row, or just
locking the row? And what about things like CLUSTER or table rewrites?

Also, I think NOT NULL might be used in the join elimination patch.
Make sure it understands the "valid" flag and doesn't drop joins that
are in fact still needed. It would be nice to have this for unique
constraints as well, which would *definitely* need the planner to
understand whether they're valid or not.
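
To make the join case concrete (hypothetical tables, just to show the
kind of proof the planner relies on):

    -- If b.id is unique and no column of b is referenced,
    -- the left join can be removed entirely.
    SELECT a.*
    FROM a
    LEFT JOIN b ON a.b_id = b.id;

If that uniqueness (or a NOT NULL the proof depends on) were only
enforced for new rows, removing the join could silently change the
result (duplicate b.id values would have duplicated rows of a), so the
planner should only trust constraints whose valid flag is set.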

-- 
greg
