On Monday 17 November 2014 14:44:02 Michael Bayer wrote:
>> On Nov 17, 2014, at 1:55 PM, Guido Winkelmann
>> <[email protected]> wrote:
>>
>> On Monday, 17 November 2014 16:25:54 UTC+1, Michael Bayer wrote:
>>> On Nov 17, 2014, at 7:37 AM, Guido Winkelmann
>>> <[email protected]> wrote:
>>> 
>>> Hi,
>>> 
>>> How can I go about having a cross-platform compatible compound unique
>>> constraint over two columns where there can be only one record with NULL
>>> in one column and a given value in the other?
>>> 
>>> I want something like this:
>>> 
>>> UniqueConstraint("parent_id", "name")
>>> 
>>> except "parent_id" is nullable, and I want to disallow creating multiple
>>> records with parent_id=NULL and the same name. The problem here is that
>>> some database management systems, PostgreSQL for example, will treat all
>>> NULL values as not-equal inside a unique constraint, which makes sense
>>> for a single column unique, but not really for a multi-column one. This
>>> will allow multiple records with the same name and parent_id=NULL again.
>>> 
>>> In PostgreSQL, apparently I would need to create a unique index in
>>> addition to the constraint. I would really like to solve this without
>>> writing database-specific code, though.
>>> 
>>> Does anybody have a good solution for that?
>> 
>> I’ve had an issue in this area before and decided just to forego it; I was
>> targeting PG and SQL Server, but I don’t remember exactly which NULL
>> behavior I was looking for. Per the SQL standard I think PG’s behavior
>> is correct. NULL means “unknown”, so two NULL values are never
>> equivalent.
>>
>> If I wanted to add a manual check in Python for that, how would I go about
>> that?
>>
>> Could I register an event handler for the "before_insert" event for this
>> model?
>> 
>> What would the event handler have to do to refuse an insert? Throw an
>> exception?
> you could use a before_insert event, though that’s expensive as you’d need to
> query the whole table to check for dupes for every row.

That's not a big deal in this case. This particular table is never going to be 
very big or very busy. It probably won't ever have more than fifty records.

> I think organizing the kinds of indexes/constraints that apply to different
> backends would be the best approach here.
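
For reference, the PostgreSQL-specific variant would look roughly like this, as
far as I can tell (just a sketch with a made-up Node model; on other backends
the partial index would presumably come out as a plain unique index on name,
which is stricter than I want):

from sqlalchemy import Column, Integer, String, UniqueConstraint, Index, text
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Node(Base):
    __tablename__ = "node"

    id = Column(Integer, primary_key=True)
    # ForeignKey to node.id omitted for brevity
    parent_id = Column(Integer, nullable=True)
    name = Column(String(255), nullable=False)

    __table_args__ = (
        # Covers rows where parent_id is not NULL.
        UniqueConstraint("parent_id", "name"),
        # PostgreSQL-only: partial unique index covering the NULL case.
        Index("uq_node_name_null_parent", "name", unique=True,
              postgresql_where=text("parent_id IS NULL")),
    )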

Backend-specific code is exactly what I was trying to avoid here. I might come
back to that if similar problems come up in more performance-sensitive parts,
but for now, I think I will just stick with the before_insert event handler:

https://github.com/pyfarm/pyfarm-master/pull/271/files
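
The general shape of the handler is something like the following (a simplified
sketch using the same made-up Node model as above, not the exact code from the
pull request):

from sqlalchemy import event, select

@event.listens_for(Node, "before_insert")
def reject_duplicate_null_parent(mapper, connection, target):
    # The regular UniqueConstraint already covers rows with a non-NULL
    # parent_id, so only the NULL case needs a manual check.
    if target.parent_id is not None:
        return
    node = Node.__table__
    duplicate = connection.execute(
        select([node.c.id])
        .where(node.c.parent_id.is_(None))
        .where(node.c.name == target.name)
    ).first()
    if duplicate is not None:
        # Raising inside before_insert aborts the flush, so the insert
        # never reaches the database.
        raise ValueError(
            "A row with name %r and no parent already exists" % target.name)

This is not race-free on its own, since two concurrent sessions could both pass
the check before either inserts, but as mentioned above this table is small and
rarely written to.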

Regards,

        Guido W.
