For example, in WO, if you're using an NSDictionary "bindings" for query bindings, you might want to query for some attribute "myAttribute" that equals NULL in your database, so you would do something like bindings.setObjectForKey(NSKeyValueCoding.NullValue, "myAttribute"). When EOF constructs the qualifier, it then knows it should produce "WHERE my_attribute IS NULL"; but when bindings.objectForKey("myOtherAttribute") is null (as in a null pointer), it knows that's not an attribute you want to qualify on at all.
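
Concretely, that distinction might look something like this (a minimal sketch, assuming the stock WO 5.x NSMutableDictionary, NSKeyValueCoding.NullValue, and EOQualifier.qualifierToMatchAllValues(); the class and method names here are just for illustration):

    import com.webobjects.eocontrol.EOQualifier;
    import com.webobjects.foundation.NSKeyValueCoding;
    import com.webobjects.foundation.NSMutableDictionary;

    public class NullBindingSketch {
        public static EOQualifier qualifierForNullAttribute() {
            NSMutableDictionary bindings = new NSMutableDictionary();

            // To qualify on my_attribute being NULL, store the NullValue
            // marker -- a real null can't go into an NSDictionary.
            bindings.setObjectForKey(NSKeyValueCoding.NullValue, "myAttribute");

            // "myOtherAttribute" is simply left out, so
            // bindings.objectForKey("myOtherAttribute") returns a null
            // pointer, which reads as "don't qualify on this key" rather
            // than "match NULL".
            return EOQualifier.qualifierToMatchAllValues(bindings);
        }
    }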
This is definitely a valid scenario, but it seems like more of an argument for the existence of NSNull (which I'm not really against, except insofar as it's a hack around the lack of support for nulls in collections) and less of an argument for why you can't add a real null as a value. It doesn't seem like those two capabilities would be mutually exclusive.

ms
