On 10/23/19 5:42 AM, Laurenz Albe wrote:
David G. Johnston wrote:
Now if only the vast majority of users could have and keep this level of
understanding in mind while writing complex queries, so that they remember
to always add protections to compensate for the unique design decision that
SQL has taken here...

You can only say that if you don't understand NULL (you wouldn't be alone).
If I modify a JSON value with an unknown value, the result is unknown.
This seems very intuitive to me.

Would you expect that modifying an array value with an unknown value would
result in the entire array being unknown?

One could argue that whoever uses SQL should understand SQL.

But I believe that it is reasonable to suppose that many people who
use JSON in the database are more savvy with JSON than with SQL
(they might not have chosen JSON otherwise), so I agree that it makes
sense to change this particular behavior.

Yours,
Laurenz Albe

The fact that an operation involving SQL NULL (generally) results in NULL
has been brought up multiple times in this thread, including above, as a
rationale for the current jsonb behavior.  I don't think it is a valid
argument.

When examples are given, they typically involve scalar values, where such
behavior makes sense: the resulting scalar value has to be either NULL or
non-NULL; it can't be both.
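
For instance, with a plain scalar operation the whole result is necessarily
unknown (in psql's default setup the NULL displays as an empty string):

  # select 1 + NULL;
   ?column?
  ----------

  (1 row)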

It is less sensible with compound values, where the rule can instead be
applied to the individual scalar components.  And indeed that is what
PostgreSQL does for another compound type:

  # select array_replace(array[1,2,3],2,NULL);
   array_replace
  ---------------
   {1,NULL,3}

The returned value is not NULL.  Why the inconsistency between the array
type and json type?  Are there any cases other than json where the entire
compound value is set to NULL as a result of one of its components being
NULL?
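
For comparison, here is the jsonb behavior in question (a minimal sketch:
because jsonb_set is a strict function, a NULL replacement value makes the
entire result NULL, which psql shows as an empty row):

  # select jsonb_set('{"a":1,"b":2}', '{b}', NULL);
   jsonb_set
  -----------

  (1 row)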

