On Thu, Mar 7, 2013 at 2:48 PM, David E. Wheeler <da...@justatheory.com> wrote:
> In the spirit of being liberal about what we accept but strict about what we
> store, it seems to me that JSON object key uniqueness should be enforced
> either by throwing an error on duplicate keys, or by flattening so that the
> latest key wins (as happens in JavaScript). I realize that tracking keys will
> slow parsing down, and potentially make it more memory-intensive, but such is
> the price for correctness.
I'm with Andrew. That's a rathole I emphatically don't want to go down. I wrote this code originally, and I had the thought clearly in mind that I wanted to accept JSON that was syntactically well-formed, not JSON that met certain semantic constraints.

We could add functions like json_is_non_stupid(json) so that people can easily add a CHECK constraint that enforces this if they so desire. But enforcing it categorically seems like a bad plan, especially since at this point it would require a compatibility break with previous releases.

--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company

--
Sent via pgsql-hackers mailing list (pgsql-hackers@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-hackers
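[Editor's note: the two behaviors discussed above — JavaScript-style "latest key wins" versus rejecting duplicate keys outright via something like the proposed json_is_non_stupid — can be sketched in Python for illustration. The function name is borrowed from Robert's message; the hook-based implementation is an assumption for demonstration only, not PostgreSQL's parser.]

```python
import json

def reject_duplicates(pairs):
    """object_pairs_hook that raises on duplicate keys within one object."""
    obj = {}
    for key, value in pairs:
        if key in obj:
            raise ValueError(f"duplicate key: {key!r}")
        obj[key] = value
    return obj

def json_is_non_stupid(text):
    """Return True if `text` is valid JSON with no duplicate object keys.

    Sketch of the semantic check proposed above (name borrowed from the
    message); a real CHECK constraint would call a server-side equivalent.
    Note: json.JSONDecodeError subclasses ValueError, so malformed JSON
    is also reported as False here.
    """
    try:
        json.loads(text, object_pairs_hook=reject_duplicates)
        return True
    except ValueError:
        return False

# Default behavior: the latest key wins, as in JavaScript.
print(json.loads('{"a": 1, "a": 2}'))          # {'a': 2}

# The strict check flags duplicates at any nesting level.
print(json_is_non_stupid('{"a": 1, "b": 2}'))  # True
print(json_is_non_stupid('{"a": 1, "a": 2}'))  # False
```

Because the hook runs for every object the parser builds, the check catches duplicates in nested objects as well, at the cost of the extra key tracking the quoted message mentions.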