On 11/16/2013 12:15 AM, Josh Berkus wrote:
> On 11/15/2013 02:59 PM, Merlin Moncure wrote:
>> On Fri, Nov 15, 2013 at 4:31 PM, Hannu Krosing <ha...@2ndquadrant.com>
>> wrote:
>> I think you may be on to something here.  This might also be a way to
>> opt in to fast(er) serialization (upthread it was noted this is
>> unimportant; I'm skeptical).  I deeply feel that two types is not the
>> right path, but I'm pretty sure that this can be finessed.
>>
>>> As far as I understand, Merlin is mostly OK with stored json being
>>> normalised; the problem is just with constructing "extended" json
>>> (a.k.a. "processing instructions") to be used as source for
>>> specialised parsers and renderers.
>
> Thing is, I'm not particularly concerned about *Merlin's* specific use
> case, which there are ways around.  What I am concerned about is that
> we may have users who have years of data stored in JSON text fields
> which won't survive an upgrade to binary JSON, because we will stop
> allowing certain things (ordering, duplicate keys) which are currently
> allowed in those columns.  At the very least, if we're going to have
> that kind of backwards compatibility break, we'll want to call the new
> version 10.0.
>
> That's why naming old JSON as "json_text" won't work; it'll be a
> hardened roadblock to upgrading.

Then perhaps name the "new binary json" jsob (JavaScript Object Binary)
or just jsobj (JavaScript Object), and keep the current json for what it
is, namely JavaScript Object Notation.
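To make the incompatibility concrete, here is a minimal sketch of the
round-trip that a normalising binary type would break ("jsonb" is used
below purely as a placeholder name for the binary type; the name itself
is exactly what is under discussion here):

    -- The current text json stores input verbatim, duplicate keys,
    -- key order and all:
    SELECT '{"z": 0, "a": 1, "a": 2}'::json;
    -- result: {"z": 0, "a": 1, "a": 2}

    -- A normalising binary type would keep only the last duplicate
    -- and reorder keys, so existing stored values that rely on
    -- ordering or duplicates would not survive the conversion:
    SELECT '{"z": 0, "a": 1, "a": 2}'::jsonb;
    -- result: {"a": 2, "z": 0}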
Cheers

--
Hannu Krosing
PostgreSQL Consultant
Performance, Scalability and High Availability
2ndQuadrant Nordic OÜ