On Mon, Dec 12, 2011 at 5:36 PM, Andrew Dunstan <and...@dunslane.net> wrote:
> The trouble with using JSON.parse() as a validator is that it's probably
> doing way too much work. PLV8 is cool, and I keep trying to get enough time
> to work on it more, but I don't think it's a substitute for a JSON type with
> a purpose built validator and some native operations. I think these efforts
> can continue in parallel.
Hmm. Maybe? While I'm sure things could be faster, we've had results that
are fast enough to be usable even with constant reparsing.

Here are some microbenchmarks I did some time ago, where I tried to measure
the overhead of calling JSON.parse and doing some really simple work in V8,
chosen to maximize the share of constant-time overhead:

https://gist.github.com/1150804

On my workstation, one core was able to do 130,000 iterations per second of
JSON.parse plus the other work necessary to create an index.

One could try to improve the speed and memory footprint on large documents
by writing validators that don't actually build the V8 representation, and
possibly by defining a set of operators known, by induction, to produce
valid JSON without rechecking. But in the end, I think there's already a
class of problem for which the performance plv8 provides is quite
sufficient, and it offers a much more complete and familiar way for people
to navigate, project, and manipulate JSON documents.

I also haven't tried this with larger documents: I was trying to get a
sense of how much time is spent in a few primitive operations, not to
measure performance as a function of document length.
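For concreteness, here is roughly the shape of thing I have in mind. This
is a hypothetical sketch rather than the exact code from the gist; it
assumes plv8 is installed, and json_is_valid, json_get, and the docs table
are made-up names:

    -- Hypothetical sketch; assumes plv8 is installed. Validation here just
    -- means letting JSON.parse throw on malformed input.
    CREATE FUNCTION json_is_valid(t text) RETURNS boolean AS $$
      try { JSON.parse(t); return true; } catch (e) { return false; }
    $$ LANGUAGE plv8 IMMUTABLE STRICT;

    CREATE TABLE docs (body text CHECK (json_is_valid(body)));

    -- Project a top-level key so it can back an expression index; this is
    -- the "JSON.parse plus the work to create an index" pattern measured
    -- above.
    CREATE FUNCTION json_get(t text, key text) RETURNS text AS $$
      return JSON.parse(t)[key];
    $$ LANGUAGE plv8 IMMUTABLE STRICT;

    CREATE INDEX docs_name_idx ON docs (json_get(body, 'name'));

--
fdr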