Having actually compiled the branch and tried it out, I have to say that
regardless of whether validating arbitrarily large blocks of JSON without
being interested in the contents is a common or a more niche use case, the
memory savings ARE highly impressive. Because the function is built on top
of the existing parser and still parses the entire string (or up to the
point where invalid JSON is encountered), I had expected the saving on a
very large input to be smaller than it turns out to be.

I tested using a 75MB valid JSON input - a string large enough that it's
not going to be very common. The processing time isn't hugely different;
the saving appears to be around 20-25%, and the time is not significant
with either json_decode or json_validate even on an input of this size,
about half a second on my machine for both. But the memory saving is
enormous, almost total: it has gone from needing roughly 5x the size of
the input to almost literally just a few extra bytes.
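For reference, here is a rough sketch of the kind of comparison I ran (not
the exact script; the script and file names are just placeholders). It
should be run in a fresh process per mode, since memory_get_peak_usage()
never drops within a single process:

<?php
// bench.php - run once per mode in a fresh process, e.g.
//   php bench.php decode big.json
//   php bench.php validate big.json
[, $mode, $file] = $argv;
$json = file_get_contents($file);

$start = microtime(true);
// Either fully decode the input or only validate it.
$result = ($mode === 'validate') ? json_validate($json) : json_decode($json, true);

// Report elapsed time and peak memory (which includes the input string itself).
printf("%s: %.3fs elapsed, %.1f MB peak (input %.1f MB)\n",
    $mode,
    microtime(true) - $start,
    memory_get_peak_usage(true) / 1048576,
    strlen($json) / 1048576);

The numbers above came from comparing the two modes on the same 75MB file.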

I'm persuaded now, both by that benchmarking and by having had a closer
look at the implementation PR, which is clearly a minimal and easily
maintainable change.

As I've said, my feelings are irrelevant to the extent that I'm not a
voter, but in principle I'm a +1 thumbs up for including this now.
