These results have become a bit complex, so: spreadsheet time.

Some details:

The length-and-offset test was performed using a more recent 9.4
checkout than the other two tests.  That was regrettable, and due to a
mistake on my part with git; the results suggest that some other
changes have landed in the meantime.

I added two new datasets:

errlog2 is a simple 4-column error log in JSON format, with 2 small
values and 2 large values in each datum.  I added it to check whether
any of our changes affected the performance or size of such simple
structures (answer: no).
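For the curious, a row of errlog2-style data can be sketched like this.  The field names and sizes below are my illustrative guesses, not the actual benchmark schema; the point is just the shape: 4 keys, 2 small values, 2 large ones.

```python
import json
import random
import string

def make_errlog2_row():
    """Build one synthetic errlog2-style datum: a 4-key JSON object
    with 2 small values and 2 large values.  Field names are
    hypothetical, not the real benchmark's schema."""
    big = lambda n: "".join(random.choices(string.ascii_letters, k=n))
    return {
        "level": random.choice(["ERROR", "WARN"]),  # small value
        "code": random.randint(100, 599),           # small value
        "message": big(200),                        # large value
        "backtrace": big(800),                      # large value
    }

row = make_errlog2_row()
print(len(row))                   # 4 keys per datum
print(len(json.dumps(row)))       # dominated by the two large values
```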

processed_b is a synthetic version of Mozilla Socorro's crash dumps:
about 900,000 rows, with nearly identical JSON in each.  These are
large JSON values (around 4KB each) with a broad mix of value types and
5 levels of nesting.  However, no level has very many keys; at most,
the top level has up to 40 keys.  Unlike the other data sets, I can
provide a copy of processed_b on request.
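A rough sketch of how such a document could be synthesized, to make the shape concrete.  The key names, widths, and type mix are illustrative assumptions, not taken from the real Socorro data; what matters is the structure: up to 40 top-level keys, 5 levels of nesting, and no single level very wide.

```python
import json
import random
import string

def make_processed_b_row(top_keys=40, depth=5):
    """Synthesize a processed_b-style document: up to `top_keys` keys
    at the top level, nesting `depth` levels deep, mixed value types,
    each inner level kept narrow.  Proportions are hypothetical."""
    rand_str = lambda n: "".join(random.choices(string.ascii_lowercase, k=n))

    def nested(level):
        # Narrow inner levels: only a handful of keys each.
        if level == 0:
            return rand_str(20)
        return {rand_str(8): nested(level - 1) for _ in range(2)}

    doc = {}
    for i in range(top_keys):
        key = rand_str(10)
        if i % 3 == 0:
            doc[key] = rand_str(80)            # string value
        elif i % 3 == 1:
            doc[key] = random.random()         # numeric value
        else:
            doc[key] = nested(depth - 1)       # nested object
    return doc

doc = make_processed_b_row()
print(len(doc))                  # top level: up to 40 keys
print(len(json.dumps(doc)))      # kilobytes, not bytes
```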

So, some observations:

* Data sizes with length-and-offset are slightly (3%) larger than
all-lengths for the pathological case (jsonbish), and unaffected in
the other cases.

* Even large, complex JSON (processed_b) gets better compression with
the two patches than with head, although only modestly better (16%).

* This better compression for processed_b comes at the cost of slightly
slower extraction (6-7%), and, surprisingly, extraction is slower for
length-and-offset than for all-lengths (about 2%).

* In the pathological case, length-and-offset was notably faster on Q1
than all-lengths (24%), and somewhat slower on Q2 (8%).  I think this
shows me that I don't understand which JSON keys are "at the end".

* Notably, length-and-offset when uncompressed (EXTERNAL) was faster on
Q1 than head!  This was surprising enough that I retested it.

Overall, I'm satisfied with the performance of the length-and-offset
patch.

Josh Berkus
PostgreSQL Experts Inc.

Sent via pgsql-hackers mailing list