Hi Robert,

On 1/7/20 6:33 PM, Stephen Frost wrote:

> These are issues that we've thought
> about and worried about over the years of pgbackrest and with that
> experience we've come down on the side that a JSON-based format would be
> an altogether better design.  That's why we're advocating for it, not
> because it requires more code or so that it delays the efforts here, but
> because we've been there, we've used other formats, we've dealt with
> user complaints when we do break things, this is all history for us
> that's helped us learn- for PG, it looks like the future with a static
> format, and I get that the future is hard to predict and pg_basebackup
> isn't pgbackrest and yeah, I could be completely wrong because I don't
> actually have a crystal ball, but this starting point sure looks really
> familiar.

For example, have you considered what will happen if you have a file in the cluster with a tab in its name? That is perfectly valid on POSIX filesystems, at least. You may already be escaping tabs, but either way the simple code snippet you provided earlier isn't going to handle it well. It gets complicated quickly.

I know users should not be creating weird files in PGDATA, but it's amazing how often this sort of thing pops up. We currently have an open issue because = in file names breaks our file format. Tab is surely less common, but users will do almost anything.

Another fun one is 03849840, which fixes the handling of \ characters in the code that checksums the manifest. The manifest is not fully JSON, but the checksums are, and that was initially missed in the C migration. The bug never made it into a release, but it easily could have.

In short, using a quick-and-dirty homegrown format seemed great at first but has caused many headaches. Because we don't change the repo format across releases, we are kind of stuck with past sins until we create a new repo format and write update/compatibility code. Users are understandably concerned if new versions of the software won't work with their repo, some of which contain years of backups (really).

This doesn't even get into the work everyone else will need to do to read a custom format. I do appreciate your offer of contributing parser code to pgBackRest, but honestly I'd rather it were not necessary. Though of course I'd still love to see a contribution of some sort from you!

Hard experience tells me that using a standard format where all these issues have been worked out is the way to go.

There are a few MIT-licensed JSON projects that are implemented in a single file. cJSON is very capable while JSMN is very minimal. Is it possible that one of those (or something like it) would be acceptable? It looks like the one hard requirement we have is that the JSON be streamed rather than built up as one big blob. Even with that requirement there are a few tricks that can be used. JSON nests rather nicely, after all, so the individual file records can be transmitted independently of the overall file format.

Your first question may be: why didn't pgBackRest use one of those parsers? The answer is that JSON parsing/rendering is pretty trivial. Memory management and a (datum-like) type system are the hard parts, and pgBackRest already had those.

Would it be acceptable to bring in JSON code with a compatible license for use in libcommon? If so, I'm willing to help adapt that code for use in Postgres. It's possible that the pgBackRest code could be adapted similarly, but it might make more sense to start from one of these general-purpose parsers.

Thoughts?

--
-David
da...@pgmasters.net
