On Fri, 15.05.15 16:03, Pavel Odvody (podv...@redhat.com) wrote:

> On Fri, 2015-05-15 at 15:23 +0200, Lennart Poettering wrote:
> > On Thu, 07.05.15 17:47, Pavel Odvody (podv...@redhat.com) wrote:
> >
> > Hmm, so if I grok this right, then this is a DOM-like ("object
> > model") parser for JSON, where we previously had a SAX-like
> > ("stream") parser only. What's the rationale for this? Why doesn't
> > the stream parser suffice?
> >
> > I intentionally opted for a stream parser when I wrote the code, and
> > that's actually the primary reason why I rolled my own parser here,
> > instead of using some existing library....
>
> Hmm, I'd call it a lexer/tokenizer, since the burden of syntactic
> analysis is on the user. The parser is actually a rather thin wrapper
> around json_tokenize.
>
> Rationale: the v2 manifest (also) contains embedded JSON documents and
> is itself versioned, so it will change sooner or later.
> I believe that parsing the manifest, or any "decently" complex JSON
> document, using the stream parser would yield an equal or bigger chunk
> of code than a generic DOM parser plus a few lines that consume its
> API.
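As a concrete illustration of the embedding being described (a hypothetical sketch, not systemd code: Docker's registry manifest, schema 1, carries a serialized JSON document inside each `history` entry's `v1Compatibility` string, so the outer parse alone is not enough):

```python
import json

# Hypothetical excerpt modeled on a Docker v2 registry manifest
# (schema 1): the "v1Compatibility" value is a string that itself
# contains a JSON document, requiring a second parse.
manifest = """
{
  "schemaVersion": 1,
  "history": [
    { "v1Compatibility": "{\\"id\\": \\"abc123\\", \\"os\\": \\"linux\\"}" }
  ]
}
"""

outer = json.loads(manifest)
for entry in outer["history"]:
    # second parse, this time of the embedded document
    inner = json.loads(entry["v1Compatibility"])
    print(inner["id"], inner["os"])
```

With a DOM parser the embedded string is just another leaf value that can be handed back to the parser; with a pure stream parser the consumer has to track enough state to know it is sitting on that field before re-tokenizing it.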
Can you give an example of these embedded JSON documents? Couldn't this
part be handled by providing a call that skips nicely over JSON objects
we don't want to process?

Lennart

--
Lennart Poettering, Red Hat
_______________________________________________
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel
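For what it's worth, the skip call suggested above amounts to a small depth-counting loop over the token stream. A minimal sketch, with hypothetical token names (systemd's actual json_tokenize API differs):

```python
# Sketch of a "skip one complete JSON value" helper over a stream
# tokenizer. Token constants are hypothetical stand-ins for whatever
# the real tokenizer emits.
OBJECT_OPEN, OBJECT_CLOSE = "{", "}"
ARRAY_OPEN, ARRAY_CLOSE = "[", "]"

def skip_value(tokens):
    """Consume exactly one JSON value from the token iterator."""
    tok = next(tokens)
    if tok in (OBJECT_OPEN, ARRAY_OPEN):
        depth = 1
        while depth:
            tok = next(tokens)
            if tok in (OBJECT_OPEN, ARRAY_OPEN):
                depth += 1
            elif tok in (OBJECT_CLOSE, ARRAY_CLOSE):
                depth -= 1
    # scalars (strings, numbers, true/false/null) are fully consumed
    # by the single next() call above

# Usage: skip an object token sequence, leaving the stream positioned
# right after it.
tokens = iter([OBJECT_OPEN, "a", ARRAY_OPEN, "1", ARRAY_CLOSE,
               OBJECT_CLOSE, "tail"])
skip_value(tokens)
print(next(tokens))
```

The caller stays oblivious to the object's internal structure, which is the point: uninteresting subtrees cost O(depth) state instead of a DOM allocation.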