On 12.08.2015 at 19:10, deadalnix wrote:
> On Wednesday, 12 August 2015 at 08:21:41 UTC, Sönke Ludwig wrote:
>> Just to state explicitly what I mean: this strategy has the most
>> efficient in-memory storage format and benefits from all the static
>> type checking niceties of the compiler. It also means that there is a
>> documented schema in the code that can be used for reference by the
>> developers and that will automatically be verified by the serializer,
>> resulting in less and better-checked code. So where applicable, I
>> claim that this is the best strategy for working with such data.
>>
>> For maximum efficiency, it can also be transparently combined with
>> the pull parser. The pull parser can, for example, be used to jump
>> between array entries, while the serializer then reads each
>> individual array entry.

> Thing is, the schema is not always known perfectly. A typical case is
> JSON used for configuration, with diverse versions of the software
> adding new configuration capabilities, or ignoring old ones.


For example, in the serialization framework of vibe.d you can have @optional or Nullable fields, you can choose to either ignore or error out on unknown fields, and you can have fields of type "Json" or associative arrays to match arbitrary structures. This usually gives enough flexibility, assuming that the program is only interested in fields that it knows about.
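
A minimal sketch of how that can look (the struct and its field names are made up for illustration; the attributes come from vibe.data.serialization):

import std.typecons : Nullable;
import vibe.data.json;
import vibe.data.serialization : optional;

// Hypothetical configuration schema, purely for illustration.
struct Config {
    string host;                       // required; deserialization fails if missing
    @optional ushort port = 8080;      // missing field falls back to the default
    @optional Nullable!string logFile; // optional and explicitly nullable
    @optional Json extra;              // can hold arbitrary, schema-less data
}

void main()
{
    auto cfg = deserializeJson!Config(`{"host": "example.org"}`);
    assert(cfg.port == 8080);
}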

Of course there are situations where you really just want to access the raw JSON structure, possibly because you are only interested in a small subset of the data. Both the DOM-based and the pull-parser-based approach fit in there, chosen based on convenience vs. performance considerations. But things like storing data as JSON in a database or implementing a JSON-based protocol usually fit the schema-based approach perfectly.
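
For the DOM variant, a small sketch using vibe.d's parseJsonString (the document content here is invented; a pull parser would instead skip over the uninteresting values without building a tree):

import vibe.data.json;

void main()
{
    // Build a DOM for the whole document, then pick out the one field of interest.
    auto dom = parseJsonString(`{"meta": {"version": 2}, "payload": [1, 2, 3]}`);
    int ver = dom["meta"]["version"].get!int;
    assert(ver == 2);
}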
