On 2012-10-17 19:39, Tyler Jameson Little wrote:
I could make my marshaller/unmarshaller only update objects in place. I
think this is more useful and would remove the overlap between orange
and the JSON library. We could then write a JSON archiver for orange and
include it in std.json as well.

The call to unmarshal would look like:

bool unmarshalJSON(T)(JSONValue val, out T ret);

Orange works with the archive at a lower level. For example, the archive doesn't really have to know how to (un)archive an object or struct. The serializer will break down the object into its fields and ask the archive to (un)archive the individual fields.

The only thing the archive needs to know is that "here starts an object, from now on all (un)archived values will be part of the object until I say otherwise".
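
Roughly this idea, as an interface sketch (made-up names, not Orange's actual API):

interface Archive
{
    // the serializer tells the archive that an object begins here ...
    void beginObject(string type);

    // ... asks it to archive each individual field ...
    void archiveField(string name, string value);

    // ... and finally that the object ends
    void endObject();
}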

The following restrictions would apply:

* T must be fully instantiated (all pointers are valid [not null])

That seems to be an unnecessary restriction.

* T must not be recursive (results in infinite recursion, and hence
stack overflow)

I think the serializer in Orange can handle this, which would mean the archive doesn't need to.
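
Something along these lines, just to illustrate the idea (a standalone sketch, not how Orange actually does it): the serializer keeps track of the references it has already seen and emits a back reference instead of recursing again.

import std.stdio;

class Node
{
    int value;
    Node next;
}

void dump(Node n, ref size_t[Node] visited)
{
    if (auto id = n in visited)
    {
        writefln("ref -> #%s", *id);   // already serialized, just reference it
        return;
    }

    visited[n] = visited.length;
    writefln("object #%s value=%s", visited[n], n.value);

    if (n.next !is null)
        dump(n.next, visited);
}

void main()
{
    auto a = new Node;
    auto b = new Node;
    a.value = 1; b.value = 2;
    a.next = b; b.next = a;        // a cycle

    size_t[Node] visited;
    dump(a, visited);              // terminates thanks to the visited set
}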

And the marshaller:

JSONValue marshalJSON(T)(in T val);

For marshalling, the restrictions are:

* Slices are handled as if they were an array (copy all values)

So you mean:

int[] a = [3, 4, 5, 6];
int[] b = a[1 .. $ - 1];

That "a" and "b" would be marshaled as two distinct arrays? In Orange, I think the serializer will handle this and the archive doesn't need to care. I tried to but as much of the code in the serializer so the archives doesn't need to bother with these kind of things.

* Same as unmarshaller, except null pointers will be treated as JSON null

If you can marshal a null pointer, how can you not unmarshal it?
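
To spell the question out (assuming the proposed marshalJSON/unmarshalJSON declared above; this is only a sketch of the intended usage, it doesn't compile today):

import std.json;

struct S
{
    int* p;   // may be null
}

void roundTrip()
{
    S s;                               // s.p is null

    // Under the proposed rules this would produce {"p": null} ...
    JSONValue json = marshalJSON(s);

    // ... but the unmarshaller requires all pointers to be valid, so the
    // same JSON apparently can't be turned back into the S it came from.
    S result;
    unmarshalJSON(json, result);
}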

I really like Go's JSON marshaller/unmarshaller, so I'm trying to model
after that one. It allows updating an object in place, which was already
a goal.

There should probably be some standard D serialization format. In
working with a structure trained on data (for machine learning, natural
language processing, etc), a complete serialization solution makes
sense. But for simple data passing, JSON makes a lot of sense.

Absolutely, there is a need for both, see below.

What do you think, do you think there's a place in Phobos for a simple
JSON marshaller/unmarshaller?

Absolutely. I think there is a need for several types and variants of serialization. Sometimes you need a fully capable serialization library that can handle all types, custom serialization of third-party types and so on. In other cases you don't really care and just want to dump some data to disk or whatever.

I'll have some updated code soon, and I'll post back when that's done,
in case you'd like to have a look.


--
/Jacob Carlborg
