[ 
https://issues.apache.org/jira/browse/FELIX-5332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15464837#comment-15464837
 ] 

David Leangen commented on FELIX-5332:
--------------------------------------

Thank you, [~bosschaert].

{quote}
To me, the serializer/schematizer sounds a bit like a special Codec. For 
example, if you used it with XML it could produce an XML schema as part of the 
serialization and then do the deserialization together with the schema. 
{quote}

Yes, I agree! I did not want to complicate the existing code, so I put 
everything into a separate class/service. What I realised when writing this 
code, though, is that my original assumption ("we are parsing the object 
anyway, so handling the schema should be easy, and we should therefore take 
advantage of this") was a bit off. It is true that we are parsing, but parsing 
for the Schema turns out to be different enough that either (1) it should live in 
a different class, or (2) the algorithms will become significantly more complex. 
That said, if the code is well structured, I'm sure it could be done in a nice 
way. My experience tells me (and you've probably noticed the same thing) that 
working with reflection code at this level can quickly turn into a big mess. 
That is my only worry.

{quote}
It would be nice if the codec (or other API) could be such that it would 
support such implementations.
{quote}

Yes, I agree!

{quote}
To produce a schema, this might just be done as a by-product of the encoding 
process? Maybe additionally configured via ConfigAdmin configuration factories? 
Do we need anything extra in the codec API for this?
{quote}

Again, I agree with you. This can, and probably should, be handled behind the 
scenes, at least for serialization/deserialization. For deep object 
introspection and manipulation, we would actually need access to the Schema 
object, which would require updating the API. Perhaps that could be offered as 
a "non-standard" service as a pilot to see if anybody would actually use it.

Back to the serialization-only part, though: I am using the [Prevayler 
Serializer|https://github.com/jsampson/prevayler/blob/master/core/src/main/java/org/prevayler/foundation/serialization/Serializer.java]
 as my test case. I just noticed that the current impl is lacking.

To accomplish what you suggest (i.e. keeping the Schema hidden internally), 
we would need to provide a name/alias for each schema, and be able 
to save it and invoke it (internally should be ok) by name. Perhaps something 
like this in the Adapter:

{code}
<T> Adapter schema(String name, TypeReference<T> type);               // no path means the top level
<T> Adapter schema(String name, String path, TypeReference<T> type);  // path indicates the object somewhere in the graph
{code}
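To make the idea concrete, here is a toy sketch of how these two overloads might register schemas under an alias. Note that {{Adapter}}, {{TypeReference}}, and the internal map are simplified stand-ins for illustration, not the actual Converter API:

{code:java}
import java.util.HashMap;
import java.util.Map;

public class SchemaRegistryDemo {

    // Simplified stand-in for the Converter's TypeReference.
    public static class TypeReference<T> {
    }

    public static class Adapter {
        // alias -> (path -> type); "" stands for the top level
        private final Map<String, Map<String, TypeReference<?>>> schemas = new HashMap<>();

        // No path means the top level.
        public <T> Adapter schema(String name, TypeReference<T> type) {
            return schema(name, "", type);
        }

        // Path indicates the object somewhere in the graph.
        public <T> Adapter schema(String name, String path, TypeReference<T> type) {
            schemas.computeIfAbsent(name, k -> new HashMap<>()).put(path, type);
            return this;
        }

        public boolean hasSchema(String name) {
            return schemas.containsKey(name);
        }
    }

    public static void main(String[] args) {
        Adapter adapter = new Adapter()
                .schema("Schema1", new TypeReference<Object>() { })
                .schema("Schema1", "payload.address", new TypeReference<Object>() { });
        assert adapter.hasSchema("Schema1");
        assert !adapter.hasSchema("Schema2");
    }
}
{code}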

When "saving" the schema, only the name would be serialized. If we don't do 
this, then (especially for small objects) there is way too much 
output/noise. Using the "alias" as the value to serialize works very nicely.

Also, the above rule would allow for multiple schemas. I just noticed a bug in 
my impl: right now you can only save a single schema, which is not useful.

{quote}
For decoding the story is probably different, as you need to be able to pass in 
the schema as context of the decode operation.
{quote}

My solution was to ensure that you use the same Adapter, which is already 
configured, for deserialization. The Schema is already saved under an alias. 
Example:

{code}
{"schema":"Schema1", "payload":{ ... }}
{code}

The number of additional characters serialized is quite small, and the input is 
easy to parse. Since there is a Map in the Adapter that looks up the schema by 
name (in this case "Schema1"), deserialization is quite easy. At least, that is 
what I tested, and it seems to work very well.
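Roughly, the round trip looks like this: the serialized form carries only the alias, and deserialization resolves the schema from the Adapter's internal map. This is only a sketch of the mechanism; the class and method names are illustrative, not the actual Converter API:

{code:java}
import java.util.HashMap;
import java.util.Map;

public class SchemaAliasDemo {

    // Stands in for the Adapter's internal alias -> schema registry.
    public static final Map<String, String> SCHEMAS = new HashMap<>();

    // Only the alias is written out, keeping the serialized form small.
    public static String serialize(String schemaName, String payloadJson) {
        return "{\"schema\":\"" + schemaName + "\", \"payload\":" + payloadJson + "}";
    }

    // Naive extraction of the alias, then lookup in the registry.
    public static String schemaFor(String serialized) {
        int start = serialized.indexOf("\"schema\":\"") + "\"schema\":\"".length();
        int end = serialized.indexOf('"', start);
        return SCHEMAS.get(serialized.substring(start, end));
    }

    public static void main(String[] args) {
        SCHEMAS.put("Schema1", "<schema for MyDTO>");
        String out = serialize("Schema1", "{ }");
        assert out.equals("{\"schema\":\"Schema1\", \"payload\":{ }}");
        assert "<schema for MyDTO>".equals(schemaFor(out));
    }
}
{code}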

{quote}
This could potentially be done via configadmin too, but that would be awkward I 
think. Maybe if we add a method to the Decoding interface to provide 
context/schema it might be useful, so you could do 
mySpecialCodec.decode(sometextfile).withSchema(mySchema).to(MyDTO.class)
{quote}

Yes, that would also work!
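For what it's worth, the fluent decode-with-schema call you describe could look something like the sketch below. The {{Decoding}} interface and {{Schema}} type here are hypothetical placeholders to show the shape of the call chain, not the current Converter API:

{code:java}
public class WithSchemaDemo {

    public static class Schema {
        final String name;
        public Schema(String name) { this.name = name; }
    }

    public interface Decoding {
        Decoding withSchema(Schema schema);
        <T> T to(Class<T> cls);
    }

    // A toy codec whose decode(...) just remembers its inputs.
    public static Decoding decode(String text) {
        return new Decoding() {
            Schema schema;

            public Decoding withSchema(Schema s) { this.schema = s; return this; }

            @SuppressWarnings("unchecked")
            public <T> T to(Class<T> cls) {
                // A real implementation would use the schema as context here;
                // this stub only reports which schema was supplied.
                return (T) ("decoded " + text + " with "
                        + (schema == null ? "no schema" : schema.name));
            }
        };
    }

    public static void main(String[] args) {
        String result = decode("sometextfile")
                .withSchema(new Schema("Schema1"))
                .to(String.class);
        assert result.equals("decoded sometextfile with Schema1");
    }
}
{code}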

However, if we work under the assumption that deserialization does not happen 
without serialization, and since we (must??) also control the serialization, 
then this can all be hidden internally, I think.

{quote}
Just an idea. I think it would be great if we could make the API such that 
special implementations like your schema-based one work within the general 
API...
{quote}

Great ideas! Thanks for supporting this. I personally think this is not only 
very important, but *should* be part of the Converter/Codec.

I'll continue working a bit more with Prevayler so I can show you what I come 
up with in terms of a client-facing interface. So far, it is turning out to be 
quite elegant, actually. (At least from the outside!) :-)

> Serializer
> ----------
>
>                 Key: FELIX-5332
>                 URL: https://issues.apache.org/jira/browse/FELIX-5332
>             Project: Felix
>          Issue Type: New Feature
>          Components: Converter
>            Reporter: David Leangen
>         Attachments: diff-serializer.txt
>
>
> Test case and a bit of code to show how a Serializer could work.
> To work as a Serializer, the type information needs to be embedded into the 
> output.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
