Like Vincent, we use two sets of POJOs. As he said, what you expose and what
you store aren't necessarily the same things, and aren't necessarily in the
same format.

I think it partly depends on how you model your data/business layers. We
have very dumb POJOs for our data, and use DAOs for interfacing with the
data stores. So we do:

comments = dao.getCommentsForEntry(entry);

Not:

entry.getComments();
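
To make the split concrete, here's a minimal sketch of the pattern with an in-memory DAO. The `getCommentsForEntry` call matches the snippet above; everything else (class names, fields, the map-backed store) is illustrative, not our actual code:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Dumb data POJOs: just fields and accessors, no persistence logic.
class Entry {
    private final long id;
    private final String title;
    Entry(long id, String title) { this.id = id; this.title = title; }
    long getId() { return id; }
    String getTitle() { return title; }
}

class Comment {
    private final long entryId;
    private final String body;
    Comment(long entryId, String body) { this.entryId = entryId; this.body = body; }
    long getEntryId() { return entryId; }
    String getBody() { return body; }
}

// The DAO owns all data-store access; here it's backed by an in-memory map,
// but the callers wouldn't change if it talked to a real database.
class CommentDao {
    private final Map<Long, List<Comment>> byEntry = new HashMap<>();

    void save(Comment c) {
        byEntry.computeIfAbsent(c.getEntryId(), k -> new ArrayList<>()).add(c);
    }

    List<Comment> getCommentsForEntry(Entry entry) {
        return byEntry.getOrDefault(entry.getId(), new ArrayList<>());
    }
}
```

The point is that `Entry` never knows comments exist; the relationship lives entirely in the DAO layer.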

So for us, there are a lot of times when our REST POJO (we call it a
"container") contains data that comes from multiple data POJOs. For example,
the entry with comments would be like:

returnSerialized(new EntryContainer(entry, comments));

We use XStream to do all of our serialization, so we get to switch between
XML and JSON for free, which is really nice. Our containers end up pretty
cluttered with XStream-related annotations (mainly @XStreamAlias and
@XStreamImplicit), so separating them helps keep the data POJOs much
cleaner.
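
As a sketch, an output container just aggregates data from the data POJOs into the shape we want to expose. In our real classes the XStream annotations live here; this dependency-free version only notes where they would go, and the field layout is illustrative:

```java
import java.util.List;

// Output-only "container": bundles data from several data POJOs into one
// serializable shape. In the real thing, annotations like
// @XStreamAlias("entry") on the class and @XStreamImplicit on the list
// live here, so the data POJOs stay free of serialization concerns.
class EntryContainer {
    private final String title;          // copied from the Entry POJO
    private final List<String> comments; // flattened from the Comment POJOs

    EntryContainer(String title, List<String> comments) {
        this.title = title;
        this.comments = comments;
    }

    String getTitle() { return title; }
    List<String> getComments() { return comments; }
}
```

Because the container is built fresh for each response, it can rename, omit, or flatten fields without touching the data model.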

We also use a third set of POJOs for processing incoming data. Data is
accepted as form key/value pairs, XML, or JSON. We have a custom converter
that does the deserialization into our "input POJOs", which are all
decorated with OVal (http://oval.sf.net) annotations for validating all of
the incoming data.
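
An input POJO's whole job is to hold and validate user-supplied fields. OVal expresses the checks declaratively with annotations (e.g. @NotNull, @MaxLength); this dependency-free sketch hand-rolls the same idea, and the specific rules and names are illustrative:

```java
// Input POJO: holds raw user-supplied fields and knows how to validate
// them. OVal would express these checks as field annotations; here
// IllegalArgumentException stands in for a ValidationException.
class EntryInput {
    private final String title;
    private final String body;

    EntryInput(String title, String body) {
        this.title = title;
        this.body = body;
    }

    void validate() {
        if (title == null || title.isEmpty())
            throw new IllegalArgumentException("title is required");
        if (title.length() > 200)
            throw new IllegalArgumentException("title is too long");
        if (body == null)
            throw new IllegalArgumentException("body is required");
    }

    String getTitle() { return title; }
    String getBody() { return body; }
}
```

Only after validation passes do the values get copied into a real data object, so the data model never sees unchecked input.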

So our data flow ends up looking something like:

try {
  EntryInput entryInput = parseEntity(entity, EntryInput.class);
  Entry entry = new Entry(user, entryInput.getTitle(), entryInput.getBody());
  dao.save(entry);
  returnSerialized(new EntryContainer(entry));
} catch (ValidationException e) {
  returnSerialized(new ErrorContainer(e));
}

I see places where we could merge some of the POJOs (e.g. make the input
and data objects above the same), and I see ways we could use even more
annotations and merge them all into one, but we don't, for a few reasons:
it would make for much uglier code; keeping them separate lets different
developers work on different parts of the pipeline without stepping on each
other's toes; and it feels safer, since we're never directly inputting or
outputting our actual data objects. That makes it harder for us to
accidentally output a field that we shouldn't, and we don't have to worry
so much about which fields we allow incoming data to write to.

I really like that input objects know how to deserialize themselves from
user input, data objects know how to interact with the database, and output
objects know how to represent themselves to the outside.

Plus, our data is pretty straightforward (not a lot of nesting), and there
aren't that many data objects (fewer than a dozen), so the benefits are
worth putting up with a little more code.

Working with XML in Java is really unpleasant to me. I'd much rather work
with POJOs. XML and JSON are wonderful data-transport formats, and they
serialize really easily, but writing Java code to manipulate XML directly
is just tedious.

Good luck with your setup. I'd be interested to hear what you end up
deciding on.

--Erik


On Mon, Oct 13, 2008 at 1:46 PM, Richard Hoberman <
[EMAIL PROTECTED]> wrote:

> Hi,
>
> My service accepts both XML and JSON representations.  Obviously the
> business logic should be implemented once, so I need to pick a canonical
> representation.  My options seem to be:
>
> 1.  XML
>
> There are great tools for working with XML.  I'd update the business
> layer objects using either XPath or SAX events.  JSON representations
> would be converted into XML for processing.
>
> 2.  POJO
>
> Deserialize representation into POJOs, directly from JSON or XML as
> appropriate.  This avoids working with XML directly, but requires an
> extra data model (the legacy business layer uses POJOs annotated for
> Hibernate, but I want to decouple the REST layer from the business
> layer, so I'd have to duplicate the data model to some extent)
>
> I'd love to hear about what has worked and what hasn't worked from those
> who have gone before.
>
> Best regards
>
> Richard Hoberman
>

Reply via email to