Hey Rob,

Thanks for the information.  It's great that many of the packaged ESBs
have this capability built in.  It seems to be a godsend that someone
has made a product out of an ESB.

You're not trying to use the same model for all your entities, are you?
I don't think that is possible.  From your post it seems that your XML
db sits in your ESB; is that correct?  If so, doesn't that seem like a
hub-and-spoke architecture?

Your staging integration approach is what I am doing now.  In a lot of
cases I massage, validate, and transform the data through a service.
It seems similar to what you're doing with your POJO, except I'm using
a POCO. ;-)

The governance part is still something I haven't completely grasped.
I would love not to have to build it myself.

You're absolutely right that this is fun.  Hell, if you ever feel like
talking about it over a beer, I'm right down the road.

-----Original Message-----
From: [email protected] [mailto:[EMAIL PROTECTED] On Behalf
Of Rob Brooks-Bilson
Sent: Thursday, April 03, 2008 7:41 PM
To: CFCDev
Subject: [CFCDEV] Re: architecture question: communication between
applications


Bryan,

Each of our ESBs acts independently, allowing the one in the
Philippines to serve local requirements that don't involve corporate
or shared data.  At the same time, the local ESB can act as an
extension of the corporate ESB when we need to communicate between the
two.  They can also be configured so that if the main ESB goes down,
the other ESB can assume its duties.  Most of the commercial ESBs
have this capability built in.  If you go with something like MULE,
you'll probably have more work to make this happen.

As far as the data model goes, the approach we've taken is to create
schema for the various entities we're moving across the ESB.  For
example, we have a canonical model for customer, WIP transaction, PO,
Die Release, etc.  There are still a lot of areas we don't have
canonical models for, but we're adding new schema as we need them.
The schema are independent of any of the database structures the data
may come from or end up in - what's important is that you capture all
of the data that is available or could be required for the entity you
are modeling.  Take a customer, for example.  System A may only need
customer name and customer number, while system B needs those fields
as well as address, city, state, country, etc.  All of those elements
need to be part of the schema that you define.
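
If it helps to picture it, the gist of the canonical entity is just
"carry the superset of fields."  As a rough sketch in Java terms (the
names and the JAXB annotations here are purely illustrative for the
example, not our actual schema), it would be something like:

import javax.xml.bind.annotation.XmlElement;
import javax.xml.bind.annotation.XmlRootElement;

// Illustrative only: a canonical Customer that carries every field any
// consuming system might need, not just what one system stores.
@XmlRootElement(name = "Customer")
public class CanonicalCustomer {
    @XmlElement(required = true) public String customerNumber; // systems A and B
    @XmlElement(required = true) public String customerName;   // systems A and B
    @XmlElement public String address;  // only some systems need these
    @XmlElement public String city;
    @XmlElement public String state;
    @XmlElement public String country;
}

Systems that don't care about an element simply ignore it.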

All of the work that we've done actually started from external B2B
integration and has worked itself back into the rest of the
organization.  In our case, we participate in RosettaNet, which
provides standard XML formats (and more) for transaction types that
are specific to the electronics industry.  These schema represent the
data format that our customers expect to receive from our systems.
Our internal systems don't store data in the same format, so what we
do is this: data from source systems is first converted into our
canonical format based on transaction type and temporarily stored in
an XML database.  When it's time to send to a customer, we pull the
data from the XML db and transform it into the RosettaNet-specific
format (which is almost never really standard), then the data is sent
on to the customer.  We also run the same process in reverse: we get
customer data in RN format, transform it into our canonical format
before storing it in our XML db (most of the time), then transform it
again for our internal systems, whether they be SAP or our
manufacturing execution systems.
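
The canonical-to-RosettaNet step itself is nothing exotic - conceptually
it's just an XSLT transform.  Here's a bare-bones sketch using the
standard JAXP API (the file names are placeholders, not our real
stylesheets):

import java.io.File;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

// Bare-bones sketch: apply a stylesheet to a canonical document to
// produce the partner-specific format.  File names are placeholders.
public class CanonicalToPartner {
    public static void main(String[] args) throws Exception {
        Transformer t = TransformerFactory.newInstance().newTransformer(
                new StreamSource(new File("canonical-to-rosettanet.xsl")));
        t.transform(new StreamSource(new File("customer-canonical.xml")),
                    new StreamResult(new File("customer-rn.xml")));
    }
}

The reverse direction is the same call with a different stylesheet.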

How you actually connect a DB to your ESB really depends.  In some
cases, our ESB will reach directly into a DB (usually staging tables)
and extract the data it needs for a message.  We prefer it when
systems push data to the ESB, but that isn't always possible in our
environment.  Where we do pull data directly, it's done using JDBC.
We created a generic POJO that does the extraction, and it simply
becomes a step in a "sequence" or "flow".
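
Conceptually, the extraction POJO isn't much more than the sketch below
(stripped down - the query and connection details are placeholders, and
the real thing has error handling and logging):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Stripped-down sketch of a generic extraction step: run a query
// against a staging table and hand the rows back as maps for the next
// step in the flow.
public class StagingExtractor {
    public List<Map<String, Object>> extract(String jdbcUrl, String user,
            String password, String sql) throws Exception {
        List<Map<String, Object>> rows = new ArrayList<Map<String, Object>>();
        Connection conn = DriverManager.getConnection(jdbcUrl, user, password);
        try {
            PreparedStatement stmt = conn.prepareStatement(sql);
            ResultSet rs = stmt.executeQuery();
            ResultSetMetaData meta = rs.getMetaData();
            while (rs.next()) {
                Map<String, Object> row = new HashMap<String, Object>();
                for (int i = 1; i <= meta.getColumnCount(); i++) {
                    row.put(meta.getColumnName(i), rs.getObject(i));
                }
                rows.add(row);
            }
        } finally {
            conn.close();
        }
        return rows;
    }
}

The ESB just runs that as one step in the flow and hands the rows to
whatever step comes next.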

For governance, there are really two types that come into play -
design time and runtime.  What you are talking about regarding
availability, etc. is handled by runtime governance.  You don't want
to build this yourself unless you have to.  There are decent systems
out now that handle this (Software AG has a good one, as does
Amberpoint).  Most governance solutions include UDDI, but the
directory is really only a minor part of their overall functionality.
It can be difficult to wrap your brain around all of the SOA
governance stuff, but once the lightbulb goes on, it gets easier.
Both SAG and Amberpoint regularly offer webcasts and sometimes local
seminars on SOA governance.  I suggest checking them out for some free
info on exactly what they offer and the problems they are trying to
solve.  Eventually you'll get to the point where you realize that
services alone are only part of the story, and what's actually more
important is that you treat them as manageable resources (sounds like
this is what you are saying), where each service has an SLA, security,
auditability, versioning, etc.

I hope this helps.  If you have more questions, keep them coming.
This is fun ;-)

-Rob

