I don't consider it harsh; I think that at some level we're actually
agreeing and that you've misunderstood what I've said.  Read on...

On Wed, 23 Jul 2008 19:10:46 +0200, Frans Bouma <[EMAIL PROTECTED]> wrote:

<snip>
>        Of course there's coupling, they represent the same thing!

Yes, they represent the same things; but coupling to their implementation
is bad.

>        So you have this:
>Abstract entity E
>        -> is represented in the relational schema by 1-n tables/views
>        -> is represented in the class model by 1-m classes.

>        So the point isn't 'when the db changes a class has to change and oh
>boy this implies coupling and therefore it's bad'. The point is: E changes! So
>E's changes should be reflected in its physical representations: both in the
>DB and in the class model. How, that's up to the context of how the entity is
>represented. Perhaps a change has no effect in code but does in the DB, or has
>an effect in code but no effect in the DB.

It depends on the type of coupling whether changes ripple through the
system.  An entity directly coupled to the DB schema through some
nebulous "code generator" means every change in the DB schema will likely
mean a change in the entity.  With a mapping layer (an OR/M,
application-specific mapping, a Repository pattern implementation, or some
combination of these), you decouple the implementations (yes, they're
still coupled, but more indirectly) and make it less likely that changes
will ripple throughout the system.
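
To make that concrete, here's a minimal sketch of what I mean by a mapping
layer absorbing the schema's shape (I've used Java, and the
Customer/CustomerRecord/CustomerMapper names are made up; substitute
whatever your stack actually looks like):

    // Domain entity: shaped by the abstract concept, not by the table.
    class Customer {
        private final String id;
        private String displayName;

        Customer(String id, String displayName) {
            this.id = id;
            this.displayName = displayName;
        }

        String getId() { return id; }
        String getDisplayName() { return displayName; }
    }

    // Row shape: mirrors whatever the schema happens to look like today.
    class CustomerRecord {
        String customerId;
        String firstName;
        String lastName;
    }

    // The mapper is the only code that knows both shapes.
    class CustomerMapper {
        Customer toEntity(CustomerRecord row) {
            return new Customer(row.customerId,
                                row.firstName + " " + row.lastName);
        }
    }

If the DBA later renames or splits those name columns, CustomerRecord and
CustomerMapper change, but Customer and everything consuming it don't.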

I should have been clearer about what I described with the blanket term
"changes".  I mean one-sided changes.  If you're changing the abstract
entity, that change is realized as changes to the entity type AND the DB
(and, likely, the infrastructure in between)--which isn't a one-sided
change.  If this nebulous "code gen" creates a one-to-one mapping from
table to type, or from column type to property type, you're very tightly
coupled.
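
For contrast, this is the kind of shape I mean by a one-to-one generated
mapping (again a made-up example; assume a generator that emits one type
per table and one property per column):

    // Generated straight from the CUSTOMER table; every column becomes a
    // property, whether the domain cares about it or not.
    class CustomerEntity {
        long customerId;     // CUSTOMER_ID
        String firstName;    // FIRST_NAME
        String lastName;     // LAST_NAME
        String addrLine1;    // ADDR_LINE_1, here only because the column is
    }

    // Business code ends up depending on column-level details:
    //   String label = customer.firstName + " " + customer.lastName;
    // Rename or split a column, regenerate, and every line like that breaks.

That's the one-sided ripple I'm objecting to.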

>        That's also why O/R mapping theory even works: because class and table
>(or tables) represent the same thing, you can pass data in class instance to
>table and vice versa. If class and table wouldn't represent the same thing,
>don't give meaning to the data that the data is an instance of the same
>entity, you can't use an O/R mapper at all, because you're passing data back
>and forth but the data suddenly gets completely different meaning, or better:
>it has meaning in a table row, but has no meaning in the class instance, or
>vice versa.

I didn't intend to imply that an OR/M isn't sufficient to implement this
decoupling.
>
>        One of the critical points about using a database is that the tuples
>of data inside the database are just bits and represent something as soon as
>meaning is given to them, e.g. they're seen as entity instances in a table
>(rows)

No, data in the database is a specific implementation of a representation
of concepts.  That implementation gives the data meaning in that context;
meaning that may or may not need to be translated into another context.
It's that implementation that code can become coupled to.

<snip>

>
>> Can software that works be written this way?  Of course (so, yes, this isn't
>> really about capability, its about responsiveness to change).  But, it
>> means changes to one aspect of the system ripple through the rest of the
>> system.  This makes it difficult, costly, and time-consuming to make
>> changes.

>        Of course not. Only people who have no clue what the entity class and
>the table in the schema represent suffer from this.

And this is what I'm talking about.  If you're using an OR/M and mapping
classes to tables, then I'm simply reinforcing what you're doing, i.e.
you're decoupling database implementation from business layer entity
implementation.  I don't see how you can do that when classes are
generated from DB schema...

<snip>

>        Not true, any change to an entity could ripple through in your code,
>using repositories or not, after all: you consume the result objects a
>repository produces in code outside the repository. Don't think that that code
>is safe from a change in an entity, it's not. Even worse: if you think you can
>avoid it, you run the risk that the code you wrote LOOKS like it contains data
>which are instances of a given entity, but in practise that's not the case.
>

Every change to your abstract entity must ripple through the system; but
you're changing the abstract concept, not the implementation (yet).  When
the DB drives the whole process, it always influences (leaks into) the
implementation of the rest of the system.  People should be thinking about
the behaviour of the abstract entity, and that should drive both the
business layer implementation and the DB schema.  That way the DB
representation can do what it needs to do for that particular DB or
scenario more independently of the rest of the system.  If that means
introducing a new table to implement that, so be it; but the existence of
that new table isn't reflected by anything in the business layer.  How do
you deal with that scenario if your entity code is a generated
representation of the DB schema?  By default, generating code from DB
schema is one-type-per-table, and isn't detailed enough to define
behaviour, making it really hard not to influence your domain model (both
the abstract entity and the business layer implementation of it).
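
As a sketch of that scenario (made-up names; assume the DBA split an
order's shipping details into a new ORDER_SHIPPING table purely for the
DB's benefit), the repository is where that split stays hidden:

    import java.util.Map;

    // One abstract entity in the business layer, even though the DB now
    // stores it across two tables.
    class Order {
        final String orderId;
        final String shippingAddress;

        Order(String orderId, String shippingAddress) {
            this.orderId = orderId;
            this.shippingAddress = shippingAddress;
        }
    }

    interface OrderRepository {
        Order findById(String orderId);
    }

    // Only this class knows the ORDERS/ORDER_SHIPPING split exists.
    class SqlOrderRepository implements OrderRepository {
        public Order findById(String orderId) {
            // The real query would be a join:
            //   SELECT o.ID, s.ADDRESS FROM ORDERS o
            //   JOIN ORDER_SHIPPING s ON s.ORDER_ID = o.ID WHERE o.ID = ?
            Map<String, String> joinedRow = runQuery(orderId);
            return new Order(joinedRow.get("ID"), joinedRow.get("ADDRESS"));
        }

        private Map<String, String> runQuery(String orderId) {
            // Placeholder so the sketch stands alone; real code hits the DB.
            return Map.of("ID", orderId, "ADDRESS", "unknown");
        }
    }

Nothing above the repository interface ever learns that the new table was
introduced.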


>        Though you again seem to miss the point: a DB schema change doesn't
>fall out of the sky: an abstract entity definition has changed and it has had
>an effect on how the relational model representation of this entity: some
>changes are necessary. To be able to work with the changed entity (which is
>the abstract definition) in your code, you too have to check if the
>representation of the entity (in one or more classes) needs a change. That's
>not a given. If it does, you have to make the change there too. That's not
>strange, as both represent the same entity, and THAT entity got changed.

You've missed my point; see above--I should have clarified "changes"
better.  The DB will change over time independently of the domain model.
As load increases, the admin might find that the schema needs to change to
accommodate performance, space, etc.  Changing the abstract entity is
different; that will drive changes to both the business layer and the DB.
And vice versa: a change to the business layer implementation of an entity
(when the abstract entity isn't changing) is an independent change.  But
how do you manage that in a strictly business-layer-entity-generated-from-
DB-schema environment?
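
And a sketch of that second direction (again made-up names): adding
behaviour to the business-layer entity without touching the schema or the
mapping at all:

    class Invoice {
        final double amount;
        final double taxRate;

        Invoice(double amount, double taxRate) {
            this.amount = amount;
            this.taxRate = taxRate;
        }

        // New business-layer behaviour; no DB or mapping change required.
        double totalWithTax() {
            return amount * (1.0 + taxRate);
        }
    }

That's the kind of change I mean by an independent business-layer change.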

<snip>

As for the rest of your rant (nothing personal), see my clarifications
on "changes" and "generating code from schema".
