Ron,

I'd like to reply to you, even though you didn't ask anything, just
commented on my mail. I know you don't agree with what I wrote, but maybe
this gives more context. Please see my replies inline.

Gyorgy Bozoki

> -----Original Message-----
> From: Discussion of advanced .NET topics.
> [mailto:[EMAIL PROTECTED] On Behalf Of RYoung
> Sent: Thursday, November 16, 2006 05:24
> To: ADVANCED-DOTNET@DISCUSS.DEVELOP.COM
> Subject: Re: [ADVANCED-DOTNET] Data Structures in .Net?
>
> > If you want to have an object with a FirstName and LastName
> attribute,
> > create an object like that, with two private string
> members and two
> > public properties. If you then need a FullName property,
> create a new
> > readonly property and return the concatenation of the two
> former; do
> > *not* use any padding in your object.
>
> I can agree that an object in the business realm is the
> correct way to go.
> But what's to say the converter you mention in the next
> paragraph can't be an object with property accessors to a
> character buffer?

Having such a converter object ties the physical implementation to the
logical meaning of the data. In the physical, legacy storage, the data may
be persisted in a fixed-width file in some way. If my business object then
reads directly from that storage, it always has to trim the strings when it
returns them to client code, etc. It can be done, but it's a lot of
unnecessary work, because the logical layer is constantly trying to keep up
with the physical format. Then, if the physical storage changes (say, a
field width changes), all the code that keeps the in-memory strings properly
padded has to change with it. I understand that one could use some kind of
mapping table to make sure the code uses the right lengths all the time, but
again, it's a lot of work for no gain: the code will still only do the very
basics of reading and writing properties.

Jon Rothlander (the OP) has a *logical* need for a FirstName and LastName
property, so that's what he should use. He also has a *physical* need: to
store them in the legacy, padded format. The reason for using business
objects in the first place is to decouple these two needs - they are two
distinct layers in the final application.
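To make that concrete, here's a minimal sketch of the logical-layer object in C# (the class and member names just follow the FirstName/LastName example from the quoted advice; this isn't Jon's actual code):

```csharp
// Purely logical business object: no padding, no knowledge of field widths.
public class Person
{
    private string firstName;
    private string lastName;

    public Person(string firstName, string lastName)
    {
        this.firstName = firstName;
        this.lastName = lastName;
    }

    public string FirstName
    {
        get { return firstName; }
        set { firstName = value; }
    }

    public string LastName
    {
        get { return lastName; }
        set { lastName = value; }
    }

    // Read-only property: just the concatenation of the two, computed on demand.
    public string FullName
    {
        get { return firstName + " " + lastName; }
    }
}
```

Note there's nothing here about widths or padding - that knowledge lives entirely in the layer below.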

When one creates an object in the logical layer that internally mimics the
behavior of the physical layer, the distinction is lost and the code becomes
a lot more complicated and intertwined: the logical layer is no longer
purely logical, because it carries some of the details of the lower,
physical layer. If that layer changes, the change ripples up into the
logical layer. I could describe this system as a 3-tier program: the
business object is the business layer, the legacy application is the
database (it stores and retrieves data in some internal way), and the
converter object is the "database layer" that talks to the business layer
and the database.

This is the reason I think a converter object should be used: it provides
very controlled access between the two layers. If one layer changes, the
only things it affects outside of itself are these converter objects. All
the rest of the code will work without changes. That's a huge thing!
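A sketch of what such a converter might look like in C# (the widths, names, and the minimal Person stand-in are all made up for illustration; a real converter would presumably pull the widths from that mapping table or a config file, and validate lengths):

```csharp
// Minimal stand-in for the business object described above.
public class Person
{
    public string FirstName;
    public string LastName;
    public Person(string first, string last) { FirstName = first; LastName = last; }
}

// Hypothetical converter: the ONLY code that knows the physical field widths.
public class PersonRecordConverter
{
    // If the legacy file layout changes, only these numbers change.
    private const int FirstNameWidth = 20;
    private const int LastNameWidth = 30;

    // Fixed-width record -> business object (strips the padding).
    public Person FromRecord(string record)
    {
        string first = record.Substring(0, FirstNameWidth).TrimEnd();
        string last = record.Substring(FirstNameWidth, LastNameWidth).TrimEnd();
        return new Person(first, last);
    }

    // Business object -> fixed-width record (adds the padding back).
    public string ToRecord(Person p)
    {
        return p.FirstName.PadRight(FirstNameWidth)
             + p.LastName.PadRight(LastNameWidth);
    }
}
```

Client code and the business object never see a padded string; when the mainframe layout changes, only the constants above (or the config they came from) need to move.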

One could say that doing it right is just as much extra work - and it's
true. But! This extra work actually gives one flexibility and isolation from
other parts of the code.

> I don't think issues like the valid length of the string are
> hardly issues at all, that can be moved out to a
> configuration file of sorts. Besides, if it's a legacy system
> with existing applications, how a change in the length
> affects this application is the least concern.

Yes, you're right, but I was not talking about affecting the legacy system.
I was talking about changes in the legacy system affecting the new system
that stores its in-memory data in the legacy system's format. One could say
this is an unlikely scenario, but I don't think so. At my work, we have a
huge system that interfaces with several old mainframe applications that are
still being changed every once in a while. They're also being slowly phased
out, but it'll be at least a year or two before they're finally gone. Our
project has been in production for 5 years now, and we've seen several
changes in the mainframe that *could've* had an effect on our system if we
had stored data in the mainframe format. (I believe - but I'm not sure -
that much of our mainframe data is in flat, fixed-width files.)

> > If you're reworking an old system you might as well do it right.
> > Trying to imitate COBOL behavior is certainly wrong in any
> > object-oriented language, Java and C++ included.
>
> Isn't it more that C# enables object oriented practices? Sure
> the .NET Framework is object oriented, and we design objects
> by default, but why does that bind us into striving to being
> the best of object oriented programmers, and if the solution
> proposed is not OOP in concept and design then it's deemed wrong?

Don't get me wrong, I'm not an OO-fanatic. I don't think that OO should be
used to solve a problem just because that's the only right way to do it.
However, in my experience, it's a lot easier than using the old approaches. In
the future, OO may be replaced with something very different and more
efficient, but for past practices, we already know their limitations and the
amount of work they require. Since there is no way to change the legacy
systems, we have to work with them - but we can do so by weighing the pros
and cons of possible approaches. In this particular case, I think the
COBOL-like approach is way more expensive from almost any point of view.

===================================
This list is hosted by DevelopMentor®  http://www.develop.com

View archives and manage your subscription(s) at http://discuss.develop.com