Hm - this starts sliding a bit off topic. I will try to keep it on the
list as long as possible, because there is definitely a good look into
some aspects of .NET development here :-)

At the same time, following the threads is getting tedious :-) I
add my comments with $$$$$ this time, but I strongly suggest we restart
with a new email after this round :-)

Thomas Tomiczek
THONA Consulting Ltd.
(Microsoft MVP C#/.NET)

-----Original Message-----
From: Joseph Healy [mailto:[EMAIL PROTECTED]] 
Sent: Monday, 14 October 2002 21:40
To: [EMAIL PROTECTED]
Subject: Re: [ADVANCED-DOTNET] Strongly-Typed DataSets vs.
Strongly-Typed Collections


Thomas, my response is also inline (marked #####)...

While I get the drift of your code, the abstraction of the data layer
appears to be fairly wedded to the CMS architecture.

*** No, the opposite is true :-) See, the only reason why I chose
examples from the CMS is that that was the code I was currently working
on. I have other systems here that use the Broker, too. The CMS is
wedded to the broker, not the other way around. Again, I just wanted to
give samples out of a "coherent surrounding" - otherwise I could have
chosen our address management system, our accounting system in
development, or our license management system :-)

#### [I'm new to this list and have only just found your earlier postings
and your site, so I have a better feel for it all now.] I have seen
several other "attribute-based" object mapping layers, and my take on
them in general is that they look to have some overhead, place some
burden on developers from a design-time/coding perspective, and possibly
incur a performance hit as well. These attribute approaches
[appropriately] place the emphasis on the developers designing objects,
with the mapping code gen'ing/resync'ing with the database when they're
done. How transparent is all this for developers? Having gone down that
road further than most, could you comment on those aspects of using an
"attribute-based" O-R mapping approach - particularly resyncing the DB
after changes to objects, and prospects for hooks/templates for tools
like Visio Architecture, XDE, etc.?

$$$$$ Ok, this is one aspect which bothers me, too. Basically we are as
good as (and I like the approach much better than) any other one, because
as a developer I just type in the code anyway, and the attributes have
the advantage of being in one place - for example to look up precision
values etc.

$$$$$ There is no performance penalty with the attributes - at least not
with the EntityBroker. The EntityBroker basically uses the
attributes ONCE - at startup. Only. It generates more or less static
mappers, substitutes (subclasses of the business objects) and lookup
tables internally, and never looks at the attributes after this.
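The read-attributes-once-at-startup idea could be sketched roughly like
this - a minimal, hypothetical version, not the actual EntityBroker API
(the attribute name, its properties and the cache class are all
illustrative; the real attributes carry much more metadata):

```csharp
using System;
using System.Collections.Generic;
using System.Reflection;

// Hypothetical mapping attribute; the real EntityBroker attributes
// carry more data (precision, nullability, ...).
[AttributeUsage(AttributeTargets.Property)]
public class ColumnAttribute : Attribute
{
    public string Name { get; }
    public int MaxLength { get; }
    public ColumnAttribute(string name, int maxLength)
    {
        Name = name;
        MaxLength = maxLength;
    }
}

public class Customer
{
    [Column("CUST_NAME", 50)]
    public string Name { get; set; }
}

public static class MapperCache
{
    // Built once at startup via reflection; the attributes are never
    // consulted again afterwards - only this lookup table is used.
    static readonly Dictionary<PropertyInfo, ColumnAttribute> map
        = new Dictionary<PropertyInfo, ColumnAttribute>();

    public static void Register(Type t)
    {
        foreach (var p in t.GetProperties())
        {
            var col = p.GetCustomAttribute<ColumnAttribute>();
            if (col != null) map[p] = col;
        }
    }

    public static ColumnAttribute Lookup(Type t, string property)
        => map[t.GetProperty(property)];
}

public static class Program
{
    public static void Main()
    {
        MapperCache.Register(typeof(Customer));
        var col = MapperCache.Lookup(typeof(Customer), "Name");
        Console.WriteLine(col.Name);       // CUST_NAME
        Console.WriteLine(col.MaxLength);  // 50
    }
}
```

After `Register` has run, every later lookup is a plain dictionary hit;
no reflection over attributes happens on the hot path.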

$$$$$ We plan to have an update next year that automatically generates
the database schema out of the object model - taking the work further
down. This is not too simple if you want to allow synchronisation
after schema changes :-) For hooks etc. - no idea. I have not put much
work into this. BUT - as long as the UML tool is template based, you
might be able to get the attributes in. The main problem with them is
that they contain MUCH MORE data than the property itself (like the
maximum length for strings). Pure C# UML tools will not know how to
deal with them. But I think it is important to have this data in the
schema.

$$$$$ Anyhow, back to your questions: I like the attribute-based
approach much better than an external XML file. What CAN you do with the
XML file? Change the table names? No one ever does this, IMHO. Anything
else just creates errors. You can't even change the length of a
string, because the length of the string will also be used in the forms
of the program - and does not change there. External configuration files
give you a flexibility you never wanted in the first place - again, just
my opinion. Transparency - well, there is not much more than what you
see. I am not a fan of transparent persistence, btw. - full transparency
means no control at the same time. I know all these Java approaches which
totally hide the database. IMHO they suck - the EntityBroker does not try
to be invisible. It tries to take most of the burden off the
developer and to give him a good API to work with.

I have a code layer that, given a connection string, generates stored
procedures for the base CRUD actions on a database as well as RUD
actions for unique/foreign keys.  It then gens a fully
SqlParameter(ized), subclassed, ICloneable SqlCommand object for each
one.  As it creates them it stuffs them in a synchronized hashtable and
either hands it back (WinForm app) or stuffs it in the Application
object (ASP.NET).
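The caching scheme described above could look roughly like this - a
generic sketch only (a stand-in `CrudCommand` class instead of a real
subclassed SqlCommand, to keep it self-contained; all names are
illustrative): one pre-built command per generated CRUD action, stored
in a synchronized Hashtable and cloned on the way out so callers never
share parameter state.

```csharp
using System;
using System.Collections;
using System.Collections.Generic;

// Stand-in for the generated, ICloneable, parameterized command object.
public class CrudCommand : ICloneable
{
    public string Text;                       // e.g. generated SP name
    public List<string> Parameters = new List<string>();

    public object Clone()
    {
        var copy = new CrudCommand { Text = Text };
        copy.Parameters = new List<string>(Parameters);
        return copy;
    }
}

public static class CommandCache
{
    // Synchronized wrapper, as in the era-typical Hashtable approach.
    static readonly Hashtable cache = Hashtable.Synchronized(new Hashtable());

    public static void Add(string key, CrudCommand cmd) => cache[key] = cmd;

    // Clone on retrieval so two requests never share one instance.
    public static CrudCommand Get(string key)
        => (CrudCommand)((ICloneable)cache[key]).Clone();
}

public static class Program
{
    public static void Main()
    {
        var insert = new CrudCommand { Text = "Customer_Insert" };
        insert.Parameters.Add("@Name");
        CommandCache.Add("Customer.Insert", insert);

        var a = CommandCache.Get("Customer.Insert");
        var b = CommandCache.Get("Customer.Insert");
        Console.WriteLine(a.Text);                 // Customer_Insert
        Console.WriteLine(ReferenceEquals(a, b));  // False
    }
}
```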

$$$$$ Never talk about a performance penalty to me again :-) The
hashtable lookup for every attribute is a killer. We don't work with this.
Basically our data store is an array of type object, and all properties
know their data index in this array :-) That's hardcoded in the generated
property code, regenerated on every startup. I am thinking of allowing
automatically generated SPs at a later stage, too, but SQL Server 2003
will probably allow us to go the other way and get rid of SQL for CRUD
operations. Also note that things get more complicated with subtypes
dynamically determined by data. One thing you don't do, btw, as it looks,
is handle object identity :-)
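A minimal sketch of that generated-property idea, assuming an
illustrative entity and slot layout (the class name, fields and indices
are invented, not EntityBroker output): the state lives in one object[]
and each property body bakes in its slot index, so no per-attribute
hashtable lookup happens at runtime.

```csharp
using System;

// Stand-in for a generated substitute (subclass of a business object).
public class CustomerSubstitute
{
    // One flat store for all mapped fields.
    readonly object[] data = new object[2];

    // Generated property: index 0 is hardcoded into the accessor.
    public string Name
    {
        get => (string)data[0];
        set => data[0] = value;
    }

    // Generated property: index 1 likewise.
    public decimal CreditLimit
    {
        get => (decimal)data[1];
        set => data[1] = value;
    }
}

public static class Program
{
    public static void Main()
    {
        var c = new CustomerSubstitute { Name = "Jones", CreditLimit = 500m };
        Console.WriteLine(c.Name);         // Jones
        Console.WriteLine(c.CreditLimit);  // 500
    }
}
```

The point of the flat array is that the mapper can fill or read a whole
entity as one block, while each property access is a constant-index
array read.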

*** I have decided to go against SPs FOR NOW (meaning: the SQL mapping
layer is free to generate and use them, but my current one does not).
The performance benefit is not yet big enough :-)

##### Given it's all generated code, I'll grant you it matters little
where the generated code resides at this point, though I believe the
performance benefits can accrue in a large [hierarchical] implementation
to the point where SPs can make a difference. But again, at this point
that's an optimization issue and not a big deal to me.

$$$$$ I am not so sure about that. The point is that recent versions of
Oracle and SQL Server have made significant progress in caching query
plans. As long as you don't rewrite the SQL on every request, but use
parameters, the query has a high chance of NOT being reanalysed. With
this, the performance benefit of stored procedures gets way lower. We
plan to add this as an option in a later version, though. And again,
YUKON will probably allow a completely different approach.
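The plan-caching point above comes down to the SQL text the server sees.
A tiny illustration (table and column names are made up): with
parameters the text is byte-for-byte identical on every request, so a
cached plan can match; with concatenated values, every request produces
new text and invites a fresh analysis.

```csharp
using System;

public static class Program
{
    // Concatenated SQL: the text differs per value.
    static string Concatenated(int id)
        => "SELECT * FROM Customer WHERE Id = " + id;

    // Parameterized SQL: the text is constant; @Id is bound separately.
    static string Parameterized()
        => "SELECT * FROM Customer WHERE Id = @Id";

    public static void Main()
    {
        // Different text per request - likely reanalysed each time.
        Console.WriteLine(Concatenated(1) == Concatenated(2));  // False

        // Identical text every time - a cached plan can be reused.
        Console.WriteLine(Parameterized() == Parameterized());  // True
    }
}
```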

My next natural step is an O-R mapping layer in lieu of seeing MS
ObjectSpaces anytime soon.  I've looked at a dozen .NET O-R mapping
layer products/projects/architectures now and have done initial
prototypes with both your approach and the DataSet-centric approach Ben
has taken. I definitely agree MS dropped the ball here, and I wish they
would just get on with ObjectSpaces and spare us all the hassle.


$$$$$ Won't happen until 2.0, and even then I would probably prefer the
EntityBroker at this point - they will, IMHO, stay "generic", while by
that time I plan to have the lowest layer of our product work IN the
database server. Also, I am not sure how long they will take to add a
cross-transaction cache, which I am making the drafts for right now -
and this is where the performance fun starts.

*** ObjectSpaces will come with V2 of the framework. Still, IMHO, our
product will be superior then, because I plan to go the way of fully
integrating with SQL Server YUKON, while MS probably goes the "generic"
way.

##### I hope Yukon is the reason we haven't seen ObjectSpaces yet and
that it will be released concurrently and fully integrated with Yukon.

$$$$$ No chance. My word on ObjectSpaces is .NET 2.0 (read it
somewhere), and by that time the next version after YUKON is probably in
private beta already. ALSO - I don't think MS will limit themselves to
YUKON only. I hope, though, that YUKON will support things like table
inheritance, which would make a lot of work easier for the broker.

##### Kind of pointless if it isn't, given it would be the prime vehicle
for .NET/Yukon integration.  Or maybe they'll scrap it altogether and
do some other form of XML/XSD-based serialization to Yukon. Hopefully
all .NET servers will employ the same O-R mapping technology; in fact,
I'd prefer they take even longer to ensure they do.

$$$$$ This would be pretty stupid, if you refer to the .NET Server
versions (operating systems). I hope, too, that MS gets a grip on this,
but SADLY their past has been weak architecturally. Anyhow, when this
happens the broker will integrate as well as possible - even if this
just means being a narrow API wrapper on top of their product for
existing customers.

I tend to be fairly agnostic, other than having a desire to have the
entire layer driven off an XSD repository (a DLL version of XSD would be
handy...).  That said, for less complex requirements I simply embed a
DataRow in objects, and for the collection object I use an embedded
DataSet.

$$$$$ I never want to work with untyped data again, except for reports
and other stuff. NEVER EVER. I prefer the compiler to find my errors as
early as possible.

*** I thought about using an external file, but then I found out two
things:
(a) You can't change most of your mapping info anyway. For strings, our
mapping layer handles SOME of the database functionality, like making a
char field "fixed length" by padding with spaces etc.
(b) As a programmer I prefer to have everything in one place.
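Point (a) - emulating a fixed-length char field in the mapping layer -
could be as simple as this sketch (the helper name and truncation rule
are assumptions, not the actual EntityBroker behaviour):

```csharp
using System;

public static class Program
{
    // Pad with spaces up to the declared length; truncate if too long.
    static string ToFixedChar(string value, int length)
        => value.Length >= length
            ? value.Substring(0, length)
            : value.PadRight(length, ' ');

    public static void Main()
    {
        Console.WriteLine("[" + ToFixedChar("abc", 5) + "]");  // [abc  ]
        Console.WriteLine(ToFixedChar("abcdefg", 5));          // abcde
    }
}
```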

*** We will later go even further. The generated schema (and generated
classes) will be compilable into a DLL. And a schema synchronisation
tool will be able to generate the database model from the embedded
schema :-)

##### I appreciate both those comments, but in general I like the way
XML Schema keeps everyone honest, and then there is the Web
Services/WSDL angle that starts adding some gravity to gen'ing the XSDs
as well.

$$$$$ Both wrong. Now, for the XML schema - the EntityBroker supports
dumping its schema set down into an XML file. It is more a debugging
tool for me (and other developers), because basically it allows me to
see what the EntityBroker sees. I don't really like that users of the
objects would be able to change this. Also, finally, web services are
something I DON'T EVER WANT to be entangled with my business object
layer. Web services are for external application attachments ONLY -
that's their design.

You seem to have been fairly thorough and thoughtful in developing
your architecture (I particularly like the Condition object [if one is
going to use in-line SQL statements]).  Your general tack reminds me
of Jeremy Miller's approach
(http://www.csharptoday.com/content.asp?id=1770).

$$$$$ We have plans to allow queries in other languages. These would
then compile into a query object graph - that was the most sensible
choice.
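A query object graph of Condition nodes might look roughly like this -
a rough sketch only, not the actual EntityBroker Condition API (all
class and method names here are invented): conditions combine into a
tree, which is then "compiled" down to SQL text.

```csharp
using System;

// Base node of the query graph.
public abstract class Condition
{
    public abstract string ToSql();

    public static Condition Eq(string field, object value)
        => new Comparison(field, "=", value);
    public static Condition And(Condition left, Condition right)
        => new Combined(left, "AND", right);
}

// Leaf: field <op> value. A real implementation would emit a
// parameter placeholder here instead of inlining the literal.
public class Comparison : Condition
{
    readonly string field, op;
    readonly object value;
    public Comparison(string f, string o, object v)
    { field = f; op = o; value = v; }
    public override string ToSql() => $"({field} {op} '{value}')";
}

// Inner node: left <AND/OR> right.
public class Combined : Condition
{
    readonly Condition left, right;
    readonly string op;
    public Combined(Condition l, string o, Condition r)
    { left = l; op = o; right = r; }
    public override string ToSql() => $"({left.ToSql()} {op} {right.ToSql()})";
}

public static class Program
{
    public static void Main()
    {
        var q = Condition.And(Condition.Eq("City", "Berlin"),
                              Condition.Eq("Active", 1));
        Console.WriteLine(q.ToSql());
        // ((City = 'Berlin') AND (Active = '1'))
    }
}
```

The appeal of this shape is that a parser for any query language only
has to build the tree; SQL generation (or any other backend) stays in
one place.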

*** Hm, I don't read csharptoday, so I can't read this. But I was
basically following Scott Ambler :-)

##### Jeremy's cut at an O-R mapping layer is pattern-based to the hilt
and fully Ambler-ized.  It is well worth the read for anyone going down
this road - can't recommend it enough.  His layer also has
"lazy-lookahead" child instancing, and he generally gets right to the
heart of the impedance mismatch.

$$$$$ I don't think that he has anything to tell me - sorry, I don't
want to sound high-nosed here. But in the last two years I have probably
read everything on the web besides his thing, including Scott Ambler's
site. So I don't think I can stand more basics here :-) I consider
myself to be one of the people who made this work :-)

Are you commercializing this?  If not do you have any more detail you
can share (an article perhaps), or if you are, perhaps a white paper on
the general architecture.

*** It will be commercial soon :-) Not expensive, though, and with a
free-for-noncommercial license.

##### Good! Then possibly we'll see an article or white paper on the
architecture sometime soon -- [ not so subtle hint ;-) ].  Thomas, it
sounds great and I look forward to getting a better look at it.

$$$$$ Check our website :-) A white paper is a great idea. Anyhow, the
documentation is public now, as well as the first beta versions.

Thomas

You can read messages from the Advanced DOTNET archive, unsubscribe from
Advanced DOTNET, or subscribe to other DevelopMentor lists at
http://discuss.develop.com.
