> > Ian Griffiths wrote:
> > >.NET remoting requires both ends to share type information.  This
> > >entails a degree of coupling between your systems that is likely
> > >to be highly undesirable.
>  
> Frans Bouma replied:
> > Everybody who has written a remoting setup knows that this is easily
> > circumvented by defining an assembly with just interface definitions
> > which are implemented by the server and consumed by the client.
> 
> That mitigates the problem, but it doesn't solve it.  Sooner 
> or later you're going to want to evolve your remote 
> interface, which involves getting new versions of the 
> interfaces to both ends.

        Yes, new versions, but that doesn't mean the old ones are
removed. You see, if a remoted service provides services using interface
IFoo, it should stick with that interface until the service is removed
or no clients are using it anymore. If you want to upgrade your service
to IFoo2, you can do that of course, but the service should keep
offering IFoo as well. A webservice has to do this too, otherwise
webclients created with vs.net will fail just the same: if a webmethod
Bar() is called, the client expects Bar() to do ABC. If Bar() suddenly
does ABD, the client will fail; hence the tight coupling between client
and service.
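
        To make that concrete, here's a minimal sketch; IFoo/IFoo2 and
their members are hypothetical names, purely for illustration:

// shared interface assembly, e.g. abc.dll
public interface IFoo
{
    void Bar();            // the contract v1 clients compiled against
}

// added later; IFoo itself is untouched
public interface IFoo2
{
    void Bar();
    void BarEx(int extra); // the upgraded functionality
}

// the server keeps serving both versions side by side
public class FooService : MarshalByRefObject, IFoo, IFoo2
{
    public void Bar() { /* original ABC behaviour, unchanged */ }
    public void BarEx(int extra) { /* new behaviour */ }
}

Old clients cast their remoted proxy to IFoo and never see IFoo2; new
clients cast to IFoo2. Neither group forces a recompile of the other.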

> In a situation where the same team owns both ends of the link
> and can coordinate the releases of clients and servers, this
> is not a problem. But not everyone gets to work in that scenario.  In
> large organizations, there's every chance that even though the
> client and server may both be running .NET, the software on
> each box may be owned by different teams.
> Coordinating updates is a real headache here when you have to
> share type information.

        Why? You can just add IFoo2 to your remoting service, which
offers the upgraded functionality. Clients which can handle the IFoo2
functionality, and are built against it, can use it; older clients don't
know about IFoo2 and will use IFoo. I don't see the problem.

> It's not exactly a piece of cake with web services either of course.
> Evolving public APIs is hard.  But web services certainly 
> make it a lot easier, because there isn't that hard 
> dependence on a specific assembly being available at both ends.

        Erm... if the developer has written his webservice-consuming
client with vs.net, the client has specific code inside itself targeting
a specific version of the webservice. Updating that webservice, by
adding a parameter to a method for example, will make the client fail as
well. In other words: you also need to support multiple interfaces on
the webservice. And why not, it's a proven technique from COM.

> > You don't need to store type implementations on server and client, 
> > just the interfaces.
> 
> As I said, you need to share type information - interfaces 
> are types too.  You don't need to share implementations, but 
> then I never said that you did.  The fact remains that you 
> need to get the interface definitions to both ends.  This 
> involves having equivalent .NET assemblies at both ends.  
> (The fact that the types are all interfaces doesn't make that 
> any easier - it's a file that has to be there either
> way.)

        I understand, but why is that a problem? I have a service; if a
client wants to connect to it, it has to implement this interface I
distribute in abc.dll. What's the big deal? I don't see it. We're not
talking about anonymous random clients on multiple platforms which all
should be able to consume the service, because we already agreed that an
XML producing/consuming service together with SOAP is the only way at
the moment to make that possible.

> The reason for using interfaces is to minimize coupling 
> between client and server, with the hope being that you can 
> evolve both relatively freely whilst minimizing the number of 
> changes you need to make to your shared interface component.  
> But sooner or later, you're likely to find that you want to 
> change something in that interface component, and that's 
> where the trouble starts.

        No, that's where the upgrade process starts to a new interface.
Once you've released an interface, you can't change it, OR you have to
be in the position where you can control the fact that you upgraded an
existing interface, for example by forcing recompilation of the client
as well.

        The interface dll is there to make sure the client talks to a
given version of the server! If the server process offers 10 different
services, it can do that by implementing 10 interfaces; the client only
has to talk to one to use one service.

> (Of course to do some of the things you were talking about 
> earlier - passing cyclic object graphs with polymorphic 
> members, you *are* going to need class implementations on 
> both ends.  But I'd strongly recommend against such a design 
> in most remoting scenarios.)

        Sometimes you have to. If I want to pass on an entity graph with
customers, which have Order collections, which have OrderRow
collections, I have to, OR I have to use non-intrusive code where the
objects are all written with ArrayLists and simple classes, and where
I've manually added all kinds of XmlSerializer attributes to control the
XML processing.

        Cyclic references are KEY for a good O/R mapper. If I do this:
// fetch customer, on server. 
CustomerEntity myCustomer = new CustomerEntity(customerID);
myAdapter.FetchEntity(myCustomer);

// new order, on client, has received myCustomer from server.
OrderEntity myOrder = new OrderEntity();

// set customer as the related customer for this order
// will sync FK-PK fields
myOrder.Customer = myCustomer;

then I also want this to be true:
myCustomer.Orders.Contains(myOrder);

and when, instead of myOrder.Customer = myCustomer;, I do this:
myCustomer.Orders.Add(myOrder);

I want this to be true:
(myOrder.Customer == myCustomer); 

These relations are object relations, and cyclic. These references are
essential for, for example, key syncing, where a new entity has a
reference to (and relation with) another new entity which has an
identity key field: when that entity is saved during the persistence
action of the complete graph, the referencing object's FK field has to
be set equal to the identity PK field just created.
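
        A hypothetical sketch of that key sync during persistence of the
graph (SaveEntity is an assumed method name, in the style of the
FetchEntity call above):

// on the server: save the graph, customer first
myAdapter.SaveEntity(myCustomer);   // db assigns the identity PK

// the mapper syncs the new PK into the referencing order's FK field,
// so this now holds without any manual work:
// myOrder.CustomerID == myCustomer.CustomerID
myAdapter.SaveEntity(myOrder);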

        On the client, this object graph is built, not persisted. The
graph is then sent to the server to get persisted. You can't pass this
info now in a webservice, even though you want to.

        People can say "I'd suggest not to do that", but why is it my
problem that the XmlSerializer can't handle cyclic references?
Apparently webservices are limited to services which do not accept
object graphs with cyclic references. However, that's not a limitation
of the webservice idea, it's a limitation of the XmlSerializer. My
WriteXml() routine in an entity can write the complete object graph to
xml, including cyclic references, so why can't the XmlSerializer do
that?

        I know, XML doesn't support cyclic references per se, but aren't
cyclic references easily created in XML by adding a tag which
semantically symbolizes a reference to a node? True, it's proprietary
stuff if you build that kind of support into the XmlSerializer, but it
also has proprietary code for the DataSet, so why not this too, for
services handling .NET clients, where the XmlSerializer isn't used for
XML output to XML consuming clients on e.g. Java or a Palmtop, but for
clients which do not want to consume the XML: they want to consume the
actual object, or better: the object re-incarnation at the other end :)
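
        For illustration, a cyclic Customer/Order graph could be written
with id/ref attributes like this (the attribute names are made up; this
is not an existing XmlSerializer feature):

<Customer id="c1">
  <Orders>
    <Order id="o1">
      <Customer ref="c1" />
    </Order>
  </Orders>
</Customer>

A deserializer resolves ref="c1" back to the node carrying id="c1" and
the cycle is restored. SOAP section 5 encoding does something very
similar with its id/href multi-ref mechanism, which is what the remoting
SOAP formatter uses.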

> > Why is it highly undesirable that systems are coupled?
> 
> The same reason you avoid putting concrete classes in your 
> shared interface DLL.
> 
> The more tightly coupled any pair of systems are, the harder 
> it is to evolve them independently.  (In fact, that's one 
> definition of 'tightly
> coupled'...)  In extreme cases, it is impossible to update 
> anything without updating everything - your distributed 
> system must be deployed monolithically.  The people 
> responsible for keeping a company's IT systems running are 
> usually (rightly) reluctant to do such big-bang deployments, 
> because it's a highly risky thing to do.  But if you have low 
> coupling, there's a much better chance that you'll be able to 
> update components independently.

        I don't see these problems. If your service keeps offering
interface IFoo, the client is able to contact it and use it, or I am
missing something completely. If the service is updated with another
interface, the client doesn't care and doesn't have to know; it connects
to IFoo anyway. COM has this versioning and it works great, see e.g.
DirectX.

> The main reason for defining all your remoting APIs in a
> separate DLL and making them interfaces is to reduce coupling - it
> increases your chances of being able to update either end of
> the connection in isolation.
> However, the success of this ploy depends on being able to 
> keep those shared interfaces stable.  Web services take the 
> decoupling one step further by observing that it's not 
> actually necessary to share the interface types at all.

        ... but the regeneration of the types on the client implies the
same, or not? I mean: the clients do have the types, in code. If the
server changes the type, the client still has the old version as well...
same problem.

        Interfaces shouldn't be changed. There is one exception: if you
control the client, you CAN change an interface; otherwise an interface
can't be changed (it can, but it's not wise to do that). That's not a
limitation, it's common sense, and not only true in a SOA world, but
everywhere a piece of code calls into another piece of code: you can't
simply change the interface of the called code without running into some
trouble.

> > Webservices are also coupled. These types are also known at the
> > client!!
> 
> I disagree with the implied view of the relationship between 
> the XML documents that web services and clients exchange, and 
> the types those services and clients use internally to 
> represent those documents.  You appear to be assuming that 
> both ends must be using the same types.  This isn't actually 
> true; indeed, it is one of the fundamentally important 
> advantages of web services.
> 
> The fact is that with web services, the types the client code 
> uses don't need to be the same as the types the server code 
> uses.  All that matters is that the XML documents they 
> exchange look right.  And there are any number of ways of 
> representing an XML document in code.  (As an extreme 
> example, the client can use the type "text string" as its 
> input to the web service.  I've seen that done extremely 
> successfully in fact - for simple cases StringBuilder can be 
> a great expedient way of building a web service client...)
> 
> Even if the client uses something more sophisticated, then so 
> long as the XML documents it sends to the server conform to 
> the schema the server requires, and the returned documents 
> conform to the schema the client expects, all will be well.  
> The server doesn't care what types the client uses
> internally, it only cares what the XML it receives looks
> like.  This is still a sort of coupling of course, but a
> significantly less restrictive kind of coupling than the need 
> to share type information.  (This doesn't even necessarily 
> require schema to be shared.  With careful design, it is 
> possible for the server to evolve its schemas for incoming 
> documents without breaking existing clients.
> As long as the new schema accepts a superset of the documents 
> acceptable under the old schema, it doesn't matter that the 
> two ends are
> different.)

        Ok, understood. The thing is: you move the problem of keeping an
interface the same to the problem of keeping the xml schema the same. In
both cases, in the situation of a changing service (new functionality is
added, for example), the service has to be rewritten in some areas to
support current clients: in your situation, it has to be able to
produce/consume the same xml schema; in the remoting sample with
interfaces, it has to be able to support the same interface the clients
connect to. You make it sound as if it's easier to get it going, but
that's of course not true :). In the case of a webservice you too have
to make sure the xml produced on the server looks the same as before (a
renamed member has to be serialized the same, for example; a removed
member still has to be supported), and the XmlSerializer isn't going to
do that out of the box.
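
        (For the rename case specifically, the XmlSerializer does have a
manual knob: the kind of attribute annotation I mentioned earlier. A
minimal sketch, with made-up class and member names:

using System.Xml.Serialization;

public class Customer
{
    // renamed in code from 'Name' to 'CompanyName', but still
    // serialized as <Name> so the wire format stays stable
    [XmlElement("Name")]
    public string CompanyName;
}

A removed member, however, still needs hand-written compatibility code.)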

> > In fact, VS.NET regenerates classes for those types at the
> > server when you add a web reference,
> 
> Not everyone consumes web services that way though, even if 
> they are using .NET at both ends.  And even if they are using 
> the wrappers VS.NET generates, this doesn't necessarily 
> preclude the kind of evolution I'm talking about, if you get 
> your schema designs right.  (But it's crucial to be in 
> control of your schemas.  If you regard schemas as being an 
> XML-flavoured definition of the concrete types you are using, 
> then your chances of success here are low, so you may as well 
> just use .NET remoting.)

        Exactly. And if you're not using vs.net's client generating
(WSDL) code, why use webservices at all, as it is then as hard, if not
harder, to embed the service into the client code as with remoting.

> > but these regenerated classes can often be a big pain as they don't 
> > contain deeper logic
> 
> That's by design.  And it's a design choice that was informed 
> by the lessons learnt in the 1990s with CORBA, RMI, and, to a 
> lesser extent, COM.  You really don't want to be sharing 
> class implementation across remoting boundaries in most 
> applications.  (It definitely makes no sense in a web service 
> scenario, because sharing implementations across remoting 
> boundaries is a very highly coupled design approach.)

        Ian, please... stop mixing xml producing webservices for java
clients with webservices which are used to function as a BL tier in an
n-tier .NET application. You know very well I wasn't talking about xml
producing webservices for non-.net clients.

        In the case where a webservice is used as a BL tier, or even a
DAL/BL-Facade tier, you NEED object trees to get across, as I described
above, which ARE the same types on both ends; otherwise you lose EXTRA
functionality which makes life so easy.

        People want to set up a service on their LAN which produces, for
example, accounting entities, which are persisted from/into an
accounting db. This service is then consumed in various applications on
the LAN targeting the accounting db. They just want to pull the entities
out of the service, work with them and push them back to the service to
get them persisted.

        This is often seen as an area where webservices excel. Well...
they do not, if you want advanced functionality on the client, in the
service consuming clients! If you don't want the advanced functionality
offered by the server on the client, go ahead, but who does NOT want to
use advanced functionality on the client if it's available? If the same
setup is used with remoting, the client CAN use the advanced
functionality. With webservices they can't. "Oh, but that's very
advanced... not a lot of people need that..." and similar talk... I've
heard it all. But they're wrong. People want the ease of use of working
with objects which are typed, and they want a service layer which is
transparent.

> > Could you please provide some argumentation why the close
> > binding of a server interface with the client is so bad?
> 
> Yes.  I'll provide two examples.
> 
> First example: Suppose I have a service accessible via .NET 
> remoting that has been up and running for 6 months now. It 
> exposes several different endpoints, and it has a number of 
> client systems using these various endpoints in various 
> different ways throughout my organization.
> Suppose I want to add a new method to one of the remote 
> interfaces it exposes.  

        *rrrrt* :)
        You can't add a method to an existing interface. Lesson 1 of
interface versioning 101. :) If you want to add that method, you have to
upgrade the interface to another version.

> Suppose this method is for the 
> benefit of one particular client piece of software, but that 
> I have lots of other clients out there already using the 
> existing service, but which won't use this new feature.  (I 
> know I'm always berating you for not being concrete enough, 
> so I'll be a little more specific here.  One place I've had 
> to do this was when the management and monitoring API for a 
> service needed modifying but its main operational interfaces did not.)
> Conceptually, there's no good reason for most of the clients 
> to be disturbed. I'm augmenting a feature that most of them 
> don't even use - at most I should only need to update the 
> clients that rely on the interface being changed.  (And in an 
> ideal world, I should only need to update the clients that 
> are going to *use* the new functionality - there's no real 
> reason for existing clients to be affected by the addition of 
> features they're not going to use.  That shouldn't be a 
> breaking change.)  But I'm going to end up with a new version 
> of my remote interfaces DLL.  Would you really be happy if 
> your clients were all running with a different version of the 
> remote interfaces DLL from the version on the server?  
> (Indeed it might not even work - the change in version 
> numbers can be enough to make .NET remoting give up.  But 
> even if it did plough on, it's not the kind of setup you 
> really want on your production systems.)  So you're going to 
> have to update *all* your clients, even though nothing really 
> changed for many of them.

        And why is such a setup not wanted in production systems?
Windows' foundation is built on this mechanism.

        Create a new interface IFoo2, by inheriting from IFoo, and add
the method. The new client compiles against IFoo2, the others stay with
the IFoo mechanism. Doesn't this work? If not, why not, and isn't it a
BUG in the platform if it doesn't work? After all, the whole purpose of
interfaces is that you can keep on using a type while upgrading the
implementing object!
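
        In code, a variation on the earlier sketch, this time using the
inheritance approach just described (names again hypothetical):

// IFoo stays exactly as released
public interface IFoo
{
    void Bar();
}

// IFoo2 extends IFoo, so implementing IFoo2 implies implementing IFoo
public interface IFoo2 : IFoo
{
    void NewMethod();
}

// one server class serves old (IFoo) and new (IFoo2) clients alike
public class FooService : MarshalByRefObject, IFoo2
{
    public void Bar() { /* unchanged */ }
    public void NewMethod() { /* the added functionality */ }
}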

> (Or you're going to have to start introducing multiple remoting DLLs.
> That might be an expedient solution, but seems like the start 
> of descent into chaos - how are your systems going to look 
> after a year of that kind of thing?  In any case, this only 
> allows isolated updates if you put every interface in its own 
> DLL to start with.  If you're starting from a position of 
> having one remote interface DLL, going to multiple DLLs 
> doesn't help - you're still going to have to update every 
> client the first time you do this kind of split.)

        If a person starts developing a scenario where software relies
on the interface definition stated in another assembly, versioning and
upgrade scenarios are one of the basic things which have to be designed
into the project from the beginning. This has nothing to do with
remoting, but with bad software engineering. Bottom line: if your
upgrade of an interface will break the client, don't upgrade the
interface, OR upgrade the client and next time consider a better
approach.

> Compare this with web services.  There is no requirement for 
> shared type information.  The only requirement is that the 
> documents are valid according to the schemas in play.  It is 
> possible to accommodate evolution of the remote API without 
> updating every client every time - introducing a brand new 
> element definition into an existing schema isn't going to 
> cause documents that were valid according to the old schema 
> to become invalid all of a sudden.  (Yes, I know, you could 
> contrive such a thing through careful use of xsd:anyType in 
> the original document.  But for a large and useful set of 
> cases, such modifications are safe.)

        How about removing a member because it is moved into a new class
referenced by a new member? *poof*, schemas don't match, and you have to
add code on the server to keep producing the old schema, code which is
totally unnecessary and only there to support older clients. Also messy.


> Another example: suppose that some of my clients are running 
> v1.0 of the .NET framework and some are running v1.1.  I've 
> had trouble connecting v1.0 and v1.1 systems together with 
> .NET remoting.  (Even if you stick to the rules for avoiding 
> problems with the security changes that came in with remoting 
> serialization in v1.1, I've still seen problems.)  It's not 
> always entirely clear what the degree of support is for 
> deserializing byte streams serialized by a different version 
> of the .NET framework.  One way of looking at this is that 
> even with the .NET Framework at both ends, you don't 
> necessarily have the same platform at both ends...  

        True, but it's MS's code that's buggy in one degree
(deserialization of binary class/struct streams is buggy in .NET 1.0)
and extended in another (deserialization of sorted lists, e.g., in .NET
1.1 vs. .NET 1.0).

> Use of 
> remoting can introduce a new type of coupling that requires 
> me to be running the same *version* of the platform at both 
> ends (or that at least makes it impossible to be completely 
> confident that it's going to work when you have different 
> versions).  Again, this kind of thing doesn't tend to go down 
> well with the IT ops people - even with side-by-side .NET 
> frameworks, you're still going to end up with a big-bang 
> update of all the various clients when you move the server to 
> the new version.

        Yes, that's the aspect of a distributed application:
communication has to be there. :) It's however not the problem of
remoting that MS makes remoting rely on .NET's versioning; that's MS'
problem. The same problems arise if an assembly (DLL) compiled with .NET
1.1 is shipped to a developer using .NET 1.0 (vs.net 2002). No can do.

> With web services, versions of the .NET framework are a 
> non-issue, because the .NET type system isn't involved at 
> all.  A given XML document is either valid or invalid for a 
> given XML schema, regardless of the nature of the runtime you 
> use to generate or consume that document.

        True, this can be a positive thing. However, the real reason is
not the power of XML, but the lack of backwards compatibility in some
areas of .NET 1.1 (which doesn't have to be a bad thing).

[...]
> So this comes back to my original point: sometimes web 
> services will be a more appropriate choice than .NET 
> remoting.  Not always, but sometimes.  That's all I'm saying. 

        However, they also limit you to crippled implementations, and
these limitations are not obvious from the beginning.

> > > Sometimes it is.  But you seem to be saying that web services are 
> > > always the wrong choice, and I really can't agree with that.
> > 
> > As long as XmlSerializer is REQUIRED, webservices are the wrong 
> > choice,
> 
> Now that you've recanted some of your comments on 
> XmlSerializer performance in another email, I'm not sure if 
> you're still standing behind this particular comment.  

        Yes, for 100%.
        The ONLY situation where webservices are ok is where the clients
consume XML and are on another platform / can be on another platform.

> I  still disagree that it has anything to do with XmlSerializer. 

        It has. It should have code to handle cyclic references and
interface typed members. This is not meant for non-.NET clients, it's
meant for .NET clients, consuming object data to reinstantiate the
object. XmlSerializer also contains code to handle the proprietary
object 'DataSet'. Why is there code for that proprietary object, and no
code to handle other proprietary objects? The schema a DataSet produces
is very hard to parse, and the xml the XmlSerializer produces often
lacks detailed information about the type of the data, like the
precision / scale of floats/decimals. That's ok for xml consuming code
perhaps, but it's essential for object reinstantiation.

> > especially in situations where you are communicating between 100%
> > .NET applications.
> 
> Now that I've put forward the two examples above, is this 
> still what you think?

        Yes, for 100%. I don't want to offer shabby, dull objects with
no functionality on the client just because the xmlserializer can't get
the darn data across! I understand it's part of an XML problem, but why
is it MY problem that when I use a webservice project in vs.net, between
a .NET service and a .NET client, xml is used in between which limits
the functionality on the client? This problem is there for a lot of
developers, who want to offer a generic service on the lan for various
client types and think 'webservice is what I need'. No, they need
remoting; webservices aren't meant for that.

> > The XmlSerializer is severely broken as it can't produce Xml for
> > simple classes with interface typed members,
> 
> That's a good thing IMO.  If you're trying to pass *objects* 
> across a remoting boundary, then I believe you're making a mistake.

        Erm... I want to reinstantiate the same types with the same data
at the other end. Why is that a bad thing? Is it a GOOD thing that I'm
forced to work with dull objects without the advanced features I would
have when remoting is used? I don't think that's a good thing, I think
that's a severe limitation.

        I fully understand why it is there, though it also makes me
conclude that webservices are a nice idea, but not for .NET <-> .NET
communication if you just want to work with the OBJECTS the service
produces and want to push back the objects the server understands,
without losing functionality you WOULD have had if you were using
remoting.

        FB

--------------------------------------------------------------------
Get LLBLGen Pro, the new O/R mapper for .NET: http://www.llblgen.com
My .NET Blog : http://weblogs.asp.net/FBouma 
--------------------------------------------------------------------
