Andy,

Thanks for writing. I'm from the Java world, so I'm sure I'm still not
explaining this properly after all this time.

If the use of .NET interfaces is the same as in other technologies, the
work is actually done on the machine serving up the interfaces (the web
service provider), right? In that case, a client asks for work, the
server does the work, and the server finally hands the client back the
results of that work. That is not what needs to happen in distributed
computing.

As I said, I want the client to have a very small footprint, just a
shell. It knows how to read a manifest and run a program from the DLL
(a collection of compiled classes) it was given along with the
manifest. The client does all the work and returns the results of that
work to the web service.
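To put that in Java terms (my background), the shell only needs an agreed-upon interface and a way to load a class by name from whatever code bundle it was handed. This is just a sketch, with names I made up (WorkUnit, Shell, WordCountWork); in .NET the loading step would be Assembly.Load plus reflection rather than Class.forName:

```java
// Hypothetical contract between the thin shell and any downloaded code.
interface WorkUnit {
    String execute(String input);
}

// Stand-in for a class that would normally arrive inside the downloaded
// bundle (a JAR here; a DLL on the .NET side).
class WordCountWork implements WorkUnit {
    public String execute(String input) {
        return Integer.toString(input.trim().split("\\s+").length);
    }
}

// The thin client: it only knows how to look up the class named in the
// manifest, instantiate it, and run it. All real work happens here, on
// the client, and only the result goes back to the web service.
public class Shell {
    static String run(String className, String input) throws Exception {
        WorkUnit unit = (WorkUnit) Class.forName(className)
                .getDeclaredConstructor().newInstance();
        return unit.execute(input);
    }

    public static void main(String[] args) throws Exception {
        // The manifest would supply the class name; hard-coded here.
        System.out.println(run("WordCountWork", "count these four words"));
    }
}
```

The point is that the shell never changes: every new kind of job is just a new class name in the manifest plus the bundle that contains it.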

Passing the client a DLL seemed at first to be the ideal solution, and
if the client were asked to do the same job again, it would not need to
ask for the DLL a second time (the DLL would be kept on the client for
a period).
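The caching piece is simple too. Here is a sketch (again in Java, with a made-up BundleCache name) of keeping a bundle around so a repeated job skips the download; real code would add expiry and a version check against the manifest:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical client-side cache: code bundles are kept keyed by job
// name, so a repeated job never re-downloads its code.
public class BundleCache {
    private final Map<String, byte[]> cache = new HashMap<>();
    private int downloads = 0; // counter kept only for illustration

    byte[] get(String jobName) {
        return cache.computeIfAbsent(jobName, name -> {
            downloads++;
            return fetchFromService(name); // only on a cache miss
        });
    }

    int downloadCount() { return downloads; }

    private byte[] fetchFromService(String jobName) {
        // Stand-in for the web-service call returning the DLL/JAR bytes.
        return jobName.getBytes();
    }

    public static void main(String[] args) {
        BundleCache cache = new BundleCache();
        cache.get("wordcount");
        cache.get("wordcount"); // second request is served from the cache
        System.out.println(cache.downloadCount()); // prints 1
    }
}
```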

Really, it's no more complicated than that.

So what are your thoughts on the pieces to make that happen? WCF?

TIA!

pat
:)

On Oct 14, 11:51 am, Andrew Badera <[email protected]> wrote:
> Why do you need the fullblown assembly? Can't you just share
> interfaces? I guess I don't understand enough about the need here.
>
> Using interfaces that describe the objects, you can distribute a common 
> library.
>
> That's a major aspect of the concept of services -- web, remoting,
> WCF. You have shared interfaces defining operations and objects.
