Awe-inspiring, educative and yes, funny too! ;-)

On Oct 8, 9:51 pm, Andrew Badera <[email protected]> wrote:
> You hilarious, but detailed and accurate, sonofabiotch.
>
> ∞ Andy Badera
> ∞ +1 518-641-1280
> ∞ This email is: [ ] bloggable [x] ask first [ ] private
> ∞ Google me: http://www.google.com/search?q=andrew%20badera
>
>
>
> On Thu, Oct 8, 2009 at 12:48 PM, Peter Smith <[email protected]> wrote:
> > On Thu, Oct 8, 2009 at 2:23 AM, Cerebrus <[email protected]> wrote:
>
> >> What I fail to understand is why you would transfer an assembly to the
> >> client rather than usable data.
>
> > Usable data isn't usable without the assembly that knows how to use it. If
> > you have any number of tasks you want the client to do, and the overall
> > cost of transferring the assembly AND the data every time is smaller than
> > the cost of keeping the client's copy of the assembly up to date, then you
> > transfer them both every time.
>
> > You're thinking of something like Folding@home, Cerebrus, where the
> > algorithm is pretty much static and updates to the program that does the
> > work happen infrequently relative to the time a unit of work takes.
>
> > But what if, by the time you're done with the data, your code needs an
> > upgrade? What if this is the rule rather than the exception? Sure, you can
> > write the code to be self-updating (like a lot of web-aware software is
> > these days), checking for an update whenever it fetches new data... but
> > again, if the cost of doing that is more than the cost of just grabbing the
> > new code every time, taking into consideration storage space, machine
> > profile, and traffic flows, then you design accordingly.
>
> > Especially if you've got multiple tasks, and the code changes very
> > frequently.
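>
> > For what it's worth, the "check before you fetch" part might look roughly
> > like this in C#. It's a rough, untested sketch -- the endpoints and the
> > idea of comparing a published hash against the cached copy are my own
> > assumptions for illustration, not anything your server actually exposes:
>
> >     using System;
> >     using System.IO;
> >     using System.Net;
> >     using System.Security.Cryptography;
>
> >     static class AssemblyFetcher
> >     {
> >         // Returns the path of an up-to-date local copy of the worker
> >         // assembly, downloading it only when the server's copy differs.
> >         public static string FetchWorkerAssembly(string baseUrl, string cachePath)
> >         {
> >             using (var web = new WebClient())
> >             {
> >                 // Hypothetical endpoint: the server publishes the SHA-256
> >                 // hash of the current worker assembly as plain text.
> >                 string serverHash = web.DownloadString(baseUrl + "/worker.hash").Trim();
>
> >                 string localHash = null;
> >                 if (File.Exists(cachePath))
> >                 {
> >                     using (var sha = SHA256.Create())
> >                     using (var fs = File.OpenRead(cachePath))
> >                         localHash = BitConverter.ToString(sha.ComputeHash(fs)).Replace("-", "");
> >                 }
>
> >                 // Only pay the transfer cost when the code actually changed.
> >                 if (!string.Equals(serverHash, localHash, StringComparison.OrdinalIgnoreCase))
> >                     web.DownloadFile(baseUrl + "/worker.dll", cachePath);
>
> >                 return cachePath;
> >             }
> >         }
> >     }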
>
> > However, given all that, I'd probably say that in most cases caching the
> > 'code block' somewhere and only grabbing new ones when needed is a better
> > solution.
>
> > Andrew, man, get off the Freshie! The Grape's no good for you! Unity. Sigh.
> > All those IoC frameworks and DI containers out there, and you take the one
> > that the chip they put in your neck tells you to take. Admittedly, it's not
> > been supported/pushed by MS long enough to get bloated like everything
> > else, but... give it time.
>
> > But, regardless of my prejudices, Inversion of Control is the way to go. If
> > you can tell the client where to get the data, you can tell the client where
> > to update the assemblies that it's using.
>
> > - The client asks for code to run from either a web page or web service.
> > - The server responds with an assembly reference and a data reference.
> > - The client checks the assembly reference against what it's already
> > received, and updates the assembly if needed. Of course, you've been good
> > about defining an interface contract so that the new assembly doesn't have
> > any surprises (there's a rough sketch of one just after this list).
> > - The client checks the data reference, and fetches a little metadata about
> > when to run, what to expect out, timeout value, location of needed files on
> > network, etc.
> > - The client kicks off the assembly, which does some work. The assembly
> > reports status back to the client, either on request or periodically, and
> > signals completion.
> > - The client transmits the result of the work back to the server (or perhaps
> > the guest assembly did this itself).
> > - The client then deletes all traces of the data, and asks for another.
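>
> > Putting the steps above together, the contract and the host side of that
> > loop might look something like this in C# -- again, just a sketch, with the
> > interface, the type names, and the LoadFrom-based loading all invented for
> > the example rather than lifted from anyone's real system:
>
> >     using System;
> >     using System.Reflection;
>
> >     // The contract compiled into the client (and referenced by every guest
> >     // assembly), so a freshly downloaded assembly can't surprise you.
> >     public interface IWorkUnit
> >     {
> >         // Runs one unit of work against the data the server pointed us at
> >         // and returns the result to ship back.
> >         string Execute(string dataReference);
>
> >         // Lets the client poll for progress while Execute is running.
> >         int PercentComplete { get; }
> >     }
>
> >     public static class WorkerHost
> >     {
> >         public static string RunOne(string assemblyPath, string dataReference)
> >         {
> >             // Load the (possibly just-updated) guest assembly...
> >             Assembly guest = Assembly.LoadFrom(assemblyPath);
>
> >             // ...find the type that implements the contract...
> >             foreach (Type t in guest.GetTypes())
> >             {
> >                 if (typeof(IWorkUnit).IsAssignableFrom(t) && !t.IsAbstract)
> >                 {
> >                     var work = (IWorkUnit)Activator.CreateInstance(t);
>
> >                     // ...kick it off and hand the result back to whatever
> >                     // code ships it up to the server.
> >                     return work.Execute(dataReference);
> >                 }
> >             }
>
> >             throw new InvalidOperationException("Guest assembly exposes no IWorkUnit.");
> >         }
> >     }
>
> > Two caveats worth noting: the guest has to be compiled against the same
> > contract assembly as the client, or that cast will fail (which is really
> > the point -- no surprises), and LoadFrom into the default AppDomain means
> > the code can't be unloaded afterwards, so if "deletes all traces" matters
> > you'd load the guest into its own AppDomain and unload that instead.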
>
> > If the client is ALWAYS updating the assembly, then you just get rid of the
> > check. If you need the client to remain clean when not executing, then you
> > change 'data' to 'data and assembly'. Still, you want your compiled code to
> > at least have your contract for the assembly compiled into it.
>
> > Unless of course, you're writing a botnet. Then it's a whole different
> > discussion.
>
> > -- Peter Smith
