> Where is Martin Fowler when I need him? :)

Man, Martin and I would probably blow each other up if we got into it. :)
> For this case there is indeed little difference between the anemic
> model and rich model. But for this specific use-case I would flip the
> question on you and ask: what do you benefit from an anemic model?
> Given the point of using C-style functions and global variables versus
> OOP, why wouldn't you go for OOP every time?

The anemic model produces a well-defined wire format that collects all the data for a Person together. That data can then flow through as many layers/tiers/servers as necessary and be worked on by anything that understands that class of data. In the end, a large distributed system is going to be a collection of functions that work on classes of data.

I find myself coming back to this pattern more and more, yet never taking the step over to fully functional languages. I think the reason is the ability to organize and encapsulate in OO. And I would definitely not use C as a baseline; there are many functional languages far more elegant than C.

I also still use rich objects at times; it is just not something I force myself to use for everything. I find that model too restrictive and not very pragmatic. But there are still many benefits I can get from OO concepts like extension, aggregation, polymorphism, and the like. I just find that they also work well with anemic objects and highly functional services. I guess in the end I don't find OO and functions to be opposing ideals; they are tools I mix and match to suit the task at hand.

One idea I've been tossing around lately is a new approach with a separation of data and function. I'm not sure what it would look like or how it would work, but it seems interesting to me. It would allow you to define a data class such as Human, but have that Human behave like an Infant at one point in time and a Teenager later. In the reverse, the Infant might need the Human data, but also other data classes to function properly.
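A rough sketch of what that data/function separation might look like — all of the names here (Human, LifeStage, Infant, Teenager, LifeStageDemo) are hypothetical, and the same anemic data class also stands in for the serializable Person-style wire format discussed above:

```java
// Pure data class: no behavior, so it can serve as a well-defined
// wire format that any layer/tier/server can understand.
class Human {
    String name;
    int ageInYears;
    Human(String name, int ageInYears) {
        this.name = name;
        this.ageInYears = ageInYears;
    }
}

// Behavior is defined separately and attached to the data as needed.
interface LifeStage {
    String describe(Human h);
}

class Infant implements LifeStage {
    public String describe(Human h) {
        return h.name + " is an infant and needs constant care";
    }
}

class Teenager implements LifeStage {
    public String describe(Human h) {
        return h.name + " is a teenager and needs the car keys";
    }
}

public class LifeStageDemo {
    // The same Human data behaves differently at different times.
    static LifeStage stageFor(Human h) {
        return h.ageInYears < 2 ? new Infant() : new Teenager();
    }

    public static void main(String[] args) {
        Human h = new Human("Alex", 1);
        System.out.println(stageFor(h).describe(h)); // Infant behavior
        h.ageInYears = 15;
        System.out.println(stageFor(h).describe(h)); // Teenager behavior
    }
}
```

The point of the sketch is only that the data class never changes; which behavior it exhibits is decided externally, at the moment of use.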
Then later on, the Teenager might also need the Human data, but other data to function. I need to think about it for another year or so to figure it all out. ;)

> You have to pull it into an interface, yes, but not necessarily into a
> service. Nothing prevents you from defining an interface for
> calculateIncomeTax() and providing different implementations in the
> form of rich models.

There probably isn't much keeping us from doing this, even on the wire. If you consider any service where the wire protocol carries the data and the operation together, you can imagine a system where, on the server, the data is first set into the rich domain object and then the operation is invoked on that same object. The key is to avoid distributed garbage collection (DGC) and shared state. I personally don't use this model, but you could. However, it will still tend towards anemia once the object on the server starts to move logic into other locations and onto other servers.

> That's a requirement I never had in practice. My assumption has been
> that if you have a fail-safe cluster in the first place then you just
> update different servers at a time. I find the idea of swapping
> implementations at runtime as a form of upgrading quite questionable.
> It sounds like it would be much harder to update all necessary classes
> atomically and track/log problems if something goes wrong.

I bring up service discovery and updates because in larger distributed environments with up-time goals, you have to have some strategy for this. Swapping implementations at runtime is definitely something I've done in the past, and it works pretty well if the system is architected correctly. Once your users are global, you lose any maintenance windows, which means you need to be able to upgrade the system at runtime.

> If you absolutely need service discovery then I agree that pretty much
> mandates anemic models, but again I question how frequently this
> requirement comes up.
I'd say that most larger distributed applications use service discovery and runtime updating; that would just be my guess. The ones I've worked on use it. But that doesn't mean it can't be worked into other systems without a lot of overhead. More languages are beginning to come around to the idea of multi-threading as a core concept, and some even treat runtime updates as part of that category. If you consider OSGi, the Java Module System, and languages like Scala, Haskell, and Erlang, you begin to see this pattern. I find that systems designed this way are far easier to maintain over the lifetime of the system.

-bp

--
You received this message because you are subscribed to the Google Groups "google-guice" group.
To post to this group, send email to [email protected]
To unsubscribe from this group, send email to [EMAIL PROTECTED]
For more options, visit this group at http://groups.google.com/group/google-guice?hl=en
