==== Gregory Woodhouse [EMAIL PROTECTED]

On Apr 4, 2005, at 9:30 AM, Greg Kreis wrote:

Gregory Woodhouse wrote:

Gee...I usually agree with him on software engineering issues, but it strikes me as rather odd to say that rewriting code from scratch is the worst thing you can do because it is harder to read code than to write it. From a business perspective (i.e., time to market and such) he has a point -- in the short term. But in the long term, the growing maintenance and extensibility problems of old, patched and many times re-patched code will always come back to bite you.

Aren't you describing any of the major software products from most companies? I don't know if any are complete rewrites, but I can't remember one trumpeting that fact. They all talk about features. After all, the nasty stuff is our job as programmers... ;-)

Oh, I don't doubt it for a moment. But a problem is no less a problem if it's everyone's problem. More to the point, rather than saying this is "par for the course" (and it is), shouldn't we be asking how we can do better?

I think that one thing we are all coming to appreciate is that adapting VistA to new situations, or simply extending it, is no easy task. Why? We can argue that an EHR is a complicated animal (and it is), but adapting VistA is still much harder than it should be!

I would guess that is mostly due to the architecture. In the last twenty-plus years, advances in software and hardware have been so dramatic that we can architect at ever smaller levels. This permits new ways of building that one could only dream of in the 80s. Imagine what kind of buildings you could create if you had materials as light as aluminum, many times stronger than steel, and cheap. Finally, stir in standards, so you can buy interchangeable parts (at least advertised so... ;-) at a decent cost. Would that make an architect think differently?

I'm not sure I quite understand your point here. Certainly it is true that architectural advances in the area of hardware have been dramatic, but I'm less sure that this is the case in the area of software. More often than not, what counts as an architectural advance in software is just a new way of packaging the same ideas. A possible exception is operating system design, where significant advances have been made at a fairly steady pace. Compiler design has also been an area of progress, but language paradigms, basic algorithms, and the like have progressed very little.



I agree with the last few paragraphs of this article, where he argues that the types of problems he describes are symptomatic of architectural problems. And in that sense, I agree with a caveat: simply rewriting your code is likely to be an (expensive) exercise in futility -- unless you address the architectural shortcomings of your code in the "rewrite". So, okay, maybe Joel is right that rewriting code is a major strategic error, but I would argue that failing to write new code that adequately addresses the shortcomings of the existing code base is an equally major error.

But what he didn't address was making a major technology change in the process. He seemed to be talking about rewriting for the same platform, probably with the same software technology. What if you feel you must rewrite to move to a radically different technology? Some might argue that the reasons given for the move are not sufficient to dictate such a drastic step, but forget that for the moment. Presume you agree with the idea. You would have no choice but to rewrite. But not the entire thing from scratch. Do it in well controlled phases, right?

That was my impression, too. When I spoke of writing "new" code rather than rewriting existing code, this is basically what I had in mind. I agree that re-working code in well controlled phases is certainly a workable strategy, and I believe there has been a tendency to simply give up on this approach in the face of bad experiences with encapsulation, or whatever one chooses to call it. In my view, we've been a bit too myopic, often trying to mechanically translate call-level interfaces into HL7 messages (say). The deeper problem is that moving from a centralized to a distributed architecture necessarily requires a different way of thinking. One thing I have been preaching is that timing is just as important as bandwidth. Think about hardware: How much slower would our computers be with no memory cache? No pipelining? If ALU operations could never be started in parallel?
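To make the timing point concrete, here is a minimal sketch (in Java, with made-up names like fetch and OverlapDemo purely for illustration -- this is not VistA or HL7 code) of the difference between a naively translated call-level interface, where each remote request blocks until the previous one completes, and overlapped requests, where the latency is paid roughly once instead of summed. It is just the software analogue of the pipelining argument above.

    import java.util.List;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class OverlapDemo {

        // Hypothetical stand-in for one remote lookup (think: one query
        // to another system). The sleep models latency, not bandwidth.
        static String fetch(String id) {
            try { Thread.sleep(200); } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "result-for-" + id;
        }

        public static void main(String[] args) {
            List<String> ids = List.of("A", "B", "C", "D");
            ExecutorService pool = Executors.newFixedThreadPool(ids.size());

            // Naive port of a call-level interface: each request blocks,
            // so total time is roughly the SUM of the latencies (~800 ms).
            long t0 = System.currentTimeMillis();
            for (String id : ids) fetch(id);
            System.out.println("sequential: " + (System.currentTimeMillis() - t0) + " ms");

            // Overlapped ("pipelined") requests: issue them all, then wait,
            // so total time is roughly ONE latency (~200 ms).
            long t1 = System.currentTimeMillis();
            List<CompletableFuture<String>> pending = ids.stream()
                    .map(id -> CompletableFuture.supplyAsync(() -> fetch(id), pool))
                    .toList();
            pending.forEach(CompletableFuture::join);
            System.out.println("overlapped: " + (System.currentTimeMillis() - t1) + " ms");

            pool.shutdown();
        }
    }

The point is not this particular API, but that in a distributed design the interface has to be shaped so that requests can be overlapped at all; a mechanical translation of synchronous calls forecloses that option.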



I am wondering how the introduction of Cache at all the VA sites is going to affect VistA. Cache offers many, many more features for software re-engineering than they had with DSM. So, will the national re-engineering using technology like Oracle and Java find itself in a race with local sites and VISNs that can extend M with the SQL/Java/Objects/XML support in Cache?


Interesting times.... ;-)



This is certainly an interesting question. I believe Cache features could certainly be used to advantage. I would hope that an important driver in the decision of whether or not these features should be used will be the extent to which they are needed. In the past, we've successfully encapsulated platform dependencies. That may be too much to ask for something like Cache objects, but surely if we decide that there is a significant benefit to using Cache objects (say), then we can ask how to make use of this feature safely.
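For what it's worth, here is a minimal sketch (again in Java, and with entirely hypothetical names -- PatientStore, RpcPatientStore, CacheObjectPatientStore are not real VistA or Cache APIs) of the kind of encapsulation I have in mind: call sites depend on a neutral interface, and anything vendor-specific lives behind a single adapter that can be swapped out.

    interface PatientStore {
        String findName(String patientId);
    }

    // Portable implementation: goes through the ordinary M / RPC path.
    class RpcPatientStore implements PatientStore {
        public String findName(String patientId) {
            // ... ordinary RPC or global access would go here ...
            return "name-via-rpc:" + patientId;
        }
    }

    // Cache-specific implementation: the only class that would know about
    // Cache objects or SQL projections. If the platform changes, this is
    // the one place to replace.
    class CacheObjectPatientStore implements PatientStore {
        public String findName(String patientId) {
            // ... vendor-specific object/SQL access would go here ...
            return "name-via-cache-objects:" + patientId;
        }
    }

    class StoreDemo {
        public static void main(String[] args) {
            PatientStore store = new RpcPatientStore(); // chosen by site configuration
            System.out.println(store.findName("12345"));
        }
    }

Whether the benefit of the Cache-specific path justifies maintaining it then becomes a question the architecture lets you answer deliberately, rather than one decided implicitly by scattering vendor calls throughout the code.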



