begin quoting boblq as of Thu, Mar 24, 2005 at 02:12:37AM -0800:
[snip]
> > I assert that it often doesn't matter.  It's easier to tackle a problem
> > if you keep an eye out for it than if you let yourself get blindsided.
> 
> And one often has one's eye out for the wrong problem,
> and hence is blindsided by a different problem, which is my
> point.

From what I can tell, that's not the case. We get blindsided by the same
damn problems again and again.

This might be an illusion brought about by reading the RISKS digests.

> The only problems you can predict are the trivial ones. 
 
On the contrary. The only problems you can react to quickly are
the trivial ones.

> I believe in being able to react quickly rather than predict.
 
Then you should only tackle trivial problems.

> > I personally don't like crisis-mode.
> 
> Who does? That does not mean you know how to avoid it. 

Nor should you invite it.

> So it is important to code so that you can react quickly in a
> crisis. 
> 
> Get it? 

Yes.  I've gotten this for longer than I've known you. I suspect that
we've just got different ways of dealing with it.

This is why I like thinking ahead a little bit.  Define boundaries;
don't breach 'em without good cause, and document it when you do. Don't
make needless assumptions about the target platform.  Defer a few 
decisions until you know more.  Build in hinges so that when something
changes, you can rewrite just a couple of pieces, and not the whole
damn program.
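
Something like this is what I mean by a "hinge" (just a sketch; every
name in it is invented for the example): put a little table of function
pointers between the callers and the backend, so when the backend has
to change, you rewrite one function and one table instead of every
caller.

/* Sketch of a "hinge": callers go through the ops table, so swapping
 * the flat-file backend for something else later means rewriting
 * save_flatfile() and the table, not the whole damn program. */
#include <stdio.h>

struct storage_ops {
    int (*save)(const char *key, const char *value);
};

/* today's implementation: append key=value to a flat file */
static int save_flatfile(const char *key, const char *value)
{
    FILE *f = fopen("data.txt", "a");
    if (f == NULL)
        return -1;
    fprintf(f, "%s=%s\n", key, value);
    fclose(f);
    return 0;
}

/* the hinge: replace this table and nothing upstream notices */
static const struct storage_ops storage = { save_flatfile };

int main(void)
{
    return storage.save("answer", "42") == 0 ? 0 : 1;
}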

Spend a few minutes thinking about problems. Write 'em down. Write down
the effects, the consequences, the tradeoffs, and a few approaches to dealing
with the problem.  Look over the code for a few places to insert a fix
should that (low-probability) problem arise.  If you encounter some cruft
that would make it impossible, see if you can't clean up the cruft a bit
at a time.

I'm not advocating an attempt to solve all of the problems ahead of
time. I'm just reacting against the "don't worry, we'll fix it when 
it's a problem" attitude.

> > There's a difference between thinking ahead and trying to be clever.
> 
> That is not what I said. I did not say clever. I say you are incapable
> of thinking ahead. You (here you in general, not just SS) are simply
> not able to see very far ahead. It is the human condition. Sorry. It
> applies to software as well as all kinds of other situations. 

I think that you *can* think ahead.  You can't think of everything, true.

But not trying *depends* on being clever enough to fix it when the time
comes.  

> > Simplicity is thinking ahead.  "I will have to maintain this code
> > someday."  Using configuration files is thinking ahead. "The user might
> > want to change some of these values."  Using constants is thinking ahead.
> > "You never know when the value of PI might change."  Separating the UI
> > from the logic is thinking ahead. "I might want an X version of
> > this program."
> 
> Chuckle, and as often as not you pick the wrong thing to make a constant,
> configure the wrong variables, separate the UI from the logic in a way 
> that hindsight proves to be inseparably coupled.  Your X is more often wrong
> than right and hence wastes time and energy propagating a false
> generalization, which must then later be reeled back in and run off in
> another direction. 

If a constant never changes, it doesn't hurt.  If it does, you have
saved yourself considerable effort... you've multiplied your effective
ability.  If most of your configuration values never change, at least
you have a mechanism in place, which lets you make that change later, in
a crisis, without having to invent an ad-hoc kludge.
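
To make "a mechanism in place" concrete, here's a sketch (the file
name and the key are made up for the example): defaults compile in,
and an optional config file can override them, so the crisis-mode
change is one line in a text file instead of an ad-hoc kludge.

/* Sketch of "a mechanism in place": defaults compile in, and an
 * optional config file (name invented here) can override them in a
 * crisis without a recompile or a kludge. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static int max_retries = 3;      /* default; rarely changes */

static void load_config(const char *path)
{
    char line[128];
    FILE *f = fopen(path, "r");
    if (f == NULL)
        return;                  /* no config file: defaults stand */
    while (fgets(line, sizeof line, f) != NULL) {
        if (strncmp(line, "max_retries=", 12) == 0)
            max_retries = atoi(line + 12);
    }
    fclose(f);
}

int main(void)
{
    load_config("app.conf");     /* hypothetical file name */
    printf("max_retries = %d\n", max_retries);
    return 0;
}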

By "X", I meant "X Windowing System", but that's okay. It works either
way.  It doesn't matter if my UI layer is more wrong than right, and 
it's only inseparably coupled if the person responsible for that piece
doesn't understand the notion of "separation".  I've been burned by "it
can't be done so don't try" in this particular area far too many times.
Too often it's been a case of "throw away everything and reinvent from
scratch", because the work in trying to change the UI is significantly
more than redo-from-start.
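
The separation I'm arguing for can be as dumb as this sketch (names
invented): the logic hands back data and knows nothing about output;
the UI layer decides how to show it. Moving to an X front end would
replace main() and leave report_total() alone.

/* Sketch of UI/logic separation: the logic computes and returns data;
 * the UI (a terminal today, maybe X tomorrow) decides how to show it. */
#include <stdio.h>

/* logic layer: knows nothing about output devices */
static long report_total(const long *items, int n)
{
    long total = 0;
    int i;
    for (i = 0; i < n; i++)
        total += items[i];
    return total;
}

/* UI layer: swap this out without touching report_total() */
int main(void)
{
    long items[] = { 12, 30, 7 };
    printf("total: %ld\n", report_total(items, 3));
    return 0;
}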

Thinking ahead isn't a matter of "reeling back", but of taking an axe
to the branch and chopping it off, and then heading off in the other
direction.  You can only do this if you generalize a bit.

> In short you do not really know what you are doing when you try to 
> predict the future of code. But arrogance, which especially the brightest 
> among us succumb to, makes us want to think differently. 
 
The brightest don't think ahead. They don't need to. They're smart
enough to keep the whole thing in their head, and to rewrite all the
complex interlinkings when they "change direction".

I've seen both extremes ... and neither side has much to recommend it
over the other.  And let's just not get into PhD code...

> > Not all generalizations are good, I agree. XML appears to be a
> > generalization for generalization's sake, for example. Many "frameworks" do
> > nothing but obscure the goal.  Too much abstraction is a problem in and of
> > itself, equal to the problems of no abstractions.
> 
> Right. You are beginning to get it. 

I've been under the impression that I've been asserting that the
extremes are bad all along.

> > But twice as opinionated, and only half the calories!
> 
> What does that statement mean? 

Whatever you want it to.  I suspect that you're far cleverer than me,
despite claims to the contrary.  I'm not up to the task of rewriting
everything whenever some problem shows up.  I have to try to think
ahead, else I'm swamped with the implications and scope of the problem.

Or possibly, we're standing on the same hill, having approached it from
opposite directions.  "Don't go that way," we say, "it's terrible." And
indeed it is.

> boblq "no longer impressed by sound bytes" 

-Stewart "Not impressed by claims of what we can't do." Stremler
-- 
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-list
