> I have yet to, in my short career, encounter a problem where I sat there
> and asked myself, "Hmm?  What is the Big O here?"

There are so many jokes I could make here, but I won't.

While I don't generally calculate O() on a piece of code, that doesn't
mean I don't apply the general principles that come from it.  Had I not
learned how to calculate it (and done so on a number of problems), I
would have a less ready grasp of problem-set complexity.

Every time I write code, I ask whether the complexity can be reduced for
the average case, whether the worst case is too bad, and where the best
trade-off between CPU cycles and RAM lies.  These are all things I
learned to do by studying complexity.
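A toy sketch of the cycles-vs-RAM trade I mean (my own illustration, not
anything from this thread): memoization spends memory to avoid
recomputing work.

```python
from functools import lru_cache

def fib_slow(n):
    # Exponential time, no extra storage: recomputes the same
    # subproblems over and over.
    return n if n < 2 else fib_slow(n - 1) + fib_slow(n - 2)

@lru_cache(maxsize=None)
def fib_fast(n):
    # Same recursion, but linear time at the cost of O(n) cached
    # results -- the classic cycles-for-RAM trade.
    return n if n < 2 else fib_fast(n - 1) + fib_fast(n - 2)

print(fib_fast(30))
```

Knowing the shape of that trade is exactly the kind of thing studying
complexity buys you, even if you never write "O(2^n)" on a whiteboard.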

> Pointers, stacks, and cursors are less important now because every OS will
> handle it for you.
> The two languages (Java, .net) that are fighting for market share don't
> even give you clean access to most of the pointers, stacks, and cursors
> like good old C.
> Engineers don't really need to know this kind of stuff now.

Oh?  Personally, I find that knowing about pointers helps a lot when I'm
reading CF error messages, and CF is two steps removed from most pointer
math/notation.  I use stacks and queues on a regular basis.  I use
cursors when programming in PHP.  All of these are things that web
programmers (not necessarily just the architects) should be able to do.
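For what it's worth, here's the kind of everyday stack/queue use I'm
talking about, sketched in Python (the names and tasks are made up for
illustration):

```python
from collections import deque

# Stack (LIFO): an undo history -- the most recent action comes off first.
undo = []
undo.append("typed 'hello'")
undo.append("bolded text")
print(undo.pop())       # "bolded text"

# Queue (FIFO): a job queue -- work is processed in arrival order.
jobs = deque()
jobs.append("resize image 1")
jobs.append("resize image 2")
print(jobs.popleft())   # "resize image 1"
```

Nothing exotic; the point is that the data structure you pick encodes an
ordering guarantee, and you should know which one you're asking for.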

Not that most can.  :-)  I agree there.

> On the other hand, computer scientists do need to know this stuff.

I guess what I'm saying is that programmers may not need to know
computer theory per se, but they should understand how some of it
impacts their code.  I don't know enough about SQL to do even some
moderately complex tasks, but I have someone I can ask about it, and I
know how some of those complexities affect my code.

--BenD