Jonathan:

On Tue, Mar 3, 2009 at 1:16 PM, Jonathan Rockway <j...@jrock.us> wrote:
> * On Tue, Mar 03 2009, Jonathan Yu wrote:
>> Choosing array-based parameterization instead of hashes seems to be a
>> bad idea to me, because you could potentially end up with lots of
>> cases of sparse arrays.
>
> ...
I just don't see anything wrong with hashes for passing parameters
around, and in Perl programs performance matters less to me than how
quickly I can write them. If I need something blisteringly fast, I can
write it in C and inline it. :-)
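
To make that concrete, this is the style I mean (the sub and its
arguments are made up for illustration):

    sub create_user {
        my (%args) = @_;    # named parameters arrive as a hash
        my $email = $args{email} or die "email is required";
        my $name  = $args{name} || 'anonymous';   # defaults are trivial
        return { name => $name, email => $email };
    }

    # the call site reads like documentation, and argument order
    # doesn't matter:
    create_user(email => 'jon@example.com', name => 'Jonathan');
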
>
>> Personally I wouldn't want the added overhead of that sort of checking
>> on each hash call (especially since TIEd interfaces are known to be
>> slow, or at least widely believed thus).
>
> ...
>
>> Anyway, my point is: to each their own, and profiling is more
>> important than Big Oh notation.
>
> The impression I get from your post is that big-O notation upsets you,
> and you say to measure instead.  OK.  But instead of analyzing
> algorithms or doing measurements, you just make stuff up.  Do you really
> think that speculation is better than mathematical reasoning?
>
I'm not saying big-O notation doesn't have its uses, but I'm more of
an engineer than a computer scientist -- I'm very pragmatic. I do what
works, and if there's a problem, I go back and fix it, perhaps by
switching to a different algorithm. That's why profiling is important:
it tells me where the problem actually is.
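
For what it's worth, here is the sort of measurement I mean -- a quick
Benchmark comparison (the two subs are toy stand-ins I made up):

    use strict;
    use warnings;
    use Benchmark qw(cmpthese);

    sub takes_hash  { my (%args) = @_; return $args{name} }
    sub takes_array { my ($name, $email) = @_; return $name }

    # run each sub for ~2 CPU seconds and print a comparison table;
    # this is the "measure, don't speculate" step
    cmpthese(-2, {
        hash  => sub { takes_hash(name => 'x', email => 'y') },
        array => sub { takes_array('x', 'y') },
    });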

There are many people smarter than me who work out the big-O
complexity of algorithms and write the generic implementations that
everyone uses. You don't need to know the internals of red-black trees
to benefit from them, but you do need to understand the algorithm at a
conceptual level, so you know its advantages as well as its
limitations.
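
perl's built-in sort is a handy example of the same point (not a
red-black tree, but the principle is identical):

    my @people = (
        { name => 'Alice', age => 31 },
        { name => 'Bob',   age => 24 },
    );

    # I've never read pp_sort.c, but knowing that sort is O(n log n)
    # -- a mergesort since perl 5.8 -- is enough to use it well:
    my @by_age = sort { $a->{age} <=> $b->{age} } @people;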

There are other issues in optimization besides big-O time -- cache
locality, for one. Again, people smarter than me are tackling that
problem; the Judy array, for example, was designed with cache lines in
mind.

Programming, computer science, software engineering -- to me, they are
all about solving problems with computers. Like any project, you have
to spend your time on the constraints that matter to you: usability,
speed, efficiency, memory use, and so on. The usual argument is that
programmer time costs more than CPU time and memory, so it makes more
sense, at first, to spend more time creating and less time analyzing
everything in gory depth.

On the other hand, reusing existing algorithms is exactly what makes
it possible to work quickly and efficiently without a complete
understanding of the guts of things.

>> This is just one of the many things I have been upset with the
>> treatment of in my Computer Science program--it's way too academic,
>> and not applied enough, but I suppose that's University in general.
>
> Well, sort of.  Most CS programs don't cover anything academic either.
> This is why you end up with reimplementations of bubble sort and parsers
> built from hackish regular expressions.  I think we can all agree that
> that kind of lack of understanding makes software hard to maintain (and
> it makes it perform poorly too).
>
> (Oh, and don't get me started on the widely-held belief that relational
> databases are built from magic pixie dust rather than simple data
> structures.  That one really brings out the wackos.)
>
> Regards,
> Jonathan Rockway
>
> --
> print just => another => perl => hacker => if $,=$"
>
