> > > - easier for other people to maintain even years from now
> > > - easier to migrate and expand
> > > - objects make it easier to interface and work with other things,
> > >   consistently and without much effort
> > > - decrease cost (time, money, space, processing power)
> >
> > Agreed, agreed, agreed, agreed, and agreed, but they don't take my
> > word for it. I need a way to prove it. :o/
>
> Hindsight is 20/20 and about the only way to "prove" it. Isn't the
> sheer number of people doing it a good enough reason to at least
> examine its abilities?
I'd say so, but I can't get them to convert the shop to ANSI C++
(we're still on K&R!). And good luck on Extreme Programming. ~sigh~

> > I just can't get them to look past "that arrow stuff". ~sigh~
> > (And these are intelligent, competent programmers, too.
> > How can they not see that sometimes a learning curve is a
> > worthwhile investment?)
>
> I would question whether they really are "intelligent, competent"
> programmers if they are discarding OOP because of the learning curve,
> rather than because it is overkill, etc. for the project. I am not
> going to run around waving my arms and shouting that OOP is always
> required for any successful project, I wouldn't be a very good Perl
> programmer if I did (IMHO), but to adequately judge which approach to
> take both need to be understood, and it doesn't sound like they are.
> But I am also making the assumption that you are correct and that the
> project *does* warrant the OOP approach, and that the project
> warrants it above and beyond the learning curve.

"The project" is a nightmarish lack of a specific project. We're a
troubleshooting group, called in when someone needs a (supposedly)
one-shot datamining run on stuff that isn't available anywhere else in
the company. We don't have the data available in a database -- not
enough disk space; we only have a couple hundred gig. We regularly end
up parsing through one or more gigabyte compressed files and
number-crunching with yet another custom-written program, so our only
consistent tools are the systems which read and scrub the data from
the files.

There are several C "front ends" that we include into new programs,
but they are so far from industry standard that it's scary: the actual
code is in the include file as text, *including* the main() function!
So to write your program, all you do is

    #include "thelib.h"

    int help_doc() {}

    int drive_func(this, that)
    int this;
    struct x *that;
    {
        then_here_per_rec_with_globals();
    }

All you really have to code is the then_here_per_rec_with_globals()
stuff to handle each record. It's actually darned efficient, but UGLY
and inside-out. There's way too much legacy code to convert all of it
now, but not in Perl! So for Perl (which we use more and more) I write
modules:

    use FileX;

    my $o = new FileX $filename or die $!;
    while ($o->nextrec) {
        code_stuff();
    }

And they "don't like that arrow stuff", so they code the pipe to open
the file themselves (even though the constructor just lets them say
month and year and finds the correct archive file for them, and opens
a decompressing read pipe automatically), and use a bunch of substr()s
to get fields when I've GOT it already coded to do an efficient
parse-on-read with unpack(). (A rough sketch of both approaches is at
the bottom of this message.)

What happens when the layout changes? I change my module's layout
string for unpack() once and all my code is fixed. They have to go
through and count bytes again for every field they use in every
program they wrote.

I may be exaggerating a little, but not that much. Believe it.

> p.s. I am fighting the same battle, though I think I may have won.

Good luck. :)
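
For the curious, here is roughly what one of those modules looks like.
This is a minimal sketch, not our real code: the names (FileX, nextrec,
the field names), the archive naming scheme, the record layout, and the
gzip decompression are all stand-ins for illustration.

    package FileX;
    use strict;

    sub new {
        my ($class, @args) = @_;
        # Callers can hand us an explicit filename, or just a month and
        # a year and let the module find the right archive for them.
        # (The naming scheme here is made up.)
        my $file = @args == 2
            ? sprintf("/data/archive/%04d_%02d.dat.gz", $args[1], $args[0])
            : $args[0];
        # Open a decompressing read pipe so callers never touch gzip.
        open my $fh, "gzip -dc $file |" or return;
        my $self = {
            fh     => $fh,
            layout => "A8 A6 A30 A10",  # the record layout lives in ONE place
        };
        return bless $self, $class;
    }

    sub nextrec {
        my $self = shift;
        my $line = readline $self->{fh};
        return unless defined $line;
        # Parse-on-read with unpack(): when the layout changes, only the
        # layout string in new() has to change.
        @{$self}{qw(date acct name amount)} = unpack($self->{layout}, $line);
        return 1;
    }

    # Accessors, so nobody ever counts bytes in their own program.
    sub date   { $_[0]->{date} }
    sub acct   { $_[0]->{acct} }
    sub name   { $_[0]->{name} }
    sub amount { $_[0]->{amount} }

    1;

The whole point is that the pipe open, the decompression, and the byte
layout live in exactly one file.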
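
And for contrast, roughly the hand-rolled version that every one of
their programs repeats, next to the module version. Again, the offsets,
the archive name, and crunch() are invented for the example; the real
programs differ, but the shape is the same.

    sub crunch { print "@_\n" }   # stand-in for the real number crunching

    # Their way: each script opens the pipe itself and hard-codes its
    # own byte offsets, all of which have to be re-counted whenever the
    # layout moves.
    open my $fh, "gzip -dc /data/archive/2003_02.dat.gz |" or die $!;
    while (my $line = <$fh>) {
        my $date   = substr($line,  0,  8);
        my $acct   = substr($line,  8,  6);
        my $name   = substr($line, 14, 30);
        my $amount = substr($line, 44, 10);
        crunch($date, $amount);
    }

    # The module way: one constructor call, one method per record, and
    # no byte counting in any caller.
    use FileX;
    my $o = new FileX 2, 2003 or die $!;
    while ($o->nextrec) {
        crunch($o->date, $o->amount);
    }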