James G. Sack (jim) wrote:
SJS wrote:
begin  quoting James G. Sack (jim) as of Thu, Feb 21, 2008 at 09:38:15PM -0800:
[snip]
I'm no perl-wizard but I have spent a few years writing perl code, and I
would recommend something other than perl -- unless you already have a
good reason to specifically use perl. Perl is famous for text handling,
but other languages are nearly as good. Perl is famous for one-liners,
but only after you get fairly proficient. There are certainly other
pluses, but in each case, I find it hard to send beginners off chasing
them.
Are you perhaps thinking about perl as a general-purpose programming
language?  Or as a text-processing language that happens to be able
to do some general-purpose programming tasks as well?

Yes, I was answering the question I wanted to answer ;-) -- and that
question was what would offer reasonable benefits per invested learning
effort -- for someone who was only interested in script programming as a
step-up from shell as a general tool (and maybe somewhat as a hobby). I
was not thinking of text processing as the context, although that
probably is more the essence of the original question. :-[

Maybe I'm wrong, but I fear that perl makes it too easy to do write-only
coding. With discipline, one can avoid that, but I feel it is likely too
confusing for a beginner. It's better not to have so many ways of doing
it, I think.

For me, the benefit of there being more than one way to do it is that I can select the one I like more and continue. I remember doing just that a few times when I was starting to learn Perl. Besides, sometimes the alternate ways can make for an easier cold read.

I would suggest python (my favorite) or maybe tcl.

Hmmm, maybe lua? (Andy:?)

??? or maybe even pascal ???
==> Hey Gus: does delphi fit in this context?
I'm guessing you're firmly in the general-purpose programming viewpoint,
with text processing as something you might do as an initial program to
get familiar with the language.

Yes, I admit I didn't really consider the question of what tools are
most suitable for the business of text processing. For that specialty,
sed/awk/perl would be fine or better.

I am also assuming, I suppose, that someone with a start in doing neat
things with (say) bash and then perl might be tempted to start building
something like an application. Despite the good things that have been
done at a larger scale with perl (even bash, to some extent!), I can't
help feeling that maintenance costs are steeper with perl -- with either
time or code size as the abscissa. I do believe that may be one of the
motivations for the perl 6 redesign.

Maybe I'm stuck in this biased opinion?

I would also suggest skipping sed and awk (unless you want to see some
of the ideas that led to perl). Both are good things for sysadmins to
have in their toolbags, but I think for an ordinary mortal just doing
occasional scripting, I would jump from shell to (say) python.
I think sed and awk are worthy of learning -- at least a subset -- so
that they can be used, or at least understood, on the command line.  I
find that I often prefer to use awk instead of cut to rearrange columns
of output, and sed for those quick modifications.

For example, I wanted to quickly get an idea of how many duplicate files
(with different names) I had in a directory, to see if it was worthwhile
trying to identify the duplicates.  Since I don't know of a command that
will do this out of the box, I evolved an answer without much effort:

% cksum * | awk '{print $1}' | sort | uniq -c | sed -e '/^ *1 /d' \
          | sort -u | awk '{print $2}'

Now, I wouldn't be surprised if that were a perl one-liner, but I
couldn't have done it in perl with twice the effort.  I could have
used cut, but I seem to always screw up cut's parameters, and have to
spend a lot more time experimenting or reading the manpage. Awk is
easier.

I could have written a little program with control structures and
so forth, but then I have to add a counter if I want to see just
how many duplicate files I have, while the command-line is just
"!! | wc -l" (or uparrow and append "| wc -l"). (I've yet to see
how to get the filenames back without resorting to a script, so
there /are/ limitations to this approach...)
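Recovering the filenames does seem to demand at least a small awk program -- which rather proves the point about the command line's limits. A sketch (it assumes filenames without embedded spaces, since cksum prints the name as its third field):

```shell
# For each checksum, accumulate the count and the list of names that hash
# to it; at the end, print only the checksums that appear more than once.
cksum * | awk '{ count[$1]++; names[$1] = names[$1] " " $3 }
               END { for (c in count) if (count[c] > 1) print c ":" names[c] }'
```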

Getting into awk or sed *programs*, such as require source files,
yah, don't bother.  Save that for entertainment, or when you inherit
someone else's (working) codebase.


I can't argue, and in fact, confess to similar experiences and
inclinations. I'm thinking that if I (or you) didn't do this sort of
thing so frequently, or hadn't passed a certain threshold of cumulative
usage with these tools, that each little task like you mention could
become more daunting. Maybe we have to ask Ralph?

I've never seen the guts of a sed program (or awk, for that matter). But I learned just what I needed to know of sed in order to go through a LOT of text and make substitutions. I even used tr to get rid of all the line feeds. (I think I replaced them with something else as markers, and then had sed replace all the surplus line-feed markers.)
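Roughly, that tr/sed trick might have looked like this -- the '@' marker is a guess, not the character actually used:

```shell
# Turn every newline into a marker, squeeze runs of markers down to one
# with sed, then turn the markers back into single newlines.
printf 'one\n\n\ntwo\n' | tr '\n' '@' | sed -e 's/@@*/@/g' | tr '@' '\n'
```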

Although, I must admit that any time I see some magic command-line formula on the kplug lists, I do try to figure it out. Sometimes I do, sometimes not. (SJS' magic formula above is beyond me.)

As an observation contrary to my thoughts above on the use of perl as an
occasional tool, I would have to point out that there are some
excellent supporting tools -- for example, the Perl Cookbook is probably
the best-written book of that type I have ever read.

I may have to check into that.



--
Ralph

--------------------
This is the very key to success in anything—to be able to defer immediate gratification in pursuit of a more permanent and worthwhile future goal.
--John Walker

--
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg
