begin quoting James G. Sack (jim) as of Thu, Feb 21, 2008 at 09:38:15PM -0800:
[snip]
> I'm no perl-wizard but I have spent a few years writing perl code, and I
> would recommend something other than perl -- unless you already have a
> good reason to specifically use perl. Perl is famous for text handling,
> but other languages are nearly as good. Perl is famous for one-liners,
> but only after you get fairly proficient. There are certainly other
> pluses, but in each case, I find it hard to send beginners off chasing
> them.

Are you perhaps thinking about perl as a general-purpose programming
language?  Or as a text-processing language that happens to be able
to do some general-purpose programming tasks as well?

> I would suggest python (my favorite) or maybe tcl.
> 
> Hmmm, maybe lua? (Andy:?)
> 
> ??? or maybe even pascal ???
> ==> Hey Gus: does delphi fit in this context.

I'm guessing you're firmly in the general-purpose programming viewpoint,
with text processing as something you might do as an initial program to
get familiar with the language.

> I would also suggest skipping sed and awk (unless you want to see some
> of the ideas that led to perl). Both are good things for sysadmins to
> have in their toolbags, but I think for an ordinary mortal just doing
> occasional scripting, I would jump from shell to (say) python.

I think sed and awk are worthy of learning -- at least a subset -- so
that they can be used, or at least understood, on the command line.  I
find that I often prefer to use awk instead of cut to rearrange columns
of output, and sed for those quick modifications.
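To make that concrete, here's a small sketch (with made-up three-column input) of why awk beats cut for rearranging: awk prints fields in whatever order you ask for, while cut always emits them in their original order and wants an exact single-character delimiter.

```shell
# Hypothetical input: name, id, role -- whitespace-separated.
printf 'alice 42 admin\nbob 7 user\n' |
    awk '{print $3, $1}'       # reorders: "admin alice", "user bob"

printf 'alice 42 admin\nbob 7 user\n' |
    cut -d' ' -f3,1            # cut ignores the requested order:
                               # "alice admin", "bob user"
```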

For example, I wanted to quickly get an idea of how many duplicate files
(with different names) I had in a directory, to see if it was worthwhile
trying to identify the duplicates.  Since I don't know of a command that
will do this out of the box, I evolved an answer without much effort:

% cksum * | awk '{print $1}' | sort | uniq -c | sed -e 's/^ *1 .*//' \
          | sort -u | awk '{print $2}'

Now, I wouldn't be surprised if that were a perl one-liner, but I
couldn't have managed it in perl with twice the effort.  I could have
used cut, but I always seem to screw up cut's parameters and end up
spending far more time experimenting or reading the manpage.  Awk is
easier.

I could have written a little program with control structures and
so forth, but then I have to add a counter if I want to see just
how many duplicate files I have, while the command-line is just
"!! | wc -l" (or uparrow and append "| wc -l"). (I've yet to see
how to get the filenames back without resorting to a script, so
there /are/ limitations to this approach...)
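For the record, getting the filenames back does seem to take a small awk program rather than a pipeline -- here's one sketch (it assumes, as cksum's output format suggests, that filenames contain no whitespace):

```shell
# Print the names of files whose checksum occurs more than once.
# One pass over cksum's output: remember every (checksum, name) pair,
# tally the checksums, then print the names of the duplicated ones.
cksum * | awk '
    { count[$1]++; name[NR] = $3; sum[NR] = $1 }
    END {
        for (i = 1; i <= NR; i++)
            if (count[sum[i]] > 1) print name[i]
    }'
```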

Getting into awk or sed *programs* -- the kind that need their own
source files -- yah, don't bother.  Save that for entertainment, or
for when you inherit someone else's (working) codebase.

-- 
I don't have to know how to do everything,
Only enough to figure it out pretty quick.
Stewart Stremler

-- 
[email protected]
http://www.kernel-panic.org/cgi-bin/mailman/listinfo/kplug-lpsg