On Mon, 11 Nov 2002, Theuns Verwoerd wrote:
...
> > Anyone know of a command-line utility to split text columns into
> > separate files? I know "cut" will do it if you specify the column
> > positions, but I'm looking for something that attempts to auto-discover
> > column widths.
>
> To print the third column, space delimited:
>
> awk '{print $3}' < inputfile
...
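For the original question of writing every column to its own file, here is a minimal sketch; the input file name and the col1.txt, col2.txt, ... output names are arbitrary choices for illustration, and it splits on whitespace rather than auto-discovering fixed widths:

```shell
# Create a small sample input (whitespace-separated columns).
printf 'a b c\nd e f\n' > inputfile
# NF is the number of fields on the current line; redirecting print
# with > inside awk appends to the named file per column.
awk '{ for (i = 1; i <= NF; i++) print $i > ("col" i ".txt") }' inputfile
```

After this, col1.txt holds the first column of every line, col2.txt the second, and so on.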

Yes, awk is definitely _the_ tool to process files organized in
columns. Perl can do it, of course, but awk is really tailored to deal
with files where you have one "record" per line, each "record"
consisting of several "fields" separated by a field separator (FS)
that can be set to what you like.
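For example, FS can be set from the command line with -F; here is a small self-contained sketch using colon-separated data in the style of /etc/passwd:

```shell
# -F sets the field separator (FS); print the first colon-delimited field.
printf 'root:x:0\nbin:x:1\n' | awk -F: '{ print $1 }'
# prints:
# root
# bin
```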

Short complete awk example (part of a SCI -> bibtex converter I
wrote):

BEGIN { FS = "," }
{print "\t\\author " $1}
{print "\t\\title " $2}
{print "\t\\journal " $3}

Explanation: the BEGIN block (the first line) is executed only once,
before any input is read; here it sets the field separator to a comma.
Each of the remaining blocks is executed for every line of the input.
The input is expected to be of the form

Authorname1,Title1,Journal1
Authorname2,Title2,Journal2

The output will be

        \author Authorname1
        \title Title1
        \journal Journal1
        \author Authorname2
        \title Title2
        \journal Journal2

This shows that awk is quite simple and compact for that kind of
thing.
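Saved to a file, the script above runs with awk's -f option; the names sci2bib.awk and records.csv below are just placeholders:

```shell
# Write the converter script to a file.
cat > sci2bib.awk <<'EOF'
BEGIN { FS = "," }
{print "\t\\author " $1}
{print "\t\\title " $2}
{print "\t\\journal " $3}
EOF
# A one-record sample input, then run the script on it.
printf 'Authorname1,Title1,Journal1\n' > records.csv
awk -f sci2bib.awk records.csv
```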

Cheers,

Helmut.

+----------------+
| Helmut Walle   |
| [EMAIL PROTECTED] |
+----------------+