>I have a large fixed width database file that I would like to delimit
>with commas. For example here are 2 lines of the file:
...
>***note*** Since this is a fixed width data base file, I can import
>it into a database as is. I am only doing this as a learning
>experience.
Just as a fwiw, when I had to do something like this and needed to check
my progress through the file to make sure nothing was missed (e.g., lines
that would fail the pattern match and be left untouched, or fields that
didn't conform to certain expectations, such as variable date formats), I'd
take a more piecemeal approach. For example, the first pass might be
:g/\([0-9]*\)\( *\)\(.*\)/s//\1, ###\3/
to "format" the first field and stick a "###" flag/bookmark in there.
Second pass would be something like
:g/\(.*\)\(###\)\([0-9]*\)\( *\)\(.*\)/s//\1\3, ###\5/
and so on.
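To make that concrete with a made-up line (not from your actual data), something like
1001    14      2
would come out of the first pass as
1001, ###14      2
and out of the second pass as
1001, 14, ###2
with a final
:%s/###//
to strip the bookmark once the last field has been handled.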
This way, if I were to encounter some lines that failed the pattern
match and weren't processed, I could simply undo that pass and tweak
that one step's pattern to suit.
Don't know if this helps at all, but if an all-or-nothing approach is
chancy, or just too long to type in one shot, I'd just throw it at a
quickie 'lex' script to process, or just 'sed' my way through it
(preferably as a script; there's a rough sketch of that below). With
those, you can just edit the regexp that's being used and rerun it with
minimal retyping should the pattern fail. A quick example of how it
*can* fail is back in your sample lines: the lone "14" and "2" bracketed
by whitespace. Do you want to grab a variable amount of whitespace and
end up with "14" and "2" exactly, or grab a fixed amount of whitespace
and end up with "14" and " 2" as 2-digit fields? And if the latter, what
if you don't realise there's one line in the file where that field is
"102", and the pattern fails to match that line because it's expecting
to eat 4 spaces, and there are only 3 between the preceding number and
the "102"?
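Roughly, the sed flavour of the same piecemeal idea might look something
like this (a sketch only; the field patterns and the fixed2csv.sed name
are made up, not taken from your file):

# fixed2csv.sed -- one substitution per field, bookmarking with ###
# first field: digits, then eat the padding
s/^\([0-9][0-9]*\)  */\1, ###/
# second field: digits sitting just after the bookmark
s/###\([0-9][0-9]*\)  */\1, ###/
# ...one more line per remaining field...
# last step: drop the bookmark
s/###//

and run it with

sed -f fixed2csv.sed datafile > datafile.csv

Lines where one of the patterns fails just pass through that step
untouched, so the ### ends up stranded earlier in the line than it
should be, which makes the stragglers easy to spot before the final
cleanup step.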
Right tool for the job, and all... ;)