> It's always been easier for me to use python's/perl's regular
> expressions when I needed to process a text file than to use plan9's.
> For simple things, e.g. while editing an ordinary text in acme/sam,
> plan9's regexps are just fine.

i find it hard to think of cases where i would need
such sophistication and where tokenization or
tokenization plus parsing wouldn't be a better idea.

for example, you could write an re to parse the output
of ls -l or ps.  but awk '{print $field}' is so much
easier to write and read.
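a minimal sketch of the contrast (the sample line and field number
are hypothetical, assuming a typical ls -l layout where the size is
column 5):

```shell
# one line of hypothetical "ls -l" output
line='-rw-r--r-- 1 erik users 4096 May  1 12:00 notes.txt'

# the tokenizing way: awk splits on whitespace, pick the field
size=$(printf '%s\n' "$line" | awk '{print $5}')
echo "$size"

# the regex way: a capture group that must restate the layout
# of every preceding column just to reach the one you want
size_re=$(printf '%s\n' "$line" |
	sed -E 's/^[^ ]+ +[0-9]+ +[^ ]+ +[^ ]+ +([0-9]+).*/\1/')
echo "$size_re"
```

both print 4096, but the awk version survives cosmetic changes to the
other columns, while the sed pattern has to be rewritten whenever the
layout shifts.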

so in all, i view perl "regular" expressions as a tough sell.
i think they're harder to write, harder to read, require more
code that is less stable, and are slower.

one could speculate that perl, by encouraging a
monolithic rather than tools-based approach,
and cleverness over clarity, made perl expressions
the logical next step.  if so, i question the assumptions.

- erik
