Hi all, I ran a simple test comparing Perl and gawk. My data is pure text, and each line is a record with space-delimited fields. The task is to read in these records (say, 1 million of them) and parse them, e.g., pull out the 5th and 6th fields.
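The Perl side was basically a split-based loop, roughly like this (a simplified sketch of what I ran; what I actually do with the fields is omitted):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Simplified version of the test loop: read each record,
    # split on whitespace, grab the 5th and 6th fields.
    while (my $line = <>) {
        my @fields = split ' ', $line;   # ' ' splits on runs of whitespace
        my ($f5, $f6) = @fields[4, 5];   # 5th and 6th fields (0-based indices)
        # ... do something with $f5 and $f6 ...
    }

The gawk side was essentially the equivalent one-liner, something like gawk '{ print $5, $6 }' data.txt (with data.txt standing in for my test file).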
The result showed that gawk was about 10 times faster. Is this typical? What slows Perl down so much? I used "split" in Perl to parse the records, as in the sketch above. (I can't use unpack because the data is not fixed-width.)

Also, after I embedded awk in Perl, it was even slower than pure Perl. Why is that?

Thanks!
Steven