Put the search lines into a single file, one per line. Then run this
command:

grep -f searchfile reportfile

The -f option tells grep to use each line of "searchfile" as a search pattern.
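
A minimal, self-contained sketch (the sample report contents and file names here are just placeholders; adding -F is an extra suggestion so the lines are matched as fixed strings rather than regexes):

```shell
# Stand-in for the real reportfile.
cat > reportfile <<'EOF'
Widget sales ............ 120
Local Total ............. 120
Regional Total .......... 480
Company Total ........... 960
EOF

# One search string per line.
cat > searchfile <<'EOF'
Company Total
Regional Total
Local Total
EOF

# -f reads the patterns from the file; -F treats them as fixed strings.
grep -F -f searchfile reportfile
```

This makes a single pass over the report instead of forking a grep per line.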

If you want to keep the if statements (or for future reference), use bash's
"elif" keyword. http://www.freeos.com/guides/lsst/ch03sec03.html
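
For example, your loop could be rewritten with elif and bash's built-in
pattern matching, which avoids forking a subshell and a grep for every line
(a sketch of the structure only, without your extra logic; the sample
reportfile contents are made up):

```shell
# Stand-in data so the sketch runs on its own.
printf '%s\n' 'Local Total 120' 'noise' 'Regional Total 480' 'Company Total 960' > reportfile

# [[ $line == *pattern* ]] is a bash string match -- no external command,
# so no per-line fork. elif chains the tests instead of separate ifs.
out=$(tac reportfile | while IFS= read -r line; do
    if [[ $line == *'Company Total'* ]]; then
        echo "$line"
    elif [[ $line == *'Regional Total'* ]]; then
        echo "$line"
    elif [[ $line == *'Local Total'* ]]; then
        echo "$line"
    fi
done)
echo "$out"
```

Since only one branch can match a given line, elif also stops testing after
the first hit.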

Jeremiah E. Bess
Network Ninja, Penguin Geek, Father of four


On Thu, Aug 20, 2009 at 20:45, Jason Montoya <[email protected]> wrote:

> I've been working on a way to parse a report file where there are several
> hierarchies of data (say, company totals, regional totals, local totals) and
> using a while read line loop with greps to identify the appropriate data.
> It works, but it's *very* slow.  Essentially it's in the form of
>
> tac reportfile |
> {
> while read line; do
>
> if [[ `echo $line | grep 'Company Total'` != "" ]]; then echo "$line"
> fi
>
> if [[ `echo $line | grep 'Regional Total'` != "" ]]; then echo "$line"
> fi
>
> if [[ `echo $line | grep 'Local Total'` != "" ]]; then echo "$line"
> fi
>
> done
> }
>
> The reason i do it this way is there are multiple of the lower totals and I
> only want the last one in each.  (there's logic for it but for simplicity
> I'm not showing it here).
>
> I know I could learn perl or python or something and do it faster than all
> these subshells to grep, but I'm interested to know if there is a faster way
> to do this in bash?
>
>
>

--~--~---------~--~----~------------~-------~--~----~
You received this message because you are subscribed to the Linux Users Group.
To post a message, send email to [email protected]
To unsubscribe, send email to [email protected]
For more options, visit our group at 
http://groups.google.com/group/linuxusersgroup
-~----------~----~----~----~------~----~------~--~---