On 6/27/09, Zhang Weiwu <zhangwe...@realss.com> wrote:
> Hello. I wrote this one-line command to fetch a page from a long URI and
> parse it twice: once to get the subject and once to get the content, then
> send the result to myself as email.
> $ w3m -dump
> 'http://search1.taobao.com/browse/33/n-g,w6y4zzjaxxymvjomxy----------------40--commen
> | grep -A 100 对比 | mail -a 'Content-Type: text/plain; charset=UTF-8' -s
> '=?UTF-8?B?'`w3m -dump
> 'http://search1.taobao.com/browse/33/n-g,w6y4zzjaxxymvjomxy----------------40--commen
> | grep 找到.*件 | base64 -w0`'?=' zhangwe...@realss.com
> The stupid part of this script is that it fetches and parses the page
> twice, which also makes the command very long. If I could write the
> command so that the URI appears only once, it would be easier to
> maintain. I plan to run it from cron and want to avoid editing two
> places whenever the URI changes (and it does!).
> How would you suggest optimizing the one-liner?
Whenever I have to look through a long file more than once, I copy the
relevant sections into another file (a RAM-backed file if it is short enough
and I have the RAM) and then parse that file as many times as I need.
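As a sketch of that approach: dump the page to a temporary file once, then run
both greps against the file, so the URI appears in exactly one place. The URL
value, the sample page text, and the item strings below are hypothetical
stand-ins (the real fetch is the `w3m -dump` from your command, and the real
delivery is the `mail` pipe, both shown in comments); this only demonstrates
the fetch-once structure, not your exact page.

```shell
#!/bin/sh
# Fetch once, parse twice: the URI is stored in one variable,
# and both grep passes read from the same temp file.

URL='http://search1.taobao.com/browse/...'   # hypothetical; put the real URI here
TMP=$(mktemp) || exit 1
trap 'rm -f "$TMP"' EXIT                     # remove the temp file on exit

# Stand-in for the real fetch:  w3m -dump "$URL" > "$TMP"
printf '%s\n' '找到 42 件' '对比' 'item one' 'item two' > "$TMP"

# First pass: extract the subject line and base64-encode it
# for the =?UTF-8?B?...?= (RFC 2047) mail header.
SUBJECT=$(grep -o '找到.*件' "$TMP" | base64 -w0)

# Second pass: extract the body. In the real script, pipe this to mail(1):
#   grep -A 100 对比 "$TMP" | \
#       mail -a 'Content-Type: text/plain; charset=UTF-8' \
#            -s "=?UTF-8?B?${SUBJECT}?=" zhangwe...@realss.com
grep -A 100 '对比' "$TMP"
```

Note that `base64 -w0` is a GNU coreutils flag; the base base64(1) on FreeBSD
does not wrap output by default, so the option may be unnecessary there.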
freebsd-questions@freebsd.org mailing list
To unsubscribe, send any mail to "freebsd-questions-unsubscr...@freebsd.org"
