On Wed, 26 Sep 2012 15:43:26 -0400 (EDT), Robert P. J. Day rpj...@crashcourse.ca wrote:

at the risk of sounding lazy, i'm hoping someone can just hand me a
short (probably perl) script to do the following, since it's a long
time since i've played in perl.

i have a text file containing over 100 URLs (one per line), and i
want to produce that many PDF files of those URL pages, with output
filenames simply 1.pdf, 2.pdf and so on -- that numbering is
required.

for anyone who knows perl, i'm sure this will be trivial given the
right perl module. if anyone can help me out, i would be grateful to
the extent of a beer or three, or something along those lines.
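The request above -- one URL per line in, 1.pdf, 2.pdf, ... out -- reduces to a counter loop around any URL-to-PDF converter. A minimal shell sketch; the function name, the `urls.txt` filename, and the converter argument are illustrative assumptions, not anything from the thread (wkhtmltopdf is one common converter choice, and the html2ps | ps2pdf pipeline used later in the thread slots in the same way):

```shell
# make_pdfs FILE CONVERTER
# reads URLs one per line from FILE and runs "CONVERTER url N.pdf"
# for the Nth line, so outputs get the required names 1.pdf, 2.pdf, ...
# prints the number of URLs processed
make_pdfs() {
    local file="$1" convert="$2" n=0 url
    while read -r url
    do
        n=$((n+1))
        "$convert" "$url" "$n.pdf"
    done < "$file"
    echo "$n"
}
```

Typical use would be `make_pdfs urls.txt wkhtmltopdf` (assuming wkhtmltopdf is installed); any command invoked as "converter URL OUTPUT.pdf" works unchanged.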
On Wed, Sep 26, 2012 at 06:16:31PM -0400, Champoux wrote:

> i have a text file containing over 100 URLs (one per line), and i
> want to produce that many PDF files of those URL pages, with output
> filenames simply 1.pdf, 2.pdf

$ perl

On Wed, Sep 26, 2012, Aidan Van Dyk wrote:

> On Wed, Sep 26, 2012 at 4:53 PM, Robert P. J. Day rpj...@crashcourse.ca wrote:
I'ld cheat:

#!/bin/bash
# read URLs one per line from stdin; write page-001.pdf, page-002.pdf, ...
page=0
while read -r URL
do
    page=$((page+1))
    # SKIP lets one page number be skipped; defaulting it to 0 (skip none)
    # is an assumption -- the original left SKIP as a bare placeholder
    if [ "$page" -ne "${SKIP:-0}" ]; then
        html2ps -o - "$URL" | ps2pdf - "$(printf 'page-%03d.pdf' "$page")"
    fi
done
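One nit on the naming: the original request was for exactly 1.pdf, 2.pdf, ..., while the printf in the loop above writes page-001.pdf. Only the format string has to change; a tiny dry-run sketch (the example URLs and the echo are illustrative stand-ins, not part of the script above):

```shell
# map a page counter to the exact names the original post asked for
name_for() { printf '%d.pdf' "$1"; }   # vs. 'page-%03d.pdf' above

# dry run: show which output name each URL would get
n=0
for url in http://example.com/a http://example.com/b; do
    n=$((n+1))
    echo "$url -> $(name_for "$n")"
done
# prints:
#   http://example.com/a -> 1.pdf
#   http://example.com/b -> 2.pdf
```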
On Wed, 26 Sep 2012, Aidan Van Dyk wrote:

> I'ld cheat:
>
> #!/bin/bash
> [...]

hm ... that has