On 6/24/06, Matt Johnson <[EMAIL PROTECTED]> wrote:
Mumia W. wrote:
> Dan wrote:
>> LWP or HTTP::Client?
>> I've used both and run across some problems. [...]
>> I need the most reliable way to fetch the feed and hand me the body of
>> the page so I can pass it to an XML parser of some sort.
>>
>> Unless there's something else that can already do that? [...]
>
> Hi Dan.
>
> I've played with LWP before, and it worked okay.
>
> Another option is to use the lynx web browser to fetch the page source.
> As far as I know, lynx cannot parse XML, so you'd have to use a separate
> XML parser after fetching the page with lynx.
>
> More options for fetching pages are the WWW::Curl module and the curl
> command-line program.
>
> Foremost among the XML parsers is XML::Parser; however, CPAN has many
> XML parsing modules.
>
>
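Putting the two halves together (LWP for the fetch, XML::Parser for the parsing), a minimal sketch might look like the following. The feed URL is a placeholder, and the handlers that collect `<title>` text are just one example of what you might do with the events:

```perl
#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;
use XML::Parser;

# Placeholder URL -- substitute the real feed address.
my $url = 'http://example.com/feed.xml';

my $ua       = LWP::UserAgent->new(timeout => 10);
my $response = $ua->get($url);
die "Fetch failed: ", $response->status_line unless $response->is_success;

# Hand the body straight to the parser; here we collect item titles.
my @titles;
my $in_title = 0;
my $parser = XML::Parser->new(
    Handlers => {
        Start => sub { $in_title = 1 if $_[1] eq 'title' },
        End   => sub { $in_title = 0 if $_[1] eq 'title' },
        Char  => sub { push @titles, $_[1] if $in_title },
    },
);

# Note: XML::Parser dies on malformed XML, so wrap in eval if the
# feed might not be well-formed.
$parser->parse($response->decoded_content);
print "$_\n" for @titles;
```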

Hi,
        I use WWW::Mechanize
http://search.cpan.org/~petdance/WWW-Mechanize-1.18/lib/WWW/Mechanize.pm
to get pages. I do some simple XML validation and manipulation in some
cases.
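For what it's worth, the Mechanize version of the fetch is only a few lines (the URL is hypothetical):

```perl
use strict;
use warnings;
use WWW::Mechanize;

# autocheck => 1 makes Mechanize die on HTTP errors,
# so you don't have to test is_success yourself.
my $mech = WWW::Mechanize->new(autocheck => 1);
$mech->get('http://example.com/feed.xml');

my $xml = $mech->content;   # page body, ready to hand to an XML parser
```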


I've used HTTP::Request (with LWP::UserAgent) as well; curl is easy too,
but watch what you pass to the command line.
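On the "watch what you pass to the command line" point: if any part of the command comes from outside (a URL, say), the single-string form of system() goes through the shell, so metacharacters get interpreted. The list form execs curl directly and sidesteps that. A sketch, with a made-up URL and output filename:

```perl
use strict;
use warnings;

# Untrusted input -- note the shell metacharacter.
my $url = 'http://example.com/feed.xml?a=1;evil-command';

# Unsafe: the single-string form invokes a shell, which would treat
# the ';' above as a command separator.
# system("curl -s $url");          # don't do this with untrusted input

# Safe: the list form passes each argument literally, no shell involved.
system('curl', '-s', '-o', 'feed.xml', $url) == 0
    or die "curl failed: $?";
```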


--
Anthony Ettinger
Signature: http://chovy.dyndns.org/hcard.html

--
To unsubscribe, e-mail: [EMAIL PROTECTED]
For additional commands, e-mail: [EMAIL PROTECTED]
<http://learn.perl.org/> <http://learn.perl.org/first-response>

