Hello,

I would suggest using element_by_xpath to get all <script> and <link>
tags and then extracting the src and href attributes, but I couldn't get it
to work. Something like the (untested) sketch below is what I had in mind.
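
This is only a rough sketch; elements_by_xpath and attribute_value are my
guess at the Watir 1.6 API, so the method names may differ in your version:

require "rubygems"
require "watir"

browser = Watir::Browser.new
browser.goto("http://www.cnn.com")

# Untested: pull the src of every <script> and the href of every <link>.
scripts = browser.elements_by_xpath("//script[@src]").map { |el| el.attribute_value("src") }
styles  = browser.elements_by_xpath("//link[@href]").map { |el| el.attribute_value("href") }

puts scripts
puts styles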

But fear not! This is your lucky day: I decided to take up the challenge and
do it with Mechanize.

Below you will find the code that worked for me, but it's incomplete. You
can use it as a basis for your own code and improve it to download the
relative links as well (I've put a rough sketch of one way to do that right
after the script).

Once you finish, please send me the final version :)


=====

require "rubygems"
require "mechanize"
require "uri"
require "net/http"

url = "http://www.cnn.com"

@mech = Mechanize.new
@page = @mech.get(url)

# Collect the src of every <script> tag that has one.
@scripts = @page.search("//script[@src]").map { |script| script["src"] }

@scripts.each do |script|
  puts script
end

# Collect the href of every <link> tag whose href mentions "css".
@css = @page.search("//link[contains(@href, 'css')]").map { |link| link["href"] }.compact

@css.each do |css|
  puts css
end

# Download each script to the current directory. Note this only copes with
# absolute URLs, which is why the relative links still need handling.
@scripts.each do |script|
  url = URI.parse(script)
  filename = script.split("/").last
  Net::HTTP.start(url.host, url.port) do |http|
    remote_file = http.get(url.request_uri)
    File.open(filename, "wb") do |file|
      file.write(remote_file.body)
    end
  end
end
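
To handle the relative links (and grab the CSS while you're at it), one option
is to resolve each src/href against the page URL with URI.join and let
Mechanize fetch the result. This is a rough, untested sketch; I'm assuming
Mechanize's get_file and Page#uri here:

# Untested sketch: make every src/href absolute, then save it to the current
# directory. URI.join leaves already-absolute URLs alone.
(@scripts + @css).each do |link|
  absolute = URI.join(@page.uri.to_s, link).to_s
  filename = absolute.split("/").last
  File.open(filename, "wb") do |file|
    file.write(@mech.get_file(absolute))  # get_file returns the raw response body
  end
end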



On Tue, Apr 13, 2010 at 1:33 AM, rainkinz <brendan.grain...@gmail.com> wrote:

> Hi,
>
> Is there an easy way to enumerate and save javascript files (and
> linked css files for that matter) using watir?
>
> Thanks
>

-- 
Before posting, please read http://watir.com/support. In short: search before 
you ask, be nice.

You received this message because you are subscribed to 
http://groups.google.com/group/watir-general
To post: watir-general@googlegroups.com
