Traditionally, when using WWW::Mechanize to download images, I first fetch the
root page:
  use WWW::Mechanize;

  my $mech = WWW::Mechanize->new();
  $mech->get($url);

and then find all the images and get() them one by one (forgive the crude
code):

  my @images = $mech->find_all_images();
  foreach my $image (@images) {
    my $imageurl = $image->url_abs();
    # use the last path segment of the URL as the local filename
    next unless $imageurl =~ m/([^\/]+)$/;
    $mech->get($imageurl, ':content_file' => $1);
  }

My current problem with this is that I'm trying to download an image that is
generated with information from the session of the original get($url).  It's
not a static *.jpg or anything simple; it's a black box that displays an image
relevant to the session.  That means when I fetch the image
(http://www.domain.com/image/, which is embedded in the page) as shown above,
it's a new request and I get a completely random image.
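
For clarity, here's a rough sketch of the behaviour I'm describing, using the
placeholder URL from above: two back-to-back requests to the image URL come
back with different bytes.

  # each request to the black box produces a fresh image
  $mech->get('http://www.domain.com/image/');
  my $first = $mech->content();

  $mech->get('http://www.domain.com/image/');
  my $second = $mech->content();

  print "got a different image on each request\n" if $first ne $second;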

Is there a way to cache the images that are loaded during the initial
get($url) so that the image matches the content of the page
retrieved?  Or even to capture the session information transmitted to
the black box, domain.com/image/, so I can clone the information and
submit it with the get($imageurl)?
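
To make that second idea concrete, here's roughly what I imagine by cloning
the session information.  I'm only guessing that the black box keys off the
cookies and/or the Referer header, so treat this as a sketch of the idea
rather than something I know works:

  use WWW::Mechanize;

  my $mech = WWW::Mechanize->new();   # one cookie jar shared by all requests
  $mech->get($url);

  # reusing the same $mech sends the cookies from get($url) with every
  # later request; the Referer header is my guess at what else the
  # black box might check
  $mech->add_header(Referer => $url);

  foreach my $image ($mech->find_all_images()) {
    my $imageurl = $image->url_abs();
    next unless $imageurl =~ m/([^\/]+)$/;
    $mech->get($imageurl, ':content_file' => $1);
  }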

Ideally, I would like a routine like $mech->getComplete($url, $directory) that
would save the source and the images etc. associated with the page, analogous
to "Save As -> Web Page, Complete" in Firefox.
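
For what it's worth, here's a rough sketch of the kind of routine I mean
(get_complete is just a name I made up, and it still wouldn't solve the
session problem for the dynamically generated image):

  use WWW::Mechanize;
  use File::Spec;

  sub get_complete {
    my ($mech, $url, $directory) = @_;

    # fetch the page and save its source
    $mech->get($url);
    $mech->save_content(File::Spec->catfile($directory, 'index.html'));

    # save every image referenced by the page alongside it
    foreach my $image ($mech->find_all_images()) {
      my $imageurl = $image->url_abs();
      next unless $imageurl =~ m/([^\/]+)$/;
      $mech->get($imageurl,
                 ':content_file' => File::Spec->catfile($directory, $1));
    }
    return;
  }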

Thanks, all.  I think I'm getting pretty proficient with WWW::Mechanize, but
don't be afraid to respond as if I were an idiot, so your answer doesn't go
over my head.

Hikari
