I have put a new beta at http://www.sslug.dk/~micke/plucker/beta/
/Mike
With BBC News Online
http://news.bbc.co.uk/low/english/pda/
and Wired
http://www.wired.com/news_drop/palmpilot
plucking is stopping at the third level for some reason. For example, I
get the graphical Wired page and the list of stories, but not individual
stories, even though I have MAXDEPTH=3 set in home.html.
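One plausible explanation for stopping a level early is an off-by-one in where the spider checks its depth limit. This toy depth-limited crawl (not Plucker's actual Spider.py, and the link graph is made up for illustration) shows the distinction: checking the limit before following links from a page, versus before even recording the page, costs you one whole level.

```python
from collections import deque

# Hypothetical link graph standing in for a site (for illustration only).
SITE = {
    "home": ["index"],              # depth 0 -> 1
    "index": ["story1", "story2"],  # depth 1 -> 2
    "story1": [],
    "story2": [],
}

def crawl(start, maxdepth):
    """Breadth-first crawl keeping every page whose depth is <= maxdepth."""
    seen = {start: 0}
    queue = deque([start])
    while queue:
        page = queue.popleft()
        depth = seen[page]
        if depth >= maxdepth:
            continue  # pages AT the limit are kept, but their links are not followed
        for link in SITE.get(page, []):
            if link not in seen:
                seen[link] = depth + 1
                queue.append(link)
    return seen

print(crawl("home", 2))
```

With the check placed as above, a limit of 2 still retrieves the depth-2 story pages; if the spider instead refused to enqueue pages at the limit, the stories would silently vanish, which is the symptom being described.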
I just verified this, and oddly
Greetings, I am a newbie Clie owner and plucker user. This is wonderful stuff!
I have a problem with the Spider.py script when it comes to URLs with
name=val parameters. My machine has Python 2.1.1 (Mandrake Linux 8.1
distro). Spider.py appears to truncate such href URLs at the first &. For
example,
<a HREF=http://www.hti.umich.edu/cgi/r/rsv/rsv-idx?type=DIV1&byte=1801>Genesis</a>
gets scanned as
http://www.hti.umich.edu/cgi/r/rsv/rsv-idx?type=DIV1
I'm getting the same problem with max depth on Wired and BBC News, but I'm
also getting a problem with BBC recipes.
I won't starve, as I've got all the recipes plucked using the old PyPlucker,
but I can't seem to pluck any of the BBC
recipes pages using Plucker Desktop and the new PyPlucker.
I'm having the problem with BBC and Sci-Fi (http://www.scifi.com/handheld/)
- Original Message -
From: John Albrecht [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Wednesday, August 14, 2002 5:00 PM
Subject: Other bugs ive noticed in plucker desktop
I'm getting the same problem with max
Quoting the href=value did the trick. So my problem was really malformed
HTML on the target site.
Fun fact: I have since observed that the URL works fine without the quotes
with Python 1.5. Python 2.1 is apparently more temperamental. And where is
the shell involved in this? I'm afraid I
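For what it's worth, with a current Python the standard library's html.parser extracts a properly quoted href intact, ampersand and all. A minimal sketch (Python 3's HTMLParser, not the sgmllib-based parser the old Spider.py would have used, so this only illustrates why quoting the attribute sidesteps the problem):

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href attribute values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs arrives as a list of (name, value) pairs, names lowercased
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# The quoted form of the problem URL from earlier in the thread.
quoted = ('<a href="http://www.hti.umich.edu/cgi/r/rsv/rsv-idx'
          '?type=DIV1&byte=1801">Genesis</a>')
c = LinkCollector()
c.feed(quoted)
print(c.links[0])
```

The quotes make the attribute boundary unambiguous, so the parser never has to guess where the value ends, which is presumably what the older, stricter parser got wrong on the unquoted form.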
Hi everyone.
I'm starting out with Plucker, and I'm interested to know if pages can
be processed after download to my machine, before they're compressed to
go on my Palm m100. I have a weather report page at...
http://www.bom.gov.au/cgi-bin/wrap_fwo.pl?IDW12300.txt
...that I want to download,
Try SiteScooper. It has a bit of a learning curve, but it's well worth it.
It will do exactly what you are asking about. www.sitescooper.org
Troy Eckhardt
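If a full SiteScooper setup is overkill, the post-processing step itself can be sketched in a few lines of Python. This is a hypothetical filter, and the idea that the forecast text sits in a <pre> block is an assumption about the page, not something verified against the BOM site:

```python
import re

def extract_pre(html):
    """Return the contents of the first <pre>...</pre> block,
    or the input unchanged if no such block exists."""
    m = re.search(r"<pre>(.*?)</pre>", html, re.IGNORECASE | re.DOTALL)
    return m.group(1).strip() if m else html

# Stand-in for a downloaded forecast page (made-up content).
page = "<html><body><h1>Forecast</h1><pre>Perth: fine, max 22</pre></body></html>"
print(extract_pre(page))
```

A script like this could run between the download and the Plucker conversion, so only the trimmed text gets compressed onto the Palm.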
- Original Message -
From: Mark Hewitt [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Thursday, August 15, 2002 12:46 AM
Subject:
Look at a program called sitescooper. It will support plucker and can
trim down sites that are way too flashy.
--Wes
Mark Hewitt said:
Hi everyone.
I'm starting out with Plucker, and I'm interested to know if pages can
be processed after download to my machine, before they're compressed to