How about this for what might be a simpler way of doing more or less
what Bostjan requests:

Currently, when you click on a link that wasn't retrieved, you have
the option to copy the URL to the Memo database, and you can, if you
wish, add text after the URL. This feature could be left exactly as is;
no changes to the viewer would be necessary. Instead, there could be
an additional program, run manually by the user, which would read
the Memo database that has been hotsynced to the PC, pull out
all the Memo records that were saved from the Plucker viewer, and
create a temporary HTML file from them. The new program would then
call the parser to fetch all the links in that HTML file.
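The core of that new program could be quite small. Here is a rough
Python sketch of the memo-to-HTML step, assuming (as the "Copy URL"
feature produces today) that each saved memo has the URL on its first
line and any user-added text on the following lines; the function name
and HTML layout are just illustrative choices:

```python
import re

# Memos whose first line looks like a URL are treated as
# Plucker-saved "missing link" records; everything else is skipped.
URL_RE = re.compile(r'^(https?|ftp)://\S+$')

def memos_to_html(memos):
    """Build an HTML page linking every memo that starts with a URL."""
    links = []
    for memo in memos:
        lines = memo.strip().splitlines()
        if not lines or not URL_RE.match(lines[0].strip()):
            continue                      # not a Plucker-saved URL memo
        url = lines[0].strip()
        # Optional text the user added becomes the link text;
        # fall back to the URL itself.
        text = ' '.join(lines[1:]).strip() or url
        links.append('<li><a href="%s">%s</a></li>' % (url, text))
    return ('<html><body><ul>\n%s\n</ul></body></html>'
            % '\n'.join(links))
```

The program would then write this page to a temporary file and hand it
to the parser on the command line; exactly how the parser is invoked
(and with which options) I'd leave to whoever implements it.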

This might be the fastest way of getting the suggested functionality,
although it would require that the user run the new program
themselves (or set up a cron job). However, that could be marketed as
a Feature, because it gives them full control over when the missing
links are retrieved. :)

To specify maxdepths, file names and other options, the user could add
a suitably named section to the usual Plucker config files. The new
program would refer to that section when calling the parser. Also
(possibly as a future enhancement) the viewer could be modified to add
a little pull-down "maxdepth" selection list to the "External Link"
screen: before you click the "Copy URL" button, you select the
maxdepth for that particular link from the list, and it's saved in the
Memo pad entry in some format that the new program can read and
understand.
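To make that concrete, here is one possible (entirely hypothetical)
shape for both pieces: an ini-style section in the config file, since
Plucker's config files already use that format, and a `maxdepth=N`
line appended to the memo by the viewer. The section name "autofetch"
and both key names are assumptions, not anything Plucker defines today:

```python
from configparser import ConfigParser

# A user-added section in the usual Plucker config file; the name
# "autofetch" and its keys are invented for this sketch.
SAMPLE_CONFIG = """
[autofetch]
maxdepth = 2
home_doc_file = missing_links
"""

def read_autofetch_options(text):
    """Return the new program's options from the config file text."""
    cp = ConfigParser()
    cp.read_string(text)
    return dict(cp['autofetch'])

def memo_maxdepth(memo, default):
    """Per-link maxdepth override, if the viewer stored one.

    Looks for a line like "maxdepth=3" in the memo body; falls back
    to the configured default otherwise.
    """
    for line in memo.splitlines():
        if line.strip().startswith('maxdepth='):
            return int(line.split('=', 1)[1])
    return default
```

The per-memo line has the nice property that it is harmless if the new
program is never run: it's just another line of text in the memo.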


I currently do something vaguely like this myself: I use pilot-link to
download the Memo database in the form of emails, manually view
the memo email folder in Mutt, filter for the "Plucker URLs" items,
and send those filtered items into a very quick and dirty Perl
program which pulls out the URLs (and any optional text I might
have added) and creates an HTML file (or appends them to the file if
it already exists). A cron job then runs the normal Plucker parser
on that HTML file once a day, using the default maxdepth of 2. It
works very well for me, so I think a program to automate the
pilot-link, Mutt and Perl steps would be useful to others.

Alys

--
Alice Harris
Internet Services / ESD Operations, CITEC
[EMAIL PROTECTED]     [EMAIL PROTECTED]




On Mon, Nov 26, 2001 at 11:25:19AM -0800, David A. Desrosiers wrote:
> 
> > Example: I'm reading the downloaded news and I click on a link that was
> > not downloaded. I select this link (via checkbox) I name the link i.e.
> > "Link that was not downloaded the last time" and I select the depth of
> > gathering the information. When I come to another non-downloaded link, I
> > repeat the process.
> 
>       In order to do this, we need to actually store the string of
> characters which make up the "out of bounds" URLs which were not fetched.
> For a very large fetch, or a site containing a lot of links, this could
> add up to a considerable size.
> 
>       Then there's the --no-urlinfo complex. If it's used, you lose the
> ability to retrieve those "out of bounds" urls.
> 
> > All that while I'm using my Palm. And when I HotSync the pda, the
> > Plucker downloads the newly made HTML page (i.e. "Extra pages that have
> > to be downloaded") and uses it in its next session. Would that be
> > possible? Is there any way you could implement this, while not needing
> > to rewrite the whole program all over again? :)
> 
>       And this brings up another issue, which is that we don't currently
> touch (update) the databases on the Palm with the parser on the desktop. In
> order to do this, we would now require that the Palm be in the cradle at
> *GATHER* time, or we'd have to cache off the Plucker databases on the
> desktop. Neither is ideal, and both would require a lot more space on the
> Palm, assuming we use the Palm to add records.
> 
>       Alternately, we pull the database from the Palm, run a gather on the
> desktop, comparing against what is in the database we just pulled from the
> Palm, and then integrate those "missing" records. However, would you just
> want to append those records? Or remove the ones already read, and then
> replace them with the "out of bounds" records you checked?
> 
>       In order to do this, you need parser and viewer changes which
> change the architecture slightly, by adding a 360-degree sync
> capability. The original Plucker implementation did this, with a local cache
> directory and then actually created the PDB on the Palm, using the desktop
> conduit, vs. the Python parser today which creates the PDB on the desktop,
> which you then sync to your Palm with your desktop tools.
> 
> 
> 
> /d
> 
