On Thu, Dec 3, 2009 at 2:43 AM, George De Bruin <sndcha...@gmail.com> wrote:

> Actually, I was thinking about the ScrapBook extension when I started
> reading this thread...  The ScrapBook extension could serve as a model of
> what a Zim plugin should look like...  The key features are the ability to
> (a) scan X layers deep into a page, (b) replace links with the locally
> cached versions of the images / pages / etc., (c) selectively ignore sets
> of links, and (d) replace links that don't have pages / images in the
> cache with dummy links.
>
> IMO - this kind of functionality is really a bit more complicated than it
> appears at first blush.
>
>
Well, it doesn't have to be too complicated as long as you don't try to do
everything yourself. Downloading web pages can be delegated to a specialized
program like wget, which will also convert all the links in the downloaded
pages to local links for you, and has many other options.
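
For example, a plugin could simply shell out to wget. A minimal sketch in
Python (the flags are real wget options; the function name and defaults are
just placeholders):

    import subprocess

    def fetch_page(url, target_dir, depth=1):
        # Let wget do the heavy lifting: recurse 'depth' levels deep,
        # pull in page requisites, and rewrite links to the local copies.
        subprocess.check_call([
            'wget',
            '--recursive', '--level=%d' % depth,
            '--page-requisites',   # also fetch images, stylesheets, etc.
            '--convert-links',     # rewrite links to point at the cache
            '--directory-prefix', target_dir,
            url,
        ])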

Searching through a zim notebook and getting URLs to feed to wget is
trivial: just open a page, iterate over the link elements in the parse tree,
and list all links that start with "http" (or with one of a short list of
supported protocols).
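
Something along these lines, where the parse tree method and element names
are illustrative rather than zim's exact API:

    SUPPORTED = ('http://', 'https://', 'ftp://')

    def external_links(page):
        # Walk the page's parse tree (ElementTree-style here) and yield
        # link targets that point outside the notebook.
        tree = page.get_parsetree()
        for element in tree.getiterator('link'):
            href = element.get('href')
            if href and href.startswith(SUPPORTED):
                yield href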

The only part I would have to look into is where to place the hooks that
allow links to go through a plugin instead of straight to the browser when
opening a link in zim. But this should not be hard - just an implementation
choice.
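
Conceptually it is just a dispatch point; a rough sketch, with all names
hypothetical:

    def open_link(href, handlers, open_in_browser):
        # Give registered plugin handlers first chance at the link and
        # fall back to the browser if none of them claims it. Plugins
        # would append callables to 'handlers' when they are loaded.
        for handler in handlers:
            if handler(href):
                return
        open_in_browser(href)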

Regards,

Jaap