On Mon, Dec 12, 2011 at 12:33 PM, Massimo Di Pierro <
massimo.dipie...@gmail.com> wrote:

> I agree. I think this should be a component of a larger project. It
> should include the ability to recursively download all linked pages
> from the same domain (including HTML, CSS, JS, images, etc.). It
> should not take much to do, but one needs to decide what to do
> with the downloaded data. I think all files should go into uploads
> with permissions, and HTML should go into the database so it can be
> edited. Not sure about CSS.
>
>
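The recursive-download idea above could be sketched roughly as below; `extract_links` and `same_domain` are hypothetical helper names for illustration, not web2py APIs, and a real crawler would still need the fetch loop and the uploads/database storage step:

```python
# Sketch of the same-domain link collection step described above.
# Assumption: we parse fetched HTML and keep only links on the same host.
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkExtractor(HTMLParser):
    """Collect URLs from a/link href and script/img src attributes."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ('a', 'link') and 'href' in attrs:
            self.links.append(urljoin(self.base_url, attrs['href']))
        elif tag in ('script', 'img') and 'src' in attrs:
            self.links.append(urljoin(self.base_url, attrs['src']))

def same_domain(url, root):
    """True when url points at the same host as root."""
    return urlparse(url).netloc == urlparse(root).netloc

def extract_links(html, base_url):
    """Return same-domain URLs found in html, resolved against base_url."""
    parser = LinkExtractor(base_url)
    parser.feed(html)
    # Keep only same-domain links, per the proposal above.
    return [u for u in parser.links if same_domain(u, base_url)]
```

A crawler would then fetch each returned URL, repeat the extraction on any HTML it gets back, and route files to uploads and HTML to the database as Massimo suggests.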
So, until now it only fetches a copy of a URL (a single HTML page) and
translates it to web2py, right?

Javier
