> Thanks for the great answer; Scrapinghub looks really promising, by the
> way. Generating Parsley sounds interesting, but I feel you've basically got
> that covered with slybot and a UI on top of that.
>
Sure. I think there is a lot of interesting work here, but it's not well
defined yet. There are many cases where slybot will not do exactly what you
want, so I like the idea of generating Python from it and continuing to code
from there. It's also better than browser add-ons for working with XPaths,
because it goes through Scrapy.
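
To make the XPath point concrete, here's roughly what that workflow looks
like with the Scrapy shell (the URL, expressions and output below are only
placeholders, not from a real run):

    $ scrapy shell "http://example.com"
    ...
    >>> # same selector API the finished spider will use
    >>> response.xpath("//h1/text()").extract()
    [u'Example Domain']
    >>> # tweak the expression interactively until it matches,
    >>> # then paste it straight into the spider code
    >>> response.xpath("//div[@id='content']//a/@href").extract()
    [...]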


>
> I'm currently back to looking in the direction of an HTTP API, yet I feel
> the project as we discussed it before is a bit immature on its own. If
> anyone has used an HTTP API for their Scrapy spiders before and needed
> some more intricate functionality, please get back to me so we can discuss
> how such an HTTP API could be extended beyond communicating with a simple
> spider. In the meanwhile, I'll keep looking into it on my own.
>

I agree; as it stands it's a bit light. Suggestions are welcome, and I'll
think about it some more too.

One addition I've thought about: instead of wrapping a single spider, wrap a
whole project and dispatch to any spider in it, either based on a spider name
passed with the request or via some domain -> spider mapping. This has come
up before and would be useful; a rough sketch of the idea is below.
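
For the record, a minimal sketch of the dispatch idea, assuming a fairly
recent Scrapy (with SpiderLoader) and Python 3; the function name and the
fallback behaviour are just illustrative, not an existing API:

    from urllib.parse import urlparse

    from scrapy.spiderloader import SpiderLoader
    from scrapy.utils.project import get_project_settings

    def pick_spider(url=None, spider_name=None):
        """Resolve a spider class from the wrapped project."""
        settings = get_project_settings()
        loader = SpiderLoader.from_settings(settings)

        # an explicit spider name in the request wins
        if spider_name:
            return loader.load(spider_name)

        # otherwise fall back to a domain -> spider mapping
        # built from each spider's allowed_domains
        domain = urlparse(url).netloc
        for name in loader.list():
            cls = loader.load(name)
            allowed = getattr(cls, "allowed_domains", None) or ()
            # exact match only; a real version would also handle subdomains
            if domain in allowed:
                return cls

        raise KeyError("no spider handles %s" % domain)

Whatever HTTP layer sits on top would call something like this and hand the
returned class to a CrawlerRunner (or however the wrapper runs crawls);
subdomain matching and proper error handling are left out here.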
