Hi all,

I have written a small Python wrapper that lets you work with 
Scrapyd's API without the hassle of raw HTTP requests, parsing 
responses, or remembering where the endpoints live.

The project and quickstart can be found on GitHub: 

- https://github.com/djm/python-scrapyd-api

and the project itself is on PyPI here:

- https://pypi.python.org/pypi/python-scrapyd-api

It's fully tested with 97% coverage and also has complete documentation 
on ReadTheDocs:

- http://python-scrapyd-api.readthedocs.org/en/latest/


To quickly showcase why it makes your life easier:

>>> from scrapyd_api import ScrapydAPI
>>> scrapyd = ScrapydAPI('http://localhost:6800')
>>> scrapyd.list_projects()
[u'ecom_project', u'estate_agent_project', u'car_project']
>>> scrapyd.list_versions('ecom_project')
[u'345', u'346', u'347', u'348']
>>> scrapyd.delete_version('ecom_project', '348')
True
>>> scrapyd.list_versions('ecom_project')
[u'345', u'346', u'347']


etc, etc. It covers every action the Scrapyd API exposes, including 
uploading new project eggs.
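For example, deploying a new egg and then kicking off a spider is just a 
couple of calls against a running Scrapyd instance (the egg filename, 
version and spider name below are made up for illustration):

>>> with open('ecom_project-349.egg', 'rb') as egg:
...     scrapyd.add_version('ecom_project', '349', egg)
>>> scrapyd.schedule('ecom_project', 'product_spider')

schedule() returns the job ID that Scrapyd assigns, which you can later 
pass to cancel() or match up against the output of list_jobs().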

Thanks! I'm glad I can give something back to a project that has helped 
me out on numerous occasions.

Cheers,

Darian

-- 
You received this message because you are subscribed to the Google Groups 
"scrapy-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/scrapy-users.
For more options, visit https://groups.google.com/d/optout.