The first time I installed *scrapyd* on Ubuntu 14.04, I didn't use the 
generic way. 
Installed via apt-get, *scrapyd* was set up as a service that could be 
started and had its (log/config/dbs...) directories, but the bundled 
*scrapy* version was very outdated.

So I installed *scrapyd* with pip inside a *virtualenv*. 
It is up to date now, but I can't start *scrapyd* as a service and I can't 
find any of those directories. 
Where do I create the *configuration* file that points to the 
(eggs/dbs/items/logs) directories?
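
As far as I can tell from the scrapyd docs, it reads its settings from 
/etc/scrapyd/scrapyd.conf (which is what the apt package used) or from a 
local scrapyd.conf / ~/.scrapyd.conf file, so I'm guessing I need something 
like the sketch below. The paths are placeholders I made up for my 
virtualenv setup, not anything I have working:

    [scrapyd]
    eggs_dir         = /home/me/scrapyd/eggs
    logs_dir         = /home/me/scrapyd/logs
    items_dir        = /home/me/scrapyd/items
    dbs_dir          = /home/me/scrapyd/dbs
    jobs_to_keep     = 5
    finished_to_keep = 100
    max_proc_per_cpu = 4
    poll_interval    = 5.0
    bind_address     = 127.0.0.1
    http_port        = 6800

Is that the right place and format for it when scrapyd is installed with 
pip in a virtualenv?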

I have more than 10 spiders. On a *remote Ubuntu server*, I want each 
spider to scrape *periodically* (once a week, for example) and send the 
data to a *MongoDB* database. Most of the spiders don't have to run 
simultaneously. 
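
For the MongoDB side I was planning the usual item pipeline with pymongo, 
roughly like this (untested; the URI, database name and the 
one-collection-per-spider choice are just my assumptions):

    # pipelines.py -- minimal MongoDB pipeline sketch
    import pymongo

    class MongoPipeline(object):

        def __init__(self, mongo_uri, mongo_db):
            self.mongo_uri = mongo_uri
            self.mongo_db = mongo_db

        @classmethod
        def from_crawler(cls, crawler):
            # placeholder settings names/defaults
            return cls(
                mongo_uri=crawler.settings.get('MONGO_URI',
                                               'mongodb://localhost:27017'),
                mongo_db=crawler.settings.get('MONGO_DATABASE', 'scraped'),
            )

        def open_spider(self, spider):
            self.client = pymongo.MongoClient(self.mongo_uri)
            self.db = self.client[self.mongo_db]

        def close_spider(self, spider):
            self.client.close()

        def process_item(self, item, spider):
            # one collection per spider, named after the spider
            self.db[spider.name].insert_one(dict(item))
            return item

plus the usual ITEM_PIPELINES entry in settings.py to enable it.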

What is the *best approach* to run *scrapyd* as a service and run the 
spiders periodically on my Ubuntu server?
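
To be concrete, my rough plan for the periodic part was a small script that 
cron runs once a week and that posts to scrapyd's schedule.json endpoint, 
something like this (project and spider names are made up):

    #!/usr/bin/env python
    # schedule_spiders.py -- queue each spider through scrapyd's JSON API
    import requests

    SCRAPYD = 'http://localhost:6800/schedule.json'
    PROJECT = 'myproject'
    SPIDERS = ['spider_one', 'spider_two']  # ...the rest of my ~10 spiders

    for spider in SPIDERS:
        resp = requests.post(SCRAPYD, data={'project': PROJECT,
                                            'spider': spider})
        print(spider, resp.json())

with a crontab entry such as

    0 3 * * 0  /path/to/virtualenv/bin/python /path/to/schedule_spiders.py

(every Sunday at 03:00), but I don't know whether cron + the schedule.json 
API is the recommended way.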
