To keep the egg, you have to pass the --debug argument to scrapyd-deploy.
Try it and check whether your spider is actually included in the egg at all.
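For example, roughly (a sketch; exact flags depend on your scrapyd-client
version, and "tre" is the deploy target from your scrapy.cfg):

    scrapyd-deploy tre -p m_scrapy --debug    # --debug keeps the build dir instead of cleaning it up
    scrapyd-deploy --build-egg=m_scrapy.egg   # or: only build the egg locally, without deploying
    unzip -l m_scrapy.egg                     # the listing should show m_scrapy/spiders/ and your spider modules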
Does the command `scrapy list` work?
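If `scrapy list` prints nothing or raises, the problem is in the project
itself rather than in the deploy. It will also surface import errors in
settings.py (for instance, the LOG_FILE line below needs `import time`).
Roughly, from the directory containing scrapy.cfg:

    scrapy list                           # should print one spider name per line
    python -c "import m_scrapy.settings"  # a traceback here usually explains why scrapyd sees 0 spiders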

On Thursday, 16 February 2017 21:01:15 UTC+2, Arnaud Knobloch wrote:
>
> Hi there,
>
> I created my first scrapy project. I have an Ubuntu 16.04 server. I 
> installed scrapyd and scrapyd-client with pip (dependency problems with 
> apt-get).
> When I deploy, no spiders are available...
>
> *scrapyd-deploy tre -p m_scrapy*
> fatal: No names found, cannot describe anything.
> Packing version r14-master
> Deploying to project "m_scrapy" in http://IP:6800/addversion.json
> Server response (200):
> {"status": "ok", "project": "m_scrapy", "version": "r14-master", "spiders": 
> 0, "node_name": "Tre"}
>
> *fatal: No names found, cannot describe anything. --> This seems 
> unimportant: I'm using version = GIT in my scrapy.cfg and I don't have an 
> annotated tag.*
>
> *curl http://IP:6800/schedule.json -d project=m_scrapy -d spider=m_scrapy*
> {"status": "error", "message": "spider 'm_scrapy' not found"}
>
> *curl http://IP:6800/listprojects.json*
> {"status": "ok", "projects": ["m_scrapy"], "node_name": "Tre"}
>
> *curl http://IP:6800/listspiders.json?project=m_scrapy*
> {"status": "ok", "spiders": [], "node_name": "Tre"}
>
> *scrapy.cfg*
>
> [settings]
> default = m_scrapy.settings
>
> [deploy:local]
> url = http://localhost:6800/
> project = m_scrapy
> version = GIT
>
> [deploy:tre]
> url = http://IP:6800/
> project = m_scrapy
> version = GIT
>
> *settings.py*
>
> import time  # required by the LOG_FILE setting below
>
> BOT_NAME = 'm_scrapy'
>
> SPIDER_MODULES = ['m_scrapy.spiders']
> NEWSPIDER_MODULE = 'm_scrapy.spiders'
>
> ITEM_PIPELINES = {
>     'm_scrapy.pipelines.MPhoneImagePipeline': 100,
>     'm_scrapy.pipelines.MAdItemPipeline': 200,
>     'm_scrapy.pipelines.MAdImagesPipeline': 300
> }
>
> MPHONEIMAGEPIPELINE_IMAGES_URLS_FIELD = 'phone_image_url'
> MPHONEIMAGEPIPELINE_RESULT_FIELD = 'phone_image'
>
> COOKIES_DEBUG = True
> LOG_ENABLED = True
> LOG_LEVEL = 'WARNING'
> LOG_STDOUT = False
> LOG_FILE = "%s_%s.log" % (BOT_NAME, time.strftime('%d-%m-%Y'))
> IMAGES_EXPIRES = 0
> MIMAGESPIPELINE_IMAGES_EXPIRES = 0
>
>
> When I run the crawler on my computer, it works fine.
> I already tried deleting *project-egg.info*, *setup.py* and the *build* 
> folder.
>
> Another question: I don't have any .egg file in *build*; is that normal?
>
> Thanks!
>
>
>
