http://doc.scrapy.org/en/latest/faq.html#scrapy-crashes-with-importerror-no-module-named-win32api
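That FAQ entry covers exactly this error: on Windows, Scrapy's S3 download handler pulls in Twisted code that imports win32api, so the crawl aborts if pywin32 is not installed. As a quick sanity check (a hypothetical helper, not something from the FAQ), you can probe for the module directly:

```python
def check_pywin32():
    """Return a short status string for the pywin32 dependency."""
    try:
        import win32api  # provided by the pywin32 package on Windows
        return "pywin32 is installed"
    except ImportError:
        return "pywin32 is missing; install pywin32 (see the FAQ link above)"

print(check_pywin32())
```

If it reports the module as missing, install pywin32 as the FAQ describes and re-run `scrapy crawl dmoz`.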

On Wednesday, August 27, 2014 at 17:30:50 UTC-3, Deivanayaki Rathinam 
wrote:
>
> Hi,
>
>  When I run the "dmoz" sample program from the Scrapy tutorial, it does not 
> run successfully and I get the following errors. If anybody knows the cause, please tell me:
>
> C:\Python27\Scripts\tutorial>scrapy crawl dmoz
> 2014-08-28 01:32:14+0530 [scrapy] INFO: Scrapy 0.24.4 started (bot: tutorial)
> 2014-08-28 01:32:14+0530 [scrapy] INFO: Optional features available: ssl, http11
>
> 2014-08-28 01:32:14+0530 [scrapy] INFO: Overridden settings: {'NEWSPIDER_MODULE': 'tutorial.spiders', 'SPIDER_MODULES': ['tutorial.spiders'], 'BOT_NAME': 'tutorial'}
> 2014-08-28 01:32:14+0530 [scrapy] INFO: Enabled extensions: LogStats, TelnetConsole, CloseSpider, WebService, CoreStats, SpiderState
> Traceback (most recent call last):
>   File "C:\Python27\lib\runpy.py", line 162, in _run_module_as_main
>     "__main__", fname, loader, pkg_name)
>   File "C:\Python27\lib\runpy.py", line 72, in _run_code
>     exec code in run_globals
>   File "c:\python27\Scripts\scrapy.exe\__main__.py", line 9, in <module>
>   File "C:\Python27\lib\site-packages\scrapy\cmdline.py", line 143, in execute
>     _run_print_help(parser, _run_command, cmd, args, opts)
>   File "C:\Python27\lib\site-packages\scrapy\cmdline.py", line 89, in _run_print_help
>     func(*a, **kw)
>   File "C:\Python27\lib\site-packages\scrapy\cmdline.py", line 150, in _run_command
>     cmd.run(args, opts)
>   File "C:\Python27\lib\site-packages\scrapy\commands\crawl.py", line 60, in run
>     self.crawler_process.start()
>   File "C:\Python27\lib\site-packages\scrapy\crawler.py", line 92, in start
>     if self.start_crawling():
>   File "C:\Python27\lib\site-packages\scrapy\crawler.py", line 124, in start_crawling
>     return self._start_crawler() is not None
>   File "C:\Python27\lib\site-packages\scrapy\crawler.py", line 139, in _start_crawler
>     crawler.configure()
>   File "C:\Python27\lib\site-packages\scrapy\crawler.py", line 47, in configure
>     self.engine = ExecutionEngine(self, self._spider_closed)
>   File "C:\Python27\lib\site-packages\scrapy\core\engine.py", line 64, in __init__
>     self.downloader = downloader_cls(crawler)
>   File "C:\Python27\lib\site-packages\scrapy\core\downloader\__init__.py", line 73, in __init__
>     self.handlers = DownloadHandlers(crawler)
>   File "C:\Python27\lib\site-packages\scrapy\core\downloader\handlers\__init__.py", line 22, in __init__
>     cls = load_object(clspath)
>   File "C:\Python27\lib\site-packages\scrapy\utils\misc.py", line 42, in load_object
>     raise ImportError("Error loading object '%s': %s" % (path, e))
> ImportError: Error loading object 'scrapy.core.downloader.handlers.s3.S3DownloadHandler': No module named win32api
>
> C:\Python27\Scripts\tutorial>
>
> Thanks,
> Selvi Rathinam.
>

-- 
You received this message because you are subscribed to the Google Groups 
"scrapy-users" group.