Hi @Pablo Hoffman and @Others,

I have looked at the documentation in the Scrapy docs on running multiple 
spiders from a Python script. What I need is to take the main spider class 
names and the sub-spider class names and run them against a particular URL, 
to check which spider works for that URL.

While doing this, execution stops after the first spider finishes. When I 
try to call reactor.run() inside a for loop like the one below, it raises a 
twisted.internet.error.ReactorNotRestartable error ("reactor is not restartable"):

from twisted.internet import reactor
from scrapy import signals
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy.xlib.pydispatch import dispatcher

def stop_reactor():
    reactor.stop()

for spider in spider_classes:
    # stop the reactor once the spider closes
    dispatcher.connect(stop_reactor, signal=signals.spider_closed)
    spider_obj = spider(url=args.url)
    crawler = Crawler(Settings())
    crawler.configure()
    crawler.crawl(spider_obj)
    crawler.start()
    reactor.run()  # fails on the second iteration: the reactor cannot be restarted

Could you please suggest a solution for this?
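
For reference, the direction I was thinking of trying next (I'm not sure it 
is the right pattern) is to start all the crawlers first and call 
reactor.run() only once, stopping the reactor after the last spider_closed 
signal. This is just a rough sketch using the same old Crawler/Settings API 
as my snippet above; spider_classes and args.url are variables from my own 
script:

from twisted.internet import reactor
from scrapy import signals
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy.xlib.pydispatch import dispatcher

running = [0]  # number of spiders still running

def spider_closed(spider):
    running[0] -= 1
    if running[0] == 0:
        reactor.stop()  # stop only after the last spider has closed

dispatcher.connect(spider_closed, signal=signals.spider_closed)

for spider_cls in spider_classes:
    crawler = Crawler(Settings())
    crawler.configure()
    crawler.crawl(spider_cls(url=args.url))
    crawler.start()
    running[0] += 1

reactor.run()  # single run; the reactor is never restarted

Would that be the recommended way, or is there a better approach?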
