You need to save a reference to the db object somewhere at init time; start_requests, where you initialize the db connection, seems the most appropriate place:
import MySQLdb
from scrapy.http import Request

def start_requests(self):
    # keep the connection on the spider instance so later
    # callbacks such as parse() can reach it as self.db
    self.db = MySQLdb.connect(host="localhost", user="root", passwd="",
                              db="crawler_engine", charset="utf8",
                              use_unicode=False)
    cur = self.db.cursor()
    cur.execute("select url from urls where num_crawl=1")
    for (url,) in cur.fetchall():  # each row is a one-element tuple
        yield Request(url)
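Once the connection is stored on self, any later callback can reach it through the spider instance, which is exactly what the self.db.cursor() call in your traceback needs. A minimal sketch of how that might look (the "pages" table and the insert are just placeholders, not from your project; the closed() method is Scrapy's hook that runs when the spider finishes):

def parse(self, response):
    # self.db was set in start_requests, so it is available here
    cur = self.db.cursor()
    # hypothetical example query; replace with whatever you store
    cur.execute("insert into pages (url) values (%s)", (response.url,))
    self.db.commit()

def closed(self, reason):
    # called by Scrapy when the spider closes; release the connection
    self.db.close()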
On Monday, January 13, 2014 10:57:19 PM UTC+1, d4v1d wrote:
>
> Thanks, I have one last error (I hope ;-))
>
> cursor = self.db.cursor()
> exceptions.AttributeError: 'CrawlSpider' object has no attribute 'db'
>
>