May I suggest using the StatsCollector <http://doc.scrapy.org/en/latest/topics/api.html#scrapy.statscol.StatsCollector.inc_value>? It is accessible as spider.crawler.stats, and opening a PR on Scrapy for this would be nice too.
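
For reference, a minimal sketch (spider name, URL, selectors and stat keys are all made up) of bumping a counter through the stats collector from inside a spider:

    import scrapy

    class BooksSpider(scrapy.Spider):
        name = "books"  # made-up spider name
        start_urls = ["http://example.com/catalog"]  # placeholder URL

        def parse(self, response):
            for book in response.css("div.book"):
                # inc_value() creates the counter on first use and
                # increments it on every call after that
                self.crawler.stats.inc_value("items/books")
                yield {"title": book.css("h2::text").get()}

The collected values end up in the stats dump that Scrapy prints when the crawl finishes.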
On Wednesday, August 6, 2014 at 05:31:51 UTC-3, Gabriel Birke wrote:
>
> I have a spider that returns different item types (books, authors and
> publishers) and would like to export each item type to its own file, being
> flexible about the format (CSV or JSON). My first approach would be to use
> a pipeline class, but then I lose the easy functionality of specifying the
> feed exporters on the command line. Now here is a bunch of questions:
>
> 1) Is this the right approach, or should I implement this differently?
> 2) How can I reuse the exporters in a pipeline?
> 3) How can I pass command-line arguments to my pipeline class?
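
As for question 2 above, a rough sketch of reusing the built-in exporters from a pipeline (the class name, file names and item-type detection are my assumptions, not tested code):

    from scrapy.exporters import CsvItemExporter

    class PerTypeExportPipeline(object):

        ITEM_TYPES = ("books", "authors", "publishers")

        def open_spider(self, spider):
            # one output file and one exporter per item type
            self.files = {}
            self.exporters = {}
            for item_type in self.ITEM_TYPES:
                f = open("%s.csv" % item_type, "wb")
                exporter = CsvItemExporter(f)
                exporter.start_exporting()
                self.files[item_type] = f
                self.exporters[item_type] = exporter

        def close_spider(self, spider):
            for item_type, exporter in self.exporters.items():
                exporter.finish_exporting()
                self.files[item_type].close()

        def process_item(self, item, spider):
            # assumes item classes are named BookItem, AuthorItem, PublisherItem
            item_type = type(item).__name__.replace("Item", "").lower() + "s"
            if item_type in self.exporters:
                self.exporters[item_type].export_item(item)
            return item

For question 3, one common route is passing spider arguments with -a (they become attributes on the spider object handed to process_item) or settings with -s, which a pipeline can read via crawler.settings in from_crawler.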