To persist stats across runs, I would recommend hooking into the
spider_closed signal: write a callback that serializes the stats object
and stores it in a text file or a database.
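
Something along these lines should do it (a rough sketch written as a
Scrapy extension; the class name and the output filename are just
placeholders, so adapt them to your project):

import json

from scrapy import signals


class DumpStatsOnClose(object):
    """Dump the stats dict to a JSON file when the spider closes."""

    def __init__(self, stats):
        self.stats = stats

    @classmethod
    def from_crawler(cls, crawler):
        # Grab the crawler's stats collector and listen for spider_closed
        ext = cls(crawler.stats)
        crawler.signals.connect(ext.spider_closed,
                                signal=signals.spider_closed)
        return ext

    def spider_closed(self, spider, reason):
        # get_stats() returns a plain dict; default=str takes care of
        # the datetime values Scrapy puts in there (start_time, etc.)
        with open("%s-stats.json" % spider.name, "w") as f:
            json.dump(self.stats.get_stats(), f, default=str)

Enable it by adding whatever module path you save it under to the
EXTENSIONS setting, e.g.
EXTENSIONS = {'myproject.extensions.DumpStatsOnClose': 500}.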

On Tue, Dec 30, 2014 at 9:39 AM, Daniel Fockler <[email protected]>
wrote:

> For anyone who finds this later: I haven't tried it myself, but the
> Memory Stats Collector keeps the stats of the last run in memory, and you
> can access them there.
>
> http://doc.scrapy.org/en/latest/topics/stats.html#memorystatscollector
>
>
> On Monday, December 29, 2014 2:06:20 PM UTC-8, Eric Valente wrote:
>>
>> Did you ever get any information about this? I'd love this feature.
>>
>> On Thursday, June 7, 2012 7:34:12 PM UTC-4, Mr wrote:
>>>
>>> Hey,
>>>
>>> When I run a spider from the console, I get this message on stdout
>>> every minute:
>>> INFO: Crawled 467 pages (at 80 pages/min), scraped 2850 items (at 511
>>> items/min)
>>>
>>> How can I access this nice piece of information and save it to the
>>> stats, so I can still see it even when running under scheduling and
>>> scrapyd?
>>>
>>> Thanks!
>>
