Hi Stefan,

To date this is not implemented. I would suggest this is the case because users are expected to design custom crawl scripts. It would be relatively trivial to have the timing dumped from within your crawl script.
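Stefan's fallback idea (dump all URLs and take the difference between the first and last fetch time) could be sketched roughly as below. This is a minimal sketch, not Nutch's own tooling: it assumes you have a plain-text dump of the crawldb (e.g. produced by `readdb`), and the `Fetch time:` label and the date format are assumptions that may differ between Nutch versions, so adjust them to whatever your dump actually contains.

```python
from datetime import datetime

# Assumed format of a fetch-time line in a crawldb text dump.
# Both the label and the date format are guesses; adapt to your dump.
FETCH_LABEL = "Fetch time:"
DATE_FORMAT = "%a %b %d %H:%M:%S %Y"  # e.g. "Tue Oct 23 14:04:00 2012"

def crawl_duration(lines):
    """Return (first, last, delta) over all fetch times found in `lines`,
    or None if no parseable fetch time is present."""
    times = []
    for line in lines:
        if FETCH_LABEL in line:
            raw = line.split(FETCH_LABEL, 1)[1].strip()
            try:
                times.append(datetime.strptime(raw, DATE_FORMAT))
            except ValueError:
                pass  # skip lines whose date format does not match
    if not times:
        return None
    first, last = min(times), max(times)
    return first, last, last - first
```

As Stefan notes, this only approximates the crawl duration (it misses time spent before the first and after the last fetch), but it can be computed weeks later from the crawldb alone.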
Lewis

On Tue, Oct 23, 2012 at 2:04 PM, Stefan Scheffler <[email protected]> wrote:
> But is there no way to get this two weeks later or so, if I didn't store
> the nohup output?
> My approach to solve this would be to dump out all the URLs and take the
> difference between the first and last fetch time, which is not exact, but
> quite close.
>
> Regards
> Stefan
>
>
> On 23.10.2012 15:04, Markus Jelsma wrote:
>>
>> Hi - this is printed to the command line and log for each individual job.
>> -----Original message-----
>>>
>>> From: Stefan Scheffler <[email protected]>
>>> Sent: Tue 23-Oct-2012 14:39
>>> To: [email protected]
>>> Subject: Crawling Time
>>>
>>> Hello,
>>> Is there a possibility to check how long a whole crawl took, after it
>>> is finished?
>>>
>>> Regards
>>> Stefan
>>>
>>> --
>>> Stefan Scheffler
>>> Avantgarde Labs GmbH
>>> Löbauer Straße 19, 01099 Dresden
>>> Telefon: +49 (0) 351 21590834
>>> Email: [email protected]
>>>
>
>
> --
> Stefan Scheffler
> Avantgarde Labs GmbH
> Löbauer Straße 19, 01099 Dresden
> Telefon: +49 (0) 351 21590834
> Email: [email protected]
>

--
Lewis

