Hi

Found that there was a segment cleanup script and two Nutch crawls running simultaneously, which caused problems for us.
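In case it helps anyone else: we now guard the crawl invocation with a lock so the cleanup script and a second crawl cannot start while one is already running. A minimal sketch (the lock path is ours, adjust as needed):

  #!/bin/bash
  # Take an exclusive, non-blocking lock; bail out if another
  # crawl (or the segment cleanup) already holds it.
  exec 200>/var/lock/nutch-crawl.lock
  flock -n 200 || { echo "another crawl is running, skipping"; exit 1; }

  # Safe to crawl; the lock is released automatically when the script exits.
  ./crawl -i -D solr.server.url=http://solrserver:8080/solr/solr_core_shard1_replica2 seeds data6 1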
We are good now.

Regards

-----Original Message-----
From: Markus Jelsma [mailto:[email protected]]
Sent: Thursday, July 07, 2016 9:34 PM
To: [email protected]
Subject: RE: Nutch 1.11 | memory leak?

Hello - what memory is not getting released by what process? Crawls 'slowing down' usually happens because more and more records are being fetched. I have never seen Nutch actually leak memory in the JVM heap, and since the process' memory is largely dictated by the max heap size (default 1g), the process' memory (RSS) usage can never exceed 1.2-1.5g. Additionally, each job in a crawl cycle is independent: the JVM exits and a new one is started.

M.

-----Original message-----
> From: Megha Bhandari <[email protected]>
> Sent: Thursday 7th July 2016 10:57
> To: [email protected]
> Subject: Nutch 1.11 | memory leak?
>
> Hi
>
> After running multiple incremental crawls we are seeing a slowdown in our
> Nutch box. Memory is not getting released.
> We are using the following crawl command:
>
> ./crawl -i -D solr.server.url=http://solrserver:8080/solr/solr_core_shard1_replica2 seeds data6 1
>
> Has anyone faced this issue in 1.11?
>
> Regards
> Megha
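PS - if anyone wants to sanity-check Markus's point that RSS stays near the heap cap, a rough way to watch the resident memory of a running Nutch job (assumes a local, non-YARN runner, so the pgrep pattern may need adjusting):

  # RSS is reported in kilobytes.
  pid=$(pgrep -f org.apache.nutch | head -n 1)
  ps -o pid,rss,etime,args -p "$pid"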

