I need to correct my previous message: it turns out we do have artifacts larger than 40M, even though that is the defined maximum. I'm not sure at this point how that is happening.
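Counts like the ones below can be reproduced with a quick walk of the repository tree on disk. A minimal sketch in Python; the repository root here is just an example path, so point it at wherever your repositories actually live:

    import os

    ROOT = "/data/archiva/repositories/internal"  # example path; use your repo root
    THRESHOLD = 100 * 1024 * 1024                 # 100M

    # Walk the repository tree and collect every file over the threshold.
    oversized = []
    for dirpath, _dirnames, filenames in os.walk(ROOT):
        for name in filenames:
            path = os.path.join(dirpath, name)
            size = os.path.getsize(path)
            if size > THRESHOLD:
                oversized.append((size, path))

    print("%d artifacts over 100M" % len(oversized))
    if oversized:
        size, path = max(oversized)
        print("largest: %dM  %s" % (size // (1024 * 1024), path))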
In our internal repository we have 40 artifacts that are over 100M in size, the largest being 366M. In snapshots, we have 61 artifacts over 100M, the largest being 342M. I'm not sure how significant these sizes are in terms of the indexer, but I wanted to accurately reflect what we're dealing with.

-----Original Message-----
From: Stallard,David
Sent: Monday, October 31, 2011 9:43 AM
To: '[email protected]'
Subject: RE: 100% CPU in Archiva 1.3.5

Brett Porter said:
>>It's not unexpected that indexing drives it to 100% CPU momentarily, but causing it to become unavailable is unusual. How big are the artifacts it is scanning?<<

The CPU was still at 100% on Monday morning, so having the weekend to index didn't seem to improve anything; the indexing queue was up to about 3500. We got a report that downloads from Archiva are extremely slow, so I just bounced it. CPU was immediately at 100% after the bounce, and the indexing queue is at 6. I expect that queue to rise continually, based on what I've seen after previous bounces.

Our upload maximum size was 10M for the longest time, but we had to raise it to 20M a while back, and then recently we raised it to 40M. Even so, I would think the overwhelming majority of our artifacts are 10M or less.

Is there a way to increase the logging level? Currently, the logs don't show any indication of what Archiva is grinding away on. After the startup entries, there really isn't anything in archiva.log except for some Authorization Denied messages, but those have been occurring for months and months; I don't think they are related to the 100% CPU issue, which only started about a week ago.
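On the logging question above: a minimal sketch of what raising the level might look like, assuming Archiva 1.3.x configures logging through the log4j.xml bundled inside the webapp (in a standalone install, something like apps/archiva/webapp/WEB-INF/classes/log4j.xml). The logger name below is a guess, so match it to whatever the file actually declares:

    <!-- Sketch only: turn up verbosity for Archiva's own classes.
         Logger name is an assumption; check the existing entries in log4j.xml. -->
    <logger name="org.apache.archiva">
      <level value="DEBUG"/>
    </logger>

If log4j is indeed what is in play here, a DEBUG level on the scanning/indexing packages should reveal which artifacts are being processed and give some hint of what the CPU is grinding on.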
