[ http://jira.codehaus.org/browse/MRM-1449?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=252425#action_252425 ]

Mike R. Haller commented on MRM-1449:
-------------------------------------

I'm going to try, but the Archiva code base is very large.

I found a location that might be the starting point: 
{{org.apache.maven.archiva.proxy.DefaultRepositoryProxyConnectors.transferFile()}}
 is where Archiva asks the Maven {{Wagon}} to retrieve files; e.g. for an 
HTTP Wagon, this is where the connection is opened. I assume keeping a counter 
in this class would be possible. (Are Plexus components stateful 
singletons?)

Also, we need to decide whether to 
1) stall the requests, i.e. wait a little before actually opening the 
connection. This may block the thread for quite a long time. (Bounded queue or 
counter?)
2) cancel the request, return a "not connected" to the caller, and let the 
Archiva core handle the deferred retry.
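Both options could hang off a single {{java.util.concurrent.Semaphore}} sized to the connection cap. This is only a sketch of the idea, not Archiva code; the class and method names are made up for illustration:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

// Illustrative limiter for concurrent outgoing proxy connections.
// Not part of Archiva; names are hypothetical.
public class ConnectionLimiter {
    private final Semaphore permits;

    public ConnectionLimiter(int maxConnections) {
        // fair=true so stalled callers are served in arrival order
        this.permits = new Semaphore(maxConnections, true);
    }

    // Option 1: stall the caller, but only up to a bounded timeout,
    // so a thread is never blocked indefinitely.
    public boolean acquireBlocking(long timeout, TimeUnit unit)
            throws InterruptedException {
        return permits.tryAcquire(timeout, unit);
    }

    // Option 2: fail fast; the caller gets "not connected" immediately
    // and can schedule a deferred retry.
    public boolean acquireOrFail() {
        return permits.tryAcquire();
    }

    // Must be called in a finally block after the transfer completes.
    public void release() {
        permits.release();
    }
}
```

Whether {{transferFile()}} could hold such a limiter as a field depends on the answer to the Plexus singleton question above.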

For incoming connections, this could also be done with an Apache bandwidth 
throttling module.

> Remote connections for many repos exhausts proxy limits
> -------------------------------------------------------
>
>                 Key: MRM-1449
>                 URL: http://jira.codehaus.org/browse/MRM-1449
>             Project: Archiva
>          Issue Type: New Feature
>          Components: remote proxy
>            Reporter: Mike R. Haller
>
> Our Archiva installation uses a company-internal caching proxy (ISA Server) 
> to connect to remote repositories.
> When there are many remote repositories and many developers trying to look up 
> artifacts (existing and non-existing artifacts, e.g. often -sources and 
> -javadoc attachments), Archiva is creating many HTTP connections to the 
> remote repositories.
> This leads to a situation where the caching proxy thinks Archiva is creating 
> too many connections. The ISA warning mail even suggests the host computer 
> may be infected with a worm because it creates so many new connections and 
> blocks the host completely for all outgoing HTTP requests.
> The policies for the remote repositories are configured for retrieving 
> "once", "never" or "daily", depending on whether it's releases or snapshots. 
> Failure caching is disabled, and I'm trying with it enabled, but it doesn't 
> make much difference and the problem still occurs once in a while.
> I think Archiva should have a configurable way to limit the number of (new) 
> connections made per time unit, e.g. "max 60 connections / minute" to prevent 
> this. It's kind of a potential denial of service vulnerability.
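The "max 60 connections / minute" cap proposed above could be sketched as a fixed-window counter. Again purely illustrative, with hypothetical names, not an Archiva API:

```java
// Illustrative fixed-window rate limiter: at most maxPerWindow new
// connections per windowMillis (e.g. 60 per 60000 ms). Hypothetical names.
public class ConnectionRateLimiter {
    private final int maxPerWindow;
    private final long windowMillis;
    private long windowStart;
    private int count;

    public ConnectionRateLimiter(int maxPerWindow, long windowMillis) {
        this.maxPerWindow = maxPerWindow;
        this.windowMillis = windowMillis;
        this.windowStart = System.currentTimeMillis();
    }

    // Returns true if a new connection may be opened now; otherwise the
    // caller defers the request (or reports "not connected").
    public synchronized boolean tryOpenConnection() {
        long now = System.currentTimeMillis();
        if (now - windowStart >= windowMillis) {
            windowStart = now;   // start a fresh window
            count = 0;
        }
        if (count < maxPerWindow) {
            count++;
            return true;
        }
        return false;
    }
}
```

A fixed window is the simplest variant; a sliding window or token bucket would smooth out bursts at window boundaries, at the cost of a little more bookkeeping.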

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: 
http://jira.codehaus.org/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira

        
