missing caching for robots.txt
------------------------------

                 Key: DROIDS-105
                 URL: https://issues.apache.org/jira/browse/DROIDS-105
             Project: Droids
          Issue Type: Improvement
          Components: core
            Reporter: Paul Rogalinski
         Attachments: CachingContentLoader.java

The current implementation of the HttpClient does not cache any requests to the 
robots.txt file. When using the CrawlingWorker, this results in 2 requests to 
robots.txt (HEAD + GET) per crawled URL, so crawling 3 URLs sends the target 
server 6 requests for robots.txt.

Unfortunately, the contentLoader field is final in HttpProtocol, so there is no 
way to replace it with a caching implementation like the one you'll find in the 
attachment.
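For illustration, a minimal sketch of the caching idea: a wrapper that memoizes fetched content per URL so repeated robots.txt lookups hit the cache instead of the server. The Loader interface and all names below are illustrative stand-ins, not the actual Droids ContentLoader API or the attached implementation:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Sketch of an in-memory caching content loader. The Loader interface is a
// hypothetical stand-in for the real ContentLoader abstraction in Droids.
public class CachingContentLoader {

    // Hypothetical delegate interface: fetches raw content for a URL.
    public interface Loader {
        byte[] load(String url) throws Exception;
    }

    private final Loader delegate;
    private final Map<String, byte[]> cache = new ConcurrentHashMap<>();

    public CachingContentLoader(Loader delegate) {
        this.delegate = delegate;
    }

    // Returns cached content when available; otherwise delegates once
    // and stores the result, so later requests never reach the server.
    public byte[] load(String url) throws Exception {
        byte[] cached = cache.get(url);
        if (cached != null) {
            return cached;
        }
        byte[] content = delegate.load(url);
        cache.put(url, content);
        return content;
    }
}
```

With a wrapper like this registered in HttpProtocol, the HEAD + GET pair per crawled URL would collapse into a single fetch of robots.txt per host for the lifetime of the cache.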

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
