My view is that we don't want to postpone, but we don't want to release with this problem either. Releasing as-is would make many use cases unworkable, since most of the time it's simply not feasible to crawl an entire domain. I'm almost done with an interim patch (not final, but it should reactivate the validators). If someone can look at it and give me some feedback, perhaps we can move on with the release. Thanks.
On Mon, May 23, 2011 at 10:30 PM, Richard Frovarp <[email protected]> wrote:

> On 05/23/2011 01:53 PM, Eugen Paraschiv wrote:
>>
>> Unfortunately, there may be a significant issue with the functionality as
>> well. I'm having some difficulty crawling a site based on depth (only
>> crawling up to a certain depth). This is related to
>> https://issues.apache.org/jira/browse/DROIDS-56 and the ongoing
>> discussion there. The gist of it is that, if I'm not missing something,
>> DROIDS-56 removed the existing validator functionality (although the
>> validators themselves are still in the Droids codebase) without replacing
>> it with something else. A suggestion was made to use the filters to
>> achieve the same functionality, but that is not possible without some
>> work. This means that the only way to crawl a site now is to crawl all of
>> it, with no regard to depth, which I see as a major problem.
>> Any thoughts on this?
>> If it's OK, I would start work on a small temporary patch for this, and
>> follow up with more work in 0.2, so that the timeline for the release is
>> not affected too much.
>> Thanks.
>>
>
> Do we want to hold off for a patch, or release knowing you have to crawl
> the whole site? I think I've got everything to where the release
> artifacts will be clean.
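For context on what the filter-based replacement discussed above might look like: this is only a rough sketch, not the actual Droids API. It assumes each crawl task carries the link-hop depth at which its URL was discovered, and a filter drops tasks past a maximum depth (roughly what the removed validators allowed). The names Task, DepthLimitedCrawl, accept, and crawl are all hypothetical.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;
import java.util.function.Function;

public class DepthLimitedCrawl {

    /** A crawl task carrying the hop depth at which its URL was found. */
    static final class Task {
        final String url;
        final int depth;
        Task(String url, int depth) { this.url = url; this.depth = depth; }
    }

    private final int maxDepth;

    DepthLimitedCrawl(int maxDepth) { this.maxDepth = maxDepth; }

    /** The "filter": accept only tasks at or below the configured depth. */
    boolean accept(Task t) { return t.depth <= maxDepth; }

    /**
     * Breadth-first crawl over a toy link-extraction function;
     * returns the URLs actually visited, in visit order.
     */
    List<String> crawl(String seed, Function<String, List<String>> links) {
        List<String> visited = new ArrayList<>();
        Deque<Task> queue = new ArrayDeque<>();
        queue.add(new Task(seed, 0));
        while (!queue.isEmpty()) {
            Task t = queue.poll();
            if (!accept(t)) continue;  // drop tasks past maxDepth; their
                                       // outlinks are never enqueued
            visited.add(t.url);
            for (String out : links.apply(t.url)) {
                queue.add(new Task(out, t.depth + 1));
            }
        }
        return visited;
    }
}
```

The point of the sketch is that depth limiting needs state on the task itself (the hop count), which is why a plain URL filter alone is not enough without some work.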
