[
https://issues.apache.org/jira/browse/NUTCH-1732?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=18059134#comment-18059134
]
ASF GitHub Bot commented on NUTCH-1732:
---------------------------------------
lewismc commented on PR #891:
URL: https://github.com/apache/nutch/pull/891#issuecomment-3916543906
Hi @igiguere, I had a think about this over the weekend and will share my
thoughts on two points.
1. The new private `CrawlTestParserFailure` inner class always fails
parsing. To test the realistic scenario of "succeed first, fail second", you
could introduce a flip-flopping variant, similar to how
`CrawlTestSignatureReset` alternates between `STATUS_FETCH_SUCCESS` and
`STATUS_FETCH_GONE`. Something like
```
private class CrawlTestParserFailureAlternating extends ContinuousCrawlTestUtil {

  int counter = 0;

  CrawlTestParserFailureAlternating(Context context) {
    super(context);
  }

  @Override
  protected CrawlDatum fetch(CrawlDatum datum, long currentTime) {
    counter++;
    datum.setStatus(STATUS_FETCH_SUCCESS); // always fetched OK
    datum.setFetchTime(currentTime);
    return datum;
  }

  @Override
  protected List<CrawlDatum> parse(CrawlDatum fetchDatum) {
    List<CrawlDatum> parseDatums = new ArrayList<>(0);
    if (counter % 2 == 0) {
      // Even fetches: parsing fails
      parseDatums.add(new CrawlDatum(STATUS_PARSE_FAILED, 0));
    } else {
      // Odd fetches: parsing succeeds (emit signature)
      CrawlDatum sig = new CrawlDatum(STATUS_SIGNATURE, 0);
      sig.setSignature(getSignature());
      parseDatums.add(sig);
    }
    return parseDatums;
  }

  @Override
  protected boolean check(CrawlDatum result) {
    if (counter % 2 == 0) {
      // after a failed parse the CrawlDb entry must reflect it
      return result.getStatus() == STATUS_DB_PARSE_FAILED;
    } else {
      return result.getStatus() == STATUS_DB_FETCHED
          || result.getStatus() == STATUS_DB_NOTMODIFIED;
    }
  }
}
```
This seems fairly practical and aligns with existing test patterns.
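For completeness, a hypothetical driver for the class above, assuming the
boolean `run(int)` method that the other `ContinuousCrawlTestUtil` subclasses
use (the test name, `context` variable, and round count are illustrative;
adjust to the actual signature):
```
// Illustrative only: assumes ContinuousCrawlTestUtil exposes a
// boolean run(int rounds) driver as the existing state tests do.
@Test
public void testCrawlDbStateParseFailureAlternating() throws Exception {
  ContinuousCrawlTestUtil crawlUtil = new CrawlTestParserFailureAlternating(context);
  // four rounds exercise both transitions:
  // parse OK -> parse failed -> parse OK -> parse failed
  if (!crawlUtil.run(4)) {
    fail("db_parse_failed not set after a failed parse");
  }
}
```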
There is another solution! We already have `CrawlDBTestUtil.getServer()` for
spinning up an embedded Jetty server. You could replace the static
`ResourceHandler` with a custom handler that tracks the request count per URL
and serves different content on subsequent requests (the same principle as
above, and maybe more representative of real-life fetching). This is also a
good test... your code could be something like
```
import java.io.IOException;
import java.util.Random;
import java.util.concurrent.atomic.AtomicInteger;

import javax.servlet.ServletException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.eclipse.jetty.server.Request;
import org.eclipse.jetty.server.handler.AbstractHandler;

public class FlipFlopHandler extends AbstractHandler {

  private final AtomicInteger requestCount = new AtomicInteger(0);

  @Override
  public void handle(String target, Request baseRequest,
      HttpServletRequest request, HttpServletResponse response)
      throws IOException, ServletException {
    int count = requestCount.incrementAndGet();
    response.setStatus(HttpServletResponse.SC_OK);
    if (count % 2 == 1) {
      // 1st, 3rd, ... fetch: valid HTML
      response.setContentType("text/html");
      response.getWriter().write("<html><head><title>Test</title></head>"
          + "<body>Hello World</body></html>");
    } else {
      // 2nd, 4th, ... fetch: binary garbage served with a text/html MIME
      // type, which will cause the HTML parser to fail
      response.setContentType("text/html");
      byte[] garbage = new byte[1024];
      new Random(42).nextBytes(garbage); // fixed seed for reproducibility
      response.getOutputStream().write(garbage);
    }
    baseRequest.setHandled(true);
  }
}
```
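To wire it in, a minimal sketch that starts the embedded server with this
handler directly (the port is arbitrary, and `CrawlDBTestUtil.getServer()`
would need a variant that accepts a custom handler instead of the static
`ResourceHandler`):
```
// Sketch only: requires org.eclipse.jetty.server.Server.
Server server = new Server(8181);
server.setHandler(new FlipFlopHandler());
server.start();
// run the fetch/parse/updatedb cycle twice against http://127.0.0.1:8181/
// the first round parses fine, the second round fails parsing
server.stop();
```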
It would be good to implement some kind of test though. I agree.
> IndexerMapReduce to delete explicitly not indexable documents
> -------------------------------------------------------------
>
> Key: NUTCH-1732
> URL: https://issues.apache.org/jira/browse/NUTCH-1732
> Project: Nutch
> Issue Type: Bug
> Components: indexer
> Affects Versions: 1.8
> Reporter: Sebastian Nagel
> Priority: Critical
> Fix For: 1.23
>
>
> In a continuous crawl a previously successfully indexed document (identified
> by a URL) can become "not indexable" for a couple of reasons and must then be
> explicitly deleted from the index. Some cases are handled in IndexerMapReduce
> (duplicates, gone documents or redirects, cf. NUTCH-1139) but others are not:
> * failed to parse (but previously successfully parsed): e.g., the document
> became larger and is now truncated
> * rejected by indexing filter (but previously accepted)
> In both cases (maybe there are more) the document should be explicitly
> deleted (if {{-deleteGone}} is set). Note that this cannot be done in
> CleaningJob because data from segments is required.
> We should also update/add a description for {{-deleteGone}}: it triggers
> deletion not only of gone documents but also of redirects and duplicates (and
> unparseable and skipped docs).