Hi Tobias,

Sorry for the delay. There are a number of reasons a document can be rejected for indexing. They are:
(1) URL criteria, as specified in the Web job's specification information
(2) Maximum document length, as controlled by the output connection (you never told us what that was)
(3) MIME type criteria, as controlled by the output connection

So I bet this is a MIME type issue. What content type does the page have? What output connector are you using?

Karl

On Thu, Oct 6, 2011 at 7:18 AM, Wunderlich, Tobias <[email protected]> wrote:
> Hey guys,
>
> I'm trying to crawl a website generated with a MediaWiki extension and always get
> the message:
>
> "[WebcrawlerConnector.java:1312] - WEB: Decided not to ingest
> 'http://wiki.<host>/index.php?title=Spezial%3AAlle+Seiten&from=p&to=s&namespace=0'
> because it did not match ingestability criteria"
>
> Seed URL:
> 'http://wiki.<host>/index.php?title=Spezial%3AAlle+Seiten&from=p&to=s&namespace=0'
>
> Inclusions (crawl and index): .*
> Exclusions: none
>
> Other sites are crawled without problems, so I'm wondering what those
> ingestability criteria exactly are.
>
> Best regards,
> Tobias
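For anyone following along: the MIME type check described above boils down to comparing the page's Content-Type header (with its parameters stripped) against the set of types the output connection accepts. This is only a rough sketch of that idea in Python, not ManifoldCF's actual code; the function name and the allowed-type set are made up for illustration:

```python
def is_ingestable(content_type, allowed_mime_types):
    """Illustrative check: does a raw Content-Type header value match
    one of the allowed MIME types?

    The header may carry parameters (e.g. '; charset=UTF-8'), which are
    ignored; the comparison is case-insensitive per the HTTP spec.
    """
    # Strip parameters: 'text/html; charset=UTF-8' -> 'text/html'
    mime = content_type.split(";", 1)[0].strip().lower()
    return mime in {m.lower() for m in allowed_mime_types}


# Hypothetical allowed set for an output connector that only takes HTML:
allowed = {"text/html", "application/xhtml+xml"}
print(is_ingestable("text/html; charset=UTF-8", allowed))  # True
print(is_ingestable("application/pdf", allowed))           # False
```

If the MediaWiki special page is served with an unexpected Content-Type, a check along these lines is where it would be filtered out; a quick `curl -I` against the URL will show the actual header.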
