[ 
https://issues.apache.org/jira/browse/NUTCH-894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sertan Alkan updated NUTCH-894:
-------------------------------

    Attachment: NUTCH-894.patch

I agree with merging language extraction into one plugin and delegating this 
work to Tika where possible; I am putting together a patch to do just that. 
This is mainly a housekeeping patch: it merges the two models in the parsing 
step and modifies the unit tests. Since we now rely on Tika for language 
identification, the patch removes the identification code and its test cases 
along with the resources, so be aware that it looks like a rather big diff.

The patch also introduces a new configuration option, lang.extraction.policy, 
which gives users control over language extraction. The default behaviour, 
configured in nutch-default.xml, stays the same: the plugin will try to detect 
the language from the headers and metadata and, if that fails, move on to 
statistical identification. This way, users can prefer one method over the 
other (identification only, for instance).
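For illustration, the option could be overridden in nutch-site.xml using the usual Nutch property format. The property name comes from the patch; the value shown here is an assumption about how the policy might be expressed, not something the patch confirms:

```xml
<!-- Hypothetical sketch: the property name lang.extraction.policy is
     from the patch; the value "detect,identify" (try header/metadata
     detection first, then statistical identification) is an assumed
     encoding of the default policy, not confirmed by the patch. -->
<property>
  <name>lang.extraction.policy</name>
  <value>detect,identify</value>
</property>
```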

Any thoughts on the approach?

> Move statistical language identification from indexing to parsing step
> ----------------------------------------------------------------------
>
>                 Key: NUTCH-894
>                 URL: https://issues.apache.org/jira/browse/NUTCH-894
>             Project: Nutch
>          Issue Type: Improvement
>          Components: parser
>    Affects Versions: 2.0
>            Reporter: Julien Nioche
>            Assignee: Julien Nioche
>             Fix For: 2.0
>
>         Attachments: NUTCH-894.patch
>
>
> The statistical identification of language is currently done partly in the 
> indexing step, whereas the detection based on the HTTP header and HTML code 
> is done during the parsing step.
> We could keep the same logic, i.e. do the statistical detection only if 
> nothing has been found with the previous methods, but as part of the parsing 
> step. This would be useful for ParseFilters which need the language 
> information, or with ScoringFilters, e.g. to focus the crawl on a set of 
> languages. Since the statistical models have been ported to Tika, we should 
> probably rely on them instead of maintaining our own.
> Any thoughts on this?
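The fallback logic described in the issue (header, then document metadata, then statistical identification only as a last resort) can be sketched roughly as follows. The class and method names are hypothetical, and Tika's actual statistical identifier is stubbed out here for the sake of a self-contained example:

```java
import java.util.Optional;

// Hypothetical sketch of the detection fallback chain: HTTP header,
// then HTML metadata, then statistical identification. Not the actual
// patch code; names are invented for illustration.
public class LangPolicySketch {

    // e.g. Content-Language header value "en" -> "en"
    static Optional<String> fromHeader(String header) {
        if (header == null || header.trim().isEmpty()) return Optional.empty();
        return Optional.of(header.trim().toLowerCase());
    }

    // e.g. the lang attribute on the <html> element
    static Optional<String> fromMetadata(String htmlLangAttr) {
        if (htmlLangAttr == null || htmlLangAttr.trim().isEmpty()) return Optional.empty();
        return Optional.of(htmlLangAttr.trim().toLowerCase());
    }

    // Stand-in for Tika's statistical identifier.
    static String statisticalIdentify(String text) {
        return "unknown";
    }

    // Statistical detection runs only if the cheaper methods found nothing.
    static String detect(String header, String meta, String text) {
        return fromHeader(header)
            .or(() -> fromMetadata(meta))        // Optional.or needs Java 9+
            .orElseGet(() -> statisticalIdentify(text));
    }
}
```

A policy option like the one the patch proposes would then simply decide which links of this chain are consulted.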

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
