[
https://issues.apache.org/jira/browse/LUCENE-6254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14325732#comment-14325732
]
Dawid Weiss commented on LUCENE-6254:
-------------------------------------
> Ambiguous forms can either be indexed or reduced to one lemma.
Sure, there's some sort of workaround for everything :) I'm not saying your
contribution is bad or anything, I just said that, in general, it's a tricky
problem. The Polish dictionary in morfologik-stemming has 4,800,433 entries.
That's 300 MB of raw UTF-8, and the PoS tags are highly ambiguous; most of it
looks like this:
{code}
wracałyby wracać
verb:pot:pl:m2.m3.f.n1.n2.p2.p3:ter:imperf:nonrefl+verb:pot:pl:m2.m3.f.n1.n2.p2.p3:ter:imperf:refl.nonrefl
{code}
The PoS tag is a Cartesian product of all the alternatives separated by dots...
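To make the Cartesian-product point concrete, here is a small illustrative expansion of such a tag in plain Java (not from the patch or from morfologik-stemming; the ':' position separator, '.' alternative separator and '+' reading separator are assumed from the example above):
{code}
import java.util.ArrayList;
import java.util.List;

/** Sketch: expand a dot-separated Cartesian-product tag into concrete tags. */
public class TagExpander {
  /** Expands e.g. "verb:pl:m2.m3:ter" into ["verb:pl:m2:ter", "verb:pl:m3:ter"]. */
  public static List<String> expand(String tag) {
    List<String> result = new ArrayList<>();
    result.add("");
    for (String position : tag.split(":")) {
      List<String> next = new ArrayList<>();
      for (String prefix : result) {
        for (String alternative : position.split("\\.")) {
          next.add(prefix.isEmpty() ? alternative : prefix + ":" + alternative);
        }
      }
      result = next;
    }
    return result;
  }

  public static void main(String[] args) {
    // One of the two '+'-joined readings of "wracałyby"; each reading is expanded separately.
    String reading = "verb:pot:pl:m2.m3.f.n1.n2.p2.p3:ter:imperf:nonrefl";
    for (String t : expand(reading)) {
      System.out.println(t);
    }
  }
}
{code}
The first reading above already expands to 7 concrete tags (and the second, with refl.nonrefl, to 14), which is why collapsing an ambiguous form to a single lemma or tag is not a trivial decision.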
> Dictionary-based lemmatizer
> ---------------------------
>
> Key: LUCENE-6254
> URL: https://issues.apache.org/jira/browse/LUCENE-6254
> Project: Lucene - Core
> Issue Type: New Feature
> Components: modules/analysis
> Reporter: Erlend Garåsen
> Labels: patch
> Fix For: 5.0
>
> Attachments: LUCENE-6254.patch
>
>
> The only way to achieve lemmatization today is to use the
> SynonymFilterFactory. The available stemmers are also inaccurate, since they
> follow only simplistic rules.
> A dictionary-based lemmatizer will be more accurate because it can take the
> part of speech into account. It therefore offers a more precise way to
> reduce words to their base forms than other dictionary-based stemmers such
> as Hunspell.
> This is my effort to develop such a lemmatizer for Apache Lucene. The
> documentation is temporarily placed here:
> http://folk.uio.no/erlendfg/solr/lemmatizer.html
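For context, a minimal, hypothetical sketch of the general idea behind a dictionary-based lemma lookup as a Lucene TokenFilter. The class name and the map-based dictionary are illustrative assumptions, not the API in the attached patch, and part-of-speech disambiguation is ignored:
{code}
import java.io.IOException;
import java.util.Map;

import org.apache.lucene.analysis.TokenFilter;
import org.apache.lucene.analysis.TokenStream;
import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;

/** Hypothetical sketch: replace each token with its lemma from an in-memory map. */
public final class SimpleLemmaFilter extends TokenFilter {
  private final CharTermAttribute termAtt = addAttribute(CharTermAttribute.class);
  private final Map<String, String> lemmas; // surface form -> lemma

  public SimpleLemmaFilter(TokenStream input, Map<String, String> lemmas) {
    super(input);
    this.lemmas = lemmas;
  }

  @Override
  public boolean incrementToken() throws IOException {
    if (!input.incrementToken()) {
      return false;
    }
    // Look up the surface form; leave the token unchanged if no lemma is known.
    String lemma = lemmas.get(termAtt.toString());
    if (lemma != null) {
      termAtt.setEmpty().append(lemma);
    }
    return true;
  }
}
{code}
In practice the dictionary would likely be a compact automaton rather than a HashMap, and ambiguous forms would need the kind of handling discussed in the comment above.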