[ https://issues.apache.org/jira/browse/LUCENE-8278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16489997#comment-16489997 ]

Steve Rowe commented on LUCENE-8278:
------------------------------------

I ran a test appending each TLD to "example.", and 169 of the 1543 possible 
TLDs have this problem:

{code:java}
"accountants", "ads", "aeg", "afl", "aig", "aol", "art", "audio", "autos", 
"aws", "axa", "bar", "bbc", "bet",
"bid", "bingo", "bms", "bnl", "bom", "boo", "bot", "box", "bzh", "cab", "cal", 
"cam", "camp", "car", "care", 
"careers", "cat", "cfa", "citic", "com", "coupons", "crs", "cruises", "deals", 
"dev", "dog", "dot", "eco", 
"esq", "eus", "fans", "fit", "foo", "fox", "frl", "fund", "gal", "games", 
"gdn", "gea", "gifts", "gle", 
"gmo", "goog", "hkt", "htc", "ing", "int", "ist", "itv", "jmp", "jot", "kia", 
"kpn", "krd", "lat", "law", 
"loans", "ltd", "man", "map", "markets", "med", "men", "mlb", "mma", "moe", 
"mov", "msd", "mtn", "nab", 
"nec", "new", "news", "nfl", "ngo", "now", "nra", "pay", "pet", "phd", 
"photos", "ping", "pnc", "pro", 
"prof", "pru", "pwc", "red", "reisen", "ren", "reviews", "run", "rwe", "sap", 
"sas", "sbi", "sca", "ses", 
"sew", "ski", "soy", "srl", "stc", "taxi", "tci", "tdk", "thd", "tjx", "top", 
"trv", "tvs", "vet", "vig", 
"vin", "wine", "works", "aco", "aigo", "arte", "bbt", "bio", "biz", "bmw", 
"book", "call", "cars", "cfd", 
"food", "gap", "gmx", "ink", "joy", "kim", "ltda", "menu", "meo", "mls", "moi", 
"mom", "mtr", "net", "nrw", 
"pink", "prod", "rent", "sapo", "sbs", "scb", "sex", "sexy", "skin", "sky", 
"srt", "vip"
{code}

In each of the above cases I've looked at, there is another TLD that is a 
prefix one letter shorter (see the [branch_7x TLD 
regex|https://git1-us-west.apache.org/repos/asf?p=lucene-solr.git;a=blob;f=lucene/analysis/common/src/java/org/apache/lucene/analysis/standard/ASCIITLD.jflex-macro;hb=refs/heads/branch_7x]).
  I'm not sure whether all such TLDs have this problem; I'll look.
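For illustration, the prefix pattern described above can be checked mechanically: scan the TLD set for entries whose one-letter-shorter prefix is itself a TLD. This is a self-contained sketch over a small hypothetical sample list (the class name and sample entries are mine), not the full ~1543-entry list used in the test above:

{code:java}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class TldPrefixCheck {
    public static void main(String[] args) {
        // Hypothetical sample; the real check would use the full TLD list
        // from the ASCIITLD.jflex-macro regex.
        Set<String> tlds = new HashSet<>(Arrays.asList(
                "com", "co", "net", "ne", "new", "news", "ch", "uk", "nl"));
        for (String tld : tlds) {
            // Drop the last letter and see if the prefix is also a TLD.
            String prefix = tld.substring(0, tld.length() - 1);
            if (tlds.contains(prefix)) {
                System.out.println(tld + " has one-letter-shorter TLD prefix: " + prefix);
            }
        }
    }
}
{code}

For instance, "com" has the one-letter-shorter TLD "co" as a prefix, matching the failing cases listed above.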

Also, on branch_7x at least (from which 7.3.0 was cut a few months ago), 
appending a space to the input still works around the problem for me, so I 
can't reproduce the non-working workaround you report with 7.3.0, [~drjz].

> UAX29URLEmailTokenizer is not detecting some tokens as URL type
> ---------------------------------------------------------------
>
>                 Key: LUCENE-8278
>                 URL: https://issues.apache.org/jira/browse/LUCENE-8278
>             Project: Lucene - Core
>          Issue Type: Bug
>            Reporter: Junte Zhang
>            Priority: Minor
>
> We are using the UAX29URLEmailTokenizer so we can use the token types in our 
> plugins.
> However, I noticed that the tokenizer does not detect certain URLs as <URL> 
> but as <ALPHANUM> instead.
> Examples that are not working:
>  * example.com is <ALPHANUM>
>  * example.net is <ALPHANUM>
> But:
>  * https://example.com is <URL>
>  * as is https://example.net
> Examples that work:
>  * example.ch is <URL>
>  * example.co.uk is <URL>
>  * example.nl is <URL>
> I have searched this JIRA and could not find an existing issue. I have 
> tested this on Lucene (Solr) 6.4.1 and 7.3.
> Could someone confirm my findings and advise what I could do to (help) 
> resolve this issue?



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
