https://bugzilla.wikimedia.org/show_bug.cgi?id=8445


Brion Vibber <[email protected]> changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
             Status|NEW                         |RESOLVED
         Resolution|                            |FIXED




--- Comment #9 from Brion Vibber <[email protected]>  2009-06-24 02:28:23 UTC ---
Implementation committed in r52338:

Big fixup for Chinese word breaks and variant conversions in the MySQL search
backend...
- removed redundant variant terms for Chinese, which forces all search indexing
to canonical zh-hans
- added parens to properly group variants for languages such as Serbian, which
do need them at search time
- added quotes to properly group multi-word terms coming out of stripForSearch,
as for Chinese, where we segment the characters. This is based on the
Language::hasWordBreaks() check.
- also cleaned up LanguageZh_hans::stripForSearch() to just do segmentation and
pass on the Unicode stripping to the base Language implementation, avoiding
scary code duplication. Segmentation was already pulled up to LanguageZh, but
was being run again at the second level. :P
- made a fix to Chinese word segmentation to handle the case where a Han
character is followed by a Latin char or numeral; a space is now added after it
as well. Spaces are then normalized for prettiness.
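The segmentation fix in the last bullet can be sketched roughly as follows. The
real code is PHP in the LanguageZh classes; this Python version, with a
hypothetical segment() helper and a simplified Han character range, is only an
illustration of the idea (space each Han character on both sides, so a Han char
followed by a Latin letter or numeral is also separated, then normalize the
whitespace):

```python
import re

# Simplified: CJK Unified Ideographs in the BMP only, for illustration.
# The actual MediaWiki code covers more ranges and is written in PHP.
HAN = r'[\u4e00-\u9fff]'

def segment(text: str) -> str:
    """Insert spaces around each Han character so each becomes a search term."""
    # A space before AND after every Han char handles the Han-followed-by-
    # Latin/numeral case from the commit message.
    spaced = re.sub(f'({HAN})', r' \1 ', text)
    # Normalize runs of whitespace "for prettiness", as the commit puts it.
    return re.sub(r'\s+', ' ', spaced).strip()
```

With the before-and-after spacing, a mixed string like "abc中def" segments
cleanly instead of leaving the Han character glued to the following Latin text.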


-- 
Configure bugmail: https://bugzilla.wikimedia.org/userprefs.cgi?tab=email
------- You are receiving this mail because: -------
You are on the CC list for the bug.

_______________________________________________
Wikibugs-l mailing list
[email protected]
https://lists.wikimedia.org/mailman/listinfo/wikibugs-l
