Have you tried "in boolean mode"?
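For what it's worth, a sketch of that suggestion: in natural-language mode, MySQL's full-text search silently ignores any word that appears in 50% or more of the rows, which on a small table can discard every search term; IN BOOLEAN MODE skips that threshold. Table and column names below are taken from Tim's query, not verified against his schema:

```sql
-- Boolean mode bypasses the 50% row-frequency cutoff that
-- natural-language mode applies (a word present in >= 50% of
-- rows matches nothing in that mode).
SELECT *
FROM category_attributes
WHERE MATCH (value) AGAINST ('éOçø±Í¾ó ä' IN BOOLEAN MODE);
```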
Santino Cusimano
At 16:30 -0500 20-11-2008, Little, Timothy wrote:
We are using MySQL 5.0.22 on CentOS/Red Hat
Linux. The table and database character sets
are all utf8.
We have a database supporting numerous
languages. Of course, full-text works
beautifully with most of the languages.
But Chinese and Japanese are giving us problems,
and there is NO reason why it should be a
problem since we are taking measures to help the
database see word-breaks.
When we insert the Chinese and Japanese
passages, they have spaces (the normal ASCII
space, hex $20 / decimal 32) between each word (verified). So
basically if you have two words like
{APPLE}{DRUM} then we put {APPLE} then space
then {DRUM}. If you have UTF-8 then you can
look at this sample, éOçø±Í¾ó ä åíËâÀ
When we try to match either {APPLE} or {DRUM}
individually (or technically éOçø±Í¾ó ä or
åíËâÀ ) then MySQL fails to find a match
against anything. But clearly it should find
those.
MySQL is only finding matches for Japanese and
Chinese on exact full-string matches, which is
clearly less than ideal.
I have already changed the ft_min_word_len setting to 1, to no avail.
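One caveat worth checking here (an assumption about the setup, since the config steps aren't shown): ft_min_word_len is read only at server startup, and an existing FULLTEXT index keeps its old word list until it is rebuilt, so editing the variable alone has no visible effect. A minimal sequence, assuming a MyISAM table:

```sql
-- After setting in my.cnf:
--   [mysqld]
--   ft_min_word_len = 1
-- and restarting mysqld, rebuild the FULLTEXT index,
-- or the old tokenization stays in effect:
REPAIR TABLE category_attributes QUICK;

-- Confirm the value the running server actually uses:
SHOW VARIABLES LIKE 'ft_min_word_len';
```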
What is going wrong, and how do I fix this?
Here is my sample query (selecting for ONE word):
select *
from category_attributes
where match ( value ) against ( 'éOçø±Í¾ó ä' ) > 0
When I replace the word with åíËâÀ then it
still doesn't match anything. And there is a
row containing merely
éOçø±Í¾ó ä, then a space, then åíËâÀ
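Another thing that can produce exactly this symptom (a guess, since the connection settings aren't shown above): if the client connection character set differs from the utf8 column, the literal inside AGAINST() is transcoded before matching and no longer corresponds to the indexed bytes. A quick check:

```sql
-- The column is utf8; the session should be too, or the
-- search string is converted before it reaches the index:
SHOW VARIABLES LIKE 'character_set%';

-- Force the session to utf8 before querying:
SET NAMES utf8;
```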
Tim...
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]