Hi there,

Well, this approach has been investigated by some people already. Another
approach that is easier to implement is to use JavaScript to translate the
page on the browser side. For people using PHP, it's a couple of lines to
open an output buffer that does the translation, and I'm sure we've seen
that before on this list.
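(For readers who haven't seen it: the translate-on-output idea amounts to a character-map pass over the page body. A minimal sketch in Python rather than PHP; the mapping covers only Yeh and Keheh, and a real deployment would need the full table:)

```python
# Sketch of "translate on output": map the standard Persian codepoints
# to the legacy Arabic ones that Win9x-era software expects.
# Only two letters are mapped here, as an illustration.
LEGACY_MAP = {
    "\u06CC": "\u064A",  # ARABIC LETTER FARSI YEH -> ARABIC LETTER YEH
    "\u06A9": "\u0643",  # ARABIC LETTER KEHEH     -> ARABIC LETTER KAF
}

def to_legacy(text: str) -> str:
    """Return the page text with standard codepoints replaced by legacy ones."""
    return text.translate({ord(k): v for k, v in LEGACY_MAP.items()})
```

In PHP the same map would sit inside an `ob_start()` callback; the mechanism is identical.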
But to the main question: unfortunately no, Unicode does not define any kind
of loose searching. There is some loose-equivalence data in the Unicode
database, but it apparently does not cover cases like Arabic vs. Persian Yeh.
We at FarsiWeb are developing a standard for loose searching in Persian, but
you know that's nothing to be implemented by Google. It's generally a tough
problem. You can do much better in a language-specific area, but a global
loose-searching scheme, I guess, typically gives worse precision/recall, so
it will be avoided by search engines.

behdad

On Fri, 4 Jun 2004, Ordak D. Coward wrote:

> Here is a solution (in fact a hack) that, if implemented correctly, can
> resolve some of the issues till people and Google start using correct
> software:
>
> With a little tweaking, the web servers can translate the correct
> Unicode to the incorrect Unicode desired so much by the Win9x users.
> That is, the web server looks at the browser request, and if it can
> detect Win9x, translates all U+06CC's in the document to U+064A (and
> performs all other required translations). The same technique could be
> used to fool Google into generating correct search results: that is,
> the web server generates a Win9x-friendly version of the document and
> appends it to the original document. You could also provide tags with
> which the user of the web server can enable or disable some of these
> features. This may even let one gain some advantage over other web
> hosting companies.
>
> Of course, the solution above is only a transient one, and it is up to
> people to upgrade their Win9x machines to something that is
> Unicode-compliant; it is also up to Google to program their systems to
> understand that both U+06CC and U+064A are the same shape and hence
> should be regarded as the same for searching, unless the user requests
> otherwise.
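(ODC's closing point -- that U+06CC and U+064A should be treated as the same for searching -- is essentially the same trick as case-insensitive search: run documents and queries through one folding function. A minimal Python sketch; the Persian fold table here is my own illustration, not part of any Unicode data:)

```python
# Loose search via folding: normalize both the document and the query
# with the same function, then do an ordinary substring match.
PERSIAN_FOLD = {
    "\u064A": "\u06CC",  # ARABIC LETTER YEH -> FARSI YEH
    "\u0643": "\u06A9",  # ARABIC LETTER KAF -> KEHEH
}

def fold(text: str) -> str:
    text = text.casefold()  # standard Unicode case folding for Latin etc.
    return text.translate({ord(k): v for k, v in PERSIAN_FOLD.items()})

def loose_contains(document: str, query: str) -> bool:
    """True if the query occurs in the document under loose matching."""
    return fold(query) in fold(document)
```

A real loose-search standard would of course need a much larger table (Kaf/Keheh, Harakat stripping, digits, and so on); this only shows the shape of the mechanism.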
> This is the same as the case-insensitive search that is usually
> implemented by mapping all upper- and lower-case characters -- in
> documents and queries alike -- to uppercase.
>
> Behdad, does the Unicode consortium provide a search collation table in
> addition to the collation table used for sorting? Or can the same table
> be used for search purposes as well?
>
> On Fri, 4 Jun 2004 08:50:41 -0400, Behdad Esfahbod
> <[EMAIL PROTECTED]> wrote:
> >
> > Thanks for your note.
> >
> > There's a difference between the case of the C++ standard and web
> > standards: writing non-standard C++ code only produces compile-time
> > problems, but if you do manage to compile the code, it works
> > correctly (or is supposed to). It's quite a different case on the
> > web. 30-40 percent is low enough to get ignored, considering that
> > the other way you are sacrificing the other 60-70%, who will not be
> > able to find the document by searching in Google. And note that even
> > with Win9x and a recent IE and updated fonts, there's no problem.
> >
> > About using HTML entities: no matter what the encoding of the page
> > is, HTML entities generate Unicode characters. It's quite common to
> > see people export Persian documents from MS Word and get an HTML
> > page encoded in the MS Arabic encoding, with Persian Yeh and Keh
> > encoded as HTML entities.
> >
> > behdad
> >
> > PS. BTW, I just found that using Harakat (kasre, fathe, ...) also
> > prevents a hit in Google search :(. That's quite expected, but
> > perhaps I should reconsider my habit of putting those tiny marks
> > everywhere.
> >
> > On Fri, 4 Jun 2004, Ehsan Akhgari wrote:
> >
> > > > Unfortunately this kind of misinformation is quite popular in
> > > > weblogs, where people only care about being visible to more
> > > > people.
> > >
> > > I confess that I'm one of those who use this technique on their
> > > web sites. I don't believe it's correct, and I don't think of it
> > > even as a semi-elegant solution.
> > > It's a solution which just works on the largest number of
> > > platforms. By inspecting the web server logs, I notice that an
> > > average of 30-40 percent of the visitors are still using Win9x.
> > > Hopefully one can start dropping support for Win9x users as their
> > > number is constantly decreasing, but right now if I choose the
> > > standards-compliant route of using FARSI YEH everywhere, those
> > > Win9x-ers will not be able to browse my sites.
> > >
> > > I have high respect for the standards. I'm mostly a C++
> > > programmer, and I'm one of those "preachers" of the C++ Standard.
> > > However, today's C++ compilers are still not fully compliant with
> > > the C++ Standard, so whenever someone asks me for advice on how to
> > > accomplish a certain task on a non-conformant compiler, I show
> > > them the non-standard way, and also mention the standard way, so
> > > that they know what the *right* way is, and also what the way to
> > > do their job right now is. I see little difference in the web
> > > standards land.
> > >
> > > Of course this 'solution' (if it can be called so) poses other
> > > problems, such as the inability to correctly index words that
> > > appear with both forms of YEH by search engine spiders such as
> > > Google's, which must be addressed separately. Also, if you choose
> > > to use the FARSI YEH form everywhere, then again such problems
> > > will occur (e.g., a Win9x-er can neither correctly see your pages
> > > nor find them in Google, if they query for a word containing YEH).
> > >
> > > > They even go on and use HTML entities (like ٚ) instead of
> > > > UTF-8, just because if the user's browser is set to something
> > > > other than auto and UTF-8, the page is still rendered
> > > > correctly...
> > >
> > > This one is silly, and I don't see how this can solve any problem.
> > > The browsers are required to be able to correctly resolve such
> > > numerical entities only if the page's encoding is already UTF-8,
> > > and if that is so, why not use UTF-8-encoded characters in the
> > > first place? Also, some agents have difficulties interpreting such
> > > numerical forms. Furthermore, maintaining them is impossible (not
> > > just hard), and they can't even be treated as text by most
> > > software packages (for example, they can't be searched for by many
> > > programs). And last, but not least, for a regular Persian
> > > document, they're likely to increase the document size by more
> > > than two times.
> > >
> > > They have their own uses, of course, but I don't see any sense in
> > > using them instead of UTF-8 characters for regular web pages.
> > >
> > > -------------
> > > Ehsan Akhgari
> > >
> > > Farda Technology (http://www.farda-tech.com/)
> > >
> > > List Owner: [EMAIL PROTECTED]
> > >
> > > [ Email: [EMAIL PROTECTED] ]
> > > [ WWW: http://www.beginthread.com/Ehsan ]
> >
> > --behdad
> > behdad.org

--behdad
behdad.org

_______________________________________________
PersianComputing mailing list
[EMAIL PROTECTED]
http://lists.sharif.edu/mailman/listinfo/persiancomputing
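(PS for the archives: Ehsan's size estimate near the end of his message is easy to check. A quick Python sketch; the sample word is my own choice:)

```python
# Compare the byte size of a Persian word stored as UTF-8 versus
# as decimal numeric character references (&#NNNN;).
word = "\u0633\u0644\u0627\u0645"  # a four-letter Persian word

utf8_size = len(word.encode("utf-8"))                       # 2 bytes per letter
entity_size = len("".join(f"&#{ord(ch)};" for ch in word))  # 7 ASCII bytes per letter

# entity_size / utf8_size comes to 3.5 here, i.e. well over
# the "more than two times" increase mentioned in the thread.
```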