On 08/03/15 13:53, Grégory Planchat wrote:
>> On 08/03/15 10:03, Grégory Planchat wrote:
>>> Then using multiple encodings in the same script, or the same script
>>> with multiple encodings, becomes straightforward and standard. Most
>>> PHP developers don't even know what Unicode or a character encoding
>>> is; they just see "odd characters that are removed with a header()
>>> call or utf8_decode()". No teasing intended, they just don't want to
>>> have to handle this. PHP should not leave this sort of consideration
>>> to the sole awareness of user-space developers.
>>
>> Not part of THIS discussion exactly, but I have to take that in
>> isolation. 'Most PHP developers' need to be very aware of Unicode these
>> days. Simply pretending it does not exist is a dangerous exercise, and
>> my own code base has been UTF-8 for several years now. Even though I
>> don't speak anything but English, a large section of the material one
>> has to handle contains characters which get lost if one does not
>> maintain UTF-8 throughout the process. People are going on about 'data
>> loss' when converting, and that applies as much to strings as to
>> numbers.
>>
>> The default encoding these days is UTF-8 ...
> 
> This is not exactly what I meant, and your point is the way things
> should be, of course.
> 
> What I meant is that a text search or fetching the size of a string
> *MUST* behave the same way whichever encoding you use, without having
> to know what the actual encoding of the string is at any time.
> 
> Currently, strlen() on a UTF-8 string behaves more like a C
> "sizeof(str) - 1" as soon as you use characters outside the ASCII
> range.
> 
> The idea is really to make these statements work, whatever encoding
> you are using:
> 
> "Lorem ipsum dolor sit amet"->length();
> "Lorem ipsum dolor sit amet"->search('lorem');
> "Lorem ipsum dolor sit amet"->replace('lorem', 'Lorem'); 

This is actually the problem: trying to ignore Unicode creates a black
hole. The amount of space needed to store a string becomes variable once
one moves outside the single-byte encodings, and where legacy systems
only allow buffering for the single-byte version, one gets a number of
problems as soon as the data returned contains multi-byte characters.
The first example has several answers depending on what one is doing
with the return value: the size of buffer needed ('sizeof' in my crib
sheet), or one of the methods of counting the number of symbols used
('count', but with an agreed decoding). The other two actually work with
multi-byte strings until one adds 'adornments' such as combining accents
to the characters, at which point a search may need to look for a set of
similar words, all with the same meaning, just encoded differently.
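
To make that concrete (a quick sketch, assuming the mbstring and intl
extensions are loaded), here are the different answers "length" can give
for one and the same UTF-8 string:

<?php
// "noël" with the ë stored as 'e' followed by a combining diaeresis
// (U+0308, which is the two bytes 0xCC 0x88 in UTF-8)
$str = "noe\xCC\x88l";

var_dump(strlen($str));             // int(6) - raw bytes, the C-style answer
var_dump(mb_strlen($str, 'UTF-8')); // int(5) - Unicode code points
var_dump(grapheme_strlen($str));    // int(4) - user-perceived characters

All three are "correct", which is exactly why a single length() would
need an agreed definition before it could behave the same way for every
encoding.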
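
The 'adornments' search problem is just as concrete: a precomposed 'é'
(U+00E9) and 'e' plus a combining acute (U+0301) are different byte
sequences, so a naive strpos() misses one of them. A sketch of the
user-space fix, assuming the intl extension's Normalizer class:

<?php
$precomposed = "caf\xC3\xA9";   // 'é' as the single code point U+00E9
$decomposed  = "cafe\xCC\x81";  // 'e' plus combining acute U+0301

var_dump($precomposed === $decomposed);      // bool(false) - bytes differ
var_dump(strpos($decomposed, $precomposed)); // bool(false) - naive search fails

// Fold both to the same normalization form (NFC) before comparing
$a = Normalizer::normalize($precomposed, Normalizer::FORM_C);
$b = Normalizer::normalize($decomposed, Normalizer::FORM_C);
var_dump($a === $b);                         // bool(true)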

My point is perhaps that it is all to easy nowadays or post/get data to
have multi-byte strings from different languages which trying to map to
a single byte solution is no longer appropriate. I've just been
downloading a set of documents which are essentially all English, but
the file names includes words from a number of other languages resulting
in UTF8 being the only way to store them, and ideally the search engine
should be able to find them again in the future.
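
For that kind of filename search, the user-space answer today is the
mbstring API; a sketch with a made-up name, just for illustration:

<?php
mb_internal_encoding('UTF-8');

$filename = 'Über_die_Hypothesen.pdf';   // hypothetical mixed-language name

// Bytewise stripos() only case-folds ASCII, so 'Ü' never matches 'ü';
// mb_stripos() case-folds the multi-byte characters as well
var_dump(stripos($filename, 'über'));    // bool(false)
var_dump(mb_stripos($filename, 'über')); // int(0)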

-- 
Lester Caine - G8HFL
-----------------------------
Contact - http://lsces.co.uk/wiki/?page=contact
L.S.Caine Electronic Services - http://lsces.co.uk
EnquirySolve - http://enquirysolve.com/
Model Engineers Digital Workshop - http://medw.co.uk
Rainbow Digital Media - http://rainbowdigitalmedia.co.uk
