Willem Bogaerts wrote:


In this case, it will go wrong in two places: UTF-8 will be imported as
latin-1, and on export the opposite will occur. For example, a euro
sign (0xE282AC) will be interpreted as "â‚¬", which might look like
garbage, but is perfectly valid latin-1. If your table fields are
defined as UTF-8 encoded, these three characters are then RE-encoded to
the byte sequence 0xC3A2E2809AC2AC. On export, the exact opposite will
occur, so you will end up with the correct original data in the exported
file.
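
To make the round trip concrete, here is a minimal sketch in Python.
One hedge: the quoted byte sequence 0xC3A2E2809AC2AC actually comes
from Windows-1252 (cp1252), the superset of latin-1 that many tools
mean by "latin-1"; strict latin-1 maps the byte 0x82 to a control
character rather than to "‚".

    # Minimal sketch of the double-encoding round trip described above.
    euro = "\u20ac"                    # the euro sign
    utf8 = euro.encode("utf-8")        # b'\xe2\x82\xac'
    mojibake = utf8.decode("cp1252")   # 'â‚¬' -- three ordinary characters
    stored = mojibake.encode("utf-8")  # b'\xc3\xa2\xe2\x80\x9a\xc2\xac'

    # Export reverses each step, so the original bytes come back:
    exported = stored.decode("utf-8").encode("cp1252")
    assert exported == utf8
    print(mojibake, stored.hex())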

As a test, import some csv data with special characters (accented
characters, euro sign, etc.) and just view the table contents in Knoda.
If your test data contained "â‚¬", it would be rendered as a euro sign.

Example definition for a test table:
CREATE TABLE character_test (contents VARCHAR(255)) CHARACTER SET utf8;

It will create a table called "character_test" with just one field,
called "contents".

Thanks for your answer,
Indeed, looking at one of my tables with Mysql-browser, it's obvious that I am seeing UTF-8 encoded data displayed as ISO-something. But looking at the same data through Knoda, I see it interpreted according to the UTF-8 encoding... Anyhow, on the hard disk there is just a succession of 0s and 1s, and the interpretation is done in the software.

So, if I understand correctly, the problem you are talking about arises because you don't always use Knoda to read the data.

(In the meantime I "read Google" and found some French explanations... thanks for the light you cast on this topic.)