Hi Sven,
sorry, but currently MaxDB only supports the codepage ISO-8859-1 for ASCII
data. This applies both to the character conversion from ASCII to Unicode
in the client interfaces and to the character manipulation functions in
the database kernel. If you want to use other code pages, you need to use
Unicode columns in the database kernel and Unicode host types in your
application. The conversion from ASCII data in your specific codepage to
Unicode then has to be done by your application.
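
For example, such a conversion could look like this (a minimal sketch
using only the plain JDK, not a MaxDB-specific API; the codepage
"windows-1252" and the table name are just placeholders for your setup):

import java.sql.Connection;
import java.sql.PreparedStatement;

public class EncodeExample {
    // Decode the raw bytes with the codepage your data actually uses,
    // then bind the resulting (Unicode) String to a UNICODE column;
    // the driver takes care of the String -> UCS-2 conversion.
    public static void insertName(Connection con, byte[] raw)
            throws Exception {
        String text = new String(raw, "windows-1252");
        PreparedStatement ps = con.prepareStatement(
            "INSERT INTO names (name) VALUES (?)");
        try {
            ps.setString(1, text);
            ps.executeUpdate();
        } finally {
            ps.close();
        }
    }
}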

Because Java's String class uses Unicode internally, the conversion to a
MaxDB Unicode column should work without problems. The connect attribute
"unicode" does not influence the conversion of input and output values.
It is used to signal the driver that the username, the password, or the
SQL statements contain Unicode data. The driver then sends the whole
communication packets encoded in UCS-2, not just the input/output data.
This increases the communication overhead and degrades performance
significantly, which is why "unicode=yes" is not the default. Besides,
connecting to an ASCII database with "unicode=yes" leads to the error
"-4025 Unknown or not loaded character set".
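
For completeness, connecting with the attribute set looks roughly like
this (a sketch; the host, database name, and demo credentials are
placeholders, while the driver class and URL scheme are the standard
MaxDB JDBC ones):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class ConnectExample {
    public static Connection open() throws Exception {
        Class.forName("com.sap.dbtech.jdbc.DriverSapDB");
        Properties props = new Properties();
        props.setProperty("user", "MONA");   // demo user - replace
        props.setProperty("password", "RED");
        props.setProperty("unicode", "yes"); // packets sent UCS-2 encoded
        // Only do this against a Unicode database; against an ASCII
        // database the connect fails with the -4025 error above.
        return DriverManager.getConnection(
            "jdbc:sapdb://localhost/MYDB", props);
    }
}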

Sorry for the inconvenience,
Marco

> -----Original Message-----
> From: news [mailto:[EMAIL PROTECTED] On Behalf Of Sven Köhler
> Sent: Sunday, 4 September 2005 18:50
> To: maxdb@lists.mysql.com
> Subject: Client-APIs, unicode and charsets
> 
> Hi,
> 
> I always wondered what the JDBC driver does if it isn't accessing a
> unicode database. What it does is quite inconvenient: AFAIK it
> assumes ISO-8859-1. IMHO, the charset used for binary data _should_
> be a parameter of the connection URL, and it should default to
> Java's default charset - well, or to ISO-8859-1 if you like that
> better.
> 
> On the other hand, there's another problem:
> What if I connect with "unicode=yes" to a non-unicode database? I
> guess the MaxDB kernel will convert the unicode strings back to byte
> strings - but which charset is used for that? I guess this question
> also applies to writing strings to the database.
> 
> IMHO the JDBC driver should default to "unicode=yes", but with an
> adjustable charset for all conversions from unicode strings to byte
> strings - even those conversions that take place in the MaxDB kernel.
> 
> Non-unicode databases are therefore currently unusable for JDBC
> clients if the applications that write into the database (non-JDBC
> clients) don't use ISO-8859-1. In most cases, the charset these
> applications use will depend on their current environment (i.e. the
> locale). The main point is: I guess that byte strings are copied
> into the database unchecked - I mean, you cannot assume that these
> strings are ISO-8859-1 or anything else. They are just byte strings.
> 
> On the other hand, unicode databases are currently unusable for
> clients like DBD::MaxDB, ODBC (using the byte-string API), ...
> 
> All that I've said also applies to ODBC (if the ODBC unicode API is
> used). The ODBC driver doesn't accept any charset parameter either.
> So any conversions that take place will again be based on some
> charset that's either forced by the current locale settings or
> hardcoded.
> 
> 
> Well, are you aware of all the problems?
> 
> When will that change?
> 
> 
> Thanks
>   Sven
> 
