Hi, Reynold

Sorry for the slow response. Thanks for your suggestion. I'd like to document this in the API docs for SQL built-in functions.

BTW, this is a real case we met in production; the Turkish data comes from other systems through ETL. As you mentioned, we use UDFs to avoid issues. E.g. for the special Turkish character "İ" (U+0130), we first preprocess with regexp_replace(c, 'İ', 'I') before further processing.
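For illustration, a minimal sketch of that preprocessing step in Scala (the column name c and the sample data are hypothetical):

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.{lower, regexp_replace}

    val spark = SparkSession.builder().appName("turkish-i-workaround").getOrCreate()
    import spark.implicits._

    // Hypothetical sample data containing the dotted capital İ (U+0130).
    val df = Seq("İSTANBUL", "DİYARBAKIR").toDF("c")

    // Map İ to I first, so the locale-insensitive lower() cannot produce
    // "i with dot" (i followed by U+0307 COMBINING DOT ABOVE).
    df.select(lower(regexp_replace($"c", "İ", "I")).as("c_lower")).show()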

Thanks,
Chenxiao Mao (Sean)
On 09/19/2018 14:18, Reynold Xin <r...@databricks.com> wrote:
I'd just document it as a known limitation and move on for now, until there are enough end users that need this. Spark is also very powerful with UDFs and end users can easily work around this using UDFs.

--
excuse the brevity and lower case due to wrist injury


On Tue, Sep 18, 2018 at 11:14 PM seancxmao <seancx...@gmail.com> wrote:
Hi, all

We found that there are some differences in the case handling of special characters between Spark and other database systems. See the list below for an example (you may also check the attached pictures):

select upper("i"), lower("İ"), upper("ı"), lower("I");
------------------------------------------------------
Spark      I, i with dot, I, i
Hive       I, i with dot, I, i
Teradata   I, i,          I, i
Oracle     I, i,          I, i
SQLServer  I, i,          I, i
MySQL      I, i,          I, i

"İ" and "ı" are Turkish characters. If locale-sensitive case handling is used, the expected results of above upper/lower functions should be:
select upper("i"), lower("İ"), upper("ı"), lower("I");
------------------------------------------------------
İ, i, I, ı

But it seems that these systems all do locale-insensitive mapping. Presto explicitly describes this as a known issue in its docs (https://prestodb.io/docs/current/functions/string.html):
> The lower() and upper() functions do not perform locale-sensitive, context-sensitive, or one-to-many mappings required for some languages. Specifically, this will return incorrect results for Lithuanian, Turkish and Azeri.

Java based systems behave the same as one another, since they all depend on the same JDK String methods; Teradata/Oracle/SQLServer/MySQL also behave the same as one another. However, the two groups differ for lower("İ"): Java based systems (Spark/Hive) return "i with dot" (i followed by U+0307 COMBINING DOT ABOVE), while the other database systems (Teradata/Oracle/SQLServer/MySQL) return plain "i".
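To make the JDK behavior concrete, here is a small standalone Scala sketch (Locale.forLanguageTag and the Locale-taking overloads are standard JDK API; the contrast below assumes a non-Turkish default locale):

    import java.util.Locale

    object JdkCaseMappingDemo extends App {
      val tr = Locale.forLanguageTag("tr")

      // Locale-insensitive mapping, as Java based systems effectively do:
      println("i".toUpperCase(Locale.ROOT)) // I
      println("İ".toLowerCase(Locale.ROOT)) // i followed by U+0307 ("i with dot")

      // Locale-sensitive mapping under the Turkish locale:
      println("i".toUpperCase(tr))          // İ
      println("İ".toLowerCase(tr))          // i
      println("I".toLowerCase(tr))          // ı
      println("ı".toUpperCase(tr))          // I
    }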

My questions:
(1) Should Spark return "i" for lower("İ"), the same as the other database systems?
(2) Should Spark support locale-sensitive upper/lower functions? Because different rows of a table may need different locales, we cannot even set the locale at the table level. What we might do is provide upper(string, locale)/lower(string, locale), and let users decide what locale they want to use (see the sketch below).
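Until something like that exists as a built-in, here is a hedged sketch of the proposed shape as a UDF (the name lower_locale is hypothetical, not an existing Spark function):

    import java.util.Locale
    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("locale-aware-lower").getOrCreate()

    // A per-row locale tag (e.g. "tr", "en") selects the case mapping.
    spark.udf.register("lower_locale", (s: String, localeTag: String) =>
      if (s == null || localeTag == null) null
      else s.toLowerCase(Locale.forLanguageTag(localeTag))
    )

    spark.sql("SELECT lower_locale('İSTANBUL', 'tr')").show() // istanbul
    spark.sql("SELECT lower_locale('İSTANBUL', 'en')").show() // i̇stanbul ("i with dot")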

Your comments and advice are highly appreciated.

Many thanks!
Chenxiao Mao (Sean)
