[ https://issues.apache.org/jira/browse/SPARK-5918?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14329417#comment-14329417 ]

Michael Armbrust commented on SPARK-5918:
-----------------------------------------

This was a conscious design decision: the optimizations that fixed-size 
strings enable are not very relevant when you aren't managing your own 
memory. We want to be tolerant of schemas from other systems, but we don't 
optimize for this ourselves. We can revisit this if there are use cases that 
need VARCHAR.
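
As an illustration of the mapping being described, here is a minimal sketch 
assuming a 1.x HiveContext and a hypothetical table name: the VARCHAR(1000) 
declaration is accepted, but the column surfaces in the Spark SQL schema as 
plain StringType, so the declared maximum length is not carried along.

{code:scala}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.hive.HiveContext

// Illustration only: app name, master and table name are placeholders.
val sc = new SparkContext(
  new SparkConf().setAppName("varchar-schema-check").setMaster("local[*]"))
val hiveContext = new HiveContext(sc)

// The DDL with VARCHAR(1000) is accepted and handed to Hive as-is.
hiveContext.sql("CREATE TABLE varchar_test (name VARCHAR(1000))")

// The Spark SQL schema models the column as StringType; the declared
// maximum length of 1000 does not appear anywhere in the schema.
val schema = hiveContext.sql("SELECT name FROM varchar_test").schema
schema.fields.foreach(f => println(s"${f.name}: ${f.dataType}"))
// expected output: name: StringType
{code}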

> Spark Thrift server reports metadata for VARCHAR column as STRING in result 
> set schema
> --------------------------------------------------------------------------------------
>
>                 Key: SPARK-5918
>                 URL: https://issues.apache.org/jira/browse/SPARK-5918
>             Project: Spark
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 1.1.1, 1.2.0
>            Reporter: Holman Lan
>            Assignee: Cheng Lian
>
> This is reproducible with the open source JDBC driver by executing a query 
> that returns a VARCHAR column and then retrieving the result set metadata. 
> The type name returned by the JDBC driver is VARCHAR, which is expected, but 
> the column type is reported as string[12] and the precision/column length as 
> 2147483647 (which is what the JDBC driver would return for a STRING column), 
> even though we created the VARCHAR column with a maximum length of 1000.
> Further investigation indicates that the GetResultSetMetadata Thrift client 
> API call returns the incorrect metadata.
> We have confirmed this behaviour in versions 1.1.1 and 1.2.0. We have not 
> yet tested this against 1.2.1 but will do so and report our findings.
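
A minimal sketch of the reproduction described above, assuming the standalone 
Hive JDBC driver; the driver class, connection URL, credentials and table name 
are placeholders:

{code:scala}
import java.sql.DriverManager

// Placeholders: driver class, URL, credentials and table depend on the
// actual environment where the Spark Thrift server is running.
Class.forName("org.apache.hive.jdbc.HiveDriver")
val conn = DriverManager.getConnection(
  "jdbc:hive2://localhost:10000/default", "user", "")
try {
  val rs = conn.createStatement().executeQuery("SELECT name FROM varchar_test")
  val md = rs.getMetaData
  println(md.getColumnTypeName(1))   // "VARCHAR", as expected
  println(md.getPrecision(1))        // reported as 2147483647 instead of 1000
} finally {
  conn.close()
}
{code}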


