[
https://issues.apache.org/jira/browse/DRILL-7308?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16872964#comment-16872964
]
Paul Rogers edited comment on DRILL-7308 at 6/26/19 6:11 AM:
-------------------------------------------------------------
[~cgivre], the problem here is that the code shown earlier is counting on a
Protobuf implementation detail that is not actually part of the Drill schema
specification (to the degree there is such a specification). For {{VARCHAR}}, a
precision of 0 means the user requested plain {{VARCHAR}}, while a precision of,
say, 10 means the user requested {{VARCHAR(10)}}. The scale is never valid for
{{VARCHAR}}; it is an artifact of the incorrect way the above code was written.
The Protobuf issue is that, unlike with a regular Java object, if we never
write to the precision field, the value is unset; if we write anything, even 0,
the value is set. We certainly don't want to litter our code with things like:
{code:java}
if (precision != 0) { schemaBuilder.setPrecision(precision); }
{code}
Instead, we should ask whether the precision is both set and non-zero.
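For illustration, a minimal sketch of that check (this assumes Drill's
protobuf-generated {{MajorType}} exposes the standard proto2
{{hasPrecision()}}/{{getPrecision()}} accessors; the wrapper class and method
names here are hypothetical):
{code:java}
import org.apache.drill.common.types.TypeProtos.MajorType;

public class PrecisionCheck {

  // True only when the precision field was explicitly written with a
  // non-zero value; an unset field and an explicit 0 both mean the
  // user asked for plain VARCHAR.
  public static boolean hasNonZeroPrecision(MajorType type) {
    return type.hasPrecision() && type.getPrecision() > 0;
  }
}
{code}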
In fact, the type formatting code should not even be in the REST API; the
proper place for it is in {{Types}}. That class already has the desired
function, {{getExtendedSqlTypeName()}}, but it only formats decimals, so we
would need to add a case clause for {{VARCHAR}}.
That said, I have not actually seen any place in Drill where we set or use the
{{VARCHAR}} width, so there is no point in trying to format it. In this case,
you can use {{getExtendedSqlTypeName()}} directly, as-is.
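If the width ever does need formatting, here is a rough sketch of the
{{VARCHAR}} case written as a standalone helper; the class and method names
are illustrative, not the actual {{Types}} code:
{code:java}
import org.apache.drill.common.types.TypeProtos.MajorType;
import org.apache.drill.common.types.TypeProtos.MinorType;

public class SqlTypeNames {

  public static String format(MajorType type) {
    String name = type.getMinorType().name(); // e.g. "VARCHAR"
    if (type.getMinorType() == MinorType.VARCHAR
        && type.hasPrecision() && type.getPrecision() > 0) {
      // Renders VARCHAR(10); never VARCHAR(0, 0).
      name += "(" + type.getPrecision() + ")";
    }
    return name;
  }
}
{code}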
Please file a separate JIRA for the UDF issue. Please provide an attachment or
link to a sample UDF. I'll see if I can track down that CSV-specific issue in
case it relates to the EVF.
> Incorrect Metadata from text file queries
> -----------------------------------------
>
> Key: DRILL-7308
> URL: https://issues.apache.org/jira/browse/DRILL-7308
> Project: Apache Drill
> Issue Type: Bug
> Components: Metadata
> Affects Versions: 1.17.0
> Reporter: Charles Givre
> Priority: Major
> Attachments: Screen Shot 2019-06-24 at 3.16.40 PM.png, domains.csvh
>
>
> I'm noticing some strange behavior with the newest version of Drill. If you
> query a CSV file, you get the following metadata:
> {code:sql}
> SELECT * FROM dfs.test.`domains.csvh` LIMIT 1
> {code}
> {code:json}
> {
>   "queryId": "22eee85f-c02c-5878-9735-091d18788061",
>   "columns": [
>     "domain"
>   ],
>   "rows": [
>     { "domain": "thedataist.com" }
>   ],
>   "metadata": [
>     "VARCHAR(0, 0)",
>     "VARCHAR(0, 0)"
>   ],
>   "queryState": "COMPLETED",
>   "attemptedAutoLimit": 0
> }
> {code}
> There are two issues here:
> 1. VARCHAR now reports a precision and scale.
> 2. The metadata lists twice as many columns as the query actually returns.
> Additionally, if you query a regular CSV file, without extracted column
> headers, you get the following:
> {code:json}
> "rows": [
> {
> "columns": "[\"ACCT_NUM\",\"PRODUCT\",\"MONTH\",\"REVENUE\"]" }
> ],
> "metadata": [
> "VARCHAR(0, 0)",
> "VARCHAR(0, 0)"
> ],
> {code}