[
https://issues.apache.org/jira/browse/DRILL-5970?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16329290#comment-16329290
]
salim achouche commented on DRILL-5970:
---------------------------------------
My recommendation is to work with Pritesh to:
* Create a Drill Data Model (as suggested by Paul) to clarify all mapping and
conversion rules
* Figure out whether mapping rules are universal or reader specific
(e.g., what is [] mapped to?)
* Ensure that all Drill operators can handle compatible schema changes so that
we don't need to make drastic decisions (if technically feasible)
* Example
** Col-A is reported as required in one data file and as optional in another
** Can the operator handle this condition?
** If so, the normal path runs in an optimal fashion
** The same idea applies to int-to-string conversion
* Aman suggested that we expose new modes to turn the implicit conversion
functionality on and off
* Future features might even enable users to filter out such rows and report
them to the user (views could be used to specify the desired schema; see the
sketch below)
In summary, the above discussion sheds light on the many ambiguities that Drill
suffers from (because of the schema-free feature). Removing these ambiguities
(through a Data Model document) will go a long way toward helping us solve such
issues incrementally.
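As a concrete illustration of the view idea above, a view could be used to pin the
desired schema for a table. The sketch below is only an idea under discussion, not a
verified fix for this ticket; the view name is made up and the CAST target is arbitrary,
using the bof_repro_2 table from the issue description below:
{code}
-- Hypothetical view that declares the intended type for Bucket explicitly.
-- MYCOL is projected as-is, since the array column's data mode is the subject of this bug.
CREATE OR REPLACE VIEW dfs.tmp.bof_repro_2_v AS
SELECT CAST(Bucket AS VARCHAR(20)) AS Bucket, MYCOL
FROM dfs.tmp.bof_repro_2;
{code}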
> DrillParquetReader always builds the schema with "OPTIONAL" dataMode columns
> instead of "REQUIRED" ones
> -------------------------------------------------------------------------------------------------------
>
> Key: DRILL-5970
> URL: https://issues.apache.org/jira/browse/DRILL-5970
> Project: Apache Drill
> Issue Type: Bug
> Components: Execution - Codegen, Execution - Data Types, Storage -
> Parquet
> Affects Versions: 1.11.0
> Reporter: Vitalii Diravka
> Assignee: Vitalii Diravka
> Priority: Major
>
> The root cause of the issue is that adding REQUIRED (non-nullable) data types
> to the container is not implemented in any of the MapWriters.
> This can lead to an invalid schema.
> {code}
> 0: jdbc:drill:zk=local> CREATE TABLE dfs.tmp.bof_repro_1 as select * from
> (select CONVERT_FROM('["hello","hai"]','JSON') AS MYCOL, 'Bucket1' AS Bucket
> FROM (VALUES(1)));
> SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
> SLF4J: Defaulting to no-operation (NOP) logger implementation
> SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further
> details.
> +-----------+----------------------------+
> | Fragment | Number of records written |
> +-----------+----------------------------+
> | 0_0 | 1 |
> +-----------+----------------------------+
> 1 row selected (2.376 seconds)
> {code}
> Run the query from the Drill unit test framework (to see the "data mode"):
> {code}
> @Test
> public void test() throws Exception {
>   setColumnWidths(new int[] {25, 25});
>   List<QueryDataBatch> queryDataBatches =
>       testSqlWithResults("select * from dfs.tmp.bof_repro_1");
>   printResult(queryDataBatches);
> }
> 1 row(s):
> -------------------------------------------------------
> | MYCOL<VARCHAR(REPEATED)> | Bucket<VARCHAR(OPTIONAL)>|
> -------------------------------------------------------
> | ["hello","hai"] | Bucket1 |
> -------------------------------------------------------
> Total record count: 1
> {code}
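> Note: typeof() reports only the minor type (e.g. VARCHAR), not the data mode, which is
> why the unit-test printout above is used to expose OPTIONAL vs REQUIRED. For comparison,
> a plain query such as the sketch below (not part of the original repro) shows only the
> type name:
> {code}
> SELECT typeof(Bucket) AS bucket_type FROM dfs.tmp.bof_repro_1;
> {code}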
> {code}
> vitalii@vitalii-pc:~/parquet-tools/parquet-mr/parquet-tools/target$ java -jar
> parquet-tools-1.6.0rc3-SNAPSHOT.jar schema /tmp/bof_repro_1/0_0_0.parquet
> message root {
>   repeated binary MYCOL (UTF8);
>   required binary Bucket (UTF8);
> }
> {code}
> To reproduce the wrong result, run a query with aggregation so that both the
> new parquet reader (used by default for complex data types) and the old
> parquet reader are involved. A false "Hash aggregate does not support schema
> changes" error will occur.
> 1) Create two parquet files.
> {code}
> 0: jdbc:drill:schema=dfs> CREATE TABLE dfs.tmp.bof_repro_1 as select * from
> (select CONVERT_FROM('["hello","hai"]','JSON') AS MYCOL, 'Bucket1' AS Bucket
> FROM (VALUES(1)));
> +-----------+----------------------------+
> | Fragment | Number of records written |
> +-----------+----------------------------+
> | 0_0 | 1 |
> +-----------+----------------------------+
> 1 row selected (1.122 seconds)
> 0: jdbc:drill:schema=dfs> CREATE TABLE dfs.tmp.bof_repro_2 as select * from
> (select CONVERT_FROM('[]','JSON') AS MYCOL, 'Bucket1' AS Bucket FROM
> (VALUES(1)));
> +-----------+----------------------------+
> | Fragment | Number of records written |
> +-----------+----------------------------+
> | 0_0 | 1 |
> +-----------+----------------------------+
> 1 row selected (0.552 seconds)
> 0: jdbc:drill:schema=dfs> select * from dfs.tmp.bof_repro_2;
> {code}
> 2) Copy the parquet file from bof_repro_1 to bof_repro_2.
> {code}
> [root@naravm1 ~]# hadoop fs -ls /tmp/bof_repro_1
> Found 1 items
> -rw-r--r-- 3 mapr mapr 415 2017-07-25 11:46
> /tmp/bof_repro_1/0_0_0.parquet
> [root@naravm1 ~]# hadoop fs -ls /tmp/bof_repro_2
> Found 1 items
> -rw-r--r-- 3 mapr mapr 368 2017-07-25 11:46
> /tmp/bof_repro_2/0_0_0.parquet
> [root@naravm1 ~]# hadoop fs -cp /tmp/bof_repro_1/0_0_0.parquet
> /tmp/bof_repro_2/0_0_1.parquet
> [root@naravm1 ~]#
> {code}
> 3) Query the table.
> {code}
> 0: jdbc:drill:schema=dfs> ALTER SESSION SET `planner.enable_streamagg`=false;
> +-------+------------------------------------+
> | ok | summary |
> +-------+------------------------------------+
> | true | planner.enable_streamagg updated. |
> +-------+------------------------------------+
> 1 row selected (0.124 seconds)
> 0: jdbc:drill:schema=dfs> select * from dfs.tmp.bof_repro_2;
> +------------------+----------+
> | MYCOL | Bucket |
> +------------------+----------+
> | ["hello","hai"] | Bucket1 |
> | null | Bucket1 |
> +------------------+----------+
> 2 rows selected (0.247 seconds)
> 0: jdbc:drill:schema=dfs> select bucket, count(*) from dfs.tmp.bof_repro_2
> group by bucket;
> Error: UNSUPPORTED_OPERATION ERROR: Hash aggregate does not support schema
> changes
> Fragment 0:0
> [Error Id: 60f8ada3-5f00-4413-a676-4881fc8cb409 on naravm3:31010]
> (state=,code=0)
> {code}
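> As a diagnostic sketch (not part of the original repro), Drill's implicit filename
> column can be used to confirm which parquet file contributes each row, and therefore
> which file triggers the reported schema change:
> {code}
> SELECT filename, Bucket, MYCOL FROM dfs.tmp.bof_repro_2;
> {code}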