[ https://issues.apache.org/jira/browse/DRILL-8507?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17878091#comment-17878091 ]

ASF GitHub Bot commented on DRILL-8507:
---------------------------------------

ychernysh commented on PR #2937:
URL: https://github.com/apache/drill/pull/2937#issuecomment-2321094956

   @paul-rogers 
   I have tested the fix in the following way. I have a Hadoop/Drill cluster with 2 nodes, each running a DataNode and a Drillbit. I have a `dfs.tmp.people` table consisting of 100 parquet files, each with 10 or more row groups, with the following schemas:
   ```
   # case 1:
   /tmp/people/{0..49}.parquet: id<INT(REQUIRED)> | name<VARCHAR(OPTIONAL)> | age<INT(OPTIONAL)>
   /tmp/people/{50..99}.parquet: id<INT(REQUIRED)>
   # case 2:
   /tmp/people/{50..99}.parquet: id<INT(REQUIRED)>
   /tmp/people/{100..149}.parquet: id<INT(REQUIRED)> | name<VARCHAR(OPTIONAL)> | age<INT(OPTIONAL)>
   ```
   The files are spread evenly across all DataNodes, and when I run a query I can see (in each Drillbit's logs and in the query profile) that Drill reads in parallel on both Drillbits. I run the following queries:
   ```
   SELECT age FROM dfs.tmp.people ORDER BY age;
   SELECT name FROM dfs.tmp.people ORDER BY name;
   SELECT age FROM dfs.tmp.people UNION ALL (VALUES(1));
   SELECT age FROM dfs.tmp.people UNION (VALUES(1));
   SELECT name FROM dfs.tmp.people UNION (VALUES ('Bob'));
   SELECT name FROM dfs.tmp.people UNION ALL (VALUES ('Bob'));
   ```
   They all succeeded. Without the fix, Drill would fail. Note that all three fixes provided in this PR are needed for every one of these queries to pass:
   1. Solve the naming problem
   2. Set the correct minor type
   3. Set the correct data mode
   
   The main idea of the [DRILL-8508](https://issues.apache.org/jira/browse/DRILL-8508) solution is that, since we scan the footers of all the parquet files to be read during the planning phase in the Foreman, we already know at that point which columns are (partially) missing and which are not. Knowing at planning time that `file1.parquet` contains `(a: VARCHAR:REQUIRED)` and `file2.parquet` has no `a` column, we can tell reader 1 to put column `a` into a nullable vector (rather than a `REQUIRED` one) and reader 2 to create a missing-column vector of type `VARCHAR` (rather than defaulting to `INT`). And since we have this information before any of the readers actually start reading, it doesn't matter which file is read first. So this solution is order-agnostic.
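   To illustrate the idea in isolation, here is a minimal sketch in plain Java (hypothetical code, not the actual PR change; the class and data layout are made up for illustration) of how a table schema merged from the footers alone is independent of read order:
   ```java
   import java.util.LinkedHashMap;
   import java.util.Map;

   // Hypothetical sketch: merge the schemas declared in all parquet footers at
   // planning time. The merge depends only on the set of footers, so the result
   // is the same no matter which file any reader happens to open first.
   public class FooterSchemaMergeSketch {
     public static void main(String[] args) {
       // file -> (column -> "MINORTYPE:MODE"), as declared in each footer
       Map<String, Map<String, String>> footers = new LinkedHashMap<>();
       footers.put("file1.parquet", Map.of("a", "VARCHAR:REQUIRED"));
       footers.put("file2.parquet", Map.of()); // no column `a` at all

       // First pass: collect every column with its declared type and mode.
       Map<String, String> merged = new LinkedHashMap<>();
       footers.values().forEach(merged::putAll);

       // Second pass: a column absent from at least one file is forced to
       // OPTIONAL; a reader missing it creates a null-filled vector of the
       // declared minor type instead of defaulting to nullable INT.
       for (Map<String, String> footer : footers.values()) {
         for (Map.Entry<String, String> col : merged.entrySet()) {
           if (!footer.containsKey(col.getKey())) {
             col.setValue(col.getValue().split(":")[0] + ":OPTIONAL");
           }
         }
       }
       System.out.println(merged); // prints {a=VARCHAR:OPTIONAL}
     }
   }
   ```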
   
   **Note about tests**: Unit tests for the fix, however, do require a specific file read order. This is because some operators, such as `UNION ALL`, build their own output schema from [the first prefetched batches from the left and right inputs](https://github.com/apache/drill/blob/11aaa3f89cb85f7ef63e6cc5416c6b2f90e8322c/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/union/UnionAllRecordBatch.java#L89). There it does matter which file is read first, since that file defines the `UNION ALL` output schema. So the unit tests arrange the read order in such a way that, without the fix, there would be an error, while with the fix the query succeeds.




> Missing parquet columns quoted with backticks conflict with existing ones
> -------------------------------------------------------------------------
>
>                 Key: DRILL-8507
>                 URL: https://issues.apache.org/jira/browse/DRILL-8507
>             Project: Apache Drill
>          Issue Type: Bug
>    Affects Versions: 1.21.2
>            Reporter: Yaroslav
>            Priority: Major
>         Attachments: people.tar.gz
>
>
> {*}NOTE{*}: I worked on this issue along with DRILL-8508. It turned out that that issue 
> requires this bug to be fixed first. And since the two are about slightly different 
> things, it was decided to report them as separate issues. So I'm going to link this 
> issue as a requirement for that one and open a single PR for both (if that's allowed). 
> I think a single PR will make the code easier to review, since the issues are closely 
> related anyway.
> h3. Prerequisites
> If a {{ParquetRecordReader}} doesn't find a selected column, it creates a 
> null-filled {{NullableIntVector}} with the column's name and the correct 
> value count set. The field name for the vector is derived from the 
> {{SchemaPath#toExpr}} method, which always encloses the resulting string in 
> backticks.
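> For illustration, here is a small hypothetical snippet (the class name is made up; 
> {{SchemaPath.getSimplePath}} is the usual factory method) showing how the two naming 
> methods differ:
> {code:java}
> import org.apache.drill.common.expression.SchemaPath;
> 
> public class NamingSketch {
>   public static void main(String[] args) {
>     SchemaPath col = SchemaPath.getSimplePath("age");
>     System.out.println(col.toExpr());             // `age`  -- backtick-quoted
>     System.out.println(col.getAsUnescapedPath()); // age    -- plain column name
>   }
> }
> {code}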
> h3. Problems
> This causes some field name equality checks to go wrong (comparing an unquoted and a 
> quoted field name string returns false, whereas it is essentially supposed to return 
> true), which leads to errors.
> For example, the errors occur when you select a column from a table where some parquet 
> files contain it and some do not. Consider a {{dfs.tmp.people}} table with the 
> following parquet files and schemas:
> {code:java}
> /tmp/people/0.parquet: id<INT(REQUIRED)> | name<VARCHAR(OPTIONAL)> | age<INT(OPTIONAL)>
> /tmp/people/1.parquet: id<INT(REQUIRED)>{code}
> Now let's try to use an operator that doesn't support schema change. For 
> example, {{ORDER BY}}:
> {code:java}
> apache drill> SELECT age FROM dfs.tmp.people ORDER BY age;
> Error: UNSUPPORTED_OPERATION ERROR: Schema changes not supported in External Sort. Please enable Union type.
> Previous schema: BatchSchema [fields=[[`age` (INT:OPTIONAL)]], selectionVector=NONE]
> Incoming schema: BatchSchema [fields=[[`age` (INT:OPTIONAL)], [``age`` (INT:OPTIONAL)]], selectionVector=NONE]
> Fragment: 0:0
> [Error Id: d3efffd4-cf31-46d5-9f6a-141a61e71d12 on node2.vmcluster.com:31010] (state=,code=0)
> {code}
> The ORDER BY error clearly shows that ``age`` is an extra column here: the incoming 
> schema should contain only the unquoted field in order to match the previous schema.
> Another example is in {{UNION ALL}} operator:
> {code:java}
> apache drill> SELECT age FROM dfs.tmp.people UNION ALL (VALUES (1));
> Error: SYSTEM ERROR: IllegalArgumentException: Input batch and output batch have different field counthas!
> Fragment: 0:0
> Please, refer to logs for more information.
> [Error Id: 81680275-92ee-4d1b-93b5-14f4068990eb on node2.vmcluster.com:31010] (state=,code=0)
> {code}
> Again, the "different field counts" issue is caused by the extra quoted column, which 
> counts as a different field.
> h3. Solution
> The solution for these errors would be to replace the {{SchemaPath#toExpr}} call 
> with {{SchemaPath#getAsUnescapedPath}}, which doesn't enquote the resulting 
> string. Simple enough, but note that we used to use 
> {{SchemaPath#getAsUnescapedPath}} before DRILL-4264, where we switched to 
> {{SchemaPath#toExpr}}. The author even left a comment:
> {code:java}
> // col.toExpr() is used here as field name since we don't want to see these fields in the existing maps{code}
> So it looks like moving to {{SchemaPath#toExpr}} was a conscious decision. 
> But, honestly, I don't really understand the motivation for this, even with 
> the comment, since I don't really understand what "existing maps" refers to here. 
> Maybe someone from the community can help, but for now I will treat it as a 
> mistake, simply because it causes the problems above.
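> As a rough sketch of what the fix amounts to (hypothetical code, not the exact change 
> from the linked PR; {{MaterializedField.create}} and {{Types.optional}} are the usual 
> factories), the null-filled vector for a missing column would be named with the 
> unescaped path:
> {code:java}
> import org.apache.drill.common.expression.SchemaPath;
> import org.apache.drill.common.types.TypeProtos.MinorType;
> import org.apache.drill.common.types.Types;
> import org.apache.drill.exec.record.MaterializedField;
> 
> class MissingColumnNamingSketch {
>   static MaterializedField missingColumnField(SchemaPath col) {
>     // Before the fix the vector was named col.toExpr(), i.e. "`age`", which
>     // never equals the plain "age" produced by files that do have the column.
>     return MaterializedField.create(
>         col.getAsUnescapedPath(),         // "age", instead of col.toExpr()
>         Types.optional(MinorType.INT));   // null-filled INT vector, as before
>   }
> }
> {code}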
> h3. Regressions
> The change brings some regressions detected by unit tests. The tests I found failing 
> do so because the fix changes their execution flow in these places:
>  * 
> [FieldIdUtil#getFieldId|https://github.com/apache/drill/blob/7c813ff6440a118de15f552145b40eb07bb8e7a2/exec/java-exec/src/main/java/org/apache/drill/exec/vector/complex/FieldIdUtil.java#L206]
>  * 
> [ScanBatch$Mutator#addField|https://github.com/apache/drill/blob/097a4717ac998ec6bf3c70a99575c7ff53f47430/exec/java-exec/src/main/java/org/apache/drill/exec/physical/impl/ScanBatch.java#L523-L524]
> Before the fix, the field name equality checks returned false when comparing a quoted 
> and an unquoted field name. The failing tests relied on that behavior and worked only 
> under that condition. But after the fix, the two names compare equal, we fall into a 
> different branch, and the tests aren't prepared for that.
> But obviously the change _is fixing the problem_, so the tests that relied on that 
> problem should now be adjusted.
> Please see more technical details on each failed unit test in the linked PR.


