[
https://issues.apache.org/jira/browse/DRILL-5797?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16538552#comment-16538552
]
ASF GitHub Bot commented on DRILL-5797:
---------------------------------------
arina-ielchiieva commented on a change in pull request #1370: DRILL-5797: Use Parquet new reader more often
URL: https://github.com/apache/drill/pull/1370#discussion_r201331494
##########
File path: exec/java-exec/src/main/java/org/apache/drill/exec/store/parquet/ParquetReaderUtility.java
##########
@@ -140,15 +140,87 @@ public static int getIntFromLEBytes(byte[] input, int start) {
     return out;
   }
+  /**
+   * Map full schema paths in format `a`.`b`.`c` to respective SchemaElement objects.
+   *
+   * @param footer Parquet file metadata
+   * @return schema full path to SchemaElement map
+   */
   public static Map<String, SchemaElement> getColNameToSchemaElementMapping(ParquetMetadata footer) {
-    HashMap<String, SchemaElement> schemaElements = new HashMap<>();
+    Map<String, SchemaElement> schemaElements = new HashMap<>();
     FileMetaData fileMetaData = new ParquetMetadataConverter().toParquetMetadata(ParquetFileWriter.CURRENT_VERSION, footer);
-    for (SchemaElement se : fileMetaData.getSchema()) {
-      schemaElements.put(se.getName(), se);
+
+    Iterator iter = fileMetaData.getSchema().iterator();
+
+    // skip first default 'root' element
+    if (iter.hasNext()) {
+      iter.next();
+    }
+    while (iter.hasNext()) {
+      addSchemaElementMapping(iter, new StringBuilder(), schemaElements);
     }
     return schemaElements;
   }
+  /**
+   * Populate full path to SchemaElement map by recursively traversing schema elements referenced by the given iterator
+   *
+   * @param iter file schema values iterator
+   * @param path parent schema element path
+   * @param schemaElements schema elements map to insert next iterator element into
+   */
+  private static void addSchemaElementMapping(Iterator iter, StringBuilder path,
+                                              Map<String, SchemaElement> schemaElements) {
+    SchemaElement se = (SchemaElement)iter.next();
+    path.append('`').append(se.getName().toLowerCase()).append('`');
+    schemaElements.put(path.toString(), se);
+
+    int remainingChildren = se.getNum_children();
+
+    while (remainingChildren > 0 && iter.hasNext()) {
Review comment:
Why do we need to count the remaining children? Why isn't `iterator.hasNext()` sufficient?
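For context on that question, here is a minimal, self-contained sketch of why the child count matters. The `Element` class and all names below are hypothetical simplifications of Parquet's `SchemaElement`; the point they illustrate is that the footer schema is a flat, depth-first list, so `hasNext()` alone cannot tell where a group's subtree ends and a sibling begins.

```java
import java.util.*;

public class SchemaPathDemo {
  // Hypothetical, simplified stand-in for Parquet's SchemaElement.
  static final class Element {
    final String name;
    final int numChildren; // 0 for leaf columns, > 0 for groups
    Element(String name, int numChildren) {
      this.name = name;
      this.numChildren = numChildren;
    }
  }

  // Consume exactly one element plus its subtree from the flat,
  // depth-first list, building paths like `a`.`b` along the way.
  static void addMapping(Iterator<Element> iter, String parentPath, Map<String, Element> out) {
    Element e = iter.next();
    String path = parentPath + "`" + e.name.toLowerCase() + "`";
    out.put(path, e);
    // The child count delimits the subtree: without it we could not
    // distinguish the group's next child from its next sibling.
    for (int remaining = e.numChildren; remaining > 0 && iter.hasNext(); remaining--) {
      addMapping(iter, path + ".", out);
    }
  }

  static Map<String, Element> mapPaths(List<Element> flatSchema) {
    Map<String, Element> out = new LinkedHashMap<>();
    Iterator<Element> iter = flatSchema.iterator();
    if (iter.hasNext()) {
      iter.next(); // skip the synthetic 'root' element, as the patch does
    }
    while (iter.hasNext()) {
      addMapping(iter, "", out);
    }
    return out;
  }

  public static void main(String[] args) {
    // Depth-first flattening of: root { a { b, c }, d }
    Map<String, Element> m = mapPaths(List.of(
        new Element("root", 2),
        new Element("a", 2),
        new Element("b", 0),
        new Element("c", 0),
        new Element("d", 0)));
    System.out.println(m.keySet()); // [`a`, `a`.`b`, `a`.`c`, `d`]
  }
}
```

In this toy flattening, only `numChildren` tells the traversal that `c` belongs under `a` while `d` is a new top-level column.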
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]
> Use more often the new parquet reader
> -------------------------------------
>
> Key: DRILL-5797
> URL: https://issues.apache.org/jira/browse/DRILL-5797
> Project: Apache Drill
> Issue Type: Improvement
> Components: Storage - Parquet
> Reporter: Damien Profeta
> Assignee: Oleksandr Kalinin
> Priority: Major
> Fix For: 1.15.0
>
>
> The choice between the regular Parquet reader and the optimized one is based
> on the types of columns present in the file, but the set of columns actually
> read by the query is not taken into account. We can expand the cases where
> the optimized reader is used by checking whether the projected columns are
> simple or not.
> This is an interim optimization while waiting for the fast Parquet reader to
> handle complex structures.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)